Wednesday, May 6, 2009

Radio Wave Propagation

The initial understanding of radio wave propagation goes back to the pioneering work of James Clerk
Maxwell, who in 1864 formulated the electromagnetic theory of light and predicted the existence of radio
waves. In 1887, the physical existence of these waves was demonstrated by Heinrich Hertz. However, Hertz
saw no practical use for radio waves, reasoning that since audio frequencies were low, and propagation at such low frequencies was poor, radio waves could never carry voice. The work of Maxwell and Hertz initiated the field of radio
communications: in 1894 Oliver Lodge used these principles to build the first wireless communication
system; however, its transmission distance was limited to 150 meters. By 1897 the entrepreneur Guglielmo
Marconi had managed to send a radio signal from the Isle of Wight to a tugboat 18 miles away, and in
1901 Marconi’s wireless system could traverse the Atlantic ocean. These early systems used telegraph
signals for communicating information. The first transmission of voice and music was done by Reginald
Fessenden in 1906 using a form of amplitude modulation, which got around the propagation limitations
at low frequencies observed by Hertz by translating signals to a higher frequency, as is done in all wireless
systems today.
Electromagnetic waves propagate through environments where they are reflected, scattered, and
diffracted by walls, terrain, buildings, and other objects. The ultimate details of this propagation can
be obtained by solving Maxwell’s equations with boundary conditions that express the physical charac-
teristics of these obstructing objects. This requires the calculation of the Radar Cross Section (RCS) of
large and complex structures. Since these calculations are difficult, and many times the necessary param-
eters are not available, approximations have been developed to characterize signal propagation without
resorting to Maxwell’s equations.
The most common approximations use ray-tracing techniques. These techniques approximate the
propagation of electromagnetic waves by representing the wavefronts as simple particles: the model
determines the reflection and refraction effects on the wavefront but ignores the more complex scattering
phenomenon predicted by Maxwell’s coupled differential equations. The simplest ray-tracing model is the
two-ray model, which accurately describes signal propagation when there is one direct path between the
transmitter and receiver and one reflected path. The reflected path typically bounces off the ground, and the two-ray model is a good approximation for propagation along highways, rural roads, and over water.
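As a rough illustration of the two-ray approximation, the Python sketch below sums the line-of-sight ray and a single ground-reflected ray with their phase difference; the antenna heights, carrier frequency, and reflection coefficient are illustrative assumptions, not values from this chapter.

    import numpy as np

    def two_ray_rx_power(d, ht=10.0, hr=1.5, f=900e6, Pt=1.0, R=-1.0):
        """Received power under the two-ray approximation: a line-of-sight ray plus
        one ground-reflected ray, summed coherently. Unit antenna gains and a
        reflection coefficient near -1 (grazing incidence) are assumed."""
        lam = 3e8 / f                                  # carrier wavelength
        d_los = np.sqrt(d**2 + (ht - hr)**2)           # direct path length
        d_ref = np.sqrt(d**2 + (ht + hr)**2)           # ground-bounce path length
        dphi = 2 * np.pi * (d_ref - d_los) / lam       # phase difference between the rays
        field = 1.0 / d_los + R * np.exp(-1j * dphi) / d_ref
        return Pt * (lam / (4 * np.pi))**2 * np.abs(field)**2

    # Example: received power (in dB relative to the transmit power) at 50 m and 500 m
    for d in (50.0, 500.0):
        print(d, "m:", 10 * np.log10(two_ray_rx_power(d)), "dB")

At large distances the phase difference between the two rays becomes small and they nearly cancel, which is why the two-ray model predicts a faster-than-free-space power falloff far from the transmitter.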
We next consider more complex models with additional reflected, scattered, or diffracted components.
Many propagation environments are not accurately captured by ray-tracing models. In these cases it
is common to develop analytical models based on empirical measurements, and we will discuss several of
the most common of these empirical models.
Often the complexity and variability of the radio channel makes it difficult to obtain an accurate
deterministic channel model. For these cases statistical models are often used. The attenuation caused
by signal path obstructions such as buildings or other objects is typically characterized statistically, as
described in Section 2.7. Statistical models are also used to characterize the constructive and destructive
interference for a large number of multipath components, as described in Chapter 3. Statistical models
are most accurate in environments with fairly regular geometries and uniform dielectric properties. In-
door environments tend to be less regular than outdoor environments, since the geometric and dielectric
characteristics change dramatically depending on whether the indoor environment is an open factory, cu-
bicled office, or metal machine shop. For these environments computer-aided modeling tools are available
to predict signal propagation characteristics.

Path Loss and Shadowing

The wireless radio channel poses a severe challenge as a medium for reliable high-speed communication.
It is not only susceptible to noise, interference, and other channel impediments, but these impediments
change over time in unpredictable ways due to user movement. In this chapter we will characterize the
variation in received signal power over distance due to path loss and shadowing. Path loss is caused by
dissipation of the power radiated by the transmitter as well as effects of the propagation channel. Path
loss models generally assume that path loss is the same at a given transmit-receive distance. Shadowing
is caused by obstacles between the transmitter and receiver that absorb power. When the obstacle
absorbs all the power, the signal is blocked. Variation due to path loss occurs over very large distances
(100-1000 meters), whereas variation due to shadowing occurs over distances proportional to the length
of the obstructing object (10-100 meters in outdoor environments and less in indoor environments). Since
variations due to path loss and shadowing occur over relatively large distances, this variation is sometimes
referred to as large-scale propagation effects or local mean attenuation. Chapter 3 will deal with
variation due to the constructive and destructive addition of multipath signal components. Variation due
to multipath occurs over very short distances, on the order of the signal wavelength, so these variations are
sometimes referred to as small-scale propagation effects or multipath fading. Figure 2.1 illustrates
the ratio of the received-to-transmit power in dB versus log-distance for the combined effects of path loss,
shadowing, and multipath.
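The decomposition illustrated in Figure 2.1 can be mimicked numerically. The Python sketch below assumes a simple log-distance path loss with exponent 3.5, lognormal shadowing with an 8 dB standard deviation, and Rayleigh multipath fading; all of these parameter values are illustrative assumptions rather than values taken from the figure.

    import numpy as np

    rng = np.random.default_rng(0)
    d = np.logspace(1, 3, 200)                      # distances from 10 m to 1 km
    gamma, d0, sigma_dB = 3.5, 1.0, 8.0             # assumed exponent, reference distance, shadowing std

    path_loss_dB = -10 * gamma * np.log10(d / d0)             # large-scale path loss
    shadow_dB = rng.normal(0.0, sigma_dB, d.size)             # shadowing (i.i.d. here; correlated over 10-100 m in practice)
    fading = rng.rayleigh(scale=1 / np.sqrt(2), size=d.size)  # small-scale multipath amplitude
    fading_dB = 20 * np.log10(fading)

    pr_over_pt_dB = path_loss_dB + shadow_dB + fading_dB      # the quantity sketched in Figure 2.1
    print(pr_over_pt_dB[:5])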
After a brief introduction and description of our signal model, we present the simplest model for
signal propagation: free space path loss. A signal propagating between two points with no attenuation or
reflection follows the free space propagation law. We then describe ray tracing propagation models. These
models are used to approximate wave propagation according to Maxwell’s equations, and are accurate
models when the number of multipath components is small and the physical environment is known. Ray
tracing models depend heavily on the geometry and dielectric properties of the region through which
the signal propagates. We therefore also present some simple generic models with a few parameters that
are commonly used in practice for system analysis and “back-of-the-envelope” system design. When the
number of multipath components is large, or the geometry and dielectric properties of the propagation
environment are unknown, statistical models must be used. These statistical multipath models will be
described later.
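For reference, the free space propagation law mentioned above is the Friis relation; the small sketch below evaluates it for an assumed 2.4 GHz carrier and unit antenna gains (illustrative values only).

    import math

    def friis_rx_power(Pt, d, f, Gt=1.0, Gr=1.0):
        """Free-space received power: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))**2."""
        lam = 3e8 / f
        return Pt * Gt * Gr * (lam / (4 * math.pi * d))**2

    # Example: 1 W transmitted at 2.4 GHz and received 100 m away
    Pr = friis_rx_power(1.0, 100.0, 2.4e9)
    print(10 * math.log10(Pr), "dBW")   # roughly -80 dBW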
While this chapter gives a brief overview of channel models for path loss and shadowing, comprehen-
sive coverage of channel and propagation models at different frequencies of interest merits a book in its
own right, and in fact there are several excellent texts on this topic [3, 4]. Channel models for specialized
systems, e.g. multiple antenna (MIMO) and ultrawideband (UWB) systems, are beyond the scope of this brief overview.

The Wireless Spectrum


- Methods for Spectrum Allocation

Most countries have government agencies responsible for allocating and controlling the use of the radio
spectrum. In the United States spectrum allocation is controlled by the Federal Communications Com-
mission (FCC) for commercial use and by the Office of Spectral Management (OSM) for military use. The
government decides how much spectrum to allocate between commercial and military use. Historically
the FCC allocated spectral blocks for specific uses and assigned licenses to use these blocks to specific
groups or companies. For example, in the 1980s the FCC allocated frequencies in the 800 MHz band
for analog cellular phone service, and provided spectral licenses to two companies in each geographical
area based on a number of criteria. While the FCC still typically allocates spectral blocks for specific
purposes, over the last decade they have turned to spectral auctions for assigning licenses in each block
to the highest bidder. While some argue that this market-based method is the fairest way for the gov-
ernment to allocate the limited spectral resource, and it provides significant revenue to the government
besides, there are others who believe that this mechanism stifles innovation, limits competition, and hurts technology adoption. Specifically, the high cost of spectrum dictates that only large conglomerates can
purchase it. Moreover, the large investment required to obtain spectrum delays the ability to invest in
infrastructure for system rollout and results in very high initial prices for the end user. The 3G spectral
auctions in Europe, in which several companies have already defaulted, have provided fuel to the fire
against spectral auctions.
In addition to spectral auctions, the FCC also sets aside specific frequency bands that are free to
use according to a specific set of etiquette rules. The rules may correspond to a specific communications
standard, power levels, etc. The purpose of these “free bands” is to encourage innovation and low-cost
implementation. Two of the most important emerging wireless systems, 802.11b wireless LANs and
Bluetooth, co-exist in the unlicensed ISM band at 2.4 GHz. However,
one difficulty with free bands is that they can be killed by their own success: if a given system is widely
used in a given band, it will generate much interference to other users colocated in that band. Satellite
systems cover large areas spanning many countries and sometimes the globe. For wireless systems that
span multiple countries, spectrum is allocated by the International Telecommunications Union Radio
Communications group (ITU-R). The standards arm of this body, ITU-T, adopts telecommunication
standards for global systems that must interoperate with each other across national boundaries.

- Allocations for Existing Systems

Most wireless applications reside in the radio spectrum between 30 MHz and 30 GHz. These frequencies
are natural for wireless systems since they are not affected by the earth’s curvature, require only mod-
erately sized antennas, and can penetrate the ionosphere. Note that the required antenna size for good
reception is inversely proportional to the signal frequency, so moving systems to a higher frequency allows
for more compact antennas. However, received signal power is proportional to the inverse of frequency
squared, so it is harder to cover large distances with higher frequency signals. These tradeoffs will be
examined in more detail in later chapters.
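To make this tradeoff concrete, the short sketch below compares the wavelength, a quarter-wave antenna length, and the relative free-space power penalty at two assumed carrier frequencies (900 MHz and 9 GHz, chosen purely for illustration).

    import math

    for f in (900e6, 9e9):                        # example carriers: 900 MHz and 9 GHz (assumed)
        lam = 3e8 / f
        print(f"{f/1e9:4.1f} GHz: wavelength = {lam*100:5.1f} cm, "
              f"quarter-wave antenna = {lam/4*100:5.2f} cm")

    # With fixed antenna gains, received power scales as 1/f^2, so the tenfold
    # increase in frequency costs an extra
    print(20 * math.log10(9e9 / 900e6), "dB of path loss")   # 20 dB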
As discussed in the previous section, spectrum is allocated either in licensed bands (which the FCC
assigns to specific operators) or in unlicensed bands (which can be used by any operator subject to
certain operational requirements). The following table shows the licensed spectrum allocated to major
commercial wireless systems in the U.S. today.
Note that digital TV is slated for the same bands as broadcast TV. By 2006 all broadcasters are
expected to switch from analog to digital transmission. Also, the 3G broadband wireless spectrum is
currently allocated to UHF TV stations 60-69, but is slated to be reallocated for 3G. Both analog and
2G digital cellular services occupy the same cellular band at 800 MHz, and the cellular service providers
decide how much of the band to allocate between digital and analog service.
Unlicensed spectrum is allocated by the governing body within a given country. Often countries try
to match their frequency allocation for unlicensed use so that technology developed for that spectrum is
compatible worldwide. The following table shows the unlicensed spectrum allocations in the U.S.
ISM Band I has licensed users transmitting at high power that interfere with the unlicensed users.
Therefore, the requirements for unlicensed use of this band are highly restrictive, and performance is
somewhat poor. The NII bands were set aside recently to provide a total of 300 MHz of spectrum with
very few restrictions. It is expected that many new applications will take advantage of this large amount
of unlicensed spectrum.

Tuesday, May 5, 2009

Satellite Networks

Satellite systems provide voice, data, and broadcast services with widespread, often global, coverage to
high-mobility users as well as to fixed sites. Satellite systems have the same basic architecture as cellular
systems, except that the cell base-stations are satellites orbiting the earth. Satellites are characterized
by their orbit distance from the earth. There are three main types of satellite orbits: low-earth orbit
(LEO) at 500-2,000 km, medium-earth orbit (MEO) at about 10,000 km, and geosynchronous orbit (GEO)
at 35,800 km. A geosynchronous satellite has a large coverage area that is stationary over time, since
its orbit is synchronized with the earth's rotation. Satellites with lower orbits have smaller coverage areas,
and these coverage areas change over time so that satellite handoff is needed for stationary users or fixed
point service.
Since geosynchronous satellites have such large coverage areas just a handful of satellites are needed
for global coverage. However, geosynchronous systems have several disadvantages for two-way communi-
cation. It takes a great deal of power to reach these satellites, so handsets are typically large and bulky.
In addition, there is a large round-trip propagation delay: this delay is quite noticeable in two-way voice
communication. Recall also from Section 15 that high-capacity cellular systems require small cell sizes.
Since geosynchronous satellites have very large cells, these systems have small capacity, high cost, and
low data rates, less than 10 Kbps. The main geosynchronous systems in operation today are the global
Inmarsat system, MSAT in North America, Mobilesat in Australia, and EMS and LLM in Europe.
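The delay penalty is easy to quantify. The sketch below computes the up-and-down propagation time of a single satellite hop at representative altitudes from the ranges quoted above (free-space speed of light assumed, elevation-angle effects ignored).

    c = 3.0e8   # free-space speed of light, m/s

    for name, altitude_km in (("LEO", 1000), ("MEO", 10000), ("GEO", 35800)):
        hop_delay_ms = 2 * altitude_km * 1e3 / c * 1e3   # up and back down, one hop
        print(f"{name} (~{altitude_km} km): ~{hop_delay_ms:.0f} ms per hop")

    # A GEO hop is roughly 240 ms each way, so a two-way voice exchange waits
    # about half a second for a reply, which is why the delay is noticeable.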
The trend in current satellite systems is to use the lower LEO orbits so that lightweight handheld
devices can communicate with the satellites and propagation delay does not degrade voice quality. The
best known of these new LEO systems are Globalstar and Teledesic. Globalstar provides voice and data
services to globally-roaming mobile users at data rates under 10 Kbps. The system requires roughly 50
satellites to maintain global coverage. Teledesic uses 288 satellites to provide global coverage to fixed-point
users at data rates up to 2 Mbps. Teledesic is set to be deployed in 2005. The cell size for each satellite
in a LEO system is much larger than terrestrial macrocells or microcells, with the corresponding decrease
in capacity associated with large cells. Cost of these satellites, to build, to launch, and to maintain, is
also much higher than that of terrestrial base stations, so these new LEO systems are unlikely to be
cost-competitive with terrestrial cellular and wireless data services. Although these LEO systems can
certainly complement these terrestrial systems in low-population areas, and are also appealing to travelers
desiring just one handset and phone number for global roaming, it remains to be seen if there are enough
such users willing to pay the high cost of satellite services to make these systems economically viable. In
fact, Iridium, the largest and best-known of the LEO systems, was forced into bankruptcy and disbanded.

Paging Systems

Paging systems provide very low rate one-way data services to highly mobile users over a very wide
coverage area. Paging systems have experienced steady growth for many years and currently serve about
56 million customers in the United States. However, the popularity of paging systems is declining as
cellular systems become cheaper and more ubiquitous. In order to remain competitive paging companies
have slashed prices, and few of these companies are currently profitable. To reverse their declining
fortunes, a consortium of paging service providers has recently teamed up with Microsoft and Compaq
to incorporate paging functionality and Internet access into palmtop computers [2].
Paging systems broadcast a short paging message simultaneously from many tall base stations or
satellites transmitting at very high power (hundreds of watts to kilowatts). Systems with terrestrial
transmitters are typically localized to a particular geographic area, such as a city or metropolitan region,
while geosynchronous satellite transmitters provide national or international coverage. In both types of
systems no location management or routing functions are needed, since the paging message is broad-
cast over the entire coverage area. The high complexity and power of the paging transmitters allow
low-complexity, low-power, pocket paging receivers with a long usage time from small and lightweight
batteries. In addition, the high transmit power allows paging signals to easily penetrate building walls.
Paging service also costs less than cellular service, both for the initial device and for the monthly usage
charge, although this price advantage has declined considerably in recent years. The low cost, small and
lightweight handsets, long battery life, and ability of paging devices to work almost anywhere indoors or
outdoors are the main reasons for their appeal.
Some paging services today offer rudimentary (1 bit) answer-back capabilities from the handheld
paging device. However, the requirement for two-way communication destroys the asymmetrical link
advantage so well exploited in paging system design. A paging handset with answer-back capability
requires a modulator and transmitter with sufficient power to reach the distant base station. These
requirements significantly increase the size and weight and reduce the usage time of the handheld pager.
This is especially true for paging systems with satellite base stations, unless terrestrial relays are used.

Fixed Wireless Access


Fixed wireless access provides wireless communications between a fixed access point and multiple ter-
minals. These systems were initially proposed to support interactive video service to the home, but the
application emphasis has now shifted to providing high speed data access (tens of Mbps) to the Internet,
the WWW, and to high speed data networks for both homes and businesses. In the U.S. two frequency
bands have been set aside for these systems: part of the 28 GHz spectrum is allocated for local distribution
systems (local multipoint distribution systems or LMDS) and a band in the 2 GHz spectrum is allocated
for metropolitan distribution systems (multichannel multipoint distribution services or MMDS). LMDS
represents a quick means for new service providers to enter the already stiff competition among wireless
and wireline broadband service providers. MMDS is a television and telecommunication delivery system
with transmission ranges of 30-50 Km. MMDS has the capability to deliver over one hundred digital
video TV channels along with telephony and access to emerging interactive services such as the Internet.
MMDS will mainly compete with existing cable and satellite systems. Europe is developing a standard
similar to MMDS called Hiperaccess.

Wide Area Wireless Data Services


Wide area wireless data services provide wireless data to high-mobility users over a very large coverage
area. In these systems a given geographical region is serviced by base stations mounted on towers,
rooftops, or mountains. The base stations can be connected to a backbone wired network or form a
multihop ad hoc network.
Initial wide area wireless data services had very low data rates, below 10 Kbps, which gradually
increased to 20 Kbps. There were two main players providing this service: Motient and Bell South Mobile
Data (formerly RAM Mobile Data). Metricom provided a similar service with a network architecture
consisting of a large network of small inexpensive base stations with small coverage areas. The increased
efficiency of the small coverage areas allowed for higher data rates in Metricom, 76 Kbps, than in the
other wide-area wireless data systems. However, the high infrastructure cost for Metricom eventually
forced it into bankruptcy, and the system was shut down. Some of the infrastructure was bought and is
operating in a few areas as Ricochet.
The cellular digital packet data (CDPD) system is a wide area wireless data service overlayed on
the analog cellular telephone network. CDPD shares the FDMA voice channels of the analog systems,
since many of these channels are idle due to the growth of digital cellular. The CDPD service provides
packet data transmission at rates of 19.2 Kbps, and is available throughout the U.S. However, since newer
generations of cellular systems also provide data services, CDPD is mostly being replaced by these newer
services.
All of these wireless data services have failed to grow as rapidly or to attract as many subscribers as
initially predicted, especially in comparison with the rousing success of wireless voice systems and wireless LANs. However, this might change with the rollout of the widely anticipated Wi-Max systems. Wi-Max
is based on the IEEE 802.16 standard. The core 802.16 specification is a standard for broadband wireless
access systems operating at radio frequencies between 10 GHz and 66 GHz with a target average data
rate of 70 Mb/s and peak rates of up to 268 Mb/s. The core standard has evolved in the 802.16a standard
to specify multiple physical layer specifications and an enhanced multiple access specification. Products
compatible with the Wi-Max standard should be available over the next few years. The proliferation of
laptop and palmtop computers and the explosive demand for constant Internet access and email exchange
indicates a possibly bright future for Wi-Max, but how Wi-Max ultimately plays out will depend on its
adoption by equipment vendors, pricing, and competition from other wireless services.

Wireless LANs

Wireless LANs provide high-speed data within a small region, e.g. a campus or small building, as users
move from place to place. Wireless devices that access these LANs are typically stationary or moving at
pedestrian speeds. Nearly all wireless LANs in the United States use one of the ISM frequency bands.
The appeal of these frequency bands, located at 900 MHz, 2.4 GHz, and 5.8 GHz, is that an FCC license
is not required to operate in these bands. However, this advantage is a double-edged sword, since many
other systems operate in these bands for the same reason, causing a great deal of interference between
systems. The FCC mitigates this interference problem by setting a limit on the power per unit bandwidth
for ISM-band systems. Wireless LANs can have either a star architecture, with wireless access points or
hubs placed throughout the coverage region, or a peer-to-peer architecture, where the wireless terminals
self-configure into a network.
Dozens of wireless LAN companies and products appeared in the early 1990’s to capitalize on the
“pent-up demand” for high-speed wireless data. These first generation wireless LANs were based on
proprietary and incompatible protocols, although most operated in the 900 MHz ISM band using direct
sequence spread spectrum with data rates on the order of 1-2 Mbps. Both star and peer-to-peer architec-
tures were used. The lack of standardization for these products led to high development costs, low-volume
production, and small markets for each individual product. Of these original products only a handful
were even mildly successful. Only one of the first generation wireless LANs, Motorola’s Altair, operated
outside the 900 MHz ISM band. This system, operating in the licensed 18 GHz band, had data rates on
the order of 6 Mbps. However, performance of Altair was hampered by the high cost of components and
the increased path loss at 18 GHz, and Altair was discontinued within a few years of its release.
The second generation of wireless LANs in the United States operate with 80 MHz of spectrum in
the 2.4 GHz ISM band. A wireless LAN standard for this frequency band, the IEEE 802.11b standard,
was developed to avoid some of the problems with the proprietary first generation systems. The standard
specifies direct-sequence spread spectrum with data rates of around 1.6 Mbps (raw data rates of 11
Mbps) and a range of approximately 500 ft. The network architecture can be either star or peer-to-
peer. Many companies have developed products based on the 802.11b standard, and these products are
constantly evolving to provide higher data rates and better coverage at very low cost. The market for 802.11b wireless LANs is growing, and most computer manufacturers integrate 802.11b wireless LAN
cards directly into their laptops. Many companies and universities have installed 802.11b base stations
throughout their locations, and even local coffee houses are installing these base stations to offer wireless
access to customers. After fairly slow growth initially, 802.11b has experienced much higher growth in
the last few years.
In addition to 802.11b, there are two additional wireless LAN standards that have recently been
deployed in the marketplace. The IEEE 802.11a wireless LAN standard operates in 300 MHz of spectrum
in the 5 GHz unlicensed band, which does not have interference from ISM primary users as in the 2.4
GHz band. The 802.11a standard is based on OFDM modulation and provides 20-70 Mbps data rates.
Since 802.11a has much more bandwidth and consequently many more channels than 802.11b, it can
support more users at higher data rates. There was some initial concern that 802.11a systems would be
significantly more expensive than 802.11b systems, but in fact they are becoming quite competitive in
price. The other standard, 802.11g, also uses OFDM and can be used in either the 2.4 GHz or 5 GHz
bands with speeds of up to 54 Mbps. Many new laptops and base stations have wireless LAN cards that
support all three standards to avoid incompatibilities.
In Europe wireless LAN development revolves around the HIPERLAN (high performance radio LAN)
standards. The first HIPERLAN standard, HIPERLAN Type 1, is similar to the IEEE 802.11a wireless
LAN standard and promises data rates of 20 Mbps at a range of 50 meters (150 feet). This system
operates in the 5 GHz band. Its network architecture is peer-to-peer, and the channel access mechanism
uses a variation of ALOHA with prioritization based on the lifetime of packets. The next generation of
HIPERLAN, HIPERLAN Type 2, is still under development, but the goal is to provide data rates on the
order of 54 Mbps with a similar range, and also to support access to cellular, ATM, and IP networks.
HIPERLAN Type 2 is also supposed to include support for Quality-of-Service (QoS), however it is not
yet clear how and to what extent this will be done.

Cordless Phones

Cordless telephones first appeared in the late 1970’s and have experienced spectacular growth ever since.
Roughly half of the phones in U.S. homes today are cordless. Cordless phones were originally designed
to provide a low-cost low-mobility wireless connection to the PSTN, i.e. a short wireless link to replace
the cord connecting a telephone base unit and its handset. Since cordless phones compete with wired
handsets, their voice quality must be similar: initial cordless phones had poor voice quality and were
quickly discarded by users. The first cordless systems allowed only one phone handset to connect to each
base unit, and coverage was limited to a few rooms of a house or office. This is still the main premise
behind cordless telephones in the U.S. today, although these phones now use digital technology instead
of analog. In Europe and the Far East digital cordless phone systems have evolved to provide coverage
over much wider areas, both in and away from home, and are similar in many ways to today’s cellular
telephone systems.
Digital cordless phone systems in the U.S. today consist of a wireless handset connected to a single
base unit which in turn is connected to the PSTN. These cordless phones impose no added complexity
on the telephone network, since the cordless base unit acts just like a wireline telephone for networking
purposes. The movement of these cordless handsets is extremely limited: a handset must remain within
range of its base unit. There is no coordination with other cordless phone systems, so a high density of
these systems in a small area, e.g. an apartment building, can result in significant interference between
systems. For this reason cordless phones today have multiple voice channels and scan between these
channels to find the one with minimal interference. Spread spectrum cordless phones have also been
introduced to reduce interference from other systems and narrowband interference.
In Europe and the Far East the second generation of digital cordless phones (CT-2, for cordless
telephone, second generation) have an extended range of use beyond a single residence or office. Within
a home these systems operate as conventional cordless phones. To extend the range beyond the home,
base stations, also called phone-points or telepoints, are mounted in places where people congregate, like
shopping malls, busy streets, train stations, and airports. Cordless phones registered with the telepoint
provider can place calls whenever they are in range of a telepoint. Calls cannot be received from the
telepoint since the network has no routing support for mobile users, although some newer CT-2 handsets
have built-in pagers to compensate for this deficiency. These systems also do not handoff calls if a user
moves between different telepoints, so a user must remain within range of the telepoint where his call
was initiated for the duration of the call. Telepoint service was introduced twice in the United Kingdom
and failed both times, but these systems grew rapidly in Hong Kong and Singapore through the mid
1990’s. This rapid growth deteriorated quickly after the first few years, as cellular phone operators cut
prices to compete with telepoint service. The main complaint about telepoint service was the incomplete
radio coverage and lack of handoff. Since cellular systems avoid these problems, as long as prices were
competitive there was little reason for people to use telepoint services. Most of these services have now
disappeared.
Another evolution of the cordless telephone designed primarily for office buildings is the European
DECT system. The main function of DECT is to provide local mobility support for users in an in-building
private branch exchange (PBX). In DECT systems base units are mounted throughout a building, and
each base station is attached through a controller to the PBX of the building. Handsets communicate to
the nearest base station in the building, and calls are handed off as a user walks between base stations.
DECT can also ring handsets from the closest base station. The DECT standard also supports telepoint
services, although this application has not received much attention, probably due to the failure of CT-2
services. There are currently around 7 million DECT users in Europe, but the standard has not yet
spread to other countries.
The most recent advance in cordless telephone system design is the Personal Handyphone System
(PHS) in Japan. The PHS system is quite similar to a cellular system, with widespread base station
deployment supporting handoff and call routing between base stations. With these capabilities PHS does
not suffer from the main limitations of the CT-2 system. Initially PHS systems enjoyed one of the fastest
growth rates ever for a new technology. In 1997, two years after its introduction, PHS subscribers peaked
at about 7 million users, and the subscriber count has declined slightly since then due mainly to sharp price cutting by cellular
providers. The main difference between a PHS system and a cellular system is that PHS cannot support
call handoff at vehicle speeds. This deficiency is mainly due to the dynamic channel allocation procedure
used in PHS. Dynamic channel allocation greatly increases the number of handsets that can be serviced
by a single base station, thereby lowering the system cost, but it also complicates the handoff procedure.
It is too soon to tell if PHS systems will go the same route as CT-2. However, it is clear from the recent
history of cordless phone systems that to extend the range of these systems beyond the home requires
either the same functionality as cellular systems or a significantly reduced cost.

Current Wireless Systems

1. Cellular Telephone Systems:
Cellular telephone systems, also referred to as Personal Communication Systems (PCS), are extremely
popular and lucrative worldwide: these systems have sparked much of the optimism about the future
of wireless networks. Cellular systems today provide two-way voice and data communication at vehicle
speeds with regional or national coverage. Cellular systems were initially designed for mobile terminals inside vehicles with antennas mounted on the vehicle roof. Today these systems have evolved to support
lightweight handheld mobile terminals operating inside and outside buildings at both pedestrian and
vehicle speeds.
The basic premise behind cellular system design is frequency reuse, which exploits path loss to reuse
the same frequency spectrum at spatially-separated locations. Specifically, the coverage area of a cellular
system is divided into nonoverlapping cells where some set of channels is assigned to each cell. This
same channel set is used in another cell some distance away, as shown in Figure 1.4, where f_i denotes
the channel set used in a particular cell. Operation within a cell is controlled by a centralized base
station, as described in more detail below. The interference caused by users in different cells operating
on the same channel set is called intercell interference. The spatial separation of cells that reuse the
same channel set, the reuse distance, should be as small as possible to maximize the spectral efficiency
obtained by frequency reuse. However, as the reuse distance decreases, intercell interference increases, due
to the smaller propagation distance between interfering cells. Since intercell interference must remain
below a given threshold for acceptable system performance, reuse distance cannot be reduced below
some minimum value. In practice it is quite difficult to determine this minimum value since both the
transmitting and interfering signals experience random power variations due to path loss, shadowing,
and multipath. In order to determine the best reuse distance and base station placement, an accurate
characterization of signal propagation within the cells is needed. This characterization is usually obtained
using detailed analytical models, sophisticated computer-aided modeling, or empirical measurements.
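A common back-of-the-envelope view of the reuse-distance tradeoff, far cruder than the propagation characterization described above, approximates the signal-to-intercell-interference ratio using a single path-loss exponent and six equidistant first-tier interferers. The sketch below uses assumed example values.

    import math

    def approx_sir_dB(D_over_R, gamma=3.5, n_interferers=6):
        """Rough SIR under a distance-power-law: SIR ~ (D/R)**gamma / N_I,
        assuming six equidistant first-tier co-channel interferers."""
        return 10 * math.log10(D_over_R**gamma / n_interferers)

    for D_over_R in (4, 6, 8):      # candidate reuse distances, in cell radii (assumed)
        print(f"D/R = {D_over_R}: SIR ~ {approx_sir_dB(D_over_R):.1f} dB")

Under this crude model, shrinking the reuse distance from eight to four cell radii costs roughly 10 dB of SIR, which illustrates why the reuse distance cannot be reduced arbitrarily.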
Initial cellular system designs were mainly driven by the high cost of base stations, approximately
one million dollars apiece. For this reason early cellular systems used a relatively small number of cells
to cover an entire city or region. The cell base stations were placed on tall buildings or mountains and
transmitted at very high power with cell coverage areas of several square miles. These large cells are called
macrocells. Signals propagated out from base stations uniformly in all directions, so a mobile moving in a
circle around the base station would have approximately constant received power. This circular contour of constant power yields a hexagonal cell shape for the system, since a hexagon is the closest shape to a
circle that can cover a given area with multiple nonoverlapping cells.
Cellular telephone systems are now evolving to smaller cells with base stations close to street level or
inside buildings transmitting at much lower power. These smaller cells are called microcells or picocells,
depending on their size. This evolution is driven by two factors: the need for higher capacity in areas with
high user density and the reduced size and cost of base station electronics. A cell of any size can support
roughly the same number of users if the system is scaled accordingly. Thus, for a given coverage area a
system with many microcells has a higher number of users per unit area than a system with just a few
macrocells. Small cells also have better propagation conditions since the lower base stations have reduced
shadowing and multipath. In addition, less power is required at the mobile terminals in microcellular
systems, since the terminals are closer to the base stations. However, the evolution to smaller cells has
complicated network design. Mobiles traverse a small cell more quickly than a large cell, and therefore
handoffs must be processed more quickly. In addition, location management becomes more complicated,
since there are more cells within a given city where a mobile may be located. It is also harder to develop
general propagation models for small cells, since signal propagation in these cells is highly dependent on
base station placement and the geometry of the surrounding reflectors. In particular, a hexagonal cell
shape is not a good approximation to signal propagation in microcells. Microcellular systems are often
designed using square or triangular cell shapes, but these shapes have a large margin of error in their
approximation to microcell signal propagation [7].
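The capacity argument above is essentially area scaling: if each cell supports roughly the same number of users regardless of its size, then the number of users in a fixed coverage area grows as the inverse square of the cell radius. The sketch below uses assumed numbers purely for illustration.

    import math

    coverage_km2 = 100.0      # total coverage area (assumed)
    users_per_cell = 50       # roughly independent of cell size (assumed)

    for radius_km in (5.0, 1.0, 0.2):       # macrocell, microcell, picocell radii (assumed)
        n_cells = coverage_km2 / (math.pi * radius_km**2)
        print(f"R = {radius_km:3.1f} km: ~{n_cells:7.0f} cells, "
              f"~{n_cells * users_per_cell:9.0f} simultaneous users")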
All base stations in a given geographical area are connected via a high-speed communications link
to a mobile telephone switching office (MTSO), as shown in Figure 1.5. The MTSO acts as a central
controller for the network, allocating channels within each cell, coordinating handoffs between cells when
a mobile traverses a cell boundary, and routing calls to and from mobile users. The MTSO can route
voice calls through the public switched telephone network (PSTN) or provide Internet access for data
exchange. A new user located in a given cell requests a channel by sending a call request to the cell’s
base station over a separate control channel. The request is relayed to the MTSO, which accepts the call
request if a channel is available in that cell. If no channels are available then the call request is rejected.
A call handoff is initiated when the base station or the mobile in a given cell detects that the received
signal power for that call is approaching a given minimum threshold. In this case the base station informs
the MTSO that the mobile requires a handoff, and the MTSO then queries surrounding base stations to
determine if one of these stations can detect that mobile’s signal. If so then the MTSO coordinates a
handoff between the original base station and the new base station. If no channels are available in the
cell with the new base station then the handoff fails and the call is terminated. False handoffs may also
be initiated if a mobile is in a deep fade, causing its received signal power to drop below the minimum
threshold even though it may be nowhere near a cell boundary.
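The handoff procedure just described is, at its core, a threshold test followed by a query of neighboring base stations. The simplified sketch below uses made-up names and threshold values; it is not the signaling of any particular standard.

    HANDOFF_THRESHOLD_dBm = -95.0   # assumed minimum acceptable received power

    def handoff_decision(serving_power_dBm, neighbor_reports):
        """Simplified MTSO-style handoff logic (illustrative names and values only).

        neighbor_reports: list of (cell_id, measured_power_dBm, has_free_channel).
        Returns "serving", a neighboring cell_id, or None if the call is dropped.
        """
        if serving_power_dBm > HANDOFF_THRESHOLD_dBm:
            return "serving"                              # signal still adequate: no handoff
        # The serving base station reports the weak signal; the MTSO polls neighbors.
        candidates = [(cell, p) for cell, p, free in neighbor_reports
                      if free and p > HANDOFF_THRESHOLD_dBm]
        if not candidates:
            return None                                   # no channel available: handoff fails
        return max(candidates, key=lambda c: c[1])[0]     # hand off to the strongest neighbor

    # Example: the serving signal has faded below threshold and neighbor "B" can take the call.
    print(handoff_decision(-101.0, [("A", -99.0, False), ("B", -93.0, True)]))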
The first generation of cellular systems were analog and the second generation moved from analog to
digital technology. Digital technology has many advantages over analog. The components are cheaper,
faster, smaller, and require less power. Voice quality is improved due to error correction coding. Digital
systems also have higher capacity than analog systems since they are not limited to frequency division
for multiple access, and they can take advantage of advanced compression techniques and voice activity
factors. In addition, encryption techniques can be used to secure digital signals against eavesdropping.
Third generation cellular systems enhanced the digital voice capabilities of the second generation with
digital data, including short messaging, email, Internet access, and imaging capabilities (camera phones).
There is still widespread coverage of first generation cellular systems throughout the US, and some rural
areas only have analog cellular. However, due to their lower cost and higher efficiency, service providers
have used aggressive pricing tactics to encourage user migration from analog to digital systems. Digital
systems do not always work as well as the old analog ones. Users can experience poor voice quality,
frequent call dropping, short battery life, and spotty coverage in certain areas. System performance
will certainly improve as the technology and networks mature. Indeed, in some areas cellular phones
provide almost the same quality as wireline service, and a segment of the US population has replaced
their wireline telephone service inside the home with cellular service. This process has been accelerated
by cellular service plans with free long distance throughout the US.
Spectral sharing in digital cellular can be done using frequency-division, time-division, code-division
(spread spectrum), or hybrid combinations of these techniques (see Chapter 14). In time-division the
signal occupies the entire frequency band, and is divided into time slots t_i which are reused in distant
cells [8]. Time division is depicted by Figure 1.4 if the f_i's are replaced by t_i's. Time-division is more
difficult to implement than frequency-division since the users must be time-synchronized. However, it is
easier to accommodate multiple data rates with time-division since multiple timeslots can be assigned
to a given user. Spectral sharing can also be done using code division, which is commonly implemented
using either direct-sequence or frequency-hopping spread spectrum [9]. In direct-sequence each user
modulates its data sequence by a different pseudorandom chip sequence which is much faster than the
data sequence. In the frequency domain, the narrowband data signal is convolved with the wideband chip
signal, resulting in a signal with a much wider bandwidth than the original data signal - hence the name
spread spectrum. In frequency hopping the carrier frequency used to modulate the narrowband data
signal is varied by a pseudorandom chip sequence which may be faster or slower than the data sequence.
Since the carrier frequency is hopped over a large signal bandwidth, frequency-hopping also spreads the
data signal to a much wider bandwidth. Typically spread spectrum signals are superimposed onto each
other within the same signal bandwidth. A spread spectrum receiver can separate each of the distinct
signals by separately decoding each spreading sequence. However, since the codes are semi-orthogonal,
the users within a cell interfere with each other (intracell interference), and codes that are reused in other
cells also cause interference (intercell interference). Both the intracell and intercell interference power is
reduced by the spreading gain of the code. Moreover, interference in spread spectrum systems can be
further reduced through multiuser detection and interference cancellation.
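As a toy illustration of direct-sequence spreading, the sketch below uses random chip sequences in place of real semi-orthogonal codes: each user's bits are spread onto a much faster chip sequence, the users are superimposed on the channel, and each bit is recovered by correlating against the corresponding code.

    import numpy as np

    rng = np.random.default_rng(1)
    chips_per_bit = 64                                    # spreading gain (assumed)

    # Each user gets its own pseudorandom +/-1 chip sequence (stand-in for a real code).
    codes = rng.choice([-1.0, 1.0], size=(2, chips_per_bit))
    bits = np.array([[+1, -1, +1], [-1, -1, +1]], dtype=float)    # data for user 0 and user 1

    # Spread: each data bit multiplies the user's full chip sequence; the users are then
    # superimposed on the channel within the same bandwidth.
    tx = sum(np.kron(bits[u], codes[u]) for u in range(2))

    # Despread: correlate the received signal with each user's code, one bit at a time.
    rx = tx.reshape(-1, chips_per_bit)
    for u in range(2):
        decisions = np.sign(rx @ codes[u])
        print(f"user {u}: recovered bits {decisions.astype(int)}")

The residual cross-correlation between the two codes is the intracell interference described above, reduced relative to the desired signal by the spreading gain.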
In the U.S. the standards activities surrounding the second generation of digital cellular systems
provoked a raging debate on multiple access for these systems, resulting in several incompatible standards
[10, 11, 12]. In particular, there are two standards in the 900 MHz (cellular) frequency band: IS-54, which
uses a combination of TDMA and FDMA, and IS-95, which uses semi-orthogonal CDMA [13, 14]. The
spectrum for digital cellular in the 2 GHz (PCS) frequency band was auctioned off, so service providers
could use an existing standard or develop proprietary systems for their purchased spectrum. The end
result has been three different digital cellular standards for this frequency band: IS-136 (which is basically the same as IS-54 at a higher frequency), IS-95, and the European digital cellular standard GSM, which
uses a combination of TDMA and slow frequency-hopping. The digital cellular standard in Japan is
similar to IS-54 and IS-136 but in a different frequency band, and the GSM system in Europe is at a
different frequency than the GSM systems in the U.S. This proliferation of incompatible standards in
the U.S. and abroad makes it impossible to roam between systems nationwide or globally without using
multiple phones (and phone numbers).
All of the second generation digital cellular standards have been enhanced to support high rate
packet data services [15]. GSM systems provide data rates of up to 100 Kbps by aggregating all timeslots
together for a single user. This enhancement was called GPRS. A more fundamental enhancement, called
Enhanced Data rates for GSM Evolution (EDGE), further increases data rates using a high-level
modulation format combined with FEC coding. This modulation is more sensitive to fading effects, and
EDGE uses adaptive modulation and coding to mitigate this problem. Specifically, EDGE defines six
different modulation and coding combinations, each optimized to a different value of received SNR. The
received SNR is measured at the receiver and fed back to the transmitter, and the best modulation and
coding combination for this SNR value is used. The IS-54 and IS-136 systems currently provide data rates
of 40-60 Kbps by aggregating time slots and using high-level modulation. This new TDMA standard is
referred to as IS-136HS (high-speed). Many of these time-division systems are moving toward GSM, and
their corresponding enhancements to support high speed data. The IS-95 systems support higher data rates
using a time-division technique called high data rate (HDR) [16].
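The adaptive modulation and coding loop used by EDGE can be sketched as a lookup keyed on the fed-back SNR. The thresholds and rates below are invented for illustration and are not the actual EDGE modulation and coding sets.

    # (SNR threshold in dB, scheme, nominal rate in kbps) -- invented values, not the real EDGE set
    MCS_TABLE = [
        (22.0, "8PSK, light coding",  59.2),
        (17.0, "8PSK, medium coding", 44.8),
        (12.0, "8PSK, heavy coding",  29.6),
        ( 8.0, "GMSK, light coding",  17.6),
        ( 4.0, "GMSK, medium coding", 11.2),
        (float("-inf"), "GMSK, heavy coding", 8.8),
    ]

    def pick_mcs(reported_snr_dB):
        """Return the highest-rate scheme whose SNR threshold the feedback meets."""
        for threshold, scheme, rate_kbps in MCS_TABLE:
            if reported_snr_dB >= threshold:
                return scheme, rate_kbps

    # The receiver measures SNR and feeds it back; the transmitter adapts accordingly:
    for snr_dB in (25.0, 14.0, 3.0):
        print(snr_dB, "dB ->", pick_mcs(snr_dB))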
The third generation of cellular phones is based on a wideband CDMA standard developed within
the auspices of the International Telecommunications Union (ITU) [15]. The standard, initially called
International Mobile Telecommunications 2000 (IMT-2000), provides different data rates depending on
mobility and location, from 384 Kbps for pedestrian use to 144 Kbps for vehicular use to 2 Mbps for
indoor office use. The 3G standard is incompatible with 2G systems, so service providers must invest in
a new infrastructure before they can provide 3G service. The first 3G systems were deployed in Japan,
where they have experienced limited success with a somewhat slower growth than expected. One reason
that 3G services came out first in Japan is the process of 3G spectrum allocation, which in Japan was
awarded without much up-front cost. The 3G spectrum in both Europe and the U.S. is allocated based
on auctioning, thereby requiring a huge initial investment for any company wishing to provide 3G service.
European companies collectively paid over 100 billion dollars in their 3G spectrum auctions. There has
been much controversy over the 3G auction process in Europe, with companies charging that the nature
of the auctions caused enormous overbidding and that it will be very difficult if not impossible to reap a
profit on this spectrum. A few of the companies have already decided to write off their investment in 3G
spectrum and not pursue system buildout. In fact 3G systems have not yet come online in Europe, and
it appears that data enhancements to 2G systems may suffice to satisfy user demands. However, the 2G
spectrum in Europe is severely overcrowded, so users will either eventually migrate to 3G or regulations
will change so that 3G bandwidth can be used for 2G services (which is not currently allowed in Europe).
3G development in the U.S. has lagged far behind that of Europe. The available 3G spectrum in the U.S.
is only about half that available in Europe. Due to wrangling over which parts of the spectrum will be
used, the spectral auctions have been delayed. However, the U.S. does allow the 1G and 2G spectrum
to be used for 3G, and this flexibility may allow a more gradual rollout and investment than the more
restrictive 3G requirements in Europe. It appears that delaying 3G in the U.S. will allow U.S. service
providers to learn from the mistakes and successes in Europe and Japan.
Efficient cellular system designs are interference-limited, i.e. the interference dominates the noise
floor since otherwise more users could be added to the system. As a result, any technique to reduce
interference in cellular systems leads directly to an increase in system capacity and performance. Some methods for interference reduction in use today or proposed for future systems include cell sectorization
[6], directional and smart antennas [19], multiuser detection [20], and dynamic channel and resource
allocation [21, 22].

Technical Issues

The technical problems that must be solved to deliver high-performance wireless systems extend across
all levels of the system design. At the hardware level the terminal must have multiple modes of operation
to support the different applications and media. Desktop computers currently have the capability to
process voice, image, text, and video data, but breakthroughs in circuit design are required to implement
multimode operation in a small, lightweight, handheld device. Since most people don’t want to carry
around a twenty pound battery, the signal processing and communications hardware of the portable
terminal must consume very little power, which will impact higher levels of the system design. Many of
the signal processing techniques required for efficient spectral utilization and networking demand much
processing power, precluding the use of low power devices. Hardware advances for low power circuits
with high processing ability will relieve some of these limitations. However, placing the processing burden
on fixed sites with large power resources has dominated and will continue to dominate wireless system designs. The
associated bottlenecks and single points-of-failure are clearly undesirable for the overall system. Moreover,
in some applications (e.g. sensors) network nodes will not be able to recharge their batteries. In this
case the finite battery energy must be allocated efficiently across all layers of the network protocol stack
[5]. The finite bandwidth and random variations of the communication channel will also require robust
compression schemes which degrade gracefully as the channel degrades.
The wireless communication channel is an unpredictable and difficult communications medium. First
of all, the radio spectrum is a scarce resource that must be allocated to many different applications and
systems. For this reason spectrum is controlled by regulatory bodies both regionally and globally. In the U.S. spectrum is allocated by the FCC, in Europe the equivalent body is the European Telecommu-
nications Standards Institute (ETSI), and globally spectrum is controlled by the International Telecom-
munications Union (ITU). A regional or global system operating in a given frequency band must obey
the restrictions for that band set forth by the corresponding regulatory body as well as any standards
adopted for that spectrum. Spectrum can also be very expensive since in most countries, including the
U.S., spectral licenses are now auctioned to the highest bidder. In the 2 GHz spectral auctions of the
early 90s, companies spent over nine billion dollars for licenses, and the recent auctions in Europe for 3G
spectrum garnered over 100 billion dollars. The spectrum obtained through these auctions must be used
extremely efficiently to get a reasonable return on its investment, and it must also be reused over and
over in the same geographical area, thus requiring cellular system designs with high capacity and good
performance. At frequencies around several Gigahertz wireless radio components with reasonable size,
power consumption, and cost are available. However, the spectrum in this frequency range is extremely
crowded. Thus, technological breakthroughs to enable higher frequency systems with the same cost and
performance would greatly reduce the spectrum shortage, although path loss at these higher frequencies
increases, thereby limiting range.
As a signal propagates through a wireless channel, it experiences random fluctuations in time if the
transmitter or receiver is moving, due to changing reflections and attenuation. Thus, the characteristics
of the channel appear to change randomly with time, which makes it difficult to design reliable systems
with guaranteed performance. Security is also more difficult to implement in wireless systems, since
the airwaves are susceptible to snooping from anyone with an RF antenna. The analog cellular systems
have no security, and it is easy to listen in on conversations by scanning the analog cellular frequency
band. All digital cellular systems implement some level of encryption. However, with enough knowledge,
time and determination most of these encryption methods can be cracked and, indeed, several have been compromised. To support applications like electronic commerce and credit card transactions, the wireless
network must be secure against such listeners.
Wireless networking is also a significant challenge. The network must be able to locate a given user
wherever it is amongst millions of globally-distributed mobile terminals. It must then route a call to that
user as it moves at speeds of up to 100 mph. The finite resources of the network must be allocated in
a fair and efficient manner relative to changing user demands and locations. Moreover, there currently
exists a tremendous infrastructure of wired networks: the telephone system, the Internet, and fiber optic
cable, which should be used to connect wireless systems together into a global network. However, wireless
systems with mobile users will never be able to compete with wired systems in terms of data rate and
reliability. The design of protocols to interface between wireless and wired networks with vastly different
performance capabilities remains a challenging topic of research.
Perhaps the most significant technical challenge in wireless network design is an overhaul of the
design process itself. Wired networks are mostly designed according to the layers of the OSI model.
The most relevant layers of this model for wireless systems are the link or physical layer, which handles
bit transmissions over the communications medium, the multiple access layer, which handles shared
access to the communications medium, the network layer, which routes data across the networks, and
the application layer, which dictates the end-to-end data rates and delay constraints associated with
the application. In the OSI model each layer of the protocol stack is designed independently of the
other layers with baseline mechanisms to interface between layers. This methodology greatly simplifies
network design, although it leads to some inefficiency and performance loss due to the lack of a global
design optimization. However, the large capacity and good reliability of wired network links make it
easier to buffer high-level network protocols from the lower level protocols for link transmission and
access, and the performance loss resulting from this isolated protocol design is fairly low. However, the
situation is very different in a wireless network. Wireless links can exhibit very poor performance, and
this performance along with user connectivity and network topology changes over time. In fact, the very
notion of a wireless link is somewhat fuzzy due to the nature of radio propagation. The dynamic nature
and poor performance of the underlying wireless communication channel indicates that high-performance
wireless networks must be optimized for this channel and must adapt to its variations as well as to user
mobility. Thus, these networks will require an integrated and adaptive protocol stack across all layers
of the OSI model, from the link layer to the application layer. This cross-layer design approach draws
from many areas of expertise, including physics, communications, signal processing, network theory and
design, software design, and hardware design. Moreover, given the fundamental limitations of the wireless
channels and the explosive demand for its utilization, communication between these interdisciplinary
groups is necessary to implement systems that can achieve the wireless vision described in the previous
section.
In the next section we give an overview of the wireless systems in operation today. It will be clear
from this overview that the wireless vision remains a distant goal, with many challenges remaining before
it will be realized. Many of these challenges will be examined in detail in later chapters.

Wireless Vision

The vision of wireless communications supporting information exchange between people or devices is
the communications frontier of the next century. This vision will allow people to operate a virtual
office anywhere in the world using a small handheld device - with seamless telephone, modem, fax, and
computer communications. Wireless networks will also be used to connect together palmtop, laptop, and
desktop computers anywhere within an office building or campus, as well as from the corner cafe. In
the home these networks will enable a new class of intelligent home electronics that can interact with
each other and with the Internet in addition to providing connectivity between computers, phones, and
security/monitoring systems. Such smart homes can also help the elderly and disabled with assisted
living, patient monitoring, and emergency response. Video teleconferencing will take place between
buildings that are blocks or continents apart, and these conferences can include travelers as well, from
the salesperson who missed his plane connection to the CEO off sailing in the Caribbean. Wireless
video will be used to create remote classrooms, remote training facilities, and remote hospitals anywhere
in the world. Wireless sensors have an enormous range of both commercial and military applications.
Commercial applications include monitoring of fire hazards, hazardous waste sites, stress and strain
in buildings and bridges, or carbon dioxide movement and the spread of chemicals and gasses at a
disaster site. These wireless sensors will self-configure into a network to process and interpret sensor
measurements and then convey this information to a centralized control location. Military applications
include identification and tracking of enemy targets, detection of chemical and biological attacks, and
the support of unmanned robotic vehicles. Finally, wireless networks enable distributed control systems,
with remote devices, sensors, and actuators linked together via wireless communication channels. Such
networks are imperative for coordinating unmanned mobile units and greatly reduce maintenance and
reconfiguration costs over distributed control systems with wired communication links, for example in
factory automation.
The various applications described above are all components of the wireless vision. So what, ex-
actly, is wireless communications? There are many different ways to segment this complex topic into
different applications, systems, or coverage regions. Wireless applications include voice, Internet access,
web browsing, paging and short messaging, subscriber information services, file transfer, video telecon-
ferencing, sensing, and distributed control. Systems include cellular telephone systems, wireless LANs,
wide-area wireless data systems, satellite systems, and ad hoc wireless networks. Coverage regions in-
clude in-building, campus, city, regional, and global. The question of how best to characterize wireless
communications along these various segments has resulted in considerable fragmentation in the industry,
as evidenced by the many different wireless products, standards, and services being offered or proposed.
One reason for this fragmentation is that different wireless applications have different requirements. Voice
systems have relatively low data rate requirements (around 20 Kbps) and can tolerate a fairly high prob-
ability of bit error (bit error rates, or BERs, of around 10^-3), but the total delay must be less than 100
msec or it becomes noticeable to the end user. On the other hand, data systems typically require much
higher data rates (1-100 Mbps) and very small BERs (the target BER is 10^-8 and all bits received in
error must be retransmitted) but do not have a fixed delay requirement. Real-time video systems have
high data rate requirements coupled with the same delay constraints as voice systems, while paging and
short messaging have very low data rate requirements and no delay constraints. These diverse require-
ments for different applications make it difficult to build one wireless system that can satisfy all these
requirements simultaneously. Wired networks are moving towards integrating the diverse requirements
of different systems using a single protocol (e.g. ATM or SONET). This integration requires that the
most stringent requirements for all applications be met simultaneously. While this is possible on wired
networks, with data rates on the order of Gbps and BERs on the order of 10^-12, it is not possible on
wireless networks, which have much lower data rates and higher BERs. Therefore, at least in the near
future, wireless systems will continue to be fragmented, with different protocols tailored to support the
requirements of different applications.
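The requirement trade-offs above can be collected into a simple lookup structure. The following Python sketch
is purely illustrative; the numeric targets are the rough values quoted above, not a specification from any standard.

    # Rough per-application targets quoted above (illustrative values only).
    requirements = {
        # application: (data rate, target BER, delay bound)
        "voice":            ("~20 Kbps",   1e-3, "100 msec"),
        "data":             ("1-100 Mbps", 1e-8, None),   # errored bits are retransmitted
        "real-time video":  ("high",       None, "100 msec"),
        "paging/messaging": ("very low",   None, None),
    }

    for app, (rate, ber, delay) in requirements.items():
        print(f"{app:18s} rate={rate:12s} BER={ber} delay bound={delay}")

A single network carrying all of these traffic types would have to meet the most stringent entry in each
column simultaneously, which is exactly the integration problem described above.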
Will there be a large demand for all wireless applications, or will some flourish while others vanish?
Companies are investing large sums of money to build multimedia wireless systems, yet many multimedia
wireless systems have gone bankrupt in the past. Experts have been predicting a huge market for wireless
data services and products for the last 10 years, but the market for these products remains relatively
small, although in recent years growth has picked up substantially. To examine the future of wireless data,
it is useful to see the growth of various communication services, as shown in Figure 1.1. In this figure
we see that cellular and paging subscribers have been growing exponentially. This growth is exceeded
only by the growing demand for Internet access, driven by web browsing and email exchange. The
number of laptop and palmtop computers is also growing steadily. These trends indicate that people
want to communicate while on the move. They also want to take their computers wherever they go. It
is therefore reasonable to assume that people want the same data communications capabilities on the
move as they enjoy in their home or office. Yet exponential growth for high-speed wireless data has not
yet materialized, except for relatively stationary users accessing the network via a wireless LAN. Why
the discrepancy? Perhaps the main reason for the lack of enthusiasm in wireless data for highly mobile
users is the high cost and poor performance of today’s systems, along with a lack of “killer applications”
for mobile users beyond voice and low-rate data. However, this might change with some of the emerging
standards on the horizon.

History of Wireless Communications
The first wireless networks were developed in the Pre-industrial age. These systems transmitted infor-
mation over line-of-sight distances (later extended by telescopes) using smoke signals, torch signaling,
flashing mirrors, signal flares, or semaphore flags. An elaborate set of signal combinations was developed
to convey complex messages with these rudimentary signals. Observation stations were built on hilltops
and along roads to relay these messages over large distances. These early communication networks were
replaced first by the telegraph network (invented by Samuel Morse in 1838) and later by the telephone.
In 1895, a few decades after the telephone was invented, Marconi demonstrated the first radio trans-
mission from the Isle of Wight to a tugboat 18 miles away, and radio communications was born. Radio
technology advanced rapidly to enable transmissions over larger distances with better quality, less power,
and smaller, cheaper devices, thereby enabling public and private radio communications, television, and wireless networking.
Early radio systems transmitted analog signals. Today most radio systems transmit digital signals
composed of binary bits, where the bits are obtained directly from a data signal or by digitizing an
analog voice or music signal. A digital radio can transmit a continuous bit stream or it can group
the bits into packets. The latter type of radio is called a packet radio and is characterized by bursty
transmissions: the radio is idle except when it transmits a packet. The first network based on packet
radio, ALOHANET, was developed at the University of Hawaii in 1971. This network enabled computer
sites at seven campuses spread out over four islands to communicate with a central computer on Oahu
via radio transmission. The network architecture used a star topology with the central computer at its
hub. Any two computers could establish a bi-directional communications link between them by going
through the central hub. ALOHANET incorporated the first set of protocols for channel access and
routing in packet radio systems, and many of the underlying principles in these protocols are still in
use today. The U.S. military was extremely interested in the combination of packet data and broadcast
radio inherent to ALOHANET. Throughout the 70's and early 80's the Defense Advanced Research
Projects Agency (DARPA) invested significant resources to develop networks using packet radios for
tactical communications in the battlefield. The nodes in these ad hoc wireless networks had the ability to
self-configure (or reconfigure) into a network without the aid of any established infrastructure. DARPA's
investment in ad hoc networks peaked in the mid 1980's, but the resulting networks fell far short of
expectations in terms of speed and performance. DARPA has continued work on ad hoc wireless network
research for military use, but many technical challenges in terms of performance and robustness remain.
Packet radio networks have also found commercial application in supporting wide-area wireless data
services. These services, first introduced in the early 1990's, enable wireless data access (including email,
file transfer, and web browsing) at fairly low speeds, on the order of 20 Kbps. The market for these
wide-area wireless data services is relatively flat, due mainly to their low data rates, high cost, and lack
of “killer applications”. Next-generation cellular services are slated to provide wireless data in addition
to voice, which will provide stiff competition to these data-only services.
The introduction of wired Ethernet technology in the 1970's steered many commercial companies
away from radio-based networking. Ethernet’s 10 Mbps data rate far exceeded anything available using
radio, and companies did not mind running cables within and between their facilities to take advantage
of these high rates. In 1985 the Federal Communications Commission (FCC) enabled the commercial
development of wireless LANs by authorizing the public use of the Industrial, Scientific, and Medical
(ISM) frequency bands for wireless LAN products. The ISM band was very attractive to wireless LAN
vendors since they did not need to obtain an FCC license to operate in this band. However, the wireless
LAN systems could not interfere with the primary ISM band users, which forced them to use a low power
profile and an inefficient signaling scheme. Moreover, the interference from primary users within this
frequency band was quite high. As a result these initial LAN systems had very poor performance in
terms of data rates and coverage. This poor performance, coupled with concerns about security, lack
of standardization, and high cost (the first network adaptors listed for $1,400 as compared to a few
hundred dollars for a wired Ethernet card) resulted in weak sales for these initial LAN systems. Few of
these systems were actually used for data networking: they were relegated to low-tech applications like
inventory control. The current generation of wireless LANs, based on the IEEE 802.11b and 802.11a
standards, has better performance, although the data rates are still relatively low (effective data rates
on the order of 2 Mbps for 802.11b and around 10 Mbps for 802.11a) and the coverage area is still small
(100-500 feet). Wired Ethernets today offer data rates of 100 Mbps, and the performance gap between
wired and wireless LANs is likely to increase over time without additional spectrum allocation. Despite
the big data rate differences, wireless LANs are becoming the preferred Internet access method in many
homes, offices, and campus environments due to their convenience and freedom from wires. However,
most wireless LANs support applications that are not bandwidth-intensive (email, file transfer, web
browsing) and typically have only one user at a time accessing the system. The challenge for widespread
wireless LAN acceptance and use will be for the wireless technology to support many users simultaneously,
especially if bandwidth-intensive applications become more prevalent.
By far the most successful application of wireless networking has been the cellular telephone system.
Cellular telephones are projected to have a billion subscribers worldwide within the next few years. The
convergence of radio and telephony began in 1915, when wireless voice transmission between New York
and San Francisco was first established. In 1946 public mobile telephone service was introduced in 25 cities
across the United States. These initial systems used a central transmitter to cover an entire metropolitan
area. This inefficient use of the radio spectrum coupled with the state of radio technology at that time
severely limited the system capacity: thirty years after the introduction of mobile telephone service the
New York system could only support 543 users.
A solution to this capacity problem emerged during the 50’s and 60’s when researchers at AT&T
Bell Laboratories developed the cellular concept [1]. Cellular systems exploit the fact that the power
of a transmitted signal falls off with distance. Thus, the same frequency channel can be allocated to
users at spatially-separate locations with minimal interference between the users. Using this premise, a
cellular system divides a geographical area into adjacent, non-overlapping, “cells”. Different channel sets
are assigned to each cell, and cells that are assigned the same channel set are spaced far enough apart so
that interference between the mobiles in these cells is small. Each cell has a centralized transmitter and
receiver (called a base station) that communicates with the mobile units in that cell, both for control
purposes and as a call relay. All base stations have high-bandwidth connections to a mobile telephone
switching office (MTSO), which is itself connected to the public-switched telephone network (PSTN). The
handoff of mobile units crossing cell boundaries is typically handled by the MTSO, although in current
systems some of this functionality is handled by the base stations and/or mobile units.
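A back-of-the-envelope calculation illustrates why frequency reuse works: received power falls off rapidly
with distance, so a co-channel base station several cells away contributes little interference. The sketch
below is a hypothetical example; the path-loss exponent and the distances are assumptions chosen purely
for illustration, not values from the text.

    import math

    def received_power(p_tx, distance_km, gamma=4.0):
        # Simplified path-loss model: power falls off as distance**(-gamma).
        # gamma = 4 is an assumed exponent, not a value given in the text.
        return p_tx / distance_km ** gamma

    signal = received_power(1.0, 1.0)        # desired base station 1 km away
    interference = received_power(1.0, 6.0)  # nearest co-channel base station 6 km away
    sir_db = 10 * math.log10(signal / interference)
    print(f"Signal-to-interference ratio: {sir_db:.1f} dB")  # about 31 dB

Even this crude model shows that spacing co-channel cells a few cell radii apart keeps the interference
between the mobiles in those cells small.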
The original cellular system design was finalized in the late 60’s. However, due to regulatory delays
from the FCC, the system was not deployed until the early 80’s, by which time much of the original
technology was out-of-date. The explosive growth of the cellular industry took most everyone by surprise,
especially the original inventors at AT&T, since AT&T basically abandoned the cellular business by the
early 80’s to focus on fiber optic networks. The first analog cellular system deployed in Chicago in 1983
was already saturated by 1984, at which point the FCC increased the cellular spectral allocation from 40
MHz to 50 MHz. As more and more cities became saturated with demand, the development of digital
cellular technology for increased capacity and better performance became essential.
The second generation of cellular systems are digital. In addition to voice communication, these
systems provide email, voice mail, and paging services. Unfortunately, the great market potential for
cellular phones led to a proliferation of digital cellular standards. Today there are three different digital
cellular phone standards in the U.S. alone, and other standards in Europe and Japan, none of which are
compatible. The fact that different cities have different incompatible standards makes roaming throughout
the U.S. using one digital cellular phone impossible. Most cellular phones today are dual-mode: they
incorporate one of the digital standards along with the old analog standard, since only the analog standard
provides universal coverage throughout the U.S. More details on today’s digital cellular systems will be
given in Section 15.
Radio paging systems are another example of an extremely successful wireless data network, with 50
million subscribers in the U.S. alone. However, their popularity is starting to wane with the widespread
penetration and competitive cost of cellular telephone systems. Paging systems allow coverage over very
wide areas by simultaneously broadcasting the pager message at high power from multiple base stations
or satellites. These systems have been around for many years. Early radio paging systems sent analog 1-bit
messages signaling a user that someone was trying to reach him or her. These systems required callback
over the regular telephone system to obtain the phone number of the paging party. Recent advances
now allow a short digital message, including a phone number and brief text, to be sent to the pagee as
well. In paging systems most of the complexity is built into the transmitters, so that pager receivers
are small, lightweight, and have a long battery life. The network protocols are also very simple since
broadcasting a message over all base stations requires no routing or handoff. The spectral inefficiency
of these simultaneous broadcasts is compensated by limiting each message to be very short. Paging
systems continue to evolve to expand their capabilities beyond very low-rate one-way communication.
Current systems are attempting to implement “answer-back” capability, i.e. two-way communication.
This requires a major change in the pager design, since it must now transmit signals in addition to
receiving them, and the transmission distances can be quite large. Recently many of the major paging
companies have teamed up with the palmtop computer makers to incorporate paging functions into these
devices [2]. This development indicates that short messaging without additional functionality is no longer
competitive given other wireless communication options.
Commercial satellite communication systems are now emerging as another major component of the
wireless communications infrastructure. Satellite systems can provide broadcast services over very wide
areas, and are also necessary to fill the coverage gap between high-density user locations. Satellite mobile
communication systems follow the same basic principle as cellular systems, except that the cell base
stations are now satellites orbiting the earth. Satellite systems are typically characterized by the height
of the satellite orbit: low-earth orbit (LEOs at roughly 2,000 km altitude), medium-earth orbit (MEOs
at roughly 9,000 km altitude), or geosynchronous orbit (GEOs at roughly 40,000 km altitude). The
geosynchronous orbits are seen as stationary from the earth, whereas the satellites with other orbits have
their coverage area change over time. The disadvantage of high altitude orbits is that it takes a great
deal of power to reach the satellite, and the propagation delay is typically too large for delay-constrained
applications like voice. However, satellites at these orbits tend to have larger coverage areas, so fewer
satellites (and dollars) are necessary to provide wide-area or global coverage.
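The delay penalty of high orbits is easy to quantify: even at the speed of light, a signal takes an appreciable
fraction of a second to reach a GEO satellite and return. The short sketch below uses the approximate
altitudes given above; treating the path as straight up to the satellite is a simplifying assumption (real
slant paths are longer).

    # One-way and round-trip propagation delay for the orbit altitudes given above.
    SPEED_OF_LIGHT_KM_PER_S = 3.0e5
    altitudes_km = {"LEO": 2_000, "MEO": 9_000, "GEO": 40_000}

    for orbit, altitude in altitudes_km.items():
        one_way_ms = altitude / SPEED_OF_LIGHT_KM_PER_S * 1000
        print(f"{orbit}: ~{one_way_ms:.0f} ms one way, ~{2 * one_way_ms:.0f} ms round trip")

The roughly quarter-second round trip for a GEO link is what makes such orbits problematic for
delay-constrained applications like voice.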
The concept of using geosynchronous satellites for communications was first suggested by the science
fiction writer Arthur C. Clarke in 1945. However, the first deployed satellites, the Soviet Union’s Sputnik
in 1957 and the NASA/Bell Laboratories' Echo-1 in 1960, were not geosynchronous due to the difficulty
of lifting a satellite into such a high orbit. The first GEO satellite was launched by Hughes and NASA in
1963 and from then until recently GEOs dominated both commercial and government satellite systems.
The trend in current satellite systems is to use lower orbits so that lightweight handheld devices can
communicate with the satellite [3]. Inmarsat is the most well-known GEO satellite system today, but
most new systems use LEO orbits. These LEOs provide global coverage but the link rates remain low
due to power and bandwidth constraints. These systems allow calls any time and anywhere using a single
communications device. The services provided by satellite systems include voice, paging, and messaging
services, all at fairly low data rates [3, 4]. The LEO satellite systems that have been deployed are not
experiencing the growth they had anticipated, and one of the first systems (Iridium) was forced into
bankruptcy and went out of business.
A natural area for satellite systems is broadcast entertainment. Direct broadcast satellites operate in
the 12 GHz frequency band. These systems offer hundreds of TV channels and are major competitors to
cable. Satellite-delivered digital radio is an emerging application in the 2.3 GHz frequency band. These
systems offer digital audio broadcasts nationwide at near-CD quality. Digital audio broadcasting is also
quite popular in Europe.

Overview of Wireless Communications
Wireless communications is, by any measure, the fastest growing segment of the communications industry.
As such, it has captured the attention of the media and the imagination of the public. Cellular phones
have experienced exponential growth over the last decade, and this growth continues unabated worldwide,
with more than a billion worldwide cell phone users projected in the near future. Indeed, cellular phones
have become a critical business tool and part of everyday life in most developed countries, and are
rapidly supplanting antiquated wireline systems in many developing countries. In addition, wireless
local area networks are currently poised to supplement or replace wired networks in many businesses
and campuses. Many new applications, including wireless sensor networks, automated highways and
factories, smart homes and appliances, and remote telemedicine, are emerging from research ideas to
concrete systems. The explosive growth of wireless systems coupled with the proliferation of laptop and
palmtop computers indicate a bright future for wireless networks, both as stand-alone systems and as
part of the larger networking infrastructure. However, many technical challenges remain in designing
robust wireless networks that deliver the performance necessary to support emerging applications. In
this introductory chapter we will briefly review the history of wireless networks, from the smoke signals
of the Pre-industrial age to the cellular, satellite, and other wireless networks of today. We then discuss
the wireless vision in more detail, including the technical challenges that must be overcome to make this
vision a reality. We will also describe the current wireless systems in operation today as well as emerging
systems and standards. The huge gap between the performance of current systems and the vision for
future systems indicates that much research remains to be done to make the wireless vision a reality.
Signal Paths from Analog to Digital

Introduction

Designers of analog electronic control systems have continually faced the following obstacles in arriving at
a satisfactory design:
1. Instability and drift due to temperature variations.
2. Dynamic range of signals and nonlinearity when pressing the limits of the range.
3. Inaccuracies of computation when using analog quantities.
4. Achieving an adequate signal frequency range.
Today’s designers, however, have a significant alternative offered to them by the advances in integrated
circuit technology, especially low-power analog and digital circuits. The alternative new design technique
for analog systems is to sense the analog signal, convert it to digital signals, use the speed and accuracy of
digital circuits to do the computations, and convert the resultant digital output back to analog signals.
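As a rough sketch of that flow, the fragment below senses an analog value, converts it to a digital code,
performs the computation digitally, and converts the result back to an analog value. The 8-bit resolution,
5 V full scale, and the doubling "computation" are arbitrary assumptions chosen for illustration.

    def adc(voltage, full_scale=5.0, bits=8):
        # Analog-to-digital: map a voltage onto an n-bit code (assumed 8-bit, 5 V full scale).
        levels = 2 ** bits - 1
        clamped = max(0.0, min(voltage, full_scale))
        return round(clamped / full_scale * levels)

    def dac(code, full_scale=5.0, bits=8):
        # Digital-to-analog: map an n-bit code back onto a voltage.
        return code / (2 ** bits - 1) * full_scale

    sensed = 1.7                              # sensed analog input, in volts
    code = adc(sensed)                        # analog -> digital
    processed = min(code * 2, 2 ** 8 - 1)     # the digital computation (here: doubling, clamped)
    output = dac(processed)                   # digital -> analog
    print(code, processed, round(output, 2))  # e.g. 87 174 3.41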
The new design technique requires that the electronic system designer interface between two distinct design
worlds: first, between analog and digital systems, and second, between the external human world and the
internal electronics world. Various functions are required to make these interfaces, from the human world
to the electronics world and back again and, in a similar fashion, from analog systems to digital systems
and back again. Analog and Digital Circuits for Control System Applications identifies the electronic functions
needed, describes how electronic circuits are designed and applied to implement them, and gives examples
of the use of these functions in systems.

A Refresher
Since the book deals with the electronic functions and circuits that interface or couple analog-to-digital
circuits and systems, or vice versa, a short review is provided so it is clearly understood what analog means
and what digital means.
Analog
Analog quantities vary continuously, and analog systems represent the analog information using electrical
signals that vary smoothly and continuously over a range. A good example of an analog system is the recording
thermometer shown in Figure 1-1. The actual equipment is shown in Figure 1-1a. An ink pen records the
temperature in degrees Fahrenheit (ºF) and plots it continuously against time on a special graph paper
attached to a drum as the drum rotates. The record of the temperature changes is shown in Figure 1-1b.
Note that the temperature changes smoothly and continuously. There are no abrupt steps or breaks in the data.
Another example is the automobile fuel gauge system shown in Figure 1-2. The electrical circuit consists
of a potentiometer, basically a resistor connected across a car battery from the positive terminal to the
negative terminal, which is grounded. The resistor has a variable tap that is rotated by a float riding on
the surface of the liquid inside the gas tank. A voltmeter reads the voltage from the variable tap to the
negative side of the battery (ground). The voltmeter indicates the amount of fuel in the gas tank; it
represents the fuel level in the tank. The greater the fuel level in the tank, the greater the voltage reading
on the voltmeter. The voltage is said to be an analog of the fuel level. An analog of the fuel level is said
to be a copy of the fuel level in another form—it is analogous to the original fuel level. The voltage (fuel
level) changes smoothly and continuously, so the system is an analog system, but it is also an analog
system because the system output voltage is a copy of the actual output parameter (fuel level) in another form.
Digital
Digital quantities vary in discrete levels. In most cases, the discrete levels are just two values—ON and
OFF. Digital systems carry information using combinations of ON-OFF electrical signals that are usually
in the form of codes that represent the information. The telegraph system is an example of a digital system.
The system shown in Figure 1-3 is a simplified version of the original telegraph system, but it will
demonstrate the principle and help to define a digital system. The electrical circuit (Figure 1-3a) is a
battery with a switch in the line at one end and a light bulb at the other. The person at the switch position
is remotely located from the person at the light bulb. The information is transmitted
from the person at the switch position to the person at the light bulb by coding the information to be sent
using the International Morse telegraph code.
Morse code uses short pulses (dots) and long pulses (dashes) of current to form the code for letters or
numbers as shown in Figure 1-3b. As shown in Figure 1-3c, combining the codes of dots and dashes for
the letters and numbers into words sends the information. The sender keeps the same shorter time interval
between letters but a longer time interval between words. This allows the receiver to identify that the code
sent is a character in a word or the end of a word itself. The T is one dash (one long current pulse). The H is
four short dots (four short current pulses). The R is a dot-dash-dot. And the two Es are a dot each. The two
states are ON and OFF—current or no current. The person at the light bulb position identifies the code by
watching the glow of the light bulb. In the original telegraph, this person listened to a buzzer or “sounder”
to identify the code.
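The dot-and-dash coding in Figure 1-3 is easy to mimic in a few lines. The table below covers only the
letters of THREE used in the example; it is a sketch of the idea, not a complete Morse code table.

    # Dots and dashes for the letters used in the THREE example above.
    MORSE = {"T": "-", "H": "....", "R": ".-.", "E": "."}

    def encode(word):
        # A space marks the shorter gap between letters; a word gap would be longer.
        return " ".join(MORSE[letter] for letter in word)

    print(encode("THREE"))  # -> "- .... .-. . ."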
Coded patterns of changes from one state to another as time passes carry the information. At any instant of
time the signal is either one of two levels. The variations in the signal are always between set discrete levels,
but, in addition, a very important component of digital systems is the timing of signals. In many cases, digital
signals, either at discrete levels, or changing between discrete levels, must occur precisely at the proper
time or the digital system will not work. Timing is maintained in digital systems by circuits called system
clocks. This is what identifies a digital signal and the information being processed in a digital system.
Binary
The two levels—ON and OFF—are most commonly identified as 1 (one) and 0 (zero) in modern binary
digital systems, and the 1 and 0 are called binary digits or bits for short. Since the system is binary (two
levels), the maximum number of code combinations, 2^n, depends on the number of bits, n, used to represent
the information. For example, if numbers were the only quantities represented, then the codes would look
like Figure 1-4, when using a 4-bit code to represent 16 quantities. To represent larger quantities more
bits are added. For example, a 16-bit code can represent 65,536 quantities. The first bit at the right edge
of the code is called the least significant bit (LSB). The left-most bit is called the most significant bit (MSB).
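The 2^n growth in code combinations is easy to verify; the short snippet below reproduces the 4-bit and
16-bit figures quoted above.

    # Number of code combinations available with n bits.
    for n in (4, 8, 16):
        print(f"{n:2d} bits -> {2 ** n:,} combinations")
    # 4 bits -> 16, 8 bits -> 256, 16 bits -> 65,536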
Binary Numerical Quantities
Our normal numbering system is a decimal system. Figure 1-5 is a summary showing the characteristics
of a decimal and a binary numbering system. Note that each system in Figure 1-5 has specific digit positions
with specific assigned values to each position. Only eight digits are shown for each system in Figure 1-5.
Note that in each system the LSB is either 10^0 in the decimal system or 2^0 in the binary system. Each
of these has a value of one since any number to the zero power is equal to one. The following examples
will help to solidify the characteristics of the two systems and the conversion between them.
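As one such example of converting between the two systems, the snippet below weights each binary digit
by its positional value 2^position; the 4-bit code 1011 is a hypothetical value chosen for illustration, not
one taken from Figure 1-5.

    # Convert the binary code 1011 to decimal by summing positional values, then back.
    bits = "1011"
    value = sum(int(b) * 2 ** position for position, b in enumerate(reversed(bits)))
    print(value)               # 8 + 0 + 2 + 1 = 11
    print(format(value, "b"))  # back to binary: 1011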