SIGNAL PROCESSING FOR WIRELESS COMMUNICATION SYSTEMS edited by
H. Vincent Poor Princeton University and Lang Tong Cornell University
Reprinted from a Special Issue of the
Journal of VLSI SIGNAL PROCESSING SYSTEMS for Signal, Image, and Video Technology Volume 30, Nos. 1-3 January-March, 2002
KLUWER ACADEMIC PUBLISHERS NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW
Journal of VLSI SIGNAL PROCESSING SYSTEMS for Signal, Image, and Video Technology
Volume 30, Nos. 1–3, January–March 2002

Special Triple Issue on Signal Processing for Wireless Communication Systems
Guest Editors: H. Vincent Poor and Lang Tong

Guest Editorial: Signal Processing for Wireless Communication Systems
H. Vincent Poor and Lang Tong ... 5

Systems, Networking, and Implementation Issues

Tradeoffs of Source Coding, Channel Coding and Spreading in Frequency Selective Rayleigh Fading Channels
Qinghua Zhao, Pamela Cosman and Laurence B. Milstein ... 7

VLSI Implementation of the Multistage Detector for Next Generation Wideband CDMA Receivers
Gang Xu, Sridhar Rajagopal, Joseph R. Cavallaro and Behnaam Aazhang ... 21

Modulation and Coding for Noncoherent Communications
Michael L. McCloud and Mahesh K. Varanasi ... 35

Multiple Antenna Enhancements for a High Rate CDMA Packet Data System
Howard Huang, Harish Viswanathan, Andrew Blanksby and Mohamed A. Haleem ... 55

Deterministic Time-Varying Packet Fair Queueing for Integrated Services Networks
Anastasios Stamoulis and Georgios B. Giannakis ... 71

Channel Estimation and Equalization

Monte Carlo Bayesian Signal Processing for Wireless Communications
Xiaodong Wang, Rong Chen and Jun S. Liu ... 89

Bounds on SIMO and MIMO Channel Estimation and Equalization with Side Information
Brian M. Sadler, Richard J. Kozick, Terrence Moore and Ananthram Swami ... 107

On Blind Timing Acquisition and Channel Estimation for Wideband Multiuser DS-CDMA Systems
Zhouyue Pi and Urbashi Mitra ... 127

Downlink Specific Linear Equalization for Frequency Selective CDMA Cellular Systems
Thomas P. Krauss, William J. Hillery and Michael D. Zoltowski ... 143

Multipath Delay Estimation for Frequency Hopping Systems
Prashanth Hande, Lang Tong and Ananthram Swami ... 163

Multiuser Detection

Greedy Detection
Amina AlRustamani, Branimir Vojcic and Andrej Stefanov ... 179

A New Class of Efficient Block-Iterative Interference Cancellation Techniques for Digital Communication Receivers
Albert M. Chan and Gregory W. Wornell ... 197

Multiuser Detection for Out-of-Cell Cochannel Interference Mitigation in the IS-95 Downlink
D. Richard Brown III, H. Vincent Poor, Sergio Verdú and C. Richard Johnson, Jr. ... 217

COD: Diversity-Adaptive Subspace Processing for Multipath Separation and Signal Recovery
Xinying Zhang and S.-Y. Kung ... 235

Multistage Nonlinear Blind Interference Cancellation for DS-CDMA Systems
Dragan Samardzija, Narayan Mandayam and Ivan Seskar ... 257

Adaptive Interference Suppression for the Downlink of a Direct Sequence CDMA System with Long Spreading Sequences
Colin D. Frank, Eugene Visotsky and Upamanyu Madhow ... 273

Constrained Adaptive Linear Multiuser Detection Schemes
George V. Moustakides ... 293
eBook ISBN: 0-306-47322-4
Print ISBN: 0-7923-7691-9

©2002 Kluwer Academic Publishers New York, Boston, Dordrecht, London, Moscow
Print ©2002 Kluwer Academic Publishers Dordrecht

All rights reserved. No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.

Created in the United States of America

Visit Kluwer Online at: http://kluweronline.com
and Kluwer's eBookstore at: http://ebooks.kluweronline.com
Journal of VLSI Signal Processing 30, 5–6, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Guest Editorial: Signal Processing for Wireless Communication Systems

Needless to say, wireless communications is one of the most active areas of technology development today. With the emergence of many new services, and with very high growth rates in existing services, the demand for new wireless capacity is ever-growing. Unlike wireline communications, in which capacity can be increased by adding infrastructure such as new optical fiber, wireless capacity increases have traditionally required increases in either the radio bandwidth or power, both of which are severely limited in most wireless systems. Fortunately, thanks to Moore's Law type growth, signal processing capability is one resource that is sufficiently plentiful and increasingly able to provide significant increases in capacity. Consequently, the research community has turned to advanced signal processing as a means of enabling substantial capacity gains in wireless systems. There has been an explosion of research in this area over the past five to ten years. The motivation for this special issue is to chronicle these developments, by presenting a broad and representative array of cutting-edge results in this very critical area.

The papers in this issue are divided into three main groups. In the first group there are five papers addressing systems, networking, and implementation issues involved in applying advanced signal processing to wireless systems. The second group contains a further five papers addressing issues in estimation and equalization of wireless channels. And, finally, the third group contains seven papers in the important area of multiuser detection, which addresses the problem of effective receiver signal processing for multiple-access systems. These latter papers are further grouped into two subsets; the first three papers deal with advanced iterative methods for multiuser detection, and the final four papers develop methods for adaptation of multiuser detection.
As a group, these contributions provide the reader with an excellent sampling of most of the principal areas of current activity in signal processing for wireless systems. All of these areas are of increasing importance in practical wireless systems, with many already finding their way into practical systems under development. It is expected that these and related techniques will play essential roles in providing remarkable capacity gains for emerging wireless applications.
H. Vincent Poor received the Ph.D. degree in electrical engineering and computer science in 1977 from Princeton University, where he is currently Professor of Electrical Engineering. He is also affiliated with Princeton’s Department of Operations Research and Financial Engineering, and with its Program in Applied and Computational Mathematics. From 1977 until he joined the Princeton faculty in 1990, he was a faculty member at the University of Illinois at Urbana-Champaign. He has also held visiting and summer appointments at several universities and research organizations in the United States, Britain, and Australia. His research interests are in the area of statistical signal processing and its applications, primarily in wireless multiple-access communication networks. His publications in this area include the book, Wireless Communications: Signal Processing Perspectives, with Gregory Wornell. Dr. Poor is a member of the U.S. National Academy of Engineering, and is a Fellow of the Acoustical Society of America, the American Association for the Advancement of Science, the IEEE, the Institute of Mathematical Statistics, and the Optical Society of America. He has been involved in a number of IEEE activities, including having served as
President of the IEEE Information Theory Society and as a member of the IEEE Board of Directors. Among his other honors are the Terman Award of the American Society for Engineering Education, the Distinguished Member Award from the IEEE Control Systems Society, the IEEE Third Millennium Medal, the IEEE Graduate Teaching Award, and the IEEE Communications Society and Information Theory Society Joint Paper Award.
[email protected]
Lang Tong received the B.E. degree from Tsinghua University, Beijing, China, in 1985, and the M.S. and Ph.D. degrees in electrical engineering in 1987 and 1990, respectively, from the University of Notre Dame, Notre Dame, Indiana. He was a Postdoctoral Research Affiliate at the Information Systems Laboratory, Stanford University, in 1991. Currently, he is an Associate Professor in the School of Electrical and Computer Engineering, Cornell University, Ithaca, New York. Dr. Tong received the Young Investigator Award from the Office of Naval Research in 1996, and the Outstanding Young Author Award from the IEEE Circuits and Systems Society. His areas of interest include statistical signal processing, adaptive receiver design for communication systems, signal processing for communication networks, and information theory.
[email protected] http://www.ee.cornell.edu/~ltong
Journal of VLSI Signal Processing 30, 7–20, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Tradeoffs of Source Coding, Channel Coding and Spreading in Frequency Selective Rayleigh Fading Channels
QINGHUA ZHAO, PAMELA COSMAN AND LAURENCE B. MILSTEIN
Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0407, USA
Received August 31, 2000; Revised June 26, 2001
Abstract. This paper investigates the tradeoffs of source coding, channel coding and spreading in CDMA systems. We consider a system consisting of an image source coder, a convolutional channel coder, an interleaver, and a direct sequence spreading module. With different allocations of bandwidth to source coding, channel coding and spreading, the system is analyzed over a frequency selective Rayleigh fading channel. The performance of the system is evaluated using the cumulative distribution function of peak signal-to-noise ratio. Tradeoffs of different components of the system are determined through simulations. We show that, for a given bandwidth, an optimal allocation of that bandwidth can be found. Tradeoffs among the parameters allow us to tune the system performance to specific requirements. Keywords: bandwidth allocation, direct-sequence CDMA, frequency selective Rayleigh fading, image transmission over wireless channels, multiuser system, channel estimation
1. Introduction
Source coding, channel coding and spread spectrum are the three main components in a CDMA communication system. A number of studies have been performed on the joint design of source and channel coding algorithms to yield better system throughput (e.g., [1–3]). There also exists a body of research on the tradeoffs between channel coding and CDMA (e.g., [4–6]). In this work, we investigate the interrelationship among all three components. Bandwidth is the major resource shared among the three components. Allocating more bandwidth to source coding allows more information from the source to be transmitted, but reduces the bandwidth available for both forward error correction (FEC) and spreading. For different compression methods and rates, the bit stream coming out of the source encoder is more or less sensitive to different types of error patterns. FEC and spreading protect the transmitted bits from noise and interference. Depending on the channel conditions
and the characteristics of the source coded bit stream, the system performs better with either more FEC or more spreading. Let $R_s$, $R_c$, and $M$ denote the source code rate (in bits per pixel, bpp), the channel code rate, and the processing gain, respectively. For a given bandwidth constraint and transmission time, our goal is to find the optimal set $(R_s, R_c, M)$ under the constraint

$U \cdot R_s \cdot M / R_c = C,$  (1)

where $U$ is the number of pixels of the original image and $C$ is a constant.

The paper is organized as follows. Section 2 introduces the source coding and channel coding. In Section 3, the bit error performance of the system is analyzed for a frequency selective Rayleigh fading channel; theoretical and simulation results are compared. Some representative results of tradeoffs among all three components are given in Section 4, and the conclusions are given in Section 5.
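The constraint above defines the finite search space of candidate allocations. As a minimal sketch (assuming the chip-budget takes the product form $U \cdot R_s \cdot M / R_c \le C$; the function name, the candidate grids, and the budget value are all illustrative, not from the paper), the feasible triples can be enumerated directly:

```python
# Sketch: enumerate candidate bandwidth allocations (R_s, R_c, M) that fit a
# fixed chip budget.  The product form of the constraint and the grids below
# are illustrative assumptions.

def feasible_allocations(U, C, source_rates, code_rates, gains):
    """Return all (R_s, R_c, M) triples whose total chip count fits the budget C."""
    out = []
    for rs in source_rates:          # source rate in bits per pixel
        for rc in code_rates:        # channel code rate
            for m in gains:          # processing gain (chips per coded bit)
                chips = U * rs * m / rc
                if chips <= C:
                    out.append((rs, rc, m))
    return out

# Example: a 512x512 image and a hypothetical chip budget.
triples = feasible_allocations(U=512 * 512, C=6e6,
                               source_rates=[0.189, 0.25, 0.5],
                               code_rates=[0.30, 0.46, 0.80],
                               gains=[30, 36, 48, 72])
```

Each feasible triple would then be evaluated by the performance measure discussed in Section 4.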
2. Source Coding and Channel Coding
The system is shown in Fig. 1. In the following sections, we discuss each component in detail.

2.1. Source Coding

The source images are encoded using a lossy compression algorithm called Set Partitioning In Hierarchical Trees (SPIHT [7]). The encoded bit stream is progressive, i.e., bits which come first can be used to reconstruct a low quality version of the source image, and bits which come later can be decoded to produce successively higher quality versions. The SPIHT algorithm has excellent compression performance; however, it is very sensitive to errors. An error in one bit may lead to complete loss of synchronization in the source decoder, in which case attempting to decode the subsequent bits would cause the quality of the decoded image to deteriorate. Also, there is a small amount of image header information for the coded source bit stream (59 bits in most cases). This number is very small compared to the bit budget for almost all transmission rates of interest, so in all the analyses and simulations presented below, the header is assumed to be error-free.

2.2. Channel Coding
In Fig. 2 [8], source information bits are grouped into blocks of size N. A 16-bit CRC (Cyclic Redundancy
Code) is added to each block. Then the block is convolutionally encoded using a Rate-Compatible Punctured Convolutional (RCPC) code [9]. At the receiver, the list-based Viterbi algorithm is used to find the best candidate in the trellis for the current block. Then the CRC detects whether there is an error. If there is an error, the second best candidate is found and the CRC is again checked, and so on. After checking the list of paths a predetermined number of times, if the CRC check still declares an error, the source decoder discards this block and all subsequent blocks. The image is then reconstructed from the previously received blocks.
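The receiver-side control flow just described (rank candidates, screen each with the CRC, and on total failure drop the block and everything after it) can be sketched as follows. This is a toy illustration, not the paper's implementation: the CRC polynomial is an assumed CRC-16-CCITT, and the ranked candidate lists stand in for the output of a list-based Viterbi decoder over an RCPC trellis.

```python
# Sketch of the CRC-screened list decoding described above.  The CRC polynomial
# and the "candidate list" abstraction are illustrative assumptions.

CRC_POLY = 0x11021  # CRC-16-CCITT generator (assumption)


def crc16(bits):
    """Bitwise CRC-16 over a list of 0/1 values; returns 16 CRC bits."""
    reg = 0
    for b in bits:
        reg ^= b << 15
        reg <<= 1
        if reg & 0x10000:
            reg ^= CRC_POLY
    return [(reg >> i) & 1 for i in range(15, -1, -1)]


def add_crc(block):
    """Transmitter side: append the 16-bit CRC to an information block."""
    return block + crc16(block)


def screen_blocks(received_candidates, list_depth=4):
    """received_candidates[i] is a ranked candidate list for block i.

    Keep a block if any of its top list_depth candidates passes the CRC;
    on total failure, discard that block and all subsequent blocks.
    """
    kept = []
    for candidates in received_candidates:
        for cand in candidates[:list_depth]:
            data, tail = cand[:-16], cand[-16:]
            if crc16(data) == tail:
                kept.append(data)
                break
        else:
            break  # every candidate failed the CRC: drop the rest of the stream
    return kept
```

The `break` on total CRC failure mirrors the progressive property of the SPIHT stream: once synchronization is lost, later blocks are useless, so the image is reconstructed from the blocks kept so far.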
3. Direct Sequence CDMA

3.1. Signal and Channel Model

The coded data stream is spread, using direct sequence with a long spreading code, by a factor of M (the processing gain). Then the signal is transmitted using BPSK modulation. Assume there are K simultaneously active users in the system. The signature sequences of different users have a common chip rate of $1/T_c$, where $T = M T_c$ and 1/T is the data bit rate (in bits per second). The signature sequence waveform of the kth user is formed from the corresponding sequence elements, where $\psi(t)$ is the chip pulse shape. For simplicity, a square-wave pulse is chosen, so that $\psi(t) = 1$ for $0 \le t < T_c$ and zero elsewhere. The data signal of the kth user may be written similarly as a sum of data pulses. Therefore, the transmitted signal for the kth user is the carrier-modulated product of its data and signature waveforms, where A is the magnitude of the transmitted signal, assumed to be the same for all users, $f_c$ is the common carrier frequency, and $\phi_k$ is the phase of the kth user. Assuming asynchronous operation, the delay of user k relative to the reference user (user 0) is $\tau_k$, k = 1, ..., K − 1. The composite signal at the input to the channel is the sum of the K users' signals, where the phases are independent identically distributed (iid) random variables, uniformly distributed in $[0, 2\pi)$, and the delays $\tau_k$ are iid random variables, uniformly distributed in [0, T).

A tapped delay line is used to model the frequency selective Rayleigh fading channel. The signal at the output of the channel is the sum of delayed, faded replicas of the composite signal plus noise, where the noise is complex Gaussian with two-sided power spectral density $N_0/2$, L is the number of resolvable multipaths, and the complex gain on the lth path of the kth user represents the fading, uncorrelated for different k and l, but correlated over time t (for convenience, we assume the fading is constant during each symbol duration). We assume all users are operating in a similar environment with a flat Multipath Intensity Profile (MIP), i.e., all path gains are identically distributed, with Rayleigh-distributed magnitude and uniformly distributed phase. For simplicity, we normalize the path powers.

3.2. RAKE Receiver and Trellis Structure

The RAKE receiver shown in Fig. 3 is used to combine the resolvable multipaths. Every T seconds the RAKE output is sampled and fed into a soft decision decoder. For the ith data bit of the reference user, the test statistic on the lth path of the RAKE is given by (9).
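A minimal single-user, baseband sketch of the pieces described above can make the structure concrete: direct-sequence spreading by a factor M, an L-path channel with known complex gains, and a RAKE receiver that despreads each resolvable path and maximal-ratio combines. Chip-spaced integer delays, perfect channel knowledge, and the specific numbers (M = 32, noise level 0.1) are simplifying assumptions, not the paper's parameters.

```python
# Sketch: spreading, multipath, and a maximal-ratio-combining RAKE receiver.
# Single user, chip-spaced delays, perfect channel knowledge (assumptions).
import numpy as np

rng = np.random.default_rng(1)
M, L, n_bits = 32, 3, 200
bits = rng.integers(0, 2, n_bits) * 2 - 1            # BPSK data, +/-1
code = rng.integers(0, 2, M * n_bits) * 2 - 1        # "long" spreading code
tx = np.repeat(bits, M) * code                       # spread to chip rate

delays = [0, 1, 2]                                   # chip-spaced path delays
gains = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
rx = np.zeros(len(tx) + max(delays), dtype=complex)
for d, g in zip(delays, gains):
    rx[d:d + len(tx)] += g * tx                      # multipath superposition
rx += 0.1 * (rng.normal(size=rx.shape) + 1j * rng.normal(size=rx.shape))

# RAKE: despread on each finger, then combine with the conjugate gains (MRC).
stat = np.zeros(n_bits, dtype=complex)
for d, g in zip(delays, gains):
    fingers = rx[d:d + M * n_bits].reshape(n_bits, M)
    stat += np.conj(g) * (fingers * code.reshape(n_bits, M)).sum(axis=1)
decisions = np.where(stat.real >= 0, 1, -1)
```

In the paper the combined statistic feeds a soft-decision trellis decoder rather than the hard slicer used here; the hard decision is only to close the loop in this toy example.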
The interference terms in (9) capture, on the lth path of the ith bit of the reference user, the contribution from the sth path of the kth user. In (9), the first term on the right hand side is the signal component, and the last three terms correspond to self-interference, multi-access interference, and noise. As the number of users grows large, the multi-access interference is asymptotically Gaussian by the central-limit theorem. Since the self-interference term becomes negligible when compared to the multi-access interference, we approximate the final test statistic as the signal component plus a single disturbance that is zero-mean complex Gaussian, with variance independent of both i and l. For perfect interleaving, the disturbance terms are not only independent for different values of l, but also independent for different values of i. With perfect channel estimation, we can condition on the path gains and use the same technique as in [6] to determine performance.

The decoded data for the 0th user is determined by finding the path in the trellis which minimizes the metric given by (14), where the subscript refers to a particular path in the trellis and the corresponding symbols are the ith code symbols along that path. We see that the decision rule (14) can be implemented by a maximal-ratio combining RAKE receiver followed by an unweighted trellis decoder.

3.3. Bit Error Performance

For convolutional codes, the union bound on the bit error rate (BER) is given by (15), where the weights form the distance spectrum, $d_{free}$ is the free distance of the code, and $P_2(d)$ is the pairwise error probability of two sequences at distance d (assume they differ in the first d bits). Using the metric of (14), we have (16).
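Numerically, a truncated union bound of the form (15) is just a weighted sum of pairwise error probabilities over the distance spectrum. The sketch below uses a hypothetical spectrum and an AWGN-style $P_2(d) = Q(\sqrt{2 d E_b/N_0})$ as a stand-in; the paper's $P_2(d)$ for the fading channel comes from (17), and real RCPC spectra are tabulated in [9].

```python
# Sketch: evaluating a truncated union bound on the coded BER.
# The spectrum and the AWGN-style pairwise error probability are placeholders.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(spectrum, ebno, n_terms=6):
    """spectrum: {d: c_d} information-error weights, from d_free upward."""
    dists = sorted(spectrum)[:n_terms]   # truncate the infinite sum, as in the text
    return sum(spectrum[d] * q_func(math.sqrt(2.0 * d * ebno)) for d in dists)

# Hypothetical distance spectrum for a short code with d_free = 5.
spec = {5: 1, 6: 4, 7: 12, 8: 32, 9: 80, 10: 192}
ber_bound = union_bound_ber(spec, ebno=10 ** (4 / 10))  # 4 dB
```

Truncating to the first few distance terms is exactly the approximation discussed below in connection with Fig. 4.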
To obtain $P_2(d)$, note from [11] and [10] that (16) can be written in the form (17). With uncorrelated channel fading, the above is a special case of [10, p. 882], in which the relevant quantities are pairs of correlated complex-valued Gaussian random variables, and the dL pairs are statistically independent and identically distributed. From (9), (11), and (12), we can obtain the second (central) moments, as given in (18).

Substituting (18) into (15), we take the first term of the sum and the full truncated sum to lower bound and upper bound the decoded BER, respectively. The uncoded BER can also be evaluated using the same expressions. Figure 4 shows the bit error rates
for an uncoded bit stream as well as for the coded bit stream using the convolutional code from [9, Table II(b), code rate 8/9], where $E_c/N_0$ is the ratio of energy-per-coded-bit to noise power spectral density. In the figure, it is seen that most of the simulation results are higher than the upper bound. This occurs because we have not considered self-interference in the analysis, and because we have truncated the infinite sum of (15) to 6 terms [9]. When the fading in the channel is correlated, the above analysis fails for the coded bit error rate, so we use simulation to get the desired results. The Jakes model [12, 13] is used to generate time-correlated Rayleigh fading parameters for the L independent paths of each user.
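A compact sum-of-sinusoids generator in the spirit of the Jakes model [12, 13] can be sketched as below; the oscillator count, arrival-angle spacing, and independent random phases for the in-phase and quadrature rails loosely follow the Dent et al. variant [13] and are assumptions rather than the paper's exact generator.

```python
# Sketch: time-correlated Rayleigh fading via a sum of sinusoids (Jakes-style).
# Oscillator count and phase randomization are illustrative assumptions.
import numpy as np

def jakes_fading(n_samples, fd_ts, n_osc=16, seed=0):
    """Complex fading gain sequence with normalized Doppler fd_ts = f_d * T_s."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples)
    n = np.arange(1, n_osc + 1)
    theta = np.pi * (n - 0.5) / (2 * n_osc)          # arrival angles in (0, pi/2)
    phases = rng.uniform(0, 2 * np.pi, (2, n_osc))   # random phases, I and Q rails
    doppler = 2 * np.pi * fd_ts * np.cos(theta)      # per-oscillator Doppler shifts
    i = np.cos(np.outer(t, doppler) + phases[0]).sum(axis=1)
    q = np.cos(np.outer(t, doppler) + phases[1]).sum(axis=1)
    return (i + 1j * q) / np.sqrt(n_osc)             # unit average power

g = jakes_fading(10000, fd_ts=0.01)
```

Each of the L paths of each user would use an independent instance (different seed), giving gains that are Rayleigh in magnitude, roughly unit power, and strongly correlated from symbol to symbol at small normalized Doppler.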
4. Results
Equation (1) defines a 3-dimensional surface on which every point corresponds to a possible bandwidth allocation that the communication system could, in theory, use. We wish to find the point on the surface which has the best performance, which we denote by F. The choice of a good set of system parameters depends on the performance measure and the channel conditions. For a given system, both the fades and the noise in the channel are random processes; therefore, the output from the source decoder is not the same for different trials, and we measure the performance of the system by looking at the output of many independent trials. The Peak Signal-to-Noise Ratio (PSNR) is defined as PSNR = 10 log10(255² / MSE), where MSE = E[(received image − original image)²] and 255² is the peak image energy for 8-bit-per-pixel grey scale images.

The empirical cumulative distribution function (CDF) of the PSNR of the decoded images incorporates the randomness of the channel by showing the percentage of the received images which have a quality less than a certain value. For each possible set of system parameters, we can generate the CDF curve corresponding to that set. In this work, the performance of the system, F, is taken to be some summary statistic of the CDF curve, such as the area above the curve [14].

Our criterion for determining when the CDF curve has converged is as follows. We run an initial number J of random simulations of the channel, where J is at least 500, and we generate the CDF curve and compute our summary statistic from it. We then run another J random simulations of the channel, and compute the CDF curve and its summary statistic from the 2J trials. If the new statistic is within 1% of the first one, we judge that the curve has converged, and use that result. In most cases, no further trials were required. If the difference exceeded 1%, we ran a last group of J random trials and combined the results again; in our experiments, the difference between the CDF metric and the previous one at this point was always less than 1%, and no further trials were required. For all plots, we ended up using between 1000 and 4000 random realizations of the channel.

Figure 5 illustrates what the CDF curves could look like. Each curve is the CDF of many trials, and shows the performance of a given system under one set of parameters. When two curves do not cross (e.g., curves A and C, or curves B and C), the lower curve is superior because it always has a higher probability of achieving PSNR values above any given PSNR. When there are crossovers between two curves (e.g., curves A and B), one may be superior for one application but not for another. Comparison of the curves may then involve maximizing the area above the curve, perhaps with some weighting (e.g., all PSNRs less than a certain amount may be considered equally bad, and all PSNRs above a certain amount may be considered equally good). The application requirements can sometimes be summarized by saying that a given image quality must be present at least a specified fraction of the time; for example, at most 5% of the time can the decoded images have PSNR below 20 dB. Some curves may then be inadmissible. These issues are discussed in [14].

Finding the optimal point can of course be done by exhaustive search. For each 3-tuple under consideration, we simulate a certain number of
realizations of the lossy channel, and examine the decoded image quality for each realization to form the CDF curve. In any case where curves do not cross, we know the best performance corresponds to the lowest curve. When curves cross, we compute the final performance F as some summary statistic of this curve, and we find the best F.

We can separate the procedure of finding the optimal point into two steps:

Step 1. For every value of the source code rate, find the optimal (channel code rate, processing gain) pair under the fixed-bandwidth constraint.

Step 2. Find the optimal point from the resulting candidate sets.

This is still an exhaustive search. However, for a fixed value of the source code rate (and with all other parameters of the system fixed except for the channel code rate and M), the curves obtained by varying the code rate and M (under the constraint) do not cross (see discussion below). So the lowest curve is the best, and one does not need to compute the summary statistic F for Step 1. More substantial computational savings can be realized if the system satisfies the following property: for a fixed source code rate, F is a monotonic function of some parameter, such as BER or packet erasure rate, that can often be obtained from the code rate and M easily, by either theoretical analysis or relatively simple simulations. An example is provided below. In this case, Step 1 is significantly simplified.
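The Monte Carlo convergence test described above (run J trials, summarize the empirical CDF, and stop once a further batch of J trials moves the statistic by less than 1%) can be sketched as follows. The Gaussian-free stand-in for "one trial" below, and the choice of the mean PSNR as the summary statistic, are illustrative assumptions; the real per-trial step is a full simulate-and-decode run, far too heavy to sketch here.

```python
# Sketch: batch-wise Monte Carlo with a 1% convergence criterion on a
# summary statistic of the empirical PSNR CDF.  The trial function and the
# choice of summary statistic are placeholders.
import numpy as np

def psnr_from_mse(mse, peak=255.0):
    """PSNR in dB for an 8-bpp grey scale image."""
    return 10.0 * np.log10(peak ** 2 / mse)

def cdf_statistic(psnrs):
    """Summary statistic of the empirical CDF (here simply the mean PSNR)."""
    return float(np.mean(psnrs))

def run_until_converged(trial_fn, J=500, tol=0.01, max_batches=3):
    samples = [trial_fn() for _ in range(J)]
    stat = cdf_statistic(samples)
    for _ in range(max_batches - 1):
        samples += [trial_fn() for _ in range(J)]
        new_stat = cdf_statistic(samples)
        if abs(new_stat - stat) <= tol * abs(stat):
            return new_stat, len(samples)
        stat = new_stat
    return stat, len(samples)

rng = np.random.default_rng(7)
stat, n_trials = run_until_converged(lambda: psnr_from_mse(rng.uniform(30, 70)))
```

With a well-behaved trial distribution the second batch already lands within the 1% tolerance, matching the observation that most cases needed no third batch.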
4.1. Tradeoffs of the Channel Code Rate and M Under the Bandwidth Constraint
For all the comparisons, we keep the ratio of energy-per-source-bit to noise power spectral density constant. Figure 6 is a typical example which shows the tradeoff between channel code rate and processing gain using the CDF curves. Here the number of users is K = 10, L = 4, and the fading is correlated with a fixed normalized Doppler. The parameters labeled on each curve are the channel code rate and processing gain. We use a delay-constrained interleaver, so that the total delay is roughly fixed. For example, for a processing gain of 72, we use an interleaver of size 20 by 16 coded bits; for a processing gain of 36, we use an interleaver of size 40 by 16 coded bits. For each plot, since the source code rate is fixed, the best achievable image quality is the same for all curves. The curves do not cross, and it is easy to see how the system performance changes as the parameters change, and thus to decide which curve represents the best set of parameters. For example, in Fig. 6(b), for the sequence of decreasing channel code rates, approximately 13%, 64%, 84%, 91%, and 94%, respectively, of the decoded images have PSNR larger than 29 dB. In this scenario, the system improves when more bandwidth is allocated to the channel coding (the code rate decreases). The lowest curve (M = 48) is the best curve.

Note that for a fixed source code rate, when the bandwidth allocated to channel coding increases, the bandwidth allocated to spreading decreases. Thus there are two counterbalancing effects. Assume we have a set of error correcting codes where a lower code rate corresponds to a larger coding gain; at the same time, M decreases, and this causes both loss of some diversity enhancement and a decrease in interference suppression, both of which degrade the system performance. This degradation resulting from a small processing gain can be observed in Fig. 6(d): as the processing gain decreases from 36 to 30, even though we have a stronger error correction code, the performance of the system degrades.

From the simulation, we observed empirically that, given the channel and a fixed set of parameters L and K, along with a fixed interleaver delay constraint, for a given source code rate a lower BER always corresponded to a lower curve and thus a better system. In other words, the system performance F is a monotonic function of the BER. In this case, we can easily determine the best (code rate, processing gain) pair from the BER alone. BER values of all the curves from Fig. 6 are listed in Table 1; they yield the same optimal sets of parameters as do the CDF curves. Note that to get an accurate CDF curve, we normally need 10 to 40 times more trials in the simulation than what is needed for an accurate BER estimate.
In the case of perfect interleaving, we can use the theoretical BER bounds derived in Section 3.3, and further reduce the amount of work by ignoring those pairs whose lower bounds lie above the upper bounds of any other pair.
4.2. Tradeoffs of the Source Code Rate, the Channel Code Rate, and M
After we find the optimal pair for each source code rate, we plot the CDF curves for each 3-tuple. Figure 7 shows the best curves from Fig. 6, with each curve labeled by its 3-tuple of parameters. For the top curve, we see that there is a higher probability that the output image has a low
PSNR (only 77% of the decoded images have PSNR above 29 dB). But since more bandwidth is allocated to source coding, the best achievable PSNR is larger than that of all the other curves (the right end of the
curve reaches a PSNR of 32.59 dB). In contrast, for the lowest curve, there is a higher probability of achieving PSNR above 29 dB (96%). But with less bandwidth allocated to source coding, the best PSNR achievable is limited to 29.36 dB, lower than the corresponding values of the other curves. To give an idea of the resulting visual quality, Fig. 8 shows the decoded images at 29.36 dB and 32.59 dB. The image at 29.36 dB is significantly blurrier than the one at 32.59 dB. These images correspond to the maximum achievable quality of the corresponding two systems of Fig. 6, respectively. In this case, where there are crossovers between the curves, the criterion to choose the best curve depends
on the application requirements. One way to evaluate the curves is to compute F as the unweighted area under the corresponding MSE CDF curves, as in [14]. According to this criterion, (0.189, 0.30, 36) is considered optimal among the set of curves.

4.3. Tradeoffs Related to Channel Estimation
The results above all corresponded to perfect estimation of the channel gains and phases. To incorporate
the effects of imperfect channel state estimation, we employ the block diagram of Fig. 9, which shows an estimation method appropriate for BPSK signals [10, p. 803]. The figure shows only the estimation technique on one tap. We use an integrate-and-dump for the low pass filter, operating over the interval (t − BT, t), where B is called the estimation length. Figure 10 shows the BER performance versus estimation length B for a fixed code rate and M. We assume a fixed carrier frequency and a
data rate of 29 kbits/sec; the two plots correspond to mobile speeds of 30 mph and 70 mph, respectively, with parameters M = 72, K = 4, and L = 4. Here we use the same delay constraint for the interleaver as in Section 4.1: interleaver size 20 by 16 coded channel bits for processing gain 72. From the figure, we see that estimation lengths of around 40 and 20, respectively, give the best BER performance for these two cases. When B is too small, the estimate is degraded by the noise and interference; when B is too large, the channel changes too rapidly for the estimates to be useful.

Figure 11 shows the tradeoffs when channel estimation is employed. The optimal estimation length is used, and the parameters beside the curves are the channel code rate and processing gain. The interleaver delay constraint is the same as in Section 4.1. Note that in this plot, as more bandwidth is allocated to channel coding (the code rate decreases), the system first improves when the code rate decreases from 0.80 to 0.60 to 0.46, and then deteriorates when additional bandwidth is allocated to the coding (the code rate decreases to 0.36 and then to 0.30). We also observe that F is not a monotonic function of BER for a fixed source code rate when channel estimation is employed (for example, the BERs of the curves in Fig. 11, read from top to bottom, do not order the curves by performance). Therefore, we cannot exploit the computational savings referred to above in determining the best triple.
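The per-finger estimator of Fig. 9 amounts to removing the BPSK modulation from the despread output and smoothing the result with a length-B integrate-and-dump. The sketch below illustrates exactly that tradeoff in B; known data (decision-directed in practice), symbol-spaced samples, and the specific drift and noise levels are simplifying assumptions.

```python
# Sketch: length-B moving-average (integrate-and-dump) channel estimation on
# one RAKE finger.  Known data and symbol-spaced samples are assumptions.
import numpy as np

def moving_average_estimate(despread, bits, B):
    """despread[i] ~ g[i] * bits[i] + noise; average B modulation-free samples."""
    z = despread * bits                       # remove the BPSK modulation
    kernel = np.ones(B) / B
    # causal average over the last B symbols, mirroring integration over (t - BT, t)
    return np.convolve(z, kernel)[: len(z)]

rng = np.random.default_rng(3)
n, B = 2000, 20
bits = rng.integers(0, 2, n) * 2 - 1
g = np.exp(1j * 0.002 * np.arange(n))         # slowly drifting fading gain
despread = g * bits + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
g_hat = moving_average_estimate(despread, bits, B)
```

Increasing B averages down the noise by roughly 1/B but introduces lag against the fading; the best B balances the two, which is the behavior seen in Fig. 10.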
4.4. Effects of Interleaving
The source decoding algorithm, the channel decoding algorithm and the deinterleaver might cause significant delays in the system, especially for time critical applications such as voice and video transmission. Here we discuss the effects of the interleaver. Generally, a larger deinterleaver will scatter correlated errors further apart. However, this does not always benefit the system, especially when the system performance depends more on packet erasure rate than on bit error rate (recall that the decoded image is reconstructed from the bit stream up to the first lost packet). Figure 12 shows the system performance versus interleaver size under different channel conditions. The channel coding rate is 0.80, there are K = 6 active users, the processing gain is 128, and the energy-per-bit to noise power spectral density ratio is 4 dB. We see that a larger interleaver size does not necessarily lead to better performance. With dispersed errors, more packets are affected; even though the dispersion of errors results in fewer errors per packet, the number of those bit errors may still be large enough to overwhelm the decoder. In such a case, the dispersion of errors causes a larger number of packets to experience decoding failure. When the deinterleaver size gets sufficiently large, the errors are, in turn, sufficiently dispersed that they can be corrected, and the final quality improves. For the curves shown in Fig. 12, the deinterleaver size has to be about 120 by 120 coded bits before the decoder functions efficiently.
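The mechanism discussed above can be seen in a small block-interleaver sketch: writing by rows and reading by columns spreads a burst of channel errors across many packets, which helps only if each packet's decoder can then handle its (smaller) share of errors. The dimensions below echo the 20-by-16 interleaver mentioned earlier; the equating of one row with one packet is an illustrative assumption.

```python
# Sketch: a rows-by-cols block interleaver and the dispersion of a burst.
import numpy as np

def interleave(bits, rows, cols):
    """Write row-wise, read column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

def deinterleave(bits, rows, cols):
    """Inverse of interleave: write column-wise, read row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

rows, cols = 20, 16                      # e.g. a 20-by-16 coded-bit interleaver
data = np.arange(rows * cols)
tx = interleave(data, rows, cols)
tx_burst = tx.copy()
tx_burst[:8] = -1                        # a burst of 8 corrupted symbols
rx = deinterleave(tx_burst, rows, cols)

# After deinterleaving, the 8-symbol burst lands in 8 different rows (packets).
hit_rows = {int(i) // cols for i in np.flatnonzero(rx == -1)}
```

An 8-symbol burst that would have wiped out half of one packet now puts a single error into eight packets: a clear win if the code corrects one error per packet, and a clear loss if it corrects none, which is precisely the non-monotonic behavior of Fig. 12.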
5. Conclusions
In this paper, we investigate three-way tradeoffs among source coding, channel coding and spreading in CDMA systems. For a fixed bandwidth, the performance of the system is quantified by the CDF curves of the decoded images. A two-step method is employed to obtain the optimal bandwidth allocation represented by the CDF curves; under certain constraints, this two-step optimization significantly reduces the amount of computation. We show that allocating more bandwidth to source coding allows us to achieve a higher maximum image quality, but the probability of achieving this quality is smaller. On the other hand, allocating more bandwidth to channel coding and spreading decreases the number of source information bits transmitted and thus limits the best achievable image quality, but the probability of achieving this quality is higher. For a fixed source coding rate, allocating more bandwidth to channel coding gives us a stronger code; but since less bandwidth is left for processing gain, there is a loss of diversity enhancement and a decrease in interference suppression. In the case of imperfect channel estimation, a smaller processing gain can also lead to a poorer estimate and thus a degradation in system performance.
Acknowledgments

This research was partially sponsored by the Center for Wireless Communications of UCSD, and by the CoRe program of the State of California.
References

1. B.D. Pettijohn, K. Sayood, and M.W. Hoffman, "Joint Source/Channel Coding Using Arithmetic Codes," Proceedings DCC 2000, IEEE Data Compression Conference, March 2000, pp. 73–82.
2. G. Cheung and A. Zakhor, "Bit Allocation for Joint Source/Channel Coding of Scalable Video," IEEE Transactions on Image Processing, vol. 9, no. 3, 2000, pp. 340–356.
3. M. Zhao, A.A. Alatan, and A.N. Akansu, "A New Method for Optimal Rate Allocation for Progressive Image Transmission Over Noisy Channels," Proceedings DCC 2000, IEEE Data Compression Conference, March 2000, pp. 213–222.
4. D.J. Van Wyk, I.J. Oppermann, and L.P. Linde, "Performance Tradeoff Among Spreading, Coding and Multiple-Antenna Transmit Diversity for High Capacity Space-Time Coded DS/CDMA," Proceedings of the Conference on Military Communications (MILCOM 1999), Sept. 1999, vol. 1, pp. 393–397.
5. I. Oppermann and B. Vucetic, "Capacity of a Coded Direct Sequence Spread Spectrum System Over Fading Satellite Channel Using an Adaptive LMS-MMSE Receiver," IEICE Trans. Fundamentals, vol. E79-A, no. 12, 1996, pp. 2043–2049.
6. J.R. Foerster and L.B. Milstein, "Coding for a Coherent DS-CDMA System Employing an MMSE Receiver in a Rayleigh Fading Channel," IEEE Transactions on Communications, vol. 48, no. 6, 2000, pp. 1012–1021.
7. A. Said and W.A. Pearlman, "A New, Fast, and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, 1996, pp. 243–250.
8. P.G. Sherwood and K. Zeger, "Progressive Image Coding on Noisy Channels," IEEE Signal Processing Letters, vol. 4, no. 7, July 1997, pp. 189–191.
9. J. Hagenauer, "Rate-Compatible Punctured Convolutional Codes (RCPC Codes) and Their Applications," IEEE Transactions on Communications, vol. 36, April 1988, pp. 389–400.
10. J.G. Proakis, Digital Communications, New York: McGraw-Hill, 1995.
11. L.L. Chong, "The Effects of Channel Estimation Error on Wideband CDMA Systems with Path, Frequency, Time, and Space Diversity," Ph.D. Dissertation, University of California, San Diego, 2001.
12. W.C. Jakes, Microwave Mobile Communications, Piscataway, NJ: IEEE Press, 1993 (reissue of the 1974 edition).
13. P. Dent, G.E. Bottomley, and T. Croft, "Jakes' Model Revisited," Electronics Letters, vol. 29, no. 13, 1993, pp. 1162–1163.
14. P.C. Cosman, J.K. Rogers, P.G. Sherwood, and K. Zeger, "Combined Forward Error Control and Packetized Zerotree Wavelet Encoding for Transmission of Images Over Varying Channels," IEEE Transactions on Image Processing, June 2000, pp. 982–993.
Qinghua Zhao was born in Shaanxi, China, in 1976. She received the B.S. degree in electrical engineering from Xi'an JiaoTong University, Xi'an, China, in 1996, and the M.S. in electrical engineering from the University of California, San Diego (UCSD), in 2000. She is currently a Graduate Student Researcher at UCSD, working toward the Ph.D. degree. Her research interests are in the areas of communication theory, information theory, channel coding and spread spectrum. She is a student member of the IEEE.
[email protected]
Pamela Cosman obtained her B.S. with Honors in Electrical Engineering from the California Institute of Technology in 1987, and her M.S. and Ph.D. in Electrical Engineering from Stanford University in 1989 and 1993, respectively. She was an NSF postdoctoral fellow at Stanford University and a Visiting Professor at the University of Minnesota during 1993–1995. Since July of 1995, she has been on the faculty of the Department of Electrical and Computer Engineering at the University of California, San Diego, where she is currently an associate professor. Her research interests are in the areas of data compression and image processing. Dr. Cosman is the recipient of the ECE Departmental Graduate Teaching Award (1996), a Career Award from the National Science Foundation (1996–1999), and a Powell Faculty Fellowship (1997–1998). She is an associate editor of the IEEE Communications Letters, and was a guest editor of the June 2000 special issue of the IEEE Journal on Selected Areas in Communications on "Error-resilient image and video coding." She was the Technical Program Chair of the 1998 Information Theory Workshop in San Diego, and is a Senior Member of the IEEE, and a member of Tau Beta Pi and Sigma Xi. Her web page address is http://www.code.ucsd.edu/cosman/
[email protected]
Laurence B. Milstein received the B.E.E. degree from the City College of New York, New York, in 1964, and the M.S. and Ph.D. degrees in electrical engineering from the Polytechnic Institute of Brooklyn, Brooklyn, NY, in 1966 and 1968, respectively.
From 1968 to 1974, he was with the Space and Communication Group of Hughes Aircraft Company, and from 1974 to 1976, he was a Member of the Department of Electrical and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY. Since 1976, he has been with the Department of Electrical and Computer Engineering, University of California at San Diego (UCSD), La Jolla, where he is a Professor and former Department Chairman, working in the area of digital communication theory with special emphasis on spread-spectrum communication systems. He has also been a consultant to both government and industry in the areas of radar and communications. Dr. Milstein was an Associate Editor for Communication Theory for the IEEE TRANSACTIONS ON COMMUNICATIONS, an Associate Editor for Book Reviews for the IEEE TRANSACTIONS ON INFORMATION THEORY, an Associate Technical Editor for the IEEE COMMUNICATIONS MAGAZINE, and Editor-in-Chief of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. He was the Vice President for Technical Affairs in 1990 and 1991 of the IEEE Communications Society and has been a member of the Board of Governors of both the IEEE Communications Society and the IEEE Information Theory Society. He has been a member of the IEEE Fellows Selection Committee since 1996, and he currently is the Chair of that committee. He is also the Chair of ComSoc's Strategic Planning Committee. He is a recipient of the 1998 Military Communications Conference Long-Term Technical Achievement Award, an Academic Senate 1999 UCSD Distinguished Teaching Award, an IEEE Third Millennium Medal, and the 2000 IEEE Communications Society Armstrong Technical Achievement Award.
[email protected]
Journal of VLSI Signal Processing 30, 21–33, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
VLSI Implementation of the Multistage Detector for Next Generation Wideband CDMA Receivers

GANG XU, SRIDHAR RAJAGOPAL, JOSEPH R. CAVALLARO AND BEHNAAM AAZHANG
Department of Electrical and Computer Engineering, Rice University, 6100 Main St., Houston TX 77005, USA

Received September 6, 2000; Revised June 15, 2001
Abstract. The multistage detection algorithm has been proposed as an effective interference cancellation scheme for next generation Wideband Code Division Multiple Access (W-CDMA) base stations. In this paper, we propose a real-time VLSI implementation of this detection algorithm in the uplink system, where we have achieved both high performance in interference cancellation and computational efficiency. When interference cancellation converges, the difference of the detection vectors between two consecutive stages is mostly zero. Under the assumption of BPSK modulation, the differences between the bit estimates from consecutive stages are 0 and ±2. Bypassing the zero terms saves computations. Multiplication by ±2 can be easily implemented in hardware as arithmetic shifts. However, the convergence of the algorithm is dependent on the number of users, the interference and the signal to noise ratio and hence, the detection has a variable execution time. By using just two stages of the differencing detector, we achieve predictable execution time with performance equivalent to at least eight stages of the regular multistage detector. A VLSI implementation of the differencing multistage detector is built to demonstrate the computational savings and the real-time performance potential. The detector, handling up to eight users with 12-bit fixed point precision, was fabricated using a 1.2 µm CMOS technology and can process 190 Kbps/user for 8 users.

Keywords: CDMA, multiuser detection, multistage detector, interference cancellation, real-time implementation, fixed-point
1. Introduction

The fast growing cellular telephony industry provides higher capacity for a larger number of subscribers each year, which in turn requires complex signal processing techniques and sophisticated multiple access methods to meet these demands. Direct Sequence Code Division Multiple Access (DS-CDMA) has been recognized as one of the best multiple access schemes for wireless communication systems [1]. The Wideband CDMA system discussed in this paper [2] is based on the short code DS-CDMA scheme. We place particular emphasis on the uplink (from mobiles to the base station), where all the subscribers share a common channel (shown in Fig. 1). In such an environment, the only way to distinguish these users is to use orthogonal or nearly orthogonal codes (spreading sequences) to modulate the transmitted bits.

Any single desired user in the CDMA uplink system experiences direct interference from the other users in the same cell and neighboring cells. This effect is called Multiple Access Interference (MAI), which is the major limitation on capacity in the current IS-95 CDMA standard. The other related problem is the near-far problem. When a user is far from the base station, its signal is likely to be overshadowed by the users who are near the base station. In the IS-95 standard, perfect power control is utilized, which ensures that the received signal of any user within the cell is equal to that of any other. This requires a complicated control system on both base stations and mobile phones. Users at the far end of the cell usually consume
extremely large amounts of power, which would inevitably shorten the battery life. The assumption of simply considering all the other users as noise leads to the MAI and near-far problems [3]. One viable scheme is to use the cross-correlation information of all the users to perform linear or non-linear multiuser detection [4], shown in Fig. 1, which requires a short code spreading scheme so that the cross-correlation information is fixed. In a short code system, the spreading sequence is repeated bit after bit (shown in Fig. 2), with a different code for each user. The channel estimation block in Fig. 1 is an essential part of a W-CDMA uplink system; it estimates the delay and the amplitude of each user. There are many advanced algorithms for channel estimation, such as maximum-likelihood estimation and subspace parameter tracking. Most of the proposed communication algorithms in W-CDMA systems consist of various matrix and vector level operations. Advanced computer arithmetic techniques, such as CORDIC, online arithmetic units [5], and fast multiplier structures, are especially valuable for the optimization and implementation of these algorithms. In this paper, we focus on the implementation of the multiuser detector block, using computer arithmetic techniques to reduce the complexity.

One group of multiuser detectors is based upon interference cancellation (IC), especially parallel interference cancellation (PIC). The concept is to cancel the interference generated by all users other than the desired user. Lower computational demand and hardware-friendly structures are the major advantages of this strategy. One of the most effective PICs comes from the iterative multistage method, first proposed by Varanasi and Aazhang [6]. The inputs of one particular stage are the estimated bits of the previous stage. After interference cancellation, the new estimates, which should be closer to the transmitted bits, are fed into the next stage. Later researchers developed this multistage idea and introduced other types of PICs [7]. However, almost all the existing multistage based algorithms neglect the fact that as the iterations progress, the solution becomes more invariant, i.e. more elements in the output vector turn out to be the same as the elements in the input vector. Ideally, at the last iteration stage, the output and the input should be identical if the algorithm converges. Therefore, in the last several stages, the multistage detector generates an output which is almost identical to its input. This is a substantial waste of computation power and increases the system delay. Lin [8] developed a differential matched filter and presented an FPGA implementation [9], in which the differential information in the FIR filter's coefficients is used to mitigate the complexity. This idea is important to our research on complexity reduction for the multistage detector. In this paper, we propose a differencing multistage detection algorithm. Unlike the conventional
multistage detector, the number of computations in each stage is not constant, but decreases dramatically stage after stage, which exactly reflects the characteristic of the iterative algorithm. The complexity is therefore reduced, while at the same time the high interference cancellation performance of the multistage detector is preserved. We have implemented both the conventional and the proposed differencing multistage detector in a single ASIC with a select function, because the differencing multistage detector uses the conventional detector as its first stage. Other researchers have also proposed various kinds of CDMA related matched filter, detector and decoder structures [9–11]. Compared to their approaches, our design focuses on the multiuser detector for the next generation W-CDMA and on arithmetic level optimization. Our implementation was fabricated by MOSIS in 1.2 µm CMOS technology. In the next section, we present the mathematical model of the multiuser communication system and our new differencing multistage detection algorithm. We will also analyze the convergence and fixed-point word length issues. An ASIC hardware implementation of this algorithm for real-time communication systems is shown in Section 3.
2. Differencing Multistage Detector

2.1. Multiuser Communication Model

We assume a multiuser binary phase shift keying (BPSK) modulated uplink DS-CDMA synchronous communication system. We could also extend this model to a general asynchronous system by adding the impact from adjacent bits, where an appropriate channel estimation block is required. The channel is a single path channel with additive white Gaussian noise (AWGN). Figure 1 shows the structure of a typical multiuser uplink communication system. At the base station receiver, the continuous received signal r(t) is given by:

r(t) = \sum_{k=1}^{K} A_k \sum_i d_k(i) s_k(t - iT) + n(t)    (1)

where K is the number of users. We obtain the estimated amplitude A_k of the kth user's signal from the channel estimation block. The source data bits are represented by d_k(i) ∈ {+1, −1} because we use BPSK modulation. The signature sequence s_k(t) is the spreading code of the kth user, where T is the duration of one bit. A short repetitive Gold code sequence of period T is used to generate s_k(t) in order to achieve better performance than random codes. Finally, n(t) represents the Additive White Gaussian Noise (AWGN).

2.2. Matched Filters and Cross-Correlation Matrix

A matched filter bank is usually the first stage in baseband signal detection. The technique of the matched filter bank in CDMA systems is to use one matched filter to detect one user's signal. There are no cross links among the filters. Each branch in the matched filter bank consists of the correlation operation of the received signal with one particular user's signature sequence. The ith output of the matched filter of the kth user is

y_k(i) = \int_{iT}^{(i+1)T} r(t) s_k(t - iT) dt    (2)

Equation (2) can also be expressed in a simpler matrix notation on substituting (1),

y = R A d + n    (3)

where vectors y and d are the output of the matched filter bank and the transmitted bits, respectively. There are K elements in each vector. In a synchronous system, the dimension of matrices R and A is K × K. The elements in the cross-correlation matrix can be represented by:

R_{jk} = \int_0^T s_j(t) s_k(t) dt    (4)

We can normalize the auto-correlation coefficients in (4) in our multistage detection algorithm because all the estimated bits are +1 or −1 within the multistage detector (we are interested only in the sign of these bits). The amplitude of each user would not affect the final hard decision. However, if we need to provide soft decision output for a later decoding block, we should also compute the real values of the auto-correlation coefficients. The cross-correlation matrix R can be split into three parts, as in (5):

R = I + L + L^T    (5)
where I is the identity matrix and L is the lower triangular part of matrix R. Since R is symmetric (shown in (4)), the upper triangular part is the transpose of the lower triangular part. The amplitude matrix A of the signal is represented as:

A = \mathrm{diag}(A_1, A_2, \ldots, A_K)    (6)
A is a positive definite matrix with rank K. Our differencing multistage detector is based on estimating the transmitted bits from (3) using a non-linear method.
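To make the model concrete, the quantities appearing in (3) can be generated for a toy synchronous system (a sketch; random ±1 sequences stand in for true Gold codes, and all variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

K, G = 8, 31                              # users and spreading gain (Gold code length)
S = rng.choice([-1.0, 1.0], size=(K, G))  # stand-in +/-1 signature sequences
R = (S @ S.T) / G                         # normalized cross-correlation matrix
A = np.diag(rng.uniform(0.5, 2.0, K))     # diagonal received-amplitude matrix
d = rng.choice([-1.0, 1.0], K)            # transmitted BPSK bits
n = 0.1 * rng.standard_normal(K)          # AWGN at the matched filter outputs
y = R @ A @ d + n                         # matched filter bank output, Eq. (3)
```

With the normalization above, R is symmetric with unit diagonal, matching the split R = I + L + L^T used by the detector.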
2.3. Derivation of the Differencing Multistage Detector
The multistage detector is an interference cancellation scheme. In each stage of the multistage detector, the PIC removes the component of all other users from the received signal in parallel to obtain a better estimated signal for one particular user. Because we do not know the exact information bit for any user, we use the estimated (hard decision) bits in each stage. The output of the lth iteration is:

d^{(l+1)} = \operatorname{sgn}\big( y - (L + L^T) A d^{(l)} \big)    (7)

After l iterations, we are more likely to observe d^{(l+1)} = d^{(l)}, which reflects the convergence of the iterative method. We observe that instead of dealing with each estimated bit vector as in (7), we can calculate the difference of the estimated bits in two consecutive stages, i.e. the input of each stage becomes the differencing vector. If we denote the soft output of stage l by x^{(l+1)} = y - (L + L^T) A d^{(l)}, Eq. (7) can be re-written as

x^{(l+1)} = x^{(l)} - (L + L^T) A z^{(l)}, \qquad d^{(l+1)} = \operatorname{sgn}\big( x^{(l+1)} \big)    (8)

where the differencing vector is

z^{(l)} = d^{(l)} - d^{(l-1)}    (9)

Using this differencing algorithm, computations can be saved by computing (8) instead of (7), as more elements in the differencing vector tend to be zero after several iterations. Moreover, all the non-zero terms in (9) are equal to +2 or −2. The constant multiplication by ±2 in (8) can be implemented by arithmetic shifts. Therefore, dedicated multipliers are not necessary for this algorithm. Finally, because our modification is a linear transformation that subtracts two consecutive stages, the bit error rate (BER) after each stage will not change compared with the conventional multistage detector implementation. Therefore, the final BER is exactly the same as that of the conventional multistage detector. The complete algorithm is described below:

1.  d^{(0)} = \operatorname{sgn}(y)
2.  for k = 1 to K    /* first stage: conventional multistage detection */
3.      x_k^{(1)} = y_k - \sum_{j \ne k} R_{kj} A_j d_j^{(0)}
4.  end
5.  d^{(1)} = \operatorname{sgn}(x^{(1)})
6.  for l = 2 to L    /* second and later stages: differencing multistage detection */
7.      z^{(l-1)} = d^{(l-1)} - d^{(l-2)}    /* differencing vector generation */
8.      for k = 1 to K
9.          x_k^{(l)} = x_k^{(l-1)} - \sum_{j \ne k} R_{kj} A_j z_j^{(l-1)}
10.     end
11.     d^{(l)} = \operatorname{sgn}(x^{(l)})    /* hard decision generation */
12. end
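The equivalence of the conventional and differencing recursions can be checked numerically with a short sketch (random ±1 codes stand in for Gold sequences; all names and parameters are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

K, G = 8, 31
S = rng.choice([-1.0, 1.0], size=(K, G))       # stand-in +/-1 spreading codes
R = (S @ S.T) / G                              # normalized cross-correlations
A = np.diag(rng.uniform(0.5, 2.0, K))
d_true = rng.choice([-1.0, 1.0], K)
y = R @ A @ d_true + 0.1 * rng.standard_normal(K)
M = (R - np.diag(np.diag(R))) @ A              # (L + L^T) A, zero diagonal

def conventional(y, M, stages):
    """Conventional multistage PIC: d <- sgn(y - M d) at every stage."""
    d = np.sign(y)
    for _ in range(stages):
        d = np.sign(y - M @ d)
    return d

def differencing(y, M, stages):
    """Differencing form: x <- x - M z with z = d - d_prev, entries in {0, +/-2}."""
    d_prev = np.sign(y)
    x = y - M @ d_prev                         # first stage is a conventional stage
    d = np.sign(x)
    for _ in range(stages - 1):
        z = d - d_prev                         # mostly zero once iterations converge
        x = x - M @ z                          # only non-zero z_k cost work in hardware
        d_prev, d = d, np.sign(x)
    return d
```

Both routines return identical hard decisions; the hardware saving comes from `z` being mostly zero in later stages, so most of the `M @ z` work disappears.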
2.4. Higher Modulation Schemes
The differencing scheme can be easily extended to QPSK modulation, proposed in 3G systems, since the real and imaginary components can be processed separately. To generalize this to other modulation schemes, the matrix transpose can be replaced with a Hermitian transpose and a generalized slicer used in place of the sign function. Let us first consider QPSK modulation. For QPSK, the transmitted bits are mapped into four different symbols: 1 + j, 1 − j, −1 + j, −1 − j. The output of the matched filter is now complex, and the matrix (L + L^T) becomes (L + L^*). Thus, (8) becomes

x^{(l+1)} = x^{(l)} - (L + L^*) A z^{(l)}    (10)
If we use two separate slicer functions to demodulate the real and imaginary part of QPSK modulated
VLSI Implementation of the Multistage Detector
WCDMA signal, we would have two independent hard decisions. The differencing method can be used in this case, since the real and imaginary parts of the differencing vector contain only 0's and ±2's. We can generalize the slicer function for more complex modulation schemes, such as MPSK and MQAM; the slicer function can be represented in the following equation:
where the ith bit in the hard-decision symbol is produced by the slicer and m is the number of bits per symbol. The slicer function maps the soft decisions into the hard decisions. By reconstructing the hard decisions using the estimated bits, we can still use (10) in the differencing computation mode. We are investigating this as future work.

2.5. Numerical Results
The differencing multistage detector is tested by the Monte Carlo method with extensive simulations to estimate the convergence rate (shown in Fig. 3), given that iterations are forced to stop at the eighth stage. We assume a periodic Gold code sequence of length 31 as the spreading sequence. We observe that the differencing and conventional multistage detectors have the same convergence pattern and that both of them work more effectively when the SNR is high. Also, we observe that eight
stages are sufficient for most cases, which guides the implementation of this algorithm. The BER for the differencing multistage detector is exactly the same as that of the conventional multistage detector throughout the simulations. This is because we change neither the framework of the iterative method nor the convergence rate; Equations (7) and (8) are essentially equivalent to each other. We define the multiple access interference (MAI) level as the ratio between the powers of the strongest and the weakest user. The BER versus SNR and MAI for ten-user and twenty-user systems is shown in Fig. 4. These figures show that the performance of the matched filter degrades dramatically when the MAI increases or the number of users increases, which is due to the near-far and multiple access interference problems. In contrast, the performance of the differencing multistage detector, for moderate MAI and number of users, approaches the bound of a single user system, which is given by Q(\sqrt{2 E_b / N_0}), where E_b/N_0 is the signal energy per bit to noise ratio. The percentage of zeros in the differencing vector, which in turn signifies the reduction in complexity, is illustrated in Fig. 5(a). In this figure, we see that the percentage of zeros in the differencing vector increases as the iterations progress, which shows that the iterations converge progressively. After the fourth stage, the number of zeros approaches 98% in a 15-user communication system. This result explicitly indicates that if we use the conventional multistage detector, almost 98% of the computation resource is unnecessary in the fourth stage. Figure 5(b) gives us a clear view of how many
computations can be saved in a real system. The dotted line represents the accumulated number of floating point operations (flops) needed after each stage in the conventional multistage detector. As we explained earlier, the number of computations remains constant for each stage, which makes the total flops increase linearly. On the contrary, the number of computations in the differencing multistage detector decreases as the iteration proceeds. Thus we can achieve a 6× speedup in an eight stage system according to Fig. 5(b). With more stages in the system to further lower the BER, higher speedups are obtained relative to the conventional multistage detector.
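For reference, the single-user BPSK bound quoted in the text can be evaluated directly (a minimal sketch; the function name is ours):

```python
from math import erfc, sqrt

def single_user_bpsk_ber(ebno_db):
    """Single-user BPSK bound P_b = Q(sqrt(2 Eb/N0)).

    Uses the identity Q(x) = erfc(x / sqrt(2)) / 2, which simplifies here to
    erfc(sqrt(Eb/N0)) / 2.
    """
    ebno = 10.0 ** (ebno_db / 10.0)    # convert dB to a linear ratio
    return 0.5 * erfc(sqrt(ebno))
```

This is the floor the multiuser detector approaches for moderate MAI; the bound decreases monotonically with Eb/N0, as expected.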
3. Real-Time Implementation
The detector can be implemented in real time by both DSPs and ASICs. Although high performance general purpose DSPs can meet the real-time requirements, they are not as cost-effective. In commercial communication systems, sophisticated algorithms tend to be implemented in dedicated ASICs. These hardware implementations are potentially cheaper and faster, with lower power consumption [12–14]. In this section, we present a fixed-point implementation analysis and our ASIC implementation of the differencing multistage detector.
3.1. Fixed-Point Implementation Analysis
Converting an algorithm from floating point to fixed point requires two major procedures. First, we have to estimate the dynamic range of the input data and all the variables used in the algorithm. Second, we have to find an optimized wordlength to represent the numbers and truncate the results. In this section, we present an analysis of the fixed-point implementation of the differencing multistage detector.

3.1.1. Range Estimation. The cross-correlation coefficients from the channel estimation block and the matched filter outputs from the integrators are the two major operands in the differencing multistage detector. Both are generated by high speed analog to digital (A/D) converters, which sample and digitize the analog input signals at the front end. From the characteristics of the Gold code, we know that the maximum value of a cross-correlation coefficient is the auto-correlation of any particular spreading sequence, i.e., the range is bounded by

|R_{jk}| \le G = 2^r - 1    (12)

where G is the spreading gain. Therefore G = 31 if we use a Gold code of length 31. The range of the user's amplitude depends on the dynamic range (or MAI) of the system. The relationship is the following,

\mathrm{MAI}\ \mathrm{(dB)} = 20 \log_{10} (A_{\max} / A_{\min})    (13)
The range estimation for the matched filter output is complicated because it is determined by the SNR, the MAI, and the number of users in the system. For a sufficiently large number of interfering users with balanced interfering powers, the interference can be approximately modelled by a Gaussian distribution [15], as illustrated in Fig. 6. The distribution is also symmetric, based on the assumptions of BPSK modulation, binary distribution of the source bits, and the binary symmetric channel. The range of such a distribution is estimated as

|y_k| \le |\mu| + n\sigma    (14)
where µ is the mean of one peak, σ is the standard deviation of that peak, and n is an empirical constant. For the Gaussian distribution [14], n = 3 can guarantee that 99.9% of all the samples fall in the range [µ − 3σ, µ + 3σ].

3.1.2. Wordlength Analysis. From (12) and (13), we can conclude that the number of bits [14] needed to represent the result of the matrix product RA in (3) is

B = 1 + \lceil \log_2 (G A_{\max}) \rceil    (15)
Here we assume a binary two's-complement representation of the integers. If MAI = 10 dB and r = 5 (Gold code of length 31), this indicates that at least eight bits are needed to represent any cross-correlation coefficient. For the matched filter output, the number of bits needed is nine in a perfect power control case, and ten in a MAI = 10 dB case for up to 20 users (shown in Fig. 7(a)). In Fig. 7, we can also observe that if the number of users is small, the SNR will dominate the variation of the dynamic range. When more users are active in the system, the MAI will determine the number of bits required. For some applications, the optimized wordlength might not follow the relation in (15), but it will usually be smaller. The optimized wordlength is determined by simulation, in which a minimal mean square distortion is set corresponding to a particular performance requirement.
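The eight-bit figure above can be reproduced with a quick sketch (our own; it assumes MAI is a power ratio, so the amplitude ratio is 10^(MAI/20), with the weakest amplitude normalized to 1):

```python
from math import ceil, log2

def twos_complement_bits(max_abs):
    """Bits needed to hold signed integers up to +max_abs in two's complement."""
    return 1 + ceil(log2(max_abs + 1))

G = 2 ** 5 - 1                     # Gold code of length 31 (r = 5)
mai_db = 10
amp_ratio = 10 ** (mai_db / 20)    # MAI = 10 dB as an amplitude ratio, about 3.16
ra_max = G * amp_ratio             # largest magnitude among entries of R*A
bits_corr = twos_complement_bits(int(round(ra_max)))
```

With these assumptions `bits_corr` comes out to 8, matching the text; a bare cross-correlation coefficient (±31) alone would need only 6 bits.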
3.2. Complexity Analysis
Further investigations show that the differencing vector has over 80% zeros after the first iteration in general (shown in Fig. 5), so it can be regarded as a sparse vector. When solving (8), we can process only the non-zero terms instead of performing every addition. In the second iteration, for example, the total computation shrinks to roughly a fifth of a full stage. The theoretical result shows that the total number of computations per user is linear in the number of users in the system. Since we have reduced all the multiplication operations to simple additions and shifts, dedicated multipliers are not necessary. However, advanced computer arithmetic techniques, such as full carry lookahead adders, online arithmetic units [5], etc., are essential to achieve real-time performance.
3.3. Prototyping the Differencing Multistage Detection Algorithm
The structure of the first three stages of the differencing multistage detector is shown in Fig. 8. In the first stage, the PIC uses the previous estimates (from the matched filter output) to generate a new vector of estimated bits. We need a conventional multistage detector as the first stage so that two initial vectors are obtained for the differencing method. After the first stage, the differencing multistage detector starts to use the differencing vector as the input, which is generated by subtracting the input hard decision from the previous hard decision. In later stages, additional zeros should be observed. Furthermore, the inputs for the interference cancellation are not the matched filter outputs, but the previous stage's outputs.
Later stages repeat the same structure as differencing multistage detector stage II, and further computations can be saved. Figure 9 shows our architecture for implementing one stage of the differencing multistage detector for synchronous users in hardware using a single custom chip. Using the select function, if we bypass the differencing vector and the arithmetic shift of the cross-correlation constants, it can also be used as a conventional multistage detector. Soft decision inputs and outputs are generated in parallel for each user, and all users are detected in a serial manner. The timing of inputs and outputs is controlled by a handshaking mechanism. The input numbers are in two's complement format and are stored in the data register bank. At the same time, the hard decisions are acquired from the sign bit of the soft decision, and the differencing vector is generated by combinational
logic. The recoder block (highlighted in Fig. 9) implements the key feature of the differencing multistage detector by selecting all the non-zero elements and tagging their addresses. The timing for the accumulation is scheduled according to the positions of the non-zero elements. If an element is not zero, the recoder will pick out the corresponding cross-correlation data and update all the soft decisions by subtracting or adding it, according to the sign of the differencing vector's element. Loading, shifting, accumulating and writing back are organized as a simple pipeline, managed by a two-phase clock. The pipeline will not stall because no data or control dependencies exist. Finally, the soft and hard decisions are generated one by one, with handshaking protocols to the next stage.
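The recoder's role can be mimicked in software (a sketch with made-up fixed-point values; the function names are ours, not the paper's):

```python
def recode(z):
    """Tag the non-zero differencing-vector entries with their address and sign."""
    return [(k, 1 if zk > 0 else -1) for k, zk in enumerate(z) if zk != 0]

def sparse_update(x, corr, tagged):
    """Apply x[j] -= z_k * corr[k][j] using only tagged entries.

    Since z_k is +/-2, the multiplication by 2 is a left shift; the sign
    selects add versus subtract, exactly as in the recoder datapath.
    """
    for k, sign in tagged:
        for j in range(len(x)):
            x[j] -= sign * (corr[k][j] << 1)
    return x

z = [0, 2, 0, -2]                  # differencing vector: mostly zeros
corr = [[0, 3, 1, 2],              # made-up fixed-point (L + L^T)A entries,
        [3, 0, 4, 1],              # zero diagonal so no self-cancellation
        [1, 4, 0, 5],
        [2, 1, 5, 0]]
soft = sparse_update([10, -4, 7, 0], corr, recode(z))
```

Only the two tagged rows are touched; the two zero entries of `z` cost nothing, which is the source of the hardware savings.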
3.4. Chip Specifications
Table 1 summarizes our prototype chip specifications. To simplify the hardware design, we have focused on a fixed-point implementation of a synchronous system, and the design is based on an eight-user Gold code spreading system. However, it can be extended to a random code, asynchronous system with a variable number of users. We chose an eight-user system since all the control logic consists primarily of binary counters; therefore, a number of users that is a power of 2 is most efficient. The input data bus is limited by the pin count of our prototype chip. In order to meet the fixed-point wordlength requirement, as determined in the
analysis in Section 3.1, we chose 10 bits as the input precision. The detector allows us to detect eight users in a MAI = 15 dB and SNR = 6 dB environment. The internal data bus is wider than the input or output bus to ensure that no overflow can occur during intermediate computations. Figure 10(a) shows the actual chip die photo. The chip has five major blocks: the recoder, a 12-bit carry lookahead adder, and register banks for the cross-correlation coefficients, the soft decisions, and the address information of the non-zero elements. Some programmable logic arrays (PLAs) and temporary registers are necessary for control and pipeline management [16]. Figure 10(b) is the timing diagram from the real-time test of the differencing multistage detector chip. The complete process includes loading the cross-correlation matrix R, the first PIC stage on the first chip, and the second and later stages on the second chip.
3.5. Two Chip Multistage Detector with Predictable Delay
Our chip implements a single stage of the conventional/differencing multistage detector. A complete multistage detector is built by simply cascading two chips with a proper feedback path and glue logic. The flow of data between the two chips is controlled by a simple handshaking mechanism, since the next detection iteration takes time less than or equal to that of the previous stage.
The first chip performs conventional multistage detection. Because the complete matrix-vector operation (7) is performed in the conventional detector, its delay is constant. The second chip is configured as a differencing multistage detector, whose output is fed back to its own input after the first differencing iteration. Since the number of clock cycles required decreases with each iteration, multiple iterations of interference cancellation can be run on the second chip within the processing latency of the first chip. The throughput is determined by the clock rate
of both chips, and the delay is simply that of two stages of the conventional multistage detector. Figure 11 shows the computational savings obtained by using the differencing technique over the conventional detection scheme. The figure shows the number of iterations of the differencing method that are possible within a single iteration of the conventional method. For the worst operating case, at SNR = 4 dB and MAI = 0 dB, the two-chip differencing system can execute at least seven iterations in the time taken for two iterations of a conventional detector. Figure 11 also shows that as the SNR increases, the computational savings are higher and more iterations of the differencing scheme are possible. This is due to the reduction in noise, resulting in lower BER and faster convergence of the detection process. It can also be seen that higher MAI (10 dB) results in faster convergence, and hence more iterations can be performed; this is because MAI = 0 dB implies the equal-power (worst) case for all users. It should be noted that Fig. 11 only conveys the computational savings due to the differencing scheme; eight iterations are sufficient for convergence in most cases. A cascade-mode two-chip differencing multistage detector is shown in Fig. 12. Two ASICs are cascaded in a chain, driven by the same clock. From our hardware testing (shown in Fig. 10(b)), the two-chip system delay with the differencing algorithm is less than 70 cycles. Working at a clock rate of 12.5 MHz, the system delay is much less than that of the conventional multistage detector run for eight stages. Using our design, the system can reach a
throughput up to 190 Kbps with proper buffering. This rate meets the 144 Kbps requirement of the W-CDMA communication proposals [2].
3.6. Scalable ASIC Design
Our hardware implementation demonstrates real-time performance in the communication system. Table 2 estimates the size of a commercial base-station detector chip. A chip that handles 30 asynchronous users (the upper limit for a length-31 Gold code system) would require three full carry lookahead adders as the ALU. Each element of the cross-correlation matrix has 8-bit precision (according to Section 3.1). We could expand the data bus width to 16 bits in order to accommodate higher MAI. If a conservative static
register cell consists of approximately 10 transistors, the total number of transistors would be around 100 K.
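As a rough check on this scaling argument, the storage and transistor counts can be estimated directly. The sketch below uses assumed storage terms (the 8-bit cross-correlation precision from Section 3.1, a hypothetical 16-bit soft-decision width, and 10 transistors per static register cell); the exact register breakdown of a commercial design is not reproduced here.

```python
# Back-of-the-envelope estimate for a 30-user detector, assuming
# (hypothetically) that storage is dominated by the cross-correlation
# matrix and the soft-decision registers.
USERS = 30
CORR_BITS = 8               # per cross-correlation entry (Section 3.1)
BUS_BITS = 16               # widened internal precision (assumed)
TRANSISTORS_PER_CELL = 10   # conservative static register cell

corr_cells = USERS * USERS * CORR_BITS   # cross-correlation storage
soft_cells = USERS * BUS_BITS            # soft-decision registers
total_cells = corr_cells + soft_cells
total_transistors = total_cells * TRANSISTORS_PER_CELL

print(total_cells, total_transistors)
```

Under these assumptions the count lands on the order of 10^5 transistors, consistent with the 100 K figure quoted above.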
4. Conclusion
In this paper, we have focused on real-time implementation issues for the multistage detection algorithm in wideband CDMA receivers. We developed a novel differencing multistage detection algorithm that exploits the convergence property of the iterative algorithm to greatly reduce the complexity of the multistage detector. The differencing multistage detector computes the difference between the decision vectors of two consecutive stages and saves computations when elements of the difference become zero. We designed an ASIC chip to implement the differencing multistage detector. The chip was fabricated in 1.2 µm CMOS technology. Two cascaded chips can perform at least eight stages of multistage detection with a throughput of 190 Kbps/user and a delay of under 70 clock cycles in an eight-user system. The architecture is scalable to larger designs.

Acknowledgments

We are extremely grateful to the reviewers for their valuable suggestions. We would like to acknowledge the help of Praful Kaul in the VLSI implementation of the differencing multistage detector. We would also like to thank Srikrishna Bhashyam and Ashutosh Sabharwal for discussions and ideas. This work was supported by Nokia Inc., Texas Instruments, the Texas Advanced Technology Program under grants 1997-003604-044 and 1999-003604-080, and NSF under grants NCR-9506681 and ANI-9979465.
References

1. J.G. Proakis, Digital Communications, New York: McGraw-Hill, 1989.
2. "Third Generation Partnership Project," http://www.3gpp.org
3. S. Moshavi, "Multi-User Detection for DS-CDMA Communications," IEEE Communications Magazine, Oct. 1996, pp. 124–136.
4. S. Verdú, Multiuser Detection, Cambridge, UK: Cambridge University Press, 1998.
5. M.D. Ercegovac, "On-Line Arithmetic: An Overview," in Real Time Signal Processing VII, SPIE, vol. 495, 1984, pp. 86–93.
6. M.K. Varanasi and B. Aazhang, "Multistage Detection in Asynchronous Code-Division Multiple-Access Communications," IEEE Transactions on Communications, vol. 38, no. 4, 1990, pp. 509–519.
7. D. Divsalar, M.K. Simon, and D. Raphaeli, "Improved Parallel Interference Cancellation for CDMA," IEEE Transactions on Communications, vol. 46, no. 2, 1998, pp. 258–268.
8. W. Lin, "Differentially Matched Filter for a Spread Spectrum System," United States Patent 5,663,983, Sept. 1997.
9. K. Liu, W. Lin, and C. Wang, "Pipelined Digital Differential Matched Filter FPGA Implementation and VLSI Design," in IEEE Custom Integrated Circuits Conference, San Diego, CA, May 1996, pp. 75–78.
10. J.K. Hinderling, T. Rueth, K. Easton, D. Eagleson, D. Kindred, R. Kerr, and J. Levin, "CDMA Mobile Station Modem ASIC," IEEE Journal of Solid-State Circuits, vol. 28, no. 3, 1993, pp. 253–260.
11. I. Kang and A.N. Willson, "Low-Power Viterbi Decoder for CDMA Mobile Terminals," IEEE Journal of Solid-State Circuits, vol. 33, no. 3, 1998, pp. 473–482.
12. C. Sengupta, S. Das, J.R. Cavallaro, and B. Aazhang, "Fixed Point Error Analysis of Multiuser Detection and Synchronization Algorithms for CDMA Communication Systems," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seattle, WA, May 1998, vol. 6, pp. 3249–3252.
13. S. Das, C. Sengupta, J.R. Cavallaro, and B. Aazhang, "Hardware Design Issues for a Mobile Unit for Next Generation CDMA Systems," in Proceedings of SPIE—Advanced Signal Processing Algorithms, Architectures, and Implementations VIII, San Diego, CA, July 1998, vol. 3461, pp. 476–487.
14. S. Kim, K. Kum, and W. Sung, "Fixed-Point Optimization Utility for C and C++ Based Digital Signal Processing Programs," IEEE Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, vol. 45, Nov. 1998, pp. 1455–1464.
15. H.V. Poor and S. Verdú, "Probability of Error in MMSE Multiuser Detection," IEEE Transactions on Information Theory, vol. 43, no. 3, May 1997, pp. 858–871.
16. G. Xu and J.R. Cavallaro, "Real-Time Implementation of Multistage Algorithm for Next Generation Wideband CDMA Systems," in Advanced Signal Processing Algorithms, Architectures, and Implementations IX, SPIE, Denver, CO, July 1999, vol. 3807, pp. 62–73.
Gang Xu received his bachelor's degree in electrical engineering from Tsinghua University, Beijing, China in 1996 and his master's degree in computer engineering from Rice University, Houston, TX in 1999. From 1996 to 1997, he worked at Zhenzhong Electronic Corp., Beijing, China, as a hardware engineer in charge of palmtop computer design. In 1999, he joined Nokia Research Center in Irving, TX.
Since then, he has been involved in the rapid prototyping of OFDM radio using FPGAs and digital signal processors. His current research interests include signal processing architectures and rapid prototyping of advanced wireless algorithms.
[email protected]
Sridhar Rajagopal received his B.E. (Honors with Distinction) in Electronics Engineering from VJTI, Bombay University, India in 1998 and his M.S. in Electrical and Computer Engineering from Rice University in 2000. He is currently a doctoral candidate at Rice University. His research interests are in wireless communications, VLSI signal processing, computer arithmetic and computer architecture. Web: http://www.ece.rice.edu/~sridhar/
[email protected]
Joseph R. Cavallaro was born in Philadelphia, Pennsylvania on August 22, 1959. He received the B.S. degree from the University of Pennsylvania, Philadelphia, PA, in 1981, the M.S. degree from Princeton University, Princeton, NJ, in 1982, and the Ph.D. degree from Cornell University, Ithaca, NY, in 1988, all in electrical engineering. From 1981 to 1983, he was with AT&T Bell Laboratories, Holmdel, NJ. In 1988, he joined the faculty of Rice University, Houston, TX, where he is currently an Associate Professor of Electrical and Computer Engineering, having received tenure in 1994. His research interests include computer arithmetic, fault tolerance, VLSI design and microlithography, and DSP and VLSI architectures and algorithms for applications in wireless communications and robotics. Dr. Cavallaro is a recipient of the NSF Research Initiation Award 1989–1992 and the IBM Graduate Fellowship 1987–1988, and is
a member of IEEE, Tau Beta Pi and Eta Kappa Nu. During the 1996–1997 academic year, he served at the U.S. National Science Foundation as director of the Prototyping Tools and Methodology Program in the Computer (CISE) Directorate. He is currently the Associate Director of the Center for Multimedia Communication at Rice University. Web: http://www.ece.rice.edu/~cavallar/
[email protected]
Behnaam Aazhang received his B.S. (with highest honors), M.S., and Ph.D. degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 1981, 1983, and 1986, respectively. From 1981 to 1985, he was a Research Assistant in the Coordinated Science Laboratory, University of Illinois. In August 1985, he joined the faculty of Rice University, Houston, Texas, where he is now the J.S. Abercrombie Professor in the Department of Electrical and Computer Engineering and the Director of the Center for Multimedia Communications. He has been a Visiting Professor at IBM Federal Systems Company, Houston, Texas, the Laboratory for Communication Technology at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, the Telecommunications Laboratory at the University of Oulu, Oulu, Finland, and the U.S. Air Force Phillips Laboratory, Albuquerque, New Mexico. His research interests are in the areas of communication theory, information theory, and their applications, with emphasis on multiple access communications, cellular mobile radio communications, and optical communication networks. Dr. Aazhang is a Fellow of IEEE, a recipient of the Alcoa Foundation Award 1993, the NSF Engineering Initiation Award 1987–1989, and the IBM Graduate Fellowship 1984–1985, and is a member of Tau Beta Pi and Eta Kappa Nu. He is currently serving on the Houston Mayor's Commission on Cellular Towers. He has served as the Editor for Spread Spectrum Networks of IEEE Transactions on Communications 1993–1998, as the Treasurer of the IEEE Information Theory Society 1995–1998, the Technical Area Chair of the 1997 Asilomar Conference, Monterey, California, the Secretary of the Information Theory Society 1990–1993, the Publications Chairman of the 1993 IEEE International Symposium on Information Theory, San Antonio, Texas, and as the co-chair of the Technical Program Committee of the 2001 Multi-Dimensional and Mobile Communication (MDMC) Conference in Pori, Finland.
Web: http://www.ece.rice.edu/~aaz/
[email protected]
Journal of VLSI Signal Processing 30, 35–54, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Modulation and Coding for Noncoherent Communications* MICHAEL L. MCCLOUD AND MAHESH K. VARANASI Department of Electrical and Computer Engineering, University of Colorado at Boulder, Boulder, CO 80309-0425 USA Received October 5, 2000; Revised July 23, 2001
Abstract. Until recently, the theory of noncoherent communications was premised on the use of orthogonal multi-pulse modulation such as frequency shift keying. The main drawback of this modulation scheme has been its poor spectral efficiency (rate/bandwidth). This paper considers instead the more general non-orthogonal multi-pulse modulation (NMM) technique. Optimal and suboptimal noncoherent detection strategies for NMM are reviewed and their asymptotic (high SNR) performances are characterized for the additive Gaussian as well as the Rayleigh fading channels. The resulting non-Euclidean distance measures are then used to design NMM signal sets that yield significantly higher bandwidth efficiencies than their orthogonal counterparts. NMM in conjunction with convolutional coding is also studied as a way to improve energy efficiency. Several optimal convolutional codes are examined together with our signal designs. An introduction to equalization on the noncoherent channel is also presented and illustrated by example. This paper thus contains several new results and attempts at the same time to give a tutorial exposition of the subject of noncoherent communications.

Keywords: noncoherent communications, modulation, signal design, convolutional coding, error analysis
1. Introduction

Orthogonal multi-pulse modulation (OMM) is usually employed on noncoherent communication channels, in which a user transmits one of M orthogonal signals during each baud interval. The most common implementation of OMM is frequency shift-keying (FSK), in which the orthogonal signals are chosen to be M truncated sinusoidal waveforms. Detection may be performed by correlating the received waveform with each sinusoid and then choosing the symbol that corresponds to the largest-magnitude matched-filter output. The chief advantage of OMM is the simple implementation of the envelope-detection receiver. Its main drawback is its poor spectral efficiency, requiring a bandwidth which grows exponentially with the transmission rate (number of information bits per baud interval).

*This work was supported in part by National Science Foundation Grants ANI-9725778 and ECS-9979400 and by US Army Research Office Grant DAAD19-99-1-0291.
Non-orthogonal multi-pulse modulation (NMM) is a signaling scheme in which the signals are generally correlated. The primary advantage of NMM over OMM is that large signal sets may be designed with small bandwidth expenditures. When NMM signal sets are employed, the optimal detection rules for the noncoherent additive white Gaussian noise (AWGN) and Rayleigh fading channels have an asymptotic error rate performance that can be exploited for signal design. For equal-energy signals, the distance measure which dictates performance at large values of the signal to noise ratio (SNR) is the maximum magnitude of the crosscorrelations between distinct pairs of (complex-valued) signals. In this paper, we have attempted to present the subject of noncoherent communications in a comprehensive yet accessible manner. We develop a theory of detection, signal design, coding, and equalization for noncoherent communications with NMM. Our results on detectors and their performance analysis (Sections 3, 4) are presented primarily as review, as they
appear in the journal references [1, 2]. The equalization (Section 9) and NMM signal design (Section 5) results have appeared in conference versions of this paper in [3] and [4], respectively. Our findings on convolutional coding for NMM for the AWGN channel (Section 7) are new, as is the asymptotic analysis of the optimal noncoherent detector (Section 4.3). Also presented here for the first time are the results on detection and convolutional coding and performance analysis for the Rayleigh fading channel. For simplicity, we have restricted attention to single-antenna systems. Signal detection on the noncoherent NMM channel is considered in Section 3. The dependence of the error probability for the AWGN channel on the signal geometry for both equal and unequal energy signal constellations is derived in Section 4. These results suggest the problem of designing signals for use on the noncoherent channel. In Section 5 we discuss signal design for the AWGN channel. In Section 5.1 we propose a method of successive updates to design equal energy NMM signal constellations with a guaranteed minimum distance in a fixed dimensional space [4]. A non-linear optimization problem is solved at each update subject to several quadratic inequality constraints. A modified Fletcher-Powell optimization algorithm is employed through the FSQP [5] optimization package. This design technique is versatile and able to handle arbitrary combinations of dimensionality and maximum cross-correlation. It has been found to work well for relatively small cardinality signal sets. In Section 5.2 we introduce signal designs for unequal energy constellations. We present a method based on building signals in disjoint energy shells and give an example that gives a notable improvement in energy efficiency over the equal energy designs. 
To develop insight into the potential gains that can be expected from our NMM designs with coding, we obtain the capacity of the NMM channel in Section 6 by extending the corresponding OMM results of [6]. By manipulating the multi-dimensional integral expression for mutual information appropriately, we find an efficient Monte-Carlo procedure for numerically evaluating the channel capacity. We compare our designs with OMM through the coded spectral efficiency measure, which is simply the channel capacity normalized by the minimum signal bandwidth required. We find that our designs improve considerably on OMM. The NMM signal sets are then used in conjunction with low-rate convolutional codes in Section 7 to map out new regions of the energy-spectral efficiency plane.
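The Monte-Carlo strategy for evaluating mutual information can be illustrated generically. For an input X uniform over M signals and a memoryless channel with conditional density p(y|x), the mutual information is I(X;Y) = E[log2(p(y|x) / (1/M) Σ_l p(y|s_l))], which can be estimated by sampling. The sketch below does this for a simple coherent complex AWGN channel as a stand-in; it illustrates the numerical strategy only, not the noncoherent integral of [6].

```python
import numpy as np

def mc_mutual_information(S, sigma2, trials=20000, seed=1):
    """Monte-Carlo estimate of I(X;Y) in bits for Y = s_X + N, with X
    uniform over the columns of S and N circular complex Gaussian."""
    rng = np.random.default_rng(seed)
    N, M = S.shape
    idx = rng.integers(M, size=trials)
    noise = (rng.standard_normal((trials, N)) +
             1j * rng.standard_normal((trials, N))) * np.sqrt(sigma2 / 2)
    Y = S.T[idx] + noise
    # log p(y | s_l) up to a common constant, for every candidate l
    d2 = np.abs(Y[:, None, :] - S.T[None, :, :]) ** 2   # (trials, M, N)
    logp = -d2.sum(axis=2) / sigma2                     # (trials, M)
    num = logp[np.arange(trials), idx]
    # log of the mixture (1/M) sum_l p(y | s_l), via log-sum-exp
    den = np.logaddexp.reduce(logp, axis=1) - np.log(M)
    return np.mean(num - den) / np.log(2)
```

At high SNR the estimate approaches log2(M) bits per symbol; normalizing by the signal dimensionality gives a coded spectral efficiency in the spirit of the comparison described above.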
We examine the rate 1/Q convolutional codes presented for use with OMM in [7]. Numerical results indicate that by allowing correlation among the modulation waveforms, we can obtain improvements in spectral efficiency with small losses in energy efficiency. We then outline the extension of our results to the Rayleigh fading channel in Section 8 for the single-transmit and single-receive antenna channel. The optimal rule admits an asymptotic performance measure which is identical to that found for the AWGN channel, so the signal design results developed for that channel may be used without modification. We derive the two-codeword error probabilities (and their asymptotic behavior) for convolutional coding on this channel. The general case of multiple transmit antenna diversity is developed in [8, 9]. In the presence of a bandlimiting or multi-path channel, the orthogonality of the information-bearing signals with OMM is lost, and the received waveforms must be treated as resulting from NMM with intersymbol interference (ISI), regardless of the original modulation scheme. Equalization of such signals is described as difficult in [10]. However, in [11] several equalizers (linear and decision feedback) are proposed for NMM over baseband (coherent) channels. In Section 9 we develop zero-forcing equalization for use with the multiple-input/multiple-output (MIMO) discrete-time model characterizing NMM over the noncoherent ISI channel, as in [3]. The more general case of the non-invertible channel is treated in [12, 13].
2. The Noncoherent Communication Channel
We begin with a description of the basic complex baseband communication system depicted in Fig. 1. A source transmits a sequence of M-ary information symbols at rate 1/T symbols/sec. The modulator maps each input symbol into one of M possible waveforms. The transmitted signal may pass through a channel which induces distortion, for instance when the transmission medium has a small bandwidth relative to the signal or when frequency-selective fading is present. We model this distortion as resulting from a linear system with complex-baseband transfer function H(f). The channel may also induce a time-varying fading of the signal, in which case the amplitude and phase change due to multipath effects. Finally, additive noise corrupts the signal, and the received waveform is
$$r(t) = a(t)\,e^{j\theta(t)}\,\tilde{s}_m(t) + n(t),$$

where $\tilde{s}_m(t)$ is the mth composite waveform corresponding to the response of the channel to the mth transmitted signal, each normalized to have unit energy. The additive noise, n(t), is modeled as a complex circularly symmetric white Gaussian process with power $\sigma^2$. In the special case of a distortionless channel (H(f) = 1 and no fading), we say that we have the additive white Gaussian noise (AWGN) channel model. When we employ channel coding, we assume that perfect interleaving is employed, so that the carrier phase and the fading process can be modeled as being independent from symbol to symbol. In the absence of inter-symbol interference and fading we consider the simplified model for the noncoherent AWGN channel:
$$r(t) = e^{j\theta}\,s_m(t) + n(t),$$

where we do not assume knowledge of the carrier phase $\theta$.
2.1. Orthogonal and Non-Orthogonal Multipulse Modulation
Orthogonal multipulse modulation (OMM) is often employed on noncoherent channels, wherein one of M time-limited orthogonal signals is transmitted in each baud period. The most common realization of OMM is frequency shift keying (FSK), wherein one of M orthogonal sinusoids with frequency separation 1/T is transmitted in each baud period. Detection is performed by matching the received signal r(t) against each lowpass-equivalent transmitted signal.
The main drawback of OMM is its low spectral efficiency (the ratio of the transmission rate to the bandwidth employed). The smallest achievable bandwidth
for OMM is B = M/T. This implies that as the transmission rate, $R = \log_2(M)/T$, is increased, the spectral efficiency decreases as $\log_2(M)/M$. In order to overcome this problem, non-orthogonal multipulse modulation (NMM) may be employed, wherein the signals are allowed to be correlated. For a fixed signal dimensionality, N, NMM employing M signals has a spectral efficiency of $\log_2(M)/N$. For a fixed performance, this increase in spectral efficiency comes at the expense of an increase in transmitted power.
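The bandwidth comparison can be made concrete. Assuming the minimum-bandwidth figures quoted (B = M/T for OMM, B = N/T for an N-dimensional NMM set), the spectral efficiencies in bits/s/Hz can be computed as follows.

```python
import math

def omm_spectral_efficiency(M):
    # OMM needs M dimensions for M signals: log2(M)/M bits/s/Hz
    return math.log2(M) / M

def nmm_spectral_efficiency(M, N):
    # NMM packs M correlated signals into N < M dimensions
    return math.log2(M) / N

# Example: with 64 signals, an NMM design in 32 dimensions
# doubles the spectral efficiency of OMM.
print(omm_spectral_efficiency(64), nmm_spectral_efficiency(64, 32))
```

Halving the dimensionality doubles the efficiency, which is the mechanism behind the factor-of-two gains quoted for the NMM designs in this paper.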
2.2. Signal Space Representations
Suppose that the NMM signals have a common basis $\{\phi_n(t)\}_{n=1}^{N}$, meaning that every signal has the baseband representation

$$s_m(t) = \sum_{n=1}^{N} s_m[n]\,\phi_n(t).$$
A consequence of the Karhunen–Loeve expansion theorem [14] is that for white noise, n(t), the received signal may be projected onto this basis for the purpose of detection without loss of generality. This means that any receiver for the noncoherent channel can employ the front-end generalized sampler with

$$y[n] = \int r(t)\,\phi_n^*(t)\,dt, \qquad n = 1, \ldots, N.$$
The N × 1 measurement vector y is termed a sufficient statistic for the detection problem. Assuming that the basis functions satisfy the generalized Nyquist condition [10], we obtain an ISI-free model for y which, under hypothesis $H_m$ (that the mth signal is transmitted), is given as

$$\mathbf{y} = \alpha\,\mathbf{s}_m + \mathbf{n},$$
where $\alpha$ represents any amplitude and phase distortions due to a non-ideal channel, and the signal vector is $\mathbf{s}_m = [s_m[1], \ldots, s_m[N]]^T$,
and the additive noise, n, is a complex normal random vector with mean zero and correlation $\sigma^2\mathbf{I}$. Throughout this paper, finite dimensional vectors and matrices are denoted by boldface type. We will use the notation A* to denote the complex conjugate transpose of the matrix A.

3. Detection

3.1. Generalized Maximum Likelihood Detection

The idea behind the generalized maximum likelihood (GML) detection strategy is that if the gain, $\alpha$, were known to the receiver, the optimal decision rule would be the maximum likelihood (ML) detector:

$$\hat m = \arg\min_m \|\mathbf{y} - \alpha\,\mathbf{s}_m\|^2,$$

assuming equal priors. This detector is derived from maximizing the likelihood functions

$$f(\mathbf{y} \mid H_m, \alpha) = \frac{1}{(\pi\sigma^2)^N}\exp\!\left(-\frac{\|\mathbf{y} - \alpha\,\mathbf{s}_m\|^2}{\sigma^2}\right) \quad (9)$$

over m. In the absence of this side information we may substitute the maximum likelihood estimate of $\alpha$ into the detector to form the GML rule. The maximum likelihood estimate is found from standard least squares theory (see e.g. [15]) and is given by $\hat\alpha_m = \mathbf{s}_m^*\mathbf{y}/\|\mathbf{s}_m\|^2$, so that $\hat\alpha_m\mathbf{s}_m = \mathbf{P}_m\mathbf{y}$, where $\mathbf{P}_m$ is the orthogonal projection matrix with range space $\mathrm{span}\{\mathbf{s}_m\}$. The corresponding non-coherent GML detection rule is

$$\hat m = \arg\max_m \|\mathbf{P}_m\mathbf{y}\|^2$$

and was derived in a different context in [1, 2].

3.2. The Asymptotically Optimal Detector

In this section we develop the asymptotically optimal rule of [1]. We assume knowledge of the signal energies, $E_m = \|\mathbf{s}_m\|^2$, and look for the optimal detector for the unknown-phase channel. Assuming that the phase has a uniform distribution, the minimum error probability rule averages the likelihood functions over the distribution of the phase and selects the maximum of the resulting statistics. The optimal detector is therefore

$$\hat m = \arg\max_m \; \exp\!\left(-\frac{E_m}{\sigma^2}\right) I_0\!\left(\frac{2\,|\mathbf{s}_m^*\mathbf{y}|}{\sigma^2}\right),$$

where $I_0$ is the zeroth-order modified Bessel function of the first kind, and we have removed the hypothesis-independent terms. This test is optimal in terms of probability of error but may be too complex to implement due to the need to evaluate the Bessel function at the detector. For this reason we consider the asymptotically optimal (AO) approximation introduced in [16]:

$$\hat m = \arg\max_m \left\{ |\mathbf{s}_m^*\mathbf{y}| - \frac{E_m}{2} \right\}. \quad (13)$$
This test was derived by using the asymptotic expansion of the Bessel function [17],

$$I_0(x) \sim \frac{e^x}{\sqrt{2\pi x}}, \qquad x \to \infty,$$
and ignoring the denominator terms. It is interesting to notice that the AO detector is also a GML rule when the phase, $\theta$, is assumed unknown and the likelihood function of (9) is maximized with respect to $\theta$. Specifically, we first let $\hat\theta_m = \arg(\mathbf{s}_m^*\mathbf{y})$ be the maximizing value of the phase under hypothesis $H_m$.
The corresponding GML test is found by maximizing the likelihood functions of (9) over m, with the phase term replaced by the corresponding $\hat\theta_m$ for each hypothesis, which is exactly the AO test of (13). Finally, notice that when the signals have equal energy ($E_m$ constant), the AO test reduces to the GML detector.

4. Performance Analysis of the Noncoherent Detectors

In this section we characterize the performance of the GML and AO detectors in terms of exact expressions for pair-wise error probabilities, and study their asymptotic high signal-to-noise ratio (SNR) behavior for the noncoherent AWGN channel. Given a detector, we can bound the probability of error by

$$\frac{1}{M}\sum_{m=1}^{M}\max_{l \neq m} P_{l|m} \;\leq\; P_e \;\leq\; \frac{1}{M}\sum_{m=1}^{M}\sum_{l \neq m} P_{l|m},$$

where $P_{l|m}$ is the probability that signal l is judged more likely than signal m under hypothesis $H_m$ (that signal m is transmitted). These bounds are asymptotically coincident in the SNR, so that we need only characterize the pair-wise error probabilities $P_{l|m}$. In the next subsections and the sections to follow, we shall repeatedly use the nth-order modified Bessel function and the Marcum Q function, defined by

$$I_n(x) = \frac{1}{2\pi}\int_0^{2\pi} e^{x\cos\phi}\cos(n\phi)\,d\phi, \qquad Q_1(a,b) = \int_b^{\infty} x\exp\!\left(-\frac{x^2+a^2}{2}\right) I_0(ax)\,dx. \quad (18)$$

4.1. The GML Detector

The probability of choosing signal l over signal m under hypothesis $H_m$ follows from a general result. We will begin by stating a general theorem which gives the probability that the difference of two magnitude-squared normal random variables is negative.

Theorem 4.1. Let X and Y be complex normal random variables with means $\mu_X$ and $\mu_Y$, variances $\sigma_X^2$ and $\sigma_Y^2$, and cross-covariance $\sigma_{XY}$. The probability that $|X|^2 - |Y|^2 < 0$ is given by

$$\Pr\{|X|^2 - |Y|^2 < 0\} = Q_1(a, b) - \frac{v_2}{v_1 + v_2}\exp\!\left(-\frac{a^2 + b^2}{2}\right) I_0(ab),$$

where the parameters a, b, $v_1$, and $v_2$ are determined by the means, variances, and cross-covariance,
and the Marcum Q function and the zeroth-order modified Bessel function are defined in (18). Proof:
See [18, Appendix B].
For our problem we have $X = \bar{\mathbf{s}}_m^*\mathbf{y}$ and $Y = \bar{\mathbf{s}}_l^*\mathbf{y}$, with $\bar{\mathbf{s}}_m = \mathbf{s}_m/\|\mathbf{s}_m\|$. Substituting these values into Theorem 4.1, we obtain the following expression for the pairwise error probability of the GML detector:

$$P_{l|m} = Q_1(a, b) - \frac{1}{2}\exp\!\left(-\frac{a^2 + b^2}{2}\right) I_0(ab), \qquad a, b = \sqrt{\frac{E}{2\sigma^2}\left(1 \mp \sqrt{1 - |\rho_{lm}|^2}\right)},$$

where $\rho_{lm} = \bar{\mathbf{s}}_l^*\bar{\mathbf{s}}_m$ and E is the common signal energy.
Remark. In order to obtain an exponential decay of the error probability for the GML detector (with increasing SNR), we require that no two signals be related by a complex scalar. This ensures that each cross-correlation coefficient, $|\rho_{lm}|$, is strictly less than one, and means that the signals to be designed must live in at least two complex dimensions.
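The detectors of Section 3 are simple to simulate. The sketch below assumes the standard discrete model y = e^{jθ} s_m + n; for unit-energy signals the GML statistic reduces to the squared magnitude of the matched-filter output, and a common form of the AO rule additionally subtracts half the signal energy (which matters only for unequal-energy sets). This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def gml_detect(y, S):
    """GML rule for unit-energy columns of S: pick max |s_m^H y|^2."""
    stats = np.abs(S.conj().T @ y) ** 2
    return int(np.argmax(stats))

def ao_detect(y, S):
    """AO rule: max of |s_m^H y| - E_m / 2, with energies
    E_m = ||s_m||^2. Coincides with GML when energies are equal."""
    E = np.sum(np.abs(S) ** 2, axis=0)
    stats = np.abs(S.conj().T @ y) - E / 2.0
    return int(np.argmax(stats))
```

Both rules are invariant to the unknown carrier phase: multiplying y by e^{jθ} leaves every statistic unchanged, which is exactly the property a noncoherent detector must have.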
4.2. Error Probability of the Asymptotically Optimal Detector
We now consider the performance of the AO detector introduced in Section 3.2. This analysis is also valid for the optimal detector at large SNRs, due to the equivalence of the two detectors (the optimal and the AO) in that regime. This problem was considered in [16] and [1], and we will simply state that result: the pairwise error probability is given by the expression in (20). We can obtain an asymptotic expression for (20) using the results of [19, 20]. Proceeding as in [20], we first rewrite (20) as (21), which allows us to use an asymptotic expression for the Marcum Q function. Substituting these values into (21), and keeping only the exponential dependency of the resulting expression, we arrive at the pairwise error probability expression of (25).
In the next subsection, we show that at high SNRs, this pairwise error probability approaches the simple form given in (27).
where the nth-order modified Bessel function, $I_n$, is given by (18). Using the fact that the exponential function is monotonic in its argument, we find that for
Notice that when the energies are equal, this simplifies to the exponential bound for the GML detector given in (23), as it must.

Remark. In order to ensure that the error probability for the AO detector decays exponentially, we require that no two signals be related by a phase shift. We can, however, allow pairs of signals to be related by a positive scaling (unlike the case of the GML detector). This implies that we can design the signals in one or more complex dimensions.

With these bounds on the error probability, we can consider the problem of designing signals to minimize the appropriate distance measure for use with noncoherent detection. For the general case of unequal energy signaling the appropriate metric is

$$d^2(\mathbf{s}_l, \mathbf{s}_m) = \min_{\theta}\|\mathbf{s}_l - e^{j\theta}\mathbf{s}_m\|^2 = E_l + E_m - 2\,|\mathbf{s}_l^*\mathbf{s}_m|. \quad (28)$$

Interestingly, this distance measure can be interpreted as the worst-case Euclidean distance between $\mathbf{s}_l$ and $e^{j\theta}\mathbf{s}_m$ with respect to the phase term, $\theta$. We should point out again that this distance measure is appropriate for the AO detector, and for the optimal detector at large SNRs.
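Both figures of merit are easy to evaluate for a candidate constellation. Minimizing the Euclidean distance over the unknown phase yields E_l + E_m − 2|s_l^H s_m| (a standard identity), and for equal-energy sets the governing quantity is the maximum cross-correlation magnitude. The sketch below computes both for the columns of a signal matrix.

```python
import numpy as np

def noncoherent_distances(S):
    """Pairwise worst-case-phase distances for the columns of S:
    d^2(s_l, s_m) = min_theta ||s_l - exp(1j*theta) * s_m||^2
                  = E_l + E_m - 2 * |s_l^H s_m|."""
    E = np.sum(np.abs(S) ** 2, axis=0)
    G = np.abs(S.conj().T @ S)                 # |s_l^H s_m|
    return E[:, None] + E[None, :] - 2.0 * G

def max_crosscorrelation(S):
    """Largest |s_l^H s_m| over distinct pairs: the equal-energy
    figure of merit a signal design seeks to minimize."""
    G = np.abs(S.conj().T @ S)
    np.fill_diagonal(G, -np.inf)
    return float(G.max())
```

For an orthonormal set, every off-diagonal distance equals 2 and the maximum cross-correlation is 0, recovering the OMM baseline against which the NMM designs are compared.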
4.3. Asymptotic Performance Analysis of the AO Detector
In this section, we prove the result in (27). We are interested in the asymptotic behavior of the pairwise error probability given in (25). In order to get this expression into a tractable form, we will replace some of its terms by asymptotic approximations. We begin by examining the Marcum Q function. Notice that for a fixed value of the first parameter, the second parameter is increasing as $\sigma^2$ becomes small. Letting a and b denote the two arguments, we use the infinite summation form of the Marcum Q-function:

$$Q_1(a, b) = \exp\!\left(-\frac{a^2 + b^2}{2}\right)\sum_{k=0}^{\infty}\left(\frac{a}{b}\right)^k I_k(ab), \qquad b > a > 0. \quad (29)$$
We use this expression together with (29) to find the bounds
where the lower bound is the first term in the sum of (29), and the upper bound follows when we employ our bound on $I_k$ and simplify the resulting geometric sum. For large values of the arguments, we have the corresponding asymptotic approximations, and consequently
for small values of $\sigma^2$. Similarly, we replace the Bessel function in Eq. (25) by its asymptotic form; since its argument becomes large as $\sigma^2$ vanishes, we have the asymptotic approximation
where we have defined
Applying Eqs. 4.462.2 and 3.462.1 of [21], we find
where $D_v(\cdot)$ is the parabolic-cylinder function of order v [21]. The left-most term in Eq. (35) becomes large as $\sigma^2$ approaches zero, while the right-most term approaches zero. Consequently we are left with
Keeping only the exponential term and expanding the expression we find our final bound:
5. Signal Design for the Noncoherent AWGN Channel

In this section we discuss techniques for designing signals for the AWGN channel which minimize the appropriate metric from the previous section under a strict bandwidth constraint. We will introduce a technique for designing equal energy signals in Section 5.1 and suggest extensions to unequal energy designs in Section 5.2.

5.1. Signal Design for Equal Energy Constellations

In this section we consider designing signals with equal (unit) energy, such that the maximum cross-correlation coefficient is bounded by a constant $\lambda$ and the overall bandwidth is constrained. This problem was considered by the authors in [4]; we describe the results of that paper here and suggest some extensions. Given the performance measure, we seek the largest cardinality set such that the cross-correlation magnitude is less than $\lambda$ for all pairs of distinct signals. When the absolute bandwidth of the signal constellation is constrained to B Hz, the dimensionality of the signal set is correspondingly constrained to N = BT, where T is the signaling period [18]. Within this framework we may state the signal design problem as follows: given N and $\lambda$, find the set $\{\mathbf{s}_m\}_{m=1}^{M}$ of largest cardinality M satisfying (1) $\|\mathbf{s}_m\| = 1$ for all m, (2) $|\mathbf{s}_l^*\mathbf{s}_m| \leq \lambda$ whenever $l \neq m$, and (3) $\mathrm{Dim}\{\mathbf{s}_1, \ldots, \mathbf{s}_M\} \leq N$.

We see that the signal design problem can also be stated as finding the largest finite subset of the N-dimensional complex unit hyper-sphere such that every pair of elements has a cross-correlation coefficient of at most $\lambda$. Such a set is called a (complex) spherical code. We construct the signal set by a method of Successive Updates, whereby one signal is added to the set at each iteration. At each iteration we maximize the norm of the new signal under the constraint that the maximum cross-correlation bound is met. We initialize our design with two orthogonal signals and continue to add signals until the constraints can no longer be met, at which point the dimensionality of the signal set is increased if it is still below N, or the design is terminated. We continue this process until the dimensionality is equal to N and the constraints cannot be met. To compare our signal designs with OMM for noncoherent detection, we consider several different values of the signal dimensionality, N, and cross-correlation coefficient, $\lambda$. Table 1 lists the achieved cardinalities of our signal design along with the Welch bound for each pair, given by [22]

$$M_{\max} = \left\lfloor \frac{N(1 - \lambda^2)}{1 - N\lambda^2} \right\rfloor, \qquad \lambda^2 < \frac{1}{N},$$

where $\lfloor x \rfloor$ is the largest integer not exceeding the real number x. We see that our designs work well for relatively small signal cardinalities. To give an example of the performance difference between OMM and NMM, consider the error curves of Fig. 2. We notice that the gap between the two techniques decreases as the dimensionality of the signal set is increased, with the NMM designs enjoying twice the spectral efficiency of OMM. This trend becomes more pronounced as the dimensionality is increased.
Modulation and Coding for Noncoherent Communications
43
An alternative signal design technique would be to minimize the maximum correlation, max without the inequality bounds and continue until this term exceeds In this case we would exchange the inequality constraints of our previous algorithm for an optimization functional. 5.2. Signal Design for Unequal Energy Constellations On the coherent AWGN channel, it is well known that unequal energy constellations provide a better powerbandwidth trade-off than equi-energy schemes (QAM vs. PSK, for instance) because of the additional degree of freedom. We expect a similar result for the noncoherent channel, where the metric of Eq. (28) is used in the place of Euclidean distance. We have so far considered a simple trial-and-error procedure in which two shells of equal energy signals, and are designed; each shell corresponds to a different energy We fixed the cardinality and energy of shell one, and and varied the outer shell energy, and cardinality, so that the worst-case error exponent within the second shell and the worst-case error exponent between the two shells was roughly the same as that of the first shell. In this manner we constructed a signal set of cardinality that performed better than an equalenergy signal set of the same cardinality and with the same average energy. In Fig. 4, we plot the probability of symbol error for the two-shell scheme with and For comparison, we
demonstrating that with a small expenditure in SNR one can obtain a large increase in spectral efficiency relative to OMM. In Fig. 3 we plot the spectral efficiency of our designs versus the energy efficiency (SNR-perbit required to achieve a probability of bit error of For the NMM designs, we held the dimensionality N fixed and varied the maximum cross-correlation For comparison, we also plot the spectral efficiencies of coherent PAM and QAM modulation as well as the capacity curve for the coherent channel. These results show that we can map out new portions of the energy/spectral efficiency plane through our signal designs, and that NMM can be made more efficient than OMM.
44
McCloud and Varanasi
plot the probability of error for an equal energy constellation of cardinality 69 with the same average energy. We notice that we gain around 1.5 dB in performance over the equal energy scheme through this design. These results are encouraging, leading us to expect substantial gains over equal energy constellations when systematic design techniques are considered. 6.
When the signal constellation does not posses this symmetry, we can still lower bound the capacity through the use of the uniform prior mass to find the mutual information term
Capacity of Channels with Non-Orthogonal Multipulse Modulation
In order to assess the potential of NMM with coding, we extend the result of [6] for OMM to derive the capacity of the noncoherent channel on which NMM signal sets are employed. The spectral efficiency of the NMM channel is found by normalizing the capacity by the minimum bandwidth required to build the signals. This result is illustrated by evaluating the capacity of NMM signals sets of several cardinalities in two complex dimensions and comparing it with that of binary and quaternary OMM. This demonstrates the improvements in bandwidth efficiency which may be hoped for when employing NMM signaling together with coding. When the message symbol, X, is drawn from an M-ary alphabet we define the capacity of the channel in terms of the sufficient statistic, y, as
Averaging over the uniform phase of the measurement as in (12), we find the conditional likelihood functions to be
To simplify the presentation let us define the functions
Then placing the conditional multivariate normal distribution on z,
where is a probability mass function on X and the integral is taken over the domain of definition for the random variable y (here ). When the channel is symmetric, as arises when the signals are equi-correlated, the uniform prior distribution, maximizes the mutual information and we have
we find the following formulation for the NMM capacity:
When the constellation is symmetric we find
The advantage of this formulation is that Monte-Carlo integration techniques may be employed to numerically
determine the channel capacity, even when the multi-dimensional integrals in equations (40) and (41) are infeasible due to large values of N. This is a multi-dimensional generalization of the methods of Ungerboeck [23], which were developed for the coherent channel.

6.1. Examples
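The constellations evaluated in these examples come from the Section 5.1 construction. A simplified numerical sketch is given below, with a random search standing in for the authors' constrained-optimization step (the paper uses the CFSQP solver of [5]); all function names are ours, and the Welch bound is quoted in one standard form, valid when N·ρ² < 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def welch_bound(N, rho):
    # One standard form of the Welch bound on the cardinality M of a set of
    # unit-norm vectors in C^N with maximum cross-correlation rho
    # (valid when N * rho**2 < 1): M <= N (1 - rho^2) / (1 - N rho^2).
    return N * (1 - rho ** 2) / (1 - N * rho ** 2)

def random_unit(N):
    # a uniformly random point on the complex unit hyper-sphere
    v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    return v / np.linalg.norm(v)

def successive_updates(N, rho_max, tries=2000):
    # Greedy spherical-code construction: initialize with two orthogonal
    # signals, then keep any random unit vector whose correlation with every
    # accepted signal stays below rho_max.
    S = [np.eye(N, dtype=complex)[:, 0], np.eye(N, dtype=complex)[:, 1]]
    for _ in range(tries):
        v = random_unit(N)
        if max(abs(np.vdot(s, v)) for s in S) <= rho_max:
            S.append(v)
    return S
```

A fuller implementation would, as in the paper, maximize the norm of each candidate under the correlation constraints and grow the dimensionality when the constraints can no longer be met.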
To quantify the bandwidth efficiencies of the various signal constellations with coding, we define the coded spectral efficiency, SE, to be the channel capacity divided by the minimum bandwidth needed to build the signals. For NMM signals with an N-dimensional basis, the spectral efficiency is given in bits/second/Hertz by
with N = M for OMM. In Fig. 5 we plot the spectral efficiencies of several different sized NMM constellations in two dimensions as well as binary and 4-ary OMM for the AWGN channel. Notice that the 4-ary NMM signal design achieves twice the efficiency of the OMM designs, enjoying the bandwidth of binary OMM with the asymptotic rate of 4-ary OMM. For comparison, we plot the spectral efficiencies of coherent quadriphase shift keying (QPSK) together with that of a 16-ary NMM design (in two
dimensions). The capacity of the QPSK channel was computed as in [23].

7. Convolutional Coding on the Noncoherent AWGN Channel

Several low rate convolutional codes for use with OMM were presented in [7]. The rate R = 1/Q convolutional encoder produces a Q-length sequence of M-ary output symbols for each q-ary input symbol with constraint length K (i.e., each output symbol is a function of the K most recent input symbols). We may view the codes as a mapping with memory from GF(q), the Galois field of cardinality q, to length-Q vectors over GF(M), with GF(q) taken to be a sub-field of GF(M). This vector of symbols is then read out serially and each symbol is mapped to a continuous time waveform by associating the ith member of a set of M waveforms to the corresponding output symbol. The time series of output vectors is related to the input sequence by the convolutional mapping
where each is a 1 × Q vector with each element lying in GF(M) and all arithmetic is done in GF(M). At the decoder a sequence of measurements, is received with corresponding to the lth entry in the nth transmitted code sequence,
where is the energy of the th code symbol and t = nQ + l. We consider asymptotically optimal decoding with decisions
where is the set of allowable code sequences and we have assumed that each sequence is composed of P length-Q blocks. For the special case of equal energy code symbols ( is constant) we have the square law decoder:
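For a small codebook the square-law rule can be implemented directly by exhaustive search before turning to the trellis search described next; a sketch, with hypothetical names of our own:

```python
import numpy as np

def square_law_metric(Y, S, codeword):
    # Square-law sequence metric for equal-energy code symbols:
    # sum over code positions l of |<s_{c_l}, y_l>|^2.
    return sum(abs(np.vdot(S[:, c], y)) ** 2 for y, c in zip(Y, codeword))

def square_law_decode(Y, S, codebook):
    # Exhaustive maximization of the square-law sequence metric over the
    # allowable code sequences.
    return max(codebook, key=lambda c: square_law_metric(Y, S, c))
```

With noiseless measurements and an unknown phase per symbol, the rule recovers the transmitted codeword, since the magnitude-squared correlations are insensitive to the phase.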
This decoding can be performed in the usual way via a maximum likelihood trellis search, typically with the Viterbi algorithm. The metric for a branch which leaves state p at time n and enters state j at time n + 1 is given by
where is the tth code symbol corresponding to the Q-length codeword associated with the transition from state p to state j. The survivor path metric at state j and at time n is given by
where is the set of states which can transition to state j in one step. A block diagram of the encoder is shown in Fig. 6, where we assume that each transmit sequence ends with an all zero sequence to terminate the trellis.
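The survivor recursion described above can be realized generically. The sketch below is our own minimal version, with an arbitrary branch-metric callback standing in for the square-law metric; it is verified against a brute-force path search:

```python
import numpy as np

def viterbi(n_states, preds, branch_metric, T, start=0):
    # Survivor recursion: J_{n+1}(j) = max over p in preds[j] of
    # J_n(p) + lambda_n(p, j), followed by a traceback of the best path.
    J = np.full(n_states, -np.inf)
    J[start] = 0.0
    back = np.zeros((T, n_states), dtype=int)
    for n in range(T):
        Jn = np.full(n_states, -np.inf)
        for j in range(n_states):
            for p in preds[j]:
                m = J[p] + branch_metric(n, p, j)
                if m > Jn[j]:
                    Jn[j], back[n, j] = m, p
        J = Jn
    # trace back the survivor from the best terminal state
    j = int(np.argmax(J))
    path = [j]
    for n in range(T - 1, -1, -1):
        j = int(back[n, j])
        path.append(j)
    return float(J.max()), path[::-1]
```

In the decoder of the text, `branch_metric(n, p, j)` would be the square-law metric of the Q-length codeword associated with the transition from state p to state j at time n.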
The authors of [7] developed several codes for q = 2 (binary to M-ary) and q = M (M-ary to M-ary) which are optimal with respect to the information spectrum (they actually used a truncated spectrum, keeping the first four terms). That is, they have the largest possible free distance and the optimal distribution of information symbols associated with remergent trellis paths of each Hamming weight. They considered performance on the OMM channel. In light of our development of complex spherical codes, we can extend the results of [7] by replacing the orthogonal waveforms in their development by M correlated waveforms in a low dimensional subspace. This allows us to increase the bandwidth efficiency of the convolutionally coded systems of [7] by a factor of M/N. It is of interest to obtain a simple characterization of the increase in SNR/bit needed to obtain the same performance with OMM and the spectrally more efficient NMM scheme. We assume that the signals have equal cross-correlation and equal energies. This appears to be well justified from our numerical simulations, and even when this is not true, the corresponding expressions act as an upper bound whenever the corresponding inequality holds for all m and l. Proceeding as in [7], we employ the transfer function bound on the bit error probability
where k = 1 for binary-to-M-ary codes and for the M-ary codes, and is the number of detours from the all-zero code sequence with Hamming weight. The two-codeword error probabilities with Hamming distance are derived below for the AWGN channel. Let and be two code sequences which differ in the positions. Assuming that codeword was transmitted, the decoder chooses over whenever
where is the symbol corresponding to the th position of the codeword. The following theorem (a generalization of Theorem 4.1) gives the probability of error for the difference of quadratic forms which obey this general model.
Theorem 7.1. Let the quadratic form D be given by

where the pairs are mutually statistically independent complex normal random variables with means and variances and equal magnitude cross-correlations. Then the probability that D < 0 is given by

Proof: See [18, Appendix B].

For our problem we have the relevant parameter values. Employing Theorem 7.1 with these values we find

where the term is the coded signal-to-noise ratio, is the uncoded energy per symbol, and k is defined as in (54).

Theorem 7.2. Let the error probability associated with a quadratic form becoming negative be of the form

where a, b, and are defined as in Theorem 7.1. Then, under the stated assumption, as the SNR grows large the probability approaches the exponential form

Proof: We employ the asymptotic relations

as a and b grow large, where we have kept only the exponential dependencies of the functions using (see e.g. [17, 24]):

Here Q(x) denotes the standard normal cumulative distribution function

keeping only the exponential term. When these expressions are substituted into (58) and the exponential dependency is isolated, the theorem follows.

With the help of Theorem 7.2 we find that at large SNRs the two-codeword error probabilities approach
The corresponding transfer function bound of (54) approaches the form
keeping only the dominant exponent. This result demonstrates the importance of the free distance in designing convolutional codes for use with NMM modulation, at least asymptotically. 7.1.
Numerical Results

In Fig. 7 we show the spectral efficiencies and energy efficiencies (where is the uncoded symbol energy) of the M = 16-ary convolutional codes of [7] of rates R = 1/2 and R = 1/3 for various values of the dimensionality N. The constraint length is K = 3 for each code and the generator matrices are given by

where the symbol denotes a root of the defining polynomial of GF(16). The signal constellations were designed as in Section 5.1, with the smallest value of the cross-correlation needed to achieve the desired cardinality (M = 16 for these examples) employed for each value of N. The energy efficiencies were calculated via the bound of Eq. (54) and are hence conservative. A key attribute of these plots is that it is possible to increase the spectral efficiency of these codes significantly with a small loss in energy efficiency. The potential gains grow with dimensionality, as the required cross-correlation grows small quickly as the cardinality increases. We also notice that the codes which were designed for OMM are also good for NMM. The signal design problem does not change when coding is introduced, which justifies the use of signals designed in the context of uncoded NMM.

8. Extensions to the Rayleigh Fading Channel
In this section we outline the extensions to noncoherent communication on the single-antenna Rayleigh fading channel. In wireless communication the channel often has a randomly time-varying impulse response due to atmospheric effects. This can also be caused by the motion of the transmitter and/or receiver in a mobile system in which the physical surroundings of the communicator are varying. A useful model for such effects is the multipath fading channel, wherein the transmission of the signal s(t) results in the reception of the signal

In general the path parameters are randomly time-varying. This process is shown in Fig. 8. For ionospheric or tropospheric propagation we invoke the Central Limit Theorem [25] to simplify our model to

where is a complex normal random variable for each t. We have assumed that the time delays are small relative to the support of the signal. We will assume that the time-varying fading process is constant in one signaling interval but can change significantly over a few intervals. This leads to the following model for the received signal:

The fading amplitude is a complex normal random variable with a specified power. We assume that the signals have equal energy, normalized for each m; the symbol energy is absorbed into the fading parameter. Conditioned on the transmission of signal the vector y is distributed as a complex normal random vector with mean zero and correlation matrix

The measurement has the likelihood function

and the maximum likelihood detector (assuming equal prior probabilities) is

The pairwise probability of error of the above square law detector is given by
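For the special case of binary orthogonal signals this pairwise error probability has the well-known closed form 1/(2 + SNR) (see, e.g., [18]), which a direct simulation reproduces. A quick sketch, with names of our own and the noise power normalized to one:

```python
import numpy as np

rng = np.random.default_rng(3)

def cn(n):
    # iid CN(0, 1) samples
    return np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def sim_pe(snr_db, n=200_000):
    # Square-law detection of binary orthogonal signals on the Rayleigh
    # channel: branch 1 carries the faded signal, branch 2 only noise;
    # an error occurs when the noise-only branch wins.
    snr = 10 ** (snr_db / 10)
    y1 = np.sqrt(snr) * cn(n) + cn(n)
    y2 = cn(n)
    return float(np.mean(np.abs(y2) ** 2 > np.abs(y1) ** 2))

def pe_theory(snr_db):
    # closed form for binary orthogonal noncoherent signaling on Rayleigh
    return 1.0 / (2.0 + 10 ** (snr_db / 10))
```

The simulated and closed-form values agree to within Monte-Carlo error, and both decay only inversely with SNR, the hallmark of unprotected Rayleigh fading.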
To analyze this error probability we may use Theorem 4.1 with the corresponding parameter values; we find that
where
where the constants follow from the theorem. Notice that for orthogonal signals the cross-correlation vanishes and the corresponding error probability is simply
which is the usual probability of error for binary orthogonal signaling on the Rayleigh fading channel (see e.g. [18]). To determine the asymptotic behavior of the error probability for the general NMM case, we will expand the square root terms which appear in and in a Taylor series around to find
Substituting the resulting expressions into (75) we find
which has the same form as the error probability for orthogonal signaling, with the SNR modified by the cross-correlation factor. Since the probability of error depends only on the cross-correlation between the signals, we may employ the successive updates algorithm developed in Section 5 to design signals for the Rayleigh channel without modification.

8.1. Capacity of the Rayleigh Fading NMM Channel
We proceed exactly as we did for the AWGN channel. With the likelihood function specified in (71), and defining as in (43), we find
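As on the AWGN channel, the resulting capacity is evaluated by Monte-Carlo integration in the style of Ungerboeck [23]. The method can be illustrated on a simple scalar stand-in channel (coherent M-PSK in Gaussian noise — our simplification; the paper evaluates its NMM likelihoods the same way):

```python
import numpy as np

rng = np.random.default_rng(1)

def mi_mpsk(M, snr_db, n=20000):
    # I(X;Y) = log2 M - E[ log2 sum_x' p(y|x') / p(y|x) ] for a uniform
    # prior, estimated by averaging over simulated channel uses.
    s = np.exp(2j * np.pi * np.arange(M) / M)      # unit-energy constellation
    sigma2 = 10 ** (-snr_db / 10)                  # noise power for Es = 1
    x = rng.integers(M, size=n)
    w = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = s[x] + w
    # log-likelihoods log p(y|x') up to a common additive constant
    ll = -np.abs(y[:, None] - s[None, :]) ** 2 / sigma2
    ratio = np.exp(ll - ll[np.arange(n), x][:, None]).sum(axis=1)
    return np.log2(M) - np.log2(ratio).mean()
```

Because only likelihood ratios enter, the normalizing constants of the densities cancel, which is what makes the approach practical when the multi-dimensional integrals themselves are infeasible.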
8.2. Examples
We plot the spectral efficiency curve for the 4-ary NMM signal design together with those of binary and 4-ary OMM for the Rayleigh fading channel in Fig. 9. Notice that we see a dramatic improvement for the correlated signal design over the orthogonal designs, exactly as we saw on the AWGN channel in Section 6. 8.3.
Convolutional Coding on the Rayleigh Fading Channel

We may use the coding techniques developed in Section 7 for the Rayleigh channel. We assume that the coded data is interleaved prior to transmission and deinterleaved before decoding, so that the fading may be considered independent from symbol to symbol. With this assumption, the only difference from the AWGN channel is the error analysis. We will further assume that the signals have equal cross-correlation magnitude. When square-law detection is performed on the Rayleigh fading channel we again label the positions in which the codewords differ. To find the error probability, we may use Theorem 7.1, with the average energy in the coded symbol entering the parameters. We find that

where

and. To gain insight into this expression we consider the behavior of the error probability at large SNRs. We may use the Taylor series expansions from (78). Introducing the reciprocal coded SNR, we have
The error probability approaches a polynomial in the reciprocal SNR, with leading coefficient of the indicated order. Similarly, the transfer function bound of (54) approaches the form
which is a polynomial whose order is given by the free distance; we therefore refer to the free distance as the diversity order of the convolutional code. Notice that diversity is often achieved through either spatial separation of sensors or frequency division multiplexing (see e.g. [18]), while here we employ code-diversity to combat the Rayleigh fading channel.

8.3.1. Numerical Results. In Fig. 10 we plot the spectral and energy efficiencies of the 16-ary convolutional codes employed in Section 7.1 on the Rayleigh fading channel. We notice that the curves have the same general shape as observed for the AWGN channel in Fig. 7, demonstrating that we may effectively trade energy efficiency for spectral efficiency. Notice also that the performance loss due to the fading is substantial when compared with the AWGN channel. This motivates the extension of the results to multiple transmit and receive antennas, work which has been carried out in [9].

9. Equalization

We now consider the problem of noncoherent signaling over a dispersive channel. We will assume that the channel response has been determined and develop a zero-forcing equalizer to combat the ensuing inter-symbol interference. These results were originally developed for the invertible channel in [3] and were extended to the non-invertible channel in [12, 13].
9.1. MIMO System Model
In this section we develop a multiple-input multiple output (MIMO) model for the ISI channel with NMM signaling. We consider communication over a bandpass channel on which the user transmits one of M waveforms every T seconds corresponding to one of M possible information symbols. The waveforms then pass through a channel to yield the composite normalized low-pass equivalent waveforms, corresponding to the symbols transmitted at time index k. We obtain the following low-pass equivalent model for the ISI channel:
where is the complex amplitude associated with the passband composite waveform for the th signal in the kth signaling interval, and N (t) is a complex white Gaussian noise process with power To form a set of sufficient statistics for the detection problem we match the received signal, r(t), against each of the possible transmitted signals to form the matched filter outputs
We collect sets of M of these scalar measurements to form the vector with the MIMO model
where is the mth column of the identity matrix and n(k) is a wide-sense stationary, multivariate, Gaussian random process with correlation sequence and power spectral density. The MIMO channel coefficients, H(k), are terms in the matrix-valued deterministic channel-correlation sequence
and form a positive semidefinite sequence whenever each continuous time waveform satisfies the energy constraint. This process is shown in Fig. 11, together with the equalizer and decision branches of the receiver.

9.2. Equalization of the MIMO System
We will develop a zero-forcing equalizer for use on the noncoherent channel. We caution that decision-feedback equalization of the sort developed for the baseband channel in [26] cannot be employed on this channel, since the phase and possibly amplitude uncertainty at the receiver prohibit the reconstruction of past signals. When the MIMO filter, {H(n)}, is invertible we can completely remove the ISI by applying the inverse filter, defined as in [3], to obtain the ISI-free output sequence {z(n)}:
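The inverse-filter step can be sketched numerically. The toy below uses a hypothetical 2-tap, 2x2 channel in a block-Toeplitz formulation rather than the D-transform notation of [3], and is noiseless, so zero-forcing recovery is exact:

```python
import numpy as np

rng = np.random.default_rng(4)

def block_conv_matrix(H, K):
    # Block-Toeplitz matrix of a MIMO FIR channel with taps H[0..L-1]
    # (each M x M), acting on K stacked length-M input blocks.
    L, M, _ = H.shape
    T = np.zeros(((K + L - 1) * M, K * M), dtype=complex)
    for k in range(K):
        for l in range(L):
            T[(k + l) * M:(k + l + 1) * M, k * M:(k + 1) * M] = H[l]
    return T

# a random (almost surely invertible) 2-tap 2x2 channel and 5 input blocks
H = rng.standard_normal((2, 2, 2)) + 1j * rng.standard_normal((2, 2, 2))
x = rng.standard_normal(10) + 1j * rng.standard_normal(10)
T = block_conv_matrix(H, 5)
y = T @ x                          # received sequence with ISI (no noise)
x_hat = np.linalg.pinv(T) @ y      # zero-forcing (pseudo)inverse filtering
```

With noise present, the same inverse colors the noise; the whitening step below (the positive definite square root of Q(0)) restores the one-shot model of Section 3.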
The noise vector w(n) has the MIMO correlation sequence The resulting post-equalizer detector will make decisions on each output independently. In order to match the post-equalizer model with that of Section 3 consider the equivalent statistic where is a positive definite square root of the matrix Q(0). The model becomes
where q is a zero-mean Gaussian random vector with correlation and we have suppressed the time dependency since we are performing one-shot detection. Our model now matches that of Sections 3 and 4 (and Section 8 in the case of a Rayleigh fading channel) and the detectors derived there may be employed.

9.3. Numerical Example
We consider a system in which 4-ary FSK signaling is employed over a multipath channel with 3 discrete multi-paths. The channel impulse response is given by
with N = 3 paths described by the coefficients {…, 0.190 + 0.272j, −0.133 + 0.237j}, and T = 1/9600 s. For this example, the equivalent length-3 FIR discrete-time MIMO channel is invertible, and the resulting probability of error plots for the AO and the GML detectors
are shown in Fig. 12. Notice that for this example the AO detector outperforms the GML detector by about 5 dB. This is due to the large difference in energies in the signals after equalization. For comparison, we also plot the probability of error for the system with conventional noncoherent envelope detection and no equalization. For a given realization of the phase terms, we compute the union bound on the probability of error for this detector as in Section 4. A Monte-Carlo average of the resulting union bound is taken over the uniform distribution of the phase terms. It is clear that ignoring the effects of the ISI is catastrophic for this channel.

10. Summary
We have presented a comprehensive development of NMM signaling on the noncoherent AWGN channel including detection, signal design, coding, and equalization. Several examples were included which indicated the performance advantages enjoyed by NMM over orthogonal modulation schemes such as FSK with respect to bandwidth efficiency. We presented an outline of the extension of these ideas to the Rayleigh fading channel. There is also a rich multi-user detection theory of NMM on the noncoherent channel which has been developed in [1, 2, 27–33].

References

1. M.K. Varanasi and A. Russ, “Noncoherent Decorrelative Detection for Nonorthogonal Multipulse Over the Multiuser Gaussian
Channel,” IEEE Trans. Commun., vol. 46, no. 12, 1998, pp. 1675–1684. 2. M.L. McCloud and L.L. Scharf, “Interference Estimation with Applications to Blind Multiple-Access Communication Over Fading Channels,” IEEE Trans. Inform. Theory, vol. 46, no. 3, 2000, pp. 947–961. 3. M.K. Varanasi, “Noncoherent Equalization for Multipulse Modulation,” in Proc. IEEE Personal, Indoor and Mobile Communications Conf., Boston, MA, Sept. 1998. 4. M.K. Varanasi and M.L. McCloud, “Complex Spherical Modulation for Non-Coherent Communications,” in Proc. IEEE Intl. Symposium on Information Theory, Sorrento, Italy, June 2000, p. 163. 5. C.T. Lawrence, J.L. Zhou, and A.L. Tits, User’s Guide for CFSQP Version 2.5: A C Code for Solving (Large Scale) Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality Constraints, College Park, MD: University of Maryland, 1997. 6. W.E. Stark, “Capacity and Cutoff Rate of Noncoherent FSK with Nonselective Rician Fading,” IEEE Trans. Commun., vol. 33, no. 11, 1985, pp. 1153–1159. 7. W.E. Ryan and S.G. Wilson, “Two Classes of Convolutional Codes Over GF(q) for q-ary Orthogonal Signaling,” IEEE Trans. Commun., vol. 39, no. 1, 1991, pp. 30–40. 8. M. Brehler and M.K. Varanasi, “Asymptotic Error Probability Analysis of Quadratic Receivers in Rayleigh Fading Channels with Applications to a Unified Analysis of Coherent and Noncoherent Space-Time Receivers,” IEEE Trans. Inform. Theory, vol. 47, no. 5, 2001, pp. 2383–2399. 9. M.L. McCloud, M. Brehler, and M.K. Varanasi, “Signal Design and Convolutional Coding for Noncoherent Space-Time Communication on the Rayleigh Fading Channel,” IEEE Trans. Inform. Theory, submitted. 10. E.A. Lee and D.G. Messerschmitt, Digital Communication, 2nd edn., Boston, MA: Kluwer Academic Publishers, 1994. 11. M. Varanasi, “Equalization for Multipulse Modulation,” in Proc. 1997 IEEE Intl. Conf. on Personal Wireless Communications (ICPWC’97), Mumbai (Bombay), India, Dec. 1997, pp. 48–51. 12. M.
McCloud and M. Varanasi, “Noncoherent Zero Forcing Equalization for Multipulse Modulation,” in Proc. Conf. Inform. Sciences and Systems, Princeton, NJ, March 2000, Princeton University, pp. FP2.1–FP2.6. 13. M.L. McCloud and M. Varanasi, “Noncoherent Zero Forcing and Decision Feedback Equalization for Multipulse Modulation,” IEEE Trans. Inform. Theory, submitted. 14. H.V. Poor, Introduction to Signal Detection and Estimation, New York: Springer-Verlag, 1994. 15. L.L. Scharf, Statistical Signal Processing, Reading, MA: Addison-Wesley, July 1991. 16. M.K. Varanasi and A. Russ, “Noncoherent Decorrelative Multiuser Detection for Nonlinear Nonorthogonal Modulation,” in Proc. IEEE Intl. Conf. on Communications, Montreal, Canada, June 1997, pp. 919–923. 17. I.N. Bronstein and K.A. Semendyayev, Handbook of Mathematics, 3rd edn., Thun and Frankfurt/Main: Verlag Harri Deutsch, 1985. 18. J.G. Proakis, Digital Communications, 3rd edn., New York: McGraw-Hill, 1995. 19. M.K. Varanasi, “Noncoherent Detection in Asynchronous Multiuser Channels,” IEEE Trans. Inform. Theory, vol. 39, no. 1,
1993, pp. 157–176. 20. R. Pawula, “Relations Between the Rice Ie-Function and the Marcum Q-Function with Applications to Error Rate Calculations,” Electronics Letters, vol. 31, no. 24, 1995, pp. 2078–2080. 21. I.S. Gradshteyn and I.M. Ryzhik, Table of Integrals, Series, and Products, 4th edn., New York: Academic Press, 1965. 22. L. Welch, “Lower Bounds on the Maximum Cross Correlation of Signals,” IEEE Trans. Inform. Theory, vol. 20, no. 3, 1974, pp. 397–399. 23. G. Ungerboeck, “Channel Coding with Amplitude/Phase Signals,” IEEE Trans. Inform. Theory, vol. 28, no. 1, 1982, pp. 55–65. 24. M. Schwartz, W.R. Bennett, and S. Stein, Communication Systems and Techniques, reissue, New York: IEEE Press, 1996 (originally a McGraw-Hill publication). 25. M. Loeve, Probability Theory, New York, NY: Springer-Verlag, 1977. 26. A. Duel-Hallen, “Equalizers for Multiple Input/Multiple Output Channels and PAM Systems with Cyclostationary Input Sequences,” IEEE Journal on Selected Areas in Commun., vol. 10, no. 3, 1992, pp. 630–639. 27. M.K. Varanasi and D. Das, “Noncoherent Decision Feedback Multiuser Detection: Optimality, Performance Bounds, and Rules for Ordering Users,” in Proc. IEEE Intl. Symposium on Information Theory, Aug. 1998, p. 35. 28. M.K. Varanasi and D. Das, “Noncoherent Decision Feedback Multiuser Detection,” IEEE Trans. Commun., vol. 48, no. 2, 2000, pp. 259–269. 29. M.L. McCloud and L.L. Scharf, “Asymptotic Analysis of the MMSE Multiuser Detector for Non-Orthogonal Multipulse Modulation,” IEEE Trans. Commun., vol. 49, no. 1, 2001, pp. 24–30. 30. A. Kapur, D. Das, and M.K. Varanasi, “Noncoherent MMSE Multiuser Receivers for Non-Orthogonal Multipulse Modulation and Blind Adaptive Algorithms,” IEEE Trans. Commun., to appear. 31. A. Russ and M.K. Varanasi, “Noncoherent Multiuser Detection for Nonlinear Modulation Over the Rayleigh Fading Channel,” IEEE Trans. Inform. Theory, vol. 47, no. 1, 2001, pp. 295–307. 32. E. Visotsky and U.
Madhow, “Noncoherent Multiuser Detection for CDMA Systems with Nonlinear Modulation: A Non-Bayesian Approach,” IEEE Trans. Inform. Theory, vol. 47, no. 4, 2001, pp. 1352–1367. 33. A. Russ and M.K. Varanasi, “An Error Probability Analysis of the Optimum Noncoherent Multiuser Detector for Nonorthogonal Multipulse Modulation Over the Frequency-Selective Rayleigh Fading Channel,” IEEE Trans. Commun., submitted.
Michael McCloud received the B.S. degree in Electrical Engineering from George Mason University, Fairfax, VA in 1995 and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Colorado in 1998 and 2000, respectively. He spent the 2000–2001 academic year as a Visiting Researcher and Lecturer at the University of Colorado. He has been a Senior Engineer with Magis Networks, San Diego, CA since May 2001. His research interests include statistical signal processing and wireless communication theory.
[email protected]
Mahesh K. Varanasi received his B.E. degree in Electronics and Communications Engineering from Osmania University, Hyderabad, India in 1984 and the M.S. and Ph.D. degrees in Electrical Engineering from Rice University, Houston, TX, in 1987 and 1989, respectively. In 1989, he joined the faculty of the University of Colorado at Boulder where he is now Professor of Electrical and Computer Engineering. His teaching and research interests are in the areas of communication theory, information theory and signal processing. His research contributions have been in the areas of multiuser detection, space-time communications, equalization, signal design and power control for multiple-access, fading channels and diversity systems, blind receivers, and power and bandwidth-efficient multiuser communications. He was elected Senior Member of IEEE in 1995.
[email protected]
Journal of VLSI Signal Processing 30, 55–69, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Multiple Antenna Enhancements for a High Rate CDMA Packet Data System HOWARD HUANG, HARISH VISWANATHAN, ANDREW BLANKSBY AND MOHAMED A. HALEEM Lucent Technologies, 791 Holmdel-Keyport Rd., Holmdel, NJ 07733, USA Received September 25, 2000; Revised November 14, 2001
Abstract. A High Data Rate (HDR) system has been proposed for providing downlink wireless packet service by using a channel-aware scheduling algorithm to transmit to users in a time-division multiplexed manner. In this paper, we propose using multiple antennas at the transmitter and/or at the receiver to improve performance of an HDR system. We consider the design tradeoffs between scheduling and multi-antenna transmission/detection strategies and investigate the average Shannon capacity throughput as a function of the number of antennas assuming ideal channel estimates and rate feedback. The highest capacities are achieved using multiple antennas at both the transmitter and receiver. For such systems, the best performance is achieved using a multi-input multi-output capacity-achieving transmission scheme such as BLAST (Bell Labs Layered Space-Time) in which the transmitted signal is coded in space and time, and the receive antennas are used to resolve the spatial interference. In the second part of the paper, we discuss practical transmitter and receiver architectures using BLAST for approaching the theoretical gains promised by the capacity analysis. Because the terminal receivers will be portable devices with limited computational and battery power, we perform a computational complexity analysis of the receiver and make high-level assessments on its feasibility. We conclude that the overall computational requirements are within the reach of current hardware technology.

Keywords: BLAST, high data rate, CDMA, multiple antennas

1. Introduction
As the demand for wireless packet data services increases and the availability of radio spectrum becomes more scarce, communication engineers face the challenge of designing systems which are increasingly efficient in their spectrum use and which are tailored to address the characteristics of packet data services. By exploiting the spatial domain, diversity techniques using antenna arrays are known to provide improved performance compared to conventional single antenna systems. A recent innovation known as BLAST (Bell Labs Layered Space Time) [1] uses arrays at both the transmitter and receiver, providing potentially enormous gains compared to diversity systems. What follows is a brief overview of multiple antenna diversity systems, BLAST, and the design challenges of wireless
communication systems with antenna arrays. We then discuss a high data rate (HDR) system [2] designed specifically for wireless packet data services and provide motivation for a combined HDR-BLAST system. In wireless communication systems, the transmitter and/or receiver are often surrounded by objects such as buildings, trees, pedestrians, and cars which scatter and attenuate the transmitted signal. The scattered signals arrive at the receiver and, depending on their relative phases, add constructively or destructively. Subtle movements of the objects, the transmitter, or the receiver can cause wide variations in the phases, resulting in a received signal whose amplitude varies in time. The channel through which the signal travels is known as a time-varying fading channel, and it presents one of the major challenges in wireless communication system design. A well-known technique for combating fading
channels is diversity. There are many types of diversity; however, the overall concept is that the receiver capitalizes on a signal which traverses multiple independent realizations of the fading channel. For example, using transmit diversity, multiple transmit antennas are used to transmit the same data. Compared to a single transmitter system transmitting with the same total power, this system has an advantage since it is unlikely that the signals associated with each of the antennas will fade simultaneously. A single transmit antenna and multiple receive antennas result in similar gains due to receive diversity. However, if signals are coherently combined among the antennas, there is the additional benefit of increased signal-to-noise ratio (SNR). The increase in average SNR is directly proportional to the number of receive antennas. As the number of transmit and receive antennas increases, the post-combiner SNR increases, but the gains due to diversity saturate so that the only channel impairment becomes additive Gaussian noise. In this paper, we will use Shannon capacity as a measure of the link and system performance. The Shannon capacity of a communication link, measured in bits per second per Hertz, is the theoretical limit of information that can be transmitted and reliably decoded at the receiver [3]. For a system with transmit and receive diversity, the capacity increases logarithmically as the number of receive antennas increases. For example, in a diversity system with 2 transmit and 2 receive antennas at 15 dB average SNR, the average capacity is about 5.80 bps/Hz. Doubling the number of receive antennas results in a capacity increase of about one bps/Hz to 6.91 bps/Hz. By further doubling the number of transmit antennas, the capacity increases only slightly to 6.94 bps/Hz because the diversity gains are already near saturation.
While diversity systems use the spatial dimension to increase the capacity through improved link SNR, recent information theory results show that significantly larger capacity gains are achievable if the spatial dimension is used differently. In particular, the theory tells us that the capacity can increase linearly with respect to the number of transmitters or receivers (whichever is lower). For example, a system with 2 transmit and 2 receive antennas at 20 dB can achieve an average capacity of about 8.28 bps/Hz. In doubling both the number of transmit and receive antennas, the average capacity becomes 16.23 bps/Hz. Bell-Labs Layered Space Time (BLAST) [1] is an architecture for achieving a significant fraction of the potential capacity gains of
these multiple antenna systems. In contrast to diversity systems where the same data is sent through multiple antennas, BLAST transmits different data streams through the antennas and, in its most general form, uses coding to introduce redundancy in both space and time. The signals share the same frequency band, but they can be resolved at the receiver by using multiple antennas and by relying on the distinct spatial signatures induced by the fading channel in a rich scattering environment. BLAST is a promising technology with potential applications to a number of wireless multiple access systems. In [4], the authors studied a time-division multiple access (TDMA) system employing BLAST techniques, fixed power, rate adaptation, and capacity-approaching coding. A related study [5] investigates the use of specific modulation formats. In [6], the authors apply BLAST and diversity techniques to a code-division multiple access (CDMA) system and evaluate the system capacity in terms of users per sector supported at a given data rate and bit error rate. In the context of CDMA, BLAST uses the same spreading code to transmit independent data from different antennas. Hence each code can be ‘reused’ up to M times, where M is the number of transmit antennas. These traditional TDMA and CDMA cellular systems have been designed for voice traffic and are characterized by low tolerance for latency and by equal rate service over the entire system (except perhaps for a few areas of outage where the minimum SNR requirement is not met). In future wireless systems, there will be demands for non-real-time data applications such as email and web browsing with data rates significantly higher than those associated with voice service.
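The code reuse idea can be made concrete with a small numerical sketch (ours, not the paper's: flat fading, no noise, perfect channel knowledge at the receiver, and direct matrix inversion in place of a true BLAST detector):

```python
import numpy as np

rng = np.random.default_rng(0)

# M = 2 transmit antennas reuse ONE length-8 Walsh cover for independent QPSK symbols.
walsh = np.array([1, -1, 1, -1, 1, -1, 1, -1])
symbols = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)   # one symbol per transmit antenna
chips = np.outer(symbols, walsh)                     # (M, C): both antennas spread with the same code

# Flat Rayleigh channel from M = 2 transmit to N = 2 receive antennas.
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
received = H @ chips                                 # (N, C) noiseless received chips

despread = received @ walsh / len(walsh)             # per-receive-antenna despreader output
recovered = np.linalg.solve(H, despread)             # undo the spatial mixing

print(np.allclose(recovered, symbols))               # True: code-sharing streams separated spatially
```

With noise, the inversion would be replaced by a V-BLAST style detector; the point here is only that the distinct spatial signatures make the code-sharing streams resolvable.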
Recognizing that these data applications have a much higher tolerance for latency, the authors of [2] designed a time division multiplexed high data rate (HDR) system where each base station transmits to a single user at a time and where the data rate depends on the link quality. In theory, to maximize the total throughput, the base station would transmit only to those users with the best link quality at any given time. These users are typically near the base station at the cell center. However, in practice, to ensure fairness to users at the cell edge, the base uses a scheduling algorithm to transmit to a user when its time-varying fading channel is in some sense better than its average channel [7]. Figure 1 shows the block diagram of the HDR transmitter. The downlink data stream is serial-concatenated coded, and the output bits are scrambled and mapped
to either a QPSK, 8-PSK, or 16 QAM constellation. These modulated symbols are channel interleaved and punctured and/or repeated as necessary. They are then demultiplexed into 16 substreams and modulated with mutually orthogonal Walsh covers. During each transmission slot, a pilot signal is time-division multiplexed with the data traffic signal to allow the terminals to make channel estimates and measurements. A pseudonoise (PN) sequence modulates the resulting sum of data substreams or the pilot signal. The signal is then filtered, modulated by the carrier, and transmitted. In this conventional HDR system, the data rate is varied using a combination of symbol repetition, variable coding rates, and variable data constellation sizes. Multiple antennas provide additional options such as transmit diversity for improving the link performance and BLAST transmission for increasing the maximum data rate via code reuse [8]. In this paper, we evaluate the system performance of an HDR system with multiple antennas in an idealized setting using Shannon capacity as a link metric and perfect scheduling at the base station. While such idealized results may be difficult to achieve in practice, it is of interest to study trends in the nature of improvements that are possible with multiple antennas.
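The covering and scrambling steps just described can be sketched numerically (a toy model we constructed, omitting coding, interleaving, puncturing, pulse shaping, and carrier modulation; all chip and symbol values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build the 16 length-16 Walsh covers as columns of a Sylvester Hadamard matrix.
C = 16
W = np.array([[1]])
for _ in range(4):
    W = np.kron(np.array([[1, 1], [1, -1]]), W)      # (16, 16); columns mutually orthogonal

# One QPSK symbol per substream (16 substreams, as in the HDR forward link).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = rng.choice(qpsk, size=C)

chips = W @ symbols                                  # sum of the 16 Walsh-covered substreams
pn = rng.choice(qpsk, size=C) * np.sqrt(2)           # PN scrambling chips with components +/-1 +/- j
tx = chips * pn                                      # scrambled chip stream for one symbol period

# Receiver side: descramble with the conjugate PN, then despread with each Walsh cover.
descrambled = tx * np.conj(pn) / 2                   # conj(p) * p = 2 for +/-1 +/- j chips
recovered = W.T @ descrambled / C

print(np.allclose(recovered, symbols))               # True: all 16 substreams recovered
```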
We evaluate both diversity systems and BLAST type systems, and we use a scheduling algorithm given in [7] to ensure fairness, assuming perfect and instantaneous SNR knowledge. As expected, the system gains attained using BLAST are significant; however, these gains can be realized only if the mobile receiver possesses sufficient processing power for BLAST signal detection. In the second part of the paper, we discuss an architecture for such a BLAST receiver and perform a high-level complexity analysis to assess its feasibility using current hardware technology. Because of the similarities between the transmitted signals for the HDR-BLAST and CDMA-BLAST systems, we note that this general receiver architecture can be used for either system. The paper is organized as follows. In Section 2 we present the various issues concerning multiple antennas in a system with efficient scheduling that motivate much of the study carried out in this paper. In Section 3, we present the link level calculations for determining the achievable Shannon capacity versus received SNR and use these results with system level simulations to determine the system throughput using the various antenna architectures. In Section 4, we propose a practical system architecture using BLAST for
approaching the predicted capacities. Because the terminal receivers will be portable devices with limited computational and battery power, we perform a computational complexity analysis of the receiver and make high-level assessments on its feasibility.

2. Multiple Antennas and Scheduling
Using multiple antennas with HDR provides three advantages over conventional HDR. First, using multiple receive antennas, the gains from antenna combining reduce the required power for achieving a given rate. Alternatively, one can achieve higher rates using the same power. Second, if multiple antennas are available at both the transmitter and receiver so that BLAST transmission can be used, the maximum achievable data rate is M times the rate achievable with a single transmit antenna (where M is the number of transmit antennas). Higher peak throughputs imply not only better average throughputs but also better throughput-delay characteristics. Third, with BLAST transmission, some intermediate data rates can be achieved with a combination of BLAST and small data constellations. Compared to a single-antenna transmission scheme that uses a larger constellation to achieve the same rate, the BLAST technique may have a smaller required SNR, resulting in overall improved system performance. In the HDR system, the base station serves multiple users in a time-division multiplexed manner and uses a scheduling algorithm to ensure fairness to users at the cell edge. Pilot bursts are embedded in each slot transmission to allow mobile terminals to measure the SNR of the strongest base’s signal. This value is mapped to a data rate corresponding to the maximum rate at which the mobile can reliably demodulate the signal. The data rate value is transmitted from the mobile to the base as often as once every 1.67 ms. Because of the high frequency of the data rate updates, the scheduling algorithm can take advantage of favorable channel fades for each of the users. By transmitting to users when their channel is favorable, the scheduler provides a form of multiuser diversity.
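The scheduling behavior described above can be sketched as a proportional-fair rule; all parameter values below (user count, averaging window, SNR spread) are our illustrative assumptions, not HDR parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

K, slots, window = 8, 2000, 100                  # users, scheduling slots, averaging window
mean_snr = 10 ** (rng.uniform(0, 15, K) / 10)    # per-user average SNR spread over 0-15 dB
avg_rate = np.full(K, 1e-3)                      # running average throughput per user (small start)
served = np.zeros(K)

for _ in range(slots):
    snr = mean_snr * rng.exponential(1.0, K)     # Rayleigh fading -> exponential power per slot
    rate = np.log2(1 + snr)                      # requested rate: slot Shannon capacity
    k = np.argmax(rate / avg_rate)               # serve the user whose channel is best
                                                 # relative to its own recent average
    served[k] += 1
    avg_rate *= (1 - 1 / window)                 # exponentially-weighted average rate update
    avg_rate[k] += rate[k] / window

print(served.min() > 0)                          # True: even cell-edge users get served
```

Because an unserved user's average rate decays every slot, its ratio eventually dominates, which is the fairness mechanism the text describes.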
With multiple antennas at the transmitter, the diversity gains may not be significant in the HDR system with efficient scheduling, since the multi-user diversity gains that already exist might outweigh the benefits from transmit antenna diversity. However, the multiuser diversity gains depend on the number of users and the parameters of the scheduling algorithm, for example the delay requirements of the various users.
Hence transmit diversity may still be useful to a limited extent in some situations. When transmit antennas are available at both transmitter and receiver, space-time coding schemes such as BLAST that trade off transmit diversity for higher throughput might be very efficient, and the benefit of dual antenna arrays that is observed in point-to-point links might carry over to the multiple user system with scheduling. It is also likely that other scheduling algorithms are superior when multiple antennas are available. For example, transmitting to a single user in each slot may not necessarily be optimal, especially when a large number of antennas are available. We explore the design tradeoffs in using antenna diversity, BLAST, and scheduling through detailed simulations. In the system evaluation part of the paper, we consider the average throughput per cell sector assuming an ideal feedback channel and also assuming that the base station transmits at the maximum data rate given by the Shannon capacity as a function of the measured SNR.

3. Simulation Study of HDR System

3.1. Link Level Simulations
The Shannon capacity of a communication link is the theoretical limit of information that can be transmitted and reliably decoded at the receiver. In theory, this capacity is achieved for the multiple antenna system with Rayleigh fading in additive Gaussian noise and channel knowledge at the receiver by encoding with Gaussian distributed codewords with arbitrarily long block lengths. In this section, we consider the Shannon capacity of the individual links in order to obtain upper bounds on the overall system throughput. In the next section, we consider how to approach these capacities in practice. The Shannon capacity of an unrestricted multi-input multi-output (MIMO) system and that of a MIMO system restricted to diversity coding was studied in [9]. For completeness, we review these results here. Consider a link with M transmit antennas and N receive antennas, denoted as (M, N). If the channel is flat fading and richly scattering, the normalized complex channel coefficients between the mth transmit and nth receive antenna can be modeled as independent and identically distributed unit-variance complex Gaussian random variables h_{m,n}. The link Shannon capacity for a given channel realization is

C = log2 det( I + (ρ/M) H H† )  bps/Hz,    (1)
where ρ is the average SNR at each receive antenna, “det” denotes the matrix determinant, I denotes the identity matrix, and the (m, n)th component of the matrix H is h_{m,n}. The SNR is divided by M because the transmit power is normalized to be independent of the number of transmit antennas (that is, the total average transmit power from the base is kept constant). Alternatively, one could use the multiple antennas to provide only diversity. In this case the goal of the space-time encoding scheme is to achieve maximum transmit diversity without regard to the number of receive antennas. In essence, the space-time coding achieves transmit diversity of order M. For such diversity-restricted coding schemes, the link capacity of an (M, N) system is upperbounded by the Shannon capacity of a single-transmit, single-receive system with an equivalent SNR of (ρ/M) Σ_{m,n} |h_{m,n}|², as follows:

C ≤ log2( 1 + (ρ/M) Σ_{m=1}^{M} Σ_{n=1}^{N} |h_{m,n}|² )  bps/Hz.    (2)
Equality is achieved when there is only N = 1 receive antenna or M = 1 transmit antenna. For M = 2 transmit antennas, the bound can be achieved using space-time spreading (STS), which provides transmit diversity in a flat-fading CDMA channel without incurring bandwidth penalties [10]. Note that the upperbound in (2) and the capacity in (1) are equivalent when either M = 1 or N = 1. For a given SNR and antenna architecture, we can numerically derive cumulative distribution functions of the unrestricted and diversity-restricted MIMO capacities from (1) and (2), respectively. For practical considerations, we study M = 1, 2, 4 antennas at the base station transmitter and N = 1, 2, 4 antennas at the mobile receiver. These link level results are used in the system level simulations to obtain the system throughputs. Figure 2 shows the distributions of the capacities for 10 dB SNR and various architectures. For the MIMO systems restricted to diversity, the distribution curves become more vertical as the number of antennas increases, indicating the saturation of the diversity benefits. For the unrestricted MIMO systems, the Shannon capacity increases more significantly. For a given architecture and SNR value, one can compute the average capacity from the corresponding distribution function. Figure 3 shows the average capacities as a function of SNR.
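These capacity statistics are straightforward to estimate numerically. The sketch below (our Monte Carlo, with arbitrary trial counts, assuming i.i.d. unit-variance complex Gaussian coefficients as in the text) evaluates the average unrestricted capacity and the average diversity-restricted bound:

```python
import numpy as np

rng = np.random.default_rng(3)

def mimo_capacity(M, N, snr_db, trials=2000):
    """Average unrestricted MIMO Shannon capacity, i.i.d. Rayleigh channel."""
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        caps.append(np.log2(np.linalg.det(np.eye(N) + (snr / M) * H @ H.conj().T).real))
    return float(np.mean(caps))

def diversity_bound(M, N, snr_db, trials=2000):
    """Average of the diversity-restricted upper bound (equivalent-SNR form)."""
    snr = 10 ** (snr_db / 10)
    h2 = rng.exponential(1.0, size=(trials, M * N)).sum(axis=1)   # sum of |h_mn|^2 terms
    return float(np.mean(np.log2(1 + (snr / M) * h2)))

# Unrestricted capacity grows roughly linearly with min(M, N)...
print(mimo_capacity(4, 4, 10) > mimo_capacity(2, 2, 10) > mimo_capacity(1, 1, 10))
# ...while the diversity-restricted gains saturate much sooner.
print(diversity_bound(4, 4, 10) - diversity_bound(2, 2, 10)
      < mimo_capacity(4, 4, 10) - mimo_capacity(2, 2, 10))
```

Plotting histograms of the per-trial values reproduces the distribution curves described around Fig. 2.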
We emphasize that the results derived in this section are for a flat fading channel. Wideband CDMA systems will most likely encounter frequency selective channels, which result in loss of orthogonality between spreading codes due to multipath delays. If no measures are taken to address the multipath fading, the capacity would be reduced. A portion of this capacity could be recouped by extending the equalizer techniques described in [11] to multiple antenna systems. However, this study is beyond the scope of this paper.

3.2. System Level Simulations
Link level results are used in the system level simulations to obtain system throughputs and to study the tradeoffs between antenna and multiuser diversity. A fixed number of users K are placed uniformly in the sector of interest. For each user, one determines the average measured SNR, corresponding to the signal power of the strongest received base divided by the sum power of the remaining bases and thermal noise. Note that we are implicitly modeling the base station interference received by the mobile terminals as spatially white Gaussian noise. This is a reasonable assumption since each base’s signal is the sum of code-multiplexed signals, resulting in a sufficiently large number of contributing terms to the interference. Pathloss and shadow fading are used to compute the received signal powers from each of the bases. The distribution of the SNRs for a large network of 3-sector cells and frequency reuse of one is obtained from [2] and shown in Fig. 4. Each user’s signal is also assumed to experience Rayleigh fading due to scattering. When multiple antennas are considered, the scattering is assumed to be sufficiently rich so that the fading is independent across the antennas. Each user determines the corresponding supportable capacity according to (1) or (2), as a function of the instantaneous fading channel realization. The fading is assumed to remain constant over the duration of the scheduling interval or slot and independent across slots. The base then transmits to the user with the highest ratio R_k/T_k, where R_k is the requested rate fed back by user k, and T_k is the average rate received by the mobile over a window of time. This scheduling algorithm, first described in [12], ensures that a user is served when its channel realization is better than it has been in the recent past. Because the average rate decreases as the amount of time that a user is not served increases, this user is more likely to be served even if its channel does not improve significantly. Note that the algorithm is, in some sense, fair to users regardless of their location with respect to the base station. More specifically, as recognized in [12], this algorithm satisfies the proportional fairness criterion [13], which states that the percentage increase in throughput to any particular user is less than the sum of the percentage decreases to all other users under any other scheduler. We assume that the rates of all K users are fed back to the base with no errors, and the channel is static between the time of request and transmission. In practice, the rate would be drawn from a discrete set; however, we achieve an upperbound on system throughput by assuming a continuous rate set drawn from the link level distributions.

3.3. Simulation Results
In our system simulations, the positions (average SNRs) of the K users are fixed for 10000 slots, and we assume independent fading realizations for each user from slot to slot. We study the average sector throughput derived by averaging the rates over 50 independent realizations of average SNRs for each data point. The throughput is a function of K, the number of antennas, and the feedback technique. We consider two feedback techniques. In one scheme, the rate is computed from either (1) or (2) assuming all M antennas were transmitting, and this value is fed back to the base station transmitter. In the performance figures, these techniques are labeled as “rate feedback (FB)” and “rate feedback (FB), div(ersity) b(ou)nd,” respectively. In the second feedback technique, the rate is computed by assuming that only the antenna with the highest capacity is used. In other words,

R = max_{m = 1, ..., M} log2( 1 + ρ Σ_{n=1}^{N} |h_{m,n}|² ).
In addition to the rate feedback, the terminal must also feed back the index of the transmit antenna that achieves this maximum rate. Hence this technique is labeled as “rate/ant(enna) feedback (FB)”. The base station then transmits all the power from this antenna to achieve the determined capacity. While this scheme requires additional feedback bandwidth, we will see that there is a significant increase in throughput for the multiple transmit antenna, single receive antenna case. Figure 5 shows the average sector throughput as a function of the number of users for M = 2 or 4 transmit antennas and N = 1 receive antenna. Notice that all the curves increase with increasing number of users due to multi-user diversity. When there is only one user in the system the throughput increases in going from one antenna to two and four antennas. However, with more users in the system the trend is actually reversed for
rate-only feedback. (Recall that with a single receive antenna, the Shannon capacity and diversity bound capacity are equivalent.) This is because with transmit diversity the variations in the SNR are reduced (the probability that the SNR is higher is smaller), and hence the gains from efficient scheduling (multiuser diversity gains) are actually reduced. This shows that with rate-only feedback, the gains from multi-user diversity with a sufficient number of users are actually superior to those of transmit diversity. The performance with rate/antenna feedback is uniformly better than rate-only feedback since the best antenna is used to transmit to any user. Essentially each user appears as M different users, where M is the transmit diversity order, and hence there is greater efficiency from scheduling compared to the single transmit antenna case for any number of users. Figure 6 shows the throughput results for M = 2 transmit antennas and N = 2 or 4 receive antennas. Comparing the results to those in Fig. 5, it is clear that the gains from receive antennas are superior to those of transmit antennas. This is as expected since, in addition to receive diversity, receive antennas provide antenna combining gain. Nevertheless, the gains from receive antennas also decrease with increasing number of users. Note that rate and antenna feedback now performs worse than rate-only feedback. This shows that when two or more receive antennas are available, there are greater gains from using both transmit antennas simultaneously than from transmitting out of the best antenna. The additional capacity of the (2,2) system over the (1,2) system more than compensates for the multi-user diversity gains. As expected, the capacity of the diversity bound (which can actually be achieved for N = 2 receive antennas) is inferior to rate and antenna feedback. This is because these techniques both achieve transmit diversity, and the technique with antenna feedback uses more information to achieve a higher rate. For completeness, Fig. 7 shows the throughput results for M = 4 transmit antennas and N = 4 receive antennas. The relationships and trends are the same as in Fig. 6; however, the throughputs are higher with respect to M = 2 transmit antennas for rate-only feedback and rate/antenna feedback. Note that for the diversity bound, the capacity is lower for (4,4) than (2,4) because of the reduced variation of SNR and reduced efficiency of the scheduler in the former case. Figure 8 shows the normalized sector throughput with increasing number of antennas for the cases of
only one user in the sector and 16 users in the sector. The solid line corresponds to a system with 1 receive antenna at each terminal and multiple (1, 2, or 4) antennas at the base station. The dashed line corresponds to a system with a single transmit antenna at the base and multiple receive antennas (1, 2, or 4) at each of the terminals. Finally, the dotted line corresponds to multiple antennas at both the base station and the terminals ((1,1), (2,2), or (4,4)) using rate-only feedback. For the one-user case, the trends are the same as the average or outage capacity results in [14]. Transmit diversity has the least improvement and eventually saturates with increasing number of antennas. The most dramatic improvement is for the case when multiple antennas are available at both the transmitter and receiver. When there are 16 users, the multiple antenna gains are uniformly reduced for all schemes. Nevertheless, the throughput gains with dual arrays are still significant over the case with only receive antennas, and the gains appear to grow linearly with the number of antennas as before.

4. Implementation of an HDR System with BLAST

The system performances derived in the previous section were based on a Shannon capacity analysis and could be achieved in theory using Gaussian distributed codewords and arbitrarily long block lengths. In practice, for single antenna transmitters, turbo codes and iterative decoding techniques can approach the Shannon capacity if the interleaver depth is sufficiently long [15]. The latency tolerance for packet data allows these coding techniques to be used. Hence, in the HDR proposal, a family of turbo codes based on serially concatenated convolutional codes is used to provide powerful error correcting capability at low SNRs [16]. The encoder structure is shown in Fig. 9 and includes an interleaver between the outer and inner encoders. The outer convolutional code is rate-1/2 and has 16 states, while the inner code, also rate-1/2, has 4 states. The overall concatenated code rate is 1/4, but by puncturing the outer and/or the inner convolutional code, concatenated codes of rates 3/8 and 1/2 are also supported. Unfortunately, the success of turbo codes has not been extended to systems with multiple transmit antennas (except in the (2,1) case as noted in [9]). However, using the BLAST technique [1] with single-dimensional turbo codes, a significant fraction of the capacity is achieved by encoding the data in space and time and transmitting the streams simultaneously over multiple antennas. At the receiver, multiple antennas are required to distinguish the streams based on their spatial characteristics. Our proposed architecture with M transmit antennas extends the original HDR architecture using an M-ary demultiplexer following the channel encoder. These M parallel data streams are modulated and transmitted simultaneously through the M antennas. Details of this
BLAST transmitter are given in the following subsection. At the receiver, the number of receive antennas must be at least as large as the number of transmit antennas for BLAST demodulation. These antennas must be spaced sufficiently so that the correlation of the received signals across antennas is small. This spacing is on the order of half a wavelength, which for a 2 GHz carrier is 7.5 cm. Because high data rate applications will most likely target personal digital assistants and laptop computers, it is possible to have up to four antennas with sufficient spacing. In Subsection 4.2, we describe the receiver architecture and address its computational complexity.

4.1. Transmitter Architecture
The proposed HDR transmitter with M antennas is shown in Fig. 10. Compared to the conventional single antenna transmitter in Fig. 1, the encoded data stream is now demultiplexed into M streams, and the channel interleaver is replaced with a generalized space-time interleaver for distributing the coded symbols in time and across antennas. Each of the M streams is modulated with the same set of 16 Walsh covers. These signals are summed, modulated with the same PN sequence, and transmitted simultaneously over the M antennas. There are a total of 16M substreams, and the M substreams which share the same code are distinguishable at the receiver only through their spatial channel characteristics.

4.2. Receiver Architecture
We now describe and perform a complexity analysis for an HDR BLAST receiver architecture with N receive antennas, as shown in Fig. 11. The purpose of this section is to obtain a high level estimate of the processing requirements to assess the receiver’s feasibility. Let C be the number of chips per symbol, M be the number of transmit antennas, and L be the number of resolvable multipath components. We assume that the timings of the L multipath delays and the MLN channel coefficients have been estimated. For a given symbol period, let r_{n,l} be the C-dimensional complex vector representing the sampled baseband signal at the nth receive antenna corresponding to the lth multipath.
4.2.1. PN Sequence Descrambling. The received signal is first descrambled using the complex conjugates of the PN sequence. Let the descrambling sequence be represented by a C-dimensional complex vector p whose components are the complex conjugates of the scrambling sequence. Descrambling is performed by taking the component-wise product of the descrambling vector with the received signal: y_{n,l} = p ∘ r_{n,l}. Because the components of the descrambling sequence are of the form ±1 ± j, each component-wise multiplication consists of 2 real additions. Hence there are a total of 2C additions per vector per symbol, and a total of 2CLN additions for all LN received signals. 4.2.2. Walsh Code Despreading. The descrambled signals are despread with the C Walsh code sequences. Let w_k be the kth Walsh code sequence (k = 1, . . . , C). Then the despreading corresponds to taking the inner product between the code and the descrambled signal:

x_{n,l}(k) = w_k^T y_{n,l}.    (3)
Because the Walsh sequences are binary, the inner product consists of 2C real additions. For each of the LN descrambled signals, there are C inner products (one for each code) consisting of 2C real additions each, for a total of 2C²LN real additions. We can reduce this number of computations if we consider the special structure of the Walsh codes. For codes of length C = 2^n (n = 1, 2, 3, . . .), the orthogonal codes are given by the columns of the Hadamard matrix H_{2^n}, which is given by the following recursive expression:

H_{2^n} = [ H_{2^{n-1}}  H_{2^{n-1}} ; H_{2^{n-1}}  −H_{2^{n-1}} ],
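The recursion above yields the usual butterfly implementation of the fast transform. A sketch (our code, real-valued input for simplicity; the receiver's complex despreader inputs would double each addition into two real additions):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform for a length C = 2^n input: C*log2(C) additions."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # one butterfly: 2 additions
        h *= 2
    return x

# Check against direct inner products with the C Walsh codes (Hadamard matrix columns).
C = 8
H = np.array([[1]])
for _ in range(3):
    H = np.kron(np.array([[1, 1], [1, -1]]), H)
v = np.arange(C, dtype=float)
print(np.allclose(fwht(v), H.T @ v))   # True
```

The direct computation costs C additions per code, i.e. C² per vector, so the fast transform's C log2 C additions is where the complexity reduction quoted in the text comes from.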
where H_1 = 1. The inner products between a C-dimensional vector and the C Walsh codes can be obtained using a Fast Walsh-Hadamard Transform discussed in [17]. An example is shown in Fig. 12 for C = 4, where y_1, . . . , y_4 are the bits of the input vector, and x_1, . . . , x_4 are the resulting inner products of the vector with the four Walsh codes given by the columns of H_4. Using this technique, the number of total real additions required for processing each descrambled signal is reduced from 2C² to 2C log2 C; hence the Walsh code despreading requires a total of 2CLN log2 C real additions per symbol. 4.2.3. Space-Time Combiner. The signal components corresponding to each of the MC substreams are distributed among the LN components of the despreader outputs. For the kth code and nth receive antenna, we collect these components given by (3) to form an L-dimensional vector:

x_{n,k} = [ x_{n,1}(k), . . . , x_{n,L}(k) ]^T.
Because each code is transmitted from all M antennas, there are M channel coefficients corresponding to each element of x_{n,k}. For example, for the component x_{n,l}(k), the M channel coefficients are given by h_{1,n,l}, . . . , h_{M,n,l}, corresponding to the channels from the M antennas over the lth multipath to the nth receive antenna. The L-dimensional vector of channel coefficients corresponding to the vector x_{n,k} over the mth transmitter is

h_{m,n} = [ h_{m,n,1}, . . . , h_{m,n,L} ]^T.
There are MLN channel coefficients in total, which we assume have been estimated during a training phase. The space-time combining operation weights each of the despreader outputs with the complex conjugate of its corresponding channel coefficient and combines the results. For the kth code, the space-time combiner output is an M-dimensional vector whose mth component combines the despreader outputs weighted by the conjugated channel coefficients of the mth transmit antenna. For each code, the space-time combiner requires MLN complex multiplications and MLN complex additions. Each complex multiplication requires 6 real operations (4 real multiplications and 2 real additions), and each complex addition requires 2. Therefore the entire operation requires 8CMLN real operations per symbol.

4.2.4. V-BLAST Detector. Each component of the combiner output vector is corrupted by spatial interference due to the other M − 1 components. In addition, in frequency selective channels (i.e., L > 1), there is also interference due to the substreams spread by the other codes. One could choose to mitigate this other-code interference, using for example a decorrelating detector [6]. However, this multipath interference may be ignored since it is typically less severe than the spatial interference among the code-sharing substreams. In general, there would be less multipath interference if higher order Walsh sequences were used. To eliminate the spatial interference, one could use a maximum-likelihood detector. However, the complexity of this technique grows exponentially with M. The V-BLAST detector is a computationally efficient alternative which is comparable to the maximum-likelihood detector in terms of performance [18]. A single V-BLAST detector can resolve the interference among the set of M substreams sharing one code. Therefore a bank of C V-BLAST detectors is needed in the receiver. The V-BLAST algorithm requires the M-by-M code-channel correlation matrix, which is computed from the L-by-L code correlation matrix for the kth code. The (i, j)th element of this code correlation matrix is the inner product of
the ith delayed PN/Walsh sequence (the component-wise product of the PN scrambling sequence with the kth Walsh code) with the jth delayed PN/Walsh sequence. For example, if the delay of the l = 2 multipath relative to the l = 1 multipath is a single chip, then the (1, 2) element is the inner product of the PN/Walsh sequence with a copy of itself delayed by one chip.
Each term requires 4C real operations, and there are a total of L² terms. However, since each diagonal element is the energy per symbol of the PN/Walsh sequence, the L diagonal terms do not require computation. Also, since the code correlation matrix is Hermitian symmetric, the total number of operations for calculating each matrix is upper bounded by 2CL(L − 1). Under the assumption of flat fading, the combiner output vector is a sufficient statistic for the substreams spread by the kth code [6]. Dropping the subscript k for simplicity, the vector can be written as y = Ra + n,
where R is the code-channel correlation matrix for the kth code, a is the M-dimensional vector of coded data symbols corresponding to the kth code, and n is the associated complex-valued additive Gaussian noise vector. Given the correlation matrix R and the vector y, the V-BLAST algorithm [19] successively detects the data symbols of a using the following steps:

1. Determine the component of y with the highest signal-to-noise ratio (SNR).
2. Correlate the vector y with a vector which satisfies either the minimum mean-squared error or zero-forcing criterion, so that the result corresponds to the component with the highest SNR and is free from interference due to the other M − 1 components.
3. Use a slicer to estimate the symbol.
4. Using the estimated symbol, remove the contribution of this component from the vector y.
5. Repeat steps 1 through 4 until all M components have been detected.

Let the ordered set {k_1, . . . , k_M} be a permutation of the integers 1, 2, . . . , M specifying the order in which the components of y are extracted. The V-BLAST algorithm can be written in pseudo-code as Eqs. (5) through (11).
Huang et al.
The matrix inverse in (5) can be computed using the Gauss-Jordan technique [20]. For M = 2, 3, 4, this computation requires respectively 100, 376 and 792 real operations. In (6), the component index of y with the strongest SNR is assigned to k_m; it corresponds to the index of the minimum noise variance, given by the diagonal elements of the inverse correlation matrix. The correlating vector used in step 2 is the zero-forcing vector. The correlation in (8) requires M − m + 1 complex multiplications and M − m complex additions, resulting in 8(M − m) + 6 real operations. For a QPSK data constellation, the slicing operation in (9) requires 2 real operations. Reconstructing the contribution of the k_m-th component in (10) requires M − m complex multiplications, and removing the contribution from y requires M − m complex additions. The k_m-th term of y is removed in (10), and the matrix R is deflated in (11) by removing the k_m-th row and column. For the Mth iteration, only the slicing operation in (9) is required for the QPSK constellation. For M = 2 and 4, the total number of operations per symbol per code for the V-BLAST algorithm ((5) through (11)) is, respectively, 126 and 1350. The operation count for the V-BLAST algorithm can be reduced significantly if the correlation matrix can be reused among several codes, or if it does not vary significantly from symbol to symbol so that it need not be recomputed each symbol period. Future studies will address these potential simplifications. Additional reductions in complexity can be achieved by using an efficient algorithm for nulling and cancellation which avoids calculating the matrix inverse of each deflated matrix R [21]. After the bank of C V-BLAST detectors, the signals are demapped, deinterleaved and passed to the turbo decoder. These processing blocks require memory but do not require arithmetic operations.
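As a concrete illustration of steps 1 through 5, a zero-forcing variant of the nulling-and-cancellation loop might look as follows in pure Python. This is a hedged sketch, not the authors' implementation: it assumes the model y = Ra + n, uses a BPSK slicer in place of the QPSK slicer described in the text, and computes the inverse by Gauss-Jordan elimination as the reference to [20] suggests:

```python
def gauss_jordan_inverse(R):
    """Invert a square matrix by Gauss-Jordan elimination (cf. [20])."""
    M = len(R)
    aug = [list(row) + [1.0 if i == j else 0.0 for j in range(M)]
           for i, row in enumerate(R)]
    for col in range(M):
        piv = max(range(col, M), key=lambda r: abs(aug[r][col]))  # partial pivoting
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(M):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[M:] for row in aug]

def vblast_zf(R, y):
    """Successive zero-forcing nulling and cancellation (V-BLAST-style),
    detecting the strongest (minimum post-nulling noise) component first.
    Model: y = R a + n with BPSK symbols a in {+1, -1}."""
    R = [list(row) for row in R]
    y = list(y)
    idx = list(range(len(y)))      # indices of still-undetected components
    a_hat = [0.0] * len(y)
    while len(y) > 1:
        Rinv = gauss_jordan_inverse(R)
        j = min(range(len(y)), key=lambda k: Rinv[k][k])   # highest-SNR component
        z = sum(w * v for w, v in zip(Rinv[j], y))         # zero-forcing correlation
        s = 1.0 if z >= 0 else -1.0                        # slicer
        a_hat[idx[j]] = s
        y = [v - s * row[j] for v, row in zip(y, R)]       # cancel its contribution
        y.pop(j)
        idx.pop(j)
        R = [[v for c, v in enumerate(row) if c != j]      # deflate R
             for r, row in enumerate(R) if r != j]
    a_hat[idx[0]] = 1.0 if y[0] >= 0 else -1.0             # last component: slice only
    return a_hat
```

For a noiseless 2-by-2 example with R = [[2, 0.5], [0.5, 1]] and a = (+1, −1), the detector recovers a exactly; a square-root algorithm as in [21] would avoid the repeated inversion of each deflated R.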
4.2.5. Turbo Decoder. The decoder structure is shown in Fig. 13. The optimal decoding algorithm is the Maximum A Posteriori Probability (MAP) algorithm proposed by Bahl et al. [22]. However, the complexity of implementing the MAP algorithm directly is prohibitive, and hence suboptimal decoding algorithms, the most common being the Soft Output Viterbi Algorithm (SOVA) [23], are used in practice. It was shown in [24] that the computational cost of a SOVA decoder iteration per bit consists of a number of maximum operations and additions that grows with the number of trellis states, plus 6(K + 1) bit comparisons, where K is the constraint length of the convolutional code. For the HDR system, K is 4 for the outer convolutional code and 2 for the inner code, yielding 101 and 47 operations per decoder iteration per bit, respectively. Hence the total operations count for decoding a block of length B bits is given by 148MBD, where D is the number of decoder iterations. For example, using M = 4 transmit antennas, a packet length of 4096 bits, D = 4 decoder iterations, and a packet duration of 1.66 ms, the turbo decoder requires 148MBD ≈ 9.7 × 10^6 operations per packet, or roughly 5.8 × 10^9 operations per second.
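The decoding load implied by these numbers can be checked with a few lines of arithmetic (the formula 148·M·B·D, the parameter values, and the 1.66 ms packet duration are taken from the text):

```python
# SOVA-based turbo decoding operation count, per the 148*M*B*D formula
M, B, D = 4, 4096, 4          # transmit antennas, block length (bits), iterations
packet_duration = 1.66e-3     # seconds

ops_per_packet = 148 * M * B * D
ops_per_second = ops_per_packet / packet_duration
print(ops_per_packet)                   # -> 9699328 operations per packet
print(round(ops_per_second / 1e9, 2))   # -> 5.84 (GOPS)
```

This is the dominant term in the complexity budget of Table 1.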
Note that this simplified analysis has neglected the substantial number of memory operations associated with SOVA and interleaving. Table 1 gives the number of operations per second for the following three systems: a (1,2) system with only space-time combining (no additional processing for interference suppression), a (2,2) system with V-BLAST detection, and a (4,4) system with V-BLAST detection. The values are based on using C = 16 chips per symbol, L = 3 resolvable multipath components, a block length of B = 4096 bits, and a symbol rate of 76.8K symbols per second. Turbo decoding uses the majority of the processing cycles. Comparing a (1,2) and (2,2) system, the additional processing required for V-BLAST detection is an order of magnitude less than the additional processing required for turbo
decoding. For a (4,4) system, the V-BLAST processing accounts for about 22% of the total processing, while the turbo decoding accounts for about 73%. In this complexity analysis, we have ignored the processing required for estimating the path delays and channel coefficients. However, one can assume that these operations, which are performed during a time-division multiplexed training phase, require less processing than the data processing. Hence, because we have assumed data processing over the full duty cycle, the values obtained are upper bounds on the actual processing requirements.

5. Conclusions

The impact of multiple antennas at the transmitter and receiver for a packet data system with channel-aware scheduling was studied. We showed that the relative gains from multiple antennas are considerably reduced compared to a point-to-point system with the same number of antennas. Nevertheless, the trends in the gains are similar, and we continue to see a linear increase in average throughput with increasing numbers of transmit and receive antennas. For single antenna receivers, we showed that multiuser diversity from efficient scheduling often outweighs the benefits of transmit diversity. Allowing for additional feedback regarding the strongest received antenna, selection diversity achieves better performance. For multiple antenna receivers, the best performance is achieved using a multi-input multi-output capacity-achieving transmission scheme such as BLAST, in which the transmitted signal is coded in space and time, and the receive antennas are used to resolve the spatial interference. The actual gains achieved by this transmission technology remain to be studied through detailed link simulations. A receiver architecture for the BLAST transmission scheme was outlined and a complexity analysis was performed. The complexity is dominated by the turbo decoder, and the overall processing requirements are within the range of current hardware technology.

References

1. G.J. Foschini, "Layered Space-Time Architecture for Wireless Communication in a Fading Environment When Using Multi-Element Antennas," Bell Labs Technical Journal, vol. 1, no. 2, Autumn 1996, pp. 41–59.
2. P. Bender, P. Black, M. Grob, R. Padovani, N. Sindhushayana, and A. Viterbi, "CDMA/HDR: A Bandwidth-Efficient High-Speed Wireless Data Service for Nomadic Users," IEEE Communications Magazine, vol. 38, no. 7, 2000, pp. 70–77.
3. C.E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, vol. 27, 1948, pp. 379–423, 623–656.
4. F.R. Farrokhi, A. Lozano, G.J. Foschini, and R.A. Valenzuela, in The 11th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2000), 2000, vol. 1, pp. 373–377.
5. S. Catreux, P.F. Driessen, and L.J. Greenstein, IEEE Transactions on Communications, vol. 49, no. 8, 2001, pp. 1307–1311.
6. H. Huang, H. Viswanathan, and G.J. Foschini, "Multiple Antennas in Cellular CDMA Systems: Transmission, Detection, and Spectral Efficiency," IEEE Journal on Selected Areas in Communications, to appear.
7. P. Viswanath, D. Tse, and R. Laroia, "Opportunistic Beamforming Using Dumb Antennas," IEEE Transactions on Information Theory, submitted.
8. H. Huang and H. Viswanathan, "Multiple Antennas and Multiuser Detection in High Data Rate CDMA Systems," in Proceedings of the IEEE Vehicular Technology Conference, Tokyo, Japan, May 2000.
9. C. Papadias, "On the Spectral Efficiency of Space-Time Spreading for Multiple Antenna CDMA Systems," in 33rd Asilomar Conference on Signals, Systems and Computers, Monterey, CA, Nov. 1999.
10. C. Papadias, B. Hochwald, T. Marzetta, M. Buehrer, and R. Soni, "Space-Time Spreading for CDMA Systems," in 6th Workshop on Smart Antennas in Wireless Mobile Communications, Stanford, CA, July 22–23, 1999.
11. I. Ghauri and D. Slock, "Linear Receivers for the DS-CDMA Downlink Exploiting Orthogonality of Spreading Sequences," in 32nd Asilomar Conference on Signals, Systems and Computers, Monterey, CA, Nov. 1998.
12. A. Jalali, R. Padovani, and R. Pankaj, "Data Throughput of CDMA-HDR: A High Efficiency High Data Rate Personal Communication Wireless System," in Proceedings of the IEEE Vehicular Technology Conference, Tokyo, Japan, May 2000.
13. F. Kelly, "Charging and Rate Control for Elastic Traffic," European Transactions on Telecommunications, vol. 8, 1997, pp. 33–37.
14. G.J. Foschini and M.J. Gans, "On Limits of Wireless Communication in a Fading Environment when Using Multiple Antennas,"
Wireless Personal Communications, vol. 6, no. 3, March 1998, pp. 311–335.
15. C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding," in Proceedings of the International Conference on Communications '93, May 1993, pp. 1064–1070.
16. G. Karmi, F. Ling, and R. Pankaj, "HDR Air Interface Specification (HAI)," Qualcomm Inc., Jan. 2000.
17. C.-L. I, C.A. Webb, H. Huang, S. ten Brink, S. Nanda, and R.D. Gitlin, "IS-95 Enhancements for Multimedia Services," Bell Labs Technical Journal, vol. 1, no. 2, Autumn 1996, pp. 60–87.
18. G.J. Foschini, G.D. Golden, R.A. Valenzuela, and P.W. Wolniansky, "Simplified Processing for High Spectral Efficiency Wireless Communication Employing Multi-Element Arrays," IEEE Journal on Selected Areas in Communications, vol. 17, no. 11, 1999, pp. 1841–1851.
19. P.W. Wolniansky, G.J. Foschini, G.D. Golden, and R.A. Valenzuela, "V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel," in Proc. ISSSE, Pisa, Italy, Sept. 1998.
20. G. Strang, Linear Algebra and Its Applications, San Diego: Harcourt Brace Jovanovich, 1988.
21. B. Hassibi, "An Efficient Square-Root Algorithm for BLAST," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2000, Istanbul, Turkey, June 2000, pp. 3129–3134.
22. L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. IT-20, March 1974, pp. 284–287.
23. J. Hagenauer and P. Hoeher, "A Viterbi Algorithm with Soft Decision Outputs and Its Applications," in Proceedings of GLOBECOM '89, Nov. 1989, pp. 1680–1686.
24. P. Robertson, E. Villebrun, and P. Hoeher, "A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log Domain," in Proceedings of the International Conference on Communications '95, 1995, pp. 1009–1013.
Howard Huang received a B.S.E.E. degree from Rice University in 1991 and a Ph.D. in electrical engineering from Princeton University in 1995. Since graduating, he has been a member of technical staff in the Wireless Communications Research Department, Bell Labs, Holmdel, NJ. His interests include multiuser detection, multiple antenna communication systems, and applications of these technologies to third generation mobile communication systems. hchuang@lucent.com
Harish Viswanathan was born in Trichy, India, on August 14, 1971. He received the B. Tech. degree from the Department of Electrical Engineering, Indian Institute of Technology, Madras, India in 1992 and the M.S. and Ph.D. degrees from the School of Electrical Engineering, Cornell University, Ithaca, NY in 1995 and 1997, respectively. He is presently with Lucent Technologies Bell Labs, Murray Hill, NJ. His research interests include information theory, communication theory, wireless networks and signal processing. Dr. Viswanathan was awarded the Cornell Sage Fellowship during the academic year 1992–1993.
Andrew Blanksby received the bachelor’s degree and Ph.D. in Electrical and Electronic Engineering from the University of Adelaide in 1993 and 1999 respectively. In July 1998 he joined the DSP & VLSI Systems Research Department, Bell Laboratories, Lucent Technologies, Holmdel, NJ, as a Member of Technical Staff. Since March 2001 Andrew has been with the High Speed Communications VLSI Research Department, Agere Systems, Holmdel NJ. His professional interests include VLSI design, communication system design, and signal processing.
Mohamed A. Haleem has been with the Wireless Communications Research Department, Bell Laboratories, Lucent Technologies,
Holmdel, NJ since July 1996. He received a B.Sc. degree from the Department of Electrical & Electronic Engineering, University of Peradeniya, Sri Lanka in 1990 and the M.Phil. degree from the Department of Electrical & Electronic Engineering, Hong Kong University of Science & Technology in 1995. From March 1990 to August
1993 he was with the academic staff of the Department of Electrical & Electronic Engineering, University of Peradeniya, Sri Lanka. His professional interests include dynamic resource assignment in wireless communication systems, high speed wireless systems, and communication systems simulation.
Journal of VLSI Signal Processing 30, 71–87, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Deterministic Time-Varying Packet Fair Queueing for Integrated Services Networks* ANASTASIOS STAMOULIS AND GEORGIOS B. GIANNAKIS Department of Electrical and Computer Engineering, 200 Union Street S.E., University of Minnesota, Minneapolis, MN 55455, USA Received September 13, 2000; Revised June 15, 2001
Abstract. In integrated services networks, the provision of Quality of Service (QoS) guarantees depends critically upon the scheduling algorithm employed at the network layer. In this work we review fundamental results on scheduling, and we focus on Packet Fair Queueing (PFQ) algorithms, which have been proposed for QoS wireline/wireless networking. The basic notion in PFQ is that the bandwidth allocated to a session is proportional to a positive weight. Because of the fixed weight assignment, the delay-bandwidth coupling inherent in PFQ imposes limitations on the range of QoS that can be supported. We develop PFQ with deterministic time-varying weight assignments, and we propose a low-overhead algorithm capable of supporting arbitrary piecewise linear service curves, which achieve delay-bandwidth decoupling. Unlike existing service-curve based algorithms, our time-varying PFQ scheme does not exhibit the punishment phenomenon, and it allows sessions to exploit the extra bandwidth in under-loaded networks.

Keywords: packet fair queueing, generalized processor sharing, QoS, service curves, integrated networks, wireless networks

1. Introduction
In integrated services networks, the provision of Quality of Service (QoS) guarantees to individual sessions depends critically upon the scheduling algorithm employed at the network switches. The scheduling algorithm determines the transmission order of packets on outgoing links and thus has a direct impact on the packet delay and achievable throughput, which serve as primary figures of merit of the system performance. In wireline computer networks, scheduling has been an area of fervent research for nearly two decades. In wireless networks, research on scheduling algorithms is in a relatively nascent form, as existing second generation systems carry mainly voice and low-rate data traffic. However, the emergence of third generation systems, and their promise of broadband transmissions combined with the idiosyncrasies of the wireless channel, sheds light on the importance of scheduling in wireless environments as well.

*Work in this paper was supported by the NSF Wireless Initiative grant no. 99-79443. Parts of this work were presented at the Globecom 2000 Conference. The original version was submitted to the Journal of VLSI Signal Processing on September 11, 2000. It was revised on June 6, 2001.

The primary mission of the scheduler is to allocate the system resources to the users efficiently. In computer networks, the term "resources" refers primarily to the bandwidth of the transmission link and the queueing buffer space in the routers, whereas the adverb "efficiently" refers to making sure that resources are not wasted (for example, in a TDMA system, the scheduler would try to ensure that as many time-slots as possible are used for transmission). Integrated services networks do not only offer the potential of more efficient utilization of resources, but also exhibit billing advantages for both service providers and customers (see,
Stamoulis and Giannakis
e.g., [1]). Hence, it does not come as a surprise that third generation wireless networks are envisioned to support, similar to their wireline counterparts, a plethora of different transmission rates and services. What perhaps comes as a surprise is that the need for efficient bandwidth allocation is not only prominent in wireless networks (where the RF spectrum is scarce and extremely expensive), but in wireline networks as well: despite physical-layer developments in optical networking (for the backbone network) and HDSL (for the last mile), Internet access is mainly best-effort (which implies, for example, that it is not known a priori how long it will take for a web page to be downloaded). Network scheduling algorithms play a central role in how bandwidth is allocated among sessions: flows in a wireline environment, or users in a cellular network. From a historical standpoint, there is an extensive body of work on scheduling algorithms conducted by researchers in quite diverse fields, such as Operations Research and Computer Science. In the emerging broadband multi-service networks, the Generalized Processor Sharing (GPS) [2] discipline and the numerous Packet Fair Queueing (PFQ) algorithms are widely considered the primary scheduler candidates. This is because GPS has been shown to provide both minimum service rate guarantees and isolation from ill-behaved traffic sources. Not only have GPS-based algorithms been implemented in actual gigabit switches in wired networks, but they have also been studied in the context of the emerging broadband wireless networks (see, e.g., [3] and references therein).
The fundamental notion in GPS-based algorithms is that the amount of service session i receives from the switch (in terms of transmitted packets) is proportional to a positive weight φ_i. As a result, GPS (and its numerous PFQ variants) is capable of delivering bandwidth guarantees; the latter translate to delay guarantees as long as there is an upper bound on the amount of incoming traffic (this bound could be either deterministic, for leaky-bucket constrained sessions [2], or stochastic, as in, e.g., [4]). One of the major shortcomings of GPS is that the service guarantees provided to a session i are controlled by just one parameter, the weight φ_i. Hence GPS exhibits the delay-bandwidth coupling, which refers to the mutual dependence between delay and throughput guarantees (i.e., in order to guarantee small delays, a large portion of the bandwidth must be reserved). To appreciate why the delay-bandwidth coupling is a shortcoming, one needs to take into consideration that future networks will support multirate multimedia services with widely diverse delay and bandwidth specifications. For example, video and audio have delay requirements of the same order, but video has an order of magnitude greater bandwidth requirements than audio. Therefore, delay-bandwidth coupling could lead to bandwidth underutilization. To overcome these problems, [5] introduces the notion of service curves (SC); the SC S_i(t) can be thought of as the minimum amount of service that the switch guarantees to session i in the interval [0, t]. SCs dispense with the delay-bandwidth coupling because the shape of S_i(t) can be arbitrary. However, as noted in, e.g., [6], SC algorithms suffer from the punishment effect: when session i receives in [0, t_1] more service than S_i(t_1) (for example, this could happen if the system is under-loaded in [0, t_1]) and the load increases at t_1, then there is an interval after t_1 in which session i does not receive any service at all. Eventually session i will receive cumulative service at least equal to S_i(t), but, nevertheless, the session is penalized for the extra service it received in [0, t_1]. From a practical point of view, the punishment phenomenon is undesirable in wireline networks, because it does not allow sessions to take advantage of potentially available bandwidth in the system. In wireless networks, where the quality of the physical channel is time-varying (i.e., sometimes the channel can be considered "good" and sometimes "bad"), the punishment effect could prevent a user from transmitting even though the physical channel is temporarily in a good state. Therefore, in both wireless and wireline environments, the punishment phenomenon is undesirable. As GPS does not suffer from the punishment problem [2], we are motivated to study whether GPS with proper weight assignment is capable of providing the same QoS services as SC-based algorithms, while obviating the punishment phenomenon in both wireline and wireless networks.
In this work we provide a short tutorial on scheduling algorithms and revisit some of the fundamental results on deterministic QoS. Furthermore, we develop a novel scheduling algorithm which combines the strengths of PFQ algorithms and SC-based algorithms. Our algorithm, called Deterministic Time-Varying Packet Fair Queueing (DTV-PFQ), relies on the intuitive idea of time-varying weights φ_i(t). Noting that in GPS (and hence in PFQ) the weight assignment is fixed throughout the lifetime of the sessions, herein we develop a time-varying weight assignment which extends PFQ and makes PFQ capable of supporting piecewise linear SCs with minimum overhead. Our contribution lies
Deterministic Time-Varying Packet Fair Queueing
in showing how the time-varying weight assignment can be done deterministically for each session and independently of the other sessions, thus preserving the isolation properties of PFQ. As a result, our deterministic time-varying PFQ is capable of combining the strengths of GPS and the service flexibility of SC-based algorithms. Moreover, illustrating that DTV-PFQ can support (modulo some approximation errors) as many multiple leaky-bucket constrained sessions as the Earliest Deadline First (EDF) scheduling algorithm constitutes an important ramification of our work. Noting that in wireless environments a lot is to be gained by the joint design across network layers, we also discuss how our scheduling algorithm can be integrated with the physical layer transmission-reception scheme. The rest of the paper is organized as follows: in Section 2 we revisit results on GPS, EDF, PFQ, and SCs, and in Section 3 we discuss the punishment effect of SC-based schedulers. In Section 4 we describe our deterministic time-varying weight assignment scheme and illustrate its merits. In Section 5 we discuss implementation details in wireless–wireline networks, and finally, we provide concluding remarks and pointers to future research in Section 6.
2. Model Description

In this section we briefly review results on GPS, EDF, PFQ, and SCs, and describe the deterministic model that we will use in the following sections to develop and study DTV-PFQ. We consider a single network switch which multiplexes data packets sent by various sessions. The scheduler operates either in each of the output links of a network switch (see Fig. 1), or at the base-station inside a cell of a wireless network (see Fig. 2). The objective of the scheduler is to provide QoS guarantees to sessions (either to individual sessions or to groups of sessions). The QoS guarantees can be expressed in terms of packet delay, achievable throughput, and packet drop probability (in wireline networks packets are dropped mainly because of buffer overflows due to congestion, whereas in wireless networks packets are dropped mainly because of symbol errors at the physical layer). In order to quantify the achievable QoS guarantees, let us introduce some notation. A_i(0, t) denotes the amount of traffic that session i transmits in the time interval [0, t]; A_i could either be a deterministic function of time (as, e.g., in the case of a known MPEG file which is to be transmitted over the network), or a stochastic process [7]. W_i(0, t) denotes the amount of service given to session i by the scheduler in the interval [0, t]; it readily follows that W_i(0, t)/t is the average bandwidth allocated to session i over the time interval [0, t]. Supposing that session i started transmitting packets at time 0, the backlog Q_i(t) of session i at time t, and the delay D_i(t) of a packet of session i which arrives at the switch at time t, are given respectively by [8]:

Q_i(t) = A_i(0, t) − W_i(0, t),
D_i(t) = min{τ ≥ 0 : A_i(0, t) ≤ W_i(0, t + τ)}.
Herein we focus on deterministic QoS guarantees, which refer to the worst-case service that a session can receive from the scheduler; the worst-case service is expressed as the maximum packet delay or the minimum service throughput. Intuitively, the worst-case service scenario for session i is determined by the amount of traffic that session i sends to the switch (this amount could be measured in, e.g., bits or ATM cells), and the amount of service that it receives from the switch. In turn, the amount of service is determined by the available bandwidth and the scheduling algorithm: for some scheduling algorithms, the amount of service provided to session i does not depend only on session i's traffic, but is also a function of the traffic generated by the other active sessions in the network. Hence, if bounds on both the input traffic and the provided service exist, it is possible to derive bounds on the maximum delay and minimum throughput [8]. Next, in order to lay out the framework of our scheduling algorithm, we consider bounds on the amount of traffic generated by sessions, and on the amount of service provided by the scheduler.
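The backlog and delay definitions above can be made concrete with cumulative arrival and service counts. The following discrete-time sketch is illustrative only; the sequences and function names are not from the paper:

```python
def backlog(A, W, t):
    """Backlog at time t: traffic arrived in [0, t] minus service received."""
    return A[t] - W[t]

def delay(A, W, t, horizon):
    """Delay of a packet arriving at t: the smallest tau such that
    A(0, t) <= W(0, t + tau), searched up to the given horizon."""
    for tau in range(horizon - t):
        if A[t] <= W[t + tau]:
            return tau
    return None  # not fully served within the horizon

# Cumulative arrivals and service (e.g., in packets) at integer times 0..5:
A = [0, 3, 5, 5, 6, 6]
W = [0, 1, 2, 3, 4, 5]
print(backlog(A, W, 2))   # -> 3 (packets queued at t = 2)
print(delay(A, W, 1, 6))  # -> 2 (a packet arriving at t = 1 waits 2 units)
```

Graphically, the delay is the horizontal distance between the arrival curve A and the service curve W, and the backlog is the vertical distance.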
2.1. Bound on Incoming Traffic
An upper bound on the amount of traffic generated by session i is given by the traffic envelope A*_i, defined as [8]:

A*_i(t) = sup_{s ≥ 0} A_i(s, s + t).
The traffic envelope provides an expression of the worst-case traffic that can be generated by a session (i.e., when the session is greedy) in a time interval of length t. In integrated services networks, knowledge of the traffic envelope of a traffic source is not only a prerequisite in determining QoS guarantees, but it can also be used in traffic policing and admission control. As naturally expected, the traffic envelope can have an arbitrary shape; for simplicity of implementation, it is highly desirable that as few scalar parameters as possible describe it. In the case of a leaky-bucket constrained session [8], two positive constants σ_i and ρ_i determine the affine function σ_i + ρ_i t; they are chosen such that A*_i(t) ≤ σ_i + ρ_i t and the affine bound is as "close" to A*_i as possible. In other words, the leaky bucket serves as an approximation of the traffic envelope of a specific session (note that the term "approximation" is rather loosely defined); the intuition behind the leaky bucket traffic descriptor is that ρ_i indicates approximately how much bandwidth should be allocated to session i, whereas σ_i indicates approximately the buffer space which should be reserved for the session so that
no overflows occur. To improve the quality of the approximation (which increases the bandwidth utilization inside the network), a single leaky bucket can be generalized to a multiple leaky-bucket [9], under which the traffic envelope is upper bounded by the concave piecewise-linear function min_j (σ_i^(j) + ρ_i^(j) t).
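As an illustration, a (σ, ρ) policer of this kind can indeed be written in a few lines. This is a generic token-bucket sketch under the usual conformance semantics; the class name and parameter values are illustrative, not from the paper:

```python
class TokenBucket:
    """Leaky-bucket policer: admits a burst of up to sigma units,
    with the bucket refilled at the sustained rate rho (units/time)."""
    def __init__(self, sigma, rho):
        self.sigma, self.rho = sigma, rho
        self.tokens = sigma     # bucket starts full
        self.last = 0.0

    def conforms(self, t, amount):
        """True if `amount` units arriving at time t conform to (sigma, rho)."""
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.sigma, self.tokens + self.rho * (t - self.last))
        self.last = t
        if amount <= self.tokens:
            self.tokens -= amount
            return True
        return False

# A multiple leaky-bucket conforms only if every (sigma_j, rho_j) pair does:
buckets = [TokenBucket(10, 1), TokenBucket(4, 3)]
ok = all(b.conforms(0.0, 4) for b in buckets)   # burst of 4 units at t = 0
print(ok)  # -> True: within both bucket constraints
```

The first bucket bounds the long-term rate with a generous burst allowance, while the second bounds short bursts more tightly, which is exactly the multiple-rate notion discussed here.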
Multiple leaky-buckets capture the notion of multiple transmission rates (over different periods of time), and provide a parsimonious way of describing real-world traffic sources (see, e.g., [9], and [10] for an algorithm to determine the parameters of the leaky buckets). From a practical point of view, leaky buckets are attractive as traffic policing mechanisms because they can be implemented in just a few lines of C or assembly code, and they have been incorporated in the ATM standard (in the GCRA algorithm; see, e.g., [11]). From a theoretical point of view, the operation of leaky buckets can be rigorously defined (see [12] and references therein): under the (min, +) algebra, a single leaky bucket can be realized by an IIR filter, whereas multiple leaky buckets can be realized by a filterbank of IIR filters.

2.2. Bounds on Provided Service by GPS
The scheduling algorithm, which is employed at the switch, determines the transmission order of the outgoing packets, and is primarily responsible for the bandwidth allocation to the sessions in the system (note
also that the scheduler can determine how much queueing buffer space is allocated to incoming packets). It is common to assume that the scheduler operates at a fixed rate r: e.g., the scheduler can take r ATM cells/sec from the incoming queues and transmit them on the outgoing link. The fixed-rate assumption is valid in most wireline network routers; in wireless networks (where the achievable physical-layer transmission rates fluctuate depending on the channel quality), the service rate remains fixed only if the physical layer is not adaptive (for example, if the physical layer does not employ any of the recently proposed adaptive modulation or adaptive coding techniques [13]). From a network-wide point of view, the scheduler should efficiently utilize the available resources. From a user perspective, the scheduler should guarantee to the sessions that: (i) network resources are allocated irrespective of the behavior of the other sessions (the isolation property of the scheduler), and (ii) whenever network resources become available (e.g., in under-loaded scenarios), the extra resources are distributed to the active sessions (the fairness property of the scheduler). The Generalized Processor Sharing (GPS) scheduling algorithm [2] is known for its perfect isolation and perfect fairness properties. According to [2], a GPS server operates at a fixed rate r and is work-conserving, i.e., the server is not idle if there are backlogged packets to be transmitted. Each session i is characterized by a positive constant φ_i, and the amount of service session i receives is proportional to φ_i, provided that the session is continuously backlogged. Formally, under GPS, if session i is continuously backlogged in the interval (t_1, t_2], then it holds that

W_i(t_1, t_2) / W_j(t_1, t_2) ≥ φ_i / φ_j
for all sessions j that have also received some service in this time interval. It follows that, in the worst case, the minimum guaranteed rate given to session i is $g_i = r\,\phi_i / \sum_{j=1}^{N} \phi_j$, where N is the maximum number of sessions that could be active in the system. Therefore, a lower bound for the amount of service that session i is guaranteed is $W_i(t_1, t_2) \ge g_i\,(t_2 - t_1)$. If session i is $(\sigma_i, \rho_i)$ leaky-bucket constrained, and the minimum guaranteed rate is such that $g_i \ge \rho_i$, then the maximum delay is $\sigma_i / g_i$ (note that this bound could be loose [2]). Effectively, GPS offers perfect isolation, because every session is guaranteed its portion of the bandwidth irrespective of the behavior of the other sessions. From this point of view, GPS is reminiscent of fixed-assignment TDMA or FDMA physical-layer multiplexing techniques. What is radically different about GPS is its perfect fairness property: whenever a session i generates traffic at a rate less than its guaranteed rate, the "extra" bandwidth is allocated to the other sessions proportionally to their respective weights. Let us clarify the operation of GPS in a wireless network using the following simple example: suppose that in a pico-cell three mobile users are assigned to the base-station. One (high-rate) user has a weight of $\phi_1 = 0.5$, and the two (low-rate) users have a weight of $\phi_2 = \phi_3 = 0.25$. When all users are active, the high-rate user will take 50% of the bandwidth, and each of the low-rate users 25%. If one of the low-rate users becomes silent, then the extra 25% of the bandwidth will be allocated to the other users: the high-rate user will now have about 67%, and the low-rate user about 33%. Note that the extra bandwidth can be used in a multitude of ways: for example, to increase the information rate, or to decrease the transmitted power through the use of a more powerful channel code. GPS belongs to the family of rate-based schedulers [14], which attempt to provide bandwidth guarantees to sessions (recall that bandwidth guarantees yield delay guarantees if the traffic envelope is known). When the sessions have a nominal, long-term average rate (the "sustainable cell rate" (SCR) in ATM terminology), the allocation of the weights $\phi_i$ appears to be straightforward. The situation becomes more complicated if we consider that network traffic could be bursty or self-similar. Though there has been work on the weight assignment problem (see, e.g., [15]), it is still considered quite challenging (see, e.g., [16]). Apart from the weight assignment problem, another important issue is the implementation of GPS: real-world routers operate at the packet or cell level, whereas GPS assumes a fluid model of traffic.
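The weighted sharing in the pico-cell example can be sketched in a few lines of Python (the function and session names are ours; shares are expressed as fractions of the link rate r):

```python
def gps_shares(weights, backlogged):
    """Fluid GPS: each backlogged session i receives the fraction
    phi_i / sum(phi_j over backlogged sessions) of the link rate r;
    the bandwidth of idle sessions is redistributed in proportion
    to the remaining weights."""
    total = sum(w for s, w in weights.items() if s in backlogged)
    return {s: (weights[s] / total if s in backlogged else 0.0)
            for s in weights}

# Pico-cell example: one high-rate user and two low-rate users.
weights = {"high": 0.5, "low1": 0.25, "low2": 0.25}

print(gps_shares(weights, {"high", "low1", "low2"}))  # 50% / 25% / 25%
print(gps_shares(weights, {"high", "low1"}))          # ~67% / ~33% once low2 is silent
```

The same function also illustrates the isolation property: the share of a backlogged session never drops below its all-active value, no matter what the other sessions do.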
Hence, in practice GPS needs to be approximated by a Packet Fair Queueing (PFQ) algorithm [2]. Starting with Weighted Fair Queueing (WFQ) [17], there has been a lot of work on approximating GPS (see, e.g., [18–22]). We will revisit some of this work in Section 4.2.

2.3. Bound on Service Provided by EDF and SC-Based Schedulers
Stamoulis and Giannakis

Despite its perfect isolation and fairness properties, GPS is not necessarily the panacea of the scheduling problem. There are applications where we are more interested in delay QoS guarantees and less in throughput QoS guarantees: such an application is voice (IP telephony), where it is tolerable if some packets are dropped, but it is not tolerable if packets arrive too late. In such cases, it is meaningful to pursue the design of schedulers which base their operation on criteria other than rate. The Earliest Deadline First (EDF) algorithm, a member of the family of deadline-based schedulers, attempts to provide maximum-delay guarantees to individual sessions. Under EDF, each session i is characterized by a delay bound $d_i$. When a packet of session i arrives at the scheduler at time t, the deadline $t + d_i$ is assigned to the packet, and the switch transmits packets in increasing order of their deadlines (thus the name "earliest deadline first"). As long as the following "schedulability condition" holds [23, 24]:

$$\sum_{i=1}^{N} A_i^*(t - d_i) \le r\,t \quad \text{for all } t \ge 0, \qquad (2)$$
and every source conforms to its traffic envelope (with $A_i^*(x) = 0$ for $x < 0$), then no packet will miss its transmission deadline. It follows from (2) that, given a set of traffic envelopes $\{A_i^*\}_{i=1}^N$, there are many vectors $\mathbf{d} = (d_1, \ldots, d_N)$ which satisfy (2). The set

$$\Theta = \{\mathbf{d} : (2) \text{ holds}\}$$

is called the schedulability region, which intuitively indicates the range of achievable QoS guarantees that can be provided to sessions. The importance of EDF stems (partially) from its optimality with respect to the schedulability region [23]: for an arbitrary scheduling algorithm $\pi$ with corresponding schedulability region $\Theta_\pi$, it holds that $\Theta_\pi \subseteq \Theta_{EDF}$.
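A packetized EDF scheduler reduces to simple heap bookkeeping; the sketch below (class and session names are ours) stamps each arriving packet with the deadline t + d_i and always transmits the packet with the earliest deadline:

```python
import heapq

class EDFScheduler:
    """Earliest Deadline First: each session i has a delay bound d_i;
    a packet arriving at time t is stamped with deadline t + d_i and
    packets are transmitted in increasing order of deadline."""
    def __init__(self, delay_bounds):
        self.d = delay_bounds          # session -> delay bound d_i
        self.heap = []                 # (deadline, arrival_time, session)

    def arrive(self, session, t):
        heapq.heappush(self.heap, (t + self.d[session], t, session))

    def transmit(self):
        """Pop the queued packet with the earliest deadline (None if idle)."""
        return heapq.heappop(self.heap) if self.heap else None

sched = EDFScheduler({"voice": 2, "data": 20})
sched.arrive("data", 0)
sched.arrive("voice", 1)   # arrives later, but has the tighter delay bound
deadline, _, session = sched.transmit()
# the voice packet (deadline 3) goes out before the data packet (deadline 20)
```

This makes concrete why EDF favors delay-sensitive sessions: priority is decided per packet by the residual time to the deadline, not by a long-term rate.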
From a practical point of view, if delay bounds are the only desirable QoS aspect, then EDF supports at least as many users as any other scheduling algorithm. As a result, EDF has been proposed as the basis of a network-wide deterministic QoS architecture (see, e.g., [25]). Note also that, recently, there has been work on statistical QoS guarantees using EDF (see, e.g., [26] and references therein). Going a step beyond rate-based (such as GPS) or deadline-based (such as EDF) schedulers, a SC-based scheduler attempts in [0, t] to provide service to session i greater than or equal to $S_i(t)$ [5, 6]. Formally, let the service curve $S_i(\cdot)$ be a nonnegative nondecreasing real function with $S_i(0) = 0$. Then a session i which starts transmitting at time $a_i$ is guaranteed the service curve $S_i$ if for any time $t \ge a_i$ there exists $s \in [a_i, t]$ such that

$$W_i(s, t) \ge S_i(t - s). \qquad (3)$$
It is straightforward to check whether the switch is capable of satisfying all the SCs by performing the following test [5]:

$$\sum_{i=1}^{N} S_i(t) \le r\,t \quad \text{for all } t \ge 0. \qquad (4)$$
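For piecewise linear SCs (the case emphasized later in the paper), the admission test of [5] can be checked exactly by evaluating the sum of the curves at their breakpoints and comparing the asymptotic slopes; a sketch, with function names and the breakpoint enumeration being ours:

```python
def sc_value(lines, t):
    """Evaluate a concave piecewise linear service curve
    S(t) = min_k (sigma_k + rho_k * t)."""
    return min(sigma + rho * t for sigma, rho in lines)

def breakpoints(lines):
    """Candidate times where a min-of-lines curve changes slope."""
    pts = {0.0}
    for (s1, r1) in lines:
        for (s2, r2) in lines:
            if r1 != r2:
                t = (s2 - s1) / (r1 - r2)
                if t > 0:
                    pts.add(t)
    return sorted(pts)

def sced_admissible(curves, r):
    """Admission test: sum_i S_i(t) <= r*t for all t >= 0.
    Since each curve is concave piecewise linear, it suffices to check
    every breakpoint of every curve, plus the asymptotic slopes."""
    pts = sorted({t for c in curves for t in breakpoints(c)})
    if any(sum(sc_value(c, t) for c in curves) > r * t + 1e-9 for t in pts):
        return False
    # asymptotic slopes: the sum of the final (smallest) slopes must not exceed r
    return sum(min(rho for _, rho in c) for c in curves) <= r + 1e-9

# two sessions, each guaranteed rate r/2, fit on a link of rate r = 1;
# raising one guarantee to 0.75r does not
print(sced_admissible([[(0.0, 0.5)], [(0.0, 0.5)]], 1.0))   # True
print(sced_admissible([[(0.0, 0.75)], [(0.0, 0.5)]], 1.0))  # False
```

Checking only breakpoints is sound here because the sum of concave piecewise linear curves is again concave piecewise linear, so any violation of the test shows up at a breakpoint or in the asymptotic slope.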
Equation (3) indicates that a SC-based scheduler guarantees a relatively more "relaxed" type of service than GPS or EDF: GPS attempts at every moment to achieve proportional bandwidth distribution, and EDF strives to transmit every packet before its deadline elapses. On the other hand, a SC-based scheduler only guarantees that eventually every session will receive its guaranteed amount of service; it thus imposes a much weaker condition than that of GPS. Implicitly, this condition leads to short-term unfairness and the punishment phenomenon, as we explain later on. On the other hand, a SC-based scheduler is very versatile with respect to the QoS guarantees that it can provide to sessions: because a SC can have an arbitrary shape (as long as it is a nondecreasing real function of t and (4) holds), the SC can be used to provide delay or bandwidth guarantees [5]. From a practical point of view, SC-based schedulers are attractive in wireless/wireline networks, because there could be low-rate users, high-rate users, and applications which could demand different service rates during various periods of time (for example, a video session). In such cases, allocating a fixed portion of the bandwidth to a session could result in considerable waste of network resources. A SC-based scheduler could lead to more efficient bandwidth utilization, because it allows time-varying service guarantees, which have the potential to meet the needs of multirate users and applications. From a theoretical point of view, SCs possess two important properties: (i) rate-based schedulers can be cast into the SC framework [10], and (ii) the Earliest Deadline First (EDF) algorithm can be modeled using service curves (recall that the importance of property (ii) stems from the proven optimality of EDF
in admitting the maximum number of sessions given delay bounds and traffic characterization [23]). To implement SCs in packet-switched networks, [5] proposes the SC-based Earliest Deadline first policy (SCED), where every packet upon its arrival is assigned a deadline; the deadline is basically a function of $S_i$ and the number of packets session i has transmitted up to time t. The packets are transmitted in increasing order of their deadlines. Apart from "resetting" [5], the assignment of deadlines to packets of a particular session in SCED does not take into consideration the behavior of the other sessions. This gives rise to the punishment feature of SCED, which is our main motivation for the development of the Deterministic Time-Varying PFQ (DTV-PFQ).
3. Punishment in SCED

As acknowledged in [5] and mentioned in [6], SCED exhibits the punishment property, which does not appear in GPS. Let us illustrate this with the following example (similar to an example in [2]): suppose that we are interested in providing minimum rate guarantees to a system with two sessions, "1" and "2". Both of them
are to receive 50% of the service rate. Using SCs, we could have $S_1(t) = S_2(t) = (r/2)\,t$, whereas under GPS we would have $\phi_1 = \phi_2$. We make the assumption that the scheduler operates in discrete time slots, and we let session "1" start transmitting at slot 0, whereas session "2" starts at slot 10. In the interval [0, 9], session "1" receives 100% of the available bandwidth: normally, source "1" starts transmitting packets at a rate no greater than r/2, because this is the rate which is guaranteed to the source. However, based on feedback information from the receiver (e.g., "packets arrived sooner than expected"), session "1" could decide to increase its rate. Figures 3 and 4 show the bandwidth which is allocated to sessions "1" and "2" in the interval [0, 30] under SCED and WFQ. To study how bandwidth is allocated, we make both sessions continuously backlogged by having them transmit at rate 1.5r. It is clearly illustrated that under WFQ, session "1" is allocated 100% of the bandwidth in [0, 9] and 50% of the bandwidth in [10, 30]. On the other hand, under SCED, session "1" receives 100% of the bandwidth in [0, 9], but in [10, 14] session "1" does not receive any service at all. Eventually, in [0, 30], session "1" receives at least 50% of the bandwidth, as advertised. Nevertheless, session "1" is punished for being greedy in [0, 9].
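The punishment effect can be reproduced with a toy slot-based simulation; the deadline rule below is a simplification of SCED (the n-th packet of a session is due when the session's service curve would have delivered n packets), not the exact algorithm of [5]:

```python
def simulate_sced(slots=30, start2=10, rate=0.5):
    """Toy SCED-like simulation: both sessions are continuously backlogged
    once active, each is guaranteed a fraction `rate` of the link, and the
    n-th packet of a session starting at time a gets deadline a + n/rate.
    One packet is transmitted per slot, earliest deadline first (ties
    favor session 1).  Returns the per-slot winner."""
    sent = {1: 0, 2: 0}
    start = {1: 0, 2: start2}
    log = []
    for t in range(slots):
        active = [s for s in (1, 2) if t >= start[s]]
        # deadline of the next, (sent+1)-th, packet of each active session
        winner = min(active, key=lambda s: (start[s] + (sent[s] + 1) / rate, s))
        sent[winner] += 1
        log.append(winner)
    return log

log = simulate_sced()
# session 1 monopolizes [0, 9], then is shut out once session 2 arrives,
# even though its long-run 50% guarantee is still met
print([t for t in range(10, 15) if log[t] == 1])  # -> [] (no service in slots 10-14)
```

Under this rule session 2's early packets carry much earlier deadlines than session 1's backlog, which is exactly the punishment transient the figures illustrate.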
Given the multirate capabilities of SC-based schedulers, and the potential for more efficient bandwidth utilization, the punishment phenomenon is our motivation for the design of DTV-PFQ. In wireless networks, where bandwidth is scarce and expensive, the punishment phenomenon is very undesirable: it makes little sense not to let a user transmit in future transmission rounds just because, in some of the previous rounds, some of the other users in the cell were not active. Moreover, the wireless channel is time-varying; occasionally users experience "bad" or "good" channels. It is counter-intuitive not to let a user transmit packets, especially when the channel is "good". On the other hand, in wireline networks, the punishment phenomenon does not encourage sessions to take advantage of extra bandwidth that may be available in the system. Before we present our design of DTV-PFQ, we remark that considering the same scheduling algorithm for both wireless and wireline environments facilitates seamless integration between wireless and wireline network components, and allows us to capitalize on the extensive published work on scheduling in wireline networks.
4. Deterministic Time-Varying PFQ
To overcome the performance limitations caused by the delay-bandwidth coupling of GPS, herein we propose a time-varying assignment of weights, which provides us with more degrees of freedom than the non-rate-proportional weighting of [15]. In our scheme, the weight which is assigned to a session i is a function of time, $\phi_i(t)$. In particular, we focus on the case where the variations in $\phi_i(t)$ are carried out in a deterministic fashion and, unlike [27, 28], we develop a framework with minimum overhead. In this paper we study the single-node case, leaving the study of the multiple-node case for future work. In this section, we first discuss why in theory a Time-Varying GPS is capable of implementing SCs of arbitrary shapes, but why it is difficult to realize them in practice. Then, we focus on piecewise linear SCs, having in mind that in integrated services networks, piecewise linear SCs can be used to provide multirate services [29], and delay guarantees to sessions constrained by multiple leaky buckets (recall from Section 2.1 that multiple leaky buckets have been proposed to model
real-life applications and shown to result in improved network utilization [9]). We design a practically realizable weight assignment algorithm which guarantees piecewise linear SCs while obviating the punishment phenomenon. Moreover, we discuss how prescribed delay bounds can be met by DTV-PFQ (as long as the delay bounds are EDF-feasible), and show that our newly introduced scheme is optimal in the schedulability-region sense (as long as the traffic envelopes are piecewise linear concave functions).

4.1. Time-Varying GPS, in Theory
Let us recall that every scheduling algorithm has to face two issues: (i) allocation of reserved bandwidth (which leads to isolation among sessions, and provision of worst-case guarantees), and (ii) distribution of extra (or available) bandwidth to active sessions (which defines the fairness properties of the scheduler). The SC framework generalizes GPS with respect to the first issue, as it allows a session to demand different service rates over different periods in time. However, a SC-based scheduler does not provide any analytical results about how the extra bandwidth is allocated to sessions. On the other hand, GPS strives to achieve perfect fairness by providing the same normalized service to all backlogged sessions at any time interval [30]: if sessions i, j are continuously backlogged in $[t_1, t_2]$, then $W_i(t_1, t_2)/\phi_i = W_j(t_1, t_2)/\phi_j$. In theory, a time-varying GPS system is capable of accommodating arbitrarily shaped SCs provided that (4) is satisfied. By setting $\phi_i(t) \propto dS_i(t)/dt$ we obtain the desired instantaneous service rate at any time t, and as a result each session receives its service curve, provided that $S_i(t)$ is differentiable (we will address the case when the derivative does not exist later on). Hence, the deterministic assignment of $\phi_i(t)$ allows GPS to provide services similar to a SC-based scheduler. Intuitively, the "equivalence" between a time-varying GPS system and a SC-based system should not come as a surprise. What perhaps does come as a surprise is the difficulty of implementing an arbitrary weight assignment in a packet-by-packet, practically realizable system. In a real system, the GPS scheduler assigns deadlines to incoming packets; these deadlines, as we will explain in Section 4.2, are given as a function of a quantity termed the "virtual time" of the system and the weight of the session. Let us consider a packet of session i that arrives at time $t_a$. This packet will be transmitted by the system at a later time $t_d \ge t_a$. At time $t_d$ the weight of the session is $\phi_i(t_d)$. However, the time instant $t_d$ is not known upon the packet arrival at $t_a$: $t_d$ does not only depend on the backlog of session i at time $t_a$, but also on the backlog and future packet arrivals of the other sessions. Therefore, unless the SC has a constant slope (which corresponds to a fixed weight assignment), it is quite challenging to assign deadlines to packets upon their arrival. A possible solution is the computationally expensive algorithm of [27], which uses a vector as virtual time in the system. However, in high-speed (gigabit) networks, the implementation overhead of the scheduler should be kept as small as possible. Though we have not solved the problem of GPS supporting arbitrarily shaped SCs, we will show in Section 4.3 that, fortunately, in the practically appealing case of piecewise linear SCs it is possible to implement DTV-PFQ with a computationally efficient deadline-weight assignment procedure. Before we present our approach in Section 4.3, let us revisit some known results on the packet-by-packet implementation of GPS and SC-based schedulers (as our DTV-PFQ algorithm capitalizes on these results).

4.2. PFQ and SCED, in Practice
Perhaps one of the most renowned shortcomings of GPS is that it relies on a fluid model of traffic and service: the traffic sent by the users is considered to be infinitely divisible, and the router is supposed to be able to service all active sessions simultaneously. However, non-cut-through switches operate at the packet or cell level (i.e., they transmit one packet of a single session at a time), a fact which implies that GPS needs to be approximated by a Packet Fair Queueing algorithm [2]. A PFQ algorithm attempts to allocate bandwidth proportionally to the session weights, i.e., it attempts to minimize the difference

$$\left| \frac{W_i(t_1, t_2)}{\phi_i} - \frac{W_j(t_1, t_2)}{\phi_j} \right|,$$

which serves as the "fairness index" of the algorithm (note that in GPS this difference is zero for sessions which are continuously backlogged; see, e.g., [31]). Many PFQ algorithms have been proposed in the literature, and their merits can be partially judged by the maximum value of the latency (which indicates how long it takes for a new session to start receiving service), and the implementation complexity (see, e.g., [30, 31], and references therein). All PFQ algorithms need to keep track of the amount of service that has been provided to the active sessions in the system. Central to almost all PFQ algorithms is the notion of "virtual time" or "system potential" (see [31] for a unifying framework), which is used for the assignment of deadlines: when the n-th packet of session i arrives at the switch at time $a_i^n$, the packet is time-stamped with a deadline $F_i^n$, which is a function of the virtual time $V(a_i^n)$ and $\phi_i$. In [2], the virtual time V(t) is a scalar quantity, which is incremented every time a packet arrives at or departs from the scheduler ($L_i^n$ denotes the packet length):

$$V(t_{j-1} + \tau) = V(t_{j-1}) + \frac{\tau}{\sum_{i \in B(t_{j-1})} \phi_i}, \quad \tau \le t_j - t_{j-1}. \qquad (5)$$
In (5) the sequence $\{t_j\}$ represents the times at which a packet arrives at or departs from the switch, and B(t) denotes the set of backlogged sessions at time t. The packets are transmitted by the switch in increasing order of their time-stamps

$$F_i^n = \max\{F_i^{n-1}, V(a_i^n)\} + \frac{L_i^n}{\phi_i}, \qquad (6)$$

and the virtual time measures the progress of the work in the system. Thus, the virtual time is primarily responsible for the absence of the punishment phenomenon in PFQ algorithms [2], as the amount of service assigned to a session is represented by a "session potential function" [30] (in [2], the state of session i is represented by the sequence $\{F_i^n\}$). Equations (5) and (6) indicate that the implementation of PFQ relies on the knowledge of the weights $\phi_i$. Our basic intuition is to utilize the basic results on GPS, and extend the service guarantees that can be provided to sessions by using time-varying weights $\phi_i(t)$. In this case, given a way to determine $\phi_i(t)$ for every session i, fairness-delay-rate guarantee results for PFQ can be translated to DTV-PFQ. In Section 4.3 we will describe how $\phi_i(t)$ can be defined deterministically, and how DTV-PFQ can be implemented in the case of piecewise linear SCs. Before we describe our approach, let us define formally the SCs which can be provided to individual sessions. References [5, 29] have provided computationally efficient scheduling algorithms for piecewise linear SCs defined as:

$$S_i(t) = \min_{1 \le k \le K_i} \left( \sigma_{i,k} + \rho_{i,k}\, t \right), \qquad (7)$$
where the constants $\sigma_{i,k}$ and $\rho_{i,k}$ are real, satisfying $\rho_{i,1} > \rho_{i,2} > \cdots > \rho_{i,K_i} \ge 0$ and $\sigma_{i,1} \le \sigma_{i,2} \le \cdots \le \sigma_{i,K_i}$.
Details on the selection of the leaky bucket parameters can be found in, e.g., [9, 10]. Herein, we allow the piecewise linear SC to be zero in the interval $[0, d_i]$ and to have an initial "burst" at $t = d_i$:

$$S_i(t) = \begin{cases} 0, & 0 \le t \le d_i, \\ \min_{1 \le k \le K_i} \left( \sigma_{i,k} + \rho_{i,k}\,(t - d_i) \right), & t > d_i. \end{cases}$$
We note that the introduction of the constant $d_i$ (which is zero in [5, 29]) allows our DTV-PFQ scheme to model an EDF scheduler for multiple leaky buckets. Indeed, if session i has traffic envelope $A_i^*(t) = \min_{1 \le k \le K_i}(\sigma_{i,k} + \rho_{i,k}\, t)$, then the allocation of the SC $S_i(t) = A_i^*(t - d_i)$ guarantees the delay bound $d_i$ as long as (4) holds (Fig. 5). Therefore, modulo approximation errors induced by any virtual-time based implementation [2], our DTV-PFQ scheme is capable of achieving approximately the schedulability region of an EDF scheduler (for multiple leaky-bucket constrained sessions).
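The bookkeeping of (5) and (6) can be sketched in a few lines, for unit-length packets and fixed weights (DTV-PFQ additionally varies the weights over time; the class and method names are ours):

```python
class PFQ:
    """Packetized fair queueing via virtual time, eqs. (5)-(6), with
    unit-length packets: V(t) grows at rate 1 / (sum of backlogged
    weights), and packet n of session i is time-stamped with
    F_i = max(F_i, V(arrival)) + 1/phi_i."""
    def __init__(self, phi):
        self.phi = phi
        self.V = 0.0
        self.F = {i: 0.0 for i in phi}   # last finishing tag per session
        self.queue = []                  # (tag, session) of queued packets

    def arrive(self, session):
        self.F[session] = max(self.F[session], self.V) + 1.0 / self.phi[session]
        self.queue.append((self.F[session], session))
        self.queue.sort()

    def serve_slot(self):
        """Advance virtual time by one slot of work and send the packet
        with the smallest time-stamp (None if the system is idle)."""
        backlogged = {s for _, s in self.queue}
        if not backlogged:
            return None
        self.V += 1.0 / sum(self.phi[s] for s in backlogged)
        tag, session = self.queue.pop(0)
        return session

pfq = PFQ({"a": 0.5, "b": 0.5})
for _ in range(4):
    pfq.arrive("a")
    pfq.arrive("b")
order = [pfq.serve_slot() for _ in range(8)]
# with equal weights the two backlogged sessions alternate
```

Because an arriving packet's tag starts from the *current* virtual time rather than from the session's past service, a session that was idle is neither punished nor rewarded, which is the property DTV-PFQ inherits.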
4.3. DTV-PFQ, in Practice

Our DTV-PFQ algorithm implements piecewise linear service curves as given by (7). In order to obviate the punishment phenomenon, we rely on the virtual time implementation of GPS and, in order to provide multiple rates to the same session, we rely on the algorithms of [5, 29]. However, the combination of GPS and [5, 29] is not straightforward, as we encounter three issues: (1) the weight $\phi_i(t)$ of session i assumes discrete values, which correspond to the different service rates that are to be given to the session; (2) Eq. (6) should be modified so that the delay factor $d_i$ is taken into account in the assignment of the deadline of the first packet of a session; (3) the "burst" in the service curve at time $d_i$ corresponds to an infinite slope, and it should be handled in an appropriate way. Issue (1) is not very difficult to handle: [5, 29] provide mechanisms to determine the service curve slope which corresponds to each incoming packet of session i. Therefore, when a packet of session i arrives, we can use the method of [5, 29] to determine the slope of $S_i$ which corresponds to that packet; then, we set $\phi_i(t)$ equal to that slope. Issue (2) is taken care of by adding to the deadline of the first packet of the session a term which is defined as:
where the remaining quantities, including an indicator function, are defined by:
As (8) indicates, this term accounts for the work-load that the scheduler should give to the other sessions: in the worst case all the other sessions are continuously active, and by integrating (5) we obtain a (conservative) estimate of the value of the virtual time at which session i should start receiving service from the scheduler. We remark that this term enables our DTV-PFQ scheduler to emulate the EDF scheduler; the intuition is that packets of a newly arrived session may be forced to wait a little bit longer in the scheduler. Interestingly enough, [30] indicates that when the potential of a new session is set higher than that of the sessions currently serviced, the new session may have to wait before it can be serviced. In our scheme, we set the potential (i.e., the virtual finishing time) of a new session higher than the current virtual time on purpose. Finally, issue (3) is handled by setting, "informally," $\phi_i(t) = \infty$ for the packets of the session which correspond to the sudden "burst". As (5) and (6) suggest, if $\phi_i = \infty$, the virtual time and the finishing time of the session are not updated. Note that the number of packets which belong to the initial burst is bounded; therefore, if the admission test (4) is satisfied, then isolation among sessions is preserved. Our DTV-PFQ system can be implemented using the algorithm in Fig. 6. Our algorithm extends the algorithms of [2, 5] by addressing issues (1), (2), and (3), while maintaining the same complexity as the algorithm of [5]. We remark that our scheme provides an exact algorithm for the weight assignment, unlike [15], which provides only a heuristic one. Also, unlike [16], which looks at the feasible region of weights for only two sessions, our scheme supports an arbitrary number of sessions.
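Issue (1), mapping each packet to the slope of the current segment of the service curve, can be sketched as follows (the segment representation is ours; [5, 29] give the exact per-packet rule):

```python
def current_slope(segments, work_done):
    """Given a piecewise linear service curve described as a list of
    (segment_length_in_service_units, slope) pairs, return the slope that
    applies after `work_done` units of service have been delivered; this
    slope becomes the session's time-varying weight phi_i(t)."""
    for length, slope in segments:
        if work_done < length:
            return slope
        work_done -= length
    return segments[-1][1]   # the final segment extends to infinity

# a session served at rate 2 for its first 10 units of work, rate 0.5 afterwards
segments = [(10, 2.0), (float("inf"), 0.5)]
print(current_slope(segments, 3))    # 2.0 (still on the first segment)
print(current_slope(segments, 12))   # 0.5 (second segment)
```

Indexing segments by cumulative *service* rather than by wall-clock time is what keeps the lookup well-defined even though the packet's transmission time is not known at arrival.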
An approach different from ours is taken in [28], which proposes the adaptive modification of the session weights: the dynamic programming algorithm of [28] attempts to minimize the queueing delays, but it sacrifices closed-form individual session guarantees, and requires traffic policing (because otherwise, greedy sessions would take over all the bandwidth).
(Note that we adopt the convention that if no $S_i(t)$ is differentiable at a specific t, then the overall value of the integrated quantity is 0 at that specific t.)

4.4. DTV-PFQ, Approximation Errors and Fairness Guarantees
In DTV-PFQ, approximation errors are caused by two factors: (i) the inability to assign the proper weight for the (rare) case of packets which cross the boundary of two leaky buckets, and (ii) the approximation error which is associated with PFQ algorithms [2] ($L_{max}$ is the maximum packet length). As a result, the actual SC which can be guaranteed to a session in the packetized system is equal to:

With respect to the fairness guarantees of DTV-PFQ, in the fluid model the extra bandwidth in the system is distributed according to the weights $\phi_i(t)$ (as a result, perfect instantaneous fairness is provided to sessions). In the packetized version, the service that a session receives is subject to the approximations induced by the virtual time implementation. Having in mind that in the packetized system the service of one session can lag or lead the fluid system by a bounded amount, over an interval where both sessions i, j are continuously backlogged we could have:

To illustrate the operation of our algorithm, we assume a system which supports two sessions ("1" and "2"). We simulate the system for 20 slots, and we make both sessions transmit at rate 2r (to keep them both continuously backlogged, which allows us to study the bandwidth allocation). Session "1" starts transmitting at slot 0, whereas session "2" starts transmitting at slot 4. Figures 7 and 8 illustrate the bandwidth allocation under SCED and under DTV-PFQ. We assume that the guaranteed shares of sessions "1" and "2" are 25% and 75% of the bandwidth, respectively. As Fig. 7 depicts, session "1" does not receive any service at all in [4, 11], being penalized for the extra bandwidth it received in [0, 9]. On the other hand, DTV-PFQ does not penalize session "1" (Fig. 8), because session "1" still receives service in [4, 11]. Under both schedulers, we observe that in the long run sessions "1" and "2" receive respectively 25% and 75% of the bandwidth; but it is in the transient that DTV-PFQ performs better than SCED.

5. Scheduler Implementation
In Section 4.2 we briefly mentioned the problems associated with implementing the fluid traffic-service model of GPS in a packet-by-packet system. However, in both wireline and wireless networks, when it comes to the practical implementation of schedulers, there is more than meets the eye. In wireline networks, the switch is either output-buffered or input-buffered, depending on where the incoming packets are queued; if the switching fabric of the router is N times faster than the speed of the N input links, then the server is fast enough to route packets to the same outgoing link. Though there are some existing 5 Gb/s routers with switching fabric fast enough to keep pace with the input links (see, e.g., [32]), this might not be true for all existing routers: given the recent advances in gigabit optical networking, the switching fabric may not be fast enough, and input-buffering may be employed. Unfortunately, in such a case, tracking a fluid scheduling policy (such as GPS) with a non-anticipative packet scheduling policy is still an open problem (see, e.g., [33] and references therein for a link to the multi-periodic TDMA satellite scheduling problem). The importance of input-output buffering notwithstanding, the implementation cost of PFQ
algorithms is a function of the following three factors (see, e.g., [34] and references therein): (i) the cost associated with keeping track of the system-potential function, (ii) the computational complexity of time-stamp sorting, and (iii) the storage and scalability issues related to recording the state of every session (see also [35] and references therein). On the other hand, in wireless networks, implementing PFQ poses a multitude of challenges. First of all, the wireless channel exhibits time-varying capacity and induces location-dependent errors, which raise issues with respect to short- versus long-term fairness guarantees (see, e.g., [36–39]). Second, unlike wireline networks, in the uplink scenario the scheduler (which is located at the base-station) does not know a priori the queue status of the mobile users; it is up to the MAC to communicate the bandwidth requests of the mobile users to the scheduler (see, e.g., [3] and references therein): essentially, the network and the physical layer can be tied together using a two-phase demand-assignment MAC protocol. During the first phase, each user m notifies the base-station about its intention to transmit; the base-station calculates the bandwidth allocation and notifies each user about the corresponding assignment. During the second phase, users transmit (at possibly different rates) multimedia information. Note that (i) the duration of the reservation phase can be reduced if users piggy-back their queue-lengths at prespecified intervals, and (ii) the overall scheme becomes much simpler in the downlink case, as the base-station is aware of the queue lengths of all data streams. Third, though in wireline networks the reliability of the physical medium allows the independent design of the physical/datalink/network layers, in wireless networks a lot is to be gained by a joint design approach across network layers.
A very interesting ramification of the joint design approach across layers is that in a multicode CDMA wireless network, the implementation of PFQ does not necessarily have to be based on virtual time. To make this notion concrete, let us focus on a multicode CDMA transmission/reception scheme with C available codes1 (these codes could be, e.g., Pseudo-Noise or Walsh-Hadamard), which can be allocated to mobile users. Each user m is allocated $c_m$ codes, and splits its information stream into $c_m$ substreams which are transmitted simultaneously using each of the codes: it readily follows that if user m has $q_m$ data symbols to transmit, then $q_m/c_m$ yields a measure of the time it takes to transmit them. Assuming that all C codes can be successfully used (an issue which hinges upon channel conditions, power control, and the detection mechanism), $c_m$ essentially denotes the bandwidth which is allocated to user m, and GPS is implemented by setting

$$c_m = \frac{C\,\phi_m}{\sum_{k \in \mathcal{A}(t)} \phi_k}, \qquad (11)$$

where $\mathcal{A}(t)$ is the set of active users (note that with C sufficiently large and frequent code re-assignments, the approximation error in implementing GPS using (11) can be made very small). As an illustrative example, we simulate a pico-cell where 3 mobile users communicate with the base-station. We assume C = 32, and that the traffic generated by each of the mobile users is Poisson with corresponding normalized rates. The weight assignment is $\phi_1 : \phi_2 : \phi_3 = 4 : 3 : 1$ (under which, if all three users have data to transmit, users 1, 2, 3 are assigned 16, 12, and 4 codes respectively), and we model user 3 as an on/off source in order to study the bandwidth re-assignments. We simulate the system for 100 transmission rounds, and Fig. 9 depicts the number of queued packets and the number of allocated codes per user for a range of the transmission rounds. We can clearly see that users 1, 2 are allocated more CDMA codes whenever user 3 is silent. Finally, let us comment on the need for exact implementation of the transmission schedule: it is well known that accurate approximation of GPS leads to smaller latency, decreases the burstiness of the outgoing traffic, and lowers the buffer requirements inside the network. To get a feel of how important scheduling becomes for real-time applications such as IP telephony, [40] reports that a maximum delay of 150 ms is tolerable for voice communication (using a PC): the 150 ms limit represents a delay budget which is to be distributed over the propagation delay (measured at 95 ms for the longest path in the continental United States, from Seattle, WA to Orlando, FL) and the queueing/processing delay. In the worst case, from the available 55 ms, 25 ms are to be allocated to the speech encoder, and the speech enhancement and silence suppression operations. Allowing for variable delay factors, only 10 ms are left for queueing delay in the backbone network. Hence, the scheduler does not only need to be fair, it also needs to be fast!
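The proportional code assignment of (11) amounts to a proportional split of the C codes over the active users plus rounding; the largest-remainder rounding below is our choice, not prescribed by the text:

```python
def allocate_codes(weights, active, C):
    """GPS over multicode CDMA, in the spirit of eq. (11): user m gets
    roughly C * phi_m / (sum of active weights) codes; fractional parts
    are settled by largest remainder so that exactly C codes are used."""
    total = sum(weights[m] for m in active)
    raw = {m: C * weights[m] / total for m in active}
    alloc = {m: int(raw[m]) for m in active}
    leftover = C - sum(alloc.values())
    # hand the remaining codes to the users with the largest fractional parts
    for m in sorted(active, key=lambda m: raw[m] - alloc[m], reverse=True)[:leftover]:
        alloc[m] += 1
    return alloc

weights = {1: 4, 2: 3, 3: 1}
print(allocate_codes(weights, {1, 2, 3}, 32))  # users 1, 2, 3 get 16, 12, 4 codes
print(allocate_codes(weights, {1, 2}, 32))     # user 3 silent: its codes are redistributed
```

Re-running the allocation each transmission round with the current active set reproduces the behavior of Fig. 9, where users 1 and 2 pick up codes whenever user 3 is silent.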
6. Conclusions
In this paper we have discussed issues related to network scheduling in both wireline and wireless environments, and we have presented a deterministic time-varying weight assignment procedure for PFQ-based switching systems. By supporting piecewise linear SCs, our scheme dispenses with the delay-bandwidth coupling and targets integrated services networks. Unlike existing SC-based algorithms, our time-varying PFQ scheme does not exhibit the punishment phenomenon and allows sessions to exploit the extra bandwidth in under-loaded networks. Future research avenues include the study of stochastic time-varying weight assignment procedures which take into account the probabilistic description of incoming traffic (for wired networks), as significant savings of statistical over deterministic services (inherently conservative in admitting sessions) have been reported in, e.g., [26]. On the other hand, an adaptive SC could model the time-varying channel conditions in wireless networks, and yield improved bandwidth utilization.

Note

1. Note that the capacity C is "soft" as it depends on channel conditions, power control, etc.
References

1. J. Chuang and N. Sollenberger, "Beyond 3G: Wideband Wireless Data Access Based on OFDM and Dynamic Packet Assignment," IEEE Communications Magazine, vol. 38, no. 7, 2000, pp. 78–87.
2. A.K. Parekh and R.G. Gallager, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case," IEEE/ACM Transactions on Networking, vol. 1, no. 3, 1993, pp. 344–357.
3. A. Stamoulis and G.B. Giannakis, "Packet Fair Queueing Scheduling Based on Multirate Multipath-Transparent CDMA for Wireless Networks," in Proc. of INFOCOM'2000, Tel Aviv, Israel, March 2000, pp. 1067–1076.
4. Z.-L. Zhang, D. Towsley, and J. Kurose, "Statistical Analysis of Generalized Processor Sharing Scheduling Discipline," IEEE Journal on Selected Areas in Communications, vol. 13, no. 6, 1995, pp. 1071–1080.
5. H. Sariowan, R.L. Cruz, and G.G. Polyzos, "SCED: A Generalized Scheduling Policy for Guaranteeing Quality-of-Service," IEEE Transactions on Networking, vol. 7, no. 5, 1999, pp. 669–684.
6. I. Stoica, H. Zhang, and T.S.E. Ng, "A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time and Priority Services," in Proc. ACM Sigcomm'97, 1997, pp. 249–262.
7. W. Willinger, M.S. Taqqu, R. Sherman, and D.V. Wilson, "Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level," IEEE/ACM Transactions on Networking, vol. 5, no. 1, 1997, pp. 71–86.
8. R.L. Cruz, "A Calculus for Network Delay, Part I: Network Elements in Isolation," IEEE Transactions on Information Theory, vol. 37, no. 1, 1991, pp. 114–131.
Stamoulis and Giannakis
9. D.E. Wrege, E.W. Knightly, H. Zhang, and J. Liebeherr, "Deterministic Delay Bounds for VBR Video in Packet-Switching Networks: Fundamental Limits and Practical Tradeoffs," IEEE Transactions on Networking, vol. 4, 1996, pp. 352–362.
10. J.Y. Le Boudec, "Application of Network Calculus to Guaranteed Service Networks," IEEE Trans. on Information Theory, vol. 44, no. 3, 1998, pp. 1087–1096.
11. http://www.atmforum.com/atmforum/market_awareness/whitepapers/6.html
12. C.-S. Chang, "On Deterministic Traffic Regulation and Service Guarantees: A Systematic Approach by Filtering," IEEE Trans. on Information Theory, vol. 44, no. 3, 1998, pp. 1097–1110.
13. A. Goldsmith, "Adaptive Modulation and Coding for Fading Channels," in Proceedings of the 1999 IEEE Information Theory and Communications Workshop, 1999, pp. 24–26.
14. H. Zhang, "Service Disciplines for Guaranteed Performance Service in Packet-Switching Networks," Proceedings of the IEEE, vol. 83, no. 10, 1995, pp. 1374–1396.
15. R. Szabo, P. Barta, J. Biro, F. Nemeth, and C.-G. Perntz, "Non Rate-Proportional Weighting of Generalized Processor Sharing Schedulers," in Proc. of GLOBECOM, Rio de Janeiro, Brazil, Dec. 1999, pp. 1334–1339.
16. A. Elwalid and D. Mitra, "Design of Generalized Processor Sharing Schedulers Which Statistically Multiplex Heterogeneous QoS Classes," in Proc. of INFOCOM'99, Piscataway, NJ, USA, 1999, pp. 1220–1230.
17. A. Demers, S. Keshav, and S. Shenker, "Analysis and Simulation of a Fair Queueing Algorithm," in Proc. ACM Sigcomm '89, 1989, pp. 1–12.
18. S.J. Golestani, "A Self-Clocked Fair Queueing Scheme for Broadband Applications," in Proc. IEEE Infocom '94, 1994, pp. 636–646.
19. P. Goyal, H.M. Vin, and H. Cheng, "Start-Time Fair Queueing: A Scheduling Algorithm for Integrated Services Packet Switching Networks," IEEE Transactions on Networking, vol. 5, no. 5, 1997, pp. 690–704.
20. D. Saha, S. Mukherjee, and S.K. Tripathi, "Carry-Over Round Robin: A Simple Cell Scheduling Mechanism for ATM Networks," IEEE Transactions on Networking, vol. 6, no. 6, 1998, pp. 779–796.
21. S. Shreedhar and G. Varghese, "Efficient Fair Queueing Using Deficit Round Robin," IEEE Transactions on Networking, vol. 4, no. 3, 1996, pp. 375–385.
22. D. Stiliadis and A. Varma, "Efficient Fair Queueing Algorithms for Packet Switched Networks," IEEE Transactions on Networking, vol. 6, no. 2, 1998, pp. 175–185.
23. L. Georgiadis, R. Guerin, and A. Parekh, "Optimal Multiplexing on a Single Link: Delay and Buffer Requirements," IEEE Trans. on Information Theory, vol. 43, 1997, pp. 1518–1535.
24. J. Liebeherr, D. Wrege, and D. Ferrari, "Exact Admission Control for Networks with a Bounded Delay Service," IEEE Transactions on Networking, vol. 4, 1996, pp. 885–901.
25. L. Georgiadis, R. Guerin, V. Peris, and K.N. Sivarajan, "Efficient Network QoS Provisioning Based on Per Node Traffic Shaping," IEEE/ACM Transactions on Networking, vol. 4, no. 4, 1996, pp. 482–501.
26. V. Sivaraman and F. Chiussi, "End-to-End Statistical Delay Guarantees Using Earliest Deadline First (EDF) Packet Scheduling," in Proc. of GLOBECOM, Rio de Janeiro, Brazil, Dec. 1999, pp. 1307–1312.
27. C.-S. Chang and K.-C. Chen, "Service Curve Proportional Sharing Algorithm for Service-Guaranteed Multiaccess in Integrated-Service Distributed Networks," in Proc. of GLOBECOM, Rio de Janeiro, Brazil, Dec. 1999, pp. 1340–1344.
28. H.-T. Ngin, C.-K. Tham, and W.-S. Sho, "Generalized Minimum Queueing Delay: An Adaptive Multi-Rate Service Discipline for ATM Networks," in Proc. of INFOCOM'99, Piscataway, NJ, USA, 1999, pp. 398–404.
29. D. Saha, S. Mukherjee, and S.K. Tripathi, "Multirate Scheduling of VBR Video Traffic in ATM Networks," IEEE Journal on Selected Areas in Communications, vol. 15, no. 6, 1997, pp. 1132–1147.
30. D. Stiliadis and A. Varma, "Rate-Proportional Servers: A Design Methodology for Fair Queueing Algorithms," IEEE Transactions on Networking, vol. 6, no. 2, 1998, pp. 164–174.
31. D. Stiliadis and A. Varma, "Latency-Rate Servers: A General Model for Analysis of Traffic Scheduling Algorithms," IEEE Transactions on Networking, vol. 6, no. 5, 1998, pp. 611–624.
32. D.C. Stephens and H. Zhang, "Implementing Distributed Packet Fair Queueing in a Scalable Switch Architecture," in Proc. of INFOCOM'98, 1998, pp. 282–290.
33. V. Tabatabaee, L. Georgiadis, and L. Tassiulas, "QoS Provisioning and Tracking Fluid Policies in Input Queueing Switches," in Proc. of INFOCOM'2000, Tel Aviv, Israel, March 2000, vol. 3, pp. 1624–1633.
34. F.M. Chiussi and A. Francini, "Implementing Fair Queueing in ATM Switches: The Discrete-Rate Approach," in Proceedings of INFOCOM'98, San Francisco, CA, USA, 1998, vol. 1, pp. 272–281.
35. Z.-L. Zhang, Z. Duan, and Y.T. Hou, "Virtual Time Reference System: A Unifying Scheduling Framework for Scalable Support of Guaranteed Services," IEEE Journal on Selected Areas in Communications, vol. 18, no. 12, 2000, pp. 2684–2695.
36. D.A. Eckhardt and P. Steenkiste, "Effort-Limited Fair (ELF) Scheduling for Wireless Networks," in Proc. of INFOCOM'2000, Tel Aviv, Israel, March 2000, pp. 1097–1106.
37. S. Lu, V. Bharghavan, and R. Srikant, "Fair Scheduling in Wireless Packet Networks," IEEE Transactions on Networking, vol. 7, no. 4, 1999, pp. 473–489.
38. P. Ramanathan and P. Agrawal, "Adapting Packet Fair Queueing Algorithms to Wireless Networks," in Fourth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom'98), New York, NY, 1998, pp. 1–9.
39. T.S.E. Ng, I. Stoica, and H. Zhang, "Packet Fair Queueing Algorithms for Wireless Networks with Location-Dependent Errors," in Proc. IEEE INFOCOM '98, New York, NY, 1998, vol. 3, pp. 1103–1111.
40. P. Goyal, A. Greenberg, C.R. Kalmanek, W.T. Marshall, P. Mishra, D. Nortz, and K.K. Ramakrishnan, "Integration of Call Signaling and Resource Management for IP Telephony," IEEE Network, vol. 13, no. 3, 1999, pp. 24–32.
41. D.E. Wrege, "Multimedia Networks with Deterministic Quality-of-Service Guarantees," Ph.D. Thesis, University of Virginia, Aug. 1996.
A. Stamoulis holds degrees in Computer Engineering (Diploma, University of Patras, 1995), Computer Science (Master, University of Virginia, 1997), and Electrical Engineering (Ph.D., University of Minnesota, 2000). Three days after receiving his Ph.D. degree, he joined the AT&T Shannon Laboratory as a Senior Technical Staff Member.
[email protected]
G.B. Giannakis received his Diploma in Electrical Engineering from the National Technical University of Athens, Greece, 1981. From September 1982 to July 1986 he was with the University of Southern
California (USC), where he received his M.Sc. in Electrical Engineering, 1983, M.Sc. in Mathematics, 1986, and Ph.D. in Electrical Engineering, 1986. After lecturing for one year at USC, he joined the University of Virginia in 1987, where he became a professor of Electrical Engineering in 1997. Since 1999 he has been a professor with the Department of Electrical and Computer Engineering at the University of Minnesota, where he now holds an ADC Chair in Wireless Telecommunications. His general interests span the areas of communications and signal processing, estimation and detection theory, time-series analysis, and system identification—subjects on which he has published more than 125 journal papers, 250 conference papers and two edited books. Current research topics focus on transmitter and receiver diversity techniques for single- and multi-user fading communication channels, redundant precoding and space-time coding for block transmissions, multicarrier, and wide-band wireless communication systems. G.B. Giannakis is the (co-) recipient of three best paper awards from the IEEE Signal Processing (SP) Society (1992, 1998, 2000). He also received the Society's Technical Achievement Award in 2000. He co-organized three IEEE-SP Workshops (HOS in 1993, SSAP in 1996 and SPAWC in 1997) and guest (co-) edited four special issues. He has served as an Associate Editor for the IEEE Trans. on Signal Proc. and the IEEE SP Letters, a secretary of the SP Conference Board, a member of the SP Publications Board and a member and vice-chair of the Statistical Signal and Array Processing Committee. He is a member of the Editorial Board for the Proceedings of the IEEE, he chairs the SP for Communications Technical Committee and serves as the Editor in Chief for the IEEE Signal Processing Letters. He is a Fellow of the IEEE, a member of the IEEE Fellows Election Committee, the IEEE-SP Society's Board of Governors, and a frequent consultant for the telecommunications industry.
[email protected]
Journal of VLSI Signal Processing 30, 89–105, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Monte Carlo Bayesian Signal Processing for Wireless Communications* XIAODONG WANG Electrical Engineering Department, Texas A&M University, College Station, TX 77843, USA RONG CHEN Information and Decision Science Department, University of Illinois at Chicago, Chicago, IL 60607, USA JUN S. LIU Statistics Department, Harvard University, Cambridge, MA 02138, USA
Received April 4, 2000; Accepted June 6, 2001
Abstract. Many statistical signal processing problems found in wireless communications involve making inference about the transmitted information data based on the received signals in the presence of various unknown channel distortions. The optimal solutions to these problems are often too computationally complex to implement by conventional signal processing methods. The recently emerged Bayesian Monte Carlo signal processing methods, a family of relatively simple yet extremely powerful numerical techniques for Bayesian computation, offer a novel paradigm for tackling wireless signal processing problems. These methods fall into two categories, namely, Markov chain Monte Carlo (MCMC) methods for batch signal processing and sequential Monte Carlo (SMC) methods for adaptive signal processing. We provide an overview of the theories underlying both the MCMC and the SMC. Two signal processing examples in wireless communications, the blind turbo multiuser detection in CDMA systems and the adaptive detection in fading channels, are provided to illustrate the applications of MCMC and SMC, respectively.
*This work was supported in part by the U.S. National Science Foundation (NSF) under grants CCR-9980599, DMS-0073651, DMS-0073601 and DMS-0094613.

1. Introduction

The projections of rapidly escalating demand for advanced wireless services such as multimedia present major challenges to researchers and engineers in the field of telecommunications. Meeting these challenges will require sustained technical innovations on many fronts. Of paramount importance in addressing these challenges is the development of suitable receivers for wireless multiple-access communications in nonstationary and interference-rich environments. While considerable previous work has addressed many aspects of this problem separately, e.g., single-user channel equalization, interference suppression for multiple-access channels, and tracking of time-varying channels, to name a few, the demand for a unified approach to addressing the problem of jointly combating the various impairments in wireless channels has only recently become significant. This is due in part to the fact that practical high data-rate multiple-access systems are just beginning to emerge in the marketplace. Moreover, most of the proposed receiver design solutions are suboptimal techniques whose performance is still far below that achieved by the theoretically optimal procedures. On the other hand, the optimal solutions mostly cannot be implemented in practice because of their prohibitively high computational complexities. The Bayesian Monte Carlo methodologies recently emerged in statistics have provided a promising new
Wang, Chen and Liu
paradigm for the design of low-complexity signal processing techniques with performance approaching the theoretical optimum for fast and reliable communication in the highly severe and dynamic wireless environment. Over the past decade, a large body of methods in the field of statistics has emerged based on iterative Monte Carlo techniques, and these methods are especially useful for computing the Bayesian solutions to the optimal signal reception problems encountered in wireless communications. These powerful statistical tools, when employed in the signal processing engines of the digital receivers in wireless networks, hold the potential of closing the giant gap between the performance of the current state-of-the-art wireless receivers and the ultimate optimal performance predicted by statistical communication theory. This advance will not only strongly influence the development of theory and practice of wireless communications and signal processing, but also yield tremendous technological and commercial impacts on the telecommunication industry. In this paper, we provide an overview of the theories and applications of Monte Carlo signal processing methods. These methods in general fall into two categories, namely, Markov chain Monte Carlo (MCMC) methods for batch signal processing, and sequential Monte Carlo (SMC) methods for adaptive signal processing. For each category, we outline the general theory and provide a signal processing example in wireless communications to illustrate its application. The rest of this paper is organized as follows. In Section 2, we discuss the general optimal signal processing problem under the Bayesian framework; in Section 3, we describe the MCMC signal processing techniques; in Section 4, we describe the SMC signal processing methods. We conclude the article in Section 5.

2. Bayesian Signal Processing

2.1. The Bayesian Framework
A typical statistical signal processing problem can be stated as follows: given some observations Y, we want to make statistical inference about some unknown quantities X. Typically the observations Y are functions of X and some unknown "nuisance" parameters, plus some noise. The following examples are three well-known wireless signal processing problems.
Example 1 (Equalization). Suppose we want to transmit binary symbols through a band-limited communication channel whose input-output relationship is given by
where represents the unknown complex channel response and the noise samples are i.i.d. Gaussian. The inference problem is to estimate the transmitted symbols based on the received signals; the nuisance parameters are the channel response and the noise variance.

Example 2 (Multiuser Detection). In code-division multiple-access (CDMA) systems, multiple users transmit information symbols over the same physical channel through the use of distinct spreading waveforms. Suppose there are K users in the system. The k-th user employs a spreading waveform of the form
and transmits binary information symbols. The received signal in this system is given by
where is the unknown complex amplitude of the k-th user and the noise vectors are i.i.d. Gaussian. The inference problem is to estimate the transmitted multiuser symbols based on the received signals; the nuisance parameters are the user amplitudes and the noise variance.

Example 3 (Fading Channel). Suppose we want to transmit binary symbols through a fading channel whose input-output relationship is given by
where represents the unknown Rayleigh fading process, which can be modeled as the output of a lowpass filter of order r driven by white Gaussian noise,
where D is the back-shift operator and the driving sequence is white complex Gaussian noise with independent real and imaginary components. The inference problem is to estimate the transmitted symbols based on the received signals; the nuisance parameters are the fading process and the noise parameters.

In the Bayesian approach to a statistical signal processing problem, all unknown quantities are treated as random variables with some prior distribution, and the Bayesian inference is made based on the joint posterior distribution of these unknowns.
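As a toy numerical illustration of Bayesian inference in the presence of a nuisance parameter (this example is ours, not the paper's): a single BPSK symbol x in {−1, +1} is observed through y = a·x + n, where the positive amplitude a is an unknown nuisance parameter and n is Gaussian noise. The posterior P(x | y) is obtained by marginalizing a over a discrete grid of equally likely prior values.

```python
import math

def symbol_posterior(y, amp_grid, sigma=1.0):
    """P(x = +1 | y) for y = a*x + n, with the nuisance amplitude a
    marginalized over a uniform prior on amp_grid (toy illustration)."""
    def lik(x):
        # p(y | x) = sum over a of p(y | x, a) * P(a); Gaussian noise of std sigma
        return sum(math.exp(-(y - a * x) ** 2 / (2 * sigma ** 2))
                   for a in amp_grid) / len(amp_grid)
    lp, lm = lik(+1), lik(-1)
    return lp / (lp + lm)  # equal symbol priors P(x = +1) = P(x = -1) = 1/2

p = symbol_posterior(1.3, amp_grid=[0.5, 1.0, 1.5])
print(round(p, 3))  # well above 0.5: a positive observation favors x = +1
```

Already in this scalar case the marginalization is a sum over the amplitude prior; in the multiuser and fading examples the analogous sums and integrals run over all symbols and channel parameters jointly, which is what makes the exact computation intractable.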
Example 2 (continued). Consider the multiuser detection problem in Section 2.1. Denote and Then (3) can be written as
The optimal batch processing procedure for this problem is as follows. Let Assume that the unknown quantities and X are independent of each other and have prior distributions and p(X), respectively. Since is a sequence of independent Gaussian vectors, the joint posterior distribution of these unknown quantities based on the received signal Y takes the form of
Note that typically the joint posterior distribution is completely known up to a normalizing constant. If we are interested in making inference about the i-th component of X, say, we need to compute for some function h(·), then this is given by
The a posteriori probabilities of the transmitted symbols can then be calculated from the joint posterior distribution (11). Neither of these computations is easy in practice.

2.2. Batch Processing versus Adaptive Processing
Depending on how the data are processed and the inference is made, most signal processing methods fall into one of two categories: batch processing and adaptive (i.e., sequential) processing. In batch signal processing, the entire data block Y is received and stored before it is processed, and the inference about X is made based on the entire data block Y. In adaptive processing, however, inference is made sequentially (i.e., on-line) as the data are being received. For example, at time t, after a new sample is received, an update on the inference about some or all elements of X is made. In this paper, we focus on optimal signal processing under the Bayesian framework for both batch processing and adaptive processing. We next illustrate batch and adaptive Bayesian signal processing, respectively, by two examples.
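The batch/sequential distinction can be made concrete with a scalar toy problem (illustrative code, not from the paper): estimating a mean from noisy samples either from the whole stored block, or recursively as each sample arrives; both yield the same final answer, but the sequential form never stores the block.

```python
def batch_mean(block):
    """Batch processing: the whole data block is stored, then processed at once."""
    return sum(block) / len(block)

def sequential_means(stream):
    """Adaptive processing: update the estimate as each sample arrives."""
    est, n, history = 0.0, 0, []
    for y in stream:
        n += 1
        est += (y - est) / n  # recursive update; no block storage needed
        history.append(est)
    return history

data = [1.0, 3.0, 2.0, 6.0]
print(batch_mean(data))            # 3.0
print(sequential_means(data)[-1])  # 3.0 (same final estimate, computed on-line)
```

The same trade-off carries over to the Bayesian setting: MCMC methods below operate on the stored block, while SMC methods propagate a running approximation of the posterior.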
Clearly the computation in (13) involves multidimensional integrals, which is certainly infeasible for any practical implementation.

Example 3 (continued). Consider the fading channel problem with optimal adaptive processing. The system defined by (4) and (5) can be rewritten in state-space form, which is instrumental in developing the sequential signal processing algorithm. Define
Denote
By (5) we then have
where
and
Because of (14), the fading coefficient sequence can be written as
Then we have the following state-space model for the system defined by (4) and (5):
We now look at the problem of on-line estimation of the symbol based on the received signals up to time t. This problem is one of making Bayesian inference with respect to the posterior distribution
For example, an on-line symbol estimation can be obtained from the marginal posterior distribution
Again we see that direct implementation of the optimal sequential Bayesian inference is computationally prohibitive. It is seen from the above discussions that although the Bayesian procedures achieve the optimal performance in terms of the minimum mean squared error on symbol detection, they exhibit prohibitively high computational complexity and thus are not implementable in practice. Various suboptimal solutions have been proposed to tackle these signal processing problems. For example, for the equalization problem, one may first send a training sequence to estimate the channel response; symbol detection can then be carried out based on the estimated channel [1]. For the multiuser detection problem, one may apply linear or nonlinear interference cancelation methods [2]. And for the fading channel problem, one may use a combination of training and a decision-directed approach to estimate the fading process, based on which symbol detection can be made [3]. Although these suboptimal approaches provide reasonable performance, they fall far short of the best attainable performance. Moreover, the use of a training sequence in a communication system incurs a significant loss in spectral efficiency. The recently developed Monte Carlo methods for Bayesian computation have provided a viable approach to solving many optimal signal processing problems (such as the ones mentioned above) at a reasonable computational cost.

2.3. Monte Carlo Methods
In many Bayesian analyses, the computation involved in eliminating the nuisance parameters and missing data is so difficult that one has to resort to some analytical or numerical approximations. These approximations were often case-specific and were the bottleneck that prevented the Bayesian method from being widely used. In the late 1980s and early 1990s, statisticians discovered that a wide variety of Monte Carlo strategies can be applied to overcome the computational difficulties encountered in almost all likelihood-based inference procedures. Soon afterwards, this "rediscovery" of the Monte Carlo method as one of the most versatile and powerful computational tools began to invade other quantitative fields such as artificial intelligence, computational biology, engineering, financial modeling, etc. [4]. Suppose we can generate random samples (either independent or dependent)
from the joint distribution (6). We can then approximate the marginal distribution of any component by the empirical distribution (i.e., the histogram) of the corresponding component of the samples, and approximate the posterior mean (8) by the corresponding sample average.
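This sample-average approximation can be sketched in a few lines (illustrative code, not from the paper; a simple Gaussian stands in for the posterior so that the true answer is known):

```python
import random

random.seed(0)

# Draw m samples from the "posterior" -- here a Gaussian with mean 2 and
# unit variance -- and approximate the posterior mean E[x] by the sample
# average, exactly as described above.
m = 20000
samples = [random.gauss(2.0, 1.0) for _ in range(m)]
post_mean = sum(samples) / m  # Monte Carlo estimate of the posterior mean
print(round(post_mean, 2))    # close to the true mean 2.0
```

The error of such an estimate shrinks like one over the square root of the number of (effective) samples, which is why the sampling mechanism, MCMC or SMC, is the heart of the method.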
Most Monte Carlo techniques fall into one of the following two categories: Markov chain Monte Carlo (MCMC) methods, corresponding to batch processing, and sequential Monte Carlo (SMC) methods, corresponding to adaptive processing.

3. Markov Chain Monte Carlo Signal Processing

3.1. General Markov Chain Monte Carlo Algorithms
Markov chain Monte Carlo (MCMC) is a class of algorithms that allow one to draw (pseudo-) random samples from an arbitrary target probability distribution, p(x), known up to a normalizing constant. The basic idea behind these algorithms is that one can achieve the sampling from p by running a Markov chain whose equilibrium distribution is exactly p. Two basic types of MCMC algorithms, the Metropolis algorithm and the Gibbs sampler, have been widely used in diverse fields. The validity of both algorithms can be proved by basic Markov chain theory.

3.1.1. Metropolis–Hastings Algorithm. Let p(x) = c exp{–f(x)} be the target probability distribution from which we want to simulate random draws. The normalizing constant c may be unknown to us. Metropolis et al. [5] first introduced the fundamental idea of evolving a Markov process in Monte Carlo sampling. Their algorithm is as follows. Starting with any configuration, the algorithm evolves from the current state to the next state as follows:

Algorithm 1 (Metropolis Algorithm—Form I).

1. A small, random, and "symmetric" perturbation of the current configuration is made. More precisely, a candidate is generated from a symmetric proposal function T, with T(x, y) = T(y, x) for all x and y.
2. The "gain" (or loss) of an objective function (corresponding to log p(x), i.e., to –f(x) up to an additive constant) resulting from this perturbation is computed.
3. A random number u is generated independently.
4. The new configuration is accepted if log(u) is smaller than or equal to the "gain" and rejected otherwise.

Heuristically, this algorithm is constructed based on a "trial-and-error" strategy. Metropolis et al. restricted their choice of the "perturbation" function to be symmetric, i.e., T(x, y) = T(y, x). Intuitively, this means that there is no "flow bias" at the proposal stage. Hastings generalized the choice of T to all those that satisfy the property that T(x, y) > 0 if and only if T(y, x) > 0 [6]. This generalized form can be simply stated as follows. At each iteration:

Algorithm 2 (Metropolis–Hastings Algorithm—Form II).

1. Propose a random "perturbation" of the current state, i.e., generate a candidate y from a transition function T(x, y), which is nearly arbitrary (of course, some are better than others in terms of efficiency) and is completely specified by the user.
2. Compute the Metropolis ratio r = p(y)T(y, x) / [p(x)T(x, y)].
3. Generate a random number u ~ uniform(0, 1). Accept y as the new state if u ≤ r, and retain the current state otherwise.

It is easy to prove that the M–H transition rule results in an "actual" transition function A(x, y) (it is different from T because an acceptance/rejection step is involved) that satisfies the detailed balance condition p(x)A(x, y) = p(y)A(y, x),
which necessarily leads to a reversible Markov chain with p(x) as its invariant distribution. The Metropolis algorithm has been extensively used in statistical physics over the past 40 years and is the cornerstone of all MCMC techniques recently adopted and generalized in the statistics community. Another class of MCMC algorithms, the Gibbs sampler [7], differs from the Metropolis algorithm in that it uses conditional distributions based on p(x) to construct Markov chain moves.

3.1.2. Gibbs Sampler. Suppose x = (x_1, . . . , x_d), where each component x_i is either a scalar or a vector. In the Gibbs sampler, one randomly or systematically chooses a coordinate, say x_i, and then updates its value with a new sample drawn from the conditional distribution of x_i given all the remaining components. Algorithmically, the Gibbs sampler can be implemented as follows:

Algorithm 3 (Random Scan Gibbs Sampler). Suppose the chain is currently at x. Then

1. Randomly select i from the index set {1, . . . , d} according to a given probability vector.
2. Draw x_i from its conditional distribution given the other components, and leave those other components unchanged.

Algorithm 4 (Systematic Scan Gibbs Sampler). Let the current state be x. For i = 1, . . . , d, we draw x_i from its conditional distribution given the most recent values of all the other components.

It is easy to check that every individual conditional update leaves p invariant. Suppose the current state is distributed according to p. Then the components left untouched by the update follow their marginal distribution under p, which implies that the joint distribution after one update is unchanged at p.

The Gibbs sampler's popularity in the statistics community stems from its extensive use of conditional distributions in each iteration. The data augmentation method [8] first linked the Gibbs sampling structure with missing data problems and the EM-type algorithms. The Gibbs sampler was further popularized by [9], where it was pointed out that the conditionals needed in Gibbs iterations are commonly available in many Bayesian and likelihood computations. Under regularity conditions, one can show that the Gibbs sampler chain converges geometrically and that its convergence rate is related to how the variables correlate with each other [10]. Therefore, grouping highly correlated variables together in the Gibbs update can greatly speed up the sampler.

3.1.3. Other Techniques. A main problem with all the MCMC algorithms is that they may, for some problems, move very slowly in the configuration space or may be trapped in a local mode. This phenomenon is generally called slow-mixing of the chain. When the chain is slow-mixing, estimation based on the resulting Monte Carlo samples becomes very inaccurate. Some recent techniques suitable for designing more efficient MCMC samplers include parallel tempering [11], the multiple-try method [12], and evolutionary Monte Carlo [13].
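The two samplers of this section can be sketched in a few lines for a toy bivariate Gaussian target with correlation RHO (illustrative code; the function names and the target are ours, not the paper's). The Metropolis chain uses a symmetric random-walk proposal, so the acceptance test reduces to comparing log-densities; the Gibbs chain draws each coordinate from its exact Gaussian conditional.

```python
import math
import random

random.seed(1)
RHO = 0.8  # target: zero-mean bivariate Gaussian, unit variances, correlation RHO

def log_p(x, y):
    # log of the target density p(x, y), up to an additive constant
    return -(x * x - 2 * RHO * x * y + y * y) / (2 * (1 - RHO ** 2))

def metropolis(n, step=0.8):
    """Random-walk Metropolis: symmetric proposal, accept w.p. min(1, p(new)/p(old))."""
    x = y = 0.0
    chain = []
    for _ in range(n):
        xp = x + random.uniform(-step, step)
        yp = y + random.uniform(-step, step)
        dlp = log_p(xp, yp) - log_p(x, y)
        if dlp >= 0 or random.random() < math.exp(dlp):
            x, y = xp, yp  # accept; otherwise keep the current state
        chain.append((x, y))
    return chain

def gibbs(n):
    """Systematic-scan Gibbs: draw each coordinate from its exact conditional."""
    x = y = 0.0
    chain = []
    sd = math.sqrt(1 - RHO ** 2)
    for _ in range(n):
        x = random.gauss(RHO * y, sd)  # x | y ~ N(RHO*y, 1 - RHO^2)
        y = random.gauss(RHO * x, sd)  # y | x ~ N(RHO*x, 1 - RHO^2)
        chain.append((x, y))
    return chain

for chain in (metropolis(20000), gibbs(20000)):
    xs = [x for x, _ in chain]
    print(round(sum(xs) / len(xs), 2))  # sample means; both should be near 0
```

With strongly correlated coordinates such as these, the Gibbs chain mixes noticeably better than the coordinate-blind random walk, which is exactly the motivation for the grouping advice and the remedies listed under "Other Techniques" above.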
3.2. Application of MCMC—Blind Turbo Multiuser Detection
In this section, we illustrate the application of MCMC signal processing (in particular, the Gibbs sampler) to the multiuser detection problem (cf. Example 2), taken from [14]. Specifically, we show that the MCMC technique leads to a novel blind turbo multiuser receiver structure for coded CDMA systems. The block diagram of the transmitter end of such a system is shown in Fig. 1. The binary information bits for user k are encoded using some channel code (e.g., block code, convolutional code or turbo code), resulting in a code bit stream. A code-bit interleaver is used to reduce the influence of the error bursts at the input of the channel decoder. The interleaved code bits are then mapped to binary symbols, yielding a symbol stream. Each data symbol is then modulated by a spreading waveform and transmitted through the channel. The received signal is given by (3). The task of the receiver is to decode the transmitted information bits of each user k. We consider a "turbo" receiver structure which iterates between a multiuser detection stage and a decoding stage to successively refine the performance. Such a turbo processing scheme has received considerable recent attention in the fields of coding and signal processing, and has been successfully applied to many problems in these areas [15]. The turbo multiuser receiver structure is shown in Fig. 2. It consists of two stages: a Bayesian multiuser detector, followed by a bank of maximum a posteriori probability (MAP) channel decoders. The two stages are separated by deinterleavers and interleavers. The Bayesian multiuser detector computes the a posteriori symbol probabilities. Based on these, we first compute the a posteriori log-likelihood ratios (LLRs) of a transmitted "+1" symbol and a transmitted "–1" symbol,
where the second term in (26), denoted by represents the a priori LLR of the code bit which is computed by the channel decoder in the previous iteration, interleaved and then fed back to the Bayesian multiuser detector. (The superscript p indicates the quantity obtained from the previous iteration). For the first iteration, assuming equally likely code bits, i.e., no prior information available, we then have The first term in (26), denoted by represents the extrinsic information delivered by the Bayesian multiuser detector, based on the received signals Y, the structure of the multiuser signal given by (9) and
the prior information about all other code bits. The extrinsic information, which is not influenced by the a priori information provided by the channel decoder, is then reverse interleaved and fed into the channel decoder. Based on the extrinsic information of the code bits and the structure of the channel code, the k-th user's MAP channel decoder [16] computes the a posteriori LLR of each code bit, based on which the extrinsic information is extracted and fed back to the Bayesian multiuser detector as the a priori information in the next iteration. From the above discussion, it is seen that the key computation involved in the turbo multiuser receiver is calculating the a posteriori symbol probabilities. Note that we do not assume that the receiver has knowledge of the channels, e.g., the user amplitudes or the noise variance. Hence the receiver is termed "blind". In the Bayesian paradigm, all unknowns are considered as random quantities with some prior distributions. We first briefly summarize the principles for choosing those priors, as follows.
Noninformative Priors: In Bayesian analysis, prior distributions are used to incorporate prior knowledge about the unknown parameters. When such prior knowledge is limited, the prior distributions should be chosen so that they have minimal impact on the posterior distribution. Such priors are termed noninformative. The rationale for using noninformative prior distributions is to "let the data speak for themselves", so that inferences are unaffected by information external to the current data [17, 18].

Conjugate Priors: Another consideration in the selection of prior distributions is computational simplicity. To that end, conjugate priors are usually used to obtain simple analytical forms for the resulting posterior distributions. The property that the posterior distribution belongs to the same distribution family as the prior distribution is called conjugacy. The conjugate family of distributions is mathematically convenient in that the posterior distribution follows a known parametric form [17, 18]. Finally, to make the Gibbs sampler more computationally efficient, the priors should also be chosen such that the conditional posterior distributions are easy to simulate. For an introductory treatment of the Bayesian philosophy, including the selection of prior distributions, see the textbooks [17–19]. An account of criticism of the Bayesian approach to data analysis can be found in [20, 21], and a defense of "The Bayesian Choice" can be found in [22].

We next outline the MCMC procedure for solving this problem of blind Bayesian multiuser detection using the Gibbs sampler. Consider the signal model (9). The unknown quantities a, the noise variance, and X are regarded as realizations of random variables with the following prior distributions. For the unknown amplitudes a, a truncated Gaussian prior distribution is assumed,
where the indicator is 1 if all elements of a are positive and 0 otherwise. Note that a large value of the prior covariance corresponds to a less informative prior. For the noise variance, an inverse chi-square prior distribution is assumed; a small value of its degrees-of-freedom parameter corresponds to a less informative prior. Finally, since the symbols are assumed to be independent, the prior distribution p(X) can be expressed in terms of the prior symbol probabilities, with an indicator selecting the probability of each symbol value.

The blind Bayesian multiuser detector based on the Gibbs sampler is summarized as follows. For more details and some other related issues, see [14].

Algorithm 5 (Blind Bayesian Multiuser Detector (B²MUD)). Given initial values of the unknown quantities drawn from their prior distributions, for j = 1, 2, ...:

1. Draw the amplitude sample from its conditional posterior distribution, with the mean and covariance given in (31).
2. Draw the noise-variance sample from its conditional posterior distribution.
3. For t = 1, ..., n and k = 1, ..., K, draw each symbol sample based on the corresponding posterior probability ratio.

To ensure convergence, the above procedure is usually carried out for a number of iterations, and samples from the last v iterations are used to calculate the Bayesian estimates of the unknown quantities. In particular, the a posteriori symbol probabilities are approximated by the empirical frequencies of the drawn symbols, as in (36).

3.2.1. Simulation Example. We now illustrate the performance of the blind turbo multiuser receiver. We consider a 5-user (K = 5) CDMA system with processing gain N = 10. All users have the same amplitudes. The channel code for each user is a rate-1/2, constraint length-5 convolutional code (with generators 23 and 35 in octal notation). The interleaver of each user is independently and randomly generated, and fixed for all simulations. The block size of the information bits is 128 (i.e., the code bit block size is n = 256). In computing the symbol probabilities, the Gibbs sampler is run for 100 iterations on each data block, with the first 50 iterations as the "burn-in" period. The symbol posterior probabilities are computed according to (36). Figure 3 illustrates the bit error rate performance of the blind turbo multiuser receiver for user 1 and user 3. The code bit error rate at the output of the blind Bayesian multiuser detector is plotted for the first three iterations; the curve corresponding to the first iteration is the uncoded bit error rate at the detector output. The uncoded and coded bit error rate curves in a single-user additive white Gaussian noise (AWGN) channel are also shown in the same figure (as the dash-dotted and dashed lines, respectively). It is seen that, by incorporating the extrinsic information provided by the channel decoder as the prior symbol probabilities, the performance of the blind turbo multiuser receiver approaches that of the single user in an AWGN channel within a few iterations.

4. Sequential Monte Carlo Signal Processing

4.1. General Sequential Monte Carlo Algorithms

4.1.1. Sequential Importance Sampling. Importance sampling is perhaps one of the most elementary, well-known, and versatile Monte Carlo techniques. Suppose we want to estimate E{h(x)} (with respect to p) using the Monte Carlo method. Since directly sampling from p(x) is difficult, we want to find a trial distribution, q(x), which is reasonably close to p but is easy to draw samples from. Because of the simple identity relating expectations under p and q, in which the ratio w(x) = p(x)/q(x) is the importance weight, we can approximate (38) by
where the samples are random draws from q. In using this method, we only need to know the expression of p(x) up to a normalizing constant, which is the case for all the signal processing problems we have studied. Each sample is then said to be properly weighted by its importance weight with respect to p. However, it is usually difficult to design a good trial density function in high-dimensional problems. One of the most useful strategies in such problems is to build up the trial density sequentially. Suppose we can decompose x into components, each of which may be multidimensional. Then our trial density can be constructed as
by which we hope to obtain some guidance from the target density while building up the trial density. Corresponding to the decomposition of x, we can rewrite the target density as
and the importance weight as
Equation (43) suggests a recursive way of computing and monitoring the importance weight; that is, by accumulating an incremental factor at each step, we have
The final accumulated weight is then equal to the weight in (43). Potential advantages of this recursion and (42) are (a) we can stop generating further components of x if the partial weight derived from the sequentially generated partial sample is too small, and (b) we can take advantage of the intermediate marginal distributions in designing the trial density. In other words, the marginal distributions can be used to guide the generation of x. Although the idea sounds interesting, the trouble is that expressions (42) and (43) are not useful at all! The reason is that in order to get (42), one needs to have the marginal distribution
which is perhaps more difficult than the original problem. In order to carry out the sequential sampling idea, we need to find a sequence of "auxiliary distributions", such that each one is a reasonable approximation to the corresponding marginal distribution for t = 1, ..., d – 1. We want to emphasize that these auxiliary distributions are only required to be known up to a normalizing constant, and they only serve as "guides" for our construction of the whole sample. The sequential importance sampling (SIS) method can then be defined as the following recursive procedure.

Algorithm 6 (Sequential Importance Sampling (SIS)). For t = 2, ..., d:

1. Draw the t-th component from the trial distribution.
2. Compute the update factor and multiply it into the running weight. Here the update factor is called an incremental weight.
It is easy to show that the partial sample at step t is properly weighted with respect to the auxiliary distribution, provided that the sample at step t – 1 is properly weighted with respect to the previous one. Thus, the whole sample x obtained by SIS is properly weighted with respect to the target density p(x). The auxiliary distributions can also be used to help construct a more efficient trial distribution: we can build the trial density in light of them. For example, one can choose (if possible) the trial distribution to be the corresponding conditional of the auxiliary distribution. Then the incremental weight becomes
By the same token, we may also want the auxiliary distribution to be the corresponding marginal of p, which involves integrating out the remaining components. When we observe that the partial weight is getting too small, we may want to reject the sample half-way and restart. In this way we avoid wasting time on generating samples that are doomed to have little effect on the final estimation. However, as an outright rejection incurs bias, techniques such as rejection control are needed [23]. Another problem with SIS is that the resulting importance weights are often very skewed, especially when d is large. An important recent advance in sequential Monte Carlo that addresses this problem is the resampling technique [24–26].

4.1.2. SMC for Dynamic Systems. Consider the following dynamic system, modeled in state-space form as
where the quantities are, respectively, the state variable, the observation, the state noise, and the observation noise at time t. They can be either scalars or vectors. Suppose an on-line inference of the states is of interest; that is, at current time t we wish to make a timely estimate of a function of the state variable, based on the currently available observations. By the Bayes theorem, the optimal solution to this problem is
In most cases an exact evaluation of this expectation is analytically intractable because of the complexity of such a dynamic system. Monte Carlo methods provide us with a viable alternative to the required computation. Specifically, suppose a set of random samples is generated from a trial distribution. By associating an importance weight with each sample, we can approximate the quantity of interest as a weighted average. Each weighted pair is a properly weighted sample with respect to the target distribution. A trivial but important observation is that each component of a properly weighted sample is also properly weighted with respect to the corresponding marginal distribution.

To implement Monte Carlo techniques for a dynamic system, a set of random samples properly weighted with respect to the posterior distribution is needed for any time t. Because the state equation in system (49) possesses a Markovian structure, we can implement an SMC strategy [26]. Suppose a set of properly weighted samples (with respect to the posterior at time t – 1) is given. A Monte Carlo filter (MCF) generates from this set a new one, which is properly weighted at time t, according to the following algorithm.

Algorithm 7 (Sequential Monte Carlo Filter for Dynamic Systems). For j = 1, ..., v:

1. Draw a sample from a trial distribution and append it to the sample trajectory.
2. Compute the importance weight.

The algorithm is initialized by drawing a set of i.i.d. samples from the initial distribution; when the initial information is "null", this corresponds to the prior of the initial state. There are a few important issues regarding the design and implementation of a sequential MCF, such as the choice of the trial distribution q(·) and the use of resampling. Specifically, a useful choice of the trial distribution for the state-space model (49) is the conditional distribution of the new state given the past trajectory and the observations up to time t. For this trial distribution, the importance weight is updated by the predictive likelihood of the new observation. See [26] for the general sequential MCF framework and a detailed discussion of various implementation issues.

4.1.3. Mixture Kalman Filter. Many dynamic system models belong to the class of conditional dynamic linear models (CDLMs) of the form
where the state and observation noises are Gaussian (here I denotes an identity matrix), and the indicator is a random variable. The system matrices are known given the indicator value. In this model, the "state variable" corresponds to the pair of the linear state and the indicator. We observe that for a given trajectory of the indicator in a CDLM, the system is both linear and Gaussian, for which the Kalman filter provides the complete statistical characterization of the system dynamics. Recently a novel sequential Monte Carlo method, the mixture Kalman filter (MKF), was proposed in [27] for on-line filtering and prediction of CDLMs; it exploits the conditional Gaussian property and utilizes a marginalization operation to improve algorithmic efficiency. Instead of dealing with both the linear state and the indicator, the MKF draws Monte Carlo samples only in the indicator space and uses a mixture of Gaussian distributions to approximate the target distribution. Compared with the generic MCF method, the MKF is substantially more efficient (e.g., giving more accurate results with the same computing resources). However, the MKF often needs more "brain power" for its proper implementation, as the required formulas are more complicated. Additionally, the MKF requires the CDLM structure, which may not be present in other problems.
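To make the marginalization idea concrete, the following is a minimal sketch (not from the paper, with invented model parameters) of a mixture Kalman filter for a toy scalar CDLM in which a binary indicator selects one of two known observation gains. Each particle carries only a Kalman mean and variance for the linear state; Monte Carlo sampling is done in the indicator space alone, with the optimal (one-step posterior) proposal and occasional resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CDLM: x_t = a*x_{t-1} + w_t,  y_t = c[s_t]*x_t + v_t,
# where the binary indicator s_t selects the observation gain.
a, Q, R = 0.95, 0.1, 0.1
c = np.array([1.0, 0.3])            # gains indexed by the indicator

# Simulate data from the model.
T = 200
x = np.zeros(T); s = rng.integers(0, 2, size=T); y = np.zeros(T)
for t in range(T):
    x[t] = a * (x[t - 1] if t else 0.0) + rng.normal(0.0, np.sqrt(Q))
    y[t] = c[s[t]] * x[t] + rng.normal(0.0, np.sqrt(R))

def normal_pdf(z, m, v):
    return np.exp(-0.5 * (z - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

# MKF: particles live in the indicator space only; each carries the
# Kalman mean m and variance P of x given its indicator trajectory.
n_part = 100
m = np.zeros(n_part); P = np.ones(n_part)
w = np.full(n_part, 1.0 / n_part)
p_s1 = np.zeros(T)                  # estimated P(s_t = 1 | y_{1:t})
for t in range(T):
    m_pred = a * m
    P_pred = a * a * P + Q
    # Predictive likelihood of y_t under each indicator value (prior 1/2).
    lik = np.stack([0.5 * normal_pdf(y[t], c[k] * m_pred,
                                     c[k] ** 2 * P_pred + R) for k in (0, 1)])
    marg = lik.sum(axis=0)          # optimal-proposal weight factor
    w = w * marg
    w = w / w.sum()
    # Sample each particle's indicator from its conditional posterior.
    s_draw = (rng.random(n_part) < lik[1] / marg).astype(int)
    # One-step Kalman update with the imputed gain.
    g = c[s_draw]
    K = g * P_pred / (g ** 2 * P_pred + R)
    m = m_pred + K * (y[t] - g * m_pred)
    P = (1.0 - K * g) * P_pred
    p_s1[t] = w @ (s_draw == 1).astype(float)
    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(w ** 2) < n_part / 2:
        idx = rng.choice(n_part, size=n_part, p=w)
        m, P, w = m[idx], P[idx], np.full(n_part, 1.0 / n_part)

acc = np.mean((p_s1 > 0.5).astype(int) == s)
```

The receiver of Algorithm 9 below has the same structure, but with complex fading coefficients as the linear state and the transmitted symbols playing the role of the indicator.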
By recursively generating a set of properly weighted random samples of the indicator trajectories, the MKF approximates the target distribution by a random mixture of Gaussian distributions
where each Gaussian component is obtained by implementing a Kalman filter for the given indicator trajectory. A key step in the MKF is the production at time t of a weighted sample of indicators, based on the set of samples at the previous time (t – 1), according to the following algorithm.
Algorithm 8 (Mixture Kalman Filter for Conditional Dynamic Linear Models). For j = 1, ..., v:

1. Draw an indicator sample from a trial distribution.
2. Run a one-step Kalman filter based on the drawn indicator to obtain the updated mean and covariance.
3. Compute the weight.

4.2. Application of SMC—Adaptive Detection in Fading Channels

Consider the flat-fading channel with additive Gaussian noise, given by (17) and (18). We are interested in estimating the symbol at time t based on the observations up to time t. The Bayes solution to this problem requires the posterior distribution of the symbols given the observations. Note that for a given symbol sequence, the state-space model (17)–(18) becomes a linear Gaussian system. Hence, the conditional distribution of the channel state is Gaussian, where the mean and covariance matrix can be obtained by a Kalman filter for the given symbol sequence. In order to implement the MKF, we need to obtain a set of Monte Carlo samples of the transmitted symbols, properly weighted with respect to their posterior distribution. The a posteriori symbol probability can then be estimated as a weighted empirical frequency, where 1(·) is an indicator function that is 1 if the sampled symbol equals the hypothesized value and 0 otherwise. The following algorithm, which is based on the mixture Kalman filter and first appeared in [28], generates such properly weighted Monte Carlo samples.

Algorithm 9 (Adaptive Blind Receiver in Flat-Fading Channels).

1. Initialization: Each Kalman filter is initialized with the stationary covariance of the channel state, computed analytically from (6) and scaled by a factor of 2 to accommodate the initial uncertainty. All importance weights are initialized to be equal, for j = 1, ..., v. Since the data symbols are assumed to be independent, initial symbols are not needed. Based on the state-space model (17)–(18), the following steps are implemented at time t to update each weighted sample, for j = 1, ..., v:
2. Compute the one-step predictive update of each Kalman filter.
3. Compute the trial sampling density for each candidate symbol value.
4. Impute the symbol: Draw a sample from the set {+1, –1} with probability proportional to the trial sampling density, and append it to the sampled symbol sequence.
5. Compute the importance weight.
6. Compute the one-step filtering update of the Kalman filter: based on the imputed symbol and the current observation, complete the Kalman filter update to obtain the new mean and covariance.

Since the fading process is highly correlated, the future received signals contain information about the current data and channel state. Hence a delayed estimate is usually more accurate than the concurrent estimate. From the recursive procedure described above, we note by induction that if the sample set is properly weighted with respect to the posterior at time t, then it is also properly weighted with respect to the marginal posterior of any earlier symbol. Hence, by focusing at time t on an earlier symbol, we obtain the delayed estimate of that symbol given in (70).

The above algorithm is depicted in Fig. 4. It is seen that at any time t, only the current Kalman filter statistics, symbol samples, and weights need to be stored. At each time t, the dominant computation in this receiver involves the v one-step Kalman filter updates. Since the v samplers operate independently and in parallel, such a sequential Monte Carlo receiver is well suited for massively parallel implementation using VLSI systolic array technology [29].
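The delayed-weight idea can be made concrete with a small sketch (hypothetical data layout, not from the paper): if each particle's past symbol draws are stored in a matrix, the delayed estimate of the symbol at time t − δ simply reuses the importance weights available at time t.

```python
import numpy as np

def delayed_estimate(symbol_draws, weights, t, delta):
    """Delayed-weight symbol estimate.

    symbol_draws: (v, T) array of sampled symbols in {+1, -1};
    weights: (v,) normalized importance weights at time t;
    returns the estimate of the symbol transmitted at time t - delta.
    """
    col = symbol_draws[:, t - delta]
    return 1 if weights @ col >= 0 else -1

# Deterministic toy check: three particles; weights favor the first one.
draws = np.array([[+1, -1, +1],
                  [+1, -1, -1],
                  [-1, +1, -1]])
w = np.array([0.6, 0.3, 0.1])
est = delayed_estimate(draws, w, t=2, delta=1)  # symbol at time 1 -> -1
```

Setting delta = 0 recovers the concurrent estimate; larger delays reuse the same stored samples, which is why the method costs no extra computation, only memory.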
Since the weights at time t contain information about the later received signals, the delayed estimate in (70) is usually more accurate than the concurrent one. Note that such a delayed estimation method incurs no additional computational cost (i.e., CPU time), but it requires some extra memory for storing the past symbol samples. For more details and some other issues related to SMC, such as resampling, see [28].

4.2.1. Simulation Example. In this simulation, the fading process is modeled by the output of a Butterworth filter of order r = 3 driven by a complex white Gaussian noise process. The cutoff frequency of this filter is 0.05, corresponding to a normalized
Doppler frequency (with respect to the symbol rate), which is a fast-fading scenario. Specifically, the fading coefficients are modeled by the following ARMA(3,3) process:
where the driving noise is white complex Gaussian. The filter coefficients in (71) are chosen so that the fading process has unit variance. Differential encoding and decoding are employed to resolve the phase ambiguity. The number of Monte Carlo samples drawn at each time was empirically set as v = 50. In each simulation, the sequential Monte Carlo algorithm was run on 10000 symbols (i.e., t = 1, ..., 10000). In counting the symbol detection errors, the first 50 symbols were discarded to allow the algorithm to reach steady state. In Fig. 5, the bit error rate (BER) performance versus the signal-to-noise ratio is plotted for several delay values, including the concurrent estimate. In the same figure, we also plot the known-channel lower bound, the genie-aided lower bound, and the BER
curve of the differential detector. From this figure it is seen that, with only a small amount of delay, the performance of the sequential Monte Carlo receiver is significantly improved by the delayed-weight method compared with the concurrent estimate. Even with the concurrent estimate, the proposed adaptive receiver does not exhibit an error floor, as the differential detector does. Moreover, with a modest delay, the proposed adaptive receiver essentially achieves the genie-aided lower bound.

5. Concluding Remarks
We have presented an overview of the theory and applications of the emerging field of Monte Carlo Bayesian signal processing. The optimal solutions to many statistical signal processing problems, especially those found in wireless communications, are computationally prohibitive to implement by conventional signal processing methods. The Monte Carlo paradigm offers a novel and powerful approach to tackling these problems at a reasonable computational cost. We have outlined two families of Monte Carlo signal processing methodologies, namely, Markov chain Monte Carlo (MCMC) for batch signal processing, and sequential
Monte Carlo (SMC) for adaptive signal processing. Although research on Monte Carlo signal processing has just started, we anticipate that in the near future an array of optimal signal processing problems found in wireless communications, such as mitigation of various types of radio-frequency interference, tracking of fading channels, resolution of multipath channel dispersion, space-time processing with multiple transmit and receive antennas, and exploitation of coded signal structures, to name a few, will be solved under the powerful Monte Carlo signal processing framework. Indeed, a number of recent works have addressed applications of MCMC methods in receiver design for various communication systems [30–35]. For additional recent references on the theory and applications of Monte Carlo signal processing, see the two journal special issues [36, 37], the books [4, 38, 39], and the websites http://www.statslab.cam.ac.uk/~mcmc and http://www-sigproc.eng.cam.ac.uk/smc.
References

1. J.G. Proakis, Digital Communications, 3rd edn., New York: McGraw-Hill, 1995.
2. S. Verdú, Multiuser Detection, Cambridge, UK: Cambridge University Press, 1998.
3. R. Haeb and H. Meyr, "A Systematic Approach to Carrier Recovery and Detection of Digitally Phase Modulated Signals on Fading Channels," IEEE Trans. Commun., vol. 37, no. 7, 1989, pp. 748–754.
4. J.S. Liu, Monte Carlo Strategies in Scientific Computing, New York: Springer-Verlag, 2001.
5. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines," J. Chemical Physics, vol. 21, 1953, pp. 1087–1091.
6. W.K. Hastings, "Monte Carlo Sampling Methods Using Markov Chains and Their Applications," Biometrika, vol. 57, 1970, pp. 97–109.
7. S. Geman and D. Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 11, 1984, pp. 721–741.
8. M.A. Tanner and W.H. Wong, "The Calculation of Posterior Distributions by Data Augmentation (with Discussion)," J. Amer. Statist. Assoc., vol. 82, 1987, pp. 528–550.
9. A.E. Gelfand and A.F.M. Smith, "Sampling-Based Approaches to Calculating Marginal Densities," J. Amer. Statist. Assoc., vol. 85, 1990, pp. 398–409.
10. J.S. Liu, "The Collapsed Gibbs Sampler with Applications to a Gene Regulation Problem," J. Amer. Statist. Assoc., vol. 89, 1994, pp. 958–966.
11. C.J. Geyer, "Markov Chain Monte Carlo Maximum Likelihood," in Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, E.M. Keramidas (Ed.), Fairfax: Interface Foundation, 1991, pp. 156–163.
12. J.S. Liu, F. Liang, and W.H. Wong, "The Use of Multiple-Try Method and Local Optimization in Metropolis Sampling," J. Amer. Statist. Assoc., vol. 95, 2000, pp. 121–134.
13. F. Liang and W.H. Wong, "Evolutionary Monte Carlo: Applications to Model Sampling and Change Point Problem," Statistica Sinica, vol. 10, 2000, pp. 317–342.
14. X. Wang and R. Chen, "Adaptive Bayesian Multiuser Detection for Synchronous CDMA in Gaussian and Impulsive Noise," IEEE Trans. Sig. Proc., vol. 48, no. 7, 2000, pp. 2013–2028.
15. J. Hagenauer, "The Turbo Principle: Tutorial Introduction and State of the Art," in Proc. International Symposium on Turbo Codes and Related Topics, Brest, France, Sept. 1997, pp. 1–11.
16. L.R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. Inform. Theory, vol. IT-20, no. 3, 1974, pp. 284–287.
17. G.E. Box and G.C. Tiao, Bayesian Inference in Statistical Analysis, Reading, MA: Addison-Wesley, 1973.
18. A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin, Bayesian Data Analysis, London: Chapman & Hall, 1995.
19. E.L. Lehmann and G. Casella, Theory of Point Estimation, 2nd edn., New York: Springer-Verlag, 1998.
20. J.O. Berger, Statistical Decision Theory and Bayesian Analysis, 2nd edn., New York: Springer-Verlag, 1985.
21. T.J. Rothenberg, "The Bayesian Approach and Alternatives in Econometrics," in Studies in Bayesian Econometrics and Statistics, vol. 1, S. Fienberg and A. Zellner (Eds.), Amsterdam: North-Holland, 1977, pp. 55–75.
22. C.P. Robert, The Bayesian Choice: A Decision-Theoretic Motivation, New York: Springer-Verlag, 1994.
23. J.S. Liu, R. Chen, and W.H. Wong, "Rejection Control and Sequential Importance Sampling," J. Amer. Statist. Assoc., vol. 93, 1998, pp. 1022–1031.
24. N.J. Gordon, D.J. Salmond, and A.F.M. Smith, "A Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proceedings on Radar and Signal Processing, vol. 140, 1993, pp. 107–113.
25. J.S. Liu and R. Chen, "Blind Deconvolution Via Sequential Imputations," J. Amer. Statist. Assoc., vol. 90, 1995, pp. 567–576.
26. J.S. Liu and R. Chen, "Sequential Monte Carlo Methods for Dynamic Systems," J. Amer. Statist. Assoc., vol. 93, 1998, pp. 1032–1044.
27. R. Chen and J.S. Liu, "Mixture Kalman Filters," J. Royal Statist. Soc. (B), vol. 62, no. 3, 2000, pp. 493–509.
28. R. Chen, X. Wang, and J.S. Liu, "Adaptive Joint Detection and Decoding in Flat-Fading Channels Via Mixture Kalman Filtering," IEEE Trans. Inform. Theory, vol. 46, no. 6, 2000, pp. 2079–2094.
29. S.Y. Kung, VLSI Array Processors, Englewood Cliffs, NJ: Prentice Hall, 1988.
30. B. Lu and X. Wang, "Bayesian Blind Turbo Receiver for Coded OFDM Systems with Frequency Offset and Frequency-Selective Fading," IEEE J. Select. Areas Commun., vol. 19, no. 12, 2001, Special Issue on Signal Synchronization in Digital Transmission Systems.
31. V.D. Phan and X. Wang, "Bayesian Turbo Multiuser Detection for Nonlinearly Modulated CDMA," Signal Processing, to appear.
32. X. Wang and R. Chen, "Blind Turbo Equalization in Gaussian and Impulsive Noise," IEEE Trans. Veh. Tech., vol. 50, no. 4,
2001, pp. 1092–1105.
33. Z. Yang, B. Lu, and X. Wang, "Bayesian Monte Carlo Multiuser Receiver for Space-Time Coded Multi-Carrier CDMA Systems," IEEE J. Select. Areas Commun., vol. 19, no. 8, 2001, pp. 1625–1637, Special Issue on Multiuser Detection Techniques with Application to Wired and Wireless Communication Systems.
34. Z. Yang and X. Wang, "Blind Turbo Multiuser Detection for Long-Code Multipath CDMA," IEEE Trans. Commun., to appear.
35. Z. Yang and X. Wang, "Turbo Equalization for GMSK Signaling Over Multipath Channels Based on the Gibbs Sampler," IEEE J. Select. Areas Commun., vol. 19, no. 9, 2001, pp. 1753–1763, Special Issue on the Turbo Principle: From Theory to Practice.
36. IEEE Trans. Sig. Proc., Special Issue on Monte Carlo Methods for Statistical Signal Processing, 2002.
37. Signal Processing, Special Issue on Markov Chain Monte Carlo (MCMC) Methods for Signal Processing, vol. 81, no. 1, 2001.
38. A. Doucet, N. de Freitas, and N. Gordon (Eds.), Sequential Monte Carlo Methods in Practice, New York: Springer-Verlag, 2001.
39. W.R. Gilks, S. Richardson, and D.J. Spiegelhalter (Eds.), Markov Chain Monte Carlo in Practice, London: Chapman & Hall, 1998.
Xiaodong Wang received the B.S. degree in Electrical Engineering and Applied Mathematics (with the highest honor) from Shanghai Jiao Tong University, Shanghai, China, in 1992; the M.S. degree in Electrical and Computer Engineering from Purdue University in 1995; and the Ph.D. degree in Electrical Engineering from Princeton University in 1998. In July 1998, he joined the Department of Electrical Engineering, Texas A&M University, as an Assistant Professor. Dr. Wang's research interests fall in the general areas of computing, signal processing, and communications. He has worked in the areas of digital communications, digital signal processing, parallel and distributed computing, nanoelectronics, and quantum computing. His current research interests include multiuser communications theory and advanced signal processing for wireless communications. He worked at AT&T Labs–Research in Red Bank, NJ, during the summer of 1997. Dr. Wang is a member of the IEEE and a member of the American Association for the Advancement of Science. He received the 1999 NSF CAREER Award, and the 2001 IEEE Communications Society and Information Theory Society Joint Paper Award. He currently serves as an Associate Editor for the IEEE Transactions on Communications and for the IEEE Transactions on Signal Processing.
Rong Chen is a Professor at the Department of Information and Decision Sciences, College of Business Administration, University of Illinois at Chicago. Before joining UIC in 1999, he was at the Department of Statistics, Texas A&M University. Dr. Chen received his B.S. (1985) in Mathematics from Peking University, P.R. China, and his M.S. (1987) and Ph.D. (1990) in Statistics from Carnegie Mellon University. His main research interests are in time series analysis, statistical computing and Monte Carlo methods in dynamic systems, and statistical applications in engineering and business. He is an Associate Editor for the Journal of the American Statistical Association, the Journal of Business and Economic Statistics, Statistica Sinica, and Computational Statistics.
Jun S. Liu received the B.S. degree in Mathematics in 1985 from Peking University, Beijing, China, and the Ph.D. degree in Statistics in 1991 from the University of Chicago. He was Assistant Professor of Statistics at Harvard from 1991 to 1994 and joined the Statistics Department of Stanford University in 1994 as Assistant Professor. In 2000, he returned to the Harvard Statistics Department as Professor. Dr. Liu's main research interests are Bayesian modeling and computation, Monte Carlo methods, bioinformatics, dynamic systems, and nonlinear filtering. Dr. Liu has worked with collaborators on designing novel Gibbs sampling-based algorithms to find subtle repetitive patterns in biopolymer sequences. These algorithms have been successfully applied to predict protein functions and to understand gene regulation mechanisms. Dr. Liu has also worked on theoretical analyses of Markov chain Monte Carlo algorithms, such as efficiency comparison of different algorithms, novel algorithmic designs, and convergence rate studies. Dr. Liu's research recently focuses on designing more efficient sequential Monte Carlo-based filtering methods for signal processing and developing statistical tools for genomic data analysis. Dr. Liu received the CAREER Award from the National Science Foundation in 1995 and the Mitchell Prize from the International Society for Bayesian Analysis in 2000.
[email protected]
Journal of VLSI Signal Processing 30, 107–126, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Bounds on SIMO and MIMO Channel Estimation and Equalization with Side Information* BRIAN M. SADLER Army Research Laboratory, AMSRL-CI-CN, 2800 Powder Mill Road, Adelphi, MD 20783, USA RICHARD J. KOZICK Department of Electrical Engineering, Bucknell University, Lewisburg, PA 17837, USA TERRENCE MOORE AND ANANTHRAM SWAMI Army Research Laboratory, AMSRL-CI-CN, 2800 Powder Mill Road, Adelphi, MD 20783, USA Received December 1, 2000; Revised May 18, 2001
Abstract. Constrained Cramér-Rao bounds are developed for convolutive multi-input multi-output (MIMO) channel and source estimation in additive Gaussian noise. Properties of the MIMO Fisher information matrix (FIM) are studied: we derive the maximum rank of the unconstrained FIM and provide necessary conditions for the FIM to achieve full rank. Equality constraints on channel and signal parameters provide a means to study the potential value of side information, such as training symbols (semi-blind case), constant modulus (CM) sources, or known channels. Nonredundant constraints may be combined in an arbitrary fashion, so that side information may be different for different sources. The bounds are useful for evaluating the performance of SIMO and MIMO channel estimation and equalization algorithms. We present examples using the constant modulus blind equalization algorithm. The constrained bounds are also useful for evaluating the relative value of different types of side information, and we present examples comparing semi-blind, constant modulus, and known channel constraints. While the examples presented are primarily in the communications context, the CRB framework applies generally to convolutive source separation problems. Keywords: source separation, channel estimation, equalization, communications, Cramér-Rao bounds, semi-blind, constant modulus, blind estimation
* Parts of this paper were presented in [1] and [2].

1. Introduction

A great many signal processing scenarios are described by instantaneous and convolutive (single-input and) multi-input, multi-output (SIMO/MIMO) linear models. This is especially true for wireless communications, e.g., see [3, 4] and references therein. Often, side information is available that may be exploited when estimating the sources and/or the channels. This information may be very useful from a signal processing perspective, enabling blind or semi-blind estimation. Algorithms that exploit source properties such as constant modulus (CM) [5–7] and non-Gaussianity [8, 9] are well known, as are approaches that rely on training, i.e., the semi-blind case. In this paper we develop Cramér-Rao bounds (CRBs) for channel and source estimation in convolutive MIMO scenarios, when side information is available. The SIMO and SISO (single-input, single-output) cases follow naturally. We have previously used the
same methodology with instantaneous mixing models (uncalibrated arrays) [10], as well as calibrated arrays [11, 12]. This approach may also be used in communications where both the transmitter and receiver employ multiple antennas (space-time coding) [10]. We employ the constrained CRB formulation of Gorman and Hero [13], and Stoica and Ng [14, 15]; see also Marzetta [16]. Stoica and Ng have used this approach to obtain bounds on blind SIMO channel estimation [15]. Related work includes Barbarossa et al. [17], who analyze a channel estimator with a block transmission scheme, and Liu et al. [18] who analyze a constant modulus space-time coding method. Dogandzic and Nehorai have also employed the constrained CRB in the context of EEG measurements [19]. The approach of Stoica and Ng does not require the unconstrained Fisher information matrix (FIM) to be full rank; if the unconstrained FIM is non-singular then their result reduces to that of Gorman and Hero. This approach provides a general framework that yields CRBs in a large variety of cases, and allows combination of side information, e.g., CRBs when some sources are CM, some have training, and some are completely unknown. We use a deterministic model for the sources and channels; a random Gaussian source model does not account for source attributes such as CM and non-Gaussianity that are commonly exploited in channel and source estimation algorithms. (On the other hand, Bayesian CRBs with random source models allow study of source non-Gaussianity as it impacts source separation.) Our results generalize deterministic CRBs for semi-blind SIMO channel estimation developed by Carvalho and Slock [20]. Channel estimation for the single-input single-output semi-blind case has also been considered by Bapat [21]. The blind MIMO problem has inherent ambiguities in simultaneous source and channel estimation. 
We study the MIMO Fisher information matrix and show its properties, including its maximum rank, and develop necessary conditions for the FIM to reach its maximum rank. Because of the model ambiguities in the blind case, the MIMO FIM will not be full rank, so that the FIM cannot be directly inverted to obtain the desired CRB. While it may be tempting to proceed using the pseudo-inverse of the singular FIM, this approach may lead to overly optimistic bounds, see, e.g., [22] and [23]. Consequently, we also show how to specify parameters to achieve a full rank FIM, generalizing results of Hua for the SIMO FIM [24]. Within the constrained CRB framework, parameter specification is a
special case of arbitrary non-linear constraints that are applied to achieve bounds that directly reflect the level of side information available. We also briefly discuss related identifiability issues. While strict identifiability and regularity do not seem to require exactly the same conditions, those required for strict identifiability do not pose a significant practical limitation beyond those required for MIMO FIM regularity. A complete discussion of the necessary and sufficient conditions for strict identifiability and regularity will be presented elsewhere.

Several examples of constrained CRBs are presented. We compare the performance of several equalization algorithms with the constrained CRBs, for both SISO and SIMO cases, including the constant modulus algorithm. In addition, we compare various constrained CRBs for SIMO and MIMO to show the relative potential value of various forms of side information.

2. Source and Channel Model
In this section we establish the SIMO and MIMO models. We write the SIMO model as follows; see, e.g., [3, 24]. The complex baseband representation of the M-channel FIR system is given by

y_i(k) = sum_{l=0}^{L} h_i(l) s(k - l) + w_i(k),   (1)

for channels i = 1, ..., M and output samples k = 0, ..., N - 1. Note that the assumption of M channels is not limited to physical channels; rather, the channels may arise via various forms of diversity (such as fractional sampling in digital communications). The maximum channel order is denoted L, and is assumed known. There are N + L input samples s(-L), ..., s(N - 1) and N output samples, which mitigates edge effects. (In the communications context, it may be desirable to alter the model to have N input and N + L output samples. In this case, our Theorem 1 in Section 4 continues to hold, and Corollary 1 may be appropriately modified using our framework.) The noise w_i(k) is assumed to be circular Gaussian, independent and identically distributed (iid) over the i and k indices, and zero-mean with variance σ^2. The single-input single-output (SISO) model easily follows with M = 1, and the memoryless case (instantaneous mixing) follows with L = 0.

The model may be expressed as an MN × 1 vector y, given by

y = [y_1^T, ..., y_M^T]^T = Hs + w,   (2)

with input s = [s(-L), ..., s(N - 1)]^T, output blocks y_i = [y_i(0), ..., y_i(N - 1)]^T, and channel matrix H = [H_1^T, ..., H_M^T]^T, where H_i is the ith channel (convolution) matrix, given by the N × (N + L) Toeplitz matrix

H_i = [ h_i(L) ... h_i(0)                      ]
      [        h_i(L) ... h_i(0)               ]   (3)
      [                  ...                   ]
      [                h_i(L)  ...  h_i(0)     ].

The noise assumptions imply that w ~ CN(0, σ^2 I_MN). The MIMO model is a K-source extension of the SIMO case. Writing

h^(k) = [h_1^(k)T, ..., h_M^(k)T]^T,  s^(k) = [s^(k)(-L), ..., s^(k)(N - 1)]^T,   (4), (5)

where superscript (k) denotes the kth source and h_i^(k) denotes the ith sub-channel of the kth source, we extend (2) to obtain

y = sum_{k=1}^{K} H^(k) s^(k) + w.   (6)

Here H^(k) s^(k) denotes the contribution of the kth source, s^(k) is the input sequence of the kth source, and H^(k) is built from h^(k) exactly as H is built from h in (2) and (3).

3. The MIMO Fisher Information Matrix

Next we develop the MIMO Fisher information matrix. It can be readily derived using a FIM for a vector of complex parameters; the real-valued case follows immediately. The SIMO FIM has been derived by Hua [24]. We briefly review this result, and then show the extension to the MIMO case.

3.1. SIMO FIM

We derive the Fisher information matrix (FIM) for the model in (2). Define the vector of complex unknown parameters as

θ = [h^T, s^T]^T,   (7)

where

h = [h_1^T, ..., h_M^T]^T,  h_i = [h_i(0), ..., h_i(L)]^T.   (8)

Writing θ in terms of its real and imaginary parts as

θ = α + jβ,   (9)

define the real parameter vector

φ = [α^T, β^T]^T.   (10)

From (2), y is complex normal with mean μ = Hs and covariance matrix σ^2 I_MN. Then, it is well known that the elements of the FIM are given by (see, e.g., Kay [25, p. 525])

[J]_mn = (2/σ^2) Re{ (∂μ/∂φ_m)^H (∂μ/∂φ_n) }.   (11)

We may go further and work with the complex-valued FIM defined below. The FIM for the complex parameters may be defined as

Jc = E{ (∂ ln p(y; θ)/∂θ*) (∂ ln p(y; θ)/∂θ*)^H },   (12)

where θ* denotes the complex conjugate of θ, and the complex derivative is defined as ∂/∂θ* = (1/2)(∂/∂α + j ∂/∂β); see, e.g., Kay [25, chpt. 15]. Now, it is straightforward to show that

Jc = (1/σ^2) D^H D,  where D = ∂μ/∂θ^T.   (13)

Because the model of (2) is linear in the parameters, the mean may be written as

μ = Hs = (I_M ⊗ S) h,

where I_M is the M × M identity matrix, ⊗ denotes the Kronecker product, and S is the N × (L + 1) source convolution matrix with entries [S]_kl = s(k - l), k = 0, ..., N - 1, l = 0, ..., L. Thus D = [I_M ⊗ S, H], and using (13) the complex-parameter FIM for (2) has the block symmetric form

Jc = (1/σ^2) [ (I_M ⊗ S)^H (I_M ⊗ S)   (I_M ⊗ S)^H H ]
             [ H^H (I_M ⊗ S)            H^H H         ].

The related FIM for φ is obtained by noting that ∂μ/∂α = D and ∂μ/∂β = jD in (11).

3.2. MIMO FIM

Next we develop the FIM for the multi-source model in (6). Now, with K sources, the complex parameter vector is

θ = [h^(1)T, ..., h^(K)T, s^(1)T, ..., s^(K)T]^T,

where superscripts are used to index the sources and subscripts are used to index the channels. The mean of y is now

μ = sum_{k=1}^{K} H^(k) s^(k) = sum_{k=1}^{K} (I_M ⊗ S^(k)) h^(k),

where S^(k) denotes the source convolution matrix of the kth source. Now using (13), the multi-source FIM is

Jc = (1/σ^2) D^H D,  D = [I_M ⊗ S^(1), ..., I_M ⊗ S^(K), H^(1), ..., H^(K)].   (14)

The real-valued MIMO FIM is obtained by letting α = Re{θ} and β = Im{θ} in (11), with corresponding real parameter vector φ = [α^T, β^T]^T.

4. Regularity and Identifiability

To obtain a useful CRB, it is desired that the FIM be regular, i.e., that it is full rank. In addition, it is also desired that the model be identifiable. While identifiability often implies regularity, the converse is not necessarily true; see, e.g., [26] in the context of additive Gaussian noise models. We explore properties of the MIMO FIM, which is in general not full rank due to the inherent ambiguities in simultaneous source and channel estimation. We establish a basis for the FIM null space, and demonstrate how parameters may be specified to make the resulting FIM full rank. While our focus in this paper is on the FIM properties, a brief discussion of identifiability is provided at the end of this section.

4.1. Regularity of the MIMO FIM

Because of the blind model ambiguities, D in (14) does not have full column rank, and nullity(Jc) ≥ 1 for all N, M, K, and L. Next we show the maximum rank of Jc, define its null space, and relate the rank of Jc to the number of equations and unknowns in the MIMO model. Then, we derive parameter constraints that achieve a full rank FIM.

Theorem 1. If the data length N is sufficiently large, then nullity(Jc) = K^2.

Proof: See the Appendix.
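As a concrete numerical illustration (not from the paper; all parameter values are hypothetical), the following NumPy sketch builds the SIMO quantities of (2), verifies the identity Hs = (I_M ⊗ S)h used in (13), and confirms that the unconstrained SIMO FIM has exactly the one expected null direction (nullity K^2 = 1 for K = 1), spanned by [h^T, -s^T]^T, which corresponds to the blind scaling ambiguity (h, s) -> (a h, s / a).

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, N = 3, 2, 20                      # channels, channel order, output samples

# QPSK source, N + L samples s(-L), ..., s(N-1), unit modulus
s = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, N + L) + 1))
# iid complex Gaussian channel taps, row i holds h_i(0), ..., h_i(L)
h = (rng.standard_normal((M, L + 1)) + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)

def channel_matrix(h, N, L):
    """Stack the N x (N+L) Toeplitz blocks H_i of (2): row n of H_i holds
    h_i(L), ..., h_i(0) starting in column n, so (H_i s)[n] = sum_l h_i(l) s(n-l)."""
    M = h.shape[0]
    H = np.zeros((M * N, N + L), dtype=complex)
    for i in range(M):
        for n in range(N):
            H[i * N + n, n:n + L + 1] = h[i, ::-1]
    return H

def source_matrix(s, N, L):
    """N x (L+1) source convolution matrix S with [S]_{n,l} = s(n - l);
    the array index j corresponds to time j - L."""
    S = np.zeros((N, L + 1), dtype=complex)
    for n in range(N):
        S[n, :] = s[n:n + L + 1][::-1]
    return S

H = channel_matrix(h, N, L)
S = source_matrix(s, N, L)
hvec = h.reshape(-1)

# The mean can be written two ways: mu = H s = (I_M kron S) h
assert np.allclose(H @ s, np.kron(np.eye(M), S) @ hvec)

# Jacobian of the mean w.r.t. theta = [h; s]; complex FIM is D^H D / sigma^2
D = np.hstack([np.kron(np.eye(M), S), H])

# Scaling-ambiguity null direction [h; -s]
v = np.concatenate([hvec, -s])
assert np.allclose(D @ v, 0)

sv = np.linalg.svd(D, compute_uv=False)
nullity = int(np.sum(sv < 1e-10 * sv[0]))
print("nullity of the unconstrained SIMO FIM:", nullity)   # 1 = K^2 for K = 1
```

The sketch uses a fixed random seed; for generic channels and sources with sufficient modes the computed nullity is exactly one.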
Theorem 1 provides the maximum rank of Jc; this maximum will not be surpassed by increasing N, M, or L. (The excluded small-N case is of little practical interest in Theorem 1, since it puts extreme limits on the data length N.) As noted, the real-parameter FIM (denoted J) may be obtained from (11), and then nullity(J) = 2 nullity(Jc). In the single source case nullity(Jc) = 1, corresponding to the multiplicative ambiguity in blind SIMO problems (see Theorem 2 of [24]). Intuitively, under certain conditions the K-source convolutional MIMO problem may be equalized, yielding a memoryless MIMO problem that has a K × K matrix ambiguity remaining (see, e.g., the discussion of MIMO MA model identifiability in Section III.A of [9]). The results of Theorem 1 may be related to the number of variables. Notice that there are MN equations and K(N + L + ML + M) unknowns in the MIMO model, which implies the following.

Corollary 1. nullity(Jc) ≥ K(N + L + ML + M) - MN.

Proof: See the Appendix.
For large enough block size N, Corollary 1 implies that we must have M > K, i.e., more diversity channels than sources. Indeed, if M = K, the counting argument gives nullity(Jc) ≥ K^2 + K(K + 1)L, indicating that if the channels have memory (L > 0), we do not have sufficient diversity with M = K. Further, if M < K, the nullity can equal K^2 only if the data length N is severely restricted, which is very restrictive. Hence, in the sequel we assume M > K. Corollary 1 leads to a necessary condition for nullity(Jc) = K^2, for in this case we must have

MN ≥ K(N + L + ML + M) - K^2.   (23)

Rearranging (23), we have that the number of equations must be greater than or equal to the number of unknowns minus K^2. Solving (23) for N, we obtain a necessary condition on the data length,

N ≥ K(M + L + ML - K) / (M - K).   (24)

In the SIMO case (K = 1), Eq. (24) reduces to N ≥ 1 + L(M + 1)/(M - 1). Notice that for all M ≥ 2 and for all L ≥ 1, the right-hand side exceeds L + 1. Therefore, in the SIMO case, N ≥ L + 2 is always necessary. Condition (24) is not always sufficient. However, numerical testing reveals that when (24) fails to be sufficient, it still provides a good approximation for the minimum value of N. (Using iid realizations of normal samples for the sources and channels, we find cases where equality in (24) results in nullity(Jc) > K^2.) Our numerical results lead us to the following (analytically unproven) conjecture.

Conjecture 1. A sufficient condition on the data length for nullity(Jc) = K^2 is

N ≥ ⌈K(M + L + ML - K) / (M - K)⌉,

where ⌈·⌉ is the ceiling function (round up to the nearest integer).

It is also of interest to specify when nullity(Jc) = K^2 is actually achieved. Generally, identifiability and regularity require the sources to be persistently exciting of sufficient order (see, e.g., Söderström and Stoica [27]). Alternatively, for finite deterministic sequences, this idea can be expressed in terms of the number of modes necessary to be present in the source. The modes are independent basis functions that may be used to describe any finite-length sequence. A sufficient condition for K = 1 is that the number of modes be greater than or equal to 2L + 1; additional conditions are that the SIMO sub-channels do not have common zeros and that N satisfies (24). However, the situation is more complicated with K > 1, for now the required value of N depends on M, K, and L. Equation (24) provides a good approximation when the input has sufficient modes; the resulting necessary values of block size N are not large for practical cases, being on the order of KL.

The above theory applies in the memoryless MIMO case (L = 0) as well. In particular, from Theorem 1, the removal of memory does not reduce the nullity below K^2, because reducing L reduces the number of unknowns while the number of observations remains fixed. In the SISO case (K = 1, M = 1), there are N equations in L + 1 channel unknowns plus N + L source unknowns. Thus we always have 2L + 1 more unknowns than equations, and we find that nullity(Jc) ≥ 2L + 1 in this case, with equality achieved when the input has sufficient modes.
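The counting arguments above are easy to probe numerically. The following sketch (hypothetical K = 2, M = 3 configuration; iid normal samples for sources and channels, as in the numerical testing described above) confirms nullity K^2 = 4, exhibits the K^2 null directions (delta h^(j) = -h^(k), delta s^(k) = s^(j)), and previews the parameter-specification idea of Theorem 2 below: the particular admissible choices enumerated in Theorem 2 are not reproduced here, but the experiment shows that some four-parameter choices fail to regularize the FIM while others succeed.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, L, N = 2, 3, 2, 40                # sources, channels, channel order, block size

def channel_matrix(h, N, L):            # stacked Toeplitz blocks of (2)
    M = h.shape[0]
    H = np.zeros((M * N, N + L), dtype=complex)
    for i in range(M):
        for n in range(N):
            H[i * N + n, n:n + L + 1] = h[i, ::-1]
    return H

def source_matrix(s, N, L):             # [S]_{n,l} = s(n - l)
    S = np.zeros((N, L + 1), dtype=complex)
    for n in range(N):
        S[n, :] = s[n:n + L + 1][::-1]
    return S

cg = lambda *sh: (rng.standard_normal(sh) + 1j * rng.standard_normal(sh)) / np.sqrt(2)
s = [cg(N + L) for _ in range(K)]       # iid normal sources
h = [cg(M, L + 1) for _ in range(K)]    # iid normal channels

# Jacobian w.r.t. theta = [h^(1); h^(2); s^(1); s^(2)], as in (14)
D = np.hstack([np.kron(np.eye(M), source_matrix(sk, N, L)) for sk in s]
              + [channel_matrix(hk, N, L) for hk in h])

sv = np.linalg.svd(D, compute_uv=False)
nullity = int(np.sum(sv < 1e-10 * sv[0]))
print("MIMO nullity:", nullity)          # K^2 = 4

# K^2 null directions: delta h^(j) = -h^(k), delta s^(k) = s^(j), zero elsewhere
nh, ns = M * (L + 1), N + L
for k in range(K):
    for j in range(K):
        v = np.zeros(K * (nh + ns), dtype=complex)
        v[j * nh:(j + 1) * nh] = -h[k].reshape(-1)
        v[K * nh + k * ns:K * nh + (k + 1) * ns] = s[j]
        assert np.allclose(D @ v, 0)

def reduced_nullity(cols):
    """Nullity after specifying (removing) the given complex parameters."""
    svr = np.linalg.svd(np.delete(D, cols, axis=1), compute_uv=False)
    return int(np.sum(svr < 1e-10 * svr[0]))

# A symmetric choice (leading coefficient and leading sample of each source)
# leaves the FIM singular; an asymmetric choice yields a full-rank reduced FIM.
bad = [0, nh, K * nh, K * nh + ns]            # h^(1)(0), h^(2)(0), s^(1)(-L), s^(2)(-L)
good = [0, K * nh, K * nh + 1, K * nh + ns]   # h^(1)(0), s^(1)(-L), s^(1)(-L+1), s^(2)(-L)
print("reduced nullity, symmetric choice:", reduced_nullity(bad))
print("reduced nullity, asymmetric choice:", reduced_nullity(good))
```

The symmetric choice fails for any parameter values (the restriction of the null basis to those coordinates is exactly singular), illustrating that four specified parameters must be chosen with some care, consistent with the discussion following Theorem 2.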
When nullity(Jc) = K^2, it is of interest to specify complex parameters in θ so that the resulting row- and column-reduced FIM will have full rank. This can be seen from the following theorem for two sources. The proof of Theorem 2 utilizes the null space basis vectors found in the proof of Theorem 1.

Theorem 2. Let K = 2 and assume nullity(Jc) = 4. Let Jc,r denote Jc with four row-column pairs removed, i.e., obtained by specifying four complex parameters. Then, nullity(Jc,r) = 0 if the four parameters are chosen in any of the ways enumerated in the proof; each admissible choice includes at least one parameter associated with each of the two sources.

Proof: See the Appendix.

Unlike the K = 1 case, we cannot arbitrarily specify four parameters; e.g., we cannot specify four parameters associated with a single source and achieve a full rank FIM. Rather, some parameters must be specified for both sources. This idea generalizes for K > 2, although the proof is cumbersome. The lack of FIM regularity in blind SIMO channel estimation problems is often circumvented by assuming that one of the complex channel coefficients is known, resulting in a full rank FIM. Theorem 2 states that, if parameters are to be specified in order to obtain a full rank FIM for K = 2, at least four complex parameters must be specified, with at least one parameter for each source.

4.2. Strict Identifiability

Various definitions of identifiability (ID) are available, based on statistical modeling assumptions; see, e.g., [3, 9, 24, 28-33]. For MIMO systems with deterministic inputs the notion of strict ID is useful [28, 31]. This provides for ID up to a scalar in the SIMO case, and up to a non-singular K × K instantaneous mixing matrix in the convolutive MIMO case. Abed-Meraim and Hua provide necessary and sufficient conditions for strict ID for the MIMO case under consideration here [28]. We note the following. Inequality (24) is a necessary condition for MIMO FIM regularity and for strict ID. Corollary 2 of [28] states a sufficient data length for strict ID, which is equivalent to our Conjecture 1. In the SIMO case, equivalence between conditions for strict ID and regularity has been established [29]. It is not clear that such an exact equivalence holds for the MIMO case. However, the conditions are very similar and overlap, as we have noted. We give a full treatment of necessary and sufficient conditions for MIMO strict identifiability and regularity elsewhere.

5. Constrained CRBs
In this section we develop CRBs under equality constraints on the sources and channels. This approach leads to invertible FIMs if sufficient constraints are applied, and enables the study of a great variety of scenarios, allowing the value of side information to be assessed. We work with the real-valued FIM J and the corresponding real-valued parameter vector φ in (10). Consider C equality constraints on the elements of φ, with C ≤ dim(φ). The constraints have the form f_c(φ) = 0 for c = 1, ..., C. The constraints form a vector f(φ) = [f_1(φ), ..., f_C(φ)]^T = 0, and we define a gradient matrix

F(φ) = ∂f(φ)/∂φ^T,   (25)

with elements [F]_ci = ∂f_c/∂φ_i. The gradient is assumed to have full row rank for any φ satisfying the constraints f(φ) = 0. (This is not too restrictive; linearly dependent constraints must be avoided.) Let U be a matrix whose columns are an orthonormal basis for the null space of F, so that

F(φ) U = 0,  U^T U = I.   (26)

Then, Stoica and Ng have shown that, as long as U^T J U is invertible, the constrained CRB is given by (Thrm. 1 of [14])

CRB_c = U (U^T J U)^{-1} U^T.   (27)

Notice that J in (27) is the unconstrained FIM, and need not be full rank, while U is solely a function of the constraints. Generally, a specific constraint will be either a function of a source or of a channel,
but not both simultaneously (for example, the constant modulus property applies only to the signal). This idea, along with properties of inverses of partitioned matrices, may be exploited to find closed-form expressions for the resulting constrained CRBs [12, 15]. This requires specification of U for a particular constraint set, which is tractable for many cases. Alternatively, given f and J, both U and (27) may be evaluated numerically.

The simplest equality constraint is to set an element of φ to a known constant. When specifying signal parameters, this corresponds to the semi-blind case. For example, consider the case when the kth source signal is known for the first T samples, so that

s^(k)(n) = c_n,  n = -L, ..., T - L - 1,   (28)

where the c_n represent known constants. Thus we have 2T constraints (one each for the real and imaginary parts) that may be written

f(φ) = [Re{s^(k)(n)} - Re{c_n}, Im{s^(k)(n)} - Im{c_n}]_{n=-L}^{T-L-1} = 0.   (29)

The resulting gradient matrix has dimension 2T × dim(φ), has a single non-zero entry for each row, and has (row) rank 2T, with the form

F = [I_2T, 0] P,   (30)

where P is a permutation matrix that places the constrained coordinates first. The corresponding null space basis has entries drawn from {0, 1}, regardless of the specific values of the constants c_n, with the form

U = P^T [0, I_{dim(φ)-2T}]^T.   (31)

When applied via (27), U in (31) acts to zero out the row-column pairs corresponding to the known parameters, as expected. The inner matrix (U^T J U)^{-1} corresponds to the CRB for the reduced parameter set, with the known parameters removed. The outer transformation then acts to restore these row-column pairs with all-zero entries. Equation (27) requires invertibility of the inner matrix, in this case implying that sufficient parameters are specified to produce an invertible FIM over the reduced parameter set as, for example, given by Theorem 2. We also note that, if all the source parameters are specified (T = N + L), then we are left with the CRB for channel estimation with known input. Similarly, we can constrain channels to be partially or completely known.

Another constraint of primary interest arises from the constant modulus (CM) signal property. For example, suppose the kth source is CM with unit modulus, yielding N + L constraints of the form

f_n(φ) = Re{s^(k)(n)}^2 + Im{s^(k)(n)}^2 - 1 = 0,  n = -L, ..., N - 1.   (32)

For the SIMO case, this CM signal constraint yields gradient matrix

F = 2 [0, D_R, 0, D_I],   (33)

with blocks conforming to φ = [Re{h}^T, Re{s}^T, Im{h}^T, Im{s}^T]^T, where D_R = diag(Re{s(-L)}, ..., Re{s(N - 1)}) and D_I = diag(Im{s(-L)}, ..., Im{s(N - 1)}). The corresponding nullspace matrix is

U = [identity columns for the channel coordinates | tangent columns],   (34)

where the tangent column for sample n has entry -Im{s(n)} in the Re{s(n)} position and Re{s(n)} in the Im{s(n)} position; since |s(n)| = 1, these columns are orthonormal and satisfy F U = 0.

If the signal is CM and the first T signal samples are known, then the resulting constraint gradient matrix is simply the combination

F = [F_SB^T, F_CM^T]^T,   (35)

where F_SB arises from the semi-blind constraint (and the redundant CM rows for the known samples are omitted to keep F full row rank). Then, the nullspace matrix for the CM and semi-blind case corresponds to U in (34) with the first T tangent columns deleted.

6. Examples and Simulations
Examples of constrained CRB computations and comparisons with algorithm performance are presented in this section for SISO, SIMO, and MIMO systems. The CRBs presented for source signal parameters are computed as the mean of the diagonal elements of the CRB submatrix corresponding to the unknown signals. Similarly, the CRBs for channel parameters are computed as the mean of the diagonal elements of the CRB submatrix corresponding to the unknown channel coefficients.
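The CRB computations just described are straightforward to evaluate numerically. The following sketch (hypothetical parameter values, NumPy; not the authors' code) builds the real-valued SIMO FIM from the complex Jacobian, forms U for a semi-blind constraint (first T samples known) as in (31), and evaluates the constrained CRB of (27). The known parameters receive an exactly zero bound, and the constraints render the reduced FIM invertible.

```python
import numpy as np

rng = np.random.default_rng(2)
M, L, N, sigma2, T = 2, 2, 20, 0.1, 2   # hypothetical values

# QPSK source s(-L), ..., s(N-1) and iid complex Gaussian channel taps
s = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, N + L) + 1))
h = (rng.standard_normal((M, L + 1)) + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)

def channel_matrix(h, N, L):
    M = h.shape[0]
    H = np.zeros((M * N, N + L), dtype=complex)
    for i in range(M):
        for n in range(N):
            H[i * N + n, n:n + L + 1] = h[i, ::-1]
    return H

def source_matrix(s, N, L):
    S = np.zeros((N, L + 1), dtype=complex)
    for n in range(N):
        S[n, :] = s[n:n + L + 1][::-1]
    return S

# Jacobian of the mean w.r.t. theta = [h; s]; real FIM for phi = [Re; Im]:
# J = (2/sigma^2) [[Re G, -Im G], [Im G, Re G]] with G = D^H D
D = np.hstack([np.kron(np.eye(M), source_matrix(s, N, L)), channel_matrix(h, N, L)])
G = D.conj().T @ D
J = (2.0 / sigma2) * np.block([[G.real, -G.imag], [G.imag, G.real]])

# Semi-blind constraint: first T complex samples known -> 2T real constraints.
# U is the identity with the corresponding columns deleted, as in (31).
p, nh = D.shape[1], M * (L + 1)
known = [nh + t for t in range(T)]              # real-part coordinates
known = known + [p + i for i in known]          # matching imaginary parts
U = np.eye(2 * p)[:, [i for i in range(2 * p) if i not in known]]

Ji = U.T @ J @ U                                # reduced FIM, now invertible
crb = U @ np.linalg.inv(Ji) @ U.T               # constrained CRB, eq. (27)

d = np.diag(crb)
assert np.allclose(d[known], 0)                 # known parameters: zero bound
mean_channel_crb = d[:nh].mean() + d[p:p + nh].mean()
print("mean channel CRB:", mean_channel_crb)
```

The mean over the channel-coordinate diagonal mirrors the averaging convention described above (real and imaginary parts summed per complex coefficient).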
6.1. SISO Channel Equalization
Consider a SISO channel (K = 1, M = 1 in (1)), where the subscript on y, h, and w in (1) is omitted for simplicity. We compare the performance of two blind linear equalizers with a constrained CRB. The source is QPSK with iid symbols and unit modulus, and we use symbol-synchronous sampling. The block length is N, the SNR is defined as the ratio of the received signal power to the noise variance σ^2, and the equalizer length is denoted L_e. The channel impulse response is the same channel used in [34]. The equalizer yields source signal estimates

ŝ(n - d) = sum_{m=0}^{L_e - 1} w*(m) y(n - m),   (36)

where w and d are the equalizer weights and delay, respectively. Only the interior of the N + L signal values is estimated, eliminating block edge effects. We compare the performance of the constant modulus algorithm (CMA) [19] and the alphabet-matched algorithm (AMA) [35, 36]. Both CMA and AMA employ a block-averaged gradient method for smoother convergence, as in [35, 36]. The CRB incorporates the CM signal constraint with unit modulus. The CM constraint alone does not provide a full rank FIM, so we additionally constrain one signal sample, at the block center s(N/2 - 1), to be known. The CMA exploits the CM signal property directly, while AMA exploits the discrete alphabet constraint in a soft manner [36]. We note that applying the discrete alphabet constraint does not yield a useful bound on variance.

Because the FIR SISO channel cannot be perfectly equalized with an FIR filter, residual inter-symbol interference (ISI) will remain. This residual ISI can be bounded by designing a noise-free known-signal equalizer, and measuring the ISI in this case. We refer to this value as the minimum ISI (MIN ISI), defined as

MIN ISI(d) = min_w sum_{n ≠ d} |c(n)|^2 / |c(d)|^2,   (37)

where c = h * w is the combined channel-equalizer impulse response. Note that (37) is a function of the equalizer delay d. The MIN ISI(d) for the noise-free known-signal equalizer is given by the diagonal elements of the matrix

Q = C (C^H C)^{-1} C^H,   (38)

via MIN ISI(d) = 1/[Q]_dd - 1, where

C is the (L + L_e) × L_e convolution (Toeplitz) matrix of the channel h.   (39)
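The MIN ISI computation is a small exercise in constrained least squares: MIN ISI(d) = min_w sum_{n ≠ d} |c(n)|^2 / |c(d)|^2 admits the closed form 1/[Q]_dd - 1 with Q = C(C^H C)^{-1} C^H. The sketch below uses a short hypothetical channel (the paper's channel from [34] is not reproduced) and cross-checks the closed form against an explicitly solved constrained equalizer for every delay.

```python
import numpy as np

# Hypothetical length-3 complex channel; values are illustrative only
h = np.array([1.0, -0.3 + 0.4j, 0.2])
L = len(h) - 1
Le = 5                                    # equalizer length
n_c = L + Le                              # combined response c = h * w has L + Le taps

# C: (L + Le) x Le convolution matrix so that c = C w
C = np.zeros((n_c, Le), dtype=complex)
for j in range(Le):
    C[j:j + L + 1, j] = h

G = C.conj().T @ C
Q = C @ np.linalg.inv(G) @ C.conj().T

def min_isi(d):
    """MIN ISI(d) = min_w sum_{n != d} |c(n)|^2 / |c(d)|^2 = 1/[Q]_dd - 1."""
    return 1.0 / Q[d, d].real - 1.0

# Cross-check: solve the constrained problem (c(d) = 1, minimum energy) directly
for d in range(n_c):
    a = C.conj().T @ np.eye(n_c)[:, d]
    w = np.linalg.solve(G, a)
    w = w / (a.conj() @ w).real            # enforce c(d) = 1
    c = C @ w
    isi = ((np.abs(c) ** 2).sum() - np.abs(c[d]) ** 2) / np.abs(c[d]) ** 2
    assert np.isclose(isi, min_isi(d))

best = min(range(n_c), key=min_isi)
print("optimum delay:", best, "MIN ISI:", min_isi(best))
```

Scanning min_isi over d reproduces the "MIN ISI (OPTIMUM DELAY)" notion used in Fig. 1; evaluating it at a fixed delay gives the "MIN ISI (CMA/AMA DELAY)" curve.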
Figure 1(a) shows AMA and CMA results for N = 400, along with the constrained CRB. The AMA and CMA converge to weight vectors with (non-optimal) delay d = 2, primarily because the gradient descent was center-tap initialized. The line labeled "MIN ISI (CMA/AMA DELAY)" is the residual ISI of the noise-free, known-signal equalizer using the same value of delay d = 2 as the CMA and AMA. The line labeled "MIN ISI (OPTIMUM DELAY)" corresponds to the minimum ISI over all delays. Figure 1(b) shows the effect of increasing the block length for a high SNR case. Both CMA and AMA asymptotically (in N and SNR) achieve the minimum ISI bound for the appropriate delay. They did not achieve the minimum possible ISI limit because they did not converge to the optimal delay. Figure 1(c) indicates that the AMA algorithm displays some advantage over CMA at the smaller block size N = 200, while Fig. 1(d) indicates that CMA and AMA fail to converge when the block size is too small (N = 100 for this case). We note that AMA generally converges faster than CMA (AMA can significantly outperform the CMA algorithm for arbitrary (non-CM) constellations [35, 37]). The RMSE performance of CMA and AMA in Fig. 1 is obtained by averaging the results of 100 Monte Carlo runs for each case. The CMA results are used to initialize AMA, and the parameters in AMA are chosen according to the guidelines in [37]. The constrained CRB provides a fundamental bound on the potential improvement of a nonlinear equalizer over that attainable with the linear equalizer in the SISO case. Together, the constrained CRB and ISI bounds delineate SNR regimes in which the
linear equalizer is "noise limited" (when the CRB is larger than the ISI bound) and "residual ISI limited" (when the ISI bound is larger than the CRB). The particular bound in this example incorporates the CM source property. It is interesting to consider other constraints that may lead to potentially lower bounds, and to determine whether appropriate algorithms might reach such bounds.

6.2. SIMO Channel Equalization
Next we consider linear equalization of a SIMO channel (K = 1, M > 1 in (1)), where an equalizer with an odd number of taps for each channel yields signal estimates according to

ŝ(n - d) = sum_{i=1}^{M} sum_{m=0}^{L_e - 1} w_i*(m) y_i(n - m).   (40)
As in the SISO example presented in the previous section, the source signal is QPSK with iid symbols and unit modulus. We evaluate the performance of the CMA and AMA [36] blind equalizers for SIMO channels, using a block-averaged gradient method for smoother convergence. Unlike the SISO case, the linear SIMO
equalizer (40) can perfectly invert the SIMO channels to recover the signals, provided that the channel impulse responses and equalizer length satisfy certain conditions [38]. Therefore residual ISI does not occur in the SIMO case, and the equalizer performance is independent of the delay d (in the absence of noise). We compare the performance of CMA and AMA with a constrained CRB that incorporates the CM signal constraint and one known signal sample in the center of the block at s(N/2 - 1). Figure 2 shows results for M = 2 channels with real-valued impulse responses, chosen so that channel 1 has zeros at +j, -j and channel 2 has zeros at -1, +1. The SNR at the SIMO channel outputs is defined as the ratio of the average received signal power per channel to the noise variance σ^2. Note in Fig. 2 that AMA improves on CMA for small block size N. As N increases, AMA and CMA perform similarly, but we found that AMA converges with fewer iterations than CMA. For large N and large SNR, AMA and CMA appear to converge to a common asymptote that is slightly larger than the constrained CRB. The RMSE performance of CMA and AMA in Fig. 2 is obtained by averaging the results of 100 Monte Carlo runs for each case.

Next we consider a SIMO system with M = 2 channels, L = 4 units of memory per channel, and complex-valued impulse responses h_1 and h_2. Note that h_1 is identical to the channel in the SISO example in the previous subsection, while the coefficients in h_2 were randomly generated. In addition to CMA and AMA equalization, we evaluate the performance of a zero-forcing (ZF) equalizer that is designed with perfect knowledge of the channels [38]. Three constrained CRBs are evaluated: CM signal and one known signal sample
in the center of the block s(N/2 – 1), known channel, and known channel combined with the CM signal constraint. Results are presented in Fig. 3. Note that AMA maintains a performance advantage over CMA for all block sizes in this case. Further, for sufficiently large block size the blind AMA equalizer achieves the same signal estimation performance as the ZF equalizer that is designed with perfect knowledge of the channel. The ZF performance is slightly worse than the known channel CRB since the ZF equalizer fails to account for the additive noise.
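The exact-inversion property of the SIMO equalizer can be demonstrated in a few lines. The sketch below assumes impulse responses consistent with the zeros stated for the Fig. 2 channels (taken here as h1 = [1, 0, 1] and h2 = [1, 0, -1]; the paper's exact coefficients are not reproduced), and solves for FIR sub-equalizers whose combined response is a pure delay, so the residual ISI is exactly zero.

```python
import numpy as np

# Channels consistent with the stated zeros: 1 + z^-2 (zeros +j, -j)
# and 1 - z^-2 (zeros -1, +1); no common zeros, so exact inversion is possible
h1 = np.array([1.0, 0.0, 1.0])
h2 = np.array([1.0, 0.0, -1.0])
L = 2
Le = 3                                   # taps per sub-equalizer (M*Le >= L + Le)
n_c = L + Le                             # combined response length

def conv_matrix(h, Le):
    """(len(h)-1+Le) x Le convolution matrix: c = C w."""
    C = np.zeros((len(h) - 1 + Le, Le))
    for j in range(Le):
        C[j:j + len(h), j] = h
    return C

# Stack both sub-channel convolution matrices: c = A [w1; w2]
A = np.hstack([conv_matrix(h1, Le), conv_matrix(h2, Le)])

d = 2                                    # target delay
e = np.eye(n_c)[:, d]
w, *_ = np.linalg.lstsq(A, e, rcond=None)

# Exact zero-forcing: the combined response is a pure delay
c = A @ w
assert np.allclose(c, e, atol=1e-10)
print("combined response:", np.round(c, 10))
```

This is the Bezout-type condition behind [38]: with coprime sub-channels and sufficient equalizer length, the linear system has an exact solution, which is why residual ISI vanishes in the SIMO case.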
6.3. SIMO Channel and Signal Estimation
Constrained CRBs for SIMO channel and signal estimation are evaluated in this section for a variety of constraints on the channel and signal parameters. The constraints that we consider are commonly found in blind, semi-blind, and CMA algorithms, so the constrained CRBs are relevant for performance analysis of such algorithms (see [4] for an overview of blind, semi-blind, and CMA algorithms for SIMO channel estimation and equalization). We do not evaluate the performance of particular algorithms in this section; rather, we compare the CRBs for various constraints to gain insight into the relative value of different types of side information. We consider M = 2 complex channels with L = 4 and the same complex impulse responses h_1 and h_2 considered in the previous subsection. The source signal is QPSK with unit modulus and iid symbols. We present CRBs according to the following convention:

Channel Parameters. The mean CRB is computed for all unknown channel coefficients. For example, if one channel coefficient is known, then the mean CRB is computed for the remaining channel coefficients.

Signal Parameters. The mean CRB is computed for the signal samples s(L), ..., s(N - 1 - 2L). This set of signal samples is chosen to meet three objectives: (1) signal CRBs under different constraints are computed for the same set of time samples to enable fair comparisons; (2) the first and last 2L signal samples are excluded because they have larger CRBs due to edge effects in the finite data block; (3) the first signal samples are excluded to allow training samples at these times (semi-blind).

Constrained CRBs are considered for the following combinations of constraints:

KNOWN CHANNEL: All M(L + 1) channel coefficients are known.

KNOWN SIGNAL: All (N + L) signal samples are known.

ONE COEF KNOWN (MIN): One channel coefficient is known, in this case the coefficient with smallest magnitude.

ONE COEF KNOWN (MAX): One channel coefficient is known, in this case the coefficient with largest magnitude.

CM: Constant modulus signal constraint.

SB: Semi-blind, with the first T signal samples known at s(-L), ..., s(T - L - 1), where T is the number of training samples.

SB (OFFSET): Semi-blind, with the training samples offset from the start of the block by 2L samples, so the known signal samples are at s(L), ..., s(T + L - 1).

N = T - L: If the first T signal samples are known, then channel estimation may be performed based only on the first T - L channel outputs, which corresponds to a KNOWN SIGNAL constraint with reduced block size T - L.

CRBs for various combinations of constraints are presented in Figs. 4 and 5. Note in Fig. 4(a) and (b) that if one channel coefficient is known, then the signal and channel CRBs depend on which coefficient is known. The performance of blind algorithms is often assessed by assuming that one channel coefficient is known. A known channel coefficient with larger magnitude is more informative than a known channel coefficient with smaller magnitude. Also shown in Fig. 4(a) and (b) are the KNOWN CHANNEL and KNOWN SIGNAL bounds, respectively, which bound the performance of non-blind methods. As N increases, note from Fig. 4(a) that the mean signal CRBs for blind algorithms that rely solely upon knowledge of one channel coefficient converge to the KNOWN CHANNEL CRB. However, Fig. 4(b) indicates different behavior for the channel CRBs: the blind CRBs corresponding to one known channel coefficient do not converge to the KNOWN SIGNAL CRB. Figure 4(a) also illustrates the potential improvement in signal estimation accuracy when the CM signal property is exploited with a known channel. Figures 4(c), (d), and 5 illustrate constrained CRB variations with the number of training samples T for various block sizes N.
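A training sweep of this kind is easy to emulate numerically (hypothetical channel, noise, and block-size values; NumPy sketch, not the authors' code). Because enlarging the training set only adds constraints, the semi-blind channel CRB is provably non-increasing in T, which the sweep below confirms.

```python
import numpy as np

rng = np.random.default_rng(3)
M, L, N, sigma2 = 2, 4, 40, 0.1          # hypothetical values

s = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, N + L) + 1))
h = (rng.standard_normal((M, L + 1)) + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)

def channel_matrix(h, N, L):
    M = h.shape[0]
    H = np.zeros((M * N, N + L), dtype=complex)
    for i in range(M):
        for n in range(N):
            H[i * N + n, n:n + L + 1] = h[i, ::-1]
    return H

def source_matrix(s, N, L):
    S = np.zeros((N, L + 1), dtype=complex)
    for n in range(N):
        S[n, :] = s[n:n + L + 1][::-1]
    return S

D = np.hstack([np.kron(np.eye(M), source_matrix(s, N, L)), channel_matrix(h, N, L)])
G = D.conj().T @ D
J = (2.0 / sigma2) * np.block([[G.real, -G.imag], [G.imag, G.real]])
p, nh = D.shape[1], M * (L + 1)

def mean_channel_crb(T):
    """Mean CRB over all channel coefficients with the first T samples known."""
    known = [nh + t for t in range(T)]
    known = known + [p + i for i in known]
    U = np.eye(2 * p)[:, [i for i in range(2 * p) if i not in known]]
    d = np.diag(U @ np.linalg.inv(U.T @ J @ U) @ U.T)
    return d[:nh].mean() + d[p:p + nh].mean()

crbs = [mean_channel_crb(T) for T in range(1, 11)]
print(np.round(crbs, 5))
# adding training can only tighten the bound
assert all(b <= a + 1e-12 for a, b in zip(crbs, crbs[1:]))
```

Plotting crbs against T for this kind of sweep reproduces the qualitative knee behavior discussed for Figs. 4(c), (d), and 5, though the exact knee location depends on the channel realization.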
The plots show that a small number of known signal samples yields semi-blind CRBs that are close to the known channel/known signal CRBs. We make the following observations:

Comparing the semi-blind methods SB and SB (OFFSET), offsetting the training by 2L samples from the start of the block is advantageous when the block size is small and when the number of training samples is very small (T = 1 or 2). We note that more training samples may be required for channels with longer memory L, and also that the results in this section are not averaged across an ensemble of random channels.

CRBs are plotted with and without the CM signal constraint. In each case, the CM constraint produces uniformly smaller CRBs.

Signal CRBs are shown in Figs. 4(c), 5(a), and (c). As T increases, SB approaches the KNOWN CHANNEL CRB (and SB & CM approaches the KNOWN CHANNEL & CM CRB). Further, the curve has a sharp knee, so that most of the improvement occurs for T < 10 training samples. The knee at T < 10 is independent of N, i.e., larger N does not require more training T in order for SB to approach the KNOWN CHANNEL CRB.

Channel CRBs are shown in Figs. 4(d), 5(b), and (d). Similar to the signal CRBs, the channel CRB curves have a sharp knee, so that T < 10 training samples provide most of the potential improvement in channel estimation accuracy, independent of N. The dotted-line curve labeled N = T - L in Fig. 5(d) demonstrates that semi-blind processing of the entire data block provides lower channel estimation CRBs than pure training-based channel estimation using the T - L channel outputs composed entirely of known signals.
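The observation that the CM constraint uniformly lowers the bounds follows from the nesting of the constraint null spaces: the CM tangent directions of (34) lie inside the semi-blind null space, so the CM-constrained CRB can never exceed the SB-only CRB. A hedged numerical sketch (hypothetical values; the tangent-column construction follows (34)):

```python
import numpy as np

rng = np.random.default_rng(4)
M, L, N, sigma2, T = 2, 2, 20, 0.1, 1    # hypothetical values

s = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, N + L) + 1))
h = (rng.standard_normal((M, L + 1)) + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)

def channel_matrix(h, N, L):
    M = h.shape[0]
    H = np.zeros((M * N, N + L), dtype=complex)
    for i in range(M):
        for n in range(N):
            H[i * N + n, n:n + L + 1] = h[i, ::-1]
    return H

def source_matrix(s, N, L):
    S = np.zeros((N, L + 1), dtype=complex)
    for n in range(N):
        S[n, :] = s[n:n + L + 1][::-1]
    return S

D = np.hstack([np.kron(np.eye(M), source_matrix(s, N, L)), channel_matrix(h, N, L)])
G = D.conj().T @ D
J = (2.0 / sigma2) * np.block([[G.real, -G.imag], [G.imag, G.real]])
p, nh, ns = D.shape[1], M * (L + 1), N + L

def crb(U):
    return U @ np.linalg.inv(U.T @ J @ U) @ U.T

# SB only: delete the 2T real coordinates of the known samples
known = [nh + t for t in range(T)]
known = known + [p + i for i in known]
U_sb = np.eye(2 * p)[:, [i for i in range(2 * p) if i not in known]]

# SB + CM: identity columns for the channel coordinates, plus one unit tangent
# column (-Im s(n), Re s(n)) per unknown signal sample (unit norm since |s(n)| = 1)
cols = []
for i in list(range(nh)) + list(range(p, p + nh)):
    u = np.zeros(2 * p)
    u[i] = 1.0
    cols.append(u)
for t in range(T, ns):
    u = np.zeros(2 * p)
    u[nh + t] = -s[t].imag               # Re-coordinate of s(n)
    u[p + nh + t] = s[t].real            # Im-coordinate of s(n)
    cols.append(u)
U_cm = np.array(cols).T
assert np.allclose(U_cm.T @ U_cm, np.eye(U_cm.shape[1]))

sig = list(range(nh, nh + ns)) + list(range(p + nh, p + nh + ns))
d_sb, d_cm = np.diag(crb(U_sb)), np.diag(crb(U_cm))
print("mean signal CRB, SB only:", d_sb[sig].mean())
print("mean signal CRB, SB + CM:", d_cm[sig].mean())
```

Since span(U_cm) is contained in span(U_sb), the CM bound is smaller in the positive semidefinite ordering, and hence also in the diagonal means reported here.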
6.4. MIMO Channel and Signal Estimation
Constrained CRBs are evaluated in this section for a MIMO system with K = 2 sources with M = 3 channels per source. In particular, semi-blind CRBs are evaluated with different amounts and placements of the training symbols. The source signals are generated as realizations of iid QPSK sequences. The signals are CM with amplitudes and respectively. The channel coefficients are realizations of iid complex Gaussian random variables with zero mean and unit variance, so the mean SNR of source k at the output of each channel is
In the example, the SNRs are fixed at and Constrained CRBs are presented in Fig. 6 for the following cases, where the CRBs are averaged over 50 channel realizations: KNOWN CHANNEL: All KM(L + 1) channel coefficients are known. KNOWN SIGNAL: All K(N + L) signal samples are known. SB 1 (CL, OV): Semi-blind case 1 with training samples clustered (CL) and overlapped (OV) in sources 1
Bounds on SIMO and MIMO Channel Estimation
and 2. More specifically, the T training samples appear in both sources at the midamble positions n = 1, . . . , T. SB 2 (CL, NOV): Semi-blind case 2 with training samples clustered (CL) and non-overlapped (NOV) in sources 1 and 2. The T training samples appear in source 1 at n = 1, . . . , T and in source 2 at n = T + 1, . . . , 2T. SB 3 (SP, NOV): Semi-blind case 3 with training samples spread (SP) throughout the block with no overlap (NOV) for sources 1 and 2. The T training samples in each source are spaced by K(L + 1) samples and appear at n + L = L + 1, 3(L + 1), . . . , (2T – 1)(L + 1) in source 1 and at n + L = 2(L + 1), 4(L + 1), . . . , 2T(L + 1) in source 2.
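As an illustration only (the helper below is hypothetical, not code from the paper), the three placement patterns can be generated as index sets; for the spread case the positions n + L are odd and even multiples of (L + 1) for sources 1 and 2, respectively.

```python
def placements(T, K, L):
    """Training-sample index sets for K sources (written for K = 2)."""
    cl_ov = [list(range(T)) for _ in range(K)]                    # SB 1: clustered, overlapped
    cl_nov = [list(range(k * T, (k + 1) * T)) for k in range(K)]  # SB 2: clustered, non-overlapped
    # SB 3: spread through the block, consecutive slots K(L+1) apart
    sp_nov = [[(K * t + k + 1) * (L + 1) for t in range(T)] for k in range(K)]
    return cl_ov, cl_nov, sp_nov

cl_ov, cl_nov, sp_nov = placements(T=3, K=2, L=2)
```

For T = 3, K = 2, L = 2 the spread case yields positions 3, 9, 15 for source 1 and 6, 12, 18 for source 2, i.e., odd and even multiples of L + 1 = 3.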
The question of optimum placement of training symbols has been investigated in recent works, e.g., [12, 24]. For the three semi-blind cases, one channel coefficient is also constrained to be known. The block size N, channel length L, and number of training symbols T are varied as specified in Fig. 6. The signal CRBs in Fig. 6 are normalized by so that the variance bounds may be interpreted with respect to unit modulus signals. The channel CRBs are averaged over all unknown channel coefficients, and the
signal CRBs are averaged over all unknown signal samples. We make the following observations from the plots in Fig. 6.

• Convergence of the semi-blind CRBs to the "KNOWN" CRBs is expected, since "KNOWN SIGNAL" is the limiting case of T = N + L training symbols, and "KNOWN CHANNEL" is the limiting case in which the training yields perfect channel estimates.

• The semi-blind signal CRBs are often considerably closer to the KNOWN CHANNEL CRB than the semi-blind channel CRBs are to the KNOWN SIGNAL CRB. Signal estimation is often the primary objective, so semi-blind signal CRBs may be very close to the limiting KNOWN CHANNEL CRB, while the corresponding semi-blind channel CRBs are an order of magnitude larger than the KNOWN SIGNAL CRB.

• The placement of training samples has little impact on the semi-blind channel and signal CRBs for this case.

• For memoryless channels (L = 0 in Fig. 6(a)), the constrained CRBs change little as T is increased from 3 to 10. The semi-blind signal CRBs are nearly equal to the KNOWN CHANNEL CRB for as few as T = 3 training symbols.

• For channels with more memory (L = 2, 4 in Figs. 6(b) and (c)), the semi-blind CRBs decrease as the number of training symbols T is increased. More training T is required for the L = 4 channels to achieve CRBs comparable to the L = 2 case, suggesting that more training is required for channels with larger memory.

• Figure 6(d) shows CRBs for channels with larger memory L = 8 and larger block size N = 200. The trends observed in the previous items continue in this case.

7. Conclusions

We have presented a general CRB framework for convolutive source and channel estimation problems. Using a deterministic model for the sources and channels, this framework enables the evaluation of bounds under equality constraints. The constraints may incorporate a large variety of side information on the sources and channels. These are fundamental bounds on various multi-source equalization and separation problems. The results include SIMO and memoryless models as special cases.

We have studied the important cases of constant modulus sources, as well as the use of training symbols for communications. The results show that a small amount of training can potentially yield significant gains in source estimation accuracy. The amount of training required to approach the asymptotic regime may be quantified, and is often on the order of ten to twenty training symbols. For MIMO systems, we found that the placement of training symbols had little effect on the constrained CRBs, which complements recent work on optimal training placement [39, 40]. Our results bound the performance of various versions of the constant modulus algorithm. For the constellations studied, the constant modulus assumption generally results in uniformly lower CRBs, and the combination of constant modulus and training is very informative. Extensions of interest include hybrid bounds that incorporate both random and deterministic parameters. For example, we may obtain results over an ensemble of channels (such as Rayleigh) in the constrained framework. Also of interest are model extensions that include unknown source synchronization parameters such as delay.

Appendix: Proofs

A.1. Proof of Theorem 1

For K = 1, using (14),

so that

is in the null space of Q, implying that the nullity is at least one. Generally, (43) holds

for each pair (k, l). For example, for K = 2, (43) holds for the four cases {k = l = 1; k = l = 2; k = 1, l = 2; k = 2, l = 1}. Considered pair-wise, we have
where
is a column of
Let

Note that the columns of V are linearly independent, so that V forms a basis for at least part of the null space of Considering all such pairs, the total number of linearly independent null space vectors that may be formed in this way is (No. of choices of k) · (No. of choices of l), so that the nullity bound of Theorem 1 follows.

A.2. Proof of Corollary 1

Recalling that of

then we have that rank The size is MN × K(N + L + ML + M), so

Now, using the fact that we find that

Combining (48) with Theorem 1, we obtain Corollary 1.

A.3. Proof of Theorem 2

Consider the case of K = 2, and assume that From (45), where k = 1 and l = 2. Specifying is equivalent to deleting the corresponding row-column from Let denote the FIM with the ith row-column removed, corresponding to Also, let equal with the ith element removed. Then, the null space of can be found from the null space of Let be the ith element of Then, if However, if then unless the ith column of is all zeros. Thus, if and are both non-zero, then is in (Here, denotes the vector z with the ith element removed.) Note that for any we have with Hence, we want not all zero, such that

Then, This gives us a null space for where (unless, of course, for each n). Now, consider case (a) of Theorem 2. Suppose we first specify Then, from (49), we want the restriction that

(Note that the elements of and corresponding to are zero.) From (50), either or so that either of or may be zero but not both (e.g., if then set and ). Now, a basis for is the columns of Continuing this procedure, let be the FIM with both and specified. Then, we obtain the restriction that

To reduce the nullity of the FIM it is necessary that and The second condition is satisfied provided that The new reduced-rank null space basis for the FIM with the two row-columns deleted is then Applying the same technique to reduces the nullity to zero, and the proof of part (a) is completed. The proof of part (b) is exactly analogous to that of part (a). Next consider case (c), so that either (c1) two parameters are selected from any one of the set and two more parameters from any other unique two from or (c2) one parameter is selected from each element in Consider case (c1). Without loss of generality (WLOG), suppose the two parameters are taken from Then, as shown above,
so that

Now, a parameter in or must be specified. If it is chosen from, say, element k, then the null space is reduced to Then, choosing the fourth parameter from either or eliminates from the null space and the reduced FIM is full rank. Similarly, if an element of is chosen as the third parameter (again, element k), then the null space is reduced to and specifying the fourth parameter in or reduces the nullity to zero. Next, consider case (c2). WLOG, as shown above, specifying the ith parameter from yields a null space of spanned by the columns of The restrictions previously stated on and continue to apply. Specifying the second parameter, j, from yields a null space spanned by the columns of with the condition that if the specified parameter is then
Next, specifying k from yields a single null space vector with the condition that if the parameter is then
Finally, specifying from yields a nullity of zero, where if (l) is the specified parameter then
which concludes the proof of case (c).
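The row-column deletion device used throughout these proofs is easy to demonstrate numerically. The following sketch is a toy K = 1 model, y_n = h·s_n + noise, not the paper's full MIMO FIM: it shows that the unconstrained Fisher information matrix is singular due to the scale ambiguity, and that specifying a single parameter, i.e., deleting its row and column, reduces the nullity to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
N, h = 6, 1.0
s = rng.choice([-1.0, 1.0], size=N)

# FIM (up to a 1/sigma^2 factor) for jointly unknown (h, s_1, ..., s_N)
J = np.concatenate([s[:, None], h * np.eye(N)], axis=1)
fim = J.T @ J

def nullity(A, tol=1e-9):
    return int(np.sum(np.linalg.svd(A, compute_uv=False) < tol))

print(nullity(fim))                       # 1: null vector (h, -s), the scale ambiguity
keep = np.arange(1, N + 1)                # specify h: delete row-column 0
print(nullity(fim[np.ix_(keep, keep)]))   # 0: the reduced FIM is full rank
```

Deleting the row-column of any one parameter (the channel coefficient here, or any single symbol) removes the one-dimensional scale-ambiguity null space, which is the pattern the theorem generalizes.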
References

1. B.M. Sadler, R.J. Kozick, and T. Moore, "Constrained CRBs for Channel and Signal Estimation of MIMO Systems," in Proc. of 2001 Conf. on Info. Sci. and Syst. (CISS'01), Johns Hopkins University, March 2001. 2. B.M. Sadler, R.J. Kozick, and T. Moore, "Bounds on MIMO Channel Estimation and Equalization with Side Information," in Proc. IEEE Int. Conf. Acoust., Speech, and Sig. Proc. (ICASSP), Salt Lake City, Utah, 2001.
3. L. Tong and S. Perreau, "Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods," IEEE Proc., vol. 86, no. 10, 1998, pp. 1951–1968. 4. J.K. Tugnait, L. Tong, and Z. Ding, "Single-User Channel Estimation and Equalization," IEEE Sig. Proc. Mag., vol. 17, no. 3, 2000, pp. 16–28. 5. B. Agee, "The Least-Squares CMA: A New Technique for Rapid Correction of Constant Modulus Signals," in Proc. IEEE Int. Conf. Acoust., Speech, and Sig. Proc. (ICASSP), 1986, pp. 953–956. 6. R. Gooch and J. Lundell, "The CM Array: An Adaptive Beamformer for Constant Modulus Signals," in Proc. IEEE Int. Conf. Acoust., Speech, and Sig. Proc. (ICASSP), 1986, pp. 2523–2526. 7. A.-J. van der Veen and A. Paulraj, "An Analytical Constant Modulus Algorithm," IEEE Trans. Signal Processing, vol. 44, no. 5, 1996, pp. 1136–1155. 8. P. Comon, "Independent Component Analysis—A New Concept?," Signal Processing, vol. 36, no. 3, 1994, pp. 287–314. 9. A. Swami, G. Giannakis, and S. Shamsunder, "Multichannel ARMA Processes," IEEE Trans. Signal Processing, vol. 42, no. 4, 1994, pp. 898–913. 10. B.M. Sadler and R.J. Kozick, "Bounds on Uncalibrated Array Signal Processing," in 10th IEEE Workshop on Statistical Signal and Array Processing (SSAP'00), Pocono Manor, PA, Aug. 2000, pp. 73–77. 11. B.M. Sadler, R.J. Kozick, and T. Moore, "Semi-Blind Array Processing with Constant Modulus Signals," in Proc. 4th World Multiconf. on Syst., Cyb. and Inform. (SCI 2000), Orlando, FL, July 23–26, 2000, vol. VI, pp. 241–246. 12. B.M. Sadler, R.J. Kozick, and T. Moore, "Bounds on Bearing and Symbol Estimation with Side Information," IEEE Trans. Signal Processing, vol. 49, no. 4, 2001, pp. 822–834. 13. J.D. Gorman and A.O. Hero, "Lower Bounds for Parametric Estimation with Constraints," IEEE Trans. Info. Theory, vol. 36, no. 6, 1990, pp. 1285–1301. 14. P. Stoica and B.C. Ng, "On the Cramér-Rao Bound Under Parametric Constraints," IEEE Sig. Proc. Letters, vol. 5, no. 7, 1998, pp. 177–179. 15. P. 
Stoica and B.C. Ng, “Performance Bounds for Blind Channel Estimation,” in Signal Processing Advances in Wireless Communications, Volume 1: Trends in Channel Estimation and Equalization, G.B. Giannakis, Y. Hua, P. Stoica, and L. Tong (Eds.), Prentice-Hall, Upper Saddle River, NJ, USA, 2001. 16. T.L. Marzetta, “A Simple Derivation of the Constrained Multiple Parameter Cramér-Rao Bound,” IEEE Trans. Signal Processing, vol. 41, no. 6, 1993, pp. 2247–2249. 17. S. Barbarossa, A. Scaglione, and G.B. Giannakis, “Performance Analysis of a Deterministic Channel Estimator for Block Transmission Systems with Null Guard Intervals,” IEEE Trans. Signal Processing, to appear. 18. Z. Liu, G.B. Giannakis, A. Scaglione, and S. Barbarossa, “Decoding and Equalization of Unknown Multipath Channels Based on Block Precoding and Transmit-Antenna Diversity,” in Proc. 33rd Asilomar Conf. on Sigs., Syst., and Comp., 1999, vol. 2, pp. 1557–1561. 19. A. Dogandzic and A. Nehorai, “Estimating Evoked Dipole Responses in Unknown Spatially Correlated Noise with EEG/MEG Arrays,” IEEE Trans. Signal Processing, vol. 48, no. 1, 2000, pp. 13–25.
20. E. De Carvalho and D.T.M. Slock, "Cramér-Rao Bounds for Semi-Blind, Blind and Training Sequence Based Channel Estimation," in Proc. 1997 IEEE Workshop on Sig. Proc. Adv. in Wireless Comm. (SPAWC'97), Paris, France, April 1997, pp. 129–132. 21. J.L. Bapat, "Partially Blind Estimation: ML-Based Approaches and Cramér-Rao Bound," Signal Processing, vol. 71, 1998, pp. 265–277. 22. A.O. Hero, J.A. Fessler, and M. Usman, "Exploring Estimator Bias-Variance Tradeoffs Using the Uniform CR Bound," IEEE Trans. Signal Processing, vol. 44, 1996, pp. 2026–2041. 23. P. Stoica and T.L. Marzetta, "Parameter Estimation Problems with Singular Information Matrices," IEEE Trans. Signal Processing, vol. 49, no. 1, 2001, pp. 87–90. 24. Y. Hua, "Fast Maximum Likelihood for Blind Identification of Multiple FIR Channels," IEEE Trans. Signal Processing, vol. 44, no. 3, 1996, pp. 661–672. 25. S.M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice-Hall, Upper Saddle River, NJ, USA, 1993. 26. B. Hochwald and A. Nehorai, "On Identifiability and Information-Regularity in Parameterized Normal Distributions," Circuits, Systems, Signal Processing, vol. 16, no. 1, 1997, pp. 83–89. 27. T. Soderstrom and P. Stoica, System Identification, Prentice-Hall, 1989. 28. K. Abed-Meraim and Y. Hua, "Strict Identifiability of Multichannel FIR Systems: Further Results and Developments," in Proc. Int. Conf. on Telecomm., Melbourne, Australia, April 1997, pp. 1029–1032. 29. K. Abed-Meraim, W. Qiu, and Y. Hua, "Blind System Identification," IEEE Proc., vol. 85, no. 8, 1997, pp. 1310–1322. 30. Z. Ding, "Characteristics of Band-Limited Channels Unidentifiable from Second-Order Cyclostationary Statistics," IEEE Signal Processing Letters, vol. 3, no. 5, 1996, pp. 150–152. 31. Y. Hua and M. Wax, "Strict Identifiability of Multiple FIR Channels Driven by an Unknown Arbitrary Sequence," IEEE Trans. Signal Processing, vol. 44, no. 3, 1996, pp. 756–759. 32. V.U. 
Reddy, C.B. Papadias, and A.J. Paulraj, “Blind Identifiability of Certain Classes of Multipath Channels From Second-Order Statistics Using Antenna Arrays,” IEEE Signal Processing Letters, vol. 4, no. 5, May 1997, pp. 138–141. 33. J.K. Tugnait, “On Blind Identifiability of Multipath Channels Using Fractional Sampling and Second-Order Cyclostationary Statistics,” IEEE Trans. Info. Theory, vol. 41, no. 1, 1995, pp. 308–311. 34. E. Serpedin, A. Chevreuil, G.B. Giannakis, and P. Loubaton, “Blind Channel and Carrier Frequency Offset Estimation Using Periodic Modulation Precoders,” IEEE Trans. Signal Processing, vol. 48, no. 8, 2000, pp. 2389–2405. 35. S. Barbarossa and A. Scaglione, “Blind Equalization Using Cost Functions Matched to the Signal Constellation,” in Proc. 31st Asilomar Conf. Sig. Sys. Comp., Pacific Grove, CA, Nov. 1997, vol. 1, pp. 550–554. 36. A. Swami, S. Barbarossa, and B.M. Sadler, “Blind Source Separation and Signal Classification,” in Proc. 34th Asilomar Conf. Sig. Sys. Comp., Pacific Grove, CA, Nov. 2000. 37. S. Barbarossa, A. Swami, B.M. Sadler, and G. Spadafora, “Classification of Digital Constellations Under Unknown Multipath Propagation Conditions,” in Proc. SPIE, Digital Wireless Comm.
II, Orlando, FL, April 2000. 38. D.T.M. Slock, "Blind Fractionally-Spaced Equalization, Perfect Reconstruction Filter Banks, and Multichannel Linear Prediction," in Proc. IEEE Int. Conf. Acoust., Speech, and Sig. Proc. (ICASSP), Adelaide, Australia, 1994, pp. 585–588. 39. M. Dong and L. Tong, "Optimal Design and Placement of Pilot Symbols for Channel Estimation," in Proc. IEEE Int. Conf. Acoust., Speech, and Sig. Proc. (ICASSP), Salt Lake City, Utah, 2001. 40. S. Ohno and G.B. Giannakis, "Optimal Training and Redundant Precoding for Block Transmissions with Application to Wireless OFDM," in Proc. IEEE Int. Conf. Acoust., Speech, and Sig. Proc. (ICASSP), Salt Lake City, Utah, 2001. 41. C.R. Johnson, Jr., P. Schniter, T.J. Endres, J.D. Behm, D.R. Brown, and R.A. Casas, "Blind Equalization Using the Constant Modulus Criterion: A Review," IEEE Proc., vol. 86, no. 10, 1998, pp. 1927–1950.
Brian M. Sadler received the B.S. and M.S. degrees from the University of Maryland, College Park, in 1981 and 1984, respectively, and the Ph.D. degree from the University of Virginia, Charlottesville, in 1993, all in electrical engineering. He has been a member of the technical staff of the Army Research Laboratory (ARL) and the former Harry Diamond Laboratories in Adelphi, MD, since 1982. He was a lecturer at the University of Maryland from 1985 to 1987, and has been lecturing at the Johns Hopkins University Whiting School of Engineering since 1994 on statistical signal processing and communications. He is an Associate Editor for the IEEE Transactions on Signal Processing, is a member of the IEEE Technical Committee on Signal Processing for Communications, and co-chaired the 2nd IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC-99). His research interests generally include statistical signal processing with applications in communications, radar, and aeroacoustics.
[email protected]
Richard J. Kozick received the B.S. degree from Bucknell University in 1986, the M.S. degree from Stanford University in 1988, and
126
Sadler et al.
the Ph.D. degree from the University of Pennsylvania in 1992, all in electrical engineering. From 1986 to 1989 and from 1992 to 1993 he was a Member of Technical Staff at AT&T Bell Laboratories. Since 1993, he has been with the Electrical Engineering Department at Bucknell University, where he is currently an Associate Professor. His research interests are in the areas of statistical signal processing, communications, and sensor array processing. He serves on the editorial board of the Journal of the Franklin Institute. Dr. Kozick received the Presidential Award for Teaching Excellence from Bucknell University in 1999. He received an Air Force Laboratory Graduate Fellowship during 1989–1992 and is a Member of IEEE, ASEE, Tau Beta Pi, and Sigma Xi.
[email protected]
Terrence Moore received the B.S. and M.S. degrees in mathematics from American University, Washington, DC, in 1998 and 2000, respectively. He is currently with the Army Research Laboratory in Adelphi, MD. His research interests are generally in statistical signal processing. [email protected]
Ananthram Swami received the B.S. degree from the Indian Institute of Technology, Bombay; the M.S. degree from Rice University, Houston; and the Ph.D. degree from the University of Southern California, all in Electrical Engineering. He has held positions with Unocal Corporation, the University of Southern California, CS-3 and Malgudi Systems. He is currently a Research Scientist with the Communication Networks Branch of the US Army Research Lab, Adelphi, MD, where his work is in the broad area of signal processing for communications. Dr. Swami is a Senior Member of the IEEE, a member of the IEEE Signal Processing Society’s (SPS) technical committee (TC) on Signal Processing for Communications (since 1998), a member of the IEEE Communication Society’s TC on Tactical Communications, and an associate editor for IEEE Signal Processing Letters. He was a member of the society’s TC on Statistical Signal and Array Processing (1993–98); an associate editor of the IEEE Transactions on Signal Processing, vice-chairman of the Orange County Chapter of IEEE-GRS (1991–93); and co-organizer and co-chair of the 1993 IEEE SPS Workshop on Higher-Order Statistics, the 1996 IEEE SPS Workshop on Statistical Signal and Array Processing, and the 1999 ASA-IMA Workshop on Heavy-Tailed Phenomena. Dr. Swami was a Statistical Consultant to the California Lottery, developed a Matlab-based toolbox for non-Gaussian signal processing, and has held visiting faculty positions at INP, Toulouse, France. He has taught short courses for industry, and currently teaches courses on communication theory and signal processing at the University of Maryland.
[email protected]
Journal of VLSI Signal Processing 30, 127–142, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
On Blind Timing Acquisition and Channel Estimation for Wideband Multiuser DS-CDMA Systems* ZHOUYUE PI** AND URBASHI MITRA Department of Electrical Engineering, The Ohio State University, Columbus, OH 43210, USA Received September 7, 2000; Revised July 9, 2001
Abstract. The problems of blind timing acquisition and channel estimation for DS-CDMA signals in multipath fading channels are investigated. Using only the spreading code of the desired user, methods based on QR decompositions are proposed. These methods perform comparably to, and in some cases better than, subspace-based methods at an order of magnitude lower complexity. Furthermore, the methods exhibit significantly more robustness to channel order mismatch. Based on the acquired timing information, channel estimation algorithms are also developed that are competitive with previously proposed subspace-based channel estimation algorithms. In addition, a channel order estimation algorithm is proposed for the scenario where the order is unknown. Performance of the proposed algorithms is evaluated through simulation and comparison to asymptotic Cramér-Rao bounds. Keywords: DS-CDMA, multipath fading, synchronization, timing acquisition, multiuser systems, blind algorithms, channel estimation, wideband CDMA, QR decompositions
*This work was supported by the National Science Foundation under CAREER Grant NCR 96-24375.
**Present address: Nokia Research Center, 6000 Connection Dr., M2-700, Irving, TX 75039.

1. Introduction

Wideband Code-Division Multiple-Access (WCDMA) has been selected as the technology for wideband radio access to support third-generation multimedia services. Optimized for very high-speed multimedia services such as voice, Internet access and video-conferencing, the technology will provide access speeds of up to 2 Mbit/s in the local area and 384 kbit/s in the wide area with full mobility. These higher data rates require a wide radio frequency band, 5 MHz for WCDMA, which is much wider than the coherence bandwidth of most outdoor wireless channels. This fact implies that the information-bearing signals experience frequency-selective multipath fading, which can be modeled by a tapped delay line with equally spaced taps [1]. Typical values for the multipath delay spread
are about 10 µs for outdoor channels [2], which corresponds to several symbols if the data rate is 384 kbit/s or 2 Mbit/s and the cardinality of the modulation alphabet is not very large (such as QPSK). Our objective in this paper is to derive timing acquisition and multipath channel estimators for use in multiuser equalizers. Such information is necessary to compensate for the effects of the non-ideal channel. The conventional method for channel identification is to employ training sequences of known data. However, such a scheme requires more bandwidth to transmit the same amount of information. Consequently, there is interest in blind schemes for multiuser wireless systems. Recently, there has been a significant amount of interest in blind estimation and detection techniques for DS-CDMA (Direct-Sequence CDMA) systems in asynchronous and multipath fading channels [3–9]. Most of the proposed algorithms require prior knowledge of the signature waveform and timing of the user of interest to perform channel estimation or data detection. Furthermore, the support of the multipath channel
(channel order) is also often required. The timing information is critical to the performance of all of these algorithms. In [5, 9], the timing information of the user of interest is assumed to be known. In [3, 6], synchronization of the user of interest is assumed, i.e., the initial delay of the user of interest is zero. We note that in [10], a maximum likelihood approach to joint timing estimation, channel estimation and data detection for single-user DS-CDMA signals in single-path channels is developed, based on a coordinate ascent algorithm. The extension of this iterative approach to multiuser systems in multipath fading channels is not straightforward, and the complexity may be prohibitively high. Consider the receiver side of the reverse link of DS-CDMA mobile communication systems, i.e., the base station. While the signature waveform of the user of interest (or even the signature waveforms of all active users) is readily available at the base station, the timing information of the user of interest still needs to be estimated. Hence, to implement the blind algorithms previously proposed, we need an algorithm to acquire the timing information blindly. In this work, we shall assume that we do not have access to a training signal; however, we shall have access to the spreading code of the desired user. While blind timing acquisition and estimation algorithms have been developed for single-path asynchronous DS-CDMA systems [11, 12], blind timing acquisition in a multipath fading channel is still an open problem. Multipath acquisition and estimation has been considered in the context of training-based algorithms in [13]. We note that in [5, 9], a brief discussion of timing initialization in multipath fading channels is provided, but the algorithms incur a computational complexity as high as where N is the spreading gain, which renders their implementation costly and thus not favorable in practice.
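To make the subspace language concrete, here is a minimal, real-valued sketch (illustrative only, not the algorithm proposed in this paper) of extracting a noise-subspace estimate from a data matrix with a pivoted Gram-Schmidt (QR-style) sweep instead of a full SVD. X plays the role of the received data matrix and d the assumed signal-subspace dimension.

```python
import numpy as np

def noise_subspace_qr(X, d):
    """Orthonormal basis for the orthogonal complement of the d dominant
    directions of col(X), via pivoted Gram-Schmidt (a cheap QR-style sweep)."""
    X = np.array(X, dtype=float)
    dim = X.shape[0]
    Q = np.zeros((dim, d))
    for i in range(d):
        norms = np.linalg.norm(X, axis=0)
        j = int(np.argmax(norms))             # pivot: largest residual column
        Q[:, i] = X[:, j] / norms[j]
        X -= np.outer(Q[:, i], Q[:, i] @ X)   # deflate the chosen direction
    # complete to a full orthonormal basis; trailing columns span the complement
    full, _ = np.linalg.qr(np.concatenate([Q, np.eye(dim)], axis=1))
    return full[:, d:dim]

# noiseless check: the columns of X lie in col(F), so the estimate annihilates F
rng = np.random.default_rng(2)
F = rng.standard_normal((8, 3))
X = F @ rng.standard_normal((3, 40))
Un = noise_subspace_qr(X, 3)
```

The sweep touches each data column once per extracted direction, which is the kind of complexity saving (relative to a full SVD of the data matrix) that motivates QR-based methods.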
Our aim is to develop blind, moderate-complexity algorithms. The QR decomposition has been used as a tool for adaptive singular value decomposition [14] and for tracking signal subspaces [15]. It has also been applied to blind equalization of a single-user channel in [16], where the QR decomposition is applied directly to the received data matrix without forming the correlation matrix, and can thus be viewed as a square-root version of [17]. Again, initial timing knowledge is required. In this paper, we develop new blind timing acquisition algorithms based on the QR decomposition of matrices related to the estimated noise subspace, which have relatively lower complexity than the timing initialization algorithms in [5, 9]. Despite an order of magnitude lower implementation complexity, these QR decomposition based methods can provide better performance than subspace based methods under mild SNR conditions. The issue of imperfect channel order information is also addressed. It is observed that the subspace based methods perform badly for both small and large amounts of channel order mismatch. The originally proposed QR timing acquisition method is robust to channel order overestimation. A QR-based method for timing acquisition which is independent of channel order information is constructed, as well as an algorithm for estimating the channel order. Finally, channel estimation algorithms are developed. This paper is organized as follows: in Section 2, the system model for DS-CDMA signals in multipath fading channels is described. Blind timing acquisition and channel estimation methods based on QR decompositions are developed in Section 3. The sensitivity of these algorithms to channel order mismatch is also investigated. In Section 4, some numerical examples are considered. Conclusions are drawn in Section 5. Appendix A provides the algorithm for low-complexity updates of successive QR decompositions, and Appendix B gives the derivation of the Cramér-Rao Bound (CRB) for channel estimation.

2. System Model
We consider a multiuser DS-CDMA system. Each user experiences a static or slow-varying multipath fading channel with additive white Gaussian noise. The signal transmitted by the kth user is given by
where and are the amplitude and the nth data symbol of the kth user, respectively. The symbol duration is denoted by T. The data symbols are assumed to be independently distributed, with unit power, i.e., with for i = j and elsewhere. The signature waveform of the kth user is
where N, are the spreading gain, signature sequence, and chip duration, respectively.
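The spreading operation just described can be sketched as follows; this is a minimal, hypothetical illustration (rectangular chip pulse, unit-energy signature) in which each data symbol scales the length-N signature sequence at the chip rate:

```python
import numpy as np

def spread(symbols, code):
    """Chip-rate samples of one user's signal: each data symbol multiplies
    the unit-energy length-N signature sequence (rectangular chip pulse)."""
    c = np.asarray(code, dtype=float)
    c = c / np.linalg.norm(c)              # unit-energy signature waveform
    return np.kron(np.asarray(symbols, dtype=float), c)

chips = spread([1, -1], [1, 1, -1, 1])     # two symbols, spreading gain N = 4
```

With two symbols and N = 4 this produces eight chip samples, the second block being the sign-flipped signature, and each symbol's block carries unit energy.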
The pulse shape function is denoted by which is assumed to be a rectangular pulse shape with width equal to in this work. The pulse shape and the signature waveform are assumed to be of unit energy, i.e.,
Each user’s signal is transmitted through a multipath fading channel. For frequency-selective, slow fading, the multipath channel the kth user experiences can be modeled by a tapped delay line with equally spaced taps and an initial delay. Any fractionally spaced channel impulse response model can be converted to a tapped delay line model, with initial delay aligned with the chip boundary (see [1]). Thus, we estimate the “effective” discrete-time channel. The parameterization is given by,
where

is the initial delay of the kth user, and is the delay spread of the channel of the kth user. Thus, the timing acquisition problem is to estimate or equivalently

The channel estimation problem is to estimate the complex coefficients, for The channel and the timing parameters are considered to be deterministic unknowns. At the receiver side, after the radio frequency front-end reception, the received signal is filtered by a chip-matched filter and sampled at the chip rate to obtain discrete-time data for subsequent digital signal processing. The chip-matched filter is arbitrarily synchronized. Oversampling is possible if excess bandwidth is available, but we do not discuss this issue here. We next provide a discrete time representation for the sampled received signal. This representation will facilitate the development of timing acquisition and channel estimation algorithms. If is known, we can synchronize the receiver to the kth user. Denote the discrete time true channel vector of the kth user by then the discrete time representation of the true channel vector becomes

If no timing information is available, we zero-pad the channel vector at both ends to construct a channel vector that is "synchronized" to the receive timing; thus the synchronized channel vector becomes

In the above equation, the channel length becomes lN, where and denotes the smallest integer greater than or equal to Note that it would be possible to estimate however, this would be inefficient, as effort would be expended in estimating coefficients that are zero-valued. Now we partition the channel vector as
where has dimension N × 1 for i = 0, . . . , l – 1. Denoting the contribution of the kth user’s signal to the ith chip of the nth symbol sample by we define the following vectors:
Stacking N chip samples together, is the contribution of the kth user to the nth symbol sample. The contribution of the kth user to the nth receive vector is formed by stacking m successive symbol samples, and is denoted by We shall call this parameter m, the smoothing factor [5]. The data vector of the kth user is which contains the transmitted symbols present in Recalling Eq. (1), we define the spreading code vector for the kth user, Define the spreading code matrix for the kth user as
We next define the channel matrix for the kth user as
Note that the constituents of the channel matrix are the partitions of the synchronized channel vector; thus the matrix is, in fact, much sparser than indicated by Eq. (5). With all these definitions, we can write the contribution of the kth user to the received signal vector as
The product of the Sylvester matrices and describes the convolution of the signature waveform with the channel. The received signal vector is comprised of the contributions of each of the K active users and the additive channel noise,

where is a circularly symmetric complex Gaussian noise vector with correlation matrix Let and then the received vector becomes

Multiple observations will be used for forming the estimates. To describe the total received data matrix, we make the following definitions of the received data matrix, transmitted data matrix, and noise matrix,

where M is the number of received vectors we have. The data matrix can thus be written as

3. Blind Timing Acquisition and Channel Estimation
In this section, we focus on the development of blind algorithms exploiting the second order statistics of the received signal. The objective is to estimate the timing of a user when the multipath fading coefficients are unknown. The algorithm developed herein will then be coupled to a multipath channel estimation algorithm. Due to the cyclostationarity of the DS-CDMA signal sampled at the chip rate, and due to the assumption that the data is white, second order statistics of the received signal can be employed to estimate the timing and the channel. As in other blind methods that take advantage of second order statistics [3, 5, 9], a subspace decomposition is performed. We shall perform a singular value decomposition (SVD) on the received data matrix. Based on the relationship between the true noise and signal subspaces, estimation algorithms can be derived. To distinguish between the true noise and signal subspace decomposition and that derived when only finite samples are available, let denote the received data matrix with Thus, the SVD of
On Blind Timing Acquisition
gives us the true signal and noise subspaces. The column vectors associated with the signal eigenvalues span the signal subspace, determined by the span of the columns of F defined in Eq. (7). In contrast, the column vectors associated with the noise eigenvalues span the noise subspace, which is the orthogonal complement of the signal subspace. In practice, only a finite number of samples is available; for finite M, the SVD will yield noisy estimates of the true noise and signal subspaces. We denote these the estimated noise and signal subspaces, respectively. Motivated by concerns about the computational complexity of any proposed algorithm, we make the following observations. From the formulation of the data matrix (see Eq. (7)), we can see that the column rank of the true signal subspace is K(m + l). One necessary condition for blind identifiability is that the channel matrix F be a "tall" matrix (see, e.g., [8]) to ensure full column rank; we note that this condition is equivalent to ensuring the existence of a noise subspace. Thus, the key parameters must satisfy mN > K(m + l), where m is the number of symbols in the received signal vector r(n). For a fixed number of users, m must increase if l increases in order to ensure identifiability. If m increases, then the dimension of the received signal vector also increases. This increase has two consequences: first, the complexity of the SVD grows rapidly (cubically) with the dimension of the received data vector; second, more data samples (increased M) are required to obtain high-fidelity estimates of the second-order statistics. Therefore, computationally efficient algorithms are especially attractive for solving blind estimation problems in dispersive channels. Since the column vectors of the true signal subspace span the range of F, and due to the orthogonality between the signal and noise subspaces, we have the orthogonality condition of Eq. (9).
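As a concrete illustration of the subspace split described above, the following sketch estimates signal and noise subspaces from a finite-sample data matrix via the SVD. The helper name and the toy dimensions are illustrative, not from the paper; `signal_dim` plays the role of the rank K(m + l):

```python
import numpy as np

def estimate_subspaces(Y, signal_dim):
    """Split the column space of a data matrix Y (rows: stacked chip-rate
    samples, columns: M observation vectors) into estimated signal and
    noise subspaces via the SVD. `signal_dim` is the assumed rank of the
    signal subspace, K(m + l) in the paper's notation."""
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :signal_dim]          # estimated signal subspace
    Un = U[:, signal_dim:]          # estimated noise subspace (orthogonal complement)
    return Us, Un

# Toy check: a rank-2 "signal" observed in low-level noise.
rng = np.random.default_rng(0)
F = rng.standard_normal((16, 2))                       # tall, full column rank
Y = F @ rng.standard_normal((2, 200)) + 1e-3 * rng.standard_normal((16, 200))
Us, Un = estimate_subspaces(Y, signal_dim=2)
print(np.linalg.norm(Un.conj().T @ F))   # small: Un is (nearly) orthogonal to range(F)
```

For finite M the product above is small but non-zero, which is exactly the noisy-subspace effect the estimators in this section must tolerate.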
We observe that the channel matrix is sparse and contains few unique non-zero components. Thus the orthogonality condition in Eq. (9) can be expressed in an alternative form. Assume user 1 is the user of interest. Let
where i = 0, 1, …, m. Noting the Sylvester structure defined in Eq. (5), we define
thus obtaining an equivalent expression for Eq. (9), namely Eq. (12). In the sequel, we will derive estimation algorithms for the timing information of user 1, which is embedded in the quantities above, by exploiting Eq. (12). First, we make a few observations on identifiability of the channel, both when the channel order is unknown and when it is known.

3.1. Identifiability
From the previous discussion, we know that a necessary condition for blind identifiability is mN – K(m + l) > 0, which ensures full column rank of F. Since Eq. (12) is an equivalent representation of Eq. (9), we can investigate blind identifiability by examining Eq. (12). Consider the right null space of the matrix in Eq. (12). Under the assumption that the SVD yields the true signal and noise subspaces, we know from Eq. (12) that any vector in this null space is a solution to the channel identification problem as posed in Eq. (12). If the null space is trivial, Eq. (12) contains no nonzero solution; if its dimension exceeds one, the channel can be identified only up to a complex ambiguity matrix. To ensure identifiability of the channel up to a complex coefficient (rather than a complex ambiguity matrix), we must ensure that the null space is one-dimensional. This discussion assumes knowledge of the spreading code of the user of interest, but the initial timing and channel order are not available. We summarize this discussion in the following proposition.

Proposition 1. For unknown channel order, the channel of the user of interest is identifiable up to a complex coefficient if and only if the column rank of the matrix defined by (10) and (11) is equal to lN – 1.

Assume that user 1 is the user of interest. In the case where the channel order of user 1 is available, the channel vector has the structure defined in Eq. (3), which offers an additional constraint to be used for channel identification. Consider the submatrix consisting of the columns from the ith through the jth column. The true channel vector will satisfy the corresponding constraint for the correct index, with the channel vector defined as in Eq. (3). If the channel is to be identifiable up to a complex coefficient, then for all other values of i the corresponding submatrix should be full rank; otherwise another solution satisfying both the constraint and Eq. (12) would arise, contradicting the identifiability condition. Conversely, if the ranks of this set of matrices satisfy the desired condition, identifiability is ensured. Hence, a necessary and sufficient condition for identifiability is given by:

Proposition 2. The channel of the user of interest is identifiable up to a complex coefficient if and only if, of the N matrices, all but one have full column rank and the rank of the remaining matrix is equal to one less than its number of columns.

3.2. Estimation Methods Based on Successive QR Decompositions
In this section, we develop blind timing acquisition and channel estimation algorithms based on successive QR decompositions. We assume that user 1 is the user of interest; we shall first assume additionally that the channel order of user 1 is known. Equipped with the channel order, we can exploit the structure exhibited in Eq. (3). Recall that the problem of timing acquisition is equivalent to estimating the initial delay. An interpretation of the orthogonality condition is that the column vectors of the relevant submatrix are linearly dependent when the hypothesized delay is correct. We can use this notion to determine the timing of user 1. Note that we initially develop the algorithm under the assumption that the matrix is formed with the column vectors of the true noise subspace. The QR decomposition is a tool that can be employed to evaluate whether a set of vectors is in fact linearly dependent. The QR decomposition is given by

If the blind identifiability condition in Proposition 2 is satisfied, the columns under test are linearly dependent (equivalently, the corresponding diagonal entry of R vanishes) if and only if the hypothesized delay is correct. In the finite sample case, the estimator is implemented as
where the test quantity is determined using the estimate of the noise subspace rather than the true noise subspace (see Eq. (9)). By reformulating the timing estimation problem in terms of a QR decomposition (rather than an SVD or EVD), we can take advantage of moderate-complexity methods for updating QR decompositions and thus construct an acquisition algorithm with lower overall complexity than those based on SVDs or EVDs [3, 5, 8]. A clarification on the methods of [3, 5, 8] is in order. First, these schemes require an initial subspace decomposition, as does the proposed method. Second, to determine the desired channel estimate (conditioned on knowing the correct timing acquisition information), the eigenvector (singular vector) corresponding to one of the extremal eigenvalues (singular values) must be determined. In the absence of timing acquisition information, this solution method must be applied N times, once for each possible delay value. Our proposed method does not obviate the need to check N possible values either. However, by employing QR decompositions, we can take advantage of low-complexity methods to update the desired decomposition in a manner that is not currently possible with EVDs or SVDs. We also note that the methods of [3, 5, 8], in essence, perform timing acquisition and channel estimation simultaneously, while we must first design a timing acquisition algorithm, followed by an associated channel estimation
algorithm. However, we will see that the channel estimation algorithm is quite straightforward and of low complexity. The complexity of the algorithm in Eq. (14) would be substantial if we implemented the direct QR decomposition in (13) for each possible delay value. Using the QR factorization update methods in [18], we can instead implement (13) successively and reduce the computation. The details of the successive QR decomposition update are provided in Appendix A. It is noted that there are also low-complexity algorithms for updating subspaces [15] and iterative methods for solving for eigenvalue extrema [18]. However, the subspace updating methods are intended for tracking an evolving subspace rather than for computing decompositions of embedded matrices, and the iterative methods for computing eigenvalue extrema may have a poor convergence rate. Equation (14) provides the timing acquisition algorithm. If the initial delay is obtained correctly, the channel is determined by employing the QR decomposition corresponding to the estimated delay and the corresponding orthogonality condition:
where the unknown is the true channel vector, corresponding to the non-zero elements of the synchronized channel vector (see Eq. (3)). Observe that the channel length is used explicitly to form the successive QR updates and in the equation above. In the case of infinite data, and thus access to the true noise subspace, the condition holds exactly; hence the channel estimate is simply given by rearranging the above equation,
Clearly, the channel is identifiable up to a complex coefficient. Since we have already obtained the required QR decomposition during the successive QR decomposition for timing acquisition, solving the above set of linear equations requires only low-order additional computation.
3.3. Computational Complexity
In Appendix A, we show how to obtain the QR decomposition at one window position from the QR decomposition at the previous position (see Eqs. (22)–(26)). Herein, we investigate the computational complexity of this method. The main computational cost of each update comes from the Givens rotations in Eqs. (22) and (23); each Givens rotation can be performed cheaply because of the structure of the Givens rotation matrices. There are N possible delay values, so the update is performed N times. The channel estimation algorithm adds only low-order computation, so the total complexity of timing acquisition and channel estimation is dominated by the successive updates. This complexity is an order of magnitude lower than that of the methods proposed in [3, 5, 8].
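The two-step update analyzed above (delete the leading column, then append a new column; Eqs. (22)–(26)) can be sketched for a real-valued economy QR factorization. This is a simplified stand-in for the paper's complex-valued update, and the function names are hypothetical:

```python
import numpy as np

def qr_delete_first_column(Q, R):
    """Given the economy QR factorization A = Q @ R (A real, m x n), return
    the economy QR factorization of A with its first column deleted.
    Dropping the column leaves R[:, 1:] upper Hessenberg; one sweep of
    Givens rotations restores triangularity in O(n^2) operations instead
    of refactoring from scratch."""
    R = R[:, 1:].copy()
    Q = Q.copy()
    n = R.shape[1]                          # new column count (one fewer)
    for k in range(n):
        a, b = R[k, k], R[k + 1, k]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        R[k:k + 2, k:] = G @ R[k:k + 2, k:]  # zero the subdiagonal entry (k+1, k)
        R[k + 1, k] = 0.0
        Q[:, k:k + 2] = Q[:, k:k + 2] @ G.T  # keep Q @ R invariant
    return Q[:, :n], R[:n, :]                # last row of R is now zero; drop it

def qr_append_column(Q, R, v):
    """Append column v on the right of A = Q @ R: one Gram-Schmidt step
    (step (2) of the update), assuming v is not already in range(Q)."""
    w = Q.T @ v
    res = v - Q @ w                          # component orthogonal to range(Q)
    rho = np.linalg.norm(res)
    Q_new = np.hstack([Q, (res / rho)[:, None]])
    R_new = np.zeros((R.shape[0] + 1, R.shape[1] + 1))
    R_new[:-1, :-1] = R
    R_new[:-1, -1] = w
    R_new[-1, -1] = rho
    return Q_new, R_new

# Toy check: slide the column window of a random matrix by one position.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
Q, R = np.linalg.qr(A)                       # economy QR: Q is 8x4, R is 4x4
Q1, R1 = qr_delete_first_column(Q, R)        # factorization of A[:, 1:]
v = rng.standard_normal(8)
Q2, R2 = qr_append_column(Q1, R1, v)         # factorization of [A[:, 1:], v]
print(np.allclose(Q1 @ R1, A[:, 1:]))        # True
```

Each sliding-window step thus reuses the previous factorization, which is the source of the complexity savings claimed above.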
3.4. Methods for Imperfect Channel Order Knowledge
The previous set of algorithms was developed under the assumption of perfect knowledge of the channel order. In practice, one may not know the exact order of the dispersive channel; however, with information about the wireless application and the associated environment, it may be possible to bound the channel order by the maximum channel spread [1]. Another motivation for this study is the fact that the subspace algorithms of [3, 5, 8] appear to be very sensitive to channel order mismatch, thus limiting their application in practical scenarios. In this section, we consider algorithms for imperfect channel order information. Two approaches are considered: first, we develop blind timing acquisition without knowledge of the channel order of the user of interest; second, we develop a channel order estimation algorithm. The algorithms derived herein still require an estimate of the noise subspace and thus information about the dimension of the noise subspace. In the absence of channel
order information, the dimension of the signal subspace in (8) can be estimated by an information-theoretic criterion such as the Akaike information criterion (AIC) or the minimum description length (MDL) method [19]. With a viable choice of the maximum channel spread, i.e., lN, we can construct the expressions in Eq. (12) as in the previous section. Without channel order information, a reasonable approximation is to assume that the channel vector has all non-zero taps after the initial delay; the channel vector is then given by
Again, assume user 1 is the user of interest. Even though Eq. (12) remains valid, the method developed in the previous section is not applicable, as it required explicit knowledge of the channel order. However, the same idea based on linear dependence can be used. With the approximation in (16), the relevant column vectors are linearly dependent; we shall determine the timing by testing for this property. The QR decomposition is given by
If the blind identifiability condition of Proposition 1 is satisfied, the tested diagonal entry of R is zero if and only if the hypothesized delay is correct. In the finite sample case, the estimator is implemented as
where the test quantity is determined using the estimate of the noise subspace rather than the true noise subspace. Note that the key difference between the channel-order-dependent and channel-order-independent methods is that a sliding window is used to examine the columns when the channel order is known, whereas an expanding window view is taken when it is not. In fact, for the case where the channel order is not known, a single QR decomposition is employed. A drawback of the method above is that it is not conducive to subsequent channel estimation. One could estimate the full-length vector of Eq. (16); however, due to the overparameterization of the true channel, the estimation performance would be poor. Thus, to enable channel estimation, the channel length should be estimated first.
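The linear-dependence test underlying both Eq. (14) and Eq. (17) can be illustrated generically: slide a column window over a matrix and score each position by the magnitude of the last diagonal entry of R, which is (near) zero exactly when the window's columns are dependent. The matrix below is a toy stand-in for the paper's noise-subspace/code product:

```python
import numpy as np

def detect_delay(G, w):
    """Slide a width-w column window over G and return the start index whose
    columns are closest to linear dependence, scored by |R[w-1, w-1]| from a
    QR factorization of each window."""
    scores = []
    for d in range(G.shape[1] - w + 1):
        R = np.linalg.qr(G[:, d:d + w], mode='r')
        scores.append(abs(R[w - 1, w - 1]))
    return int(np.argmin(scores)), scores

# Toy check: force the window starting at column 3 to be linearly dependent.
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 10))
G[:, 5] = G[:, 3] + G[:, 4]          # columns {3, 4, 5} are now dependent
d_hat, scores = detect_delay(G, w=3)
print(d_hat)   # 3
```

With noisy (finite-sample) subspace estimates the winning score is small rather than exactly zero, which is why the estimators take the argmin rather than testing for equality with zero.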
We use the same method as for estimating the timing; recall Eq. (12) and Eq. (13). Assuming the initial delay is estimated correctly in the previous blind timing acquisition stage, the channel order is obtained by determining how many leading zeros there are in the solution vector. We define the left-right flipped version of the relevant matrix,
We use the QR decomposition to evaluate the linear dependence of the column vectors, starting from the position corresponding to the initial delay. The QR decomposition is given by
If the blind identifiability condition of Proposition 1 is satisfied, the tested diagonal entry is zero if and only if the hypothesized channel order is correct. In the finite sample case, the matrix is formed with the estimated noise subspace, and the estimator is

Assuming the initial delay and channel order estimates are correct, the channel estimate is found in a manner similar to that employed when the channel length is known,
The computation of the timing estimator in Eq. (17) and of the channel length estimate in Eq. (19) is of the same order as before, while the computation of the channel estimate in Eq. (20) is of lower order, based on the QR decomposition in (18). The structure of the channel vector cannot be fully exploited because channel order information is not available (the approximate channel vector is overparameterized); thus some performance loss in terms of timing estimation will be incurred by the approximation in (16). On the other hand, since (17) does not rely on channel order information, as long as the maximum channel spread is chosen properly this estimator is intrinsically immune to errors in channel order and can outperform the methods based on channel order information when channel order mismatch occurs. These predictions are borne out by the numerical results.
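The noise-subspace dimension required by these algorithms can, as noted above, be estimated with MDL from sample-covariance eigenvalues. A sketch in the standard eigenvalue form (an assumed concrete form; `mdl_order` and the toy eigenvalues are illustrative, and [19] states only the general description-length principle):

```python
import numpy as np

def mdl_order(eigvals, M):
    """MDL estimate of the signal-subspace dimension from the eigenvalues of
    a p x p sample covariance computed from M snapshots. Returns the
    candidate d (0 <= d < p) minimizing the MDL cost."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]   # descending
    p = lam.size
    cost = np.empty(p)
    for d in range(p):
        tail = lam[d:]                                      # assumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))                 # geometric mean
        ari = np.mean(tail)                                 # arithmetic mean
        cost[d] = -M * (p - d) * np.log(geo / ari) + 0.5 * d * (2 * p - d) * np.log(M)
    return int(np.argmin(cost))

# Three strong "signal" eigenvalues over a cluster of near-equal noise eigenvalues.
lam = [10.0, 5.0, 2.0, 0.011, 0.0105, 0.01, 0.0095, 0.009]
print(mdl_order(lam, M=500))   # 3
```

The criterion trades the spread of the assumed noise eigenvalues (which shrinks as d grows) against a penalty on model size, so it needs no subjective threshold.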
4. Numerical Results
In this section, we provide numerical results to evaluate the performance of the proposed blind timing acquisition and channel estimation algorithms for DS-CDMA signals in multipath fading channels. Randomly generated sequences of length N = 16 are used as spreading codes. The initial timing delay of each user is generated randomly with a uniform probability mass function over {0, 1, …, N – 1}. The multipath fading channels are modeled by tapped delay lines whose delay spread corresponds to the duration of one symbol. The complex channel coefficients are generated as i.i.d. white Gaussian random variables, and the channel vectors are then normalized to have unit power. The load of the system is K = 6 users. User 1 is assumed to be the user of interest and has unit power. All other users are assumed to have the same power level, which is 20 dB higher than that of user 1 (the excess power of the interferers is called the near-far ratio and denoted by NFR). For each data point in the figures in this section, spreading codes are generated randomly for each run but the channel vectors are fixed. The subspace-based estimation algorithms proposed in [5, 8] can be summarized as
These methods minimize the cost function at each possible delay location and use the minimum cost associated with each location as the detection metric. The channel estimate is given by the eigenvector corresponding to the minimum eigenvalue at that location. Thus, N channel estimates are determined and the one with the minimum associated cost is selected. For timing acquisition, we use the detection error rate (DER) as the figure of merit. The performance of four algorithms is compared, namely, the subspace-based method (SUB) given in Eq. (21), the successive QR decomposition based method (QR w L) provided in Eq. (14), the QR decomposition based method without channel order knowledge (QR w/o L) seen in Eq. (17), and the QR decomposition based channel order estimation algorithm (QR Ord Est) in (19). The simulated performance is shown in Figs. 1 and 2. Figure 1 shows the performance versus the number of samples used to form the second-order statistics, whereas Fig. 2 shows the performance versus the signal-to-noise ratio (SNR) of user 1. In the SNR and sample-size regions considered, the QR w L algorithm always outperforms the subspace-based method (SUB). As the SNR or the number of samples increases, QR w/o L and QR Ord Est also outperform the subspace-based method. The successive QR decomposition based method with channel order information (QR w L) always performs better than the QR based methods without knowledge of the channel order (QR w/o L and QR Ord Est). The performance of the channel order estimation algorithm (QR Ord Est) is the same as that of the QR decomposition based timing acquisition method without channel order knowledge (QR w/o L), because they are essentially based on the same linear-dependence idea. The effect of channel order mismatch is shown in Fig. 3. The simulation settings are unchanged except that the channel order of user 1 is varied. The case with M = 400, SNR = 15 dB, NFR = 20 dB is simulated.
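Under assumptions about details the text leaves open (a binary code alphabet and complex Gaussian taps), the Monte Carlo setup described above might be sketched as:

```python
import numpy as np

# Sketch of the simulation setup: N = 16 random spreading codes, K = 6 users,
# uniform random initial delays, unit-power i.i.d. Gaussian channel vectors,
# and interferers 20 dB above the desired user (NFR = 20 dB).
rng = np.random.default_rng(0)
N, K, L = 16, 6, 16                                    # spreading gain, users, channel taps
codes = rng.choice([-1.0, 1.0], size=(K, N))           # random binary spreading codes
delays = rng.integers(0, N, size=K)                    # uniform over {0, ..., N - 1}
h = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
h /= np.linalg.norm(h, axis=1, keepdims=True)          # normalize to unit power
amp = np.ones(K)                                       # user 1 has unit power
amp[1:] = 10.0 ** (20.0 / 20.0)                        # NFR = 20 dB, in amplitude
```

In the paper's runs the codes are redrawn per run while the channel vectors stay fixed, so only the code-generation lines would sit inside the Monte Carlo loop.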
Since the QR based method without channel order knowledge is intrinsically immune to channel order mismatch, its performance remains constant as a function of the mismatch; this algorithm thus offers the best resistance to channel order mismatch. The subspace method [5, 8] proves to be quite sensitive to channel order mismatch: the performance degrades by one or two orders of magnitude even when the channel order mismatch is ±1. The successive QR based method is sensitive to channel order underestimation, but is
insensitive to channel order overestimation. This is because the column vectors tested in Eq. (13) are still linearly dependent when the channel order is overestimated; in contrast, linear dependence cannot be guaranteed if the channel order is underestimated. For channel estimation, the performance of the QR decomposition based channel estimator (QR w L) given in Eq. (15), the QR decomposition based method without channel order knowledge but with channel order estimation (QR w OE, proposed in Eq. (20)), and the subspace-based channel estimator (SUB, given in Eq. (21)) is compared in Figs. 4 and 5. The performance of the QR decomposition based method without channel order knowledge but with the approximation in Eq. (16) (QR w/o OE) is also provided, to assess the effectiveness of channel order estimation when channel order knowledge is not available. The root mean squared error (RMSE) is used as the performance metric. For evaluating the RMSE of QR w/o OE, the true channel is zero-padded to the same length as the overparameterized estimate. The simulated system is the same as the one considered for the evaluation of the timing acquisition algorithms. In the simulated regions of SNR and number of samples, the QR decomposition based method (QR w L) performs very close to the subspace-based method in most cases,
despite having an order of magnitude lower complexity. Without channel order knowledge, performance degradation is observed in the curves for methods QR w OE and QR w/o OE. It is not surprising that the channel order estimation algorithm improves the performance of channel estimation (QR w OE) over the algorithm with the simple channel approximation of Eq. (16) (QR w/o OE): the channel order estimation method leads to a more parsimonious parameterization of the channel. The last two figures (Figs. 6 and 7) compare the channel estimation algorithms with the asymptotic Cramér-Rao bound (CRB). Assuming independent data bits, it is straightforward to show that the CRB for parameter estimation of a particular user in a multiuser system approaches the CRB of the corresponding single-user system (where all the other users are absent) asymptotically as the number of samples goes to infinity. Hence, essentially the single-user CRB is shown in Figs. 6 and 7. It is noted that in many previous publications (e.g., [11]), extra effort was spent deriving and calculating the asymptotic multiuser CRB, which is equivalent to the single-user CRB simply by invoking the mutual independence of the data from different users. The simulation settings are the same as in Figs. 1 and 2 except that the spreading codes are
generated randomly and then fixed for the whole simulation, as the CRB is spreading-code dependent. Again, the QR decomposition based method (QR w L) performs very close to the subspace-based method in most cases. However, neither algorithm achieves the CRB.
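Because blind estimates are identifiable only up to a complex coefficient, an RMSE comparison of this kind must first resolve the scalar ambiguity. One common convention (an assumption; the text does not state its normalization) is a least-squares rescaling of the estimate before measuring the error:

```python
import numpy as np

def rmse_up_to_scale(h_true, h_est):
    """Error between a true channel and a blind estimate identifiable only up
    to a complex coefficient: rescale the estimate by the least-squares
    optimal complex scalar alpha, then take the residual norm."""
    alpha = np.vdot(h_est, h_true) / np.vdot(h_est, h_est)
    return np.linalg.norm(h_true - alpha * h_est)

h = np.array([1 + 1j, 0.5, -0.3j])
print(rmse_up_to_scale(h, (0.7 - 0.2j) * h))   # ~0: any scaled copy is a perfect estimate
```

Without such an alignment, a perfectly identified channel could show an arbitrarily large raw error purely because of the unobservable scale factor.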
5. Conclusions

In this paper, we have considered the problems of blind timing acquisition and blind channel estimation for DS-CDMA signals in multipath fading channels. Methods based on QR decompositions are developed. The paper treats several issues not fully explored in prior work: blind timing acquisition in multipath channels; acquisition methods which are insensitive to imperfect channel length information; and channel length estimation algorithms. A successive QR decomposition method, exploiting the knowledge of channel order, is proposed based on updating the QR decomposition of a set of vectors related to the estimated noise subspace. Another method which does not rely on channel order knowledge is also developed. A related algorithm can estimate the channel order when it is unknown. It is shown that these methods have better performance, lower complexity, and better robustness against channel order mismatch than subspace based methods [5, 8] in terms of timing acquisition. Channel estimation algorithms based on QR decompositions are also developed to work in tandem with the timing acquisition algorithms. These estimation algorithms provide almost identical performance to that offered by subspace algorithms with modest complexity.

Appendix A: Successive QR Decomposition

In this Appendix we provide the algorithm for updating the QR decomposition in Section 3. Suppose we have the QR decomposition of the current matrix as in (13). Our purpose is to find an efficient way to compute the QR decomposition of the next matrix by taking advantage of the extant QR decomposition rather than decomposing directly. The update can be performed in two consecutive steps: 1) update the QR decomposition when the first column of the matrix is deleted; and then 2) update the QR decomposition again when a new column is added to the right of the matrix.

First we look at how to update the QR decomposition when the first column is deleted. From Eq. (13), we have

Note that the resulting factor is an upper Hessenberg matrix [18]. We can apply a sequence of Givens rotations to eliminate the unwanted subdiagonal elements. The sequence of Givens rotations can be written as

where

with complex scalars properly specified to eliminate the unwanted subdiagonal element at the kth column. Denote

Clearly, the rotation parameters can be specified as

Then

Clearly, this holds for each k; the initial value is given by

After the Givens rotations, we obtain Eqs. (22) and (23), which constitute the updated QR decomposition corresponding to step (1). Note that it is unnecessary to compute the last column of the orthogonal factor because the last row of the triangular factor is the zero vector. The update of the QR decomposition for step (2) is essentially one step of a Gram–Schmidt orthogonalization [18], which is given by

where the new column is the column added in step (2). Hence, the QR decomposition is obtained as

where the triangular factor is given by Eq. (22), the first column vectors of the orthogonal factor are given by Eq. (23), the last column vector is given by Eqs. (24) and (25), and the last column of the triangular factor is given by Eq. (26).

Notes

1. Recall that K is the number of active users, m is the smoothing factor, and lN is the channel order.
2. Thus, for a single receive sensor, the number of active users should be less than the spreading gain; however, this condition can be relaxed if multiple receivers are employed, as in an antenna array.

References

1. J.G. Proakis, Digital Communications, 3rd edn., New York: McGraw-Hill, 1995, Ch. 14, pp. 795–797.
2. T. Rappaport, S. Seidel, and R. Singh, "900-MHz Multipath Propagation Measurements for U.S. Digital Cellular Radiotelephone," IEEE Transactions on Vehicular Technology, vol. 39, May 1990, pp. 132–139.
3. E. Aktas and U. Mitra, "Single User Sparse Channel Acquisition for DS/CDMA," submitted to the IEEE Transactions on Communications, June 2000.
4. S.E. Bensley and B. Aazhang, "Subspace-Based Channel Estimation for Code Division Multiple Access Communication Systems," IEEE Transactions on Communications, vol. 44, no. 8, Aug. 1996, pp. 1009–1020.
5. M. Torlak and G. Xu, "Blind Multiuser Channel Estimation in Asynchronous CDMA Systems," IEEE Transactions on Signal Processing, vol. 45, no. 1, Jan. 1997, pp. 137–147.
6. M.K. Tsatsanis and Z. (Daniel) Xu, "Performance Analysis of Minimum Variance CDMA Receivers," IEEE Transactions on Signal Processing, vol. 46, no. 11, Nov. 1998, pp. 3014–3022.
7. M. Honig, U. Madhow, and S. Verdú, "Blind Adaptive Multiuser Detection," IEEE Transactions on Information Theory, vol. 41, no. 4, July 1995, pp. 944–960.
8. X. Wang and H.V. Poor, "Blind Equalization and Multiuser Detection in Dispersive CDMA Channels," IEEE Transactions on Communications, vol. 46, no. 1, Jan. 1998, pp. 91–103.
9. X. Wang and H.V. Poor, "Blind Multiuser Detection: A Subspace Approach," IEEE Transactions on Information Theory, vol. 44, no. 2, March 1998, pp. 677–690.
10. I. Sharfer and A.O. Hero, "A Maximum Likelihood Digital Receiver Using Coordinate Ascent and the Discrete Wavelet Transform," IEEE Transactions on Signal Processing, vol. 47, no. 3, March 1999, pp. 813–825.
11. E.G. Ström, S. Parkvall, S.L. Miller, and B.E. Ottersten, "Propagation Delay Estimation in Asynchronous Direct-Sequence Code-Division Multiple Access Systems," IEEE Transactions on Communications, vol. 44, no. 1, Jan. 1996, pp. 84–93.
12. U. Madhow, "Blind Adaptive Interference Suppression for the Near-Far Resistant Acquisition and Demodulation of Direct-Sequence CDMA Signals," IEEE Transactions on Signal Processing, vol. 45, no. 1, Jan. 1997, pp. 124–136.
13. E. Ertin, U. Mitra, and S. Siwamogsatham, "Maximum-Likelihood Based Multipath Channel Estimation for Code-Division Multiple-Access Systems," IEEE Transactions on Communications, vol. 49, no. 2, Feb. 2001, pp. 290–302.
14. E.M. Dowling, L.P. Ammann, and R.D. DeGroat, "A TQR-Iteration Based Adaptive SVD for Real Time Angle and Frequency Tracking," IEEE Transactions on Signal Processing, vol. 42, no. 4, April 1994, pp. 914–926.
15. C.H. Bischof and G.M. Shroff, "On Updating Signal Subspaces," IEEE Transactions on Signal Processing, vol. 40, no. 1, Jan. 1992, pp. 96–105.
16. X. Li and H. (Howard) Fan, "QR Factorization Based Blind Channel Identification and Equalization with Second-Order Statistics," IEEE Transactions on Signal Processing, vol. 48, no. 1, Jan. 2000, pp. 60–69.
17. L. Tong, G. Xu, and T. Kailath, "Blind Identification and Equalization Based on Second-Order Statistics: A Time Domain Approach," IEEE Transactions on Information Theory, vol. 40, March 1994, pp. 340–349.
18. G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd edn., Baltimore, MD: The Johns Hopkins University Press, 1996.
19. J. Rissanen, "Modeling by Shortest Data Description," Automatica, vol. 14, 1978, pp. 465–471.
Zhouyue Pi received the B.E. degree with honors in automation from Tsinghua University, Beijing, China, and the M.S. degree in electrical engineering from Ohio State University, in 1998 and 2000, respectively. He is a recipient of the Ohio State University Fellowship. He is currently with Nokia Research Center, Irving, TX, where his current research includes work in system design, capacity analysis and resource management for advanced wireless communication systems.
[email protected]
Urbashi Mitra received the B.S. and the M.S. degrees from the University of California at Berkeley in 1987 (high honors) and 1989 respectively, both in Electrical Engineering and Computer Science. From 1989 until 1990 she worked as a Member of Technical Staff at Bellcore in Red Bank, NJ. In 1994, she received her Ph.D. from Princeton University in Electrical Engineering. From 1994 to 2000, Dr. Mitra was an Assistant Professor in the Department of Electrical Engineering at The Ohio State University, Columbus, Ohio. She became an Associate Professor in 2000 and currently holds that position in the Department of Electrical Engineering—Systems at the University of Southern California, Los Angeles. At the University of Southern California, Dr. Mitra is a member of the Communication Sciences Institute and the Integrated Media Systems Center. She is a recipient of a 1996 National Science Foundation CAREER award. Additionally, she has received from the Ohio State College of Engineering: a Charles E. MacQuigg Award for Outstanding Teaching in 1997 and a Lumley Award for Research in 2000. Dr. Mitra received an NSF International Post-doctoral Fellowship in 1994 as well as the Lockheed Leadership Fellowship (1988–1989) and the University of California Microelectronics Fellowship (1987). During several occasions between 1995 and 1997, Dr. Mitra was a visiting scholar at the Institut Eurecom, in Sophia Antipolis, France. She participates in the IEEE as an Associate Editor for the IEEE Transactions on Communications and as the Membership Chair for the IEEE Information Theory Society.
[email protected]
Journal of VLSI Signal Processing 30, 143–161, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Downlink Specific Linear Equalization for Frequency Selective CDMA Cellular Systems*
THOMAS P. KRAUSS, WILLIAM J. HILLERY AND MICHAEL D. ZOLTOWSKI
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907-1285, USA
Received September 1, 2000; Revised June 19, 2001
Abstract. We derive and compare several linear equalizers for the CDMA downlink under frequency-selective multipath conditions: minimum mean-square error (MMSE), zero-forcing (ZF), and RAKE. The MMSE and ZF equalizers are designed based on perfect knowledge of the channel. The downlink-specific structure involves first inverting the multipath channel to restore, at the chip rate, the synchronous multi-user signal transmitted from the base-station, and then correlating once per symbol with the product of the desired user's channel code and the base-station specific scrambling code to decode the symbols. ZF equalization restores the orthogonality of the Walsh-Hadamard channel codes on the downlink but often suffers from noise gain because certain channel conditions (no common zeros) are not met; MMSE restores orthogonality only approximately but avoids excessive noise gain. We compare MMSE and ZF to the traditional matched filter (also known as the RAKE receiver). Our formulation generalizes to the multi-channel case, as might be derived from multiple antennas and/or over-sampling with respect to the chip rate. The optimal symbol-level MMSE equalizer is derived and slightly outperforms the chip-level equalizer, but at greater computational cost. An MMSE soft hand-off receiver is derived and simulated. Average BER for a class of multipath channels is presented under varying operating conditions of single-cell and edge-of-cell, coded and uncoded BPSK data symbols, and uncoded 16-QAM. These simulations indicate large performance gains compared to the RAKE receiver, especially when the cell is fully loaded with users. Bit error rate (BER) performance for the chip-level equalizers is well predicted by approximate SINR expressions and a Gaussian interference assumption.

Keywords: equalization, smart antennas, wireless communications, space-time processing, spread spectrum, RAKE receiver
1. Introduction

*This research was supported by the Air Force Office of Scientific Research under grant no. F49620-00-1-0127 and the Texas Instruments DSP University Research Program.

We further investigate the properties and performance of linear equalizers for receivers designed specifically for the CDMA downlink. The system under consideration uses orthogonal Walsh-Hadamard channel codes and a base-station dependent scrambling code or "long code," similar to the CDMA standard IS-95 [1] and the proposed standard cdma2000 [2]. The channel is frequency-selective multipath fading with delay-spread
equal to a significant fraction of the symbol period. We examine BPSK and 16-QAM symbols with QPSK spreading; however, the results herein also apply to systems with symbols and spreading drawn from other constellations. Mobile receivers in current CDMA cellular systems employ a matched filter (also known as a RAKE receiver) which combines the energy from multiple paths across time and space (if spatial or polarization diversity is employed). The matched filter is ideal when there are no interfering users [3]. On the CDMA forward link, however, many other users' signals are transmitted through the same channel, although with different spreading codes. The current CDMA standard
(IS-95 [1]) and standards proposals (cdma2000 [2] and UMTS W-CDMA [4]) for third generation (3G) systems all employ orthogonal Walsh-Hadamard channel codes on the downlink. This means that in a flat-faded situation, upon despreading with a correctly synchronized spreading code of the desired user, all interference from other same-cell users is completely eliminated. Major problems arise when the fading is not flat, which is often the case in urban wireless systems at the high chip rates specified in 3G systems. Major problems also arise when the user is near the edge of a cell and receiving significant out-of-cell interference, regardless of whether the fading is flat or not. In this paper we derive the MMSE chip estimator for the two base-station case. The results can be easily extended to more base-stations; in practice at most three base-stations will be received. We simulate equalizers for a class of frequency-selective fading channels, both near one base-station, so that the out-of-cell interference is negligible, and mid-way between two base-stations, where the mobile experiences significant out-of-cell interference. For the edge-of-cell case, we derive and analyze a "soft hand-off" mode in which the desired user's data is transmitted simultaneously from two base-stations, as well as the "typical" mode in which the second base-station is considered interference. We assess and compare the cases of one and two receive antennas. Our results indicate that MMSE significantly out-performs RAKE, and that two antennas significantly out-perform a single receive antenna with oversampling. These results hold both for soft hand-off and normal operation although, as expected, soft hand-off enhances performance. One fundamental question this work tries to address is whether adding a second antenna at the mobile is worth the associated costs and difficulties.
In future cellular systems, the physical link from base-station to handset or other mobile data terminal will be a major bottleneck. Employing spatial diversity through multiple antennas at the mobile-station can help eliminate this bottleneck. There is great hesitation to include multiple antennas at the mobile-station because of the extreme pressure to keep handsets cheap and low power. However, bandwidth is also costly, and at some point may be more expensive than multiple-antenna hardware. Our simulations show the potential a multiple antenna equalizer has for reducing interference and hence increasing system performance, range and capacity. Broadband wireless access will require exponential increases in system performance, so it is only a matter of time before the idea of two or more antennas at the mobile-station is embraced and put into practice.

Some relevant papers on linear chip equalizers that restore orthogonality of the Walsh-Hadamard channel codes and hence suppress multiple-access interference (MAI) are [5-15]. Of these, [5, 6, 8, 9] consider a single antenna with oversampling, while the others address the multi-channel case (antenna arrays). The interference from other base-stations is addressed only in Ghauri and Slock [7], Frank and Visotsky [6], and later by us in [13, 15]. Of those listed, soft hand-off mode has been considered only in [13, 15]. References [9] and [10] simulate the use of OVSF channel codes, in which some users are assigned multiple Walsh-Hadamard channel codes to transmit data at lower spreading factors. In [12] and [16] we argue that zero-forcing (ZF) equalization in this CDMA downlink context, even in the multi-antenna case, suffers from poor channel conditioning for sparse channels; this result is confirmed in the present work. In [15] we present results for convolutionally encoded BPSK and uncoded 16-QAM constellations. In this paper, as in [6-8], the channel and noise power are assumed known (i.e., channel estimation error is neglected), as is time synchronization with the long scrambling code. However, as suggested in [11], where the channel is blindly estimated via the "cross-relation" of Xu et al. [17], channel estimation is extremely important and its failure leads to disastrous results for equalization. Using the exact channel in simulation and analysis leads to an informative upper bound on the performance of these methods, but must be understood as such. Reference [8] takes a very similar approach to ours but with a different channel model: 2- or 4-path Rayleigh with much smaller delay spread, as would be found in indoor environments; also, their equalizers are single antenna with 4 times oversampling.
Frank and Visotsky [6] consider adaptive equalization based on the pilot channel, which addresses both the unknown channel and its time variation. Reference [18] proposes a subspace method for blind channel identification specifically for the CDMA downlink with long scrambling codes; however, it assumes the symbol interval is longer than the delay spread, which may not always be true for 3G CDMA systems. Our assumption that the channel is unchanging may hold only over a short time interval. For adaptive versions of linear chip equalizers for the CDMA downlink, see [6] and [9]
and references in [8]. Note that for the equalizers studied herein, as well as for blind channel identification and adaptive equalization based on the pilot symbols as proposed elsewhere, synchronization with the long code is required. As in [13] and [15], we derive performance measures conditioned on the channel in the form of receiver SINR; from this a BER estimate is obtained based on a Gaussian approximation. This approximation is shown to be a highly accurate prediction of the uncoded BER. References [6] and [7] also present performance analysis in the form of SINR expressions for the multiple base-station case. Linear multiuser detection for CDMA systems is treated fairly thoroughly in Verdú's text [3] and compared to the optimal multi-user detector. The optimal detector is non-linear, with complexity that increases exponentially with the number of users and the delay spread. Our work may be considered an investigation into several sub-optimal multi-user detectors for the specific case of receiving sets of synchronous users through different frequency selective fading channels (from the different base-stations), with orthogonal spreading and a base-station-dependent long code. These linear equalizers are much less computationally complex than the optimal detector, but their performance significantly improves on the standard matched filter (RAKE). Reference [19] presents several ZF and MMSE detectors for the frequency selective channel, although not specifically for the downlink. Note that this work assumes one antenna per base-station transmitter. For some recent work presenting promising space-time coding techniques for the downlink that take advantage of the diversity provided by a transmit array, see [20] (flat-faded) and [21] (frequency-selective).
2. Data and Channel Models
The impulse response for the ith antenna channel, between the kth base-station transmitter and the mobile-station receiver, is

h_i^(k)(t) = Σ_{l=0}^{P−1} α_{l,i}^(k) g(t − τ_l),    (1)

where g(t) is the composite chip waveform (including both the transmit and receive low-pass filters), which we assume has a raised-cosine spectrum, and P is the total number of delayed paths or "multipath arrivals," some of
which may have zero or negligible power without loss of generality. The channel we consider for this work consists of equally spaced paths 0.625 µs apart; this yields a delay spread of at most 10 µs, which is an upper bound for most channels encountered in urban cellular systems. The spacing of 0.625 µs is motivated by the software channel simulator SMRCIM [22], which utilizes this spacing for urban cellular environments. We use this spacing in our simulations; however, the methods presented in this work do not require any specific spacing of the arrival times τ_l, or that they be evenly spaced. The simulations employ a class of channels with 4 equal-power random coefficients, with arrival times picked randomly from the grid {0, 0.625, 1.25, ..., 10} µs; the rest of the coefficients are zero. Once the 4 arrival times have been picked at random and then sorted, the first and last arrival times are forced to be at 0 and the maximum delay spread of 10 µs, respectively. The coefficients are equal-power, complex-normal random variables, independent of each other. The arrival times at antennas 1 and 2 associated with a given base-station are the same, but the coefficients are independent. See Fig. 1 for a plot of a typical channel's impulse response, sampled at the chip rate. The "multi-user chip symbols" for base-station k may be described as

s_k[n] = c_k[n] Σ_{j=1}^{J} a_j b_j[⌊n/N⌋] w_j[n mod N],    (2)
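As a concrete illustration, the random sparse-channel draw described here might be sketched as follows in NumPy (the function name and random-generator handling are our own; the paper specifies only the statistical model):

```python
import numpy as np

def random_sparse_channel(n_paths=4, spacing_us=0.625, max_delay_us=10.0, rng=None):
    """One realization of the sparse multipath channel described above:
    n_paths equal-power complex-normal coefficients on a 0.625-us grid,
    with the first and last arrivals pinned to 0 and the maximum delay."""
    rng = np.random.default_rng() if rng is None else rng
    n_slots = int(round(max_delay_us / spacing_us)) + 1   # 17 grid positions
    # pick n_paths distinct slots, sort, then pin the extremes
    slots = np.sort(rng.choice(n_slots, size=n_paths, replace=False))
    slots[0], slots[-1] = 0, n_slots - 1
    # equal-power, independent complex-normal coefficients
    gains = (rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths)) / np.sqrt(2)
    h = np.zeros(n_slots, dtype=complex)
    h[slots] = gains
    return h
```

Sorting before pinning keeps the intermediate arrivals strictly between the endpoints, so no two paths collide.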
where the various quantities are defined as follows: k is the base-station index; c_k[n] is the base-station dependent long code; a_j is the jth user's gain; b_j[m] is the jth user's bit/symbol sequence; w_j[n], n = 0, ..., N − 1, is the jth user's channel (short) code; N is the length of each channel code (assumed the same for each user); J is the total number of active users; and B is the number of bit/symbols transmitted during a given time window. Note that each base-station uses only one transmit antenna. The signal received at the ith receive antenna (after convolving with a matched-filter impulse response having a square-root raised-cosine spectrum) from base-station k is

x_i^(k)(t) = Σ_n s_k[n] h_i^(k)(t − n T_c),    (3)

where h_i^(k)(t) is as defined in Eq. (1) and T_c is the chip period. The total received signal at the mobile-station is simply the sum of the contributions from the different base-stations plus noise:

x_i(t) = Σ_k x_i^(k)(t) + η_i(t),    (4)
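A minimal sketch of the multi-user chip-symbol construction of Eq. (2), assuming BPSK symbols and a Sylvester-constructed Hadamard matrix (names are illustrative; the standard's actual code assignments are not modeled):

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def multiuser_chips(bits, gains, long_code, N=64):
    """Synchronous multi-user chip sequence for one base-station:
    each user's +/-1 symbols are spread by a distinct Walsh-Hadamard row,
    summed across users, then scrambled by the base-station long code.
    bits: (J, B) array of +/-1 symbols; gains: (J,) user amplitudes;
    long_code: length >= B*N complex scrambling sequence."""
    J, B = bits.shape
    W = hadamard(N)[:J]                              # one Walsh row per user
    # spread each symbol to N chips, sum over users, then flatten in time
    chips = np.einsum('j,jb,jn->bn', gains, bits, W).reshape(B * N)
    return chips * long_code[:B * N]
```

Despreading a flat-channel signal with the correct Walsh row and conjugate long code recovers the symbol exactly, illustrating the orthogonality the equalizers try to restore.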
where η_i(t) is a noise process assumed white and Gaussian prior to coloration by the receiver chip-pulse matched filter. Without loss of generality, only two base-stations are assumed; the subsequent sections may be extended in a straightforward manner to more than two. For the first antenna, we oversample the signal in Eq. (4) at twice the chip rate to obtain two chip-rate polyphase sequences; these discrete-time signals have corresponding impulse responses for base-stations k = 1, 2. Oversampling can help take advantage of the full bandwidth of the raised-cosine pulse; it provides a moderate performance increase for the MMSE equalizer, as seen in the simulations section. For the second antenna, we likewise oversample the signal in Eq. (4) at twice the chip rate, obtaining two more chip-rate sequences with corresponding impulse responses for base-stations k = 1, 2. Let M denote the total number of chip-spaced channels due to receiver antenna diversity and/or oversampling. When two-times oversampling is employed, M = 2 channels if only one antenna is used, while M = 4 channels if both antennas are used. For some simulations, two antennas are employed with no oversampling (in which case M = 2).
3. Estimate of Multi-User Synchronous Sum Signal
In addition to the matched filter, we consider two other types of estimators for the synchronous sum signal s[n] at the chip rate: zero-forcing (ZF) and minimum mean-square error (MMSE). These "chip-level" equalizers are shown in Fig. 2 (single antenna case). The left portion of the figure shows the chip-rate transmitted signals filtered by the channel and added together with thermal noise. The right of the figure shows two methods of processing the received chip sequences: the "chip-level" equalizers above, and the "symbol-level" equalizer below (described in Section 6). The chip-level equalizers estimate the multi-user synchronous sum signal for either base-station 1 or 2, and then correlate with the desired user's channel code times that base-station's long code. To derive the chip-level equalizers (both ZF and MMSE), it is useful to define signal vectors and channel matrices based on the equalizer length. Generally, the longer the equalizer, the better the performance, since the equalizer has more degrees of freedom for minimizing the noise gain (for the ZF equalizer) or the MSE (for the MMSE equalizer). This issue is further addressed in the simulations section. The "recovered" chip signal is an estimate of s_k[n − D] for some delay D, obtained by applying the chip-level equalizer
for base-station k, k = 1, 2. The vectorized received signal is given by

y[n] = H s[n] + η[n],    (9)

where H = [H_1 H_2] is the (tall, block) convolution matrix built from the chip-spaced subchannel impulse responses of the two base-stations, s[n] stacks the corresponding windows of the two base-stations' chip sequences, and η[n] is the noise vector.
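For concreteness, one way to build the per-subchannel convolution matrix and stack the M subchannels is sketched below (row/column ordering conventions are our own):

```python
import numpy as np

def convolution_matrix(h, Ng):
    """Ng x (Ng + L - 1) Toeplitz matrix C such that C @ s equals the
    fully overlapped ('valid') convolution of the chip window s with h."""
    h = np.asarray(h)
    L = len(h)
    C = np.zeros((Ng, Ng + L - 1), dtype=complex)
    for r in range(Ng):
        C[r, r:r + L] = h[::-1]       # one shifted, time-reversed copy per row
    return C

def stacked_channel_matrix(subchannels, Ng):
    """Stack the M per-subchannel matrices vertically; all subchannels
    (antennas and/or polyphases) share the same input chip sequence."""
    return np.vstack([convolution_matrix(h, Ng) for h in subchannels])
```

Each block is tall once M > 1, which is what makes a left inverse (and hence a ZF equalizer) possible when the subchannels share no common zeros.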
3.1. BLUE (ZF) Estimate of Multi-User Synchronous Sum Signal
For multiple channels obtained through oversampling and/or multiple antennas, there are infinitely many zero-forcing equalizers provided the channel matrix is left-invertible. Besides the requirement that the matrix be tall, left-invertibility requires that the subchannels share no common zeros. This is clear since, if they do share a common zero, a signal with a pole at that zero is cancelled by all subchannels, and hence there is a non-zero vector in the null-space of the channel matrix, i.e., it is not left-invertible. Assuming well-conditioned channels, the optimal choice among the infinitely many ZF equalizers is the one that minimizes the noise gain, which is obtained with the "best linear unbiased estimate" (BLUE) equalizer; this equalizer is also presented in [9]. Specifically, to design a ZF equalizer we solve a constrained optimization problem:
In matrix form this becomes: minimize the output noise power subject to the equalizer exactly inverting the channel,

min_f f^H R_nn f   subject to   H^H f = e_D,    (5)

where R_nn is the noise covariance matrix and e_D is all zeroes except for unity in the (D + 1)th position (so that e_D^T s[n] = s[n − D]).
Note that in some of our simulations, only one base-station is considered, and the second base-station is assumed to have negligible power. This case is included in our notation by simply letting the coefficients of base-station 2's channel go to zero.
The objective and constraint can be combined into a single unconstrained objective function by introducing a Lagrange vector λ:

J(f, λ) = f^H R_nn f + λ^H (H^H f − e_D) + (H^H f − e_D)^H λ.

J is a real function with complex vector arguments; to minimize it, it is useful to employ Brandwood's complex gradient [23]:
The gradient with respect to λ simply yields the constraint. Plugging the necessary form for f into the constraint, we have
from which we solve for the Lagrange vector
The final BLUE solution is thus given by

f_ZF = R_nn^{-1} H (H^H R_nn^{-1} H)^{-1} e_D.    (19)
For the channels we simulated, we sometimes encountered singular or near-singular channel matrices, most frequently when a single antenna was employed with two-times oversampling as opposed to two antennas. To avoid conditioning problems, a small diagonal loading was applied prior to matrix inversion in the simulations. The two-antenna case typically avoids conditioning problems because the separation of the antennas makes it likely that at least one of the antennas is not experiencing a deep fade. The ZF equalizer depends on the particular delay D chosen for the right-hand side and also on the equalizer length. For a fixed equalizer length, the "best choice" equalizer is the one with the smallest noise gain. It is not necessary to compute all possible equalizers corresponding to each of the possible delays. Instead, we note that the noise gain is the (D + 1)th diagonal element of (H^H R_nn^{-1} H)^{-1} (or of the lower-right block if k = 2). Hence our method of computing the equalizer with the best delay is: invert the matrix once, select the delay D corresponding to the smallest diagonal element of the appropriate block, and form the corresponding column of the solution in Eq. (19).
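A sketch of the BLUE (ZF) computation with diagonal loading and the best-delay search (function name, parameter names, and the default loading value are our own choices):

```python
import numpy as np

def blue_zf_equalizer(H, noise_cov=None, loading=1e-10):
    """BLUE (minimum-noise-gain) zero-forcing equalizer for a tall channel
    matrix H, searching over the delay D; a small diagonal loading guards
    against the ill-conditioning noted for near-common-zero channels."""
    if noise_cov is None:
        noise_cov = np.eye(H.shape[0])
    Rinv = np.linalg.inv(noise_cov)
    G = H.conj().T @ Rinv @ H                      # H^H Rnn^-1 H
    Ginv = np.linalg.inv(G + loading * np.eye(G.shape[0]))
    D = int(np.argmin(np.real(np.diag(Ginv))))     # smallest noise gain
    f = Rinv @ H @ Ginv[:, D]                      # BLUE column for delay D
    return f, D
```

With white noise and zero loading, the returned filter satisfies the ZF constraint f^H H = e_D^T exactly (up to numerics).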
3.2. MMSE Estimate of Multi-User Synchronous Sum Signal
The MMSE estimate of the synchronous sum signal in Eq. (2) is f^H y[n], where f minimizes the MMSE criterion:
where e_D is all zeroes except for unity in the (D + 1)th position (so that e_D^T s[n] = s[n − D]). In contrast to MMSE symbol estimates, we here estimate the chip-rate signal as transmitted from the desired base-station, and follow this with correlation and summation against the long code times channel code to obtain the symbol. This procedure works by approximately restoring the orthogonality of the Walsh-Hadamard codes, while allowing a residual amount of interference from other users in order to reduce the noise gain relative to an exact ZF solution. We assume unit-energy signals, and furthermore that the chip-level symbols are independent and identically distributed. This is the case if the base-station dependent long codes are treated as i.i.d. sequences, a very good assumption in practice. The equalizer which attains the minimum is

f_MMSE = (H H^H + R_nn)^{-1} H e_D.    (21)
Note this is the standard solution to the Wiener-Hopf equations, f = R_yy^{-1} r_ys, where

R_yy = E{y[n] y[n]^H} = H H^H + R_nn

and

r_ys = E{y[n] s*[n − D]} = H e_D.
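The MMSE design, including the search over the delay D discussed below, might be sketched as follows (a hypothetical helper assuming unit-power i.i.d. chips):

```python
import numpy as np

def mmse_chip_equalizer(H, noise_cov):
    """Chip-level MMSE equalizer f = (H H^H + R_nn)^{-1} H e_D for
    unit-power i.i.d. chips, choosing the delay D of smallest MSE."""
    Ryy = H @ H.conj().T + noise_cov
    F = np.linalg.solve(Ryy, H)          # column D is Ryy^-1 h_D
    # MSE(D) = 1 - h_D^H Ryy^-1 h_D, evaluated for every candidate delay
    mse = 1.0 - np.real(np.sum(np.conj(H) * F, axis=0))
    D = int(np.argmin(mse))
    return F[:, D], D
```

A single factorization of R_yy yields every candidate delay at once, mirroring the one-inversion delay search described in the text; at very low noise the result approaches the ZF solution.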
The minimum MSE for each delay is obtained from the diagonal of the appropriate block of the inverted matrix (the upper-left block for k = 1 or the lower-right block for k = 2); the equalizer is the column corresponding to the smallest such element, multiplied by the matrix as in Eq. (21).
The MMSE equalizer is also a function of the delay D. The MMSE may be computed for each D with only one matrix inversion (which has to be done to form the equalizer anyway). Once the D yielding the smallest MMSE is determined, the corresponding equalizer may be computed without further matrix inversion or system solving. Our simulation experience has found that optimizing over D led
to a slight improvement in average BER performance over simply fixing D at The improvement was less than that experienced by the BLUE equalizer, implying the BLUE equalizer is more sensitive to choice of delay.
3.3. Asymptotic Behaviour of MMSE

In the matrix inversion lemma for matrices of the form (A + BCD)^{-1} (see e.g. [24]), let A = R_nn, B = H, C = I and D = H^H. From this we obtain

(H H^H + R_nn)^{-1} = R_nn^{-1} − R_nn^{-1} H (I + H^H R_nn^{-1} H)^{-1} H^H R_nn^{-1}.

Recognizing the resulting identity, the MMSE equalizer of Eq. (21) is equivalent to

f_MMSE = R_nn^{-1} H (I + H^H R_nn^{-1} H)^{-1} e_D.

This last expression shows that the MMSE equalizer gives us the best of both worlds. At low SNR, the MMSE equalizer acts like a "pre-whitened" RAKE receiver, and therefore benefits from the diversity gains of the multipath-incorporating matched filter. At high SNR, the equalizer vector acts like the BLUE (ZF) equalizer of Eq. (19) and hence can completely eliminate the MAI, because the orthogonality of the channel codes is restored. Note however that for channels with close-to-common zeroes, the performance of the ZF equalizer does not approach that of the MMSE at high SNR: because of the instability incurred in inverting the ill-conditioned matrix, the ZF equalizer performs much worse than the MMSE (as seen in the simulations).

3.4. Performance Analysis

To obtain the symbol estimate for the chip-spaced equalizers, we assemble the chip signal estimate into a length-N vector and form its inner product with the length-N channel code times the appropriate portion of the base-station long code. Here and for the remainder of this section, the chip index n is a specific function of the symbol index m and the delay D, and the correlating vector is the channel code of the kth base-station's jth user times the long code of that base-station. The composite channel formed by the convolution of channel and equalizer, and summed together, is a time-invariant, SISO (single-input single-output) FIR system whose length is the channel length plus the equalizer length minus one, in chips (recall that L is the channel length). One such composite impulse response arises for the channel from each base-station through the equalizer "tuned" to the kth base-station's channel. All of the composite responses may be decomposed into a delay-D term plus a residual impulse response, or "ISI," contribution. For the zero-forcing criterion, the ISI portion is zero.
Assume for the moment that we seek to estimate the symbol from user 1, transmitted from base-station k = 1. The symbol estimate in Eq. (30) then decomposes into a "signal" term plus an "interference plus noise" term, where the interference involves the convolution matrix of the composite impulse response and the post-equalization noise vector, and a selection matrix with a single non-zero diagonal picks out exactly the portion corresponding to the mth symbol. Here we have used the i.i.d. property of the long code (the QPSK long code gives rise to the factor of 2 in Eq. (34)); also, due to the orthogonality of the Walsh-Hadamard channel codes, all same-cell users on the direct path are cancelled. Note that in practice, with BPSK symbols, the constant multiplying the symbol is real, while the noise and interference term is complex; the power of the real part of the interference plus noise is simply half the power of the complex term. The SINR follows as the ratio of the desired-signal power to this interference-plus-noise power. The expectation in the denominator can be shown to contain a weighted sum of products of the long code evaluated at four different indices, and reduces to the residual ISI power plus the post-equalization noise power.

The hard decision on the symbol is taken to be 1 if the real part of the estimate is positive and −1 otherwise. Assuming the real part of the interference is Gaussian, by a central-limit argument with a large number of interferers, the bit error rate (BER) for BPSK information symbols is

P_b ≈ Q(√SINR),    (42)

where Q(·) is the standard Gaussian tail probability. This approximation has been empirically determined to be valid even for as few as 8 users, where the Gaussian assumption might be tenuous (see Fig. 11 in the Simulations section). Note that this BER is a valid approximation for MMSE, RAKE, and any other linear equalizer, and is exact for ZF equalizers.
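Under the stated assumptions, the SINR-to-BER mapping can be sketched as follows (scale factors such as the spreading gain and the factor of 2 from the QPSK long code are omitted for clarity; this is not the paper's exact expression):

```python
import math

def post_equalizer_sinr(g, D, noise_power):
    """Chip-level SINR sketch: desired tap g[D] against the residual ISI
    taps plus post-equalization noise power (spreading-gain and long-code
    scale factors omitted)."""
    desired = abs(g[D]) ** 2
    isi = sum(abs(c) ** 2 for i, c in enumerate(g) if i != D)
    return desired / (isi + noise_power)

def ber_bpsk(sinr):
    """BER under the Gaussian interference assumption, Q(sqrt(SINR)),
    computed via the complementary error function."""
    return 0.5 * math.erfc(math.sqrt(sinr / 2.0))
```

For a zero-forcing composite response the ISI sum vanishes and the expression reduces to the noise-only BPSK error rate, consistent with the approximation being exact for ZF.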
4. Soft Hand-Off Mode
For soft hand-off mode, the desired user’s symbols are modulated onto one of the channel codes at each base-station. At the receiver, two equalizers are designed, one for each base-station: we attempt to estimate the multi-user synchronous sum signal from base-station one, AND the multi-user synchronous sum signal from base-station two, via individual equalizers. The output of each of the two chip-level equalizers is correlated with the desired user’s channel code times the corresponding base-station’s long code (see Fig. 2), yielding two symbol estimates. These two signal estimates are optimally combined in the linear MMSE sense for the soft-hand-off mode. For the MMSE equalizer, the total delay of the signal, D, through both channel and equalizer, was chosen to minimize the MSE of the equalizer; this was done for both equalizers.
For deriving the combining coefficients, we have two bit estimates of the form z_k = μ_k b + v_k, k = 1, 2.
We seek combining coefficients w_1, w_2 in order to minimize E|b − w_1 z_1 − w_2 z_2|^2. Assuming the μ_k are known real constants and the v_k are zero-mean independent random variables with known powers σ_k^2, the weights are given by

w_k = (μ_k/σ_k^2) / (1 + μ_1^2/σ_1^2 + μ_2^2/σ_2^2),   k = 1, 2.
While technically v_1 and v_2 are not independent, with many users and i.i.d. scrambling codes their correlation will be very small and may be ignored in practice. Note that this soft hand-off analysis could also be applied to the symbol-level equalizer of Section 6, as well as to the chip-level equalizers (MMSE, RAKE, and ZF).
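A sketch of the linear-MMSE combining-weight computation for the two bit estimates, written in matrix form (names are illustrative; it reduces to scalar maximal-ratio-style weights under the independence assumption):

```python
import numpy as np

def soft_handoff_weights(mu, var):
    """Linear-MMSE weights for combining estimates z_k = mu_k * b + v_k,
    with independent zero-mean noises of power var_k:
    w = (mu mu^T + diag(var))^{-1} mu (a sketch)."""
    mu = np.asarray(mu, dtype=float)
    R = np.outer(mu, mu) + np.diag(var)   # E{z z^T} for unit-power b
    return np.linalg.solve(R, mu)         # solves R w = E{z b}
```

The leg with the stronger gain and smaller noise power automatically receives the larger weight, which is the intended behavior of the soft hand-off combiner.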
5. RAKE Receiver

The above performance analysis and soft hand-off receiver apply also to the RAKE receiver, which is simply a multipath-incorporating matched filter. In particular, the RAKE can be viewed as a chip-spaced filter matched to the channel, followed by correlation with the long code times channel code (see Fig. 2). In practice these operations are normally reversed, but the reversal is allowed under short-time LTI assumptions. Specifically, to apply the above results to the RAKE receiver, the equalizer for each of the M subchannels is set to the time-reversed complex conjugate of that subchannel's impulse response, n = 0, ..., L − 1, i = 1, ..., M.

6. Symbol-Level Equalizer

In this section we present what we call the "symbol-level" MMSE estimator. This estimator depends on the user index and symbol index, and hence varies from symbol to symbol. The FIR estimator that we derive here is a simplified version of that presented in [25]; in our case, all the channels and delays from a given base-station are the same. (The equalizer is much more computationally complex and complicated when the different users travel through different channels with different delays.) The conclusions reached in that paper apply equally well here, namely that FIR MMSE equalization always performs at least as well as the "coherent combiner" (that is, the RAKE receiver). This type of symbol-level receiver has also been presented in [19], although again not specifically for the CDMA downlink. The symbol-level equalizer differs from the chip-level equalizer in that the base-station and Walsh-Hadamard codes do not appear explicitly in the block diagram (see Fig. 2); instead, the codes become incorporated into the equalizer itself. To derive the equalizer, we first define the bit sequence upsampled by the spreading factor, and estimate it directly by finding the filter that minimizes the MSE, where the minimization is done only at chip instants n for which n − D falls on a symbol boundary. As in the chip-level case, y[n] is given by Eq. (9). Setting the complex gradient to zero, the MSE is minimized by the standard solution to the Wiener-Hopf equations, with the appropriate covariance matrix and cross-correlation vector.
We now proceed to derive expressions for the covariance matrix and cross-correlation vector. Using Eq. (11),
We assume here that the desired user is transmitted only by base-station k. If we were to also assume, as in the chip-level case, that the long code is i.i.d., the right-hand side would vanish because the long code and the symbol are independent and the long code has zero mean. Instead, we assume here that the base-station and Walsh-Hadamard codes are deterministic and known, so that the only random elements in s[n] are the transmitted bits. Under these assumptions, the cross-correlations between distinct symbols vanish for any n and m. The (i, j)th elements of the symbol correlation matrix can then be evaluated for i = j and for i ≠ j; a bound on the off-diagonal magnitude follows.
This bound is plotted in Fig. 3. If we assume that the Walsh codes are chosen randomly, it can be shown that the off-diagonal element is given by
where Y is a hypergeometric random variable. The off-diagonal elements which are not identically zero therefore have zero mean and a very small variance, as shown in the plot in Fig. 3; for nearly all parameter values the variance is clearly quite small. So in all cases we may well approximate the correlation matrix by I in Eq. (47), yielding
We will see through simulation that this approximation works quite well when compared to the "exact" equalizer constructed with the time-varying correlation matrix. The ith element of the cross-correlation vector is
where
with
With fixed m and n, note that these are two different rows of the Hadamard matrix. The element-by-element (Schur) product of two such rows is also a row of the Hadamard matrix, containing 1's and −1's. So
Therefore, when
and
With D satisfying the synchronization condition, the entire Walsh code for the desired user appears in the observation window, and
While the equalizer varies from symbol to symbol, due to variation in both the scrambling code and the symbol timing, by approximating the symbol correlation matrix by I the variation is confined to the cross-correlation vector.
7. Simulations
A wideband CDMA forward link was simulated, similar to one of the options in the US cdma2000 proposal [2]. The spreading factor is 64 chips per bit. The chip rate is 3.6864 MHz, three times that of IS-95. The data symbols for each user are BPSK unless otherwise stated, and are spread with a length-64 Walsh-Hadamard function. The signals for all the users are of equal power and summed synchronously. The sum signal is scrambled with a multiplicative QPSK spreading sequence ("scrambling code") of length 32768, similar to the IS-95 standard; the offset into this code is determined by the base-station and is the same for all users of a given base-station. The signals were sampled at eight times the chip rate in order to perform accurate matched filtering with the square-root chip pulse filter. The channel was the exact continuous-time convolution of a square-root raised-cosine (β = 0.22) pulse with the discrete multipath impulse response; this was then sampled at eight times the chip rate. The noise was white at the different antennas, also generated at eight times the chip rate. The sum of the contributions from the two base-stations, plus the noise, was filtered with a chip matched filter, also at eight times the chip rate; this gave the proper noise coloration, manifested as correlation between different polyphase channels from the same antenna. The different polyphase channels were formed by sub-sampling the eight-times-oversampled received signal at the chip rate; the 0th polyphase started at samples 0, 8, 16, etc., while the 1st polyphase started at samples 4, 12, 20, etc. The square-root raised-cosine pulse, used both in signal generation and at the receiver, was truncated at 5 chip intervals to the right and left of the origin. The BER results are averaged over 500 different channels and among the different users for varying SNRs. Theoretical BER results based on Eq. (42) in Section 3.4 are shown. The channels were generated according to the model presented in Section 2. For the two base-station case, the channels are scaled so that the total energy from each of the two base-stations is equal at the receiver.
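The polyphase sub-sampling described here can be sketched as follows (phases 0 and 4 of an 8x-oversampled stream give the two half-chip-offset chip-rate channels; the helper name is our own):

```python
import numpy as np

def polyphase_split(x, oversample=8, phases=(0, 4)):
    """Split an oversampled received stream into chip-rate polyphase
    channels; phases (0, 4) reproduce the 0th polyphase (samples 0, 8,
    16, ...) and 1st polyphase (samples 4, 12, 20, ...) described above."""
    x = np.asarray(x)
    return [x[p::oversample] for p in phases]
```

Each returned stream runs at the chip rate; stacking them (and those of a second antenna) yields the M chip-spaced channels used by the equalizers.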
In the one base-station case, the channel from the second base-station is set to zero. "SNR" is defined to be the ratio of the sum of the average powers of the received signals from both base-stations to the average noise power, after chip-matched filtering. Note this definition of SNR is more amenable to the soft hand-off mode, since the second base-station is considered "signal energy" instead of "interference energy." "SNR per user per symbol" is SNR multiplied by the spreading factor and divided by the number of users, which for 64 users is the same as the SNR. For ZF and MMSE, the total delay of the signal, D, through both channel and equalizer, was chosen to minimize the MSE of the equalizer.

7.1. One Base-Station

We first present results for a single base-station, in which the mobile unit is near the base and out-of-cell interference is negligible. Figure 4 shows the average BER performance for a two-antenna system with no oversampling. All 64 users are active and of equal power. The MMSE and ZF equalizers are compared with the RAKE receiver. Equalizers of length equal to the channel length (including tails of the chip waveform), and twice that length, were simulated. From the curves of Fig. 4 we see that the RAKE receiver benefits from diversity gains at lower SNR, while the ZF equalizer benefits from the orthogonality of codes at moderate to high SNR. We observe that
the RAKE receiver's average performance saturates at high SNR, so that BERs below .01 are not possible no matter how little noise is present; this is due to the interference that the matched filter allows to pass in this frequency selective channel. The average performance of the MMSE is much better than that of RAKE or ZF. Both MMSE and ZF benefit from longer equalizers. Similar observations with respect to the performance of RAKE versus that of MMSE have been reported elsewhere in the literature for short, periodic spreading codes and other channel and synchronization assumptions (see, for example, Fig. 6.7 of [3]). As expected, the MMSE equalizer approaches the RAKE at low SNR. However, the MMSE does not approach the ZF at high SNR. This may be partially explained by the next figure, Fig. 5, which shows that the ZF equalizer has a large degree of variability in performance across the class of channels simulated. The worst channels for the ZF equalizer have performance so bad that the mean BER is dominated by these bad channels at high SNR. These few bad channels correspond to a channel matrix that is very ill-conditioned, i.e., to near-common zeros, and hence it cannot be said that the MMSE equalizer is approximately a ZF equalizer in this case at high SNR. For a large class of channels, the ZF equalizer exhibits a BER that falls off more like the MMSE; the median BER (the 50th percentile) of the ZF equalizer is within about 3 dB of the MMSE, whereas the mean BER is 10 dB or more worse for ZF than MMSE at high SNR. In contrast, the MMSE exhibits less variability in performance across the different channels (Fig. 6). Given the poor average performance of the ZF equalizer, we will not consider its use for the remainder of the simulations. Note that RAKE suffers less from MAI with fewer users (and hence has a lower BER floor), while ZF is the same no matter how many users are active. This
is clearly seen in the plots for 16 and 32 users in [12], which are not included here. The MMSE degrades slightly as the number of users in the cell is increased; however, it still offers a substantial improvement over the conventional RAKE receiver. This is shown for MMSE versus RAKE in the 8-user plot of Fig. 13. This validates the use of our i.i.d. sequence assumption for the scrambling code, along with the MMSE criterion for estimating the synchronous sum signal.

7.2. Two Base-Stations

For both the RAKE and MMSE, two chip-level equalizers are designed, one for each base-station, each restoring the multi-user synchronous sum signal from that base-station. The output of each of the two chip-level equalizers is correlated with the desired user’s channel code times the corresponding base-station’s long code (see Fig. 2), yielding two symbol estimates. These two estimates are weighted and summed in the soft hand-off mode, as developed in Section 4. Simulations were performed both for “saturated cells,” that is, with all 64 possible channel codes active in each cell, and for lightly loaded cells with 8 channel codes active in each cell. The MMSE equalizer length was set equal to the channel length of 57 chips. Two antennas were employed with two times oversampling per antenna, for a total of M = 4 chip-spaced channels. The results of the simulations are plotted in Fig. 7 for 8 users per cell, and in Fig. 8 for 64 users per cell. Note that in both cases the RAKE receiver’s performance flattens out at high SNR due to the MAI. This effect is more severe when many users are present; in fact, a minimum BER of only .01 to .1 is possible with the RAKE receiver when all 64 channel codes are active, regardless of how high the SNR.
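To make the chip-level comparison concrete, the following sketch builds a channel convolution matrix and forms both a ZF and an MMSE chip estimator for a short synthetic channel. All lengths, the noise variance, and the crude delay choice are illustrative assumptions, not the paper's exact setup; the final check reflects the fact that the MMSE solution can never have larger output MSE than the ZF solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_matrix(h, L):
    # (L + len(h) - 1) x L convolution matrix: y = H @ x for L input chips
    N = L + len(h) - 1
    H = np.zeros((N, L), dtype=complex)
    for i, hi in enumerate(h):
        H[np.arange(L) + i, np.arange(L)] = hi
    return H

# Hypothetical sparse chip-spaced channel (the paper's channels span 57 chips)
h = np.zeros(8, dtype=complex)
h[[0, 3, 7]] = rng.standard_normal(3) + 1j * rng.standard_normal(3)

L = 16                          # equalizer/input length (illustrative)
H = conv_matrix(h, L)
D = int(np.argmax(np.abs(h)))   # crude delay choice; the paper minimizes MSE over D
e_D = np.zeros(L); e_D[D] = 1.0

sigma2 = 0.1                    # post chip-matched-filter noise variance (assumed)
Ry = H @ H.conj().T + sigma2 * np.eye(H.shape[0])
p = H @ e_D

f_mmse = np.linalg.solve(Ry, p)            # MMSE chip estimator
f_zf = np.linalg.pinv(H).conj().T @ e_D    # ZF row (H is tall, full column rank)

def mse(f):
    # output MSE of linear equalizer f for estimating chip D (unit chip power)
    return float(np.real(f.conj() @ Ry @ f - 2 * np.real(f.conj() @ p) + 1))

print(mse(f_mmse) <= mse(f_zf))  # → True: MMSE is optimal among linear equalizers
```

The last line is the essence of the RAKE/ZF/MMSE ranking in this section: ZF is one particular linear equalizer, so its MSE (and hence, roughly, its BER) is bounded below by the MMSE solution, with the gap largest for ill-conditioned channels.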
Downlink Specific Linear Equalization
Figure 11 compares the simulated BER with the theoretical approximation based on the Gaussian assumption of Eq. (42) and the derived SINR expression. The figure illustrates that, even for only 8 users, where the Gaussian assumption might not hold very well, theory and simulation match very well. The simulations ran 100 bits through each channel at each SNR point and for each user (for a total of 6400 bits per channel times 500 channels = 3.2 million bits for 64 users, or 0.4 million for 8 users). The plot also shows that the MMSE equalizer degrades much less dramatically than the RAKE as the number of active channel codes increases. The MMSE significantly outperforms the RAKE receiver. To illustrate this more clearly, Figs. 9 and 10 plot the difference in SNR between the RAKE and MMSE receivers as a function of target uncoded BER. For both normal and soft hand-off modes of operation, the RAKE requires much more power than the MMSE receiver. This is more pronounced when there are more users, and when soft hand-off is unavailable. MMSE equalization allows operation in SNR regions that would be impossible with RAKE receivers, especially when a large number of channel codes is active relative to the spreading factor.
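A hedged illustration of this style of Gaussian BER approximation (the exact form of Eq. (42) is defined earlier in the paper; here we assume the common convention SINR = (decision-statistic mean)² / variance for a BPSK decision):

```python
import math

def q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_gaussian(sinr_db):
    # Uncoded BPSK BER under a Gaussian residual-interference assumption:
    # BER ~ Q(sqrt(SINR)), SINR = (decision-statistic mean)^2 / variance
    return q(math.sqrt(10.0 ** (sinr_db / 10.0)))

for s_db in (0, 5, 10):
    print(s_db, ber_gaussian(s_db))
```

Mapping a derived SINR through Q(·) this way is what lets the theory curves in Fig. 11 be drawn without bit-level simulation.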
7.3. Symbol-Level versus Chip-Level: Simulation of Time-Varying
To analyze the performance of the symbol-level equalizer presented in Section 6, we simulated a receiver near the base-station, so that out-of-cell interference is negligible. Two receive antennas are employed with no oversampling. Two chip-level equalizer lengths were simulated, 57 and 114, while the corresponding symbol-level equalizer length is chosen longer. Since the chip-level equalizer is followed by correlation with
the channel code times the long code, its effective length is the equalizer length plus 63 chips; hence, a fair comparison between the symbol-level and chip-level equalizers sets the symbol-level equalizer length to the chip-level length plus 63 chips. Note that while this is a fair comparison if equalizer length mainly determines the computation (as might be the case for adaptive equalization), a “block” implementation involving actual matrix inversion in Eqs. (21) and (47) would require a larger matrix inverse for the symbol-level than for the chip-level equalizer. Figure 12 presents the results for the fully loaded cell case, i.e., 64 equal power users were simulated. The RAKE receiver is significantly degraded at high SNR by the MAI, which appears in the figure as a BER floor for SNR greater than 10 dB. The chip- and symbol-level equalizers perform much better than the RAKE, and increasing the equalizer length improves performance for both. Comparing the length-57 chip-level equalizer to the length-120 symbol-level equalizer, we observe little improvement at low SNR and increasing improvement, up to 2–3 dB, at high SNR. Comparing the length-114 chip-level equalizer to the length-177 symbol-level equalizer also shows an improvement that increases with SNR, but a smaller one than for the shorter equalizers. Note that since all 64 channel codes are present and have equal power, the symbol covariance is proportional to the identity, and the symbol-level MMSE estimate is optimal in the MSE sense. In Fig. 13, once again the out-of-cell interference is assumed negligible. In this simulation only 8 equal power channel codes are active, i.e., the cell is only lightly to moderately loaded, and the RAKE receiver does much better since it experiences less in-cell MAI than for 64 users. For the range of SNR
simulated, the chip-level equalizer does only slightly better than the RAKE receiver. As for the fully loaded cell, the symbol-level equalizer performs better than the chip-level equalizer. For comparison, the “optimal” symbol-level equalizer is also shown, which involves a matrix inverse for every symbol (as in Eq. (47)); this equalizer is only slightly better than the symbol-level equalizer presented in this paper, despite its much greater computational complexity: it must compute and invert the covariance for every symbol index m = 0, 1, 2, ..., whereas the proposed symbol-level equalizer need not. This result justifies the assumption/simplification that the symbol covariance is proportional to I, even when only 8 of the 64 channel codes are active.

7.4. Coded BPSK
In Fig. 14 the soft hand-off receiver BER is compared for coded and uncoded signaling. For the coded
system, a standard rate-1/2 convolutional code with constraint length 7 was employed; the generator polynomials were (171, 133) in octal. A Viterbi algorithm with survivor length 100 was used to decode the perturbed code symbols (“soft decisions”) from the chip-level equalizer followed by the decorrelator. The coded SNR was increased by 3 dB to account for the increased signal energy per data bit. 500 channels were generated and 50 information bits (100 BPSK code bits) per channel per user were simulated; the actual BER was measured over all 64 users. As expected, both RAKE and MMSE benefit from the error control coding at high SNR. From the figure we see that the RAKE receiver still exhibits an error floor in the coded case, although at a much lower BER. The MMSE equalizer achieves a much lower BER still, consistent with the uncoded results. For data transmission with a low target BER after coding, the advantage of MMSE equalization over the RAKE receiver is dramatic; in fact, the RAKE cannot achieve this BER. With more powerful (e.g., rate-1/3) coding, the RAKE might achieve it, but at the expense of the data rate.

7.5. Uncoded 16-QAM
Future CDMA systems may employ higher order constellations; see, for example, Qualcomm’s HDR system [26], and [27] for a comparison of 16-QAM and QPSK multiuser receivers in severe multipath. In such systems the MMSE downlink equalizer can significantly enhance performance over the RAKE receiver. Figures 15 and 16 present uncoded 16-QAM BER results. 500 channels were generated and 100 symbols (400 bits) per channel per user simulated; the actual BER results were averaged over all 64 users. There were two antennas, each with two times oversampling, and the equalizer length was 57 chips. The SNR was reduced by a factor of 4 (6 dB) to account for the smaller signal energy per bit. Figure 15 shows the results for a single base-station, in which out-of-cell interference is negligible. The MMSE equalizer significantly outperforms the RAKE; in this case the RAKE BER floor at high SNR is above .1, because 16-QAM suffers more from the residual interference. Figure 16 shows the results for two base-stations in soft hand-off operation. Again, the MMSE outperforms the RAKE, but to a lesser degree, since the second base-station contributes some interference in each of the two sections of the soft hand-off receiver.
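For reference, the textbook Gray-mapped square 16-QAM approximation in AWGN (a standard formula, not the paper's multipath analysis) quantifies the higher-order constellation's sensitivity at a given energy per bit:

```python
import math

def q(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebn0_db):
    # BPSK in AWGN: BER = Q(sqrt(2 * Eb/N0))
    return q(math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)))

def ber_16qam(ebn0_db):
    # Gray-mapped square 16-QAM in AWGN: BER ~ (3/4) * Q(sqrt(0.8 * Eb/N0))
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.75 * q(math.sqrt(0.8 * ebn0))

for e in (4, 8, 12):
    print(e, ber_bpsk(e), ber_16qam(e))  # 16-QAM needs roughly 4 dB more Eb/N0
```

The ~4 dB gap (10·log10(2/0.8)) even in pure AWGN explains why 16-QAM is also much more sensitive to the residual interference left by the RAKE in these simulations.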
7.6. Orthogonal Variable Spreading Factor (OVSF) Codes

The simulation results plotted in Fig. 17 reveal the substantial performance gains of a chip-level MMSE
equalizer over a RAKE receiver in the case where OVSF codes are employed. In this simulation, there were thirty-two active channel codes with 64 chips per bit, plus a single 2 chips per bit channel code. Each of the thirty-two length-64 codes employed is orthogonal to every time-shift of the single length-2 code by an integer multiple of 2 chips. Two antennas were employed with no oversampling; out-of-cell interference was not simulated (only one base-station). The definition of SNR is slightly different for this scenario: SNR is here defined to be the signal-to-noise ratio for a given user PRIOR to the channel. This means that the channel and equalizer gains modify the power of the signal, while the noise power is modified only by the equalizer. For this simulation the channel consisted of 4 equal power paths, each arriving exactly on a chip interval. Each path had independent real and imaginary parts with unit variance, so that each path has a power of 2. Because each of the four paths has the same strength at each of the two antennas, and all paths have at least a chip spacing between them, the RAKE receiver benefits from roughly a 10 log(8 · 2) = 12 dB diversity gain. Note that in practice the achievable diversity gain depends heavily on the actual multipath parameters and on accurate estimation of them. The post-correlation SNR (defined as the SNR times the processing gain) for EACH of the thirty-two 64 chips per bit users (including the 64-times, or 18 dB, processing gain) was held constant at 3 dB. The BER performance of the RAKE receiver, the chip-level MMSE equalizer, and the ZF equalizer for the single 2 chips per bit user, as a function of the post-correlation SNR of that user (including its 3 dB processing gain), is plotted in Fig. 17—see the curves labeled “RAKE 2”, “MMSE 2”, and “ZF 2”, respectively.
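The orthogonality property used here can be verified directly by generating the OVSF code tree (a standard recursion; taking [1, −1] as the single short code is an assumption for illustration):

```python
import numpy as np

def ovsf(n):
    # Generate the 2^n OVSF codes of length 2^n: children of c are [c, c] and [c, -c]
    codes = np.array([[1]])
    for _ in range(n):
        codes = np.vstack([np.hstack([codes, codes]),
                           np.hstack([codes, -codes])])
    return codes

C64 = ovsf(6)                  # 64 codes of length 64
short = np.array([1, -1])      # the single 2-chips-per-bit code (assumed)

# Codes descending from [1, 1]: every adjacent chip pair equals +/-[1, 1]
descend_11 = [c for c in C64 if np.all(c[0::2] == c[1::2])]
assert len(descend_11) == 32   # exactly half the tree remains assignable

# Each such length-64 code is orthogonal to the length-2 code at every
# time shift that is an integer multiple of 2 chips:
ok = all(c[2 * m] * short[0] + c[2 * m + 1] * short[1] == 0
         for c in descend_11 for m in range(32))
print(ok)  # → True
```

This is the tree-structure rule the simulation relies on: assigning the length-2 code [1, −1] removes its descendants, and the remaining thirty-two length-64 codes stay orthogonal to it chip-pair by chip-pair.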
In the 3 to 10 dB SNR range, the MMSE equalizer is observed to provide several orders of magnitude improvement in uncoded BER for the single 2 chips per bit user. Note that the performance of the RAKE receiver for the 2 chips per bit user suffers from inter-symbol interference as well as multipath-induced MAI, since the delay spread is 37 chips, roughly a half-symbol. In addition, Fig. 17 plots the average BER performance of the RAKE receiver, the MMSE equalizer, and the ZF equalizer for EACH of the thirty-two 64 chips per bit users as a function of the post-correlation SNR of the single 2 chips per bit user—see the curves labeled “RAKE 64”, “MMSE 64”, and “ZF 64”, respectively. The BER performance of the ZF equalizer for
each of the thirty-two 64 chips per bit users is observed to remain relatively constant as the SNR of the single 2 chips per bit user is increased, due to the restoration of channel code orthogonality. In contrast, the BER performance of the RAKE receiver for each of the thirty-two 64 chips per bit users is observed to increase dramatically as the SNR of the single 2 chips per bit user is increased, since this increases the level of the MAI affecting each of the thirty-two 64 chips per bit users. As the SNR of the single 2 chips per bit user is increased, the point at which its BER equals that of EACH of the thirty-two 64 chips per bit users occurs at roughly the same SNR for all three methods. Interestingly, this cross-over point occurs when the post-correlation SNR (including the 3 dB processing gain) of the single 2 chips per bit user is equal to the post-correlation SNR (including the 18 dB processing gain) of EACH of the thirty-two 64 chips per bit users: 3 dB. However, the BER at this cross-over point achieved with the chip-level MMSE equalizer is two orders of magnitude lower than the cross-over BER achieved with either the RAKE receiver or the ZF equalizer. These simulation results are a strong testament to the efficacy of MMSE-based chip-level equalization in the case of synchronous CDMA with OVSF channel codes.

7.7. One versus Two Antennas

In this section we compare performance for one and two antenna systems. Two times oversampling was employed, yielding a total of two chip-spaced “virtual channels” in the one antenna case, and four in the two antenna case. SNR is once again according to the original definition. The results of the simulations for soft hand-off mode are plotted in Fig. 18, and without soft
hand-off (normal operation) in Fig. 19. For the MMSE equalizers, two lengths were tried, length 57 (equal to the total channel length including tails) and twice that. The main result is that for this edge-of-cell, soft hand-off situation, two antennas at the mobile-station can dramatically improve the average BER performance over a single antenna. Increasing the equalizer length helps, but does not bring the performance of the single antenna receiver anywhere near that of two antennas. Also, the relationship between RAKE and MMSE, and between the different equalizer lengths, holds whether in soft hand-off mode or in normal mode; soft hand-off simply improves the situation significantly, as can be expected.

The dramatic enhancement in performance in the two antenna case can in part be explained through a linear-algebraic/zero-forcing argument. The channel matrix has dimension M·Le × 2(Le + L − 1), where Le is the per-channel equalizer length and L = 57 is the channel length. With M = 4 channels and a per-channel equalizer length equal to the channel length, the matrix is 4L × (4L − 2), i.e., tall. This allows a left-inverse, or zero-forcing (ZF), solution, implying perfect cancellation of ISI (inter-“chip symbol” interference) and CCI (co-channel interference from the other base-station) in the case of no noise. Of course, in the practical case of noise, we employ the MMSE estimator over the ZF equalizer to avert the high noise gains that arise when the respective 4 channels for either base-station have common or near-common zeros. That is, ZF considerations guide the choice of both the number of channels and the equalizer length per channel, but the MMSE estimator is used for equalization to avert potential high noise gains. Note that as the noise power approaches 0, the MMSE equalizer approaches the ZF equalizer (see Section 3.3) if there are no common zeros amongst the channels.

Continuing the discussion of the two antenna case, it can be shown that the two polyphase channels created from either antenna are nearly linearly dependent in the case of a sparse multipath channel, as encountered in the high-speed link simulated here. Thus, even though the channel matrix is tall with M = 4 channels and a per-channel equalizer length equal to the channel length, thereby allowing a left-inverse and perfect zero-forcing (ZF), in the case of sparse multipath channels the resulting matrix is ill-conditioned. ISI and CCI cancellation can be enhanced by increasing the per-channel equalizer length above the channel length, thereby facilitating inversion of the two nearly common polyphase channels extracted from either antenna. The simulation results reveal that doubling the per-channel equalizer length from L = 57 to 114 yields a substantial performance enhancement. In contrast, in the single antenna case (M = 2), for equalizer lengths L and 2L the channel matrix is wide (2L × (4L − 2) and 4L × (6L − 2), respectively), and no zero-forcing solution exists. Further increases in the per-channel equalizer length above 2L yield diminishing returns, as the use of two polyphase channels extracted from a single antenna does not provide enough degrees of freedom to cancel the multi-user access interference from the other base-station as well as the ISI. If one extracts four polyphase channels from a single antenna, one can theoretically cancel co-channel interference as well as ISI. However, again, in the case of sparse multipath channels, the four polyphase channels from a single antenna are too linearly dependent to achieve this practically, especially in the practical case where the excess bandwidth associated with the chip pulse shaping is less than 50%. This is supported by simulating 500 channels and measuring the average BER for the soft hand-off receiver, as seen in Fig. 20: four times oversampling with a single antenna does not improve the situation for either the RAKE receiver or the length-57 MMSE equalizer, compared to just two times oversampling.
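The dimension count in this argument is easy to check numerically. The sketch below builds the stacked two base-station channel matrix for an assumed channel length L = 57 and shows how near-dependence of the polyphase channels inflates the condition number; the channels here are synthetic, not the paper's SMRCIM channels.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_matrix(h, Leq):
    # Leq x (Leq + len(h) - 1) filtering matrix: one row per equalizer tap position
    H = np.zeros((Leq, Leq + len(h) - 1), dtype=complex)
    for i in range(Leq):
        H[i, i:i + len(h)] = h
    return H

def stacked(ch_bs1, ch_bs2, Leq):
    # M per-channel matrices per base-station, stacked: (M*Leq) x 2*(Leq + L - 1)
    H1 = np.vstack([conv_matrix(h, Leq) for h in ch_bs1])
    H2 = np.vstack([conv_matrix(h, Leq) for h in ch_bs2])
    return np.hstack([H1, H2])

L = 57
def rand_ch():
    return rng.standard_normal(L) + 1j * rng.standard_normal(L)

bs1 = [rand_ch() for _ in range(4)]
bs2 = [rand_ch() for _ in range(4)]

H4 = stacked(bs1, bs2, L)                 # M = 4, Leq = L
assert H4.shape == (4 * L, 4 * L - 2)     # tall: left-inverse (ZF) exists

H2w = stacked(bs1[:2], bs2[:2], L)        # M = 2, Leq = L
assert H2w.shape == (2 * L, 4 * L - 2)    # wide: no ZF solution

# Nearly dependent polyphase channels (sparse-multipath effect) -> ill-conditioning
base1, base2 = rand_ch(), rand_ch()
dep1 = [base1 + 1e-6 * rand_ch() for _ in range(4)]
dep2 = [base2 + 1e-6 * rand_ch() for _ in range(4)]
cond_indep = np.linalg.cond(H4)
cond_dep = np.linalg.cond(stacked(dep1, dep2, L))
print(cond_dep > cond_indep)
```

The shape asserts reproduce the text's 4L × (4L − 2) and 2L × (4L − 2) counts, and the condition-number comparison mirrors the observation that a formally tall matrix can still be nearly uninvertible when the polyphase channels are almost identical.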
8. Conclusion

The results indicate that the recently developed MMSE chip-level equalizers have great potential for increasing CDMA downlink capacity, both under soft hand-off and in normal operation. MMSE equalizers benefit greatly from multiple receive antennas. The SINR and BER analysis shows that it is possible to theoretically predict the uncoded BER performance of 3G cellular systems under certain conditions, given knowledge of the channel or class of channels. The symbol-level equalizer derived here performs better than the chip-level equalizer, though at greater computational cost. In fact, our simulations have shown that even though the simplified symbol-level equalizer is sub-optimal, its performance closely approaches that of the optimal linear equalizer. The approximation that the source covariance is diagonal means that a matrix inverse is required only as often as the channel changes (and not every symbol); hence the computational complexity is much smaller than that of the optimal equalizer.
Acknowledgments

The authors thank Colin Frank and Eugene Visotsky at Motorola Labs for invaluable direction and sharing of ideas. Thanks to Geert Leus of the Katholieke Universiteit Leuven, Belgium, for early pointers to the benefits of MMSE equalization versus ZF and RAKE, even in the case of multiple channels. Also, thanks to Professor Saul Gelfand for use of his Sun HPC-450 and 3500 computers, which were obtained from the Army Research Office under the DURIP program. Thanks to Jeongsoon Park for discussions and resources on multiuser detection.

References

1. Telecommunications Industry Association (TIA), “Mobile Station-Base Station Compatibility Standard for Wideband Spread Spectrum Cellular Systems—ANSI/TIA/EIA-95-B-99,” TIA/EIA Standard, Feb. 1999.
2. Telecommunications Industry Association (TIA), “Physical Layer Standard for cdma2000 Standards for Spread Spectrum Systems—TIA/EIA/IS-2000.2-A,” TIA/EIA Interim Standard, March 2000.
3. S. Verdú, Multiuser Detection, Cambridge, UK: Cambridge University Press, 1998.
4. 3rd Generation Partnership Project, “3GPP TS 25.2-Series (Physical Layer),” Technical Specification, March 2001.
5. A. Klein, “Data Detection Algorithms Specially Designed for the Downlink of CDMA Mobile Radio Systems,” in IEEE 47th Vehicular Technology Conference Proceedings, Phoenix, AZ, 4–7 May 1997, pp. 203–207.
6. C.D. Frank and E. Visotsky, “Adaptive Interference Suppression for Direct-Sequence CDMA Systems with Long Spreading Codes,” in Proceedings 36th Allerton Conf. on Communication, Control, and Computing, Monticello, IL, 23–25 Sept. 1998, pp. 411–420.
7. I. Ghauri and D.T.M. Slock, “Linear Receivers for the DS-CDMA Downlink Exploiting Orthogonality of Spreading Sequences,” in Conf. Rec. 32nd Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 1998.
8. K. Hooli, M. Latva-aho, and M. Juntti, “Multiple Access Interference Suppression with Linear Chip Equalizers in WCDMA Downlink Receivers,” in Proc. Global Telecommunications Conf., Rio de Janeiro, Brazil, 5–9 Dec. 1999, pp. 467–471.
9. S. Werner and J. Lilleberg, “Downlink Channel Decorrelation in CDMA Systems with Long Codes,” in IEEE 49th Vehicular Technology Conference Proceedings, Houston, TX, 16–19 May 1999, vol. 2, pp. 1614–1617.
10. M. Zoltowski and T. Krauss, “Two-Channel Zero Forcing Equalization on CDMA Forward Link: Trade-Offs Between Multi-User Access Interference and Diversity Gains,” in Conf. Rec. 33rd Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, 25–27 Oct. 1999.
11. T. Krauss and M. Zoltowski, “Blind Channel Identification on CDMA Forward Link Based on Dual Antenna Receiver at Handset and Cross-Relation,” in Conf. Rec. 33rd Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, 25–27 Oct. 1999.
12. T. Krauss, M. Zoltowski, and G. Leus, “Simple MMSE Equalizers for CDMA Downlink to Restore Chip Sequence: Comparison to Zero-Forcing and RAKE,” in International Conference on Acoustics, Speech, and Signal Processing, 5–9 June 2000.
13. T.P. Krauss and M.D. Zoltowski, “MMSE Equalization Under Conditions of Soft Hand-Off,” in IEEE Sixth International Symposium on Spread Spectrum Techniques & Applications (ISSSTA 2000), NJIT, New Jersey, 6–8 Sept. 2000.
14. T.P. Krauss, W.J. Hillery, and M.D. Zoltowski, “MMSE Equalization for Forward Link in 3G CDMA: Symbol-Level Versus Chip-Level,” in Proceedings of the 10th IEEE Workshop on Statistical Signal and Array Processing (SSAP 2000), Pocono Manor, PA, 14–16 Aug. 2000.
15. T.P. Krauss and M.D. Zoltowski, “Chip-Level MMSE Equalization at the Edge of the Cell,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC 2000), Chicago, IL, 23–28 Sept. 2000.
16. T. Krauss and M. Zoltowski, “Oversampling Diversity Versus Dual Antenna Diversity for Chip-Level Equalization on CDMA Downlink,” in Proceedings of First IEEE Sensor Array and Multichannel Signal Processing Workshop, Cambridge, MA, 16–17 March 2000.
17. G. Xu, H. Liu, L. Tong, and T. Kailath, “A Least-Squares Approach to Blind Channel Identification,” IEEE Trans. on Signal Processing, vol. 43, 1995, pp. 2982–2993.
18. A.J. Weiss and B. Friedlander, “Channel Estimation for DS-CDMA Downlink with Aperiodic Spreading Codes,” IEEE Transactions on Communications, vol. 47, 1999, pp. 1561–1569.
19. A. Klein, G. Kaleh, and P. Baier, “Zero Forcing and Minimum Mean-Square-Error Equalization for Multiuser Detection in Code-Division Multiple-Access Channels,” IEEE Transactions on Vehicular Technology, vol. 45, 1996, pp. 276–287.
20. A.F. Naguib, N. Seshadri, and A.R. Calderbank, “Increasing Data Rate Over Wireless Channels: Space-Time Coding and Signal Processing for High Data Rate Wireless Communications,” IEEE Signal Processing Magazine, vol. 17, 2000, pp. 76–92.
21. A. Stamoulis, Z. Liu, and G.B. Giannakis, “Space-Time Coded Generalized Multicarrier CDMA with Block-Spreading for Multirate Services,” in Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2000.
22. Wireless Valley Communications, Inc., SMRCIM Plus 4.0 (Simulation of Mobile Radio Channel Impulse Response Models) User’s Manual, 24 Aug. 1999.
23. D.H. Brandwood, “A Complex Gradient Operator and Its Application in Adaptive Array Theory,” IEE Proc., Parts F and H, vol. 130, 1983, pp. 11–16.
24. T. Kailath, Linear Systems, Englewood Cliffs, NJ: Prentice Hall, 1980.
25. H. Liu and M. Zoltowski, “Blind Equalization in Antenna Array CDMA Systems,” IEEE Transactions on Signal Processing, vol. 45, 1997, pp. 161–172.
26. A. Jalali, R. Padovani, and R. Pankaj, “Data Throughput of CDMA-HDR a High Efficiency-High Data Rate Personal Communication Wireless System,” in Proc. of IEEE Vehicular Technology Conference, Tokyo, Japan, 15–18 May 2000.
27. P. Shamain and L.B. Milstein, “Using Higher Order Constellations with Minimum Mean Square Error (MMSE) Receiver for Severe Multipath CDMA Channel,” in Proceedings of Ninth IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Boston, MA, 8–11 Sept. 1998, pp. 1035–1039.
Thomas P. Krauss received a B.S. in electrical and computer engineering in 1989 from the University of Colorado at Boulder, an M.S. degree in electrical engineering in 1993 from Cornell University, Ithaca, NY, and a Ph.D. in electrical engineering in 2000 from Purdue University, West Lafayette, IN. From 1992 to 1997 he developed signal processing software for MATLAB and SIMULINK with The MathWorks, Inc., Natick, MA. His internship experience includes US WEST Advanced Technologies, Boulder, CO, where he developed a real-time speech processing system for the hearing impaired, and Thomson Consumer Electronics, Indianapolis, IN, where he developed and simulated modulation and coding techniques for a nonlinear satellite channel. Dr. Krauss was awarded the 1999 Daniel Noble Fellowship by the IEEE Vehicular Technology Society and Motorola. His research interests are in digital communications and signal processing, and their application to broadband mobile wireless communications using antenna arrays. Since September 2000,
Dr. Krauss has been with Motorola Labs in Schaumburg, IL.
[email protected]

William J. Hillery was born in Dubuque, IA, in 1960. He received the B.S. degree in electrical engineering from Iowa State University, Ames, in 1984 and the M.S. degree in electrical engineering from Stanford University, Stanford, CA, in 1990. He is currently working toward the Ph.D. degree at Purdue University in West Lafayette, IN. In 1984 he joined Hewlett-Packard, Santa Clara, CA, where he designed mixed-signal bipolar integrated circuits. In 1988 he was a Research Fellow at University College London, London, U.K., where he designed a high-speed bipolar digital-to-analog converter. During the period from 1988 to 1998, he worked in several areas at HP: developing device models for HP’s high-speed bipolar integrated circuit processes, designing CMOS ASICs for digital communications systems, and managing the physical CAD tool development for HP’s CMOS integrated circuit processes. His research interests include wireless digital communications, equalization, and adaptive filtering. Mr. Hillery is a member of Tau Beta Pi, Eta Kappa Nu, and Phi Kappa Phi.
[email protected]
Michael D. Zoltowski received both the B.S. and M.S. degrees in Electrical Engineering with highest honors from Drexel University in 1983 and the Ph.D. in Systems Engineering from the University of Pennsylvania in 1986. In Fall 1986, he joined the faculty of Purdue University, where he currently holds the position of Professor of Electrical and Computer Engineering. Dr. Zoltowski was the co-recipient of the IEEE Signal Processing Society’s 1991 Young Author Award (Statistical Signal and Array Processing Area), “The Fred Ellersick MILCOM Award for Best Paper in the Unclassified Technical Program” at the IEEE Military Communications (MILCOM ’98) Conference, a Best Paper Award at the 2000 IEEE International Symposium on Spread Spectrum Techniques and Applications (ISSSTA 2000), and the 2001 IEEE Communications Society’s “Leonard G. Abraham Prize Paper Award in the Field of Communications Systems.” Within the IEEE Signal Processing Society, he has been a member of the Technical Committee for the Statistical Signal and Array Processing Area, and is currently a member of both the Technical Committee for Communications and the Technical Committee on DSP Education. In addition, he is currently a Member-at-Large of the Board of Governors and Secretary of the IEEE Signal Processing Society. He is a Fellow of the IEEE. His present research interests include space-time adaptive processing for all areas of mobile and wireless communications, GPS, and radar.
[email protected]
Journal of VLSI Signal Processing 30, 163–178, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Multipath Delay Estimation for Frequency Hopping Systems*

PRASHANTH HANDE
School of Electrical Engineering, Cornell University, Ithaca, NY 14853, USA

LANG TONG
School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA

ANANTHRAM SWAMI
Army Research Lab, AMSRL-CI-CN, 2800 Powder Mill Rd., Adelphi, MD 20783, USA

Received February 14, 2001; Revised September 5, 2001
Abstract. The multipath delay estimation problem for a slow frequency hopping system is studied. High resolution delay estimation algorithms are proposed by exploiting invariance structures in the data packet. The proposed approach converts the problem of delay estimation using temporally received packets to one of estimating directions-of-arrival in array processing. Two closed-form estimators are developed. The first algorithm is based on the use of a single invariance and applies the ESPRIT algorithm. The second approach utilizes multiple invariances, and enforces the Cayley-Hamilton constraint in the signal subspace. It is shown, via an analysis of acquisition time, that the use of multiple invariances significantly reduces the number of hops required for parameter identifiability. Simulation examples also demonstrate the advantage of exploiting multiple invariances.

Keywords: delay estimation, multiple invariance, eigenstructures

1. Introduction

Among the two major spread spectrum techniques, frequency hopping (FH) techniques have been extensively studied for many years [1]. The hopping period can either span several bits, for Slow Frequency Hopping (SFH), or be a fraction of the bit period, for Fast Frequency Hopping (FFH). The receiver for an FH system is conceptually a bank of filters arranged so that each filter in the bank is responsible for a portion of the total bandwidth [2]. Acquisition, the process of achieving time synchronization, has been the main focus of receiver design for many frequency hopping systems [3–5], with the matched filter or the decorrelator being the usual detection schemes. Conventionally, FH systems are considered narrowband, and the so-called flat-fading model is invoked, thereby largely ignoring the effect of the multipath delays. The characterization of a signal as narrowband is channel dependent and depends upon the coherence bandwidth of the channel. It was shown in [6] that the flat-fading model is not valid for an FH system whose bandwidth is comparable to the coherence bandwidth of the multipath channel. Thus the received signal in an SFH system at high data rates is frequency selective, and estimation of delays becomes important. This paper addresses the delay estimation problem for an SFH system.

*This work was supported in part by the Army Research Office under Grant ARO-DAAB19-00-1-0507 and the Multidisciplinary University Research Initiative (MURI) under the Office of Naval Research Contract N00014-00-1-0564.

When the propagation channel can be modeled by discrete scatterers, the optimal receiver requires the knowledge of multipath delays. However, delay
Hande, Tong and Swami
estimation for an SFH system is nontrivial in many respects. One possible solution is the transmission of training bits. This, unfortunately, requires a multidimensional search, and has no closed-form solution. Although this problem can be circumvented by quantizing the delays into a finite number of levels, the quantization either limits the resolution of the estimator or requires high-dimensional matrix computations. In this paper, we exploit the packet structure of the transmitted data to obtain high-resolution closed-form delay estimators. The estimators are blind in the sense that there are no training bits embedded in the data packet. We assume instead that all packets contain a fixed (although unknown) segment, which may correspond to the packet identifier, synchronization bits, or the address segment. It is this packet structure that enables us to relate the problem of high-resolution delay estimation to high-resolution direction-of-arrival (DOA) estimation in array processing. The key to this connection is the translation of SFH packets to the array elements in the problem of DOA estimation. In this paper, we present two approaches: one based on the celebrated ESPRIT algorithm [7], the other on the more recent SPECC algorithm [9], which appears to be especially suitable for the delay estimation problem because it requires a much smaller number of packets. The paper is organized as follows. The model and the invariance structures are presented in Section 2. The delay estimation algorithm based on ESPRIT, which exploits a single invariance structure, is presented in Section 3. Also included is an analysis of the acquisition time for the ESPRIT approach. In Section 4, a general version of the SPECC algorithm for delay estimation is presented. Simulation results and discussions are given in Section 5.
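To preview the DOA connection in concrete terms, here is a minimal ESPRIT sketch on synthetic data: superimposed complex exponentials (standing in for the delay-induced phase progressions across hops) are resolved from the signal subspace via the shift between two overlapping subarrays. All parameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: P complex exponentials in noise (P "delays", N "hops")
P, N, T = 2, 16, 200
mu = np.array([0.7, 1.9])                      # electrical angles (would encode delays)
A = np.exp(1j * np.outer(np.arange(N), mu))    # N x P Vandermonde "steering" matrix
S = rng.standard_normal((P, T)) + 1j * rng.standard_normal((P, T))
W = 0.05 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = A @ S + W

# Signal subspace = P dominant eigenvectors of the sample covariance
R = X @ X.conj().T / T
_, U = np.linalg.eigh(R)                       # eigenvalues in ascending order
Es = U[:, -P:]

# ESPRIT: the two (N-1)-row subarrays are related by a rotation whose
# eigenvalues are exp(j * mu_k)
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
mu_hat = np.sort(np.angle(np.linalg.eigvals(Phi)))
print(np.allclose(mu_hat, mu, atol=0.05))      # → True
```

The single invariance exploited here (the one-row shift) is what the paper's first estimator uses; the multiple-invariance SPECC approach generalizes it to several shifts at once.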
physical multipath channel is quasi time-invariant, with gains and delays constant across a few hops. In general, delays vary with the carrier frequency, as do the amplitudes. The assumption of hop-independent gains is not required, as we shall see later. A widely used approach, one that often leads to asymptotic optimality, is maximum likelihood detection based on the following minimization
where  is the hop period (or dwell time), and  the estimated symbol vector in the nth hop. This minimization is difficult to accomplish, particularly due to the non-linear dependence of the received signal on the delay parameters. However, the ML minimization problem is tractable if the delays are known, or can be consistently estimated first. Since the dependence on the channel gain parameters is linear, the minimization over  and  can be separated. This motivates us to seek robust algorithms for the estimation of the delay parameters. It should be noted that, without joint optimization, this approach is no longer optimal, and tradeoffs between the simplicity of the algorithm and its performance need to be made. We stress that, because of the nonlinear dependence of  on  the problem of delay estimation for the model in (1) is not trivial even if training is available, i.e.,  is known. The discrete-time version of (1) can be written as
2. The Model

The baseband model of the FH received signal in a discrete multipath environment is of the form
where n is the hop index, is the received signal, is the hop frequency, is the vector of transmitted symbols in the nth hop, is the baseband transmitted signal, and is the white Gaussian noise component. Gain and delay parameters are unknown, and assumed independent of hop index. The hop frequency is known. We assume that the
where, with a sampling interval T that satisfies the Nyquist criterion,
Note that we do not assume a particular form for the modulation. The task at hand is to obtain estimates of the channel delays based on the received signal samples without knowledge of the channel gains or the transmitted bit sequences.
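As a concrete illustration of the discrete-time model, the sketch below simulates the sampled header portion of a single received hop. The delay values match those used later in the simulation section; everything else (hop frequency, sampling interval, rectangular pulses, variable names) is an illustrative assumption of ours, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters; the delay values follow Section 5, the rest is ours.
P = 3                                            # number of multipaths
delays = np.array([0.18e-6, 0.43e-6, 0.90e-6])   # path delays (s)
gains = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
f_hop = 1.9005e9                                 # hop carrier frequency (Hz)
Ts = 0.5e-6                                      # sampling interval (s)
K = 16                                           # header symbols in one hop
snr_db = 20

def tx_signal(symbols, t, Ts):
    """Baseband transmitted signal with rectangular pulses."""
    idx = np.clip(np.floor(t / Ts).astype(int), 0, len(symbols) - 1)
    active = (t >= 0) & (t < len(symbols) * Ts)
    return np.where(active, symbols[idx], 0)

symbols = np.exp(0.5j * np.pi * rng.integers(0, 4, K))  # QPSK header

# Received samples: each path contributes a delayed copy of the signal,
# rotated by the frequency-dependent phase exp(-j*2*pi*f_hop*tau_p).
t = np.arange(K) * Ts
y = np.zeros(K, dtype=complex)
for p in range(P):
    y += (gains[p] * np.exp(-2j * np.pi * f_hop * delays[p])
          * tx_signal(symbols, t - delays[p], Ts))
noise_rms = np.linalg.norm(y) / np.sqrt(K) * 10 ** (-snr_db / 20)
y += noise_rms * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
```

The per-path phase factor exp(−j2π f_hop τ_p), which changes with the hop frequency, is the quantity the subspace algorithms developed in the following sections exploit.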
Multipath Delay Estimation
2.1. The Invariance Structure
The key to the proposed approach is the invariance in the structure of the packet and the received signal. A typical SFH system [11] employs packet transmission with hopping from packet to packet. Some (consecutive) symbols in each packet form the address header or the synchronization bits and are identical in every packet intended for the same recipient. We use these facts to recast the delay estimation problem as one of direction-of-arrival estimation in array processing. We assume that the first2 K symbols are fixed for all hops. This is a realistic assumption in some practical systems. For example, the SFH-based PCS system described in [11] has a total of 16 QPSK symbols, or 32 bits, as sync and guard bits. Similarly, the Bluetooth standard [12, 13] specifies the use of 72 bits as the access code and 54 bits as header information in each packet. The access code is used for synchronization and is the same for all packets originating in the same group (called a piconet in Bluetooth terminology). The header consists of the member address, link flow bits, and other control information. Thus it makes sense to assume that the first few bits of each packet received at a terminal are essentially the same. This implies that
so that
where
Suppose next that a sequence of hop frequencies can be put into a set of (ordered) (M + 1)-tuples such that, for some  and all i,
Note that a particular hop frequency can appear in multiple positions in the M-tuple. For purposes of exposition, let us consider a simple example. Let the hop frequencies be chosen from the integers {1, 2, . . . , 5}, and suppose that we have received packets from the hop sequence
The discrete-time version of the received signal can be written as [cf. Eq. (3)]. There are many ways to form the partitions. If we choose M = 1 and  we then have
where, by exploiting (4), we have suppressed the dependence of  on n and  Equation (5) is essentially in the DOA framework; here, k is the snapshot number and n is the sensor index. We have P sources at bearings  with source waveforms given by   is the sensor displacement. Thus, each hop gives us a set of K snapshots at one sensor (in contrast with the usual DOA problem, where conceptually simultaneous sampling of the array elements gives us one snapshot across all the sensors). The K samples in (5) can be stacked into a row vector as
corresponding to here N = 6. Choosing M = 1 and yields N = 4,
If we choose M = 4 and  (i = 1, . . . , 4), we have
(so that
Let  be the received data (row) vector corresponding to the header part of the packet associated with frequency  with i = 0, . . . , M and j = 1, . . . , N. We
can then form the following N × K matrices
3.1. The ESPRIT Algorithm
Suppose that we partition the set of hop frequencies into only two subsets, i.e., M = 1 in (11). Assume for simplicity that (the general case is discussed at the end of the section). Consider the auto-covariance matrix of the (2N × K) received signal matrix, obtained by stacking together all the received data
where In the DOA framework, we have created (M + 1) possibly overlapping subarrays, each with N “sensors,” and have collected data from K snapshots. If are not all the same, then the different subarrays have different displacements. From (7)–(10), the data matrices defined above impose the following invariance structure
where and are matrices obtained by stacking the noise vectors. Matrices A and defined by
where and are the noise covariance matrices, and is the auto-covariance of the transmitted signal, assumed to have full rank. Since the columns of the S matrix consist of delayed versions of the same signal, the assumption translates to the condition that the various delayed versions of the signal are not fully correlated. To be more precise, we can write
where the expectation is with respect to the gains and the delays. A common assumption in multipath modeling is that the delays are independent of the gains, and that for discrete multipaths the path gains are statistically independent of one another. In the case of zero-mean scattering, we have  contain information about the delays. The task now is to estimate  given that the data matrices are available. This, of course, is exactly the same formulation used in ESPRIT [7] and Multiple Invariance (MI) ESPRIT [8].
which is a diagonal matrix. The signal and noise subspace separation is achieved by considering the eigen-decomposition of as
3. Delay Estimation Via Single Invariance
We first show how the ESPRIT algorithm can be applied to the multipath time-delay estimation problem. Next, we derive a closed-form expression for the average number of hops (sensors!) required to ensure identifiability.
where and represent the signal and noise subspaces respectively. We have assumed that (number of snapshots must be no smaller than the number of sources), and that (number of sensors must be no smaller than
the number of sources); both assumptions are reasonable. Note that the signal space is represented by the eigenvectors corresponding to the P largest eigenvalues of the covariance matrix while the noise space is represented by the rest. The signal space corresponding to the two data sets can be obtained by splitting the signal space matrix as
where  and  are N × P unitary matrices representing the signal subspaces of the two data sets, respectively. With the covariance matrices related as in (15),  and  represent the same subspace, and this subspace is well known to be the same as span{A} [14]. Assuming that the matrix A is full rank, this implies the existence of a P × P full-rank matrix T satisfying
The matrix can be estimated if and are known. In practice, the presence of noise and a finite data set makes it possible for us to only obtain estimates, and of these subspaces. In this case we obtain an estimate of as
By defining B = AT and  we can rewrite the minimization as
Note that the parameters of interest are the diagonal elements of which are the same as the eigenvalues of Denoting the estimates of these eigenvalues by the delay parameters are estimated as
In order to avoid ambiguity of the estimates, we must have  equivalently, we need  The minimization problem in (20) can be solved by assuming either that only one of the two data sets is corrupted by noise or that both data sets are corrupted by noise. The first assumption leads
to the least squares (LS) ESPRIT implementation [15], while the second assumption leads to total least squares (TLS) ESPRIT [16]. We assumed  in the preceding development; in the general case, the matrix  will be replaced by  and the identifiability condition becomes  The frequency spacing can only be an integer multiple of the hop spacing. Since the hop spacing is determined by the signal bandwidth (approximately the reciprocal of the symbol duration), we see that the above approach allows us to resolve only fractional time delays. This is both an advantage and a limitation. It is an advantage because, in conventional approaches, fractional time delays are considered unresolvable; our algorithm exploits invariances in the data and allows us to estimate fractional delays. It is a limitation because it does not allow us to resolve true multipath delays that cause ISI (in the usually understood sense of the term, with delays larger than the symbol period).
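The single-invariance procedure above can be sketched numerically. The fragment below is our own illustration, not the paper's code: it generates synthetic data matrices directly in the two-subarray invariance model with an assumed frequency spacing of 400 kHz, extracts the signal subspace from the stacked sample covariance, and applies LS-ESPRIT, recovering the delays from the eigenvalue phases.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (variable names are ours, not the paper's)
P = 3                                            # number of multipaths
taus = np.array([0.18e-6, 0.43e-6, 0.90e-6])     # true delays (s), < 1/df
df = 400e3                                       # frequency spacing (Hz)
N, K = 8, 64                                     # "sensors" (hop pairs), snapshots
f0 = 1.899e9
freqs = f0 + df * np.arange(N)                   # subarray-0 hop frequencies

# Synthetic data in the invariance model: X1 = A S, X2 = A Phi S
S = rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))
A = np.exp(-2j * np.pi * np.outer(freqs, taus))  # N x P "steering" matrix
Phi = np.diag(np.exp(-2j * np.pi * df * taus))   # rotation between subarrays

sigma = 1e-3
noise = lambda: sigma * (rng.standard_normal((N, K))
                         + 1j * rng.standard_normal((N, K)))
X1 = A @ S + noise()
X2 = A @ Phi @ S + noise()

# Signal subspace: P principal eigenvectors of the stacked sample covariance
X = np.vstack([X1, X2])
w, V = np.linalg.eigh(X @ X.conj().T / K)
Es = V[:, -P:]                                   # eigh sorts eigenvalues ascending
E1, E2 = Es[:N], Es[N:]

# LS-ESPRIT: solve E1 @ Psi ~ E2; delays come from the eigenvalue phases
Psi = np.linalg.lstsq(E1, E2, rcond=None)[0]
lam = np.linalg.eigvals(Psi)
tau_hat = np.sort(np.mod(-np.angle(lam) / (2 * np.pi * df), 1.0 / df))
```

With light noise the sorted estimates land close to the true delays of 0.18, 0.43, and 0.90 µs; the delays are identifiable only modulo 1/Δf = 2.5 µs, which is the fractional-delay limitation discussed above.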
3.2. Acquisition Time for the ESPRIT Approach
An important consideration in the ESPRIT algorithm that has been underplayed until now is the selection of the separation frequency The algorithm requires frequencies separated by a fixed but the received frequencies are pseudo-random. We note that packet reuse is allowed in the algorithm as long as pairs of frequencies differ by a constant For the parameters to be identifiable by ESPRIT, the number of distinct frequency pairs N must be no less than the number of parameters The problem is to pick pairs of received frequencies whose difference occurs the maximum number of times. In other words, we choose so as to minimize the average number of packets one must wait until the identifiability condition for ESPRIT is satisfied. Once again consider the previous example but with a different realization of a random hop sequence
Suppose that the channel is a two-ray multipath channel, and delays and need to be estimated. To apply ESPRIT with M = 1, if we choose we obtain the following set
for which N = 4. The corresponding matrix A is a 4 × 2 matrix with full column rank 2. The identifiability condition for ESPRIT is satisfied after receiving the third packet. If, on the other hand,  is chosen, we then have
Let denote the probability that a occurs exactly i times and b occurs exactly j times. Then we have
where N = 5. This leads to a 5 × 2 matrix A with full column rank 2. However, the identifiability condition for ESPRIT is satisfied only after the fifth packet is received, which means that one has to wait longer. We now analyze the average number of packets one has to wait until the identifiability condition is satisfied. We note that the identifiability condition of ESPRIT is satisfied when the number of distinct pairs in  reaches the number of parameters to be estimated. In the following we compute the average number of distinct pairs in  Since the set of T hop frequencies usually constitutes an arithmetic progression, we let the set G = {1, 2, . . . , T} represent the hop frequencies. The minimum separation between hop frequencies is represented by 1 in this formulation (without loss of generality, since the constant factors can be absorbed into the delays). Consider the received frequency to be a realization of a random sequence. Next, for an integer k, we form the set  from the L-element sequence such that, for all i,
Let  be the number of distinct pairs in  Assume that frequency hops are randomly chosen from G with equal probability. Then there are T – k distinct pairs ({1, k + 1}, {2, k + 2}, and so on) that can possibly be members of  We can then obtain  by calculating, for each possible pair, the probabilities of all trials in which the pair occurs at least once. Since the probability of the occurrence of any pair is the same as that of any other pair, we need to evaluate this trial probability for any one pair and then sum it over all possible T – k pairs. With Pr(·) representing the probability of an event and {a, b} representing any possible pair, we have [Pr({a, b} occurs exactly once) + Pr({a, b} occurs exactly twice) + . . . ]. Each probability can be evaluated by calculating the probability that a occurs i times and b occurs j times.
So the expectation can be written as
This can be shown to evaluate to
This shows that the expected number of pairs differing by k decreases linearly with k, making the choice of k = 1 optimal. For large T, the expected number of distinct pairs that differ by k can be approximated as
As expected, the expected wait time increases as the size of the hopset (T) or the number of invariances (L) increases; it decreases as the difference k increases. For the estimation of channel parameters in the P-multipath case, we require a minimum of P pairs of frequencies for either LS-ESPRIT or TLS-ESPRIT. If we choose k = 1 as the frequency-pair difference, then on average we need to wait for  hops, where
which implies that  For typical values such as T = 75 and P = 3, we need to wait on average for  hops before we have a set of P pairs to which the algorithm can be applied. The waiting time increases further if more than P pairs need to be used. The above analysis brings out some of the drawbacks associated with the ESPRIT algorithm for channel estimation. For large T and M, there are nearly as many pairs that differ by k = 2, 3, . . . as those that differ by k = 1. Since we are exploiting only the k = 1
structure, information provided by invariances associated with k = 2, k = 3, . . . remains unexploited, indicating the possibility of better algorithms. The drawback of the ESPRIT approach arises from the fact that the ESPRIT algorithm is capable of exploiting only a single invariance. The uniform linear variation in the frequencies of received packets, as assumed in our simulations, favors the ESPRIT algorithm. In reality, however, this structure is rarely encountered and the received frequencies are more or less random. This prompts us to search for an algorithm that works well when we have a small number of packets sharing multiple invariances. One such possibility is shown in the next section.

4. Delay Estimation by Multiple Invariance

4.1. The SPECC Algorithm
SPECC stands for Signal Parameter Estimation via the Cayley-Hamilton Constraint. The SPECC algorithm [9, 10] was developed to solve the parameter estimation problem involving multiple invariances exactly in the form given in (13). The original SPECC algorithm assumes  Here, we present a more general version that allows non-consecutive  The task now is to estimate  assuming that the data matrices in (12)–(13) are the only measurements available. This is accomplished by separating the complex vector space to which the received signals belong into orthogonal signal and noise spaces. Proceeding as before, it can be shown that the signal subspaces of the data in each set share the following structure
Defining B = AT and  the structure can be rewritten as
In practice, the presence of noise and only a finite data set implies that only estimates of the signal subspaces are available. The search for  now assumes the form
An equivalent representation is
Note that the eigenvalues of  are the diagonal elements of  This minimization has been dealt with in [8], where the proposed solution was a multidimensional search for the minimizer. A closed-form solution, SPECC, based on an alternative constraint on the signal subspaces, was proposed in [9]. If  the problem reduces to the one considered in [9]. Let  be the characteristic polynomial of the P × P square matrix  The Cayley-Hamilton theorem enforces the following constraint on
where  is the identity matrix of size P. The polynomial belongs to the class of polynomials called the annihilating polynomials of  [20, p. 221], all of which reduce the matrix to the zero matrix. The minimal polynomial is defined to be the annihilating polynomial of minimum degree. It turns out that the characteristic polynomial is the minimal polynomial for a square matrix with distinct eigenvalues [20, p. 240]. On the other hand, it is possible to have a minimal polynomial of degree less than that of the characteristic polynomial for matrices with non-distinct eigenvalues. Nevertheless, the minimal polynomial has all the eigenvalues of the matrix as roots; the multiplicities of its roots, however, need not be the same as those of the eigenvalues of the corresponding square matrix. The minimal polynomial is unique, and any annihilating polynomial has the minimal polynomial as a factor. Consider an annihilating polynomial of  with zero coefficients at all positions except possibly those at  By definition of an annihilating polynomial, we have
This, along with (24), imposes the following constraint on the signal space
Since the eigenvalues of  occur as roots of  it remains to be shown that the coefficients can be determined from (29). In fact, we show that an annihilating polynomial of  of the form given in (28) can always be determined from (29). Let  denote the set of roots of a polynomial  Let  be the minimal polynomial of  By definition of minimal polynomials,  is the set of eigenvalues of the size-P square matrix 

Theorem 1. Let B and  be as defined before. Without loss of generality, we assume that the integers  are strictly increasing, i.e.,  Let  be the minimal polynomial of  Consider the minimization
Then we have the following:
1. If  and B has full column rank, then
2. If  is diagonalizable as  and BM has no zero columns, then
3. If  has distinct eigenvalues and BM has no zero columns, then
Remarks. Notice that the characteristic polynomial is minimal (d = P) when the eigenvalues are distinct. The theorem implies that P + 1 data sets structured as in (23), for some distinct set of integers, are sufficient for the estimation of the distinct eigenvalues of  Note that there is no obvious way to recognize the eigenvalues from the set of roots of  For the particular case of channel estimation, the eigenvalues have unit amplitude and are hence identifiable from the set of elements in  provided that no other root has unit amplitude. Notice also that if  is diagonalizable, then the full-rank condition on B can be relaxed. It suffices to have the row space of B orthogonal to no eigenvector of  which translates to the no-zero-column condition on BM. This implies that the theorem holds even when the rank of the matrix B is less than P, the number of multipaths. Since the matrix  for the estimation problem turns out to be diagonalizable with
this removes the ESPRIT restriction that the invariance be present among at least P data packets in each set, and makes it possible to exploit invariances even among data sets with just one packet.

Proof: Since all eigenvalues of  occur as roots of any annihilating polynomial, we first need to show that  is an annihilating polynomial of  What concerns us now is the particular form of : is there any annihilating polynomial with zero coefficients at all positions except possibly those at ? We show below that such a polynomial exists and that this results in  being an annihilating polynomial. Recall that any annihilating polynomial has the minimal polynomial as a factor. To prove that an annihilating polynomial of the given form exists, we need to show that a polynomial  exists such that  Comparing the largest power on both sides, we see that  The polynomial has  zero coefficients, and we have a total of l + 1 variables with which we can set these coefficients to zero. This is always possible if  which is true if  So an annihilating polynomial of the same form as  always exists under the given conditions. That  is also an annihilating polynomial is now easily seen. Since  is an annihilating polynomial of  and since the subspaces are related as in (23), we have
Since  is a minimizer of (30), we have
Since the norm is non-negative, we have
which implies
Under the full-rank assumption on B, it can be concluded that
This indicates that  is an annihilating polynomial for the matrix  and the first part of the theorem is proved. If  is diagonalizable, then the full-rank condition on B is not required to show that the polynomial is annihilating. To see this, diagonalize  in (32) to obtain
Since M is full rank, we can write this as
where  is also diagonal. If  and  are the diagonal elements of  then we have
which indicates that if none of the columns of BM is zero, then we have
From this, we conclude that is an annihilating polynomial of and hence includes all the eigenvalues of among its roots. This completes the proof of the theorem. The SPECC algorithm based on Theorem 1 can be described as follows. Collect frequencies into sets such that
The header data samples of the packets corresponding to the frequencies in each set are collected into  The signal subspace estimates corresponding to each data set are obtained by eigen-decomposing an estimate of the covariance matrix of the data, as before.
We solve the minimization problem,
assuming a normalization constraint on and find the roots of the resulting polynomial Note that the minimization is quadratic in nature and hence has a unique solution which can be obtained by taking the SVD of the appropriate matrix. Next, we estimate the eigenvalues as the P roots nearest to the unit circle. Finally, we estimate the delays via (21). 4.2.
4.2. The Acquisition Time for the SPECC Approach
The SPECC algorithm was shown to require a minimum of P + 1 data sets sharing P invariances among themselves. The ESPRIT algorithm, on the other hand, requires a minimum of two data sets and can do no better when more are available. ESPRIT requires that a minimum of P packets be available in each data set, whereas the SPECC algorithm works even with just one packet in each of the P + 1 data sets. This factor drastically reduces the waiting time compared with the ESPRIT algorithm. In Section 3.2 it was shown that we need to wait, on average, for  packets before the ESPRIT algorithm can be employed for delay estimation. We seek a similar result for the SPECC algorithm in this section. The problem can be cast in the integer domain as follows. Consider M realizations of a random variable X which can take values in the domain G = {1, 2, . . . , T}. We need to find the M for which the average number of distinct values among the M realizations of X is at least as large as P + 1. The average number of distinct values in the M realizations of the random variable X can be shown to be given by
For large T, we can approximate the average number as  so that when T = 75 and P = 3, we have E = P + 1 when  The waiting time is reduced from 15 packets in the ESPRIT case to 4 packets in the SPECC case. The occurrence of P pairs differing by the same constant and the occurrence of P + 1 distinct frequencies,
respectively, are the two events that must occur before the ESPRIT and SPECC algorithms can be applied. The reduction in waiting time shows up clearly when we compare the probabilities of the two events against the number of received packets. The probabilities are plotted in Fig. 1, and it is immediately obvious that the SPECC algorithm requires fewer packets than does ESPRIT. In particular, we considered T = 75 and P = 3. The SPECC algorithm requires about 6 packets before estimation can start with probability one; the ESPRIT algorithm requires 20 packets before estimation can proceed with probability 0.8.
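The two acquisition conditions can be checked with a small Monte Carlo experiment of our own (hop frequencies drawn uniformly from a hopset of size T = 75, with a cap to truncate rare long waits):

```python
import numpy as np

rng = np.random.default_rng(3)
T, P = 75, 3             # hopset size and number of multipaths
TRIALS, CAP = 2000, 200  # Monte Carlo runs; cap truncates rare long waits

def wait_esprit(rng):
    """Packets until P distinct frequency pairs differing by k = 1 appear."""
    seen, pairs = set(), set()
    for n in range(1, CAP + 1):
        f = int(rng.integers(1, T + 1))
        for g in (f - 1, f + 1):
            if g in seen:
                pairs.add((min(f, g), max(f, g)))
        seen.add(f)
        if len(pairs) >= P:
            return n
    return CAP

def wait_specc(rng):
    """Packets until P + 1 distinct frequencies appear."""
    seen = set()
    for n in range(1, CAP + 1):
        seen.add(int(rng.integers(1, T + 1)))
        if len(seen) >= P + 1:
            return n
    return CAP

esprit_mean = np.mean([wait_esprit(rng) for _ in range(TRIALS)])
specc_mean = np.mean([wait_specc(rng) for _ in range(TRIALS)])
print(f"ESPRIT: {esprit_mean:.1f} packets, SPECC: {specc_mean:.1f} packets")
```

With these parameters the SPECC condition (P + 1 = 4 distinct frequencies) is met after roughly 4 packets on average, while the ESPRIT condition (P = 3 distinct pairs differing by k = 1) typically takes several times longer, consistent with the behavior plotted in Fig. 1.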
5. Simulations and Discussion
The simulation setup is first described in detail; next, we present typical simulation results; finally, we discuss these results, leading to insights into the performance of the various algorithms considered here.
5.1. Simulation Setup
We compare the performance of the ESPRIT- and SPECC-based algorithms. The algorithms are applied
to a frequency hopping system similar to the one described in [11]. The transmission frequency was confined to the range 1899–1929 MHz, the uplink frequency range for the PCS system described in [11]. A total of 75 frequencies in this 30 MHz range are taken as the ‘hop’ frequencies, giving a frequency separation of 400 kHz. The symbol period considered for the system is the minimum required to achieve 500 kbps using QPSK. The number of multipaths in all our simulations was assumed to be P = 3, but results tend to be similar for larger numbers of multipaths. This system allows a minimum frequency separation of 400 kHz; thus delays larger than 2.5 µs require additional processing for estimation, one possible technique for which was outlined in Section 3. We consider here the harder problem of separating delays smaller than the symbol period, and assume the multipath delays to be 0.18 µs, 0.43 µs, and 0.90 µs in all our simulations. The frequency is hopped from packet to packet, with each packet consisting of the same header symbols and a varying payload. For our purposes, we generate a set of K random symbols which are used as header symbols in all packets. The header parts of the packets are thus identical, but generated randomly for every Monte Carlo run. Since we never operate on the payload part of the
packets, we never actually generate the payloads. Thus each packet consists of K QPSK symbols transmitted at a hop frequency. The algorithm requires culling packets whose frequencies share a certain structure. This aspect of the algorithm was studied in Sections 3.2 and 4.2, and we re-emphasize the significance of this step in the algorithm. For the moment, however, we are interested in studying the performance of the algorithm given that the frequency structure is readily available. With this in mind, the hop frequencies of the packets are assigned definite values rather than random values in the given range. In particular, we assume a total of N packets transmitted at frequencies that form an arithmetic progression, a structure that immediately leads to the “linear array” [7] formulation of the ESPRIT algorithm. The frequency structure described above is far from reality but extracts the best performance from the ESPRIT algorithm. The SPECC and MI-ESPRIT algorithms were presented as alternatives to this highly constrained structure. For purposes of fair comparison, however, we continue to assume the same “linear-array” structure of the frequencies for the SPECC and MI-ESPRIT algorithms as well. Our simulations assumed a total of N = 20 packets with the “linear-array” frequency structure. If the packet frequencies are labeled with the set of integers {1, 2, 3, . . . , N}, the frequency structures exploited by the three algorithms are as follows:
ESPRIT: {1, 2, . . . , 19}, {2, 3, . . . , 20}
SPECC: {1–5}, {6–10}, {11–15}, {16–20}
MI-ESPRIT: {1–5}, {6–10}, {11–15}, {16–20}
The SPECC algorithm requires a minimum of 4 groupings to estimate P = 3 multipath delays, hence the above grouping. Note that this grouping corresponds to  MI-ESPRIT can incorporate a more general grouping, but we assume the same grouping as for SPECC for comparison purposes.
The delays are expressed as fractions of the symbol period, and the mean-squared error calculated for this fraction is referred to as the ‘normalized’ MSE. The performance of the three algorithms is compared by plotting the normalized MSE against SNR and against the number of header symbols. SNR is measured at the receiver front end as the signal-to-noise ratio, where the signal constitutes the sum of all multipaths.
The MSE is computed over a total of 500 Monte Carlo runs. Before we proceed with the comparison of results, we note that the SPECC- and ESPRIT-based algorithms are the only viable options for channel delay estimation; the MI-ESPRIT algorithm involves a tedious multidimensional search and is displayed on the plots only as a benchmark, not as a practical on-line algorithm. Note also that, in each case, we confine our comparison to one of the three delays involved. Performance tends to be similar for typical values of the delays.
5.2. Results and Discussion
A comparison of the normalized mean-square error against SNR for the three algorithms is shown in Fig. 2. The SPECC algorithm clearly outperforms the TLS-ESPRIT algorithm in estimating the delay parameters. The MSE achieved by the SPECC algorithm is considerably lower than that of the ESPRIT algorithm at low SNRs; the two MSEs approach each other at high SNRs. The MI-ESPRIT algorithm, with its optimal exploitation of the frequency structure, performs the best, as expected. However, the SPECC algorithm performs the best at low SNRs and appears to be a worthy alternative to ESPRIT for low-power applications. At low SNRs, SPECC offers a 5 to 7 dB advantage over TLS-ESPRIT (which has comparable computational complexity) and MI-ESPRIT (whose computational complexity is extremely high). Figure 3 compares the three algorithms as the number of header symbols varies. Since the number of signal samples available is limited by the number of header symbols, signal-subspace estimation tends to be inaccurate. It is clear from the plots that the SPECC algorithm is more robust against inaccurate subspace estimation than the ESPRIT algorithm. Applications that employ the ESPRIT scheme require a sufficient number of samples for good estimation of the signal subspace; the subspace estimation is further hindered by the presence of noise. The SPECC algorithm exploits a linear constraint on the subspaces, as opposed to the rotational invariance exploited by the ESPRIT algorithm, and hence is better suited to handling inaccurate estimates of the signal subspaces. Detailed performance analyses of the suggested approaches are issues of current interest. The plot of the MSE against SNR for a more general case is shown in Fig. 4. A total of N = 30
packets was assumed available at the receiver. The packet hop frequencies continue to have a uniform linear arrangement. Labeling the packet hop frequencies as {1, 2, . . . , 30}, the frequency partitions considered for the algorithms are
ESPRIT: {1, 2, . . . , 19}, {2, 3, . . . , 20}
SPECC: {1–5}, {11–15}, {16–20}, {21–25}
MI-ESPRIT: {1–5}, {11–15}, {16–20}, {21–25}
The total number of packets used by SPECC is still 20, and hence the ESPRIT scheme was given the same number of packets. Note that this partition corresponds to M = P and  The performance is similar to that of the basic SPECC algorithm and provides better estimation than the ESPRIT algorithm, especially at low SNRs. Figure 5 compares the MSEs with the following frequency partition:
ESPRIT: {1, 2, . . . , 19}, {2, 3, . . . , 20}
SPECC: {1–5}, {11–15}, {21–25}, {26–30}
MI-ESPRIT: {1–5}, {11–15}, {21–25}, {26–30}
Notice again that the three algorithms are provided with the same number of packets. The SPECC partition corresponds to M = P and  A significant degradation in the performance of the SPECC algorithm is observed. While the SPECC algorithm can, in theory, exploit a general structure, performance degrades as the value of  strays from the optimal value. Note that  is optimal in the sense that the resulting polynomial has the same number of roots as the number of parameters estimated. For  we have more roots than required, and hence the degradation in performance. A crucial factor for ESPRIT estimation is the availability of at least as many packets in each data set as there are parameters to be estimated. The estimation improves substantially as the number of available packets increases. Figure 6 displays MSE versus the number of packets. The assumption of uniform linearity among the hop frequencies of the available packets makes it possible to have all but one packet in each data set. It is clear from the plot that most of the performance gain is achieved at a certain optimal number of packets; increasing the number of packets beyond this adds little to the performance.
Hande, Tong and Swami
Multipath Delay Estimation
6. Conclusions
In this paper, we have considered the multipath delay estimation problem for a slow frequency hopping system. The key idea was to exploit the invariant structures in the data packet. Two approaches were proposed; neither requires training bits, and both lead to closed-form estimates, which can be used to initialize high-complexity optimization algorithms such as MI-ESPRIT. The use of multiple invariances can significantly improve the performance; thus the SPECC algorithm has the advantage. Furthermore, the SPECC algorithm also has a shorter acquisition time than the ESPRIT approach. We also demonstrated that the SPECC algorithm is more robust to the inaccurate estimation of the signal subspaces that results from limitations in the number of header symbols available for estimation. One limitation of the proposed algorithm is that the multipath delay is limited to one symbol interval due to phase-wrapping ambiguity.
Notes

1. By high resolution we mean that, in the absence of noise, the algorithm has the ability to distinguish arbitrarily small relative delays.
2. If the K symbols are located in the middle of the packet, our approach remains valid by restricting the estimation window to those parts of the signal not interfered with by the data part of the packet.
3. N corresponds to the number of sensors and P to the number of sources in the DoA estimation problem.
Prashanth Hande received his B.Tech degree in Electrical Engineering from the Indian Institute of Technology, Bombay, in 1998 and
the M.S. degree from the School of Electrical Engineering, Cornell University, in May 2000. During the summer of 1999, he worked at Motorola Labs, Fort Worth. He was with Texas Instruments from July 2000 to November 2000, working on the 802.11 high data rate standard. He is currently with nBand Communications, developing broadband wireless technologies for software-defined radio. His research interests include equalization issues in broadband wireless, and signal processing for multi-carrier and multiple-input multiple-output systems.
[email protected].
Lang Tong received the B.E. degree from Tsinghua University, Beijing, China, in 1985, and the M.S. and Ph.D. degrees in electrical engineering in 1987 and 1990, respectively, from the University of Notre Dame, Notre Dame, Indiana. He was a Postdoctoral Research Affiliate at the Information Systems Laboratory, Stanford University, in 1991. Currently, he is an Associate Professor in the School of Electrical and Computer Engineering, Cornell University, Ithaca, New York. Dr. Tong received the Young Investigator Award from the Office of Naval Research in 1996, and the Outstanding Young Author Award from the IEEE Circuits and Systems Society. His areas of interest include statistical signal processing, adaptive receiver design for communication systems, signal processing for communication networks, and information theory.
[email protected]
http://www.ee.cornell.edu/~ltong
Ananthram Swami received the B.S. degree from the Indian Institute of Technology, Bombay; the M.S. degree from Rice University, Houston; and the Ph.D. degree from the University of Southern California, all in Electrical Engineering. He has held positions with Unocal Corporation, the University of Southern California, CS-3 and Malgudi Systems. He is currently a Research Scientist with the Communication Networks Branch of the US Army Research Lab, Adelphi, MD, where his work is in the broad area of signal processing for communications. Dr. Swami is a Senior Member of the IEEE, a member of the IEEE Signal Processing Society's (SPS) technical committee (TC) on Signal Processing for Communications (since 1998), a member of the IEEE Communication Society's TC on Tactical Communications, and an associate editor for IEEE Signal Processing Letters. He was a member of the SPS TC on Statistical Signal and Array Processing (1993–98), an associate editor of the IEEE Transactions on Signal Processing, vice-chairman of the Orange County Chapter of IEEE-GRS (1991–93), and co-organizer and co-chair of the 1993 IEEE SPS Workshop on Higher-Order Statistics, the 1996 IEEE SPS Workshop on Statistical Signal and Array Processing, and the 1999 ASA-IMA Workshop on Heavy-Tailed Phenomena. Dr. Swami was a Statistical Consultant to the California Lottery, developed a Matlab-based toolbox for non-Gaussian signal processing, and has held visiting faculty positions at INP, Toulouse, France. He has taught short courses for industry, and currently teaches courses on communication theory and signal processing at the University of Maryland.
[email protected]
Journal of VLSI Signal Processing 30, 179–195, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Greedy Detection

AMINA ALRUSTAMANI
Dubai Internet City, Dubai, P.O. Box 73000, UAE

BRANIMIR VOJCIC
Department of Electrical & Computer Engineering, The George Washington University, Washington, DC 20052, USA

ANDREJ STEFANOV
Telecommunications Research Center, Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287-7206, USA

Received September 6, 2000; Revised May 18, 2001
Abstract. In this paper, we introduce a new greedy algorithm developed for communication systems characterized by multiple simultaneous data transmissions. Specifically, we consider code-division multiple-access (CDMA) systems and systems employing space-time coding (STC). Optimum detection in such systems has exponential complexity and cannot be used in practical systems. We show that performance close to the optimum, yet with significantly lower complexity, can be achieved by the proposed algorithm. We also show that its performance is significantly better than that of most existing sub-optimum schemes over a wide range of operating conditions.

Keywords: multiuser detection, multiple access, space time coding, turbo coding, multi-input multi-output systems, interference cancellation
1. Introduction

Many communication systems are characterized by the simultaneous reception of a number of distinct transmitted signals. This is either due to the communication scheme itself, as in code-division multiple-access (CDMA) systems [1], in which more than one user transmits information at the same time, and systems employing space-time coding (STC) [2], in which more than one symbol is transmitted using different transmit antennas, or due to the channel characteristics, as in channels with multi-path and inter-symbol interference (ISI) [3]. The challenge at the receiver is to separate such interference and to fully exploit all received information in the detection process. Although centralized optimum demodulation meets this challenge,
the optimum receivers for these systems, in general, have an exponential complexity which is impractical for real systems [4–7]. A significant amount of research in the literature has focused on such problems [8–31]. The main objective of these studies is to find a sub-optimum approach with performance close to the optimum, yet with acceptable complexity. In this paper, we develop a new sub-optimum detector based on the greedy principle [32, 33], which has been applied to a wide variety of combinatorial optimization problems such as the knapsack problem, Huffman coding and the minimum spanning tree problem. We show that the greedy algorithm outperforms all the other suboptimum schemes considered and offers performance close to the optimum, yet with polynomial complexity. In this paper, we analyze the performance of the
greedy algorithm for two systems: CDMA systems and systems employing STC. The rest of the paper is organized as follows. In Section 2, an overview of the problem is presented. In Section 3, the greedy algorithm is described. The implementation and performance results of the greedy algorithm for CDMA systems and STC are discussed in Sections 4 and 5, respectively. Concluding remarks are given in Section 6.

2. General Overview of the Problem
The exponential complexity of detection in the systems considered in this paper arises from the maximization of the following function [1, 4, 6, 7]:

f(x) = 2Re{a^H x} − x^H B x + ln Pr(x),    (1)

where B is an n × n matrix, a is an n × 1 vector with complex elements, and x is an n × 1 vector with elements belonging to a discrete set representing the signal constellation, of size 2^M, where M is the number of bits per symbol. Pr(x) is the probability that the vector x is transmitted, (·)^H denotes the conjugate transpose of a vector or a matrix, and Re{z} is the real part of z. The values of f(x) belong to the set of real numbers. In CDMA systems, n represents the number of active users in the system. The task of the optimum detector is to find the vector x that maximizes the function f(x); the elements of the vector x that results in the largest value are the final estimates of the transmitted symbols for each user [1, 4]. Unfortunately, since the function is in a quadratic form and the elements of x are discrete, the problem is NP-hard, so an algorithm with polynomial complexity is not known to exist [5]. Therefore, the optimum detector calculates all the possible values of f(x) to find the vector that results in the maximum value. For systems with STC, n represents the number of transmit antennas. In this scenario the task at the receiver is to calculate all the possible values of f(x) [6, 7]. These values are then used to find the log-likelihoods of the transmitted bits, which are fed back to the iterative decoder. For a large number of transmit antennas and a large constellation size, finding all these values is impractical for real systems. One suboptimum approach suggests that we need to find only the large values of f(x) and neglect the small ones [53]. This brings us back to the same problem as in the CDMA system: how to find the large values?
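To make the cost of optimum detection concrete, here is a minimal brute-force sketch of the exhaustive search described above, assuming a binary antipodal alphabet for illustration; the function and variable names are ours, and the prior term is dropped, as it is constant for equiprobable symbols:

```python
import numpy as np
from itertools import product

def f(x, a, B):
    # f(x) = 2 Re{a^H x} - x^H B x  (the ln Pr(x) term is omitted:
    # it is the same for all x when symbols are equiprobable)
    return 2.0 * np.real(np.conj(a) @ x) - np.real(np.conj(x) @ B @ x)

def optimum_detect(a, B, alphabet=(-1.0, 1.0)):
    """Exhaustive maximization of f over all |alphabet|**n symbol vectors."""
    n = len(a)
    best_x, best_val = None, -np.inf
    for cand in product(alphabet, repeat=n):  # |alphabet|**n candidates
        x = np.asarray(cand)
        val = f(x, a, B)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

For binary signaling the search visits 2^n vectors, so each additional user or transmit antenna doubles the work; n = 30 already means around a billion metric evaluations, which is why the rest of the paper looks for large values of f(x) without enumerating them all.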
In combinatorial optimization, when binary signaling is employed, Eq. (1) is known as an unconstrained bivalent quadratic programming problem, a class with many different applications [34–44]. The characteristics of the matrix B, such as its density (the ratio of the number of nonzero entries to the total number of entries) and sparsity, the noise level affecting the elements in a, and the dimensionality of the problem mainly determine the complexity of the problem and the performance of the sub-optimum detectors. When the matrix has low density, or when it is diagonally dominant, or when the signal-to-noise ratio (SNR) is low, finding a near-optimum solution becomes much easier. In this case, most of the algorithms proposed in the literature provide a significant improvement in performance and complexity. In fact, for CDMA systems, it has been shown that a careful design of the signature waveforms and, thus, of the elements in B, results in a significant reduction in the complexity of jointly optimum detection [45–50]. However, these requirements limit the system capacity, assume synchronous transmission and cannot be maintained in multi-path channels [51, 52]. For STC systems, the matrix B reflects the channel characteristics and cannot be controlled as in CDMA systems.

3. The Greedy Algorithm

3.1. Greedy Principle
Although the above discussion suggests that the elements of the matrix B play a crucial role in determining the elements of x, most of the existing suboptimum detectors do not explicitly utilize these elements to decide on the symbol values; they are mainly utilized to estimate and suppress interference. Therefore, we propose the use of the greedy principle to account for their role in demodulation. Most of the problems tackled by the greedy principle [32, 33] have n inputs, such as the objects in the knapsack problem, and require obtaining a subset that satisfies certain constraints. The greedy method suggests an algorithm that works in stages, considering one input at a time. At each stage, the algorithm examines one input and decides on a partially constructed solution or on a complete solution. The main distinctive feature of the greedy principle is that the inputs are considered in an order determined by some selection procedure that is based on an optimization measure, such as maximizing the increase in
the profit at every stage. For example, some of the criteria used to order the objects in the knapsack problem are profit, weight and density (profit/weight). The algorithm proposed in this paper views the coefficients of the symbols x in f(x) as weights (inputs) that indicate the order in which the symbols could be estimated. The dominating coefficients, i.e., the large values, have more impact on the value of the function, and thus more effect on deciding the values of the symbols associated with them, than the smaller terms.

3.2. The Greedy Algorithm
In what follows, we present the greedy algorithm to find the maximum value of the function in (1). Then we point out how the algorithm can be utilized to find a subset of the values of the function for iterative decoding. The function f(x) can be equivalently presented as:

f(x) = Σ_{i=1}^{n} 2Re{a_i^* x_i} − 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} Re{x_i^* B_ij x_j} − Σ_{i=1}^{n} B_ii |x_i|^2 + ln Pr(x).    (2)

Before proceeding, we should note that this is the general form of the function f(x). In some systems the last two terms have the same value for all the possible vectors x, indicating that they carry no useful information for demodulation, and thus we eliminate them from the function. These variations will be pointed out when we discuss the applications of the algorithm in the following sections. The objective of the greedy algorithm is to maximize the above function. We need to define the inputs that will determine the order of detection in the attempt to maximize the function in (2). We propose the use of the following weights:

w_i = |a_i|, 1 ≤ i ≤ n,  and  w_ij = |B_ij|, 1 ≤ i < j ≤ n.    (3)

The weight w_i reflects the magnitude of the coefficient that depends on the symbol x_i, while w_ij is the magnitude of the coefficient of the quadratic term x_i^* B_ij x_j in the function. By examining Eq. (2), we can view the function f(x) as a sum of (n(n + 3)/2) + 1 numbers, where deciding on one symbol x_i has a direct effect on n + 1 terms in the summation. Based on the greedy principle, the proposed algorithm views the terms in (3) as weights that indicate the order in which the symbols should be sequentially estimated. By ordering these weights in descending order and examining the incremental effect of each term associated with a weight on the value of the function, we choose the values of the symbols that, in combination with the corresponding coefficients, make as large as possible a positive contribution to the value of the function. When the weight w_i is considered, the effect of choosing the corresponding symbol x_i is examined in conjunction with its impact on the other n terms in the function that involve x_i. Similarly, when the coefficient w_ij is considered, the effect of the corresponding symbols x_i and x_j on the other 2n + 1 terms is examined. Thus, to maximize the function in (2), we always try to resolve the symbols in the order of their contribution to the value of the function.

Initially, the values of the weights w_i, 1 ≤ i ≤ n, and w_ij, j > i, are sorted in descending order. The subscripts of these weights indicate the order in which the symbols must be estimated. The algorithm consists of at most n(n + 1)/2 stages, the total number of weights in (3), and each symbol is examined up to n times. The m-th stage corresponds to the m-th weight after sorting. In some cases the number of stages might be less than n(n + 1)/2, for reasons that will be evident later. Let the elements of the vector V be the values of the weights after sorting, and let x^(m) denote the tentative estimate of the vector x in the m-th stage. In stage m, if the m-th weight is of the form w_i, then the function f(x) in (2) is calculated 2^M times, for all the possible vectors obtained by substituting all possible symbol values for x_i in x^(m−1), and the vector that results in the largest metric value is chosen as the new estimate in the m-th stage. On the other hand, if the m-th weight is of the form w_ij, then the function f(x) is calculated 2^{2M} times, for all the possible vectors obtained by substituting all possible symbol values for x_i and x_j in x^(m−1); again, the vector that results in the largest metric value is chosen as the new estimate in the m-th stage. Therefore, in the m-th stage, one or two symbols, depending on the weight corresponding to the m-th stage, are examined and updated based on the decisions on the symbols made in the previous stages,
corresponding to the largest increment of the value of the function. After examining the main steps of the greedy algorithm, we observe the following characteristics. Initially, we start with no estimates for the symbols; as we progress through the stages, we obtain estimates for them, and in some stages we might not yet have estimates of all the symbols. In addition, in some stages we might examine a symbol that we already have an estimate for. In such cases we reexamine the estimate obtained for that symbol after obtaining or changing the estimates of the other symbols, to account for their effect on the quadratic terms. Estimates of all the symbols might not be available at least until stage ⌈n/2⌉, where ⌈c⌉ is the smallest integer greater than or equal to c. This occurs if the algorithm examines two previously unestimated symbols in each of the first stages. On the other hand, estimates of all the symbols might not be available until at most n(n − 1)/2 stages. This can occur if all the n weights of a specific symbol are examined in the last n stages of the greedy algorithm. Once estimates of all symbols are available, or estimates of the symbols examined in a stage already exist, one of the symbol combinations that should be examined in the stage is the one chosen in the previous stage. Therefore, to reduce the number of computations, the algorithm must forward the current best vector and the value of the function associated with it to the next stage. This implies that if the global maximum is found in some stage, then it propagates throughout the following stages. The number of stages might be less than n(n + 1)/2 in some cases; in Lemma 1, we summarize the cases in which the number of stages can be reduced.

Lemma 1. In the greedy algorithm, if two consecutive stages examine the weights w_kl and w_k (or w_kl and w_l), or if three consecutive stages examine w_k, w_l and w_kl in any of their six possible orderings, then deleting the stages corresponding to w_k and w_l will not affect the result.
Proof: The vectors examined in these consecutive stages have the same elements in all positions besides k and l, i.e., the same symbol estimates for all users except users k and l. Moreover, the vectors examined in the stages corresponding to w_k and w_l are subsets of the vectors examined in the stage corresponding to w_kl. Therefore, deleting those stages does not affect the result.

One major factor that influences the performance of the greedy algorithm is that in the first several stages estimates of all symbols are not available, so the effect of symbols with no estimates on the quadratic terms is not accounted for. This might direct the algorithm to converge to a local maximum. One possibility to reduce the effect of this factor is to forward the L largest values, and the corresponding vectors, from the current stage to the next one. By considering more than one value, the algorithm explores for the global maximum in more than one region, increasing the possibility of global convergence. Therefore, the input to stage m will be L estimates of x and the corresponding values of the function. In stage m, for each input estimate, 2^M or 2^{2M} values are calculated, depending on the weight examined; then, among these candidate values, the largest L values and the corresponding vectors are chosen as the output of the stage. Instead of examining 2^{nM} vectors as in the optimum scheme, the algorithm examines far fewer. In the beginning of the algorithm the number of examined vectors might be less than L; thus, in what follows, q represents the number of metric values and vectors that can be passed from one stage to the other. The greedy algorithm for obtaining the maximum value of f(x) is summarized with the following steps:
1. Sort the values of the weights in (3) in descending order: V = sort_descending.
2. Based on Lemma 1, delete the stages that are redundant and determine the resulting number of stages.
3. Set m = 1 and q = 1.
4. If the m-th weight is of the form w_i, find all the possible values of f(x), (2), corresponding to all the possible vectors obtained by substituting all the possible values of x_i into the q current candidate vectors. Set q = min(L, q·2^M), where min(x, y) is the minimum of the set of values
{x, y}. Choose the largest q values and the corresponding vectors to be the output of the m-th stage. Otherwise, if the m-th weight is of the form w_ij, find all the possible values of f(x), (2), corresponding to all the possible vectors obtained by substituting all the possible values of x_i and x_j into the q current candidate vectors. Set q = min(L, q·2^{2M}), and choose the largest q values and the corresponding vectors to be the output of the m-th stage.
5. If m is less than the number of stages, set m = m + 1 and go back to step 4; else proceed.
6. The vector corresponding to the largest metric value is the final estimate of the vector x.

If detection requires obtaining more than one value, as in STC, then the examined vectors with estimates for all symbols, and the corresponding values of the function, are saved throughout the stages.

3.3. Complexity

The complexity of the algorithm can be calculated as follows. There are n(n + 1)/2 terms that are sorted and examined. The complexity of sorting these terms is O(n² log n) if quick sort or merge sort is used [32, 33]. The function f(x) is calculated up to L·2^M times for each weight of the form w_i and up to L·2^{2M} times for each weight of the form w_ij (fewer in the early stages, where the number of candidates is still below L). Thus, if all terms are considered, the metric is calculated at most n·L·2^M + (n(n − 1)/2)·L·2^{2M} times, so the number of metric evaluations grows only polynomially with n. One should note that the greedy algorithm might converge to a maximum before examining all the stages; in that case, the largest value of the function does not change after the stage where the maximum is reached, which suggests that we do not need to examine all the stages. We can devise a criterion to stop the algorithm after a certain stage. For example, we can consider only weights that are greater than some value Z, such as the mean, or we can terminate the algorithm if the largest value does not change for P stages. In this case, the complexity will be less than the expression given above.

4. CDMA Systems

In CDMA systems, in general, each user is assigned a unique signature waveform that is used to spread the information bearing signal across the assigned frequency band [1]. The unique signature waveforms allow and facilitate the separation and demodulation of these simultaneously transmitted signals. In this section we consider a synchronous CDMA system using antipodal signaling over the additive white Gaussian noise (AWGN) channel as well as the frequency-nonselective Rayleigh-fading channel. The greedy algorithm for the asynchronous case was presented in [53] and [54]. Furthermore, iterative greedy-turbo decoding was also studied in [55].

4.1. System Model

For a synchronous CDMA system, the equivalent lowpass received waveform can be expressed as [1, 56]:

r(t) = Σ_{k=1}^{K} c_k √E_k b_k s_k(t) + n(t),  0 ≤ t ≤ T,

where K is the number of users, and E_k, s_k(t) and b_k ∈ {−1, 1} represent the energy per bit, the unit-energy signature waveform and the bit value of the k-th user, respectively; T is the bit interval, and the channel gains c_k are independent zero-mean complex-valued Gaussian random variables for the fading channel and c_k = 1 for the AWGN channel. n(t) is a complex zero-mean Gaussian random process for the fading channel and a real zero-mean Gaussian random process for the AWGN channel. The receiver consists of a bank of matched filters and a multiuser detector. The output of the filter matched to the signature waveform of user k and sampled at t = T is:
where ρ_ik denotes the cross-correlation of the signature waveforms of users i and k, and n_k denotes the noise at the output of the k-th matched filter.
The outputs of the matched filters are sufficient statistics for optimum multiuser detection and can be expressed in vector form as [1, 56]:

y = RAb + n,

where R is the normalized cross-correlation matrix of the signature waveforms, A is the diagonal matrix of the users' received complex amplitudes, b is the vector of transmitted bits, and n is the noise vector with autocorrelation matrix σ²R. The ML receiver selects the bits b that maximize the metric:

Ω(b) = 2Re{b^T A^H y} − b^T A^H R A b.    (7)
Hence, the optimum receiver for the synchronous case consists of K single-user matched filters followed by a detector that computes the metrics for the 2^K possible transmitted information bit vectors and selects the vector b that gives the largest metric value.

4.2. The Greedy Multiuser Detector
Relating this system to the discussion presented in Section 3, n = K, M = 1, x = b, and the metric Ω(b) plays the role of the function f(x). The terms ln(Pr(x)) are eliminated because, in this case, they are equal for all the possible bit combinations. Thus, based on (3), we use the magnitudes of the linear and cross-term coefficients of the metric as the weights to maximize the metric in (7).

4.3. Simulation Results
In order to show the performance of the greedy multiuser detector (MUD) algorithm, we conducted several simulations to evaluate and compare its performance with other detectors. In all of the following simulations, we consider a bit-synchronous CDMA system with binary random signatures and a spreading factor of 31. New signature waveforms are generated every 100 bits.
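For concreteness, the stage-by-stage procedure of Section 3, specialized to the binary antipodal case used in these simulations (M = 1, B real and symmetric), can be sketched as follows. The names are ours, zero entries mark symbols not yet decided, and the incremental-evaluation and early-stopping refinements discussed earlier are omitted for clarity:

```python
import numpy as np
from itertools import product

def metric(b, a, B):
    # Omega(b) = 2 b^T a - b^T B b; undecided entries are zero and
    # therefore contribute nothing to either term
    return 2.0 * (b @ a) - b @ B @ b

def greedy_mud(a, B, L=1):
    """Greedy multiuser detection sketch for b in {-1,+1}^K.
    a: linear-term coefficients (e.g. scaled matched-filter outputs)
    B: symmetric K x K quadratic-term coefficients
    L: number of candidate vectors forwarded from stage to stage
    """
    K = len(a)
    # Weights as in (3): |a_i| for single symbols, |B_ij|, i < j, for pairs.
    weights = [(abs(a[i]), (i,)) for i in range(K)]
    weights += [(abs(B[i, j]), (i, j)) for i in range(K) for j in range(i + 1, K)]
    weights.sort(key=lambda w: w[0], reverse=True)  # step 1: sort descending

    candidates = [np.zeros(K)]
    for _, idx in weights:  # one stage per weight
        trials = []
        for b in candidates:
            for vals in product((-1.0, 1.0), repeat=len(idx)):
                t = b.copy()
                t[list(idx)] = vals
                trials.append(t)
        trials.sort(key=lambda t: metric(t, a, B), reverse=True)
        # keep the L distinct candidates with the largest metric values
        seen, candidates = set(), []
        for t in trials:
            key = tuple(t)
            if key not in seen:
                seen.add(key)
                candidates.append(t)
                if len(candidates) == L:
                    break
    return candidates[0]  # step 6: best candidate is the final estimate
```

In the interference-free case (B diagonal) the procedure reduces to per-user sign detection, and increasing L widens the search exactly as described in Section 3.2.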
4.3.1. AWGN Channel. In the first set of simulations, Figs. 1, 2 and 3, we assumed perfect power control, i.e., all users have the same received energies. Figure 1 shows the performance of the greedy MUD, the optimum detector and several suboptimum detectors for K = 10. Specifically, we consider the following suboptimum detectors: the conventional detector, successive interference cancellation (SIC), the decorrelator, and the two-stage detector with either the conventional or the decorrelator detector in the first stage. The greedy MUD achieves almost the same performance as the optimum for L = 2 and outperforms the considered suboptimum detectors. The performance of the system for K = 20 is shown in Fig. 2. The algorithm with L = 1 outperforms the two-stage detector with the decorrelator in the first stage only up to a certain SNR, for both K = 10 and K = 20. This is because, unlike for the decorrelator detector, error-free demodulation in the absence of noise is not achievable by the algorithm, since it considers a subset of the possible bit combinations and might not converge to the global maximum. However, the performance of the algorithm in the high-SNR region improves significantly for L > 1. It should be mentioned that when forward error correction is used, we are interested in low to moderate SNRs, where the greedy multiuser detector performance is very close to the optimum. Figure 3 shows the bit-error rate as a function of the number of users, K, at a fixed SNR. For the considered bit-error rate, the capacity in terms of the number of users for the suboptimum detectors is 3, 5, 8, 14 and 18 users for the conventional detector, SIC, the two-stage detector with the conventional detector in the first stage, the decorrelator, and the two-stage detector with the decorrelator in the first stage, respectively. On the other hand, the capacity of the greedy MUD for L = 1, 2, 4, 6 and 8 is 19, 24, 29, 31 and 32 users, respectively. Therefore, a significant increase in capacity is observed for L > 1, compared to the other examined suboptimum detectors.
In [20], the results for the T- and M-algorithms, with and without a whitening filter, were obtained for the same system. The capacity of the M-algorithm with a maximum of three surviving paths and without a whitening filter is 13 users, while with the whitening filter it is 25 users, which is greatly improved and similar to that of the greedy algorithm for L = 2. Therefore, the greedy algorithm, with no whitening filter and with L = 2, can achieve the same performance as the M-algorithm with a whitening filter and M = 3. This demonstrates the importance of including
the cross-correlation coefficients to improve the performance of a detector employing the greedy principle. 4.3.2. Single-Path Fading Channel. Figures 4 and 5 show the performance of the considered detectors over
the single-path Rayleigh fading channel for K = 20 and K = 30, respectively. The statistical characteristics of the fading channel for each user are the same. For K = 20, Fig. 4, the performance of the greedy MUD algorithm with L = 1 is close to the single-user bound for
SNR < 20 dB. Furthermore, among all the other considered suboptimum detectors, only the two-stage detector with the decorrelator in the first stage matches the performance of the greedy MUD algorithm. However, there is a substantial difference in performance between the greedy MUD and the two-stage detector with the decorrelator in the first stage for K = 30, Fig. 5. For SNR < 30 dB, the greedy detector significantly outperforms
all other examined suboptimum detectors. For low SNR, the greedy detector performance is close to the single-user bound, whereas in the high SNR region (SNR > 15 dB) the performance of the greedy algorithm is improved by increasing the value of L to overcome the error-floor effect. Comparing the results obtained for the AWGN channel and the single-path fading channel, we observe that the greedy MUD algorithm performs better in the fading channel; this is especially evident in Figs. 2 and 4. This is because the greedy MUD exploits the variability and sparsity of the coefficients of the ML metric, which, in turn, depend on the average received powers of the users. When there are large differences between the values of the coefficients in the ML metric, the greedy detector is more likely to approach the optimum performance.
5. Space-Time Coding Systems

In recent years, the goal of providing high-speed wireless data services has generated a great amount of interest among the research community. The main challenge in achieving reliable communications lies in the severe conditions encountered when transmitting information over the wireless channel. Recent information-theoretic results [57, 58] have demonstrated that the capacity of the system in the presence of block Rayleigh fading improves significantly with the use of multiple transmit and receive antennas. Similar results, for multiple-antenna systems over quasi-static Rayleigh fading channels, were previously reported by Foschini and Gans [2] and Telatar [59]. In this section, we extend the results for turbo coded modulation for systems with transmit and receive antenna diversity introduced in [6, 7] to the case of a large number of antennas. The computational complexity of the decoding algorithm presented in [6, 7] is too high when the number of transmit antennas is large. Therefore, we present a new iterative-greedy demodulation-decoding algorithm.

5.1. System Model
We consider a mobile communication system that employs n antennas at the transmitter and m antennas at the receiver. The information bits are encoded by a channel encoder, and the coded bits are passed through a serial-to-parallel converter and mapped to a particular signal constellation. At each time slot t, the output of the modulator is a signal that is transmitted using transmit antenna i, for i = 1, ..., n. All signals are transmitted
simultaneously, each from a different transmit antenna, and all signals have the same transmission period T. The signal at each receive antenna is a noisy superposition of the transmitted signals corrupted by Rayleigh fading. Since we assume a Rayleigh fading channel, the path gain from transmit antenna i to receive antenna j is modeled as a sample of an independent zero-mean complex Gaussian random variable with variance 0.5 per dimension. The wireless channel is modeled as a block fading channel, i.e., the path gains are constant over S symbols, corresponding to a number of information bits determined by the spectral efficiency of the system, and are independent from one block of size S to the next. At time t, the received signal at antenna j is the superposition of the n transmitted signals, weighted by the corresponding path gains, plus noise.
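The block-fading model just described can be sketched as a minimal simulation (the block length S, number of blocks B, and noise variance N0 are illustrative values, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 4                  # transmit and receive antennas
S, B = 130, 5                # symbols per fading block, blocks per frame (illustrative)
N0 = 0.1                     # total noise variance per received sample (assumed)

# 4-PSK constellation normalized so the total transmit energy per interval is 1
const = np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j]) / np.sqrt(n)
x = const[rng.integers(4, size=(B * S, n))]     # one row of n symbols per time slot

y = np.empty((B * S, m), dtype=complex)
for b in range(B):
    # path gains: zero-mean complex Gaussian, variance 0.5 per dimension,
    # constant within a block and independent from block to block
    H = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) * np.sqrt(0.5)
    t = slice(b * S, (b + 1) * S)
    w = (rng.standard_normal((S, m)) + 1j * rng.standard_normal((S, m))) * np.sqrt(N0 / 2)
    y[t] = x[t] @ H + w      # noisy superposition at the m receive antennas
```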
The noise samples are modeled as independent samples of a zero-mean complex Gaussian random variable with equal variance per dimension. The signal constellation at each transmit antenna is normalized such that the average energy of the constellation is 1/n; hence the total transmitted energy at each transmission interval is normalized to unity. We define the signal-to-noise ratio (SNR) as the ratio of the total transmitted energy to the total noise power.

5.2. Turbo Codes for Systems with Antenna Diversity
In this section, we describe the use of turbo coded modulation for wireless communication systems with multiple transmit and receive antennas.

5.2.1. Encoding. The data is divided into blocks of N bits and encoded by a binary turbo code. The turbo code consists of two systematic recursive convolutional codes concatenated in parallel via a pseudo-random interleaver [60]. The turbo coded bits are then interleaved, passed through a serial-to-parallel converter, and mapped to a particular signal constellation. Different spectral efficiencies can be obtained by varying the code rate and the constellation. The additional interleaver is used to remove the correlation between consecutive transmitted bits, which helps in the decoding process. Its size is chosen such that there is no additional increase in the delay requirements of the system. Since we assume block fading, the turbo code interleaver size is chosen to be a multiple of the fading block size. Effectively, we are channel coding across consecutive "differently faded" blocks. The additional interleaver is necessary in order to decorrelate the log-likelihoods of adjacent bits. Furthermore, it distributes the burst errors due to a deeply faded block over the entire frame, which provides additional diversity. The above coded modulation scheme is obtained by concatenating a binary encoder to n memoryless modulators through a bit interleaver. Therefore, it represents a realization of bit-interleaved coded modulation [61].

5.2.2. Iterative Demodulation-Decoding. In this section, we present a suboptimal iterative demodulation-decoding algorithm for the above system. We compute the log-likelihoods of the transmitted bits and use them as if they were the likelihoods of observations from a BPSK modulation over an additive white Gaussian noise channel. In the first iteration, we obtain the log-likelihoods assuming that all constellation points are equally likely. This is a reasonable assumption, considering that the a priori probabilities of the transmitted symbols are difficult to compute prior to the decoding process. However, due to the use of a soft-input soft-output decoder, we can obtain an estimate of the probabilities of the transmitted symbols and use them in the decoding process. That way we obtain an iterative demodulation-decoding algorithm [29, 62–64]. We now describe how the log-likelihoods of the individual bits are computed from the received signal. Assume that the number of different channel symbols at each transmit antenna, i.e., the size of the constellation, is 2^M, and that a two-dimensional modulation is used. Denote the set of constellation points accordingly; note that each constellation point corresponds to M bits. Following the notation of the previous section, the received signal at antenna j at time t is given by
At this point, for clarity, we drop the subscript t. We have
Notice that the received signals correspond to nM coded bits; hence we need to compute the
log-likelihoods of these nM bits using this set of signals. Let us denote the nM bits that constitute the transmitted symbol vector by b. A group of M bits is used to select the constellation point for the ith transmit antenna. The log-likelihood for the lth element of b is given by
which can also be written as
Note that, by knowing b, we also have knowledge of the transmitted symbol vector c; hence

where c is obtained through the mapping from b to c. Equivalently:

which, under the assumption that the transmitted symbols are independent, may be written as

Since we have bit interleaving, we may assume that the probabilities of the bits that compose a symbol are independent; hence

Finally, since the received signals are conditionally independent given the transmitted symbols, and taking the noise statistics into account, we obtain the log-likelihood for the bit. In order to obtain the probabilities of the systematic and parity bits, we use the soft-output turbo decoding algorithm to obtain log-likelihoods for both the systematic and the parity bits. In particular, the probabilities are obtained from the log-likelihoods of the extrinsic bit information, as described in [65]. Those probabilities are then fed back to the demodulator and used as a priori information in the demodulation process, as given in Eq. (16). Examples of the iterative demodulation-decoding algorithm in the case of serially concatenated codes and multiple antennas are given in [29].

5.3. Iterative-Greedy Demodulation-Decoding

The main computational burden of the proposed scheme is imposed by the computation of all the log-likelihoods. This computation requires examining all candidate symbol vectors and calculating nM log-likelihoods per time interval. The complexity of the scheme can be substantially reduced by a suboptimal approach that examines only the subset of the possible symbol vectors c that corresponds to large values of the a priori probabilities. Therefore, we propose the use of the greedy algorithm. To employ the greedy algorithm in iterative decoding, the metric used in the algorithm is defined to incorporate the updated a priori distribution from the turbo decoder. Thus, the following metric is used for the greedy approach:
which can be simplified to:
The above equation is the same as f(x) in (2). Based on the greedy principle, we need to utilize the coefficients to determine the order in which the symbols are estimated so as to maximize (17). Therefore, based on (3), we define the following weights:
Since the metric in (17) is a function of the updated a priori distribution obtained from the turbo decoder, in every iteration the greedy algorithm is forced to explore a different subset of the symbol combinations c and the associated probabilities. Therefore, the performance of the modified scheme can be improved by combining the values obtained in previous iterations with those of the current iteration.

5.4. Simulation Results
In this section, we present the performance of the proposed scheme involving turbo codes. The component
codes of the turbo code are two recursive systematic convolutional codes, described by their feedforward and feedback generator polynomials. The turbo code employs a random interleaver, and its rate is obtained by puncturing some of the parity bits; the puncturing matrix for the parity bits is given by
The interleaver that scrambles the turbo coded bits consists of two pseudo-random interleavers of length N: one for the systematic and one for the parity bits. We present examples in which a 4-PSK constellation with Gray mapping is used at each transmit antenna. In the case of 4-PSK, one systematic and one parity bit are mapped to each channel symbol. We consider the cases of 4 and 8 transmit and receive antennas. Since the underlying code has rate 1/2, the 4 × 4 and 8 × 8 antenna systems achieve spectral efficiencies of 4 and 8 bits/sec/Hz, respectively. We use the iterative turbo decoding algorithm employing MAP constituent decoders with 10 iterations. In Fig. 6, we compare the frame error rate (FER) of the turbo code with the full exponential complexity iterative demodulation-decoding algorithm, with the iterative-greedy demodulation-decoding algorithm, and with the iterative scheme in which the MMSE detector proposed in [31] is employed as a complexity reduction technique. We assume four transmit and four receive antennas and a quasi-static flat Rayleigh fading channel. The turbo code interleaver size is N = 520 information bits. We observe that the performance of the turbo code with iterative-greedy demodulation-decoding is within 1.5 and 2.5 dB of the outage capacity at the two considered FER operating points, respectively. On the other hand, the iterative MMSE scheme is 2 dB and 3 dB away from the outage capacity at the same FER operating points. This indicates that the performance of the greedy algorithm is 0.5 dB better than that of the MMSE scheme, while the complexities of the greedy algorithm and the MMSE detector are almost the same for this example. Compared to the full exponential complexity iterative demodulation-decoding algorithm, the loss in performance when the greedy algorithm is utilized is about 0.5 dB. Figure 7 presents the frame error rate (FER) for the turbo code with the iterative demodulation-decoding
algorithm for several different values of the parameter L. Since L is a design parameter, it may be used to provide a performance-complexity trade-off. We assume eight transmit and eight receive antennas and a quasi-static flat Rayleigh fading channel. The spectral efficiency of the system is 8 bits/sec/Hz. The turbo code interleaver size is N = 1040 information bits. We present results for L = 2, 3, 4, 6, 8 and 10. At the first considered FER, the turbo code with L = 6, 8 and 10 is about 1.6 dB away from the outage capacity, while for L = 2, 3 and 4 the performance is about 2.75, 2.1 and 1.8 dB from the outage capacity, respectively. Similarly, at the second considered FER, the performance of the turbo code with L = 6, 8 and 10 is 2 dB away from the outage capacity, and for L = 4 and 3 it is about 2.3 and 2.8 dB from the outage capacity, respectively. Finally, in Fig. 8 we present the bit error rate (BER) of the turbo code with the full exponential complexity iterative demodulation-decoding algorithm and with the iterative-greedy demodulation-decoding algorithm for the case of the block fading channel model. We also compare the results with the iterative scheme in which array processing is employed as a complexity reduction technique [66]. We assume four transmit and four receive antennas. The turbo code interleaver size is N = 2600 bits and we have B = 5 fading blocks per frame. We observe that for L = 2 there is virtually no difference in performance between the exponential complexity decoding and the iterative-greedy decoding. On the other hand, the iterative scheme with array processing is 5 dB away from the full exponential scheme at the considered BER.
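The full exponential-complexity demodulator used as the baseline in these comparisons marginalizes the likelihood over every candidate symbol vector to produce per-bit log-likelihoods. A minimal sketch (the function name is ours, natural binary labeling of the constellation indices is assumed for illustration, and the greedy algorithm would restrict the sum to a subset of high-probability vectors):

```python
import numpy as np
from itertools import product

def bit_llrs(y, H, const, bits_per_sym, N0, prior=None):
    """Exact per-bit LLRs for one receive vector y of an n-antenna system.

    Marginalizes exp(-|y - cH|^2 / N0) * P(c) over all |const|^n symbol
    vectors c: the exponential-complexity baseline that the greedy
    demodulator approximates by visiting only high-probability vectors.
    """
    n, M = H.shape[0], bits_per_sym
    num = np.zeros(n * M)           # probability mass with bit = 1
    den = np.zeros(n * M)           # probability mass with bit = 0
    for idx in product(range(len(const)), repeat=n):
        c = const[list(idx)]
        like = np.exp(-np.sum(np.abs(y - c @ H) ** 2) / N0)
        if prior is not None:       # a priori symbol probabilities, if fed back
            like *= np.prod([prior[i][idx[i]] for i in range(n)])
        for i, s in enumerate(idx):
            for l in range(M):
                (num if (s >> l) & 1 else den)[i * M + l] += like
    return np.log(num / den)

# Example: n = 2 antennas, 4-PSK, noiseless reception of symbol indices (0, 3)
const = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
H = np.eye(2, dtype=complex)
y = const[[0, 3]] @ H
llr = bit_llrs(y, H, const, bits_per_sym=2, N0=0.1)
```

The loop over `product(...)` is exactly where the exponential cost (|const|^n terms) arises, and where a greedy restriction of the candidate set pays off.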
6. Conclusion

In this paper, we examine a new greedy algorithm developed for communication systems characterized by multiple simultaneous data transmissions. Specifically, we consider code-division multiple-access (CDMA) systems and systems employing space-time coding (STC). Optimum detection in such systems has exponential complexity and cannot be used in practical systems. For CDMA systems, the greedy MUD algorithm considerably outperforms most of the existing suboptimum schemes, especially for moderate and high loads in the low and moderate signal-to-noise ratio (SNR) region. The results show that the greedy MUD algorithm increases the capacity and improves the performance in the presence of fading. For many cases that we could check, the proposed detector has identical or almost identical
performance to that of the optimum detector in the range of SNR of most practical interest. We also introduced a new iterative-greedy demodulation-decoding algorithm for systems employing STC. The algorithm allows the receiver to operate with polynomial complexity in the number of transmit antennas, with only a slight loss in performance as compared to the full exponential complexity iterative demodulation-decoding algorithm.

References

1. S. Verdu, Multiuser Detection, Cambridge: Cambridge University Press, 1998.
2. G.J. Foschini and M.J. Gans, “On Limits of Wireless Communications in a Fading Environment When Using Multiple Antennas,” Wireless Personal Communications, vol. 6, no. 3, 1998, pp. 311–335.
3. J.G. Proakis, Digital Communications, 3rd edn., New York: McGraw-Hill, 1995.
4. S. Verdu, “Minimum Probability of Error for Asynchronous Gaussian Multiple-Access Channels,” IEEE Trans. Inform. Theory, vol. IT-32, no. 1, 1986, pp. 85–95.
5. S. Verdu, “Computational Complexity of Optimum Multiuser Detection,” Algorithmica, vol. 4, 1989, pp. 303–312.
6. A. Stefanov and T.M. Duman, “Turbo Coded Modulation for Wireless Communications with Antenna Diversity,” in Proc. IEEE VTC–Fall, Amsterdam, Netherlands, Sept. 1999, pp. 1565–1569.
7. A. Stefanov and T.M. Duman, “Turbo Coded Modulation for Systems with Transmit and Receive Antenna Diversity,” in Proc. IEEE GLOBECOM, Rio de Janeiro, Brazil, Dec. 1999, pp. 2336–2340.
8. R. Lupas and S. Verdu, “Linear Multiuser Detectors for Synchronous Code-Division Multiple-Access Channels,” IEEE Trans. Inform. Theory, vol. 35, no. 1, 1989, pp. 123–136.
9. Z. Xie, R. Short, and C. Rushforth, “A Family of Suboptimum Detectors for Coherent Multiuser Communications,” IEEE J. Select. Areas Commun., vol. 8, 1990, pp. 683–690.
10. U. Madhow and M.L. Honig, “MMSE Interference Suppression for Direct-Sequence Spread Spectrum CDMA,” IEEE Trans. Commun., vol. 42, 1994, pp. 3178–3188.
11. J. Holtzman, “DS/CDMA Successive Interference Cancellation,” in Code Division Multiple Access Communications, S. Glisic and P. Leppanen (Eds.), Dordrecht, The Netherlands: Kluwer Academic, 1995, pp. 161–182.
12. M. Varanasi and B. Aazhang, “Multistage Detection in Asynchronous Code-Division Multiple-Access Communications,” IEEE Trans. Commun., vol. 38, no. 4, 1990, pp. 509–519.
13. M. Varanasi and B. Aazhang, “Near-Optimum Detection in Synchronous Code-Division Multiple-Access Systems,” IEEE Trans. Commun., vol. 39, no. 5, 1991, pp. 725–735.
14. S. Gollamudi, S. Nagaraj, and Y.-F. Huang, “Optimal Multistage Interference Cancellation for CDMA Systems Using the Nonlinear MMSE Criterion,” in Proc. Asilomar Conference on Signals, Systems and Computers, Nov. 1998.
15. V. Vangi and B. Vojcic, “Soft Interference Cancellation in Multiuser Communications,” Wireless Personal Communications,
vol. 3, 1996, pp. 118–128.
16. D. Divsalar, M. Simon, and D. Raphael, “Improved Parallel Interference Cancellation for CDMA,” IEEE Trans. Commun., vol. 46, no. 2, 1998, pp. 258–268.
17. R.R. Muller and J. Huber, “Iterated Soft-Decision Interference Cancellation for CDMA,” in Broadband Wireless Communications, Luise and Pupolin (Eds.), London, U.K.: Springer, 1998, pp. 110–115.
18. A. Duel-Hallen, “Decorrelating Decision-Feedback Multiuser Detector for Synchronous Code-Division Multiple-Access Channel,” IEEE Trans. Commun., vol. 41, no. 2, 1993, pp. 285–290.
19. L. Wei and C. Schlegel, “Synchronous DS-CDMA Systems with Improved Decorrelating Decision-Feedback Multiuser Detection,” IEEE Trans. Veh. Technol., vol. 43, no. 3, 1994, pp. 767–772.
20. L. Wei, L.K. Rasmussen, and R. Wyrwas, “Near Optimum Tree Search Detection Schemes for Bit-Synchronous Multiuser CDMA Systems Over Gaussian and Two-Path Rayleigh-Fading Channels,” IEEE Trans. Commun., vol. 45, no. 6, 1997, pp. 691–700.
21. L. Wei and L.K. Rasmussen, “A Near Ideal Noise Whitening Filter for an Asynchronous Time-Varying CDMA System,” IEEE Trans. Commun., vol. 44, no. 10, 1996, pp. 1355–1361.
22. Z. Xie, C.K. Rushforth, R. Short, and T.K. Moon, “Joint Signal Detection and Parameter Estimation in Multiuser Communications,” IEEE Trans. Commun., vol. 41, no. 7, 1993, pp. 1208–1216.
23. Z. Xie and C.K. Rushforth, “Multiuser Signal Detection Using Sequential Decoding,” IEEE Trans. Commun., vol. 38, no. 5, 1990, pp. 578–583.
24. L.K. Rasmussen, T.J. Lim, and T.M. Aulin, “Breadth-First Maximum Likelihood Detection in Multiuser CDMA,” IEEE Trans. Commun., vol. 45, no. 10, 1997, pp. 1176–1178.
25. M.K. Varanasi, “Group Detection for Synchronous Gaussian Code-Division Multiple-Access Channels,” IEEE Trans. Inform. Theory, vol. 41, no. 4, 1998, pp. 1083–1096.
26. M.K. Varanasi, “Parallel Group Detection for Synchronous CDMA Communication over Frequency-Selective Rayleigh Fading Channels,” IEEE Trans. Inform. Theory, vol. 42, no. 1, 1996, pp. 116–128.
27. Z. Shi, W. Du, and Driessen, “A New Multistage Detector in Synchronous CDMA Communications,” IEEE Trans. Commun., vol. 44, no. 5, 1996, pp. 538–541.
28. B. Wu and Q. Wang, “New Suboptimum Multiuser Detectors for Synchronous CDMA Systems,” IEEE Trans. Commun., vol. 44, no. 7, 1996, pp. 782–785.
29. A. Reial and S.G. Wilson, “Concatenated Space-Time Coding,” in Proc. CISS, Princeton, March 2000.
30. A. Reial and S.G. Wilson, “Concatenated Space-Time Coding for Large Antenna Arrays,” IEEE Trans. Inform. Theory, submitted.
31. H. El Gamal and A.R. Hammons, Jr., “The Layered Space-Time Architecture: A New Perspective,” IEEE J. Select. Areas Commun., submitted.
32. T.C. Hu, Combinatorial Algorithms, Reading, MA: Addison-Wesley, 1982.
33. K.P. Bogart, Introductory Combinatorics, Boston, MA: Pitman (Advanced Publishing Program), 1983.
34. F. Barahona, M. Jünger, and G. Reinelt, “Experiments in Quadratic 0-1 Programming,” Mathematical Programming,
vol. 44, 1989, pp. 127–137.
35. B. Borchers and J.E. Mitchell, “An Improved Branch and Bound Algorithm for Mixed Integer Non-Linear Programs,” Computers and Operations Research, vol. 21, no. 4, 1994, pp. 359–367.
36. P.L. Hammer, P. Hansen, and B. Simeone, “Roof Duality, Complementation and Persistency in Quadratic 0-1 Optimization,” Mathematical Programming, vol. 28, 1984, pp. 121–155.
37. P.M. Pardalos and G.P. Rodgers, “Computational Aspects of a Branch and Bound Algorithm for Quadratic Zero-One Programming,” Computing, vol. 45, 1990, pp. 131–144.
38. P.M. Pardalos and Y. Li, “Integer Programming,” in Handbook of Statistics, C.R. Rao (Ed.), Amsterdam, Netherlands: Elsevier Science Publishers, vol. 9, 1993, pp. 279–302.
39. P.M. Pardalos, A.T. Phillips, and J.B. Rosen, Topics in Parallel Computing in Mathematical Programming, Beijing and New York: Science Press, 1993.
40. P.M. Pardalos and G.P. Rodgers, “Parallel Branch and Bound Algorithms for Quadratic Zero-One Programs on the Hypercube Architecture,” Annals of Operations Research, vol. 22, 1990, pp. 271–292.
41. P.M. Pardalos and S. Jha, “Complexity of Uniqueness and Local Search in Quadratic 0-1 Programming,” Operations Research Letters, vol. 11, 1992, pp. 119–123.
42. P.M. Pardalos, “Construction of Test Problems in Quadratic Bivalent Programming,” ACM Trans. Math. Software, vol. 17, 1991, pp. 74–87.
43. M.W. Carter, “The Indefinite Zero-One Quadratic Problem,” Discrete Applied Mathematics, vol. 7, 1984, pp. 23–44.
44. V.P. Gulati, S.K. Gupta, and A.K. Mittal, “Unconstrained Quadratic Bivalent Programming Problem,” European Journal of Operational Research, vol. 15, 1984, pp. 121–125.
45. F. Barahona, “A Solvable Case for Quadratic 0-1 Programming,” Discrete Applied Mathematics, vol. 13, 1986, pp. 127–137.
46. P. Kempf, “A Non-Orthogonal Synchronous DS-CDMA Case, where Successive Cancellation and Maximum-Likelihood Multiuser Detectors are Equivalent,” in Proc. 1995 IEEE Int. Symp. Inform. Theory, Sept. 1995, p. 321.
47. J.C. Picard and M. Queyranne, “Selected Applications of Min Cut in Networks,” INFOR, vol. 20, no. 4, 1982, pp. 395–422.
48. R. Learned, A. Willsky, and D. Boroson, “Low Complexity Optimal Joint Detection for Oversaturated Multiple Access Communications,” IEEE Trans. Signal Processing, vol. 45, no. 1, 1997, pp. 113–123.
49. J.A.F. Ross and D.P. Taylor, “Vector Assignment Scheme for M + N Users in N-Dimensional Global Additive Channel,” Electronics Letters, vol. 28, no. 17, 1992, pp. 1634–1636.
50. C. Sankaran and A. Ephremides, “Solving a Class of Optimum Multiuser Detection Problems with Polynomial Complexity,” IEEE Trans. Inform. Theory, vol. 44, no. 5, 1998, pp. 1958–1961.
51. L. Welch, “Lower Bounds on the Maximum Cross Correlation of Signals,” IEEE Trans. Inform. Theory, vol. 20, 1974, pp. 397–399.
52. J.L. Massey and T. Mittelholzer, “Welch’s Bound and Sequence Sets for Code-Division Multiple-Access Systems,” in Sequences II: Methods in Communication, Security, and Computer Science, R. Capocelli, A. De Santis, and U. Vaccaro (Eds.), New York: Springer-Verlag, 1993.
53. A. AlRustamani and B.R. Vojcic, “Greedy-Based Multiuser Detection for CDMA Systems,” in Proc. 2000 Conference on Information Sciences and Systems, Princeton University, Princeton, New Jersey, vol. II, March 2000, pp. FA8-11–FA8-14.
54. A. AlRustamani and B. Vojcic, “Greedy Multiuser Detection Over Single-Path Fading Channel,” in Proc. 2000 IEEE Sixth International Symposium on Spread Spectrum Techniques and Applications (ISSSTA), vol. 2, pp. 708–712.
55. B. Vojcic, A. AlRustamani, and A. Damnjanovic, “Greedy Iterative Multiuser Detection for Turbo Coded Multiuser Communications,” invited paper, ICT 2000, Mexico, May 2000.
56. Z. Zvonar and D. Brady, “Multiuser Detection in Single-Path Fading Channels,” IEEE Trans. Commun., vol. 42, no. 2/3/4, 1994, pp. 1729–1739.
57. E. Biglieri, G. Caire, and G. Taricco, “Limiting Performance of Block Fading Channels with Multiple Antennas,” IEEE Trans. Inform. Theory, submitted.
58. T.L. Marzetta and B.M. Hochwald, “Capacity of a Mobile Multiple-Antenna Communication Link in Rayleigh Flat Fading,” IEEE Trans. Inform. Theory, vol. IT-45, 1999, pp. 139–157.
59. E. Telatar, “Capacity of Multi-Antenna Gaussian Channels,” AT&T-Bell Labs Internal Tech. Memo., June 1995.
60. C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon Limit Error-Correcting Coding: Turbo Codes,” in Proc. ICC ’93, Geneva, Switzerland, May 1993, pp. 1064–1070.
61. G. Caire, G. Taricco, and E. Biglieri, “Bit-Interleaved Coded Modulation,” IEEE Trans. Inform. Theory, vol. 44, no. 3, 1998, pp. 927–946.
62. X. Li and J.A. Ritcey, “Trellis-Coded Modulation with Bit Interleaving and Iterative Decoding,” IEEE J. Select. Areas Commun., vol. 17, 1999, pp. 715–724.
63. X. Li and J.A. Ritcey, “Bit-Interleaved Coded Modulation with Iterative Decoding Using Soft Feedback,” IEE Electronics Letters, vol. 34, 1998, pp. 942–943.
64. I. Abramovici and S. Shamai, “On Turbo Encoded BICM,” Annales des Télécommunications.
65. S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, “A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes,” JPL TMO Progress Report, vol. 42-127, Nov. 1996.
66. A. Stefanov and T.M. Duman, “Turbo-Coded Modulation for Systems with Transmit and Receive Antenna Diversity Over Block Fading Channels: System Model, Decoding Approaches, and Practical Considerations,” IEEE J. Select. Areas Commun., vol. 19, no. 5, 2001, pp. 958–968.
Amina AlRustamani received her B.S., M.S., and D.Sc. degrees in electrical engineering from the George Washington University in 1993, 1996 and 2001, respectively. She is currently working for
Dubai Internet City, Dubai, UAE. Her areas of interest are spread spectrum, multiuser detection, wireless/mobile networks and satellite communications.
[email protected]
Branimir R. Vojcic received the Diploma in electrical engineering, and the M.S. and D.Sc. degrees in 1980, 1986 and 1989, respectively, from the Faculty of Electrical Engineering, University of Belgrade, Yugoslavia. Since 1991 he has been on the faculty at the George Washington University, where he is a Professor and Chairman in the Department of Electrical and Computer Engineering. His current research
interests are in the areas of communication theory, performance evaluation and modeling of terrestrial cellular and satellite mobile communications, code-division multiple access, multiuser detection, adaptive antenna arrays, space-time coding and ad-hoc networks. He has also been an industry consultant and has published and lectured extensively in these areas. Dr. Vojcic is a Senior Member of IEEE, was an Associate Editor for IEEE Communications Letters, and is a recipient of the 1995 National Science Foundation CAREER Award.
vojcic@seas.gwu.edu
Andrej Stefanov received the B.S. degree in electrical engineering from the University of Cyril and Methodius, Skopje, Macedonia, and the M.S. and Ph.D. degrees in electrical engineering from Arizona State University, Tempe, AZ, in 1996, 1998 and 2001, respectively. His current research interests are wireless and mobile communications, space-time coding and turbo codes. During the summer of 2000, he held an internship with the Advanced Development Group, Hughes Network Systems, Germantown, MD. Dr. Stefanov is a corecipient of the IEEE Benelux Joint Chapter Best Paper Award at IEEE VTC-Fall ’99, Amsterdam, the Netherlands.
[email protected]
Journal of VLSI Signal Processing 30, 197–215, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
A New Class of Efficient Block-Iterative Interference Cancellation Techniques for Digital Communication Receivers* ALBERT M. CHAN AND GREGORY W. WORNELL Department of Electrical Engineering and Computer Science, and the Research Laboratory of Electronics, Massachusetts Institute of Technology, MIT, Rm. 36-677, Cambridge, MA 02139, USA Received November 13, 2000; Revised August 24, 2001
Abstract. A new and efficient class of nonlinear receivers is introduced for digital communication systems. These “iterated-decision” receivers use optimized multipass algorithms to successively cancel interference from a block of received data and generate symbol decisions whose reliability increases monotonically with each iteration. Two variants of such receivers are discussed: the iterated-decision equalizer and the iterated-decision multiuser detector. Iterated-decision equalizers, designed to equalize intersymbol interference (ISI) channels, asymptotically achieve the performance of maximum-likelihood sequence detection (MLSD), but have a computational complexity only on the order of a linear equalizer (LE). Even more importantly, unlike the decision-feedback equalizer (DFE), iterated-decision equalizers can be readily used in conjunction with error-control coding. Similarly, iterated-decision multiuser detectors, designed to cancel multiple-access interference (MAI) in typical wireless environments, approach the performance of the optimum multiuser detector in uncoded systems with a computational complexity comparable to that of a decorrelating detector or a linear minimum mean-square error (MMSE) multiuser detector.

Keywords: equalization, multiuser detection, decision-feedback equalizer, multipass receivers, multistage detectors, iterative decoding, stripping, interference cancellation
1. Introduction
* This work has been supported in part by Qualcomm, Inc., the Army Research Laboratory under Cooperative Agreement DAAL01-96-20002, and Sanders, a Lockheed-Martin Company.

Over the last several decades, a variety of equalization techniques have been proposed for use on intersymbol interference (ISI) channels. Linear equalizers (LE) are attractive from a complexity perspective, but often suffer from excessive noise enhancement. Maximum-likelihood sequence detection (MLSD) [1] is an asymptotically optimum receiver in terms of bit-error rate performance, but its high complexity has invariably precluded its use in practice. Decision-feedback equalizers (DFE) [2] are a widely used compromise, retaining a complexity comparable to the LE, but incurring much less noise enhancement. However, DFEs still
have some serious shortcomings. First, decisions made at the slicer can only be fed back to improve future decisions, due to the sequential way in which the receiver processes data. Thus, only postcursor ISI can be subtracted, so even if ideal postcursor ISI cancellation is assumed, the performance of the DFE is still limited by possible residual precursor ISI and noise enhancement. Second, and even more importantly, the sequential structure of the DFE makes it essentially incompatible with error-control coding (on channels not known at the transmitter, as is the case of interest in this paper). As a result, use of the DFE has been largely restricted to uncoded systems. In parallel to these developments, a variety of multiuser detectors have been proposed for code-division multiple-access (CDMA) channels over the last decade and a half as solutions to the problem of mitigating multiple-access interference (MAI) [3]. Given the close
Chan and Wornell
coupling between the problems of suppressing ISI and MAI, there are, not surprisingly, close relationships between the corresponding solutions to these problems. For example, decorrelating detectors and linear minimum mean-square error (MMSE) multiuser detectors—the counterparts of zero-forcing and MMSE linear equalizers—are attractive from a complexity perspective, but suffer from noise enhancement. Optimum maximum-likelihood (ML) multiuser detection, while superior in performance, is not a practical option because of its high complexity. In this paper, we introduce a class of promising multipass receivers that is a particularly attractive alternative to all of these conventional equalizers and detectors. Specifically, in Section 2 we describe the iterated-decision equalizer, and in Section 3 we describe the corresponding iterated-decision multiuser detector. In particular, we show that these new receivers achieve asymptotically optimum performance while requiring surprisingly low complexity.

2. The Iterated-Decision Equalizer
In the discrete-time baseband model of the pulse amplitude modulation (PAM) communication system we consider, the transmitted data is a white M-ary phase-shift keying (PSK) stream of coded or uncoded symbols x[n], each with energy The symbols x[n] are corrupted by a convolution with the impulse response of the channel, a[n], and by additive noise, to produce the received symbols

The noise is a zero-mean, complex-valued, circularly symmetric, stationary, white Gaussian noise sequence with variance that is independent of x[n]. The associated channel frequency response is denoted by
As increasingly aggressive data rates are pursued in wideband systems to meet escalating traffic requirements, ISI becomes increasingly severe. Accordingly, in this paper we pay special attention to the performance and properties of the equalizers in this regime. For the purposes of analysis, a convenient severe-ISI channel model we will exploit is one in which
a[n] is a finite impulse response (FIR) filter of length L, where L is large and the taps are mutually independent, zero-mean, complex-valued, circularly symmetric Gaussian random variables with variance The channel taps a[n] are also independent of the data x[n] and the noise It is worth pointing out that this is also a good channel model for many wireless systems employing transmitter antenna diversity in the form of linear space-time coding [4]. In this section and Section 2.1, we summarize the results of [5], which focuses on the basic theory and fundamental limits of the iterated-decision equalizer when the receiver has accurate knowledge of a[n]. In Section 2.2, we develop and analyze adaptive implementations in which the channel coefficients a[n] are not known a priori. Examining the fixed and adaptive scenarios separately and comparing their results allows system designers to isolate channel-tracking effects from overall equalizer behavior. We emphasize that in both cases, we restrict our attention to transmitters that have no knowledge of the channel, which is the usual case for reasonably rapidly time-varying channels. The iterated-decision equalizer we now develop processes the received data in a block-iterative fashion. Specifically, during each iteration or “pass,” a linear filter is applied to a block of received data, and tentative decisions made in the previous iteration are then used to construct and subtract out an estimate of the ISI. The resulting ISI-reduced data is then passed on to a slicer, which makes a new set of tentative decisions. With each successive iteration, increasingly refined hard decisions are generated using this strategy. The detailed structure of the iterated-decision equalizer is depicted in Fig. 1. The parameters of all systems and signals associated with the lth pass are denoted using the superscript l. On the lth pass of the equalizer, where l = 1, 2, 3, . . .
, the received data r[n] is first processed by a linear filter producing the sequence
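The channel model and received sequence just described can be sketched numerically as follows. This is a minimal illustration only: the block length N, channel length L, and SNR are hypothetical values chosen for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper).
N, L, snr_db, Es = 1024, 64, 12.0, 1.0

# White QPSK (M = 4 PSK) symbol stream x[n], each symbol with energy Es.
constellation = np.sqrt(Es) * np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
x = rng.choice(constellation, size=N)

# Severe-ISI model: L i.i.d. zero-mean, complex-valued, circularly symmetric
# Gaussian taps; per-tap variance 1/L gives unit average channel energy.
a = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(1.0 / (2 * L))

# Received sequence r[n] = (a * x)[n] + w[n], with circular complex AWGN.
noise_var = Es / 10 ** (snr_db / 10)
w = (rng.standard_normal(N + L - 1) + 1j * rng.standard_normal(N + L - 1)) \
    * np.sqrt(noise_var / 2)
r = np.convolve(a, x) + w
```

Each of the l passes then operates on this fixed block r, so the cost of the channel simulation is paid once per packet.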
Block-Iterative Interference Cancellation Techniques
Next, an appropriately constructed estimate of the ISI is subtracted from to produce, i.e.,

where

(In subsequent analysis, we will show that is never required for the first iteration, so the sequence may remain undefined.) Since is intended to be some kind of ISI estimate, we restrict attention to the case in which
Finally, the slicer generates the hard decisions from using a minimum-distance rule. The composite system consisting of the channel in cascade with l iterations of the multipass equalizer can be conveniently characterized. In particular, when x[n] and are sequences of zero-mean uncorrelated symbols with energy such that their normalized correlation is of the form
the slicer input after l iterations can be expressed as¹

where and are the frequency responses of a[n] and respectively, where is a complex-valued, marginally Gaussian, zero-mean white noise sequence, uncorrelated with the input symbol stream x[n], and having variance

and where the accuracy of the approximation in (8) increases with the length L of the impulse response a[n]. A formal statement of this result and its associated proof is developed in [5]. The second-order model (8) turns out to be a useful one for analyzing and optimizing the performance of the iterated-decision equalizer. In particular, it can be used to obtain a surprisingly accurate estimate of the symbol error rate for M-ary PSK even though we ignore the higher-order statistical dependencies. The first step in developing these results is to observe that (8) implies that the signal-to-interference+noise ratio (SINR) at the slicer input during each pass can be written, using (9), as

and that the probability of symbol error at the lth iteration may be approximated by the high signal-to-noise ratio (SNR) formula for the M-ary PSK symbol error rate of a symbol-by-symbol threshold detector for additive white Gaussian noise (AWGN) channels, given by [6]

where

For the special case of QPSK (M = 4), the extension of (11) to arbitrary SNRs is given by [6]
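For reference, the standard high-SNR approximation to the M-ary PSK symbol error rate of a symbol-by-symbol minimum-distance detector on an AWGN channel, which is the form that (11) and (12) take in standard references such as [6], can be written in generic notation as

```latex
\Pr(\varepsilon) \;\approx\; 2\,Q\!\left(\sqrt{2\,\mathrm{SINR}}\;\sin\frac{\pi}{M}\right),
\qquad
Q(x) \;=\; \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty} e^{-t^{2}/2}\,dt .
```

Here the SINR plays the role of the effective per-symbol SNR of the equivalent AWGN channel described above.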
Note that this equivalent channel model effectively suggests that, in the absence of coding, we replace the computationally expensive Viterbi-algorithm-based MLSD with a simple symbol-by-symbol detector, as if the channel were an AWGN channel.2 Since the probability of error given by (11) or (13) is a monotonically decreasing function of SINR, a natural equalizer design strategy involves maximizing the SINR over all and Thus, the optimal filters are [5]
The result for is intuitively satisfying. If so that then the output of exactly reproduces the ISI component of More generally, describes our confidence in the quality of the estimate If is a poor estimate of x[n], then will in turn be low, and consequently a smaller weighting is applied to the ISI estimate that is to be subtracted from On the other hand, if is an excellent estimate of x[n], then and nearly all of the ISI is subtracted from Thus, while the strictly causal feedback filter of the DFE subtracts out only postcursor ISI, the noncausal nature of the filter allows the iterated-decision equalizer to cancel both precursor and postcursor ISI. Note also that the center tap of is indeed asymptotically zero, as stipulated by (6). Some comments can also be made about the special case when l = 1. During the first pass, the feedback branch is not used because so the sequence does not need to be defined. Moreover, the filter takes the form
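A single pass of this structure can be sketched in the frequency domain. The filter expressions below are illustrative stand-ins written in the spirit of the optimal filters (14) and (15), i.e., an MMSE-style front end steered by the decision reliability rho, and a rho-weighted ISI-cancellation filter with its center tap removed; they are not the paper's exact formulas, whose constants are not recoverable here.

```python
import numpy as np

def id_equalizer_pass(r, a, N, Es, noise_var, x_hat, rho):
    """One pass: linear filtering, weighted ISI cancellation, QPSK slicing."""
    Nfft = len(r)
    A = np.fft.fft(a, Nfft)          # channel frequency response
    R = np.fft.fft(r)

    # MMSE-style front end steered by rho (rho = 0 reduces to the MMSE-LE).
    B = np.conj(A) / ((1 - rho ** 2) * np.abs(A) ** 2 + noise_var / Es)

    # Composite response and its average "center tap" (useful signal gain).
    BA = B * A
    g = np.mean(BA)

    # ISI-cancellation filter: rho-weighted composite response with the
    # center tap removed, so both precursor and postcursor ISI are targeted.
    D = rho * (BA - g)

    X_hat = np.fft.fft(np.asarray(x_hat, dtype=complex), Nfft)
    z = np.fft.ifft(B * R - D * X_hat)[:N]   # slicer input (edge effects ignored)

    # Minimum-distance QPSK slicer.
    return np.sqrt(Es / 2) * (np.sign(z.real) + 1j * np.sign(z.imag))
```

With rho set from the previous pass's estimated decision reliability, repeated calls implement the multipass structure; with rho = 0 the first call behaves as a linear MMSE equalizer, mirroring the observation below.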
which is the minimum mean-square error linear equalizer (MMSE-LE). Thus the performance of the iterated-decision equalizer, after just one iteration, is identical to the performance of the MMSE-LE. In Section 2.1, we show that the equalizer, when using multiple iterations, performs significantly better than both the MMSE-LE and the minimum mean-square error decision-feedback equalizer (MMSE-DFE). We now proceed to simplify the SINR expression that characterizes the resulting performance. With the optimum and the SINR from (10) becomes [5]
where
is the expected SNR at which the transmission is received. Now, since our channel model implies that is a complex-valued, circularly symmetric Gaussian random variable with zero mean and variance it follows that is exponentially distributed with mean Evaluating the expectation in (17), our simplified SINR expression is [5]

where

is the exponential integral. Equation (21) can, in turn, be used in the following convenient iterative algorithm for determining the set of correlation coefficients to be used at each iteration, and simultaneously predicting the associated sequence of symbol error probabilities:

1. Set and let l = 1.
2. Compute the SINR at the slicer input on the lth decoding pass from via (21), (19), and (20). [It is worth pointing out that for shorter ISI channels, we can alternatively (and in some cases more accurately) compute from via (17) and (18), where the expectation is replaced by a frequency average.]
3. Compute the symbol error probability at the slicer output from via (11).
4. Compute the normalized correlation coefficient between the symbols x[n] and the decisions generated at the slicer via the approximation [8]

where

5. Increment l and go to step 2.
In the special case of QPSK, it can be shown that the algorithm can be streamlined by eliminating Step 3 and replacing the approximation (23) with the exact formula
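The five-step recursion can be sketched as follows for QPSK. Since the exact SINR map of (21) involves exponential-integral constants that are elided in this text, the sketch substitutes a simple illustrative surrogate with the same qualitative behavior: residual interference shrinks in proportion to (1 - rho**2). The QPSK relations BER = Q(sqrt(SINR)) and rho = 1 - 2*BER are the standard ones for independent per-rail errors, assumed here to play the role of the exact update.

```python
import math

def predict_error_rates(snr, n_iters):
    """Predict (SINR, BER, rho) across passes for QPSK, starting from
    rho = 0 (the first pass, equivalent to the MMSE-LE)."""
    rho, history = 0.0, []
    for _ in range(n_iters):
        # Step 2 (illustrative surrogate for the SINR map): residual
        # interference is scaled by (1 - rho**2) as decisions improve.
        sinr = snr / (1.0 + (1 - rho ** 2) * snr / (1.0 + snr))
        # Step 3: QPSK bit-error rate on the effective AWGN channel,
        # BER = Q(sqrt(SINR)) with Q(x) = erfc(x / sqrt(2)) / 2.
        ber = 0.5 * math.erfc(math.sqrt(sinr / 2.0))
        # Step 4: QPSK correlation update rho = 1 - 2*BER.
        rho = 1.0 - 2.0 * ber
        history.append((sinr, ber, rho))
    return history
```

Running the recursion shows the behavior described in Section 2.1: the error probability drops sharply over the first few passes and then settles at a steady-state value.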
2.1. Performance
In Fig. 2, bit-error rate is plotted as a function of SNR per bit for 1, 2, 3, 5, and an infinite number of iterations. We observe that steady-state performance is approximately achieved with comparatively few iterations, after which additional iterations provide only negligibly small gains in performance. It is significant that few passes are required to converge to typical target bit-error rates, since the amount of computation is directly proportional to the number of passes required; we emphasize that the complexity of a single pass of the iterated-decision equalizer is comparable to that of the DFE or the LE. Figure 3 compares the theoretical performance of the iterated-decision equalizer when the number of channel taps with experimentally obtained results when L = 256. The experimental results are indeed consistent with theoretical predictions, especially at high SNR where it has been theoretically shown [5] that the equalizer achieves the matched filter bound, i.e., For comparison, in Fig. 3 we also plot the theoretical error rates of the ideal MMSE-DFE, the MMSE-LE, and the zero-forcing linear equalizer (ZF-LE), based on their asymptotic SINRs in the large ISI limit [5]:
We can readily see that at moderate to high SNR, the iterated-decision equalizer requires significantly less transmit power than any of the other equalizers to achieve the same probability of error. Specifically, at high SNR we have from [5] that and ln where denotes Euler’s constant. Thus, the MMSE-DFE theoretically requires e^γ ≈ 1.78 times, or approximately 2.507 dB, more transmit power to achieve the same probability of error as the iterated-decision equalizer. Moreover, as the MMSE-LE requires increasingly more transmit power than the iterated-decision equalizer to achieve the same probability of error. The ZF-LE is even worse: for all which is expected since the zeros of the random channel converge uniformly on the unit circle in the long ISI limit [9]. These results emphasize the strong suboptimality of conventional equalizers. The performance of the iterated-decision equalizer for channels whose taps are few in number, non-Gaussian, and/or correlated is discussed in [5, 10].

2.2. Adaptive Implementations
We now develop an adaptive implementation of the iterated-decision equalizer, in which optimal FIR filter coefficients are selected automatically (from the received data) without explicit knowledge of the channel
characteristics. We focus on the single-channel case; multichannel generalizations follow in a straightforward manner, as developed in [10]. The iterated-decision equalizer is designed to process received data in a block-iterative fashion, so it is ideally suited for packet communication in which the packet size is chosen small enough that the channel encountered by each packet appears linear time-invariant. As is typically the case with other adaptive equalizers, the adaptive iterated-decision equalizer makes use of training symbols sent along in the packet with the data symbols. Suppose that a block of white M-ary PSK symbols x[n] for n = 0, 1, . . . , N – 1 is transmitted; some of the symbols (not necessarily at the head of the packet) are for training, while the rest are data symbols. In the adaptive implementation of the iterated-decision equalizer, the filters and for the lth iteration are finite-length filters. Specifically, has strictly anticausal taps and strictly causal taps plus a center tap, while has strictly anticausal taps and strictly causal taps with no center tap. Before the first pass (l = 1), we need to initialize the hard decisions Since the locations and values of the training symbols in x[n] are known at the receiver, we set for the n corresponding to those locations. For all the other n between 0 and N – 1 inclusive, we set to be a “neutral” value—for white PSK symbols, this value should be zero. On the lth pass of the equalizer where l = 1, 2, 3, . . . , the slicer input can be expressed as3
where
Using a minimum-distance rule, the slicer then generates the hard decisions from for all n between 0 and N – 1 inclusive, except for those n corresponding to the locations of training symbols in x[n]. For those n, we set In the lth iteration, there are two sets of data available to the receiver: r[n] and If we assume that for the purposes of determining the optimal filters (as is similarly done in the adaptive DFE in decision-directed mode), then it is reasonable to choose and so as to minimize the sum of error squares:
Since this is a linear least-squares estimation problem, the optimum is [11]
(31) where
and The resulting equalizer lends itself readily to practical implementation, even for large filter lengths. In particular, the matrix can be efficiently computed using correlation functions involving r[n] and and can be efficiently computed using formulas for the inversion of a partitioned matrix [12]. We now turn to a couple of implementation issues. First, we would ideally like our finite-length adaptive filters to approximate (14) and (15), which are infinite length. The optimal in (14) includes a filter matched to a[n], and the optimal in (15) includes a cascade of a[n] and the corresponding matched filter, suggesting that a reasonable rule of thumb is to select Second, the block-iterative nature of the equalizer allows the training symbols to be located anywhere in the packet. Since—in contrast to the DFE—the locations do not appear to affect equalizer performance, we arbitrarily choose to uniformly space the training symbols within each packet. In Fig. 4, we plot the bit-error rate of the adaptive iterated-decision equalizer as a function of the number of iterations, for varying amounts of training data. The graph strongly suggests that there is a threshold for the number of training symbols, below which the adaptive equalizer performs poorly and above which the bit-error rate consistently converges to approximately the same steady-state value regardless of the exact number of training symbols. The excess training data is still important, though, since the bit-error rate converges more quickly with more training data.
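The per-iteration least-squares fit can be sketched as follows. The helper name `ls_filters`, the tap counts Kb and Kd, and the circular-shift handling of block edges are all illustrative choices in the spirit of the sum-of-error-squares criterion and its solution (31), not the paper's exact construction.

```python
import numpy as np

def ls_filters(r, x_ref, Kb, Kd):
    """Jointly fit an FIR filter b (2*Kb+1 taps, center included) applied
    to the received data r and an FIR filter d (2*Kd taps, center tap
    excluded) applied to the tentative decisions x_ref, so that
    b*r - d*x_ref best matches x_ref in the least-squares sense."""
    N = len(x_ref)
    cols = []
    # Columns from shifts of the received data (filter b, center included).
    for k in range(-Kb, Kb + 1):
        cols.append(np.roll(r[:N], k))
    # Columns from shifts of the decisions (filter d, center tap excluded;
    # minus sign because the ISI estimate is subtracted from the filter output).
    for k in range(-Kd, Kd + 1):
        if k != 0:
            cols.append(-np.roll(x_ref, k))
    H = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(H, x_ref, rcond=None)
    return coeffs[: 2 * Kb + 1], coeffs[2 * Kb + 1:]
```

One such solve per iteration, shared across all symbols in the packet, is the computation referred to in the complexity comparison with the RLS-based DFE below.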
We next examine the probability of bit error as a function of SNR for varying amounts of training data. From Fig. 5 we see that, as expected, performance improves as the amount of training data is increased. Moreover, only a modest amount of training symbols is required at high SNR for the adaptive equalizer to perform as if the channel were exactly known at the receiver. For comparison purposes, we also plot in Fig. 5 the performance of the recursive least squares (RLS) based implementation of the adaptive DFE [11]. The DFE performs significantly worse than the iterated-decision equalizer for comparable amounts of training data. Indeed, the high-SNR gap is even larger than the 2.507 dB determined for the nonadaptive case. This is because, as Figs. 3 and 5 show, the performance of the adaptive DFE is not accurately predicted by the nonadaptive MMSE-DFE, even in the long ISI limit. It is also worth stressing that the RLS-based adaptive DFE is much more computationally expensive than the adaptive iterated-decision equalizer, because the RLS-based DFE requires the multiplication of large matrices for each transmitted symbol, whereas the iterated-decision equalizer essentially requires the computation of one large matrix inverse per iteration for all the symbols in the packet, with the number of required iterations being typically small.
2.3. Coded Implementations
For ideal bandlimited AWGN channels, powerful coding schemes such as trellis-coded modulation with maximum likelihood (ML) decoding can improve the performance over uncoded PAM so that channel capacity is approached. On the other hand, for bandlimited channels with strong frequency-dependent distortion, coding must be combined with equalization techniques. While the MMSE-DFE has certain attractive characteristics in the context of coded systems [13, 14], in many practical settings it is difficult to use effectively. In particular, in typical implementations the MMSE-DFE cancels postcursor ISI by using delay-free symbol decisions, which in a coded system are often highly unreliable compared to ML decisions, and performance is often poor as a result. From this perspective, the iterated-decision equalizer, which avoids this problem, is a compelling alternative to the MMSE-DFE in coded systems. The structure of a communication system that combines the iterated-decision equalizer with coding is shown in Fig. 6. Although the sequence x[n] is first encoded before it is transmitted, the approximation in (8) is still valid because typical trellis codes and random codes generally produce white symbol streams [7]. What makes the iterated-decision equalizer an
attractive choice when coding schemes are involved is that the structure of the equalizer allows equalization and coding to be largely separable issues. One of the main differences now in the iterated-decision equalizer is that the symbol-by-symbol slicer has been replaced by a soft-decision ML decoder; the other is that the batch of decisions must be re-encoded before being processed by the filter. For shorter ISI channels, performance of the system may be improved by inserting an interleaver after each encoder to reduce correlation between adjacent symbols, and by inserting a corresponding deinterleaver before the decoder to reduce the correlation of the residual ISI and noise. Among a variety of interesting issues to explore is the relationship between the structure and performance of such coded systems and those developed in [15].

3. The Iterated-Decision Multiuser Detector
We now develop the counterpart of the iterated-decision equalizer for the multiuser detection problem. As we will see, the resulting detectors are structurally similar to multistage detectors [16] in that they both generate tentative decisions for all users at each iteration and subsequently use these to cancel MAI at the next iteration. However, unlike those original multistage detectors, the new detectors developed in this section explicitly take into account the reliability of tentative
decisions and are optimized to maximize the signal-to-interference+noise ratio (SINR) at each iteration. For the purposes of illustration (and to simplify exposition), we consider a P-user discrete-time synchronous channel model, where the ith user modulates an M-ary PSK symbol onto a randomly generated signature sequence of length Q assigned to that user, where the taps of the sequence are mutually independent, zero-mean, complex-valued, circularly symmetric Gaussian random variables with variance 1/Q. The received signal is where is the Q × P matrix of signatures, is the P × P diagonal matrix of received amplitudes, is the P × 1 vector of data symbols, and w is a Q-dimensional Gaussian vector with independent zero-mean, complex-valued, circularly symmetric components of variance The structure of the iterated-decision multiuser detector is depicted in Fig. 7. The parameters of all systems and signals associated with the lth pass are denoted using the superscript l. On the lth pass of the detector, where l = 1, 2, 3, . . . , the received vector r is first premultiplied by a P × Q matrix producing the P × 1 vector
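One pass of this detector can be sketched in matrix form. The specific MMSE-style choice of B below, with the residual-interference term scaled by (1 - rho**2), is an illustrative stand-in in the spirit of the linear MMSE detector that the first pass reduces to; it is not the paper's exact optimal matrix.

```python
import numpy as np

def id_multiuser_pass(r, S, A, Es, noise_var, x_hat, rho):
    """One pass: linear front end B, subtraction of a rho-weighted MAI
    estimate with zero diagonal, then per-user QPSK slicing."""
    H = S @ A                                    # effective Q x P channel
    # MMSE-style front end steered by rho (rho = 0: linear MMSE detector).
    G = (1 - rho ** 2) * (H @ H.conj().T) + (noise_var / Es) * np.eye(H.shape[0])
    B = H.conj().T @ np.linalg.inv(G)
    C = B @ H                                    # composite P x P response
    # MAI-cancellation matrix: rho-weighted composite with zero diagonal,
    # matching the zero-diagonal structural requirement.
    D = rho * (C - np.diag(np.diag(C)))
    z = B @ r - D @ x_hat                        # slicer input
    return np.sqrt(Es / 2) * (np.sign(z.real) + 1j * np.sign(z.imag))
```

As with the equalizer, iterating this pass with an updated reliability rho implements the multipass detector, and rho = 0 on the first pass plays the role noted below for the linear MMSE multiuser detector.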
Next, an appropriately constructed estimate of the MAI is subtracted from to produce, i.e.,

where

with a P × P matrix. (In subsequent analysis, we will show that is never required for the first iteration, so the vector may remain undefined.) Since is intended to be some kind of MAI estimate, we restrict attention to the case in which
Finally, a bank of slicers generates the P × 1 vector of hard decisions from using a minimum-distance rule. Let us now characterize the composite system consisting of the signatures, the channel, and the multipass multiuser detector. Let x and be vectors of zero-mean uncorrelated symbols, each with energy and let the normalized correlation matrix of the two vectors be expressed in the form
where I is the identity matrix and is the P × P matrix with a 1 in the ith row and column as its only nonzero entry. Equation (39) implies that the SINR at the ith slicer input during each pass can be written as
and that the probability of symbol error for the ith user at the lth iteration can be approximated by (11) for the general M-ary PSK case or (13) for the QPSK case. Since the probability of error given by (11) or (13) is a monotonically decreasing function of SINR, a natural detector design strategy involves maximizing the SINR of the ith user over all and For a given filter it is straightforward to find the optimal filter In particular, note that appears only in a non-negative denominator term of the SINR expression given by (41) and (40), and that term can be made exactly zero by setting
or, equivalently, where can be interpreted as a measure of the reliability of Moreover, let satisfy the natural requirement (37). Then, the slicer input defined via (35) with (34), (36), and (33) satisfies, for i = 1, 2, . . . , P,

where is complex-valued, marginally Gaussian, zero-mean, and uncorrelated with having variance

Using (42) to eliminate the SINR expression in (41) now simplifies to
This result for is intuitively satisfying. If so that then the inner product exactly reproduces the MAI component of More generally, describes our confidence in the quality of the estimate If is a poor estimate of then will in turn be low, and consequently a smaller
weighting is applied to the MAI estimate that is to be subtracted from On the other hand, if is an excellent estimate of then and nearly all of the MAI is subtracted from Note that the diagonal of is indeed asymptotically zero, as stipulated by (37). Next, we optimize the vector The identity
can be used to rewrite (44) as
where
Using the Schwarz inequality, we have4
with equality if and only if
Substituting (48) into (47), we see that (49) maximizes (47) and, in turn, (44). When we choose the proportionality constant to be the same for i = 1, 2 , . . . , P, we may write5
Some comments can be made about the special case when l = 1. During the first pass, feedback is not used
because so the vector does not need to be defined. Moreover, the filter takes the form
which is an expression for the linear MMSE multiuser detector. Thus the performance of the iterated-decision multiuser detector, after just one iteration, is identical to the performance of the linear MMSE multiuser detector. In Section 3.1, we show that the iterated-decision multiuser detector, when using multiple iterations, performs significantly better than the linear MMSE multiuser detector. The iterated-decision multiuser detector also has an interesting relationship with another multiuser detector. If is set to then the matrices (43) and (50) for the iterated-decision multiuser detector become the matrices used for the multistage detector [16]. In other words, the iterated-decision multiuser detector explicitly takes into account the reliability of tentative decisions, while the multistage detector assumes that all tentative decisions are correct. As we will see in Section 3.1, this difference is the reason that the decisions of the former asymptotically converge to the optimum ones, while the decisions of the latter often diverge. We now proceed to simplify the SINR expression that characterizes the resulting performance for the ith user. With the optimum and we have, substituting (48) into (47),
After some algebraic manipulation, the SINR from (46), with (52), then becomes
where
For the case of accurate power control, i.e., A = AI, it is shown in Appendix A that in the large-system limit (with held constant), the SINR in (53) for each user converges in the mean-square sense to
where
and
with
the received SNR. The iterative algorithm for computing the set of correlation coefficients and in turn predicting the sequence of symbol error probabilities is as follows.

1. Set and let l = 1.
2. Compute the SINR from via (55), (57), and (58). [For smaller systems, we can alternatively (and in some cases more accurately) compute from by averaging (53) over all users.]
3. Compute the symbol error probability from via (11).
4. Compute via (23).
5. Increment l and go to step 2.
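The recursion above, and the "descending staircase" interpretation of Fig. 8, can be sketched as follows for QPSK. The exact large-system SINR map (55) involves constants elided in this text, so the sketch uses an illustrative surrogate in which the load beta contributes residual interference proportional to (1 - rho**2); the fixed point reached by the outer loop plays the role of the steady-state solution of (61).

```python
import math

def staircase_ber(snr, beta, n_iters=50):
    """Alternate the SINR-vs-rho map (solid curves of Fig. 8) with the
    rho-vs-error-rate map (dashed line) until a fixed point is reached."""
    rho = 0.0
    for _ in range(n_iters):
        # Inner fixed point of the surrogate large-system SINR map:
        # sinr = snr / (1 + beta*(1 - rho**2)*snr / (1 + sinr)).
        sinr = snr
        for _ in range(200):
            sinr = snr / (1.0 + beta * (1 - rho ** 2) * snr / (1.0 + sinr))
        ber = 0.5 * math.erfc(math.sqrt(sinr / 2.0))   # QPSK on AWGN
        rho = 1.0 - 2.0 * ber                          # QPSK correlation update
    return ber
```

Each outer iteration corresponds to one "step" of the staircase, so comparing few-iteration and many-iteration results illustrates the convergence behavior discussed in Section 3.1.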
In the special case of QPSK, it can be shown that the algorithm can be streamlined by eliminating Step 3 and replacing the approximation (23) with the exact formula in (24).

3.1. Performance
From Steps 2 and 3 of the algorithm, we see that can be expressed as
where is a monotonically decreasing function in both SNR and correlation but a monotonically increasing function in The monotonicity of is illustrated in Fig. 8 where the successively lower solid curves plot as a function
of for various values of with an SNR per bit of 7 dB and power control. Meanwhile, from Step 4 of the algorithm, we see that we can also express as
where is a monotonically decreasing function of The dashed line in Fig. 8 plots as a function of At a given and the sequence of error probabilities and correlation coefficients can be obtained by starting at the left end of the solid curve (corresponding to and then successively moving horizontally to the right from the solid curve to the dashed line, and then moving downward from the dashed line to the solid curve. Each “step” of the resulting descending staircase corresponds to one pass of the multiuser detector. In Fig. 8, the sequence of operating points is indicated on the solid curves with the symbols. That the sequence of error probabilities obtained by the recursive algorithm is monotonically decreasing suggests that additional iterations always improve performance. The error rate performance for a given SNR of and a given eventually converges to a steady-state value of which is the unique solution to the equation
corresponding to the intersection of the dashed line and the appropriate solid curve in Fig. 8. If is relatively small, Fig. 8 suggests that steady-state performance is approximately achieved with comparatively few iterations, after which additional iterations provide only negligibly small gains in performance. This observation can also be readily made from Fig. 9, where bit-error rate is plotted as a function of SNR per bit for 1, 2, 3, 5, and an infinite number of iterations, with It is significant that, for small few passes are required to converge to typical target bit-error rates, since the amount of computation is directly proportional to the number of passes required; we emphasize that the complexity of a single pass of the iterated-decision multiuser detector is comparable to that of the decorrelating detector or the linear MMSE multiuser detector. As increases, Fig. 8 shows that the gap between the solid curve and the dashed curve decreases. Thus the “steps” of the descending staircase get smaller, and there is a significant increase in the number of iterations required to approximately achieve steady-state performance. Moreover, the probability of error at steady state becomes slightly larger. When is greater than some SNR-dependent threshold, not only can (61) have multiple solutions, but one of the solutions occurs at a high probability of error, as illustrated by the curve in Fig. 8 corresponding
to The dependence of the threshold on SNR is shown in Fig. 10. As the SNR increases, the threshold increases, and the bit-error rate curve becomes much sharper at the threshold. Our experiments show that in the high-SNR regime the threshold is near In Fig. 11, we compare the theoretical and simulated (Q = 128) bit-error rates of the iterated-decision multiuser detector with the bit-error rates
PSK symbol sequence for n = 0, 1, . . . , N – 1 onto a signature sequence of length Q assigned to that user; some of these symbols (not necessarily at the head of the packet) are for training, while the rest are data symbols. The received vector sequence is
of various other multiuser detectors as a function of SNR, with and power control. The iterateddecision multiuser detector significantly outperforms the other detectors at moderate to high SNR, and asymptotically approaches the single-user bound. Thus, perfect MAI cancellation is approached at high SNR. Next, in Fig. 12, we compare the effect of on the simulated bit-error rates of the various multiuser detectors6 when decoding Q = 128 simultaneous users at an SNR per bit of 10 dB with power control. The iterated-decision multiuser detector has clearly superior performance when Figure 12 also shows the corresponding theoretical curves for
where is the Q × P matrix of signatures, is the P × P diagonal matrix of received amplitudes, is the P × 1 vector sequence of data symbols, and w[n] is a noise vector sequence. Before the first pass (l = 1) of the adaptive iterateddecision multiuser detector, we need to initialize the hard decisions for each user’s packet. Since the locations and values of the training symbols in each packet are known at the receiver, we set for the i and n corresponding to those locations. For all other locations in the packets, we set to be a “neutral” value—for white PSK symbols, this value should be zero. On the lth pass of the detector where l = 1, 2, 3, . . . , each received vector for n = 0,1, . . . , N – 1 is first premultiplied by a P × Q matrix producing the P × 1 vector
Next, an appropriately constructed estimate MAI in that symbol period is subtracted from produce i.e.,
of the to
where 3.2.
Adaptive Implementations
In Section 3, we derived the optimal matrices and for known values of the channel and the user signatures. We now develop an adaptive implementation of the iterated-decision multiuser detector, in which optimal matrices are selected automatically (from the received data) without explicit knowledge of the channel or the signatures. Furthermore, we assume that the packet size is chosen small enough such that the channel encountered by each user's packet appears fixed. We consider a P-user discrete-time synchronous channel model, where the ith user modulates an M-ary
with a P × P matrix. Since is intended to be some kind of MAI estimate, we restrict attention to the case in which
Thus, the ith component of the slicer input can be expressed as

where
with and being the jkth elements of and respectively. The slicer then generates the hard decisions from for all i and n, except for those values corresponding to the locations of training symbols in For those n, we set If we assume that for all i and all n for the purposes of determining the optimal matrices, then it is reasonable to choose and so as to minimize the sum of squared errors:
Since this is a linear least-squares estimation problem, the optimum is [11]
where

and The matrices can be efficiently obtained by eliminating the (Q + i)th row and column of where and can be efficiently computed using formulas for the inversion of a partitioned matrix [12]. The block-iterative nature of the multiuser detector allows the training symbols to be located anywhere in the users' packets. Since the locations do not appear to affect performance, we arbitrarily choose to space the training symbols uniformly within each user's packet. In Fig. 13, we plot the probability of bit error as a function of SNR for varying amounts of training data. We see that, as expected, performance improves as the amount of training data is increased. Moreover, only a modest amount of training symbols is required at high SNR for the adaptive multiuser detector to perform as if the channel and the signatures were exactly known at the receiver. For comparison purposes, we also plot in Fig. 13 the performance of the RLS-based implementation of the adaptive linear multiuser detector [19]. The linear multiuser detector performs significantly worse than the iterated-decision multiuser detector for comparable amounts of training data.

3.3.
Coded Implementations
For coded systems, an iterated-decision multiuser decoder is readily obtained, and takes a form analogous to the iterated-decision equalizer-decoder structure described in Section 2.3. A communication system that combines iterated-decision multiuser detection with coding is depicted in Fig. 14. The data streams i = 1, 2, . . . , P of the P users are encoded using separate encoders, and the corresponding streams of coded symbols are for i = 1, 2, . . . , P and n = 0, 1, . . . , N – 1. The received vector sequence is thus
where As in Section 3.2, on the lth pass of the multiuser detector, each received vector for n = 0, 1, . . . , N – 1 is processed independently to produce a corresponding vector. The sequences for i = 1, 2, . . . , P are then input to a bank of soft-decision ML decoders, thereby producing, for i = 1, 2, . . . , P, the tentative decisions for These tentative decisions must be re-encoded before being processed by the matrix Performance may be improved by using an interleaver after each encoder and a deinterleaver before each decoder. This multiuser decoder structure for coded systems can be compared to those developed in [20, 21], which have a significantly different but similarly intriguing receiver structure.
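The uncoded pass structure of Section 3.2 (premultiply by a front-end matrix, subtract an MAI estimate, slice) can be sketched numerically. The choices below are placeholders, not the paper's jointly optimized matrices: a matched-filter front end stands in for the optimal one, and the MAI weighting is simply the off-diagonal part of the effective gain matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q, N = 4, 16, 200          # users, signature length, symbols per packet

H = rng.choice([-1.0, 1.0], size=(Q, P)) / np.sqrt(Q)  # random signatures
A = np.eye(P)                                          # unit received amplitudes
x = rng.choice([-1.0, 1.0], size=(P, N))               # BPSK symbol streams
w = 0.1 * rng.standard_normal((Q, N))
r = H @ A @ x + w                                      # received vectors r[n]

x_hat = np.zeros((P, N))      # "neutral" zero initialization (no training here)
for _ in range(5):            # iterated-decision passes
    B = H                     # hypothetical front end (matched filter)
    r_tilde = B.conj().T @ r  # premultiply by the P x Q matrix
    G = B.conj().T @ H @ A    # effective gain matrix
    D = G - np.diag(np.diag(G))   # zero-diagonal part: MAI contributions only
    z = r_tilde - D @ x_hat       # subtract the estimated MAI
    x_hat = np.sign(z)            # slicer: hard decisions

print(np.mean(x_hat != x))    # residual symbol error rate
```

Even with these simplistic choices, the cancellation loop drives the error rate well below that of a single matched-filter pass, which is the qualitative behavior the section describes.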
Appendix A: Derivation of SINR Expression (55)

With accurate power control, (52) becomes

with

defined by (57). The derivation of (55) requires the following two lemmas.

Lemma 1. In the limit as with held constant, the expected value of is

where

Proof: Substituting (73) into (46), we get

where
From (75), the expected value of is
We can use Lemma 1 to prove an even stronger result.

Lemma 2. In the limit as with held constant,

Proof: Consider the normalized variance of
where we have used the identity (Note 7)
and the fact that the trace of a square matrix is equal to the sum of its eigenvalues (denoted by

where the upper bound comes from the fact that Thus, to show the mean-square convergence result in (85), we need to show that

To this end, we develop a useful expression for

If the ratio of the number of users to the signature length is, or converges to, a constant:

then the percentage of the P eigenvalues of that lie below x converges to the cumulative distribution function of the probability density function [22]
Let
Then
where
and the operator is defined according to
Thus, we can compute the limit as of (78) as [3]

where the third equality results from the matrix inversion lemma:
where W and Z are invertible square matrices.
Thus,

To show (87), Lemma 1 tells us we need to check that

But this is equivalent to checking that the equation

has a solution at

which can be verified by substituting (96) into (95), so we have proved (85). We now proceed to show (55). With as defined in (76), and with in (74) bounded according to we have that as with held constant,

where the third equality comes from the fact that the components of are independent of and have zero mean and variance equal to 1/Q. If the ratio of the number of users to the signature length is, or converges to, a constant:

then we can use (81) to compute the limit of (91) as [3]:

where the final limit follows from Lemma 2. So the SINR for each user (74) converges in the mean-square sense to (55).

Notes

1. Throughout the paper, our expectations involving functions of frequency do not depend on so we omit this dependence in our notation to emphasize this.
2. When x[n] is a sequence coded for the Gaussian channel, the approximation in (8) is still valid—typical trellis codes used with random bit streams generally produce white symbol streams [7], as do random codes. More will be said about coding in Section 2.3.
3. The superscripts T and denote the transpose and conjugate-transpose operations, respectively.
4. A square root matrix of the positive semidefinite matrix F satisfies
5. Using the matrix identity (79), we may alternatively write
which may be easier to evaluate depending on the relative sizes of P and Q.
6. The theoretical large-system performance of the decorrelator for the case is derived in [17], where the decorrelator is defined as the Moore–Penrose generalized inverse [18] of H.
7. The identity is a special case of the matrix inversion lemma (90).
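Note 6's decorrelator, defined as the Moore–Penrose generalized inverse of the signature matrix H, is straightforward to form numerically. A small illustrative sketch (random Gaussian signatures used as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, P = 16, 4
H = rng.standard_normal((Q, P))     # signature matrix (full column rank)

H_dagger = np.linalg.pinv(H)        # Moore-Penrose generalized inverse

# The decorrelator zero-forces the MAI: applied to a noiseless received
# vector, it returns the transmitted symbols up to numerical precision.
x = rng.choice([-1.0, 1.0], P)
print(H_dagger @ (H @ x))
```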
References

1. G.D. Forney, Jr., "Maximum Likelihood Sequence Estimation of Digital Sequences in the Presence of Intersymbol Interference," IEEE Trans. Inform. Theory, vol. IT-18, 1972, pp. 363–378.
2. C.A. Belfiore and J.H. Park, Jr., "Decision-Feedback Equalization," Proc. IEEE, vol. 67, 1979, pp. 1143–1156.
3. S. Verdú, Multiuser Detection, Cambridge, U.K.: Cambridge University Press, 1998.
4. G.W. Wornell and M.D. Trott, "Efficient Signal Processing Techniques for Exploiting Transmit Antenna Diversity on Fading Channels," IEEE Trans. Signal Processing, vol. 45, 1997, pp. 191–205.
5. A.M. Chan and G.W. Wornell, "A Class of Block-Iterative Equalizers for Intersymbol Interference Channels: Fixed Channel Results," IEEE Trans. Commun., vol. 49, Nov. 2001.
6. J.G. Proakis, Digital Communications, 3rd edn., New York: McGraw-Hill, 1995.
7. E. Biglieri, "Ungerboeck Codes Do Not Shape the Signal Power Spectrum," IEEE Trans. Inform. Theory, vol. IT-32, 1986, pp. 595–596.
8. S. Beheshti, S.H. Isabelle, and G.W. Wornell, "Joint Intersymbol and Multiple-Access Interference Suppression Algorithms for CDMA Systems," European Trans. Telecomm. & Related Technol., vol. 9, 1998, pp. 403–418.
9. A.T. Bharucha-Reid and M. Sambandham, Random Polynomials, Orlando, FL: Academic Press, 1986.
10. A.M. Chan, "A Class of Batch-Iterative Methods for the Equalization of Intersymbol Interference Channels," S.M. Dissertation, M.I.T., Aug. 1999.
11. S. Haykin, Adaptive Filter Theory, 3rd edn., Englewood Cliffs, NJ: Prentice Hall, 1996.
12. H. Lütkepohl, Handbook of Matrices, Chichester, England: Wiley, 1996.
13. J.M. Cioffi, G.P. Dudevoir, M.V. Eyuboglu, and G.D. Forney, Jr., "MMSE Decision-Feedback Equalizers and Coding—Part I: Equalization Results," IEEE Trans. Commun., vol. 43, 1995, pp. 2582–2594.
14. J.M. Cioffi, G.P. Dudevoir, M.V. Eyuboglu, and G.D. Forney, Jr., "MMSE Decision-Feedback Equalizers and Coding—Part II: Coding Results," IEEE Trans. Commun., vol. 43, 1995, pp. 2595–2604.
15. M. Tüchler, R. Kötter, and A. Singer, "Turbo Equalization: Principles and New Results," IEEE Trans. Commun., submitted.
16. M.K. Varanasi and B. Aazhang, "Near-Optimum Detection in Synchronous Code-Division Multiple-Access Systems," IEEE Trans. Commun., vol. 39, May 1991, pp. 725–736.
17. Y.C. Eldar and A.M. Chan, "On Wishart Matrix Eigenvalues and Eigenvectors and the Asymptotic Performance of the Decorrelator," IEEE Trans. Inform. Theory, submitted.
18. G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd edn., Baltimore, MD: Johns Hopkins University Press, 1996.
19. M.L. Honig and H.V. Poor, "Adaptive Interference Suppression," in Wireless Communications: Signal Processing Perspectives, H.V. Poor and G.W. Wornell (Eds.), Upper Saddle River, NJ: Prentice-Hall, 1998.
20. X. Wang and H.V. Poor, "Iterative (Turbo) Soft Interference Cancellation and Decoding for Coded CDMA," IEEE Trans. Commun., vol. 47, 1999, pp. 1047–1061.
21. J. Boutros and G. Caire, "Iterative Multiuser Joint Decoding: Unified Framework and Asymptotic Analysis," IEEE Trans. Inform. Theory, submitted.
22. Z.D. Bai and Y.Q. Yin, "Limit of the Smallest Eigenvalue of a Large Dimensional Sample Covariance Matrix," Annals of Probability, vol. 21, 1993, pp. 1275–1294.
Albert M. Chan received the B.A.Sc. degree from the University of Toronto, Canada, in 1997 and the S.M. degree from the Massachusetts Institute of Technology (MIT) in 1999, both in electrical engineering. He is currently pursuing the Ph.D. degree in electrical engineering at MIT. His research interests include signal processing and communications. Mr. Chan has served as a Teaching Assistant for probability, digital signal processing, and signals and systems courses in the MIT Department of Electrical Engineering and Computer Science. He is the recipient of the MIT Frederick C. Hennie III Award for Teaching Excellence (2000).
[email protected]
Gregory W. Wornell received the B.A.Sc. degree from the University of British Columbia, Canada, and the S.M. and Ph.D. degrees from the Massachusetts Institute of Technology, all in electrical engineering and computer science, in 1985, 1987 and 1991, respectively. Since 1991 he has been on the faculty of the Department of Electrical Engineering and Computer Science at MIT, where he is currently an Associate Professor. He has spent leaves at the University of California, Berkeley, CA, in 1999–2000 and at AT&T Bell Laboratories, Murray Hill, NJ, in 1992–93. His research interests span the areas of signal processing, communication systems, and information theory, and include algorithms and architectures for wireless networks, broadband systems, and multimedia environments. He is author of a number of papers in these areas, as well as the Prentice-Hall monograph Signal Processing with Fractals: A Wavelet-Based Approach, and is co-editor (with H.V. Poor) of the Prentice-Hall collection Wireless Communications: Signal Processing Perspectives. Within the IEEE he has served as Associate Editor for Communications for IEEE Signal Processing Letters, and serves on the Communications Technical Committee of the Signal Processing Society. He is also active in industry and an inventor on numerous issued and pending patents. Among the awards he has received for teaching and research are the MIT Goodwin Medal for "conspicuously effective teaching" (1991), the ITT Career Development Chair at MIT (1993), an NSF Faculty Early Career Development Award (1995), an ONR Young Investigator Award (1996), the MIT Junior Bose Award for Excellence in Teaching (1996), the Cecil and Ida Green Career Development Chair at MIT (1996), and an MIT Graduate Student Council Teaching Award (1998). Dr. Wornell is also a member of Tau Beta Pi and Sigma Xi and a Senior Member of the IEEE.
[email protected]
Journal of VLSI Signal Processing 30, 217–233, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Multiuser Detection for Out-of-Cell Cochannel Interference Mitigation in the IS–95 Downlink* D. RICHARD BROWN III Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA H. VINCENT POOR AND SERGIO VERDÚ Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA C. RICHARD JOHNSON, JR. School of Electrical Engineering, Cornell University, Ithaca, NY 14853 USA
Received July 10, 2001
Abstract. This paper considers the application of multiuser detection techniques to improve the quality of downlink reception in a multi-cell IS–95 digital cellular communication system. In order to understand the relative performance of suboptimum multiuser detectors, including the matched filter detector, optimum multiuser detection in the context of the IS–95 downlink is first considered. A reduced complexity optimum detector that takes advantage of the structural properties of the IS–95 downlink and exhibits exponentially lower complexity than the brute-force optimum detector is developed. The Group Parallel Interference Cancellation (GPIC) detector, a suboptimum, low-complexity multiuser detector that also exploits the structure of the IS–95 downlink, is then developed. Simulation evidence is presented that suggests that the performance of the GPIC detector may be near-optimum in several cases. The GPIC detector is also tested on a snapshot of on-air data measured with an omnidirectional antenna in an active IS–95 system and is shown to be effective for extracting weak downlink transmissions from strong out-of-cell cochannel interference. The results of this paper suggest that the GPIC detector offers the most performance gain in scenarios where weak downlink signals are corrupted by strong out-of-cell cochannel interference.

Keywords: code division multiple access (CDMA), multiuser detection, interference cancellation, IS–95, downlink
1. Introduction
An important milestone in the development of personal wireless communication systems occurred in the early 1990s when Qualcomm introduced a new digital cellular communication system based on Code Division Multiple Access (CDMA) technology. The proposed

*This research was supported in part by NSF grants ECS-9811297, ECS-9811095, EEC-9872436, and Applied Signal Technology.
cellular system offered several advantages over first generation analog cellular systems including increased capacity and reliability as well as improved sound quality and battery life. Early trials of this new cellular system proved to be quite successful and, in 1993, the details of the system were published by the Telecommunications Industry Association as the IS-95A standard [1]. Despite the well documented performance benefits of multiuser detection in the literature prior to 1993,
the IS–95 cellular standard was designed such that adequate performance could be achieved by a downlink receiver (typically a mobile handset) using conventional single-user matched filter detection with coherent multipath combining. The choice of matched filter detection for the IS–95 downlink resulted in the need for strict, closed-loop power control on both the uplink and downlink in order to avoid "near-far" problems (cf. [2]). One reason for this approach is that, unlike power control, the majority of the multiuser detectors proposed in the literature have been regarded as too complex for cost-effective implementation in IS–95 downlink receivers. Despite the exponential performance improvements in microprocessors and DSPs over the last decade, these constraints have continued to outweigh the potential performance benefits of multiuser detection for the IS–95 downlink. In this paper we attempt to bridge some of the gap between IS–95 and multiuser detection.

Toward that goal, several authors have studied the problem of improving the performance of IS–95 and third generation Wideband-CDMA downlink reception in a single-cell environment. In [3–12] the authors observe that the orthogonality of the user transmissions within a particular cell is destroyed by multiple paths in the propagation channel between the base station and the IS–95 downlink receiver. The authors propose receivers that all share the common feature of a linear equalizer front-end that cancels the effects of the multipath propagation channel and restores the orthogonality of the users. This approach effectively eliminates the in-cell multiuser interference and allows the conventional matched filter detector to be used. In [13], the authors propose and compare several algorithms applicable to the IS–95 downlink for multipath channel estimation.
In [14, 15], the authors develop a nonlinear multipath canceller to cancel the in-cell multiuser interference caused by multipath replicas of the base station's pilot signal. Our approach here differs from the prior approaches in that we investigate multiuser detection for the IS–95 downlink in a multi-cell environment. Multiuser detection in a multi-cell environment was considered recently for the IS–95 uplink in [16, 17]. In this paper we consider receivers that mitigate the effects of the out-of-cell cochannel interference from neighboring base stations in the IS–95 downlink. To establish a performance benchmark, we first examine optimum detection for the IS–95 downlink and develop a reduced complexity optimum detector that exploits the structure of the IS–95 downlink. We then propose the Group Parallel Interference Cancellation (GPIC) detector, a suboptimum, low-complexity multiuser detector that also exploits the structure of the IS–95 downlink.

The results of this paper suggest that multiuser detection may provide modest performance improvements over the conventional matched filter detector in scenarios where a strong desired signal is corrupted by weak out-of-cell cochannel interference. On the other hand, our results suggest that multiuser detection may offer significant performance improvements in conditions where an IS–95 downlink receiver is detecting a weak desired signal in the presence of strong out-of-cell cochannel interference. Although this scenario would be unusual for a subscribing user in an IS–95 cellular system, the practical implications of our results include the following:

- Nonsubscribing users (e.g. eavesdroppers or test/diagnostic receivers), which do not have the benefit of power control, may wish to extract a weak desired signal from strong out-of-cell cochannel interference. Multiuser detection receivers tend to offer superior performance in these cases.
- Subscribing users with multiuser detectors require less transmit power from their base station to maintain an equal quality of service. This "good neighbor" effect leads to a reduced level of cochannel interference induced on other out-of-cell users in the system.

The balance of this paper is organized as follows. In Section 2 we develop a concise model with mild simplifying assumptions for the IS–95 downlink that includes the effects of the time- and phase-asynchronous, nonorthogonal, and non-cyclostationary transmissions of an M-base-station communication system. In Section 3 we use this model to understand optimum multiuser detection in the context of the IS–95 downlink.
Although the optimum detector is often too complex for implementation in realistic systems, its role is still important in order to determine the relative performance of suboptimum multiuser detectors. In Section 4 we show that the structure of the IS–95 downlink allows the optimum detector to be posed in a computationally efficient form with complexity exponentially less than that of a brute-force implementation. In Section 5 we develop a computationally efficient nonlinear multiuser detector for the IS–95 downlink called the Group Parallel Interference Cancellation (GPIC) detector. The GPIC detector is derived from examination of properties of the reduced complexity optimum detector and also exploits the structure in the IS–95 downlink. In Section 6 we examine the performance of the GPIC detector relative to the conventional matched filter and optimum detectors via simulation and show that the GPIC detector exhibits near-optimum performance in the cases examined and provides the largest benefit when the desired signal is received in the presence of strong out-of-cell cochannel interference. Finally, in Section 7 we apply the GPIC detector to a snapshot of on-air data from an active IS–95 system and present results that suggest that GPIC detection offers significant performance improvements when extracting weak signals in the presence of severe out-of-cell cochannel interference.
2. IS–95 Downlink System Model

Figure 1 shows a model of a single IS–95 base station downlink transmitter. This model is simplified in the sense that the scrambling and channel encoding operations specified by the IS–95 standard prior to the channelization block in Fig. 1 are not shown. An IS–95 downlink receiver typically consists of four fundamental stages: multipath timing and phase estimation (via the downlink pilot), coded symbol detection (matched filter with multipath combining), decoding (convolutional and repetition), and descrambling. Since this paper focuses on the problem of improving the performance of the detection stage, we consider the portion of the IS–95 downlink transmitter "inside the coders" as shown in Fig. 1.

We denote m as the base station index and as the number of data streams simultaneously transmitted by the mth base station, not including the pilot transmission. Note that is typically greater than the actual number of physical users in the cell since the IS–95 standard specifies that each base station must transmit additional data streams for call setup, paging, and overhead information. For the purposes of this paper, we will henceforth refer to each of these data streams as a "user" even if the data stream is an overhead channel and not actually allocated to a particular user in the cell. The details of Fig. 1 as specified by the IS–95 standard may be summarized as follows:

Channelizer: Orthogonalizes the user transmissions by assigning a unique length-64 Walsh code to each user and spreads the input symbols with this code. Each user's Walsh code remains fixed for the duration of their connection. The Walsh-0 code is always assigned to the base station's pilot signal, which transmits a constant stream of binary symbols equal to +1. The remaining 63 Walsh codes are assigned as needed to the users in the cell as well as to overhead and paging channels.

Power Control: Sets the gain on each user's transmission to provide a minimum acceptable transmission quality in order to avoid generating excessive cochannel interference in neighboring cells.
PN-Code: Multiplies the chip-rate aggregate base station data stream by a complex pseudonoise (PN) code in order to cause the cochannel interference observed by the users in neighboring cells to appear noiselike and random [18, p. 11]. Each base station uses the same PN-code but is distinguished by a unique, fixed PN-phase. The PN-code has elements from the set {1 + j, 1 – j, –1 + j, –1 – j} and has a period of chips.

Pulse Shaping: Specified in the IS–95 standard.

Note that the total spreading gain on the coded symbols in the IS–95 downlink is 64 and that all spreading occurs in the channelizer block; the PN-code does not provide any additional spreading. Since the IS–95 standard specifies universal frequency reuse for all base stations in a cellular system, it is reasonable to model the downlink receiver's observation as the sum of transmissions from M base stations and additive channel noise, as shown in Fig. 2. Each base station's aggregate transmission passes through an individual propagation channel that accounts for the effects of multipath, delay, and attenuation. The channel noise is modeled as an additive, white, complex Gaussian random process denoted by where and The real and imaginary parts are uncorrelated and also assumed to be independent of the base station transmissions. We define the total number of users in the system as To facilitate the analytical development in the following sections, we also make the following simplifying assumptions, which may be relaxed or eliminated at the expense of greater notational complexity:

- We ignore the soft-handoff feature of IS–95, where two base stations may be transmitting identical bit streams to a single user.
- We assume the user population remains fixed over the receiver's observation interval. This implies that users do not enter or leave the system, users are not handed off between cells, and that voice activity switching does not occur during the observation interval.
- We ignore base station antenna sectorization.

Indexing the users by a two-dimensional index (base station, user number), we denote the (m, k)th user's positive real amplitude and coded binary symbol at symbol index as and respectively. We denote the unit-energy normalized combined impulse response of the (m, k)th user's channelization code, PN-code, baseband pulse shaping, and propagation channel at symbol index as Note that includes any inherent propagation delay and asynchronicity between base stations and is assumed to be FIR. We also denote as the received phase of the transmission from the mth base station. The baseband signal observed at the downlink receiver may then be written as
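The channelizer and PN-scrambling operations described above can be sketched numerically. The Walsh code indices, gains, and PN sequence below are illustrative stand-ins: the real IS–95 PN code is a specific standardized sequence, whereas a random one is used here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Length-64 Walsh codes via the Sylvester-Hadamard construction.
W = np.array([[1]])
while W.shape[0] < 64:
    W = np.block([[W, W], [W, -W]])   # rows are mutually orthogonal

# Spread one coded symbol per user; Walsh-0 (all +1) is the pilot.
gains = np.array([4.0, 1.0, 1.5])     # pilot + two users (power control gains)
symbols = np.array([1.0, -1.0, 1.0])  # pilot always transmits +1
codes = W[[0, 5, 23]]                 # Walsh-0, Walsh-5, Walsh-23 (example)
chips = (gains * symbols) @ codes     # aggregate chip-rate stream (64 chips)

# Complex PN scrambling with elements from {+/-1 +/- j} (toy sequence; each
# base station would use a fixed phase offset of one standardized code).
pn = rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)
tx = chips * pn

# A receiver that knows the PN phase can descramble and despread; since
# pn * conj(pn) = 2 and the Walsh rows are orthogonal, user 1's despread
# output is exactly gain * symbol.
despread = np.real((tx * pn.conj()) @ codes[1] / (2 * 64))
print(despread)  # -1.0
```

Note how the PN multiplication provides scrambling but no extra spreading: the chip rate is unchanged, consistent with the spreading-gain remark above.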
where we have separated the terms corresponding to the non-data-bearing pilots with the superscript notation (m, 0). In order to represent the observation r(t) compactly, we establish the following vector notation. If represents a (possibly complex) scalar quantity corresponding to the (m, k)th user at symbol index we can construct the vectors
and
The superscripts denote transpose, complex conjugate, and complex conjugate transpose, respectively. Define a and b according to this notation and define the vector of signature waveforms as
and
Finally, define
as the K(2L + 1) × K(2L + 1) dimensional diagonal matrix of user amplitudes multiplied by the appropriate base station transmission phases. We can then write the continuous time observation as
where the pilots are denoted as p(t) for notational convenience.

3. Optimum Detection

In this section we examine optimum (joint maximum likelihood) detection in the context of the previously developed IS–95 downlink system model. In a single-cell scenario with single-path channels, an IS–95 downlink receiver observes the sum of K orthogonally modulated signals in the presence of independent AWGN. It is easy to show that the optimum detector is equivalent to the conventional single-user matched filter detector in this case. In this paper, however, we consider the multi-cell scenario where an IS–95 downlink receiver observes nonorthogonal out-of-cell cochannel interference and the optimum detector is not the matched filter detector.

We assume that the receiver is able to acquire the pilot (and hence the PN-phase) of each base station via correlation with the known periodic PN-code of length This then allows the receiver to estimate the impulse response of the propagation channel and transmission phases of each base station. For each base station, the receiver can then construct a bank of 63 matched filters, one matched filter for each of the non-pilot Walsh codes, in order to determine which users are active in each cell. Since the receiver now knows the Walsh codes of the active users, the phase of the PN-code, the propagation channels, and the baseband pulse shaping, we can construct the set of for all users in the multi-cell system. Finally, amplitude estimates are generated for each user and the pilots (cf. [19]). For the purposes of the remaining analytical development, we assume that all of these estimates are perfect and that the only unknowns in (1) are b and

Let represent a compact interval in time containing the support of r(t) and let represent the set of cardinality containing all admissible binary symbol vectors of length K(2L + 1). Then the jointly optimum symbol estimates [20] are given by

Manipulation of the term inside the exponent yields the expression for jointly optimum IS–95 downlink symbol estimates as

where represents the K(2L + 1) vector of matched filter outputs, represents the K(2L + 1)-vector of matched filter outputs for the pilot portion of the received signal, and represents the K(2L + 1) × K(2L + 1) dimensional user signature correlation matrix. The brute-force solution to the problem of computing the jointly optimum symbol estimates requires the exhaustive computation of over the set of all hypotheses to find the maximum.

4. Reduced Complexity Optimum Detection
In this section we take advantage of the structure of the IS–95 downlink in order to propose an optimum detector that exhibits significantly less complexity than the brute-force approach. The intuitive idea behind the reduced complexity optimum detector is to use the fact that the synchronous user plus pilot transmissions from base station m are mutually orthogonal at every symbol index if the propagation channel from the mth base station to the receiver is single-path. This
222
Brown et al.
will allow us to “decouple” the decisions of one base station’s users to achieve the desired complexity reduction while retaining optimality. This idea can also be applied in the multipath channel case but since users within a cell are no longer orthogonal there will be some loss of optimality. Note that even if the orthogonality between the users within a particular cell is restored using the equalization techniques described in [3–11], the resulting matched filter bank outputs will contain noise terms that are correlated across the users. Hence, the ideas described in this section may also be applied to an IS–95 downlink receiver with an equalizer front-end but there will be some loss of optimality. To develop the reduced complexity optimum detector, we first observe that the signature correlation matrix exhibits the structure
in each cell are received without any intersymbol interference. In this case, the IS–95 downlink signature correlation matrix exhibits two special properties: 1. The lack of intersymbol interference implies that for 2. The orthonormal signature sequences of all in-cell users of each base-station at symbol index implies that
The combination of these two properties implies that for m = 1, . . . , M. Let and note that since R is a Hermitian matrix then H is also Hermitian. Moreover, since A is diagonal, H shares the same IS–95 structure properties as R except that
It turns out that this difference will not matter in the maximization of Using our previously developed notation, we can write
where has dimension (2L + 1). The submatrices have the structure
where
Since A is diagonal and u is real, we can isolate the symbols from the first1 base station to write
where vectors with an overbar are (2L + 1) × 1 dimensional with elements from all base stations except m = 1 and matrices with an overbar are dimensional with corresponding elements. The quadratic term in (3) may be rewritten as
At this point we require the propagation channels to be single-path in order to proceed with the complexity reduction. This assumption, combined with the facts that
The binary nature of u and (2) imply that
1. the IS–95 pulse shaping filters approximately satisfy the Nyquist pulse criterion and 2. each base station assigns orthonormal signature waveforms to the set of users in its cell,
where is a real positive constant that does not depend on u. Denoting we can then write
implies that the downlink transmissions in each cell do not interfere with the other downlink transmissions in the same cell and that the downlink transmissions
Out-of-Cell Cochannel Interference Mitigation
As before, we isolate the symbols from the first base station to write

Since H is a Hermitian matrix then and we can write

Finally, we plug (4) and (5) back into (3) and collect terms to write

Observe that, for any, (6) is maximized when. Hence, the reduced complexity optimum detector needs only to compute

from which the vector of optimum symbol estimates can be written directly as

Note that, in contrast to the brute-force optimum detector, (7) only requires the computation of over a set of hypotheses in order to find the maximum. For a cellular system with two or three significant base stations, this complexity reduction can be significant. This prior analysis can also be easily applied to the synchronous CDMA case, where the brute-force optimum detector requires the evaluation of for hypotheses. In this case, the reduced complexity optimum detector requires the evaluation of for hypotheses.

5. Group Parallel Interference Cancellation Detection

In this section, we examine the properties of the reduced complexity optimum detector in order to develop a low-complexity suboptimum detector called the Group Parallel Interference Cancellation (GPIC) detector. Like the reduced complexity optimum detector, the GPIC detector also exploits the orthogonality of the in-cell user transmissions in the IS–95 downlink. In the following development, we consider an approach similar to that described in [21], where a conditional maximum likelihood detector was developed by relaxing the maximum likelihood criterion for a set of undesired symbols. Suppose temporarily that the IS–95 downlink receiver has perfect knowledge of the jointly optimum symbol estimate of the users’ symbols in cells 2, …, M. In this case, we showed in Section 4 that is the jointly optimum estimate of the users’ symbols in cell 1. Unfortunately, realistic receivers do not have access to the jointly optimum out-of-cell symbol estimates in general, but we are compelled to ask the following question: what if the receiver formed some low-complexity estimate of and we let? In fact, consider the lowest-complexity estimate: conventional matched filter estimates where. Then

It is evident from this last expression that a detector using (8) forms decisions by subtracting the estimated out-of-cell cochannel interference from the matched filter inputs (minus the known pilot terms) corresponding to the users in cell 1. When this operation is performed on all of the base stations it is called parallel
interference cancellation (first called multistage detection in [22]) and, since the interference cancellation is performed over groups of users, we coin the name Group Parallel Interference Cancellation for this detector. We can extend this idea to write the following expression for the GPIC detector of base station m as
where Assembling the symbol estimates into a K(2L + 1) vector containing all of the users’ bits in the multi-cell system, we can write a simple expression for the GPIC detector as
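The cancellation structure described above can be illustrated with a minimal synchronous two-cell sketch. All sizes, codes, amplitudes, and the noise level below are invented for illustration; the paper's own expressions use the asynchronous, windowed model developed in Section 4.

```python
import numpy as np

# Toy synchronous two-cell Group Parallel Interference Cancellation sketch.
# Sizes, codes, amplitudes, and noise level are illustrative assumptions.
rng = np.random.default_rng(0)
N, K = 32, 4                                      # chips per symbol, users per cell
S1, _ = np.linalg.qr(rng.standard_normal((N, K))) # orthonormal cell-1 signatures
S2, _ = np.linalg.qr(rng.standard_normal((N, K))) # orthonormal cell-2 signatures
A2 = 3.0                                          # out-of-cell amplitude (assumed known)
b1 = rng.choice([-1.0, 1.0], size=K)              # desired (cell-1) symbols
b2 = rng.choice([-1.0, 1.0], size=K)              # interfering (cell-2) symbols
y = S1 @ b1 + A2 * (S2 @ b2) + 0.05 * rng.standard_normal(N)

# Step 1: low-complexity matched-filter decisions for the out-of-cell group.
b2_hat = np.sign(S2.T @ y)
# Step 2: regenerate the out-of-cell interference and cancel it in parallel.
y_clean = y - A2 * (S2 @ b2_hat)
# Step 3: matched-filter decisions for the in-cell (orthogonal) users.
b1_hat = np.sign(S1.T @ y_clean)
```

Because the in-cell signatures are orthonormal, the final step needs only a matched filter bank; all of the nonlinearity is confined to the group decisions on the out-of-cell symbols.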
6. Simulation Results

In this section we compare the performance of the optimum, GPIC, and conventional matched filter detectors via simulation. We examine a scenario where a nonsubscribing downlink receiver (e.g. an eavesdropper) is listening to IS–95 downlink transmissions in the simple cellular system shown in Fig. 3, with B = 2 base stations and users in each cell. The subscribing users in the system are represented by circles and our downlink receiver is represented by a square with an antenna symbol. We evaluate the quality of reception at the receiver from both base stations as the receiver moves on the dashed line from point a to point b. The propagation channels between the base stations and the eavesdropping receiver are assumed to be single-path with random received phases uniformly distributed. Asynchronism offsets between the base station transmissions are also assumed to be uniformly distributed. User powers, phases and delays are assumed to be time invariant over the duration of the receiver’s observation. We assume the user positions to be uniformly distributed within the cell. This assumption, combined with IS–95 downlink power control, implies that the user amplitudes observed at the eavesdropper are also random. The distribution of the user amplitudes is derived in the Appendix under similar path-loss modeling assumptions as the uplink study in [23, 24]. Figure 4 shows the bit error rate of the optimum (denoted as “OPT”), GPIC, and conventional matched filter (denoted by “MF”) detectors for a user in the first cell2, averaged over the user positions, delays, phases, amplitudes, and PN-codes. The single-user error probability (denoted as “SU”) is also shown for comparison. Note that, in this simulation, the distance to the desired base station is fixed and the out-of-cell cochannel interference decreases as we move the eavesdropper toward position b and away from base station 2. Figure 5 shows the results of the same simulation for a user in the second cell. In this case the eavesdropper is moving away from the desired base station and remaining at a fixed distance from the interfering base station.
Figure 4 shows that the conventional matched filter detector performs well when the eavesdropper is listening to a downlink transmission from base station 1 in a position distant from base station 2. However, because of its near-far susceptibility, the matched filter detector performs poorly when the eavesdropper attempts to extract weak downlink transmissions from strong out-of-cell cochannel interference. Figures 4 and 5 show that the GPIC detector does not suffer from this problem and actually exhibits performance indistinguishable from the optimum detector in these examples. These results suggest that the GPIC detector may offer near-optimum eavesdropping performance over a wide range of out-of-cell cochannel interference powers with the most benefit in severe out-of-cell cochannel interference environments.
7. On-Air Data
This section compares the performance of the GPIC and conventional matched filter detectors on one snapshot of on-air measured data3 from an active IS–95 cellular system. One 45.6 ms snapshot of IS–95 downlink measured data was gathered with an omnidirectional antenna. The received waveform was sampled
at twice the chip rate to yield a data file with 112000 samples corresponding to 875 (coded) symbol periods. The results of a base station pilot survey on this data are shown in Fig. 6. Throughout this section, base station 1 denotes the base station with the strongest pilot as seen at PN-offset 20000 in Fig. 6. Base station 2 denotes the second strongest base station at PN-offset 62500. The powers of each active Walsh channel at the output of their respective matched filter detector, for both base station 1 and base station 2, are given in Tables 1 and 2. It can be seen that the power of the pilot (Walsh channel 0) from base station 1 is approximately 11 dB higher than the pilot from base station 2, hence our receiver is positioned
close to base station 1 and relatively distant from base station 2. The remaining base stations seen in Fig. 6 are ignored in the following development for clarity.

7.1. Conventional Matched Filter Detection
In this section we qualitatively examine the soft outputs of the conventional matched filter detector for base stations 1 and 2. The matched filters are obtained by estimating the impulse response of the combined propagation channel and pulse shaping filters via pilot correlation and convolving this impulse response with the appropriate combined Walsh and PN-codes for each active user in the system. Although Rake detection is not used, the matched filter detector considered in this section automatically includes coherent multipath combining since it incorporates the estimated impulse response of the propagation channel. Figure 7 shows a histogram of the matched filter outputs for the active Walsh channels of base station 1. Figure 7 clearly shows that the eye is open for all of the active channels, implying that these users’ decoded symbols should have a very low probability of error. In addition to the strong pilot channel at Walsh code 0, there is a strong paging channel at Walsh code 1, a relatively weak sync channel at Walsh code 32, and two traffic channels of disparate power at Walsh codes 12 and 63. Qualitatively, matched filter detection appears to be adequate for downlink reception of this base station. Figure 8 shows a histogram of the matched filter outputs for the active Walsh channels of base station 2. Figure 8 clearly shows that, unlike the transmissions from base station 1, all of the channels from base station 2 are highly corrupted by interference (including
out-of-cell cochannel interference from base station 1, other base stations, and “unstructured” noise sources). The eye is closed for all Walsh channels, implying that subsequent channel decoding may be unreliable.

7.2. GPIC Detection
In this section we qualitatively examine the soft outputs of the GPIC detector for base stations 1 and 2. The matched filter outputs generated in the prior subsection are passed through a hard decision device and then respread by the combined impulse response of the appropriate Walsh codes, PN-codes, and estimated pulse-shaping and propagation channel impulse responses. The waveforms are then scaled and rotated according to each user’s estimated amplitude and phase. Figure 9 shows the histogram of the matched filter outputs by Walsh channel of base station 1 after subtraction of the estimated interference from base station 2. There is little noticeable change from the results in Fig. 7, since the out-of-cell cochannel interference from base station 2 is very weak with respect to the transmission of base station 1 and interference cancellation has little effect.
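The pilot-aided estimation and respreading pipeline described above can be sketched as follows. The chip count, channel taps, and stand-in codes are assumptions for illustration, not the measured system's values, and noise is omitted for clarity.

```python
import numpy as np

# Sketch of pilot-aided channel estimation and interference regeneration.
# Chip count, channel taps, and stand-in codes are illustrative assumptions.
rng = np.random.default_rng(1)
Nc = 256
pn = rng.choice([-1.0, 1.0], size=Nc)    # stand-in PN scrambling sequence
pilot = np.ones(Nc) * pn                 # pilot = Walsh code 0 (all ones) times PN
h = np.array([1.0, 0.6, 0.2])            # unknown propagation/pulse-shaping response
r = np.convolve(pilot, h)                # received pilot (noise omitted)

# Pilot correlation: the first few lags of the normalized cross-correlation
# estimate the combined impulse response h.
c = np.correlate(r, pilot, mode="full")
h_hat = c[Nc - 1 : Nc - 1 + len(h)] / Nc

# Respreading a hard decision through the estimated response regenerates its
# contribution to the received waveform, ready to be scaled and subtracted.
decision = -1.0
regenerated = decision * np.convolve(pilot, h_hat)
```

The same correlate-then-convolve pattern, applied per active Walsh channel, yields both the matched filters of Section 7.1 and the regenerated interference subtracted here.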
Figure 10 shows the histogram of the matched filter outputs by Walsh channel of base station 2 after subtraction of the estimated interference from base station 1. The performance improvement is significant with respect to the conventional matched filter results in Fig. 8. Channels 1 and 20 appear to be much cleaner, and channels 32 and 34 are beginning to exhibit troughs in the middle of their histograms, indicating improved detection quality. The pilot channel is also significantly cleaner. We note that although it is certainly possible to perform GPIC detection in batch, where all K(2L + 1) symbols are first estimated with the conventional matched filter detector and stored prior to calculation of the GPIC symbol estimates, it is also possible to implement the GPIC detector with a decision delay proportional to K. The results in this section agree with the simulation results in Section 6 and suggest that the GPIC detector may not offer much performance improvement when detecting strong signals in the presence of weak out-of-cell cochannel interference. On the other hand, comparison of Figs. 8 and 10 shows that significant performance improvements are possible for a downlink receiver
attempting to extract weak signals from strong out-of-cell cochannel interference.

8. Conclusions
In this paper we investigated nonlinear multiuser detection for improving the performance of IS–95 downlink reception. We used the orthogonality of the in-cell users of the IS–95 downlink to develop a reduced complexity optimum detector with exponentially lower complexity than the brute-force optimum detector, under the assumption that the propagation channels between the base stations and the receiver were single-path. Examination of the properties of the reduced complexity optimum detector led to the development of the suboptimum, low-complexity GPIC detector. The GPIC detector does not require any form of subspace tracking, matrix inversions, or exhaustive searches for global maxima. Simulations and experiments with on-air IS–95 downlink data showed that the GPIC detector offers the greatest performance improvements when detecting weak desired signals in the presence of strong out-of-cell cochannel interference. Although this scenario would be unusual for a subscribing user in an IS–95 cellular system, a nonsubscribing user such as an eavesdropper may derive the greatest benefit from GPIC detection. Our results also suggest that subscribing users that use GPIC detection can achieve an acceptable quality of service with less base station transmit power, which results in less induced cochannel interference on out-of-cell users in the system.

Appendix: Received Power Distribution

In this appendix, we derive the received user power distribution for transmissions to users in the mth cell observed by a receiver positioned at a deterministic distance from the mth base station. We impose the following assumptions:

1. Each base station is located in the center of its circular cell of radius R.
2. Each user’s position is uniformly distributed in their cell and is independent of other user positions. The kth user’s distance from base station m is denoted by
3. Perfect power control is maintained between each base station and its users, such that the power received is identical for all users within the cell.
4. The ratio of received to transmitted power obeys a simple path loss model, where d is the distance separating the transmitter and receiver, and is the path loss exponent.

The circular shape of each cell and the users’ uniformly random positions imply that the cumulative distribution function of the (m, k)th user’s distance from the mth base station is equal to the ratio of the areas of two circles,
The pdf of this distance follows directly as
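From the area-ratio argument above, the distance CDF and pdf take the following form. This is a reconstruction consistent with the stated geometry, with $x$ denoting a candidate distance value:

```latex
F_{d_{m,k}}(x) = \frac{\pi x^2}{\pi R^2} = \frac{x^2}{R^2},
\qquad
f_{d_{m,k}}(x) = \frac{2x}{R^2},
\qquad 0 \le x \le R .
```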
IS–95 downlink power control leads to random realizations for the user amplitudes observed at a deterministically positioned receiver. The received power ratio (deterministically positioned receiver to randomly positioned user) may be expressed as
where and denote the power of the mth base station observed at the eavesdropper, the power of the mth base station observed at the (m, k)th user, and the power transmitted by the mth base station, respectively. To find the cumulative distribution of we note that hence
and the pdf of this power ratio follows directly as
This pdf is used to generate the random amplitude realizations used for the simulation results in Section 6.
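A quick Monte Carlo check of this construction is straightforward. The sample count, cell radius, path-loss exponent, and eavesdropper distance below are assumed example values, not parameters from the paper's simulation.

```python
import numpy as np

# Monte Carlo check of the distance distribution: for a user uniform on a
# disk of radius R, P(D <= x) = (x/R)^2, so D = R*sqrt(U) with U ~ U[0,1].
rng = np.random.default_rng(2)
R = 1.0
n = 200_000
d = R * np.sqrt(rng.random(n))        # distances of randomly placed users
empirical = np.mean(d <= 0.5 * R)     # empirical P(D <= R/2)
analytic = 0.25                       # (0.5)^2 from the area-ratio CDF

# With perfect power control and an assumed d^(-alpha) path-loss law, the
# power observed at an eavesdropper at range d_e scales as (d/d_e)^alpha;
# sampling the random received power ratio is then one line.
alpha, d_e = 4.0, 0.8                 # assumed exponent and eavesdropper range
power_ratio = (d / d_e) ** alpha
```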
Notes

1. In order to achieve the maximum complexity reduction we assume without loss of generality that
2. The first cell denotes the cell on the left of Fig. 3 and the second cell denotes the cell on the right of Fig. 3.
3. The authors would like to thank Rich Gooch, Mariam Motamed, and David Chou of Applied Signal Technology, Sunnyvale, CA, for providing us with this data and also for their assistance in testing the algorithms developed in this paper.
References

1. Telecommunications Industry Association, Mobile Station—Base Station Compatibility Standard for Dual-Mode Wideband Spread Spectrum Cellular Systems IS–95A, Washington, DC: TIA/EIA, 1995.
2. R. Kohno, R. Meidan, and L. Milstein, “Spread Spectrum Access Methods for Wireless Communications,” IEEE Communications Magazine, vol. 33, 1995, pp. 58–67.
3. A. Klein, “Data Detection Algorithms Specially Designed for the Downlink of CDMA Mobile Radio Systems,” in 1997 IEEE 47th Vehicular Technology Conference: Technology in Motion, Phoenix, AZ, May 4–7, 1997, vol. 1, pp. 203–207.
4. I. Ghauri and D. Slock, “Linear Receivers for the DS-CDMA Downlink Exploiting Orthogonality of Spreading Sequences,” in Conference Record of the Thirty-Second Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 1–4, 1998, vol. 1, pp. 650–654.
5. C. Frank and E. Visotsky, “Adaptive Interference Suppression for Direct-Sequence CDMA Systems with Long Spreading Codes,” in Proceedings of the 36th Annual Allerton Conference on Communications, Control and Computing, Monticello, IL, Sep. 23–25, 1998, pp. 411–420.
6. S. Werner and J. Lilleberg, “Downlink Channel Decorrelation in CDMA Systems with Long Codes,” in 1999 IEEE 49th Vehicular Technology Conference: Moving Into a New Millennium, Houston, TX, May 16–20, 1999, vol. 2, pp. 1614–1617.
7. K. Hooli, M. Latva-aho, and M. Juntti, “Linear Chip Equalization in WCDMA Downlink Receivers,” in 1999 Finnish Signal Processing Symposium, Oulu, Finland, May 31, 1999, vol. 1, pp. 1–5.
8. K. Hooli, M. Latva-aho, and M. Juntti, “Multiple Access Interference Suppression with Linear Chip Equalizers in WCDMA Downlink Receivers,” in Proceedings of the IEEE Global Telecommunications Conference—Globecom ’99, Rio de Janeiro, Brazil, Dec. 5–9, 1999, vol. 1A, pp. 467–471.
9. K. Hooli, M. Juntti, and M. Latva-aho, “Inter-Path Interference Suppression in WCDMA Systems with Low Spreading Factors,” in 1999 IEEE 50th Vehicular Technology Conference: Gateway to 21st Century Communications, Amsterdam, Netherlands, Sep. 19–22, 1999, vol. 1, pp. 421–425.
10. M. Heikkila, P. Komulainen, and J. Lilleberg, “Interference Suppression in CDMA Downlink Through Adaptive Channel Equalization,” in 1999 IEEE 50th Vehicular Technology Conference: Gateway to 21st Century Communications, Amsterdam, Netherlands, Sep. 19–22, 1999, vol. 2, pp. 978–982.
11. K. Li and H. Liu, “Blind Channel Equalization for CDMA Forward Link,” in 1999 IEEE 50th Vehicular Technology Conference: Gateway to 21st Century Communications, Amsterdam, Netherlands, Sep. 19–22, 1999, vol. 4, pp. 2353–2357.
12. I. Chih-Lin, C. Webb, H. Huang, S. ten Brink, S. Nanda, and R. Gitlin, “IS-95 Enhancements for Multimedia Services,” Bell Labs Technical Journal, vol. 1, Autumn 1996, pp. 60–87.
13. A. Weiss and B. Friedlander, “Channel Estimation for DS-CDMA Downlink with Aperiodic Spreading Codes,” IEEE Transactions on Communications, vol. 47, Oct. 1999, pp. 1561–1569.
14. H. Huang, I. Chih-Lin, and S. ten Brink, “Improving Detection and Estimation in Pilot-Aided Frequency Selective CDMA Channels,” in Proceedings of ICUPC ’97—6th International Conference on Universal Personal Communications, San Diego, CA, Oct. 12–16, 1997, vol. 1, pp. 198–201.
15. H. Huang and I. Chih-Lin, “Improving Receiver Performance for Pilot-Aided Frequency Selective CDMA Channels Using a MMSE Switch Mechanism and Multipath Noise Canceller,” in Proceedings of the 8th International Symposium on Personal, Indoor, and Mobile Radio Communications, Helsinki, Finland, Sep. 1–4, 1997, vol. 3, pp. 1176–1180.
16. B. Zaidel, S. Shamai, and H. Messer, “Performance of Linear MMSE Multiuser Detection Combined with a Standard IS–95 Uplink,” Wireless Networks, vol. 4, no. 6, 1998, pp. 429–445.
17. B. Zaidel, S. Shamai, and S. Verdú, “Multi-Cell Uplink Spectral Efficiency of Randomly Spread DS-CDMA in Rayleigh Fading Channels,” in Proceedings of the 2001 International Symposium on Communication Theory and Applications, Ambleside, UK, July 2001.
18. A. Viterbi, CDMA: Principles of Spread Spectrum Communication, Reading, MA: Addison-Wesley, 1995.
19. Y. Steinberg and H.V. Poor, “Sequential Amplitude Estimation in Multiuser Communications,” IEEE Transactions on Information Theory, vol. 40, 1994, pp. 11–20.
20. S. Verdú, Multiuser Detection, New York, NY: Cambridge University Press, 1998.
21. C. Schlegel, S. Roy, P. Alexander, and Z. Xiang, “Multiuser Projection Receivers,” IEEE Journal on Selected Areas in Communications, vol. 14, 1996, pp. 1610–1618.
22. M. Varanasi and B. Aazhang, “Multistage Detection in Asynchronous Code-Division Multiple-Access Communications,” IEEE Transactions on Communications, vol. 38, 1990, pp. 509–519.
23. A. McKellips and S. Verdú, “Eavesdropping Syndicates in Cellular Communications,” in 1998 IEEE 48th Vehicular Technology Conference, Ottawa, Canada, May 18–21, 1998, vol. 1, pp. 318–322.
24. A. McKellips and S. Verdú, “Eavesdropper Performance in Cellular CDMA,” European Transactions on Telecommunications, vol. 9, 1998, pp. 379–389.
D. Richard Brown III was born in Ridgeley, WV in 1969. From 1992 to 1997 he was employed by the General Electric Company, Plainville, CT as a Development Engineer. In 2000, he received the Ph.D. degree in electrical engineering with a minor in mathematics from Cornell University, Ithaca, NY. He is currently an assistant professor in the Electrical and Computer Engineering Department at Worcester Polytechnic Institute, Worcester, MA. His research interests include adaptive signal processing, multiuser communication systems, and interference cancellation.
H. Vincent Poor received the Ph.D. degree in electrical engineering and computer science in 1977 from Princeton University, where he is currently Professor of Electrical Engineering. He is also affiliated with Princeton’s Department of Operations Research and Financial Engineering, and with its Program in Applied and Computational Mathematics. From 1977 until he joined the Princeton faculty in 1990, he was a faculty member at the University of Illinois at Urbana-Champaign. He has also held visiting and summer appointments at several universities and research organizations in the United States, Britain, and Australia. His research interests are in the area of statistical signal processing and its applications, primarily in wireless multiple-access communication networks. His publications in this area include the book, Wireless Communications: Signal Processing Perspectives, with Gregory Wornell. Dr. Poor is a member of the U.S. National Academy of Engineering, and is a Fellow of the Acoustical Society of America, the American Association for the Advancement of Science, the IEEE, and the Institute of Mathematical Statistics. He has been involved in a number of IEEE activities, including having served as President of the IEEE Information Theory Society and as a member of the IEEE Board of Directors. Among his other honors are the Terman Award of the American Society for Engineering Education, the Distinguished Member Award from the IEEE Control Systems Society, the IEEE Third Millennium Medal, the IEEE Graduate Teaching Award, and the IEEE Communications Society and Information Theory Society Joint Paper Award.
Sergio Verdú is a Professor of electrical engineering with Princeton University, NJ. He is active in the fields of information theory and multiuser communications. In the 1980s, he pioneered the technology of multiuser detection which exploits the structure of multiaccess interference in order to increase the capacity of multiuser communication systems. His textbook Multiuser Detection was published in 1998 by Cambridge University Press. Dr. Verdú is a recipient of several paper awards: the IEEE Donald Fink Paper Award, a Golden Jubilee Paper Award and the 1998 Outstanding Paper Award from the IEEE Information Theory Society, and the 2000 Paper Award from the Telecommunications Advancement Foundation of Japan. He also received a Millennium Medal from the IEEE and the 2000 Frederick E. Terman Award from the American Society for Engineering Education. Dr. Verdú served as Associate Editor for Shannon Theory of the IEEE TRANSACTIONS ON INFORMATION THEORY. He served on the Board of Governors of the Information Theory Society in 1989–1999, and was President of the Society in 1997. He was Co-Chairman of the Program Committee of the 1998 IEEE International Symposium on Information Theory, and Co-Chairman of the 2000 IEEE International Symposium on Information Theory. Dr. Verdú served as Editor of the Special 1948–1998 Commemorative Issue of the IEEE TRANSACTIONS ON INFORMATION THEORY, reprinted by IEEE Press in 1999 as Information Theory: Fifty Years of Discovery.
C. Richard Johnson, Jr. was born in Macon, GA in 1950. He is currently a Professor of Electrical and Computer Engineering and a member of the Graduate Field of Applied Mathematics at Cornell University, Ithaca, NY. Professor Johnson received the Ph.D. in electrical engineering with minors in engineering-economic systems and art history from Stanford University in 1977. In 1989, he was elected a Fellow of the IEEE “for contributions to adaptive parameter estimation theory with applications in digital control and signal processing”. In the past decade, Professor Johnson held visiting appointments at
Stanford University, University of California—Berkeley, Chalmers University of Technology (Sweden), Technical University of Vienna (Austria), National Polytechnic Institute of Grenoble (France) and Australian National University (Australia). His research focus over the past decade has been blind adaptive fractionally-spaced linear
and decision feedback equalization for intersymbol and structured multiuser interference removal. The research of his group at Cornell (http://backhoe.ee.cornell.edu/BERG) is currently supported by the National Science Foundation, Applied Signal Technology, Lucent Technologies—Bell Labs, and NxtWave Communications.
Journal of VLSI Signal Processing 30, 235–256, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
COD: Diversity-Adaptive Subspace Processing for Multipath Separation and Signal Recovery* XINYING ZHANG AND S.-Y. KUNG Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544-5263 Received December 14, 2000; Revised July 25, 2001
Abstract. This paper proposes a generalized multipath separability condition for subspace processing and derives a novel COD (Combined Oversampling and Displacement) algorithm that utilizes both spatial and temporal diversities for path separation, DOA (Direction of Arrival) estimation, and signal recovery. A unique advantage lies in its ability to cope with the situation where the number of multipaths is larger than the number of antenna elements, which has not been treated in the traditional approaches. The COD strategy solves the antenna deficiency problem by combining vertical expansion via temporal oversampling with horizontal expansion via spatial displacement. We provide a detailed analysis of the theoretical foundations for COD factorization and multipath separability conditions, which naturally leads to COD path separation and DOA estimation algorithms. Another advantage of COD factorization hinges upon its ability to generate a multiplicity of eigenvalues, which greatly facilitates a SIMO channel equalization formulation useful for signal recovery. This paper also proposes a frequency-domain total-least-squares algorithm for the SIMO equalization procedure. Finally, simulation results on path separation, DOA estimation and signal recovery are demonstrated.

Keywords: multipath separability condition, subspace processing, temporal oversampling, spatial displacement, smart antenna, DOA estimation, signal recovery

1. Introduction

This paper considers the MIMO path separation problem at the receiving antenna array in the uplink multipath propagation scenario shown in Fig. 1. In designing an adaptive reception scheme for a smart antenna system, we must deal with inevitable uncertainties such as the changing number of signal sources (i.e., active users), the existence of multipaths, and the time-varying fading and delays of the multipath channels. In particular, multipath fading and inter-symbol and inter-user interference are very common problems encountered in wireless environments.
In this paper, a blind diversity-adaptive processing technique is proposed to combat such problems. More specifically, this paper proposes a diversity combiner which adaptively utilizes both the temporal and spatial diversities, in order to compensate for each other’s deficiencies. The objective is to separate the different (both inter- and intra-user) paths, estimate their DOAs, and subsequently recover their source signals. (*This research was supported by a contract sponsored by Mitsubishi Electric Research Laboratories.)

1.1. Subspace Processing for Multipath Problems

Spatial-domain processing techniques have long played an important role in wireless mobile communication applications [1]. In particular, our technique is based on the subspace approach, which has become a dominant trend for DOA (Direction of Arrival) estimation, e.g. MUSIC [2], TAM [3] and ESPRIT [4]. By integrating the spatial or angular information into the time domain, the user capacity and system performance are expected to be enhanced in wireless communication applications [5–9]. The paper aims at harnessing spatial-temporal diversities to improve reception
techniques of smart antennas in wireless communication systems. This paper studies a subspace smart antenna approach that does not require training sequences. It is built upon a rich literature on SVD (singular value decomposition)-based subspace methods [2–4, 10, 11]. Moreover, the paper presents a plausible strategy to deal with situations in which the number of antenna elements may be much less than the number of multipaths. In order to separate a large number of paths coming from multiple signal sources, while at the same time coping with ISI (Inter-Symbol Interference) and channel equalization, we will explore a COD (Combined Oversampling and Displacement) space-time processing technique, which exploits both spatial and temporal diversities. To our knowledge, the only prior approach based on artificial expansion of the data matrix to treat the deficiency of antenna size was proposed by [12]. That approach is based on temporal oversampling, frequency-domain analysis after an FFT, and exploitation of the shift-invariance property; its accuracy depends largely on a high oversampling rate, at the price of computational complexity. In contrast, the present work stays with spatial- and temporal-domain processing, without relying on frequency-domain processing. Moreover, it can yield a suitable signal subspace for separating a large number of paths using a relatively small antenna array. And the multiplicity-of-paths information found in our approach leads to a novel and more reliable signal recovery method. In summary, our approach offers several innovative contributions:

1. A versatile space-time expanded data matrix that permits adaptive and effective compensation of spatial/temporal diversities inherent in multipath wireless applications.
2. A generalized (multi-path) separability condition built upon a generalized space-time factorization
of the expanded data matrix such that each path has a uniquely defined subspace associated with it.
3. A generalized algorithm for path separation and DOA estimation with arbitrary antenna size.
4. A more effective channel equalization approach for signal recovery based on the COD formulation (cf. Section 7).

The organization of the paper is outlined in the flowchart of Fig. 2. Section 2 proposes the generalized multipath separability condition built upon a space-time factorization of the expanded data matrix. Section 3 describes how an augmented data matrix permits adaptive and effective compensation of spatial/temporal diversities. Section 4 focuses on the diversity combination in the data matrix under the traditional assumption that the number of antenna elements is large enough (e.g. M > r). In the COD algorithm of Section 5 we look beyond this limiting assumption and give a novel solution to the more challenging separation and recovery problem with deficient antenna elements. A somewhat extended DOA finding algorithm is described in Section 6. As shown in Fig. 2, the signal recovery process follows naturally after a successful path separation. In Section 7, we demonstrate that the COD formulation offers a unique advantage in that it allows an effective SIMO (Single-Input Multiple-Output) channel equalization approach for signal recovery. Finally, representative simulation results are provided in Section 8 to support the theoretical findings.

1.2. Convolutional Model of Multipath Propagation

For convenience, the symbol interval of the signal is normalized to one time unit. Consider d digital users, each transmitted through independent multipaths, with denoting the DOA, complex fading factor, overall temporal response and time delay for path (i, j)
respectively. Assume a ULA (uniformly spaced linear antenna array) of M antenna elements at the receiver, and denote as the baseband signal observed at the m-th antenna element. A parametric multipath model is commonly expressed as
same (single) path as that in the corresponding column of Once X is successfully decomposed into a valid factorization, either the traditional [3, 4] or our new DOA-finding algorithms (cf. Section 6) may be applied to compute the DOA for each path, and the signal-recovery process then follows naturally thereafter.

2.2. Separability Analysis of Basic Data Matrix
where is well known as the antenna response vector with ( the spacing of two adjacent antenna elements; the carrier wavelength). Most existing subspace processing techniques are based on a data matrix X formed from For example, the (M × N) basic data matrix X is
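As a concrete (and necessarily hedged) illustration of this construction, the sketch below builds a ULA steering vector and an (M × N) basic data matrix in NumPy; the function names and the half-wavelength spacing are our own assumptions, not notation from the paper.

```python
import numpy as np

def ula_steering(theta, M, spacing_over_lambda=0.5):
    # Antenna response vector of a ULA for DOA theta (radians): element m
    # contributes the phase exp(-j*2*pi*m*(spacing/lambda)*sin(theta)),
    # which is exactly the Vandermonde structure exploited later.
    m = np.arange(M)
    return np.exp(-2j * np.pi * m * spacing_over_lambda * np.sin(theta))

def basic_data_matrix(snapshots):
    # Stack N baud-rate snapshot vectors x(n) (each of length M) column by
    # column into the (M x N) basic data matrix X = [x(1) ... x(N)].
    return np.column_stack(snapshots)

# Toy example: r = 2 paths, M = 4 elements, N = 8 baud-rate samples.
rng = np.random.default_rng(0)
A = np.column_stack([ula_steering(t, 4) for t in (0.3, -0.5)])
GS = rng.standard_normal((2, 8))            # stand-in for the temporal factor
X = basic_data_matrix(list((A @ GS).T))     # X = A @ GS, shape (4, 8)
```

Here `GS` merely stands in for the temporal factor of the space-time factorization discussed next.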
In other words, the basic data matrix is simply a collection of the spatial and baud-rate temporal samples of the antenna observations.

2. Separability Condition Based on Data Matrix
2.1. Multipath Separability Condition
For simplicity of illustration, we first treat the separability condition of the basic data matrix X in Eq. (3) before turning to more complex data matrices.

2.2.1. Notations and Matrix Factorization.
Definition 1 (Integer and Fractional Path Delays). The time delay of path (i, j) can be expressed as the sum of integer path delay and the fractional path delay
where Definition 2 (Convolutional Vectors, Durations & ISI Lengths). We define, for path (i, j), the (individual-path) convolutional signal vector as the vector of source-signal samples involved in producing the received antenna observation at time n. More exactly,
Given a diversity-combined data matrix X (see, e.g., Eqs. (3), (26), (29), (48)) formed from signals at the antenna elements, what has been lacking is a general condition characterizing its effectiveness for the purpose of path separation. To this end, we introduce the following proposition: Proposition 1 (Multipath Separability Property). A data matrix X is said to have a “full multipath separability property” if there exists a space-time factorization satisfying 1.
has full column rank, and each of its column vectors reveals the spatial information concerning the DOA of exactly one path. Moreover, each path can find at least one corresponding column vector in 2.
has full row rank, and each of its row vectors bears the temporal signal information from exactly the
where the dimension of is the ISI length spanned by path (i, j) with baud-rate sampling (Note that is determined primarily by the combined length of the transceiver pulse-shaping filter and the dispersive channel, with fine-tuning caused by the fractional path delay ).1 The corresponding (individual-path) convolutional temporal vector is therefore
As indicated by Eq. (4), in this case the convolutional duration of path (i, j) is which in turn will help to track the rank structure and consequent separability of X.
Zhang and Kung
With the definitions above, Eq. (2) can be rewritten in the vector form
Lemma 1 (Basic Data Matrix Factorization). Based on the parametric multipath channel model in Eq. (2), the basic data matrix X in (3) is equivalent to a product form
where A, G and S are (M × r), and matrices determined by and respectively, together with other path parameters Proof: Equation (7) follows easily by putting Eq. (6) into matrix form, setting the antenna response matrix
which holds a Vandermonde structure (cf. [12] for more details), the temporal response matrix
and the (expanded) source signal matrix
2.2.2. Separability Conditions for Basic Data Matrix. Proposition 1 suggests that the separability of X depends largely on the rank properties of its matrix factors A, G and S. Notice that has rows. However, there is possibly a rank reduction due to inadequacy of temporal diversity (e.g. duplications among rows) in it (this situation arises especially when the different paths for user i have very close arrival times, causing their convolutional durations to overlap significantly). There may also be extra rank reduction due to inadequacy of path diversity in the multiplication of with (cf. Lemma 1). For the purpose of checking multipath separability, we have the following definitions: Definition 3 (Accumulative ISI Length). For an individual user i, the accumulative ISI length is defined as the effective ISI length spanned by all the paths of user i. It is the length of the “union” of the convolutional durations of all the paths of user i, as exemplified in Fig. 3, where is the length of the “opening” interval not covered by any of the convolutional durations. Assume N is sufficiently large that all the “non-repeating” rows of are linearly independent. By
inspecting Fig. 3, it is obvious that the number of repeating rows in is exactly Therefore,
Thus we can remove all the repeated rows from to create a matrix which has full row rank In the matrix representation, such a contraction process (from to ) will necessarily induce a corresponding contraction on the nonzero entries in the matrix More specifically, some of its elements should also be accordingly shifted to match the contraction applied to (the rows of) This induced shift results in a matrix such that
and can be defined in the same fashion as S and G (cf. Eqs. (9), (10)), except that is replaced by and Definition 4 (Basic Temporal Path Diversity). We define a notion of basic temporal path diversity as
to indicate the number of identifiable paths for user i. When N is sufficiently large,
Similarly, the total temporal path diversity of X is defined as The basic temporal path diversity depends not only on ISI lengths but also on the convolutional durations and patterns of all the intra-user paths. When it is said to have full temporal path diversity, i.e. all paths for user i are identifiable. Lemma 2 (Separability Condition of Basic Data Matrix). Assuming independent users, the basic data matrix of Eq. (3) will possess “full multipath separability property” in Proposition 1 with a valid space-time factorization
if and only if
1.
(i.e., the antenna size is larger than the number of multipaths), and 2. (i.e., all users have full temporal path diversity). Proof: Note that the dimensions of and are (M × r) and (r × N), respectively.
1. Each column vector of A represents the DOA information of one path; namely, these r vectors are in one-to-one correspondence with the r distinct paths. A is guaranteed to have full column rank, since the antenna response vectors with r distinct DOAs are mutually independent when Thus is automatically met. 2. Now we need only verify the to validate the above-mentioned space-time factorization. We note that each row of indicates the temporal information of one path. The inherent signal independence among different users allows us to treat the rank of independently for each user. Consequently, has full row rank (i.e. D = r) if and only if each of its submatrices has full row rank, i.e. Thus the separability of X is now established.
The following case studies should help build an intuitive understanding of when to expect full temporal path diversity for an individual user. 1. To have it is necessary that the accumulative ISI length 2. If each of the intra-user paths has at least some portion of its convolutional duration not covered by any other intra-user path, this is sufficient to assure full diversity, i.e. 3. Even when there exists a strong overlap (i.e. synchrony) of the intra-user convolutional durations, full diversity can still be achieved as long as the channel characteristics of the intra-user paths are sufficiently different to guarantee independent linear combinations of the source signals (as mentioned, one can have only if However, because the temporal characteristics of dispersive channels tend to be randomized, the full-rank condition will be met in practice as long as the “necessary” condition is satisfied).
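Numerically, the two conditions of Lemma 2 reduce to rank checks on the two factors of the data matrix. The sketch below is a hedged illustration (the helper name and the toy matrices are ours); the duplicated-row case mimics the loss of temporal path diversity caused by strongly synchronous intra-user paths.

```python
import numpy as np

def check_separability(A, GS):
    # Lemma 2's conditions for X = A @ GS:
    #  1) A (M x r) has full column rank  (spatial condition, needs M >= r);
    #  2) GS (r x N) has full row rank    (full temporal path diversity).
    r = A.shape[1]
    return bool(np.linalg.matrix_rank(A) == r and
                np.linalg.matrix_rank(GS) == r)

# r = 3 paths, M = 5 elements: Vandermonde A with distinct DOAs.
rng = np.random.default_rng(0)
A = np.exp(-1j * np.pi * np.outer(np.arange(5), np.sin([0.2, -0.4, 0.8])))
GS = rng.standard_normal((3, 20))
GS_bad = GS.copy()
GS_bad[2] = GS_bad[1]   # duplicated temporal rows: synchronous intra-user paths

# check_separability(A, GS) is True; check_separability(A, GS_bad) is False.
```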
Example 1 (Worst Case Example of Non-Separability). When the ISI lengths are short and intra-user paths
exhibit strong synchrony, rank reduction becomes more likely. For example, consider the worst case when the temporal responses for all the paths are instantaneous, so that and the delay spread of the intra-user paths spans at most one symbol interval; then for each user 1 since and the rank of the matrix will be D = d < r. This, of course, results in a failure of the “separability condition.”
2. In Section 5, we shall show that this corresponds to either the situation with vertical expansion or with combined vertical/horizontal expansion. Moreover, will be an expanded Vandermonde matrix with rank greater than r.
3. Diversity Compensation Strategies

3.1. Options for Data Matrix Expansion
Since the basic data matrix X may fail the “separability property” due to deficiencies in spatial diversity or temporal path diversity, this paper addresses possible remedies for such deficiencies. One viable strategy is to augment the basic data matrix X so as to boost the deficient diversities. In terms of data matrix expansion, there are four categories:
1. No Expansion
2. Horizontal Expansion
3. Vertical Expansion
4. Combined Vertical/Horizontal Expansion
The factorization of an augmented data matrix X (note that for simplicity, we use X to denote a data matrix either with or without augmentation) with rank may be significantly affected by expansion schemes. Nevertheless, it has a general formulation as below:
where and are defined according to the specific application situation. There are two important categories of factorization schemes:
1. In Section 4, we shall show that this corresponds to either the situation with no expansion or with only horizontal expansion. Moreover, will simply be the traditional antenna response matrix A, i.e. an (M × r) Vandermonde matrix.
3.2. Spatial and Temporal Diversity Compensations
The objective of the proposed diversity-reconfigurable smart-antenna processing is to reach an optimal tradeoff in terms of diversity parameters so as to satisfy the full multipath separability condition. The most popular compensation strategies fall into one of the following two types: temporal oversampling and spatial displacement. 3.2.1. Temporal Oversampling. When a signal is sampled fractionally at the antenna, the oversampling factor P denotes the number of samples taken during each symbol interval. At the p-th (p = 1, . . . , P) oversampling point, the data vector can still be obtained from Eq. (2) by substituting with and leaving the other parameters unchanged. To cope with the case including temporal oversampling, we must extend some of the previous definitions. The shifted time delay of path (i, j) at the p-th oversampling point is defined as with denoting the integer and fractional parts of respectively, i.e. The shifted ISI length is the number of (baud-rate) source-signal samples convolved in the antenna observation at oversampling point p. Consequently, the shifted convolutional signal vector and temporal vector of path (i, j) are derived similarly as in Eqs. (4), (5), with replaced by More exactly,
3.2.3. Data Collection. We stack the antenna observations of section k at p-th oversampling point into a ((M – K + 1) × N) data matrix: Now the shifted convolutional duration is the time interval covered by With extended definitions above, the shifted antenna observation vector at p-th oversampling point is equivalent to the vector product:
3.2.2. Spatial Displacement. For diversity compensation we can form more than one virtual (spatial) section from adjacent elements of a single antenna array. Here the spatial displacement K is defined as the total number of such sections (i.e. [1, 2, . . . , (M – K + 1)] form the first section, [2, 3, . . . , (M – K + 2)] form the second, and so forth). The partial antenna observation vector associated with section is denoted as
and the corresponding partial antenna response matrix is the portion of k-th to (M – K + k)-th rows of A. Defining a (r × r) diagonal matrix
When K = 1 (resp. P = 1), the above notation degenerates into simply To obtain a matrix factorization of we define and similarly as in Eqs. (9), (10), with each vector in them replaced by its corresponding shifted version. More precisely,
where and are defined in Eqs. (19), (20). Along this line, and can be defined in a similar fashion (cf. Eqs. (9), (10)). Then shares the similar factorization property as the basic data matrix (cf. Lemma 1):
Let denote the shifted accumulative ISI length at p-th oversampling point defined similarly as in Definition 3 and thus Following the previous idea of data contraction in Section 2.2.2 (cf. Fig. 3), we can again remove the duplicated rows of and obtain a contracted matrix with full row rank. Accordingly, can be derived similarly as before. The contracted matrices satisfy the following equation:
then the Vandermonde structure of A inherits the shift-invariance property:
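This shift-invariance can be verified numerically: displacing a section of the array down by one element multiplies the partial antenna response matrix by a diagonal matrix of unit-modulus phase factors. The sketch below (with our own helper names, assuming half-wavelength spacing) illustrates this.

```python
import numpy as np

def ula_matrix(thetas, M, s=0.5):
    # (M x r) Vandermonde antenna response matrix for DOAs `thetas`
    # (radians), with element spacing s wavelengths.
    m = np.arange(M)[:, None]
    return np.exp(-2j * np.pi * m * s * np.sin(np.asarray(thetas))[None, :])

thetas = [0.2, -0.4, 0.9]
M, K = 6, 2
A = ula_matrix(thetas, M)

# Diagonal phase matrix: one inter-element phase step per DOA.
Phi = np.diag(np.exp(-2j * np.pi * 0.5 * np.sin(np.array(thetas))))

A1 = A[0 : M - K + 1, :]   # section 1: rows 1 .. M-K+1
A2 = A[1 : M - K + 2, :]   # section 2: rows 2 .. M-K+2
# Shift-invariance: the displaced section equals the first section times Phi.
assert np.allclose(A2, A1 @ Phi)
```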
4. Horizontal Expansion for Meeting Condition

In the basic data matrix X, the horizontal dimension is attributed to the temporal diversity of baud-rate samples
only. will fail whenever such diversity is deficient for identifying all paths. Assuming sufficiently many antenna elements, we can solve the problem by constructing horizontally augmented data matrices via either spatial displacement or temporal oversampling.
1. For temporally oversampled data matrix:
4.1. Matrix Construction and Space-Time Factorization

4.1.1. Horizontally Oversampled Data Matrix. Stacking oversamplings with factor P horizontally, without spatial displacement, yields an (M × PN) augmented data matrix Since the rightmost matrix has full row rank, Much as before, we assert that X has a possible space-time factorization with
4.1.2. Spatially Displaced Data Matrix. Stacking samples of different antenna sections together yields a ((M – K + 1) × KN) augmented data matrix
Based on the shift-invariance property of A in Eq. (23), it is straightforward to see that
which naturally leads to the following space-time factorization:
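The two horizontal-expansion constructions can be sketched as simple stacking operations (a hedged sketch; the function names are ours, and the per-phase matrices are assumed given):

```python
import numpy as np

def horizontal_oversample_stack(X_phases):
    # Eq. (26) sketch: stack the P per-phase data matrices X^(p) (each M x N)
    # side by side into an (M x P*N) horizontally oversampled matrix.
    return np.hstack(X_phases)

def spatial_displacement_stack(X, K):
    # Eq. (29) sketch: from the (M x N) matrix of antenna rows, section k
    # keeps rows k .. M-K+k; stacking the K sections side by side yields a
    # ((M-K+1) x K*N) horizontally displaced matrix.
    M = X.shape[0]
    return np.hstack([X[k : M - K + 1 + k, :] for k in range(K)])

X = np.arange(12.0).reshape(4, 3)          # toy (M=4, N=3) data matrix
Xp = horizontal_oversample_stack([X, X])   # shape (4, 6)
Xk = spatial_displacement_stack(X, K=2)    # shape (3, 6)
```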
4.2. Separability Conditions
In order to analyze the separability conditions, similar to Definition 4 we define the (individual-user) horizontally augmented diversity as the rank of the submatrix extracted from the rows attributed to user i. Specifically,
2. For spatially displaced data matrix: By the same token, the horizontally augmented diversity in this case can be shown to be
Lemma 3 (S.C. for Horizontally Augmented Data Matrix). Assuming independent users, the horizontally augmented data matrix X with temporal oversampling factor P (resp. spatial displacement K) in Eq. (26) (or (29)) will possess “full multipath separability property” with the space-time factorization suggested in Section 4.1 if and only if (resp. ) and The proof (omitted here) is very similar to that for Lemma 2. Comparing Eqs. (35), (36) with (13), the horizontal dimension of the original matrix is enlarged by P or K times after compensation, therefore the added blocks induced by fractionally-spaced oversampling or spatial displacement should substantially boost the (numerical) independence of the rows in and thus yield improved for path separation over the basic temporal path diversity Hence for a horizontally expanded data matrix X, “full multipath separability property” is often guaranteed as long as proper P or K is selected. To be more specific, we have the following corollaries:
Corollary 1 (Necessary & Sufficient Separability Conditions 1). For horizontally augmented X under the enhancement via temporal oversampling in Eq. (26),
will create one additional column-block in Eq. (36), which in turn can increase the rank of the submatrix attributed to user i by an amount of at most i.e.
1. a necessary condition for separability is
2. a sufficient condition for separability is that, for any individual user, each of its paths has at least some portion of its shifted convolutional duration not covered by those of any other path from the same user. 3. When there is a significant overlap of the shifted convolutional durations, the (fractionally-spaced) path-channel differences usually enhance numerically the linear independence when the delayed signals are combined as shown in Eq. (35). This again helps enforce the full temporal path diversity condition.
Proof: From linear algebra we know that the rank of a matrix is no more than the sum of the ranks of its submatrices. This implies that for temporally oversampled X,
which directly leads to the sufficient condition above.
Corollary 2 (Necessary & Sufficient Separability Conditions 2). For horizontally augmented X with spatial displacement in Eq. (29),
This leads to the necessary condition If we further note that is a (non-identity) diagonal matrix, it can be shown that each additional column-block in Eq. (36) will increase the rank by at least 1. This leads to the sufficient condition that Theoretically, can formally be regarded only as a necessary condition for separability. In practical applications, however, due to the randomness of the channel response, with very rare exceptions we have
In other words, in practice the augmented data matrix X possesses the “full multipath separability property” if there is an integer K such that This assertion has been verified by our simulations (cf. Example 2). Example 2 (Numerical Examples of Rank Readjustment). As in Example 1, for the synchronous case in which all the intra-user paths arrive in the same symbol interval and the filters and channel dispersions are very short (so that all the paths have virtually instantaneous responses), the rank of the basic data matrix will be as small as D = d with Even under such a “worst-case” scenario, the deficient rank may be reinforced via spatial displacement. By setting or the rank of the spatially displaced or temporally oversampled matrix X in Eqs. (26), (29) can usually be readjusted to the desired rank r.
1. a necessary condition for separability is
2. a sufficient condition for separability is
Proof: The choice of K can be derived by inspecting Eq. (36). It can be verified that each increment of K
Diversity Enhancement via Transceiver Redesign. Finally, it is worth noting that another promising approach to boosting the temporal path diversity is to redesign the transceiver parameters, without the data matrix expansion in Eq. (3). The pulse-shaping filter “length” in the transceiver has a significant impact on which in turn should numerically enhance the temporal path diversity. Therefore, when a rank failure takes place, increasing the pulse-shaping filter length offers an effective way to boost the rank of the basic
data matrix. The same numerical advantage also carries through to both types of horizontally expanded data matrices.

5. COD Diversity Compensation
Traditional approaches assume a large number of antenna elements, so that the basic data matrix X in Eq. (3) has a space-time factorization with satisfying the in Proposition 1. Under the much more challenging situation when M is small (possibly M < r), the (M × r) matrix A will no longer necessarily have full column rank. Therefore, it is necessary to (1) compensate for the inadequacy of spatial diversity along the vertical dimension of A; (2) find a substitute space-time factorization satisfying both and in Proposition 1; and meanwhile (3) retain a proper structure of for DOA information extraction. To this end, we propose the novel COD strategy, which benefits from combining vertical expansion via temporal oversampling with horizontal expansion via spatial displacement. 5.1. Notations for Vertical Temporal Oversampling For the basic or horizontally augmented data matrix, the vertical dimension is attributed to spatial diversity only. Thus when M < r it is intuitive to expand the vertical dimension by some other means, for example a vertical temporal oversampling strategy. In this case, it is useful to introduce the following notation for the separability study.
In other words, the ISI length is enlarged to cover all the oversamplings (but it is enlarged by no more than one compared with any shifted ISI length 3. Define in a similar way as in Eq. (10) with replaced by
4. With defined as above, it contains as a submatrix with possibly one additional row. Therefore, can be defined accordingly. More precisely, if we insert extra zero-columns into the corresponding columns of we will be able to create a slightly augmented matrix such that
It is worth noting that has a block-diagonal structure, and the rank of is denoted as 5. Similarly as in Section 3.2.3, we define the overall source signal matrix S and the shifted temporal response matrix (after augmentation) as follows:
1. Let us denote an overall convolutional signal vector
6. With the notations given above, one can easily verify the following factorization from Eq. (25):
2. The overall convolutional duration of path (i, j) with oversampling factor P covers the interval of Thus, the overall ISI length of all the P samplings within a symbol interval [n, n + 1) for path (i, j) is
5.2. COD Expansion and Separability
Combining temporal oversampling vertically and spatial displacement horizontally leads to the COD data matrix:
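The combined stacking can be sketched as follows (a hedged sketch with our own helper name; the P oversampling-phase matrices are assumed given):

```python
import numpy as np

def cod_data_matrix(X_phases, K):
    # COD stacking sketch: for each oversampling phase p, the K spatially
    # displaced sections (rows k .. M-K+k) are stacked horizontally, and the
    # P resulting blocks are stacked vertically, giving a
    # (P*(M-K+1) x K*N) data matrix.
    M = X_phases[0].shape[0]
    return np.vstack([
        np.hstack([Xp[k : M - K + 1 + k, :] for k in range(K)])
        for Xp in X_phases
    ])

# Toy example: P = 2 phases of a (M=4, N=3) matrix, displacement K = 2.
X1 = np.arange(12.0).reshape(4, 3)
X2 = X1 + 100.0
X_cod = cod_data_matrix([X1, X2], K=2)     # shape (2*3, 2*3) = (6, 6)
```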
Theorem 1 (COD Space-Time Factorization). For COD data matrix X in Eq. (48), we have the spacetime factorization with
where
is a diagonal matrix with dimension
Proof: Substitute each submatrix in X of Eq. (48) with its matrix product form in (47). Exploiting the shift-invariance of A in Eq. (23) and the block-diagonal structure of we have
where is the diagonal expansion of obtained by repeating each of its elements times consecutively. The space-time factorization in the theorem then follows directly. We are now ready to state the condition on K and P for Eq. (50) to yield a valid factorization. Theorem 2 (COD Separability Condition). Assuming independent users and P sufficiently large to yield a full column rank of the COD matrix X meets the “full multipath separability condition” if and only if
By inspection, each column (resp. row) of bears the information of DOA (resp. signal and ISI) corresponding to exactly one path. Having established the fact that path (i, j) (i = has corresponding column (row) vectors in what remains to be verified
is their full column (row) rankness. Since P is assumed to be sufficiently large to yield the full column rank (cf. Discussion 2), what remains to be verified is the full row rankness of Assuming all users have different (and linearly independent) signal sequences, it suffices to verify the full-rankness of the rows attributed to each individual user. Each block column of has rank (attributed to user i). Each increment of K brings in one additional column-block, resulting in a net increase of rank by exactly (a disclaimer: we exclude pathological situations, such as two DOAs happening to coincide, etc.). The theorem is thus proved. Discussion 1 (Practical Range of K). Note that due to overlapping of convolutional durations among intra-user paths. In most practical situations, it is reasonable to assume that all the paths have independent delays and channels, so that the overlapping will not be very severe. Thus a displacement of K = 2 (or at most K = 3) suffices to meet Eq. (53) in most cases. We therefore conclude that the DOA estimation and path separation problem is theoretically tractable by COD as long as i.e. it requires only a very small antenna size. In some cases, when the different intra-user paths arrive temporally far enough from each other that separability is already achieved by setting K = 1, i.e. no spatial displacement is needed in Eq. (48). Discussion 2 (Practical Range of P). As a practical guideline, we suggest to guarantee the (numerical) full column rankness of Example 3 (Special Case When There Are No Synchronous Paths). Suppose it is further assumed that the different intra-user paths arrive sparsely enough that then separability can be achieved by setting K = 1, i.e. no spatial displacement will be needed in the COD data matrix constructed in Eq. (48). This is especially valuable for cases of small antenna array size. Vertical temporal oversampling is in general still needed.
Proof:
6. COD Path Separation and DOA-Finding Algorithm

In this section, we treat the problems of DOA estimation and path separation. The objective of path
separation is to remove most inter-user or inter-path interferences. The task of reducing or removing ISI will be deferred to Section 7.
as below:
6.1. Subspace Extraction

The expression suggests that path separation and identification is basically a matrix decomposition problem: one must extract, from an information-mixed data matrix, the spatial and temporal vectors that deliver information pertaining to the desired paths. Since the SVD (singular value decomposition) is among the most powerful tools for matrix decomposition, we adopt an SVD-based subspace extraction:
where U, V are unitary matrices with full column (row) rank and is a diagonal matrix containing the singular values. When the separability condition in Section 2.1 is met, the COD data matrix X has the same low rank as and thus they share the same column (row) span. Mathematically, and can be expressed in terms of U and V, respectively:
where R is an (unknown) invertible matrix denoting a rotation. The left singular vectors in U are first used to find the DOA information embedded in The extracted spatial features help separate paths via linear combinations of the right singular vectors in V. Finally, the row span, after a linear transformation, can be exploited for source-signal recovery.

6.2. Column Span Processing for DOA Computation
The most prominent prior works on the subspace-processing approach to DOA finding with a large antenna array include, for example, MUSIC [2], TAM [3] and ESPRIT [4], in which the special structure inherent in the antenna response vectors of a ULA is well exploited [12]. Assuming a ULA with adjacent spacing from Eq. (8), it is easy to show that the antenna response matrix A is an (M × r) matrix with Vandermonde structure
The special Vandermonde structure inherits the shift-invariance property indicated in Eq. (23):
where and are operations that respectively truncate the first and last rows of an (M – K + 1)-row matrix. With reference to Theorem 1, in the COD expansion each vertical block of retains the Vandermonde structure. Thus X possesses an “intra-block shift-invariance property”, which is critical for DOA estimation. Let and be matrix-truncation operations on which respectively extract the (M – K) upper and (M – K) lower rows out of each of the P vertical sub-blocks, i.e.
then we have Theorem 3 (COD Eigenvalue Theorem). Given a COD data matrix X that satisfies the “multipath separability property,” apply the SVD (singular value decomposition) to obtain We assert that there exists an invertible matrix R such that
with defined in Theorem 1, and hence DOAs of all the paths can be derived from the diagonal elements in ((·)+ denotes pseudo-inverse).
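Before the proof, a hedged numerical sketch of this recipe may be helpful. For P = K = 1 the construction reduces to a standard TAM/ESPRIT step: take the SVD, truncate the top and bottom rows of the leading left singular vectors, and read the DOAs off the resulting eigenvalues. The function below follows this recipe (the names and the half-wavelength spacing are our assumptions, not the paper's notation):

```python
import numpy as np

def cod_doa(X, M, P, K, D, spacing_over_lambda=0.5):
    # SVD-based DOA finding in the spirit of Theorem 3. U is split into P
    # vertical sub-blocks of M-K+1 rows each; the upper/lower truncations of
    # every block are stacked, and the eigenvalues of pinv(U_up) @ U_low are
    # exp(-j*2*pi*(spacing/lambda)*sin(theta)) for each path.
    U = np.linalg.svd(X, full_matrices=False)[0][:, :D]
    rows = M - K + 1
    up  = np.vstack([U[p * rows : p * rows + rows - 1, :] for p in range(P)])
    low = np.vstack([U[p * rows + 1 : p * rows + rows, :] for p in range(P)])
    evals = np.linalg.eigvals(np.linalg.pinv(up) @ low)
    return np.arcsin(-np.angle(evals) / (2 * np.pi * spacing_over_lambda))

# Noise-free sanity check with P = K = 1, M = 6, two paths.
thetas = np.array([0.3, -0.5])
A = np.exp(-2j * np.pi * np.outer(np.arange(6), 0.5 * np.sin(thetas)))
rng = np.random.default_rng(2)
S = rng.standard_normal((2, 40)) + 1j * rng.standard_normal((2, 40))
est = np.sort(cod_doa(A @ S, M=6, P=1, K=1, D=2))   # ~ [-0.5, 0.3]
```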
Proof:
After the extractions, we have
tors). In other words, each of its row vector reveals information of signal pertaining to one path. 6.3.1. Path Separation for Basic or Horizontally Expanded Matrices. For the basic or horizontally expanded data matrix, after applying SVD
Applying Eq. (23), it is immediate to show
By Eq. (56) it follows that
Equation (62) represents an eigenvalue decomposition, where is the nonsingular, diagonal eigenvalue matrix (cf. the definition of in Section 5.2). The DOAs can then be calculated from its diagonal elements (i = 1, . . . , d; j = 1, . . . , ), and each eigenvalue will have a multiplicity of The following COD algorithm can be regarded as a (block-)generalized version of TAM and ESPRIT. Algorithm 1 (COD DOA-Finding Algorithm).
1. Select suitable parameters K and P according to Theorem 2 and Discussions 1 and 2, and form the COD data matrix X;
2. Apply SVD on X to obtain and
3. Block-wise extraction of U for
4. Find the eigenvalues of
5. Calculate the DOAs from the eigenvalues.

6.3. Path Separation
From the factorization of a basic, horizontally expanded or COD data matrix, we can obtain the matrix R via eigenvalue decomposition in Eq. (62). The path separation problem can be solved by applying the rotation matrix R to obtain a matrix satisfying the condition. According to Eqs. (15), (28), (33), (50), the operation in Eq. (56) which yields will serve this purpose if R can be uniquely determined (modulo some inconsequential scaling fac-
guarantees to satisfy the of Proposition 1 when the data matrix X is constructed to have the full multipath separability property. As an additional bonus, we are also assured of the uniqueness of the solution of (modulo some inconsequential scaling factors), since all the DOAs are assumed to be distinct. According to Eqs. (15), (28), (33), each row of displays a convolved form of a source signal (since the convolution operation is basically a weighted sum of shifted signals, each row of will have the form shown in, e.g., Eqs. (66), (67)). In order to utilize the (blind) SIMO signal-recovery scheme proposed in Section 7, we need to further identify two or more row vectors from which belong to the same user. If a source signal unfortunately has only a single path, then the COD approach (see the discussion below) may be adopted to artificially create multiple row vectors for the same user. 6.3.2. Path Separation for COD. However, unlike the horizontal expansion case mentioned previously, the rotation matrix R will not be unique for the COD factorization, due to the presence of repeated eigenvalues. More precisely, by applying SVD on X we can obtain
instead of Note that the rank of the COD data matrix X exceeds the number of paths, each of which corresponds to an eigenvalue This implies that there must be repeated eigenvalues in Algorithm 1. In fact, the multiplicity of the eigenvalue can be shown to equal Thus, there are right singular vectors corresponding to it. Ideally, these (row) singular vectors would represent the same source signal through the same path, but with varying delays ranging from to (cf. Section 5.1). For convenience, we focus on one path (i, j) and denote the rows in (cf. Section 5.1) corresponding to it as where
7.1. Signal Recovery via SIMO Formulation

The multipath diversity can potentially yield a viable bonus technique for signal recovery via the SIMO approach. After COD DOA path separation, we obtain multiple (more exactly, ) sequences representing convolved versions of the original signal of user i. This represents a typical blind SIMO (single-input multiple-output) problem formulation, in which multiple signals can be combined to recover one original signal. For our signal-recovery application, the blind SIMO equalization formulation may be applied in two ways: 1. While it is feasible for the basic or horizontally expanded matrix (as long as cf. Section 8.2), it involves some extra effort and challenges: (a) we need to identify which paths are from the same user; (b) the range of path delays could be so large (thus increasing the number of unknown parameters) as to render the SIMO solution less effective.
In practice, the rows of the first block in obtainable from SVD will all be in the form of some rather arbitrary linear combinations of (cf. Eqs. (66), (67)). Consequently, the final recovery of signals will still require a (blind) equalization as elaborated below (cf. Fig. 4).
2. As explained below, the COD approach obviates these difficulties, as each path (i, j) will itself generate multiple copies of convolved results of the same signal. Moreover, the range of path delays of the multiple copies will be much more manageable, since they all originate from the same path. More exactly, the range of delays (i.e. the number of complex parameters) is as compact as
7. COD Signal Recovery In the previous section, the path separation problem was solved by applying the rotation matrix R to obtain a matrix whose row vectors would satisfy the condition. Such a separation process accomplishes the task of removing most inter-user or inter-path interferences. The present section will handle the remaining task of reducing or removing ISI and ultimately recover the original signal. The multiple path information in basic, horizontally compensated data matrices or multiplicity of eigenvectors in COD factorization regarding the same user sequence would lead to a SIMO (single-input-multiple-output) formulation as depicted in Fig. 4. Note that such a SIMO formulation is most beneficial to use when the signal constellation structure is exploited (cf. [9], Chapter 5). Prior treatments of SIMO equalization can be found in reports concerning blind identification (e.g. [13–23]).
7.2. Frequency-Domain Total-Least-Squares Technique for Blind SIMO Channel Equalization

Here we adopt a so-called cross-correlation approach to equalization, originally proposed in [19, 22]. For simplicity, suppose we have successfully extracted two intra-user rows from the first block of either the matrix in Eq. (63) or that in Eq. (64). Since they represent the responses of two outputs driven by the same source signal through the SIMO channel (cf. Fig. 4), we can express them as
Note that, for simplicity, the dimension in the formulas reflects the matrix in Eq. (64) for the COD formulation. The analysis is also valid for the matrix in Eq. (63) for either the basic or the horizontally augmented data matrix, except that the dimension needs to be modified accordingly. Let the corresponding capitalized quantities denote the Fourier transforms of the two extracted rows, respectively. Since both rows are driven by the same source, the following relationship must hold:
Therefore we have and it follows that
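Assuming the relationship takes the standard cross-relation form, X1(w)H2(w) = X2(w)H1(w) (the two outputs share the same source spectrum), it can be checked numerically; the channels and source below are invented for illustration:

```python
import numpy as np

# Hypothetical SIMO pair: one source s through two FIR channels h1, h2.
rng = np.random.default_rng(0)
N = 64
s = rng.choice([-1.0, 1.0], size=N)      # source symbols
h1 = np.array([1.0, 0.5, -0.2])          # illustrative channel taps
h2 = np.array([0.8, -0.3, 0.1])

# Circular convolution keeps the frequency-domain relation exact.
x1 = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h1, N)))
x2 = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h2, N)))

# X1*H2 = (S*H1)*H2 = (S*H2)*H1 = X2*H1, bin by bin.
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
H1, H2 = np.fft.fft(h1, N), np.fft.fft(h2, N)
err = np.max(np.abs(X1 * H2 - X2 * H1))  # numerically zero
```

This identity is what makes the channel parameters recoverable from the two outputs alone.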
where

Based on the above, the well-known total-least-squares solution can be applied to estimate the parameters in Eq. (70). Subsequently, we can in turn derive the channel responses via Eq. (69) and finally take the inverse FFT (IFFT) to retrieve the original source signal. The frequency-domain blind SIMO signal recovery algorithm can be summarized in the following steps:

Algorithm 2 (Blind SIMO Algorithm for Signal Recovery).

1. Obtain the row span from Eq. (64);
2. Extract any two row vectors of the first block which correspond to one of the following: (a) the same path, i.e., rows associated with the same eigenvalue (in our simulation study, this is referred to as single-path recovery); (b) two different paths from the same user, which incurs some extra preprocessing effort in order to correctly identify two paths from the same user via signal signatures or codes, etc. (referred to as intra-user multipath recovery);
3. Take the Fourier transform of the two extracted row vectors;
4. Form the quantities entering Eq. (70);
5. Determine the parameters by solving the total-least-squares system given in Eq. (70);
6. Calculate the recovered spectrum via Eq. (69);
7. Take the IFFT to obtain the original source signal.

7.3. COD Procedure

Based on the above analysis and algorithms, we propose the following COD (Combined (temporal) Oversampling and (spatial) Displacement) procedure for path separation, parameter estimation, and signal recovery. It is worth noting that the COD algorithm can be effective in both cases, M > r and M < r, as demonstrated by the simulation results in Section 8.

Procedure 1 (COD Procedure).

1. Data Matrix Construction. Construct a COD data matrix X from the antenna observations by proper selection of the oversampling factor P and the spatial displacement K according to Theorem 2 and Discussions 1 and 2;
2. SVD Subspace Extraction. Apply the SVD and obtain the column span U and row span V for processing;
3. Principal Subspace Selection. Select the signal subspaces from the noisy subspaces U and V according to the singular value distribution;
4. Column Subspace Processing for DOA Finding. Apply Algorithm 1 to calculate the DOAs and obtain the rotation matrix R;
5. Path Separation. Apply the rotation matrix R (cf. Eqs. (63), (64));
6. Row Span Processing for Signal Recovery. Apply Algorithm 2 for signal recovery.
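Assuming Eq. (70) has the standard cross-relation total-least-squares form, the core of Algorithm 2 (steps 3 through 7) can be prototyped as follows. The channels, block length, and channel order are invented for the sketch; the paper's exact matrix construction may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 128, 3                              # block length, channel order (assumed)
s = rng.choice([-1.0, 1.0], size=N)        # unknown source
h1 = np.array([1.0, 0.4, -0.3])            # illustrative SIMO channels
h2 = np.array([0.7, -0.5, 0.2])
X1 = np.fft.fft(s) * np.fft.fft(h1, N)     # two extracted rows, freq domain
X2 = np.fft.fft(s) * np.fft.fft(h2, N)

# Cross-relation per bin: X1[k]H2[k] - X2[k]H1[k] = 0, with each H
# parameterized by L time-domain taps -> homogeneous system A @ [h1; h2] = 0.
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N)
A = np.hstack([-X2[:, None] * F, X1[:, None] * F])

# Total-least-squares solution: right singular vector associated with the
# smallest singular value (the null space of A in the noiseless case).
theta = np.linalg.svd(A)[2][-1]
h1_est, h2_est = theta[:L], theta[L:]

# Resolve the inherent complex scale ambiguity, then invert channel 1
# in the frequency domain and take the IFFT to recover the source.
scale = h1[0] / h1_est[0]
s_est = np.real(np.fft.ifft(X1 / np.fft.fft(h1_est * scale, N)))
```

The null-space solve is the noiseless limit of the total-least-squares step; with noisy rows the same smallest-singular-vector computation gives the TLS estimate.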
8. Simulation
The COD framework covers an extended family including basic TAM [3], ESPRIT [4], HT-TAM (horizontally/temporally-oversampled TAM), HS-TAM (horizontally/spatially-displaced TAM), and the combined COD. Our simulation study compares their relative performance versus the maximum thermal SNR (among all the paths).
8.1. DOA Estimation and Path Separation
8.1.1. Performance Comparison of DOA Estimation When M > r. We have conducted 500 experiments with M = 10, r = 7, d = 3, and thermal SNR ranging from 0 to 20 dB. In terms of finding all the paths, COD has a success rate of roughly 80%-99%, clearly superior to all the others (cf. Fig. 5(a)). In terms of all-path DOA estimation accuracy, COD and HT-TAM deliver performance (around 1.0 to 0.3 degrees of error) superior to the other two (cf. Fig. 5(b)). From a practical perspective, the dominant paths would often suffice for the purpose of signal recovery. As a selection criterion, we take advantage of the knowledge that the dominant paths' eigenvalues are more likely to comply with the unit-circle condition. As in the all-paths case, for dominant paths COD has an extraction rate of around 94% to 99% and an estimation error of around 0.75 to 0.2 degrees, well above the performance of the others (cf. Fig. 5(c) and (d)). Figure 6 depicts the DOAs estimated by TAM and COD when M > r. Note that COD successfully finds all the paths (see the outermost ring) while TAM misses quite a few (the second ring).

8.1.2. COD DOA Estimation Performance When M < r. When M < r, the traditional TAM/ESPRIT does not work. To investigate COD performance, we have conducted 200 experiments with M = 6, r = 10, d = 3, and SNR ranging from 0 to 20 dB. As shown in Fig. 7, COD has an all-path extraction rate of 60% to 90%, with 1.6 to 0.45 degrees of error in DOA estimation. For dominant paths, the success rate is around 85% to 95%, with an estimation accuracy of 1.1 to 0.35 degrees of DOA error. Figure 8 depicts the DOAs estimated by COD when M < r.
8.2. Signal Recovery
In this section we give several simulation examples of source signal recovery.

8.2.1. Signal Recovery When M > r. In our simulation, we choose a case with 3 users, 8 multipaths (i.e., r = 8), M = 10, and SNR = 10 dB.
1. HT-TAM Performance. Figure 9(a) shows the polar plot of the original and estimated DOAs. The '+' marks on the outermost ring indicate the estimated DOAs, and the inner dots stand for the original path DOAs.
The simulation is performed based on the discussion in Section 6.3.1 and the SIMO blind equalization algorithm. The recovered source signal clusterings (of 150 symbols) for the three users are shown in Fig. 9(b)-(d). While the result for user 1 is much better than the others, all of them can rely heavily on exploiting the finite-alphabet property to further reduce the signal recovery error.

2. COD Performance. Figure 10(a) shows the polar plot of the original and estimated DOAs of the paths.
(a) Figure 10(b)–(d) show the signal clustering obtained via single-path signal recovery.
(b) Figure 10(e)-(g) show the signal clustering obtained via intra-user multipath signal recovery. In this example, the same setting applies to all the paths. It is worth noting that the two approaches deliver essentially the same performance, which is far superior to that obtained via the HT-TAM solution. However, the intra-user multipath approach requires some extra preprocessing effort to identify different paths from the same user. In contrast, the single-path approach clearly enjoys the advantage that it needs no such preprocessing, thanks to the multiplicity of eigenvalues corresponding to the same path.
8.2.2. Signal Recovery When M < r. Here we choose d = 3, r = 8, M = 6, and SNR = 50 dB. As before, Fig. 11(a) shows the polar plot of the original and estimated DOAs of the paths.
1. Figure 11(b)-(d) show the signal clustering obtained via single-path signal recovery.
2. Figure 11(e)-(g) show the signal clustering obtained via intra-user multipath signal recovery.

These figures imply that even when M < r, COD still delivers satisfactory performance for DOA estimation and signal recovery around SNR = 50 dB. When the SNR decreases, the performance degrades severely. Nevertheless, COD makes path separation and signal recovery possible with a deficient number of antenna elements, and the performance can be expected to improve substantially if other signal properties (e.g., the finite-alphabet property of the digital signal) are further exploited.
9. Conclusion

The paper capitalizes on spatial-temporal diversity reception techniques to improve the performance of wireless communication systems. The core technologies are signal processing algorithms for multipath separation, parameter estimation, and signal recovery. The proposed adaptive smart antenna system offers the following promising features:

• Versatile space-time diversity combination, unifying a large family of subspace processing techniques.
• Effective SIMO channel equalization exploiting the diversity due to original intra-user multipaths and/or COD-induced multiplicity.

The proposed COD formulation can deal with the situation when the number of multipaths is larger than the number of antenna elements, a very demanding situation in which most traditional approaches fail. In order to fully exploit the multipath diversity, this paper also introduces a frequency-domain total-least-squares algorithm for SIMO signal recovery. As demonstrated by simulation, the COD algorithm proves superior to traditional subspace formulations when M > r, in terms of path separation, DOA estimation, and signal recovery. Even when the number of multipaths is very large and M < r, the COD algorithm still delivers satisfactory results. We believe the fundamental principles established in this paper on diversity interchangeability and cross compensation will have a significant impact on a broad spectrum of applications. In the future, we will take several emerging technological features into account and pursue further applications of the proposed algorithms for COD path separation and frequency-domain SIMO equalization in 3G/4G wireless communication systems.

Note

1. For simplicity we take gij(t) as a causal function of t. Strictly speaking, the integer Lij may also be affected by the oversampling rate if oversampling is used in data matrix construction. In short, Lij has to be large enough to cover all the samples within a symbol interval used for that particular path. See Section 3.2.1 for details on the extended definition of Lij when oversampling is involved.
References

1. R. Kohno, "Spatial and Temporal Communication Theory Using Adaptive Antenna Array," IEEE Personal Communications, vol. 5, no. 1, 1998, pp. 28-35.
2. R.O. Schmidt, "Multiple Emitter Location and Signal Parameter Estimation," IEEE Transactions on Antennas and Propagation, vol. AP-34, no. 3, 1986.
3. S.Y. Kung, K.S. Arun, and B. Rao, "State-Space and Singular-Value-Decomposition-Based Approximation Methods for the Harmonic Retrieval Problem," Journal of the Optical Society of America, Dec. 1983, pp. 1799-1811.
4. R. Roy and T. Kailath, "ESPRIT-Estimation of Signal Parameters via Rotational Invariance Techniques," IEEE Transactions on ASSP, vol. 37, no. 7, 1989, pp. 984-995.
5. A.J. Paulraj and C.B. Papadias, "Space-Time Processing for Wireless Communications," IEEE Signal Processing Magazine, vol. 14, no. 5, 1997, pp. 49-83.
6. H. Krim and M. Viberg, "Two Decades of Array Signal Processing Research," IEEE Signal Processing Magazine, vol. 13, no. 4, July 1996, pp. 67-94.
7. L. Tong, G. Xu, and T. Kailath, "Blind Identification and Equalization Based on Second-Order Statistics: A Time Domain Approach," IEEE Transactions on Information Theory, vol. 40, 1994, pp. 340-349.
8. A.J. van der Veen, S. Talwar, and A. Paulraj, "A Subspace Approach to Blind Space-Time Signal Processing for Wireless Communication Systems," IEEE Transactions on Signal Processing, vol. 43, 1995, pp. 2982-2993.
9. G.B. Giannakis, Y. Hua, P. Stoica, and L. Tong, Signal Processing Advances in Wireless and Mobile Communications: Trends
in Channel Estimation and Equalization, Prentice Hall, Upper Saddle River, NJ, 2000.
10. P. Stoica and A. Nehorai, "MUSIC, Maximum Likelihood, and Cramer-Rao Bound," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 5, 1989, pp. 720-741.
11. R.L. Johnson, "Eigenvector Matrix Partition and Radio Direction Finding Performance," IEEE Transactions on Antennas and Propagation, vol. AP-34, no. 8, 1986, pp. 985-991.
12. A.J. van der Veen, "Algebraic Method for Deterministic Blind Beamforming," Proceedings of the IEEE, vol. 86, no. 10, 1998, pp. 1987-2008.
13. M. Kristensson, D.T.M. Slock, and B. Ottersten, "Blind Subspace Identification of a BPSK Communication Channel," in Proceedings of the 30th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, vol. 2, Nov. 1996, pp. 828-832.
14. E. Moulines, J.F. Cardoso, A. Gorokhov, and P. Loubaton, "Subspace Methods for Blind Identification of SIMO-FIR Systems," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Atlanta, GA, May 1996, vol. 5, pp. 2449-2452.
15. E. Moulines, P. Duhamel, J.F. Cardoso, and S. Mayrargue, "Subspace Methods for the Blind Identification of Multichannel FIR Filters," IEEE Transactions on Signal Processing, vol. 43, no. 2, 1995, pp. 516-525.
16. V.U. Reddy, C.B. Papadias, and A.J. Paulraj, "Second-Order Blind Identifiability of Certain Classes of Multipath Channels Using Antenna Arrays," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Munich, Germany, April 1997, vol. 5, pp. 3465-3468.
17. H. Zeng and L. Tong, "Blind Channel Estimation Using the Second-Order Statistics: Algorithms," IEEE Transactions on Signal Processing, vol. 45, no. 8, 1997, pp. 1919-1930.
18. L. Tong and S. Perreau, "Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods," Proceedings of the IEEE, vol. 86, no. 10, 1998, pp. 1951-1968.
19. G. Xu, H. Liu, L. Tong, and T. Kailath, "A Least-Squares Approach to Blind Channel Identification," IEEE Transactions on Signal Processing, vol. 43, Dec. 1995, pp. 2982-2993.
20. L. Tong, G. Xu, and T. Kailath, "A New Approach to Blind Identification and Equalization of Multipath Channels," in Proceedings of the 25th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov. 1991, pp. 856-860.
21. A.J. van der Veen and A. Paulraj, "Singular Value Analysis of Space-Time Equalization in the GSM Mobile System," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Atlanta, GA, May 1996, vol. 2, pp. 1073-1076.
22. Y. Hua, "Maximum Likelihood for Blind Identification of Multiple FIR Channels," IEEE Transactions on Signal Processing, vol. 44, 1996, pp. 756-759.
23. Y. Hua and M. Wax, "Strict Identifiability of Multiple FIR Channels Driven by an Unknown Arbitrary Sequence," IEEE Transactions on Signal Processing, vol. 44, 1996, pp. 661-672.
Xinying Zhang received the B.S. degree from the Electronics Engineering Department, Tsinghua University, China, in 1998. She is now a Ph.D. candidate in the Department of Electrical Engineering at Princeton University. Her research interests lie in the areas of communications and signal processing, including channel equalization, space-time coded systems, and multicarrier communication systems.
[email protected]
Sun-Yuan Kung received his Ph.D. degree in Electrical Engineering from Stanford University. In 1974, he was an Associate Engineer at Amdahl Corporation, Sunnyvale, CA. From 1977 to 1987, he was a Professor of Electrical Engineering-Systems at the University of Southern California. Since 1987, he has been a Professor of Electrical Engineering at Princeton University. Since 1990, he has served as Editor-in-Chief of the Journal of VLSI Signal Processing Systems. He has served as a founding member and General Chairman of various international conferences, including the IEEE Workshops on VLSI Signal Processing in 1982 and 1986 (L.A.), the International Conference on Application Specific Array Processors in 1990 (Princeton) and 1991 (Barcelona), the IEEE Workshops on Neural Networks and Signal Processing in 1991 (Princeton), 1992 (Copenhagen), and 1998 (Cambridge, UK), the First IEEE Workshop on Multimedia Signal Processing in 1997 (Princeton), and the International Computer Symposium in 1998 (Tainan). Dr. Kung is a Fellow of the IEEE. He was the recipient of the 1992 IEEE Signal Processing Society Technical Achievement Award for his contributions on "parallel processing and neural network algorithms for signal processing". He was appointed an IEEE-SP Distinguished Lecturer in 1994. He received the 1996 IEEE Signal Processing Society Best Paper Award, and was a recipient of the IEEE Third Millennium Medal in 2000. He has authored more than 300 technical publications, including three books: "VLSI Array Processors" (Prentice Hall, 1988; with Russian and Chinese translations), "Digital Neural Networks" (Prentice Hall, 1993), and "Principal Component Neural Networks" (John Wiley, 1996).
[email protected]
Journal of VLSI Signal Processing 30, 257–271, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Multistage Nonlinear Blind Interference Cancellation for DS-CDMA Systems* DRAGAN SAMARDZIJA Bell Labs, Lucent Technologies, 791 Holmdel-Keyport Road, Holmdel, NJ 07733, USA NARAYAN MANDAYAM AND IVAN SESKAR WINLAB, Rutgers University, 73 Brett Road, Piscataway, NJ 08854, USA Received September 1, 2000; Revised May 16, 2001
Abstract. In this paper we propose a multistage nonlinear blind interference cancellation (MS-NL-BIC) receiver for direct-sequence code-division multiple-access (DS-CDMA) systems. The receiver uses higher-order statistics of the received baseband signal. Specifically, we use the second and fourth moments of the received signal to determine a component of the received vector that has significant mean energy and low variability of the energy, both of which are favorable characteristics for an interference cancellation scheme that uses hard decisions. The structure of the receiver is multidimensional and can be viewed as a matrix of receivers. Each row of the matrix consists of receivers that perform (hard-decision) cancellation of successive components that have significant mean energy and low variability of the energy. The columns of the matrix essentially resemble multistage receivers that iteratively refine the results of earlier stages. Simulation results show that, unlike linear receivers, the MS-NL-BIC is exceptionally efficient in systems with strong and highly correlated interferers, as may be the case in overloaded DS-CDMA systems.

Keywords: blind interference cancellation, successive interference cancellation, iterative refinement
1. Introduction

One of the driving forces of the next generation of wireless communications is the demand for higher data rates and higher capacity. Primary applications of higher data rates appear to be in the downlink direction (for example, typical internet data traffic such as the downloading of web pages). Furthermore, wideband direct-sequence code-division multiple-access (DS-CDMA) technology has emerged as one of the most promising candidates for future wireless systems (e.g., third-generation systems [1, 2]). It is therefore of great interest to investigate the performance of these systems and their viability for the higher data rates envisioned in the future.

*This work is supported in part by the New Jersey Commission on Science and Technology under the New Jersey Center for Wireless Communication Technologies. This paper was presented, in part, at VTC 2000, Boston, September 2000.

In DS-CDMA systems, the cross-correlations between signature (spreading) sequences are in general nonzero. This results in the near-far effect, in which multiple-access interference (MAI) can disrupt reception of a highly attenuated desired user's signal [3]. Baseband signal processing techniques such as multiuser detection and interference cancellation have the potential to combat this problem and provide higher performance at the cost of increased receiver complexity. Rapid progress in semiconductor technology has resulted in a significant increase in the processing speeds of core technologies (DSP, FPGA, and ASIC devices). Advances in VLSI technology, and the design of algorithms that are optimized with respect to
a specific implementation platform, are further narrowing the gap between the complexity of the algorithms and the available processing speeds (e.g., the solutions presented in [4, 5]). These and other developments suggest that transceivers in future wireless systems will employ some form of interference mitigation. Several multiuser receivers have been proposed (for example, see [6-9]). These receivers are denoted centralized because they require knowledge of the parameters (signature sequences, amplitudes, and timing) of all users in the system. They are therefore more suitable for processing at the base station. For the downlink, it is desirable to devise decentralized receivers that exploit knowledge of the desired user's parameters only. The use of short signature sequences simplifies the task of multiuser detection and interference cancellation, since a receiver can adaptively learn (estimate) the structure of the MAI [10]. Decentralized receivers may be further classified into data-aided and non-data-aided receivers. Data-aided adaptive multiuser detection does not require prior knowledge of the interference parameters, but it does require a training data sequence for every active user. For example, the adaptive receivers in [7, 11, 12] are based on the MMSE criterion, and the one in [13] is based on minimizing the probability of bit error. More recently, decision-feedback detectors using the MMSE criterion have been proposed [14, 15]. Unlike data-aided receivers, blind (or non-data-aided) multiuser detectors require no training data sequence, but only knowledge of the desired user's signature sequence and its timing. These receivers treat the MAI and background noise as a random process whose statistics must be estimated. The majority of blind multiuser detectors are based on estimation of second-order statistics of the received signal.
In [16], a blind adaptive MMSE multiuser detector is introduced (proven to be equivalent to the minimum output energy (MOE) detector). A subspace approach to blind multiuser detection is presented in [17], where both the decorrelating and the MMSE detector are obtained blindly. Further adaptive and blind solutions are analyzed in [18], with an overview given in [10]. A blind successive interference cancellation (SIC) scheme that uses second-order statistics is proposed in [19, 20].

In this paper we propose a novel blind interference cancellation receiver, which assumes knowledge of only the desired user's signature sequence. The receiver is based on determining the component of the received signal that has significant mean energy and low variability in the energy. It applies the minimum variance of energy and maximum mean energy (MVE-MME) criterion, which is described in Section 3. Furthermore, we analyze the relationship between this criterion and Godard's dispersion function [21] and the constant modulus (CM) criterion [22]. In Section 4, using the MVE-MME criterion, we derive the multistage nonlinear blind interference cancellation (MS-NL-BIC) receiver. The structure of the receiver is multidimensional and can be viewed as a matrix of IC stages. Each row of the matrix consists of IC stages that perform blind (hard-decision) successive interference cancellation. The columns of the matrix essentially resemble multistage receivers that iteratively refine the results of earlier stages. This particular multistage structure allows concurrent (parallel) execution of the IC stages, which makes it well suited to implementation on multiprocessor DSP and/or FPGA (or ASIC) platforms. Simulation results are presented in Section 5, and we conclude in Section 6.

2. Background
The received baseband signal, r(t), in a K-user asynchronous DS-CDMA additive white Gaussian noise (AWGN) system is

where, for the kth user, the received amplitude, the binary, independent, and equiprobable data, the signature sequence (assumed to have unit energy), and the relative time offset appear as indicated. T is the symbol period, and n(t) is AWGN with unit power spectral density, σ being the square root of the noise power. 2J + 1 is the number of data symbols per user per frame. It is well known that an asynchronous system with independent users can be analyzed as synchronous if equivalent synchronous users, which are effectively additional interferers, are introduced [3]. In this paper we consider the received signal r(t) over only one symbol period, synchronous to the desired user (k = 1). The discrete representation of the received signal in (1) can be written in vector form as
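The discrete snapshot model just described can be made concrete with a small simulation; all parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
M, L = 16, 4                  # chips per bit, equivalent synchronous users
# Unit-energy random signature vectors, one column per user.
S = rng.choice([-1.0, 1.0], size=(M, L)) / np.sqrt(M)
A = np.array([1.0, 2.0, 0.5, 1.5])        # received amplitudes
b = rng.choice([-1.0, 1.0], size=L)       # equiprobable data bits
sigma = 0.1
n = rng.standard_normal(M)
r = S @ (A * b) + sigma * n               # one received snapshot

# Conventional (matched-filter) statistic for the desired user k = 1:
b1_hat = np.sign(S[:, 0] @ r)
```

The matched-filter output contains the desired term plus MAI from the correlated signatures, which is exactly what the cancellation schemes below attack.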
where the number of interferers (L - 1 = 2(K - 1)) is doubled due to the equivalent synchronous-user analysis, and the signature and noise vectors have dimension M, the number of chips per bit. Consider the nonlinear centralized SIC scheme presented in [5, 8]. We give a brief outline of this scheme because its approach to nonlinear interference cancellation is generalized in this paper and later applied in a blind interference cancellation scheme (Section 4). In the nonlinear centralized SIC scheme [5, 8] it is assumed that the signature sequences are perfectly known (the centralized approach). The basic operations of the SIC algorithm are (see Fig. 1):

1. Detect one user with the conventional detector, i.e., the matched filter (MF).
2. Regenerate the baseband signal (vector) for this user.
3. Cancel the regenerated signal (vector) from the received baseband signal.
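The three steps above, repeated user by user, can be sketched as follows (a minimal illustration; the power-ordering heuristic and all names are ours, not the authors' exact implementation):

```python
import numpy as np

def centralized_sic(r, S):
    """Hard-decision successive interference cancellation:
    matched-filter detect, regenerate, cancel, one user at a time."""
    r = r.copy()
    L = S.shape[1]
    # Cancel in decreasing order of matched-filter output magnitude,
    # approximating the strongest-user-first rule.
    order = np.argsort(-np.abs(S.T @ r))
    bits = np.zeros(L)
    for k in order:
        y = S[:, k] @ r          # MF statistic (signed amplitude estimate)
        bits[k] = np.sign(y)     # step 1: detect
        r = r - y * S[:, k]      # steps 2-3: regenerate and cancel
    return bits, r

# With orthonormal signatures the cancellation is exact:
S = np.eye(8)[:, :3]                       # 3 users, 8 chips
bits, resid = centralized_sic(S @ np.array([2.0, -1.0, 1.5]), S)
# bits -> [1., -1., 1.]; resid -> all zeros
```

With correlated signatures the same loop runs unchanged, but residual MAI and bit errors propagate through later stages, which motivates the discussion that follows.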
Then this operation is repeated successively for all the users in the system. The idea is that successive cancellations reduce the MAI for the remaining users. The received vector after stage j of the cancellation is given by

where the preceding received vector and the corresponding estimates of the amplitude and the bit enter at each stage. The above implementation of the SIC algorithm is nonlinear in that it uses hard decisions in successive stages. A primary reason why the nonlinear centralized SIC cannot achieve the performance of the single-user lower bound (SULB) is erroneous bit decisions. When an error happens, it causes the SIC scheme to double the interference, which is, of course, undesirable. Furthermore, the doubled interference propagates through the following IC stages, degrading the overall performance of the receiver. For the same reason, this receiver is also not near-far resistant [3]. Further, imperfections in amplitude and delay estimates can lead to non-ideal regeneration and cancellation. Accordingly, to obtain the best results, the user with the highest signal-to-interference ratio (SIR) should be cancelled first. This condition is usually relaxed, and the user with the highest received power is cancelled first, followed by the second strongest, and so forth [8, 9]. Thus, it is desirable to identify users (or signature sequences) that have significant power (energy). Note that the SIC scheme requires amplitude estimates for the users, which implicitly requires low variability of the amplitude estimates for perfect cancellation.

Let us now generalize the nonlinear cancellation given by (3). In Eq. (3), let us replace the signature sequence with an arbitrary vector (not necessarily a signature sequence), the amplitude estimate with the square root of the estimated energy in that direction, and the bit estimate with the corresponding sign. Thus, the nonlinear cancellation in the jth stage is executed as
In the following, we identify favorable characteristics for such a vector to be successfully applied in the above scheme. We now analyze the estimate of the energy of the received signal in the direction of this vector, using the sample statistic
where N is the size of the averaging window (the number of samples), and n and m are time indices (omitted in the following). It is well known that the error of the estimate in (5) is directly related to the variance of the projection energy. Using the Chebyshev inequality [23], it can be shown that as this variance gets lower, the accuracy (mean-square error) of the energy estimate improves:
That is, for the vector whose projection energy has the lowest variance (among all vectors), the estimate of the energy is the most reliable, i.e., the mean-square error of the energy estimate is the lowest. Note that this variance is the variance of the energy of the received vector r in the given direction (i.e., the variance of the squared projection of r onto that vector). The above analysis leads us to conclude that a vector along which the energy of r has low variability and significant mean is desirable for the nonlinear cancellation given by (4). These characteristics offer reliable estimates of the corresponding energy and sign. In the following we present a scheme that blindly determines (estimates) such a vector and further applies it to realize a multistage nonlinear interference cancellation scheme.
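The sample energy statistic of Eq. (5) and its variance can be illustrated numerically; the strong direction, noise level, and amplitude below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 16, 2000                     # vector length, averaging window
s1 = np.ones(M) / np.sqrt(M)        # a strong "signature" direction
B = rng.choice([-1.0, 1.0], size=N)
R = 3.0 * np.outer(s1, B)           # amplitude-3 antipodal component
R += 0.2 * rng.standard_normal((M, N))   # background noise

def energy_stats(u, R):
    """Sample mean and variance of the projection energy (u^T r)^2."""
    e = (u @ R) ** 2
    return e.mean(), e.var()

m_sig, v_sig = energy_stats(s1, R)  # along the strong direction: mean ~ 9
u = rng.standard_normal(M)
u /= np.linalg.norm(u)
m_rnd, v_rnd = energy_stats(u, R)   # random direction: far smaller mean
```

Directions with large, stable projection energy are exactly the ones the MVE-MME criterion of the next section seeks.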
3. MVE-MME Optimization Criterion

We now present the optimization criterion used in deriving the nonlinear blind adaptive interference cancellation scheme. According to the analysis in Section 2, the goal of the optimization is to determine a component of the received vector r that has low variability in the energy and significant mean energy. We consider the squared output of the projection of r onto a vector. The vector v is obtained from the following nonlinear procedure:

where the optimizing vector is subject to the unit-norm constraint and 0 < µ < 1. The first function in (7) denotes the variance of the squared output and is given as

The second function in (7) denotes the square mean energy, given as

Consider the variance term. We now present the following proposition, which gives an intuitive description of the minimum variance of energy criterion obtained by minimizing the expression in (8).

Proposition 1. For the synchronous antipodal DS-CDMA system (described in (2)), with zero AWGN and linearly independent signature sequences, the unit-norm solutions fall into two groups:

(a) the modified matched filters (i = 1, ..., L), each of which corresponds to the decorrelating detector for user i [6];
(b) any vector N from the noise subspace.

Further, the above solutions correspond to the absolute minimum of the variance, where it is zero.

We present a proof of the above proposition in the Appendix. Let us now compare the variance term with the following well-known Godard dispersion function [21]:

where the dispersion constant is real and p is an integer. For a suitable constant and p = 2, the cost function in (10) is directly proportional to the variance term; in other words, it penalizes dispersions of the squared output away from the constant. Furthermore, the well-studied constant modulus (CM) cost function is defined as a special form of the function in Eq. (10) with p = 2. The CM cost function is widely used for blind equalization (see [22] and references therein). Later in this paper, our criterion, which may be viewed as a slightly modified form of the CM cost function, is applied to blind interference cancellation in DS-CDMA systems. Let us now consider the square mean energy term. It can be shown that the unit-norm vector that maximizes it is the same vector that maximizes the mean energy. It is shown in [19, 20] that this vector is the eigenvector corresponding to the largest eigenvalue
Multistage Nonlinear Blind Interference
of the input covariance matrix Instead of the mean energy is applied in (7) such that both terms and are of the same order (i.e., fourth order). Based on the above, the vector v, which is defined in the Eq. (7), corresponds to that component of the received signal r that has low variability in the energy and significant mean energy. As discussed in Section 2, these characteristics are favorable for application of the vector v in a nonlinear interference cancellation scheme. The parameter µ is used to control which of these two characteristics (low variability of the energy or significant mean energy) is dominant. For example, if µ = 0, the optimization in (7) is equivalent to minimum variance of energy (MVE), and for µ = 1 it is equivalent to maximum mean energy (MME) optimization criterion. Therefore, we refer to (7) as the minimum variance of energy and maximum mean energy (MVE-MME) optimization criterion. Note that in Subsection 3.2 we revisit issues related to the parameter µ and propose its design. 3.1.
261
where F is the number of consecutive symbols used for the approximation. Gradient of f (u, n) is defined as
We can use a stochastic gradient algorithm [24] that solves (7) as
Adaptive Solution
We now present an adaptive algorithm that solves (7). We exploit some properties of the functions given in (8) and (9). Let us assume that the input process r is wide sense stationary (WSS) and also that
where l is the index of the iteration step, and is a certain scalar which defines the length of the adaptation step. The constraint is forced after every iteration, where stands for the estimate of v in lth iteration step. 3.2.
where n and m are time indices, and In other words, we assume that the energy of r in direction of the vector u is uncorrelated in different symbol (bit) intervals. Using the properties of WSS processes and (11) we can show that (8) can be written as
for all integer n and m, (9) can be written as
Similarly, the expression
According to (12) and (13), and using sample statistics, the function f (u) is defined as an approximation of as
Choice of Parameter µ
As addressed earlier, the parameter µ is used to control which of the two characteristics of v (low variability of the energy or significant mean energy) is dominant. We choose µ as
Note that the above definition is similar to the inverse of the normalized kurtosis [23] defined as but further analysis of this relationship is beyond the scope of this paper. Furthermore, as an approximation of the above definition, we set
262
Samardzija, Mandayam and Seskar
dom process. In addition, we note in Eq. (19) converges towards µ that corresponds to the continuous uniformly distributed random process (CU in Fig. 2).
in Eq. (15). Considering characteristics of the parameter µ that is defined by (17), it can be shown that is a real-valued Gaussian random process, µ 1. If is 1/3. 2. Let denote µ corresponding to which is a uniform discrete real-valued M-ary random process, i.e., where A is the maximum absolute value of Based on the above definition, it can be shown that
Figure 2 depicts the parameter µ as a function of the alphabet size of a uniform, real-valued M-ary random process. As a reference, we present µ that corresponds to a continuous uniformly distributed random process (denoted as CU), and a Gaussian random process (denoted as GP). Note that the function is decreasing with alphabet size M, or in other words,
Furthermore, we may note that µ is maximum at M=2 i.e., for a real-valued bipolar ran-
From the above properties of the parameter µ, we may draw the following conclusions. When the received signal at the output of the correlator, is a real-valued Gaussian random process (i.e., u lies in the noise subspace of the received vector r), then µ takes a value close to its minimum thereby steering the MVEMME criterion towards minimizing variance of energy (MVE). On the other hand, when the output is a close to a discrete-valued random process (as in the case when MAI dominates), µ approaches its maximum value thus steering the MVE-MME criterion towards maximizing mean energy (MME). In the course of adaptation, the value of µ given in Eq. (18) changes according to the projection i.e., u being in the noise (Gaussian) part of the signal subspace or the interference (discrete-valued random process) subspace. 4.
Application of the MVE-MME Criterion in the Multistage Nonlinear Blind IC Receiver
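Before detailing the receiver, the MVE-MME adaptation that each stage reuses can be made concrete. The following numpy sketch implements one plausible reading of (7) and (14)–(16): minimize f(u) = (1 − µ)·Var(e) − µ·(E[e])², where e_n = (uᵀr_n)² is the instantaneous energy, by a gradient step with the unit-norm constraint re-imposed afterwards. Since the equations appear only by number in the text, the exact cost, step size and notation here are assumptions, not the authors' exact expressions.

```python
import numpy as np

def mve_mme_step(u, R, mu, step=0.01):
    """One projected-gradient step for the assumed MVE-MME cost
    f(u) = (1 - mu) * Var(e) - mu * (E[e])^2,   e_n = (u^T r_n)^2,
    with expectations replaced by sample means over the F columns of R."""
    y = u @ R                                  # projections u^T r_n, n = 1..F
    e = y ** 2                                 # instantaneous energies e_n
    me = e.mean()                              # sample mean energy E[e]
    g_e = (2.0 * R * y).mean(axis=1)           # gradient of E[e]:   mean(2 y_n r_n)
    g_e2 = (4.0 * R * (e * y)).mean(axis=1)    # gradient of E[e^2]: mean(4 e_n y_n r_n)
    grad = (1 - mu) * (g_e2 - 2 * me * g_e) - mu * 2 * me * g_e
    u = u - step * grad                        # descend the assumed cost
    return u / np.linalg.norm(u)               # re-impose the unit-norm constraint
```

For a rank-one input (a single antipodal component), the variance term vanishes along that component and a step rotates u towards it, which matches the intended behavior of the criterion.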
We now present the multistage nonlinear blind interference canceler, denoted as MS-NL-BIC. The structure of the receiver is multidimensional and can be viewed as a matrix of receivers (i.e., a matrix of IC stages). The MS-NL-BIC receiver consists of P rows and Q columns, where each entry of the matrix corresponds to an interference cancellation stage indexed by its row and column (i = 1, ..., P, j = 1, ..., Q). The following steps are executed in each stage, operating on that stage's input vector:
1. Add back the portion of the received signal that was cancelled in the stage occupying the same column but the previous row of the matrix, forming the input of the current stage. For the first row (i = 1), nothing is added back, because no cancellation is performed prior to this row.
2. Use the adaptation rule in (16) (with the stage input replacing r) to estimate the stage vector (see Fig. 3). Note that this vector is further processed in the very same manner as an interferer signature sequence in the case of the nonlinear centralized SIC scheme (see Section 2).
3. Estimate the energy of the corresponding output. The estimation should be reliable because this component of the received vector has low variability in the energy (due to the variance term in (7)).
4. Detect the sign of the output. Detection should be reliable because the component has significant mean energy (due to the mean-energy term in (7)) and low variability.
5. Perform the nonlinear cancellation as in (22), where the cancelled portion is given by (23) (see Fig. 4).
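The per-stage processing described above can be sketched as follows. This is a minimal numpy sketch under assumed notation: the stage filter `v` is taken as given (e.g., produced by the MVE-MME adaptation of step 2), and the cancellation form sqrt(energy)·sign(output)·v is an assumed reading of (22)–(23), not the authors' exact implementation.

```python
import numpy as np

def ic_stage(R, v, add_back=None):
    """One interference-cancellation stage (steps 1 and 3-5, sketched).

    R        : (M, F) array, F received vectors of length M (stage input)
    v        : (M,) unit-norm vector estimated for this stage (step 2 assumed done)
    add_back : portion cancelled in the same column of the previous row, or None
    """
    if add_back is not None:                 # step 1: add back previous cancellation
        R = R + add_back
    y = v @ R                                # projections v^T r_n for each symbol n
    energy = np.mean(y ** 2)                 # step 3: estimate the component energy
    signs = np.sign(y)                       # step 4: detect the sign per symbol
    cancelled = np.sqrt(energy) * np.outer(v, signs)  # step 5: portion to remove
    return R - cancelled, cancelled
```

If the stage input consists of exactly one antipodal component along v, the cancellation removes it completely, which is the idealized behavior the reliability arguments in steps 3 and 4 aim at.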
The above procedure is executed successively within the ith row of the matrix, where for each new stage the input vector is the output of the previous stage (see Eq. (22)). The structure of the ith row (i.e., the horizontal topology) is depicted in Fig. 5. From the above, each row may be viewed as a blind equivalent of the nonlinear centralized SIC scheme, where the estimated components replace the actual signature sequences. After a sufficient number Q of stages in the ith row, cancellation is repeated in the (i + 1)th row (see Fig. 6), whose input vector is the output of the ith row. Each stage of the (i + 1)th row is used to iteratively refine the cancellation executed in the corresponding earlier stage: with appropriate delay, the vector that was cancelled in that stage is added back (step 1), and the stage processing is performed again (steps 2 to 5).

In Section 5, Q is selected to be equal to the number of dominant interferers, but in the more general case this number might not be known at the receiver. A number of different schemes can be employed to determine the number of IC stages within each row of this receiver. Here, we propose the following simple scheme. In the first row (i = 1), a stage may be determined to be the last stage in the row (Q = j) if the estimate of the energy drops below a certain threshold, as stated in (24). This simple scheme assumes that the energy estimate is decreasing with the column index j. In addition, the scheme is based on the assumption that a component whose mean energy is below the threshold is not relevant for the cancellation. Furthermore, the number of rows P is directly related to the performance of the receiver; thus, the trade-off between performance and complexity can be controlled by the number P. After a sufficient number P of rows, detection of the desired user is performed using a linear detector (e.g., a matched filter).

Note that implicit in the above algorithm is the assumption that the interferers are stronger than the desired user. If the desired user is strong, then additional processing is required to ensure that the desired signal is not cancelled out before detection. Briefly, we propose a corresponding scheme based on a threshold rule. In the last row (i = P), each cancelled vector is projected onto the desired user signature sequence, and the absolute value of the projection is compared against a predefined threshold. If the absolute value exceeds the threshold, the vector (defined in Eq. (23), where i = P) should be added back to the input of the linear detector. Having inspected all the vectors, the addition is performed as follows,
where the summation corresponds to all vectors that have met the criterion in (25). Further, linear detection of the desired user is performed using the resulting vector as the input signal. Note that the centralized multiuser detection scheme proposed in [2, 25] applies an iterative (recursive) refinement approach similar to that presented above: it executes centralized SIC and iterative refinement in order to improve channel estimates for the users in the system. Unlike the MS-NL-BIC receiver, however, the scheme in [2, 25] assumes knowledge of all signature sequences of the users in the system (i.e., it is not blind). Further, the multistage structure of the receiver allows concurrent (parallel) execution of the IC stages. This inherent parallelism of the algorithm is a favorable characteristic for its implementation on multiprocessor DSP and/or FPGA (or ASIC) platforms (see [4, 5] and references therein).

5. Simulation Results
We consider a synchronous AWGN DS-CDMA system using randomly generated signature sequences with processing gain M = 8. The users are independent and the following cases are analyzed:

1. System with L = 8 users (fully loaded) and equal-energy interferers i = 2, ..., 8.
2. System with L = 4 users and equal-energy interferers i = 2, ..., 4.
3. System with L = 12 users (overloaded system); three strong equal-energy interferers i = 2, ..., 4, and eight interferers with the same energy as the desired user, i = 5, ..., 12.
The crosscorrelation profile of the users with respect to the desired user is depicted in Fig. 7. Note that in this particular example the crosscorrelations are very high, except for users 5, 10 and 12, which happen to be orthogonal to the desired user. In case 1 the system has users i = 1, ..., 8, and in case 2 only i = 1, ..., 4. In all our results the input sample covariance matrix is estimated according to (27),
where N is the size of the averaging window (number of samples) and i is the time index (omitted in the following text). Performance of the conventional matched filter (MF), the centralized MMSE receiver (denoted as MMSE), the blind MMSE receiver (BMMSE, the detector of [17]) and the single-user lower bound (SULB) are used as benchmarks for evaluation of the MS-NL-BIC receiver. The centralized MMSE assumes perfect knowledge of all the signature sequences, the amplitudes and the variance of the AWGN. Performance of the MS-NL-BIC is evaluated for the MF (MS-NL-BIC(MF)) and the blind MMSE (MS-NL-BIC(MMSE)), where these linear detectors are used for detection of the desired user after the cancellation (after P rows and Q IC stages within each row). The MS-NL-BIC(MMSE) uses the sample covariance matrix of the output signal of the last IC stage. Note that the desired user energy is set to be much lower than the energy of the interferers, and, as discussed in Section 4, prevention of excessive cancellation of the desired user is not performed. In each IC stage, the performance is measured after 1000 symbols used for the estimation in (16) and (27), and F = 5 in (14), (15) and (18). Regarding the parameter µ, we apply the approximation given by (18). We assume knowledge of the number of dominant interferers Q, which is the number of columns of the receiver matrix.

For case 1, Fig. 8(a) depicts bit-error rate (BER) as a function of signal-to-background-noise ratio (SNR) (with respect to the desired user). The results are obtained after a total of P = 4 rows and Q = 7 columns, which is where the BER reaches its minimum; additional IC stages do not introduce any improvement for this particular example. For SNR = 8 dB, BER versus the total number of IC stages is presented in Fig. 8(b). Note that in this example the MS-NL-BIC(MMSE) performs better than the MS-NL-BIC(MF). In this case, MAI is still present after the last IC stage; therefore, the BMMSE detector can further improve the performance of the MS-NL-BIC receiver in case 1. Equivalent results for case 2, with P = 4, Q = 3 and SNR = 8 dB, are shown in Fig. 9(a) and (b), respectively. In Fig. 9(b), note that the MS-NL-BIC(MMSE) converges faster with respect to the number of IC stages, but in the end the MS-NL-BIC(MF) offers lower BER for this particular example. In this case, MAI is almost completely removed after the last IC stage. Introducing the BMMSE as the linear detector in the MS-NL-BIC receiver may then cause a drop in performance due to estimation errors in the covariance matrix (27) from which the BMMSE detector is derived (this particular topic is analyzed in [20]).

We now consider the performance of our receiver in case 3, an overloaded DS-CDMA system. Figure 10 depicts BER versus SNR (with respect to the desired user). The same figure presents the performance of the matched filter (denoted as MF-8) for the system without the strong interferers (only the desired user and eight equal-energy interferers, which is identical to ideal cancellation of users 2, 3 and 4). The receiver with Q = 3 and P = 4 is used. From the results in Fig. 10, we note that the MS-NL-BIC completely cancels the strong users, i.e., the MS-NL-BIC(MF) performance is identical to the MF-8 performance (in Fig. 10 their characteristics overlap). From these results, it is seen that the MS-NL-BIC significantly outperforms the linear receivers (MF and BMMSE). The performance of the linear receivers is expected: it is well known that they do not perform well in systems with strong
and highly correlated interferers (with respect to the desired user signature sequence) [3], as may be the case in overloaded systems. These results suggest that the MS-NL-BIC may be applied as a blind solution for overloaded systems with strong interferers.

Let us now study the characteristics of the estimates (i = 1, ..., P, j = 1, ..., Q). In all cases that we have observed, as the processing progresses from row to row, the estimates within each column of the receiver matrix approach one of the actual signature sequences (with a sign ambiguity), with each column corresponding to a different signature sequence. For example, in case 1, we observe how the estimates within columns j = 1, 4, 7 approach the actual signature sequences of users 2, 5 and 4, respectively. The absolute value of the crosscorrelation is depicted in Fig. 11, where the abscissa represents the index of the row (i = 1, ..., 4). The results appear to be similar for all columns j = 1, ..., 7. Note that in the last row the estimates are practically identical to the signature sequences, i.e., the absolute value of the normalized crosscorrelation is 1. We have consistently observed these results in our simulations; a detailed mathematical analysis and explanation of this phenomenon is of future interest.
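For reference, the two estimation building blocks used in these simulations — the windowed sample covariance of (27) and the blind MMSE detector of [17] built from it — can be sketched as follows. The scaling conventions are assumptions for illustration.

```python
import numpy as np

def sample_covariance(R):
    """Windowed sample covariance, cf. (27): average of r_i r_i^T over the
    N samples forming the columns of the (M, N) array R."""
    return (R @ R.T) / R.shape[1]

def blind_mmse(C, s1):
    """Blind MMSE detector for the desired signature s1, built from the sample
    covariance C (a sketch of the familiar form c proportional to C^{-1} s1 [17]);
    the normalization c^T s1 = 1 is a convenient, assumed convention."""
    c = np.linalg.solve(C, s1)
    return c / (s1 @ c)
```

Because the detector is built entirely from the received-signal statistics and the desired signature, it needs no knowledge of the interferers, which is what makes it a natural blind benchmark here.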
6. Conclusion
We have introduced the MVE-MME optimization criterion, which is then used to implement the MS-NL-BIC receiver. The receiver is based on determining the component of the received vector that has significant mean energy and low variability in the energy. The MS-NL-BIC consists of multiple IC stages and can be viewed as a matrix of IC stages. The columns of the matrix resemble multistage receivers that iteratively refine the results of earlier stages, while each row corresponds to a blind equivalent of the nonlinear centralized SIC scheme. The ability of the receiver to exceed the performance of linear receivers is observed via simulation results. The scheme is seen to be particularly effective for systems with very strong interferers that are strongly correlated with the desired user signature sequence. It may therefore be a viable solution for implementation in overloaded systems with strong interferers.

Appendix: Proof of Proposition 1

Consider a synchronous antipodal DS-CDMA system with zero AWGN and linearly independent signature sequences. Let us consider the component of the signature sequence of user i that is orthogonal to the other users' signature sequences; it can be written as in (28), where the expansion coefficients are determined by the Kronecker-delta orthogonality conditions. Projection of the received vector r (see Eq. (2)) onto this component yields, in the absence of additive background noise, the expression in (29), where the factors in (29) are the amplitude and the bit of user i, respectively. From (29), it follows that the squared projection output is constant, which results in (30). Note also the relation in (31).
Therefore, the absolute minimum of the variance of energy is zero. Based on (30) and (31), it reaches this absolute minimum for the group (a) solutions (i = 1, ..., L); using the same approach, the same result can be obtained for their sign-reversed counterparts (i = 1, ..., L). This proves part (a) of Proposition 1. Further, if u is any vector from the noise subspace, then by the definition of the noise subspace [26], the relation in (32) follows. Consequently, (33) holds, which proves that for any vector u from the noise subspace the variance of energy reaches the absolute minimum of zero. This concludes the proof of part (b) of Proposition 1.
Note 1. Notation: ẑ denotes an estimate of z.
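Proposition 1(a) can be sanity-checked numerically: in a zero-noise synchronous antipodal system, projecting onto the normalized component of one signature orthogonal to the others (the decorrelating direction) yields an output whose energy is exactly constant, i.e., the variance of energy is zero. The following sketch uses assumed notation (random signatures, illustrative amplitudes), since the proposition's symbols are given only in prose here.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, F = 8, 3, 200
S = rng.standard_normal((M, L))            # linearly independent signatures (columns)
A = np.array([1.0, 3.0, 2.0])              # illustrative user amplitudes
B = rng.choice([-1.0, 1.0], size=(L, F))   # antipodal bits
R = S @ (A[:, None] * B)                   # zero-AWGN received vectors, cf. (2)

# Component of s_1 orthogonal to span{s_2, ..., s_L}: the decorrelating direction.
Q, _ = np.linalg.qr(S[:, 1:])
s1_perp = S[:, 0] - Q @ (Q.T @ S[:, 0])
v = s1_perp / np.linalg.norm(s1_perp)

e = (v @ R) ** 2                           # output energies over the F symbols
assert np.allclose(e, e[0])                # constant energy -> zero variance of energy
```

Only user 1's term survives the projection, so every symbol produces the same energy A_1²(vᵀs_1)², exactly as the proof argues via (29)–(30).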
References

1. T. Ojanpera and R. Prasad, "An Overview of Air Interface Multiple Access for IMT-2000/UMTS," IEEE Communications Magazine, vol. 36, 1998, pp. 82–95.
2. F. Adachi, M. Sawahashi, and H. Suda, "Wideband DS-CDMA for Next-Generation Mobile Communications Systems," IEEE Communications Magazine, vol. 36, 1998, pp. 56–69.
3. S. Verdú, Multiuser Detection, Cambridge: Cambridge University Press, 1998.
4. I. Seskar and N. Mandayam, "Software-Defined Radio Architectures for Interference Cancellation in DS-CDMA Systems," IEEE Personal Communications, vol. 6, no. 4, 1999, pp. 26–34.
5. I. Seskar, K. Pedersen, T. Kolding, and J. Holtzman, "Implementation Aspects for Successive Interference Cancellation in DS/CDMA Systems," Wireless Networks, no. 4, 1998, pp. 447–452.
6. R. Lupas and S. Verdú, "Linear Multiuser Detectors for Synchronous Code-Division Multiple-Access Channels," IEEE Transactions on Information Theory, vol. 35, 1989, pp. 123–136.
7. U. Madhow and M. Honig, "MMSE Interference Suppression for Direct-Sequence Spread-Spectrum CDMA," IEEE Transactions on Communications, vol. 42, 1994, pp. 3178–3188.
8. P. Patel and J. Holtzman, "Analysis of a Simple Successive Interference Cancellation Scheme in DS/CDMA Systems," IEEE JSAC, Special Issue on CDMA, vol. 12, 1994, pp. 796–807.
9. A. Duel-Hallen, "A Family of Multiuser Decision-Feedback Detectors for Asynchronous Code-Division Multiple-Access Channels," IEEE Transactions on Communications, vol. 43, 1995, pp. 421–434.
10. U. Madhow, "Blind Adaptive Interference Suppression for Direct-Sequence CDMA," Proceedings of the IEEE, Special Issue on Blind Identification and Equalization, 1998, pp. 2049–2069.
11. S. Miller, "An Adaptive Direct-Sequence Code-Division Multiple-Access Receiver for Multiuser Interference Rejection," IEEE Transactions on Communications, vol. 43, 1995, pp. 1746–1755.
12. P. Rapajic and B. Vucetic, "Adaptive Receiver Structure for Asynchronous CDMA Systems," IEEE JSAC, vol. 12, 1994, pp. 685–697.
13. N. Mandayam and B. Aazhang, "Gradient Estimation for Sensitivity Analysis and Adaptive Multiuser Interference Rejection in Code Division Multiple Access Systems," IEEE Transactions on Communications, vol. 45, 1997, pp. 848–858.
14. G. Woodward, R. Ratasuk, and M. Honig, "Multistage Multiuser Decision Feedback Detection for DS-CDMA," in International Conference on Communications, Vancouver, June 1999, vol. 1, pp. 68–72.
15. R. Ratasuk, G. Woodward, and M. Honig, "Adaptive Multiuser Decision Feedback for Asynchronous Cellular DS-CDMA," in 37th Allerton Conference, 1999.
16. M. Honig, U. Madhow, and S. Verdú, "Blind Adaptive Multiuser Detection," IEEE Transactions on Information Theory, vol. 41, 1995, pp. 944–960.
17. X. Wang and V. Poor, "Blind Multiuser Detection: A Subspace Approach," IEEE Transactions on Information Theory, vol. 44, 1998, pp. 677–690.
18. S. Ulukus and R. Yates, "A Blind Adaptive Decorrelating Detector for CDMA Systems," IEEE JSAC, vol. 16, 1998, pp. 1530–1541.
19. D. Samardzija, N. Mandayam, and I. Seskar, "Blind Interference Cancellation for the Downlink of CDMA Systems," in Conference on Information Sciences and Systems, Princeton University, March 2000, vol. 2, pp. TP3.17–TP3.22.
20. D. Samardzija, N. Mandayam, and I. Seskar, "Blind Successive Interference Cancellation for DS-CDMA Systems," IEEE Transactions on Communications, 2001, to appear.
21. D. Godard, "Self-Recovering Equalization and Carrier Tracking in Two-Dimensional Data Communication Systems," IEEE Transactions on Communications, vol. 28, no. 11, 1980, pp. 1867–1875.
22. S. Haykin (Ed.), Unsupervised Adaptive Filtering, Vol. 2: Blind Deconvolution, 1st ed., Wiley Interscience, 2000.
23. B. Picinbono, Random Signals and Systems, Englewood Cliffs, NJ: Prentice Hall, 1993.
24. Y.Z. Tsypkin, Adaptation and Learning in Automatic Systems, NY: Academic Press, 1971.
25. M. Sawahashi, Y. Miki, H. Andoh, and K. Higuchi, "Pilot Symbol-Aided Coherent Multistage Interference Canceller Using Recursive Channel Estimation for DS-CDMA Mobile Radio," IEICE Transactions on Communications, vol. E79-B, 1996, pp. 1262–1270.
26. D. Ramakrishna, N. Mandayam, and R. Yates, "Subspace Based Estimation of the Signal to Interference Ratio for CDMA Cellular Systems," IEEE Transactions on Vehicular Technology, vol. 49, 2000.
Dragan Samardzija received the B.S. degree in electrical engineering and computer science in 1996 from the University of Novi Sad, Yugoslavia, and the M.S. degree in electrical engineering in 2000 from Rutgers University. He is currently working on his Ph.D. thesis at the Wireless Information Network Laboratory (WINLAB), Rutgers University. He also works at the Wireless Research Laboratory, Bell Labs, Lucent Technologies, where he is involved in research in the field of MIMO wireless systems. His research interests include detection, estimation and information theory for MIMO wireless systems, interference cancellation and multiuser detection for multiple-access systems. He also works on implementation aspects of various receiver architectures and implementation platforms.
[email protected]
Narayan Mandayam received the B.Tech (Hons.) degree in 1989 from the Indian Institute of Technology, Kharagpur, and the M.S. and
271
Ph.D. degrees in 1991 and 1994 from Rice University, Houston, TX, all in electrical engineering. Since 1994, he has been at the Wireless Information Network Laboratory (WINLAB), Rutgers University, where he is currently an Associate Professor in the Department of Electrical & Computer Engineering and also serves as Associate Director of WINLAB. He also served as the interim Director of WINLAB from January to July 2001. His research interests are in various aspects of wireless data transmission, including software-defined radios for interference cancellation, wireless system modeling and performance, multiaccess protocols and radio resource management with emphasis on pricing. Dr. Mandayam is a recipient of the Institute Silver Medal from the Indian Institute of Technology, Kharagpur, in 1989 and the National Science Foundation CAREER Award in 1998. He was selected by the National Academy of Engineering in 1999 for the Annual Symposium on Frontiers of Engineering. He serves as an Associate Editor for IEEE Communications Letters.
[email protected]
Ivan Seskar received a B.S. degree in electrical engineering and computer science from the University of Novi Sad, Yugoslavia, and an M.S. degree in electrical engineering from Rutgers University. Since 1991 he has been at the Wireless Information Networks Laboratory (WINLAB) at Rutgers, The State University of New Jersey, where he is currently Associate Director of IT. His research interests include software and reconfigurable radios, spread spectrum systems, multiuser detection, mobility management and traffic simulations.
[email protected]
Journal of VLSI Signal Processing 30, 273–291, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Adaptive Interference Suppression for the Downlink of a Direct Sequence CDMA System with Long Spreading Sequences* COLIN D. FRANK Advanced Radio Technology Group, Global Telecommunications Solution Sector, Motorola, 1501 W. Shure Drive, IL27-3G6, Arlington Heights, IL 60004, USA EUGENE VISOTSKY Communication Systems and Technologies Labs, Motorola Labs, Schaumburg, IL 60196, USA UPAMANYU MADHOW† Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA Received October 30, 2000; Revised July 2, 2001
Abstract. A simple approach for adaptive interference suppression for the downlink (base-to-mobile link) of a direct sequence (DS) based cellular communication system is presented. The base station transmits the sum of the signals destined for the different mobiles, typically attempting to avoid intra-cell interference by employing orthogonal spreading sequences for different mobiles. However, the signal reaching any given mobile passes through a dispersive channel, thus destroying the orthogonality. In this paper, we propose an adaptive linear equalizer at the mobile that reduces interference by approximately restoring orthogonality. The adaptive equalizer uses the pilot’s spreading sequence (which observes the same channel as the spreading sequence for the desired mobile) as training. Simulation results for the linear Minimum Mean Squared Error (MMSE) equalizer are presented, demonstrating substantial performance gains over the RAKE receiver. Long spreading sequences (which vary from symbol to symbol) are employed, so that the equalizer adapts not to the time-varying spreading sequences, but to the slowly varying downlink channel. Since the inter-cell interference from any other base station also has the structure of many superposed signals passing through a single channel, the adaptive equalizer can also suppress inter-cell interference, with the tradeoff between suppression of intra- and inter-cell interference and noise enhancement depending on their impact on the Mean Squared Error (MSE). Keywords:
CDMA, MMSE, equalization, adaptive, multi-user detection
*Part of this work was presented at the 36th Annual Allerton Conference on Communication, Control and Computing, Monticello, Illinois, September 1998.
†The work of U. Madhow was supported in part by the National Science Foundation under grant NCR96-24008 (CAREER) and by the Army Research Office under grant DAAD19-00-1-0567.

1. Introduction

IS-95, the second generation US digital cellular standard, is based on direct sequence (DS) code division multiple access (CDMA), in which each user uses all of the available bandwidth, with different spreading sequences being employed to distinguish between different users. Worldwide third generation cellular standards also appear to have converged on DS-CDMA. The primary traffic on present day cellular networks is voice, so that the forward link or downlink (base-to-mobile) and the reverse link or uplink (mobile-to-base) carry similar traffic volumes. However, in the near future, the downlink is expected to carry significantly
more traffic due to applications such as web browsing by mobile terminals. The downlink is therefore expected to be the bottleneck in traditional cellular architectures, which currently allocate equal bandwidth resources to both the uplink and downlink. Even for the current cellular telephony applications, the uplink has several advantages over the downlink: uplink power control eliminates the near-far problem, whereas downlink power control leads to a possible near-far problem by design (since the base transmits at higher power to mobiles who are further away, multipath components for the signal destined for such users can seriously impact reception at a mobile close to the base station); receiver processing can be more sophisticated at the base for the uplink than at the mobile for the downlink. Interference suppression at the mobile receiver appears to be an attractive method of enhancing the downlink, since the small number of interfering sources (the neighboring base stations), together with the information available from the pilot signal, present some practical opportunities for improvement. In IS-95 as well as proposed CDMA-based third generation standards, the downlink is designed to be free of intra-cell interference under ideal conditions. The base station transmits the sum of the signals destined for the different mobiles in the cell, with orthogonal spreading sequences assigned to different mobiles. Of course, this orthogonality is destroyed when the signal passes through a multipath channel, leading to intracell interference. The key idea in this paper is to equalize the channel at the mobile receiver, thus approximately restoring the orthogonality among the spreading sequences and reducing the interference. The problem is exactly analogous to that of equalization of a narrowband system, except that the chips now play the role of symbols. 
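The chip-level equalization analogy can be illustrated with a toy pilot-trained LMS equalizer; LMS is the standard stochastic approximation to the linear MMSE criterion used here. The channel, step size, delay and noise level below are illustrative assumptions, not the system studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
F_len, n_chips, mu = 5, 4000, 0.01
h = np.array([1.0, 0.5, 0.2])               # illustrative dispersive downlink channel

pilot = rng.choice([-1.0, 1.0], size=n_chips)   # pilot chips, known at the mobile
rx = np.convolve(pilot, h)[:n_chips]            # received chip stream
rx += 0.05 * rng.standard_normal(n_chips)       # background noise

w = np.zeros(F_len)                          # equalizer taps
err = []
for n in range(F_len, n_chips):
    x = rx[n - F_len:n][::-1]                # most recent received chips first
    e = pilot[n - 1] - w @ x                 # error against the known pilot chip
    w += mu * e * x                          # LMS update toward the MMSE solution
    err.append(e * e)

# The squared error shrinks as the equalizer adapts to the (slowly varying) channel.
assert np.mean(err[-200:]) < np.mean(err[:200])
```

Because every user's chips pass through the same channel as the pilot's, the taps learned against the pilot serve the desired user too, which is exactly why the pilot can act as a perpetual training sequence.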
In a given chip interval, the base station transmits the sum of the chips (times the symbols) destined for each user, together with the chip for the pilot sequence. All elements in this sum observe the same channel, so that, for the linear equalization strategy considered here, an equalizer for the pilot sequence works for the desired user as well. The pilot sequence can therefore play the role of a perpetual training sequence for adaptive equalization. The adaptation is based on the linear MMSE criterion. The idea of equalizing the downlink channel for the purpose of restoring orthogonality of the user spreading sequences was first proposed in [1], where a block equalizer is developed for a hybrid time division CDMA system. Equalization of the downlink for a DS-CDMA system was first independently introduced in [2], [3] and [4]. In [2], an MMSE solution for the downlink equalizer of a CDMA system is computed; the use of orthogonal spreading sequences, however, is not considered. In [3], an adaptive MMSE equalizer is presented, while [4] considers its zero-forcing counterpart. Equalization of the downlink has subsequently been proposed in numerous other works, such as [5–11], to name a few. See also the paper by T.P. Krauss, W.J. Hillery, and M.D. Zoltowski in this journal issue. The idea of restoring orthogonality on the downlink through the use of equalization is especially useful since it leads to an adaptive receiver performing interference suppression even for CDMA systems with no spreading code periodicities. In [3], an adaptive algorithm for training the downlink equalizer is developed based on the pilot training sequence. A very similar algorithm was proposed later in [10]. An adaptive algorithm not requiring the pilot training sequence is proposed in [7]. Finally, an adaptive reduced-rank equalizer for sparse downlink channels is introduced in [12].

The rest of this paper is organized as follows. Section 2 introduces the system model. Section 3 develops the basic MMSE chip equalizer designed to equalize the downlink channel. This equalizer is a function of the user spreading sequences, and hence time-variant and computationally complex. A time-invariant version of the MMSE chip equalizer, denoted the average MMSE chip equalizer, is also derived in Section 3. A symbol-level implementation of the average MMSE chip equalizer, amenable to adaptive implementation, is proposed in Section 4. Simulation results illustrating the performance of the adaptive receiver in a variety of channel conditions are shown in Section 5. Finally, conclusions are presented in Section 6. Throughout this paper, a^T, a^* and a^H denote the transpose, conjugate and Hermitian transpose of a vector a, respectively.
2. System Model
Since the proposed equalization technique mitigates inter- as well as intra-cell interference, we consider a two base station model in this paper. However, for simplicity of exposition, we first explain the structure of the signals transmitted from a single base station. The discussion is then extended to accommodate multiple base stations by using superscripts to index the different base stations.
Adaptive Interference Suppression
Consider a given cell. The base station simultaneously transmits data to the active users (or mobiles) in its cell, and transmissions to all users are symbol-synchronous. To facilitate channel estimation at the mobile, the base station also transmits a synchronous pilot signal. The users and the pilot are assigned distinct spreading sequences, which enables the mobile to separate the pilot signal and the desired transmission from multiple-access interference. In IS-95, every base station is assigned a unique complex spreading sequence c(n), where n indexes the chip time. All spreading sequences within the cell are derived from this basic spreading sequence. Let us consider this operation in detail. Each user, including the pilot, is assigned an orthogonal Walsh code of length N, with the all-ones Walsh code reserved for the pilot. Let w_k(n) denote the Walsh code for user k, expressed as a periodic sequence of period N, with w_k(n) ∈ {+1, −1} for all n. Letting w_k denote the Walsh code for user k over a symbol period as an N-length vector, we have, due to the orthogonality of the Walsh codes over a symbol period, that (1/N) w_k^T w_j = δ(k − j), where δ(·) is the delta function. Over each symbol interval, the user's spreading sequence is generated by multiplying the corresponding Walsh code and the base station's spreading sequence. That is,

s_k(n) = w_k(n) c(n),
where k = 0, 1, ..., K and K is the number of simultaneous users in the cell. Denoting the pilot as the user with subscript 0, we also have that s_0(n) = c(n), since w_0(n) = 1 for all n. Note that the spreading sequences for different users, including the pilot, inherit the orthogonality of the Walsh codes over each symbol interval. The base station's (or pilot's) spreading sequence c(n) is itself a long complex spreading sequence of period much larger than N, and is well modeled as aperiodic with random independent and identically distributed components. Note that, under this model, any user's spreading sequence can be determined from knowledge of the pilot's spreading code and the particular Walsh code being used by the user. The equalization method proposed here applies both to the preceding orthogonal spreading model and to a random spreading model in which the s_k(n) are modeled as independent and identically
distributed for different k and n. Furthermore, under both models we have

E{s_k(n) s_k*(m)} = δ(n − m),  E{s_k(n) s_k(m)} = 0,   (1)

for all k, n and m. Let b_k(n) denote the symbol sequence for mobile k, expressed at the chip rate. Thus, b_k(n) is piecewise constant over symbol intervals of length N chips, where N denotes the processing gain; the l-th symbol sent by user k is the common value of b_k(n) for lN ≤ n ≤ (l + 1)N − 1.
Letting A_k denote the amplitude (the square root of the transmit power) assigned to user k, the net transmitted sequence, expressed at the chip rate, destined for that user is A_k b_k(n) s_k(n). As noted above, we denote the pilot as user 0, with b_0(n) = 1 for all n, since the pilot is unmodulated. The net chip-rate transmitted sequence from the base station is therefore given by

u(n) = Σ_{k=0}^{K} A_k b_k(n) s_k(n),
where K again denotes the total number of users transmitted by the base station, so that the total number of traffic channels utilized by the base station, including the pilot channel, is K + 1. Since the spreading sequences for different users are orthogonal over each symbol interval, it is possible to recover the symbols without incurring intra-cell interference by despreading over a symbol interval. Thus, the despreader output is proportional to the l-th symbol of user k. The complex baseband signal transmitted from the base station is given by

x(t) = Σ_n u(n) ψ(t − n T_c),
where T_c is the chip interval (1/T_c the chip rate) and ψ(t) is the chip waveform. In IS-95, the Fourier transform of the chip waveform is roughly square-root raised cosine at the chip rate. Ideally, ψ(t) would be chosen as square-root Nyquist at the chip rate. Typically, the receive filter is also chosen as a square-root Nyquist pulse so that the composition of the transmit and receive filters is
Frank, Visotsky and Madhow
Nyquist at the chip rate, thus eliminating inter-chip interference in channels with no multipath. In this case, the transmitted sequence u(n) would be recovered by sampling at the chip rate, and the symbol-level orthogonality of the spreading sequences would then permit recovery of the symbols without incurring intra-cell interference. In practice, the channel to the desired mobile is dispersive, leading to inter-chip, and hence intra-cell, interference. The output of the receive filter is therefore typically sampled faster than the chip rate (say at a rate of m/T_c, where m is the oversampling factor). For implementing the RAKE receiver, the delays of the significant multipath components of the channel are estimated (typically with a resolution of T_c/m), and maximal-ratio combining is performed. For simplicity, however, we consider a chip-rate discrete-time channel model in this paper. For a time-invariant channel, the chip-rate sequence at the output of the receive filter is given by the convolution of the chip-rate transmitted sequence with the chip-rate discrete-time channel to the desired mobile, plus interference and noise.

2.1. Two Base Station Model
The notation for this model is as before, except for the addition of a superscript identifying the base station. The received sequence r(n) is given by

r(n) = (h_1 * u_1)(n) + (h_2 * u_2)(n) + n(n),   (2)

where (a * b)(n) denotes the convolution between the sequences {a(n)} and {b(n)}. For i = 1, 2, u_i(n) is the chip-rate sequence transmitted by base station i, and h_i(n) is the channel from base station i to the desired mobile. Finally, {n(n)} is the additive noise, which accounts for the interference from distant cells as well as receiver thermal noise. To obtain (2), synchronization at the chip rate between the two base stations is assumed. Synchronization is assumed only for notational convenience and has no effect on either the implementation or the performance of the receiver. Due to the interference-averaging effect of CDMA, the noise is well modeled as white Gaussian noise (WGN) of variance σ² per dimension. Note that under this system model there are two types of other-cell interference impinging on the desired cell: non-white interference from the adjacent base station, given by the second summand in (2), and white interference from distant base stations, included in the noise term of (2). This distinction is necessary since linear equalization is capable of suppressing only non-white interference. Detailed results on the suppression capabilities of non-white other-cell interference are presented in Section 4.3. It is convenient to normalize the channels from the two base stations by absorbing the channel gains into the powers of the transmitted sequences u_i(n), i = 1, 2, so that Σ_n |h_i(n)|² = 1.
Without loss of generality, let the desired mobile have index 1 and be served by base station 1. Consistent with IS-95 terminology, we refer to the total power received by the mobile from the desired base station as I_or, and to the total power received by the mobile from the other base station as I_oc. That is,

I_or = Σ_{k=0}^{K_1} (A_k^(1))²  and  I_oc = Σ_{k=0}^{K_2} (A_k^(2))²,

where K_i denotes the number of mobiles in cell i.
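The signal construction of this section can be sketched as follows (sizes and amplitudes are illustrative, not from the paper: N = 8, three traffic users, and a random unit-modulus QPSK long code standing in for the base station's aperiodic spreading sequence):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, n_sym = 8, 3, 100                      # processing gain, traffic users, symbols

# Walsh codes from a Sylvester-Hadamard construction; row 0 (all ones) is the pilot.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
W = H2
for _ in range(2):                           # 2x2 -> 4x4 -> 8x8
    W = np.kron(W, H2)

# Long QPSK spreading code, modeled as i.i.d. unit-modulus chips.
c = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), N * n_sym)
A = np.array([1.0, 0.7, 0.7, 0.7])           # pilot and user amplitudes (assumed)
b = np.vstack([np.ones(n_sym), rng.choice([-1.0, 1.0], (K, n_sym))])  # pilot all ones

# s_k(n) = w_k(n) c(n); the net chip-rate sequence is u(n) = sum_k A_k b_k(n) s_k(n).
s = np.stack([np.tile(W[k], n_sym) * c for k in range(K + 1)])
u = sum(A[k] * np.repeat(b[k], N) * s[k] for k in range(K + 1))

l = 5                                        # an arbitrary symbol index
z = u[l * N:(l + 1) * N] @ s[2][l * N:(l + 1) * N].conj() / N   # despread user 2
print(abs(z - A[2] * b[2, l]) < 1e-9)        # recovered free of intra-cell interference
```

Because the s_k inherit the per-symbol Walsh orthogonality, despreading over one symbol interval returns A_k b_k exactly; any chip-level channel distortion would break this, which is what the equalizers of the following sections are designed to repair.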
3. Linear MMSE Equalization

As noted initially in [1], and subsequently for IS-95 spreading sequences in [3], the benefit of equalization on the forward link is greatly enhanced when orthogonal sequences are used to separate the different users. With orthogonal spreading sequences, the multiple-access intra-cell interference introduced by multipath is completely eliminated through the use of zero-forcing equalization, i.e., full inversion of the downlink channel. In order to minimize the noise enhancement associated with zero-forcing equalization, and also to suppress non-white additive noise (such as that produced by other base stations), finite impulse response (FIR) MMSE equalization is used. The proposed receiver architecture is depicted in Fig. 1. Our objective is to estimate the chip-rate sequence u_1(n) (the desired signal), from which the desired symbol sequence is recovered by despreading with the desired spreading sequence. Consider an arbitrary time index n and a finite-length equalizer, of length L, that uses a block of L received samples to estimate u_1(n). Let the length-L received vector be defined as

r(n) = [r(n), r(n + 1), ..., r(n + L − 1)]^T.

The estimate, or the decision statistic, is given by the complex inner product

û_1(n) = c(n)^H r(n).

It is shown below that, in general, the MMSE equalizer c(n) for u_1(n) is time-varying, i.e., a function of n. Without loss of generality, the impulse responses from both base stations are assumed to have finite support. With this assumption, the received vector r(n) can be written in matrix form as

r(n) = H^(1) u^(1)(n) + H^(2) u^(2)(n) + n(n),   (3)

where H^(i) is the banded Toeplitz convolution matrix formed from the channel h_i(n), the vector u^(i)(n) collects the chips transmitted by base station i that contribute to r(n), and n(n) is the corresponding noise vector. The expression for the MMSE estimate of the desired user's signal at time n, from the observation r(n), is given by [13]

û_1(n) = p(n)^H R(n)^{-1} r(n),

where

p(n) = E{u_1*(n) r(n)}   (4)

and

R(n) = E{r(n) r(n)^H},   (5)

and E{·} denotes the expectation over the user symbols and noise, conditioned on the user spreading sequences. If the user symbols are independent and identically distributed, the p-th element of the mean vector (4) can be written in closed form in terms of the desired user's amplitude, spreading sequence and channel over the symbol interval ⌊n/N⌋ containing chip n, where ⌊·⌋ denotes the floor operation. The correlation matrix R(n) is a square matrix of dimension L × L whose elements (6) depend on the channel matrices, the noise variance σ², the user amplitudes, and the auto-correlations of the user spreading sequences; if, as assumed above, the user symbol sequences are independent and identically distributed, the expectation in (5) can be computed in closed form.
To summarize the above development, the MMSE equalizer for u_1(n) from the observation r(n) is given by

c(n) = R(n)^{-1} p(n).   (7)

Because the correlation matrix and the mean vector are functions of the chip index n, the MMSE equalizer is also a function of n. In fact, if the spreading sequences are periodic with a period P divisible by N (as in IS-95), the correlation matrix, the mean vector, and the equalizer in (7) all have period P as well. As a result, the equalizer in (7) has the following properties when applied at mobile j, served by base station i.

1. The mean vector depends on the base station index i, the user index j, and the chip index n. Consequently, the mean vector cannot be measured as a time average, and must be computed for each chip index n. Calculation of the mean requires knowledge of the desired user's spreading sequence and power as well as the channel impulse response from the base station of interest.
2. The correlation matrix also changes with the chip index n. As a result, the correlation matrix cannot be measured as a time average. Instead, R(n) and its inverse must be computed anew for each chip index n. This computation requires knowledge of the channel matrices H^(1) and H^(2), and the set of spreading sequences and user amplitudes used by both base stations.

It is apparent that the MMSE chip equalizer is computationally very complex and depends on detailed parameters of the transmitted signal. Furthermore, a new chip equalizer must be recomputed for each chip index n, which renders its implementation, under current complexity constraints, impractical. For this reason, it is useful to define an average MMSE chip equalizer, which will be shown to be both substantially less complex and amenable to adaptive implementation.

3.1. Average MMSE Chip Equalizer

We define the average MMSE chip equalizer for u_1(n) from the observation r(n) as

c_av = (E{R(n)})^{-1} E{p(n)},   (8)

where the outer expectations average over the user spreading sequences. At first glance, it is reasonable to expect that the final expression for the average MMSE chip equalizer depends upon whether the random or the orthogonal spreading model is used to compute the required expectations. However, we now show that this is not the case. From (4) and (6), it is apparent that the instantaneous MMSE chip equalizer depends on the user spreading sequences only through auto-correlations. But from (1) the statistical auto-correlation properties under both spreading models of interest are the same. Hence, the average MMSE chip equalizer is the same. In particular, by taking the expectations in (4) and (6), it is straightforward to show that under both spreading models, we have
where R_av and p_av denote the correlation matrix and the mean vector averaged over the user spreading sequences. In summary, the average MMSE estimate of u_1(n) from the observation r(n) is given by û_1(n) = c_av^H r(n), where

c_av = R_av^{-1} p_av.   (10)

Note that, within a multiplicative constant, the average MMSE chip equalizer is the same for all users transmitted from base station one. Furthermore, because the correlation matrix does not depend on the base station of interest, the average MMSE chip equalizer for the second base station is obtained by simply replacing the mean vector for base station one with that for base station two in (10).
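A direct computation of the average chip-level solution can be sketched as follows (a hypothetical chip-spaced model: the conv_matrix helper, the channels, the powers, and the delay d are all illustrative assumptions; R_av and p_av stand for the spreading-averaged correlation matrix and mean vector of (10)):

```python
import numpy as np

def conv_matrix(h, L):
    # Banded Toeplitz convolution matrix: (H @ u) is L chip-rate outputs of u through h.
    M = len(h)
    H = np.zeros((L, L + M - 1))
    for i in range(L):
        H[i, i:i + M] = h[::-1]
    return H

L = 10                                       # equalizer length (chip-spaced taps)
h1 = np.array([1.0, 0.6, 0.3])               # serving-cell channel (assumed)
h2 = np.array([0.7, 0.7])                    # other-cell channel (assumed)
I_or, I_oc, sigma2 = 1.0, 0.5, 0.1           # received powers and noise variance

H1, H2 = conv_matrix(h1, L), conv_matrix(h2, L)
# Averaged over the spreading sequences, the chips look i.i.d., so:
R_av = I_or * H1 @ H1.T + I_oc * H2 @ H2.T + sigma2 * np.eye(L)
d = len(h1) - 1                              # delay of the desired chip in the window
p_av = I_or * H1[:, d]                       # mean vector: the desired chip's signature
c_av = np.linalg.solve(R_av, p_av)           # average MMSE chip equalizer, c = R^{-1} p

def sir(v):
    # Desired-chip energy over residual interference-plus-noise at the output.
    sig = I_or * (v @ H1[:, d]) ** 2
    return sig / (v @ R_av @ v - sig)

print(sir(c_av) >= sir(H1[:, d]))            # MMSE solution beats the matched filter
```

The scaling of p_av is immaterial: within a multiplicative constant, the same c_av serves every user of base station one, and swapping in the corresponding column of H2 gives the equalizer for the other base station.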
The average MMSE chip equalizer can be calculated directly from knowledge of the channels h_1(n) and h_2(n), from which the channel matrices H^(1) and H^(2) can be computed, the sum powers received from the serving and interfering base stations, I_or and I_oc, and the noise variance σ². The channels can be estimated, within a positive multiplicative constant, by time-averaging correlations of the received samples with the pilot spreading sequence, and the average signal correlation matrix can be estimated as the time average of r(n) r(n)^H. As a result, the complexity associated with direct computation of the average MMSE chip equalizer is not unreasonable.

3.2. SIR Expressions
Consider an arbitrary (time-varying) linear estimator v(n) used to estimate u_1(n) from the observation r(n). A standard performance measure for the estimator v(n) is the signal-to-interference ratio (SIR) of the estimate, which is defined as the ratio of the desired signal energy to the interference variance. That is, the SIR is given by
where the expectations are taken over the user symbol sequences, conditioned on user spreading sequences. For the optimum time-varying chip equalizer in (7), this expression simplifies to the following
Note that the equalizer and, consequently, the SIR expression are time-varying. More specifically, the mean-square energy, the noise variance, and the ratio of these two quantities are periodic in the chip index n. Consistent with the time-invariant structure of the average MMSE chip equalizer in (8), we define a time-invariant average SIR performance measure for this equalizer as the ratio of the average desired signal energy to the average interference variance, where the averaging is performed over the user spreading sequences as well as the user symbols. That is,
4. Symbol-Level Implementation of the Equalizer
The low-complexity average MMSE equalizer (8) is well suited for stationary channels, in which the equalizer parameters are time-invariant and a single set of equalizer coefficients can be precomputed and reapplied. In dynamic situations, where computation of the equalizer coefficients in real time is not possible due to complexity constraints, an adaptive implementation of the equalizer is desirable. However, although time-invariant, the average chip equalizer in (8) is not well suited for adaptive implementation. An adaptive implementation of the chip equalizer would adapt using the pilot's signal or the desired user's signal, which,
for heavily loaded systems, are training signals with a very low SIR. As described in [3], to resolve this problem, the despreading operation can be directly incorporated into the equalization cost function, which enables (i) adaptive implementation of the equalizer using a training signal at a higher SIR, and (ii) direct estimation of the desired user's symbols, rather than chips. We refer to this approach as symbol-level equalization. Hence, in contrast to the chip-level equalizers, whose objective is to estimate the chip sequence of the user of interest, the objective of the symbol-level equalizer is to estimate the symbols of the user of interest directly. The basic functional diagram of the symbol-level equalizer, shown in Fig. 2, is similar to that of the chip equalizer. The distinction is that the chip equalizer measures the equalization error at the chip rate, prior to the despreader, whereas the symbol-level equalizer measures the error after the despreading and integration, at the symbol rate. Note that the equalizer coefficients are fixed for the duration of the
symbol period. In general, the equalizer coefficients can vary from symbol to symbol. Two implementations of the symbol-level equalizer are possible. In the conceptually simpler implementation shown in Fig. 2, the symbol-level equalizer precedes the despreading operation (pre-despreading implementation). The output of the equalizer is obtained at the chip rate and despread to arrive at the symbol estimate. For analytical purposes, it is convenient to consider the post-despreading implementation of the symbol-level equalizer, where the despreading precedes equalization. The post-despreading implementation of the equalizer is shown in Fig. 3. As above, let the equalizer consist of chip-spaced taps. For the j-th symbol of the desired user, the equalizer input is a vector formed by correlating the received sequence, {r(n)}, with the spreading sequence of the user of interest, over the chip index interval [jN, (j + 1)N − 1], at each of the
tap positions within the span of the equalizer. Note that in this implementation the equalizer is applied once per symbol, and the output of the equalizer is at the symbol rate. If the same set of equalizer coefficients is used for the pre- and post-despreading implementations, the equalizer yields the same output at the symbol rate and, hence, the MMSE solution for both symbol-level implementations is the same. We proceed with the post-despreading implementation to derive the optimum symbol-level equalizer. Let the input vector to the equalizer corresponding to the j-th symbol of the desired user be denoted as previously, with the subscript denoting the user index and the superscript the serving base station. The l-th element of this vector is given by
Adaptive Interference Suppression
where the quantities are as previously defined. Using the matrix notation defined in (3), the input vector can be expressed in a compact vector form as follows
Let the MMSE symbol-level equalizer for the j-th symbol of the desired user be given by the inverse of the covariance matrix of the equalizer input applied to the corresponding mean vector, where the covariance matrix and the mean vector are defined implicitly by (13). The MMSE estimate of the desired symbol is given by the inner product of the equalizer with the input vector. In general, the MMSE equalizer is time-varying and depends on both the user of interest and the particular symbol to be estimated. From (13), the expressions for the mean vector and the covariance matrix can be derived similarly to Eqs. (4)–(6). Although such a derivation does not involve any conceptual difficulties, the resulting expressions are quite involved and are omitted for conciseness. A time-invariant average symbol-level equalizer is obtained by averaging the mean vector and the covariance matrix in (13) over the user spreading sequences. Thus, the average symbol-level equalizer is given by (14). Since the user spreading sequences are orthogonal over a symbol period under the orthogonal spreading model and non-orthogonal under the random spreading model, the expression for the average covariance is different under the two spreading models. Nevertheless, as with the chip-level equalizer, the average symbol-level equalizer will be shown to be identical (within a non-negative multiplicative factor) under the two spreading models. For the random spreading model, the average covariance matrix in (14) is given by (15). For the orthogonal spreading model, (15) needs to be modified to account for the orthogonality of the user spreading sequences over a symbol period; in particular, we obtain (16). Hence, the covariance matrices under the two spreading models differ only by a rank-one matrix. The mean vector is the same under the two spreading models.
A key fact used in the derivation of (15) and (16) is that, under both spreading models employed here,

E{s_k^(i)(n) s_k^(i)(m)} = 0

for all k and i (see (1)). Note that this property does not hold for real spreading sequences. From (15) and (16), and with the use of the matrix inversion lemma [14], it can be shown that the average symbol-level equalizer takes a common form (17) for both spreading models,
where the scalar appearing in (17) is defined in (18), taking one value for the orthogonal spreading model and another for the random spreading model.

4.1. Adaptive Implementation

We note the following properties of the average symbol-level equalizer:

1. The average symbol-level equalizer is the same, within a positive multiplicative constant, for the random and orthogonal spreading models.
2. Analogous to the average chip equalizer, the average symbol-level equalizer is the same, within a positive multiplicative constant, for all users transmitted from the same base station.
3. Under both spreading models, the average symbol-level and chip-level equalizers are equal, within a positive multiplicative constant. More generally, the average symbol-level equalizer is the same regardless of the number of chips combined in the post-despreading implementation. This follows from (17) by noting that the equalizer is independent, within a positive multiplicative constant, of the processing factor N.
Properties two and three provide a great degree of flexibility to the adaptive implementation of the average symbol-level MMSE equalizer. In particular, for an IS-95-type CDMA system, property two enables the equalizer to be trained using the pilot channel but used on any traffic channel originating from the same base station as the training pilot. This is especially desirable for an IS-95-type system, where all base stations transmit continuous pilots, in a sense providing a perpetual training sequence for the equalizer. A conceptual diagram of the proposed adaptive equalizer using the pilot for adaptation is shown in Fig. 4. To generate the symbol-level decision statistic, the output of the equalizer is despread over N chips using the desired user's spreading sequence. To facilitate training, the chip-rate output of the equalizer is despread over N_1 chips and compared with the pilot's transmitted sequence (a sequence of +1's for our system model). The difference is then used as an error signal in standard adaptive algorithms, such as LMS or RLS [15]. In this implementation, then, the equalizer coefficients are adjusted N/N_1 times per symbol. By property three, the average symbol-level MMSE equalizer is independent of N and N_1. Hence N_1 is a free parameter which provides additional flexibility to the adaptive implementation. In particular, it allows a trade-off between the speed of
adaptation and the SIR of the training signal. In fast fading channels, it may be desirable to adapt the equalizer multiple times per symbol, which is accomplished by setting N_1 < N. However, this leads to a lower SIR in the post-despread training signal. Conversely, in static channels, it may be more appropriate to set N_1 = N. Numerical simulations of the proposed symbol-level adaptive equalizer in time-varying and static channel conditions are presented in Section 5. Finally, note that, unlike the matched filter receiver and equalizers based on the exact computation of the MMSE solution, the adaptive equalizer does not require any explicit estimation of channel parameters. In particular, the adaptive equalizer requires only coarse timing information in order to capture the energy from the desired chip in its observation window. Provided that the equalizer spans a sufficiently long observation window, the equalizer is robust against timing uncertainty and is capable of tracking timing variations.

4.2. Average SIR Expressions and Comparison with RAKE Receiver
The instantaneous SIR attained by the optimum time-varying symbol-level equalizer for the j-th symbol of the desired user is given by (11), with the symbol-level mean vector and covariance matrix substituted in place of their chip-level counterparts. Following the definition of the average SIR for the average chip equalizer, we define an average SIR for the average symbol-level equalizer as follows
where the average covariance matrix is as defined in (14). Consider now the average SIR of the RAKE receiver. RAKE reception is a standard demodulation structure for direct-sequence systems operating in frequency-selective channels. In general, the RAKE receiver can be viewed as a filter matched to the convolution of the chip waveform with the channel, followed by the despreading operation. For the discrete-time chip-synchronous channel model adopted here, the matched filter for the transmission from the first base station is simply the conjugate of the channel, which is a scalar multiple of the conjugate of the average mean vector. Substituting the matched filter for the equalizer in (19), the average SIR attained by the RAKE receiver for the desired user is given by
where the constants are as defined above. As is well known, linear MMSE equalizers attain the highest SIR among all linear receivers. In particular, it is always true that the average SIR of the equalizer is no smaller than that of the RAKE receiver.
4.2.1. Comparison for Orthogonal Spreading Sequences.
For the orthogonal spreading model, the expressions for the average SIRs of the equalizer and of the RAKE receiver can be simplified even further by using the relationships that follow from the averaged covariance matrix, where the relevant scalar is defined in (18). Performing the substitutions and using the matrix inversion lemma, we obtain the simplified average SIR expressions for the orthogonal spreading model.
Note that, by virtue of its definition, the resulting gain depends only upon the noise power spectral density and the channels. Hence, the average SIR performance gain of the equalizer over the RAKE receiver, defined as the ratio of the corresponding SIRs, is independent of the individual user parameters, such as user power, and depends only upon the overall system parameters and the channels. Thus, even in a heavily loaded system using orthogonal spreading sequences, in which the desired signal-to-interference ratio is quite low, the gains achievable through the use of the average MMSE symbol-level equalizer are not diminished, unlike the case of single-user equalization, in which the MMSE and matched filter receivers coincide at low signal-to-noise ratios. This desirable feature is validated by the simulation results presented in Section 5.
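The optimality underlying this comparison, namely that the linear MMSE receiver attains the highest SIR among all linear receivers, can be spot-checked numerically in the averaged model (the channels, powers, and random search below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def conv_matrix(h, L):
    # Banded Toeplitz convolution matrix for a chip-spaced FIR channel.
    M = len(h)
    H = np.zeros((L, L + M - 1))
    for i in range(L):
        H[i, i:i + M] = h[::-1]
    return H

L = 8
H1 = conv_matrix(np.array([1.0, 0.5, 0.25]), L)     # serving-cell channel (assumed)
H2 = conv_matrix(np.array([0.6, 0.8]), L)           # other-cell channel (assumed)
R = H1 @ H1.T + 0.8 * H2 @ H2.T + 0.05 * np.eye(L)  # averaged covariance, assumed powers
g = H1[:, 2]                                        # desired chip's signature vector

def sir(v):
    s = (v @ g) ** 2
    return s / (v @ R @ v - s)                      # signal over interference-plus-noise

c = np.linalg.solve(R, g)                           # linear MMSE direction, c = R^{-1} g
best_random = max(sir(rng.standard_normal(L)) for _ in range(2000))
print(best_random <= sir(c) + 1e-9)                 # no random receiver beats the MMSE
```

Since the SIR is a monotone function of a generalized Rayleigh quotient maximized by v = R^{-1} g, every randomly drawn linear receiver lands at or below the MMSE SIR, mirroring the inequality stated above.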
4.2.2. Comparison for Random Spreading Sequences. For random spreading sequences, using the correlation matrix in (15), the corresponding SIR expressions for the RAKE and the average symbol-level equalizer become
where the constants are as defined above. From these expressions, it follows that the average SIR of the equalizer is no smaller than that of the RAKE receiver. Note also that, in the appropriate limit, we have
Consider now the SIR performance gain of the equalizer relative to the RAKE receiver, given by
From this expression, it is apparent that in this case the SIR performance gain does depend upon the individual user parameters. Furthermore, it is easy to see that (i) the gain is a monotonically increasing function of the desired signal-to-interference ratio, and (ii) in the limit as this ratio goes to zero, the gain converges to unity. Hence, unlike the case of orthogonal spreading sequences, the gain of the equalizer over the RAKE receiver diminishes to unity as the desired signal-to-interference ratio decreases when random spreading sequences are used.
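This limiting behavior can be reproduced in a small numerical model: as white noise comes to dominate a fixed colored (other-cell) component, the covariance approaches a scaled identity, the MMSE solution collapses onto the matched filter, and the SIR gain tends to one. The channels and noise levels below are assumed purely for illustration:

```python
import numpy as np

def conv_matrix(h, L):
    # Banded Toeplitz convolution matrix for a chip-spaced FIR channel.
    M = len(h)
    H = np.zeros((L, L + M - 1))
    for i in range(L):
        H[i, i:i + M] = h[::-1]
    return H

L = 8
h1 = np.array([1.0, 0.5, 0.25])               # serving-cell channel (assumed)
h2 = np.array([0.6, 0.8])                     # other-cell channel (assumed)
H1, H2 = conv_matrix(h1, L), conv_matrix(h2, L)
g = H1[:, len(h1) - 1]                        # desired chip's signature vector

def gain(sigma2):
    # Interference-plus-noise covariance: colored other-cell term + white noise.
    Q = H2 @ H2.T + sigma2 * np.eye(L)
    mmse = g @ np.linalg.solve(Q, g)          # SIR of the MMSE solution v = Q^{-1} g
    mf = (g @ g) ** 2 / (g @ Q @ g)           # SIR of the matched filter v = g
    return mmse / mf

print(gain(0.01), gain(100.0))                # gain shrinks toward 1 as noise dominates
```

By a Cauchy-Schwarz argument the ratio is always at least one, and it collapses to one exactly when the interference is white, which is the regime in which the MMSE and matched filter receivers coincide.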
4.3. Suppression of Intra-Cell and Other-Cell Interference

For reasons of practicality, only finite impulse response (FIR) MMSE receivers have been considered up to this point. However, some useful insight can be gained by considering the form of the infinite impulse response (IIR) implementation of the MMSE equalizer. Let H_1(z) denote the z-transform of the channel between the serving base station and the desired mobile. Similarly, let H_2(z) denote the z-transform of the channel between the interfering base station and the desired mobile. In general, the other-cell interference spectral density and the channel H_2(z) can be redefined to include the interference contribution of additive white Gaussian noise of spectral density σ². Given that the other-cell interference has been redefined in this manner, the z-transform of the average symbol-level IIR MMSE equalizer for the DS-CDMA forward link, for both orthogonal and random codes, can be shown to be given by
Note that this equalizer is precisely the MMSE equalizer for a non-spread single-user system in which the energy per symbol is unity, the channel between the transmitter and the receiver is H_1(z), and the stationary additive interference has the power spectrum determined by H_2(z) and the other-cell power.
Note that in expression (20), the ratio of intra-cell to other-cell interference takes the place of the noise spectral density used in the standard single-user MMSE equalization result. The asymptotic behavior of the average symbol-level IIR MMSE equalizer provides insight into its interference suppression capabilities. Specifically, note that if the other-cell interference is dominant, the IIR MMSE equalizer becomes
which is precisely the whitened matched filter for the single-user problem [15]. Thus, on the forward link, the MMSE equalizer suppresses other-cell interference by first whitening the interference with a whitening filter and then filtering with the matched filter. Note that if the other-cell interference is white, it cannot be suppressed, and the IIR MMSE equalizer becomes the matched-filter receiver. For an environment dominated by intra-cell interference, the IIR MMSE equalizer becomes simply the zero-forcing equalizer, which inverts the channel between the serving base station and the desired user. By inverting the channel for DS-CDMA systems with orthogonal spreading sequences, the MMSE equalizer restores the orthogonality of the spreading sequences and hence completely suppresses the intra-cell interference.

The FIR MMSE equalizer suppresses intra-cell and other-cell interference in a similar fashion. Beginning with Eqs. (17) and (18), and using the matrix inversion lemma, the average symbol-level equalizer under both spreading models can be written, within a positive multiplicative constant, in a form that exposes the same limiting regimes. In the region where the intra-cell interference is dominant, the MMSE equalizer is well approximated by the minimum distortion solution. Unlike the IIR equalizer, the chip-spaced FIR equalizer does not have enough degrees of freedom to completely invert the channel; in general, this situation can be remedied by performing fractionally spaced equalization. In the region where non-white interference dominates, the MMSE equalizer becomes a whitened matched filter. Note that if the other-cell channel is a one-path channel, then the whitened matched filter becomes a simple matched filter and no suppression of other-cell interference is possible. Otherwise, the MMSE solution performs suppression of other-cell interference through the whitening operation. Finally, in the region where white noise dominates the interference, the MMSE solution is given by a simple matched filter, and suppression of other-cell interference is not possible. A numerical example illustrating suppression of intra- and other-cell interference can be found in the following section.

5. Simulation Results
In this section, simulation results are presented for the symbol-level receivers introduced in the previous section. Our main interest lies in the performance of the adaptive average symbol-level equalizer implemented as shown in Fig. 4. The performance of both the RAKE receiver, with perfect knowledge of all channel parameters, and the average symbol-level MMSE equalizer, computed analytically according to the solution in (14) with perfect knowledge of all channel parameters, is shown for comparison. The performance of the adaptive receiver is shown only with the RLS adaptation algorithm; the LMS adaptation algorithm was found not to provide any significant gain over the RAKE receiver. This is to be expected, since the LMS algorithm exhibits poor performance in time-varying or noisy channels [15]. The orthogonal spreading model with a processing gain of N = 64 chips per symbol is assumed for the desired base station. The desired cell is loaded with twenty-two users: five equal-power desired users, the pilot, and sixteen equal-power interferers. The pilot allocation is twenty percent of the total transmitted power, while the power of the desired users is a simulation parameter. The interferers' and desired users' powers are scaled so that the total transmitted power remains constant. The numerical results are obtained by averaging across the desired users. Interference due to the second base station is simulated as white Gaussian noise filtered through the other-cell channel h_2(n). For all simulations, h_2(n) is fixed as a two-path channel with equal path gains and a delay spread of one chip. For the desired base station, two multipath channels are simulated. First, consider a relatively short channel (normalized for unit energy) with five multipath components.
Frank, Visotsky and Madhow
The length of the equalizer is twice the length of the channel (ten taps in this case). Note that, to be consistent with the notation defined in (9), the channel needs to be padded with zeros to be the same length as the equalizer. For the simulations with fading, the channel gains are independently faded using the Rayleigh fading model. In Fig. 5, the uncoded bit error rate (BER) performance of the three receivers is shown for a static channel versus the ratio of the desired user's symbol energy to the total transmitted power in the desired cell. A forgetting factor of 0.98 is used for the RLS algorithm and, since speed of adaptation is not an issue for static channels, the RLS algorithm is updated only once per symbol. Note that the performance of the adaptive receiver is very close to that of the analytical solution. Furthermore, the adaptive receiver exhibits a significant gain over the RAKE receiver. The gain is independent of the desired user's power, confirming our observation in the previous section that the equalizer gain over the RAKE receiver does not diminish in the region of low desired-user power. In Figs. 6 and 7, the performance of the three receivers is shown in fading conditions for Doppler frequencies of 10 Hz and 40 Hz, respectively. Comparing the curves for the static and fading conditions, we note that the performance gains of the adaptive and analytical solutions over the RAKE receiver diminish in fading conditions. In Fig. 7, the performance of the adaptive equalizer with adaptation at both the symbol rate and four times the symbol rate is displayed. At 40 Hz Doppler frequency, the equalizer using symbol-rate adaptation does not provide any performance gain over the RAKE receiver. However, adaptation at four times the symbol rate significantly improves the equalizer tracking and results in a performance gain over the RAKE receiver in the region of large desired-user power. We now consider a longer channel for the desired cell, given by
Figures 8, 9, and 10 display the performance of the three receivers in static conditions and in fading conditions at 10 Hz and 40 Hz Doppler frequency, respectively. For these simulations, the equalizer has eighteen taps. In static conditions, both the adaptive and analytical solutions exhibit a significant performance gain over the RAKE receiver. In comparing Figs. 5 and 8, we note that on the longer channel,
Adaptive Interference Suppression
the gap between the adaptive and analytical solutions is more pronounced, which suggests that the steady-state performance of the adaptive algorithm is sensitive to the number of equalizer taps. The observations made from the fading simulations in Figs. 6 and 7 are further validated by Figs. 9 and 10. In particular, we note that updating the equalizer taps multiple times per symbol helps the tracking performance of the adaptive solution. Nevertheless, the large gap between the analytical and adaptive solutions, as seen in Fig. 10, underscores the need for the design of adaptive algorithms which track more accurately.

5.1. Suppression of Intra-Cell and Other-Cell Interference
In Fig. 11, the analytical expression for the SIR performance gain of the equalizer over the RAKE receiver is plotted, as a function of the relative strength of the intra-cell and other-cell interference, for orthogonal spreading sequences and the channels given above in static conditions. As derived in Section 4.2.1, the gain is given by
In order to see the limiting performance of the equalizer as the number of taps grows large, we consider an example in which the equalizer has 100 taps. From Fig. 11, it is apparent that the equalizer offers a significant performance gain in SIR over the RAKE receiver over the entire range of interest. In this example, the largest gains occur if either the other-cell interference or the intra-cell interference is dominant.
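The analytic gain expression is not reproduced here. The following toy computation, using the standard matched-filter (RAKE-like) and MMSE receiver forms on illustrative channels, shows the kind of SIR comparison plotted in Fig. 11; all vectors and the noise level are assumptions for the sketch.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25, 0.0, 0.0])   # desired multipath channel (illustrative)
g = np.array([0.7, 0.7, 0.0, 0.0, 0.0])    # other-cell channel (illustrative)

def conv_matrix(c, n):
    """Lower-triangular Toeplitz convolution matrix of impulse response c."""
    M = np.zeros((n, n))
    for i, ci in enumerate(c):
        M += ci * np.eye(n, k=-i)
    return M

n = 5
G = conv_matrix(g, n)
R_int = G @ G.T + 0.05 * np.eye(n)          # interference-plus-noise covariance

def sir(c):
    return (c @ h) ** 2 / (c @ R_int @ c)

c_rake = h                                  # matched to the desired channel
c_mmse = np.linalg.solve(R_int + np.outer(h, h), h)   # standard MMSE form

gain_db = 10 * np.log10(sir(c_mmse) / sir(c_rake))
print(round(gain_db, 1), "dB")              # positive: the equalizer beats the RAKE
```

Since the MMSE filter maximizes SIR, the computed gain is nonnegative by construction; it is strictly positive whenever the interference is not white.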
6. Conclusions
Linear MMSE equalization at the mobile can significantly improve the performance of DS-CDMA systems that assign orthogonal codes to the users on the forward link. The MMSE equalizer suppresses intra-cell interference introduced by a multipath channel by inverting the channel and restoring code orthogonality. The linear MMSE equalizer can also suppress interference from other cells by whitening the interference. The benefits of other-cell interference suppression apply to CDMA systems with both orthogonal codes and random codes. Other-cell interference is often non-white because the interference from each base station can be modeled as the output of a white source that has been filtered by the channel between the base station and the subscriber. For mobiles in environments dominated by other-cell interference, the MMSE equalizer becomes a whitened matched filter. Typically, direct implementation of the MMSE equalizer requires knowledge of the channel between the base station and the mobile, the covariance of the other-cell interference, and the relative strengths of the intra-cell and other-cell interference. Furthermore, given this information, calculation of the MMSE equalizer is a computationally difficult task. This paper introduces a practical adaptive implementation of the MMSE equalizer for the DS-CDMA forward link, which adapts using the pilot code already present in commercial DS-CDMA systems such as IS-95 and its derivatives. The basic version of the adaptation algorithm presented here and introduced in [3] updates at the symbol rate and measures the pilot error after the received signal has been despread and correlated with the pilot code. In general, chip-rate adaptation of the MMSE equalizer will not perform adequately because the signal-to-noise ratio of the pilot signal prior to despreading and correlation with the pilot code is typically very low.
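As a sketch of the adaptation principle summarized above (measure the pilot error after despreading, update at the symbol rate), the following toy gradient loop is illustrative only: the code length, channel, step size, and simple LMS-style update are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16                                    # spreading factor / pilot code length (toy)
L = 4                                     # equalizer taps
pilot_code = rng.choice([-1.0, 1.0], N)   # known pilot spreading code
h = np.array([0.5, 0.4, 0.2, 0.0])        # unknown chip-rate channel (illustrative)

w = np.zeros(L)
w[0] = 1.0                                # pass-through initialization
mu = 0.01

for _ in range(2000):
    pilot_sym = 1.0                       # the pilot symbol is known at the receiver
    rx = np.convolve(pilot_sym * pilot_code, h)[:N] + 0.05 * rng.standard_normal(N)
    # Delayed copies of the received chips seen by each equalizer tap.
    shifted = np.array([np.concatenate([np.zeros(k), rx[:N - k]]) for k in range(L)])
    z = (w @ shifted) @ pilot_code / N    # equalize, then despread with the pilot code
    e = pilot_sym - z                     # symbol-level pilot error, measured AFTER despreading
    w += mu * e * (shifted @ pilot_code / N)   # one gradient step per symbol

print(round(float(z), 2))                 # despread pilot estimate approaches the pilot symbol
```

The key point the sketch mirrors is that the error signal is formed after despreading, where the pilot enjoys the full processing gain, rather than at the chip rate where its SNR is very low.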
Acknowledgments

The authors would like to thank Motorola Labs for supporting this work and allowing its publication.

References

1. A. Klein, "Data Detection Algorithms Specially Designed for the Downlink of CDMA Mobile Radio Systems," in Proc. IEEE International Vehicular Technology Conference, VTC'97, Phoenix, AZ, May 1997, pp. 203–207.
2. G.E. Bottomley, "Optimizing the RAKE Receiver for the CDMA Downlink," in Proc. IEEE International Vehicular Technology Conference, VTC'93, 1993, pp. 742–745.
3. C.D. Frank and E. Visotsky, "Adaptive Interference Suppression for Direct-Sequence CDMA Systems with Long Spreading Codes," in Proc. 36th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, Sept. 1998.
4. I. Ghauri and D.T.M. Slock, "Linear Receivers for the DS-CDMA Downlink Exploiting Orthogonality of Spreading Sequences," in Proc. 32nd Asilomar Conference on Signals, Systems and Computers, Asilomar, CA, Nov. 1998, pp. 650–654.
5. S. Werner and J. Lilleberg, "Downlink Channel Decorrelation in CDMA Systems with Long Codes," in Proc. IEEE International Vehicular Technology Conference, VTC'99, Houston, TX, May 1999, pp. 1614–1617.
6. M.J. Heikkilä, P. Komulainen, and J. Lilleberg, "Interference Suppression in CDMA Downlink Through Adaptive Channel Equalization," in Proc. IEEE International Vehicular Technology Conference, VTC'99—Fall, Amsterdam, Netherlands, Sept. 1999, pp. 978–982.
7. P. Komulainen, M.J. Heikkilä, and J. Lilleberg, "Adaptive Channel Equalization and Interference Suppression for CDMA Downlink," in Proc. IEEE 6th International Symposium on Spread-Spectrum Techniques and Applications, ISSSTA 2000, NJ, Sept. 2000, pp. 363–367.
8. T.P. Krauss and M.D. Zoltowski, "Chip-Level MMSE Equalization at the Edge of the Cell," in Proc. IEEE Wireless Communications and Networking Conference, WCNC 2000, Chicago, IL, Sept. 2000, pp. 23–28.
9. K. Hooli, M. Latva-aho, and M. Juntti, "Multiple Access Interference Suppression with Linear Chip Equalizers in WCDMA Downlink Receivers," in Proc. IEEE Global Telecommunications Conference, Globecom, Rio de Janeiro, Brazil, Dec. 1999, pp. 467–471.
10. F. Petre, M. Moonen, M. Engels, B. Gyselinckx, and H.D. Man, "Pilot-Aided Adaptive Chip Equalizer Receiver for Interference Suppression in DS-CDMA Forward Link," in Proc. IEEE International Vehicular Technology Conference, VTC—Fall, Boston, MA, Sept. 2000.
11. G.E. Bottomley, T. Ottosson, and Y.E. Wang, "A Generalized RAKE Receiver for Interference Suppression," IEEE Journal on Selected Areas in Communications, vol. 18, Aug. 2000, pp. 1536–1545.
12. S. Chowdury, M.D. Zoltowski, and J.S. Goldstein, "Reduced-Rank Adaptive MMSE Equalization for High-Speed CDMA Forward Link with Sparse Multipath Channels," in Proc. 38th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, Sept. 2000.
13. U. Madhow and M. Honig, "MMSE Interference Suppression for Direct-Sequence Spread-Spectrum CDMA," IEEE Transactions on Communications, vol. 42, Dec. 1994, pp. 3178–3188.
14. S. Haykin, Adaptive Filter Theory, Englewood Cliffs, NJ: Prentice Hall, 1996.
15. J.G. Proakis, Digital Communications, New York, NY: McGraw-Hill, 1995.
Colin D. Frank received the B.S. degree from the Massachusetts Institute of Technology and the M.S. and Ph.D. degrees from the University of Illinois, all in electrical engineering. He was with the Space and Communications Group of the Hughes Aircraft Company (now Boeing) in El Segundo, California, from 1983 to 1985. While at the University of Illinois, he served as a consultant for ITT Aerospace and Techno-Sciences, Inc. He joined Motorola Labs in 1993 and is currently with the Motorola Systems Sector. For the past 8 years, his work has focused on technology and product development for CDMA systems. His current research interests are in the general area of communication system design and analysis, with emphasis on the application of modulation and coding theory to wideband wireless systems. Dr. Frank holds 9 U.S. patents and has published over 20 journal and conference articles.
[email protected]

Eugene Visotsky received the bachelor's degree in electrical engineering in 1995, the M.S. degree in electrical engineering in 1997, and the Ph.D. degree in 2000, all from the University of Illinois at Urbana-Champaign. He joined the Communication Systems and Research Lab, Motorola Labs, Schaumburg, IL, in 2000. His current research interests are in the areas of wireless communications, multiuser detection, interference suppression, and space-time processing.
[email protected]

Upamanyu Madhow received his bachelor's degree in electrical engineering from the Indian Institute of Technology, Kanpur, in 1985. He received the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois, Urbana-Champaign, in 1987 and 1990, respectively. From 1990 to 1991, he was a Visiting Assistant Professor at the University of Illinois. From 1991 to 1994, he was a research scientist at Bell Communications Research, Morristown, NJ. From 1994 to 1999, he was with the Department of Electrical and Computer Engineering at the University of Illinois, Urbana-Champaign, first as an Assistant Professor and, since 1998, as an Associate Professor. Since 1999, he has been an Associate Professor in the Department of Electrical and Computer Engineering at the University of California, Santa Barbara. His research interests are in communication systems and networking, with current emphasis on wireless communications and high speed networks. Dr. Madhow is a recipient of the NSF CAREER award. He has served as Associate Editor for Spread Spectrum for the IEEE Transactions on Communications, and as Associate Editor for Detection and Estimation for the IEEE Transactions on Information Theory.
[email protected]
Journal of VLSI Signal Processing 30, 293–309, 2002 © 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
Constrained Adaptive Linear Multiuser Detection Schemes* GEORGE V. MOUSTAKIDES Institut National de Recherche en Informatique et en Automatique (INRIA), Rennes 35042, France; Department of Computer Engineering and Informatics, University of Patras, Greece Received September 1, 2000; Revised June 7, 2001
Abstract. By using a fair comparison method, we show that, contrary to the general belief, the conventional LMS, when in training mode, does not necessarily outperform the popular blind LMS (BLMS). With the help of a constrained MMSE criterion, we identify the correct trained version, which is guaranteed to have uniformly superior performance over BLMS since it maximizes the SIR over an algorithmic class containing BLMS. Because the proposed optimum trained version requires knowledge of the amplitude of the user of interest, we also present simple and efficient techniques that estimate the amplitude in question. The resulting algorithm, in both modes, training and decision directed, is significantly superior to BLMS.
* This work was supported by a Collaborative Research Grant from the NATO International Scientific Exchange Programme.

1. Introduction

Code division multiple access (CDMA) implemented with direct sequence (DS) spread spectrum signaling is a technology applied to a number of important applications nowadays, such as mobile telephony, wireless networks, and personal communications. In these systems, multiple access interference (MAI) becomes an intrinsic factor for severe performance degradation that necessitates the application of signal processing techniques to improve quality. Multiuser detection schemes developed over the last years successfully mitigate MAI, achieving at the same time significant capacity improvement for the corresponding CDMA systems. Due to their capability to combat MAI, multiuser detection schemes have attracted considerable attention and currently significant research is devoted to this direction [1]. Among multiuser detection schemes, the linear MMSE detector appears to be the most popular one [2–9]. This popularity mainly stems from its characteristic simplicity, which is combined with excellent performance. Although the MMSE detector is not optimum from a minimum bit error rate (BER) point of view, it nevertheless optimizes a number of alternative criteria such as asymptotic efficiency and near-far resistance [10, pages 195–202]. Despite its asymptotic optimality, the MMSE detector was recently found not to uniformly outperform, in the BER sense, the other two well known linear detectors, namely the conventional matched filter and the decorrelating detector [11]. This fact however does not diminish the significance of this popular detection scheme, since counterexamples seem to be possible only at extreme signaling conditions. After its introduction [9], the usefulness of the MMSE detector was spurred by the appearance of a blind adaptive realization [12] that does not require training or knowledge of the interfering users' signature waveforms. Following this work, a number of alternative blind techniques were proposed, differing mainly in the adaptive algorithm used to estimate the corresponding linear filter [3, 6, 8]. Among these versions, the subspace based adaptations [6, 8], due to the special structure of the multiuser detection problem, tend to exhibit superior performance as compared to the corresponding classical adaptive schemes. It is often stated in the literature that the most severe drawback of blind schemes is their inferior performance as compared to adaptations that use training
[3, 12]. The first important result of this paper consists in showing that this widely accepted statement is in fact FALSE. In order for a trained algorithm to uniformly outperform its blind counterpart, as we will show, it is necessary to also have available the information of the amplitude of the user of interest. Unfortunately this type of a priori knowledge is difficult to obtain in practice; we therefore propose a simple adaptive algorithm to estimate it. This adaptation, combined with the trained algorithm that estimates the linear filter, results in almost optimum performance. The considerable difference in performance between blind and optimum trained schemes suggests that there is room for performance improvement of blind adaptations. The second main result of this work aims in this direction. Specifically, we show that the decision directed version of our optimum trained scheme is extremely efficient, with performance that follows closely the performance of its trained prototype. Since decision directed algorithms do not require training or knowledge of interfering users' signatures, they are obviously blind as well. Due to its excellent performance, the proposed decision directed scheme can consequently become a possible alternative to the existing popular blind algorithm of [12].

2. Signal Model and Background

Consider a K-user synchronous DS-CDMA system with identical chip waveforms, signaling antipodally through an additive white Gaussian noise (AWGN) channel. Although the signals appearing in CDMA systems are continuous in time, the system we are interested in can be adequately modeled by an equivalent discrete time system [10]. Specifically, no information is lost if we limit ourselves to the discrete time output of a chip matched filter applied to the received analog signal [10, page 310]. The resulting sequence can be represented as a collection of vectors r(n) of length N, with N denoting the common spreading factor of all signature waveforms. To be more precise, if denotes the power of the AWGN, we can then write¹

where is a unit norm vector denoting the discrete time version of the signature of User-i, the corresponding amplitude, the n-th symbol of User-i, and finally n(n) a white Gaussian noise vector with i.i.d. components of zero mean and unit variance that models the ambient noise. Using matrix notation, Eq. (1) can be transformed into

where

and

A linear detector estimates the i-th user's transmitted bits by taking the sign of the inner product of r(n) with a properly selected vector (filter) c of length N, specifically

The three most well known linear detectors encountered in the literature are the conventional matched filter, the decorrelating detector (or decorrelator), and the MMSE detector, which is also equivalent to the Minimum Output Energy (MOE) detector of [12]. Without loss of generality, if we consider User-1 as the user of interest, then the corresponding c filters for the three detectors take the form

where is the correlation matrix of the signature waveforms, and (with I the identity matrix) is the data covariance matrix. The three detectors are extensively analyzed in [10] and their relative performance is considered in [11]. Notice that, apart from the conventional matched filter, the other two linear detectors require knowledge of all interfering users' signatures, while the MMSE detector additionally requires knowledge of all user and noise powers.

3. A Constrained MMSE Criterion
The MMSE linear detector presented in the previous section can be obtained by minimizing the MSE between the output of the filter and the desired bit sequence, that is
Minimizing this criterion yields the following optimum filter
which is a scaled version of the filter introduced in (4). Although the two filters are not equal, they are equivalent because, when substituted in (3), they produce exactly the same bit estimates. The MOE criterion, on the other hand, is defined as

and its optimum filter is exactly the one presented in (4). Even though the two criteria produce equivalent optimum filters, when these filters are estimated adaptively, the resulting schemes tend to differ considerably in nature and in performance. Specifically, the MMSE criterion gives rise to adaptations requiring training [9], whereas the MOE results in the popular blind version of [12]. As was stated in the introduction, contrary to the general belief, the trained version does not uniformly outperform the blind. From the analysis that follows it will become apparent that it is relatively easy to generate counterexamples (see for instance Fig. 1). Consequently, this section is devoted to the identification of the correct trained version that uniformly outperforms the blind. To achieve our goal we first need to introduce a modified MMSE criterion. Since we detect the bit sequence through relation (3), we can conclude that, from a detection point of view, any filter c is equivalent to its scaled version, since both yield the same sign in (3).
From the above we understand that, as far as detection is concerned, there is an ambiguity in c which can be eliminated by imposing a constraint on the filter. We intend to use the same constraint as the one introduced in [4], namely
With (9) we force our filter c to leave unchanged any information coming from the "direction of interest". The criterion we now propose is the following
where is a scalar parameter. In other words, we are interested in minimizing the MSE between the output of the filter and a scaled version of the bit sequence. Notice that our criterion reduces to the MOE criterion for a particular selection of this parameter. Using Lagrange multipliers, let us transform the above constrained problem into an equivalent unconstrained one. Define the function
then the solution to (10) can be obtained by solving

with the necessary Lagrange multiplier. It is quite easy to verify that the optimum filter satisfies

which is an expression independent of the parameter and equal to the MMSE and MOE optimum filter introduced in (4).

3.1. Constrained Adaptations

If the statistics of the processes involved in the minimization problem defined in (12) are not known, it is still possible to obtain the optimum solution using stochastic gradient techniques. A stochastic gradient algorithm that solves (12) can be defined by the following recursion [10, pages 306–308]

where is a positive constant known as the step size and is defined in (11). Using (14) in (11) generates an LMS-like adaptation of the form

that can be generalized to the following richer algorithmic class

with Q(n) a sequence of nonsingular matrices that can depend on the data. We can now identify the Lagrange multiplier by enforcing validity of the constraint at every time step n. Indeed, if we multiply (18) from the left by the signature of interest and require the constraint to hold, we obtain

which if substituted in (18) yields

When Q(n) = I, the algorithm reduces to a constrained LMS version, whereas Q(n) equal to the (exponentially weighted) sample covariance matrix of the data leads to a constrained RLS version.

3.2. Robust Constrained Adaptations

It is known that the recursion defined in (20), (21) exhibits instability under finite precision (non-robustness), which eventually results in useless filter estimates [7, 10, page 320]. In order to correct this serious handicap, in [7] the original constrained minimization problem is transformed into an equivalent unconstrained one by enforcing (9) directly onto the filter elements. Then the application of (14) gives rise to robust adaptations. We would like here to propose an alternative method which achieves robustness by slightly modifying the recursion in (21). If we make a first order perturbation analysis to identify the error accumulation mechanism in (21), we can show that, if (21) is stable under infinite precision, then the linear system that describes the error accumulation exhibits instability only along the direction of interest. By perturbing (21) we can write
with all operations performed now in infinite precision. In other words rounding errors are modeled as additive (white) noise. Multiplying the last equation from the left by and subtracting from unity yields
The quantity measures the accumulated finite precision error along the direction of interest, since it is zero under infinite precision (because of the constraint). The divergence of (24) towards infinity is of a random walk type because, for sufficiently high accuracy, rounding errors tend to be independent. Of course, due to the small variance of the rounding errors, the divergence is in fact very slow. Having identified the form of instability in our recursion, we can proceed with the modification of the algorithm in order to correct its non-robustness. If we return to the identification of the Lagrange multiplier in (19), we recall that it was computed by assuming the constraint was satisfied. However, due to rounding errors, the second equality is clearly false. Taking this fact into account and recomputing the Lagrange multiplier we obtain

Substituting in (18) results in the following modified version

Using again a first order perturbation analysis, one can verify that the error along the constraint direction is no longer accumulating rounding errors; therefore the proposed modification is robust. Another notable property of the recursion in (27) is the fact that, if for some reason the product differs significantly from unity, (27) enforces validity of the constraint in a single step. In Fig. 2 we present a simulation of the constrained LMS algorithm (i.e. Q(n) = I), with and without the modification. We observe that the error of the original unmodified algorithm increases continuously, whereas in the modified version it remains bounded. Although the instability appears to be extremely slow, we should bear in mind that the simulation was performed with Matlab's high accuracy computations. In a less accurate environment this instability would have been more pronounced.

3.3. Algorithms of Interest
From now on, for simplicity, we will limit our presentation to the LMS-like algorithmic class corresponding to Q(n) = I; the generalization to other Q(n) matrices is straightforward. Let us present the special form of the LMS-like recursion. By substituting Q(n) = I in (27) we obtain
Notice that we initialize the algorithm with the matched filter. As we will see in the next section, this form of initialization, combined with the somewhat uncommon second order statistics of the data, turns out to be the reason for an unconventional behavior of this algorithmic class. Let us now consider the parameter; we distinguish three selections. The first selection generates the well known blind LMS (BLMS) of [12] (more precisely its robust version); notice that BLMS requires the same amount of a priori information as the conventional matched filter, namely only the signature of the user of interest. The second selection generates a constrained version of LMS (CLMS); this algorithm must be distinguished from the conventional (unconstrained) LMS, used in the literature for the same problem [9], which satisfies the recursion

and is known to be robust. In the third selection the parameter is equal to the amplitude of User-1 (the user of interest); this gives rise to a constrained LMS with amplitude information (CLMS-AI). As we show in Section 5, CLMS-AI turns out to be optimum in a very well defined sense.

4. Performance Measure and Fair Comparisons

The most suitable measure of performance for the multiuser detection problem is definitely the BER. This quantity however suffers from serious mathematical intractability; therefore alternative measures have been proposed which, at least asymptotically, are equivalent to BER. One such possibility is the signal to interference ratio (SIR), which can also be used to obtain efficient approximations for BER [5]. To define the SIR, let us recall that detection at time n is performed through (3), but with c replaced by the filter estimate at time n – 1, that is,

We can then write

where denotes the interference plus noise part of the data. For our problem it is more convenient to use the inverse SIR (ISIR), which is defined as

To put this quantity under a more suitable form, let denote the mean and the covariance matrix of the filter estimates c(n), that is

Let also denote the covariance matrix of the interference plus noise part of the data; then, because of the independence between this term and c(n – 1), and using an identity that holds for any two matrices D, E of the same dimensions, we can write

which leads to
We can now make the following observations from (38) for the three terms comprising ISIR. The first term is constant and common to all adaptive algorithms since it involves the optimum MMSE filter. This term would have been our ISIR had we available the statistics of the data. The next two terms are due to the adaptive algorithm. The second term involves the deterministic sequence of mean estimates, with the first element of this sequence being the matched filter (the initialization) and the limit, for (asymptotically) unbiased estimators, being the optimum MMSE filter. Therefore the second term in (38) starts from an O(1) (order of a constant) value and tends to zero as time progresses.
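The quantities entering this decomposition are easy to reproduce numerically. The following sketch assumes the standard blind constrained-LMS form with the constraint enforced by projection (the displayed recursions are missing from this reproduction), runs it on a toy two-user synchronous system, and compares the output SIR of the adapted filter with that of the matched filter; all signatures, amplitudes, and the step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8                                          # spreading factor (toy value)
s1 = np.ones(N) / np.sqrt(N)                   # signature of the user of interest
s2 = np.array([1., 1, 1, 1, 1, 1, -1, -1]) / np.sqrt(N)  # correlated interferer
A1, A2, sigma = 1.0, 3.0, 0.1                  # near-far situation: strong interferer

mu = 0.002
c = s1.copy()                                  # initialize with the matched filter
for _ in range(5000):
    b1, b2 = rng.choice([-1.0, 1.0], 2)
    r = A1 * b1 * s1 + A2 * b2 * s2 + sigma * rng.standard_normal(N)
    z = c @ r                                  # filter output
    g = z * r                                  # MOE stochastic gradient
    g -= (s1 @ g) * s1                         # project out s1: keeps the constraint s1'c = 1
    c -= mu * g

def sir(f):
    sig = (A1 * (f @ s1)) ** 2
    interf = (A2 * (f @ s2)) ** 2 + sigma ** 2 * (f @ f)
    return sig / interf

print(10 * np.log10(sir(c) / sir(s1)), "dB SIR gain over the matched filter")
```

The adapted filter keeps unit response along s1 while steering a null toward the strong interferer, which is exactly the mechanism behind the transient and steady-state terms discussed above.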
The last term in (38) is due to the randomness of our estimates. This term is initially zero (since the initialization is deterministic) and converges, at steady state, to a small value [13, pages 106–107]. Therefore this term is always small. From the above we conclude that the second term is mainly responsible for the transient phase of the algorithm, while the third for its steady state behavior. Since the first term in (38) is common to all algorithms, we will not consider it in our performance evaluation process. As our final performance measure we therefore propose the sum of the last two terms in (38), which are the terms directly related to the adaptive algorithm. Specifically, we propose the following performance measure

It is clear that J(n) expresses the excess ISIR due to adaptation.

4.1. Fair Comparisons of Adaptive Algorithms

A common mistake made in the literature when comparing constant step size adaptive algorithms consists in performing comparisons by selecting the same step size µ in all algorithms under consideration. As discussed in detail in [14, 15], this selection has no mathematical grounds and can often lead to erroneous conclusions. Since the step size µ affects both the convergence rate and the steady state behavior of the algorithm, its correct choice is crucial for the comparison process. A fair comparison method proposed in [14] and extensively analyzed in [15] consists in selecting the step sizes in such a way that all algorithms attain the same steady state performance level. Once the selection of step sizes is completed, the algorithms can be ranked according to their convergence rate. Alternatively, we could select the step size in each algorithm so that all algorithms have the same convergence rate and then rank the algorithms according to their steady state performance. It is the latter method we find more appropriate for our problem; both approaches however are theoretically equivalent. A last point that needs to be made here is that, since convergence refers to the transient phase and this phase is primarily due to the mean filter estimates, it is through this process that the convergence rate will be defined.

5. Performance Analysis

In this section we will analyze the behavior of the algorithm defined by (28), (29). In particular we are interested in the mean trajectory and the second order statistics of the corresponding estimates. It turns out that for our analysis we can discard the last term in (29), introduced to correct the non-robustness problem. This is because stability (convergence) and robustness are two problems that are traditionally considered separately. Therefore, for studying convergence, we will assume infinite precision, which results in the elimination of the last term in (29). Consequently the algorithm we intend to analyze is the following
We recall the three selections of the parameter: the first corresponds to BLMS, the second to CLMS, and the third to CLMS-AI.

5.1. Qualitative Analysis of Trained Algorithms
We can now state our first theorem, which provides the necessary statistics for the estimates c(n) of (40), (41).

Theorem 1. The trajectory of the mean filter estimates of the algorithm in (40), (41) satisfies the recursion
The covariance matrix of the filter estimates can be written as the sum of the covariance matrices of two vector processes x(n) and y(n), defined by the recursions
Proof: The proof is presented in the Appendix.
Using Theorem 1 we can now make the following important remarks. The trajectory of the mean filter estimates, i.e. recursion (42), is independent of the parameter α; in particular, CLMS has the same mean trajectory as BLMS. This equality does not, for example, apply when we compare BLMS to the conventional LMS defined in (30), (31). An important consequence of this property is that if we wish to compare algorithms corresponding to different values of α using the fair method described in Subsection 4.1, it is sufficient to select the same step size µ. Indeed, this selection guarantees exactly the same trajectory for the mean filter estimates and therefore the same convergence rate (recall that the transient phase is primarily due to the mean filter estimates). Again we stress that this statement is not true when comparing any member of our class to the conventional LMS; in that case we do need to select different step sizes. Our performance measure J(n), using (43), can be written as
Using (46) we can also compare algorithms corresponding to different values of α. In particular, comparing CLMS to BLMS amounts to setting the respective values of α in (46); we can then verify that CLMS is better than BLMS if and only if a corresponding condition on the signaling parameters holds. This means that although CLMS uses the exact bits in its adaptation, it does not necessarily perform better than BLMS, which completely ignores the bit information. The same is true when we compare the conventional LMS of (30), (31) with BLMS; Figure 1 depicts such an example. This conclusion is rather surprising, because in the literature it is widely believed that LMS is uniformly better than its blind counterpart. The results of Theorem 1 are exact (no approximation is involved in any sense), and no additional assumptions were used apart from the ones we initially made regarding the statistics of the data. In fact the previous remarks hold for every time instant, for every µ, and even when the algorithms diverge. So far we have ranked the algorithms of interest without quantifying their relative performance; this is the subject of the next subsection.
We note that the first two terms are independent of α, and so is the trace appearing in the third term, while the parameter α appears only in this last term. Furthermore, this term is nonnegative because
the last inequality being true because the trace of a nonnegative definite matrix is nonnegative. Due to (47), from (46) we can now conclude that the algorithm with the uniformly (at all time instants) smallest excess ISIR is CLMS-AI. Because of its importance, let us explicitly write the recursion for the optimum algorithm CLMS-AI; we have
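A single step of such a constrained, amplitude-aided update might look like the following sketch (illustrative only: the exact recursion (48), (49) is not reproduced here, and the projection form assumes a unit-norm signature for the user of interest):

```python
import numpy as np

def clms_ai_step(c, r, b1, A1, s1, mu):
    """One illustrative step of a constrained LMS update that uses the
    training bit b1 and the amplitude A1 of the user of interest.
    c: current filter, r: received data vector, s1: unit-norm signature."""
    e = A1 * b1 - c @ r                # amplitude-weighted training error
    c = c + mu * e * r                 # LMS-type gradient correction
    # Re-impose the linear constraint c^T s1 = 1 by a correction along s1.
    c = c + (1.0 - c @ s1) * s1
    return c
```

After every step the constraint c(n)ᵀs₁ = 1 holds exactly, which is the defining property of the constrained class.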
5.2. Quantitative Analysis of Trained Algorithms
In this subsection we use results from Stochastic Approximation Theory pertinent to the analysis of constant step size adaptive algorithms [13, 16]. For this theory to be applicable we need to assume that the step size µ is small, which is usually the case in practice. Our goal is to find expressions for the performance measure J(n) at steady state; this of course presumes convergence of the algorithms of interest. Convergence is assured [13] for sufficiently small µ, because the corresponding system matrix has real nonnegative eigenvalues (it has the same eigenvalues as a related symmetric matrix, since the two are connected through a similarity transformation). To estimate the steady state performance we need to compute the two covariance matrices. Although it is possible to find exact expressions for both covariances, the results turn out to be mathematically involved. To simplify our presentation, and at the same time gain a realistic feeling of
the relative performance of the algorithms of interest, we will make, as in [4], the assumption that the correlation matrix of the signature waveforms is close to the identity, or equivalently that the signature waveforms are almost orthogonal. We now have the second theorem, which quantifies the performance of the algorithm in (40), (41).

Theorem 2. Let x(n), y(n) be the processes defined in (44), (45); then, under this assumption and to a first order approximation in µ, we have that
with the corresponding initial conditions. If we further assume that the correlation matrix of the signature waveforms is close to the identity, then at steady state we can write

Proof: The proof is presented in the Appendix.

Using (52) we can compute, at steady state, the relative performance of any two algorithms corresponding to different values of the parameter α. It takes the following simple form

In particular, if we consider the relative performance of BLMS with respect to the optimum CLMS-AI, we obtain

suggesting that the optimum algorithm can be significantly better than BLMS in high SNR channels. Figure 1 depicts a case where BLMS performs better than CLMS and the conventional LMS, but is of course inferior to the optimum CLMS-AI. Although the conventional LMS does not have the same convergence rate as the remaining algorithms, it is safe to conclude that it is inferior to BLMS: it simultaneously has a smaller convergence rate and a larger steady state excess ISIR. Consequently, attempting to make its rate equal to that of BLMS (for a fair comparison) requires an increase of its step size, which would further increase its steady state excess ISIR. One can also verify that the relative performance of CLMS, BLMS and CLMS-AI is very closely predicted by (53).

5.3. Modes of Convergence
A convergence characteristic that can be observed from Fig. 1 is that the three algorithms from our class (CLMS, BLMS, CLMS-AI) exhibit two different modes of convergence: a fast mode during the initial transient phase and a subsequent slow drift toward inferior performance values. This behavior, particularly apparent when the number of users K is significantly smaller than the spreading factor N, is very uncommon in adaptive algorithms. To understand why the algorithms behave in this way, we recall from (39) that the excess ISIR is the sum of two components, the first due to the mean filter estimates and the second due to the covariance of the estimates. From Eq. (42) we have that the exponential convergence rate of the mean estimates is equal to (minus) the logarithm of the largest (in amplitude) eigenvalue of the transition matrix. This rate, however, is significantly slower than the one observed in Fig. 1 during the initial transient phase. After careful consideration of Eq. (42) one can show by induction that if the mean filter, at some instant, lies in the signal subspace generated by the signatures, it remains in this subspace afterwards (i.e. the mean filter is a linear combination of the signatures). This is exactly what happens in our algorithm, since the filter estimate is initialized in the signal subspace. The rate of convergence is therefore governed by the smallest nonzero eigenvalue of the rank K − 1 matrix restricted to this subspace. We thus conclude that the first term in the performance measure starts from an O(1) value and converges to zero with an exponential rate that is usually significantly larger than the nominal one.
From Eqs. (50), (51) we can conclude that, because of the ambient noise, the covariance matrix is not confined to the signal subspace but has components lying in the complementary noise subspace. The covariance therefore has a part that converges slowly. This means that the second term of our measure, since its initial value is zero, increases slowly from zero to its steady state value estimated by (52). It is now possible to understand the behavior of the algorithms in Fig. 1. During the transient phase the leading term of our measure is the first one, due to the mean estimates, which converges very quickly to zero. The second term, due to the covariance, has negligible values during the transient phase (since it starts from zero and increases very slowly). After some point, however, the second term becomes the leading one, and this is why we observe the second, slowly increasing mode. Notice that fast convergence requires the mean estimates to lie completely in the signal subspace: if there is a component in the noise subspace, that part will exhibit slow convergence toward zero. When the algorithm is initialized in the signal subspace, the mean estimates do lie in this subspace; this remains the case when the number of users increases. When, however, the number of users is reduced, this property no longer holds, because the component of the filter corresponding to the signatures of the departed users now lies inside the noise subspace and is therefore slowly converging to zero. We will observe this mode of behavior in Section 6, where we present our simulations.
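The two rates can be seen directly from the eigenstructure of the data correlation matrix; the toy computation below (our own illustration, with arbitrary sizes and powers) exhibits the N − K noise-subspace eigenvalues equal to the noise power and the K larger signal-subspace eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, sigma2, mu = 32, 4, 0.01, 0.05
S, _ = np.linalg.qr(rng.standard_normal((N, K)))  # orthonormal signatures
R = S @ S.T + sigma2 * np.eye(N)                  # unit user powers + noise
lam = np.sort(np.linalg.eigvalsh(R))
# A mean-error recursion of the form m(n) = (I - mu R) m(n-1) decays per
# eigen-mode like (1 - mu*lambda)^n: fast on the signal subspace, slow on
# the noise subspace.
slow = 1 - mu * lam[0]     # noise-subspace mode, 1 - mu*sigma2 (close to 1)
fast = 1 - mu * lam[-1]    # signal-subspace mode (much smaller)
```

An estimate initialized inside the signal subspace never excites the slow modes, which is consistent with the fast initial convergence in Fig. 1; components left in the noise subspace (e.g. after users depart) decay at the slow rate.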
5.4. Decision Directed Version

It is interesting at this point to introduce the decision directed version of the optimum algorithm CLMS-AI. From (48), (49), by replacing the transmitted bit with its decision estimate, we obtain

Let us call this algorithm decision directed constrained LMS with amplitude information (DD-CLMS-AI). We should note that we consider the recursion to be applied to the data from the first time instant, and not after we have obtained satisfactory filter estimates using some other scheme (as is usually the case in practice). Due to the nonlinear function sgn{·} involved in its definition, the analysis of DD-CLMS-AI is not as simple as that of CLMS-AI, nor can our results be of the same generality. In order to obtain closed form expressions for the mean field and the second order statistics we will make the simplifying assumption that the interference plus noise part of the data can be adequately modeled as a Gaussian process. Signaling conditions that ensure the efficiency of this approximation are given in [5] (we basically need validity of the Central Limit Theorem).

Theorem 3. If interference plus noise is Gaussian then, for sufficiently small µ and to a first order approximation in µ, the trajectory of the mean filter estimates of the algorithm defined by (55), (56) satisfies

where the scalar function is defined as

If we further assume that the correlation matrix of the signature waveforms is close to the identity, then the excess ISIR at steady state takes the form

where Q(x) is the complementary Gaussian cumulative distribution function.

Proof: The proof is presented in the Appendix.

From our last theorem we can draw the following conclusions. Comparing the trajectory of the mean estimates in (57) with the corresponding one in (42), we observe that the difference exists only in the scalar
quantity involved. Although the adaptation in (57) is nonlinear, it has invariant eigenspaces which coincide with the eigenspaces of the adaptation in (42); this facilitates the convergence analysis of the decision directed version considerably. Notice that the scalar quantity is bounded from below, which guarantees convergence of the mean field at a rate at least 0.706 times the rate of the optimum CLMS-AI. Furthermore, as the algorithm converges, this quantity approaches its limit, which is approximately equal to the square root of the SNR. It turns out that even for moderate SNR values the corresponding factor is very close to unity; this is already the case for SNR values of 10 and 20 db. This suggests that initially DD-CLMS-AI has a smaller convergence rate than CLMS-AI; however, as time progresses, its rate approaches that of the optimum algorithm. Using (52) and (59) we can compare the steady state performance of CLMS-AI and DD-CLMS-AI. If we form the ratio of the two excess ISIRs, this quantity is close to unity even for very low SNR: for SNR = 0 db it equals 1.2920; for 10 db, 1.0146; whereas for 20 db it equals unity (to Matlab's accuracy). This clearly suggests that the steady state performance of DD-CLMS-AI is extremely close to the optimum, since even for SNR = 0 db it differs from the optimum by slightly more than 1 db.
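For reference, the function Q(·) appearing in Theorem 3 is the standard Gaussian tail probability and can be evaluated through the complementary error function:

```python
from math import erfc, sqrt

def Q(x):
    # Q(x) = P(Z > x) for a standard Gaussian Z; standard identity via erfc.
    return 0.5 * erfc(x / sqrt(2.0))
```

For example, Q(0) = 1/2 and Q(x) decays rapidly for large x, which is consistent with the steady-state gap between DD-CLMS-AI and CLMS-AI vanishing at high SNR.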
Figure 3 presents a typical example of the relative performance of DD-CLMS-AI and CLMS-AI. We can see that initially the decision directed version has a nonlinear behavior; after some point, however, the two curves become parallel, meaning that the two convergence rates approach each other. At steady state, on the other hand, the two algorithms are practically indistinguishable, as predicted by our analysis.
5.5. Amplitude Estimation Algorithms
So far we have seen that the optimum trained algorithm in our LMS-like class is CLMS-AI, the constrained LMS with amplitude information, and that its decision directed version DD-CLMS-AI is almost equally efficient, with only slightly inferior performance. As stated in the introduction, it is unrealistic to assume knowledge of the amplitude of the user of interest even in training mode; we therefore need a means to acquire this information. We address this point here by proposing a simple yet efficient adaptive estimation scheme that is consistent with the algorithmic class introduced in Subsection 3.3. We can verify that for any filter c(n) satisfying the constraint, the expectation of the corresponding filter output statistic equals the desired amplitude. Approximating the expectation with a sample mean suggests the following simple adaptation
For a suitable choice of v, the above algorithm computes the sample mean of the quantities involved. However, if we allow v to be a positive constant smaller than unity, the algorithm will also be able to adapt to slow variations in the amplitude. The corresponding decision directed version satisfies
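The amplitude adaptation and its decision directed variant might be sketched as follows (a leaky-average form written for illustration; the exact recursion (69) of the paper is not reproduced, and the value of v is an assumption):

```python
import numpy as np

def amp_update(a_prev, c, r, v=0.995):
    # Leaky sample-mean of the filter-output magnitude |c^T r|; with a
    # forgetting factor v < 1 the estimate tracks slow amplitude variations.
    # In decision directed mode the same statistic applies, with the bit
    # replaced by its sign decision (which leaves |c^T r| unchanged).
    return v * a_prev + (1.0 - v) * abs(float(c @ r))
```

With v close to 1 the estimate is a long-memory average; smaller v trades estimation variance for tracking speed.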
For c(n) we can of course use the filter estimates from the corresponding trained or decision directed filter estimation algorithm. These amplitude estimates can now supply the necessary information to CLMS-AI and DD-CLMS-AI. The resulting algorithms will be called, respectively, constrained LMS with amplitude estimation (CLMS-AE) and decision directed constrained LMS with amplitude estimation (DD-CLMS-AE). Figure 3 presents a typical example of the relative performance of CLMS-AI and CLMS-AE, as well as of their decision directed versions. CLMS-AE is extremely close to the optimum CLMS-AI, whereas DD-CLMS-AE is slightly inferior to DD-CLMS-AI. The relative performance, however, depends strongly on the signaling conditions, as we will see in the simulations section.
5.6. Subspace Based Adaptations
For our algorithmic class it is also possible to define subspace based adaptations similar to [6, 8]. To obtain the corresponding algorithms it is sufficient to replace the data vector r(n) with its projection onto the signal subspace. This affects the convergence rate of the covariance matrix and, in part, the convergence of the mean filter estimates; in both cases the rate becomes equal to the slowest mode defined on the signal subspace. This means that the subspace based algorithms will not exhibit the slow performance degradation observed in Fig. 1; furthermore, the mean filter estimates will converge quickly even after a reduction in the number of users. Finally, the steady state excess ISIR will be smaller, since the corresponding term in (52) is reduced. These improved characteristics are obtained at the expense of an increased complexity [8], which needs to be paid in order to track the signal subspace and detect changes in its dimension (i.e. changes in the number of users).
6. Simulations

In this section we present a number of simulations to compare the relative performance of DD-CLMS-AE, CLMS-AE and BLMS. A common feature of all three algorithms is that they are scale invariant: if we scale the data r(n) by a factor c, the three algorithms yield exactly the same estimates as with the initial unscaled data, provided that we divide µ by c² (this property does not hold for LMS or CLMS). We can therefore fix the scale without loss of generality. In order to obtain algorithms whose steady state performance does not change every time there is a variation in the number of users or in the user and noise powers, it is convenient to employ a normalized version of µ. We therefore propose to replace µ by its normalized version, where the normalizing quantity is an estimate of the data power, with κ a constant satisfying 0 < κ < 1. The final form of the three algorithms we intend to test is the following: BLMS
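A power-normalized step size of this kind might be computed along the following lines (the exponentially weighted power estimator and the value of κ are our assumptions; the paper's exact estimator is not shown):

```python
import numpy as np

def power_normalized_mu(mu, r, p_prev, kappa=0.95):
    # Exponentially weighted estimate of the data power; dividing mu by it
    # keeps steady-state behavior insensitive to changes in the number of
    # users and in user/noise powers.
    p = kappa * p_prev + (1.0 - kappa) * float(r @ r)
    return mu / p, p
```

The pair (normalized step, updated power estimate) is carried along with the filter recursion, one update per received data vector.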
It should be noted that DD-CLMS-AE, like BLMS, is blind, since it requires the same a priori information and uses the same data r(n). It is exactly this version we propose as an alternative to BLMS. Comparing the computational complexity of the two algorithms, we see that DD-CLMS-AE requires only an additional constant number of scalar operations compared to BLMS; it is therefore equally computationally efficient. For our simulations we used a spreading factor N = 128, with signature waveforms generated randomly but then kept constant during the whole simulation set. The amplitude of the user of interest and the noise power were selected to yield an SNR of 20 db, and the step sizes were chosen accordingly. Expectations in our performance measure from (39) were estimated by taking the sample mean of 100 independent runs, while the remaining quantities were computed using the exact signatures and the user and noise powers. Since CLMS-AE has almost optimum performance it was regarded
as a point of reference, and BLMS and DD-CLMS-AE were compared against it. In the first example, we initially have the user of interest plus six interfering users of power 10 db; at time 10000 a 20 db interferer enters the channel, and at time 20000 this interferer exits along with three more 10 db interfering users. This example was selected in order to observe the behavior of the algorithms under signaling conditions that do not favor the validity of the Gaussian assumption for the signal plus noise part of the data (because of the small K). Figure 4 depicts the outcome of our simulation. We can see that DD-CLMS-AE follows CLMS-AE very closely. Notice the slow convergence after the second change, predicted by our analysis for the case of users exiting the channel. The second example compares the behavior of the algorithms when power control is employed (i.e. all users have equal power). Initially we have thirty users; at time 10000 the number of users increases to thirty five, while at 20000 it is reduced to twenty five. Notice from Fig. 5 that the decision directed version again follows CLMS-AE very closely. The third and final example contains rather extreme signaling conditions. We start with thirty users, only now the interferers have power equal to 10 db. At time 10000 five 20 db interferers enter, while at time 20000 all five 20 db along with five 10 db interferers exit the channel. In Fig. 6 we can observe that the initial nonlinear behavior of DD-CLMS-AE is now more
pronounced. However, the algorithm very quickly establishes a convergence rate similar to that of CLMS-AE, while the steady state behavior of the two algorithms is indistinguishable. In all three examples we observe that the steady state excess ISIR for BLMS is approximately equal to 13 db, whereas that of DD-CLMS-AE is 30 db. Translating these numbers into actual BER through the corresponding Q-function estimates, we obtain that the BER of BLMS is orders of magnitude larger than that of DD-CLMS-AE. Finally, we should mention that under extreme interference conditions it is possible for DD-CLMS-AE to exhibit divergence. This is because the algorithm is
unable to obtain satisfactory estimates of the amplitude quickly enough, and therefore provides erroneous amplitude estimates to the filter estimation part, resulting in divergence. In such cases it is advisable to run BLMS in parallel with DD-CLMS-AE and use its filter estimates in (69) to estimate the amplitude (BLMS tends to be more robust under extreme signaling conditions). Once convergence has been established we can use the filter estimates of DD-CLMS-AE in (69) and discard BLMS completely.
7. Conclusion
We presented a constrained class of adaptive linear multiuser detection algorithms that constitutes an extension of the popular blind LMS algorithm of [12]. Through a detailed analysis of the proposed class we showed that the conventional LMS and its constrained version, under training, do not necessarily outperform the blind LMS; for this to hold it is necessary to incorporate the amplitude information of the user of interest into the trained algorithm. Simple and efficient adaptations that estimate the required amplitude were proposed which, when combined with the filter estimation algorithm, result in nearly optimum performance in both the trained and the decision directed mode. Since the proposed decision directed version is also blind, it could constitute a serious alternative to the popular blind LMS.
Appendix

Proof of Theorem 1: By using the constraint, we can write (40), (41) as follows

Taking expectation in (73) and using the independence between the data r(n) and c(n − 1), as well as the independence between data bits and noise, we can easily show (42). To show (43), notice first that, by subtracting (42) from (41) and combining (44) with (45), the estimation error satisfies the same recursion as the sum of the two processes. Now notice that if col denotes the column vector obtained by stacking the columns of a matrix one after the other, then

where ⊗ denotes the Kronecker product. Using the independence of the processes involved in the recursions, and induction, we can show that the vector processes x(n), y(n) are zero mean and uncorrelated. This in turn helps us prove that the sum of their covariances satisfies exactly the same recursion as the covariance of the estimates. This concludes the proof.

Proof of Theorem 2: The easiest way to show (50), (51) is to consider the column version of the corresponding covariance matrices. For simplicity we show only (51); (50) follows in the same way. We have

Now notice that, to a first order approximation in µ, we can write

Since higher order terms do not contribute in approximations up to order µ, we replaced in one of the previous equations one term with another expression suitable to our goal. Substituting the above into (75) yields the column version of (51). To show (52) we have from (46) that, at steady state
Consider now (50) at steady state; we have the following Lyapunov equation that defines, to a first order approximation in µ,

From (50) we can show by induction that

which suggests that

With the above assumption, which means that

we conclude that

Substituting in (77) yields the desired result.

Proof of Theorem 3: To show the theorem, as in the trained case, we disregard the last part of the recursion introduced to correct the non-robustness problem. Since we will use results from Stochastic Approximation Theory contained in [13, pages 104–108], let us define certain quantities according to the notation used in this reference

where c is a deterministic vector and the expectation is taken with respect to the data vector r(n). In order to proceed we need a number of identities, which we present without proof. Let z be a random variable and z a random vector, both zero mean and jointly Gaussian, and let a be a constant; then

Taking traces in (78), using (79), (80), yields an expression involving a scalar quantity whose exact form is unimportant. Using (86) we can now compute h(c) and we obtain

From [13] we then have that the mean trajectory satisfies the recursion

which is exactly (57). To compute the steady state covariance matrix we again use the identities (86), (87), (88), under the same assumptions as before; we then have that

In a similar way we can show that
Furthermore we need to compute
Substituting in the Lyapunov equation that determines the steady state covariance matrix of the estimates [13, page 107]
and taking traces yields (59).

Note 1. With lower case letters we denote scalars, with boldface lower case, vectors, and with boldface upper case, matrices.
References

1. M. Honig and H.V. Poor, "Adaptive Interference Suppression," in Wireless Communications: Signal Processing Perspectives, H.V. Poor and G.W. Wornell (Eds.), Upper Saddle River, NJ: Prentice Hall, 1998, Ch. 2, pp. 64–128.
2. U. Madhow and M. Honig, "MMSE Interference Suppression for Direct Sequence Spread Spectrum CDMA," IEEE Trans. Commun., vol. 42, Dec. 1994, pp. 3178–3188.
3. U. Madhow, "Blind Adaptive Interference Suppression for Direct-Sequence CDMA," Proceedings of the IEEE, vol. 86, no. 10, Oct. 1998, pp. 2049–2069.
4. S.L. Miller, "An Adaptive Direct Sequence Code Division Multiple Access Receiver for Multiuser Interference Rejection," IEEE Trans. Commun., vol. 43, 1995, pp. 1746–1755.
5. H.V. Poor and S. Verdú, "Probability of Error in MMSE Multiuser Detection," IEEE Trans. Inform. Theory, vol. 43, May 1997, pp. 858–871.
6. S. Roy, "Subspace Blind Adaptive Detection for Multiuser CDMA," IEEE Trans. Commun., vol. 48, no. 1, Jan. 2000, pp. 169–175.
7. J.B. Schodorf and D.B. Douglas, "A Constrained Optimization Approach to Multiuser Detection," IEEE Trans. Signal Process., vol. 45, no. 1, Jan. 1997, pp. 258–262.
8. X. Wang and H.V. Poor, "Blind Multiuser Detection: A Subspace Approach," IEEE Trans. Inform. Theory, vol. 44, no. 2, March 1998, pp. 667–690.
9. Z. Xie, R.T. Short, and C.K. Rushforth, "A Family of Suboptimum Detectors for Coherent Multiuser Communications," IEEE J. Select. Areas Commun., vol. 8, May 1990, pp. 683–690.
10. S. Verdú, Multiuser Detection, New York: Cambridge University Press, 1998.
11. G.V. Moustakides and H.V. Poor, "On the Relative Error Probabilities of Linear Multiuser Detectors," IEEE Trans. Inform. Theory, vol. 47, no. 1, Jan. 2001, pp. 450–456.
12. M. Honig, U. Madhow, and S. Verdú, "Blind Multiuser Detection," IEEE Trans. Inform. Theory, vol. 41, July 1995, pp. 944–960.
13. A. Benveniste, M. Métivier, and P. Priouret, Adaptive Algorithms and Stochastic Approximation, Berlin: Springer, 1990.
14. J.A. Bucklew, T.G. Kurtz, and W.A. Sethares, "Weak Convergence and Local Stability Properties of Fixed Step Size Recursive Algorithms," IEEE Trans. Inform. Theory, vol. 39, no. 3, May 1993, pp. 966–978.
15. G.V. Moustakides, "Locally Optimum Adaptive Signal Processing Algorithms," IEEE Trans. Signal Process., vol. 46, no. 12, Dec. 1998, pp. 3315–3325.
16. V. Solo and X. Kong, Adaptive Signal Processing Algorithms: Stability and Performance, Englewood Cliffs, NJ: Prentice Hall, 1995.
17. S. Haykin, Adaptive Filter Theory, 3rd edn., Upper Saddle River, NJ: Prentice Hall, 1997.
18. R. Lupas and S. Verdú, "Linear Multiuser Detectors for Synchronous Code-Division Multiple-Access Channels," IEEE Trans. Inform. Theory, vol. 35, Jan. 1989, pp. 123–136.
George V. Moustakides was born in Greece in 1955. He received the diploma in Electrical Engineering from the National Technical University of Athens, Greece, in 1979, the M.Sc. in Systems Engineering from the Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, in 1980, and the Ph.D. in Electrical Engineering and Computer Science from Princeton University, Princeton, NJ, in 1983. From 1983 to 1986 he held a research position at INRIA, Rennes, France, and from 1987 to 1990 a research position at the Computer Technology Institute of Patras, Patras, Greece. From 1991 to 1996 he was Associate Professor at the Computer Engineering and Informatics Department (CEID) of the University of Patras, and in 1996 he became Professor at the same department. Since 2001 he has been with INRIA, Rennes, France, as research director of the Communication Signal Processing group and part-time professor at CEID. His interests include optical communications, multiuser detection, adaptive algorithms and sequential change detection. moustaki@ceid.upatras.gr