Matching Items (84)

Description
Demands in file size and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high-definition video content. Factor in the consumer desire for convenience, and wireless service becomes the most desired approach for interconnectivity. Consumers expect wireless service to emulate wired service with virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high-definition video applications. I then examine proposed solutions at the physical (PHY) and media access control (MAC) layers, as well as cross-layer schemes. These schemes are subsequently evaluated in terms of their usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap exists in the published literature pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented, and no work has been found that reports results of such a model. This research proposes, develops, and implements in MATLAB an alternate cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high-definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully-adaptive algorithm provides over 18 dB of improvement. The fully-adaptive implementation is conclusively shown to be superior to non-adaptive techniques and substantially superior even to quasi-adaptive algorithms.
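
As a point of reference for the metric used above, the sketch below shows how an average-PSNR comparison is typically computed; the frame data here is synthetic and illustrative only (the thesis itself evaluates actual high-definition sequences in MATLAB):

```python
import numpy as np

def psnr(reference, decoded, max_val=255.0):
    """Peak signal-to-noise ratio between two frames, in dB."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def average_psnr(ref_frames, dec_frames):
    """Average PSNR over a decoded video sequence."""
    return np.mean([psnr(r, d) for r, d in zip(ref_frames, dec_frames)])

# Toy frames standing in for real HD sequences.
rng = np.random.default_rng(1)
ref = [rng.integers(0, 256, (720, 1280)) for _ in range(5)]
dec = [f + rng.normal(0, 4, f.shape) for f in ref]
print(f"average PSNR: {average_psnr(ref, dec):.1f} dB")
```
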
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple-access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communications experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, in which each branch is tasked with the Doppler distortion of a different path and/or user, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems: the design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
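
For illustration, a minimal sketch of one multiple-resampling branch, assuming SciPy's polyphase resampler and hypothetical per-path Doppler-scale estimates (the thesis's actual receiver chains are not reproduced here):

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def resampling_branch(rx, doppler_scale, max_denom=10_000):
    """Undo a Doppler scaling factor a by resampling rx at rate 1/(1 + a)."""
    ratio = Fraction(1.0 / (1.0 + doppler_scale)).limit_denominator(max_denom)
    return resample_poly(rx, ratio.numerator, ratio.denominator)

# Toy passband signal stretched by a = 1e-3, plus one branch per
# hypothesized path/user Doppler scale; each branch output would feed
# CFO correction, FFT demodulation, and equalization.
fs = 48_000
t = np.arange(fs) / fs
rx = np.cos(2 * np.pi * 12_000 * (1 + 1e-3) * t)
branches = [resampling_branch(rx, a) for a in (1e-3, -5e-4)]
```
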
Contributors: Tu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is beneficial especially when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and given faulty-module information. Topologies such as the series-parallel (SP), the total cross-tied (TCT), the bridge link (BL), and their bypassed versions are considered. The topology reconfiguration method compares the efficiencies of the topologies and evaluates, among other factors, the percentage gain in generated power that would be obtained by reconfiguring the array, in order to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on the array performance are also studied. The simulations are carried out using a SPICE simulator, and the simulation results are validated against experimental data provided by the PACECO Company.
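
A toy sketch of the reconfiguration decision the abstract describes, with hypothetical maximum-power values standing in for SPICE results and an assumed switching threshold:

```python
def pick_topology(pmax_by_topology, current, gain_threshold=0.02):
    """Reconfigure only if the best topology beats the current one by more
    than the threshold (switching is not free in hardware)."""
    best, p_best = max(pmax_by_topology.items(), key=lambda kv: kv[1])
    gain = (p_best - pmax_by_topology[current]) / pmax_by_topology[current]
    return best if gain > gain_threshold else current

# Hypothetical maximum-power values (watts) for the same array under one
# shading pattern, wired as SP, TCT, and BL.
pmax = {"SP": 412.0, "TCT": 455.0, "BL": 430.0}
print(pick_topology(pmax, current="SP"))   # -> TCT
```
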
Contributors: Buddha, Santoshi Tejasri (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, and hardware complexity. Real-time video streaming, gaming, and social networking are a few such examples. Over the years, many problems have been addressed toward the goal of enabling such applications; however, significant challenges remain, particularly in the context of multi-user communications. Motivated by some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signaling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even at very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels, where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed: a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean-squared error based detector followed by a network-coded sequence decoder.
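
As a rough illustration of physical-layer network coding at the relay, the sketch below uses BPSK and a simple threshold demapper for the XOR of the two nodes' bits; the list and joint decoders developed in the dissertation are considerably more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
b1, b2 = rng.integers(0, 2, n), rng.integers(0, 2, n)   # bits at nodes A and B
s1, s2 = 1 - 2 * b1, 1 - 2 * b2                          # BPSK symbols
y = s1 + s2 + 0.3 * rng.standard_normal(n)               # multiple-access phase at relay

# PNC demapping: |y| near 2 means equal bits (XOR = 0); |y| near 0 means XOR = 1.
xor_hat = (np.abs(y) < 1.0).astype(int)
print("relay XOR error rate:", np.mean(xor_hat != (b1 ^ b2)))
```
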
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes solutions to overcome the high-complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency-domain representation from its equivalent auditory model output. The first problem is aimed at the high computational complexity involved in solving perceptual objective functions, which require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory-pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of the auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
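
The following is a loose caricature of the pruning idea, assuming a hypothetical rule that keeps only the strongest detector channels; it is not the dissertation's actual auditory model stages:

```python
import numpy as np

def pruned_loudness(specific_loudness, keep_fraction=0.2):
    """Illustrative pruning rule: sum only the detector channels that carry
    most of the specific loudness and skip the rest."""
    order = np.argsort(specific_loudness)[::-1]
    kept = order[: max(1, int(keep_fraction * specific_loudness.size))]
    return specific_loudness[kept].sum()

# Toy excitation pattern: a few strong channels dominate total loudness.
channels = np.abs(np.random.default_rng(2).standard_normal(100)) ** 3
approx, exact = pruned_loudness(channels), channels.sum()
print(f"relative loudness error: {abs(exact - approx) / exact:.1%}")
```
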
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target-focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range-Doppler and range-migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance SAR image formation is the fractional Fourier transform (FRFT). This transform has been introduced to the signal processing community relatively recently, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band that provides imagery with resolutions between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared against a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability and thereby increase the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
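
For reference, a minimal sketch of the peak-to-sidelobe ratio metric mentioned above, computed from a focused point-target response (the null-finding rule here is a simplifying assumption):

```python
import numpy as np

def pslr_db(point_response):
    """Peak-to-sidelobe ratio of a focused point-target response, in dB."""
    mag = np.abs(np.asarray(point_response, dtype=float))
    peak = int(mag.argmax())
    left = peak
    while left > 0 and mag[left - 1] < mag[left]:        # walk to first null
        left -= 1
    right = peak
    while right < mag.size - 1 and mag[right + 1] < mag[right]:
        right += 1
    sidelobes = np.concatenate([mag[:left], mag[right + 1:]])
    return 20.0 * np.log10(mag[peak] / sidelobes.max())

# A sinc-shaped mainlobe has its first sidelobe about 13.3 dB down.
x = np.sinc(np.linspace(-8, 8, 1601))
print(f"PSLR: {pslr_db(x):.1f} dB")
```
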
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customer requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second is that wireless transmission media typically introduce errors in the bit stream due to noise, congestion, and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. Some parts of the scalable bit stream are more important than others. In the unequal error protection (UEP) scheme, parity bytes are assigned to the video packets based on their importance; in the equal error protection (EEP) scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264/SVC single-layer video streams for long video sequences of different genres are considered in this study, serving as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams, so a framework to obtain an H.264/SVC-compatible bit stream is developed in this study. It is concluded that assigning parity bytes based on the distribution of data across the different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
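
A toy sketch of the unequal-error-protection idea, with hypothetical frame-type importance weights; the study's actual allocation is driven by the measured data distribution per frame type together with Reed-Solomon coding:

```python
# Hypothetical UEP rule: split a parity budget across frame types in
# proportion to length times an importance weight, rather than length alone.
def uep_parity_allocation(packet_lengths, weights, parity_budget):
    total = sum(packet_lengths[t] * weights[t] for t in packet_lengths)
    return {t: round(parity_budget * packet_lengths[t] * weights[t] / total)
            for t in packet_lengths}

lengths = {"I": 12_000, "P": 6_000, "B": 3_000}   # bytes per frame type (toy)
weights = {"I": 3.0, "P": 2.0, "B": 1.0}          # illustrative importance
print(uep_parity_allocation(lengths, weights, parity_budget=4_000))
```
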
Contributors: Sundararaman, Hari (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
The tracking of multiple targets becomes more challenging in complex environments due to the additional degrees of nonlinearity in the measurement model. In urban terrain, for example, multiple reflection-path measurements need to be exploited since line-of-sight observations are not always available. Multiple-target tracking in urban terrain environments is traditionally implemented using sequential Monte Carlo filtering algorithms and data association techniques. However, data association techniques can be computationally intensive and require very strict conditions for efficient performance. This thesis investigates the probability hypothesis density (PHD) method for tracking multiple targets in urban environments. The PHD is based on the theory of random finite sets and is implemented using the particle filter. Unlike data association methods, it can be used to estimate the number of targets as well as their corresponding tracks. A modified maximum-likelihood version of the PHD (MPHD) is proposed to automatically and adaptively estimate the measurement types available at each time step. Specifically, the MPHD allows measurement-to-nonlinearity associations such that the best-matched measurement can be used at each time step, resulting in improved radar coverage and scene visibility. Numerical simulations demonstrate the effectiveness of the MPHD in improving tracking performance, both for multiple targets and for targets in clutter.
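
One detail worth making concrete: in a particle implementation of the PHD filter, the expected number of targets falls out of the particle weights directly, as the minimal sketch below illustrates (toy weights, not the MPHD itself):

```python
import numpy as np

# The PHD is an intensity function; its integral is the expected number of
# targets. With a weighted-particle representation, that integral is simply
# the sum of the particle weights.
def expected_target_count(weights):
    return float(np.sum(weights))

weights = np.full(3000, 2.4 / 3000)      # toy particle set, total mass ~ 2.4
n_hat = int(round(expected_target_count(weights)))
print(n_hat)                             # -> 2; tracks come from clustering
```
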
Contributors: Zhou, Meng (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
Genomic and proteomic sequences, which are in the form of deoxyribonucleic acid (DNA) and amino acids respectively, play a vital role in the structure, function, and diversity of every living cell. As a result, various genomic and proteomic sequence processing methods have been proposed from diverse disciplines, including biology, chemistry, physics, computer science, and electrical engineering. In particular, signal processing techniques have been applied to the problems of sequence querying and alignment, which compare and classify regions of similarity in the sequences based on their composition. However, although current approaches obtain results that can be attributed to key biological properties, they require pre-processing and lack robustness to sequence repetitions. In addition, these approaches do not provide much support for efficiently querying sub-sequences, a process that is essential for tracking localized database matches. In this work, a query-based alignment method for biological sequences is first proposed that maps sequences to time-domain waveforms before processing the waveforms for alignment in the time-frequency plane. The mapping uses waveforms, such as time-domain Gaussian functions, with unique sequence representations in the time-frequency plane. The proposed alignment method employs a robust querying algorithm that utilizes a time-frequency signal expansion whose basis function is matched to the basic waveform in the mapped sequences. The resulting WAVEQuery approach is demonstrated for both DNA and protein sequences using the matching pursuit decomposition as the signal basis expansion. The alignment localization of WAVEQuery is specifically evaluated over repetitive database segments, and the method operates in real time without pre-processing. WAVEQuery is demonstrated to significantly outperform the biological sequence alignment method BLAST for DNA queries with repetitive segments. A generalized version of the WAVEQuery approach using the metaplectic transform is also described for protein sequence structure prediction. For protein alignment, it is often necessary to compare not only the one-dimensional (1-D) primary sequence structure but also the secondary and tertiary three-dimensional (3-D) structures, after considering the conformations in 3-D space due to the degrees of freedom of these structures. Accordingly, a novel directionality-based 3-D waveform mapping for 3-D protein structures is proposed and used to compare protein structures with a matched filter approach. By incorporating a 3-D time axis, a highly localized Gaussian-windowed chirp waveform is defined, and the amino acid information is mapped to the chirp parameters, which are then directly used to obtain directionality in 3-D space. This mapping is unique in that additional characteristic protein information, such as hydrophobicity, which relates the sequence to the structure, can be added as another representation parameter. The additional parameter helps track similarities over local segments of the structure, enabling classification of distantly related proteins that have only partial structural similarities. This approach is successfully tested for pairwise alignments over full-length structures, alignments over multiple structures to form phylogenetic trees, and alignments over local segments. Basic classification over protein structural classes using directional descriptors of the protein structure is also performed.
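
A minimal sketch of the sequence-to-waveform mapping idea, with a hypothetical base-to-frequency assignment (the thesis's actual mapping and its matching pursuit decomposition are not reproduced here):

```python
import numpy as np

# Illustrative mapping: each base becomes a Gaussian-windowed tone whose
# frequency encodes the symbol, so the concatenated waveform has a unique
# time-frequency signature for the sequence.
BASE_FREQ = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}   # hypothetical assignment

def sequence_to_waveform(seq, fs=100, dur=1.0, sigma=0.15):
    t = np.arange(0, dur, 1 / fs)
    wave = np.zeros(len(seq) * t.size)
    for k, base in enumerate(seq):
        atom = (np.exp(-((t - dur / 2) ** 2) / (2 * sigma ** 2))
                * np.cos(2 * np.pi * BASE_FREQ[base] * t))
        wave[k * t.size:(k + 1) * t.size] = atom
    return wave

w = sequence_to_waveform("GATTACA")
```
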
Contributors: Ravichandran, Lakshminarayan (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas S (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2011

Description
In the late 1960s, Granger published a seminal study on causality in time series, using linear interdependencies and information transfer. Recent developments in the field of information theory have introduced new methods to investigate the transfer of information in dynamical systems. Using concepts from chaos and Markov theory, many of these methods have evolved to capture non-linear relations and information flow between coupled dynamical systems, with applications to fields like biomedical signal processing. This thesis deals with the application of information theory to non-linear multivariate time series and develops measures of information flow to identify significant driver and response (driven) components in networks of coupled sub-systems, where each connection may vary in coupling strength and direction (uni- or bi-directional). Transfer entropy (TE) is used to quantify pairwise directional information transfer. Four TE-based measures of information flow are proposed, namely TE outflow (TEO), TE inflow (TEI), TE net flow (TEN), and average TE flow (ATE). First, the reliability of the information flow measures is evaluated on models, with and without noise, and the driver and response sub-systems in these models are identified. Second, these measures are applied to electroencephalographic (EEG) data from two patients with focal epilepsy. The analysis showed dominant directions of information flow between brain sites and identified the epileptogenic focus as the system component that typically has the highest value for the proposed measures (for example, ATE). Statistical tests between pre-seizure (preictal) and post-seizure (postictal) information flow also showed a breakage of the driving of the brain by the focus after seizure onset. The above findings shed light on the function of the epileptogenic focus and the understanding of ictogenesis. They are expected to contribute to the diagnosis of epilepsy, for example through accurate identification of the epileptogenic focus from interictal periods, as well as to the development of better seizure detection, prediction, and control methods, for example by isolating pathologic areas of excessive information flow through electrical stimulation.
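
For concreteness, a minimal histogram-based estimate of pairwise transfer entropy, from which flow measures like TEO, TEI, TEN, and ATE can be built by summing over a network's connections; the one-step lag and fixed binning here are simplifying assumptions:

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of one-step TE from x to y:
    sum over p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
    yf, yp, xp = y[1:], y[:-1], x[:-1]
    joint, _ = np.histogramdd(np.column_stack([yf, yp, xp]), bins=bins)
    p_xyz = joint / joint.sum()
    p_yy = p_xyz.sum(axis=2, keepdims=True)        # p(y+, y)
    p_yx = p_xyz.sum(axis=0, keepdims=True)        # p(y, x)
    p_y = p_xyz.sum(axis=(0, 2), keepdims=True)    # p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (p_xyz * p_y) / (p_yy * p_yx)
        return np.nansum(p_xyz * np.log2(np.where(ratio > 0, ratio, 1.0)))

# y is driven by the past of x, so TE(x -> y) should dominate TE(y -> x).
rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```
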
Contributors: Prasanna, Shashank (Author) / Iasemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011