Description
Following the success of incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome the high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency-domain representation from its equivalent auditory model output. The first problem addresses the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory-pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
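To make the frequency-pruning idea concrete, here is a minimal Python sketch (not the dissertation's implementation): spectral components far below the strongest one are discarded before the costly auditory-model stages, and a toy specific-loudness measure is summed over the survivors. The 40 dB threshold and the 0.23 compressive exponent are illustrative assumptions.

```python
import numpy as np

def pruned_loudness(spectrum, prune_db=40.0):
    """Illustrative frequency pruning: discard spectral components more than
    prune_db below the strongest one before the costly auditory-model stages,
    then sum a toy specific-loudness measure over the surviving components.
    The 0.23 compressive exponent follows common loudness models; everything
    else is a placeholder for the real excitation-pattern computation."""
    power = np.abs(spectrum) ** 2
    keep = power > power.max() * 10.0 ** (-prune_db / 10.0)   # pruning step
    specific_loudness = power[keep] ** 0.23
    return specific_loudness.sum(), int(keep.sum()), power.size

# Example: a sparse, sinusoid-dominated spectrum where pruning removes most bins.
rng = np.random.default_rng(0)
spectrum = 1e-3 * rng.standard_normal(1024)
spectrum[[50, 120, 400]] += [1.0, 0.5, 0.2]
loudness, kept, total = pruned_loudness(spectrum)
print(f"loudness = {loudness:.3f}, evaluated {kept}/{total} components")
```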
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The cyanobacterium Synechocystis sp. PCC 6803 performs oxygenic photosynthesis. Light energy conversion in photosynthesis takes place in photosystem I (PSI) and photosystem II (PSII), which contain chlorophyll that absorbs light energy and provides the driving force for photosynthesis. However, excess light energy may lead to the formation of reactive oxygen species that cause damage to photosynthetic complexes, which subsequently need repair or replacement. To gain insight into the degradation/biogenesis dynamics of the photosystems, the lifetimes of photosynthetic proteins and chlorophyll were determined by a combined stable-isotope (15N) and mass spectrometry method. The lifetimes of PSII and PSI proteins ranged from 1 to 33 and 30 to 75 hours, respectively. Interestingly, chlorophyll had longer lifetimes than the chlorophyll-binding proteins in these photosystems. Therefore, photosynthetic proteins turn over and are replaced independently from each other, and chlorophyll is recycled from damaged chlorophyll-binding proteins. In Synechocystis, there are five small Cab-like proteins (SCPs: ScpA-E) that share chlorophyll a/b-binding motifs with LHC proteins in plants. SCPs appear to transiently bind chlorophyll and to regulate chlorophyll biosynthesis. In this study, the association of ScpB, ScpC, and ScpD with damaged and repaired PSII was demonstrated. Moreover, in a mutant lacking SCPs, most PSII protein lifetimes were unaffected, but the lifetime of chlorophyll was decreased, and one of the nascent PSII complexes was missing. SCPs appear to bind PSII chlorophyll while PSII is repaired, and SCPs stabilize nascent PSII complexes. Furthermore, aminolevulinic acid biosynthesis, an early step of chlorophyll biosynthesis, was impaired in the absence of SCPs, so that the amount of chlorophyll in the cells was reduced. Finally, a deletion mutation was introduced into the sll1906 gene, encoding a member of the putative bacteriochlorophyll delivery (BCD) protein family. The Sll1906 sequence contains possible chlorophyll-binding sites, and its homolog in purple bacteria functions in the proper assembly of light-harvesting complexes. However, the sll1906 deletion did not affect chlorophyll degradation/biosynthesis or photosystem assembly; other (parallel) pathways may exist that fully compensate for the lack of Sll1906. This study has highlighted the dynamics of photosynthetic complexes in their biogenesis and turnover and the coordination between the synthesis of chlorophyll and photosynthetic proteins.
Contributors: Yao, Cheng I Daniel (Author) / Vermaas, Wim (Thesis advisor) / Fromme, Petra (Committee member) / Roberson, Robert (Committee member) / Webber, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms and modified to be practical for application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.
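As a rough illustration of the kind of sparse decomposition involved, the sketch below recovers a sparse scene from a sparsely sampled Fourier aperture with plain orthogonal matching pursuit; the dictionary, dimensions and sparsity level are illustrative assumptions, and this is not one of the two modified algorithms developed in the work.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily select the dictionary column most
    correlated with the residual, then re-fit all selected columns by least
    squares.  A simplified stand-in for the modified algorithms in the work."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy example: recover a sparse "scene" from a sparsely sampled Fourier aperture.
rng = np.random.default_rng(1)
n, kept, k = 128, 48, 5
dictionary = np.fft.fft(np.eye(n)) / np.sqrt(n)      # full-aperture imaging operator
rows = rng.choice(n, kept, replace=False)            # aperture samples that survived
A = dictionary[rows]
scene = np.zeros(n, dtype=complex)
scene[rng.choice(n, k, replace=False)] = 1.0
y = A @ scene
x_hat = omp(A, y, k)
print("recovered support:", sorted(np.flatnonzero(np.abs(x_hat) > 1e-6).tolist()))
print("true support:     ", sorted(np.flatnonzero(np.abs(scene) > 0).tolist()))
```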
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance the processing of SAR image formation is the fractional Fourier transform (FRFT). This transform has recently been introduced to the signal processing community, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band and providing imagery with resolutions between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared against a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability and thereby increase the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
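One figure of merit mentioned here, the peak-to-sidelobe ratio of a focused target, can be estimated from a 1-D impulse response as in the sketch below; the mainlobe-boundary heuristic (first local minimum on either side of the peak) is an illustrative assumption rather than a standard definition.

```python
import numpy as np

def peak_to_sidelobe_ratio_db(response):
    """Estimate the PSLR of a 1-D focused target response: the ratio (in dB)
    of the mainlobe peak to the strongest sidelobe outside the mainlobe.
    The mainlobe is taken to extend to the first local minimum on either
    side of the peak, a simple heuristic for this sketch."""
    mag = np.abs(response)
    peak = int(np.argmax(mag))
    left = peak
    while left > 0 and mag[left - 1] < mag[left]:
        left -= 1
    right = peak
    while right < mag.size - 1 and mag[right + 1] < mag[right]:
        right += 1
    sidelobes = np.r_[mag[:left], mag[right + 1:]]
    return 20.0 * np.log10(mag[peak] / sidelobes.max())

# Example: a sinc-like compressed pulse has a PSLR of about 13.3 dB.
n = np.arange(-256, 257)
print(f"{peak_to_sidelobe_ratio_db(np.sinc(n / 16.0)):.1f} dB")
```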
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
ATP synthase is a large multimeric protein complex responsible for generating the energy molecule adenosine triphosphate (ATP) in most organisms. The catalysis involves the rotation of a ring of c-subunits, which is driven by the transmembrane electrochemical gradient. This dissertation reports how the eukaryotic c-subunit from spinach chloroplast ATP synthase has successfully been expressed in Escherichia coli and purified in mg quantities by incorporating a unique combination of methods. Expression was accomplished using a codon-optimized gene for the c-subunit, which was expressed as a fusion with the larger, more soluble, native maltose binding protein (MBP-c1). The fusion protein MBP-c1 was purified on an affinity column, and the c1 subunit was subsequently severed by protease cleavage in the presence of detergent. Final purification of the monomeric c1 subunit was accomplished using reversed-phase column chromatography with ethanol as an eluent. Circular dichroism spectroscopy data showed clear evidence that the purified c-subunit is folded with the native alpha-helical secondary structure. Recent experiments appear to indicate that this monomeric recombinant c-subunit forms an oligomeric ring similar to its native tetradecameric form when reconstituted in liposomes. The F-type ATP synthase c-subunit stoichiometry is currently known to vary from 8 to 15 subunits among different organisms. This has a direct influence on the metabolic requirements of the corresponding organism, because each c-subunit binds and transports one H+ across the membrane as the ring makes a complete rotation. The c-ring rotation drives rotation of the gamma-subunit, which in turn drives the synthesis of 3 ATP for every complete rotation. The availability of a recombinantly produced c-ring will lead to new experiments designed to investigate the possible factors that determine the variable c-ring stoichiometry and structure.
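The metabolic consequence of the variable stoichiometry follows from simple arithmetic: one full rotation translocates one H+ per c-subunit and, as stated above, drives the synthesis of 3 ATP, so the H+/ATP cost is n/3. A small illustrative calculation:

```python
# H+/ATP cost across the known range of c-ring stoichiometries (8 to 15 subunits),
# assuming 3 ATP per full rotation and one H+ per c-subunit, as stated above.
ATP_PER_ROTATION = 3
for n_c in range(8, 16):
    print(f"c-ring of {n_c:2d} subunits -> {n_c / ATP_PER_ROTATION:.2f} H+ per ATP")
```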
Contributors: Lawrence, Robert Michael (Author) / Fromme, Petra (Thesis advisor) / Chen, Julian J.L. (Committee member) / Woodbury, Neal W. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customer requirements, two major problems need to be solved. The first problem is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, also known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second problem is that wireless transmission media typically introduce errors in the bit stream due to noise, congestion and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In the scalable bit stream, some parts of the bit stream are more important than others. In the unequal error protection scheme, parity bytes are assigned to the video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264 SVC single-layer video streams for long video sequences of different genres are considered in this study, which serves as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams, so a framework to obtain an H.264 SVC-compatible bit stream is modeled in this study. It is concluded that assigning parity bytes based on the distribution of data across the different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
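A minimal sketch of the unequal-error-protection idea described here: a parity-byte budget is split across packets in proportion to an importance weight (base layer over enhancement layers) rather than in proportion to packet length. The weights, packet names and budget are illustrative assumptions, not values from the study.

```python
def allocate_parity(importance, total_parity_bytes):
    """Unequal error protection: split a parity-byte budget across packets in
    proportion to importance weights, instead of in proportion to packet
    length as in the equal-protection scheme.  Rounding may leave the total a
    byte or two off the exact budget; a real allocator would redistribute it."""
    total_weight = sum(importance)
    return [round(total_parity_bytes * w / total_weight) for w in importance]

# Illustrative SVC-like packets: the base-layer I-frame gets the most protection.
packets = ["I-base", "P-base", "enh-layer-1", "enh-layer-2"]   # hypothetical names
importance = [8, 4, 2, 1]                                      # assumed weights
print(dict(zip(packets, allocate_parity(importance, 60))))
```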
Contributors: Sundararaman, Hari (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The tracking of multiple targets becomes more challenging in complex environments due to the additional degrees of nonlinearity in the measurement model. In urban terrain, for example, there are multiple reflection-path measurements that need to be exploited since line-of-sight observations are not always available. Multiple target tracking in urban terrain environments is traditionally implemented using sequential Monte Carlo filtering algorithms and data association techniques. However, data association techniques can be computationally intensive and require very strict conditions for efficient performance. This thesis investigates the probability hypothesis density (PHD) method for tracking multiple targets in urban environments. The PHD is based on the theory of random finite sets, and it is implemented using the particle filter. Unlike data association methods, it can be used to estimate the number of targets as well as their corresponding tracks. A modified maximum-likelihood version of the PHD (MPHD) is proposed to automatically and adaptively estimate the measurement types available at each time step. Specifically, the MPHD allows measurement-to-nonlinearity associations such that the best-matched measurement can be used at each time step, resulting in improved radar coverage and scene visibility. Numerical simulations demonstrate the effectiveness of the MPHD in improving tracking performance, both for multiple targets and for targets in clutter.
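For orientation, the sketch below shows one measurement-update step of a particle (SMC) PHD filter in one dimension, where the sum of the updated weights approximates the expected number of targets; the Gaussian likelihood, constant clutter intensity, and omission of the prediction and state-extraction steps are simplifying assumptions, and this is not the MPHD variant proposed in the thesis.

```python
import numpy as np

def phd_update(particles, weights, measurements, p_detect=0.9,
               clutter_intensity=1e-3, meas_std=1.0):
    """One measurement-update step of a particle PHD filter for scalar
    positions.  After the update, the sum of the weights approximates the
    expected number of targets."""
    updated = (1.0 - p_detect) * weights                  # missed-detection term
    for z in measurements:
        lik = p_detect * weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2) \
              / (np.sqrt(2.0 * np.pi) * meas_std)
        updated = updated + lik / (clutter_intensity + lik.sum())
    return updated

# Two targets near 0 and 10; the prior intensity is spread uniformly over the scene.
rng = np.random.default_rng(2)
particles = rng.uniform(-5.0, 15.0, 2000)
weights = np.full(2000, 2.0 / 2000)                       # prior expected count = 2
new_weights = phd_update(particles, weights, measurements=[0.2, 9.8])
print(f"estimated number of targets ~ {new_weights.sum():.2f}")
```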
Contributors: Zhou, Meng (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genomic and proteomic sequences, which are in the form of deoxyribonucleic acid (DNA) and amino acids respectively, play a vital role in the structure, function and diversity of every living cell. As a result, various genomic and proteomic sequence processing methods have been proposed from diverse disciplines, including biology, chemistry, physics, computer science and electrical engineering. In particular, signal processing techniques have been applied to the problems of sequence querying and alignment, which compare and classify regions of similarity in the sequences based on their composition. However, although current approaches obtain results that can be attributed to key biological properties, they require pre-processing and lack robustness to sequence repetitions. In addition, these approaches do not provide much support for efficiently querying sub-sequences, a process that is essential for tracking localized database matches. In this work, a query-based alignment method for biological sequences is first proposed that maps sequences to time-domain waveforms before processing the waveforms for alignment in the time-frequency plane. The mapping uses waveforms, such as time-domain Gaussian functions, with unique sequence representations in the time-frequency plane. The proposed alignment method employs a robust querying algorithm that utilizes a time-frequency signal expansion whose basis function is matched to the basic waveform in the mapped sequences. The resulting WAVEQuery approach is demonstrated for both DNA and protein sequences using the matching pursuit decomposition as the signal basis expansion. The alignment localization of WAVEQuery is specifically evaluated over repetitive database segments, and the method operates in real time without pre-processing. It is demonstrated that WAVEQuery significantly outperforms the biological sequence alignment method BLAST for DNA queries with repetitive segments. A generalized version of the WAVEQuery approach using the metaplectic transform is also described for protein sequence structure prediction. For protein alignment, it is often necessary to compare not only the one-dimensional (1-D) primary sequence structure but also the secondary and tertiary three-dimensional (3-D) structures, after considering the conformations in 3-D space that arise from the degrees of freedom of these structures. As a result, a novel directionality-based 3-D waveform mapping for 3-D protein structures is also proposed and used to compare protein structures with a matched-filter approach. By incorporating a 3-D time axis, a highly localized Gaussian-windowed chirp waveform is defined, and the amino acid information is mapped to the chirp parameters, which are then directly used to obtain directionality in 3-D space. This mapping is unique in that additional characteristic protein information, such as hydrophobicity, which relates the sequence with the structure, can be added as another representation parameter. The additional parameter helps track similarities over local segments of the structure, thus enabling classification of distantly related proteins that have only partial structural similarities. This approach is successfully tested for pairwise alignments over full-length structures, alignments over multiple structures to form phylogenetic trees, and alignments over local segments. Basic classification of protein structural classes using directional descriptors of the protein structure is also performed.
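A minimal sketch of the waveform-mapping-plus-matched-filtering idea for DNA queries (illustrative only; the base-to-frequency assignment and pulse length are assumptions, and the real method uses a matching pursuit decomposition rather than plain correlation):

```python
import numpy as np

BASE_FREQS = {"A": 0.10, "C": 0.20, "G": 0.30, "T": 0.40}   # cycles/sample, assumed

def sequence_to_waveform(seq, samples_per_base=32):
    """Map each base to a Gaussian-windowed tone at a base-specific frequency,
    an illustrative stand-in for the time-frequency mapping described above."""
    t = np.arange(samples_per_base)
    window = np.exp(-0.5 * ((t - t.mean()) / (samples_per_base / 6.0)) ** 2)
    return np.concatenate([window * np.cos(2 * np.pi * BASE_FREQS[b] * t)
                           for b in seq])

database, query = "ACGTTTACGGACGTACGTAC", "ACGTAC"
db_wave, q_wave = sequence_to_waveform(database), sequence_to_waveform(query)
# Matched filtering: the correlation peak marks the best alignment position.
corr = np.correlate(db_wave, q_wave, mode="valid")
print("query aligns at database position", int(np.argmax(corr)) // 32)
```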
Contributors: Ravichandran, Lakshminarayan (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas S (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the late 1960s, Granger published a seminal study on causality in time series, using linear interdependencies and information transfer. Recent developments in the field of information theory have introduced new methods to investigate the transfer of information in dynamical systems. Using concepts from chaos and Markov theory, many of these methods have evolved to capture non-linear relations and information flow between coupled dynamical systems, with applications to fields like biomedical signal processing. This thesis deals with the application of information theory to non-linear multivariate time series and develops measures of information flow to identify significant driver and response (driven) components in networks of coupled sub-systems with variable coupling in strength and direction (uni- or bi-directional) for each connection. Transfer entropy (TE) is used to quantify pairwise directional information. Four TE-based measures of information flow are proposed, namely TE Outflow (TEO), TE Inflow (TEI), TE Net flow (TEN), and Average TE flow (ATE). First, the reliability of the information flow measures is evaluated on models, with and without noise, and the driver and response sub-systems in these models are identified. Second, these measures are applied to electroencephalographic (EEG) data from two patients with focal epilepsy. The analysis showed dominant directions of information flow between brain sites and identified the epileptogenic focus as the system component that typically had the highest value of the proposed measures (for example, ATE). Statistical tests between pre-seizure (preictal) and post-seizure (postictal) information flow also showed a breakage of the driving of the brain by the focus after seizure onset. The above findings shed light on the function of the epileptogenic focus and the understanding of ictogenesis. It is expected that they will contribute to the diagnosis of epilepsy, for example by accurate identification of the epileptogenic focus from interictal periods, as well as to the development of better seizure detection, prediction and control methods, for example by isolating pathologic areas of excessive information flow through electrical stimulation.
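Given a matrix of pairwise transfer-entropy values, the four flow measures named above can be computed as in the sketch below; treating ATE as the outflow averaged over the other channels is an assumption about the exact normalization used in the thesis.

```python
import numpy as np

def te_flow_measures(te):
    """Given te[i, j] = TE(i -> j) for each ordered pair of channels (diagonal
    ignored), compute the four flow measures: outflow, inflow, net flow, and
    average flow per connection."""
    te = np.asarray(te, dtype=float).copy()
    np.fill_diagonal(te, 0.0)
    teo = te.sum(axis=1)                  # TE Outflow: total information sent
    tei = te.sum(axis=0)                  # TE Inflow: total information received
    ten = teo - tei                       # TE Net flow
    ate = teo / (te.shape[0] - 1)         # Average TE flow over outgoing connections
    return teo, tei, ten, ate

# Toy 3-channel example in which channel 0 drives channels 1 and 2.
te = [[0.0, 0.8, 0.6],
      [0.1, 0.0, 0.2],
      [0.1, 0.1, 0.0]]
teo, tei, ten, ate = te_flow_measures(te)
print("likely driver (largest ATE):", int(np.argmax(ate)))
```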
Contributors: Prasanna, Shashank (Author) / Jassemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Conformational changes in biomolecules often take place on longer timescales than are easily accessible with unbiased molecular dynamics simulations, necessitating the use of enhanced sampling techniques such as adaptive umbrella sampling. In this technique, the conformational free energy is calculated in terms of a designated set of reaction coordinates. At the same time, estimates of this free energy are subtracted from the potential energy in order to remove free energy barriers and cause conformational changes to take place more rapidly. This dissertation presents applications of adaptive umbrella sampling to a variety of biomolecular systems. The first study investigated the effects of glycosylation in GalNAc2-MM1, an analog of glycosylated macrophage activating factor. It was found that glycosylation destabilizes the protein by increasing the solvent exposure of hydrophobic residues. The second study examined the role of bound calcium ions in promoting the isomerization of a cis peptide bond in the collagen-binding domain of Clostridium histolyticum collagenase. This study determined that the bound calcium ions reduce the barrier to the isomerization of this peptide bond and stabilize the cis conformation thermodynamically, and it identified some of the reasons for this. The third study applied GAMUS (Gaussian mixture adaptive umbrella sampling) to the conformational dynamics of the fluorescent dye Cy3 attached to the 5' end of DNA, and made predictions concerning the affinity of Cy3 for different base pairs that were subsequently verified experimentally. Finally, the adaptive umbrella sampling method is extended to use the roll angle between adjacent base pairs as a reaction coordinate in order to examine the bending both of free DNA and of DNA bound to the archaeal protein Sac7d. It is found that when DNA bends significantly, cations from the surrounding solution congregate on the concave side, which increases the flexibility of the DNA by screening the repulsion between phosphate backbones. The flexibility of DNA on short length scales is compared to the worm-like chain model, and the contribution of cooperativity in DNA bending to protein-DNA binding is assessed.
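As an illustration of the adaptive part of the technique, the sketch below performs one bias-update round along a 1-D reaction coordinate: the free energy is estimated from the biased histogram and its negative becomes the next biasing potential, flattening barriers. The update rule, temperature and toy data are assumptions, not the dissertation's protocol.

```python
import numpy as np

def update_umbrella_bias(samples, bias, bin_edges, kT=0.596):
    """One round of adaptive umbrella sampling along a 1-D reaction coordinate:
    estimate the free energy from the biased histogram (adding back the current
    bias on the bin centers), then return its negative as the new biasing
    potential so that barriers are flattened in the next run.  kT is in
    kcal/mol near 300 K."""
    hist, _ = np.histogram(samples, bins=bin_edges)
    prob = np.maximum(hist / hist.sum(), 1e-12)           # avoid log(0) in empty bins
    free_energy = -kT * np.log(prob) + bias
    free_energy -= free_energy.min()
    return -free_energy

# Toy trajectory along a dihedral-like coordinate with one rarely visited basin.
rng = np.random.default_rng(3)
samples = np.concatenate([rng.normal(-60.0, 15.0, 5000),
                          rng.normal(150.0, 10.0, 500)])
edges = np.linspace(-180.0, 180.0, 37)                    # 36 bins of 10 degrees
new_bias = update_umbrella_bias(samples, np.zeros(36), edges)
print(f"bias range: {np.ptp(new_bias):.1f} kcal/mol")
```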
Contributors: Spiriti, Justin Matthew (Author) / van der Vaart, Arjan (Thesis advisor) / Chizmeshya, Andrew (Thesis advisor) / Matyushov, Dmitry (Committee member) / Fromme, Petra (Committee member) / Arizona State University (Publisher)
Created: 2011