Matching Items (205)
Description
Membrane proteins are vital to all living cells, being involved in respiration, photosynthesis, cellular uptake and signal transduction, amongst other essential functions. However, fewer than 300 unique membrane protein structures have been determined to date, often due to difficulties associated with the growth of sufficiently large and well-ordered crystals. This work has focused on the first proof of concept for using membrane protein nanocrystals and microcrystals for high-resolution structure determination. Upon determining that crystals of the membrane protein Photosystem I, the largest and most complex membrane protein crystallized to date, exist containing only about a hundred unit cells, with sizes of less than 200 nm on an edge, work was done to develop a technique that could exploit the growth of Photosystem I nanocrystals and microcrystals. Femtosecond X-ray protein nanocrystallography was developed for use at the first high-energy X-ray free-electron laser, the LCLS at SLAC National Accelerator Laboratory, in which a liquid jet brings fully hydrated Photosystem I nanocrystals into the interaction region of the pulsed X-ray source. Diffraction patterns were recorded from millions of individual PSI nanocrystals, and data from thousands of different, randomly oriented crystallites were integrated using Monte Carlo integration of the peak intensities. The short pulses (~70 fs) provided by the LCLS made it possible to collect the diffraction data before the onset of radiation damage, exploiting the diffract-before-destroy principle. In the initial experiments at the AMO beamline, using a 6.9-Å wavelength, Bragg peaks were recorded to 8.5-Å resolution, and an electron-density map was determined that did not show any effects of X-ray-induced radiation damage.
Recently, femtosecond X-ray protein nanocrystallography experiments were performed at the CXI beamline of the LCLS using a 1.3-Å wavelength, and Bragg reflections were recorded to 3-Å resolution; the data are currently being processed. Many additional techniques still need to be developed to extend femtosecond nanocrystallography to experimental phasing and time-resolved X-ray crystallography experiments. The first proof-of-principle results indicate the incredible potential of the technique to offer a new route to the structure determination of membrane proteins.
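As an illustrative sketch (not code from the thesis), the Monte Carlo merging of snapshot peak intensities can be thought of as simple per-reflection averaging: with enough randomly oriented partial observations, the mean converges to a value proportional to the full reflection intensity. All names and data here are hypothetical.

```python
import random

def monte_carlo_merge(observations):
    """Average partially recorded peak intensities per Miller index.

    observations: list of (hkl, intensity) tuples from many snapshots.
    Returns a dict mapping hkl -> mean intensity.
    """
    sums, counts = {}, {}
    for hkl, intensity in observations:
        sums[hkl] = sums.get(hkl, 0.0) + intensity
        counts[hkl] = counts.get(hkl, 0) + 1
    return {hkl: sums[hkl] / counts[hkl] for hkl in sums}

# Toy data: a "true" full intensity of 100 for reflection (1, 0, 0);
# each snapshot records a random partial fraction of it.
random.seed(0)
obs = [((1, 0, 0), 100.0 * random.random()) for _ in range(100000)]
merged = monte_carlo_merge(obs)
# The mean of many random partialities converges, so the merged value
# is proportional to (here, ~half of) the full intensity.
```

In the real experiment the partiality model and scaling are far more involved; the point is only that averaging over many random crystal orientations makes the per-reflection mean a usable intensity estimate.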
Contributors: Hunter, Mark (Author) / Fromme, Petra (Thesis advisor) / Wolf, George (Committee member) / Levitus, Marcia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Molybdenum (Mo) is a key trace nutrient for biological assimilation of nitrogen, either as nitrogen gas (N2) or nitrate (NO3-). Although Mo is the most abundant metal in seawater (105 nM), its concentration is low (<5 nM) in most freshwaters today, and it was scarce in the ocean before 600 million years ago. The use of Mo for nitrogen assimilation can be understood in terms of changing Mo availability through time; for instance, the higher Mo content of eukaryotic vs. prokaryotic nitrate reductase may have stalled the proliferation of eukaryotes in low-Mo Proterozoic oceans. Field and laboratory experiments were performed to study Mo requirements for NO3- assimilation and N2 fixation, respectively. Molybdenum-nitrate addition experiments at Castle Lake, California, revealed interannual and depth variability in the plankton community response, perhaps resulting from differences in species composition and/or ammonium availability. Furthermore, lake sediments were elevated in Mo compared to soils and bedrock in the watershed. Box modeling suggested that the largest source of Mo to the lake was particulate matter from the watershed. Month-long laboratory experiments with heterocystous cyanobacteria (HC) showed that <1 nM Mo led to low N2 fixation rates, while 10 nM Mo was sufficient for optimal rates. At 1500 nM Mo, freshwater HC hyperaccumulated Mo intracellularly, whereas coastal HC did not. These differences in storage capacity were likely due to the presence in freshwater HC of the small molybdate-binding protein, Mop, and its absence in coastal and marine cyanobacterial species. Expression of the mop gene was regulated by Mo availability in the freshwater HC species Nostoc sp. PCC 7120. Under low-Mo (<1 nM) conditions, mop gene expression was up-regulated compared to higher-Mo (150 and 3000 nM) treatments, but the subunit composition of the Mop protein changed, suggesting that Mop does not bind Mo at <1 nM Mo in the same manner that it can at higher Mo concentrations.
These findings support a role for Mop as a Mo storage protein in HC and suggest that freshwater HC control Mo cellular homeostasis at the post-translational level. Mop's widespread distribution in prokaryotes lends support to the theory that it may be an ancient protein inherited from low-Mo Precambrian oceans.
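The box-model reasoning can be sketched as a single-box budget: sum the input and output fluxes, check the steady-state balance, and identify the dominant source. The flux names and values below are invented for illustration and are not the thesis's numbers.

```python
def mo_box_model(inputs_nmol_per_yr, outputs_nmol_per_yr):
    """Single-box lake Mo budget: return the steady-state imbalance
    (inputs minus outputs) and the largest input flux by name."""
    total_in = sum(inputs_nmol_per_yr.values())
    total_out = sum(outputs_nmol_per_yr.values())
    largest_source = max(inputs_nmol_per_yr, key=inputs_nmol_per_yr.get)
    return total_in - total_out, largest_source

# Hypothetical fluxes (nmol/yr); the abstract identifies watershed
# particulate matter as the dominant source of Mo to the lake.
inputs = {"watershed_particulates": 800.0,
          "direct_precipitation": 50.0,
          "stream_dissolved": 150.0}
outputs = {"outflow": 600.0, "sedimentation": 400.0}
imbalance, dominant = mo_box_model(inputs, outputs)
```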
Contributors: Glass, Jennifer (Author) / Anbar, Ariel D (Thesis advisor) / Shock, Everett L (Committee member) / Jones, Anne K (Committee member) / Hartnett, Hilairy E (Committee member) / Elser, James J (Committee member) / Fromme, Petra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Demands on file size and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high-definition video content. Factor in the consumer desire for convenience, and wireless service becomes the most desired approach for inter-connectivity. Consumers expect wireless service to emulate wired service with little to virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high-definition video applications. I then proceed to look at proposed solutions at the physical (PHY) and media access control (MAC) layers, as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap in the published literature exists pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented, and no work has been found that shows the results of such a model. This research proposes, develops, and implements in Matlab code an alternate cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high-definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully-adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation is conclusively shown to be superior to non-adaptive techniques and sufficiently superior even to quasi-adaptive algorithms.
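For reference, the PSNR metric underlying these comparisons is standard; a minimal sketch (plain Python rather than the Matlab used in the work, with toy pixel values) is:

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences (a standard video-quality metric)."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no distortion
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 140, 160]     # toy "original" pixels
noisy = [102, 118, 143, 158]   # toy "decoded" pixels
quality_db = psnr(ref, noisy)
```

Since PSNR is logarithmic in the mean squared error, the reported ~7 dB and ~18 dB gains correspond to reducing the decoded video's MSE by factors of roughly 5 and 60, respectively.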
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Anti-retroviral drugs and AIDS prevention programs have helped to decrease the rate of new HIV-1 infections in some communities; however, a prophylactic vaccine is still needed to control the epidemic worldwide. Despite over two decades of research, a vaccine against HIV-1 remains elusive, although recent clinical trials have shown promising results. Recent successes have focused on highly conserved, mucosally targeted antigens within HIV-1, such as the membrane proximal external region (MPER) of the envelope protein gp41. MPER has been shown to play critical roles in viral mucosal transmission, though this peptide is not immunogenic on its own. Gag is a structural protein that forms the enveloped virus particles and has been suggested as a target of the cellular immunity potentially controlling the viral load. It was hypothesized that HIV-1 enveloped virus-like particles (VLPs) consisting of Gag and a deconstructed form of gp41 comprising the MPER, transmembrane, and cytoplasmic domains (dgp41) could be expressed in plants. Plant-optimized HIV-1 genes were constructed and expressed in Nicotiana benthamiana by stable transformation, transiently using a tobacco mosaic virus-based expression system, or a combination of both. Results of biophysical, biochemical, and electron microscopy characterization demonstrated that plant cells could support not only the formation of HIV-1 Gag VLPs, but also the accumulation of VLPs that incorporated dgp41. These particles were purified and utilized in mouse immunization experiments. Prime-boost strategies combining systemic and mucosal priming with systemic boosting, using two different vaccine candidates (VLPs and CTB-MPR, a fusion of MPER and the B-subunit of cholera toxin), were administered to BALB/c mice. Serum antibody responses against both the Gag and gp41 antigens could be elicited in mice systemically primed with VLPs, and these responses could be recalled following systemic boosting with VLPs.
In addition, mucosal priming with VLPs allowed for a robust boosting response against Gag and gp41 when boosted with either candidate. Functional assays of these antibodies are in progress to test their effectiveness in neutralizing and preventing mucosal transmission of HIV-1. The immunogenicity of plant-based Gag/dgp41 VLPs represents an important milestone on the road towards a broadly efficacious and inexpensive subunit vaccine against HIV-1.
Contributors: Kessans, Sarah (Author) / Mor, Tsafrir S (Thesis advisor) / Matoba, Nobuyuki (Committee member) / Mason, Hugh (Committee member) / Hogue, Brenda (Committee member) / Fromme, Petra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include: (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; and (4) a capacity evaluation for single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and emulations based on data collected in real-life UWA communication experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead, and offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of different paths and/or users, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems.
The unique design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
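A minimal sketch of the resampling operation at the heart of such a receiver branch (linear interpolation onto a rescaled time axis, undoing an assumed Doppler scaling factor; toy data, not the thesis's receiver):

```python
def resample(signal, scale):
    """Linearly interpolate `signal` (unit sample spacing) onto a time
    axis stretched by `scale`, undoing a Doppler scaling of the same
    factor. A multiple-resampling receiver runs one such branch per
    path- or user-specific Doppler factor before OFDM demodulation."""
    n = int(len(signal) / scale)
    out = []
    for k in range(n):
        t = k * scale                 # time in the original axis
        i = int(t)
        frac = t - i
        if i + 1 < len(signal):
            out.append((1 - frac) * signal[i] + frac * signal[i + 1])
        else:
            out.append(signal[i])     # clamp at the last sample
    return out

received = [float(k) for k in range(10)]   # toy ramp waveform
branch_out = resample(received, 1.25)      # undo a 1.25x time dilation
```

On the linear ramp the interpolation is exact, so the branch output is the ramp re-sampled at 1.25-sample steps; real branches would follow this with per-branch OFDM demodulation and ICI equalization.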
Contributors: Tu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is especially beneficial when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and given faulty-module information. Various topologies such as the series-parallel (SP), the total cross-tied (TCT), the bridge link (BL), and their bypassed versions are considered. The topology reconfiguration method compares the efficiencies of the topologies and evaluates the percentage gain in generated power that would be obtained by reconfiguring the array, among other factors, to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on array performance are also studied. The simulations are carried out using a SPICE simulator, and the simulation results are validated with experimental data provided by the PACECO Company.
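The comparison step of such a reconfiguration scheme can be sketched as picking the topology with the highest simulated power and reporting the percentage gain over the current configuration; the power values below are hypothetical, not measurements from the study:

```python
def reconfigure(current, power_by_topology):
    """Select the topology with the highest simulated output power and
    report the percentage gain over the currently configured one.

    power_by_topology: dict mapping topology name -> simulated power (W)
    under the present shading/fault conditions.
    """
    best = max(power_by_topology, key=power_by_topology.get)
    gain_pct = 100.0 * (power_by_topology[best]
                        - power_by_topology[current]) / power_by_topology[current]
    return best, gain_pct

# Hypothetical array powers (W) under one shading pattern.
powers = {"SP": 410.0, "TCT": 452.0, "BL": 430.0}
topo, gain = reconfigure("SP", powers)
```

In practice the candidate powers would come from SPICE simulations of each topology under the measured irradiance and fault pattern, and the switch would only be made when the gain justifies the reconfiguration overhead.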
Contributors: Buddha, Santoshi Tejasri (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, hardware complexity, etc. Real-time video streaming, gaming, and social networking are a few such examples. Over the years, many problems have been addressed towards the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. With the motivation of addressing some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even at very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by the concatenation of an outer low-density parity-check (LDPC) code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels, where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach.
For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed: a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean squared error based detector followed by a network-coded sequence decoder.
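The union bounding technique mentioned here has a standard form for linear codes over a BPSK/AWGN link: WER <= sum over codeword weights w of A_w * Q(sqrt(2 w R Eb/N0)), where A_w is the weight multiplicity and R the code rate. A sketch of that textbook bound, with a toy weight spectrum rather than the codes used in the dissertation:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_wer(weight_spectrum, rate, ebno_db):
    """Union bound on word error rate for a linear code with BPSK
    signaling over AWGN: sum_w A_w * Q(sqrt(2 * w * R * Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)  # dB -> linear
    return sum(a_w * q_func(math.sqrt(2.0 * w * rate * ebno))
               for w, a_w in weight_spectrum.items())

# Toy weight spectrum {weight: multiplicity}; a real code's spectrum
# has many more terms and would come from its weight enumerator.
wer = union_bound_wer({3: 7, 4: 7}, rate=4/7, ebno_db=6.0)
```

The bound is loose at low SNR but tightens quickly at moderate-to-high SNR, which is exactly the regime where Monte Carlo simulation of word error rates becomes impractical.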
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Following the success of incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation, and proposes possible solutions to overcome high-complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed to efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory-pattern-combining technique together with a look-up table that stores representative auditory patterns.
The second problem involves obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
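The detector-pruning idea can be illustrated with a toy sketch: skip detectors whose excitation falls below a small fraction of the peak, trading a tiny loudness error for far fewer evaluations. The numbers below are invented, and the real auditory model stages are far more elaborate than a plain sum:

```python
def pruned_loudness(excitations, threshold_ratio=0.01):
    """Sum detector excitations, pruning detectors whose excitation is
    below threshold_ratio of the maximum. Returns (loudness estimate,
    fraction of detectors pruned)."""
    peak = max(excitations)
    kept = [e for e in excitations if e >= threshold_ratio * peak]
    return sum(kept), 1.0 - len(kept) / len(excitations)

# Toy excitation pattern: a few strong detectors, many negligible ones.
exc = [10.0, 8.0, 0.05, 0.02, 9.0, 0.01, 7.0, 0.03]
loud, pruned_frac = pruned_loudness(exc)
# Half the detectors are skipped, yet the loudness estimate changes by
# well under 1% relative to summing everything.
```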
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The cyanobacterium Synechocystis sp. PCC 6803 performs oxygenic photosynthesis. Light energy conversion in photosynthesis takes place in photosystem I (PSI) and photosystem II (PSII), which contain chlorophyll that absorbs light energy utilized as a driving force for photosynthesis. However, excess light energy may lead to the formation of reactive oxygen species that damage the photosynthetic complexes, which subsequently need repair or replacement. To gain insight into the degradation/biogenesis dynamics of the photosystems, the lifetimes of photosynthetic proteins and chlorophyll were determined by a combined stable-isotope (15N) labeling and mass spectrometry method. The lifetimes of PSII and PSI proteins ranged from 1-33 and 30-75 hours, respectively. Interestingly, chlorophyll had longer lifetimes than the chlorophyll-binding proteins in these photosystems. Therefore, photosynthetic proteins turn over and are replaced independently of each other, and chlorophyll is recycled from damaged chlorophyll-binding proteins. In Synechocystis, there are five small Cab-like proteins (SCPs: ScpA-E) that share chlorophyll a/b-binding motifs with the LHC proteins of plants. SCPs appear to transiently bind chlorophyll and to regulate chlorophyll biosynthesis. In this study, the association of ScpB, ScpC, and ScpD with damaged and repaired PSII was demonstrated. Moreover, in a mutant lacking SCPs, most PSII protein lifetimes were unaffected, but the lifetime of chlorophyll was decreased, and one of the nascent PSII complexes was missing. SCPs appear to bind PSII chlorophyll while PSII is repaired, and to stabilize nascent PSII complexes. Furthermore, aminolevulinic acid biosynthesis, an early step of chlorophyll biosynthesis, was impaired in the absence of SCPs, so that the amount of chlorophyll in the cells was reduced. Finally, a deletion mutation was introduced into the sll1906 gene, encoding a member of the putative bacteriochlorophyll delivery (BCD) protein family.
The Sll1906 sequence contains possible chlorophyll-binding sites, and its homolog in purple bacteria functions in the proper assembly of light-harvesting complexes. However, the sll1906 deletion did not affect chlorophyll degradation/biosynthesis or photosystem assembly. Other (parallel) pathways may exist that fully compensate for the lack of Sll1906. This study has highlighted the dynamics of photosynthetic complexes in their biogenesis and turnover, and the coordination between the synthesis of chlorophyll and photosynthetic proteins.
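The lifetime determination rests on a simple kinetic idea: after switching cells to 15N medium, the pre-existing (14N-labeled) fraction of a protein decays roughly exponentially, f(t) = exp(-t/tau), and the lifetime tau is the fitted time constant. A sketch under that first-order-turnover assumption, with synthetic data rather than the study's measurements:

```python
import math

def lifetime_from_decay(times_h, unlabeled_fractions):
    """Estimate protein lifetime tau (hours) from the decay of the
    pre-existing 14N fraction after a switch to 15N medium, assuming
    f(t) = exp(-t / tau). Log-linear least squares through the origin:
    minimize sum_t (k*t - (-ln f))^2 over the rate constant k."""
    num = sum(t * (-math.log(f))
              for t, f in zip(times_h, unlabeled_fractions))
    den = sum(t * t for t in times_h)
    k = num / den
    return 1.0 / k

# Synthetic time course for a protein with a true lifetime of 20 h.
times = [5.0, 10.0, 20.0, 40.0]
fracs = [math.exp(-t / 20.0) for t in times]
tau = lifetime_from_decay(times, fracs)
```

In the actual experiment the 14N/15N fractions per protein come from mass-spectral isotope envelopes, and the same fit applied to chlorophyll-bound pigment explains how chlorophyll lifetimes can exceed those of the proteins that bind it.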
Contributors: Yao, Cheng I Daniel (Author) / Vermaas, Wim (Thesis advisor) / Fromme, Petra (Committee member) / Roberson, Robert (Committee member) / Webber, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention, mainly because of its excellent target-focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range-Doppler and range-migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance the processing of SAR image formation is the fractional Fourier transform (FRFT). This transform has been relatively recently introduced to the signal processing community, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band and providing imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared with those of a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability, thereby increasing the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
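The chirp scaling family of algorithms avoids interpolation by using phase multiplications with chirps; as a toy illustration (not the full CSA), multiplying a linear-FM signal by a conjugate reference chirp removes the quadratic phase entirely, collapsing it to a constant-phase tone:

```python
import cmath
import math

def lfm_chirp(n, k):
    """Discrete linear-FM (chirp) samples exp(j*pi*k*t^2), t = 0..n-1,
    where k is the (normalized) chirp rate."""
    return [cmath.exp(1j * math.pi * k * t * t) for t in range(n)]

def dechirp(signal, k):
    """Multiply by the conjugate reference chirp: the kind of
    chirp-based phase multiply used throughout chirp scaling. For a
    matched chirp rate, the quadratic phase cancels exactly."""
    return [s * cmath.exp(-1j * math.pi * k * t * t)
            for t, s in enumerate(signal)]

sig = lfm_chirp(64, 0.01)     # toy LFM echo, chirp rate 0.01
flat = dechirp(sig, 0.01)     # matched dechirp -> all samples = 1+0j
```

In the actual CSA the multiplied chirps are chosen to equalize range cell migration across all ranges before range and azimuth compression; the FRFT enhancement operates on the same chirp structure via its rotation of the time-frequency plane.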
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011