Matching Items (247)
Description
Membrane proteins are vital to all living cells, being involved in respiration, photosynthesis, cellular uptake and signal transduction, amongst other essential functions. However, fewer than 300 unique membrane protein structures have been determined to date, largely due to difficulties associated with the growth of sufficiently large and well-ordered crystals. This work focused on providing the first proof of concept for using membrane protein nanocrystals and microcrystals for high-resolution structure determination. Upon determining that crystals of the membrane protein Photosystem I, the largest and most complex membrane protein crystallized to date, exist with as few as a hundred unit cells and sizes of less than 200 nm on an edge, work was done to develop a technique that could exploit the growth of Photosystem I nanocrystals and microcrystals. Femtosecond X-ray protein nanocrystallography was developed for use at the first high-energy X-ray free-electron laser, the LCLS at SLAC National Accelerator Laboratory, in which a liquid jet brings fully hydrated Photosystem I nanocrystals into the interaction region of the pulsed X-ray source. Diffraction patterns were recorded from millions of individual PSI nanocrystals, and data from thousands of different, randomly oriented crystallites were integrated using Monte Carlo integration of the peak intensities. The short pulses (70 fs) provided by the LCLS made it possible to collect the diffraction data before the onset of radiation damage, exploiting the diffract-before-destroy principle. In the initial experiments at the AMO beamline using a 6.9 Å wavelength, Bragg peaks were recorded to 8.5 Å resolution, and an electron-density map was determined that did not show any effects of X-ray-induced radiation damage. Recently, femtosecond X-ray protein nanocrystallography experiments were done at the CXI beamline of the LCLS using a 1.3 Å wavelength, and Bragg reflections were recorded to 3 Å resolution; the data are currently being processed. Many additional techniques still need to be developed to extend femtosecond nanocrystallography to experimental phasing and time-resolved X-ray crystallography experiments. The first proof-of-principle results indicate the technique's remarkable potential to offer a new route to the structure determination of membrane proteins.
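To make the Monte Carlo integration step concrete, the following is a minimal sketch in Python with synthetic data; the names and numbers are illustrative, not taken from the dissertation. Partial Bragg intensities observed for the same Miller index across many randomly oriented crystallites are simply averaged; over enough snapshots the average converges to a value proportional to the fully integrated reflection intensity, the proportionality being a common scale factor.

```python
# Sketch of Monte Carlo merging of snapshot Bragg intensities.
# Each diffraction pattern contributes a partial intensity for some
# Miller index (h, k, l); averaging over many random orientations
# gives a value proportional to the full reflection intensity.
from collections import defaultdict
import random

def monte_carlo_merge(observations):
    """observations: iterable of ((h, k, l), partial_intensity) pairs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hkl, intensity in observations:
        sums[hkl] += intensity
        counts[hkl] += 1
    return {hkl: sums[hkl] / counts[hkl] for hkl in sums}

# Synthetic example: one reflection with uniformly random partiality.
true_intensity = 100.0
obs = [((1, 2, 3), true_intensity * random.random()) for _ in range(10_000)]
merged = monte_carlo_merge(obs)
print(merged[(1, 2, 3)])  # ~50.0 = true_intensity * mean partiality
```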
Contributors: Hunter, Mark (Author) / Fromme, Petra (Thesis advisor) / Wolf, George (Committee member) / Levitus, Marcia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Current economic conditions necessitate the extension of service lives for a variety of aerospace systems. As a result, there is an increased need for structural health management (SHM) systems to increase safety, extend life, reduce maintenance costs, and minimize downtime, lowering life cycle costs for these aging systems. The implementation of such a system requires a collaborative research effort in a variety of areas such as novel sensing techniques, robust algorithms for damage interrogation, high-fidelity probabilistic progressive damage models, and hybrid residual life estimation models. This dissertation focuses on the sensing and damage estimation aspects of this multidisciplinary topic for application in metallic and composite material systems. The primary means of interrogating a structure in this work is Lamb wave propagation, which works well for the thin structures used in aerospace applications. Piezoelectric transducers (PZTs) were selected for this application since they can be used as both sensors and actuators of guided waves. Placement of these transducers is an important issue in wave-based approaches, as Lamb waves are sensitive to changes in material properties, geometry, and boundary conditions, which may obscure the presence of damage if they are not taken into account during sensor placement. The placement scheme proposed in this dissertation arranges piezoelectric transducers in a pitch-catch mode so the entire structure can be covered using a minimum number of sensors. The stress distribution of the structure is also considered, so that PZTs are placed in regions where they do not fail before the host structure. To process the data from these transducers, advanced signal processing techniques are employed to detect the presence of damage in complex structures. To provide a better estimate of the damage for accurate life estimation, machine learning techniques are used to classify the type of damage in the structure. A data structure analysis approach is used to reduce the amount of data collected and increase computational efficiency. In the case of low-velocity impact damage, fiber Bragg grating (FBG) sensors were used with a nonlinear regression tool to reconstruct the loading at the impact site.
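As a small illustration of the pitch-catch interrogation described above, the sketch below scores a later guided-wave measurement against a baseline record using a correlation-based damage index. This metric is a common choice in the Lamb-wave SHM literature and is assumed here for illustration; it is not necessarily the dissertation's exact algorithm, and the signals are synthetic.

```python
# Hedged sketch of a pitch-catch damage metric for guided (Lamb) waves:
# a signal recorded on the pristine structure (baseline) is compared with
# a later measurement; decorrelation between the two is read as damage.
import numpy as np

def damage_index(baseline, current):
    """1 - normalized cross-correlation at zero lag; ~0 means pristine."""
    b = (baseline - baseline.mean()) / baseline.std()
    c = (current - current.mean()) / current.std()
    rho = np.dot(b, c) / len(b)
    return 1.0 - rho

# Synthetic example: a damped tone burst, then the same burst with a
# small scattered echo standing in for damage.
t = np.linspace(0.0, 1e-4, 2000)
burst = np.sin(2 * np.pi * 200e3 * t) * np.exp(-((t - 2e-5) / 1e-5) ** 2)
echo = 0.2 * np.roll(burst, 300)  # hypothetical damage reflection
print(damage_index(burst, burst))         # ~0.0
print(damage_index(burst, burst + echo))  # > 0, flags the change
```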
Contributors: Coelho, Clyde (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Wu, Tong (Committee member) / Das, Santanu (Committee member) / Rajadas, John (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing, ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria for manufacturing process or upstream test data by means of support vector machines (SVMs), in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, to screen units and thereby improve the reliability of the product delivered to the customer. Such models can also aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. To enhance the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input-parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
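A hedged sketch of the core pipeline, assuming NumPy and scikit-learn are available: the first few L-moments of an upstream test trace serve as classifier inputs, and the weight vector of a linear SVM plays the role of the input-parameter weighting factors mentioned above. The data, labels, and distributions are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def l_moments(x):
    """First three L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(i / (n - 1) * x) / n
    b2 = np.sum(i * (i - 1) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # location
    l2 = 2 * b1 - b0             # scale
    l3 = 6 * b2 - 6 * b1 + b0    # (unscaled) skewness
    return [l1, l2, l3]

# Synthetic example: "pass" units have tighter test traces than "fail" units.
rng = np.random.default_rng(0)
X = [l_moments(rng.normal(0.0, 1.0, 200)) for _ in range(50)] + \
    [l_moments(rng.normal(0.5, 2.0, 200)) for _ in range(50)]
y = [0] * 50 + [1] * 50
clf = SVC(kernel="linear").fit(X, y)
# Hyperplane weights hint at which input features drive the prediction,
# echoing the parameter-weighting idea described above.
print(clf.coef_)
```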
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Molybdenum (Mo) is a key trace nutrient for biological assimilation of nitrogen, either as nitrogen gas (N2) or nitrate (NO3-). Although Mo is the most abundant metal in seawater (105 nM), its concentration is low (<5 nM) in most freshwaters today, and it was scarce in the ocean before 600 million years ago. The use of Mo for nitrogen assimilation can be understood in terms of this changing Mo availability through time; for instance, the higher Mo content of eukaryotic versus prokaryotic nitrate reductase may have stalled the proliferation of eukaryotes in low-Mo Proterozoic oceans. Field and laboratory experiments were performed to study Mo requirements for NO3- assimilation and N2 fixation, respectively. Molybdenum-nitrate addition experiments at Castle Lake, California revealed interannual and depth variability in the plankton community response, perhaps resulting from differences in species composition and/or ammonium availability. Furthermore, lake sediments were elevated in Mo compared to soils and bedrock in the watershed. Box modeling suggested that the largest source of Mo to the lake was particulate matter from the watershed. Month-long laboratory experiments with heterocystous cyanobacteria (HC) showed that <1 nM Mo led to low N2 fixation rates, while 10 nM Mo was sufficient for optimal rates. At 1500 nM Mo, freshwater HC hyperaccumulated Mo intracellularly, whereas coastal HC did not. These differences in storage capacity were likely due to the presence in freshwater HC of the small molybdate-binding protein, Mop, and its absence in coastal and marine cyanobacterial species. Expression of the mop gene was regulated by Mo availability in the freshwater HC species Nostoc sp. PCC 7120. Under low-Mo (<1 nM) conditions, mop gene expression was up-regulated compared to higher-Mo (150 and 3000 nM) treatments, but the subunit composition of the Mop protein changed, suggesting that Mop does not bind Mo at <1 nM Mo in the same manner that it can at higher Mo concentrations. These findings support a role for Mop as a Mo storage protein in HC and suggest that freshwater HC control cellular Mo homeostasis at the post-translational level. Mop's widespread distribution in prokaryotes lends support to the theory that it may be an ancient protein inherited from low-Mo Precambrian oceans.
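For readers unfamiliar with box modeling, the sketch below shows the kind of single-box mass balance that can be used to compare Mo sources to a lake: constant input fluxes plus first-order removal to the sediments. All fluxes and rate constants are hypothetical placeholders, not values from this study.

```python
# Hedged sketch of a one-box molybdenum mass balance for a lake:
# dM/dt = (sum of input fluxes) - k * M, integrated to near steady state.
# All numbers below are invented for illustration.
def lake_mo_box(inputs_mol_per_yr, k_removal_per_yr, m0=0.0, dt=0.01, years=200):
    """Forward-Euler integration of the box; returns the Mo inventory (mol)."""
    m = m0
    total_in = sum(inputs_mol_per_yr.values())
    for _ in range(int(years / dt)):
        m += (total_in - k_removal_per_yr * m) * dt
    return m

inputs = {"stream_dissolved": 10.0, "watershed_particulates": 40.0}  # mol/yr
m_steady = lake_mo_box(inputs, k_removal_per_yr=0.5)
print(m_steady)  # -> ~ total input / k = 100 mol at steady state
for name, flux in inputs.items():
    print(name, f"{flux / sum(inputs.values()):.0%} of supply")
```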
Contributors: Glass, Jennifer (Author) / Anbar, Ariel D (Thesis advisor) / Shock, Everett L (Committee member) / Jones, Anne K (Committee member) / Hartnett, Hilairy E (Committee member) / Elser, James J (Committee member) / Fromme, Petra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Demands on file sizes and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high-definition video content. Factor in the consumer desire for convenience, and wireless service becomes the most desired approach for inter-connectivity. Consumers expect wireless service to emulate wired service with little to no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high-definition video applications. I then proceed to look at proposed solutions at the physical (PHY) and media access control (MAC) layers as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap exists in the published literature pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented, and no work has been found that reports results of such a model. This research proposes, develops, and implements in Matlab code an alternative cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high-definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation has been conclusively shown to be superior to non-adaptive techniques and markedly superior even to quasi-adaptive algorithms.
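The average-PSNR results above rest on the standard per-frame PSNR computation, sketched here for 8-bit frames (NumPy assumed; the frame and noise model are invented for illustration):

```python
# Minimal sketch of the PSNR metric used to score received video frames
# against the originals, assuming 8-bit pixels (peak value 255).
import numpy as np

def psnr(reference, received, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(float) - received.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic example: a frame degraded by channel-induced noise.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(psnr(frame, noisy))  # ~34 dB here; higher means closer to the original
```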
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to improve image classification. This method reduces the classification error rate by registering incoming images against previously obtained images before performing classification. The motivation is that images obtained in the same region will not differ significantly in their characteristics; registration therefore yields an image that matches the previously obtained image more closely, leading to better classification. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does improve classification with naïve Bayes, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
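A toy sketch of the register-then-classify idea, assuming NumPy and scikit-learn: a small 2-D ICP loop (nearest-neighbour correspondences plus a Kabsch/Procrustes rigid fit) aligns a new scan to a previously obtained one before naïve Bayes classification. It is a simplified stand-in for the thesis implementation, with synthetic points and labels.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def icp_align(src, dst, iters=20):
    """Rigidly align 2-D point set src to dst by iterated closest points."""
    src = src.copy()
    for _ in range(iters):
        # Correspondences: nearest dst point for every src point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Best rigid transform via the Kabsch/Procrustes solution
        # (proper-rotation check omitted for brevity).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d
    return src

rng = np.random.default_rng(2)
reference = rng.normal(size=(100, 2))
theta = 0.3  # new scan of the same region, slightly rotated and shifted
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
new_scan = reference @ rot.T + np.array([0.5, -0.2])
aligned = icp_align(new_scan, reference)

# Classify the aligned points with naive Bayes (labels here are synthetic).
labels = (reference[:, 0] > 0).astype(int)
clf = GaussianNB().fit(reference, labels)
print((clf.predict(aligned) == labels).mean())  # registration aids agreement
```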
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Anti-retroviral drugs and AIDS prevention programs have helped to decrease the rate of new HIV-1 infections in some communities; however, a prophylactic vaccine is still needed to control the epidemic worldwide. Despite over two decades of research, a vaccine against HIV-1 remains elusive, although recent clinical trials have shown promising results. Recent successes have focused on highly conserved, mucosally targeted antigens within HIV-1, such as the membrane proximal external region (MPER) of the envelope protein gp41. MPER has been shown to play critical roles in viral mucosal transmission, though this peptide is not immunogenic on its own. Gag is a structural protein that configures the enveloped virus particles and has been suggested to constitute a target of the cellular immunity that potentially controls the viral load. It was hypothesized that HIV-1 enveloped virus-like particles (VLPs) consisting of Gag and a deconstructed form of gp41 comprising the MPER, transmembrane, and cytoplasmic domains (dgp41) could be expressed in plants. Plant-optimized HIV-1 genes were constructed and expressed in Nicotiana benthamiana by stable transformation, transiently using a tobacco mosaic virus-based expression system, or by a combination of both. Results of biophysical, biochemical, and electron microscopy characterization demonstrated that plant cells could support not only the formation of HIV-1 Gag VLPs but also the accumulation of VLPs that incorporated dgp41. These particles were purified and utilized in mouse immunization experiments. Prime-boost strategies combining systemic and mucosal priming with systemic boosting, using two different vaccine candidates (VLPs and CTB-MPR, a fusion of MPER and the B-subunit of cholera toxin), were administered to BALB/c mice. Serum antibody responses against both the Gag and gp41 antigens could be elicited in mice systemically primed with VLPs, and these responses could be recalled following systemic boosting with VLPs. In addition, mucosal priming with VLPs allowed for a robust boosting response against Gag and gp41 when boosted with either candidate. Functional assays are in progress to test the antibodies' effectiveness in neutralizing and preventing mucosal transmission of HIV-1. The immunogenicity of plant-based Gag/dgp41 VLPs represents an important milestone on the road towards a broadly efficacious and inexpensive subunit vaccine against HIV-1.
Contributors: Kessans, Sarah (Author) / Mor, Tsafrir S (Thesis advisor) / Matoba, Nobuyuki (Committee member) / Mason, Hugh (Committee member) / Hogue, Brenda (Committee member) / Fromme, Petra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. Motivated by the goal of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) a proposal to use orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communications experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of a different path and/or user, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems. The unique design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
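A rough sketch of the multiple-resampling front end, assuming NumPy and SciPy: the received waveform is resampled once per hypothesized Doppler scaling factor, so that each branch approximately undoes the time compression of one path or user. The chirp, sampling rate, and scale factor below are invented for illustration.

```python
# Hedged sketch of a multiple-resampling receiver front end: one
# polyphase-resampled copy of the received signal per Doppler-scale
# hypothesis, undoing the compression rx[n] = x(n * (1 + a) / fs).
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def resampling_branches(rx, doppler_scales):
    out = []
    for a in doppler_scales:
        frac = Fraction(1 + a).limit_denominator(10_000)
        out.append(resample_poly(rx, frac.numerator, frac.denominator))
    return out

fs = 48_000
n = np.arange(fs)                                   # 1 s of samples
x = np.cos(2 * np.pi * (6e3 * n / fs + 1e3 * (n / fs) ** 2))  # test chirp
a_true = 1e-3                  # ~1.5 m/s relative motion at 1500 m/s sound speed
m = np.arange(int(fs / (1 + a_true)))
rx = np.cos(2 * np.pi * (6e3 * m * (1 + a_true) / fs
                         + 1e3 * (m * (1 + a_true) / fs) ** 2))
for a, branch in zip([0.0, a_true], resampling_branches(rx, [0.0, a_true])):
    k = min(len(branch), len(x))
    print(a, np.corrcoef(branch[:k], x[:k])[0, 1])  # matched branch ~ 1.0
```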
Contributors: Tu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is especially beneficial when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and given information about faulty modules. Various topologies such as the series-parallel (SP), the total cross-tied (TCT), the bridge-link (BL), and their bypassed versions are considered. The topology reconfiguration method compares the efficiencies of the topologies, evaluates the percentage gain in generated power that would be obtained by reconfiguring the array, and weighs other factors to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on the array performance are also studied. The simulations are carried out using a SPICE simulator, and the simulation results are validated with experimental data provided by the PACECO Company.
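To illustrate why reconfiguration pays off, here is a deliberately crude comparison of the SP and TCT topologies under partial shading, treating each module as an irradiance-proportional current source at a fixed voltage (no bypass diodes, no I-V curves; the thesis relies on full SPICE simulation instead). The module ratings and shading pattern are hypothetical.

```python
# Toy comparison of series-parallel (SP) vs. total cross-tied (TCT)
# wiring under partial shading, with a crude current-source module model.
import numpy as np

V_MOD, I_STC = 30.0, 8.0   # hypothetical module voltage (V) / current (A)

def sp_power(g):           # g: irradiance matrix (rows x columns), 0..1
    # Each column is a series string: its current is limited by the
    # most-shaded module in that column; strings add in parallel.
    return sum(g[:, j].min() * I_STC * V_MOD * g.shape[0]
               for j in range(g.shape[1]))

def tct_power(g):
    # Modules in a row are tied in parallel; rows connect in series,
    # so the weakest *row* (not module) limits the common current.
    row_current = g.sum(axis=1) * I_STC
    return row_current.min() * V_MOD * g.shape[0]

shading = np.ones((5, 4))
shading[0, 0] = 0.2        # two shaded modules in different rows/columns
shading[1, 1] = 0.2
print(sp_power(shading), tct_power(shading))  # distributed shading hurts SP more
```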
Contributors: Buddha, Santoshi Tejasri (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, and hardware complexity. Real-time video streaming, gaming, and social networking are a few such examples. Over the years many problems have been addressed towards the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. With the motivation of addressing some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even for very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels, where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. For the latter scheme, an analytical approximation of the word error rate is computed using a union bounding technique, under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed: a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean-squared error based detector followed by a network-coded sequence decoder.
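A toy sketch of the physical-layer network coding step at the relay, assuming NumPy: both nodes transmit BPSK simultaneously, the relay maps the superimposed, phase-aligned signal directly to the XOR of the two bits, and each end node recovers the other's bit using its own. This uncoded model only illustrates the mapping; the dissertation's list and joint decoding of coded sequences is substantially richer.

```python
# Hedged, uncoded illustration of physical-layer network coding:
# the relay observes x1 + x2 + noise and decides the XOR bit directly.
import numpy as np

rng = np.random.default_rng(3)
n_bits, snr_db = 100_000, 10.0
b1, b2 = rng.integers(0, 2, n_bits), rng.integers(0, 2, n_bits)
x1, x2 = 1 - 2 * b1, 1 - 2 * b2                 # BPSK: bit 0 -> +1
sigma = np.sqrt(10 ** (-snr_db / 10))
y = x1 + x2 + sigma * rng.standard_normal(n_bits)  # superposition at relay

# Relay: |y| near 2 means the bits agree (XOR = 0); near 0, they differ.
xor_hat = (np.abs(y) < 1.0).astype(int)
# Node 1 recovers node 2's bit from the broadcast XOR and its own bit.
b2_hat = xor_hat ^ b1
print("bit error rate:", np.mean(b2_hat != b2))
```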
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011