Matching Items (188)
Item 152233
Description

Continuous monitoring at adequate temporal and spatial scales is necessary for a better understanding of environmental variations, but field deployment of molecular biological analysis platforms at that scale is currently hindered by issues with power, throughput, and automation. Currently, such analysis is performed by collecting large sample volumes over a wide area and transporting them to laboratory testing facilities, an approach that fails to provide any real-time information. This dissertation evaluates the systems currently utilized for in-situ field analyses and the issues hampering the successful deployment of such bioanalytical instruments for environmental applications. The design and development of high-throughput, low-power, autonomous polymerase chain reaction (PCR) instruments, amenable to portable field operation and capable of providing quantitative results, are presented as part of this dissertation. A number of novel innovations in microfluidic design, PCR thermocycler design, optical design, and systems integration are reported as part of this work. Emulsion microfluidics, in conjunction with fluorinated oils and Teflon tubing, is used for the fluidic module; this reduces cross-contamination and eliminates the need for disposable components or constant cleaning. A cylindrical heater has been designed with the tubing wrapped around fixed-temperature zones, enabling continuous operation. Fluorescence excitation and detection are achieved using a light-emitting diode (LED) as the excitation source and a photomultiplier tube (PMT) as the detector. Real-time quantitative PCR results were obtained using multi-channel fluorescence excitation and detection with LEDs, optical fibers, and a 64-channel multi-anode PMT for continuous real-time fluorescence measurement. The instrument was evaluated by comparing its results with those obtained from a commercial instrument and was found to be comparable. To further improve the design and enhance its field portability, this dissertation also presents a framework for the instrumentation necessary for a portable digital PCR platform to achieve higher throughput with lower power. Both systems were designed so that they can easily couple, through standard fluidic connections, with any upstream platform capable of providing nucleic acid for analysis. Consequently, these instruments can be used not only in environmental applications but in portable diagnostics applications as well.
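The real-time quantitative read-out described above ultimately reduces to extracting a threshold (quantification) cycle from each channel's fluorescence trace. The sketch below is a minimal, generic Python illustration of that step; the baseline heuristic, threshold choice, and synthetic amplification curve are assumptions for illustration, not the instrument's actual analysis pipeline.

```python
import numpy as np

def threshold_cycle(fluorescence, threshold=None):
    """Estimate the threshold cycle (Cq) of a real-time PCR amplification curve.

    fluorescence: 1-D array of background-subtracted fluorescence, one value per cycle.
    threshold:    detection threshold; if None, a simple heuristic (baseline mean plus
                  10x the baseline standard deviation) is used.
    Illustrative sketch only, not the analysis used by the instrument described above.
    """
    f = np.asarray(fluorescence, dtype=float)
    cycles = np.arange(1, f.size + 1)
    if threshold is None:
        baseline = f[:5]
        threshold = baseline.mean() + 10.0 * baseline.std()
    above = np.nonzero(f > threshold)[0]
    if above.size == 0:
        return None                      # no amplification detected
    i = above[0]
    if i == 0:
        return float(cycles[0])
    # Linear interpolation between the last sub-threshold and first supra-threshold cycle
    frac = (threshold - f[i - 1]) / (f[i] - f[i - 1])
    return float(cycles[i - 1] + frac)

# Example with a synthetic sigmoidal amplification curve
cycles = np.arange(1, 41)
curve = 1.0 / (1.0 + np.exp(-(cycles - 25) / 1.5))
print("estimated Cq:", round(threshold_cycle(curve), 2))
```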
Contributors: Ray, Tathagata (Author) / Youngbull, Cody (Thesis advisor) / Goryll, Michael (Thesis advisor) / Blain Christen, Jennifer (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151950
Description

Social media offers a powerful platform for the independent digital content producer community to develop, disperse, and maintain their brands. In terms of information systems research, the broad majority of the work has not examined hedonic consumption on Social Media Sites (SMS); the focus has mostly been on organizational perspectives and utilitarian gains from these services. Unlike in traditional commerce channels, including e-commerce retailers, consumption-enhancing hedonic utility is experienced differently in the context of a social media site; consequently, the dynamics of the decision-making process shift when decisions are made in a social context. Previous research assumed limited influence from a small, immediate group of peers, but the rules change when the network of peers expands exponentially. The assertion is that, while there are individual differences in susceptibility to influence from others, these are not the most important pieces of the analysis, unlike in research centered entirely on influence. Rather, the context of consumption can play an important role in the way social influence factors affect consumer behavior on Social Media Sites. Over the course of three studies, this dissertation examines factors that influence consumer decision-making and the brand personalities created and interpreted on these SMS. Study one examines the role of different types of peer influence on consumer decision-making on Facebook. Study two observes the impact of different types of producer message posts, combined with the different types of influence, on decision-making on Twitter. Study three concludes this work with an exploratory empirical investigation of actual Twitter postings by a set of musicians. These studies contribute to the body of IS literature by evaluating the specific behavioral changes related to consumption in the context of digital social media: (a) the power of social influencers in contrast to personal preferences on SMS, and (b) the effect on consumers of producer message types and content on SMS at both the profile level and the individual message level.
Contributors: Sopha, Matthew (Author) / Santanam, Raghu T (Thesis advisor) / Goul, Kenneth M (Committee member) / Gu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151953
Description

Distributed inference has applications in a wide range of fields such as source localization, target detection, environment monitoring, and healthcare. In this dissertation, distributed inference schemes which use bounded transmit power are considered, and the performance of the proposed schemes is studied for a variety of inference problems. In the first part of the dissertation, a distributed detection scheme where the sensors transmit with constant modulus signals over a Gaussian multiple access channel is considered. The deflection coefficient of the proposed scheme is shown to depend on the characteristic function of the sensing noise, and the error exponent for the system is derived using large deviation theory. Optimization of the deflection coefficient and error exponent is considered with respect to a transmission phase parameter for a variety of sensing noise distributions, including impulsive ones. The proposed scheme is also favorably compared with existing amplify-and-forward (AF) and detect-and-forward (DF) schemes. The effect of fading is shown to be detrimental to the detection performance, and simulations are provided to corroborate the analytical results. The second part of the dissertation studies a distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel. The conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variance of the noise samples is bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique, and its robustness to impulsive sensing noise distributions is highlighted. It is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient, and simulations corroborate the analytical results. In the third part of this dissertation, the problem of estimating the average of samples distributed at the nodes of a sensor network is considered. A distributed average consensus algorithm in which every sensor transmits with bounded peak power is proposed. In the presence of communication noise, it is shown that the nodes reach consensus asymptotically to a finite random variable whose expectation is the desired sample average of the initial observations, with a variance that depends on the step size of the algorithm and the variance of the communication noise. The asymptotic performance is characterized by deriving the asymptotic covariance matrix using results from stochastic approximation theory. It is shown that using bounded transmissions results in slower convergence compared to the linear consensus algorithm based on the Laplacian heuristic. Finally, a robust distributed average consensus algorithm in which every sensor performs nonlinear processing at the receiver is proposed. This nonlinearity at the receiver nodes makes the algorithm robust to a wide range of channel noise distributions, including impulsive ones. The nodes are shown to reach consensus asymptotically, with results similar to the case of transmit nonlinearity. Simulations corroborate these analytical findings and highlight the robustness of the proposed algorithm.
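To make the bounded-transmission consensus idea concrete, the toy Python simulation below runs a nonlinear average consensus update on a small ring network: each node transmits a tanh of its state (so the peak transmit power is bounded) and updates using noisy receptions from its neighbors. The topology, step size, noise level, and the choice of tanh are illustrative assumptions, not the dissertation's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                                                   # nodes on a ring (assumed topology)
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x0 = rng.normal(0.0, 1.0, size=n)                        # initial sensor observations
alpha, noise_std = 0.05, 0.05                            # step size, communication-noise level

state = x0.copy()
for _ in range(1500):
    tx = np.tanh(state)                                  # bounded (peak-power-limited) transmission
    new_state = state.copy()
    for i in range(n):
        # Noisy receptions from neighbors, compared against the node's own bounded value
        rx = sum(tx[j] + rng.normal(0.0, noise_std) for j in neighbors[i])
        new_state[i] += alpha * (rx - len(neighbors[i]) * tx[i])
    state = new_state

# With a fixed step size and persistent noise the agreement is approximate, with a
# variance set by the step size and noise level, as described in the abstract above.
print("sample average      :", round(float(x0.mean()), 3))
print("final node mean     :", round(float(state.mean()), 3))
print("spread before/after :", round(float(x0.max() - x0.min()), 3),
      "->", round(float(state.max() - state.min()), 3))
```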
Contributors: Dasarathan, Sivaraman (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Reisslein, Martin (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151971
Description

Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with high tracking performance in terms of RMSE. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location, and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
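At the core of all of these trackers is the same sequential Monte Carlo recursion: propagate particles through the state dynamics, weight them by the observation likelihood, and resample. The Python sketch below implements a plain bootstrap particle filter for a one-dimensional random-walk state observed in Gaussian noise; it only illustrates the machinery that the PPF-IMH and MPF algorithms build on and is not those algorithms themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=0.1, obs_std=0.5):
    """Generic bootstrap particle filter for a 1-D random-walk state.

    Illustrates the sequential Monte Carlo machinery the dissertation builds on;
    it is not the PPF-IMH or MPF algorithm itself.
    """
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the (assumed) random-walk dynamics
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight by the Gaussian observation likelihood and normalize
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Weighted-mean estimate, then multinomial resampling
        estimates.append(np.sum(weights * particles))
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Synthetic test: slowly drifting state observed in noise
true_state = np.cumsum(rng.normal(0.0, 0.1, 200))
obs = true_state + rng.normal(0.0, 0.5, 200)
est = bootstrap_particle_filter(obs)
print("RMSE:", round(float(np.sqrt(np.mean((est - true_state) ** 2))), 3))
```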
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151814
Description

This research emphasizes the use of low-energy, low-temperature post-processing to improve the performance and lifetime of thin films and thin film transistors, by applying the fundamentals of the interaction of materials with conductive heating and electromagnetic radiation. A single-frequency microwave anneal is used to rapidly recrystallize the damage induced during ion implantation in Si substrates. Volumetric heating of the sample in the microwave field facilitates rapid absorption of the radiation, promoting recrystallization at the amorphous-crystalline interface as well as electrical activation of the dopants through relocation to substitutional sites. Structural and electrical characterization confirms recrystallization of heavily implanted Si within a 40-second anneal, with minimal dopant diffusion compared to rapid thermal annealed samples. The use of microwave annealing to improve the performance of multilayer thin film devices, e.g., thin film transistors (TFTs), requires extensive study of the interaction of the individual layers with electromagnetic radiation. This issue is addressed by developing a detailed understanding of the thin films and interfaces in TFTs through the study of reliability and failure mechanisms under extensive stress testing. Electrical and ambient stresses, including illumination, thermal, and mechanical stresses, are applied to the mixed oxide based thin film transistors, which are explored because of the high mobilities of the mixed oxide (indium zinc oxide, indium gallium zinc oxide) channel layer materials. A semiconductor parameter analyzer is employed to measure transfer characteristics, from which the mobility, subthreshold, and threshold voltage parameters of the transistors are derived. Low temperature post-processing anneals compatible with polymer substrates are performed in several ambients (oxygen, forming gas, and vacuum) at 150 °C as a preliminary step. Analysis of the results before and after the low temperature anneals, using device physics fundamentals, helps categorize the defects leading to failure or degradation as oxygen vacancies, thermally activated defects within the bandgap, channel-dielectric interface defects, and acceptor-like or donor-like trap states. Microwave annealing has been confirmed to enhance the quality of thin films; future work entails extending the use of electromagnetic radiation in controlled ambients to enable quick post-fabrication anneals that improve the functionality and lifetime of these low temperature fabricated TFTs.
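The transistor parameters quoted above (threshold voltage and field-effect mobility) are commonly extracted from a linear-region transfer curve via the transconductance. The Python sketch below shows one textbook extraction; the device geometry, oxide capacitance, and bias values are assumed example numbers, not measurements from this work.

```python
import numpy as np

def linear_region_extraction(vgs, ids, vds, width, length, c_ox):
    """Extract threshold voltage and field-effect mobility from a linear-region
    transfer curve using the standard relation Ids = mu*Cox*(W/L)*(Vgs - Vt)*Vds.
    Textbook sketch with assumed geometry, not the dissertation's extraction procedure.
    """
    gm = np.gradient(ids, vgs)                       # transconductance dIds/dVgs
    i_max = int(np.argmax(gm))                       # point of maximum gm
    # Linear extrapolation of Ids at maximum gm back to Ids = 0 gives Vt
    vt = vgs[i_max] - ids[i_max] / gm[i_max]
    mobility = gm[i_max] * length / (width * c_ox * vds)
    return vt, mobility

# Assumed example: W = 100 um, L = 10 um, Cox = 17 nF/cm^2, Vds = 0.5 V (SI units below)
vgs = np.linspace(-5, 20, 251)
vds, width, length = 0.5, 100e-6, 10e-6
c_ox = 17e-9 * 1e4                                   # 17 nF/cm^2 converted to F/m^2
mu_true, vt_true = 10e-4, 2.0                        # 10 cm^2/Vs expressed in m^2/Vs
ids = np.where(vgs > vt_true,
               mu_true * c_ox * width / length * (vgs - vt_true) * vds, 0.0)

vt, mu = linear_region_extraction(vgs, ids, vds, width, length, c_ox)
print("Vt =", round(float(vt), 2), "V")
print("mu =", round(float(mu * 1e4), 1), "cm^2/Vs")
```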
Contributors: Vemuri, Rajitha (Author) / Alford, Terry L. (Thesis advisor) / Theodore, N David (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151455
Description

Although high-performance, lightweight composites are increasingly being used in applications ranging from aircraft and rotorcraft to weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking, and delamination. An important element in achieving reliable composite systems is a strong capability for assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated, reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage, which limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied, and the results indicate that this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure, and the model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states, such as fiber-matrix debonding, in composite structures with surface-bonded piezoelectric sensors.
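For readers unfamiliar with micromechanics, the simplest illustration of computing effective composite properties from fiber and matrix constituents is the Voigt/Reuss rule of mixtures sketched below in Python. It is a drastically simplified stand-in for the generalized method of cells used in MAC/GMC, included only to show what "effective material properties of a representative unit cell" means; the constituent values are assumed.

```python
def effective_moduli(E_fiber, E_matrix, v_fiber):
    """Voigt (longitudinal) and Reuss (transverse) estimates of the effective
    Young's moduli of a unidirectional fiber/matrix composite.

    A far simpler stand-in for the generalized method of cells: it only
    illustrates how constituent properties combine into effective properties.
    """
    v_matrix = 1.0 - v_fiber
    E_longitudinal = v_fiber * E_fiber + v_matrix * E_matrix        # Voigt (iso-strain)
    E_transverse = 1.0 / (v_fiber / E_fiber + v_matrix / E_matrix)  # Reuss (iso-stress)
    return E_longitudinal, E_transverse

# Assumed constituent moduli (GPa): carbon fiber and epoxy matrix, 60% fiber volume fraction
E1, E2 = effective_moduli(E_fiber=230.0, E_matrix=3.5, v_fiber=0.60)
print(f"E_longitudinal = {E1:.1f} GPa, E_transverse = {E2:.1f} GPa")
```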
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151465
Description

Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian methods are used to appropriately model ECG signals and adaptively select the corresponding model parameters. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online PhysioNet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both of these methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the non-availability of sufficient amounts of representative training data and the large inter-patient variability pose a challenge to existing supervised learning algorithms, resulting in poor classification performance. In addition, recently developed unsupervised learning methods require a priori knowledge of the number of diseases in order to cluster the ECG data, a number which often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach does not place any restriction on the number of disease classes, nor does it require any training data. The algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
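One widely used dynamic ECG representation, in the spirit of the McSharry et al. dynamical model, writes each beat as a sum of Gaussian kernels for the P, Q, R, S, and T waves; the kernel amplitudes, centers, and widths are then natural parameters for sequential Bayesian estimation. The Python sketch below shows that parameterization with illustrative values; it is an assumption about the model family, not necessarily the exact model used in this dissertation.

```python
import numpy as np

def synthetic_ecg_beat(t, params):
    """Sum-of-Gaussians parameterization of a single ECG beat: each wave
    (P, Q, R, S, T) contributes a Gaussian kernel with amplitude a, center theta,
    and width b. Parameter values below are illustrative, not fitted to data.
    """
    z = np.zeros_like(t)
    for a, theta, b in params:
        z += a * np.exp(-0.5 * ((t - theta) / b) ** 2)
    return z

# (amplitude [mV], center [s], width [s]) for the P, Q, R, S, T waves of one beat
params = [(0.15, 0.20, 0.025),   # P
          (-0.10, 0.35, 0.010),  # Q
          (1.00, 0.40, 0.010),   # R
          (-0.20, 0.45, 0.010),  # S
          (0.30, 0.65, 0.040)]   # T

t = np.linspace(0.0, 0.8, 400)
beat = synthetic_ecg_beat(t, params)
print("R-wave peak:", round(float(beat.max()), 2), "mV at t =",
      round(float(t[beat.argmax()]), 2), "s")
```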
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 152307
Description

Immunosignaturing is a medical test for assessing the health status of a patient in which microarrays of random-sequence peptides are used to determine the patient's immune fingerprint by associating antibodies from a biological sample with immune responses. The immunosignature measurements can potentially provide pre-symptomatic diagnosis for infectious diseases or detection of biological threats. Currently, traditional bioinformatics tools, such as data mining classification algorithms, are used to process the large amount of peptide microarray data. However, these methods generally require training data and do not adapt to changing immune conditions or additional patient information. This work proposes advanced processing techniques to improve the classification and identification of single and multiple underlying immune response states embedded in immunosignatures, making it possible to detect both known and previously unknown diseases or biothreat agents. Novel adaptive learning methodologies for unsupervised and semi-supervised clustering, integrated with immunosignature feature extraction approaches, are proposed. The techniques are based on extracting novel stochastic features from microarray binding intensities and use Dirichlet process Gaussian mixture models to adaptively cluster the immunosignatures in the feature space. This learning-while-clustering approach allows continuous discovery of antibody activity by adaptively detecting new disease states, with limited a priori disease or patient information. A beta process factor analysis model to determine underlying patient immune responses is also proposed to further improve the adaptive clustering performance by forming new relationships between patients and antibody activity. In order to extend the clustering methods to diagnosing multiple states in a patient, the adaptive hierarchical Dirichlet process is integrated with modified beta process factor analysis latent feature modeling to identify relationships between patients and infectious agents. The use of Bayesian nonparametric adaptive learning techniques allows for further clustering if additional patient data are received. Significant improvements in feature identification and immune response clustering are demonstrated using samples from patients with different diseases.
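The unsupervised clustering step can be prototyped with an off-the-shelf truncated Dirichlet process Gaussian mixture, which chooses the number of active components from the data rather than requiring it up front. The Python sketch below uses scikit-learn's BayesianGaussianMixture on synthetic two-dimensional "features"; it is a stand-in for the adaptive, feature-extraction-coupled methods developed here, with the data and weight threshold assumed for illustration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic 2-D "immunosignature features" drawn from three hidden immune states
X = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
               rng.normal([3, 1], 0.3, (100, 2)),
               rng.normal([1, 4], 0.3, (100, 2))])

# Truncated Dirichlet-process Gaussian mixture: the model decides how many of the
# 10 available components it actually needs, with no labeled training data.
dpgmm = BayesianGaussianMixture(n_components=10,
                                weight_concentration_prior_type="dirichlet_process",
                                covariance_type="full",
                                max_iter=500,
                                random_state=0).fit(X)
labels = dpgmm.predict(X)
active = np.sum(dpgmm.weights_ > 0.05)   # components carrying non-negligible weight
print("clusters found:", int(active))
print("cluster sizes :", np.bincount(labels))
```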
Contributors: Malin, Anna (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 152285
Description

Zinc oxide (ZnO), a naturally n-type semiconductor, has been identified as a promising candidate to replace indium tin oxide (ITO) as the transparent electrode in solar cells because of its wide bandgap (3.37 eV), abundant source materials, and suitable refractive index (2.0 at 600 nm). Spray deposition is a convenient and low-cost technique for large-area, uniform deposition of semiconductor thin films. In particular, it provides an easy way to dope the film by simply adding the dopant precursor to the starting solution. To reduce the resistivity of undoped ZnO, much work has been done on doping ZnO with either group IIIA or group VIIA elements using spray pyrolysis. However, the resistivity is still too high to meet the resistivity requirement for a transparent conducting oxide (TCO). In the present work, a novel co-spray deposition technique is developed to bypass a fundamental limitation of the conventional spray deposition technique, namely the inability to deposit metal oxides from incompatible precursors in a single starting solution. With this technique, ZnO films codoped with one cationic dopant (Al, Cr, or Fe) and an anionic dopant (F) have been successfully synthesized, even though F is incompatible with all three of these cationic dopants. Two starting solutions were prepared and co-sprayed through two separate spray heads. One solution contained only the F precursor, NH4F. The second solution contained the Zn precursor, Zn(O2CCH3)2, and one cationic dopant precursor, AlCl3, CrCl3, or FeCl3. The deposition was carried out at 500 °C on soda-lime glass in air. Compared to singly doped ZnO thin films, codoped ZnO samples showed better electrical properties. In addition, a minimum sheet resistance of 55.4 Ω/sq was obtained for Al and F codoped ZnO films after vacuum annealing at 400 °C, lower than that of ZnO singly doped with either Al or F. The transmittance of the Al and F codoped ZnO samples was above 90% in the visible range. This co-spray deposition technique provides a simple and cost-effective way to synthesize metal oxides with improved properties from incompatible precursors.
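Two figures commonly used to compare transparent electrodes can be derived directly from the numbers quoted above: the resistivity (sheet resistance times film thickness) and the Haacke figure of merit T^10/Rs. The Python sketch below computes both; the 500 nm film thickness is an assumed value, since the abstract reports only sheet resistance and transmittance, and the Haacke metric is a standard TCO benchmark rather than one reported in this work.

```python
def tco_metrics(sheet_resistance_ohm_sq, transmittance, thickness_nm):
    """Resistivity from sheet resistance and film thickness, plus the Haacke
    figure of merit (T^10 / Rs) commonly used to compare transparent electrodes.
    The film thickness is an assumed value, not one reported in the abstract above.
    """
    thickness_cm = thickness_nm * 1e-7
    resistivity = sheet_resistance_ohm_sq * thickness_cm        # ohm*cm
    haacke_fom = transmittance ** 10 / sheet_resistance_ohm_sq  # ohm^-1
    return resistivity, haacke_fom

# Reported values: 55.4 ohm/sq, T ~ 0.90 in the visible; 500 nm thickness is assumed
rho, fom = tco_metrics(55.4, 0.90, 500.0)
print(f"resistivity ~ {rho:.2e} ohm*cm, Haacke FOM ~ {fom:.2e} ohm^-1")
```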
Contributors: Zhou, Bin (Author) / Tao, Meng (Thesis advisor) / Goryll, Michael (Committee member) / Vasileska, Dragica (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 152288
Description

Chalcogenide glass (ChG) materials have gained wide attention because of their applications in conductive bridge random access memory (CBRAM), phase change memories (PC-RAM), optical rewritable disks (CD-RW and DVD-RW), microelectromechanical systems (MEMS), microfluidics, and optical communications. One of the significant properties of ChG materials is the change in the resistivity of the material when a metal such as Ag or Cu is added to it by diffusion. This study demonstrates the potential radiation-sensing capabilities of two metal/chalcogenide glass device configurations. Lateral and vertical device configurations sense the radiation-induced migration of Ag+ ions in germanium selenide glasses via changes in electrical resistance between electrodes on the ChG. Before irradiation, these devices exhibit a high-resistance 'OFF' state (on the order of 10^12 Ω), but following irradiation with either Co-60 gamma rays or UV light, their resistance drops to a low-resistance 'ON' state (around 10^3 Ω). Lateral devices have exhibited cyclical recovery with room-temperature annealing of the Ag-doped ChG, which suggests potential uses in reusable radiation sensor applications. The feasibility of producing inexpensive flexible radiation sensors has been demonstrated by studying the effects of mechanical strain and temperature stress on sensors formed on a flexible polymer substrate. The mechanisms of radiation-induced Ag/Ag+ transport and reactions in ChG have been modeled using a finite element device simulator, ATLAS. The essential reactions captured by the simulator are radiation-induced carrier generation combined with reduction/oxidation of Ag species in the chalcogenide film. Metal-doped ChGs are solid electrolytes that have both ionic and electronic conductivity. The ChG-based Programmable Metallization Cell (PMC) is a technology platform that offers electric-field-dependent resistance switching through the formation and dissolution of nanoscale conductive filaments in a ChG solid electrolyte between oxidizable and inert electrodes. This study identifies silver anode agglomeration in PMC devices following large radiation dose exposure and considers device failure mechanisms via electrical and material characterization. The results demonstrate that, by changing device structural parameters, silver agglomeration in PMC devices can be suppressed and reliable resistance switching can be maintained for extremely high doses, ranging from 4 Mrad(GeSe) to more than 10 Mrad(ChG).
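As a rough picture of the OFF-to-ON transition described above, the toy Python model below treats radiation-induced carrier generation and the reduction of Ag+ to Ag as simple rate equations and maps the growing Ag fraction onto a log-linear drop in film resistance. The coefficients, dose rate, and resistance mapping are invented for illustration; this is not the ATLAS finite-element model used in the dissertation.

```python
import numpy as np

def irradiate(dose_rate, time_steps, dt=1.0, g=1e-3, k=5e-3):
    """Toy rate-equation sketch of radiation-induced Ag incorporation in a ChG film:
    carriers are generated in proportion to dose rate (coefficient g), they reduce
    Ag+ to Ag in the glass (coefficient k), and the film resistance falls as the Ag
    fraction grows. Illustration only, not the ATLAS device model.
    """
    ag_fraction = 0.0
    r_off, r_on = 1e12, 1e3            # approximate OFF/ON resistances quoted above (ohms)
    resistance = []
    for _ in range(time_steps):
        carriers = g * dose_rate * dt                          # radiation-induced generation
        ag_fraction += k * carriers * (1.0 - ag_fraction)      # Ag+ reduction, saturating at 1
        resistance.append(r_off * (r_on / r_off) ** ag_fraction)  # log-linear interpolation
    return np.array(resistance)

r = irradiate(dose_rate=100.0, time_steps=5000)
print(f"start: {r[0]:.1e} ohm, end: {r[-1]:.1e} ohm")
```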
Contributors: Dandamudi, Pradeep (Author) / Kozicki, Michael N (Thesis advisor) / Barnaby, Hugh J (Committee member) / Holbert, Keith E. (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013