Description
Desalination of seawater, wastewater, and impaired groundwater is becoming essential to meet global water demands. In Saudi Arabia alone, desalination will supply 70% of the country's water needs by 2050. Three selective desalination processes are presented in this dissertation: i) a pressure-driven membrane process using reverse osmosis (RO), ii) a thermal-driven process using membrane distillation (MD), and iii) an electro-potential-driven electrocatalytic process for selective ion conversion. Modern RO membranes have reached their theoretical performance limits, leaving little to be gained from further innovation at the membrane material level. Bulk salt removal, however, is not always needed; selective removal of problematic salts may provide lower-cost strategies. Therefore, the overarching goal of this dissertation involves i) evaluating wastewater desalination at the system level to reduce the energy required to enable wastewater reuse, and ii) exploring micro-level reactor architectures to identify low-energy strategies for selective ion treatment in impaired waters. At the system level, leveraging a locally co-located cool water source at wastewater facilities enabled an MD system to treat warm wastewater RO brine, resulting in enhanced water recovery, decreased brine volume, and minimized energy requirements. A temperature differential of ΔT = 10 °C between the brine and the surface water was adequate for the membrane distillation process, requiring 25% less energy than conventional MD. Two micro-sized reactor designs were considered for selective salt removal. First, microfluidic testing platforms were designed and fabricated using natural and engineered nanotubes as potential new architectures for salt separation. Tobacco mosaic virus (TMV) was grown and purified and, along with carbon nanotubes (CNTs), deposited on silicon wafers as part of the microfluidic devices. Progress was terminated after two years due to complications with aligning the nanotubes on the wafers; in particular, separating the nanotubes and aligning them in straight rows proved to be key obstacles to microfluidic device fabrication. The innovations I made nonetheless provide a platform for further research on micro-sized devices. I pivoted to study selective ion destruction rather than separation, using an electrochemical microfluidic device. The electrochemical microfluidic device allowed probing of energy consumption in the microchannel and showed an order-of-magnitude lower energy requirement for nitrite removal compared with a conventional electrochemical reactor.
Contributors: Alrehaili, Omar (Author) / Westerhoff, Paul (Thesis advisor) / Perreault, Francois (Committee member) / Sinha, Shahnawaz (Committee member) / Garcia-Segura, Sergi (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
I studied the molecular mechanisms of ultraviolet radiation (UVR) mitigation in the terrestrial cyanobacterium Nostoc punctiforme ATCC 29133, which produces the indole-alkaloid sunscreen scytonemin and differentiates into motile filaments (hormogonia). While the early stages of scytonemin biosynthesis were known, the late stages were not. Gene deletion mutants were interrogated by metabolite analyses and confocal microscopy, demonstrating that the ebo gene cluster was not only required for scytonemin biosynthesis but was also involved in the export of scytonemin monomers to the periplasm. Further, the product of gene scyE was also exported to the periplasm, where it was responsible for the terminal oxidative dimerization of the monomers. These results opened questions regarding the functional universality of the ebo cluster. To probe whether it could play a similar role in organisms other than scytonemin-producing cyanobacteria, I developed a bioinformatic pipeline (Functional Landscape And Neighbor Determining gEnomic Region Search; FLANDERS) and used it to scrutinize the regions neighboring the ebo gene cluster in 90 different bacterial genomes for potentially informative features. Aside from the scytonemin operon and the edb cluster of Pseudomonas spp., responsible for nematode repellence, no known clusters were identified among the genomic ebo neighbors, but many of the ebo-adjacent regions were enriched in signal peptides for export, indicating a general functional connection between the ebo cluster and biosynthetic compartmentalization. Lastly, I investigated the regulatory span of the two-component regulator of the scytonemin operon (scyTCR) using RNAseq of scyTCR deletion mutants under UV induction. Surprisingly, the knockouts had decreased expression levels in many of the genes involved in hormogonia differentiation and in a putative multigene regulatory element, hcyA-D. This suggested that UV could be a cue for developmental motility responses in Nostoc, which I confirmed phenotypically. In fact, UV-A simultaneously elicited hormogonia differentiation and scytonemin production throughout a genetically homogeneous population. I show through mutant analyses that the partner-switching mechanism encoded by hcyA-D acts as a hinge between the scytonemin- and hormogonia-based responses. Collectively, this dissertation contributes to the understanding of microbial adaptive responses to environmental stressors at the genetic and regulatory level, highlighting their phenomenological and mechanistic complexity.
Contributors: Klicki, Kevin (Author) / Garcia-Pichel, Ferran (Thesis advisor) / Wilson, Melissa (Committee member) / Mukhopadhyay, Aindrila (Committee member) / Misra, Rajeev (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Published in 1992, “The osteological paradox: problems of inferring prehistoric health from skeletal samples” highlighted the limitations of interpreting population health from archaeological skeletal samples. The authors drew the attention of the bioarchaeological community to several unfounded assumptions in the field of paleopathology. They cautioned that bioarchaeologists needed to expand their methodological and theoretical toolkits and examine how variation in frailty influences mortality outcomes. This dissertation undertakes this task by 1) establishing a new approach for handling missing paleopathology data that facilitates the use of new analytical methods for exploring frailty and resiliency in skeletal data, and 2) investigating the role of prior frailty in shaping selective mortality in an underexplored epidemic context. The first section takes the initial step of assessing current techniques for handling missing data in bioarchaeology and testing protocols for imputation of missing paleopathology variables. A review of major bioarchaeological journals, searching for terms that describe the treatment of missing data, is compiled. The articles are sorted by subject topic and into categories based on the statistical and theoretical rigor with which missing data are handled. A case study testing eight methods for handling missing data is conducted to determine which methods best produce unbiased parameter estimates. The second section explores how pre-existing frailty influenced mortality during the 1918 influenza pandemic. Skeletal lesion data are collected from a sample of 424 individuals from the Hamann-Todd Documented Collection. Using Kaplan-Meier survival analysis and Cox proportional hazards models, this chapter tests whether individuals who were healthy (i.e., non-frail) were as likely to die during the flu as frail individuals. Results indicate that imputation is underused in bioarchaeology; therefore, procedures for imputing ordinal and continuous paleopathology data are established. The findings of the second section reveal that while a greater proportion of non-frail individuals died during the 1918 pandemic compared to pre-flu times, frail individuals were more likely to die at all times. The outcomes of this dissertation help expand the types of statistical analyses that can be performed using paleopathology data. They contribute to the field’s knowledge of selective mortality and differential frailty during a major historical pandemic.
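As a rough illustration of the survival-analysis step described above, a Kaplan-Meier and Cox proportional hazards workflow might look like the following Python sketch using the lifelines library; the dataset, column names, and effect sizes are synthetic stand-ins, not the Hamann-Todd data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic stand-in data: one row per individual, with a binary frailty index
# (e.g., derived from skeletal lesion counts), age at death, and a flag for
# whether the individual died during the 1918 flu period. All values simulated.
rng = np.random.default_rng(0)
n = 300
frail = rng.integers(0, 2, n)
flu_period = rng.integers(0, 2, n)
age_at_death = rng.normal(55 - 8 * frail - 3 * flu_period, 10, n).clip(15, 90)
df = pd.DataFrame({
    "age_at_death": age_at_death,
    "observed": 1,            # all deaths observed in a documented collection
    "frail": frail,
    "flu_period": flu_period,
})

# Kaplan-Meier survival curves stratified by frailty
kmf = KaplanMeierFitter()
for label, group in df.groupby("frail"):
    kmf.fit(group["age_at_death"], event_observed=group["observed"],
            label=f"frail={label}")
    print(f"frail={label}: median age at death {kmf.median_survival_time_:.1f}")

# Cox proportional hazards: does frailty raise the hazard of death,
# controlling for the pandemic period?
cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_death", event_col="observed")
cph.print_summary()
```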
Contributors: Wissler, Amanda (Author) / Buikstra, Jane E (Thesis advisor) / DeWitte, Sharon N (Committee member) / Stojanowski, Christopher M (Committee member) / Mamelund, Svenn-Erik (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Oxygen transfer reactions are central to many catalytic processes, including those underlying automotive exhaust emissions control and clean energy conversion. The catalysts used in these applications typically consist of metal nanoparticles dispersed on reducible oxides (e.g., Pt/CeO2), since reducible oxides can transfer their lattice oxygen to reactive adsorbates at the metal-support interface. There are many outstanding questions regarding the atomic and nanoscale spatial variation of the Pt/CeO2 interface, Pt metal particle, and adjacent CeO2 oxide surface during catalysis. To this end, a range of techniques centered around aberration-corrected environmental transmission electron microscopy (ETEM) were developed and employed to visualize and characterize the atomic-scale structural behavior of CeO2-supported Pt catalysts under reaction conditions (in situ) and/or during catalysis (operando). A model of the operando ETEM reactor was developed to simulate the gas and temperature profiles during conditions of catalysis. Most importantly, the model provides a tool for relating the reactant conversion measured with spectroscopy to the reaction rate of the catalyst that is imaged on the TEM grid. As a result, this work has produced a truly operando TEM methodology, since the structure observed during an experiment can be directly linked to quantitative chemical kinetics of the same catalyst. This operando ETEM approach was leveraged to investigate structure-activity relationships for CO oxidation over Pt/CeO2 catalysts. Correlating atomic-level imaging with catalytic turnover frequency reveals a direct relationship between activity and dynamic structural behavior that (a) destabilizes the supported Pt particle, (b) marks an enhanced rate of oxygen vacancy creation and annihilation, and (c) leads to increased strain and reduction in the surface of the CeO2 support. To further investigate the structural meta-stability (i.e., fluxionality) of 1 – 2 nm CeO2-supported Pt nanoparticles, time-resolved in situ AC-ETEM was employed to visualize the catalyst’s dynamical behavior with high spatiotemporal resolution. Observations are made under conditions relevant to the CO oxidation and water-gas shift (WGS) reactions. Finally, deep learning-based convolutional neural networks were leveraged to develop novel denoising techniques for ultra-low signal-to-noise images of catalytic nanoparticles.
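To give a sense of what a convolutional denoiser for ultra-low signal-to-noise micrographs can look like, here is a minimal PyTorch sketch; the architecture, residual-learning choice, and training data are illustrative assumptions and not the networks developed in the dissertation.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Small convolutional denoiser for low-SNR images (illustrative only)."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network predicts the noise, which is subtracted.
        return x - self.net(x)

# Training-loop sketch on (noisy, reference) image pairs
model = DenoisingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(8, 1, 64, 64)      # stand-in for ultra-low-SNR frames
target = torch.zeros(8, 1, 64, 64)     # stand-in for reference images
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), target)
    loss.backward()
    optimizer.step()
```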
Contributors: Vincent, Joshua Lawrence (Author) / Crozier, Peter A (Thesis advisor) / Liu, Jingyue (Committee member) / Muhich, Christopher L (Committee member) / Nannenga, Brent L (Committee member) / Singh, Arunima K (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Army Futures Command (AFC) has the implicit mission of ensuring that the Army does not get locked into a technology that might be ineffective in a future of competition and conflict. In this dissertation I develop insights and tools that can help assess and inform AFC’s efforts to understand and avoid undesirable technological lock-in. I started with three historical case studies of the interactions between technology and military strategy. The first examined the German Army’s strategic commitment to using railroads before World War I, forcing them into a military answer to rapidly escalating diplomatic tensions in 1914. The second explored how the US Army Air Corps became locked into a doctrine of strategic bombing before World War II, affecting its ability to support ground troops during the Cold War. The third studied why the US Army was able to avoid becoming locked into a tactical nuclear doctrine in the 1950s, despite initial efforts to change Army structure and tactics to accommodate the nuclear battlefield. I identified three factors: 1) rapid changes in the strategic environment; 2) the lack of civilian analogues to nuclear weapons; and 3) the novelty of tactical nuclear technology and the availability of operational alternatives. The second part of my research sought to identify applicable theories from the field of science, technology, and society (STS) studies. I identified five theories (technological systems, co-production, technological lock-in, path dependence, and economic growth theory), each with a brief case study. I sent my initial analysis to eighteen professionals at AFC and used their feedback to determine the utility of these theories for military planning. Finally, I analyzed AFC's current initiatives via semi-structured interviews, gaining insight into AFC's operations and identifying three classes of issues the command faces: complicated, exterior, and complex. Complicated issues are manageable through organizational methods. Exterior issues require planning to accommodate irreducible uncertainties (such as budgeting processes). Complex issues involve unpredictable interactions between technology and military strategy. I focused on three AFC programs (artificial intelligence, robotics, and autonomous systems), demonstrating how STS theories can offer additional tools to help guide technological and strategic planning for an uncertain future.
Contributors: McCafferty, Sean (Author) / Sarewitz, Daniel (Thesis advisor) / Allenby, Braden (Committee member) / Kubiak, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Many filmmakers have explored the sonic possibilities offered by experimental, avant-garde, and modernist music as it prospered in the mid-twentieth century. Fascinatingly, horror cinema, with all its eerie subject matter, has championed the use of experimental music in its films. Since the silent-film era, horror has had much to gain by deviating from the normative film scoring standards developed in Hollywood. Filmmakers working in horror continually seek new sounds and approaches to showcase the otherworldly and suspenseful themes of their films. Numerous movies that challenged the status quo through transformative scoring practices achieved distinction among rival films. The rise of auteurist films in the 1950s further instigated experimental practices as the studio system declined and created a space for new filmmakers to experiment with aesthetic strategies. Film music scholarship has paid relatively little attention to the convergences between experimental concert music and horror scoring practices. This topic is crucial; horror’s employment of existing experimental music, in particular, has played a critical role in American filmmaking in the second half of the twentieth century. My thesis traces the relationship between horror cinema and experimental music. I survey the use of experimental music throughout the history of horror films and examine the scores for three films: William Friedkin’s The Exorcist (1973), Stanley Kubrick’s The Shining (1980), and Martin Scorsese’s Shutter Island (2010). With my case studies of these three films, I aim to fill a significant gap in film music scholarship, highlight the powerful use of experimental music textures and timbres, and demonstrate this music’s significant role in cultivating new scoring practices that succeed in engaging, unnerving, and shocking audiences of horror cinema.
Contributors: Ale, Lea (Author) / Feisst, Sabine (Thesis advisor) / Saucier, Catherine (Committee member) / Schmelz, Peter (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Since the No Child Left Behind (NCLB) Act required classifications of students’ performance levels, test scores have been used to measure students’ achievement; in particular, test scores are used to determine whether students reach a proficiency level on the state assessment. Accordingly, school districts have started using benchmark assessments to complement the state assessment. Unlike state assessments administered at the end of the school year, benchmark assessments, administered multiple times during the school year, measure students’ learning progress toward reaching the proficiency level. Thus, the results of the benchmark assessments can help districts and schools prepare their students for the subsequent state assessments so that their students can reach the proficiency level on the state assessment. If benchmark assessments can accurately predict students’ future performance on the state assessments, the assessments can be more useful for facilitating classroom instruction to support students’ improvement. Thus, this study focuses on the predictive accuracy of a proficiency cut score on the benchmark assessment. Specifically, using an econometric research technique, Regression Discontinuity Design, this study assesses whether reaching a proficiency level on the benchmark assessment had a causal impact on increasing the probability of reaching a proficiency level on the state assessment. Finding no causal impact of the cut score, this study alternatively applies a Precision-Recall curve, a useful measure for evaluating the predictive performance of binary classification. Using this technique, this study calculates an optimal proficiency cut score on the benchmark assessment that maximizes the accuracy and minimizes the inaccuracy of predicting the proficiency level on the state assessment. Based on the results, this study discusses issues regarding the conventional approaches to establishing cut scores in large-scale assessments and suggests some potential approaches to increase the predictive accuracy of cut scores in benchmark assessments.
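As a rough sketch of how a precision-recall curve can be used to choose a cut score, the following Python example uses scikit-learn on simulated scores; the data are synthetic, and maximizing F1 here merely stands in for the study's own accuracy criterion.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical data: benchmark-assessment scale scores and whether each student
# later reached proficiency on the state assessment (1 = proficient).
rng = np.random.default_rng(0)
benchmark = rng.normal(50, 10, 500)
state_proficient = (benchmark + rng.normal(0, 8, 500) > 55).astype(int)

precision, recall, thresholds = precision_recall_curve(state_proficient, benchmark)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = np.argmax(f1[:-1])              # the last PR point has no threshold
optimal_cut = thresholds[best]
print(f"optimal benchmark cut score = {optimal_cut:.1f}, "
      f"precision = {precision[best]:.2f}, recall = {recall[best]:.2f}")
```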
Contributors: Terada, Takeshi (Author) / Chen, Ying-Chih (Thesis advisor) / Edwards, Michael (Thesis advisor) / Garcia, David (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
A $k$-list assignment for a graph $G=(V, E)$ is a function $L$ that assigns a $k$-set $L(v)$ of "available colors" to each vertex $v \in V$. A $d$-defective, $m$-fold, $L$-coloring is a function $\phi$ that assigns an $m$-subset $\phi(v) \subseteq L(v)$ to each vertex $v$ so that each color class $V_{i}=\{v \in V: i \in \phi(v)\}$ induces a subgraph of $G$ with maximum degree at most $d$. An edge $xy$ is an $i$-flaw of $\phi$ if $i \in \phi(x) \cap \phi(y)$. An online list-coloring algorithm $\mathcal{A}$ works on a known graph $G$ and an unknown $k$-list assignment $L$ to produce a coloring $\phi$ as follows. At step $r$ the set of vertices $v$ with $r \in L(v)$ is revealed to $\mathcal{A}$. For each vertex $v$, $\mathcal{A}$ must decide irrevocably whether to add $r$ to $\phi(v)$. The online choice number $\mathrm{pt}_{m}^{d}(G)$ of $G$ is the least $k$ for which some such algorithm produces a $d$-defective, $m$-fold, $L$-coloring $\phi$ of $G$ for all $k$-list assignments $L$. Online list coloring was introduced independently by Uwe Schauz and Xuding Zhu. It was known that if $G$ is planar then $\mathrm{pt}_{1}^{0}(G) \leq 5$ and $\mathrm{pt}_{1}^{1}(G) \leq 4$ are sharp bounds; here it is proved that $\mathrm{pt}_{1}^{3}(G) \leq 3$ is sharp, but there is a planar graph $H$ with $\mathrm{pt}_{1}^{2}(H) \geq 4$. Zhu conjectured that for some integer $m$, every planar graph $G$ satisfies $\mathrm{pt}_{m}^{0}(G) \leq 5m-1$, and even that this is true for $m=2$. This dissertation proves that $\mathrm{pt}_{2}^{1}(G) \leq 9$, so the conjecture is "nearly" true, and the proof extends to $\mathrm{pt}_{m}^{1}(G) \leq \left\lceil \frac{9}{2} m \right\rceil$. Using Alon's Combinatorial Nullstellensatz, this is strengthened by showing that $G$ contains a linear forest $(V, F)$ such that there is an online algorithm that witnesses $\mathrm{pt}_{2}^{1}(G) \leq 9$ while producing a coloring whose flaws are in $F$, and such that no edge is an $i$-flaw and a $j$-flaw for distinct colors $i$ and $j$.
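To illustrate the online framework only (this toy greedy strategy does not achieve the bounds proved in the dissertation), the Python sketch below plays the algorithm's side: at each step the vertices whose lists contain the current color are revealed, and a color is accepted only when the $d$-defective and $m$-fold invariants survive.

```python
from collections import defaultdict

def greedy_online_coloring(adj, steps, m=1, d=0):
    """Toy greedy strategy for online d-defective, m-fold list coloring.

    adj   : dict mapping each vertex of the known graph G to a set of neighbors
    steps : iterable of (color r, set of vertices v with r in L(v))
    Returns phi mapping each vertex to its accepted colors, keeping every color
    class at induced maximum degree <= d and every vertex at <= m colors.
    """
    phi = defaultdict(set)            # colors accepted so far by each vertex
    in_class = defaultdict(set)       # vertices currently holding each color
    class_deg = defaultdict(int)      # class_deg[(r, v)] = v's degree inside class r

    for r, revealed in steps:
        for v in revealed:
            if len(phi[v]) >= m:
                continue                          # v already holds m colors
            nbrs = adj[v] & in_class[r]
            # accept r only if the defect bound d survives for v and its neighbors
            if len(nbrs) <= d and all(class_deg[(r, u)] < d for u in nbrs):
                phi[v].add(r)
                in_class[r].add(v)
                class_deg[(r, v)] = len(nbrs)
                for u in nbrs:
                    class_deg[(r, u)] += 1
    return dict(phi)

# Example: proper (0-defective, 1-fold) online coloring of a 4-cycle where every
# vertex is offered colors 1, 2, 3 in that order.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_online_coloring(C4, [(r, {0, 1, 2, 3}) for r in (1, 2, 3)]))
```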
Contributors: Han, Ming (Author) / Kierstead, Henry A. (Thesis advisor) / Czygrinow, Andrzej (Committee member) / Sen, Arunabha (Committee member) / Spielberg, John (Committee member) / Fishel, Susanna (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Heterogeneous SoCs are in development that marry multiple architectural patterns together. In order for software to run on such a platform, it must be broken down into its constituent parts, called kernels, and scheduled for execution on the hardware. Although this can be done by hand, it would be arduous and time consuming; rather, a tool should be developed that analyzes the source binary, extracts the kernels, schedules the kernels, and optimizes the scheduled kernels for their target component. This dissertation proposes a decidable kernel definition that enables an algorithmic approach to detecting kernels in arbitrary programs. This definition is built upon four constraints that can be tested using basic graph theory. In addition, two algorithms are proposed that successfully extract kernels based upon runtime information. The first utilizes dynamic traces, which are generated using a collection of novel optimizations. The second utilizes a simple affinity matrix, which has no runtime overhead during program execution. Finally, a dense neural network is proposed that is capable of detecting a kernel's archetype based upon only the composition of the source program and the number of times individual basic blocks execute. The contributions proposed in this dissertation provide the necessary infrastructure to perform a litany of other optimizations on kernels. By detecting kernels algorithmically, any program can be analyzed and optimized with techniques that have heretofore required kernels to be written in a compatible form. Computational kernels can be extracted from any program with no constraints. The innovations described here will form the foundation for automated kernel optimization in the future, helping to optimize the code of the future.
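As a rough illustration of the affinity-matrix idea (not the dissertation's actual construction), the sketch below groups basic blocks whose per-run execution-count profiles are nearly collinear; the profiling format, similarity measure, and threshold are assumptions made for the example.

```python
import numpy as np

def candidate_kernels(block_counts, threshold=0.95):
    """Group basic blocks into candidate kernels from execution-count profiles.

    block_counts : (num_blocks, num_runs) array; entry [b, r] is how many times
                   basic block b executed in profiling run r.
    Blocks whose profiles have cosine similarity above `threshold` are merged.
    """
    counts = np.asarray(block_counts, dtype=float)
    norms = np.linalg.norm(counts, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0                  # never-executed blocks stay isolated
    unit = counts / norms
    affinity = unit @ unit.T                   # pairwise cosine-similarity matrix

    # union-find over the thresholded affinity graph
    parent = list(range(len(counts)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(len(counts)):
        for j in range(i + 1, len(counts)):
            if affinity[i, j] >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for b in range(len(counts)):
        groups.setdefault(find(b), []).append(b)
    return list(groups.values())

profiles = np.array([[100, 980, 10],
                     [100, 980, 10],
                     [  3,   2,  4],
                     [  1,   0,  0]])
print(candidate_kernels(profiles))   # e.g. [[0, 1], [2], [3]]
```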
Contributors: Uhrie, Richard Lawrence (Author) / Brunhaver, John (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The purpose of the PhotoStory Professional Development (PPD) action research study was to explore the relationship between dialogical narrative analysis and reducing compassion fatigue in teachers working in a trauma-informed behavior management program. The PPD was designed to elicit conversations related to the psychological effects of compassion fatigue, which were identified in previous cycles of action research. Through the iterative process, teachers identified that they needed administrative support and mitigation strategies for stress reduction related to working in a trauma-informed context. As a result, the PPD was developed to provide opportunities for disclosure, discussion, and reflection regarding experiences with compassion fatigue related to the school context. The study was grounded in a constructivist framework, and aspects of trauma theory, connection, and storytelling were explored. The literature review includes studies centered on professional development for teachers working in trauma-informed programs, and on the psychological effects of, and mitigation strategies for, compassion fatigue. The PPD study participants included six kindergarten through eighth grade educators. Participants completed a presurvey, attended three workshops over the course of four weeks, and completed a postsurvey. Each workshop provided an opportunity for participants to create and present a PhotoStory collage, participate in a Talking Circle discussion, and write journal reflections. All six participants completed a 30-minute individual mid-study interview. The results of the study indicated that providing participants with an opportunity to engage in dialogue regarding compassion fatigue reduced the negative psychological effects associated with their roles as trauma-informed educators.
Contributors: Echeverria, Lushanya (Author) / Giorgis, Cyndi (Thesis advisor) / Anokye, Duku (Thesis advisor) / Cecena, Aracele (Committee member) / Arizona State University (Publisher)
Created: 2021