Description
I describe the first continuous-space nuclear path integral quantum Monte Carlo method and calculate the ground state properties of light nuclei, including the deuteron, triton, helium-3, and helium-4, using both local chiral interactions up to next-to-next-to-leading order and the Argonne $v_6'$ interaction. Compared with diffusion-based quantum Monte Carlo methods such as Green's function Monte Carlo and auxiliary field diffusion Monte Carlo, path integral quantum Monte Carlo has the advantage that it can directly calculate the expectation values of operators without tradeoff, whether or not they commute with the Hamiltonian. For operators that commute with the Hamiltonian, e.g., the Hamiltonian itself, the path integral quantum Monte Carlo light-nuclei results agree with Green's function Monte Carlo and auxiliary field diffusion Monte Carlo results. For other operator expectations, which are important for understanding nuclear measurements but do not commute with the Hamiltonian and therefore cannot be accurately calculated by diffusion-based quantum Monte Carlo methods without tradeoff, the path integral quantum Monte Carlo method gives reliable results. I show root-mean-square radii, one-particle number density distributions, and Euclidean response functions for single-nucleon couplings. I also systematically describe all the sampling algorithms used in this work, the strategies that make the computation efficient, the error estimation, and the details of the code implementation. This work can serve as a benchmark for future calculations of larger nuclei or finite-temperature nuclear matter using path integral quantum Monte Carlo.
Contributors: Chen, Rong (Author) / Schmidt, Kevin E. (Thesis advisor) / Alarcon, Ricardo O. (Committee member) / Beckstein, Oliver (Committee member) / Comfort, Joseph R. (Committee member) / Shovkovy, Igor A. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Immunotherapy has received great attention recently, as it has become a powerful tool in fighting certain types of cancer. Immunotherapeutic drugs strengthen the immune system's natural ability to identify and eradicate cancer cells. This work focuses on immune checkpoint inhibitor and oncolytic virus therapies. Immune checkpoint inhibitors block the binding of checkpoint proteins to their partner proteins, enabling T-cell activation and stimulation of the immune response. Oncolytic virus therapy utilizes genetically engineered viruses that kill cancer cells by lysing them. Mathematical modeling has proven instrumental in elucidating the interactions between a growing tumor and the employed drugs. This dissertation introduces and analyzes three different ordinary differential equation models to investigate tumor immunotherapy dynamics.

The first model considers a monotherapy employing the immune checkpoint inhibitor anti-PD-1. The dynamics both with and without anti-PD-1 are studied, and mathematical analysis is performed for the case in which no anti-PD-1 is administered. Simulations are carried out to explore the effects of continuous versus intermittent treatment. The simulations do not demonstrate elimination of the tumor, suggesting the need for a combination treatment.
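The continuous-dosing scenario above lends itself to a quick numerical illustration. The following is a deliberately simplified tumor/effector-cell ODE sketch, not the dissertation's model; the equations, the dosing function u(t), and every parameter value are assumptions chosen only to show how such simulations are typically set up:

```python
# Toy tumor (T) / effector-cell (E) system integrated with forward Euler.
# An anti-PD-1-like dose u(t) scales the kill term; all values are made up.
def simulate(u, T0=1e6, E0=1e5, days=60.0, dt=0.01):
    r, K = 0.3, 1e9           # assumed logistic growth rate and capacity
    a, h, s = 1e-6, 0.2, 1e4  # assumed kill rate, T-cell decay, supply
    T, E = T0, E0
    for i in range(int(days / dt)):
        t = i * dt
        kill = a * (1 + u(t)) * E * T      # drug boosts immune killing
        dT = r * T * (1 - T / K) - kill
        dE = s - h * E
        T = max(T + dt * dT, 0.0)
        E = max(E + dt * dE, 0.0)
    return T

untreated = simulate(lambda t: 0.0)   # no drug
continuous = simulate(lambda t: 5.0)  # constant dosing
print(untreated > continuous)  # True: dosing slows growth in this toy model
```

Even in this caricature, constant dosing suppresses the tumor without driving it to zero, which is qualitatively the observation motivating combination therapy.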

An extension of the aforementioned model is deployed to investigate the pairing of an immune checkpoint inhibitor anti-PD-L1 with an immunostimulant NHS-muIL12. Additionally, a generic drug-free model is developed to explore the dynamics of both exponential and logistic tumor growth functions. Experimental data are used for model fitting and parameter estimation in the monotherapy cases. The model is utilized to predict the outcome of combination therapy, and reveals a synergistic effect: Compared to the monotherapy case, only one-third of the dosage can successfully control the tumor in the combination case.

Finally, the treatment impact of oncolytic virus therapy is explored in a previously developed and fitted model. To determine whether one can trust the predictive abilities of the model, a practical identifiability analysis is performed. In particular, the profile likelihood curves demonstrate practical unidentifiability when all parameters are fit simultaneously. This observation raises concerns about the predictive abilities of the model. Further investigation shows that if half of the model parameters can be measured through biological experimentation, practical identifiability is achieved.
Contributors: Nikolopoulou, Elpiniki (Author) / Kuang, Yang (Thesis advisor) / Gardner, Carl (Committee member) / Gevertz, Jana (Committee member) / Kang, Yun (Committee member) / Kostelich, Eric (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This thesis addresses the following fundamental maximum throughput routing problem: Given an arbitrary edge-capacitated n-node directed network and a set of k commodities, with source-destination pairs (s_i, t_i) and demands d_i > 0, admit and route the largest possible number of commodities -- i.e., the maximum throughput -- to satisfy their demands.

The main contributions of this thesis are three-fold. First, a bi-criteria approximation algorithm is presented for this all-or-nothing multicommodity flow (ANF) problem. This algorithm is the first to achieve a constant approximation of the maximum throughput with an edge capacity violation ratio that is at most logarithmic in n, with high probability. The approach is based on a version of randomized rounding that keeps flows splittable, rather than approximating them via a non-splittable path for each commodity; this allows it to work for arbitrary directed edge-capacitated graphs, unlike most prior work on the ANF problem. The algorithm also works for a weighted throughput, where the benefit gained by fully satisfying the demand for commodity i is determined by a given weight w_i > 0. Second, a derandomization of the algorithm is presented that maintains the same approximation bounds, using novel pessimistic estimators for Bernstein's inequality. In addition, it is shown how the framework can be adapted to achieve a polylogarithmic fraction of the maximum throughput while maintaining a constant edge capacity violation, provided the network capacity is large enough. Lastly, one important aspect of the randomized and derandomized algorithms is their simplicity, which lends itself to efficient implementations in practice. Implementations of both the randomized rounding and derandomized algorithms for the ANF problem are presented and demonstrate their efficiency in practice.
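To make the rounding step concrete, here is a toy sketch of the basic randomized-rounding idea only (the LP values, the scaling factor alpha, and the five-commodity instance are all invented for illustration; the actual algorithm also routes the admitted demands and bounds the capacity violation):

```python
import random

# Given fractional LP values x_i in [0,1] for admitting each commodity,
# admit commodity i independently with probability x_i / alpha; the
# scaling alpha trades admitted throughput against capacity violation.
random.seed(0)                        # fixed seed for reproducibility
x = [0.9, 0.6, 0.3, 0.8, 0.1]        # hypothetical fractional LP solution
alpha = 2.0
admitted = [i for i, xi in enumerate(x) if random.random() < xi / alpha]
expected_throughput = sum(x) / alpha  # 1.35 commodities in expectation
print(admitted, expected_throughput)
```

By linearity of expectation, the admitted throughput is sum(x)/alpha in expectation; the thesis's concentration analysis (via Bernstein's inequality) is what turns this into a with-high-probability guarantee.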
Contributors: Chaturvedi, Anya (Author) / Richa, Andréa W. (Thesis advisor) / Sen, Arunabha (Committee member) / Schmid, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
As dependence on computers and databases increases, so does the damage caused by malicious code. Moreover, the gravity and magnitude of malicious attacks by hackers are growing at an unprecedented rate. A key challenge lies in detecting such malicious attacks and code in real time using existing methods, such as signature-based detection. To this end, computer scientists have attempted to classify heterogeneous types of malware on the basis of their observable characteristics. Existing literature focuses on classifying binary code, due to the greater accessibility of malware binaries than source code. For improved speed and scalability, machine learning-based approaches are also widely used. Despite such merits, the machine learning-based approach critically lacks interpretability of its outcome, which restricts understanding of why a given code belongs to a particular type of malware and, importantly, why some portions of a code are reused very often by hackers. In this light, this study aims to enhance understanding of malware by directly investigating reused code and uncovering its characteristics.

To examine reused code in malware, both malware with source code and malware with binary code are considered in this thesis. For malware with source code, reused code chunks in the Mirai botnet are examined: this study lists frequently reused code chunks and analyzes the characteristics and location of the code. For malware with binary code, this study performs reverse engineering on the binary code to make it comprehensible to human readers, visually inspects reused code in binary ransomware code, and illustrates the functionality of the reused code on the basis of similar behaviors and tactics.

This study makes a novel contribution to the literature by directly investigating the characteristics of reused code in malware. The findings of the study can help cybersecurity practitioners and scholars increase the performance of malware classification.
Contributors: Lee, Yeonjung (Author) / Bao, Youzhi (Thesis advisor) / Doupé, Adam (Committee member) / Shoshitaishvili, Yan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
As hot months and thermal stresses become more common, chemically protective fabrics must adapt, providing protection while reducing heat stress on the body. These concerns affect first responders, warfighters, and workers regularly surrounded by hazardous chemical agents. While adapting traditional garments with cooling devices provides one route to mitigate this issue, these cooling methods add bulk, are time-limited, and may not be applicable in locations without logistical support. Here I take inspiration from nature to guide the development of smart fabrics that have high breathability but self-seal on exposure to target chemical(s), providing a better balance between cooling and protection.

Natural barrier materials were explored as a guide, focusing specifically on prickly pear cacti. These cacti have a natural waxy barrier that provides protection from dehydration and physically changes shape to modify surface wettability and water vapor transport. The results of this study provided a basis for a shape changing polymer to be used to respond directly to hazardous chemicals, swelling to contain the agent.

To create a stimuli-responsive material, a novel superabsorbent polymer was synthesized based on acrylamide chemistry. The polymer was tested for swelling in a wide range of organic liquids and found to swell strongly in moderately polar organic liquids. To help predict swelling in untested liquids, the swelling in multiple test liquids was compared with their thermodynamic properties to observe trends. Because the smart fabric needs to remain breathable to allow evaporative cooling while retaining functionality when soaked with sweat, absorption of water, as well as that of an absorbing liquid in the presence of water, was tested.

Micron sized particles of the developed polymer were deposited on a plastic mesh with pore size and open area similar to common clothing fabric to establish the proof of concept of using a breathable barrier to provide chemical protection. The polymer coated mesh showed minimal additional resistance to water vapor transport, relative to the mesh alone, but blocked more than 99% of a xylene aerosol from penetrating the barrier.
Contributors: Manning, Kenneth (Author) / Rykaczewski, Konrad (Thesis advisor) / Burgin, Timothy (Committee member) / Emady, Heather (Committee member) / Green, Matthew (Committee member) / Thomas, Marylaura (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Vibrational spectroscopy is a ubiquitous characterization tool in elucidating atomic structure at the bulk and nanoscale. The ability to perform high spatial resolution vibrational spectroscopy in a scanning transmission electron microscope (STEM) with electron energy-loss spectroscopy (EELS) has the potential to affect a variety of materials science problems. Since 2014, instrumentation development has pushed for incremental improvements in energy resolution, with the current best being 4.2 meV. Although this is poor in comparison to what is common in photon or neutron vibrational spectroscopies, the spatial resolution offered by vibrational EELS is equal to or better than the best of these other techniques.

The major objective of this research program is to investigate the spatial resolution of the monochromated energy-loss signal in transmission-beam mode and correlate it with the excitation mechanism of the associated vibrational mode. The spatial variation of dipole vibrational signals in SiO2 is investigated as the electron probe is scanned across an atomically abrupt SiO2/Si interface. The Si-O bond stretch signal has a spatial resolution of 2–20 nm, depending on whether the interface, bulk, or surface contribution is chosen. For typical TEM specimen thicknesses, coupled surface modes contribute strongly to the spectrum. These coupled surface modes are phonon polaritons, whose intensity and spectral positions depend strongly on specimen geometry. In a SiO2 thin film patterned with a 2×2 array, dielectric theory simulations predict the simultaneous excitation of parallel and uncoupled surface polaritons and a very weak excitation of the orthogonal polariton.

It is demonstrated that atomic resolution can be achieved with impact vibrational signals from optical and acoustic phonons in a covalently bonded material like Si. Sub-nanometer resolution mapping of the Si-O symmetric bond stretch impact signal can also be performed in an ionic material like SiO2. The visibility of impact energy-loss signals from excitation of Brillouin zone boundary vibrational modes in hexagonal BN is seen to be a strong function of probe convergence, but not as strong a function of spectrometer collection angles. Some preliminary measurements to detect adsorbates on catalyst nanoparticle surfaces with minimum radiation damage in the aloof-beam mode are also presented.
Contributors: Venkatraman, Kartik (Author) / Crozier, Peter (Thesis advisor) / Rez, Peter (Committee member) / Wang, Robert (Committee member) / Tongay, Sefaattin (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Hyperbolic geometry, the geometry of hyperbolic space, has lately caught the eye of certain circles in the machine learning community. Lauded for its ability to capture strong clustering as well as latent hierarchies in complex and social networks, hyperbolic geometry has proven an enduring presence in the network science community throughout the 2010s, with no signs of fading into obscurity anytime soon. Hyperbolic embeddings, which map a given graph to hyperbolic space, have proven a particularly powerful and dynamic tool for studying complex networks. Hyperbolic embeddings are exploited in this thesis to illustrate centrality in a graph. In network science, centrality quantifies the influence of individual nodes in a graph. Eigenvector centrality is one such measure; it assigns an influence weight to each node in a graph by solving an eigenvector equation. A procedure is defined to embed a given network in a model of hyperbolic space, known as the Poincaré disk, according to the influence weights computed by three eigenvector centrality measures: the PageRank algorithm, the Hyperlink-Induced Topic Search (HITS) algorithm, and the Pinski-Narin algorithm. The resulting embeddings are shown to accurately and meaningfully reflect each node's influence and proximity to influential nodes.
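As a concrete illustration of one of the three centrality measures, the PageRank weights of a toy three-node graph can be computed by power iteration. This is a generic sketch with an assumed damping factor of 0.85, not code from the thesis:

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=1000):
    """PageRank vector of a directed graph; adj[i, j] = 1 if j links to i."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=0)
    # Column-stochastic transition matrix; dangling nodes link uniformly.
    M = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    G = d * M + (1 - d) / n           # the "Google matrix"
    r = np.full(n, 1.0 / n)           # start from the uniform distribution
    for _ in range(max_iter):
        r_next = G @ r
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r / r.sum()

# Tiny example graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
r = pagerank(A)
print(r)  # node 2, with two in-links, carries the largest weight
```

The resulting weight vector is exactly the kind of influence measure the thesis maps to radial position in the Poincaré disk.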
Contributors: Chang, Alena (Author) / Xue, Guoliang (Thesis advisor) / Yang, Dejun (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Plastic pollution has become a global threat to ecosystems worldwide, with microplastics now representing contaminants reported to occur in ambient air, fresh water, seawater, soils, fauna and people. Over time, larger macro-plastics are subject to weathering and fragmentation, resulting in smaller particles, termed ‘microplastics’ (measuring < 5 mm in diameter), which have been found to pollute virtually every marine and terrestrial ecosystem on the planet. This thesis explored the transfer of plastic pollutants from consumer products into the built water environment and ultimately into global aquatic and terrestrial ecosystems.

A literature review demonstrated that municipal sewage sludge produced by wastewater treatment plants around the world contains detectable quantities of microplastics. Application of sewage sludge on land was shown to represent a mechanism for transferring microplastics from wastewater into terrestrial environments, with some countries reporting levels as high as 113 ± 57 microplastic particles per gram of dry sludge.

To address the notable shortcoming of inconsistent reporting practices for microplastic pollution, this thesis introduced a novel, online calculator that converts the number of plastic particles into the unambiguous metric of mass, thereby making global studies on microplastic pollution directly comparable.
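The count-to-mass conversion such a calculator performs can be sketched as follows. This is an illustrative approximation that treats particles as uniform spheres; the actual tool's shape assumptions and default densities are not reproduced here:

```python
import math

def particles_to_mass_ug(count, diameter_um, density_g_cm3):
    """Approximate the total mass (micrograms) of `count` spherical
    microplastic particles of a given diameter and polymer density."""
    radius_cm = (diameter_um / 2.0) * 1e-4           # micrometers -> cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return count * volume_cm3 * density_g_cm3 * 1e6  # grams -> micrograms

# e.g. 1,000 polyethylene-like particles (~0.95 g/cm^3), 100 um in diameter
print(round(particles_to_mass_ug(1000, 100, 0.95), 2))  # -> 497.42
```

Expressing counts as mass in this way is what makes studies that report different particle sizes and polymers directly comparable.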

This thesis concludes with an investigation of a previously unexplored and more personal source of plastic pollution, namely the disposal of single-use contact lenses, and an assessment of the magnitude of this emerging source of environmental pollution. Using an online survey aimed at quantifying trends in the disposal of lenses in the US, it was discovered that 20 ± 0.8% of contact lens wearers flushed their used lenses down the drain, amounting to 44,000 ± 1,700 kg of lens dry mass discharged into US wastewater per year.

From the results it is concluded that conventional and medical microplastics represent a significant global source of pollution and a long-term threat to ecosystems around the world. Recommendations are provided on how to limit the entry of medical microplastics into the built water environment and thereby reduce damage to ecosystems worldwide.
Contributors: Rolsky, Charles (Author) / Halden, Rolf (Thesis advisor) / Green, Matthew (Committee member) / Neuer, Susanne (Committee member) / Polidoro, Beth (Committee member) / Smith, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Novel electric field-assisted microfluidic platforms were developed to exploit unique migration phenomena, particle manipulation, and enhanced droplet control. The platforms can address various analytical challenges, such as size-based separations and the delivery of protein crystals for structural discovery, with both high selectivity and sensitivity. The vast complexity of biological analytes requires efficient transport and fractionation approaches to understand variations in biomolecular processes and signatures. Size heterogeneity is one characteristic that is especially important to understand for sub-micron organelles such as mitochondria and lipid droplets, because populations of sub-cellular or diagnostically relevant bioparticles often cannot be resolved with traditional methods. Herein, novel microfluidic tools were developed that exploit a unique migration mechanism capable of separating sub-micron bioparticles by size. The mechanism is based on a deterministic ratchet effect in a symmetrical post array with dielectrophoresis (DEP), whose fast migration allows separation of polystyrene beads, mitochondria, and liposomes in tens of seconds. This mechanism was further demonstrated using high-throughput DEP-based ratchet devices for versatile, continuous sub-micron particle separation with large sample volumes. Serial femtosecond crystallography (SFX) with X-ray free-electron lasers (XFELs) has revolutionized protein structure determination. In SFX experiments, a majority of the continuously injected liquid crystal suspension is wasted due to the unique X-ray pulse structure of XFELs, requiring a large amount (up to grams) of crystal sample to determine a protein structure. To reduce sample consumption in such experiments, 3D-printed droplet-based microfluidic platforms were developed for the generation of aqueous droplets in an oil phase.
The implemented droplet-based sample delivery method showed 60% less sample volume consumption compared to the continuous injection at the European XFEL. For the enhanced control of aqueous droplet generation, the device allowed dynamic triggering of droplets for further improvement in synchronization between droplets and the X-ray pulses. This innovative technique of triggering droplets can play a crucial role in saving protein crystals in future SFX experiments. The electric field-assisted unique migration and separation phenomena in microfluidic platforms will be the key solution for revolutionizing the field of organelle separation and structural analysis of proteins.
Contributors: Kim, Dai Hyun (Author) / Ros, Alexandra (Thesis advisor) / Hayes, Mark (Committee member) / Borges, Chad (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The recent increase in users of cellular networks necessitates new technologies to meet this demand. Massive multiple-input multiple-output (MIMO) communication systems have great potential for increasing the network capacity of the emerging 5G+ cellular networks. However, leveraging the multiplexing and beamforming gains from these large-scale MIMO systems requires knowledge of the channel between each antenna and each user. Obtaining channel information on such a massive scale is not feasible with currently available technology due to the complexity of such large systems. Recent research shows that deep learning methods can yield interesting gains for massive MIMO systems by mapping the channel information from the uplink frequency band to the channel information for the downlink frequency band, as well as between antennas at nearby locations. This thesis presents the research to develop a deep learning-based channel mapping proof-of-concept prototype.

Because deep neural networks need large training sets for accurate performance, this thesis outlines the design and implementation of an autonomous channel measurement system to analyze the performance of the proposed deep learning-based channel mapping concept. This system obtains channel magnitude measurements from eight antennas autonomously, using a mobile robot carrying a transmitter that receives wireless commands from the central computer connected to the static receiver system. The developed autonomous channel measurement system is capable of obtaining accurate and repeatable channel magnitude measurements. It is shown that the proposed deep learning-based channel mapping system accurately predicts channel information containing few multi-path effects.
Contributors: Booth, Jayden Charles (Author) / Spanias, Andreas (Thesis advisor) / Alkhateeb, Ahmed (Thesis advisor) / Ewaisha, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2020