This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 127
Description
In this research, our goal was to fabricate Josephson junctions that can be stably processed at 300 °C or higher. To integrate Josephson junction fabrication with the current semiconductor circuit fabrication process, back-end process temperatures (>350 °C) will be key to producing large-scale junction circuits reliably, which requires the junctions to be more thermally stable than current Nb/Al-AlOx/Nb junctions. Based on thermodynamics, Hf was chosen to produce thermally stable Nb/Hf-HfOx/Nb superconductor tunnel Josephson junctions that can be grown or processed at elevated temperatures. Elevated synthesis temperatures also improve the structural and electrical properties of the Nb electrode layers, which could potentially improve junction device performance. The refractory nature of Hf, HfO2 and Nb allows for the formation of flat, abrupt and thermally stable interfaces. However, the current Al-based barrier is problematic when used with high-quality Nb grown at high temperature, so our work is aimed at using Nb grown at elevated temperatures to fabricate thermally stable Josephson tunnel junctions. As a junction barrier metal, Hf was studied and compared with the traditional Al barrier material. We have shown that Hf-HfOx is a good barrier candidate for high-temperature-synthesized Josephson junctions: Hf deposited at 500 °C on Nb forms flat and chemically abrupt interfaces. Nb/Hf-HfOx/Nb Josephson junctions were synthesized, fabricated and characterized under different oxidizing conditions, and the results of materials characterization and junction electrical measurements are reported and analyzed. We have improved the annealing stability of Nb junctions and have also successfully used high-quality Nb grown at 500 °C as the bottom electrode. Adding a buffer layer or using multiple oxidation steps improves the annealing stability of Josephson junctions. We have also attempted Atomic Layer Deposition (ALD) for growth of the Hf-oxide junction barrier and obtained tunneling results.
Contributors: Huang, Mengchu, 1987- (Author) / Newman, Nathan (Thesis advisor) / Rowell, John M. (Committee member) / Singh, Rakesh K. (Committee member) / Chamberlin, Ralph (Committee member) / Wang, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The ability to shift the photovoltaic (PV) power curve and make the energy accessible during peak hours can be accomplished by pairing solar PV with energy storage technologies. A prototype hybrid air conditioning system (HACS), built under the supervision of project head Patrick Phelan, consists of PV modules running a DC compressor that operates a conventional HVAC system paired with a second evaporator submerged within a thermal storage tank. The thermal storage is a 0.284 m3 (75 gallon) freezer filled with Cryogel balls, submerged in a weak glycol solution. It is paired with its own separate air handler, circulating the glycol solution. The refrigerant flow is controlled by solenoid valves that are electrically connected to high- and low-temperature thermostats. During daylight hours, the PV modules run the DC compressor. The refrigerant flow is directed to the conventional HVAC air handler when cooling is needed. Once the desired room temperature is met, refrigerant flow is diverted to the thermal storage, storing excess PV power. During peak energy demand hours, the system uses only small amounts of grid power to pump the glycol solution through the air handler (the compressor is off), allowing for money and energy savings. The conventional HVAC unit can be scaled down, since during times of large cooling demand the glycol air handler can be operated in parallel with the conventional HVAC unit. Four major test scenarios were drawn up to fully characterize the performance of the HACS. Upon initial running of the system, ice was produced and the thermal storage was charged. A simple test consisting of discharging the thermal storage, initially ~1/4 frozen, was performed. The glycol air handler ran for 6 hours and the initial cooling power was 4.5 kW. This initial test was significant, since greater than 3.5 kW of cooling power was produced for 3 hours, demonstrating the concept of energy storage and recovery.
Contributors: Peyton-Levine, Tobin (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Wang, Robert (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The study of high-energy particle irradiation effects on Josephson junction tri-layers is relevant to applications in space and radioactive environments. It also allows us to investigate the influence of defects and interfacial intermixing on the junction electrical characteristics. In this work, we studied the influence of 2 MeV helium ion irradiation, with doses up to 5.2×10^16 ions/cm^2, on the tunneling behavior of Nb/Al/AlOx/Nb Josephson junctions. Structural and analytical TEM characterization, combined with SRIM modeling, indicates that over 4 nm of intermixing occurred at the interfaces. EDX analysis after irradiation suggests that the Al and O compositions from the barrier are collectively distributed together over a few nanometers. Surprisingly, the I-V characteristics were largely unchanged. The normal resistance, Rn, increased slightly (<20%) after the initial dose of 3.5×10^15 ions/cm^2 and remained constant after that. This suggests that the tunnel barrier's electrical properties were not affected much, despite the significant changes in the chemical distribution of the barrier's Al and O shown in SRIM modeling and TEM images. The onset of quasi-particle current, the sum of energy gaps (2Δ), dropped systematically from 2.8 meV to 2.6 meV with increasing dosage. Similarly, the temperature onset of the Josephson current dropped from 9.2 K to 9.0 K. This suggests that the order parameter at the barrier interface has decreased as a result of a reduced mean free path in the Al proximity layer and a reduction in the transition temperature of the Nb electrode near the barrier. The dependence of the Josephson current on magnetic field and temperature does not change significantly with irradiation, suggesting that intermixing into the Nb electrode is significantly less than the penetration depth.
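For context, the magnetic-field dependence used here as a junction-quality diagnostic is conventionally compared against the textbook Fraunhofer-like pattern for a uniform junction (a standard result, not a formula specific to this work):

```latex
I_c(\Phi) = I_c(0)\,\left|\frac{\sin(\pi\Phi/\Phi_0)}{\pi\Phi/\Phi_0}\right|,
\qquad \Phi_0 = \frac{h}{2e}
```

where Φ is the magnetic flux threading the barrier; deviations from this pattern indicate a non-uniform critical-current density.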
Contributors: Zhang, Tiantian (Author) / Newman, Nathan (Thesis advisor) / Rowell, John M. (Committee member) / Singh, Rakesh K. (Committee member) / Chamberlin, Ralph (Committee member) / Wang, Robert (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Nanoparticle suspensions, popularly termed “nanofluids,” have been extensively investigated for their thermal and radiative properties. Such work has generated great controversy, although it is arguably accepted today that the presence of nanoparticles rarely leads to useful enhancements in either thermal conductivity or convective heat transfer. On the other hand, there are still examples of unanticipated enhancements to some properties, such as the reported specific heat of molten salt-based nanofluids and the critical heat flux. Another largely overlooked example is the apparent effect of nanoparticles on the effective latent heat of vaporization (hfg) of aqueous nanofluids. A previous study focused on molecular dynamics (MD) modeling supplemented with limited experimental data to suggest that hfg increases with increasing nanoparticle concentration.

Here, this research extends that exploratory work in an effort to determine whether hfg of aqueous nanofluids can be manipulated, i.e., increased or decreased, by the addition of graphite or silver nanoparticles. Our results to date indicate that hfg can be substantially impacted, by up to ±30% depending on the type of nanoparticle. Moreover, this dissertation reports further experiments that vary the nanoparticle surface area through volume fraction (0.005% to 2%) and nanoparticle size, to investigate the mechanisms of hfg modification in aqueous graphite and silver nanofluids. This research also investigates thermophysical properties, i.e., density and surface tension, of aqueous nanofluids to support the experimental hfg results via the Clausius-Clapeyron equation; this theoretical investigation agrees well with the experimental results. Furthermore, this research investigates the hfg change of aqueous nanofluids through nanoscale studies of the melting of silver nanoparticles and the hydrophobic interactions of graphite nanofluids. The entropy changes due to these mechanisms could be a main cause of the changes of hfg in silver and graphite nanofluids.
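For reference, the Clausius-Clapeyron route from measured saturation data to an effective latent heat takes the standard textbook form (a sketch assuming an ideal-gas vapor phase, not the dissertation's exact working):

```latex
\frac{dP}{dT} = \frac{h_{fg}}{T\,\Delta v} \;\approx\; \frac{P\,h_{fg}}{R_v T^2}
\quad\Longrightarrow\quad
h_{fg} \approx -R_v\,\frac{d(\ln P)}{d(1/T)}
```

so a nanoparticle-induced shift in the P-T saturation curve shows up directly as a shift in the inferred hfg.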

Finally, applying the latent-heat results for graphite and silver nanofluids to an actual solar thermal system with a Rankine cycle is suggested, to show that the tunable latent heat of vaporization in nanofluids could benefit real-world solar thermal applications through improved efficiency.
Contributors: Lee, Soochan (Author) / Phelan, Patrick E. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Wang, Robert (Committee member) / Wang, Liping (Committee member) / Taylor, Robert A. (Committee member) / Prasher, Ravi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interactions based on the presence of these agents. A theoretical framework called Interaction Primitives was introduced that performs interaction learning from demonstrations in a two-agent work environment.

This document is an in-depth description of a new state-of-the-art Python framework for Interaction Primitives between two agents in single- as well as multiple-task work environments, and an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures the correlation among more than two agents while performing a single task. The new framework is an intuitive, generic, easy-to-install and easy-to-use Python library for applying Interaction Primitives in a work environment. This library was tested in simulated environments and in a controlled laboratory environment. The results and benchmarks of this library are available in the related sections of this document.
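The core inference step behind Interaction Primitives is conditioning a learned joint distribution over the agents' trajectory parameters on an observation of one agent. A minimal sketch of that step, reduced to one scalar weight per agent (the real framework conditions full trajectory-weight vectors; the function name and values here are illustrative assumptions):

```python
def condition_gaussian(mu, Sigma, x_obs):
    """Condition a 2-D Gaussian [observed agent, controlled agent] on the
    observed agent's value -- a toy, one-weight-per-agent version of the
    conditioning used in Interaction Primitives."""
    mu_o, mu_c = mu                 # prior means: observed, controlled agent
    S_oo, S_oc = Sigma[0]
    S_co, S_cc = Sigma[1]
    mu_post = mu_c + S_co / S_oo * (x_obs - mu_o)   # posterior mean
    var_post = S_cc - S_co * S_oc / S_oo            # posterior variance
    return mu_post, var_post
```

Observing the human agent's weight then yields a posterior mean and variance for the robot's weight, from which the robot's response trajectory is generated.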
Contributors: Kumar, Ashish, M.S. (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Computer Vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation some novel approaches to representation learning and task learning are studied.

Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning approach is proposed, which is shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem.

In the case of representation learning, the generality of neural features is investigated first. This investigation leads to critical understanding of, and results on, feature generality across datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks.
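Dark-knowledge distillation trains the small network against the mentor's temperature-softened outputs rather than hard labels. A minimal sketch of the soft-target loss (function names and the temperature value are illustrative, not from the dissertation):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a higher T exposes the 'dark knowledge'
    carried by the mentor's small logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, mentor_logits, T=4.0):
    """Cross-entropy between the mentor's soft targets and the student's
    predictions, both computed at temperature T."""
    p = softmax(mentor_logits, T)   # soft targets from the large mentor
    q = softmax(student_logits, T)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

Raising T flattens the mentor's distribution, exposing the relative probabilities of the wrong classes, which is exactly the extra signal the small network learns from.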
Contributors: Venkatesan, Ragav (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
With the rise of the Big Data Era, an exponential amount of network data is being generated at an unprecedented rate across a wide range of high-impact micro and macro areas of research---from protein interaction to social networks. The critical challenge is translating this large-scale network data into actionable information.

A key task in this data translation is the analysis of network connectivity via marked nodes---the primary focus of our research. We have developed a framework for analyzing network connectivity via marked nodes in large-scale graphs, utilizing novel algorithms in three interrelated areas: (1) analysis of a single seed node via its ego-centric network (AttriPart algorithm); (2) pathway identification between two seed nodes (K-Simple Shortest Paths Multithreaded and Search Reduced (KSSPR) algorithm); and (3) tree detection, defining the interaction between three or more seed nodes (Shortest Path MST algorithm).

In an effort to address both fundamental and applied research issues, we have developed the LocalForecasting algorithm to explore how network connectivity analysis can be applied to local community evolution and recommender systems. The goal is to apply the LocalForecasting algorithm to various domains---e.g., friend suggestions in social networks or future collaboration in co-authorship networks. This algorithm utilizes link prediction in combination with the AttriPart algorithm to predict future connections in local graph partitions.

Results show that our proposed AttriPart algorithm finds up to 1.6x denser local partitions, while running approximately 43x faster than traditional local partitioning techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm demonstrates a significant improvement in the number of nodes and edges correctly predicted over baseline methods. Furthermore, results for the KSSPR algorithm demonstrate a speed-up of up to 2.5x the standard k-simple shortest paths algorithm.
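To illustrate the pathway-identification task underlying these benchmarks, here is a brute-force sketch of k simple shortest paths between two seed nodes (DFS enumeration plus a heap; graph layout and names are illustrative — the KSSPR algorithm adds search-space reduction and multithreading on top of this basic problem):

```python
from heapq import nsmallest

def k_simple_shortest_paths(graph, src, dst, k):
    """Enumerate all simple (loop-free) paths src->dst by DFS, then keep
    the k cheapest by total edge weight. Exponential in the worst case;
    illustrative only."""
    paths = []

    def dfs(node, path, cost):
        if node == dst:
            paths.append((cost, path[:]))
            return
        for nbr, w in graph.get(node, []):
            if nbr not in path:          # keep the path simple
                path.append(nbr)
                dfs(nbr, path, cost + w)
                path.pop()

    dfs(src, [src], 0)
    return nsmallest(k, paths)
```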
Contributors: Freitas, Scott (Author) / Tong, Hanghang (Thesis advisor) / Maciejewski, Ross (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In recent years, 40% of total world energy consumption and greenhouse gas emissions has been attributable to buildings, and 60% of building energy consumption is due to HVAC systems. Under current trends these values will increase in the coming years, so it is important to identify passive cooling or heating technologies to meet this need. The concept of thermal energy storage (TES), as noted by many authors, is a promising way to rectify indoor temperature fluctuations. Due to their high energy density and use of latent energy, phase change materials (PCMs) are an efficient choice for TES. A question that has not satisfactorily been addressed, however, is the optimal location of the PCM. In other words, given a constant PCM mass, where is the best location for it in a building? This thesis addresses this question by positioning PCM to obtain maximum energy savings and peak-time delay. The study is divided into three parts. The first part is to understand the thermal behavior of building surfaces using EnergyPlus software. For the analysis, a commercial prototype building model for a small office in Phoenix, provided by the U.S. Department of Energy, is applied along with the weather file for Phoenix, Arizona. The second part is to justify the best location, obtained from EnergyPlus, using a transient grey-box building model; for that, we developed a resistance-capacitance (RC) thermal network and studied the thermal profile of a building in Phoenix. The final part is to find the best location for PCMs in buildings using EnergyPlus, keeping the mass of PCM in each location unchanged; this part also examines the impact of PCM mass on the optimal location and on the peak shift. From the analysis, it is observed that the ceiling is the best location to install PCM for the maximum reduction in HVAC energy consumption in a hot, arid climate like Phoenix.
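A grey-box RC thermal network of the kind described above treats the building as lumped resistances and capacitances. A minimal 1R1C sketch with forward-Euler time stepping (parameter values are illustrative placeholders, not the thesis's calibrated model):

```python
def simulate_rc(T_out, T0=24.0, R=0.05, C=2.0e6, dt=60.0):
    """Forward-Euler integration of a 1R1C grey-box building model:
        C * dT/dt = (T_out(t) - T) / R
    R [K/W] lumps the envelope resistance, C [J/K] the thermal mass.
    T_out is a sequence of outdoor temperatures, one per dt-second step."""
    T = T0
    temps = [T]
    for t_out in T_out:
        T += dt * (t_out - T) / (R * C)   # heat flow through the envelope
        temps.append(T)
    return temps
```

Richer RC networks add nodes (and capacitances) for individual surfaces, which is how the PCM layer's position enters the model.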
Contributors: Prem Anand Jayaprabha, Jyothis Anand (Author) / Phelan, Patrick (Thesis advisor) / Wang, Robert (Committee member) / Parrish, Kristen (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
A novel Monte Carlo rejection technique for solving the phonon and electron Boltzmann Transport Equation (BTE), including full many-particle interactions, is presented in this work. This technique has been developed to explicitly model population-dependent scattering within the full-band Cellular Monte Carlo (CMC) framework to simulate electro-thermal transport in semiconductors, while ensuring the conservation of energy and momentum for each scattering event. The scattering algorithm directly solves the many-body problem, accounting for the instantaneous distribution of the phonons. The general approach presented is capable of simulating any non-equilibrium phase-space distribution of phonons using the full phonon dispersion, without the approximations commonly used in previous Monte Carlo simulations. In particular, anharmonic interactions require no assumptions regarding the dominant modes responsible for anharmonic decay, while Normal and Umklapp scattering are treated on the same footing.

This work discusses details of the algorithmic implementation of three-particle scattering for the treatment of the anharmonic interactions between phonons, as well as the treatment of isotope and impurity scattering within the same framework. The approach is then extended with a technique based on the multivariable Hawkes point process, developed to model the emission and absorption of phonons by electrons.

The simulation code was validated by comparison with analytical, numerical, and experimental results; in particular, simulation results show close agreement with a wide range of experimental data, such as the thermal conductivity as a function of isotopic composition, temperature and thin-film thickness.
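The rejection idea at the heart of such Monte Carlo schemes can be sketched in a few lines: propose a candidate final state uniformly, then accept it with probability proportional to its instantaneous, population-dependent rate (a generic illustration with made-up names and rates, not the CMC implementation):

```python
import random

def sample_scattering(rate_fn, max_rate, states, rng=random.random):
    """Monte Carlo rejection step: propose a final state uniformly, then
    accept it with probability rate_fn(state) / max_rate. The rejected
    ('self-scattering') branch means rates only ever need evaluating at
    the proposed state, even when they depend on the current population."""
    while True:
        s = states[int(rng() * len(states))]   # uniform proposal
        if rng() * max_rate <= rate_fn(s):     # accept / reject
            return s
```

Because only an upper bound max_rate must be known in advance, the occupation-dependent rates can change between events without re-tabulating anything.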
Contributors: Sabatti, Flavio Francesco Maria (Author) / Saraniti, Marco (Thesis advisor) / Smith, David J. (Committee member) / Wang, Robert (Committee member) / Goodnick, Stephen M. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. Insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired, external factors of variation, and should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos.

The feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires domain expertise and manual labor, but the resulting feature extraction process is interpretable and explainable. The next group contains the latent-feature extraction processes. While the original features lie in a high-dimensional space, the relevant factors for a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be directly measured from the input, and imposes a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.

In this dissertation, I present four pieces of work in which I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images based on their aestheticism. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For both of these tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations augmented with an appropriate learning approach can increase performance for most visual computing tasks.
Contributors: Chandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2017