This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to being available in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 152

Description

The overall goal of this research project was to assess the feasibility of investigating the effects of microgravity on mineralization systems in unit gravity environments. If these studies can be performed in unit gravity environments, such as on Earth, such systems offer markedly less costly and more concerted research efforts to study these vitally important systems. Expected outcomes from easily accessible test environments and more tractable studies include the development of more advanced and adaptive material systems, including biological systems, particularly as humans contemplate exploration of deep space. The specific focus of the research was the design and development of a prototypical experimental test system that could preliminarily meet the challenging design specifications required of such test systems. Guided by a more unified theoretical foundation and building upon concept design and development heuristics, the feasibility of two experimental test systems was explored. Test System I was a rotating wall reactor that closely followed the specifications of a similar system, Synthecon, designed by NASA contractors, and thus closely mimicked the microgravity conditions of the space shuttle and station. The latter include terminal velocity conditions experienced by both inanimate material systems and biological systems, including living tissue and humans, and can be extended to material test systems associated with mineralization processes. Test System II comprised a unique vertical column design that offered more easily controlled fluid mechanical test conditions over the much wider flow regime necessary to achieve terminal velocities under convection-free conditions, which are important in mineralization processes. Preliminary results indicate that Test System II offers distinct advantages in studying microgravity effects in test systems operating in unit gravity environments, particularly when investigating mineralization and related processes. Verification of Test System II was performed by validating microgravity effects on calcite mineralization processes reported earlier by others. Those studies were conducted on calcite mineralization in fixed-wing, reduced-gravity aircraft, known as the 'vomit comet', where reduced-gravity conditions last for very short (~20 second) periods. Preliminary results indicate that test systems such as Test System II can be devised to assess microgravity conditions in unit gravity environments such as Earth. Furthermore, the preliminary data obtained on calcite formation suggest that strictly physicochemical mechanisms may be the dominant factors that control adaptation in materials processes, a theory first proposed by Liu et al. Thus the results of this study may also help shed light on the problem of early osteoporosis in astronauts, a long-term concern for deep space exploration.
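A rough sense of the terminal-velocity matching that the vertical column design targets can be had from Stokes' law. The sketch below is illustrative only: the particle size, densities, and viscosity are assumed values, not figures from the thesis.

```python
# Stokes-law terminal velocity for a small particle settling in a fluid:
# v_t = 2 r^2 (rho_p - rho_f) g / (9 mu), valid for Reynolds number << 1.
# All values below are illustrative assumptions, not thesis data.

def stokes_terminal_velocity(radius_m, rho_particle, rho_fluid, mu_fluid, g=9.81):
    """Terminal settling velocity (m/s) of a sphere under Stokes drag."""
    return 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * mu_fluid)

# Example: 10-micron calcite grain (rho ~ 2710 kg/m^3) in water at 20 C.
v_t = stokes_terminal_velocity(radius_m=5e-6, rho_particle=2710.0,
                               rho_fluid=998.0, mu_fluid=1.0e-3)
print(f"terminal velocity ~ {v_t*1e6:.1f} um/s")
# A counter-flowing column matching this upward flow speed would hold the
# particle stationary, approximating a force-free (microgravity-like) state.
```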
Contributors: Seyedmadani, Kimia (Author) / Pizziconi, Vincent (Thesis advisor) / Towe, Bruce (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

As crystalline silicon solar cells continue to get thinner, the recombination of carriers at the surfaces of the cell plays an ever more important role in controlling the cell efficiency. One tool to minimize surface recombination is field-effect passivation from the charges present in the thin films applied on the cell surfaces. The focus of this work is to understand the properties of the charges present in SiNx films and then to develop a mechanism to manipulate the polarity of the charges to either negative or positive based on the end application. Specific silicon-nitrogen dangling bonds (·Si-N), known as K center defects, are the primary charge-trapping defects present in the SiNx films. A custom-built corona charging tool was used to externally inject positive or negative charges into the SiNx film. Detailed capacitance-voltage (C-V) measurements taken on corona-charged SiNx samples confirmed the presence of a net positive or negative charge density, as high as ±8 × 10^12 cm^-2, in the SiNx film. High-energy (~4.9 eV) UV radiation was used to control and neutralize the charges in the SiNx films. The electron spin resonance (ESR) technique was used to detect and quantify the density of neutral K^0 defects, which are paramagnetically active. The density of the neutral K^0 defects increased after UV treatment and decreased after high-temperature annealing and charging treatments. Etch-back C-V measurements on SiNx films showed that the K centers are spread throughout the bulk of the SiNx film and not just near the SiNx-Si interface. It was also shown that the negative injected charges in the SiNx film were stable and present even after 1 year under indoor room-temperature conditions. Lastly, a stack of SiO2/SiNx dielectric layers applicable to standard commercial solar cells was developed using a low-temperature (< 400 °C) PECVD process. Excellent surface passivation of FZ and CZ Si substrates for both n- and p-type samples was achieved by manipulating and controlling the charge in the SiNx films.
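As a rough illustration of how C-V data translate into a film charge density, the sketch below applies the standard flatband-shift relation N_eff = C_ox·ΔV_fb/q. The film thickness, permittivity, and voltage shift are assumed values for illustration, not the thesis's measurements.

```python
# Effective charge density in a dielectric inferred from the flatband-voltage
# shift of a C-V curve: N_eff = C_ox * dV_fb / q, with C_ox per unit area.
# Film parameters below are illustrative, not the thesis's measured values.

Q_E = 1.602e-19          # elementary charge (C)
EPS0 = 8.854e-12         # vacuum permittivity (F/m)

def sheet_charge_density(delta_vfb_V, eps_r, thickness_m):
    """Charge density (cm^-2) from a flatband shift across a dielectric film."""
    c_ox = eps_r * EPS0 / thickness_m          # oxide capacitance, F/m^2
    n_per_m2 = c_ox * abs(delta_vfb_V) / Q_E   # charges per m^2
    return n_per_m2 * 1e-4                     # convert to cm^-2

# Example: 80 nm SiNx (eps_r ~ 7) with a 2 V flatband shift after charging;
# the result lands on the same 10^12 cm^-2 scale quoted in the abstract.
print(f"N_eff ~ {sheet_charge_density(2.0, 7.0, 80e-9):.2e} cm^-2")
```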
Contributors: Sharma, Vivek (Author) / Bowden, Stuart (Thesis advisor) / Schroder, Dieter (Committee member) / Honsberg, Christiana (Committee member) / Roedel, Ronald (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

With the advent of social media (e.g., Twitter, Facebook), people are sharing their opinions and sentiments, and promoting their ideologies to others, like never before. Even people who are otherwise socially inactive share their thoughts on current affairs by tweeting and sharing news feeds with their friends and acquaintances. In this thesis study, we chose Twitter as our main data platform to analyze the shifts and movements of 27 political organizations in Indonesia. So far, we have collected over 30 million tweets and 150,000 news articles from RSS feeds of the corresponding organizations for our analysis. For Twitter data extraction, we developed a multi-threaded application which seamlessly extracts, cleans, and stores millions of tweets matching our keywords from the Twitter Streaming API. For keyword extraction, we used topics and perspectives which were extracted using n-gram techniques and later approved by our social scientists. After the data is extracted, we aggregate the tweet contents belonging to each user on a weekly basis. Finally, we applied linear and logistic regression using SLEP, an open-source sparse learning package, to compute weekly scores for users, mapping them to one of the 27 organizations on a radical/counter-radical scale. Since we map users to organizations on a weekly basis, we are able to track users' behavior and the important new events that triggered shifts of users between organizations. This thesis study can be extended to identify topic- and organization-specific influential users, and new users from other social media platforms, such as Facebook and YouTube, can easily be mapped to existing organizations on the radical/counter-radical scale.
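A minimal sketch of the weekly scoring step might look like the following, with scikit-learn's L1-penalized logistic regression standing in for the SLEP package and a two-document toy corpus replacing the real tweet data; all features and labels here are illustrative assumptions.

```python
# Sketch: aggregate each user's tweets by week, build n-gram features, and
# fit a sparse (L1) logistic regression separating radical from
# counter-radical content. scikit-learn stands in for SLEP; data is toy.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

weekly_docs = [                      # one document = one user-week of tweets
    "boycott election protest march",
    "interfaith dialogue peace community",
]
labels = [1, 0]                      # 1 = radical-leaning, 0 = counter-radical

vec = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(weekly_docs)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, labels)

# Signed decision scores place each user-week on the radical/counter-radical
# axis; tracking them week over week reveals shifts between organizations.
print(clf.decision_function(X))
```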
Contributors: Poornachandran, Sathishkumar (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Woodward, Mark (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Microwave dielectrics are widely used to make resonators and filters in telecommunication systems. The production of thin films with high dielectric constant and low loss could potentially enable a marked reduction in the size of devices and systems. However, studies of these materials in thin film form are very sparse. In this research, experiments were carried out on practical high-performance dielectrics, including ZrTiO4-ZnNb2O6 (ZTZN) and Ba(Co,Zn)1/3Nb2/3O3 (BCZN), with high dielectric constant and low loss tangent. Thin films were deposited by laser ablation on various substrates, with a systematic study of growth conditions such as substrate temperature, oxygen pressure, and annealing to optimize the film quality, and the compositional, microstructural, optical, and electrical properties were characterized. The deposited ZTZN films were randomly oriented polycrystalline on Si substrates and textured on MgO substrates, with a tetragonal lattice change at elevated temperature. The BCZN films deposited on MgO substrates showed superior film quality relative to those on other substrates; they grew epitaxially with an orientation of (001) // MgO (001) and (100) // MgO (100) when the substrate temperature was above 500 °C. In-situ annealing at the growth temperature in 200 mTorr oxygen pressure was found to enhance the quality of the films, reducing the peak width of the X-ray diffraction (XRD) rocking curve to 0.53° and the χ_min of channeling Rutherford backscattering spectrometry (RBS) to 8.8% when grown at 800 °C. Atomic force microscopy (AFM) was used to study the topography and revealed a monotonic decrease in surface roughness as the growth temperature increased. Optical absorption and transmission measurements were used to determine the energy bandgap and the refractive index, respectively. A low-frequency dielectric constant of 34 was measured using a planar interdigital measurement structure. The resistivity of the film is ~3×10^10 ohm·cm at room temperature, with an activation energy for the thermally activated current of 0.66 eV.
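The 0.66 eV figure comes from an Arrhenius analysis of the temperature-dependent resistivity. The sketch below reproduces that style of fit on synthetic data generated with a 0.66 eV activation energy; it is illustrative, not the thesis's measured data.

```python
# Arrhenius analysis of thermally activated conduction:
# rho(T) = rho0 * exp(Ea / kT), so ln(rho) vs 1/kT has slope Ea.
# The data below is synthetic, generated with Ea = 0.66 eV for illustration.
import numpy as np

K_B = 8.617e-5                        # Boltzmann constant (eV/K)

T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])       # K
# Synthetic resistivity, normalized so rho(300 K) = 3e10 ohm*cm:
rho = 3e10 * np.exp(0.66 / (K_B * T)) / np.exp(0.66 / (K_B * 300.0))

# Fit ln(rho) against 1/kT; the slope recovers the activation energy.
slope, intercept = np.polyfit(1.0 / (K_B * T), np.log(rho), 1)
print(f"activation energy ~ {slope:.2f} eV")             # ~0.66
```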
Contributors: Li, You (Author) / Newman, Nathan (Thesis advisor) / Alford, Terry (Committee member) / Singh, Rakesh (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented; it consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
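A minimal sketch of the worst-case-cost idea, assuming a hypothetical control-flow DAG with per-edge dollar costs: enumerate entry-to-exit paths, report the most expensive one, and flag paths exceeding a budget threshold.

```python
# Sketch of the static worst-case-cost idea: treat the control-flow graph as
# a DAG whose edges carry per-transaction monetary costs, find the most
# expensive entry-to-exit path, and enumerate paths over a budget threshold.
# The CFG shape and cost figures are hypothetical.

CFG = {                      # node -> list of (successor, edge cost in $)
    "entry": [("db_read", 0.002), ("cache_hit", 0.0001)],
    "db_read": [("compute", 0.001)],
    "cache_hit": [("compute", 0.0)],
    "compute": [("exit", 0.0005)],
    "exit": [],
}

def all_paths(node, cost=0.0, path=None):
    """Yield (total cost, path) for every path from node to a sink."""
    path = (path or []) + [node]
    if not CFG[node]:
        yield cost, path
    for succ, c in CFG[node]:
        yield from all_paths(succ, cost + c, path)

paths = sorted(all_paths("entry"), reverse=True)
worst_cost, worst_path = paths[0]
print(f"worst-case cost ${worst_cost:.4f} via {' -> '.join(worst_path)}")

BUDGET = 0.003               # per-transaction cost threshold, illustrative
print("over budget:", [p for c, p in paths if c > BUDGET])
```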
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Carrier lifetime is one of the few parameters which can give information about the low defect densities in today's semiconductors. In principle there is no lower limit to the defect density determined by lifetime measurements. No other technique can easily detect defect densities as low as 10^-9 - 10^-10 cm^-3 in a simple, contactless room-temperature measurement. However, in practice, recombination lifetime (τ_r) measurements such as photoconductance decay (PCD) and surface photovoltage (SPV), which are widely used for characterization of bulk wafers, face serious limitations when applied to thin epitaxial layers, where the layer thickness is smaller than the minority carrier diffusion length L_n. Other methods, such as microwave photoconductance decay (µ-PCD), photoluminescence (PL), and frequency-dependent SPV, where the generated excess carriers are confined to the epitaxial layer width by using short excitation wavelengths, require complicated configurations and extensive surface passivation processes that make them time-consuming and unsuitable for process-screening purposes. Generation lifetime (τ_g), typically measured with pulsed MOS capacitors (MOS-C) as test structures, has been shown to be eminently suitable for characterization of thin epitaxial layers. It is for these reasons that the IC community, largely concerned with unipolar MOS devices, uses lifetime measurements as a "process cleanliness monitor." However, when dealing with ultraclean epitaxial wafers, the classic MOS-C technique measures an effective generation lifetime τ_g,eff which is dominated by surface generation and hence cannot be used for screening impurity densities. I have developed a modified pulsed MOS technique for measuring generation lifetime in ultraclean thin p/p+ epitaxial layers which can be used to detect metallic impurities with densities as low as 10^-10 cm^-3. The widely used classic version has been shown to be unable to effectively detect such low impurity densities due to the domination of surface generation, whereas the modified version can suitably serve as a metallic-impurity-density monitoring tool for such cases.
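For context, generation lifetime is conventionally extracted from the pulsed MOS-C capacitance transient via a Zerbst plot. The sketch below runs that style of analysis on a synthetic transient; the capacitances, doping, slope prefactor, and transient shape are all illustrative assumptions, not the modified technique developed in the thesis.

```python
# Sketch of the classic Zerbst analysis behind pulsed-MOS generation-lifetime
# extraction: from the capacitance recovery transient C(t), plot
# -d(Cox/C)^2/dt against (Cinv/C - 1); in the standard treatment the slope is
# ~ 2*ni*Cox / (Na*Cinv*tau_g), i.e., proportional to 1/tau_g.
# The transient and parameters below are synthetic and purely illustrative.
import numpy as np

NI = 1.0e10                    # intrinsic carrier density of Si, 300 K (cm^-3)
NA = 1.0e15                    # doping (cm^-3), illustrative
COX, CINV = 100e-12, 40e-12    # oxide and final inversion capacitance (F)

t = np.linspace(0, 50, 500)                       # s
C = CINV * (1.0 - 0.6 * np.exp(-t / 10.0))        # synthetic recovery transient

y = -np.gradient((COX / C) ** 2, t)               # Zerbst ordinate
x = CINV / C - 1.0                                # Zerbst abscissa

slope = np.polyfit(x[50:300], y[50:300], 1)[0]    # fit the near-linear region
tau_g = 2 * NI * COX / (NA * CINV * slope)
print(f"generation lifetime ~ {tau_g:.2e} s")
```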
Contributors: Elhami Khorasani, Arash (Author) / Alford, Terry (Thesis advisor) / Goryll, Michael (Committee member) / Bertoni, Mariana (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this end, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient design of networks. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics such as the region-based component decomposition number (RBCDN) and the region-based largest component size (RBLCS) have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
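A minimal sketch of the region-based metrics on a toy topology, assuming circular fault regions and using networkx for the graph bookkeeping; the node positions, edges, and fault region are hypothetical.

```python
# Sketch of the region-based fault metrics described above: remove every node
# inside a circular fault region of radius r and report the component
# structure left behind (RBCDN = number of components, RBLCS = largest
# component size). Topology and region are hypothetical.
import math
import networkx as nx

def region_fault_impact(positions, edges, center, radius):
    """Apply a circular region fault and return (failed nodes, RBCDN, RBLCS)."""
    g = nx.Graph(edges)
    failed = [v for v, (x, y) in positions.items()
              if math.dist((x, y), center) <= radius]
    g.remove_nodes_from(failed)
    comps = list(nx.connected_components(g))
    rbcdn = len(comps)                                # decomposition number
    rblcs = max((len(c) for c in comps), default=0)   # largest component size
    return failed, rbcdn, rblcs

positions = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (1, 1), 4: (2, 1)}
edges = [(0, 1), (1, 2), (1, 3), (3, 4), (2, 4)]
print(region_fault_impact(positions, edges, center=(1, 0.5), radius=0.6))
```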
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Ball Grid Arrays (BGAs) using lead-free or lead-rich solder materials are widely used as Second Level Interconnects (SLI) in mounting packaged components to the printed circuit board (PCB). The reliability of these solder joints is of significant importance to the performance of microelectronic components and systems. Product design/form factor, solder material, manufacturing process, and use conditions, as well as the inherent variabilities present in the system, greatly influence product reliability. Accurate reliability analysis requires an integrated approach that concurrently accounts for all these factors and their synergistic effects. Such an integrated and robust methodology can be used in the design and development of new and advanced microelectronic systems and can provide significant improvements in cycle time, cost, and reliability. The IMPRPK approach is based on a probabilistic methodology focusing on three major tasks: (1) characterization of BGA solder joints to identify failure mechanisms and obtain statistical data, (2) finite element (FEM) analysis to predict the system response needed for life prediction, and (3) development of a probabilistic methodology to predict the reliability, as well as the sensitivity of the system to various parameters and variabilities. These tasks and the predictive capabilities of IMPRPK in microelectronic reliability analysis are discussed.
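As a sketch of the probabilistic task (task 3), the snippet below propagates parameter variability through a Coffin-Manson-style fatigue model by Monte Carlo sampling. The model form, distributions, and constants are generic illustrative assumptions, not the characterized IMPRPK model.

```python
# Sketch of the probabilistic step: propagate parameter variability through a
# fatigue-life model by Monte Carlo and read off a failure probability at a
# target cycle count. All distributions and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 100_000

# Coffin-Manson form: Nf = 0.5 * (d_gamma / (2 * eps_f)) ** (1 / c)
eps_f = rng.normal(0.325, 0.02, N_SAMPLES)               # fatigue ductility coeff.
c = rng.normal(-0.442, 0.01, N_SAMPLES)                  # fatigue exponent
d_gamma = rng.lognormal(np.log(0.02), 0.1, N_SAMPLES)    # shear strain range/cycle

n_f = 0.5 * (d_gamma / (2 * eps_f)) ** (1 / c)           # cycles to failure

TARGET_CYCLES = 3000
p_fail = (n_f < TARGET_CYCLES).mean()
print(f"P(fail before {TARGET_CYCLES} cycles) = {p_fail:.3f}")
```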
Contributors: Fallah-Adl, Ali (Author) / Tasooji, Amaneh (Thesis advisor) / Krause, Stephen (Committee member) / Alford, Terry (Committee member) / Jiang, Hanqing (Committee member) / Mahajan, Ravi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Data mining is increasing in importance in solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as relevant consumption information being stored in differing formats, and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once preprocessing is completed, data mining techniques such as clustering are applied to find projects which involve resources of similar skill sets and which involve similar complexities and sizes. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Then project characteristics are identified which generate this diversity in headcounts and skill sets. These characteristics are not currently contained in the database and are elicited from the managers of historical projects. This represents an opportunity to improve the usefulness of the data collection system for the future. The ultimate goal is to match product technical features with the resource requirements of past projects, as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation of the training data, as past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast resource demand for several future projects.
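A minimal sketch of the two analysis stages, with scikit-learn standing in for the actual tooling: k-means clustering to form resource-utilization templates, then a cross-validated linear regression from (assumed) project features to headcount. All data here are toy values.

```python
# Sketch of the two stages described above: cluster past projects into
# resource-utilization templates, then fit a cross-validated linear
# regression from project features to headcount. Data is illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# rows = projects, cols = person-weeks consumed per skill set (design, test, sw)
consumption = np.array([[40, 10, 5], [38, 12, 6], [5, 30, 20],
                        [6, 28, 22], [20, 20, 10], [22, 18, 12]])
templates = KMeans(n_clusters=3, n_init=10, random_state=0).fit(consumption)
print("template per project:", templates.labels_)

# assumed project features (e.g., # technical features, complexity score)
features = np.array([[12, 3], [11, 3], [4, 8], [4, 7], [8, 5], [9, 5]])
headcount = consumption.sum(axis=1)           # total person-weeks per project

# cross-validation matters here because past projects are few in number
scores = cross_val_score(LinearRegression(), features, headcount,
                         cv=3, scoring="r2")
print("cross-validated R^2 per fold:", scores.round(2))
```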
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The contention-based IEEE 802.11 MAC uses the binary exponential backoff (BEB) algorithm for contention resolution. The protocol suffers from poor performance in heavily loaded networks and MANETs: high collision rates and packet drops, merely probabilistic delay guarantees, and unfairness. Many backoff strategies have been proposed to improve the performance of IEEE 802.11, but all ignore network topology and demand. Persistence is defined as the fraction of time a node is allowed to transmit; when this allowance takes topology and load into account, it is topology- and load-aware (TLA) persistence. We develop a relation between contention window size and the TLA-persistence. We implement a new backoff strategy where the TLA-persistence is defined as the lexicographic max-min channel allocation. We use a centralized algorithm to calculate each node's TLA-persistence and then convert it into a contention window size. The new backoff strategy is evaluated in simulation and compared with IEEE 802.11 using BEB. In most static scenarios, such as the exposed terminal, flow-in-the-middle, star topology, and heavily loaded multi-hop networks, as well as in MANETs, the simulation study shows that the new backoff strategy achieves higher overall average throughput than IEEE 802.11 using BEB.
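A minimal sketch of the centralized computation, under two assumptions not spelled out in the abstract: contention domains are modeled as cliques with unit capacity, and persistence p is converted to a contention window via the common fixed-window approximation p ≈ 2/(CW + 1), i.e. CW ≈ 2/p - 1.

```python
# Sketch: (1) max-min fair persistence per node via progressive filling,
# where each contention domain (clique of interfering nodes) has unit
# capacity; (2) convert persistence p to a contention window with the
# fixed-window approximation CW = 2/p - 1. Topology is hypothetical.

def max_min_persistence(nodes, cliques):
    """Lexicographic max-min allocation by progressive filling."""
    alloc = {v: 0.0 for v in nodes}
    frozen = set()
    while len(frozen) < len(nodes):
        rates = []
        for clique in cliques:
            active = [v for v in clique if v not in frozen]
            if active:
                headroom = 1.0 - sum(alloc[v] for v in clique)
                rates.append((headroom / len(active), clique))
        step, tight = min(rates)            # first clique to saturate
        for v in nodes:
            if v not in frozen:
                alloc[v] += step            # raise all unfrozen nodes equally
        frozen.update(v for v in tight if v not in frozen)
    return alloc

nodes = ["a", "b", "c"]
cliques = [["a", "b"], ["b", "c"]]          # b contends with both neighbors
persistence = max_min_persistence(nodes, cliques)
cw = {v: round(2.0 / p - 1.0) for v, p in persistence.items()}
print(persistence, cw)
```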
Contributors: Bhyravajosyula, Sai Vishnu Kiran (Author) / Syrotiuk, Violet R. (Thesis advisor) / Sen, Arunabha (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created: 2013