This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, The Honors College theses submitted by undergraduate students.

Description
Cancer is the second leading cause of death in the United States, and novel methods of treating advanced malignancies are of high importance. Of these deaths, prostate cancer and breast cancer are the second most fatal carcinomas in men and women, respectively, while pancreatic cancer is the fourth most fatal in both men and women. Developing new drugs for the treatment of cancer is both a slow and expensive process. It is estimated that it takes an average of 15 years and an expense of $800 million to bring a single new drug to the market. However, it is also estimated that nearly 40% of that cost could be avoided by finding alternative uses for drugs that have already been approved by the Food and Drug Administration (FDA). The research presented in this document describes the testing, identification, and mechanistic evaluation of novel methods for treating many human carcinomas using drugs previously approved by the FDA. A tissue culture plate-based screening of FDA-approved drugs will identify compounds that can be used in combination with the protein TRAIL to induce apoptosis selectively in cancer cells. Identified leads will next be optimized using high-throughput microfluidic devices to determine the most effective treatment conditions. Finally, a rigorous mechanistic analysis will be conducted to understand how the FDA-approved drug mitoxantrone sensitizes cancer cells to TRAIL-mediated apoptosis.
Contributors: Taylor, David (Author) / Rege, Kaushal (Thesis advisor) / Jayaraman, Arul (Committee member) / Nielsen, David (Committee member) / Kodibagkar, Vikram (Committee member) / Dai, Lenore (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, and luxury car features). Highly optimized MPSoCs are well-suited to tackle the complex application demands desired by the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs): general-purpose PEs and application-specific integrated circuits (ASICs). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPMs) rather than supporting cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect, such as a Network-on-Chip (NoC). The shift to MPSoCs has been fueled by three major factors: demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire more and more complex applications on their embedded devices, the performance demand for these devices continues to increase. Designers have turned to MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thus reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique to address the synthesis of the interconnect architecture, particularly the Network-on-Chip (NoC). We then address the synthesis of the memory architecture of an MPSoC sub-system. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique is demonstrated through extensive experimentation.
Contributors: Leary, Glenn (Author) / Chatha, Karamvir S. (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Although high-performance, light-weight composites are increasingly being used in applications ranging from aircraft and rotorcraft to weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking, and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage; this limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied, and the results indicate that this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states, such as fiber-matrix debonding, in composite structures with surface-bonded piezoelectric sensors.
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis presents approaches to developing micro seismometers and accelerometers based on molecular electronic transducer (MET) technology using MicroElectroMechanical Systems (MEMS) techniques. MET is a technology applied in seismic instrumentation that proves highly beneficial to planetary seismology. It consists of an electrochemical cell that senses the movement of a liquid electrolyte between electrodes by converting it to an output current. MET seismometers have the advantages of high sensitivity, a low noise floor, small size, the absence of fragile moving mechanical parts, and insensitivity to the orientation of the sensitivity axis. By using MEMS techniques, a micro MET seismometer is developed with an inter-electrode spacing close to 1 μm, which improves the sensitivity of the fabricated device to above 3000 V/(m/s^2) under an operating bias of 600 mV and an input acceleration of 400 μG (G = 9.81 m/s^2) at 0.32 Hz. Lowering the hydrodynamic resistance by increasing the number of channels improves the self-noise to -127 dB, equivalent to 44 nG/√Hz at 1 Hz. An alternative approach to building the sensing element of the MEMS MET seismometer using an SOI process is also presented in this thesis; the significantly increased number of channels is expected to improve the noise performance. Inspired by the advantages of combining MET and MEMS technologies in the development of the seismometer, a low-frequency accelerometer utilizing MET technology with post-CMOS-compatible fabrication processes is developed. In the fabricated accelerometer, the complicated fabrication of the mass-spring system in a solid-state MEMS accelerometer is replaced with a much simpler post-CMOS-compatible process containing only the deposition of a four-electrode MET structure on a planar substrate and a liquid inertial mass of an electrolyte droplet encapsulated by an oil film. The fabrication process does not involve the focused ion beam milling used in the micro MET seismometer fabrication, which lowers the cost. Furthermore, the planar structure and the novel idea of using an oil film as the sealing diaphragm eliminate the complicated three-dimensional packaging of the seismometer. The fabricated device achieves a sensitivity of 10.8 V/G at 20 Hz with a nearly flat response over the frequency range from 1 Hz to 50 Hz, and a low noise floor of 75 μG/√Hz at 20 Hz.
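As a quick consistency check on the reported numbers (our arithmetic, not a figure from the thesis), the 44 nG/√Hz self-noise matches the quoted -127 dB when referenced to 1 (m/s^2)/√Hz:

\[
a_n = 44\,\frac{\mathrm{nG}}{\sqrt{\mathrm{Hz}}} = 44\times 10^{-9}\times 9.81\;\frac{\mathrm{m/s^2}}{\sqrt{\mathrm{Hz}}} \approx 4.3\times 10^{-7}\;\frac{\mathrm{m/s^2}}{\sqrt{\mathrm{Hz}}},
\qquad
20\log_{10}\bigl(4.3\times 10^{-7}\bigr) \approx -127\ \mathrm{dB}.
\]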
Contributors: Huang, Hai (Author) / Yu, Hongyu (Thesis advisor) / Jiang, Hanqing (Committee member) / Dai, Lenore (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Alzheimer's disease (AD) is the most common type of dementia, affecting one in nine people age 65 and older. One of the most important neuropathological characteristics of Alzheimer's disease is the aggregation and deposition of the protein beta-amyloid. Beta-amyloid is produced by proteolytic processing of the Amyloid Precursor Protein (APP). Production of beta-amyloid from APP is increased when cells are subject to stress, since both APP and beta-secretase are upregulated by stress. An increased beta-amyloid level promotes aggregation of beta-amyloid into toxic species, which cause an increase in reactive oxygen species (ROS) and a decrease in cell viability. Therefore, reducing beta-amyloid generation is a promising method to control cell damage following stress. The goal of this thesis was to test the effect of inhibiting beta-amyloid production in a stressed AD cell model. Hydrogen peroxide was used as the stressing agent. Two treatments were used to inhibit beta-amyloid production: iBSec1, an scFv designed to block the beta-secretase site of APP, and DIA10D, a bispecific tandem scFv engineered to cleave the alpha-secretase site of APP and block its beta-secretase site. The iBSec1 treatment was added extracellularly, while DIA10D was stably expressed inside the cells using the pSecTag vector. An increase in reactive oxygen species and a decrease in cell viability were observed after addition of hydrogen peroxide to the AD cell model. The stress-induced toxicity caused by addition of hydrogen peroxide was dramatically decreased by simultaneously treating the cells with iBSec1 or DIA10D to block the increase in beta-amyloid levels resulting from the upregulation of APP and beta-secretase.
Contributors: Suryadi, Vicky (Author) / Sierks, Michael (Thesis advisor) / Nielsen, David (Committee member) / Dai, Lenore (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robustness estimation problem of Metric Temporal Logic (MTL) formulas over a finite timed state sequence. This algorithm not only tests whether the MTL specification is satisfied by the given input, a finite system trajectory, but also quantifies to what extent the sequence satisfies or violates the MTL specification. The implementation of the algorithm is the DP-TALIRO toolbox for MATLAB. Currently it is used as the temporal logic robustness computation engine of S-TALIRO, a MATLAB tool that searches for trajectories of minimal robustness in Simulink/Stateflow models. DP-TALIRO is expected to have near-linear running time and a constant memory requirement, depending on the structure of the MTL formula. The DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO, such as parameter replacement, most related iteration, and most related predicate. A derivative of DP-TALIRO, DP-T-TALIRO, which applies the dynamic programming algorithm to time robustness computation, is also addressed in this thesis. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
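To make the robust semantics concrete, the following minimal sketch (ours, not the DP-TALIRO source, and using plain recursion rather than the thesis's dynamic program) evaluates robustness for a small MTL fragment: a predicate's robustness is a signed distance, negation flips the sign, conjunction takes a minimum, and a bounded "eventually" maximizes over its time window.

# Hedged sketch of MTL robustness over a finite timed state sequence.
# Formulas are tuples; predicates map a state to a signed distance.
def robustness(formula, trace, times, i=0):
    op = formula[0]
    if op == "pred":                      # ("pred", f) with f(state) -> signed distance
        return formula[1](trace[i])
    if op == "not":                       # ("not", phi)
        return -robustness(formula[1], trace, times, i)
    if op == "and":                       # ("and", phi1, phi2, ...): min over conjuncts
        return min(robustness(f, trace, times, i) for f in formula[1:])
    if op == "eventually":                # ("eventually", (a, b), phi): max over window
        (a, b), phi = formula[1], formula[2]
        window = [j for j in range(i, len(trace)) if a <= times[j] - times[i] <= b]
        if not window:                    # empty window: cannot be satisfied
            return float("-inf")
        return max(robustness(phi, trace, times, j) for j in window)
    raise ValueError("unsupported operator: %s" % op)

# Example: "eventually within [0, 3] s, x exceeds 1.0".
trace = [0.5, 0.8, 1.6, 1.2]
times = [0.0, 1.0, 2.0, 3.0]
phi = ("eventually", (0.0, 3.0), ("pred", lambda x: x - 1.0))
print(robustness(phi, trace, times))      # 0.6: satisfied with margin 0.6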
Contributors: Yang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without sacrificing much accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as the FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is proportional to the inverse of the grouping size, so it is reduced as the grouping size goes up. For features that depend on the grouping size, such as the FFT, energy increases with the logarithm of the grouping size, so energy consumption increases slowly as the grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
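To illustrate the transition-detection idea, the sketch below (an assumed Gaussian feature model, not the thesis code) computes a log-likelihood ratio comparing a one-activity hypothesis for a window of sensor features against a two-activity hypothesis split at the window midpoint; a large value suggests an activity transition.

# Hedged sketch: LLR test for an activity transition at a window midpoint.
import numpy as np

def gaussian_loglik(x):
    mu, var = x.mean(), x.var() + 1e-9    # variance floor keeps the log finite
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def transition_llr(window):
    mid = len(window) // 2
    h1 = gaussian_loglik(window[:mid]) + gaussian_loglik(window[mid:])  # two activities
    h0 = gaussian_loglik(window)                                        # one activity
    return h1 - h0

rng = np.random.default_rng(0)
walking = rng.normal(0.0, 1.0, 64)        # synthetic feature stream, activity A
sitting = rng.normal(3.0, 0.3, 64)        # activity B
print(transition_llr(np.concatenate([walking, sitting])))  # large -> transition
print(transition_llr(rng.normal(0.0, 1.0, 128)))           # near zero -> none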
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This research focuses on the benefits of using nanocomposites in aerospace structural components to prevent or delay the onset of unique composite failure modes, such as delamination. Analytical, numerical, and experimental analyses were conducted to provide a comprehensive understanding of how carbon nanotubes (CNTs) can provide additional structural integrity when they are used in specific hot spots within a structure. A multiscale approach was implemented to determine the mechanical and thermal properties of the nanocomposites, which were used in detailed finite element models (FEMs) to analyze interlaminar failures in T- and Hat-section stringers. Of particular interest was the delamination that first occurs between the tow filler and the bondline joining the stringer and the skin. Both locations are considered to be hot spots in such structural components, and failures tend to initiate from these areas. In this research, nanocomposite use was investigated as an alternative to traditional methods of suppressing delamination. The stringer was analyzed under different loading conditions and assuming different structural defects. Initial damage, defined as the first drop in the load-displacement curve, was considered to be a useful variable for comparing the different behaviors in this study and was detected via the virtual crack closure technique (VCCT) implemented in the FE analysis.

Experiments were conducted to test T-section skin/stringer specimens under pull-off loading, replicating those used in composite panels as stiffeners. Two types of designs were considered: one using pure epoxy to fill the tow region and another using nanocomposite with 5 wt.% CNTs. The response variable in the tests was the initial damage. Detailed FEM analyses were conducted to correlate with the experimental data, and the correlation between experiment and model was satisfactory. Finally, the effects of thermal cure and temperature variation on the behavior of the nanocomposite structure were studied, and both variables were found to influence its performance.
Contributors: Hasan, Zeaid (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Jiang, Hanqing (Committee member) / Rajadas, John (Committee member) / Liu, Yongming (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Damage detection in heterogeneous material systems is a complex problem and requires an in-depth understanding of the material characteristics and response under varying load and environmental conditions. A significant amount of research has been conducted in this field to enhance the fidelity of damage assessment methodologies, using a wide range of sensors and detection techniques, for both metallic materials and composites. However, detecting damage at the microscale is not possible with commercially available sensors. A probable way to approach this problem is through accurate and efficient multiscale modeling techniques, which are capable of tracking damage initiation at the microscale and propagation across the length scales. The output from these models will provide an improved understanding of damage initiation; this knowledge can be used in conjunction with information from physical sensors to reduce the size of detectable damage. In this research, effort has been dedicated to developing multiscale modeling approaches and associated damage criteria for the estimation of damage evolution across the relevant length scales. Important issues such as length and time scales, anisotropy and variability in material properties at the microscale, and response under mechanical and thermal loading are addressed. Two different material systems have been studied: a metallic material and a novel stress-sensitive epoxy polymer.

For the metallic material (Al 2024-T351), the methodology starts at the microscale, where extensive material characterization is conducted to capture the microstructural variability. A statistical volume element (SVE) model is constructed to represent the material properties. Geometric and crystallographic features, including grain orientation, misorientation, size, shape, principal axis direction, and aspect ratio, are captured. This SVE model provides a computationally efficient alternative to traditional techniques using representative volume element (RVE) models while maintaining statistical accuracy. A physics-based multiscale damage criterion is developed to simulate fatigue crack initiation; the crack growth rate and probable directions are estimated simultaneously.

Mechanically sensitive materials that exhibit specific chemical reactions upon external loading are currently being investigated for self-sensing applications. The "smart" polymer modeled in this research consists of epoxy resin, a hardener, and a stress-sensitive material called a mechanophore. Mechanophore activation is based on covalent bond breaking induced by external stimuli; this feature can be used for material-level damage detection. In this work, Tris-(Cinnamoyl oxymethyl)-Ethane (TCE) is used as the cyclobutane-based mechanophore (stress-sensitive) material in the polymer matrix. The TCE-embedded polymers have shown promising results in early damage detection through mechanically induced fluorescence. A spring-bead based network model, which bridges nanoscale information to higher length scales, has been developed to model this material system. The material is partitioned into discrete mass beads which are linked using linear springs at the microscale. A series of MD simulations were performed to define the spring stiffness in the statistical network model. By integrating multiple spring-bead models, a network model has been developed to represent the material properties at the mesoscale. The model captures the statistical distribution of the crosslinking degree of the polymer to represent the heterogeneous material properties at the microscale. The developed multiscale methodology is computationally efficient and provides a possible means to bridge multiple length scales (from 10 nm in the MD simulations to 10 mm in the FE model) without significant loss of accuracy. Parametric studies have been conducted to investigate the influence of the crosslinking degree on the material behavior. The developed methodology has been used to evaluate damage evolution in the self-sensing polymer.
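As a toy illustration of the spring-bead construction (ours, with assumed stiffness values; the thesis calibrates stiffnesses from MD simulations and works beyond one dimension), the sketch below assembles a one-dimensional bead chain whose spring constants are scattered to mimic crosslinking variability, then solves for the static displacement under an end load.

# Hedged sketch: 1-D spring-bead chain with statistically scattered stiffness.
import numpy as np

def assemble_stiffness(k):                # k[i] links bead i to bead i+1
    n = len(k) + 1
    K = np.zeros((n, n))
    for i, ki in enumerate(k):            # standard two-node spring assembly
        K[i, i] += ki
        K[i + 1, i + 1] += ki
        K[i, i + 1] -= ki
        K[i + 1, i] -= ki
    return K

rng = np.random.default_rng(1)
k = rng.normal(100.0, 10.0, 9)            # assumed stiffness scatter (N/m)
K = assemble_stiffness(k)
f = np.zeros(10)
f[-1] = 1.0                               # 1 N load pulls the last bead
u = np.linalg.solve(K[1:, 1:], f[1:])     # bead 0 fixed; solve K u = f
print(u[-1])                              # end displacement = sum(1/k_i) for a chain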
Contributors: Zhang, Jinjun (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Jiang, Hanqing (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Stream processing has emerged as an important model of computation especially in the context of multimedia and communication sub-systems of embedded System-on-Chip (SoC) architectures. The dataflow nature of streaming applications allows them to be most naturally expressed as a set of kernels iteratively operating on continuous streams of data. The kernels are computationally intensive and are mainly characterized by real-time constraints that demand high throughput and data bandwidth with limited global data reuse. Conventional architectures fail to meet these demands due to their poorly matched execution models and the overheads associated with instruction and data movements.

This work presents StreamWorks, a multi-core embedded architecture for energy-efficient stream computing. The basic processing element in the StreamWorks architecture is the StreamEngine (SE), which is responsible for iteratively executing a stream kernel. The SE introduces an instruction locking mechanism that exploits the iterative nature of the kernels and enables fine-grain instruction reuse. Each instruction in an SE is locked to a Reservation Station (RS) and revitalizes itself after execution, never retiring from the RS. The entire kernel is hosted in RS Banks (RSBs) close to the functional units for energy-efficient instruction delivery. The dataflow semantics of stream kernels are captured by a context-aware dataflow execution mode that efficiently exploits the instruction-level parallelism (ILP) and data-level parallelism (DLP) within stream kernels.

Multiple SEs are grouped together to form a StreamCluster (SC) and communicate via a local interconnect. A novel software FIFO virtualization technique with split-join functionality is proposed for efficient and scalable stream communication across SEs. The proposed communication mechanism exploits the task-level parallelism (TLP) of the stream application. The performance and scalability of the communication mechanism are evaluated against existing data movement schemes for scratchpad-based multi-core architectures. Further, overlay schemes and architectural support are proposed that allow hosting any number of kernels on the StreamWorks architecture. The proposed overlay schemes for code management support kernel (context) switching for the most common use cases and can be adapted for any multi-core architecture that uses software-managed local memories.
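A conceptual sketch of the split-join pattern behind software FIFO virtualization (ours, not the StreamWorks implementation): a producer's stream is fanned out round-robin to parallel lanes, each feeding one kernel instance, and the outputs are merged back in order.

# Hedged sketch: software FIFOs with round-robin split and join.
from collections import deque

class Fifo:
    def __init__(self):
        self.q = deque()
    def push(self, x):
        self.q.append(x)
    def pop(self):
        return self.q.popleft()
    def __len__(self):
        return len(self.q)

def split(src, lanes):                    # distribute tokens round-robin
    i = 0
    while len(src):
        lanes[i % len(lanes)].push(src.pop())
        i += 1

def join(lanes, dst):                     # merge round-robin, restoring order
    i = 0
    while any(len(lane) for lane in lanes):
        lane = lanes[i % len(lanes)]
        if len(lane):
            dst.push(lane.pop())
        i += 1

src, dst, lanes = Fifo(), Fifo(), [Fifo(), Fifo()]
for x in range(6):
    src.push(x)
split(src, lanes)                         # lanes hold [0, 2, 4] and [1, 3, 5]
join(lanes, dst)
print([dst.pop() for _ in range(6)])      # [0, 1, 2, 3, 4, 5]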

The performance and energy efficiency of the StreamWorks architecture are evaluated on stream kernel and application benchmarks by implementing the architecture in a 45 nm TSMC process and comparing it against a low-power RISC core and a contemporary accelerator.
Contributors: Panda, Amrit (Author) / Chatha, Karam S. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2014