This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation/thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description

Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l-1 norm based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo so that embryos are divided into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
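As a rough illustration of the penalized formulation this abstract refers to (not the thesis's overlapping-group solver), the following minimal NumPy sketch fits a non-overlapping group lasso by proximal gradient descent; the toy data, group sizes, and regularization weight are assumptions chosen only for demonstration.

import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: shrink the whole group toward zero."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso(X, y, groups, lam=0.1, step=None, n_iter=500):
    """Proximal gradient descent for 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2.

    groups : list of index arrays, one per (non-overlapping) feature group.
    """
    n, p = X.shape
    w = np.zeros(p)
    if step is None:                      # 1 / Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)          # gradient of the smooth least-squares loss
        z = w - step * grad               # gradient step
        for g in groups:                  # proximal step, group by group
            w[g] = group_soft_threshold(z[g], step * lam)
    return w

# toy usage: 3 groups of 3 features, only the first group is truly active
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 9))
w_true = np.r_[1.0, -2.0, 1.5, np.zeros(6)]
y = X @ w_true + 0.1 * rng.standard_normal(50)
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
print(np.round(group_lasso(X, y, groups, lam=5.0), 2))

The block soft-thresholding step is what drives entire feature groups to zero, which is the structured-sparsity behavior the abstract describes; handling overlapping groups requires the more general machinery developed in the thesis.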
ContributorsYuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
Description

The rapid advancement of wireless technology has instigated the broad deployment of wireless networks. Different types of networks have been developed, including wireless sensor networks, mobile ad hoc networks, wireless local area networks, and cellular networks. These networks have different structures and applications, and require different control algorithms. The focus of this thesis is to design scheduling and power control algorithms in wireless networks and analyze their performance. In this thesis, we first study the multicast capacity of wireless ad hoc networks. Gupta and Kumar studied the scaling law of the unicast capacity of wireless ad hoc networks, deriving the order of the unicast throughput as the number of nodes in the network goes to infinity. In our work, we characterize the scaling of the multicast capacity of large-scale MANETs under a delay constraint D. We first derive an upper bound on the multicast throughput, and then establish a lower bound on the multicast capacity by proposing a joint coding-scheduling algorithm that achieves a throughput within a logarithmic factor of the upper bound. We then study the power control problem in ad hoc wireless networks. We propose a distributed power control algorithm based on the Gibbs sampler, and prove that the algorithm is throughput optimal. Finally, we consider scheduling algorithms in collocated wireless networks with flow-level dynamics. Specifically, we study the delay performance of a workload-based scheduling algorithm with shortest remaining processing time (SRPT) as a tie-breaking rule. We demonstrate the superior flow-level delay performance of the proposed algorithm using simulations.
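The abstract does not give the details of the Gibbs-sampler-based power control algorithm; the centralized toy sketch below only conveys the general flavor of randomized, utility-driven power selection. The channel gains, candidate power levels, utility function, and temperature are illustrative assumptions, not the thesis's model.

import numpy as np

rng = np.random.default_rng(1)
L = 4                                   # number of interfering links
levels = np.array([0.0, 0.5, 1.0, 2.0]) # candidate transmit powers (assumed units)
G = rng.uniform(0.05, 0.2, (L, L))      # cross-link channel gains (toy values)
np.fill_diagonal(G, 1.0)                # direct-link gains
noise, T = 0.1, 0.05                    # noise power and Gibbs "temperature"

def utility(p):
    """Network utility: sum of log(1 + SINR) over all links."""
    sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
    return np.sum(np.log1p(sinr))

p = np.full(L, levels[-1])              # start at maximum power
for it in range(2000):
    i = rng.integers(L)                 # pick one link uniformly at random
    u = np.array([utility(np.where(np.arange(L) == i, v, p)) for v in levels])
    prob = np.exp((u - u.max()) / T)    # Boltzmann distribution over its power levels
    p[i] = levels[rng.choice(len(levels), p=prob / prob.sum())]

print("sampled power profile:", p, " utility:", round(utility(p), 3))

Because each link resamples its power level from a Boltzmann distribution conditioned on the others, the sampler concentrates on high-utility power profiles as the temperature is lowered; the genuinely distributed algorithm and its throughput-optimality proof are developed in the thesis.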
ContributorsZhou, Shan (Author) / Ying, Lei (Thesis advisor) / Zhang, Yanchao (Committee member) / Zhang, Junshan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created2013
Description

The effect of earthquake-induced liquefaction on the local void ratio distribution of cohesionless soil is evaluated using x-ray computed tomography (CT) and an advanced image processing software package. Intact, relatively undisturbed specimens of cohesionless soil were recovered before and after liquefaction by freezing and coring soil deposits created by pluviation and by sedimentation through water. Pluviated soil deposits were liquefied in the small geotechnical centrifuge at the University of California at Davis shared-use National Science Foundation (NSF)-supported Network for Earthquake Engineering Simulation (NEES) facility. A soil deposit created by sedimentation through water was liquefied on a small shake table in the Arizona State University geotechnical laboratory. Initial centrifuge tests employed Ottawa 20-30 sand, but this material proved to be too coarse to liquefy in the centrifuge; therefore, subsequent centrifuge tests employed Ottawa F60 sand. The shake table test employed Ottawa 20-30 sand. Recovered cores were stabilized by impregnation with optical grade epoxy and sent to the NSF-supported facility for high-resolution CT scanning of geologic media at the University of Texas at Austin. The local void ratio distribution of a CT-scanned core of Ottawa 20-30 sand, evaluated using Avizo® Fire, a commercially available advanced program for image analysis, was compared to the local void ratio distribution established on the same core by analysis of optical images to demonstrate that analysis of the CT scans gave similar results to optical methods. CT scans were subsequently conducted on liquefied and non-liquefied specimens of Ottawa 20-30 sand and Ottawa F60 sand. The resolution of the F60 specimens was inadequate to establish the local void ratio distribution. Results of the analysis of the Ottawa 20-30 specimens recovered from the model built for the shake table test showed that liquefaction can substantially influence the variability in local void ratio, increasing the degree of non-homogeneity in the specimen.
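As an illustration of the kind of local void ratio computation described above (not the procedure actually used with Avizo® Fire), the following NumPy sketch bins a binarized CT volume into cubic sub-volumes and reports the void ratio of each; the intensity threshold, window size, and synthetic volume are assumptions for demonstration only.

import numpy as np

def local_void_ratio(ct, threshold, box=20):
    """Local void ratio e = V_voids / V_solids in non-overlapping cubic windows.

    ct        : 3-D array of CT intensities (voxels)
    threshold : intensity separating solid (>= threshold) from void (< threshold)
    box       : edge length of each cubic sub-volume, in voxels
    """
    solid = ct >= threshold
    nz, ny, nx = (s // box for s in ct.shape)
    e = np.empty((nz, ny, nx))
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                win = solid[k*box:(k+1)*box, j*box:(j+1)*box, i*box:(i+1)*box]
                n_solid = win.sum()
                n_void = win.size - n_solid
                e[k, j, i] = n_void / max(n_solid, 1)   # guard against all-void windows
    return e

# toy usage with a synthetic volume standing in for a real scan
rng = np.random.default_rng(2)
volume = rng.normal(100, 25, (60, 60, 60))      # fake CT intensities
e_local = local_void_ratio(volume, threshold=100, box=20)
print("mean local void ratio:", round(e_local.mean(), 3))

Comparing the spread of such local values before and after shaking is one simple way to quantify the increase in non-homogeneity the study reports.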
ContributorsGutierrez, Angel (Author) / Kavazanjian, Edward (Thesis advisor) / Houston, Sandra (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2013
Description

The rapid advances in wireless communications and networking have given rise to a number of emerging heterogeneous wireless and mobile networks along with novel networking paradigms, including wireless sensor networks, mobile crowdsourcing, and mobile social networking. While offering promising solutions to a wide range of new applications, their widespread adoption and large-scale deployment are often hindered by people's concerns about security, user privacy, or both. In this dissertation, we aim to address a number of challenging security and privacy issues in heterogeneous wireless and mobile networks in an attempt to foster their widespread adoption. Our contributions are mainly fivefold. First, we introduce a novel secure and loss-resilient code dissemination scheme for wireless sensor networks deployed in hostile and harsh environments. Second, we devise a novel scheme to enable mobile users to detect any inauthentic or unsound location-based top-k query result returned by an untrusted location-based service provider. Third, we develop a novel verifiable privacy-preserving aggregation scheme for people-centric mobile sensing systems. Fourth, we present a suite of privacy-preserving profile matching protocols for proximity-based mobile social networking, which can support a wide range of matching metrics with different privacy levels. Last, we present a secure combination scheme for crowdsourcing-based cooperative spectrum sensing systems that can enable robust primary user detection even when malicious cognitive radio users constitute the majority.
ContributorsZhang, Rui (Author) / Zhang, Yanchao (Thesis advisor) / Duman, Tolga Mete (Committee member) / Xue, Guoliang (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2013
Description

Heating of asphalt during production and construction causes the volatilization and oxidation of the binders used in mixes. Volatilization and oxidation cause degradation of asphalt pavements by increasing the stiffness of the binders, increasing susceptibility to cracking and negatively affecting the functional and structural performance of the pavements. Degradation of asphalt binders by volatilization and oxidation due to high production temperatures occurs during the early stages of pavement life and is known as Short Term Aging (STA). Elevated temperatures and increased exposure time to elevated temperatures cause increased STA of asphalt. The objective of this research was to investigate how elevated mixing temperatures and exposure time to elevated temperatures affect aging and stiffening of binders, thus influencing properties of the asphalt mixtures. The study was conducted in two stages. The first stage evaluated the STA effect on asphalt binders. It involved aging two Performance Graded (PG) virgin asphalt binders, PG 76-16 and PG 64-22, at two different temperatures and durations, then measuring their viscosities. The second stage involved evaluating the effects of elevated STA temperature and time on properties of the asphalt mixtures. It involved STA of asphalt mixtures produced in the laboratory with the PG 64-22 binder at mixing temperatures elevated 25°F above standard practice and STA times 2 and 4 hours longer than standard practice; the mixtures were then compacted in a gyratory compactor. Dynamic modulus (E*) and Indirect Tensile Strength (IDT) were measured for the aged mixtures for each temperature and duration to determine the effect of different aging times and temperatures on the stiffness and fatigue properties of the aged asphalt mixtures. The binder test results showed increased viscosity in all cases, with the highest increase resulting from increased aging time. The results also indicated that the PG 64-22 binder was more susceptible to elevated STA temperature and extended time than the PG 76-16 binder. The asphalt mixture test results confirmed the expected outcome that increasing the STA and mixing temperatures by 25°F alters the stiffness of mixtures. Significant change in the dynamic modulus mostly occurred at the four-hour increase in STA time, regardless of temperature.
ContributorsLolly, Rubben (Author) / Kaloush, Kamil (Thesis advisor) / Bearup, Wylie (Committee member) / Zapata, Claudia (Committee member) / Mamlouk, Michael (Committee member) / Arizona State University (Publisher)
Created2013
Description

Unsaturated soil mechanics is becoming a part of geotechnical engineering practice, particularly in applications to moisture-sensitive soils such as expansive and collapsible soils and in geoenvironmental applications. The soil water characteristic curve, which describes the amount of water in a soil versus soil suction, is perhaps the most important soil property function for application of unsaturated soil mechanics. The soil water characteristic curve has been used extensively for estimating unsaturated soil properties, and a number of fitting equations for development of soil water characteristic curves from laboratory data have been proposed by researchers. Although not always mentioned, the underlying assumption of soil water characteristic curve fitting equations is that the soil is sufficiently stiff so that there is no change in total volume of the soil while measuring the soil water characteristic curve in the laboratory, and researchers rarely take volume change of soils into account when generating or using the soil water characteristic curve. Further, there has been little attention to the applied net normal stress during laboratory soil water characteristic curve measurement, and often zero to only token net normal stress is applied. The applied net normal stress also affects the volume change of the specimen during soil suction change. When a soil changes volume in response to suction change, failure to consider the volume change of the soil leads to errors in the estimated air-entry value and the slope of the soil water characteristic curve between the air-entry value and the residual moisture state. Inaccuracies in the soil water characteristic curve may lead to inaccuracies in estimated soil property functions such as unsaturated hydraulic conductivity. A number of researchers have recently recognized the importance of considering soil volume change in soil water characteristic curves. Correct methods of soil water characteristic curve measurement and determination considering soil volume change, and the resulting impacts on the unsaturated hydraulic conductivity function, were the primary focus of this study. Emphasis was placed on the effect of volume change consideration on soil water characteristic curves for expansive clays and other high-volume-change soils. The research involved extensive literature review and laboratory soil water characteristic curve testing on expansive soils. The effect of the initial state of the specimen (i.e., slurry versus compacted) on soil water characteristic curves, with regard to volume change effects, and the effect of net normal stress on volume change during determination of these curves, were studied for expansive clays. Hysteresis effects were included in laboratory measurements of soil water characteristic curves, as both wetting and drying paths were used. Impacts of soil water characteristic curve volume change considerations on fluid flow computations and associated suction-change-induced soil deformations were studied through numerical simulations. The study includes both coupled and uncoupled flow and stress-deformation analyses, demonstrating that the impact of volume change consideration on the soil water characteristic curve and the estimated unsaturated hydraulic conductivity function can be quite substantial for high-volume-change soils.
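The abstract refers to fitting equations for soil water characteristic curves; as a minimal example of conventional (constant-volume) curve fitting, the sketch below fits the van Genuchten (1980) equation to hypothetical suction/water-content data with SciPy. The data points, initial guesses, and parameter bounds are assumptions; the thesis's point is precisely that such fits can mislead when specimen volume change is ignored.

import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) SWCC: volumetric water content vs. suction psi [kPa]."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# hypothetical drying-path data (suction in kPa, volumetric water content)
psi = np.array([1, 5, 10, 50, 100, 500, 1000, 5000, 10000, 100000], dtype=float)
theta = np.array([0.48, 0.47, 0.45, 0.40, 0.36, 0.27, 0.22, 0.15, 0.12, 0.07])

popt, _ = curve_fit(van_genuchten, psi, theta,
                    p0=[0.05, 0.48, 0.01, 1.5],   # initial parameter guesses
                    bounds=([0, 0.3, 1e-5, 1.01], [0.2, 0.6, 1.0, 5.0]))
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.4f}, n={n:.2f}")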
ContributorsBani Hashem, Elham (Author) / Houston, Sandra L. (Thesis advisor) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2013
Description

A principal goal of this dissertation is to study stochastic optimization and real-time scheduling in cyber-physical systems (CPSs) ranging from real-time wireless systems to energy systems to distributed control systems. Under this common theme, this dissertation can be broadly organized into three parts based on the system environments. The first part investigates stochastic optimization in real-time wireless systems, with a focus on deadline-aware scheduling for real-time traffic. The optimal solution to such scheduling problems requires explicitly taking into account the coupling between deadline-aware transmissions and the stochastic characteristics of the traffic, which involves a dynamic program that is traditionally known to be intractable or computationally expensive to implement. First, real-time scheduling with adaptive network coding over memoryless channels is studied, and a polynomial-time algorithm is developed to characterize the optimal real-time scheduling. Then, real-time scheduling over Markovian channels is investigated, where channel conditions are time-varying and online channel learning is necessary, and the optimal scheduling policies in different traffic regimes are studied. The second part focuses on the stochastic optimization and real-time scheduling involved in energy systems. First, risk-aware scheduling and dispatch for plug-in electric vehicles (EVs) are studied, aiming to jointly optimize the EV charging cost and the risk of load mismatch between the forecasted and actual EV loads due to the random driving activities of EVs. Then, the integration of wind generation at high penetration levels into bulk power grids is considered. Joint optimization of economic dispatch and interruptible load management is investigated using short-term wind farm generation forecasts. The third part studies stochastic optimization in distributed control systems under different network environments. First, distributed spectrum access in cognitive radio networks is investigated using a pricing approach, where primary users (PUs) sell temporarily unused spectrum and secondary users compete via random access for such spectrum opportunities. The optimal pricing strategy for PUs and the corresponding distributed implementation of spectrum access control are developed to maximize the PUs' revenue. Then, a systematic study of the nonconvex utility-based power control problem is presented under the physical interference model in ad hoc networks. Distributed power control schemes are devised to maximize the system utility by leveraging the extended duality theory and simulated annealing.
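The dissertation's polynomial-time scheduling algorithm and online-learning policies are not reproduced here; the toy finite-horizon dynamic program below only illustrates the kind of deadline-aware expected-throughput computation involved, for a single collocated link with an assumed memoryless channel success probability.

from functools import lru_cache

P_SUCCESS = 0.7     # memoryless channel: each transmission succeeds w.p. 0.7 (assumed)

@lru_cache(maxsize=None)
def expected_deliveries(k, t):
    """Expected packets delivered before the deadline when k packets remain
    and t slots are left, transmitting one packet per slot."""
    if k == 0 or t == 0:
        return 0.0
    return (P_SUCCESS * (1.0 + expected_deliveries(k - 1, t - 1))
            + (1.0 - P_SUCCESS) * expected_deliveries(k, t - 1))

# e.g. 5 packets with a deadline of 6 slots
print(round(expected_deliveries(5, 6), 3))

The recursion conditions on how many packets remain and how many slots are left before the deadline, which is the basic state description that deadline-constrained scheduling problems of this kind build on.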
ContributorsYang, Lei (Author) / Zhang, Junshan (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Xue, Guoliang (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created2012
Description

Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient design of networks. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
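As a concrete (and much simplified) reading of the region-based fault model, the sketch below builds a random geometric graph with NetworkX and checks whether connectivity survives when every node inside a circular fault region fails together; the node layout, transmission range, fault radius, and grid of probe centers are all assumptions for illustration.

import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(0, 10, (60, 2))       # random node locations in a 10x10 field
comm_range, fault_radius = 2.5, 2.0     # assumed transmission range and fault size

G = nx.Graph()
G.add_nodes_from(range(len(pos)))
for i, j in itertools.combinations(range(len(pos)), 2):
    if np.linalg.norm(pos[i] - pos[j]) <= comm_range:
        G.add_edge(i, j)

def survives_region_fault(center):
    """True if the network stays connected after every node within
    fault_radius of `center` fails simultaneously."""
    failed = [v for v in G if np.linalg.norm(pos[v] - center) <= fault_radius]
    H = G.copy()
    H.remove_nodes_from(failed)
    return H.number_of_nodes() > 0 and nx.is_connected(H)

# probe circular fault regions centred on a coarse grid
centers = [(x, y) for x in range(1, 10, 2) for y in range(1, 10, 2)]
print(sum(survives_region_fault(np.array(c)) for c in centers), "of", len(centers),
      "probed fault regions leave the network connected")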
ContributorsBanerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created2013
Description

We are expecting hundreds of cores per chip in the near future. However, scaling the memory architecture in manycore architectures becomes a major challenge. Cache coherence provides all cores with a single image of memory at any time during execution, yet coherent cache architectures are not believed to scale to hundreds and thousands of cores. In addition, caches and coherence logic already take 20-50% of the total power consumption of the processor and 30-60% of the die area. Therefore, a more scalable architecture is needed for manycore architectures. Software Managed Manycore (SMM) architectures emerge as a solution. They have a scalable memory design in which each core has direct access to only its local scratchpad memory, and any data transfers to/from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. Lack of automatic memory management in the hardware makes such architectures extremely power-efficient, but they also become difficult to program. If the code/data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after its use. However, doing this adds a lot of complexity to the programmer's job: programmers must now worry about data management on top of the functional correctness of the program, which is already quite complex. This dissertation presents a comprehensive compiler and runtime integration to automatically manage the code and data of each task in the limited local memory of the core. We first developed a Complete Circular Stack Management scheme. It manages stack frames between the local memory and the main memory, and addresses the stack pointer problem as well. Though it works, we found we could further optimize the management for most cases; thus, a Smart Stack Data Management (SSDM) scheme is provided. In this work, we formulate the stack data management problem and propose a greedy algorithm for it. Later on, we propose a general cost estimation algorithm, based on which the CMSM heuristic for the code mapping problem is developed. Finally, heap data is dynamic in nature and therefore hard to manage. We provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory partition methodology.
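The dissertation's stack management schemes are not reproduced here; the toy Python model below only illustrates the underlying idea of circular stack management on a software-managed scratchpad, evicting the oldest resident frames to main memory via (simulated) DMA when a call would overflow the local region. The frame names, sizes, and scratchpad capacity are assumptions.

class ScratchpadStack:
    """Toy model of circular stack management in a fixed-size local scratchpad.

    Frames that no longer fit locally are evicted (oldest first) to a list that
    stands in for main memory; they are fetched back when control returns to them.
    """

    def __init__(self, capacity):
        self.capacity = capacity        # scratchpad bytes reserved for the stack
        self.local = []                 # frames currently resident: (name, size)
        self.evicted = []               # frames "DMA-ed" out to main memory

    def _used(self):
        return sum(size for _, size in self.local)

    def call(self, name, size):
        while self.local and self._used() + size > self.capacity:
            frame = self.local.pop(0)   # evict the oldest resident frame
            self.evicted.append(frame)
            print(f"DMA out {frame[0]} ({frame[1]} B)")
        self.local.append((name, size))

    def ret(self):
        name, size = self.local.pop()   # discard the returning function's frame
        if self.evicted and not self.local:
            frame = self.evicted.pop()  # caller's frame must be brought back in
            self.local.append(frame)
            print(f"DMA in  {frame[0]} ({frame[1]} B)")
        print(f"return from {name} ({size} B)")

# usage: a deep call chain on a 256-byte scratchpad stack region
spm = ScratchpadStack(256)
for fn, sz in [("main", 96), ("solve", 128), ("kernel", 112), ("leaf", 64)]:
    spm.call(fn, sz)
for _ in range(4):
    spm.ret()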
ContributorsBai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created2014
Description

This thesis presents a probabilistic evaluation of multiple laterally loaded drilled pier foundation design approaches using extensive data from a geotechnical investigation for a high voltage electric transmission line. A series of Monte Carlo simulations provides insight about the computed level of reliability considering site standard penetration test blow count value variability alone (i.e., assuming all other aspects of the design problem do not contribute error or bias). Evaluated methods include the Eurocode 7 Geotechnical Design procedures, the Federal Highway Administration drilled shaft LRFD design method, the Electric Power Research Institute transmission foundation design procedure, and a site-specific variability-based approach previously suggested by the author of this thesis and others. The analysis method is defined by three phases: a) evaluate the spatial variability of an existing subsurface database; b) derive theoretical foundation designs from the database in accordance with the various design methods identified; and c) conduct Monte Carlo simulations to compute the reliability of the theoretical foundation designs. Over several decades, reliability-based foundation design (RBD) methods have been developed and implemented to varying degrees for buildings, bridges, electric systems and other structures. In recent years, an effort has been made by researchers, professional societies and other standard-developing organizations to publish design guidelines, manuals and standards concerning RBD for foundations. Most of these approaches rely on statistical methods for quantifying load and resistance probability distribution functions with defined reliability levels. However, each varies with regard to the influence of site-specific variability on resistance. An examination of the influence of site-specific variability is required to provide direction for incorporating the concept into practical RBD methods. Recent surveys of transmission line engineers by the Electric Power Research Institute (EPRI) demonstrate that RBD methods for the design of transmission line foundations have not been widely adopted. In the absence of a unifying design document with established reliability goals, transmission line foundations have historically performed very well, with relatively few failures. However, such a track record with no set reliability goals suggests that, at least in some cases, a financial premium has likely been paid.
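The sketch below illustrates the style of Monte Carlo reliability computation the abstract describes, sampling standard penetration test blow counts from a lognormal distribution and estimating a probability of failure against a fixed demand; the statistics, the simplified linear capacity model, and the load value are illustrative assumptions, not results from the thesis.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
N_SIMS = 100_000

# hypothetical site statistics for the SPT blow count (illustrative values only)
n_mean, n_cov = 22.0, 0.35                    # mean and coefficient of variation
sigma_ln = np.sqrt(np.log(1.0 + n_cov**2))    # lognormal parameters from mean / COV
mu_ln = np.log(n_mean) - 0.5 * sigma_ln**2
blow_counts = rng.lognormal(mu_ln, sigma_ln, N_SIMS)

# hypothetical, greatly simplified resistance model: lateral capacity proportional
# to blow count; the demand is a fixed factored design load
capacity_kN = 45.0 * blow_counts
demand_kN = 700.0

p_f = np.mean(capacity_kN < demand_kN)        # probability of failure
beta = -norm.ppf(p_f) if p_f > 0 else float("inf")
print(f"P(failure) ~ {p_f:.4f}, reliability index beta ~ {beta:.2f}")

In a fuller analysis of this kind, the capacity model would come from the specific design method being evaluated, which is how the different procedures named in the abstract end up with different computed reliabilities for the same site data.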
ContributorsHeim, Zackary (Author) / Houston, Sandra (Thesis advisor) / Witczak, Matthew (Committee member) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created2014