This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 197

Description
Manufacture of building materials requires significant energy, and as demand for these materials continues to increase, the energy requirement will as well. Offsetting this energy use will require increased focus on sustainable building materials. Further, the energy used in buildings, particularly in heating and air conditioning, accounts for 40 percent of a building's energy use. Increasing the efficiency of building materials will reduce energy usage over the lifetime of the building. Current methods for maintaining the interior environment can be highly inefficient depending on the building materials selected. Materials such as concrete have low thermal efficiency and low heat capacity, meaning they provide little insulation. Use of phase change materials (PCMs) provides the opportunity to increase the environmental efficiency of buildings through their inherent latent heat storage as well as their increased heat capacity. Incorporating PCM into concrete, whether via lightweight aggregates (LWA) or by direct addition, is seen as a viable option for increasing the thermal storage capabilities of concrete, thereby increasing building energy efficiency. As a PCM changes phase from solid to liquid, heat is absorbed from the surroundings, decreasing the demand on air conditioning systems on a hot day (or vice versa on a cold day). Further, these materials provide an additional insulating capacity above the value of plain concrete. When the outside temperature drops, the PCM turns back into a solid and releases the energy stored during the day. PCM is hydrophobic and causes reductions in compressive strength when incorporated directly into concrete, as shown in previous studies. A proposed method for mitigating this detrimental effect, while still incorporating PCM into concrete, is to encapsulate the PCM in aggregate. This technique would, in theory, allow for the use of phase change materials in concrete, increasing the thermal efficiency of buildings while avoiding the negative effect on the compressive strength of the material.
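
As a back-of-the-envelope illustration of the latent-heat mechanism this abstract describes, the sketch below compares the heat absorbed over a daily temperature swing by plain concrete and by a PCM-modified mix. All material values are rough, assumed figures for illustration only, not measurements from the thesis.

```python
# Illustrative comparison of sensible vs. latent heat storage in a wall section.
# All property values are assumed, round figures -- not data from the thesis.

def heat_absorbed_plain(mass_kg, cp_j_per_kg_k, delta_t_k):
    """Sensible heat only: Q = m * cp * dT."""
    return mass_kg * cp_j_per_kg_k * delta_t_k

def heat_absorbed_with_pcm(mass_concrete_kg, cp_concrete, delta_t_k,
                           mass_pcm_kg, cp_pcm, latent_heat_j_per_kg):
    """Sensible heat of both constituents plus the PCM's latent heat,
    assuming the temperature swing crosses the PCM's melting point."""
    sensible = (mass_concrete_kg * cp_concrete + mass_pcm_kg * cp_pcm) * delta_t_k
    latent = mass_pcm_kg * latent_heat_j_per_kg
    return sensible + latent

# Assumed 1 m^2 wall section, 200 kg total mass, 10 K daily swing;
# the PCM mix replaces 5% of the concrete mass with a paraffin-like PCM.
q_plain = heat_absorbed_plain(200.0, 880.0, 10.0)
q_pcm = heat_absorbed_with_pcm(190.0, 880.0, 10.0,
                               10.0, 2000.0, 180_000.0)
print(f"plain concrete: {q_plain / 1e6:.2f} MJ")   # ~1.76 MJ
print(f"with 5% PCM:    {q_pcm / 1e6:.2f} MJ")     # ~3.67 MJ, roughly double
```
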
Contributors: Sharma, Breeann (Author) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The alkali activation of aluminosilicate materials as binder systems derived from industrial byproducts has been extensively studied due to the advantages it offers in terms of enhanced material properties, while increasing sustainability through the reuse of industrial waste and byproducts and reducing the adverse impacts of ordinary portland cement (OPC) production. Fly ash and ground granulated blast furnace slag are commonly used for their content of soluble silica and aluminate species that can undergo dissolution, polymerization with the alkali, condensation on particle surfaces, and solidification. The following topics are the focus of this thesis: (i) the use of microwave-assisted thermal processing, in addition to heat curing, as a means of alkali activation, and (ii) the relative effects of the alkali cation (K or Na) in the activator (powder activators) on the mechanical properties and chemical structure of these systems. Unsuitable curing conditions instigate carbonation, which in turn lowers the pH of the system, causing significant reductions in the rate of fly ash activation and in mechanical strength development. This study explores the effects of sealing the samples during the curing process, which effectively traps the free water in the system and allows for increased aluminosilicate activation. The use of microwave curing in lieu of thermal curing is also studied in order to reduce energy consumption and for its ability to provide fast volumetric heating. Potassium-based powder activators dry-blended into the slag binder system are shown to be effective in obtaining very high compressive strengths under moist curing conditions (greater than 70 MPa), whereas sodium-based powder activation yields much lower strengths (around 25 MPa). Compressive strength decreases when fly ash is introduced into the system. Isothermal calorimetry is used to evaluate the early hydration process and to understand the reaction kinetics of the alkali-powder-activated systems. Qualitative evidence of the alkali-hydroxide concentration in the paste pore solution, obtained through electrical conductivity measurements, is also presented; the results indicate that the alkali ion concentration is higher in the pore solution of potassium-based systems. The use of advanced spectroscopic and thermal analysis techniques to distinguish the influence of the studied parameters is also discussed.
Contributors: Chowdhury, Ussala (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich-feature-set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich-feature-set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least three orders of magnitude smaller than the set created by the rich-feature-set approach. Finally, two novel approaches to NER that model the semantics of token sequences are introduced. The first method focuses on sequence content, using language models to determine whether a sequence more closely resembles entries in a lexicon of entity names or text from an unlabeled corpus. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
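
For readers unfamiliar with the technique, the sketch below trains a linear-chain CRF tagger with a small hand-built feature set, in the spirit of the rich-feature-set approach described above. It uses the Python sklearn-crfsuite library purely as a stand-in: BANNER itself is a Java system, its real feature set is far richer, and the two training sentences and IOB labels here are invented.

```python
# Toy linear-chain CRF for gene-mention NER, illustrating the feature-set idea.
import sklearn_crfsuite

def token_features(sentence, i):
    """A tiny 'rich' feature set: surface form, shape cues, and context words."""
    word = sentence[i]
    feats = {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "has_digit": any(c.isdigit() for c in word),
        "suffix3": word[-3:],
    }
    if i > 0:
        feats["prev_lower"] = sentence[i - 1].lower()
    if i < len(sentence) - 1:
        feats["next_lower"] = sentence[i + 1].lower()
    return feats

# Invented training data with IOB labels for gene/protein mentions.
sentences = [["BRCA1", "mutations", "increase", "risk"],
             ["The", "p53", "protein", "was", "assayed"]]
labels = [["B-GENE", "O", "O", "O"],
          ["O", "B-GENE", "I-GENE", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))  # tags each token with B-GENE / I-GENE / O
```
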
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Properties of random porous materials such as pervious concrete are strongly dependent on their pore structure features. This research deals with the development of an understanding of the relationship between the material structure and the mechanical and functional properties of pervious concretes. The fracture response of pervious concrete specimens proportioned for different porosities is studied as a function of the pore structure features and fiber volume fraction. Stereological and morphological methods are used to extract the relevant pore structure features of pervious concretes from planar images. A two-parameter fracture model is used to obtain the fracture toughness (KIC) and critical crack tip opening displacement (CTODc) from load-crack mouth opening displacement (CMOD) data of notched beams under three-point bending. The experimental results show that KIC is primarily dependent on the porosity of pervious concretes. For a similar porosity, an increase in pore size results in a reduction in KIC. At similar pore sizes, the effect of fibers on the post-peak response is more prominent in mixtures with a higher porosity, as shown by the residual load capacity, stress-crack extension relationships, and GR curves. These effects are explained using the mean free spacing of pores and the pore-to-pore tortuosity in these systems. A sensitivity analysis is employed to quantify the influence of material design parameters on KIC. This research has also focused on the relationship between permeability and tortuosity as it pertains to the porosity and pore size of pervious concretes. Various ideal geometric shapes with varying pore sizes and porosities were also constructed, alongside the pervious concretes, which likewise differed in pore size and porosity. The permeabilities were determined using three different methods: a Stokes solver, the lattice Boltzmann method, and the Katz-Thompson equation. These values were then compared to the tortuosity values determined using a MATLAB code that implements a pore connectivity algorithm. The tortuosity was also determined from the inverse of the conductivity obtained from the numerical analysis required for the Katz-Thompson equation. These tortuosity values were then compared to the permeabilities. The pervious concretes and ideal geometric shapes showed consistent similarities between their tortuosities and permeabilities.
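
As an illustration of the pore-connectivity idea mentioned above, the sketch below estimates geometric tortuosity on a 2D binary pore image by breadth-first search: the shortest through-path length divided by the straight-line thickness. The thesis used its own MATLAB code; this Python analogue and its random test image are assumptions for illustration only.

```python
# Geometric tortuosity of a 2D binary pore image via breadth-first search.
from collections import deque
import numpy as np

def geometric_tortuosity(pore):
    """Shortest 4-connected path through pore pixels (True = pore) from the
    top face to the bottom face, divided by the straight-line thickness."""
    nrows, ncols = pore.shape
    dist = np.full(pore.shape, -1, dtype=int)
    q = deque((0, c) for c in range(ncols) if pore[0, c])
    for r, c in q:                      # all top-face pore pixels are sources
        dist[r, c] = 0
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < nrows and 0 <= cc < ncols \
                    and pore[rr, cc] and dist[rr, cc] < 0:
                dist[rr, cc] = dist[r, c] + 1
                q.append((rr, cc))
    exit_dists = dist[nrows - 1][dist[nrows - 1] >= 0]
    if exit_dists.size == 0:
        return float("inf")             # no percolating path through the image
    return exit_dists.min() / (nrows - 1)

rng = np.random.default_rng(1)
image = rng.random((50, 50)) < 0.7      # ~70% pore fraction, illustrative only
print(f"estimated tortuosity: {geometric_tortuosity(image):.3f}")
```
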
Contributors: Rehder, Benjamin (Author) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Biological systems are complex in many dimensions, as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e., recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics, and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation will first discuss the landscape for modeling the biological system, explore the identification of targets for intervention in Boolean network models of biological interactions, and explore context specificity, both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation will explore a spectrum of biological modeling aimed at therapeutic intervention, with both formal and informal notions of biological context, in a way that will enable future work to have an even greater impact in terms of direct patient benefit on an individualized level.
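
To make the notion of a Boolean network model concrete, the sketch below simulates a synchronous Boolean network over a hypothetical three-gene wiring and finds the attractor reached from a given state. The regulatory logic is invented for illustration and is not taken from the dissertation.

```python
# Synchronous Boolean network: iterate updates until a state repeats,
# then report the cycle (the attractor).
def step(state, rules):
    """One synchronous update: every gene applies its rule to the current state."""
    return tuple(int(rule(state)) for rule in rules)

def attractor_from(state, rules):
    """Follow the trajectory until a state repeats; return the cycle."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, rules)
    return trajectory[seen[state]:]

# Invented regulatory logic over genes (A, B, C):
rules = [
    lambda s: s[2],                 # A is activated by C
    lambda s: s[0] and not s[2],    # B needs A and is repressed by C
    lambda s: not s[1],             # C is repressed by B
]
print(attractor_from((1, 0, 0), rules))  # a 2-state oscillation
```
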
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well suited to tackle the complex application demands desired by the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs): general-purpose PEs and application-specific integrated circuits (ASICs). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPMs) rather than supporting cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect, such as a Network-on-Chip (NoC). The shift to MPSoCs has been fueled by three major factors: the demand for high performance, the use of component libraries, and short design turnaround times. As customers continue to desire more and more complex applications on their embedded devices, the performance demand for these devices continues to increase. Designers have turned to MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thus reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique \cite{leary1_intro} to address the synthesis of the interconnect architecture (particularly the Network-on-Chip (NoC)). We then address the synthesis of the memory architecture of an MPSoC sub-system \cite{leary2_intro}. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique is demonstrated through extensive experimentation.
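
As a toy illustration of one sub-problem in such a flow, the sketch below places communicating processing elements onto a 2x2 mesh NoC so as to minimize hop-weighted traffic, using exhaustive search (feasible at this scale). The traffic figures are invented, and this is a generic formulation of the mapping step, not the synthesis technique developed in the thesis.

```python
# Toy NoC mapping: assign PEs to mesh tiles to minimize hop-weighted traffic.
import itertools

# Invented traffic matrix: traffic[(a, b)] = relative bandwidth from a to b.
traffic = {("cpu", "dsp"): 800, ("cpu", "accel"): 100,
           ("dsp", "accel"): 600, ("accel", "mem"): 400}
pes = ["cpu", "dsp", "accel", "mem"]
tiles = [(x, y) for x in range(2) for y in range(2)]   # 2x2 mesh coordinates

def hops(t1, t2):
    """XY-routing distance between two mesh tiles (Manhattan distance)."""
    return abs(t1[0] - t2[0]) + abs(t1[1] - t2[1])

def cost(mapping):
    """Total traffic volume weighted by hop count under the given placement."""
    return sum(vol * hops(mapping[a], mapping[b])
               for (a, b), vol in traffic.items())

# Exhaustive search is fine at this toy scale (4! = 24 placements).
best = min((dict(zip(pes, perm)) for perm in itertools.permutations(tiles)),
           key=cost)
print(best, "cost =", cost(best))
```
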
Contributors: Leary, Glenn (Author) / Chatha, Karamvir S (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Buildings consume a large portion of the world's energy, but with the integration of phase change materials (PCMs) into building elements this energy cost can be greatly reduced. The addition of PCMs into building elements, however, makes it a challenge to model and analyze how the material actually affects the energy flow and temperatures in the system. This research work presents a comprehensive computer program used to model and analyze PCM-embedded wall systems. The finite element method (FEM) provides the tool to analyze the energy flow of these systems. Finite element analysis (FEA) can model the transient behavior of a typical climate cycle along with the nonlinear problems that the addition of PCM introduces. Phase change materials are also a costly material expense. The initial expense of using PCMs can be compensated by the reduction in energy costs they provide. Optimization is the tool used to determine the optimal balance between the amount of PCM added to a wall and the energy savings that layer will provide. The integration of these two tools into a computer program allows models to be efficiently created, analyzed, and optimized. The program was then used to compare two different wall models: a wall with a single layer of PCM and a wall with two different PCM layers. The effects of the PCMs on the inside wall temperature, along with the energy flow across the wall, are computed. The numerical results show that the multi-layer PCM wall was more energy efficient and cost effective than the single-PCM-layer wall. A structural analysis was then performed on the optimized designs using ABAQUS v. 6.10 to ensure that the structural integrity of the wall was not affected by adding the PCM layer(s).
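
As a simplified analogue of the modeling described here, the sketch below solves 1D transient heat conduction through a wall slice containing a PCM layer, using an explicit finite-difference scheme and the common "apparent heat capacity" treatment of latent heat. The thesis uses FEM plus optimization; this scheme and every material value below are illustrative assumptions.

```python
# 1D transient conduction through a 20 cm wall with a PCM layer in the middle,
# explicit finite differences, latent heat smeared over the melting range.
import numpy as np

nx, dx = 50, 0.004                    # 50 cells of 4 mm = 20 cm wall (assumed)
k, rho = 1.0, 1800.0                  # W/m.K and kg/m^3 (assumed)
cp_base = 900.0                       # J/kg.K baseline specific heat (assumed)
t_melt, t_range, latent = 24.0, 2.0, 150_000.0   # PCM parameters (assumed)
pcm = (np.arange(nx) >= 20) & (np.arange(nx) < 30)  # PCM occupies a middle layer

def apparent_cp(temp):
    """Baseline cp, plus latent heat spread over the melting range in PCM cells."""
    cp = np.full(nx, cp_base)
    melting = pcm & (np.abs(temp - t_melt) < t_range / 2)
    cp[melting] += latent / t_range
    return cp

temp = np.full(nx, 20.0)
dt = 0.2 * rho * cp_base * dx**2 / k  # well below the explicit stability limit
for _ in range(10_000):               # ~14 hours of simulated time
    temp[0], temp[-1] = 35.0, 22.0    # hot exterior, conditioned interior
    flux = k * np.diff(temp) / dx     # heat flux at each cell interface
    temp[1:-1] += dt * np.diff(flux) / (rho * apparent_cp(temp)[1:-1] * dx)
print(f"temperature just inside the interior face: {temp[-2]:.2f} C")
```
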
Contributors: Stockwell, Amie (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Currently, to interact with computer-based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we will need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As a part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon-learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon-learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the system's inbuilt parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added new features, such as various system configurations and a memory cache, to the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted the performance of the system by reducing the computation time by a factor of 8, and improved the usability of the system. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system which helps in managing large sets of computer files using policies. This system provides a rule-oriented interface language whose syntactic structure is like that of a procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We developed a corpus of 100 policy statements and manually translated them into IPDL. This corpus is then used for the evaluation of the NL2KR system, on which we performed 10-fold cross-validation. Furthermore, using this corpus, we illustrate how the different components of our NL2KR system work.
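
To illustrate the lambda-calculus lexicon idea at the heart of NL2KR, the tiny sketch below composes word meanings by function application. Python lambdas stand in for typed lambda-calculus formulas, and the lexicon entries are invented; NL2KR's actual learning and parsing machinery is far more involved.

```python
# Composing invented word meanings by function application (beta reduction).
lexicon = {
    "John":  "john",
    "Mia":   "mia",
    # transitive verb meaning: \y. \x. likes(x, y)
    "likes": lambda y: lambda x: f"likes({x},{y})",
}

def translate(subject, verb, obj):
    """Combine meanings: apply the verb to the object, then to the subject."""
    vp = lexicon[verb](lexicon[obj])   # beta-reduce to \x. likes(x, mia)
    return vp(lexicon[subject])        # beta-reduce to likes(john, mia)

print(translate("John", "likes", "Mia"))   # -> likes(john,mia)
```
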
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan, and it enables a variety of complex LTL specifications to be described succinctly and intuitively by the user without requiring knowledge or understanding of LTL. It thus opens avenues for its use by personnel in military, warehouse management, and search-and-rescue missions. This thesis describes the construction of LTL specifications for various robot navigation scenarios using the visual interface developed, and it leverages existing LTL-based motion planners to carry out the task plan on a robot.
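
A minimal sketch of the kind of translation such an interface performs appears below: an ordered waypoint list becomes a nested "eventually" sequencing formula, with an "always avoid" constraint conjoined. The exact encoding used by the thesis tool may differ; this only illustrates the waypoint-to-LTL idea.

```python
# Turn an ordered waypoint list into an LTL sequencing formula with avoidance.
def visit_in_order(waypoints):
    """F(p1 & F(p2 & ... F(pn)...)): reach the waypoints in the given order."""
    formula = waypoints[-1]
    for wp in reversed(waypoints[:-1]):
        formula = f"{wp} & F({formula})"
    return f"F({formula})"

def task_spec(waypoints, avoid):
    """Sequencing goal conjoined with G(!r) safety constraints."""
    parts = [visit_in_order(waypoints)]
    parts += [f"G(!{region})" for region in avoid]
    return " & ".join(parts)

print(task_spec(["p1", "p2", "p3"], avoid=["obstacle"]))
# -> F(p1 & F(p2 & F(p3))) & G(!obstacle)
```
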
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics of Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
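
For readers new to ASP, the sketch below illustrates the propositional stable model semantics underlying this discussion: enumerate candidate atom sets, form the Gelfond-Lifschitz reduct, and keep the candidates that equal the least model of their reduct. The three-rule example program is a standard textbook-style illustration, not drawn from the dissertation.

```python
# Brute-force stable models of a ground normal logic program.
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body). Example program:
#   p :- not q.     q :- not p.     r :- p.
program = [("p", [], ["q"]), ("q", [], ["p"]), ("r", ["p"], [])]
atoms = {a for h, pos, neg in program for a in [h, *pos, *neg]}

def least_model(definite_rules):
    """Fixpoint (least model) of a negation-free program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def stable_models(program):
    for cand in chain.from_iterable(combinations(sorted(atoms), r)
                                    for r in range(len(atoms) + 1)):
        s = set(cand)
        # Reduct: drop rules whose negative body intersects S, strip the "not"s.
        reduct = [(h, pos) for h, pos, neg in program if not s & set(neg)]
        if least_model(reduct) == s:
            yield s

print(list(stable_models(program)))  # -> [{'q'}, {'p', 'r'}] (set display order may vary)
```
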
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013