This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 1 - 10 of 129
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real-world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating the data with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts toward a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real-world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to solve several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on the face recognition and facial expression recognition problems (which are commonly encountered in real-world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
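As a rough illustration of the batch-mode idea described above, the sketch below selects a batch of unlabeled instances by predictive entropy. This is a simple uncertainty-sampling baseline, not the dissertation's adaptive or IQP-based formulations; the model, data, and batch size are invented for the example.

```python
# Minimal batch-mode active learning sketch: pick the most uncertain instances
# (highest predictive entropy) as the next batch to send for manual labeling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_batch(model, X_unlabeled, batch_size=10):
    """Return indices of the batch_size most uncertain unlabeled instances."""
    proba = model.predict_proba(X_unlabeled)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]

# toy data: 200 labeled seeds, 1000 unlabeled candidates, 20 features
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
X_unlab = rng.normal(size=(1000, 20))

clf = LogisticRegression(max_iter=500).fit(X_lab, y_lab)
query_idx = select_batch(clf, X_unlab, batch_size=25)
print(query_idx[:5])   # indices to hand to the human annotators
```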
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In a laboratory setting, the soil volume change behavior is best represented by using various testing standards on undisturbed or remolded samples. Whenever possible, it is most precise to use undisturbed samples to assess the volume change behavior, but in the absence of undisturbed specimens, remolded samples can be used. If that is the case, the soil is compacted to in-situ density and water content (or matric suction), which should best represent the expansive profile in question. It is standard practice to subject the specimen to a wetting process at a particular net normal stress. Even though currently accepted laboratory testing standard procedures provide insight on how the profile conditions change with time, these procedures do not assess the long term effects on the soil due to climatic changes. In this experimental study, an assessment and quantification of the effect of multiple wetting/drying cycles on the volume change behavior of two different naturally occurring soils was performed. The wetting and drying cycles imposed extreme swings in matric suction. During the drying cycle, the expansive soil was subjected to extreme conditions, which decreased the moisture content to below the shrinkage limit. Both soils were remolded at five different compacted conditions and loaded to five different net normal stresses, and each sample was subjected to six wetting and drying cycles. During the assessment, it was evident from the results that the swell/collapse strain is highly non-linear at low stress levels. The strain-net normal stress relationship cannot be defined by one single function without transforming the data; therefore, the dataset needs to be fitted to a bi-modal logarithmic function or to a logarithmic transformation of net normal stress in order to use a third order polynomial fit. It was also determined that the moisture content changes with time are best fit by non-linear functions. For the drying cycle, the radial strain was determined to have a constant rate of change with respect to the axial strain. However, for the wetting cycle, there was not enough radial strain data to develop correlations, and therefore an assumption was made based on 55 different test measurements/observations. In general, it was observed that after each subsequent cycle, higher swelling was exhibited at net normal stresses below a threshold value, while higher collapse potential was observed at net normal stresses above that threshold. Furthermore, the swelling pressure underwent a reduction in all cases. In particular, the Anthem soil exhibited a reduction in swelling pressure of at least 20 percent after the first wetting/drying cycle, while the Colorado soil exhibited a reduction of 50 percent. After about the fourth cycle, the swelling pressure seemed to stabilize at an equilibrium value, at which a reduction of 46 percent was observed for the Anthem soil and 68 percent for the Colorado soil. The impact of the initial compacted conditions on heave characteristics was also studied. Results indicated that materials compacted at higher densities exhibited greater swell potential. When comparing specimens compacted at the same density but at different moisture contents (matric suctions), it was observed that specimens compacted at higher suction exhibited higher swelling potential when subjected to the same net normal stress. The least amount of swelling strain was observed on specimens compacted at the lowest dry density and the lowest matric suction (highest water content). The results from the laboratory testing were used to develop ultimate heave profiles for both soils. This analysis showed that even though the swell pressure for each soil decreased with cycles, the amount of heave would increase or decrease depending upon the initial compaction condition. When the specimen was compacted at 110% of optimum moisture content and 90% of maximum dry density, it resulted in an ultimate heave reduction of 92 percent for the Anthem soil and 685 percent for the Colorado soil. On the other hand, when the soils were compacted at 90% of optimum moisture content and 100% of the maximum dry density, Anthem specimens heaved 78% more and Colorado specimens' heave was reduced by 69%. Based on the results obtained, it is evident that the current methods to estimate heave and swelling pressure do not consider the effect of wetting/drying cycles and fail to capture the free swell potential of the soil. Recommendations for improving current methods of practice are provided.
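The strain-stress fit mentioned above (a third-order polynomial after a logarithmic transformation of net normal stress) can be sketched as follows; the stress and strain values are invented placeholders, not data from this study.

```python
# Fit swell/collapse strain as a cubic polynomial in log10(net normal stress).
import numpy as np

stress_kpa = np.array([10.0, 25.0, 50.0, 100.0, 200.0])   # net normal stress (kPa)
strain_pct = np.array([4.2, 2.1, 0.6, -0.8, -2.3])        # swell (+) / collapse (-) strain, %

coeffs = np.polyfit(np.log10(stress_kpa), strain_pct, deg=3)
fit = np.poly1d(coeffs)

print(round(fit(np.log10(75.0)), 2))   # interpolated strain at 75 kPa
```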
Contributors: Rosenbalm, Daniel Curtis (Author) / Zapata, Claudia E (Thesis advisor) / Houston, Sandra L. (Committee member) / Kavazanjian, Edward (Committee member) / Witczak, Mathew W (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
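A minimal sketch of linear-chain CRF NER with a small hand-built feature set is shown below. It assumes the third-party sklearn-crfsuite package; BANNER itself is a separate system, and the tokens, labels, and features here are invented for illustration (a tiny fraction of a rich feature set).

```python
# Linear-chain CRF NER sketch: per-token feature dicts -> BIO label sequence.
import sklearn_crfsuite

def token_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "has_digit": any(c.isdigit() for c in w),
        "suffix3": w[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# toy training data in BIO format: one sentence mentioning a gene and a disease
sents = [["BRCA1", "mutations", "increase", "cancer", "risk", "."]]
labels = [["B-GENE", "O", "O", "B-DISEASE", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X)[0])
```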
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids. Part I is centered on wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Building on these spatial and temporal characterizations, finite state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response is investigated. Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment (DSA) by using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line DSA schemes are devised, aiming to handle various factors (including variations of operating conditions, forced system topology change, and loss of critical synchrophasor measurements) that can have significant impact on the performance of conventional data-mining based on-line DSA schemes. Then, in the context of post-disturbance analysis, fault detection and localization of line outages is investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations in the power grid. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
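The finite-state Markov chain point forecast can be sketched as follows: discretize generation into states, estimate a transition matrix from the historical series, and forecast the conditional mean of the next state. The synthetic series, state count, and capacity are illustrative assumptions, not the wind farm data used in the study.

```python
# Finite-state Markov chain forecast sketch for wind farm output.
import numpy as np

rng = np.random.default_rng(1)
power = np.cumsum(rng.normal(0, 2, 2000)) % 100          # toy generation series (MW)

n_states = 10
edges = np.linspace(0, 100, n_states + 1)
states = np.clip(np.digitize(power, edges) - 1, 0, n_states - 1)

counts = np.zeros((n_states, n_states))
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1
P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)   # row-stochastic transition matrix

centers = 0.5 * (edges[:-1] + edges[1:])                  # representative power per state
forecast = P[states[-1]] @ centers                        # one-step-ahead expected output
print(f"next-step forecast: {forecast:.1f} MW")
```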
Contributors: He, Miao (Author) / Zhang, Junshan (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Hedman, Kory (Committee member) / Si, Jennie (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Currently, to interact with computer based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we will need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As a part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the system's inbuilt parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added new features such as various system configurations and a memory cache in the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted performance of the system by reducing the computation time by a factor of 8, and improved the usability of the system. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system, which helps in managing large sets of computer files using policies. This system provides a rule-oriented interface language whose syntactic structure is like that of any procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We develop a corpus of 100 policy statements and manually translate them to IPDL. This corpus is then used for the evaluation of the NL2KR system, on which we performed 10-fold cross-validation. Furthermore, using this corpus, we illustrate how different components of our NL2KR system work.
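The idea of word meanings as lambda-calculus formulas that compose into a sentence meaning can be illustrated with a toy Python sketch. Python lambdas stand in for the system's lambda terms; the lexicon entries and the example sentence are invented, not drawn from the NL2KR lexicon or the iRODS corpus.

```python
# Toy lexicon: each word's meaning is a (curried) function; applying them in
# order composes the logical form of the sentence.
lexicon = {
    "every": lambda p: lambda q: f"forall X: {p('X')} -> {q('X')}",
    "file": lambda x: f"file({x})",
    "is_backed_up": lambda x: f"backed_up({x})",
}

# "every file is backed up" -> apply the meanings left to right
meaning = lexicon["every"](lexicon["file"])(lexicon["is_backed_up"])
print(meaning)   # forall X: file(X) -> backed_up(X)
```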
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on a structure, and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework of the algorithm allows it to function completely in an unsupervised manner by probabilistically accounting for all measurement origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage were found to correlate well with the true locations at long crack lengths, but a loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures, the electromechanically coupled LISA/SIM model was used to simulate the effects of temperature. The numerical results showed that this approach would be capable of experimentally estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase the accuracy of localization at temperatures above 120°C, when error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
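A rough sketch of the time-of-flight localization step under an assumed linear velocity-temperature model is given below; the sensor layout, velocity coefficients, temperature, and measured times are illustrative assumptions rather than values from this work.

```python
# Least-squares damage localization from actuator->damage->sensor times of flight,
# with a simple assumed linear drop in group velocity with temperature.
import numpy as np
from scipy.optimize import least_squares

def velocity(temp_c, v20=5400.0, dv_dT=-1.5):
    """Assumed group velocity (m/s) as a linear function of temperature (deg C)."""
    return v20 + dv_dT * (temp_c - 20.0)

actuator = np.array([0.0, 0.0])
sensors = np.array([[0.30, 0.00], [0.00, 0.30], [0.30, 0.30]])   # metres
tof_meas = np.array([67.5e-6, 80.6e-6, 80.6e-6])                  # seconds (illustrative)

def residuals(x, temp_c=60.0):
    v = velocity(temp_c)
    path = np.linalg.norm(x - actuator) + np.linalg.norm(sensors - x, axis=1)
    return path / v - tof_meas

sol = least_squares(residuals, x0=[0.15, 0.15])
print(sol.x)   # estimated damage coordinates (m)
```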
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Unsaturated soil mechanics is becoming a part of geotechnical engineering practice, particularly in applications to moisture sensitive soils such as expansive and collapsible soils and in geoenvironmental applications. The soil water characteristic curve, which describes the amount of water in a soil versus soil suction, is perhaps the most important soil property function for application of unsaturated soil mechanics. The soil water characteristic curve has been used extensively for estimating unsaturated soil properties, and a number of fitting equations for development of soil water characteristic curves from laboratory data have been proposed by researchers. Although not always mentioned, the underlying assumption of soil water characteristic curve fitting equations is that the soil is sufficiently stiff so that there is no change in total volume of the soil while measuring the soil water characteristic curve in the laboratory, and researchers rarely take volume change of soils into account when generating or using the soil water characteristic curve. Further, there has been little attention to the applied net normal stress during laboratory soil water characteristic curve measurement, and often zero to only token net normal stress is applied. The applied net normal stress also affects the volume change of the specimen during soil suction change. When a soil changes volume in response to suction change, failure to consider the volume change of the soil leads to errors in the estimated air-entry value and the slope of the soil water characteristic curve between the air-entry value and the residual moisture state. Inaccuracies in the soil water characteristic curve may lead to inaccuracies in estimated soil property functions such as unsaturated hydraulic conductivity. A number of researchers have recently recognized the importance of considering soil volume change in soil water characteristic curves. Correct methods of soil water characteristic curve measurement and determination that consider soil volume change, and the resulting impacts on the unsaturated hydraulic conductivity function, were the primary focus of this study. Emphasis was placed upon the effect of volume change consideration on soil water characteristic curves for expansive clays and other high volume change soils. The research involved extensive literature review and laboratory soil water characteristic curve testing on expansive soils. The effect of the initial state of the specimen (i.e., slurry versus compacted) on soil water characteristic curves with regard to volume change effects, and the effect of net normal stress on volume change for determination of these curves, were studied for expansive clays. Hysteresis effects were included in laboratory measurements of soil water characteristic curves, as both wetting and drying paths were used. Impacts of soil water characteristic curve volume change considerations on fluid flow computations and associated suction-change induced soil deformations were studied through numerical simulations. The study includes both coupled and uncoupled flow and stress-deformation analyses, demonstrating that the impact of volume change consideration on the soil water characteristic curve and the estimated unsaturated hydraulic conductivity function can be quite substantial for high volume change soils.
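One common SWCC fitting form is the van Genuchten equation; the sketch below fits it to a small invented data set with scipy. This is not necessarily the fitting equation used in this study, and no volume-change correction is applied.

```python
# Fit a van Genuchten soil water characteristic curve to suction-water content data.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of matric suction psi (kPa)."""
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** (1.0 - 1.0 / n)

suction = np.array([1.0, 10.0, 50.0, 100.0, 500.0, 1000.0, 10000.0])   # kPa
theta = np.array([0.48, 0.48, 0.44, 0.40, 0.27, 0.22, 0.14])           # m3/m3 (invented)

params, _ = curve_fit(van_genuchten, suction, theta,
                      p0=[0.05, 0.48, 0.01, 1.4],
                      bounds=([0.0, 0.3, 1e-4, 1.05], [0.3, 0.6, 1.0, 3.0]))
print(dict(zip(["theta_r", "theta_s", "alpha", "n"], params.round(3))))
```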
Contributors: Bani Hashem, Elham (Author) / Houston, Sandra L. (Thesis advisor) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Despite significant advances in digital pathology and automation sciences, current diagnostic practice for cancer detection primarily relies on a qualitative manual inspection of tissue architecture and cell and nuclear morphology in stained biopsies using low-magnification, two-dimensional (2D) brightfield microscopy. The efficacy of this process is limited by inter-operator variations in sample preparation and imaging, and by inter-observer variability in assessment. Over the past few decades, the predictive value of quantitative morphology measurements derived from computerized analysis of micrographs has been compromised by the inability of 2D microscopy to capture information in the third dimension, and by the anisotropic spatial resolution inherent to conventional microscopy techniques that generate volumetric images by stacking 2D optical sections to approximate 3D. To gain insight into the 3D nature of cells, this dissertation explores the application of a new technology for single-cell optical computed tomography (optical cell CT), a promising 3D tomographic imaging technique which uses visible light absorption to image stained cells individually with sub-micron, isotropic spatial resolution. This dissertation provides a scalable analytical framework to perform fully-automated 3D morphological analysis from transmission-mode optical cell CT images of hematoxylin-stained cells. The developed framework performs rapid and accurate quantification of 3D cell and nuclear morphology, facilitates assessment of morphological heterogeneity, and generates shape- and texture-based biosignatures predictive of the cell state. Custom 3D image segmentation methods were developed to precisely delineate volumes of interest (VOIs) from reconstructed cell images. Comparison with user-defined ground truth assessments yielded an average agreement (Dice coefficient) of 94% for the cell and its nucleus. Seventy-nine biologically relevant morphological descriptors (features) were computed from the segmented VOIs, and statistical classification methods were implemented to determine the subset of features that best predicted cell health. The efficacy of our proposed framework was demonstrated on an in vitro model of multistep carcinogenesis in human Barrett's esophagus (BE), and classifier performance using our 3D morphometric analysis was compared against computerized analysis of 2D image slices that reflected conventional cytological observation. Our results enable sensitive and specific nuclear grade classification for early cancer diagnosis and underline the value of the approach as an objective adjunctive tool to better understand morphological changes associated with malignant transformation.
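The agreement figure quoted above is the Dice coefficient; a minimal sketch of computing it for 3D binary segmentation masks follows. The random volumes are placeholders for a segmented VOI and its user-defined ground truth.

```python
# Dice coefficient between two 3D binary segmentation masks.
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary volumes of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.5
pred = truth.copy()
pred[:4] = ~pred[:4]          # perturb a slab to mimic segmentation error
print(round(dice(truth, pred), 3))
```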
Contributors: Nandakumar, Vivek (Author) / Meldrum, Deirdre R (Thesis advisor) / Nelson, Alan C. (Committee member) / Karam, Lina J (Committee member) / Ye, Jieping (Committee member) / Johnson, Roger H (Committee member) / Bussey, Kimberly J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In most social networking websites, users are allowed to perform interactive activities. One of the fundamental features that these sites provide is connecting with like-minded users. On one hand, this activity makes online connections visible and tangible; on the other hand, it makes exploring our connections and expanding our social networks easier. The aggregation of people who share common interests forms social groups, which are fundamental parts of our social lives. Social behavioral analysis at a group level is an active research area and attracts much interest from industry. Challenges of my work mainly arise from the scale and complexity of user-generated behavioral data. The multiple types of interactions, the highly dynamic nature of social networking, and volatile user behavior mean that these data are, in general, complex and big. Effective and efficient approaches are required to analyze and interpret such data. My work provides effective channels to help connect the like-minded and, furthermore, to understand user behavior at a group level. The contributions of this dissertation are threefold: (1) proposing a novel representation of collective tagging knowledge via tag networks; (2) proposing the new information spreader identification problem in egocentric social networks; (3) defining group profiling as a systematic approach to understanding social groups. In sum, the research proposes novel concepts and approaches for connecting the like-minded, enables the understanding of user groups, and exposes interesting research opportunities.
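A small sketch of the tag-network representation is given below: tags become nodes and co-occurrence on the same resource becomes a weighted edge. It assumes the networkx package, and the tagged bookmarks are invented; the dissertation's actual construction of tag networks may differ.

```python
# Build a weighted tag co-occurrence network from tagged resources.
from itertools import combinations
import networkx as nx

bookmarks = [
    {"python", "machine-learning", "tutorial"},
    {"python", "data-mining"},
    {"machine-learning", "data-mining", "tutorial"},
]

G = nx.Graph()
for tags in bookmarks:
    for t1, t2 in combinations(sorted(tags), 2):
        w = G[t1][t2]["weight"] + 1 if G.has_edge(t1, t2) else 1
        G.add_edge(t1, t2, weight=w)

print(sorted(G.edges(data="weight")))
```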
Contributors: Wang, Xufei (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Sundaram, Hari (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis presents a probabilistic evaluation of multiple laterally loaded drilled pier foundation design approaches using extensive data from a geotechnical investigation for a high voltage electric transmission line. A series of Monte Carlo simulations provides insight into the computed level of reliability considering site standard penetration test (SPT) blow count variability alone (i.e., assuming all other aspects of the design problem do not contribute error or bias). Evaluated methods include the Eurocode 7 Geotechnical Design procedures, the Federal Highway Administration drilled shaft LRFD design method, the Electric Power Research Institute transmission foundation design procedure, and a site-specific variability based approach previously suggested by the author of this thesis and others. The analysis method is defined by three phases: a) evaluate the spatial variability of an existing subsurface database; b) derive theoretical foundation designs from the database in accordance with the various design methods identified; c) conduct Monte Carlo simulations to compute the reliability of the theoretical foundation designs. Over several decades, reliability-based foundation design (RBD) methods have been developed and implemented to varying degrees for buildings, bridges, electric systems and other structures. In recent years, an effort has been made by researchers, professional societies and other standard-developing organizations to publish design guidelines, manuals and standards concerning RBD for foundations. Most of these approaches rely on statistical methods for quantifying load and resistance probability distribution functions with defined reliability levels. However, each varies with regard to the influence of site-specific variability on resistance. An examination of the influence of site-specific variability is required to provide direction for incorporating the concept into practical RBD methods. Recent surveys of transmission line engineers by the Electric Power Research Institute (EPRI) demonstrate that RBD methods for the design of transmission line foundations have not been widely adopted. In the absence of a unifying design document with established reliability goals, transmission line foundations have historically performed very well, with relatively few failures. However, such a track record with no set reliability goals suggests that, at least in some cases, a financial premium has likely been paid.
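Phase (c) can be sketched as a simple Monte Carlo loop in which only the SPT blow count varies; the lognormal parameters and the toy capacity model below are illustrative assumptions, not the thesis's design equations.

```python
# Monte Carlo sketch: probability that a trial pier design under-performs,
# driven only by SPT blow-count variability.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

mean_N, cov_N = 25.0, 0.35                       # blow count mean and coefficient of variation
sigma_ln = np.sqrt(np.log(1 + cov_N**2))
mu_ln = np.log(mean_N) - 0.5 * sigma_ln**2
N = rng.lognormal(mu_ln, sigma_ln, n_sims)       # sampled blow counts

capacity = 12.0 * N                              # toy lateral capacity model (kN)
demand = 180.0                                   # design load (kN)

p_failure = np.mean(capacity < demand)
print(f"probability of failure ≈ {p_failure:.4f}")
```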
Contributors: Heim, Zackary (Author) / Houston, Sandra (Thesis advisor) / Witczak, Matthew (Committee member) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2014