Matching Items (645)

Description

Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Through l1-norm based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications that benefit from the group-structured sparse learning technique. In the first application, I study the Alzheimer's disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, producing a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotating the developmental stage of Drosophila embryos in gene expression images. It also provides a stage score that enables each embryo to be annotated more finely, dividing embryos into early and late periods of development within standard stage demarcations. Stage scores illuminate global gene activities and changes more clearly, and the refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
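
To make the group-structured penalty concrete, the sketch below (not the thesis's algorithm) evaluates a weighted group-l2 penalty over possibly overlapping feature groups and runs a plain proximal-gradient loop on a toy least-squares problem; the blockwise soft-thresholding prox used here is exact only when groups do not overlap, which is precisely the gap an overlapping-group formulation must address. All groups, weights, and data are invented for illustration.

```python
import numpy as np

def group_l2_penalty(x, groups, weights):
    """Sum of weighted l2 norms over (possibly overlapping) feature groups."""
    return sum(w * np.linalg.norm(x[g]) for g, w in zip(groups, weights))

def prox_group_lasso(x, groups, weights, step):
    """Blockwise soft-thresholding: the exact prox only when groups do NOT overlap."""
    out = x.copy()
    for g, w in zip(groups, weights):
        norm = np.linalg.norm(x[g])
        if norm > 0:
            out[g] = max(0.0, 1.0 - step * w / norm) * x[g]
    return out

# Toy proximal-gradient loop for 0.5 * ||Ax - b||^2 + group penalty (non-overlapping groups).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 8)), rng.normal(size=50)
groups = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5, 6, 7])]
weights = [1.0, 1.0, 1.0]
x = np.zeros(8)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the smooth part
for _ in range(200):
    grad = A.T @ (A @ x - b)
    x = prox_group_lasso(x - step * grad, groups, weights, step)
print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + group_l2_penalty(x, groups, weights))
```
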
Contributors: Yuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real-world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts toward a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models for real-world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on face recognition and facial expression recognition problems (which are commonly encountered in real-world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
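
As one illustration of the batch-selection idea, the sketch below greedily picks a batch of unlabeled points by trading predictive uncertainty against redundancy with points already chosen. This is a generic heuristic, not the dissertation's single-formulation or IQP-relaxation methods; the toy data, the lam trade-off weight, and the RBF similarity are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def select_batch(clf, X_pool, batch_size, lam=0.5):
    """Greedily pick a batch that balances uncertainty against redundancy."""
    proba = clf.predict_proba(X_pool)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)   # uncertainty score
    sim = rbf_kernel(X_pool)                                    # pairwise similarity
    chosen = []
    for _ in range(batch_size):
        redundancy = sim[:, chosen].max(axis=1) if chosen else np.zeros(len(X_pool))
        score = entropy - lam * redundancy
        score[chosen] = -np.inf                                 # never pick a point twice
        chosen.append(int(np.argmax(score)))
    return chosen

rng = np.random.default_rng(1)
X_lab = rng.normal(size=(20, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)       # simulated labels for the seed set
X_pool = rng.normal(size=(200, 5))          # unlabeled pool
clf = LogisticRegression().fit(X_lab, y_lab)
print("query these pool indices:", select_batch(clf, X_pool, batch_size=5))
```
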
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Reductive dechlorination by members of the bacterial genus Dehalococcoides is a common and cost-effective avenue for in situ bioremediation of sites contaminated with the chlorinated solvents trichloroethene (TCE) and perchloroethene (PCE). The overarching goal of my research was to address some of the challenges associated with bioremediation timeframes by improving the rates of reductive dechlorination and the growth of Dehalococcoides in mixed communities. Biostimulation of contaminated sites or microcosms with electron donor fails to consistently promote dechlorination of PCE/TCE beyond cis-dichloroethene (cis-DCE), even when the presence of Dehalococcoides is confirmed. Supported by data from microcosm experiments, I showed that the stalling at cis-DCE is due to H2 competition, in which components of the soil or sediment serve as electron acceptors for competing microorganisms. Once this competition was minimized through selective enrichment techniques, I showed how to obtain both fast rates and high-density Dehalococcoides using three distinct enrichment cultures. Having achieved a heightened awareness of the fierce competition for electron donor, I then identified bicarbonate (HCO3-) as a potential H2 sink during reductive dechlorination. HCO3- is the natural buffer in groundwater but also the electron acceptor for hydrogenotrophic methanogens and homoacetogens, two microbial groups commonly encountered with Dehalococcoides. By testing a range of concentrations in batch experiments, I showed that methanogens are favored at low HCO3- and homoacetogens at high HCO3-. The high HCO3- concentrations increased the H2 demand, which negatively affected the rates and extent of dechlorination. Applying this knowledge of microbial community management, I ran the first successful continuous stirred-tank reactor (CSTR) at a 3-d hydraulic retention time for cultivation of dechlorinating cultures. I demonstrated that, under carefully selected conditions, cultivation of Dehalococcoides in a CSTR at short retention times is feasible, resulting in robust cultures capable of fast dechlorination. Lastly, I provide a systematic insight into the effect of high ammonia on communities involved in dechlorination of chloroethenes. This work documents the potential use of landfill leachate as a substrate for dechlorination and an increased tolerance of Dehalococcoides to high ammonia concentrations (2 g L-1 NH4+-N) without loss of the ability to dechlorinate TCE to ethene.
Contributors: Delgado, Anca Georgiana (Author) / Krajmalnik-Brown, Rosa (Thesis advisor) / Cadillo-Quiroz, Hinsby (Committee member) / Halden, Rolf U. (Committee member) / Rittmann, Bruce E. (Committee member) / Stout, Valerie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Human breath is a concoction of thousands of compounds that carries a breath-print of physiological processes in the body. Though breath is a non-invasive and easy-to-handle biological fluid, its analysis for clinical diagnosis is not very common, partly because cost-effective and convenient tools for such analysis are unavailable. The scientific literature is full of novel sensor ideas, but working devices are few because their development is challenging. The challenges include trace-level detection, the presence of hundreds of interfering compounds, excessive humidity, differing sampling regulations and personal variability. To meet these challenges as well as deliver a low-cost solution, optical sensors based on specific colorimetric chemical reactions on mesoporous membranes have been developed. Sensor hardware utilizing a cost-effective and ubiquitously available light source (LED) and detector (webcam/photodiodes) has been developed and optimized for sensitive detection. A sample-conditioning mouthpiece suitable for portable sensors was developed and integrated. The sensors can communicate with mobile phones, realizing the idea of m-health for easy personal health monitoring in free-living conditions. Nitric oxide and acetone were chosen as analytes of interest. Nitric oxide levels in breath correlate with lung inflammation, which makes them useful for asthma management. Acetone levels increase during ketosis resulting from fat metabolism in the body; monitoring breath acetone thus provides useful information to people with type 1 diabetes, epileptic children on ketogenic diets and people following fitness plans for weight loss.
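
As a loose illustration of how such a colorimetric readout might be quantified, the sketch below fits a linear calibration between analyte concentration and the color change measured from a webcam region of interest, then inverts it for an unknown sample. The numbers are hypothetical and the linear model is an assumption; this is not the device's actual calibration procedure.

```python
import numpy as np

# Hypothetical calibration points: known analyte concentration (ppb) vs. the measured
# color change of the sensing membrane (e.g., a normalized drop in the webcam ROI's
# mean green-channel intensity). All values are made up for the example.
conc_ppb     = np.array([0.0, 5.0, 10.0, 25.0, 50.0])
color_change = np.array([0.00, 0.04, 0.09, 0.21, 0.43])

slope, intercept = np.polyfit(conc_ppb, color_change, 1)   # linear calibration fit

def estimate_concentration(measured_change):
    """Invert the calibration line to read out an unknown sample."""
    return (measured_change - intercept) / slope

print("estimated concentration: %.1f ppb" % estimate_concentration(0.15))
```
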
Contributors: Prabhakar, Amlendu (Author) / Tao, Nongjian (Thesis advisor) / Forzani, Erica (Committee member) / Lindsay, Stuart (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Transmission expansion planning (TEP) is a complex decision-making process that requires comprehensive analysis to determine the time, location, and number of electric power transmission facilities needed in the future power grid. This dissertation investigates solving TEP problems for large power systems and can be divided into two parts. The first part focuses on developing a more accurate network model for TEP studies. First, a mixed-integer linear programming (MILP) based TEP model is proposed for solving multi-stage TEP problems. Compared with previous work, the proposed approach reduces the number of variables and constraints needed and improves computational efficiency significantly. Second, the AC power flow model is applied to TEP models, and relaxations and reformulations are proposed to make the AC model based TEP problem solvable. Third, a convexified AC network model is proposed for TEP studies with reactive power and off-nominal bus voltage magnitudes included in the model. A MILP-based loss model and its relaxations are also investigated. The second part of this dissertation investigates uncertainty modeling in the TEP problem. A two-stage stochastic TEP model is proposed, and decomposition algorithms based on the L-shaped method and progressive hedging (PH) are developed to solve the stochastic model. Results indicate that the stochastic TEP model gives a more accurate estimate of the annual operating cost than the deterministic TEP model, which considers only the peak load.
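
The toy sketch below is not the dissertation's MILP or stochastic formulation; it simply enumerates candidate-line combinations on an invented three-bus system, checks DC power-flow feasibility and thermal limits for each, and reports the cheapest feasible expansion plan, to show the kind of decision a TEP model automates.

```python
import itertools
import numpy as np

# Invented 3-bus system: bus 0 holds the (slack) generator, buses 1 and 2 carry load.
load = {1: 60.0, 2: 90.0}                                   # MW
existing = [(0, 1, 10.0, 80.0), (0, 2, 10.0, 50.0)]         # (from, to, susceptance, limit MW)
candidates = [((0, 2, 10.0, 50.0), 100.0),                  # (candidate line, build cost)
              ((1, 2, 10.0, 60.0), 80.0)]

def dc_feasible(lines):
    """Solve a DC power flow with bus 0 as slack and check every line limit."""
    B = np.zeros((3, 3))
    for f, t, b, _ in lines:
        B[f, f] += b
        B[t, t] += b
        B[f, t] -= b
        B[t, f] -= b
    injections = np.array([-load[1], -load[2]])             # non-slack bus injections
    try:
        theta = np.zeros(3)
        theta[1:] = np.linalg.solve(B[1:, 1:], injections)
    except np.linalg.LinAlgError:                           # islanded (singular) network
        return False
    return all(abs(b * (theta[f] - theta[t])) <= limit + 1e-6 for f, t, b, limit in lines)

best = None
for k in range(len(candidates) + 1):
    for combo in itertools.combinations(candidates, k):
        cost = sum(c for _, c in combo)
        plan = [line for line, _ in combo]
        if dc_feasible(existing + plan) and (best is None or cost < best[0]):
            best = (cost, plan)
print("cheapest feasible expansion plan:", best)
```
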
Contributors: Zhang, Hui (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald T. (Thesis advisor) / Mittelmann, Hans D. (Committee member) / Hedman, Kory W. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Statistics is taught at every level of education, yet teachers often have to assume their students have no knowledge of the subject and start from scratch each time they set out to teach it. The motivation for this experimental study comes from interest in exploring educational applications of augmented reality (AR) delivered via mobile technology that could potentially provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, this study compared the learning gains of 252 undergraduate and graduate students on a pretest and posttest given before and after interacting with one of three types of augmented reality experience: a high AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low AR experience (interacting with three-dimensional images without movement), or no AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience and their comfort level in using mobile devices. The moderating variables included prior knowledge (high, average, and low) as measured by the students' pretest scores. Taking prior knowledge into account, students with low prior knowledge assigned to either the high or the low AR experience had statistically significantly higher learning gains than those assigned to the no AR experience. On the other hand, the results showed no statistically significant difference between students assigned to work individually and those working in pairs. Students assigned to either the high or the low AR experience perceived a statistically significantly higher level of engagement than their no AR counterparts. Students with low prior knowledge benefited the most from the high AR condition in terms of learning gains. Overall, the AR application worked well for providing a hands-on experience of working with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
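
For readers unfamiliar with factorial designs, the sketch below runs a two-way ANOVA (AR level x collaboration) on simulated learning-gain scores, roughly mirroring how a 3 x 2 design like this one could be analyzed. The data, cell sizes, and effect sizes are invented; this is not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated (not real) learning-gain scores for a 3 (AR level) x 2 (collaboration) design.
rng = np.random.default_rng(42)
mean_gain = {"high_AR": 8.0, "low_AR": 6.0, "no_AR": 3.0}   # invented effect sizes
rows = []
for ar in ("high_AR", "low_AR", "no_AR"):
    for collab in ("pairs", "individual"):
        for gain in rng.normal(loc=mean_gain[ar], scale=4.0, size=42):  # 42 students per cell
            rows.append({"ar": ar, "collab": collab, "gain": gain})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, mirroring an AR-by-collaboration factorial analysis.
model = smf.ols("gain ~ C(ar) * C(collab)", data=df).fit()
print(anova_lm(model, typ=2))
```
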
Contributors: Conley, Quincy (Author) / Atkinson, Robert K. (Thesis advisor) / Nguyen, Frank (Committee member) / Nelson, Brian C. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Surface plasmon resonance (SPR) has emerged as a popular technique for elucidating subtle signals from biological events in a label-free, high-throughput environment. The efficacy of conventional SPR sensors, whose signals are mass-sensitive, diminishes rapidly as the size of the observed target molecules decreases. The following work advances the current SPR sensor paradigm for the purpose of small-molecule detection. The detection limits of two orthogonal components of SPR measurement are targeted: speed and sensitivity. In the context of this report, speed refers to the dynamic range of measured kinetic rate constants, while sensitivity refers to the target-molecule mass limitation of conventional SPR measurement. A simple device for high-speed microfluidic delivery of liquid samples to a sensor surface is presented to address the temporal limitations of conventional SPR measurement. The time scale of buffer/sample switching is on the order of milliseconds, thereby minimizing the opportunity for sample plug dispersion. The high rates of mass transport to and from the central microfluidic sensing region allow for SPR-based kinetic analysis of binding events with dissociation rate constants (kd) up to 130 s-1. The required sample volume is only 1 μL, allowing for minimal sample consumption during high-speed kinetic binding measurement. Charge-based detection of small molecules is demonstrated by plasmonic-based electrochemical impedance microscopy (P-EIM). The dependence of SPR on surface charge density is used to detect small molecules (60-120 Da) printed on a dextran-modified sensor surface. The SPR response to an applied ac potential is a function of the surface charge density. This optical signal comprises a dc and an ac component, and is measured with high spatial resolution. The ac component provides the amplitude and phase of the local surface impedance. The phase signal of the small molecules is a function of their charge status, which is manipulated by the pH of the solution. This technique is used to detect and distinguish small molecules based on their charge status, thereby circumventing the mass limitation (~100 Da) of conventional SPR measurement.
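
To illustrate how the dc and ac components of such an optical signal could be separated, the sketch below applies a simple software lock-in to a simulated SPR intensity trace recorded under an applied ac potential, recovering the ac amplitude and phase. The signal parameters are invented and this is only a conceptual analogue of the P-EIM measurement described above.

```python
import numpy as np

# Simulated SPR intensity while an ac potential modulates the sensor surface charge.
fs, f_mod = 2000.0, 10.0                         # sampling rate and modulation frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)                  # 2 s of data = 20 whole modulation periods
dc, amp, phase = 1.0, 0.02, np.deg2rad(35.0)     # invented signal parameters
noise = 0.002 * np.random.default_rng(0).normal(size=t.size)
intensity = dc + amp * np.cos(2 * np.pi * f_mod * t + phase) + noise

# Software lock-in: mix with in-phase and quadrature references at the modulation frequency.
ref_i = np.cos(2 * np.pi * f_mod * t)
ref_q = np.sin(2 * np.pi * f_mod * t)
I = 2.0 * np.mean(intensity * ref_i)
Q = -2.0 * np.mean(intensity * ref_q)

print("dc level       :", np.mean(intensity))              # ~1.0
print("ac amplitude   :", np.hypot(I, Q))                   # ~0.02
print("ac phase (deg) :", np.degrees(np.arctan2(Q, I)))     # ~35
```
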
Contributors: MacGriff, Christopher Assiff (Author) / Tao, Nongjian (Thesis advisor) / Wang, Shaopeng (Committee member) / LaBaer, Joshua (Committee member) / Chae, Junseok (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Autonomous vehicle control systems utilize real-time kinematic Global Navigation Satellite System (GNSS) receivers to provide a position within two centimeters of truth. GNSS receivers utilize satellite-signal time-of-arrival estimates to solve for position, and multipath corrupts those estimates with a time-varying bias. Time-of-arrival estimates are based upon accurate direct-sequence spread-spectrum (DSSS) code and carrier phase tracking. Current multipath-mitigating GNSS solutions include fixed radiation pattern antennas and windowed delay-lock loop code phase discriminators. A new multipath-mitigating code tracking algorithm is introduced that utilizes a non-symmetric correlation kernel to reject multipath. Independent parameters provide a means to trade off code tracking discriminant gain against multipath mitigation performance. The algorithm's performance is characterized in terms of multipath phase error bias, phase error estimation variance, tracking range, tracking ambiguity and implementation complexity. The algorithm is suitable for modernized GNSS signals including Binary Phase Shift Keyed (BPSK) and a variety of Binary Offset Keyed (BOC) signals. The algorithm compensates for unbalanced code sequences to ensure that a code tracking bias does not result from the use of asymmetric correlation kernels. The algorithm does not require explicit knowledge of the propagation channel model. Design recommendations for selecting the algorithm parameters to mitigate precorrelation filter distortion are also provided.
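
For context, the sketch below implements a textbook early-minus-late delay-lock-loop discriminator on an invented oversampled PRN-like code; the discriminant crosses zero at the true code delay, which is the quantity that multipath biases. It uses a conventional symmetric correlator spacing, not the non-symmetric correlation kernel proposed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)
chips = rng.choice([-1.0, 1.0], size=1023)      # toy PRN-like chip sequence
osr = 10                                        # samples per chip
code = np.repeat(chips, osr)

def correlate(received, shift_samples):
    """Circular correlation of the received signal with a shifted local replica."""
    return np.dot(received, np.roll(code, shift_samples)) / code.size

true_delay = 3                                  # samples (0.3 chip)
received = np.roll(code, true_delay) + 0.5 * rng.normal(size=code.size)
half_spacing = osr // 2                         # 0.5-chip early/late correlator spacing

# Classic symmetric early-minus-late discriminator; the dissertation's approach replaces
# this symmetric kernel with a non-symmetric one to reject multipath.
for test in range(0, 7):                        # candidate code delays being tracked
    early = correlate(received, test - half_spacing)
    late = correlate(received, test + half_spacing)
    print(f"test delay {test} samples: discriminant {early - late:+.3f}")
```
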
Contributors: Miller, Steven (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
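
As a simplified illustration of the rich feature set approach, the sketch below extracts a handful of per-token features of the kind a linear-chain CRF tagger consumes. BANNER's actual feature set is far larger, and the sentence, BIO labels, and feature choices here are invented for the example.

```python
def token_features(tokens, i):
    """A small slice of a 'rich feature set' for the token at position i."""
    w = tokens[i]
    return {
        "word.lower": w.lower(),
        "word.shape": "".join("X" if c.isupper() else "x" if c.islower()
                              else "d" if c.isdigit() else c for c in w),
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "is_digit": w.isdigit(),
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

sentence = ["Mutations", "in", "BRCA1", "increase", "breast", "cancer", "risk", "."]
bio_tags = ["O", "O", "B-GENE", "O", "B-DISEASE", "I-DISEASE", "O", "O"]   # example labels
X = [token_features(sentence, i) for i in range(len(sentence))]
print(X[2])   # the features a CRF would see for the gene mention "BRCA1"
```
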
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B. (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

In recent years, machine learning and data mining technologies have received growing attention in areas such as recommendation systems, natural language processing, speech and handwriting recognition, image processing, and the biomedical domain. Many of the applications that deal with physiological and biomedical data require person-specific or person-adaptive systems. The greatest challenge in developing such systems is subject-based variability in physiological and biomedical data, which leads to differences in data distributions and makes modeling these data with traditional machine learning algorithms complex and challenging. As a result, despite the wide application of machine learning, efficient deployment of its principles to model real-world data is still a challenge. This dissertation addresses the problem of subject-based variability in physiological and biomedical data and proposes person-adaptive prediction models based on novel transfer and active learning algorithms, an emerging area of machine learning. One of the significant contributions of this dissertation is a person-adaptive method for early detection of muscle fatigue using surface electromyogram signals, based on a new multi-source transfer learning algorithm. This dissertation also proposes a subject-independent algorithm for grading the progression of muscle fatigue on a 0-to-1 scale in a test subject, during isometric or dynamic contractions, in real time. Besides subject-based variability, biomedical image data also vary with the imaging technique used, leading to distribution differences between image databases; hence a classifier learned on one database may perform poorly on another. Another significant contribution of this dissertation is the design and development of an efficient biomedical image data annotation framework, based on a novel combination of transfer learning and a new batch-mode active learning method, capable of addressing the distribution differences across databases. The methodologies developed in this dissertation are relevant and applicable to a large set of computing problems with high variation of data between subjects or sources, such as face detection, pose detection and speech recognition. From a broader perspective, these frameworks can be viewed as a first step toward the design of automated adaptive systems for real-world data.
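
As a rough illustration of the multi-source idea, the sketch below weights labeled data from several source subjects by how closely each source's feature distribution matches the (unlabeled) target subject, then trains a single weighted classifier. The mean-distance weighting, the simulated SEMG-like features, and the labels are all assumptions; this is not the dissertation's transfer learning algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical labeled data from three source subjects and unlabeled data from a new
# target subject; all values are simulated stand-ins for SEMG-derived features.
sources = []
for m in (0.0, 0.5, 2.0):                       # source subjects with shifted distributions
    Xs = rng.normal(loc=m, size=(100, 6))
    ys = (Xs[:, 0] > m).astype(int)             # simulated labels (e.g., fatigue / no fatigue)
    sources.append((Xs, ys))
X_target = rng.normal(loc=0.3, size=(80, 6))    # unlabeled target subject

# Weight each source by how close its feature distribution is to the target's
# (a crude mean-shift proxy for the distribution matching used in transfer learning).
weights = np.array([1.0 / (1e-3 + np.linalg.norm(Xs.mean(axis=0) - X_target.mean(axis=0)))
                    for Xs, _ in sources])
weights /= weights.sum()

X_all = np.vstack([Xs for Xs, _ in sources])
y_all = np.concatenate([ys for _, ys in sources])
sample_w = np.concatenate([np.full(len(ys), w) for (_, ys), w in zip(sources, weights)])

clf = LogisticRegression().fit(X_all, y_all, sample_weight=sample_w)
print("per-source weights:", np.round(weights, 3))
print("predicted labels for target subject:", clf.predict(X_target)[:10])
```
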
Contributors: Chattopadhyay, Rita (Author) / Panchanathan, Sethuraman (Thesis advisor) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013