Matching Items (191)
Description
Drosophila melanogaster is an important model organism used to explore the mechanisms that govern cell differentiation and embryonic development. Understanding these mechanisms helps reveal the effects of genes in other species, including humans. Digital camera techniques now make high-quality imaging of Drosophila gene expression possible, and advances in biology mean that gene expression images revealing spatiotemporal patterns are generated at a high-throughput pace. Thus, an automated and efficient system that can analyze gene expression is becoming a necessary tool for investigating gene functions, interactions and developmental processes. One investigation method is to compare the expression patterns of different developmental stages. Currently, however, expression patterns are manually annotated with rough stage ranges, and this annotation requires professional knowledge from experienced biologists. How to transfer this biological domain knowledge into a system that annotates the patterns automatically is therefore a challenging problem for computer scientists. In this thesis, stage annotation of Drosophila embryos is modeled in a machine learning framework. Three sparse learning algorithms and one ensemble algorithm are used to attack the problem: the sparse algorithms are Lasso, group Lasso and sparse group Lasso, and the ensemble algorithm is based on voting. Besides annotating the patterns to individual stages, rather than stage ranges, with high accuracy, the proposed decimal stage annotation algorithm presents a novel way to annotate the patterns with decimal stage values. In addition, the performance of the algorithms is analyzed and corresponding explanations are given. Finally, with the proposed system, all lateral-view BDGP and FlyFish images are annotated and several interesting applications of the decimal stage values are revealed.
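
As a rough illustration of the sparse-regression idea behind decimal stage annotation (not the thesis code or data), the sketch below fits a Lasso model that maps synthetic image features to a continuous "decimal stage" value; the feature dimensions, stage range and regularization strength are invented for the example.

```python
# Minimal Lasso sketch: sparse regression from hypothetical image features to a
# continuous developmental-stage value.  All data here are synthetic.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1000))            # hypothetical expression-image features
true_w = np.zeros(1000)
true_w[:20] = rng.normal(size=20)           # only a few features truly matter
y = np.clip(10 + 0.5 * (X @ true_w), 4, 17) # synthetic "decimal stage" targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Lasso(alpha=0.05)                   # L1 penalty induces a sparse feature set
model.fit(X_tr, y_tr)
print("nonzero feature weights:", np.count_nonzero(model.coef_))
print("example decimal-stage predictions:", model.predict(X_te[:3]).round(2))
```
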
Contributors: Pan, Cheng (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In recent years, machine learning and data mining technologies have received growing attention in areas such as recommendation systems, natural language processing, speech and handwriting recognition, image processing and the biomedical domain. Many of these applications deal with physiological and biomedical data and require person-specific or person-adaptive systems. The greatest challenge in developing such systems is subject-based variability in physiological and biomedical data, which leads to differences in data distributions and makes modeling these data with traditional machine learning algorithms complex and challenging. As a result, despite the wide application of machine learning, efficient deployment of its principles to model real-world data remains a challenge. This dissertation addresses the problem of subject-based variability in physiological and biomedical data and proposes person-adaptive prediction models based on novel transfer learning and active learning algorithms, two emerging fields in machine learning. One of the significant contributions of this dissertation is a person-adaptive method for early detection of muscle fatigue from surface electromyogram (sEMG) signals, based on a new multi-source transfer learning algorithm. The dissertation also proposes a subject-independent algorithm that grades the progression of muscle fatigue in a test subject on a 0-to-1 scale, during isometric or dynamic contractions, in real time. Besides subject-based variability, biomedical image data also vary with the imaging technique, leading to distribution differences between image databases; a classifier learned on one database may therefore perform poorly on another. Another significant contribution of this dissertation is the design and development of an efficient biomedical image annotation framework, based on a novel combination of transfer learning and a new batch-mode active learning method, that addresses these distribution differences across databases. The methodologies developed in this dissertation are relevant and applicable to a large set of computing problems with high variation of data between subjects or sources, such as face detection, pose detection and speech recognition. From a broader perspective, these frameworks can be viewed as a first step toward the design of automated adaptive systems for real-world data.
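
As a hedged illustration of the multi-source idea (not the dissertation's algorithm), the sketch below trains one classifier per source subject on synthetic sEMG-like features, weights each source by its accuracy on a few labeled samples from the target subject, and combines the weighted predictions.

```python
# Toy multi-source transfer baseline: per-source classifiers weighted by their fit
# to a small calibration set from the target subject.  All data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_subject(shift):
    # Hypothetical per-subject feature shift standing in for subject variability.
    X = rng.normal(loc=shift, size=(200, 8))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)   # fatigued vs. not fatigued
    return X, y

sources = [make_subject(s) for s in (0.0, 0.5, 1.0)]
X_tgt, y_tgt = make_subject(0.8)
X_cal, y_cal = X_tgt[:20], y_tgt[:20]        # few labeled target samples for weighting

models, weights = [], []
for X_s, y_s in sources:
    clf = SVC(probability=True).fit(X_s, y_s)
    models.append(clf)
    weights.append(clf.score(X_cal, y_cal))  # trust sources that match the target

weights = np.array(weights) / np.sum(weights)
proba = sum(w * m.predict_proba(X_tgt[20:]) for w, m in zip(weights, models))
pred = proba.argmax(axis=1)
print("target-subject accuracy:", (pred == y_tgt[20:]).mean())
```
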
Contributors: Chattopadhyay, Rita (Author) / Panchanathan, Sethuraman (Thesis advisor) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Despite significant advances in digital pathology and automation sciences, current diagnostic practice for cancer detection primarily relies on qualitative manual inspection of tissue architecture and of cell and nuclear morphology in stained biopsies using low-magnification, two-dimensional (2D) brightfield microscopy. The efficacy of this process is limited by inter-operator variations in sample preparation and imaging, and by inter-observer variability in assessment. Over the past few decades, the predictive value of quantitative morphology measurements derived from computerized analysis of micrographs has been compromised by the inability of 2D microscopy to capture information in the third dimension, and by the anisotropic spatial resolution inherent to conventional microscopy techniques that generate volumetric images by stacking 2D optical sections to approximate 3D. To gain insight into the analytical 3D nature of cells, this dissertation explores the application of a new technology for single-cell optical computed tomography (optical cell CT), a promising 3D tomographic imaging technique that uses visible light absorption to image stained cells individually with sub-micron, isotropic spatial resolution. This dissertation provides a scalable analytical framework to perform fully automated 3D morphological analysis of transmission-mode optical cell CT images of hematoxylin-stained cells. The developed framework performs rapid and accurate quantification of 3D cell and nuclear morphology, facilitates assessment of morphological heterogeneity, and generates shape- and texture-based biosignatures predictive of the cell state. Custom 3D image segmentation methods were developed to precisely delineate volumes of interest (VOIs) from reconstructed cell images. Comparison with user-defined ground truth assessments yielded an average agreement (Dice coefficient) of 94% for the cell and its nucleus. Seventy-nine biologically relevant morphological descriptors (features) were computed from the segmented VOIs, and statistical classification methods were implemented to determine the subset of features that best predicted cell health. The efficacy of our proposed framework was demonstrated on an in vitro model of multistep carcinogenesis in human Barrett's esophagus (BE), and classifier performance using our 3D morphometric analysis was compared against computerized analysis of 2D image slices reflecting conventional cytological observation. Our results enable sensitive and specific nuclear grade classification for early cancer diagnosis and underline the value of the approach as an objective adjunctive tool for better understanding the morphological changes associated with malignant transformation.
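
For reference, a minimal sketch of the Dice overlap used to score 3D segmentations against ground truth is given below; the voxel volumes are synthetic stand-ins, and the 94% figure quoted above refers to the authors' data, not to this toy example.

```python
# Dice coefficient between a 3D segmentation mask and a ground-truth mask.
import numpy as np

def dice(seg: np.ndarray, truth: np.ndarray) -> float:
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# Hypothetical 64^3 voxel volumes standing in for a segmented nucleus.
rng = np.random.default_rng(2)
truth = np.zeros((64, 64, 64), dtype=bool)
truth[16:48, 16:48, 16:48] = True
seg = truth.copy()
seg ^= rng.random(truth.shape) < 0.01       # perturb roughly 1% of voxels
print(f"Dice coefficient: {dice(seg, truth):.3f}")
```
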
Contributors: Nandakumar, Vivek (Author) / Meldrum, Deirdre R (Thesis advisor) / Nelson, Alan C. (Committee member) / Karam, Lina J (Committee member) / Ye, Jieping (Committee member) / Johnson, Roger H (Committee member) / Bussey, Kimberly J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Currently, to interact with computer-based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we need a module which understands natural language and automatically translates it into the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for substantial and useful interface languages. We revamped the lexicon learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the system's built-in parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added new features such as various system configurations and a memory cache in the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted the performance of the system by reducing the computation time by a factor of 8, and improved its usability. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system which helps in managing large sets of computer files using policies. This system provides a rule-oriented interface language whose syntactic structure is like that of a procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated into iRODS rules. We developed a corpus of 100 policy statements and manually translated them into IPDL. This corpus is then used for the evaluation of the NL2KR system, on which we performed 10-fold cross-validation. Furthermore, using this corpus, we illustrate how the different components of our NL2KR system work.
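
A toy illustration of the lambda-calculus composition that underlies this kind of translation (not NL2KR itself) is sketched below: each lexicon entry is a function, and a parse combines entries by application to produce a formula in a made-up target representation.

```python
# Toy semantic composition with lambda terms: word meanings are functions, and a
# parse combines them by application.  The lexicon and output syntax are invented.
lexicon = {
    "every":    lambda p: lambda q: f"forall X ({p('X')} -> {q('X')})",
    "file":     lambda x: f"file({x})",
    "is":       lambda p: p,                 # copula passes its argument through
    "archived": lambda x: f"archived({x})",
}

# "every file is archived"  ->  every(file)(is(archived))
formula = lexicon["every"](lexicon["file"])(lexicon["is"](lexicon["archived"]))
print(formula)   # forall X (file(X) -> archived(X))
```
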
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids. Part I is centered on wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Building on these spatial and temporal characterizations, finite-state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response are investigated. Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment (DSA) by using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line DSA schemes are devised, aiming to handle various factors (including variations of operating conditions, forced system topology change, and loss of critical synchrophasor measurements) that can have a significant impact on the performance of conventional data-mining-based on-line DSA schemes. Then, in the context of post-disturbance analysis, fault detection and localization of line outages is investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations in the power grid. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
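
A minimal sketch of the finite-state Markov chain idea, on synthetic data, is given below: generation is quantized into states, a transition matrix is estimated from history, and the point forecast is the conditional expectation over the next state. The series, state count and units are illustrative assumptions, not the study's wind farm data.

```python
# Finite-state Markov chain point forecast of wind farm generation (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
power = np.cumsum(rng.normal(0, 2, 5000)) % 100      # hypothetical MW series in [0, 100)

n_states = 10
edges = np.linspace(0, 100, n_states + 1)
states = np.clip(np.digitize(power, edges) - 1, 0, n_states - 1)

T = np.zeros((n_states, n_states))
for s, s_next in zip(states[:-1], states[1:]):
    T[s, s_next] += 1
T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)  # row-stochastic transition matrix

centers = 0.5 * (edges[:-1] + edges[1:])
current = states[-1]
forecast = T[current] @ centers                       # expected generation one step ahead
print(f"current state {current}, one-step-ahead forecast {forecast:.1f} MW")
```
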
Contributors: He, Miao (Author) / Zhang, Junshan (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Hedman, Kory (Committee member) / Si, Jennie (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Wireless communications and networks are now widely used in our daily lives. One of the most important topics in networking research is using optimization tools to improve the utilization of network resources. In this dissertation, we concentrate on optimization for resource-constrained wireless networks and study two fundamental resource-allocation problems: 1) distributed routing optimization and 2) anypath routing optimization. The study of the distributed routing optimization problem is composed of two main thrusts, targeted at understanding distributed routing and resource optimization for multihop wireless networks. The first thrust is dedicated to understanding the impact of full-duplex transmission on wireless network resource optimization. We propose two provably good distributed algorithms to optimize the resources in a full-duplex wireless network, prove their optimality, and provide network status analysis using dual-space information. The second thrust is dedicated to understanding the influence of network entity load constraints on network resource allocation and routing computation. We propose a provably good distributed algorithm to allocate wireless resources. In addition, we propose a new subgradient optimization framework, which provides fine-grained convergence, optimality, and dual-space information at each iteration. This framework can serve as a useful theoretical foundation for many networking optimization problems. The study of the anypath routing optimization problem is also composed of two main thrusts. The first is dedicated to understanding the computational complexity of multi-constrained anypath routing and designing approximate solutions. We prove that this problem is NP-hard when the number of constraints is larger than one, and we present two polynomial-time K-approximation algorithms, one centralized and one distributed. For the second thrust, we study directional anypath routing and present a cross-layer design of MAC and routing. For the MAC layer, we present a directional anycast MAC. For the routing layer, we propose two polynomial-time routing algorithms to compute directional anypaths based on two antenna models, and prove their optimality with respect to the packet delivery ratio metric.
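
As a textbook-style illustration of the dual subgradient idea (not the dissertation's algorithms), the sketch below allocates rates to flows sharing one capacity-limited link: each flow responds to a link "price", and the price is updated by a subgradient step on the dual problem. The utilities, capacity and step size are invented for the example.

```python
# Dual decomposition with a subgradient price update for a single shared link.
import numpy as np

capacity = 10.0
weights = np.array([1.0, 2.0, 3.0])      # hypothetical per-flow utilities w_i * log(x_i)
price, step = 1.0, 0.05

for k in range(500):
    rates = weights / price               # each flow solves max w_i*log(x_i) - price*x_i
    # Subgradient step on the dual: raise the price if demand exceeds capacity.
    price = max(1e-6, price + step * (rates.sum() - capacity))

print("final rates:", rates.round(3), "sum =", round(rates.sum(), 3))
# At the optimum the rates are proportional to the weights and sum to the capacity.
```
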
Contributors: Fang, Xi (Author) / Xue, Guoliang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ye, Jieping (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Vector Fitting (VF) is a recent macromodeling method that has been popularized by its use in many commercial software packages for extracting equivalent circuits of simulated networks. Specifically for material measurement applications, VF is shown to estimate either the permittivity or the permeability of a multi-Debye material accurately, even when measured in the presence of noise and interference caused by test setup imperfections. A brief history and survey of methods utilizing VF for material measurement is introduced in this work. It is shown how VF is useful for macromodeling dielectric materials measured with standard transmission-line and free-space methods. The sources of error in both an admittance tunnel test device and a stripline resonant cavity test device are identified, and VF is employed to correct these errors. Full-wave simulations are performed to model the test setup imperfections, and the sources of interference they cause are further verified in hardware measurements. An accurate macromodel is attained as long as the signal-to-interference ratio (SIR) in the measurement is sufficiently high that the Debye relaxations are observable in the data. Finally, VF is applied to macromodeling the time history of the total fields scattered from a perfectly conducting wedge. This effort is an initial test of whether a time-domain theory of diffraction exists, and whether the diffraction coefficients may be exactly modeled with VF. The work concludes that VF is useful not only for applications in material measurement, but also for modeling fields and interactions in general.
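
For intuition only, the sketch below shows the linear sub-step of a pole-residue fit applied to a synthetic two-term Debye permittivity: with candidate poles assumed fixed, the residues and constant term follow from linear least squares. Full VF additionally relocates the poles iteratively through a weighting function; that step is not shown here, and the poles, relaxation times and frequency band are invented for the example.

```python
# Linear residue identification for a pole-residue (Debye-like) model on synthetic data.
import numpy as np

omega = np.logspace(6, 11, 400)                 # rad/s
s = 1j * omega
taus, deps, eps_inf = [1e-9, 1e-7], [2.0, 3.0], 2.5
eps = eps_inf + sum(de / (1 + s * t) for de, t in zip(deps, taus))  # "measured" data

poles = -1.0 / np.array([3e-10, 1e-9, 1e-8, 1e-7, 1e-6])  # assumed fixed candidate poles
A = np.hstack([1.0 / (s[:, None] - poles[None, :]), np.ones((len(s), 1))])
x, *_ = np.linalg.lstsq(
    np.vstack([A.real, A.imag]),                # stack real/imag parts -> real LS problem
    np.concatenate([eps.real, eps.imag]),
    rcond=None,
)
residues, d = x[:-1], x[-1]
fit = A @ x
print("max |fit error| over the band:", np.abs(fit - eps).max())
```
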
Contributors: Richards, Evan (Author) / Diaz, Rodolfo E (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Advances in implantable MEMS technology have made possible adaptive micro-robotic implants that can track and record from single neurons in the brain. The development of autonomous neural interfaces opens up exciting possibilities of micro-robots performing standard electrophysiological techniques that would previously take researchers several hundred hours of training to achieve the desired skill level. It would result in more reliable and adaptive neural interfaces that could record optimal neural activity 24/7 with high-fidelity signals, high yield and increased throughput. The main contribution here is validating adaptive strategies to overcome challenges in autonomous navigation of microelectrodes inside the brain. The following issues pose significant challenges because brain tissue is both functionally and structurally dynamic: a) time-varying mechanical properties of the brain tissue-microelectrode interface due to the hyperelastic, viscoelastic nature of brain tissue; b) non-stationarities in the neural signal caused by mechanical and physiological events at the interface; and c) the lack of visual feedback on microelectrode position in brain tissue. A closed-loop control algorithm is proposed here for autonomous navigation of microelectrodes in brain tissue while optimizing the signal-to-noise ratio (SNR) of multi-unit neural recordings. The algorithm incorporates a quantitative understanding of the constitutive mechanical properties of soft viscoelastic tissue like the brain and is guided by models that predict the stresses developed in brain tissue during movement of the microelectrode. An optimal movement strategy is developed that achieves precise positioning of microelectrodes in the brain by minimizing the stresses developed in the surrounding tissue during navigation while maximizing the speed of movement. Results of testing the closed-loop control paradigm in short-term rodent experiments validated that it was possible to achieve a consistently high SNR throughout the duration of the experiment. At the systems level, a new generation of MEMS actuators for movable microelectrode arrays is characterized, and the MEMS device operating parameters are optimized for improved performance and reliability. Further, recommendations for packaging to minimize the form factor of the implant, and for the design of device mounting and implantation techniques for the MEMS microelectrode array to enhance the longevity of the implant, are included as part of a top-down approach to achieving a reliable brain interface.
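
A deliberately simplified sketch of the closed-loop idea follows: advance the electrode in small increments only while the recorded SNR is below target, pausing between moves so the viscoelastic tissue can relax. The SNR model, step size and stopping rule are invented for illustration; the actual controller in this work is model-based and stress-aware.

```python
# Toy closed-loop electrode advance driven by a (simulated) SNR measurement.
import numpy as np

rng = np.random.default_rng(4)
neuron_depth_um = 150.0

def measure_snr(depth_um: float) -> float:
    # Hypothetical SNR peaking when the tip is near the neuron, plus measurement noise.
    return 6.0 * np.exp(-((depth_um - neuron_depth_um) / 40.0) ** 2) + rng.normal(0, 0.2)

depth, step_um, target_snr = 0.0, 5.0, 5.0
for move in range(200):
    snr = measure_snr(depth)
    if snr >= target_snr:
        print(f"stop at {depth:.0f} um, SNR = {snr:.2f}")
        break
    depth += step_um            # small step to keep induced tissue stress low
    # in hardware, a dwell period would go here to let the tissue relax
else:
    print("target SNR not reached")
```
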
Contributors: Anand, Sindhu (Author) / Muthuswamy, Jitendran (Thesis advisor) / Tillery, Stephen H (Committee member) / Buneo, Christopher (Committee member) / Abbas, James (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Mobile robots are used in a broad range of application areas, e.g. search and rescue, reconnaissance, and exploration. Given the increasing need for high-performance mobile robots, the area has received attention from researchers. In this thesis, critical control and control-relevant design issues for differential drive mobile robots are addressed. Two major themes that have been explored are the use of kinematic models for control design and the use of decentralized proportional plus integral (PI) control. While these topics have received much attention, critical questions remain which have not been rigorously addressed. In this thesis, answers to the following critical questions are provided: when is (1) a kinematic model sufficient for control design, (2) coupled dynamics essential, (3) a decentralized PI inner-loop velocity controller sufficient, and (4) centralized multiple-input multiple-output (MIMO) control essential, and how can one design the robot to relax the requirements implied in (1) and (2)? In this thesis, the following is shown. (1) The nonlinear kinematic model will suffice for control design when the inner velocity (dynamic) loop is much faster (10X) than the slower outer positioning loop. (2) A dynamic model is essential when the inner velocity (dynamic) loop is less than two times faster than the slower outer positioning loop. (3) A decentralized inner-loop PI velocity controller will be sufficient for accomplishing high-performance control when the required velocity bandwidth is small relative to the peak dynamic coupling frequency; a rule of thumb which depends on the robot aspect ratio is given. (4) A centralized MIMO velocity controller is needed when the required bandwidth is large relative to the peak dynamic coupling frequency. Here the analysis in the thesis is sparse, making the topic an area for future analytical work; despite this, it is clearly shown that a centralized MIMO inner-loop controller can offer increased performance vis-à-vis a decentralized PI controller. (5) Finally, it is shown how the dynamic coupling depends on the robot aspect ratio and how the coupling can be significantly reduced. As such, this can be used to ease the requirements imposed by (2) and (4) above.
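
A minimal sketch of the two-loop structure discussed above is given below: a fast inner PI loop tracks the velocity commands (the velocity dynamics are modeled here as simple first-order lags), and a slower outer loop steers the kinematic unicycle model toward a goal point. The gains, time constant and goal are illustrative assumptions, not the thesis design.

```python
# Two-loop sketch: outer positioning loop commands (v, w); inner PI loops track them.
import numpy as np

dt, tau = 0.01, 0.05                      # sample time; assumed velocity time constant
kp, ki = 2.0, 10.0                        # inner-loop PI gains
x, y, th = 0.0, 0.0, 0.0                  # robot pose
v, w = 0.0, 0.0                           # actual linear / angular velocity
iv, iw = 0.0, 0.0                         # PI integrators
goal = np.array([1.0, 0.5])

for k in range(3000):
    # Outer (slow) positioning loop: proportional steering toward the goal.
    dx, dy = goal[0] - x, goal[1] - y
    bearing = np.arctan2(dy, dx)
    heading_err = np.arctan2(np.sin(bearing - th), np.cos(bearing - th))  # wrap to [-pi, pi]
    v_cmd = min(0.3, np.hypot(dx, dy))
    w_cmd = 2.0 * heading_err

    # Inner (fast) PI velocity loops driving first-order velocity dynamics.
    ev, ew = v_cmd - v, w_cmd - w
    iv += ev * dt
    iw += ew * dt
    v += dt / tau * (-v + kp * ev + ki * iv)
    w += dt / tau * (-w + kp * ew + ki * iw)

    # Kinematic (unicycle) model integration.
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th += w * dt

print(f"final position: ({x:.3f}, {y:.3f})")
```
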
Contributors: Anvari, Iman (Author) / Rodriguez, Armando A (Thesis advisor) / Si, Jennie (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The rapid growth of high-throughput technologies over the last few decades has made manual processing of the generated data impracticable. Even worse, machine learning and data mining techniques can appear paralyzed by these massive datasets. High dimensionality is one of the most common challenges for machine learning and data mining tasks. Feature selection aims to reduce dimensionality by selecting a small subset of the features that performs at least as well as the full feature set. Generally, the learning performance, e.g., classification accuracy, and the algorithm complexity are used to measure the quality of a feature selection algorithm. Recently, the stability of feature selection algorithms has gained increasing attention as an additional quality indicator, because an algorithm run on the same dataset should select similar subsets of features even in the presence of a small amount of perturbation. To cure the selection stability issue, the causes of instability must first be understood. In this dissertation, we investigate the causes of instability in high-dimensional datasets using well-known feature selection algorithms and find that stability is mostly data-dependent. Based on these findings, we propose a framework to improve selection stability by addressing these main causes. In particular, we found that data noise greatly impacts both stability and learning performance, so we propose to reduce it in order to improve both. However, current noise reduction approaches are not able to distinguish between data noise and variation in samples from different classes. We overcome this limitation with Supervised noise reduction via Low-Rank Matrix Approximation (SLRMA). The proposed framework has proved successful on different types of high-dimensional datasets, such as microarray and image datasets. However, this framework cannot handle unlabeled data; hence, we propose Local SVD to overcome this limitation.
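
As a hedged illustration of what selection stability means in practice, the sketch below runs a standard feature selector on bootstrap-perturbed copies of a noisy synthetic dataset and reports the average pairwise Jaccard similarity of the selected subsets; Jaccard is one common stability index and may differ from the measure used in the dissertation.

```python
# Measuring feature-selection stability across perturbed runs with a Jaccard index.
import numpy as np
from itertools import combinations
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(5)
n, d, k = 200, 500, 20
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 2.0, n) > 0).astype(int)  # noisy labels

subsets = []
for run in range(10):
    idx = rng.integers(0, n, n)                       # bootstrap perturbation of the data
    sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
    subsets.append(set(np.flatnonzero(sel.get_support())))

jaccards = [len(a & b) / len(a | b) for a, b in combinations(subsets, 2)]
print(f"average pairwise Jaccard stability: {np.mean(jaccards):.2f}")
```
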
Contributors: Alelyani, Salem (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Ye, Jieping (Committee member) / Zhao, Zheng (Committee member) / Arizona State University (Publisher)
Created: 2013