Matching Items (3)

Description

Demand for biosensor research applications is growing steadily. According to a new report by Frost & Sullivan, the biosensor market is expected to reach $14.42 billion by 2016. Clinical diagnostic applications continue to be the largest market for biosensors, and this demand is likely to continue through 2016 and beyond. Biosensor technology for use in clinical diagnostics, however, requires translational research that moves bench science and theoretical knowledge toward marketable products. Despite the high volume of academic research to date, only a handful of biomedical devices have become viable commercial products. Academic research must increase its focus on practical uses for biosensors. This dissertation reflects that focus and discusses work to advance microfluidic-based protein biosensor technologies for practical use in clinical diagnostics. Four areas of work are discussed: the first involved developing reusable/reconfigurable biosensors for applications, such as biochemical science and analytical chemistry, that require detailed sensor calibration. This work produced a prototype sensor and an in-situ electrochemical surface regeneration technique that together enable microfluidic-based reusable biosensors. The second area addressed non-specific adsorption (NSA) of biomolecules, a persistent challenge in conventional microfluidic biosensors, and yielded design methods that reduce NSA. The third area involved a novel microfluidic sensing platform designed to detect target biomarkers using competitive protein adsorption. This technique relies on physical adsorption of proteins to a surface rather than complex and time-consuming immobilization procedures, and it enabled selective detection of a thyroid cancer biomarker, thyroglobulin, in a controlled protein cocktail and of a cardiovascular biomarker, fibrinogen, in undiluted human serum.
The fourth area extended the technique into a unique protein identification method: pattern recognition. A sample mixture of proteins generates a distinctive composite pattern upon interaction with a sensing platform consisting of multiple surfaces, each pre-adsorbed with a distinct type of protein. The utility of this pattern-recognition sensing mechanism was verified by recognizing a particular biomarker, C-reactive protein, in a cocktail sample mixture.
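The pattern-recognition idea can be illustrated with a toy sketch. All response values, surface counts, and the similarity rule below are hypothetical stand-ins, not the dissertation's calibration data: each sample yields one composite response vector across the pre-adsorbed surfaces, and the closest reference pattern identifies the protein.

```python
import numpy as np

# Hypothetical reference patterns: each entry is the composite signal a
# known protein produces across three pre-adsorbed sensing surfaces.
references = {
    "CRP":           np.array([0.9, 0.2, 0.4]),
    "fibrinogen":    np.array([0.3, 0.8, 0.1]),
    "thyroglobulin": np.array([0.2, 0.3, 0.9]),
}

def identify(pattern):
    """Return the reference protein whose pattern is most similar
    (by cosine similarity) to the measured composite pattern."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(references, key=lambda name: cos(references[name], pattern))

# A noisy measurement resembling the CRP reference pattern:
measured = np.array([0.85, 0.25, 0.35])
print(identify(measured))  # -> CRP
```

A nearest-pattern rule is only one possible matcher; any multivariate classifier could be trained on the composite responses instead.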
Contributors: Choi, Seokheun (Author) / Chae, Junseok (Thesis advisor) / Tao, Nongjian (Committee member) / Yu, Hongyu (Committee member) / Forzani, Erica (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Feature learning and the discovery of nonlinear variation patterns in high-dimensional data are important tasks in many problem domains, such as imaging, streaming data from sensors, and manufacturing. This dissertation presents several methods for learning and visualizing nonlinear variation in high-dimensional data. First, an automated method for discovering nonlinear variation patterns using deep learning autoencoders is proposed. The approach provides a functional mapping from a low-dimensional representation to the original spatially dense data that is both interpretable and efficient at preserving information. Experimental results indicate that deep learning autoencoders outperform manifold learning and principal component analysis in reproducing the original data from the learned variation sources.
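As a minimal sketch of the idea (not the dissertation's architecture or data), the numpy code below trains a tiny autoencoder with a one-dimensional bottleneck on synthetic two-dimensional points lying near a half-circle, a single nonlinear variation source. The layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with one nonlinear variation source: a noisy half-circle.
t = rng.uniform(0, np.pi, size=(200, 1))
X = np.hstack([np.cos(t), np.sin(t)]) + 0.01 * rng.normal(size=(200, 2))

# A tiny 2-8-1-8-2 autoencoder with tanh activations, trained by
# full-batch gradient descent on mean squared reconstruction error.
def init(shape):
    return 0.5 * rng.normal(size=shape)

W1, b1 = init((2, 8)), np.zeros(8)   # encoder hidden layer
W2, b2 = init((8, 1)), np.zeros(1)   # 1-D bottleneck (the learned feature)
W3, b3 = init((1, 8)), np.zeros(8)   # decoder hidden layer
W4, b4 = init((8, 2)), np.zeros(2)   # reconstruction layer

def forward(X):
    H1 = np.tanh(X @ W1 + b1)
    Z = H1 @ W2 + b2                 # low-dimensional code
    H2 = np.tanh(Z @ W3 + b3)
    return H1, Z, H2, H2 @ W4 + b4

losses, lr = [], 0.05
for step in range(3000):
    H1, Z, H2, Xh = forward(X)
    err = Xh - X
    losses.append(float((err ** 2).mean()))
    # Backpropagation through the four layers.
    g4 = err / len(X)
    gW4, gb4 = H2.T @ g4, g4.sum(0)
    g3 = (g4 @ W4.T) * (1 - H2 ** 2)
    gW3, gb3 = Z.T @ g3, g3.sum(0)
    g2 = g3 @ W3.T
    gW2, gb2 = H1.T @ g2, g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - H1 ** 2)
    gW1, gb1 = X.T @ g1, g1.sum(0)
    for p, g in [(W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2),
                 (W3, gW3), (b3, gb3), (W4, gW4), (b4, gb4)]:
        p -= lr * g

print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Linear PCA with one component cannot fully capture the curved pattern, which is the motivation for the nonlinear mapping.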

A key issue in using autoencoders for nonlinear variation pattern discovery is to encourage the learning of solutions where each feature represents a unique variation source, which we define as distinct features. This problem of learning distinct features is also referred to as disentangling factors of variation in the representation learning literature. The remainder of this dissertation highlights and provides solutions for this important problem.

An alternating autoencoder training method is presented and a new measure motivated by orthogonal loadings in linear models is proposed to quantify feature distinctness in the nonlinear models. Simulated point cloud data and handwritten digit images illustrate that standard training methods for autoencoders consistently mix the true variation sources in the learned low-dimensional representation, whereas the alternating method produces solutions with more distinct patterns.
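The notion of feature distinctness can be made concrete with a toy score. The specific formula below, one minus the largest absolute pairwise correlation between learned feature columns, is an illustrative stand-in motivated by the orthogonality analogy, not the dissertation's exact measure.

```python
import numpy as np

def distinctness(Z):
    """Score in (0, 1]: one minus the largest absolute pairwise
    correlation between feature columns of Z. Higher = more distinct."""
    C = np.corrcoef(Z, rowvar=False)
    off_diag = np.abs(C - np.eye(C.shape[0]))
    return 1.0 - off_diag.max()

rng = np.random.default_rng(1)
s1, s2 = rng.normal(size=500), rng.normal(size=500)

# Each column recovers one true source vs. columns that mix both sources.
Z_distinct = np.column_stack([s1, s2])
Z_mixed = np.column_stack([s1 + s2, s1 + 0.5 * s2])

print(distinctness(Z_distinct))  # close to 1: features nearly uncorrelated
print(distinctness(Z_mixed))     # close to 0: features highly correlated
```

A representation that mixes the sources scores poorly, matching the observation that standard training tends to entangle the true variation sources.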

Finally, a new regularization method for learning distinct nonlinear features using autoencoders is proposed. Motivated in-part by the properties of linear solutions, a series of learning constraints are implemented via regularization penalties during stochastic gradient descent training. These include the orthogonality of tangent vectors to the manifold, the correlation between learned features, and the distributions of the learned features. This regularized learning approach yields low-dimensional representations which can be better interpreted and used to identify the true sources of variation impacting a high-dimensional feature space. Experimental results demonstrate the effectiveness of this method for nonlinear variation pattern discovery on both simulated and real data sets.
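The regularized-objective idea can be sketched as follows. The penalty forms and weights here are assumptions for illustration (a decorrelation term and a moment-matching distribution term), not the dissertation's exact penalties, and the tangent-vector orthogonality term is omitted for brevity.

```python
import numpy as np

def regularized_loss(X, X_hat, Z, lam_corr=1.0, lam_dist=0.1):
    """Illustrative regularized objective: reconstruction error plus a
    feature-decorrelation penalty and a penalty pushing each learned
    feature toward zero mean and unit variance."""
    recon = ((X - X_hat) ** 2).mean()
    C = np.corrcoef(Z, rowvar=False)
    corr_pen = np.abs(C - np.eye(C.shape[0])).sum()      # off-diagonal correlation
    mu, var = Z.mean(0), Z.var(0)
    dist_pen = (mu ** 2).sum() + ((var - 1) ** 2).sum()  # moment matching
    return recon + lam_corr * corr_pen + lam_dist * dist_pen

rng = np.random.default_rng(2)
s1, s2 = rng.normal(size=300), rng.normal(size=300)
X = rng.normal(size=(300, 2))

Z_distinct = np.column_stack([s1, s2])             # one source per feature
Z_mixed = np.column_stack([s1 + s2, s1 + 0.5 * s2])  # entangled features

# With identical (perfect) reconstruction, only the penalties differ:
print(regularized_loss(X, X, Z_distinct))
print(regularized_loss(X, X, Z_mixed))
```

In stochastic gradient descent training these penalties would simply be added to the minibatch loss, steering the encoder toward distinct features.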
Contributors: Howard, Phillip (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Mirchandani, Pitu (Committee member) / Apley, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Identifying important variation patterns is a key step to identifying root causes of process variability. This gives rise to a number of challenges. First, the variation patterns might be non-linear in the measured variables, while the existing research literature has focused on linear relationships. Second, it is important to remove noise from the dataset in order to visualize the true nature of the underlying patterns. Third, in addition to visualizing the pattern (preimage), it is also essential to understand the relevant features that define the process variation pattern. This dissertation addresses these three challenges. A base kernel principal component analysis (KPCA) algorithm transforms the measurements to a high-dimensional feature space where non-linear patterns in the original measurements can be handled through linear methods. However, the principal component subspace in feature space might not be well estimated, especially from noisy training data. An ensemble procedure is constructed in which the final preimage is estimated as the average over bagged samples drawn from the original dataset, attenuating noise in the kernel subspace estimation. This improves the robustness of any base KPCA algorithm. In a second method, successive iterations of denoising a convex combination of the training data and the corresponding denoised preimage are used to produce a more accurate estimate of the actual denoised preimage for noisy training data. The number of primary eigenvectors chosen in each iteration is also decreased at a constant rate, and an efficient stopping rule reduces the number of iterations. Finally, a feature selection procedure for KPCA is constructed to find the set of relevant features from noisy training data. Data points are projected onto sparse random vectors; pairs of such projections are then matched, and the differences in variation patterns within pairs are used to identify the relevant features.
This approach provides robustness to irrelevant features by calculating the final variation pattern from an ensemble of feature subsets. Experiments are conducted using several simulated as well as real-life data sets, and the proposed methods show significant improvement over competing methods.
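The bagged-preimage idea can be sketched with scikit-learn's KernelPCA, which supports preimage estimation via `fit_inverse_transform=True`. The kernel choice, `gamma`, component count, and bag count below are illustrative assumptions, and the synthetic sine-curve data stands in for real process measurements.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def bagged_kpca_denoise(X, n_components=1, n_bags=10, seed=0):
    """Ensemble KPCA denoising sketch: fit KPCA on bootstrap resamples
    of X and average the resulting preimages of the full dataset."""
    rng = np.random.default_rng(seed)
    preimages = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
        kpca = KernelPCA(n_components=n_components, kernel="rbf",
                         gamma=1.0, fit_inverse_transform=True)
        kpca.fit(X[idx])
        # Project all points onto the bagged subspace, then map back.
        preimages.append(kpca.inverse_transform(kpca.transform(X)))
    return np.mean(preimages, axis=0)

# Noisy observations of a 1-D nonlinear pattern (a sine curve) in 2-D:
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 150)
clean = np.column_stack([t, np.sin(t)])
noisy = clean + 0.2 * rng.normal(size=clean.shape)

denoised = bagged_kpca_denoise(noisy)
print("MSE vs clean, before:", ((noisy - clean) ** 2).mean())
print("MSE vs clean, after: ", ((denoised - clean) ** 2).mean())
```

Averaging over bootstrap fits smooths out the sampling variability in each bag's kernel subspace estimate, which is the robustness argument made above.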
Contributors: Sahu, Anshuman (Author) / Runger, George C. (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2013