Matching Items (218)
Description
In the last few years, significant advances in nanofabrication have allowed structures and materials to be tailored at the molecular level, with precise control of dimensions and organization at molecular length scales, leading to significant advances in nanoscale systems. Although the direction of progress seems to follow the path of microelectronics, the fundamental physics of a nanoscale system changes more rapidly than it does in microelectronics as the size scale is decreased. The changes in length, area, and volume ratios due to the reduction in size alter the relative influence of the physical effects determining the overall operation of a system in unexpected ways. One such category of nanofluidic structures demonstrating unique ionic and molecular transport characteristics is the nanopore. Nanopores derive their unique transport characteristics from the electrostatic interaction of the nanopore surface charge with aqueous ionic solutions. In this doctoral research, cylindrical nanopores, in single and array configurations, were fabricated in silicon-on-insulator (SOI) using a combination of electron beam lithography (EBL) and reactive ion etching (RIE). The fabrication method presented is compatible with standard semiconductor foundries and allows fabrication of nanopores with desired geometries and precise dimensional control, providing near-ideal and isolated physical modeling systems for studying ion transport at the nanometer level. Ion transport through nanopores was characterized by measuring the ionic conductances of arrays of nanopores of various diameters over a wide range of concentrations of aqueous hydrochloric acid (HCl) solutions. The measured ionic conductances demonstrated two distinct regimes, governed by surface charge interactions at low ionic concentrations and by nanopore geometry at high ionic concentrations. Field-effect modulation of ion transport through nanopore arrays, in a fashion similar to semiconductor transistors, was also studied. Using ionic conductance measurements, it was shown that the concentration of ions in the nanopore volume changed significantly when a gate voltage was applied to the nanopore arrays, thereby controlling their transport. Based on the ion transport results, single nanopores were used as nanoscale particle counters for polystyrene nanobeads monodispersed in aqueous HCl solutions of different molarities. Effects of field-effect modulation on particle transition events were also demonstrated.
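To make the two conductance regimes above concrete, the short sketch below evaluates a commonly used first-order model for a cylindrical nanopore, in which a geometry-dependent bulk term dominates at high ionic concentration and a surface-charge term dominates at low concentration. The pore diameter, length, surface charge density, and ion mobilities are illustrative assumptions (with a negatively charged surface so that protons act as counter-ions), not values taken from this dissertation.

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge [C]
N_A = 6.022e23         # Avogadro's number [1/mol]

# Illustrative nanopore parameters (assumed, not taken from the dissertation)
d = 50e-9              # pore diameter [m]
length = 100e-9        # pore length (membrane thickness) [m]
sigma_s = 0.02         # magnitude of the surface charge density [C/m^2]

# Approximate ionic mobilities of H+ and Cl- in water [m^2 / (V s)]
mu_h, mu_cl = 36.2e-8, 7.9e-8

def pore_conductance(c_molar):
    """First-order model: geometric bulk term plus surface-charge (counter-ion) term."""
    c = c_molar * 1e3                                  # mol/L -> mol/m^3
    kappa_bulk = (mu_h + mu_cl) * c * N_A * E_CHARGE   # bulk conductivity [S/m]
    g_bulk = (np.pi * d**2 / (4 * length)) * kappa_bulk
    g_surface = (np.pi * d / length) * mu_h * sigma_s  # proton surface conduction
    return g_bulk + g_surface

for c in (1e-6, 1e-4, 1e-2, 1.0):
    print(f"HCl {c:7.0e} M  ->  G = {pore_conductance(c):.2e} S")
```

At micromolar concentrations the surface term sets a conductance plateau, while at molar concentrations the bulk term, fixed purely by pore geometry, dominates, mirroring the two regimes reported above.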
Contributors: Joshi, Punarvasu (Author) / Thornton, Trevor J (Thesis advisor) / Goryll, Michael (Thesis advisor) / Spanias, Andreas (Committee member) / Saraniti, Marco (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance SAR image formation processing is the fractional Fourier transform (FRFT). This transform was introduced to the signal processing community relatively recently, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band that provides imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will first be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared against those of a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, which enhances the algorithm's focusing ability and thereby increases the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
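As a point of reference for the focusing metric mentioned above, the sketch below compresses a simulated linear FM (chirp) pulse with a matched filter and measures the peak-to-sidelobe ratio (PSLR) of the compressed response. The chirp parameters are assumed for illustration; this is generic range compression, not the RADARSAT-1 processing chain, the chirp scaling algorithm, or the FRFT-based variant studied in this work.

```python
import numpy as np

# Assumed chirp parameters (illustrative only)
fs = 100e6                 # sampling rate [Hz]
T = 10e-6                  # pulse duration [s]
B = 30e6                   # chirp bandwidth [Hz]
K = B / T                  # chirp rate [Hz/s]

t = np.arange(-T / 2, T / 2, 1 / fs)
chirp = np.exp(1j * np.pi * K * t**2)              # linear FM pulse

# Matched-filter (pulse) compression via FFT-based correlation
n = 2 * len(chirp)
C = np.fft.fft(chirp, n)
profile = np.abs(np.fft.fftshift(np.fft.ifft(C * np.conj(C))))
profile /= profile.max()

# Peak-to-sidelobe ratio: main-lobe peak versus the largest sidelobe
peak = int(np.argmax(profile))
left, right = peak, peak
while right + 1 < len(profile) and profile[right + 1] < profile[right]:
    right += 1                                     # walk out to the first null on the right
while left - 1 >= 0 and profile[left - 1] < profile[left]:
    left -= 1                                      # walk out to the first null on the left
sidelobe = max(profile[:left].max(initial=0), profile[right + 1:].max(initial=0))
pslr_db = 20 * np.log10(profile[peak] / sidelobe)
print(f"Peak-to-sidelobe ratio: {pslr_db:.1f} dB")  # about 13 dB for an unweighted chirp
```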
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The theme for this work is the development of fast numerical algorithms for sparse optimization as well as their applications in medical imaging and source localization using sensor array processing. Due to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity. Traditional interior-point methods encounter computational difficulties when solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion on the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm to a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated into a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
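To make the "simple functions with closed-form solutions" concrete, the sketch below solves a small $\ell_1$-regularized least-squares problem with the iterative shrinkage-thresholding algorithm (ISTA), whose per-iteration update is a gradient step followed by the closed-form soft-thresholding operator. This is a generic illustration of $\ell_1$ splitting with an arbitrary problem size and regularization weight, not the augmented Lagrangian TV-$\ell_1$ solver or the BTTB preconditioner developed in this dissertation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form solution of min_x 0.5*(x - v)**2 + tau*|x| (the l1 proximal operator)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via gradient steps plus soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L the gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Small compressive-sensing style test problem with a sparse ground truth
rng = np.random.default_rng(0)
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(A, b, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```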
Contributors: Shen, Wei (Author) / Mittelmann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in mobile technologies have significantly enhanced the capabilities of mobile devices to serve as powerful platforms for sensing, processing, and visualization. Surges in sensing technology and the abundance of data have enabled the use of these portable devices for real-time data analysis and decision-making in digital signal processing (DSP) applications. Most of the current efforts in DSP education focus on building tools to facilitate understanding of the mathematical principles. However, there is a disconnect between real-world data processing problems and the material presented in a DSP course. Sophisticated mobile interfaces and apps can potentially play a crucial role in providing hands-on experience with modern DSP applications to students. In this work, a new paradigm of DSP learning is explored by building an interactive, easy-to-use health monitoring application for use in DSP courses. This is motivated by the increasing commercial interest in employing mobile phones for real-time health monitoring tasks. The idea is to exploit the computational abilities of the Android platform to build m-Health modules with sensor interfaces. In particular, appropriate sensing modalities have been identified, and a suite of software functionalities has been developed. Within the existing framework of the AJDSP app, a graphical programming environment, interfaces to on-board and external sensor hardware have also been developed to acquire and process physiological data. The set of sensor signals that can be monitored includes the electrocardiogram (ECG), photoplethysmogram (PPG), accelerometer signal, and galvanic skin response (GSR). The proposed m-Health modules can be used to estimate parameters such as heart rate, oxygen saturation, step count, and heart rate variability. A set of laboratory exercises has been designed to demonstrate the use of these modules in DSP courses. The app was evaluated through several workshops involving graduate and undergraduate signal processing students at Arizona State University. The usefulness of the software modules in enhancing student understanding of signals, sensors, and DSP systems was analyzed. Student opinions about the app and the proposed m-Health modules evidenced the merits of integrating tools for mobile sensing and processing in a DSP curriculum and of familiarizing students with the challenges in modern data-driven applications.
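As an example of the kind of processing such a module performs, the sketch below estimates heart rate from a simulated photoplethysmogram by locating the dominant spectral peak within the physiological band. The sampling rate, signal model, and band limits are assumptions made for illustration; this is not AJDSP code and does not use the app's sensor interfaces.

```python
import numpy as np

fs = 50.0                                 # assumed PPG sampling rate [Hz]
t = np.arange(0, 30, 1 / fs)              # 30-second analysis window
hr_true = 72.0                            # beats per minute used to synthesize the signal

# Crude synthetic PPG: cardiac component + slow baseline wander + noise
rng = np.random.default_rng(1)
ppg = (np.sin(2 * np.pi * (hr_true / 60.0) * t)
       + 0.4 * np.sin(2 * np.pi * 0.2 * t)
       + 0.1 * rng.standard_normal(t.size))

# Estimate heart rate from the dominant FFT peak in the 0.7-3.5 Hz (42-210 bpm) band
spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
freqs = np.fft.rfftfreq(ppg.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.5)
hr_est = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {hr_est:.1f} bpm")
```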
Contributors: Rajan, Deepta (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points and the specifics of how they are used to define the interpolated values influence how effectively the interpolation algorithm is able to estimate the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitatively assessing the new single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that reflect the underlying signals more accurately than less computationally demanding approaches do, while imposing lower processing requirements and fewer restrictions than methods of comparable accuracy.
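To ground the discussion of interpolation, the sketch below enlarges a small grayscale array with plain bilinear interpolation, the kind of non-adaptive baseline that the registration-based approach described above is meant to improve upon. The test image and scale factor are arbitrary; this is not the edge-directed or control-grid method of the dissertation.

```python
import numpy as np

def bilinear_resize(img, scale):
    """Enlarge a 2-D array; each output sample is a weighted mix of its four nearest neighbors."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, int(h * scale))
    xs = np.linspace(0, w - 1, int(w * scale))
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                 # vertical interpolation weights
    wx = (xs - x0)[None, :]                 # horizontal interpolation weights
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bottom = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bottom

img = np.tri(8, 8)                          # tiny test image with a diagonal step edge
big = bilinear_resize(img, 4)
print(img.shape, "->", big.shape)
```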
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This research is focused on two separate but related topics. The first uses an electroencephalographic (EEG) brain-computer interface (BCI) to explore the phenomenon of motor learning transfer. The second takes a closer look at the EEG-BCI itself and tests an alternate way of mapping EEG signals into machine commands. We test whether motor learning transfer is more related to the use of shared neural structures between imagery and motor execution or to more generalized cognitive factors. Using an EEG-BCI, we train one group of participants to control the movements of a cursor using embodied motor imagery. A second group is trained to control the cursor using abstract motor imagery. A third control group practices moving the cursor using an arm and finger on a touch screen. We hypothesized that if motor learning transfer is related to the use of shared neural structures, then the embodied motor imagery group would show more learning transfer than the abstract imagery group. If, on the other hand, motor learning transfer results from more general cognitive processes, then the abstract motor imagery group should also demonstrate motor learning transfer to the manual performance of the same task. Our findings support the conclusion that motor learning transfer is due to the use of shared neural structures between imaging and motor execution of a task. The abstract group showed no motor learning transfer despite being better at EEG-BCI control than the embodied group. The fact that more participants were able to learn EEG-BCI control using abstract imagery suggests that abstract imagery may be more suitable for EEG-BCIs for some disabilities, while embodied imagery may be more suitable for others. In Part 2, EEG data collected in the above experiment were used to train an artificial neural network (ANN) to map EEG signals to machine commands. We found that our open-source ANN, which uses spectrograms generated from SFFTs, is fundamentally different from, and in some ways superior to, Emotiv's proprietary method. Our use of novel combinations of existing technologies, along with abstract and embodied imagery, facilitates adaptive customization of EEG-BCI control to meet the needs of individual users.
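For context on the spectrogram features mentioned in Part 2, the sketch below computes a short-time FFT spectrogram of a synthetic EEG-like signal, the general kind of time-frequency representation that could be fed to a classifier. The sampling rate, window length, and test signal are assumptions; this is not the Emotiv pipeline or the network trained in this study.

```python
import numpy as np

fs = 128                                   # assumed EEG sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)

# Synthetic signal: a 10 Hz "alpha" component appears in the second half, plus noise
rng = np.random.default_rng(0)
x = 0.5 * rng.standard_normal(t.size)
x[t >= 2] += np.sin(2 * np.pi * 10 * t[t >= 2])

def stft_spectrogram(x, fs, win_len=64, hop=16):
    """Magnitude spectrogram built from windowed short-time FFTs."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)).T           # (frequency bins, time frames)
    freqs = np.fft.rfftfreq(win_len, 1 / fs)
    times = (np.arange(len(frames)) * hop + win_len / 2) / fs
    return freqs, times, spec

freqs, times, spec = stft_spectrogram(x, fs)
alpha_bin = int(np.argmin(np.abs(freqs - 10)))
print("10 Hz energy, first vs. second half:",
      spec[alpha_bin, times < 2].mean(), spec[alpha_bin, times >= 2].mean())
```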
Contributors: da Silva, Flavio J. K (Author) / McBeath, Michael K (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Presson, Clark (Committee member) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Autonomous vehicle control systems utilize real-time kinematic Global Navigation Satellite System (GNSS) receivers to provide a position within two centimeters of truth. GNSS receivers utilize satellite signal time-of-arrival estimates to solve for position, and multipath corrupts the time-of-arrival estimates with a time-varying bias. Time-of-arrival estimates are based upon accurate direct sequence spread spectrum (DSSS) code and carrier phase tracking. Current multipath-mitigating GNSS solutions include fixed radiation pattern antennas and windowed delay-lock loop code phase discriminators. A new multipath-mitigating code tracking algorithm is introduced that utilizes a non-symmetric correlation kernel to reject multipath. Independent parameters provide a means to trade off code tracking discriminant gain against multipath mitigation performance. The algorithm performance is characterized in terms of multipath phase error bias, phase error estimation variance, tracking range, tracking ambiguity, and implementation complexity. The algorithm is suitable for modernized GNSS signals including Binary Phase Shift Keyed (BPSK) and a variety of Binary Offset Keyed (BOC) signals. The algorithm compensates for unbalanced code sequences to ensure that a code tracking bias does not result from the use of asymmetric correlation kernels. The algorithm does not require explicit knowledge of the propagation channel model. Design recommendations for selecting the algorithm parameters to mitigate precorrelation filter distortion are also provided.
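To illustrate the kind of code-phase discriminator being generalized, the sketch below correlates a noisy, delayed spreading code against early and late local replicas and forms a standard normalized early-minus-late power discriminant. The code length, chip oversampling, correlator spacing, and noise level are assumed, and the correlation kernel here is the conventional symmetric one, not the non-symmetric kernel proposed in this dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chips = 1023                     # assumed spreading-code length (C/A-like)
samples_per_chip = 8
spacing = 0.5                      # early-late correlator spacing [chips]

code = rng.choice([-1.0, 1.0], n_chips)       # illustrative +/-1 spreading sequence
ref = np.repeat(code, samples_per_chip)       # sampled local replica

def correlate_at(received, delay_chips):
    """Normalized correlation with a local replica delayed by delay_chips."""
    shift = int(round(delay_chips * samples_per_chip))
    return float(np.dot(received, np.roll(ref, shift))) / ref.size

# Received signal: the code delayed by a fraction of a chip, plus noise
true_delay = 0.125                            # chips (one sample here)
received = np.roll(ref, int(true_delay * samples_per_chip))
received = received + 0.5 * rng.standard_normal(received.size)

early = correlate_at(received, -spacing / 2)  # advanced (early) replica
late = correlate_at(received, +spacing / 2)   # delayed (late) replica
disc = (early**2 - late**2) / (early**2 + late**2)   # normalized early-minus-late power
# The positive true delay makes the late correlator stronger, so the discriminant is
# negative, steering the loop to delay its local code estimate toward the true value.
print(f"discriminant = {disc:+.3f}")
```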
Contributors: Miller, Steven (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Effective modeling of high-dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large-scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models, and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, some applications require combining multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex algorithm and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived, and recovery performance is also demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in a feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
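Because the greedy recovery route mentioned above builds on matching pursuit ideas, the sketch below implements plain orthogonal matching pursuit (OMP) for recovering a sparse vector from a few random measurements. It is a textbook baseline with arbitrary problem sizes, shown only to illustrate greedy sparse recovery; the combined-representation and non-linear compressive sensing methods proposed in this dissertation are not reproduced here.

```python
import numpy as np

def omp(A, b, k):
    """Textbook orthogonal matching pursuit: pick k atoms greedily, refit by least squares."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                       # never pick the same atom twice
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                    # unit-norm dictionary atoms
b = A @ x_true                                    # noiseless sparse measurements

x_hat = omp(A, b, k)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```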
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Asymptotic comparisons of ergodic channel capacity at high and low signal-to-noise ratios (SNRs) are provided for several adaptive transmission schemes over fading channels with general distributions, including optimal power and rate adaptation, rate adaptation only, channel inversion, and its variants. Analysis of the high-SNR pre-log constants of the ergodic capacity reveals the existence of constant capacity difference gaps among the schemes with a pre-log constant of 1. Closed-form expressions for these high-SNR capacity difference gaps are derived, which are proportional to the SNR loss between these schemes in dB scale. The largest of these gaps is found to be between the optimal power and rate adaptation scheme and the channel inversion scheme. Based on these expressions, it is shown that the presence of space diversity or multi-user diversity makes channel inversion arbitrarily close to achieving optimal capacity at high SNR with a sufficiently large number of antennas or users. A low-SNR analysis also reveals that the presence of fading provably always improves capacity at sufficiently low SNR, compared to the additive white Gaussian noise (AWGN) case. Numerical results are shown to corroborate our analytical results. This dissertation also derives high-SNR asymptotic average error rates over fading channels by relating them to the outage probability, under mild assumptions. The analysis is based on the Tauberian theorem for Laplace-Stieltjes transforms, which is grounded in the notion of regular variation, and applies to a wider range of channel distributions than existing approaches. The theory of regular variation is argued to be the proper mathematical framework for finding sufficient and necessary conditions for outage events to dominate high-SNR error rate performance. It is proved that a diversity order of d and a channel power gain whose cumulative distribution function (CDF) has variation exponent d at 0 imply each other, provided that the instantaneous error rate is upper-bounded by an exponential function of the instantaneous SNR. High-SNR asymptotic average error rates are derived for specific instantaneous error rates. Compared to existing approaches in the literature, the asymptotic expressions are related to the channel distribution in a much simpler manner herein, and related to outage more intuitively. The high-SNR asymptotic error rate is also characterized under diversity combining schemes with the channel power gain of each branch having a regularly varying CDF. Numerical results are shown to corroborate our theoretical analysis. Finally, this dissertation studies several problems concerning channel inclusion, which is a partial ordering between discrete memoryless channels (DMCs) proposed by Shannon. Specifically, majorization-based conditions are derived for channel inclusion between certain DMCs. Furthermore, under general conditions, channel equivalence defined through Shannon ordering is shown to be the same as permutation of input and output symbols. The determination of channel inclusion is considered as a convex optimization problem, and the sparsity of the weights related to the representation of the worse DMC in terms of the better one is revealed when channel inclusion holds between two DMCs. For the exploitation of this sparsity, an effective iterative algorithm is established based on modifying the orthogonal matching pursuit algorithm. The extension of channel inclusion to continuous channels and its application in ordering phase noises are briefly addressed.
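The capacity comparison described above can be reproduced numerically in a simple setting. The sketch below uses Monte Carlo simulation to compare the rate-adaptation-only capacity E[log2(1 + γ)] with the channel-inversion capacity log2(1 + 1/E[1/γ]) under maximal-ratio combining over independent Rayleigh branches, where the gap shrinks as the number of branches grows. The average SNR, branch counts, and sample size are arbitrary illustrative choices, and the code is not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
snr_db = 20.0                          # assumed average combined SNR
gamma_bar = 10 ** (snr_db / 10)
n_samples = 500_000

print(" L   rate adaptation   channel inversion   gap [bits/s/Hz]")
for n_branches in (2, 4, 8, 16):
    # Combined SNR after maximal-ratio combining of i.i.d. Rayleigh branches is a sum of
    # exponentials, i.e. Gamma(L, gamma_bar/L); scaling keeps the average SNR at gamma_bar.
    gamma = rng.gamma(n_branches, gamma_bar / n_branches, n_samples)
    c_rate_adapt = np.mean(np.log2(1 + gamma))           # E[log2(1 + gamma)]
    c_inversion = np.log2(1 + 1 / np.mean(1 / gamma))    # log2(1 + 1 / E[1/gamma])
    print(f"{n_branches:2d}   {c_rate_adapt:15.3f}   {c_inversion:17.3f}"
          f"   {c_rate_adapt - c_inversion:8.3f}")
```

With only two branches the gap is noticeable, but it shrinks steadily as branches are added, consistent with the observation above that diversity makes channel inversion approach the optimal schemes at high SNR.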
Contributors: Zhang, Yuan (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Reisslein, Martin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large-scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed and shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between image descriptors, and by allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets, it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
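As background for the dictionary-based models above, the sketch below runs a few iterations of a bare-bones dictionary learning loop: sparse-code the training vectors against the current dictionary with a simple thresholding pursuit, then refit the dictionary by least squares (a MOD-style update) and renormalize its atoms. The data, dictionary size, sparsity level, and coding rule are arbitrary choices for a generic baseline; the multilevel, kernel, and ensemble methods developed in this dissertation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_atoms, n_samples, sparsity = 16, 32, 500, 3

# Synthetic training data generated from a hidden overcomplete dictionary
D_true = rng.standard_normal((n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    A_true[rng.choice(n_atoms, sparsity, replace=False), j] = rng.standard_normal(sparsity)
X = D_true @ A_true

def sparse_code(D, x, k):
    """Thresholding pursuit: keep the k most correlated atoms, refit them by least squares."""
    support = np.argsort(-np.abs(D.T @ x))[:k]
    coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

# Initialize the dictionary with randomly chosen (normalized) training vectors
D = X[:, rng.choice(n_samples, n_atoms, replace=False)].copy()
D /= np.linalg.norm(D, axis=0)

for it in range(20):
    A = np.column_stack([sparse_code(D, X[:, j], sparsity) for j in range(n_samples)])
    err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
    if it % 5 == 0:
        print(f"iteration {it:2d}: relative reconstruction error {err:.3f}")
    D = X @ np.linalg.pinv(A)                      # MOD-style dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12         # keep atoms at unit norm
```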
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013