This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 41 - 50 of 184
Description

The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals that underlie sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus, several studies were performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that additional factors beyond vision and proprioception influence the planning of 3-dimensional movements. While the first study investigated the role of noise in sensorimotor integration, the second and third studies investigated the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluated how the characteristics of neural processing that underlie movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources. The final work considers a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
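As a rough illustration of the planning-versus-execution-noise distinction discussed above, the toy simulation below perturbs the aimed target once per reach (planning noise) and each incremental step toward it (execution noise), then summarizes the endpoint spread. All parameter values and the simple straight-line controller are illustrative assumptions, not the multi-modal feedback control model developed in the dissertation.

```python
# Illustrative sketch only: how planning vs. execution noise could shape
# reach endpoint variability. Parameters and controller are invented.
import numpy as np

rng = np.random.default_rng(0)

def simulate_reach(target, planning_sd=0.02, execution_sd=0.005, n_steps=50):
    """Simulate one reach: planning noise perturbs the aimed target once;
    execution noise perturbs each incremental step toward it."""
    aimed = target + rng.normal(0.0, planning_sd, size=2)   # planning noise
    pos = np.zeros(2)
    for step in range(n_steps):
        ideal_step = (aimed - pos) / (n_steps - step)        # straight-line plan
        pos = pos + ideal_step + rng.normal(0.0, execution_sd, size=2)
    return pos

target = np.array([0.3, 0.1])  # meters, arbitrary
endpoints = np.array([simulate_reach(target) for _ in range(1000)])
print("endpoint covariance:\n", np.cov(endpoints.T))
```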
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

In this thesis, we consider the problem of fast and efficient indexing techniques for time sequences that evolve on manifold-valued spaces. Using manifolds is a convenient way to work with complex features that often do not live in Euclidean spaces. However, computing standard notions of geodesic distance, mean, etc. can become very involved due to the underlying non-linearity of the space. As a result, a complex task such as manifold sequence matching would require a very large number of computations, making it hard to use in practice. We believe that one can devise smart approximation algorithms for several classes of such problems that take into account the geometry of the manifold and maintain the favorable properties of the exact approach. This problem has several applications in the areas of human activity discovery and recognition, where many features and representations are naturally studied in a non-Euclidean setting. We propose a novel solution to the problem of indexing manifold-valued sequences: an intrinsic approach that maps sequences to a symbolic representation. This is shown to enable the deployment of fast and accurate algorithms for activity recognition, motif discovery, and anomaly detection. Toward this end, we present generalizations of the key concepts of piecewise aggregation and symbolic approximation to the case of non-Euclidean manifolds. Experiments show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The proposed methods are ideally suited for real-time systems and resource-constrained scenarios.
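For readers unfamiliar with the Euclidean building blocks being generalized, the sketch below shows ordinary piecewise aggregate approximation followed by symbolic quantization on a scalar time series; the manifold-valued generalization that is the thesis's actual contribution is not reproduced here, and the breakpoints and segment count are arbitrary.

```python
# Minimal Euclidean sketch of piecewise aggregate approximation (PAA)
# followed by symbolic quantization. The thesis generalizes these ideas
# to manifold-valued sequences, which this toy example does not attempt.
import numpy as np

def paa(series, n_segments):
    """Average the series over (roughly) equal-length segments."""
    segments = np.array_split(np.asarray(series, dtype=float), n_segments)
    return np.array([seg.mean() for seg in segments])

def symbolize(values, breakpoints):
    """Map each aggregated value to a symbol index via quantization bins."""
    return np.digitize(values, breakpoints)

series = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * np.random.randn(200)
coarse = paa(series, n_segments=10)
symbols = symbolize(coarse, breakpoints=[-0.5, 0.0, 0.5])  # 4-symbol alphabet
print(symbols)
```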
Contributors: Anirudh, Rushil (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Approximately 1.7 million people in the United States are living with limb loss and are in need of more sophisticated devices that better mimic human function. In the Human Machine Integration Laboratory, a powered, transtibial prosthetic ankle was designed and built that allows a person to regain ankle function with improved ankle kinematics and kinetics. The ankle allows a person to walk normally and to go up and down stairs, but volitional control is still an issue. This research tackled the problem of giving the user more control over the prosthetic ankle using a force/torque circuit. When the user presses against a force/torque sensor located inside the socket, the prosthetic foot plantar flexes, or moves downward. This helps the user add push-off force when walking up slopes or stairs. It also gives the user a sense of control over the device.
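A minimal sketch of the kind of mapping described above, from an in-socket force reading to an added plantar-flexion command, is given below. The threshold, gain, and saturation values are invented for illustration and do not reflect the laboratory's actual hardware or control circuit.

```python
# Hypothetical sketch: mapping a socket force reading to an added
# plantar-flexion command. All values are illustrative assumptions.
def plantarflexion_command(sensor_force, threshold=5.0, gain=0.8, max_cmd=10.0):
    """Return extra plantar-flexion torque (Nm) proportional to the force (N)
    the user applies against the in-socket sensor above a threshold."""
    if sensor_force <= threshold:
        return 0.0
    return min(gain * (sensor_force - threshold), max_cmd)

for force in (2.0, 6.0, 20.0):
    print(force, "->", plantarflexion_command(force))
```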
Contributors: Fronczyk, Adam (Author) / Sugar, Thomas G. (Thesis advisor) / Helms-Tillery, Stephen (Thesis advisor) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Recent advances in camera architectures and associated mathematical representations now enable compressive acquisition of images and videos at low data rates. While most computer vision applications today are built from conventional cameras, which collect a large amount of redundant data, and power-hungry embedded systems, which compress the collected data for further processing, compressive cameras offer the advantage of acquiring data directly in the compressed domain and hence readily promise to find applicability in computer vision, particularly in environments hampered by limited communication bandwidth. However, despite the significant progress in the theory and methods of compressive sensing, little headway has been made in developing systems for such applications that exploit the merits of compressive sensing. In such a setting, we consider the problem of activity recognition, which is an important inference problem in many security and surveillance applications. Since all successful activity recognition systems involve detection of the human, followed by recognition, a potential fully functioning system based on a compressive camera would involve tracking the human, which requires the reconstruction of at least the initial few frames to detect the human. Once the human is tracked, the recognition part of the system requires only that features be extracted from the tracked sequences, which can be the reconstructed images or the compressed measurements of such sequences. However, it is desirable in resource-constrained environments that these features be extracted from the compressive measurements without reconstruction. Motivated by this, in this thesis, we propose a framework for understanding activities as a non-linear dynamical system, and propose a robust, generalizable feature that can be extracted directly from the compressed measurements without reconstructing the original video frames. The proposed feature is termed recurrence texture and is motivated by recurrence analysis of non-linear dynamical systems. We show that it is possible to obtain discriminative features directly from the compressed stream and demonstrate their utility in recognition of activities at very low data rates.
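To make the motivation concrete, the snippet below computes a standard recurrence matrix from a delay-embedded scalar series, the classical construction that recurrence-based features build on. It operates on raw samples rather than compressed measurements, so it is only a conceptual pointer to, not an implementation of, the proposed recurrence texture feature; embedding dimension, delay, and threshold are arbitrary.

```python
# Sketch of a standard recurrence matrix, the idea behind recurrence-based
# features; the thesis's compressed-domain feature is not reproduced here.
import numpy as np

def recurrence_matrix(signal, dim=3, delay=1, eps=0.2):
    """Delay-embed a scalar series and threshold pairwise distances."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (dim - 1) * delay
    embedded = np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)
    dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
    return (dists < eps).astype(int)

t = np.linspace(0, 8 * np.pi, 300)
R = recurrence_matrix(np.sin(t))
print(R.shape, R.mean())   # fraction of recurrent point pairs
```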
Contributors: Kulkarni, Kuldeep Sharad (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality--particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities--those modalities that today's computerized devices and displays largely engage--have become overloaded, creating possibilities for distractions, delays, and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that it is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability, and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory--specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich, and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed, called Somatic ABC's, for Articulating (designing), Building (developing), and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation, and evaluation theories were applied to create communication languages for two very unique application areas: audio-described movies and motor learning. These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development, and evaluation of rich somatic languages with distinct and natural communication units.
Contributors: McDaniel, Troy Lee (Author) / Panchanathan, Sethuraman (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Effective tactile sensing in prosthetic and robotic hands is crucial for improving the functionality of such hands and enhancing the user's experience. Thus, improving the range of tactile sensing capabilities is essential for developing versatile artificial hands. Multimodal tactile sensors called BioTacs, which include a hydrophone and a force electrode array, were used to understand how grip force, contact angle, object texture, and slip direction may be encoded in the sensor data. Findings show that slip induced under conditions of high contact angles and grip forces resulted in significant changes in both AC and DC pressure magnitude and rate of change in pressure. Slip induced under conditions of low contact angles and grip forces resulted in significant changes in the rate of change in electrode impedance. Slip in the distal direction of a precision grip caused significant changes in pressure magnitude and rate of change in pressure, while slip in the radial direction relative to the wrist caused significant changes in the rate of change in electrode impedance. A strong relationship was established between slip direction and the rate of change in ratios of electrode impedance for radial and ulnar slip relative to the wrist. Consequently, establishing multiple thresholds or a multivariate model may be a useful method for detecting and characterizing slip. Detecting slip for low contact angles could be done by monitoring electrode data, while detecting slip for high contact angles could be done by monitoring pressure data. Predicting slip in the distal direction could be done by monitoring pressure data, while predicting slip in the radial and ulnar directions could be done by monitoring electrode data.
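A toy version of the multiple-threshold idea is sketched below: flag slip whenever the rate of change of a pressure signal or of an electrode impedance signal exceeds a threshold. The threshold values, sampling details, and signal generation are invented for illustration and are not taken from the BioTac experiments.

```python
# Illustrative multiple-threshold slip detector. Thresholds and signals
# are synthetic placeholders, not values from the BioTac study.
import numpy as np

def detect_slip(pressure, impedance, dt, p_rate_thresh=50.0, z_rate_thresh=5.0):
    """Return sample indices where either signal's rate of change crosses
    its threshold (pressure for high contact angles, impedance for low)."""
    dp = np.abs(np.gradient(np.asarray(pressure, dtype=float), dt))
    dz = np.abs(np.gradient(np.asarray(impedance, dtype=float), dt))
    return np.where((dp > p_rate_thresh) | (dz > z_rate_thresh))[0]

t = np.arange(0, 1, 0.01)
pressure = 100 + 80 * (t > 0.6) * (t - 0.6)   # sudden pressure ramp mimics slip
impedance = np.full_like(t, 20.0)             # flat impedance in this toy case
print(detect_slip(pressure, impedance, dt=0.01)[:5])
```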
Contributors: Hsia, Albert (Author) / Santos, Veronica J (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen I (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Diabetic retinopathy (DR) is a common cause of blindness occurring due to the prolonged presence of diabetes. The risk of developing DR or having the disease progress is increasing over time. Despite advances in diabetes care over the years, DR remains a vision-threatening complication and one of the leading causes of blindness among American adults. Recent studies have shown that diagnosis based on digital retinal imaging has potential benefits over traditional face-to-face evaluation. Yet there is a dearth of computer-based systems that can match the level of performance achieved by ophthalmologists. This thesis takes a fresh perspective in developing a computer-based system aimed at improving the diagnosis of DR images. These images are categorized into three classes according to their severity level. The proposed approach explores effective methods to classify new images and retrieve clinically-relevant images from a database with prior diagnosis information associated with them. Retrieval provides a novel way to utilize the vast knowledge in the archives of previously-diagnosed DR images and thereby improve a clinician's performance, while classification can safely reduce the burden on DR screening programs and possibly achieve higher detection accuracy than human experts. To solve the three-class retrieval and classification problem, the approach uses a multi-class multiple-instance medical image retrieval framework that makes use of spectrally tuned color correlogram and steerable Gaussian filter response features. The results show better retrieval and classification performance than prior-art methods and are also observed to be of clinical and visual relevance.
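The retrieval idea, returning the most similar previously diagnosed images for a query, can be illustrated with the toy ranking sketch below. The real system uses spectrally tuned color correlogram and steerable Gaussian filter response features inside a multi-class multiple-instance framework; here the feature vectors and labels are random placeholders.

```python
# Toy retrieval sketch: rank archived images by feature distance and return
# the diagnoses of the nearest cases. Features here are random placeholders,
# not the correlogram/steerable-filter features used in the thesis.
import numpy as np

def retrieve(query_feat, archive_feats, archive_labels, k=5):
    """Return the severity labels of the k archive images closest to the query."""
    dists = np.linalg.norm(archive_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return [archive_labels[i] for i in nearest]

rng = np.random.default_rng(1)
archive = rng.random((100, 16))              # placeholder feature vectors
labels = rng.integers(0, 3, size=100)        # three severity classes: 0, 1, 2
print(retrieve(rng.random(16), archive, labels, k=5))
```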
Contributors: Chandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Humans moving in the environment must frequently change walking speed and direction to negotiate obstacles and maintain balance. Maneuverability and stability requirements account for a significant part of daily life. While constant-average-velocity (CAV) human locomotion in walking and running has been studied extensively, unsteady locomotion has received far less attention. Although some studies have described the biomechanics and neurophysiology of maneuvers, the underlying mechanisms that humans employ to control unsteady running are still not clear. My dissertation research investigated some of the biomechanical and behavioral strategies used for stable unsteady locomotion. First, I studied the behavioral-level control of human sagittal-plane running. I tested whether humans could control running using strategies consistent with simple and independent control laws that have been successfully used to control monopod robots. I found that humans use strategies consistent with the distributed feedback control strategies used by bouncing robots. Humans changed leg force rather than stance duration to control center of mass (COM) height. Humans adjusted foot placement relative to a "neutral point" to change the running speed increment between consecutive flight phases; i.e., a "pogo-stick" rather than a "unicycle" strategy was adopted to change running speed. Body pitch angle was correlated with hip moments if a proportional-derivative relationship with time lags corresponding to pre-programmed reactions (87 ± 19 ms) was assumed. To better understand the mechanisms of performing successful maneuvers, I studied the functions of the joints of the lower extremities in controlling COM speed and height. I found that during stance, the hip functioned as a power generator to change speed. The ankle switched between roles as a damper and a torsional spring, contributing to both speed and elevation changes. The knee facilitated both speed and elevation control by absorbing mechanical energy, although its contribution was less than that of the hip or ankle. Finally, I studied human turning in the horizontal plane. I used a morphological perturbation (increased body rotational inertia) to elicit compensatory strategies used to control sidestep cutting turns. Humans used changes in initial body angular speed and body pre-rotation to prevent changes in braking forces.
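The proportional-derivative relationship mentioned above can be sketched generically as below, where a hip moment is computed from delayed pitch error and pitch rate. The gains are arbitrary and only the roughly 87 ms delay is taken from the abstract; this is not the fitted model from the dissertation.

```python
# Generic delayed PD sketch relating body pitch to a corrective hip moment.
# Gains and the test signal are illustrative; only the ~87 ms delay comes
# from the abstract above.
import numpy as np

def pd_moment(pitch, t, kp=40.0, kd=4.0, delay=0.087, target=0.0):
    """Hip moment from delayed pitch error and pitch rate (PD control)."""
    dt = t[1] - t[0]
    lag = int(round(delay / dt))                 # delay in samples
    err = target - pitch
    derr = np.gradient(err, dt)
    moment = np.zeros_like(pitch)
    moment[lag:] = kp * err[:-lag] + kd * derr[:-lag]   # act on delayed signals
    return moment

t = np.linspace(0, 1, 1000)
pitch = 0.05 * np.sin(2 * np.pi * 2 * t)   # radians, illustrative oscillation
print(pd_moment(pitch, t)[90:95])
```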
Contributors: Qiao, Mu, 1981- (Author) / Jindrich, Devin L (Thesis advisor) / Dounskaia, Natalia (Committee member) / Abbas, James (Committee member) / Hinrichs, Richard (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The current work investigated the emergence of leader-follower roles during social motor coordination. Previous research has presumed that a leader during coordination assumes a spatiotemporally advanced position (e.g., a relative phase lead). While intuitive, this definition discounts what role-taking implies. Leading and following are defined as one person (or limb) having a larger influence on the motor state changes of another; the coupling is asymmetric. Three experiments demonstrated that asymmetric coupling effects emerge when task or biomechanical asymmetries are imputed between actors. Participants coordinated in-phase (Φ = 0°) swinging of handheld pendulums, which differed in their uncoupled eigenfrequencies (frequency detuning). Coupling effects were recovered through phase-amplitude modeling. Experiment 1 examined leader-follower coupling during a bidirectional task. Experiment 2 employed an additional coupling asymmetry by assigning an explicit leader and follower. Both experiments 1 and 2 demonstrated asymmetric coupling effects with increased detuning. In experiment 2, though, the explicit follower exhibited a phase lead in nearly all conditions. These results confirm that coupling direction was not determined strictly by relative phasing. A third experiment examined the question raised by the previous two: how could someone follow from ahead (i.e., exhibit a phase lead, as in experiment 2)? This was tested using a combination of frequency detuning and amplitude asymmetry requirements (e.g., 1:1 or 1:2 & 2:1). Results demonstrated that larger-amplitude movements drove the coupling toward the person with the smaller amplitude; small-amplitude movements exhibited a phase lead, despite belonging to the follower in coupling terms. These results suggest leader-follower coupling is a general property of social motor coordination. The importance of predicting when such coupling effects occur is underscored by the stability-reducing effects of coordinating asymmetric components. Generally, the implication is that role-taking is an emergent strategy of dividing coordination-stabilizing efforts unequally between actors (or limbs).
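As background for the relative-phase measure referenced above, the sketch below estimates continuous relative phase between two oscillatory signals with the Hilbert transform; a genuine phase lead appears as a positive mean phase difference. This is generic signal processing, not the phase-amplitude model actually fitted in the experiments.

```python
# Generic continuous relative-phase estimate between two oscillatory signals;
# not the phase-amplitude coupling model used in the thesis.
import numpy as np
from scipy.signal import hilbert

def relative_phase(x, y):
    """Instantaneous phase of x minus phase of y, wrapped to (-pi, pi]."""
    phase_x = np.angle(hilbert(x - np.mean(x)))
    phase_y = np.angle(hilbert(y - np.mean(y)))
    return np.angle(np.exp(1j * (phase_x - phase_y)))

t = np.linspace(0, 10, 2000)
leader = np.sin(2 * np.pi * 1.0 * t + 0.2)    # 0.2 rad phase lead, synthetic
follower = np.sin(2 * np.pi * 1.0 * t)
print(np.degrees(relative_phase(leader, follower).mean()))
```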
Contributors: Fine, Justin (Author) / Amazeen, Eric L. (Thesis advisor) / Amazeen, Polemnia G. (Committee member) / Brewer, Gene (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Image segmentation is of great importance and value in many applications. In computer vision, image segmentation is the tool and process of locating objects and boundaries within images. The segmentation result may provide more meaningful image data. Generally, there are two fundamental classes of image segmentation algorithms: discontinuity-based and similarity-based. The idea behind discontinuity-based methods is to locate abrupt changes in image intensity, as are often seen at edges or boundaries. Similarity-based methods subdivide an image into regions that fit pre-defined criteria. The algorithm utilized in this thesis falls into the second category.

This study addresses the problem of particle image segmentation by measuring the similarity between a sampled region and an adjacent region, based on the Bhattacharyya distance and an image feature extraction technique that uses distributions of local binary patterns and pattern contrasts. A boundary smoothing process is developed to improve the accuracy of the segmentation. The novel particle image segmentation algorithm is tested using four different cases of particle image velocimetry (PIV) images. The obtained segmentation results partition the objects within a 10 percent error rate. Ground-truth segmentation data, which are manually segmented images from each case, are used to calculate the error rate of the segmentations.
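The Bhattacharyya distance named above is a standard measure between normalized histograms; a minimal version is sketched below. The local-binary-pattern feature extraction and boundary smoothing steps of the thesis are not reproduced, and the example histograms are synthetic.

```python
# Minimal Bhattacharyya distance between two normalized histograms, the
# region-similarity measure named above. Example histograms are synthetic.
import numpy as np

def bhattacharyya_distance(h1, h2, eps=1e-12):
    """Distance between two discrete distributions (normalized histograms)."""
    p = np.asarray(h1, dtype=float)
    q = np.asarray(h2, dtype=float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)

region_a = np.histogram(np.random.rand(500), bins=16)[0]
region_b = np.histogram(np.random.rand(500) * 0.5, bins=16)[0]
print(bhattacharyya_distance(region_a, region_b))
```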
Contributors: Han, Dongmin (Author) / Frakes, David (Thesis advisor) / Adrian, Ronald (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2015