Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease with non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images because of the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model using support vector machines was developed, with training data from the beating heart phantom and plaques, to classify coronary soft plaque pixels as lipid or fibrous. Lipid versus fibrous classification with single energy CTA images exhibited a 17% error, while classification with dual energy CTA images in the model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
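To make the calcium scoring step concrete, the sketch below computes a conventional Agatston-style score for a single axial CT slice using the standard 130 HU threshold and density weights of 1-4. It is a generic illustration only, not the dual energy scoring algorithm developed in this work, and the function names, array, and pixel size are hypothetical.

```python
import numpy as np
from scipy import ndimage

def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0

def agatston_score(slice_hu, pixel_area_mm2, threshold_hu=130.0, min_area_mm2=1.0):
    """Agatston-style score for one axial slice: sum of lesion area x density weight."""
    mask = slice_hu >= threshold_hu
    labels, n = ndimage.label(mask)            # connected calcified lesions
    score = 0.0
    for lesion in range(1, n + 1):
        lesion_mask = labels == lesion
        area = lesion_mask.sum() * pixel_area_mm2
        if area < min_area_mm2:                # ignore single-pixel noise
            continue
        score += area * agatston_weight(slice_hu[lesion_mask].max())
    return score

# Hypothetical example: a 512 x 512 slice with 0.4 x 0.4 mm pixels
slice_hu = np.full((512, 512), 40.0)           # soft-tissue background
slice_hu[100:104, 200:204] = 450.0             # one dense calcified lesion
print(agatston_score(slice_hu, pixel_area_mm2=0.16))
```

A full exam score sums these per-slice scores over the scoring series; the dual energy approach described above derives comparable scores directly from the CTA acquisition.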
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sensitivity is a fundamental challenge for in vivo molecular magnetic resonance imaging (MRI). Here, I improve the sensitivity of metal nanoparticle contrast agents by strategically incorporating pure and doped metal oxides in the nanoparticle core, forming a soluble, monodisperse contrast agent with adjustable T2 or T1 relaxivity (r2 or r1). I first developed a simplified technique to incorporate iron oxides in apoferritin to form "magnetoferritin" for nM-level detection with T2- and T2*-weighting. I then explored whether the crystal could be chemically modified to form a particle with high r1. I first adsorbed Mn²⁺ ions to metal binding sites in the apoferritin pores. The strategic placement of metal ions near sites of water exchange and within the crystal oxide enhances r1, suggesting a mechanism for increasing relaxivity in porous nanoparticle agents. However, the Mn²⁺ addition was only possible when the particle was simultaneously filled with an iron oxide, resulting in a particle with a high r1 but also a high r2, making it undetectable with conventional T1-weighting techniques. To solve this problem and decrease the particle r2 for more sensitive detection, I chemically doped the nanoparticles with tungsten to form a disordered W-Fe oxide composite in the apoferritin core. This configuration formed a particle with an r1 of 4,870 mM⁻¹s⁻¹ and an r2 of 9,076 mM⁻¹s⁻¹. These relaxivities allowed the detection of concentrations ranging from 20 nM to 400 nM in vivo, both passively injected and targeted to the kidney glomerulus. I further developed an MRI acquisition technique to distinguish particles based on r2/r1, and show that three nanoparticles of similar size can be distinguished in vitro and in vivo with MRI. This work forms the basis for a new, highly flexible inorganic approach to design nanoparticle contrast agents for molecular MRI.
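For readers unfamiliar with the units quoted above, relaxivity is conventionally defined as the slope of the relaxation rate against contrast agent concentration. The relation below is that standard definition, included only to make the mM⁻¹s⁻¹ units and the r2/r1 ratio concrete; the concentration basis (per particle or per metal ion) follows whatever convention the reported values use.

```latex
\frac{1}{T_i} = \frac{1}{T_{i,0}} + r_i\,[\mathrm{CA}], \qquad i \in \{1, 2\}
```

Here 1/T_{i,0} is the relaxation rate of the tissue or solvent without agent and [CA] is the agent concentration in mM, so r_i carries units of mM⁻¹s⁻¹. For example, at the 400 nM (4×10⁻⁴ mM) upper end of the detected range, an r1 of 4,870 mM⁻¹s⁻¹ adds roughly 1.9 s⁻¹ to the longitudinal relaxation rate.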
Contributors: Clavijo Jordan, Maria Veronica (Author) / Bennett, Kevin M (Thesis advisor) / Kodibagkar, Vikram (Committee member) / Sherry, A Dean (Committee member) / Wang, Xiao (Committee member) / Yarger, Jeffery (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Over the past fifty years, the development of sensors for biological applications has increased dramatically. This rapid growth can be attributed in part to the reduction in feature size, which the electronics industry has pioneered over the same period. The decrease in feature size has led to the production of microscale sensors that are used for sensing applications ranging from whole-body monitoring down to molecular sensing. Unfortunately, sensors are often developed without regard to how they will be integrated into biological systems, and the complexities of integration are underappreciated. Integration involves more than simply making electrical connections. Interfacing microscale sensors with biological environments requires numerous considerations with respect to the creation of compatible packaging, the management of biological reagents, and the act of combining technologies with different dimensions and material properties. Recent advances in microfluidics, especially the proliferation of soft lithography manufacturing methods, have established the groundwork for creating systems that may solve many of the problems inherent to sensor-fluidic interaction. The adaptation of microelectronics manufacturing methods, such as Complementary Metal-Oxide-Semiconductor (CMOS) and Microelectromechanical Systems (MEMS) processes, allows the creation of a complete biological sensing system with integrated sensors and readout circuits; combining these technologies, however, remains an obstacle to forming complete sensor systems. This dissertation presents new approaches for the design, fabrication, and integration of microscale sensors and microelectronics with microfluidics. The work addresses specific challenges, such as combining commercial manufacturing processes into biological systems and developing microscale sensors in these processes. This work is exemplified through a feedback-controlled microfluidic pH system that demonstrates the integration capabilities of microscale sensors for autonomous microenvironment control.
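As a rough picture of what a feedback-controlled microfluidic pH system does, the sketch below shows a generic proportional-integral control loop around an on-chip pH sensor and an acid/base dosing actuator. The sensor and pump callables, gains, and setpoint are hypothetical placeholders, not the circuits or firmware described in this dissertation.

```python
import time

class PIController:
    """Minimal proportional-integral controller for a pH setpoint."""
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured_ph, dt):
        error = self.setpoint - measured_ph
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral   # actuator command

def control_loop(read_ph_sensor, drive_acid_base_pumps, setpoint=7.0, dt=1.0, steps=60):
    """Read the integrated pH sensor, then dose acid/base to hold the setpoint."""
    controller = PIController(kp=0.5, ki=0.05, setpoint=setpoint)
    for _ in range(steps):
        ph = read_ph_sensor()                   # on-chip pH sensor readout
        command = controller.update(ph, dt)
        drive_acid_base_pumps(command)          # positive: add base, negative: add acid
        time.sleep(dt)
```

The design choice is the usual one for slow chemical processes: the integral term removes steady-state offset while the proportional term sets the response speed.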
Contributors: Welch, David (Author) / Blain Christen, Jennifer (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Frakes, David (Committee member) / LaBelle, Jeffrey (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Our ability to estimate the position of our body parts in space, a fundamentally proprioceptive process, is crucial for interacting with the environment and for movement control. For proprioception to support these actions, the central nervous system has to rely on a stored internal representation of the body parts in space. However, relatively little is known about this internal representation of arm position. To this end, I developed a method to map proprioceptive estimates of hand location across a 2-D workspace. In this task, I moved each subject's hand to a target location while the subject's eyes were closed. After returning the hand, subjects opened their eyes to verbally report the location where their fingertip had been. Then, I reconstructed and analyzed the spatial structure of the pattern of estimation errors. In the first two experiments, I probed the structure and stability of the pattern of errors by manipulating the hand used and the tactile feedback provided when the hand was at each target location. I found that the resulting pattern of errors was systematically stable across conditions for each subject, subject-specific, and not uniform across the workspace. These findings suggest that the observed structure of the pattern of errors has been constructed through experience, which has resulted in a systematically stable internal representation of arm location. Moreover, this representation is continuously being calibrated across the workspace. In the next two experiments, I aimed to probe the calibration of this structure. To this end, I used two different perturbation paradigms: 1) a virtual reality visuomotor adaptation paradigm to induce a local perturbation, and 2) a standard prism adaptation paradigm to induce a global perturbation. I found that the magnitude of the errors significantly increased to a similar extent after each perturbation. This small effect indicates that proprioception is recalibrated to a similar extent regardless of how the perturbation is introduced, suggesting that sensory and motor changes may be two independent processes arising from the perturbation. Moreover, I propose that the internal representation of arm location might be constructed with a global solution and may not be capable of local changes.
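One way to picture the mapping analysis is to compute, for each target, the vector from the true fingertip location to the verbally reported location and then summarize those vectors across the workspace. The minimal sketch below does this for hypothetical data; it is a generic illustration, not the author's analysis code.

```python
import numpy as np

def error_map(target_xy, reported_xy):
    """Per-target constant error (bias vector) and overall mean error magnitude.

    target_xy, reported_xy: (n_trials, 2) arrays of actual and reported
    fingertip positions in cm; targets repeat across trials.
    """
    errors = reported_xy - target_xy                        # error vector per trial
    bias = {}
    for t in np.unique(target_xy, axis=0):
        at_target = np.all(target_xy == t, axis=1)
        bias[tuple(t)] = errors[at_target].mean(axis=0)     # mean error at this target
    magnitude = np.linalg.norm(errors, axis=1).mean()       # average error size (cm)
    return bias, magnitude

# Hypothetical data: 3 targets, 2 repetitions each
targets = np.array([[10, 20], [10, 20], [0, 30], [0, 30], [-10, 20], [-10, 20]], float)
reports = targets + np.random.normal(0, 1.5, targets.shape)
print(error_map(targets, reports))
```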
Contributors: Rincon Gonzalez, Liliana (Author) / Helms Tillery, Stephen I (Thesis advisor) / Buneo, Christopher A (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Kleim, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Millions of Americans live with motor impairments resulting from a stroke, and the best way to administer rehabilitative therapy to achieve recovery is not well understood. Adaptive mixed reality rehabilitation (AMRR) is a novel integration of motion capture technology and high-level media computing that provides precise kinematic measurements and engaging multimodal feedback for self-assessment during a therapeutic task. The AMRR system was evaluated in a small (N = 3) cohort of stroke survivors to determine best practices for administering adaptive, media-based therapy. A proof-of-concept study followed, examining changes in clinical scale and kinematic performances among a group of stroke survivors who received either a month of AMRR therapy (N = 11) or matched dosing of traditional repetitive task therapy (N = 10). Both groups demonstrated statistically significant improvements in Wolf Motor Function Test and upper-extremity Fugl-Meyer Assessment scores, indicating increased function after the therapy. However, only participants who received AMRR therapy showed a consistent improvement in their kinematic measurements, both in the trained reaching task (reaching to grasp a cone) and in an untrained reaching task (reaching to push a lighted button). These results suggest that the AMRR system can be used as a therapy tool to enhance both functionality and the reaching kinematics that quantify movement quality. Additionally, the AMRR concepts are currently being transitioned to a home-based training application. An inexpensive, easy-to-use toolkit of tangible objects has been developed to sense, assess and provide feedback on hand function during different functional activities. These objects have been shown to accurately and consistently track hand function in people with unimpaired movements and will be tested with stroke survivors in the future.
Contributors: Duff, Margaret Rose (Author) / Rikakis, Thanassis (Thesis advisor) / He, Jiping (Thesis advisor) / Herman, Richard (Committee member) / Kleim, Jeffrey (Committee member) / Santos, Veronica (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms, and we propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves the root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with high tracking performance in terms of RMSE. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF-based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF), and it is applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm which adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors.
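For readers unfamiliar with the base technique, the sketch below is a minimal bootstrap particle filter for a generic one-dimensional random-walk state-space model. It illustrates only the predict/weight/resample cycle that the more advanced algorithms build on; it is not the PPF-IMH, MPF, or PF-PHDF algorithms themselves, and the model and noise levels are hypothetical.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=1.0, obs_std=1.0):
    """Bootstrap PF for x_t = x_{t-1} + w_t, y_t = x_t + v_t (Gaussian noise)."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)          # initial state hypotheses
    estimates = []
    for y in observations:
        # Predict: propagate particles through the process model
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight: likelihood of the observation under each particle
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights = weights + 1e-300                          # guard against underflow
        weights /= weights.sum()
        # Estimate: weighted posterior mean
        estimates.append(np.sum(weights * particles))
        # Resample: multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Hypothetical example: track a slowly drifting scalar from noisy measurements
true_state = np.cumsum(np.random.normal(0, 1.0, 100))
observations = true_state + np.random.normal(0, 1.0, 100)
print(bootstrap_particle_filter(observations)[:5])
```

In parallel implementations the resampling step is typically the communication-heavy part, since it couples all particle weights; that is the kind of cost the PPF-IMH approach above is designed to reduce.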
Contributors: Miao, Lifeng (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Zhang, Junshan (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Humans' ability to perform fine object and tool manipulation is a defining feature of their sensorimotor repertoire. How the central nervous system builds and maintains internal representations of such skilled hand-object interactions has attracted significant attention over the past three decades. Nevertheless, two major gaps exist: a) how digit positions and forces are coordinated during natural manipulation tasks, and b) what mechanisms underlie the formation and retention of internal representations of dexterous manipulation. This dissertation addresses these two questions through five experiments that are based on novel grip devices and experimental protocols. It was found that a high-level representation of manipulation tasks can be learned in an effector-independent fashion. Specifically, when challenged by trial-to-trial variability in finger positions, or when using digits that were not previously engaged in learning the task, subjects could adjust finger forces to compensate for this variability, thus leading to consistent task performance. The results from a follow-up experiment conducted in a virtual reality environment indicate that haptic feedback is sufficient to implement the above coordination between digit positions and forces. However, it was also found that the generalizability of a learned manipulation is limited across tasks. Specifically, when subjects learned to manipulate the same object across different contexts that require different motor output, interference was found at the time of switching contexts. Data from additional studies provide evidence for parallel learning processes, which are characterized by different rates of decay and learning. These experiments have provided important insight into the neural mechanisms underlying learning and control of object manipulation. The present findings have potential biomedical applications including brain-machine interfaces, rehabilitation of hand function, and prosthetics.
Contributors: Fu, Qiushi (Author) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Buneo, Christopher (Committee member) / Santos, Veronica (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Reaching movements are subject to noise in both the planning and execution phases of movement production. Although the effects of these noise sources on estimating and/or controlling endpoint position have been examined in many studies, the independent effects of limb configuration on endpoint variability have been largely ignored. The present study investigated the effects of arm configuration on the interaction between planning noise and execution noise. Subjects performed reaching movements to three targets located in a frontal plane. At the starting position, subjects matched one of two desired arm configuration 'templates', namely 'adducted' and 'abducted'. These arm configurations were obtained by rotations about the shoulder-hand axis, thereby maintaining endpoint position. Visual feedback of the hand was varied from trial to trial, thereby increasing uncertainty in movement planning and execution. It was hypothesized that 1) the pattern of endpoint variability would depend on arm configuration and 2) these differences would be most apparent in conditions without visual feedback. It was found that there were differences in endpoint variability between arm configurations in both visual conditions, but these differences were much larger when visual feedback was withheld. The overall results suggest that patterns of endpoint variability are highly dependent on arm configuration, particularly in the absence of visual feedback. This suggests that in the presence of vision, movement planning in 3D space is performed using coordinates that are largely independent of arm configuration (i.e., extrinsic coordinates). In contrast, in the absence of vision, movement planning in 3D space reflects a substantial contribution of intrinsic coordinates.
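Endpoint variability of the kind compared here is often summarized by fitting a confidence ellipse to the scatter of movement endpoints for each target and arm configuration. The sketch below computes the area and orientation of such an ellipse from hypothetical endpoint data; it is only a generic illustration of that analysis, not the study's own code.

```python
import numpy as np

def variability_ellipse(endpoints_xy):
    """Area and orientation of the 95% confidence ellipse of 2-D endpoints."""
    cov = np.cov(endpoints_xy, rowvar=False)          # 2x2 endpoint covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # principal axes of the scatter
    chi2_95 = 5.991                                   # chi-square, 2 dof, p = 0.95
    area = np.pi * chi2_95 * np.sqrt(eigvals[0] * eigvals[1])
    major = eigvecs[:, np.argmax(eigvals)]
    orientation_deg = np.degrees(np.arctan2(major[1], major[0]))
    return area, orientation_deg

# Hypothetical endpoints (cm) for one target and one arm configuration
endpoints = np.random.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 2.0]], size=40)
print(variability_ellipse(endpoints))
```

Comparing ellipse area and orientation across the adducted and abducted configurations, with and without visual feedback, is one standard way to express the differences reported above.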
Contributors: Lakshmi Narayanan, Kishor (Author) / Buneo, Christopher (Thesis advisor) / Santello, Marco (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Ionizing radiation, such as gamma rays and X-rays, is becoming more widely used. These high-energy forms of electromagnetic radiation are present in nuclear energy, astrophysics, and the medical field. As more and more people have the opportunity to be exposed to ionizing radiation, the need for simple and quick methods of radiation detection is increasing. In this work, two systems were explored for their ability to simply detect ionizing radiation. Gold nanoparticles were formed via radiolysis of water in the presence of elastin-like polypeptides (ELPs) and also in the presence of cationic polymers. Gold nanoparticle formation is an indicator of the presence of radiation. The system with ELP was split into two subsystems: samples including isopropyl alcohol (IPA) and acetone, and samples without IPA and acetone. The samples were exposed to certain radiation doses and gold nanoparticles were formed. Gold nanoparticle formation was deemed to have occurred when the sample changed color from light yellow to red or purple. Nanoparticle formation was also checked by absorbance measurements. In the cationic polymer system, gold nanoparticles were also formed after exposing the experimental system to certain radiation doses. Unique to the polymer system was the ability of some of the cationic polymers to form gold nanoparticles without the samples being irradiated. Future work on this project will involve further characterization of the gold nanoparticles formed by both systems.
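One simple way to flag gold nanoparticle formation from an absorbance measurement is to look for the surface plasmon resonance band that colloidal gold typically shows in the green part of the spectrum (roughly 500-580 nm). The script below is a hypothetical illustration of that check with made-up thresholds, not the instrument software or analysis used in this study.

```python
import numpy as np

def nanoparticles_formed(wavelengths_nm, absorbance, band=(500.0, 580.0), min_peak=0.1):
    """Flag gold nanoparticle formation by a plasmon absorbance peak in the band."""
    in_band = (wavelengths_nm >= band[0]) & (wavelengths_nm <= band[1])
    peak = absorbance[in_band].max()
    baseline = absorbance[wavelengths_nm > 650.0].mean()   # featureless red/NIR region
    return (peak - baseline) >= min_peak

# Hypothetical spectrum: a broad peak near 530 nm over a flat baseline
wl = np.linspace(400, 800, 401)
spec = 0.05 + 0.4 * np.exp(-0.5 * ((wl - 530) / 30) ** 2)
print(nanoparticles_formed(wl, spec))   # True when the plasmon band is present
```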
Contributors: Walker, Candace (Author) / Rege, Kaushal (Thesis advisor) / Chang, John (Committee member) / Kodibagkar, Vikram (Committee member) / Potta, Thrimoorthy (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Arizona's English Language Development Model (ELD Model) is intended to increase and accelerate the learning of English by English Language Learners (ELLs), so that, once they know the English language, the students are ready to learn the other academic subjects together with their English-speaking peers. This model is part of a response to comply with the Flores Consent Order to improve services for ELLs in Arizona public schools. Whether or not it actually has improved instruction for ELLs has been the subject of much debate and, in 2012, after four years of the requirement to use Arizona's ELD Model, the ELL students identified as reclassified in the six districts in the study did not pass Arizona's Instrument to Measure Standards (AIMS) test. The model's requirement to separate students who are not proficient from students who are proficient, the assessment used for identification of ELLs, and the Structured English Immersion (SEI) requirement of four hours of English-only instruction are at the nexus of the controversy, as the courts accepted the separate four-hour SEI portion of the model as sufficient to meet the needs of ELLs in Arizona (Garcia, 2011; Martinez, 2012; Lawton, 2012; Lillie, 2012). This study examines student achievement in Reading and Math as measured by AIMS standards-based tests in six urban K-8 public school districts between 2007 and 2012. This period was selected to cover two years before and four years after the ELD Model was required. Although the numbers of ELLs have decreased for the state and for the six urban elementary districts since the advent of the Arizona ELD Model, the reclassified ELL subgroup in the studied districts did not pass the AIMS in any of the years in the study. Based on those results, this study concludes with the following recommendations: first, to study the coming changes in language assessments and their impact on ELL student achievement in broad and comprehensive ways; second, to implement a model change allowing school districts to support their ELLs in their first language; and, finally, to establish programs that will allow ELLs full access to study with their English-speaking peers.
Contributors: Roa, Myriam (Author) / Fischman, Gustavo E (Thesis advisor) / Lawton, Stephen B. (Committee member) / Diaz, René X (Committee member) / Arizona State University (Publisher)
Created: 2012