Matching Items (9)
Description
The world of a hearing impaired person is much different than that of somebody capable of discerning different frequencies and magnitudes of sound waves via their ears. This is especially true when hearing impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. Based on this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they are facing. The purpose of this project was to find a way to even the playing field, so that a person hard of hearing could also receive the sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created. This is a system that takes a surround sound input and illuminates LEDs around the periphery of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed along with its viability with regard to a deaf person's ability to learn the technology and decipher the visual cues.
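To make the mapping concrete, the sketch below shows one way a block of multichannel surround audio could be reduced to a per-direction LED brightness and dominant frequency. The channel layout, RMS-to-brightness scaling, and 8-bit output are illustrative assumptions, not the implementation described in the thesis.

```python
import numpy as np

# Assumed 4-channel surround layout; a real system might use 5.1 or another mapping.
CHANNELS = ["front_left", "front_right", "rear_left", "rear_right"]

def frame_to_led_levels(frame, sample_rate=48000):
    """frame: (samples x channels) block of surround audio, normalized to [-1, 1]."""
    levels = {}
    for idx, name in enumerate(CHANNELS):
        x = frame[:, idx]
        amplitude = np.sqrt(np.mean(x ** 2))               # RMS loudness from this direction
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
        dominant = freqs[np.argmax(spectrum)]              # dominant frequency (e.g., for LED color)
        levels[name] = {
            "brightness": int(np.clip(amplitude * 255, 0, 255)),  # assumed 8-bit PWM duty
            "dominant_hz": float(dominant),
        }
    return levels
```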
Contributors: Kadi, Danyal (Co-author) / Burrell, Nathaneal (Co-author) / Butler, Kristi (Co-author) / Wright, Gavin (Co-author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description
Detecting early signs of neurodegeneration is vital for measuring the efficacy of pharmaceuticals and planning treatments for neurological diseases. This is especially true for Amyotrophic Lateral Sclerosis (ALS), where differences in symptom onset can be indicative of the prognosis. Because they can be measured noninvasively, changes in speech production have been proposed as a promising indicator of neurological decline. However, speech changes are typically measured subjectively by a clinician. These perceptual ratings can vary widely between clinicians and within the same clinician on different patient visits, making clinical ratings less sensitive to subtle early indicators. In this paper, we propose an algorithm for the objective measurement of flutter, a quasi-sinusoidal modulation of fundamental frequency that manifests in the speech of some ALS patients. The algorithm detailed in this paper employs long-term average spectral analysis on the residual F0 track of a sustained phonation to detect the presence of flutter and is robust to longitudinal drifts in F0. The algorithm is evaluated on a longitudinal speech dataset of ALS patients at varying stages in their prognosis. Benchmarking against two stages of perceptual ratings provided by an expert speech pathologist indicates that the algorithm follows perceptual ratings with moderate accuracy and can objectively detect flutter in instances where the variability of the perceptual rating causes uncertainty.
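As a rough illustration of this kind of analysis, the sketch below detrends an F0 track from a sustained phonation, forms a long-term average spectrum of the residual, and reports the fraction of modulation energy in an assumed flutter band. The band limits, frame length, and score definition are assumptions for illustration, not the thesis' parameters.

```python
import numpy as np
from scipy.signal import welch, detrend

def flutter_score(f0_track, f0_rate=100.0, band=(4.0, 12.0)):
    """f0_track: F0 estimates in Hz, sampled at f0_rate frames per second."""
    residual = detrend(f0_track)                        # remove slow longitudinal drift in F0
    freqs, psd = welch(residual, fs=f0_rate, nperseg=min(256, len(residual)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # ratio of modulation energy in the assumed flutter band to total modulation energy
    return float(psd[in_band].sum() / (psd.sum() + 1e-12))
```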
Contributors: Peplinski, Jacob Scott (Author) / Berisha, Visar (Thesis director) / Liss, Julie (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Imaging using electric fields could provide a cheaper, safer, and easier alternative to the standard methods used for imaging. The viability of electric field imaging at very low frequencies using D-dot sensors has already been investigated and proven. The new goal is to determine if imaging is viable at high frequencies. In order to accomplish this, the operational amplifiers used in the very low frequency imaging test setup must be replaced with ones that have higher bandwidth. The trade-off of using these amplifiers is that they typically have a higher input leakage current, on the order of 100 compared to the standard. Using a modified circuit design technique that reduces the input leakage current of the operational amplifiers used in the imaging test setup, a printed circuit board with D-dot sensors is fabricated to identify the frequency limitations of electric field imaging. Data is collected at both low and high frequencies as well as at low peak voltage. The data is then analyzed to determine the range of electric field intensity and frequency over which this low-leakage circuit design can accurately detect a signal. Data is also collected using another printed circuit board that uses the standard circuit design technique. The data taken from the different boards is compared to determine whether the modified circuit design technique allows for higher sensitivity imaging. In conclusion, this research supports that using low-leakage design techniques can allow for signal detection comparable to that of the standard circuit design. The low-leakage design allowed for sensitivity within a factor of two of that of the standard design. Although testing at higher frequencies was limited, signal detection for the low-leakage design was reliable up to 97 kHz, but further experimentation is needed to determine the upper frequency limits.
Contributors: Lin, Richard (Co-author) / Angell, Tyler (Co-author) / Allee, David (Thesis director) / Chung, Hugh (Committee member) / Electrical Engineering Program (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The increasing presence and affordability of sensors provides the opportunity to make novel and creative designs for underserved markets like the legally blind. Here we explore how mathematical methods and device coordination can be utilized to improve the functionality of inexpensive proximity sensing electronics in order to create designs that are versatile, durable, low cost, and simple. Devices utilizing various acoustic and electromagnetic wave frequencies, such as ultrasonic rangefinders, radars, Lidar rangefinders, webcams, and infrared rangefinders, and the concepts of Sensor Fusion, Frequency Modulated Continuous Wave (FMCW) radar, and Phased Arrays were explored. The effects of various factors on the propagation of different wave signals were also investigated. The devices selected to be incorporated into designs were the HB100 DRO Radar Doppler Sensor (as an FMCW radar), the HC-SR04 ultrasonic sensor, and the Maxbotix Ultrasonic Rangefinder EZ3. Three designs were ultimately developed and dubbed the "Rad-Son Fusion", the "Tri-Beam Scanner", and the "Dual-Receiver Ranger". The "Rad-Son Fusion" employs the Sensor Fusion of an FMCW radar and an ultrasonic sensor through a weighted average of the distance readings from the two sensors. The "Tri-Beam Scanner" utilizes a beam-forming Digital Phased Array of ultrasonic sensors to scan its surroundings. The "Dual-Receiver Ranger" uses the convolved result from two modified HC-SR04 sensors to determine the time of flight and ultimately an object's distance. After conducting hardware experiments to determine the feasibility of each design, the "Dual-Receiver Ranger" was prototyped and tested to demonstrate the potential of the concept. The designs were later compared against the proposed requirements, and possible improvements and challenges associated with the designs are discussed.
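As an illustration of the time-of-flight idea behind the "Dual-Receiver Ranger", the sketch below cross-correlates the summed receiver channels with the transmitted ping and converts the peak lag to a distance. The sample rate, the simple channel summing, and the use of cross-correlation in place of the thesis' exact convolution step are assumptions for illustration.

```python
import numpy as np

def estimate_range(tx_ping, rx_a, rx_b, fs=200_000.0, speed_of_sound=343.0):
    """Return an estimated target distance in meters from two receiver channels."""
    combined = rx_a + rx_b                               # merge the two receivers' echoes
    corr = np.correlate(combined, tx_ping, mode="full")  # match against the known ping
    lag = np.argmax(np.abs(corr)) - (len(tx_ping) - 1)   # round-trip delay in samples
    time_of_flight = max(lag, 0) / fs
    return speed_of_sound * time_of_flight / 2.0         # divide by 2: out and back
```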
Contributors: Feinglass, Joshua Forster (Author) / Goryll, Michael (Thesis director) / Reisslein, Martin (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Readout Integrated Circuits (ROICs) are important components of infrared (IR) imaging systems. The performance of ROICs affects the quality of images obtained from IR imaging systems. Contemporary infrared imaging applications demand ROICs that can support large dynamic range, high frame rate, and high output data rate at low cost, size, and power. Some of these applications are military surveillance, remote sensing in space and earth science missions, and medical diagnosis. This work focuses on developing a ROIC unit cell prototype for the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory's (JPL's) space applications. These space applications also demand high sensitivity, longer integration times (large well capacity), wide operating temperature range, wide input current range, and immunity to radiation events such as Single Event Latch-up (SEL).

This work proposes a digital ROIC (DROIC) unit cell prototype of 30 µm x 30 µm size, to be used mainly with NASA JPL's High Operating Temperature Barrier Infrared Detectors (HOT BIRDs). Current state-of-the-art DROICs achieve a dynamic range of 16 bits using advanced 65-90 nm CMOS processes, which adds a lot of cost overhead. The DROIC pixel proposed in this work uses a low-cost 180 nm CMOS process and supports a dynamic range of 20 bits when operating at a low frame rate of 100 frames per second (fps), and a dynamic range of 12 bits when operating at a high frame rate of 5 kfps. The total electron well capacity of this DROIC pixel is 1.27 billion electrons, enabling integration times as long as 10 ms to achieve better dynamic range. The DROIC unit cell uses an in-pixel 12-bit coarse ADC and an external 8-bit DAC-based fine ADC. The proposed DROIC uses layout techniques that make it immune to radiation up to 300 krad(Si) of total ionizing dose (TID) and to single event latch-up (SEL). It also has a wide input current range, from 10 pA to 1 µA, and supports detectors operating from the short-wave infrared (SWIR) to long-wave infrared (LWIR) regions.
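For intuition, the short sketch below works through the arithmetic implied by these numbers: how a coarse in-pixel count and a fine residue code might be merged into one sample, and how well capacity bounds integration time for a given detector current. The charge-packet-per-count model and the 10 nA example photocurrent are illustrative assumptions, not values from the thesis (the stated input range is 10 pA to 1 µA).

```python
ELECTRON_CHARGE = 1.602e-19  # coulombs

def combined_code(coarse_count, fine_code, fine_bits=8):
    """Each coarse count stands for one full charge packet; the fine ADC
    digitizes the leftover residue as a fraction of a packet."""
    return (coarse_count << fine_bits) | fine_code

def max_integration_time(well_capacity_e, photocurrent_a):
    """Time in seconds to fill the well at a given detector current."""
    return well_capacity_e * ELECTRON_CHARGE / photocurrent_a

# Example: a 1.27e9-electron well at an assumed 10 nA photocurrent fills in about 20 ms,
# so a 10 ms integration stays comfortably within capacity.
print(max_integration_time(1.27e9, 10e-9))  # ~0.020 s
```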
Contributors: Praveen, Subramanya Chilukuri (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kitchen, Jennifer (Committee member) / Long, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The NASA Psyche Iron Meteorite Imaging System (IMIS) is a standalone system created to image metal meteorites from ASU's Center for Meteorite Studies' collection that have an etched surface. Meteorite scientists have difficulty obtaining true-to-life images of meteorites through traditional photography methods due to the meteorites' shiny, irregular surfaces, which interfere with their ability to identify meteorites' component materials through image analysis. Using the IMIS, scientists can easily and consistently obtain glare-free photographs of meteorite surfaces that are suitable for future use in an artificial intelligence-based meteorite component analysis system. The IMIS integrates a lighting system, a mounted camera, a sample positioning area, a meteorite leveling/positioning system, and a touch screen control panel featuring an interface that allows the user to see a preview of the image to be taken as well as an edge detection view, a glare detection view, a button that allows the user to remotely take the picture, and feedback if very high levels of glare are detected that may indicate a camera or positioning error. Initial research and design work were completed by the end of the Fall semester, and the Spring semester consisted of building and testing the system. The current system is fully functional, and photos taken by the current system have been approved by a meteorite expert and an AI expert. The funding for this project was tentatively capped at $1000 for miscellaneous expenses, not including a camera to be supplied by the School of Earth and Space Exploration. When SESE was unable to provide a camera, an additional $4000 was allotted for camera expenses. So far, $1935 of the total $5000 budget has been spent on the project, putting the project $3065 under budget. While this system is a functional prototype, future capstone projects may involve the help of industrial designers to improve the technician's experience through automating the sample positioning process.
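As an illustration of what a glare check like the one behind the control panel's glare detection view and warning feedback might involve, the short sketch below flags near-saturated pixels in a grayscale frame and raises a warning when their fraction is high. The 8-bit assumption, saturation level, and warning threshold are hypothetical, not the system's actual parameters.

```python
import numpy as np

def glare_fraction(gray_image, saturation_level=250):
    """gray_image: 8-bit grayscale frame; returns the fraction of near-saturated pixels."""
    return float(np.mean(gray_image >= saturation_level))

def glare_warning(gray_image, max_fraction=0.02):
    """True if enough of the frame is blown out to suggest a lighting or positioning error."""
    return glare_fraction(gray_image) > max_fraction
```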
Contributors: Baerwaldt, Morgan Kathleen (Author) / Bowman, Cassie (Thesis director) / Kozicki, Michael (Committee member) / School of Art (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Power spectral analysis is a fundamental aspect of signal processing used in the detection and estimation of various signal features. Signals spaced closely in frequency are problematic and lead analysts to miss crucial details surrounding the data. The Capon and Bartlett methods are non-parametric filterbank approaches to power spectrum estimation. The Capon algorithm is known as the "adaptive" approach to power spectrum estimation because its filter impulse responses are adapted to fit the characteristics of the data. The Bartlett method is known as the "conventional" approach to power spectrum estimation (PSE) and has a fixed deterministic filter. Both techniques rely on the Sample Covariance Matrix (SCM). The first objective of this project is to analyze the origins and characteristics of the Capon and Bartlett methods to understand their abilities to resolve signals closely spaced in frequency. Taking into consideration both methods' reliance on the SCM, there is a novelty in combining these two algorithms using their cross-coherence. The second objective of this project is to analyze the performance of the Capon-Bartlett Cross Spectra. This study will involve Matlab simulations of known test cases and comparisons with approximate theoretical predictions.
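As a concrete reference for the two estimators discussed above, the sketch below computes both from the same sample covariance matrix for a uniformly sampled time series. The steering-vector normalization and the light diagonal loading added before inversion are illustrative choices, not the project's Matlab implementation.

```python
import numpy as np

def sample_covariance(snapshots):
    """snapshots: M x N array holding N length-M data snapshots."""
    return snapshots @ snapshots.conj().T / snapshots.shape[1]

def bartlett_capon(snapshots, n_freqs=512):
    M, _ = snapshots.shape
    R = sample_covariance(snapshots)
    R_inv = np.linalg.inv(R + 1e-6 * np.trace(R) / M * np.eye(M))  # light diagonal loading
    freqs = np.linspace(0.0, 0.5, n_freqs, endpoint=False)          # cycles/sample
    bartlett = np.empty(n_freqs)
    capon = np.empty(n_freqs)
    for k, f in enumerate(freqs):
        v = np.exp(2j * np.pi * f * np.arange(M)) / np.sqrt(M)      # steering vector
        bartlett[k] = np.real(v.conj() @ R @ v)                      # fixed (conventional) filter
        capon[k] = 1.0 / np.real(v.conj() @ R_inv @ v)               # data-adaptive filter
    return freqs, bartlett, capon
```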
Contributors: Yoshiyama, Cassidy (Author) / Richmond, Christ (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the field of electronic music, haptic feedback is a crucial feature of digital musical instruments (DMIs) because it gives the musician a more immersive experience. This feedback might come in the form of a wearable haptic device that vibrates in response to music. Such advancements in the electronic music field are applicable to the field of speech and hearing. More specifically, wearable haptic feedback devices can enhance the musical listening experience for people who use cochlear implant (CI) devices.
This Honors Thesis is a continuation of Prof. Lauren Hayes's and Dr. Xin Luo's research initiative, Haptic Electronic Audio Research into Musical Experience (HEAR-ME), which investigates how to enhance the musical listening experience for CI users using a wearable haptic system. The goals of this Honors Thesis are to adapt Prof. Hayes's system code from the Max visual programming language into the C++ object-oriented programming language and to study the results of the developed C++ code. This adaptation allows the system to operate in real time and independently of a computer.
Towards these goals, two signal processing algorithms were developed and programmed in C++. The first algorithm is a thresholding method, which outputs a pulse of a predefined width when the input signal falls below some threshold in amplitude. The second algorithm is a root-mean-square (RMS) method, which outputs a pulse-width modulation signal with a fixed period and with a duty cycle dependent on the RMS of the input signal. The thresholding method was found to work best with speech, and the RMS method was found to work best with music. Future work entails the design of adaptive signal processing algorithms to allow the system to work more effectively on speech in a noisy environment and to emphasize a variety of elements in music.
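The following is a Python sketch of the two block-based algorithms described above (the thesis itself implements them in C++ for real-time, computer-independent operation). The block size, amplitude threshold, pulse width, and PWM period are illustrative assumptions.

```python
import numpy as np

def threshold_pulses(block, threshold=0.05, pulse_samples=64):
    """Emit a fixed-width pulse when the block's peak amplitude falls below the threshold."""
    out = np.zeros(len(block))
    if np.max(np.abs(block)) < threshold:
        out[:pulse_samples] = 1.0
    return out

def rms_pwm(block, period_samples=256):
    """Fixed-period pulse-width-modulated output whose duty cycle tracks the block's RMS level."""
    duty = float(np.clip(np.sqrt(np.mean(block ** 2)), 0.0, 1.0))
    out = np.zeros(period_samples)
    out[: int(duty * period_samples)] = 1.0
    return out
```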
Contributors: Bonelli, Dominic Berlage (Author) / Papandreou-Suppappola, Antonia (Thesis director) / Hayes, Lauren (Thesis director, Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description

The idea for this thesis emerged from my senior design capstone project, A Wearable Threat Awareness System. A TFmini-S LiDAR sensor is used as one component of this system; the functionality of and signal processing behind this type of sensor are elucidated in this document. Conceptual implementations of the optical and digital stages of the signal processing are described in some detail. Following an introduction in which some general background knowledge about LiDAR is set forth, the body of the thesis is organized into two main sections. The first section focuses on optical processing to demodulate the received signal backscattered from the target object. This section describes the key steps in demodulation and illustrates them with computer simulation. A series of graphs capture the mathematical form of the signal as it progresses through the optical processing stages, ultimately yielding the baseband envelope, which is converted to digital form for estimation of the leading edge of the pulse waveform using a digital algorithm. The next section is on range estimation. It describes the digital algorithm designed to estimate the arrival time of the leading edge of the optical pulse signal. This enables the pulse's time of flight to be estimated, thus determining the distance between the LiDAR and the target. Performance of this algorithm is assessed with four different levels of noise. A calculation of the error in the leading-edge detection in terms of distance is also included to provide more insight into the algorithm's accuracy.
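To make the digital stage concrete, here is a hedged sketch of a simple leading-edge estimator of the kind described: it thresholds the digitized envelope at a multiple of an estimated noise floor, takes the first crossing as the arrival time, and converts the round-trip delay to range. The sampling rate, noise-floor estimate, and threshold factor are assumptions, not the thesis' algorithm parameters.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def leading_edge_range(envelope, fs=100e6, k=5.0):
    """envelope: sampled baseband pulse envelope; returns the estimated range in meters."""
    noise_floor = np.median(np.abs(envelope))          # crude noise-floor estimate
    threshold = k * noise_floor
    above = np.flatnonzero(envelope > threshold)
    if above.size == 0:
        return float("nan")                            # no pulse detected
    time_of_flight = above[0] / fs                     # leading-edge arrival time
    return SPEED_OF_LIGHT * time_of_flight / 2.0       # out-and-back path
```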

Contributors: Ridgway, Megan (Author) / Cochran, Douglas (Thesis director) / Aberle, James (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-05