Matching Items (52)
Description

Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. It is the opposite of lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used for most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more a waveform is compressed, the more it degrades, and once a file has been lossy-compressed, the process cannot be reversed. This project will observe the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy method that eliminates singular values from a signal's matrix.
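As a rough illustration of the technique described above, the Python sketch below frames an audio signal into a matrix, zeroes all but the largest singular values, and reconstructs the signal. The frame length and retained rank are illustrative choices, not parameters taken from the thesis.

```python
import numpy as np

def svd_compress(signal, frame_len=512, keep=32):
    """Lossy-compress audio by discarding small singular values.

    The 1-D signal is reshaped into a (frames x frame_len) matrix,
    decomposed with SVD, and rebuilt from only the `keep` largest
    singular values. Samples past the last full frame are dropped.
    """
    n = len(signal) // frame_len * frame_len
    X = signal[:n].reshape(-1, frame_len)            # frame matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[keep:] = 0.0                                   # eliminate small singular values
    X_hat = (U * s) @ Vt                             # low-rank reconstruction
    return X_hat.reshape(-1)

# Toy signal: a 440 Hz tone at 8 kHz plus a little noise
rng = np.random.default_rng(0)
audio = np.sin(2 * np.pi * 440 / 8000 * np.arange(8000)) \
        + 0.01 * rng.standard_normal(8000)
approx = svd_compress(audio, frame_len=400, keep=8)
err = np.linalg.norm(audio - approx) / np.linalg.norm(audio)
```

The relative error `err` measures the degradation: shrinking `keep` increases compression and increases the error, mirroring the trade-off discussed above.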

Contributors: Hirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. 
Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
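The idea of estimating a linear system's input constrained to a signal subspace can be sketched as follows. The exponential basis, impulse response, and dimensions here are illustrative assumptions for a toy problem, not the models used in the thesis.

```python
import numpy as np

def estimate_excitation(y, h, B):
    """Estimate a linear system's input, constrained to a signal subspace.

    y : observed output; h : system impulse response;
    B : (n x k) basis whose columns span the assumed excitation subspace.
    Solves min_c || y - H (B c) ||_2, where H is the (full) convolution
    matrix of h, then returns the subspace-constrained estimate B c.
    """
    n = B.shape[0]
    H = np.zeros((len(h) + n - 1, n))      # convolution (Toeplitz) matrix
    for j in range(n):
        H[j:j + len(h), j] = h
    c, *_ = np.linalg.lstsq(H @ B, y, rcond=None)
    return B @ c

# Toy check: a decaying-pulse subspace driving a damped system
n = 64
t = np.arange(n)
B = np.stack([np.exp(-t / tau) for tau in (2.0, 6.0, 15.0)], axis=1)
x_true = B @ np.array([1.0, -0.5, 0.25])   # excitation in the subspace
h = 0.9 ** np.arange(32)                   # system impulse response
y = np.convolve(h, x_true)                 # noiseless observed output
x_hat = estimate_excitation(y, h, B)
```

Because the true excitation lies in the span of `B`, the constrained least-squares estimate recovers it exactly in this noiseless toy case; the benefit of the constraint appears when noise would otherwise corrupt an unconstrained deconvolution.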
Contributors: Fink, Alex M (Author) / Spanias, Andreas S (Thesis advisor) / Cook, Perry R. (Committee member) / Turaga, Pavan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The processing power and storage capacity of portable devices have improved considerably over the past decade. This has motivated the implementation of sophisticated audio and other signal processing algorithms on such mobile devices. Of particular interest in this thesis is audio/speech processing based on perceptual criteria. Specifically, estimation of parameters from human auditory models, such as auditory patterns and loudness, involves computationally intensive operations which can strain device resources. Hence, strategies for implementing computationally efficient human auditory models for loudness estimation have been studied in this thesis. Existing algorithms for reducing computations in auditory pattern and loudness estimation have been examined, and improved algorithms have been proposed to overcome limitations of these methods. In addition, real-time applications such as perceptual loudness estimation and loudness equalization using auditory models have also been implemented. A software implementation of loudness estimation on iOS devices is also reported in this thesis. In addition to the loudness estimation algorithms and software, this thesis project also created new illustrations of speech and audio processing concepts for research and education. As a result, a new suite of speech/audio DSP functions was developed and integrated as part of the award-winning educational iOS app 'iJDSP'. These functions are described in detail in this thesis. Several enhancements in the architecture of the application have also been introduced to provide the supporting framework for speech/audio processing. Frame-by-frame processing and visualization functionalities have been developed to facilitate speech/audio processing. In addition, facilities for easy sound recording, processing, and audio rendering have also been developed to provide students, practitioners, and researchers with an enriched DSP simulation tool.
Simulations and assessments have also been developed for use in classes and in the training of practitioners and students.
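As a minimal illustration of the frame-by-frame processing mentioned above, the sketch below computes a per-frame RMS level in dB. This is a crude stand-in, not the auditory-model loudness the thesis implements (which involves excitation patterns, not plain RMS); frame and hop sizes are illustrative.

```python
import numpy as np

def frame_levels_db(x, frame_len=1024, hop=512, ref=1.0):
    """Frame-by-frame RMS level in dB relative to `ref`.

    A simplified proxy for perceptual loudness estimation: the signal is
    split into overlapping frames and each frame's RMS is converted to dB.
    """
    levels = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        levels.append(20 * np.log10(max(rms / ref, 1e-12)))  # floor avoids log(0)
    return np.array(levels)

# One second of a half-amplitude 440 Hz tone at 44.1 kHz
tone = 0.5 * np.sin(2 * np.pi * 440 / 44100 * np.arange(44100))
levels = frame_levels_db(tone)
```

A half-amplitude sine has RMS 0.5/sqrt(2), so each frame sits near -9 dB; a real loudness estimator would weight the spectrum by an auditory model before this pooling step.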
Contributors: Kalyanasundaram, Girish (Author) / Spanias, Andreas S (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Super-Resolution (SR) techniques are widely developed to increase image resolution by fusing several Low-Resolution (LR) images of the same scene to overcome sensor hardware limitations and reduce media impairments in a cost-effective manner. When choosing a solution for the SR problem, there is always a trade-off between computational efficiency and High-Resolution (HR) image quality. Existing SR approaches suffer from extremely high computational requirements due to the high number of unknowns to be estimated in the solution of the SR inverse problem. This thesis proposes efficient iterative SR techniques based on Visual Attention (VA) and perceptual modeling of the human visual system. In the first part of this thesis, an efficient ATtentive-SELective Perceptual-based (AT-SELP) SR framework is presented, where only a subset of perceptually significant active pixels is selected for processing by the SR algorithm based on a local contrast sensitivity threshold model and a proposed low complexity saliency detector. The proposed saliency detector utilizes a probability of detection rule inspired by concepts of luminance masking and visual attention. The second part of this thesis further enhances on the efficiency of selective SR approaches by presenting an ATtentive (AT) SR framework that is completely driven by VA region detectors. Additionally, different VA techniques that combine several low-level features, such as center-surround differences in intensity and orientation, patch luminance and contrast, bandpass outputs of patch luminance and contrast, and difference of Gaussians of luminance intensity are integrated and analyzed to illustrate the effectiveness of the proposed selective SR frameworks. The proposed AT-SELP SR and AT-SR frameworks proved to be flexible by integrating a Maximum A Posteriori (MAP)-based SR algorithm as well as a fast two-stage Fusion-Restoration (FR) SR estimator. 
By adopting the proposed selective SR frameworks, simulation results show a significant average reduction in computational complexity with comparable visual quality, as measured by quantitative metrics such as PSNR, SNR, and MAE gains and by subjective assessment. The third part of this thesis proposes a Perceptually Weighted (PW) SR technique that incorporates unequal weighting parameters in the cost function of iterative SR problems. The proposed approach is inspired by the Human Visual System's (HVS) unequal processing of different local features in an image. Simulation results show enhanced reconstruction quality and faster convergence rates when the technique is applied to the MAP-based and FR-based SR schemes.
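A toy 1-D version of a weighted iterative SR update might look like the following. The downsampling operator, uniform weights, step size, and iteration count are illustrative assumptions for a miniature problem, not the thesis' actual MAP or FR estimators.

```python
import numpy as np

def weighted_sr(y, D, w, n_iter=300, mu=0.5):
    """Gradient-descent sketch of a weighted SR cost ||W^(1/2)(y - D x)||^2.

    D maps the high-res estimate x to low-res observations y; entries of w
    give more weight (faster fitting) to perceptually important pixels.
    """
    x = D.T @ y                                  # crude upsampled initial guess
    for _ in range(n_iter):
        x = x + mu * (D.T @ (w * (y - D @ x)))   # weighted gradient step
    return x

# Miniature 1-D problem: 2:1 box downsampling of a 16-sample signal
m = 8
D = np.zeros((m, 2 * m))
for i in range(m):
    D[i, 2 * i:2 * i + 2] = 0.5                  # each low-res pixel averages two
y = np.linspace(0.0, 1.0, m)                     # low-res observation
w = np.ones(m)                                   # uniform weights for this demo
x_hat = weighted_sr(y, D, w)
```

Raising individual entries of `w` (e.g. at salient regions flagged by a visual-attention detector) makes the iteration fit those observations more aggressively, which is the intuition behind the unequal weighting described above.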
Contributors: Sadaka, Nabil (Author) / Karam, Lina J (Thesis advisor) / Spanias, Andreas S (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Abousleman, Glen P (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The world of a hearing-impaired person is much different from that of somebody capable of discerning different frequencies and magnitudes of sound waves with their ears. This is especially true when hearing-impaired people play video games. In most video games, surround sound is fed through some sort of digital output to headphones or speakers. From this information, the gamer can discern where a particular stimulus is coming from and whether or not it is a threat to their wellbeing within the virtual world. People with reliable hearing have a distinct advantage over hearing-impaired people in that they can gather information not just from what is in front of them, but from every angle relative to the way they are facing. The purpose of this project was to find a way to level the playing field, so that a person hard of hearing could also receive the sensory feedback that any other person would get while playing video games. To do this, visual surround sound was created: a system that takes a surround sound input and illuminates LEDs around the periphery of a pair of glasses based on the direction, frequency, and amplitude of the audio wave. This provides the user with crucial information on the whereabouts of different elements within the game. In this paper, the research and development of Visual Surround Sound is discussed, along with its viability with regard to a deaf person's ability to learn the technology and decipher the visual cues.
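One plausible core of such a system is a mapping from per-channel audio level to LED brightness at the channel's angle around the glasses. The channel layout, LED count, and scaling below are illustrative assumptions, not the team's actual design.

```python
import numpy as np

# Hypothetical surround-channel layout: channel name -> angle in degrees
# (0 = straight ahead, positive = clockwise).
CHANNEL_ANGLES = {"front_left": -30, "front_right": 30,
                  "rear_left": -110, "rear_right": 110}

def led_brightness(frames, n_leds=16, full_scale=1.0):
    """Map one analysis window of surround audio to LED brightness levels.

    frames: dict of channel name -> samples for that channel's window.
    Each channel lights the LED nearest its angle, with brightness
    proportional to the channel's RMS level (clipped to 1.0).
    """
    leds = np.zeros(n_leds)
    for name, samples in frames.items():
        angle = CHANNEL_ANGLES[name] % 360
        idx = int(round(angle / 360 * n_leds)) % n_leds
        rms = np.sqrt(np.mean(np.asarray(samples) ** 2))
        leds[idx] = min(rms / full_scale, 1.0)
    return leds

# Demo: a loud sound only in the rear-right channel
frames = {name: np.zeros(256) for name in CHANNEL_ANGLES}
frames["rear_right"] = np.sin(2 * np.pi * np.arange(256) / 32)
out = led_brightness(frames)
```

In this window only the LED nearest 110 degrees lights up, telling the wearer the stimulus is behind and to the right; frequency could additionally be encoded as LED color.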
Contributors: Kadi, Danyal (Co-author) / Burrell, Nathaneal (Co-author) / Butler, Kristi (Co-author) / Wright, Gavin (Co-author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description
The purpose of the solar-powered quadcopter is to join together the growing technologies of photovoltaics and quadcopters, creating a single unified device in which the technologies harmonize to produce a new product with abilities beyond those of a traditional battery-powered drone. Specifically, the goal is to take the battery-only flight time of a quadcopter loaded with a solar array and increase that flight time by 33% with additional power provided by solar cells. The major concepts explored throughout this project are quadcopter functionality and capability and solar cell power production. In order to combine these technologies, the solar power and quadcopter components were developed and analyzed individually before the solar array was connected to the quadcopter circuit and the design was tested as a whole. Several solar copter models were initially developed, resulting in multiple unique quadcopter and solar cell array designs which underwent preliminary testing before the team settled on a finalized design, which proved to be the most effective and underwent final timed flight tests. Results of these tests show that the technologies complement each other as anticipated and highlight promising prospects for future development in this area, in particular a drone running on solar power alone. Applications for such a product are very promising in many fields, including power, defense, consumer goods and services, entertainment, marketing, and medicine. As UAVs become more popular among hobbyists, such developments would also be very appealing for leisure flying and personal photography.
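The 33% target can be sanity-checked with idealized hover-time arithmetic: if the array supplies a quarter of the hover power, net battery draw drops to three quarters and flight time rises by exactly one third. The battery and power figures below are illustrative, not the team's measurements.

```python
def flight_time_min(batt_wh, hover_w, solar_w=0.0):
    """Idealized hover time in minutes: battery energy over net power draw.

    Ignores panel weight, conversion losses, and battery discharge limits.
    """
    return batt_wh / (hover_w - solar_w) * 60.0

base = flight_time_min(30.0, 120.0)            # battery only
boosted = flight_time_min(30.0, 120.0, 30.0)   # solar covers 25% of hover power
```

Here `boosted / base` is exactly 4/3, i.e. a 33% longer flight; in practice the panel's added weight raises `hover_w`, which is why the integrated design had to be tested as a whole.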
Contributors: Martin, Heather Catrina (Author) / Bowden, Stuart (Thesis director) / Aberle, James (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description
Every engineer is responsible for completing a capstone project as a culmination of accredited university learning, to demonstrate technical knowledge and enhance interpersonal skills like teamwork, communication, time management, and problem solving. This project, with three or four engineers working together in a group, emphasizes not only the importance of technical skills acquired through laboratory procedures and coursework, but also the significance of soft skills as one transitions from a university to a professional workplace; it also enhances the understanding of an engineer's obligation to ethically improve society by harnessing technical knowledge to bring about change. The CC2541 Smart SensorTag is a device manufactured by Texas Instruments that focuses on the use of wireless sensors to create low-energy applications, or apps; it is equipped with Bluetooth Smart, which enables it to communicate wirelessly with devices like smart phones and computers, assisting greatly in app development. The device contains six built-in sensors, which can be used to track and log personal data in real time: a gyroscope, an accelerometer, a humidity sensor, a thermometer, a barometer, and a magnetometer. By combining the data obtained through the sensors with the ability to communicate wirelessly, the SensorTag can be used to develop apps in multiple fields, including fitness, recreation, health, safety, and more. Team SensorTag chose to focus on health and safety issues for its capstone project, creating applications intended for use by senior citizens who live alone or in assisted care homes. Using the SensorTag's ability to track multiple local variables, the team collected data that verified the accuracy and quality of the sensors through repeated experimental trials.
Once the sensors were tested, the team developed applications, accessible via smart phones or computers, that trigger an alarm and send an alert via vibration, e-mail, or Tweet if the SensorTag detects a fall. The fall detection service uses the accelerometer and gyroscope, with the hope that such a system will prevent severe injuries among the elderly, allow them to function more independently, and improve their quality of life, fulfilling the engineer's obligation to better society through their work.
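The core of an accelerometer-plus-gyroscope fall check can be sketched as a simple threshold test on the magnitudes of both signals. The thresholds and the two-condition rule below are illustrative assumptions, not the team's actual detection service, which would also need to track the post-impact stillness period.

```python
import math

def detect_fall(accel_g, gyro_dps, impact_g=2.5, spin_dps=200.0):
    """Threshold-based fall check on one (accelerometer, gyroscope) sample.

    accel_g:  (x, y, z) acceleration in units of g.
    gyro_dps: (x, y, z) angular rates in degrees per second.
    Flags a fall when both the impact magnitude and the rotation rate
    exceed their thresholds, reducing false alarms from either alone.
    """
    a = math.sqrt(sum(v * v for v in accel_g))   # total acceleration magnitude
    w = math.sqrt(sum(v * v for v in gyro_dps))  # total rotation-rate magnitude
    return a > impact_g and w > spin_dps
```

Quiet sitting (roughly 1 g, little rotation) stays below both thresholds, while a hard impact combined with rapid body rotation trips the alarm path (vibration, e-mail, or Tweet).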
Contributors: Martin, Katherine Julia (Author) / Thornton, Trevor (Thesis director) / Goryll, Michael (Committee member) / Electrical Engineering Program (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description
The apparent phenomenon of the human eye retaining images for fractions of a second after the light source is gone is known as Persistence of Vision. While its causes are not fully understood, it can be taken advantage of to create illusions that trick the mind into perceiving something which, in actuality, is very different from what the eye receives. It has motivated many creative engineering technologies in the past and is at the core of how we perceive motion in movies and animations. This project applies the persistence of vision concept to a less-explored medium: the wheel of a moving bicycle. The motion of the wheel, along with intelligent control of discrete LEDs, creates vibrant illusions of solid lines and shapes. These shapes make up the image to be displayed on the bike wheel. The rotation of the bike wheel can be compensated for in order to produce a standing image (or images) of the user's choosing. This thesis details how the mechanism for controlling the individual LEDs was created in order to produce a device capable of delivering colorful, standing images.
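The rotation compensation described above amounts to knowing the wheel's current angle and selecting the image column to flash at that instant. The sketch below assumes a once-per-revolution reference trigger (e.g. a Hall-effect sensor) and 180 image columns; both are illustrative assumptions, not the thesis' actual design.

```python
def angle_now(t, period_s, t_ref=0.0):
    """Wheel angle in degrees, from the time elapsed since a reference
    spoke last passed a stationary trigger, given the measured rotation
    period. Assumes constant speed within one revolution."""
    return ((t - t_ref) / period_s * 360.0) % 360.0

def column_for_angle(angle_deg, n_columns=180):
    """Map the wheel's current angle to the image column the LED strip
    should display, so the image stands still as the wheel turns."""
    return int(angle_deg % 360 / 360 * n_columns)
```

The controller loops: read the clock, compute `angle_now`, look up `column_for_angle`, and drive the LEDs with that column's pixels; because each column is flashed at the same spatial angle every revolution, persistence of vision fuses the flashes into a standing image.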
Contributors: Saltwick, Ian Mark (Author) / Goryll, Michael (Thesis director) / Kozicki, Michael (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This project centered on designing a processor model (using the C programming language) based on the ColdFire computer architecture that runs on third-party software known as Open Virtual Platforms (OVP). The end goal is a fully functional processor model that can run ColdFire instructions and use peripheral devices in the same way as the hardware in the embedded systems lab at ASU. This would cut down the substantial amount of time students spend commuting to the lab, and having the processor directly at their disposal would encourage them to spend more time outside of class learning the hardware and familiarizing themselves with development on an embedded microcontroller. The model must be accurate, fast, and reliable. These aspects were achieved through rigorous unit testing and use of the OVP platform, which provides instruction-accurate simulation at hundreds of MIPS (millions of instructions per second) for the specified model. The end product was able to accurately simulate a subset of the ColdFire instructions at very high rates.
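The overall shape of an instruction-accurate model is a fetch-decode-execute loop over architectural state. The toy below (in Python for brevity; the project itself is written in C against the OVP API) uses invented opcodes, not real ColdFire encodings, purely to illustrate the structure.

```python
def run(program, steps=1000):
    """Toy fetch-decode-execute loop for an invented three-operand ISA.

    State: eight 32-bit data registers and a program counter. Each
    program entry is (opcode, operand_a, operand_b). Opcodes here are
    illustrative stand-ins, not ColdFire instructions.
    """
    regs = [0] * 8
    pc = 0
    for _ in range(steps):
        if pc >= len(program):
            break
        op, a, b = program[pc]                    # fetch + decode
        if op == "movei":                         # regs[a] <- immediate b
            regs[a] = b & 0xFFFFFFFF
        elif op == "add":                         # regs[a] <- regs[a] + regs[b]
            regs[a] = (regs[a] + regs[b]) & 0xFFFFFFFF
        elif op == "bra":                         # unconditional branch to b
            pc = b
            continue
        pc += 1                                   # fall-through advance
    return regs

prog = [("movei", 0, 5), ("movei", 1, 7), ("add", 0, 1)]
regs = run(prog)
```

A real model adds exception handling, condition codes, and memory-mapped peripheral callbacks at the execute stage; unit tests like the one sketched here (known program in, expected register state out) are how instruction accuracy is verified.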
Contributors: Dunning, David Connor (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-12
Description
Lighting Audio is a team of senior electrical engineering students at Arizona State University, mentored by Director Emeritus Professor Ronald Roedel and second committee member George Karady, attempting to prove the feasibility of a consumer-grade plasma arc speaker. The project explores the use of high-voltage arcs to produce audible sound, with the goal of demonstrating that such a speaker could exist in the marketplace. The inherent challenge was producing audio that could compete with current loudspeakers while ensuring user safety from the hazards of high-voltage and high-current shock, electromagnetic damage, and ozone from the plasma arc. The project has thus far covered the process from design conception to realization of a prototype device. The operation of the plasma arc speaker is based on the high-voltage plasma arc created between two electrodes. The arc rapidly heats and cools the surrounding air, creating changes in air pressure; these pressure waves are heard as sound. The circuit incorporates a flyback transformer responsible for creating the high voltage necessary for arcing.
Contributors: Nandan, Rahul S (Author) / Roedel, Ronald (Thesis director) / Huffman, James (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2014-05