Matching Items (101)

Electric Field Sensing

Description

This project examines the science of electric field sensing and completes experiments, gathering data to support its utility for various applications. The basic system consists of a transmitter, receiver, and lock-in amplifier. The primary goal of the study was to determine whether such a system could detect a human disturbance, owing to the capacitance of the human body, and this hypothesis was supported. Markedly different results were obtained when a person disturbed the electric field transmitted by the system than when other types of objects, such as chairs and electronic devices, were placed in the field. In fact, there was a distinct difference between persons of varied sizes as well. This thesis goes through the basic design of the system and the process of experimental design for determining the capabilities of such an electric field sensing system.
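
As a rough illustration of the sensing principle described above (not code from the thesis), the sketch below shows how a software lock-in amplifier recovers the amplitude of the transmitted tone from a noisy received signal; a capacitive disturbance such as a human body would appear as a change in this amplitude. All signal parameters here are hypothetical.

```python
import numpy as np

def lock_in_amplitude(signal, fs, f_ref):
    """Software lock-in: mix with quadrature references, low-pass by averaging."""
    t = np.arange(len(signal)) / fs
    i = signal * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
    q = signal * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing
    # Averaging acts as a crude low-pass filter; the factor 2 restores amplitude.
    return 2 * np.hypot(i.mean(), q.mean())

# Hypothetical example: a 10 kHz transmitted tone buried in noise.
fs, f_tx = 1_000_000, 10_000          # sample rate, transmitter frequency (Hz)
t = np.arange(int(0.1 * fs)) / fs     # 100 ms of samples
undisturbed = 1.00 * np.sin(2 * np.pi * f_tx * t)
disturbed   = 0.85 * np.sin(2 * np.pi * f_tx * t)   # person reduces coupling
noise = np.random.normal(0, 0.5, len(t))

print(lock_in_amplitude(undisturbed + noise, fs, f_tx))  # ~1.0
print(lock_in_amplitude(disturbed + noise, fs, f_tx))    # ~0.85
```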

Date Created
  • 2013-05

Analysis of Learning Retention throughout Aging

Description

In this paper, it is determined that learning retention decreases with age and that the rate of decrease is linear. In this study, four male Long-Evans rats were used. The rats were each trained in four different tasks throughout their lifetime, using a food reward as motivation to work. Rats were said to have learned a task at the age when they achieved their highest accuracy on that task. A regression of learning retention was created for the set of studied rats: Learning Retention = 112.9 − 0.085919 × (Age at End of Task), indicating that learning retention decreases at a linear rate, although rats have different rates of decrease of learning retention. The presence of behavioral training was determined not to have a positive impact on this rate. In behavioral studies, there were statistically significant differences in timid/outgoing behavior and large-ball ability between rats W12 and Z12. Rat W12 had overall better learning retention and was also more compliant, did not resist being picked up, and traveled at high speeds (in the large ball) more frequently than Z12. Further potential studies include implanting an electrode into the frontal cortex in order to compare neurofeedback with learning retention, and using human subjects to find the rate of decrease in learning retention. The implication of this study, if it also holds for human subjects, is that older persons may need enhanced training or additional refresher training in order to retain information that is learned at a later age.
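
As a hedged illustration (not the authors' analysis code), a regression of the same form can be fit with ordinary least squares; the age/retention pairs below are hypothetical stand-ins for the study's data.

```python
import numpy as np

# Hypothetical (age at end of task, learning retention) pairs; the real
# study's data are not reproduced here.
age = np.array([120.0, 250.0, 400.0, 560.0])        # e.g., days
retention = np.array([102.5, 91.0, 78.5, 65.0])     # task-accuracy measure

# Ordinary least squares for retention = b0 + b1 * age.
b1, b0 = np.polyfit(age, retention, deg=1)
print(f"Learning Retention = {b0:.1f} + ({b1:.6f}) x Age")

# The thesis reports: Learning Retention = 112.9 - 0.085919 x (Age at End of Task)
predicted = 112.9 - 0.085919 * age
print(predicted)
```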

Date Created
  • 2014-05

Developing a Flexible Electric and Magnetic Field Imaging Blanket

Description

Recently, electric and magnetic field sensing has come of interest to the military for a variety of applications, including imaging circuitry and detecting explosive devices. This thesis describes research at ASU's Flexible Electronics and Display Center (FEDC) towards the development of a flexible electric and magnetic field imaging blanket. D-dot sensors, which detect changes in electric flux, were chosen for electric field sensing, and a single D-dot sensor in combination with a lock-in amplifier was used to detect individuals passing through an oscillating electric field. This was then developed into a 1 x 16 array of D-dot sensors used to image the field generated by two parallel wires. After the fabrication of a two-dimensional array, it was discovered that commercial field effect transistors did not have a high enough off-resistance to isolate the sensor from the column line. Three alternative solutions were proposed. The first was a one-dimensional array combined with a mechanical stepper to move the array across the E-field pattern. The second was a 1 x 16 strip detector combined with the techniques of computed tomography to reconstruct the image of the field. Such techniques include filtered back projection and algebraic iterative reconstruction (AIR). Lastly, an array of D-dot sensors was fabricated on a flexible substrate, enabled by the high off-resistance of the thin film transistors produced by the FEDC. The research on magnetic field imaging began with a feasibility study of three different types of magnetic field sensors: planar spiral inductors, Hall effect sensors, and giant magnetoresistance (GMR) sensors. An experimental array of these sensors was designed and fabricated, and the sensors were used to image the fringe fields of a Helmholtz coil. Furthermore, combining the inductors with the other two types of sensors resulted in three-dimensional sensors. From these measurements, it was determined that planar spiral inductors and Hall effect sensors are best suited for future imaging arrays.
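
Of the reconstruction techniques named above, algebraic iterative reconstruction can be sketched generically with the Kaczmarz method: each measurement (a line sum through the field pattern) defines a linear constraint, and the image estimate is repeatedly projected onto each constraint. This is a minimal sketch with an invented toy system matrix, not the thesis's implementation.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50):
    """Algebraic iterative reconstruction: cyclically project the estimate
    onto the hyperplane of each measurement row a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy setup: a 4x4 "field image" measured by 8 line-sum projections.
rng = np.random.default_rng(0)
truth = rng.random(16)
A = rng.random((8, 16))   # assumed projection matrix (stand-in for real geometry)
b = A @ truth             # noiseless measurements

estimate = kaczmarz(A, b)
print(np.round(estimate - truth, 3))   # underdetermined, so only approximate
```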

Date Created
  • 2015-05

Time-Frequency Analysis of Peptide Microarray Data: Application to Brain Cancer Immunosignatures

Description

One of the gravest dangers facing cancer patients is an extended symptom-free lull between tumor initiation and the first diagnosis. Detection of tumors is critical for effective intervention. Using the body’s immune system to detect and amplify tumor-specific signals may enable detection of cancer using an inexpensive immunoassay. Immunosignatures are one such assay: they provide a map of antibody interactions with random-sequence peptides. They enable detection of disease-specific patterns using classic train/test methods. However, to date, very little effort has gone into extracting information from the sequence of peptides that interact with disease-specific antibodies. Because it is difficult to represent all possible antigen peptides in a microarray format, we chose to synthesize only 330,000 peptides on a single immunosignature microarray. The 330,000 random-sequence peptides on the microarray represent 83% of all tetramers and 27% of all pentamers, creating an unbiased but substantial gap in the coverage of total sequence space. We therefore chose to examine many relatively short motifs from these random-sequence peptides. Time-variant analysis of recurrent subsequences provided a means to dissect amino acid sequences from the peptides while simultaneously retaining the antibody–peptide binding intensities. We first used a simple experiment in which monoclonal antibodies with known linear epitopes were exposed to these random-sequence peptides, and their binding intensities were used to create our algorithm. We then demonstrated the performance of the proposed algorithm by examining immunosignatures from patients with glioblastoma multiforme (GBM), an aggressive form of brain cancer. Eight different frameshift targets were identified from the random-sequence peptides using this technique. If immune-reactive antigens can be identified using a relatively simple immune assay, it might enable a diagnostic test with sufficient sensitivity to detect tumors in a clinically useful way.
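
The tetramer and pentamer coverage figures quoted above can, in principle, be verified by counting the distinct k-mers that occur across the peptide library. The sketch below does so for a small random library; the library size and peptide length are assumptions for illustration, not the array's actual design.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard residues

def kmer_coverage(peptides, k):
    """Fraction of all possible k-mers that occur in at least one peptide."""
    seen = set()
    for p in peptides:
        for i in range(len(p) - k + 1):
            seen.add(p[i:i + k])
    return len(seen) / len(AMINO_ACIDS) ** k

# Hypothetical small library standing in for the 330,000-peptide array.
random.seed(0)
library = ["".join(random.choices(AMINO_ACIDS, k=12)) for _ in range(30_000)]

print(f"tetramer coverage: {kmer_coverage(library, 4):.1%}")
print(f"pentamer coverage: {kmer_coverage(library, 5):.2%}")
```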

Date Created
  • 2015-06-18

Design of Signal Processing Algorithms and Development of a Real-Time System for Mapping Audio to Haptics for Cochlear Implant Users

Description

In the field of electronic music, haptic feedback is a crucial feature of digital musical instruments (DMIs) because it gives the musician a more immersive experience. This feedback might come in the form of a wearable haptic device that vibrates in response to music. Such advancements in the electronic music field are applicable to the field of speech and hearing. More specifically, wearable haptic feedback devices can enhance the musical listening experience for people who use cochlear implant (CI) devices.
This Honors Thesis is a continuation of Prof. Lauren Hayes’s and Dr. Xin Luo’s research initiative, Haptic Electronic Audio Research into Musical Experience (HEAR-ME), which investigates how to enhance the musical listening experience for CI users using a wearable haptic system. The goals of this Honors Thesis are to adapt Prof. Hayes’s system code from the Max visual programming language to the C++ object-oriented programming language and to study the results of the developed C++ code. This adaptation allows the system to operate in real-time and independently of a computer.
Towards these goals, two signal processing algorithms were developed and programmed in C++. The first algorithm is a thresholding method, which outputs a pulse of a predefined width when the input signal falls below some threshold in amplitude. The second algorithm is a root-mean-square (RMS) method, which outputs a pulse-width modulation signal with a fixed period and with a duty cycle dependent on the RMS of the input signal. The thresholding method was found to work best with speech, and the RMS method was found to work best with music. Future work entails the design of adaptive signal processing algorithms to allow the system to work more effectively on speech in a noisy environment and to emphasize a variety of elements in music.
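
As a minimal sketch of the second (RMS) algorithm described above, per audio frame the RMS level sets the duty cycle of a fixed-period PWM signal that would drive the haptic actuator. The frame length and full-scale level are assumed values, not those of the thesis code.

```python
import numpy as np

def rms_to_pwm_duty(audio, frame_len=512, full_scale_rms=0.5):
    """Map per-frame RMS of the input signal to a PWM duty cycle in [0, 1]."""
    n_frames = len(audio) // frame_len
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return np.clip(rms / full_scale_rms, 0.0, 1.0)

# Hypothetical input: a tone with a rising envelope, standing in for music.
fs = 16_000
t = np.arange(fs) / fs
audio = np.linspace(0, 1, fs) * np.sin(2 * np.pi * 220 * t)

duty = rms_to_pwm_duty(audio)
print(duty[:4], duty[-4:])   # duty cycle grows with the envelope
```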

Date Created
  • 2019-12

Structural Health Monitoring: Acoustic Emissions

Description

Non-Destructive Testing (NDT) is integral to preserving the structural health of materials. Techniques that fall under the NDT category are able to evaluate the integrity and condition of a material without permanently altering any of its properties. Additionally, they can typically be used while the material is in active use instead of needing downtime for inspection.
The two general categories of structural health monitoring (SHM) systems are passive and active monitoring. Active SHM systems utilize an input of energy to monitor the health of a structure (such as sound waves in ultrasonics), while passive systems do not. As such, passive SHM tends to be more desirable. A system could be permanently fixed to a critical location, passively accept signals until it records a damage event, then localize and characterize the damage. This is the goal of acoustic emissions testing.
When certain types of damage occur, such as matrix cracking or delamination in composites, the corresponding release of energy creates sound waves, or acoustic emissions, that propagate through the material. Audio sensors fixed to the surface can pick up data from both the time and frequency domains of the wave. With proper data analysis, a time of arrival (TOA) can be calculated for each sensor allowing for localization of the damage event. The frequency data can be used to characterize the damage.
In traditional acoustic emissions testing, the TOA combined with wave velocity and information about signal attenuation in the material is used to localize events. However, in instances of complex geometries or anisotropic materials (such as carbon fibre composites), velocity and attenuation can vary wildly based on the direction of interest. In these cases, localization can instead be based on the differences in time of arrival for each sensor pair. This technique is called Delta T mapping, and it is the main focus of this study.
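
A schematic sketch of the Delta T mapping idea described above: training events at known grid points produce a table of arrival-time differences per sensor pair, and a new event is localized to the grid point whose stored differences best match the measured ones. The geometry and wave speed here are invented for illustration; in practice the table comes from training data, which is what frees the method from direction-dependent velocity assumptions.

```python
import numpy as np

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
speed = 5000.0   # assumed wave speed (m/s), used here only to simulate training

def pair_delta_t(point):
    """Arrival-time differences for each sensor pair for a source at `point`."""
    toa = np.linalg.norm(sensors - point, axis=1) / speed
    return np.array([toa[i] - toa[j] for i, j in pairs])

# "Training": tabulate delta-Ts over a grid of candidate source locations.
grid = [(x, y) for x in np.linspace(0, 1, 21) for y in np.linspace(0, 1, 21)]
table = np.array([pair_delta_t(np.array(g)) for g in grid])

# Localize a measured event by the nearest match in delta-T space.
measured = pair_delta_t(np.array([0.3, 0.7]))
best = np.argmin(np.sum((table - measured) ** 2, axis=1))
print(grid[best])   # ~ (0.3, 0.7)
```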

Date Created
  • 2019-05

Synthetic aperture radar image formation via sparse decomposition

Description

Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well known existing algorithms, modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.
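
As a generic, hedged illustration of sparse decomposition from incomplete measurements (not the two algorithms developed in the thesis), the following sketch uses iterative soft thresholding (ISTA) to recover a sparse reflectivity vector from a randomly subsampled Fourier-style aperture.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - step * A.conj().T @ (A @ x - y)  # gradient step on the data fit
        # Complex soft threshold: shrink magnitude, keep phase.
        x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - step * lam, 0)
    return x

# Toy scene: 3 strong point scatterers out of 128 range bins.
rng = np.random.default_rng(1)
n = 128
truth = np.zeros(n, dtype=complex)
truth[[20, 64, 100]] = [1.0, 0.8 + 0.3j, 0.5]

# Sparsely sampled aperture: keep only 40 of 128 Fourier samples.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
keep = rng.choice(n, size=40, replace=False)
A = F[keep]
y = A @ truth

x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # expect ~[20, 64, 100]
```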

Date Created
  • 2011

The detection of reliability prediction cues in manufacturing data from statistically controlled processes

Description

Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
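
A hedged sketch of the general approach described above: features summarizing upstream test data (here, simple trace statistics standing in for L-moments or WECO-rule flags) feed a support vector machine that predicts downstream pass/fail. The data and features are synthetic placeholders, not the study's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic upstream test data: 400 units, each with a 50-sample measurement
# trace. Units whose traces drift upward are more likely to fail downstream.
traces = rng.normal(0, 1, (400, 50))
drift = rng.normal(0, 0.5, 400)
traces += np.outer(drift, np.linspace(0, 1, 50))
failed = (drift + rng.normal(0, 0.3, 400)) > 0.5   # noisy downstream outcome

# Simple summary features standing in for L-moments / WECO-rule indicators.
features = np.column_stack([
    traces.mean(axis=1),
    traces.std(axis=1),
    traces[:, -10:].mean(axis=1) - traces[:, :10].mean(axis=1),  # drift proxy
])

X_tr, X_te, y_tr, y_te = train_test_split(features, failed, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
```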

Date Created
  • 2011

Incorporating auditory models in speech/audio applications

Description

Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency pruning and detector pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem involves obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
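
As a generic illustration of the frequency-pruning idea (not the dissertation's implementation), the sketch below skips a costly per-bin model stage for spectral components falling below an energy floor, trading a small output error for fewer evaluations. The stage, floor, and signal are all assumed for illustration.

```python
import numpy as np

def pruned_stage(spectrum, stage, floor_db=-60.0):
    """Apply a costly per-bin model stage only to bins above an energy floor.
    Pruned bins contribute zero, introducing a small, controlled error."""
    power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    keep = power_db > power_db.max() + floor_db
    out = np.zeros_like(spectrum, dtype=float)
    out[keep] = stage(spectrum[keep])      # evaluated only where it matters
    return out, keep.mean()                # result and fraction of bins kept

# Hypothetical "expensive" stage: a stand-in for an excitation-pattern step.
stage = lambda s: np.abs(s) ** 0.3

x = np.random.default_rng(0).normal(0, 1, 4096)
x[:2048] += 5 * np.sin(2 * np.pi * 0.1 * np.arange(2048))  # one strong tone
spec = np.fft.rfft(x * np.hanning(len(x)))

full = stage(spec)
pruned, kept = pruned_stage(spec, stage)
rel_err = np.linalg.norm(full - pruned) / np.linalg.norm(full)
print(f"kept {kept:.0%} of bins, relative error {rel_err:.1%}")
```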

Date Created
  • 2011

Fractional focusing and the chirp scaling algorithm with real synthetic aperture radar data

Description

For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance the processing of SAR image formation is the fractional Fourier transform (FRFT). This transform has been recently introduced to the signal processing community, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association to the Wigner distribution and ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band, providing imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared using a new implementation of the CSA based on the use of the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability, thereby increasing the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
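
To make the focusing vocabulary above concrete, here is a minimal, hedged sketch of matched-filter pulse compression of a linear FM chirp, the basic operation that chirp-scaling-style processing performs in the range dimension; it is not the CSA or FRFT implementation studied in the thesis, and all waveform parameters are assumed.

```python
import numpy as np

fs = 10e6                      # sample rate (Hz), assumed
T, B = 20e-6, 2e6              # chirp duration and bandwidth, assumed
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # linear FM reference chirp

# Received range line: two point targets at different delays (in samples).
n = 4096
rx = np.zeros(n, dtype=complex)
for delay, amp in [(500, 1.0), (1400, 0.6)]:
    rx[delay:delay + len(chirp)] += amp * chirp

# Matched filtering via FFT: correlate the line with the reference chirp.
compressed = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))

# Local maxima above a threshold mark the focused targets.
mag = np.abs(compressed)
peaks = [i for i in range(1, n - 1)
         if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]
         and mag[i] > 0.3 * mag.max()]
print(peaks)   # ~[500, 1400]: both targets focus to sharp peaks
```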

Date Created
  • 2011