Matching Items (6)
Description

Peptide microarrays have been used in molecular biology to profile immune responses and develop diagnostic tools. When the microarrays are printed with random peptide sequences, they can be used to identify antigen-antibody binding patterns, or immunosignatures. In this thesis, an advanced signal processing method is proposed to estimate epitope antigen subsequences as well as identify mimotope antigen subsequences that mimic the structure of epitopes from random-sequence peptide microarrays. The method first maps peptide sequences to linear expansions of highly localized one-dimensional (1-D) time-varying signals and uses a time-frequency processing technique to detect recurring patterns in subsequences. This technique is matched to the aforementioned mapping scheme, and it allows for an inherent analysis of how substitutions in the subsequences can affect antibody binding strength. The performance of the proposed method is demonstrated by estimating epitopes and identifying potential mimotopes for eight monoclonal antibody samples.
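
The mapping itself is not reproduced in this abstract, but the idea can be sketched: each residue becomes a highly localized Gaussian atom whose time position encodes sequence position and whose frequency encodes a residue property. The property table (Kyte-Doolittle-style hydrophobicity values) and all atom parameters below are illustrative assumptions, not the thesis's actual scheme.

```python
import numpy as np

# Hypothetical per-residue property (hydrophobicity-like values); the actual
# attribute and scaling used in the thesis may differ.
PROPERTY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "G": -0.4,
            "L": 3.8, "K": -3.9, "S": -0.8, "V": 4.2, "Y": -1.3}

def peptide_to_signal(peptide, n=512, sigma=0.01):
    """Map a peptide to a 1-D signal as a linear expansion of localized
    Gaussian atoms: time encodes sequence position, frequency encodes a
    residue property."""
    t = np.linspace(0.0, 1.0, n)
    x = np.zeros(n)
    for i, aa in enumerate(peptide):
        t0 = (i + 0.5) / len(peptide)             # atom center from position
        f0 = 50.0 + 10.0 * PROPERTY.get(aa, 0.0)  # atom frequency from property
        x += np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f0 * t)
    return x

sig = peptide_to_signal("ARNDGLK")
```

A time-frequency transform of `sig` (e.g., a spectrogram) would then expose recurring subsequence patterns as recurring clusters of atoms.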

The proposed mapping is generalized to express information on a protein's sequence location, structure, and function as a highly localized three-dimensional (3-D) Gaussian waveform. In particular, as analysis of protein homology has shown that incorporating different kinds of information into an alignment process can yield more robust alignment results, a pairwise protein structure alignment method is proposed based on a joint similarity measure of multiple mapped protein attributes. The 3-D mapping allocates protein properties to distinct regions of the time-frequency plane in order to simplify the alignment process by including all relevant information in a single, highly customizable waveform. Simulations demonstrate the improved performance of the joint alignment approach in inferring relationships between proteins, and they provide information on mutations that cause changes to both the sequence and structure of a protein.
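
A joint-similarity alignment can be sketched as a standard dynamic-programming alignment whose substitution score is a weighted sum over attributes. The attributes here (residue identity plus a torsion-angle-like structural feature), the weights, and the gap penalty are assumptions for illustration; the thesis's measure operates on the mapped 3-D waveforms rather than on raw attributes.

```python
import numpy as np

def joint_score(a, b, w_seq=0.5, w_struct=0.5):
    """Per-position similarity combining a sequence term (residue identity)
    and a structural term (difference of a torsion-angle-like feature).
    Attributes and weights are illustrative assumptions."""
    seq_sim = 1.0 if a["aa"] == b["aa"] else -1.0
    struct_sim = 1.0 - abs(a["phi"] - b["phi"]) / 180.0
    return w_seq * seq_sim + w_struct * struct_sim

def align_score(p, q, gap=-0.5):
    """Needleman-Wunsch global alignment score under the joint similarity."""
    m, n = len(p), len(q)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = gap * np.arange(m + 1)
    D[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = max(D[i - 1, j - 1] + joint_score(p[i - 1], q[j - 1]),
                          D[i - 1, j] + gap,
                          D[i, j - 1] + gap)
    return D[m, n]

p = [{"aa": a, "phi": f} for a, f in [("A", -60.0), ("L", -45.0), ("K", 120.0)]]
q = [{"aa": a, "phi": f} for a, f in [("A", -58.0), ("V", -50.0), ("K", 118.0)]]
s_pp = align_score(p, p)    # self-alignment gives the maximal score
s_pq = align_score(p, q)
```

Weighting several attributes in one score is what makes the alignment sensitive to mutations that change sequence, structure, or both.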

In addition to the biology-based signal processing methods, a statistical method is considered that uses a physics-based model to improve processing performance. In particular, an externally developed physics-based model for sea clutter is examined when detecting a low radar cross-section target in heavy sea clutter. This novel model includes a process that generates random dynamic sea clutter based on the governing physics of water gravity and capillary waves, and a finite-difference time-domain (FDTD) electromagnetics simulation that propagates the radar signal according to Maxwell's equations. A subspace clutter suppression detector is applied to remove dominant clutter eigenmodes, and its improved performance over matched filtering is demonstrated using simulations.
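
The subspace detector can be sketched as: estimate the clutter covariance, project both the received snapshot and the target signature onto the orthogonal complement of the dominant clutter eigenmodes, and correlate. The synthetic low-rank clutter below stands in for the physics-based FDTD clutter, and the clutter rank and target signature are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, rank = 64, 200, 3                 # snapshot length, snapshots, clutter rank

# Synthetic low-rank stand-in for sea clutter (the thesis uses a physics-based
# FDTD clutter model instead): a few strong correlated modes plus noise.
modes = rng.standard_normal((N, rank))
clutter = 5.0 * (modes @ rng.standard_normal((rank, K))) + rng.standard_normal((N, K))

# Estimate the clutter covariance and remove its dominant eigenmodes.
R = clutter @ clutter.T / K
_, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
U = eigvecs[:, -rank:]                  # dominant clutter subspace
P = np.eye(N) - U @ U.T                 # projector onto its orthogonal complement

s = np.sin(2 * np.pi * 0.2 * np.arange(N))   # assumed target signature
x = clutter[:, 0] + 0.5 * s                  # one snapshot with a weak target

stat_mf = s @ x                  # plain matched filter statistic
stat_sub = (P @ s) @ (P @ x)     # clutter-suppressed detection statistic
```

The projection discards most of the clutter energy while leaving the part of the target signature outside the clutter subspace, which is why the suppressed statistic outperforms plain matched filtering when the clutter is strong and low rank.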
Contributors: O'Donnell, Brian (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Johnston, Stephen A. (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The challenge of radiation therapy is to maximize the dose to the tumor while simultaneously minimizing the dose elsewhere. Proton therapy is well suited to this challenge because of the way protons slow down in matter. As a proton slows down, its rate of energy loss per unit path length continuously increases, leading to a sharp dose peak (the Bragg peak) near the end of its range. Unlike conventional radiation therapy, protons stop inside the patient, sparing tissue beyond the tumor. Proton therapy should therefore be superior to existing modalities; however, because protons stop inside the patient, there is uncertainty in the range. This “range uncertainty” causes doctors to take a conservative approach in treatment planning, counteracting the advantages offered by proton therapy and preventing it from reaching its full potential.
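
The mechanism behind the sharp end-of-range dose can be illustrated with a toy stopping-power model dE/dx = k/E, a crude stand-in for the Bethe formula; all constants below are made up for illustration.

```python
# Toy Bragg peak: step a proton through matter with stopping power
# dE/dx = k/E (crude stand-in for the Bethe formula; constants are made up).
# The energy deposited per step rises sharply near the end of the range.
E = 100.0      # initial proton energy, MeV (illustrative)
k = 5.0        # stopping constant, MeV^2/mm (illustrative)
dx = 0.1       # step size, mm
x = 0.0
depth, dose = [], []
while E > 1.0:
    dE = min(k / E * dx, E)   # energy lost this step grows as E falls
    depth.append(x)
    dose.append(dE)
    E -= dE
    x += dx
# Most of the energy is deposited just before the proton stops.
```

In this toy model the analytic range is E0^2 / (2k), so a small error in the assumed stopping constant shifts where the peak lands, which is the essence of range uncertainty.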

A new method of delivering protons, pencil-beam scanning (PBS), has become the standard of treatment over the past few years. PBS uses magnets to raster scan a thin proton beam across the tumor at discrete locations, using many discrete pulses of typically 10 ms duration each. The depth is controlled by changing the beam energy. The discretization in time of the proton delivery allows for new methods of dose verification; however, few devices have been developed that can meet the bandwidth demands of PBS.
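
Schematically, a PBS plan is a list of spots, each with a lateral position, an energy (which selects depth), and a pulse duration, and the discrete pulses make spot-by-spot verification possible. The field names, values, and tolerance below are illustrative, not clinical specifics.

```python
from dataclasses import dataclass

@dataclass
class Spot:
    x_mm: float        # lateral position set by the scanning magnets
    y_mm: float
    energy_mev: float  # beam energy selects the depth via the proton range
    duration_ms: float # pulse length; ~10 ms per pulse is typical

# Illustrative single-energy layer (all numbers are made up).
layer = [Spot(float(x), 0.0, 120.0, 10.0) for x in range(-10, 11, 5)]

def verify_layer(layer, measured, tolerance=0.10):
    """Spot-by-spot verification: compare a fast per-pulse fluence measurement
    against the planned value (here, pulse duration stands in for planned
    fluence) and flag any spot outside the tolerance."""
    planned = [s.duration_ms for s in layer]
    return [abs(m - p) / p <= tolerance for m, p in zip(measured, planned)]
```

Because each pulse lasts only milliseconds, any detector doing this comparison in real time must resolve individual spots, which is the bandwidth demand the text refers to.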

In this work, two devices have been developed to perform dose verification and monitoring, with an emphasis placed on fast response times. Measurements were performed at the Mayo Clinic. One detector addresses range uncertainty by measuring prompt gamma rays emitted during treatment. The range detector presented in this work is able to measure the proton range in vivo to within 1.1 mm at depths up to 11 cm in less than 500 ms, and up to 7.5 cm in less than 200 ms. A beam fluence detector presented in this work is able to measure the position and shape of each beam spot. It is hoped that this work may lead to a further maturation of detection techniques in proton therapy, helping the treatment reach its full potential to improve outcomes in patients.
Contributors: Holmes, Jason M (Author) / Alarcon, Ricardo (Thesis advisor) / Bues, Martin (Committee member) / Galyaev, Eugene (Committee member) / Chamberlin, Ralph (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

From time immemorial, epilepsy has persisted as one of the greatest impediments to human life for those stricken by it. As the fourth most common neurological disorder, epilepsy causes paroxysmal electrical discharges in the brain that manifest as seizures. Seizures debilitate patients on both a physical and a psychological level. Although not lethal by themselves, they can bring about a total disruption of consciousness which can, in hazardous conditions, lead to fatality. Roughly 1% of the world population suffers from epilepsy, and another 30 to 50 new cases per 100,000 people are added annually. Controlling seizures in epileptic patients has therefore become a great medical and, in recent years, engineering challenge.

In this study, the conditions of human seizures are recreated in an animal model of temporal lobe epilepsy. The rodents used in this study are chemically induced to become chronically epileptic. Their electroencephalogram (EEG) data is then recorded and analyzed to detect and predict seizures, with the ultimate goal being the control and complete suppression of seizures.

Two methods, the maximum Lyapunov exponent and generalized partial directed coherence (GPDC), are applied to the EEG data to extract meaningful information. Their effectiveness has been reported in the literature for the prediction of seizures and seizure focus localization. This study integrates these measures, with some modifications, to robustly detect seizures, to separately find precursors to them, and consequently to provide stimulation to the epileptic brain of rats in order to suppress seizures. Additionally, open-loop stimulation of various pairs of sites with biphasic currents for differing lengths of time has helped us create control efficacy maps. While GPDC tells us about the possible location of the focus, control efficacy maps tell us how effective stimulating a certain pair of sites will be.
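
As a minimal illustration of the first measure, the maximum Lyapunov exponent of a known chaotic system (the logistic map at r = 4, whose exact exponent is ln 2) can be computed by averaging log|f'(x)| along the orbit. Estimating the exponent from embedded EEG data, as in this study, is considerably more involved; this sketch only shows the quantity itself.

```python
import math

def max_lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1_000):
    """Maximum Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged as log|f'(x)| along the orbit; for r = 4 the exact value
    is ln 2 ~ 0.693."""
    x = x0
    for _ in range(burn):                    # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n

lam = max_lyapunov_logistic()
```

A positive exponent indicates exponential divergence of nearby trajectories; in seizure-prediction work, drops in the estimated exponent over time have been interpreted as precursors of reduced brain-dynamics complexity.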

The results from computations performed on the data are presented, and the feasibility of the control problem is discussed. The results show a new reliable means of seizure detection even in the presence of artifacts in the data. The seizure precursors provide a means of prediction on the order of tens of minutes prior to seizures. Closed-loop stimulation experiments on the epileptic animals, based on these precursors and control efficacy maps, show a maximum reduction of seizure frequency of 24.26% in one animal and a reduction of seizure length of 51.77% in another. Thus, this study shows that implementing these methods can ameliorate seizures in an epileptic patient. It is expected that the new knowledge and experimental techniques will provide a guide for future research in an effort to ultimately eliminate seizures in epileptic patients.
Contributors: Shafique, Md Ashfaque Bin (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Muthuswamy, Jitendran (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Computer vision and tracking has become an area of great interest for many reasons, including self-driving cars, identification of vehicles and drivers on roads, and security camera monitoring, all of which are expanding in the modern digital era. When working with practical systems that are constrained in multiple ways, such as video quality or viewing angle, algorithms that work well theoretically can have a high error rate in practice. This thesis studies several ways in which that error can be minimized. The application is a practical system for detecting, tracking, and counting people entering different lanes at an airport security checkpoint, using CCTV videos as the primary source. This thesis improves an existing algorithm that is not optimized for this particular problem and has a high error rate when its counts are compared with the true volume of users. The high error rate is caused by many people crowding into the security lanes at the same time. The camera from which footage was captured is located at a poor angle, so many of the people occlude each other and cause the existing algorithm to miss them. One solution is to count only heads: since heads are smaller than a full body, they occlude less, and since the camera is angled from above, the heads in back appear higher and are not occluded by people in front. One of the primary improvements to the algorithm is therefore to combine person detections and head detections to improve accuracy. The training data also matters: the existing algorithm used the COCO dataset, which works well in scenarios where people are visible and not occluded. However, the available video quality in this project was not very good, with people often blocking each other from the camera's view, so a different training set was needed that could detect people even in poor-quality frames and with occlusion.
The new training set is the first algorithmic improvement, and although occasionally performing worse, corrected the error by 7.25% on average.
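
One simple way to combine the two detector outputs, sketched here with made-up box coordinates and a naive containment test (the actual matching rule used in the thesis may differ): count every person box, plus every head box that cannot be matched to any person box.

```python
def head_matches_person(person, head):
    """True if the head box center lies in the upper half of the person box.
    Boxes are (x1, y1, x2, y2) with y increasing downward; this naive
    containment test stands in for the matching rule actually used."""
    px1, py1, px2, py2 = person
    hx1, hy1, hx2, hy2 = head
    cx, cy = (hx1 + hx2) / 2.0, (hy1 + hy2) / 2.0
    return px1 <= cx <= px2 and py1 <= cy <= py1 + 0.5 * (py2 - py1)

def combined_count(persons, heads):
    """Count people as every person detection plus every head detection that
    matches no person box (a person whose body the detector missed)."""
    unmatched = [h for h in heads
                 if not any(head_matches_person(p, h) for p in persons)]
    return len(persons) + len(unmatched)

persons = [(0, 0, 10, 30)]               # one full-body detection
heads = [(3, 1, 7, 5), (20, 1, 24, 5)]   # one matched head, one occluded person
n_people = combined_count(persons, heads)
```

The second head box has no enclosing person box, so it is counted as an additional, occluded person; this is the mechanism by which head detections recover people the body detector misses.
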
Contributors: Larsen, Andrei (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

The continuous time-tagging of photon arrival times for high-count-rate sources is necessary for applications such as optical communications, quantum key encryption, and astronomical measurements. Detection of Hanbury Brown and Twiss (HBT) single-photon correlations from thermal sources, such as stars, requires a combination of high dynamic range, long integration times, and low systematics in the photon detection and time-tagging system. The continuous nature of the measurements and the need for highly accurate timing resolution require a customized time-to-digital converter (TDC). A custom-built, two-channel, field-programmable gate array (FPGA)-based TDC capable of continuously time-tagging single photons with sub-clock-cycle timing resolution was characterized. Auto-correlation and cross-correlation measurements were used to constrain spurious systematic effects in the pulse count data as a function of system variables, including incident photon count rate, incoming signal attenuation, and measurements of fixed signals. Additionally, a generalized likelihood ratio test (GLRT) using maximum likelihood estimators (MLEs) was derived as a means to detect correlated photon signals and estimate their parameters. The derived GLRT was capable of detecting correlated photon signals in a laboratory setting with a high degree of statistical confidence. A proof is presented in which the MLE for the amplitude of the correlated photon signal is shown to be the minimum variance unbiased estimator (MVUE). The fully characterized TDC was used in preliminary measurements of astronomical sources using ground-based telescopes. Finally, preliminary theoretical groundwork is established for the deep-space optical communications system of the proposed Breakthrough Starshot project, in which low-mass craft will travel to the Alpha Centauri system to collect scientific data from Proxima b.
This theoretical groundwork uses recent and upcoming space-based optical communication systems as starting points for the Starshot communication system.
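
The basic HBT statistic can be sketched as a histogram of pairwise arrival-time differences between the two channels: correlated photons produce a peak near zero lag above the flat accidental background. The simulated tag lists, timing jitter, and bin width below are illustrative, and a continuous TDC pipeline would compute this in streaming windows rather than by brute force.

```python
import numpy as np

def cross_correlogram(tags_a, tags_b, max_lag, bin_width):
    """Histogram of pairwise arrival-time differences (t_b - t_a), the raw
    statistic behind an HBT correlation measurement. Brute force for clarity."""
    diffs = (tags_b[None, :] - tags_a[:, None]).ravel()
    diffs = diffs[np.abs(diffs) <= max_lag]
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    hist, _ = np.histogram(diffs, bins=edges)
    return edges[:-1] + bin_width / 2, hist

rng = np.random.default_rng(1)
T = 1e6                                      # total span, ns (made up)
base = np.sort(rng.uniform(0, T, 1000))      # correlated (shared) events
a = np.sort(np.concatenate([base, rng.uniform(0, T, 1000)]))
b = np.sort(np.concatenate([base + rng.normal(0, 0.5, base.size),  # 0.5 ns jitter
                            rng.uniform(0, T, 1000)]))
centers, hist = cross_correlogram(a, b, max_lag=50.0, bin_width=1.0)
```

The height of the zero-lag peak relative to the accidental floor is what the GLRT on the correlated-signal amplitude effectively tests.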
Contributors: Hodges, Todd Michael William (Author) / Mauskopf, Philip (Thesis advisor) / Trichopoulos, George (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Automatic detection of extruded features such as rooftops and trees in aerial images is a very active area of research. Elevated features identified from aerial imagery have potential applications in urban planning and in identifying cover for military or flight training. Detecting such features using commonly available geospatial data like orthographic aerial imagery is very challenging, because rooftop and tree textures are often camouflaged by similar-looking features such as roads, ground, and grass. Additional data such as LIDAR, multispectral imagery, and multiple viewpoints are therefore often exploited for more accurate detection. However, such data are often not available, improperly registered, or inaccurate. In this thesis, we discuss a novel framework that uses only orthographic images for the detection and modeling of rooftops. A segmentation scheme is proposed that initializes by assigning either foreground (rooftop) or background labels to certain pixels in the image based on shadows, and then employs GrabCut to assign one of those two labels to the remaining pixels based on the initial labeling. Parametric model fitting is performed on the segmented results in order to create a 3D scene and to facilitate roof-shape and height estimation. The framework can also benefit from additional geospatial data such as street maps and LIDAR, if available.
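
The shadow-based initialization can be sketched as follows: very dark pixels are seeded as background (shadow), and pixels displaced from a shadow opposite an assumed sun direction are seeded as candidate rooftop foreground; GrabCut (not shown) then propagates one of the two labels to the remaining pixels. The threshold and sun offset below are assumptions for illustration.

```python
import numpy as np

def shadow_seeds(gray, shadow_thresh=0.2, sun_offset=(5, 5)):
    """Seed labels from cast shadows: 0 = unknown, 1 = background (shadow)
    seed, 2 = candidate rooftop (foreground) seed. Dark pixels are shadow;
    pixels displaced from a shadow opposite the assumed sun direction become
    candidate rooftops. Threshold and offset are illustrative assumptions."""
    labels = np.zeros(gray.shape, dtype=np.uint8)
    shadow = gray < shadow_thresh
    labels[shadow] = 1
    dy, dx = sun_offset
    shifted = np.zeros_like(shadow)
    shifted[:-dy, :-dx] = shadow[dy:, dx:]   # shadow translated toward the sun
    labels[shifted & ~shadow] = 2
    return labels

gray = np.ones((20, 20))
gray[10:15, 10:15] = 0.0                     # a dark cast shadow
labels = shadow_seeds(gray)
```

The unknown (0) pixels are exactly the ones GrabCut would resolve from the seeded foreground/background color models.
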
Contributors: Khanna, Kunal (Author) / Femiani, John (Thesis advisor) / Wonka, Peter (Thesis advisor) / Razdan, Anshuman (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2013