Matching Items (87)
Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it provides blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and requires divisions and square-root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect estimation performance. We also study back-end processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is the best choice when phase information is not needed, while quadrature demodulation is a better choice when phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation with lower computational complexity. Thus, bilinear interpolation is chosen for our system.
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
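
The abstract does not detail the two approximation techniques, so as a heavily hedged illustration, here is a minimal sketch of the general kind of replacement it alludes to: Newton-Raphson iterations that turn a division and a square root into multiply-add operations. The seed polynomials and operand ranges are assumptions for this sketch, not the thesis's actual scheme.

```python
# Hypothetical illustration: Newton-Raphson iterations that replace a
# division a/b and a square root sqrt(x) with multiply-add operations,
# a common hardware-friendly substitution. Inputs are assumed pre-scaled
# into [0.5, 1), as is typical in hardware designs.

def approx_reciprocal(b, iterations=3):
    """Approximate 1/b for b in [0.5, 1)."""
    y = 48.0 / 17.0 - (32.0 / 17.0) * b   # classic linear seed
    for _ in range(iterations):
        y = y * (2.0 - b * y)             # y <- y(2 - b*y)
    return y

def approx_rsqrt(x, iterations=3):
    """Approximate 1/sqrt(x) for x in [0.25, 1)."""
    y = 2.1 - 1.2 * x                     # rough linear seed (assumed)
    for _ in range(iterations):
        y = y * (1.5 - 0.5 * x * y * y)   # Newton step for 1/sqrt
    return y

# a/b ~= a * approx_reciprocal(b); sqrt(x) ~= x * approx_rsqrt(x)
print(0.6 * approx_reciprocal(0.8))       # ~0.75
print(0.5 * approx_rsqrt(0.5))            # ~0.7071
```
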
Description
This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on the structure and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework allows the algorithm to function completely in an unsupervised manner by probabilistically accounting for all measurement-origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage correlated well with the true locations at long crack lengths, but a loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures, the electromechanically coupled LISA/SIM model was used to simulate the effects of temperature. The numerical results showed that this approach would be capable of estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase localization accuracy at temperatures above 120°C, where the error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
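
As a rough illustration of jointly estimating wave velocity and damage location from time-of-flight measurements, here is a toy least-squares grid search over a transducer network. The geometry, the linear velocity-temperature model, and all numeric values are assumptions for this sketch; the thesis's actual framework is Bayesian with data association, which this deliberately simplifies away.

```python
# A toy sketch (least-squares grid search, not the thesis's Bayesian
# framework): jointly estimate wave velocity and damage location from
# actuator -> damage -> sensor time-of-flight measurements.
import itertools
import numpy as np

v0, k = 5.4, -0.004                      # mm/us at 20 C; assumed slope
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0], [100.0, 80.0]])
damage_true, T_true = np.array([62.0, 35.0]), 60.0
v_true = v0 + k * (T_true - 20.0)        # assumed model: v(T) = v0 + k (T - 20)

pairs = list(itertools.permutations(range(len(sensors)), 2))

def path_lengths(p):
    """Actuator -> damage -> sensor scatter path for every transducer pair."""
    return np.array([np.linalg.norm(p - sensors[a]) + np.linalg.norm(p - sensors[b])
                     for a, b in pairs])

tof = path_lengths(damage_true) / v_true             # noiseless "measurements"

best = (np.inf, None, None)
for v in np.linspace(4.8, 5.6, 17):                  # candidate velocities
    for x in np.linspace(0.0, 100.0, 51):
        for y in np.linspace(0.0, 80.0, 41):
            err = np.sum((tof - path_lengths(np.array([x, y])) / v) ** 2)
            if err < best[0]:
                best = (err, v, (x, y))

_, v_hat, loc_hat = best
print(f"v ~ {v_hat:.2f} mm/us, T ~ {20 + (v_hat - v0) / k:.1f} C, damage ~ {loc_hat}")
```
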
Description
A signal with time-varying frequency content can often be expressed more clearly using a time-frequency representation (TFR), which maps the signal into a two-dimensional function of time and frequency, similar to musical notation. The thesis reviews one of the most commonly used TFRs, the Wigner distribution (WD), and discusses its application in Fourier optics: it is shown that the WD is analogous to the spectral dispersion that results from a diffraction grating, and time and frequency are similarly analogous to a one-dimensional spatial coordinate and wavenumber. The grating is compared with a simple polychromator, which is a bank of optical filters. Another well-known TFR is the short-time Fourier transform (STFT). Its discrete version can be shown to be equivalent to a filter bank, an array of bandpass filters that enable localized processing of the analysis signal in different sub-bands. This work proposes a signal-adaptive method of generating TFRs. In order to minimize distortion in analyzing a signal, the method modifies the filter bank to consist of non-overlapping rectangular bandpass filters generated using the Butterworth filter design process. The information contained in the resulting TFR can be used to reconstruct the signal, and perfect-reconstruction techniques involving quadrature mirror filter banks are compared with a simple Fourier synthesis sum. The optimal parameters of the rectangular filters are selected adaptively by minimizing the mean-squared error (MSE) from a pseudo-reconstructed version of the analysis signal. The reconstruction MSE is proposed as an error metric for characterizing TFRs; a practical measure of the error requires normalization and cross-correlation with the analysis signal. Simulations were performed to demonstrate the effectiveness of the new adaptive TFR and its relation to swept-tuned spectrum analyzers.
Contributors: Weber, Peter C. (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2012
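
To make the filter-bank view concrete, here is a minimal sketch of a bank of non-overlapping Butterworth bandpass filters whose band envelopes form a crude TFR, followed by a plain synthesis sum and reconstruction MSE in the spirit of the proposed error metric. The band edges, filter order, and test chirp are assumptions; the bank here is fixed, not signal-adaptive as in the thesis.

```python
# A minimal sketch, with assumed band edges and filter order: contiguous
# non-overlapping Butterworth bandpass filters whose band envelopes form a
# crude TFR; a simple synthesis sum then yields a reconstruction MSE.
import numpy as np
from scipy.signal import butter, chirp, hilbert, sosfiltfilt

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = chirp(t, f0=50, t1=2.0, f1=300)      # analysis signal with varying frequency

edges = [20, 80, 140, 200, 260, 320]     # contiguous non-overlapping bands (Hz)
bands, tfr = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)              # zero-phase band signal
    bands.append(y)
    tfr.append(np.abs(hilbert(y)))       # band envelope over time

tfr = np.array(tfr)                      # rows: frequency bands, cols: time
x_hat = np.sum(bands, axis=0)            # simple synthesis sum of band signals
mse = np.mean((x - x_hat) ** 2)
print(f"TFR shape {tfr.shape}, reconstruction MSE {mse:.4f}")
```
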
Description
The use of electromyography (EMG) signals to characterize muscle fatigue has been widely accepted. Initial work on characterizing muscle fatigue during isometric contractions demonstrated that the signal's frequency decreases while its amplitude increases with the onset of fatigue. More recent work has concentrated on developing techniques to characterize dynamic contractions for use in clinical and training applications. Studies demonstrated that as fatigue progresses, the EMG signal undergoes a shift in frequency, and different physiological mechanisms were considered as possible causes of the shift. Time-frequency processing, using the Wigner distribution or spectrogram, is one of the techniques used to estimate the instantaneous mean frequency and instantaneous median frequency of the EMG signal. However, these time-frequency methods suffer either from cross-term interference when processing signals with multiple components or from loss of time-frequency resolution due to the use of windowing. This study proposes the use of the matching pursuit decomposition (MPD) with a Gaussian dictionary to process EMG signals produced during both isometric and dynamic contractions. In particular, the MPD obtains unique time-frequency features that represent the EMG signal's time-frequency dependence without suffering from cross-terms or loss in time-frequency resolution. As the MPD does not depend on an analysis window like the spectrogram, it is more robust in applying the time-frequency features to identify the spectral time-variation of the EMG signal.
Contributors: Austin, Hiroko (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created: 2012
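
As a small illustration of matching pursuit with Gaussian atoms, here is a toy decomposition over an assumed coarse dictionary of Gaussian-modulated cosines: each iteration picks the best-correlated atom, subtracts it from the residual, and records its time-frequency location as a feature. The grid, atom scale, and synthetic test signal are assumptions, far coarser than a practical Gaussian dictionary.

```python
# A toy matching pursuit with Gaussian-modulated cosine atoms over a small
# assumed time-frequency grid; each iteration records one (time, frequency,
# weight) feature, with no windowing and no cross-terms.
import numpy as np

fs = 500.0
t = np.arange(0, 1.0, 1 / fs)
x = (np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) * np.cos(2 * np.pi * 60 * t)
     + np.exp(-((t - 0.7) ** 2) / (2 * 0.05 ** 2)) * np.cos(2 * np.pi * 120 * t))

def atom(t0, f0, s=0.05):
    """Unit-norm Gaussian-modulated cosine centered at (t0, f0)."""
    g = np.exp(-((t - t0) ** 2) / (2 * s ** 2)) * np.cos(2 * np.pi * f0 * t)
    return g / np.linalg.norm(g)

grid = [(t0, f0) for t0 in np.arange(0.1, 1.0, 0.1)
                 for f0 in np.arange(20.0, 200.0, 20.0)]
residual, features = x.copy(), []
for _ in range(2):                                    # one pass per expected atom
    coeffs = [residual @ atom(t0, f0) for t0, f0 in grid]
    i = int(np.argmax(np.abs(coeffs)))
    residual = residual - coeffs[i] * atom(*grid[i])  # subtract best atom
    features.append((*grid[i], round(coeffs[i], 3)))  # (time, frequency, weight)

print(features)    # expect atoms near (0.3, 60) and (0.7, 120)
```
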
Description
With the growth of IT products and sophisticated software across operating systems, security risks in systems are rising constantly. Consequently, security assessment is now considered one of the primary security mechanisms for measuring the assurance of systems, since systems that do not comply with security requirements may allow adversaries to access critical information by circumventing security practices. To ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical for understanding vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security. Hence, it is not straightforward to protect systems from being changed by unwary users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit risky software as well as commonly used software. In addition, checking the assurance of security configurations periodically is disadvantageous in terms of time and cost, given zero-day attacks and timing attacks that can leverage the window between security checks. Therefore, an event-driven monitoring approach is critical for continuously assessing the security of a target system without ignoring the window between checks, and it lessens the burden of the exhaustive task of inspecting the entire configuration of the system. Furthermore, the system should be able to generate a vulnerability report for any change initiated by a user if the change relates to requirements in the standards and turns out to be vulnerable. Assessing systems in distributed environments also requires applying the standards consistently to each environment. Such uniform, consistent assessment is important because the approach for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework to overcome and accommodate the aforementioned issues. I also discuss the implementation details, which are based on commercial-off-the-shelf technologies, and the testbed established to evaluate the approach. Finally, I describe evaluation results that demonstrate the effectiveness and practicality of the approach.
Contributors: Seo, Jeong-Jin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2014
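
To illustrate the event-driven idea (assessing a configuration the moment it changes rather than on a schedule), here is a minimal sketch using the third-party `watchdog` library. The monitored path and the hard-coded rules are illustrative assumptions, not the thesis's standards-based checks or its actual implementation.

```python
# A minimal sketch of event-driven configuration assessment using the
# third-party `watchdog` package: on every file modification, the changed
# file is checked against simple rules. Rules and paths are assumed.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

RULES = {  # hypothetical requirement: setting -> required value
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

class ConfigAuditHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        with open(event.src_path, errors="ignore") as f:
            text = f.read()
        for key, required in RULES.items():
            if key in text and f"{key} {required}" not in text:
                print(f"VULNERABLE: {event.src_path}: {key} != {required}")

observer = Observer()
observer.schedule(ConfigAuditHandler(), "/etc/ssh", recursive=False)  # assumed path
observer.start()
try:
    time.sleep(60)     # assess change events for one minute in this demo
finally:
    observer.stop()
    observer.join()
```
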
Description
Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and security engineers identify security vulnerabilities early and avoid the unexpected, expensive cost of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. In practice, however, because of its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, systematically testing RBAC systems is crucial to ensuring security in domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems considering the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including an RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework which takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
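
To show what positive and negative test cases derived from a role hierarchy look like, here is a toy sketch using plain dictionaries instead of the thesis's MTBDD/RHMTBDD encoding. The roles, hierarchy, and permissions are assumed examples.

```python
# A toy sketch (dicts, not an MTBDD) of test-case generation from a role
# hierarchy: a role's effective privileges include everything inherited
# from its junior roles; anything else becomes a negative test case.
JUNIORS = {"employee": [], "engineer": ["employee"], "manager": ["engineer"]}
DIRECT = {"employee": {"read_wiki"}, "engineer": {"commit_code"},
          "manager": {"approve_release"}}

def effective(role):
    """Effective permission set of a role, closed over the hierarchy."""
    perms = set(DIRECT[role])
    for junior in JUNIORS[role]:
        perms |= effective(junior)            # inherit down the hierarchy
    return perms

ALL = set().union(*DIRECT.values())
for role in JUNIORS:
    granted = effective(role)
    positive = [(role, p, "PERMIT") for p in sorted(granted)]       # must pass
    negative = [(role, p, "DENY") for p in sorted(ALL - granted)]   # must fail
    print(role, positive + negative)
```
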
Description
The digital forensics community has neglected email forensics as a process, despite the fact that email remains an important tool in the commission of crime. Current forensic practices focus mostly on disk forensics, while email forensics is left as an analysis task stemming from that practice. Because there is no well-defined process for email forensics, the comprehensiveness, extensibility of tools, uniformity of evidence, usefulness in collaborative/distributed environments, and consistency of investigations are hindered. At present, there exists little support for discovering, acquiring, and representing web-based email, despite its widespread use. To remedy this, a systematic process for discovering, acquiring, and representing web-based email is presented, one that is integrated into the normal forensic analysis workflow and accommodates the distinct characteristics of email evidence. This process focuses on detecting the presence of non-obvious artifacts related to email accounts, retrieving the data from the service provider, and representing email in a well-structured format based on existing standards. As a result, developers and organizations can collaboratively create and use analysis tools that treat email evidence from any source in the same fashion, and examiners can access additional data relevant to their forensic cases. Finally, an extensible framework embodying this novel process-driven approach has been implemented in an attempt to address the problems of comprehensiveness, extensibility, uniformity, collaboration/distribution, and consistency within forensic investigations involving email evidence.
Contributors: Paglierani, Justin W (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Santanam, Raghu T (Committee member) / Arizona State University (Publisher)
Created: 2013
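
As a small illustration of the "well-structured representation" step, here is a minimal sketch that parses a raw RFC 822 message into a uniform record using Python's standard `email` module. The field names and the sample message are assumptions, not the thesis's actual schema.

```python
# A minimal sketch: normalize a raw email message into a structured record
# that different analysis tools could consume uniformly. Field names are
# assumed, not the thesis's standards-based schema.
import json
from email import message_from_string
from email.utils import parsedate_to_datetime

raw = """From: alice@example.com
To: bob@example.com
Subject: quarterly numbers
Date: Mon, 05 Aug 2013 10:15:00 -0700

Please find the attached report.
"""

msg = message_from_string(raw)
record = {
    "from": msg["From"],
    "to": msg["To"],
    "subject": msg["Subject"],
    "date": parsedate_to_datetime(msg["Date"]).isoformat(),
    "body": msg.get_payload(),
}
print(json.dumps(record, indent=2))
```
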
Description
Attribute-Based Access Control (ABAC) mechanisms have been attracting a lot of interest from the research community in recent times, especially because of the flexibility and extensibility they provide by using attributes assigned to subjects as the basis for access control. ABAC enables an administrator of a server to enforce access policies on data, services, and other such resources fairly easily. It also accommodates new policies and changes to existing policies gracefully, making it a potentially good mechanism for implementing access control in large systems, particularly in today's age of cloud computing. However, management of attributes in an ABAC environment is an area that has received little attention. A mechanism allowing multiple ABAC-based systems to share data and resources can go a long way toward making ABAC scalable; at the same time, each system should be able to specify its own attribute sets independently. In the research presented in this document, a new mechanism is proposed that enables users to share resources and data in a cloud environment using ABAC techniques in a distributed manner. The focus is mainly on decentralizing the access policy specifications for the shared data so that each data owner can specify the access policy independently of others. The concepts of ontologies and the semantic web are introduced into the ABAC paradigm to give a scalable structure to the attributes and to allow systems with different sets of attributes to communicate and share resources.
Contributors: Prabhu Verleker, Ashwin Narayan (Author) / Huang, Dijiang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2014
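
To make the decentralized-policy idea concrete, here is a toy sketch in which each data owner publishes an independent attribute policy, and a small synonym map stands in for the ontology that aligns attribute vocabularies across systems. All names, attributes, and policies are illustrative assumptions.

```python
# A toy sketch of decentralized ABAC: per-owner policies over subject
# attributes, with a synonym map as a stand-in for the ontology that lets
# systems with different attribute vocabularies interoperate.
SYNONYMS = {"job_title": "role", "dept": "department"}   # ontology stand-in

def normalize(attrs):
    """Map a subject's local attribute names onto the shared vocabulary."""
    return {SYNONYMS.get(k, k): v for k, v in attrs.items()}

OWNER_POLICIES = {   # each data owner specifies its policy independently
    "alice_dataset": {"role": "researcher", "department": "ECEE"},
    "bob_dataset": {"role": "admin"},
}

def can_access(resource, subject_attrs):
    subject = normalize(subject_attrs)
    policy = OWNER_POLICIES[resource]
    return all(subject.get(k) == v for k, v in policy.items())

print(can_access("alice_dataset", {"job_title": "researcher", "dept": "ECEE"}))  # True
print(can_access("bob_dataset", {"job_title": "researcher"}))                    # False
```
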
Description
This thesis addresses the ever-increasing threat of botnets in the smartphone domain, focusing on the Android platform and botnets that use Online Social Networks (OSNs) as their Command and Control (C&C) medium. In any botnet, C&C is one of the components on which the survival of the botnet depends; individual bots use the C&C channel to receive commands and send data. This thesis develops an active host-based approach for identifying the presence of a bot, based on anomalies in the user's usage patterns before and after the bot is installed on the smartphone, and for alerting the user to the bot's presence. A profile is constructed for each user based on regular web usage patterns (obtained by intercepting http(s) traffic), and machine learning techniques are employed to continuously learn the user's behavior and changes in that behavior, all the while looking for anomalies above a threshold, which cause the user to be notified of the anomalous traffic. A prototype bot that uses OSNs as its C&C channel was constructed and used for testing. Users were given smartphones (Nexus 4 and Galaxy Nexus) running an application proxy that intercepts http(s) traffic and relays it to a server, which uses the traffic to construct a model for the particular user and looks for any signs of anomalies. This approach lays the groundwork for future host-based countermeasures against smartphone botnets that use OSNs as their C&C channel.
Contributors: Kilari, Vishnu Teja (Author) / Xue, Guoliang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2013
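
As a rough illustration of threshold-based anomaly detection on a learned traffic profile, here is a toy sketch that flags per-interval HTTP request counts whose deviation from the profile exceeds a cutoff. The window, threshold, and counts are assumed, and a simple z-score stands in for the thesis's machine learning models.

```python
# A toy sketch of the host-based profiling idea: learn a profile of
# per-interval HTTP request counts, then flag intervals deviating beyond
# a threshold. Numbers are assumed; a z-score replaces the thesis's
# continuously updated machine learning model.
import statistics

history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]   # requests per 5-min window
THRESHOLD = 3.0                                     # assumed z-score cutoff

def is_anomalous(count, profile):
    mu = statistics.mean(profile)
    sigma = statistics.stdev(profile) or 1.0        # guard against zero spread
    return abs(count - mu) / sigma > THRESHOLD

for count in [11, 13, 47]:          # 47: e.g., a bot polling its C&C channel
    verdict = "ALERT" if is_anomalous(count, history) else "ok"
    print(f"{count:3d} requests -> {verdict}")
```
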
Description
Continuous monitoring of sensor data from smartphones to identify human activities and gestures puts a heavy load on the smartphone's power consumption. In this research study, the non-Euclidean geometry of the rich sensor data obtained from the user's smartphone is utilized to perform compressive analysis and efficient classification of human activities using machine learning techniques. We are interested in the generalization of classical tools for signal approximation to newer spaces, such as rotation data, which is best studied in a non-Euclidean setting, and in its application to activity analysis. Owing to the non-linear nature of the rotation data space, feature extraction there places a heavy load on the smartphone's processor and memory compared with feature extraction in a Euclidean space; therefore, indexing and compaction of the acquired sensor data are performed prior to feature extraction to reduce CPU overhead and thereby increase battery lifetime, with little loss in activity recognition accuracy. The sensor data, represented as unit quaternions, is a more intrinsic representation of the smartphone's orientation than Euler angles (which suffer from the gimbal-lock problem) or the computationally intensive rotation matrices. Classification algorithms are employed to classify these manifold sequences in the non-Euclidean space. By performing customized indexing (using the K-means algorithm) of the evolved manifold sequences before feature extraction, considerable energy savings are achieved in terms of the smartphone's battery life.
Contributors: Sivakumar, Aswin (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2014
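
As a final illustration of the indexing/compaction step, here is a toy sketch that represents orientation samples as unit quaternions, maps q and -q to one hemisphere (they encode the same rotation), and compacts the sequence to K-means codebook indices before feature extraction. Random data stands in for real sensor streams, and flat K-means on quaternion components is an assumed simplification of clustering on the true non-Euclidean manifold.

```python
# A toy sketch of quaternion indexing: 500 orientation samples compacted
# to small codebook indices via K-means. Random data and flat clustering
# are assumed simplifications of the thesis's manifold-aware pipeline.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
q = rng.normal(size=(500, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)   # project onto unit quaternions
q[q[:, 0] < 0] *= -1                            # identify q with -q

codebook, labels = kmeans2(q, 8, minit="++")    # 8 clusters, k-means++ seeding
print(labels[:20])          # 500 4-D samples reduced to 500 small indices
```
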