Matching Items (249)
Description
Recently, the locations of the nodes in wireless networks have been modeled as point processes. In this dissertation, various scenarios of wireless communications in large-scale networks modeled as point processes are considered. The first part of the dissertation considers signal reception and detection problems with symmetric alpha-stable noise, which arises from an interfering network modeled as a Poisson point process. For the signal reception problem, the performance of space-time coding (STC) over fading channels with alpha-stable noise is studied. We derive the pairwise error probability (PEP) of orthogonal STCs. For general STCs, we propose a maximum-likelihood (ML) receiver and its approximation. The resulting asymptotically optimal receiver (AOR) does not depend on the noise parameters, is computationally simple, and performs close to the ML receiver. Then, signal detection in coexisting wireless sensor networks (WSNs) is considered. We define a binary hypothesis testing problem for signal detection in coexisting WSNs and, for this problem, introduce the ML detector and simpler alternatives. The proposed mixed fractional lower-order moment (FLOM) detector is computationally simple and performs close to the ML detector. Stochastic orders are binary relations defined on probability distributions. The second part of the dissertation introduces stochastic ordering of interferences in large-scale networks modeled as point processes. Since closed-form results for the interference distributions of such networks are only available in limited cases, it is of interest to compare network interferences using stochastic orders. In this dissertation, conditions on the fading distribution and path-loss model are given to establish stochastic ordering between interferences. Moreover, Laplace functional (LF) ordering is defined between point processes and applied to compare interferences. Then, LF orderings of general classes of point processes are introduced. It is also shown that the LF ordering is preserved when independent operations such as marking, thinning, random translation, and superposition are applied. The LF ordering of point processes is a useful tool for comparing spatial deployments of wireless networks and can be used to establish comparisons of several performance metrics, such as coverage probability, achievable rate, and resource allocation, even when closed-form expressions for such metrics are unavailable.
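The comparisons described above are analytical, but the quantity they revolve around is easy to probe numerically. The sketch below (illustrative only; the node density, path-loss exponent, guard distance, and fading parameters are assumptions, not values from the dissertation) estimates the empirical Laplace transform E[exp(-sI)] of the aggregate interference from a Poisson point process under two fading models, which is the kind of comparison the Laplace functional ordering enables when closed-form interference distributions are unavailable.

```python
import numpy as np

rng = np.random.default_rng(0)

def interference_samples(density, alpha_pl, radius, n_trials, fading):
    """Monte Carlo samples of aggregate interference at the origin from a
    Poisson point process of the given density on a disc of radius `radius`,
    with path-loss exponent alpha_pl and i.i.d. fading marks drawn by `fading`."""
    samples = np.empty(n_trials)
    for t in range(n_trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        r = radius * np.sqrt(rng.uniform(size=n))        # uniform points in the disc
        r = np.maximum(r, 1.0)                           # guard distance (assumed)
        samples[t] = np.sum(fading(n) * r ** (-alpha_pl))
    return samples

# Empirical Laplace transform E[exp(-s I)] under two fading models; comparing
# these curves is a numerical proxy for the Laplace-functional ordering.
s = np.logspace(-2, 1, 8)
I_rayleigh = interference_samples(0.1, 4.0, 50.0, 4000, lambda n: rng.exponential(1.0, n))
I_nakagami = interference_samples(0.1, 4.0, 50.0, 4000, lambda n: rng.gamma(2.0, 0.5, n))
print(np.round([np.mean(np.exp(-si * I_rayleigh)) for si in s], 3))
print(np.round([np.mean(np.exp(-si * I_nakagami)) for si in s], 3))
```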
Contributors: Lee, Junghoon (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Committee member) / Reisslein, Martin (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In contemporary society, sustainability and public well-being have been pressing challenges. Some of the important questions are: How can sustainable practices, such as reducing carbon emissions, be encouraged? How can a healthy lifestyle be maintained? Even though individuals are interested, they are often unable to adopt these behaviors due to resource constraints. Developing a framework to enable cooperative behavior adoption and to sustain it for a long period of time is a major challenge. As a part of developing this framework, I am focusing on methods to understand behavior diffusion over time. Facilitating behavior diffusion with resource constraints in a large population is qualitatively different from promoting cooperation in small groups. Previous work in the social sciences has derived conditions for sustainable cooperative behavior in small homogeneous groups. However, how groups of individuals with resource constraints cooperate over extended periods of time is not well understood, and is the focus of my thesis. I develop models to analyze behavior diffusion over time through the lens of epidemic models, with the condition that individuals have resource constraints. I introduce an epidemic model, SVRS (Susceptible-Volatile-Recovered-Susceptible), to accommodate multiple behavior adoption. I investigate the longitudinal effects of behavior diffusion by varying different properties of an individual, such as resources, threshold, and cost of behavior adoption. I also consider how the behavior adoption of an individual varies with her knowledge of global adoption. I evaluate my models on several synthetic topologies, such as the complete regular graph, preferential attachment, and small-world networks, and make some interesting observations. Periodic injection of early adopters can help boost the spread of behaviors and sustain it for a longer period of time. Also, contrary to conventional wisdom, behavior propagation for the classical epidemic model SIRS (Susceptible-Infected-Recovered-Susceptible) does not continue for an infinite period of time. One interesting future direction is to investigate how behavior adoption is affected when the number of individuals in a network changes. The effects on behavior adoption when the availability of behaviors changes with time could also be examined.
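As a rough companion to the epidemic-style models described above, the following sketch simulates a discrete-time SIRS-like adoption process on a small-world graph. It is a toy illustration, not the thesis's SVRS model: the graph size, adoption/drop-out/re-susceptibility rates, and number of early adopters are assumed values, and resource constraints and adoption costs are not modeled.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=1)   # small-world topology

S, A, R = 0, 1, 2                        # susceptible, adopter, recovered/dropped
state = np.full(G.number_of_nodes(), S)
state[rng.choice(G.number_of_nodes(), size=10, replace=False)] = A   # early adopters

beta, gamma, xi = 0.05, 0.10, 0.02       # adoption, drop-out, re-susceptibility (assumed)
for step in range(201):
    new_state = state.copy()
    for v in G.nodes:
        if state[v] == S:
            k_adopt = sum(state[u] == A for u in G.neighbors(v))
            if rng.random() < 1 - (1 - beta) ** k_adopt:   # adopt via neighbors
                new_state[v] = A
        elif state[v] == A and rng.random() < gamma:        # drop the behavior
            new_state[v] = R
        elif state[v] == R and rng.random() < xi:           # become susceptible again
            new_state[v] = S
    state = new_state
    if step % 50 == 0:
        print(step, np.bincount(state, minlength=3))        # counts of S, A, R over time
```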
Contributors: Dey, Anindita (Author) / Sundaram, Hari (Thesis advisor) / Turaga, Pavan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction, and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing the capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean squared error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real-time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
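The sequential estimation idea can be sketched with a minimal bootstrap particle filter in which each particle also carries a loading-mode label, loosely mirroring the interacting-multiple-model flavor described above. The crack-growth rates, mode-transition matrix, and noise levels below are placeholders, not the dissertation's models, and the observation is a simple noisy crack-length measurement rather than a time-frequency feature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-mode crack-growth dynamics (placeholder rates, mm per step):
# mode 0 = mild loading, mode 1 = severe loading.
growth_rate = np.array([0.02, 0.08])
P_mode = np.array([[0.95, 0.05],
                   [0.10, 0.90]])                 # loading-mode transition matrix

def step_truth(length, mode):
    mode = rng.choice(2, p=P_mode[mode])
    return length + growth_rate[mode] + 0.005 * rng.standard_normal(), mode

# Bootstrap particle filter; each particle carries its own loading-mode label.
N = 2000
particles = np.full(N, 1.0)                       # crack-length particles (mm)
modes = np.zeros(N, dtype=int)
length, mode = 1.0, 0
for t in range(60):
    length, mode = step_truth(length, mode)       # simulate the true system
    z = length + 0.05 * rng.standard_normal()     # noisy crack-length observation
    modes = np.array([rng.choice(2, p=P_mode[m]) for m in modes])
    particles = particles + growth_rate[modes] + 0.005 * rng.standard_normal(N)
    w = np.exp(-0.5 * ((z - particles) / 0.05) ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)              # multinomial resampling
    particles, modes = particles[idx], modes[idx]
    if t % 20 == 0:
        print(t, round(length, 3), round(particles.mean(), 3), np.bincount(modes, minlength=2))
```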
Contributors: Huff, Daniel W (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Continuous monitoring of sensor data from smart phones to identify human activities and gestures puts a heavy load on the smart phone's power consumption. In this research study, the non-Euclidean geometry of the rich sensor data obtained from the user's smart phone is utilized to perform compressive analysis and efficient classification of human activities by employing machine learning techniques. We are interested in the generalization of classical tools for signal approximation to newer spaces, such as rotation data, which is best studied in a non-Euclidean setting, and in its application to activity analysis. Owing to the non-linear nature of the rotation data space, feature extraction places a heavy load on the smart phone's processor and memory compared with feature extraction in Euclidean space. Therefore, indexing and compaction of the acquired sensor data are performed prior to feature extraction to reduce CPU overhead and thereby increase the lifetime of the battery, with little loss in recognition accuracy of the activities. The sensor data, represented as unit quaternions, is a more intrinsic representation of the orientation of the smart phone than Euler angles (which suffer from the gimbal lock problem) or the computationally intensive rotation matrices. Classification algorithms are employed to classify these manifold sequences in the non-Euclidean space. By performing customized indexing (using the K-means algorithm) of the evolved manifold sequences before feature extraction, considerable energy savings are achieved in terms of the smart phone's battery life.
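A minimal sketch of the indexing step, under the simplifying assumption that K-means is run directly on the four quaternion coordinates (an extrinsic approximation rather than a true manifold clustering): orientation samples are represented as unit quaternions, clustered into a small codebook, and each sample is replaced by an integer code before any feature extraction. The stream here is synthetic and the cluster count is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

def random_unit_quaternions(n):
    q = rng.standard_normal((n, 4))
    return q / np.linalg.norm(q, axis=1, keepdims=True)

def geodesic_dist(q1, q2):
    """Geodesic distance between unit quaternions, honoring the q ~ -q
    ambiguity of rotation representations."""
    d = np.clip(np.abs(np.sum(q1 * q2, axis=-1)), 0.0, 1.0)
    return 2.0 * np.arccos(d)

# A hypothetical stream of orientation samples from the phone's rotation sensor.
stream = random_unit_quaternions(3000)

# Codebook indexing with K-means on the quaternion coordinates; centres are
# re-normalized back onto the unit 3-sphere after fitting.
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(stream)
codebook = km.cluster_centers_ / np.linalg.norm(km.cluster_centers_, axis=1, keepdims=True)
codes = km.predict(stream)                      # one compact integer code per sample
print(codes[:20])
print("distance of sample 0 to its codeword:",
      round(float(geodesic_dist(stream[0], codebook[codes[0]])), 3))
```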
Contributors: Sivakumar, Aswin (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging, and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is in proposing an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the ability of the most popular visual attention models to predict visual saliency on textures is evaluated. Since there is no publicly available database with ground-truth saliency maps on images with exclusive texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The proposed metric is based on the observation that VSM characteristics differ between textures of differing regularity. The proposed texture regularity metric is based on two texture regularity scores, namely a textural similarity score and a spatial distribution score. In order to evaluate the performance of the proposed regularity metric, a texture regularity database, called RegTEX, is built as part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture regularity metrics in predicting the perceived regularity. The potential of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of the perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through building a synthesized-texture database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed as part of this work. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures. The perceived granularity is quantified through a new granularity metric that is proposed in this work. It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established.
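For intuition only, the toy function below scores texture regularity from the strength of secondary autocorrelation peaks; it is a crude stand-in and not the saliency-based metric proposed in the work (no visual saliency map, and no textural-similarity or spatial-distribution scores). The example textures and patch size are arbitrary.

```python
import numpy as np

def regularity_score(texture):
    """Crude periodicity proxy: strength of the largest secondary peak in the
    normalized circular autocorrelation of a grayscale patch."""
    t = texture - texture.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(t)) ** 2).real
    acf = np.fft.fftshift(acf) / acf.max()
    h, w = acf.shape
    acf[h // 2 - 2:h // 2 + 3, w // 2 - 2:w // 2 + 3] = 0.0   # suppress the zero-lag lobe
    return float(acf.max())

rng = np.random.default_rng(4)
x = np.arange(128)
periodic = np.sin(2 * np.pi * np.add.outer(x, x) / 16.0)      # highly regular texture
irregular = rng.standard_normal((128, 128))                   # irregular texture
print(round(regularity_score(periodic), 3), round(regularity_score(irregular), 3))
```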
Contributors: Varadarajan, Srenivas (Author) / Karam, Lina J (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Li, Baoxin (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that performs activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as the FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is proportional to the inverse of the grouping size, so it decreases as the grouping size increases. For features that depend on the grouping size, such as the FFT, energy increases with the logarithm of the grouping size, so energy consumption increases slowly as the grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption and that the energy consumed by the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
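A minimal sketch of transition detection with a log-likelihood ratio over a sliding window, assuming two hypothetical Gaussian feature models for the activities; the means, variances, window length, and the feature itself are made up and are not the thesis's models or parameters.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical accelerometer-magnitude stream: "walking" then "sitting".
walk = norm(loc=1.2, scale=0.3)
sit = norm(loc=0.2, scale=0.1)
signal = np.concatenate([walk.rvs(500, random_state=1), sit.rvs(500, random_state=2)])

def llr_transition_scores(x, win=50):
    """Sliding-window log-likelihood ratio between the two activity models;
    a large change in the ratio across the window boundary suggests an
    activity transition near that index."""
    scores = []
    for i in range(win, len(x) - win):
        left, right = x[i - win:i], x[i:i + win]
        llr_left = walk.logpdf(left).sum() - sit.logpdf(left).sum()
        llr_right = walk.logpdf(right).sum() - sit.logpdf(right).sum()
        scores.append(abs(llr_left - llr_right))
    return np.array(scores)

scores = llr_transition_scores(signal)
print("most likely transition near sample", 50 + int(scores.argmax()))
```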
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Waveform design that allows for a wide variety of frequency modulation (FM) has proven benefits. However, dictionary-based optimization is limited and gradient search methods are often intractable. A new method is proposed that uses differential evolution to design waveforms whose instantaneous frequencies (IFs) are cubic FM functions with coefficients constrained to the surface of the three-dimensional unit sphere. Cubic IF functions subsume well-known IF functions such as linear, quadratic monomial, and cubic monomial IF functions. In addition, all nonlinear IF functions sufficiently approximated by a third-order Taylor series over the unit time sequence can be represented in this space. Analog methods for generating polynomial IF waveforms are well established, allowing for practical implementation in real-world systems. By sufficiently constraining the search space to these waveforms of interest, alternative optimization methods such as differential evolution can be used to optimize tracking performance in a variety of radar environments. While information-theoretic results exist for simplified tracking models and finite waveform dictionaries, continuous waveform design in high-SNR, narrowband, cluttered environments is explored here.
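The optimization setup can be sketched as follows: cubic IF coefficients are normalized onto the unit sphere and searched with SciPy's differential evolution. The cost used here, peak autocorrelation sidelobe level, is only a convenient surrogate; the dissertation optimizes tracking performance, and the carrier scaling and waveform length below are arbitrary.

```python
import numpy as np
from scipy.optimize import differential_evolution

N = 256
t = np.linspace(0.0, 1.0, N)

def waveform(coeffs):
    """Unit-amplitude waveform with cubic instantaneous frequency
    f(t) = f0 * (a1 + a2*t + a3*t^2), where a = coeffs pushed onto the unit sphere."""
    a = np.asarray(coeffs) / (np.linalg.norm(coeffs) + 1e-12)
    phase = 2.0 * np.pi * 40.0 * (a[0] * t + a[1] * t ** 2 / 2.0 + a[2] * t ** 3 / 3.0)
    return np.exp(1j * phase)

def peak_sidelobe_db(coeffs):
    """Surrogate cost: peak autocorrelation sidelobe level in dB."""
    s = waveform(coeffs)
    acf = np.abs(np.correlate(s, s, mode="full"))
    acf /= acf.max()
    main = len(acf) // 2
    sidelobes = np.delete(acf, np.arange(main - 3, main + 4))  # drop the mainlobe
    return 20.0 * np.log10(sidelobes.max() + 1e-12)

result = differential_evolution(peak_sidelobe_db, bounds=[(-1, 1)] * 3, seed=0, maxiter=50)
print("unit-sphere IF coefficients:", np.round(result.x / np.linalg.norm(result.x), 3))
print("peak sidelobe (dB):", round(result.fun, 2))
```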
Contributors: Paul, Bryan (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Neural activity tracking using electroencephalography (EEG) and magnetoencephalography (MEG) brain scanning methods has been widely used in the field of neuroscience to provide insight into the nervous system. However, the tracking accuracy depends on the presence of artifacts in the EEG/MEG recordings. Artifacts include any signals that do not originate from neural activity, including physiological artifacts such as eye movement and non-physiological activity caused by the environment.

This work proposes an integrated method for simultaneously tracking multiple neural sources using the probability hypothesis density particle filter (PPHDF) and reducing the effect of artifacts using feature extraction and stochastic modeling. Unique time-frequency features are first extracted using matching pursuit decomposition for both neural activity and artifact signals.

The features are used to model probability density functions for each signal type using Gaussian mixture modeling for use in the PPHDF neural tracking algorithm. The probability density function of the artifacts provides information to the tracking algorithm that can help reduce the probability of incorrectly estimating the dynamically varying number of current dipole sources and their corresponding neural activity localization parameters. Simulation results demonstrate the effectiveness of the proposed algorithm in increasing the tracking accuracy performance for multiple dipole sources using recordings that have been contaminated by artifacts.
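The artifact-versus-neural modeling step can be sketched by fitting one Gaussian mixture per signal class to extracted features and converting the two likelihoods into a posterior weight. The two-dimensional features below are synthetic placeholders (not matching pursuit decomposition outputs), and the mixture sizes and equal priors are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Synthetic 2-D time-frequency features for neural-activity atoms and artifact atoms.
neural_feats = rng.normal([10.0, 0.5], [2.0, 0.1], size=(400, 2))
artifact_feats = rng.normal([2.0, 1.5], [0.5, 0.4], size=(400, 2))

gmm_neural = GaussianMixture(n_components=2, random_state=0).fit(neural_feats)
gmm_artifact = GaussianMixture(n_components=2, random_state=0).fit(artifact_feats)

def p_neural(feature):
    """Posterior probability (equal priors) that a feature came from neural
    activity; such a weight could down-weight artifact-like measurements
    before they reach the tracking filter."""
    ln = gmm_neural.score_samples(feature[None, :])[0]
    la = gmm_artifact.score_samples(feature[None, :])[0]
    return 1.0 / (1.0 + np.exp(la - ln))

print(round(p_neural(np.array([9.5, 0.45])), 3))   # neural-like feature
print(round(p_neural(np.array([2.2, 1.4])), 3))    # artifact-like feature
```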
Contributors: Jiang, Jiewei (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Head movement is known to improve the accuracy of sound localization for humans and animals. The marmoset is a small-bodied New World monkey species that has become an emerging model for studying auditory function. This thesis aims to detect the horizontal and vertical rotation of head movement in marmoset monkeys.

Experiments were conducted in a sound-attenuated acoustic chamber. Head movement of the marmoset monkey was studied under various auditory and visual stimulation conditions. In order of increasing complexity, these conditions are (1) idle, (2) sound alone, (3) sound and visual signals, and (4) an alert signal produced by opening and closing the chamber door. All of these conditions were tested with the house light either on or off. An infrared camera with a frame rate of 90 Hz was used to capture the head movement of the monkeys. To assist signal detection, two circular markers were attached to the top of the monkey's head. The data analysis used an image-based marker detection scheme. Images were processed using the Computer Vision Toolbox in MATLAB. The markers and their positions were detected using blob detection techniques. Based on the frame-by-frame information of the marker positions, the angular position, velocity, and acceleration were extracted in the horizontal and vertical planes. Adaptive Otsu thresholding, Kalman filtering, and bound setting for marker properties were used to overcome a number of challenges encountered during this analysis, such as finding the image segmentation threshold, continuously tracking markers during large head movements, and rejecting false alarms.

The results show that the blob detection method together with Kalman filtering yielded better performance than other image-based techniques such as optical flow and SURF features. The median of the maximal head turn in the horizontal plane was in the range of 20 to 70 degrees, and the median of the maximal velocity in the horizontal plane was in the range of a few hundred degrees per second. In comparison, the natural alert signal - door opening and closing - evoked faster head turns than the other stimulus conditions. These results suggest that behaviorally relevant stimuli such as alert signals evoke faster head-turn responses in marmoset monkeys.
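A minimal sketch of the angle-tracking step: a constant-velocity Kalman filter smooths per-frame head-angle estimates such as those derived from the two detected marker centroids. The process and measurement noise levels, the synthetic head-turn trajectory, and the helper geometry are illustrative assumptions, not the thesis's calibrated values.

```python
import numpy as np

def head_angle(marker_a, marker_b):
    """Horizontal head angle (degrees) from the image coordinates of the two
    head markers; a toy geometric stand-in for the blob-detection output."""
    dx, dy = marker_b[0] - marker_a[0], marker_b[1] - marker_a[1]
    return np.degrees(np.arctan2(dy, dx))

# Constant-velocity Kalman filter over per-frame angle estimates (90 Hz frames).
dt = 1.0 / 90.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: angle, angular rate
H = np.array([[1.0, 0.0]])                # only the angle is measured
Q = np.diag([1e-3, 1e-1])                 # process noise (assumed)
R = np.array([[4.0]])                     # measurement noise, deg^2 (assumed)

rng = np.random.default_rng(7)
x, P = np.array([0.0, 0.0]), np.eye(2) * 10.0
true_angle = np.cumsum(rng.normal(0.5, 0.2, 270))       # synthetic ~3 s head turn
for z in true_angle + rng.normal(0.0, 2.0, 270):        # noisy angle per frame
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)                 # update
    P = (np.eye(2) - K @ H) @ P

print("example marker-pair angle:", round(head_angle((100, 200), (140, 212)), 1))
print("final angle / angular-rate estimate:", np.round(x, 2))
```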
Contributors: Simhadri, Sravanthi (Author) / Zhou, Yi (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
LTE-Advanced networks employ random access based on preambles transmitted according to multi-channel slotted Aloha principles. The random access is controlled through a limit W on the number of transmission attempts and a timeout period for uniform backoff after a collision. We model the LTE-Advanced random access system by formulating the equilibrium condition for the ratio of the number of requests successful within the permitted number of transmission attempts to those successful in one attempt. We prove that for W≤8 there is only one equilibrium operating point and for W≥9 there are three operating points if the request load ρ is between load boundaries ρ1 and ρ2. We analytically identify these load boundaries as well as the corresponding system operating points. We analyze the throughput and delay of successful requests at the operating points and validate the analytical results through simulations. Further, we generalize the results using a steady-state equilibrium based approach and develop models for single-channel and multi-channel systems, incorporating the barring probability PB. Ultimately, we identify the de-correlating effect of parameters O, PB, and Tomax and introduce the Poissonization effect due to the backlogged requests in a slot. We investigate the impact of Poissonization on different traffic and conclude this thesis.
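A toy simulation of the system described above, for illustration only: multi-channel slotted Aloha with a retry limit W and uniform backoff, reporting the fraction of requests that succeed within W attempts as the load varies. The number of preambles, backoff window, and simulation length are arbitrary, and the model omits the barring probability PB and other LTE-Advanced details.

```python
import numpy as np

rng = np.random.default_rng(8)

def success_fraction(load, n_preambles=10, W=8, backoff=20, n_slots=2000):
    """Fraction of requests that succeed within W attempts in a toy
    multi-channel slotted Aloha system with uniform backoff after collisions."""
    pending = []                      # (attempts_left, wake_slot) per backlogged request
    successes = arrivals = 0
    for slot in range(n_slots):
        n_new = rng.poisson(load * n_preambles)
        arrivals += n_new
        pending += [(W, slot)] * n_new
        active = [i for i, (_, wake) in enumerate(pending) if wake <= slot]
        choices = rng.integers(0, n_preambles, size=len(active))
        counts = np.bincount(choices, minlength=n_preambles)
        keep = []
        for j, i in enumerate(active):
            attempts_left, _ = pending[i]
            if counts[choices[j]] == 1:           # unique preamble choice: success
                successes += 1
            elif attempts_left > 1:               # collided: uniform backoff
                keep.append((attempts_left - 1, slot + 1 + rng.integers(backoff)))
        active_set = set(active)
        pending = keep + [p for i, p in enumerate(pending) if i not in active_set]
    return successes / max(arrivals, 1)

for rho in (0.2, 0.6, 1.0):
    print(rho, round(success_fraction(rho), 3))
```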
Contributors: Tyagi, Revak (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / McGarry, Michael (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2014