Matching Items (897)
Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing the key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns, improving gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and decision-level camera fusion using the product rule has been found to be optimal for gesture recognition with multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
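The decision-level product-rule fusion mentioned above can be sketched in a few lines of Python. This is an illustrative outline only, not the dissertation's implementation: it assumes each camera's gesture classifier already outputs per-class posterior probabilities, and the posteriors below are hypothetical.

```python
import numpy as np

def product_rule_fusion(camera_posteriors):
    """Fuse per-camera class posteriors with the product rule.

    camera_posteriors: array of shape (n_cameras, n_classes), each row holding
    one camera's posterior probability for every gesture class.
    Returns the index of the gesture class with the largest fused score.
    """
    # Work in log space so the product does not underflow when many cameras vote.
    log_scores = np.sum(np.log(camera_posteriors + 1e-12), axis=0)
    return int(np.argmax(log_scores))

# Toy example: three uncalibrated cameras voting over four gesture classes.
posteriors = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.30, 0.20, 0.10],
    [0.55, 0.25, 0.10, 0.10],
])
print(product_rule_fusion(posteriors))  # -> 0 (class favored by all three cameras)
```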
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing, ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to derive data mining or pattern recognition criteria for manufacturing process or upstream test data by means of support vector machines (SVM), in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, to improve, via screening, the reliability of the product delivered to the customer. Such models can also aid in reliability risk assessment based on detectable correlations between product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on downstream failure outcomes.
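A minimal sketch of the L-moment-plus-SVM idea, assuming each unit's upstream test yields one time series and a binary downstream pass/fail label; the feature set, kernel choice, and synthetic data below are hypothetical stand-ins rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def l_moments(x):
    """First three sample L-moments of a 1-D sample (location, scale, skew-like)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return np.array([b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0])

def featurize(traces):
    """Map each upstream test time series to its L-moment feature vector."""
    return np.vstack([l_moments(ts) for ts in traces])

# Hypothetical upstream test traces and downstream pass(0)/fail(1) labels.
rng = np.random.default_rng(0)
good = [rng.normal(0.0, 1.0, 200) for _ in range(50)]
bad = [rng.normal(0.3, 1.6, 200) for _ in range(50)]
X = featurize(good + bad)
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
print(clf.score(X, y))  # training accuracy of the toy classifier
```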
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The demand for handheld portable computing in education, business, and research has resulted in advanced mobile devices with powerful processors and large multi-touch screens. Such devices are capable of handling tasks of moderate computational complexity, such as word processing, complex Internet transactions, and even human motion analysis. Apple's iOS devices, including the iPhone, iPod touch, and the latest in the family, the iPad, are among the most well-known and widely used mobile devices today. Their advanced multi-touch interface and improved processing power can be exploited for engineering and STEM demonstrations. Moreover, these devices have become a part of everyday student life. Hence, the design of exciting mobile applications and software represents a great opportunity to build student interest and enthusiasm in science and engineering. This thesis presents the design and implementation of portable interactive signal processing simulation software on the iOS platform. The iOS-based object-oriented application, called i-JDSP, is based on the award-winning Java-DSP concept. It is implemented in Objective-C and C as a native Cocoa Touch application that can run on any iOS device. i-JDSP offers basic signal processing simulation functions, such as the Fast Fourier Transform, filtering, and spectral analysis, in a compact and convenient graphical user interface, and provides a very compelling multi-touch programming experience. Built-in modules also demonstrate concepts such as pole-zero placement. i-JDSP also incorporates sound capture and playback options that can be used in near real-time analysis of speech and audio signals. All simulations can be visually established by forming interactive block diagrams through multi-touch and drag-and-drop. Computations are performed on the mobile device when necessary, making block diagram execution fast. Furthermore, the extensive support for user interactivity provides scope for improved learning. The results of an i-JDSP assessment among senior undergraduate and first-year graduate students revealed that the software created a significant positive impact and increased the students' interest and motivation in understanding basic DSP concepts.
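i-JDSP itself is an Objective-C/Cocoa Touch application; the Python sketch below only illustrates the kind of computation behind its pole-zero placement and FFT blocks. The pole/zero locations, test signal, and sample rate are made up for the example.

```python
import numpy as np
from scipy import signal

# Place a conjugate pole pair near the unit circle at 0.25*pi rad/sample and
# zeros at DC and Nyquist, giving a resonator centered near 1 kHz at fs = 8 kHz.
poles = [0.9 * np.exp(1j * 0.25 * np.pi), 0.9 * np.exp(-1j * 0.25 * np.pi)]
zeros = [1.0, -1.0]
b, a = signal.zpk2tf(zeros, poles, 1.0)

# Filter a two-tone test signal and compare tone magnitudes via the FFT.
fs = 8000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 3000 * t)
y = signal.lfilter(b, a, x)

X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
for f in (1000, 3000):
    k = int(f * len(x) / fs)  # FFT bin index of the tone
    print(f, round(X[k], 1), round(Y[k], 1))  # 1 kHz is emphasized, 3 kHz attenuated
```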
Contributors: Liu, Jinru (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Kostas (Committee member) / Qian, Gang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is especially beneficial when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and faulty-module information. Various topologies, such as the series-parallel (SP), the total cross-tied (TCT), the bridge link (BL), and their bypassed versions, are considered. The topology reconfiguration method compares the efficiencies of the topologies and evaluates, among other factors, the percentage gain in generated power that would be obtained by reconfiguring the array, in order to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on the array performance are also studied. The simulations are carried out using a SPICE simulator, and the simulation results are validated with experimental data provided by the PACECO Company.
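The power comparison at the heart of the topology reconfiguration idea can be illustrated with a deliberately simplified module model (current proportional to irradiance, fixed module voltage, no bypass diodes). This is not the SPICE model used in the thesis, and the shading map below is hypothetical; it only shows how SP and TCT power estimates can be compared to pick a topology.

```python
import numpy as np

def sp_power(g):
    """Series-parallel: each column is a series string; the most shaded module
    limits that string's current. Returns relative power in arbitrary units."""
    n_rows = g.shape[0]
    return float(g.min(axis=0).sum() * n_rows)

def tct_power(g):
    """Total cross-tied: modules in a row are paralleled, rows are in series;
    the weakest row limits the array current."""
    n_rows = g.shape[0]
    return float(g.sum(axis=1).min() * n_rows)

# Hypothetical 3x3 normalized irradiance map with scattered partial shading.
g = np.array([
    [0.5, 1.0, 1.0],
    [1.0, 0.5, 1.0],
    [1.0, 1.0, 1.0],
])

candidates = {"SP": sp_power(g), "TCT": tct_power(g)}
best = max(candidates, key=candidates.get)
gain = 100 * (candidates[best] - candidates["SP"]) / candidates["SP"]
print(candidates, best, f"{gain:.1f}% gain over SP")  # TCT wins for scattered shading
```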
Contributors: Buddha, Santoshi Tejasri (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Human-environment interactions in aeolian (windblown) systems have focused research on humans' role in causing, and aiding recovery from, natural and anthropogenic disturbance. There is room for improvement in understanding the best methods and considerations for manual coastal foredune restoration. Furthermore, the extent to which humans play a role in changing the shape and surface textures of quartz sand grains is poorly understood. The goal of this thesis is two-fold: 1) quantify the geomorphic effectiveness of a multi-year, manually rebuilt foredune and 2) compare the shapes and microtextures of disturbed and undisturbed quartz sand grains. For the rebuilt foredune, uncrewed aerial systems (UAS) were used to survey the site, collecting photos to create digital surface models (DSMs). These DSMs were compared at discrete moments in time to create a sediment budget. Water levels and cross-shore modeling are also considered to predict the decadal evolution of the site. In the two years since rebuilding, the foredune has been stable, but not geomorphically resilient. Modeling shows landward foredune retreat and beach widening. For the quartz grains, t-testing of shape characteristics showed that there may be differences in mean circularity between grains from off-highway vehicle riding areas and non-riding areas. Quartz grains from a variety of coastal and inland dunes were imaged using scanning electron microscopy to search for evidence of anthropogenically induced microtextures. On grains from Oceano Dunes in California, encouraging textures such as parallel striations, grain fracturing, and linear conchoidal fractures provide exploratory evidence of anthropogenic microtextures. More focused research is recommended to confirm this exploratory work.
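The DSM-of-difference step behind a sediment budget reduces to a grid subtraction over co-registered surveys. The sketch below is a generic illustration, not the thesis workflow; the cell size, detection threshold, and elevations are invented.

```python
import numpy as np

def sediment_budget(dsm_before, dsm_after, cell_size, detection_threshold=0.05):
    """Volumetric change between two co-registered DSM grids (metres).

    Elevation differences smaller than the detection threshold are treated as
    survey noise and ignored. Returns erosion, deposition, and net volume (m^3).
    """
    dz = dsm_after - dsm_before
    dz = np.where(np.abs(dz) < detection_threshold, 0.0, dz)
    cell_area = cell_size ** 2
    deposition = dz[dz > 0].sum() * cell_area
    erosion = dz[dz < 0].sum() * cell_area
    return erosion, deposition, erosion + deposition

# Toy 3x3 grids with 0.5 m cells: sand lost near the toe, gained on the crest.
before = np.array([[2.0, 2.1, 2.2], [2.4, 2.6, 2.8], [3.0, 3.2, 3.4]])
after = np.array([[1.9, 2.0, 2.2], [2.4, 2.7, 2.9], [3.1, 3.3, 3.5]])
print(sediment_budget(before, after, cell_size=0.5))
```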

Contributors: Marvin, Michael Colin (Author) / Walker, Ian (Thesis director) / Dorn, Ron (Committee member) / Schmeeckle, Mark (Committee member) / School of Geographical Sciences and Urban Planning (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Motor learning is the process of improving task execution according to some measure of performance. This can be divided into skill learning, a model-free process, and adaptation, a model-based process. Prior studies have indicated that adaptation results from two complementary learning systems with parallel organization. This report attempted to answer the question of whether a similar interaction leads to savings, a model-free process that is described as faster relearning when experiencing something familiar. This was tested in a two-week reaching task conducted on a robotic arm capable of perturbing movements. The task was designed so that the two sessions differed in their history of errors. By measuring the change in the learning rate, the savings was determined at various points. The results showed that the history of errors successfully modulated savings. Thus, this supports the notion that the two complementary systems interact to develop savings. Additionally, this report was part of a larger study that will explore the organizational structure of the complementary systems as well as the neural basis of this motor learning.
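The abstract does not give model equations, but the "two complementary learning systems" idea is often formalized as a fast and a slow error-driven state adapting in parallel. The sketch below uses that standard two-state form with made-up retention and learning rates, purely to illustrate how a retained slow state can produce faster relearning (savings); it is not the report's analysis.

```python
import numpy as np

def simulate_two_state(perturbation, af=0.6, bf=0.2, a_s=0.99, b_s=0.02, x0=(0.0, 0.0)):
    """Error-driven two-state model: a fast process (learns and forgets quickly)
    and a slow process (learns and forgets slowly) adapt in parallel."""
    xf, xs = x0
    net = []
    for p in perturbation:
        e = p - (xf + xs)          # movement error on this trial
        xf = af * xf + bf * e      # fast state update
        xs = a_s * xs + b_s * e    # slow state update
        net.append(xf + xs)
    return np.array(net), (xf, xs)

# Learn a +1 perturbation, wash it out, then relearn: the slow state retained
# from session 1 drives faster relearning (savings) in session 2.
block, washout = np.ones(60), np.zeros(30)
first, state = simulate_two_state(np.concatenate([block, washout]))
second, _ = simulate_two_state(block, x0=state)
print(first[:5].round(2))   # initial learning curve
print(second[:5].round(2))  # steeper relearning from the retained slow state
```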

Contributors: Ruta, Michael (Author) / Santello, Marco (Thesis director) / Blais, Chris (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / School of Human Evolution & Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Edge computing is a new and growing market in which Company X has an opportunity to expand its presence. Within this paper, we compare several external research studies to better quantify the Total Addressable Market of the edge computing space. Furthermore, we highlight which segments within edge computing have the most opportunities for growth and identify a specific market strategy that Company X could pursue to capture market share within the most opportunistic segment.

Contributors: Hamkins, Sean (Co-author) / Raimondi, Ronnie (Co-author) / Gandolfi, Micheal (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one's palm. However, if an application does not require extensive computation power nor accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
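The thesis's actual algorithm is not reproduced in the abstract; the Python sketch below shows only the general flavor of a purely local rule under which simulated particles converge on a target. The sensing model, movement rule, and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([40.0, 25.0])

def local_signal(pos):
    """Scalar cue a particle can sense at its own location, e.g. a chemical
    concentration that increases toward the target (no global coordinates)."""
    return -np.linalg.norm(pos - TARGET)

def step(pos, step_size=1.0, n_probes=8):
    """Purely local rule: probe a few nearby points and move to the best one."""
    angles = rng.uniform(0, 2 * np.pi, n_probes)
    probes = pos + step_size * np.c_[np.cos(angles), np.sin(angles)]
    scores = [local_signal(p) for p in probes]
    best = probes[int(np.argmax(scores))]
    return best if local_signal(best) > local_signal(pos) else pos

# Scatter 50 particles and iterate the local rule; they converge on the target.
particles = rng.uniform(0, 60, size=(50, 2))
for _ in range(200):
    particles = np.array([step(p) for p in particles])
mean_dist = np.linalg.norm(particles - TARGET, axis=1).mean()
print(f"mean distance to target after 200 steps: {mean_dist:.2f}")
```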

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This thesis project is part of a larger collaboration documenting the history of the ASU Biodesign Clinical Testing Laboratory (ABCTL). Many different aspects need to be considered when transforming into a clinical testing laboratory, including the different types of tests performed in the laboratory. In addition to the diagnostic polymerase chain reaction (PCR) test performed to detect the presence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), antibody testing is also performed in clinical laboratories. Antibody testing is used to detect a previous infection; antibodies are produced as part of the immune response against SARS-CoV-2. There are many different forms of antibody tests, and their sensitivities and specificities have been examined and reviewed in the literature. Antibody testing can be used to determine the seroprevalence of the disease, which can inform policy decisions regarding public health strategies. The results from antibody testing can also be used for creating new therapeutics such as vaccines. The ABCTL recognizes the shifting need of the community to begin testing for previous infections of SARS-CoV-2 and is developing new forms of antibody testing to meet that need.
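One common way reported sensitivities and specificities feed into seroprevalence estimates is the Rogan-Gladen correction. The sketch below is a generic illustration with invented numbers, not an ABCTL procedure.

```python
def adjusted_seroprevalence(positive, tested, sensitivity, specificity):
    """Rogan-Gladen correction: convert raw antibody-test positivity into an
    estimate of true seroprevalence, given the assay's sensitivity/specificity."""
    apparent = positive / tested
    estimate = (apparent + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(estimate, 0.0), 1.0)  # clamp to a valid proportion

# Hypothetical numbers: 480 positives out of 6000 samples on an assay with
# 90% sensitivity and 99% specificity.
print(round(adjusted_seroprevalence(480, 6000, 0.90, 0.99), 4))  # ~0.0787
```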

Contributors: Ruan, Ellen (Co-author) / Smetanick, Jennifer (Co-author) / Majhail, Kajol (Co-author) / Anderson, Laura (Co-author) / Breshears, Scott (Co-author) / Compton, Carolyn (Thesis director) / Magee, Mitch (Committee member) / School of Life Sciences (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In this project, I examined the relationship between the lockdowns implemented in response to COVID-19 and the activity of animals in urban areas. I hypothesized that animals became more active in urban areas during the COVID-19 quarantine than they were before, and I wanted to see whether my hypothesis could be researched through Twitter crowdsourcing. I began by collecting tweets using Python code, but upon examining the data output from code-based searches, I concluded that it is quicker and more efficient to use the advanced search on the Twitter website. Based on my research, I can neither confirm nor deny that the appearance of wild animals is due to the COVID-19 lockdowns. However, I was able to discover a correlational relationship between these two factors in some research cases. Although my findings are mixed with regard to my original hypothesis, the impact that this phenomenon had on society cannot be denied.
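The crowdsourcing analysis ultimately reduces to keyword-matching date-stamped tweets against a lockdown cutoff. The sketch below is a hypothetical illustration of that counting step, independent of how the tweets were collected; the animal terms, cutoff date, and sample tweets are invented.

```python
import re
from collections import Counter
from datetime import date

ANIMAL_TERMS = re.compile(r"\b(coyote|deer|bobcat|javelina|fox|bear)\b", re.I)
LOCKDOWN_START = date(2020, 3, 15)  # illustrative cutoff; varies by region

def sightings_by_period(tweets):
    """Count tweets mentioning urban wildlife before vs. during lockdown.

    tweets: iterable of (date, text) pairs already collected, e.g. exported
    from Twitter's advanced search.
    """
    counts = Counter()
    for day, text in tweets:
        if ANIMAL_TERMS.search(text):
            counts["during" if day >= LOCKDOWN_START else "before"] += 1
    return counts

sample = [
    (date(2020, 2, 10), "Traffic is terrible downtown again"),
    (date(2020, 4, 2), "A coyote just walked down my street!"),
    (date(2020, 4, 20), "Deer grazing in the empty parking lot"),
]
print(sightings_by_period(sample))  # Counter({'during': 2})
```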

Contributors: Heimlich, Kiana Raye (Author) / Dorn, Ronald (Thesis director) / Martin, Roberta (Committee member) / Donovan, Mary (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05