Matching Items (689)
Description

Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing various key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the resulting pose features extracted using the proposed methods exhibit excellent invariance properties to changes in view angles and body shapes. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these proposed algorithms, excellent human movement analysis results have been obtained, most of which are superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled. A measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
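The decision-level fusion result above rests on the product rule: each camera's classifier outputs a posterior over gesture classes, the per-camera posteriors are multiplied class by class, and the class with the largest product wins. The sketch below is a minimal illustration of that rule only; the function name, array shapes, and numbers are hypothetical and not taken from the dissertation.

```python
import numpy as np

def product_rule_fusion(camera_posteriors):
    """Fuse per-camera class posteriors with the product rule.

    camera_posteriors: array of shape (n_cameras, n_classes), each row one
    camera's posterior distribution over gesture classes.
    Returns the index of the fused winning class.
    """
    # Multiply posteriors across cameras (sum of logs for numerical stability),
    # then pick the class with the largest fused score.
    log_scores = np.log(np.clip(camera_posteriors, 1e-12, 1.0)).sum(axis=0)
    return int(np.argmax(log_scores))

# Example: three uncalibrated cameras, four gesture classes (made-up numbers).
posteriors = np.array([
    [0.10, 0.60, 0.20, 0.10],
    [0.25, 0.45, 0.20, 0.10],
    [0.15, 0.55, 0.15, 0.15],
])
print(product_rule_fusion(posteriors))  # -> 1
```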
ContributorsPeng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description

With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis assays some applications of compressive sensing and sparse representation with regard to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of the image classification system through unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art results in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches. Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
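The super-resolution and classification work above depends on recovering sparse codes of an image patch over a raw-sampled or trained dictionary. As a rough illustration of that recovery step (not the thesis's actual solver or dictionaries), here is a minimal orthogonal matching pursuit sketch in NumPy:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate y as a sparse
    combination of dictionary atoms (columns of D)."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Re-fit the signal on the selected atoms by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

# Toy example: a random unit-norm dictionary and a signal built from two atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 5] - 1.5 * D[:, 40]
x = omp(D, y, n_nonzero=2)
print(np.nonzero(x)[0])   # atoms 5 and 40 recovered
```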
ContributorsKulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description

Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease-significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance as compared to two well-known gene prioritization algorithms. Essentially, no bias in the performance was seen as it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.

Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.

To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
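The core idea of association-based prioritization, ranking candidates by how strongly the network connects them to the known disease genes, can be sketched with a random walk with restart. This is a common baseline for such methods, not the integrated multi-source scheme developed in this work, and the names and numbers below are hypothetical.

```python
import numpy as np

def prioritize_genes(adjacency, seed_indices, restart=0.3, n_iter=100):
    """Rank candidate genes by proximity to known disease genes using a
    random walk with restart on a gene-association network.

    adjacency: (n_genes, n_genes) non-negative association weights.
    seed_indices: indices of genes already known to be disease-related.
    Returns a score per gene; higher means more strongly associated.
    """
    n = adjacency.shape[0]
    # Column-normalize so each column is a transition distribution.
    col_sums = adjacency.sum(axis=0)
    W = adjacency / np.where(col_sums == 0, 1.0, col_sums)
    p0 = np.zeros(n)
    p0[seed_indices] = 1.0 / len(seed_indices)   # restart mass on known genes
    p = p0.copy()
    for _ in range(n_iter):
        p = (1 - restart) * (W @ p) + restart * p0
    return p

# Toy 5-gene network with genes 0 and 1 known to be disease-related.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = prioritize_genes(A, seed_indices=[0, 1])
print(np.argsort(-scores))   # gene 2 ranks just below the seeds
```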
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011
Description

Human-environment interactions in aeolian (windblown) systems have focused research on humans' role in causing, and aiding recovery from, natural and anthropogenic disturbance. There is room for improvement in understanding the best methods and considerations for manual coastal foredune restoration. Furthermore, the extent to which humans play a role in changing the shape and surface textures of quartz sand grains is poorly understood. The goal of this thesis is two-fold: 1) quantify the geomorphic effectiveness of a multi-year manually rebuilt foredune and 2) compare the shapes and microtextures of disturbed and undisturbed quartz sand grains. For the rebuilt foredune, uncrewed aerial systems (UAS) were used to survey the site, collecting photos to create digital surface models (DSMs). These DSMs were compared at discrete moments in time to create a sediment budget. Water levels and cross-shore modeling are also considered to predict the decadal evolution of the site. In the two years since rebuilding, the foredune has been stable, but not geomorphically resilient. Modeling shows landward foredune retreat and beach widening. For the quartz grains, t-testing of shape characteristics showed that there may be differences in the mean circularity between grains from off-highway vehicle and non-riding areas. Quartz grains from a variety of coastal and inland dunes were imaged using scanning electron microscopy to search for evidence of anthropogenically induced microtextures. On grains from Oceano Dunes in California, encouraging textures like parallel striations, grain fracturing, and linear conchoidal fractures provide exploratory evidence of anthropogenic microtextures. More focused research is recommended to confirm this exploratory work.
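The sediment budget described above comes from differencing successive digital surface models: subtract the earlier DSM from the later one, discard changes below the detection limit, and convert the remaining positive and negative cells into deposition and erosion volumes. The sketch below illustrates that arithmetic only; the threshold, grids, and function names are placeholders, not the thesis's photogrammetric workflow.

```python
import numpy as np

def sediment_budget(dsm_early, dsm_late, cell_size, threshold=0.05):
    """Compute a simple sediment budget from two co-registered DSMs.

    dsm_early, dsm_late: elevation grids (metres) from successive UAS surveys.
    cell_size: grid resolution in metres.
    threshold: minimum elevation change treated as real (detection limit).
    Returns (erosion_volume, deposition_volume, net_volume) in cubic metres.
    """
    diff = dsm_late - dsm_early
    diff[np.abs(diff) < threshold] = 0.0      # ignore change below detection
    cell_area = cell_size ** 2
    deposition = diff[diff > 0].sum() * cell_area
    erosion = -diff[diff < 0].sum() * cell_area
    return erosion, deposition, deposition - erosion

# Toy 3x3 grids at 1 m resolution.
early = np.zeros((3, 3))
late = np.array([[0.2, 0.0, -0.1],
                 [0.3, 0.1, 0.0],
                 [0.0, -0.2, 0.4]])
print(sediment_budget(early, late, cell_size=1.0))
```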

ContributorsMarvin, Michael Colin (Author) / Walker, Ian (Thesis director) / Dorn, Ron (Committee member) / Schmeeckle, Mark (Committee member) / School of Geographical Sciences and Urban Planning (Contributor, Contributor, Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Motor learning is the process of improving task execution according to some measure of performance. It can be divided into skill learning, a model-free process, and adaptation, a model-based process. Prior studies have indicated that adaptation results from two complementary learning systems with parallel organization. This report attempted to answer the question of whether a similar interaction leads to savings, a model-free process described as faster relearning when experiencing something familiar. This was tested in a two-week reaching task conducted with a robotic arm capable of perturbing movements. The task was designed so that the two sessions differed in their history of errors. Savings was quantified at various points by measuring the change in the learning rate. The results showed that the history of errors successfully modulated savings, supporting the notion that the two complementary systems interact to develop savings. Additionally, this report was part of a larger study that will explore the organizational structure of the complementary systems as well as the neural basis of this motor learning.
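The "two complementary learning systems" referenced above are commonly formalized as a two-state model: a fast process that learns and forgets quickly, and a slow process that learns slowly but retains. The sketch below simulates that standard model to show how retained slow-state learning can yield faster relearning (savings); the parameter values and trial schedule are illustrative and are not the ones used in this study.

```python
import numpy as np

def two_state_adaptation(perturbation, Af=0.59, Bf=0.2, As=0.99, Bs=0.02):
    """Simulate the classic two-state (fast/slow) model of motor adaptation.

    perturbation: trial-by-trial perturbation schedule (e.g. force field on/off).
    The fast process learns quickly but decays quickly (Bf > Bs, Af < As);
    the slow process learns slowly but retains, one proposed source of savings.
    Returns the net adaptation on each trial.
    """
    xf = xs = 0.0
    net = []
    for p in perturbation:
        error = p - (xf + xs)          # uncompensated perturbation on this trial
        xf = Af * xf + Bf * error      # fast process: large learning, fast decay
        xs = As * xs + Bs * error      # slow process: small learning, good retention
        net.append(xf + xs)
    return np.array(net)

# Learn, wash out, then relearn: the slow state is still partly elevated at
# re-exposure, so the early relearning trials adapt faster (savings).
schedule = [1.0] * 100 + [0.0] * 30 + [1.0] * 100
x = two_state_adaptation(schedule)
print(x[:5].mean(), x[130:135].mean())   # relearning block adapts faster
```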

ContributorsRuta, Michael (Author) / Santello, Marco (Thesis director) / Blais, Chris (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / School of Human Evolution & Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Edge computing is a new and growing market in which Company X has an opportunity to expand its presence. Within this paper, we compare many external research studies to better quantify the Total Addressable Market of the Edge Computing space. Furthermore, we highlight which segments within Edge Computing have the most opportunities for growth and identify a specific market strategy that Company X could pursue to capture market share within the most promising segment.

ContributorsHamkins, Sean (Co-author) / Raimondi, Ronnie (Co-author) / Gandolfi, Micheal (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor, Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one's palm. However, if an application does not require extensive computation power or accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers across. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
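As a toy stand-in for the local-information-only behavior described above (not the distributed algorithm developed in this work), the sketch below moves simulated particles by random walk until the target falls within a small sensing radius, after which they head straight for it; every parameter is made up.

```python
import numpy as np

def simulate_convergence(n_particles=50, target=(0.8, 0.8), sense_radius=0.15,
                         step=0.01, n_steps=2000, seed=0):
    """Toy simulation of micro-robots converging on a target using local sensing.

    Each particle takes a random step unless the target is within its sensing
    radius, in which case it moves directly toward the target. Returns the
    mean distance to the target after n_steps.
    """
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, 2))            # scattered in the unit square
    tgt = np.array(target)
    for _ in range(n_steps):
        to_target = tgt - pos
        dist = np.linalg.norm(to_target, axis=1, keepdims=True)
        near = dist < sense_radius
        # Random walk when the target is not sensed, directed motion when it is.
        rand_dir = rng.standard_normal(pos.shape)
        rand_dir /= np.linalg.norm(rand_dir, axis=1, keepdims=True)
        direction = np.where(near, to_target / np.maximum(dist, 1e-9), rand_dir)
        pos = np.clip(pos + step * direction, 0.0, 1.0)
    return np.mean(np.linalg.norm(pos - tgt, axis=1))

print(simulate_convergence())   # mean distance to the target after the run
```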

ContributorsMartin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

This thesis project is part of a larger collaboration documenting the history of the ASU Biodesign Clinical Testing Laboratory (ABCTL). Many different aspects need to be considered when transforming into a clinical testing laboratory, including the different types of tests performed in the laboratory. In addition to the diagnostic polymerase chain reaction (PCR) test performed to detect the presence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), antibody testing is also performed in clinical laboratories. Antibody testing is used to detect a previous infection. Antibodies are produced as part of the immune response against SARS-CoV-2. There are many different forms of antibody tests, and their sensitivities and specificities have been examined and reviewed in the literature. Antibody testing can be used to determine the seroprevalence of the disease, which can inform policy decisions regarding public health strategies. The results from antibody testing can also be used for creating new therapeutics like vaccines. The ABCTL recognizes the shifting need of the community to begin testing for previous infections of SARS-CoV-2 and is developing new forms of antibody testing that can meet that need.
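One common way antibody results feed into seroprevalence estimates is the Rogan-Gladen correction, which adjusts the raw positive rate for the assay's sensitivity and specificity. The numbers below are illustrative only and are not ABCTL data.

```python
def true_seroprevalence(apparent_positive_rate, sensitivity, specificity):
    """Rogan-Gladen correction: adjust a raw antibody-positive rate for the
    test's sensitivity and specificity to estimate true seroprevalence."""
    estimate = (apparent_positive_rate + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(estimate, 0.0), 1.0)   # clamp to a valid proportion

# Illustrative numbers only: 8% of samples test positive on an assay with
# 95% sensitivity and 99% specificity.
print(true_seroprevalence(0.08, sensitivity=0.95, specificity=0.99))  # ~0.074
```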

ContributorsRuan, Ellen (Co-author) / Smetanick, Jennifer (Co-author) / Majhail, Kajol (Co-author) / Anderson, Laura (Co-author) / Breshears, Scott (Co-author) / Compton, Carolyn (Thesis director) / Magee, Mitch (Committee member) / School of Life Sciences (Contributor, Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

In this project, I examined the relationship between the lockdowns implemented in response to COVID-19 and the activity of animals in urban areas. I hypothesized that animals became more active in urban areas during COVID-19 quarantine than they were before, and I wanted to see if my hypothesis could be researched through Twitter crowdsourcing. I began by collecting tweets using Python code, but upon examining all data output from the code-based searches, I concluded that it is quicker and more efficient to use the advanced search on the Twitter website. Based on my research, I can neither confirm nor deny that the appearance of wild animals is due to the COVID-19 lockdowns. However, I was able to discover a correlational relationship between these two factors in some research cases. Although my findings are mixed with regard to my original hypothesis, the impact that this phenomenon had on society cannot be denied.
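For reference, one minimal way to collect tweets with Python is to query the Twitter API v2 recent-search endpoint, as sketched below. The bearer token and query string are placeholders, this is not the code used in the project, and the endpoint only covers roughly the last seven days, which is one practical limitation of code-based searches compared with the website's advanced search.

```python
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"   # hypothetical credential

def search_recent_tweets(query, max_results=10):
    """Fetch recent tweets matching a query via the Twitter API v2."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": query, "max_results": max_results},
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example query (hypothetical): recent non-retweet mentions of urban wildlife.
for tweet in search_recent_tweets('"coyote downtown" -is:retweet'):
    print(tweet["id"], tweet["text"][:80])
```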

ContributorsHeimlich, Kiana Raye (Author) / Dorn, Ronald (Thesis director) / Martin, Roberta (Committee member) / Donovan, Mary (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

As part of Arizona State University’s net-zero carbon initiative, 1000 mesquite trees were planted on a vacant plot of land at West Campus to sequester carbon from the atmosphere. Urban forestry is typically a method of carbon capture in temperate areas, but it is hypothesized that the same principle can be employed in arid regions as well. To test this hypothesis, a carbon model was constructed using the pools and fluxes measured at the Carbon Sink and Learning Forest at West Campus. As an ideal, another carbon model was constructed for the mature mesquite forest at the Hassayampa River Preserve to project how the carbon cycle at West Campus could change over time as the forest matures. The results indicate that the West Campus plot currently functions as a carbon source, while the site at the Hassayampa River Preserve currently functions as a carbon sink. Soil composition at the two sites differs, with inorganic carbon contributing the largest percentage at West Campus and organic carbon at Hassayampa. Predictive models based on biomass accumulation estimates and on photosynthesis rates for the Carbon Sink Forest at West Campus both predict approximately 290 metric tons of carbon sequestration after 30 years. Modeling net ecosystem exchange predicts that the West Campus plot will begin to act as a carbon sink after 33 years.
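The source-versus-sink distinction above comes down to the sign of net ecosystem exchange: when yearly respiration exceeds gross primary production the plot loses carbon, and once photosynthesis overtakes respiration the stock grows. The sketch below runs that bookkeeping with made-up fluxes; the values are placeholders, not the measured pools and fluxes from West Campus or Hassayampa.

```python
def project_carbon_stock(initial_stock, gpp_by_year, respiration, years=33):
    """Project ecosystem carbon stock (metric tons C) from yearly net
    ecosystem exchange, NEE = respiration - GPP; negative NEE is uptake."""
    stock = initial_stock
    for year in range(1, years + 1):
        nee = respiration - gpp_by_year(year)
        stock -= nee                      # uptake adds to the stock, release removes it
    return stock

# Illustrative placeholders only (not the measured West Campus fluxes):
# photosynthesis increases as the mesquite trees mature while respiration stays
# flat, so the plot flips from source to sink partway through the projection.
final_stock = project_carbon_stock(
    initial_stock=50.0,
    gpp_by_year=lambda year: 2.0 + 0.3 * year,   # tons C / yr, grows with biomass
    respiration=6.0,                              # tons C / yr
)
print(round(final_stock, 1))
```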

ContributorsLiddle, David Mohacsy (Author) / Ball, Becky (Thesis director) / Nishimura, Joel (Committee member) / School of Life Sciences (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05