
Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
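The decision-level product-rule fusion described above can be illustrated with a minimal sketch: each camera's recognizer emits a posterior distribution over gesture classes, and the fused decision multiplies them element-wise. The numbers and function name below are illustrative, not taken from the dissertation.

```python
import numpy as np

def product_rule_fusion(posteriors):
    """Fuse per-camera class posteriors with the product rule and
    return (winning class index, fused distribution)."""
    fused = np.prod(np.vstack(posteriors), axis=0)
    fused /= fused.sum()  # renormalize into a distribution
    return int(np.argmax(fused)), fused

# Two uncalibrated cameras disagree on their top class, but the
# product rule favors the class both consider plausible.
cam1 = np.array([0.50, 0.45, 0.05])
cam2 = np.array([0.10, 0.60, 0.30])
winner, fused = product_rule_fusion([cam1, cam2])  # winner is class 1
```

Because the product rule multiplies independent evidence, a class that any single camera rates near zero is effectively vetoed, which is one intuition for why it suits complementary camera views.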
ContributorsPeng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis assays some applications of compressive sensing and sparse representation with regard to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of the image classification system through unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art results in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches. Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
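The low-rank idea in the final denoising part can be pictured as follows. This is a simplified stand-in, assuming a stack of similar patches arranged as matrix columns and using truncated SVD as the low-rank step rather than the thesis's matrix-completion formulation:

```python
import numpy as np

def lowrank_denoise(patch_matrix, rank):
    """Suppress noise in a matrix of correlated image patches (one
    patch per column) by keeping only its leading singular values."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s[rank:] = 0.0  # discard noise-dominated components
    return (U * s) @ Vt

rng = np.random.default_rng(0)
# A rank-1 "stack" of 8 similar 16-pixel patches, plus noise.
clean = np.outer(rng.standard_normal(16), rng.standard_normal(8))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = lowrank_denoise(noisy, rank=1)
```

The denoised matrix ends up closer to the clean one than the noisy input is, because correlated patches concentrate signal energy in a few singular directions while noise spreads across all of them.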
ContributorsKulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. 
We developed computational methods for prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
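As one concrete way to picture association-based prioritization on an integrated network, here is a random-walk-with-restart sketch. The propagation scheme and the toy network are ours for illustration; the thesis's actual model of source relevance and reliability is richer:

```python
import numpy as np

def prioritize(adjacency, seed_idx, restart=0.3, iters=200):
    """Rank candidate genes by a random walk with restart from known
    disease genes on an association network (a common
    network-propagation scheme; details of the thesis's model differ)."""
    W = np.asarray(adjacency, dtype=float)
    W = W / W.sum(axis=0, keepdims=True)  # column-normalize edge weights
    p0 = np.zeros(W.shape[0])
    p0[list(seed_idx)] = 1.0 / len(seed_idx)  # restart at the known genes
    p = p0.copy()
    for _ in range(iters):
        p = (1 - restart) * (W @ p) + restart * p0
    return p

# Toy chain network: gene 0 is the known disease gene; gene 1 is
# directly linked to it, gene 3 only distantly.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = prioritize(A, seed_idx=[0])  # gene 1 outranks gene 3
```

Candidates closer to the seed set in the network accumulate more of the walker's stationary probability, which is exactly the "association with the known genes" the abstract describes.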
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011
Description

Consider Steven Cryos’ words, “When disaster strikes, the time to prepare has passed.” Witnessing domestic water insecurity in events such as Hurricane Katrina, the instability in Flint, Michigan, and most recently the winter storms affecting millions across Texas, we decided to take action. The period between a water supply’s disruption and restoration is filled with anxiety, uncertainty, and distress -- particularly since there is no clear indication of when, exactly, restoration comes. It is for this reason that Water Works now exists. As a team of students from diverse backgrounds, what started as an honors project with the Founders Lab at Arizona State University became the seed that will continue to mature into an economically sustainable business model supporting the optimistic visions and tenets of humanitarianism. By having conversations with community members, conducting market research, competing for funding, and fostering progress amid the COVID-19 pandemic, our team’s problem-solving traverses the disciplines. The purpose of this paper is to educate our readers about a unique solution to emerging issues of water insecurity nested across and within systems that could benefit from the introduction of a personal water reclamation system, showcase our team’s entrepreneurial journey, and propose future directions that will allow this once-pedagogical exercise to continue fulfilling its mission: to heal, to hydrate, and to help bring safe water to everyone.

ContributorsReitzel, Gage Alexander (Co-author) / Filipek, Marina (Co-author) / Sadiasa, Aira (Co-author) / Byrne, Jared (Thesis director) / Sebold, Brent (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / School of Human Evolution & Social Change (Contributor) / Historical, Philosophical & Religious Studies, Sch (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Reducing the amount of error and introduced data variability increases the accuracy of Western blot results. In this study, different methods of normalization for loading differences and data alignment were explored with respect to their impact on Western blot results. GAPDH was compared to the LI-COR Revert total protein stain as a loading control. The impact of normalizing data to a control condition, which is commonly done to align Western blot data distributed over several immunoblots, was also investigated. Specifically, this study addressed whether normalization to a small subset of distinct controls on each immunoblot increases pooled data variability compared to a larger set of controls. Protein expression data for NOX-2 and SOD-2 from a study investigating the protective role of the bradykinin type 1 receptor in angiotensin-II-induced left ventricle remodeling were used to address these questions but are also discussed in the context of the original study. GAPDH and Revert total protein stain were compared as loading controls by assessing their correlation and comparing how they affected protein expression results. Additionally, the impact of treatment on GAPDH was investigated. To assess how normalization to different combinations of controls influences data variability, protein data were normalized to the average of 5 controls, the average of 2 controls, or an average vehicle, and the results were compared by treatment. The results of this study demonstrated that GAPDH expression is not affected by angiotensin-II or the bradykinin type 1 receptor antagonist R-954 and is a less sensitive loading control than Revert total protein stain. Normalization to the average of 5 controls tended to reduce pooled data variability compared to 2 controls. Lastly, the results of this study provided preliminary evidence that R-954 does not alter the expression of NOX-2 or SOD-2 to an expression profile that would be expected to explain the protection it confers against Ang-II-induced left ventricle remodeling.
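The normalization under study reduces to simple per-lane arithmetic: divide each target-band intensity by its loading-control signal, then express every lane as fold change over the mean of the chosen control lanes. The sketch below uses illustrative numbers, not the study's data:

```python
def normalize_blot(target_signals, loading_signals, control_wells):
    """Normalize target-band intensities to a loading control, then
    express each lane as fold change over the average of the chosen
    control lanes."""
    # Step 1: correct each lane for how much protein was loaded.
    loaded = [t / l for t, l in zip(target_signals, loading_signals)]
    # Step 2: align to the mean of the control lanes.
    control_mean = sum(loaded[i] for i in control_wells) / len(control_wells)
    return [x / control_mean for x in loaded]

# Illustrative lanes: the first five are vehicle controls, the last
# is a treated lane with doubled target expression.
target = [10.0, 12.0, 11.0, 9.0, 10.5, 20.0]
loading = [1.0, 1.2, 1.1, 0.9, 1.05, 1.0]
fold_5ctrl = normalize_blot(target, loading, control_wells=range(5))
```

Averaging over more control lanes (five rather than two) makes the denominator less sensitive to any single noisy control, which is the variability-reduction effect the study reports.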

ContributorsSiegel, Matthew Marat (Author) / Mills, Jeremy (Thesis director) / Sweazea, Karen (Committee member) / Hale, Taben (Committee member) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

In this creative thesis project I use digital “scrolleytelling” (an interactive, scroll-based form of storytelling) to investigate diversity and inclusion at big tech companies. I wanted to know why diversity numbers were flatlining at Facebook, Apple, Amazon, Microsoft, and Google, and took a data journalism approach to explore the relationship between what corporations were saying versus what they were doing. Finally, I critiqued diversity and inclusion efforts by giving examples of how the current way we are addressing D&I is not fixing the problem.

ContributorsBrust, Jiaying Eliza (Author) / Coleman, Grisha (Thesis director) / Tinapple, David (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

The Electoral College, the current electoral system in the U.S., operates on a Winner-Take-All or First Past the Post (FPTP) principle, where the candidate with the most votes wins. The current system, however, is problematic. According to Lani Guinier in Tyranny of the Majority, “the winner-take-all principle invariably wastes some votes” (121). This means that the majority group gets all of the power in an election while the votes of the minority groups are completely wasted and hold little to no significance. Additionally, FPTP systems reinforce a two-party system in which neither candidate may satisfy the majority of the electorate’s needs and issues, yet voters are forced to choose between the two dominant parties. Moreover, voting for a third-party candidate only hurts the voter, since it takes votes away from the party they might otherwise support and gives the victory to the party they prefer the least, ensuring that the two-party system is inescapable. Therefore, a winner-take-all system does not provide the electorate with fair or proportional representation and creates voter disenfranchisement: it offers them very few choices that appeal to their needs and forces them to choose a candidate they dislike. There are, however, alternative voting systems that remedy these issues, such as a ranked voting system, in which voters rank their candidate choices in order of preference, or a proportional voting system, in which a political party acquires a number of seats based on the proportion of votes it receives from the voter base. Given these alternatives, we will implement a software simulation of one of these systems to demonstrate how it works in contrast to FPTP systems, and thereby provide evidence of how these alternative systems could work in practice, in place of the current electoral system.
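A ranked voting system of the kind proposed can be simulated in a few lines. The sketch below implements instant-runoff counting (one common ranked method) on hypothetical ballots; it illustrates the approach and is not the project's actual software:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff tally: repeatedly eliminate the candidate with
    the fewest first-choice votes until someone holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top
        remaining.discard(min(tally, key=tally.get))

# Five hypothetical ballots, ranked first choice to last.
ballots = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"), ("C", "B")]
winner = instant_runoff(ballots)  # C is eliminated; its vote transfers to B
```

Unlike FPTP, the eliminated candidate's supporter still influences the outcome through their second choice, which directly addresses the wasted-vote problem the abstract describes.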

ContributorsSummers, Jack Gillespie (Co-author) / Martin, Autumn (Co-author) / Burger, Kevin (Thesis director) / Voorhees, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Partisan politics has created an increasingly polarized political climate in the United States. Despite the divisive political climate, women’s representation in politics has also increased drastically over the years. I began this project to see if there is a partisan rivalry between women in politics or a sense of shared “womanhood.” This thesis explores the role political parties play for women in office by examining how they vote on bills, what type of bills they propose, and whether they work collaboratively with their female counterparts at the Arizona State Legislature. My main goals for this project are to see how strong or weak political parties are in shaping political behavior at the Arizona State Legislature and to determine if there is a sense of “womanhood” despite different political affiliations. I also explore the role party affiliation plays among women legislators at the Arizona State Legislature.

ContributorsSanson, Claudia Maria (Author) / Lennon, Tara (Thesis director) / Woodall, Gina (Committee member) / School of Public Affairs (Contributor) / Department of English (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

This 15-week course is designed to introduce students, specifically in Arizona, to basic sustainability and conservation principles in the context of local reptile wildlife. Throughout the course, the students work on identifying the problem, creating visions for the desired future, and finally developing a strategy to help with reptile species survival in the Valley. Research shows that animals in the classroom have led to improved academic success for students. Thus, through creating this course I was able to combine conservation and sustainability curriculum with real-life animals whose survival is directly being affected in the Valley. My hope is that this course will help students identify a newfound passion and call to action to protect native wildlife. The more awareness and actionable knowledge that can be brought to students in Arizona about challenges to species survival, the more likely we are to see a change in the future and a stronger sense of urgency for protecting wildlife. To accomplish these goals, the curriculum was developed to begin with basic concepts of species needs, such as food and shelter, and basic principles of sustainability. As the course progresses, the students analyze current challenges reptile wildlife faces, like urban sprawl, and explore options to address these challenges. The course concludes with a pilot pitch where students present their solution projects to the school.

ContributorsGoethe, Emma Rae (Author) / Brundiers, Katja (Thesis director) / Bouges, Olivia (Committee member) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

System and software verification is a vital component in the development and reliability of cyber-physical systems - especially in critical domains where the margin of error is minimal. In the case of autonomous driving systems (ADS), the vision perception subsystem is a necessity to ensure correct maneuvering of the environment and identification of objects. The challenge posed in perception systems involves verifying the accuracy and rigidity of detections. The use of Spatio-Temporal Perception Logic (STPL) enables the user to express requirements for the perception system to verify, validate, and ensure its behavior; however, a drawback of STPL is its accessibility. It is limited to individuals with expert-level knowledge of temporal and spatial logics, and the formally written requirements become quite verbose as more restrictions are imposed. In this thesis, I propose a domain-specific language (DSL) catered to Spatio-Temporal Perception Logic that enables non-expert users to capture requirements for perception subsystems while reducing the need for an experienced background in the underlying logic. The domain-specific language for the Spatio-Temporal Perception Logic is built upon the formal language with two abstractions. The main abstraction captures simple programming statements that are translated to a lower-level STPL expression accepted by the testing monitor. The STPL DSL provides a seamless interface for writing formal expressions while maintaining the power and expressiveness of STPL. These translated, equivalent expressions are capable of directing a standard for perception systems to ensure safety and reduce the risks involved in ill-formed detections.
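To make the two-level translation concrete, here is a toy flavor of what a DSL-to-formal-logic translation layer can look like. The surface syntax, operator names, and output format below are entirely hypothetical and are not the actual STPL DSL grammar:

```python
import re

# Hypothetical surface syntax:  always confidence(<object>) >= <threshold>
PATTERN = re.compile(
    r"(always|eventually)\s+confidence\((\w+)\)\s*(>=|<=)\s*([\d.]+)")

def translate(stmt):
    """Translate one toy DSL statement into a prefix-style formal
    expression of the kind a testing monitor might accept."""
    op, obj, cmp_, thresh = PATTERN.fullmatch(stmt.strip()).groups()
    temporal = {"always": "G", "eventually": "F"}[op]
    return f"{temporal} ({cmp_} (conf {obj}) {thresh})"

formal = translate("always confidence(car) >= 0.9")  # "G (>= (conf car) 0.9)"
```

The point of the abstraction is the same as in the thesis: the user writes the readable surface form, while the monitor still receives a well-formed formal expression equivalent in meaning.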

ContributorsAnderson, Jacob (Author) / Fainekos, Georgios (Thesis director) / Yang, Yezhou (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05