Matching Items (454)
Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed.
Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
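The decision-level product-rule fusion found optimal in this abstract is simple to state. The sketch below is an illustrative reconstruction, not the dissertation's implementation; the camera posteriors are invented for the example. Each camera outputs a posterior over gesture classes, the fused score is their product, and the decision is the argmax:

```python
import numpy as np

def product_rule_fusion(posteriors):
    """Decision-level fusion of per-camera class posteriors by the product rule.

    posteriors: (n_cameras, n_classes) array; each row is one camera's
    posterior distribution over gesture classes. Returns the index of
    the class with the highest fused score.
    """
    # Multiply posteriors across cameras; summing logs gives the same
    # ranking but stays numerically stable for many cameras.
    log_scores = np.log(np.asarray(posteriors) + 1e-12).sum(axis=0)
    return int(np.argmax(log_scores))

# Hypothetical posteriors: camera 0 weakly favors class 1, camera 1
# strongly favors class 0; the confident camera dominates the product.
p = [[0.3, 0.4, 0.3],
     [0.8, 0.1, 0.1]]
fused = product_rule_fusion(p)
```

One property worth noting: the product rule penalizes a class heavily as soon as any single camera assigns it a near-zero posterior, which is one intuition for why it rewards complementary views.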
ContributorsPeng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis examines some applications of compressive sensing and sparse representation with regard to image enhancement, restoration and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of the image classification system through unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to an analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art methods in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches.
Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
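The low-rank structure exploited in the final part can be illustrated with a simplified stand-in: the thesis pairs sparse patch matching with matrix completion, whereas the sketch below just stacks similar patches and truncates an SVD, on synthetic data. It shows why a low-rank constraint suppresses noise when patches share structure:

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank):
    """Denoise a stack of similar patches via a low-rank approximation.

    patch_matrix: (n_patches, patch_dim) matrix whose rows are
    vectorized, mutually similar noisy patches. Keeping only the top
    `rank` singular components retains the shared structure while
    discarding most of the (full-rank) noise energy.
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

# Synthetic rank-1 ground truth: every patch is a scaled copy of one
# template, corrupted by Gaussian noise.
rng = np.random.default_rng(0)
template = rng.standard_normal(64)
clean = np.outer(rng.standard_normal(20), template)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = low_rank_denoise(noisy, rank=1)
```

Here the rank-1 projection recovers the shared template from the patch stack; matrix completion plays an analogous role when some patch entries are treated as missing rather than merely noisy.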
ContributorsKulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description
The purpose of this project was to commission, perform, and discuss a new work for an instrument pairing not often utilized, oboe and percussion. The composer, Alyssa Morris, was selected in June 2009. Her work, titled Forecast, was completed in October of 2009 and premiered in February of 2010, as part of a program showcasing music for oboe and percussion. Included in this document is a detailed biography of the composer, a description of the four movements of Forecast, performance notes for each movement, a diagram for stage set-up, the full score, the program from the premiere performance with biographies of all the performers involved, and both a live recording and MIDI sound file. The performance notes discuss issues that arose during preparation for the premiere and should help avoid potential pitfalls. TrevCo Music, publisher of the work, graciously allowed inclusion of the full score. This score is solely for use in this document; please visit the publisher's website for purchasing information. The commission and documentation of this composition are intended to add to the repertoire for oboe in an unusual instrument pairing and to encourage further exploration of such combinations.
ContributorsCreamer, Caryn (Author) / Schuring, Martin (Thesis advisor) / Hill, Gary (Committee member) / Holbrook, Amy (Committee member) / Micklich, Albie (Committee member) / Spring, Robert (Committee member) / Arizona State University (Publisher)
Created2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease-significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen as it was applied to diseases of diverse etiology, e.g., monogenic, polygenic and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. 
Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results remain to be validated empirically, but computational validation using known targets is very positive.
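The abstract does not specify its network propagation scheme, so as an illustration of association-based prioritization in general, the sketch below uses random walk with restart from the known disease genes on a toy network. It is a generic stand-in, not the dissertation's integrated-network model, and the adjacency matrix and seed genes are invented:

```python
import numpy as np

def rwr_scores(adjacency, seeds, restart=0.3, iters=100):
    """Rank genes by random walk with restart from known disease genes.

    adjacency: (n, n) symmetric gene-association network with no
    isolated nodes; seeds: indices of known disease genes (the restart
    distribution). Returns a score per gene; higher means stronger
    association with the seed set.
    """
    A = np.asarray(adjacency, dtype=float)
    W = A / A.sum(axis=0, keepdims=True)    # column-stochastic transition matrix
    p0 = np.zeros(A.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)      # restart vector over seed genes
    p = p0.copy()
    for _ in range(iters):
        # With probability `restart`, jump back to a seed gene;
        # otherwise take one step on the network.
        p = (1 - restart) * (W @ p) + restart * p0
    return p

# Toy network: gene 1 neighbors both seed genes (0 and 2); gene 3 is
# reachable only through gene 1, so it should rank lower.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
scores = rwr_scores(A, seeds=[0, 2])
```

Candidate genes are then ranked by `scores`; in this toy case the gene adjacent to both seeds outranks the remote one, which is the basic behavior any association-based prioritizer should exhibit.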
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011
Description
During the twentieth century, the dual influence of nationalism and modernism in the eclectic music of Latin America promoted an idiosyncratic style which naturally combined traditional themes, popular genres and secular music. The saxophone, commonly used as a popular instrument, started to develop a prominent role in Latin American classical music beginning in 1970. The lack of exposure and distribution of the Latin American repertoire has created a general perception that composers are not interested in the instrument, and that Latin American repertoire for classical saxophone is minimal. However, there are more than 1100 works originally written for saxophone in the region, and the number continues to grow. This document, Modern Latin American Repertoire for Classical Saxophone: Recording Project and Performance Guide, establishes and exhibits seven works by seven representative Latin American composers. The recording includes works by Carlos Gonzalo Guzman (Colombia), Ricardo Tacuchian (Brazil), Roque Cordero (Panama), Luis Naón (Argentina), Andrés Alén-Rodriguez (Cuba), Alejandro César Morales (Mexico) and Jose-Luis Maúrtua (Peru), featuring a range of settings from solo alto saxophone to alto saxophone with piano, alto saxophone with vibraphone, and tenor saxophone with electronic tape, thus forming an important selection of Latin American repertoire. Complete recorded performances of all seven pieces are supplemented by biographical and historical notes and performance practice suggestions. The result is a written and audio guide to some of the most important pieces composed for classical saxophone in Latin America, with an emphasis on fostering interest in, and research into, composers who have contributed to the development of the instrument in Latin America.
ContributorsOcampo Cardona, Javier Andrés (Author) / McAllister, Timothy (Thesis advisor) / Spring, Robert (Committee member) / Hill, Gary (Committee member) / Pilafian, Sam (Committee member) / Rogers, Rodney (Committee member) / Gardner, Joshua (Committee member) / Arizona State University (Publisher)
Created2011
Description
Bridging the semantic gap is one of the fundamental problems in multimedia computing and pattern recognition. The challenge of associating low-level signals with their high-level semantic interpretations is mainly due to the fact that semantics are often conveyed implicitly in a context, relying on interactions among multiple levels of concepts or low-level data entities. Also, additional domain knowledge may often be indispensable for uncovering the underlying semantics, but in most cases such domain knowledge is not readily available from the acquired media streams. Thus, making use of various types of contextual information and leveraging corresponding domain knowledge are vital for effectively associating high-level semantics with low-level signals in multimedia computing problems. In this work, novel computational methods are explored and developed for incorporating contextual information and domain knowledge, in different forms, into multimedia computing and pattern recognition problems. Specifically, a novel Bayesian approach with statistical-sampling-based inference is proposed for incorporating a special type of domain knowledge, a spatial prior on the underlying shapes; cross-modality correlations are explored via kernel canonical correlation analysis, and the learnt space is then used for associating multimedia contents in different forms; and contextual information is modeled as a graph for regulating interactions between high-level semantic concepts (e.g., category labels) and the low-level input signal (e.g., spatial/temporal structure). Four real-world applications, including visual-to-tactile face conversion, photo tag recommendation, wild web video classification and unconstrained consumer video summarization, are selected to demonstrate the effectiveness of the approaches. These applications range from classic research challenges to emerging tasks in multimedia computing.
Results from experiments on large-scale real-world data with comparisons to other state-of-the-art methods and subjective evaluations with end users confirmed that the developed approaches exhibit salient advantages, suggesting that they are promising for leveraging contextual information/domain knowledge for a wide range of multimedia computing and pattern recognition problems.
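The graph-based context modeling mentioned above can be illustrated with a minimal label-propagation sketch. This is a generic stand-in under assumed inputs, not the dissertation's specific formulation: trusted scores (e.g., confident classifier outputs) are held fixed while neighbors on a context graph pull the remaining scores toward contextually consistent values:

```python
import numpy as np

def propagate_context(W, y_init, clamp, alpha=0.8, iters=50):
    """Spread label evidence over a context graph (label propagation).

    W: (n, n) symmetric affinity between concepts/items; y_init:
    initial scores (e.g. raw classifier outputs); clamp: boolean mask
    of trusted entries that keep their initial values. Each round
    mixes a node's score with its neighbors', so contextually linked
    items end up with consistent scores.
    """
    y0 = np.asarray(y_init, dtype=float)
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))          # symmetric normalization
    y = y0.copy()
    for _ in range(iters):
        y = alpha * (S @ y) + (1 - alpha) * y0
        y[clamp] = y0[clamp]                 # trusted entries stay fixed
    return y

# Toy chain 0-1-2: only node 0 is trusted positive; node 1 (adjacent)
# should receive a higher score than node 2 (two hops away).
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
scores = propagate_context(W, y_init=[1.0, 0.0, 0.0],
                           clamp=np.array([True, False, False]))
```

The same pattern, run over a graph of tags, concepts, or video segments, is one common way to let contextual links regulate per-item predictions.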
ContributorsWang, Zhesheng (Author) / Li, Baoxin (Thesis advisor) / Sundaram, Hari (Committee member) / Qian, Gang (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2011
Description
Finger motion and hand posture of six professional clarinetists (defined by entrance into or completion of a doctorate of musical arts degree in clarinet performance) were recorded using a pair of CyberGloves® in Arizona State University's Center for Cognitive Ubiquitous Computing Laboratory. Performance tasks included performing a slurred three-octave chromatic scale in sixteenth notes, at sixty quarter-note beats per minute, three times, with a metronome and a short pause between repetitions, and forming three pedagogical hand postures. Following the CyberGloves® tasks, each subject completed a questionnaire about equipment, playing history, practice routines, health practices, and hand usage during computer and sports activities. CyberGlove® data were analyzed to find average hand/finger postures and differences for each pitch across subjects, subject variance in the performance task and differences in ascending and descending postures of the chromatic scale. The data were also analyzed to describe generalized finger posture characteristics based on hand size, whether right hand thumb position affects finger flexion, and whether professional clarinetists use similar finger/hand postures when performing on clarinet, holding a tennis ball, allowing hands to hang freely by the sides, or form a "C" shape. The findings of this study suggest an individual approach based on hand size is necessary for teaching clarinet hand posture.
ContributorsHarger, Stefanie (Author) / Spring, Robert (Thesis advisor) / Hill, Gary (Committee member) / Koonce, Frank (Committee member) / Norton, Kay (Committee member) / Stauffer, Sandy (Committee member) / Arizona State University (Publisher)
Created2011
Description
Owen Middleton (b. 1941) enjoys an established and growing reputation as a composer of classical guitar music, but his works for piano are comparatively little known. The close investigation offered here of Middleton's works for piano reveals the same impressive craftsmanship, compelling character, and innovative spirit found in his works for guitar. Indeed, the only significant thing Middleton's piano music currently lacks is the well-deserved attention of professional players and a wider audience. Middleton's piano music needs to be heard, not just discussed, so one of this document's purposes is to provide a recorded sample of his piano works. While the overall repertoire for solo piano is vast, and new works become established in that repertoire with increasing difficulty, Middleton's piano works have a significant potential to find their way into the concert hall as well as the private teaching studio. His solo piano music is highly effective, well suited to the instrument, and, perhaps most importantly, fresh sounding and truly original. His pedagogical works are of equal value. Middleton's piano music offers something for everyone: there one finds daring virtuosity, effusions of passion, intellectual force, colorful imagery, poetry, humor, and even a degree of idiomatic innovation. This study aims to reveal key aspects of the composer's musical style, especially his style of piano writing, and to provide pianists with helpful analytical, technical, and interpretive insights. These descriptions of the music are supported with recorded examples, selected from the works for solo piano written between 1962 and 1993: Sonata for Piano, Childhood Scenes, Katie's Collection, and Toccata for Piano. The complete scores of the recorded works are included in the appendix. A chapter briefly describing the piano pieces since 1993 concludes the study and invites the reader to further investigations of this unique and important body of work.
ContributorsMoreau, Barton Andrew (Author) / Hamilton, Robert (Thesis advisor) / Holbrook, Amy (Committee member) / Campbell, Andrew (Committee member) / Spring, Robert (Committee member) / Gardner, Joshua (Committee member) / Arizona State University (Publisher)
Created2011
Description
Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing the irrelevant, redundant, and noisy information while considering the correlation among different labels in multi-label learning. Specifically, I propose Hypergraph Spectral Learning (HSL) to perform dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The regularization effect on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is elucidated in this thesis. The relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is also investigated. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including canonical correlation analysis, orthonormalized partial least squares, linear discriminant analysis, and hypergraph spectral learning. The first approach is a direct least squares approach which allows the use of different regularization penalties, but is applicable under a certain assumption; the second one is a two-stage approach which can be applied in the regularization setting without any assumption. Furthermore, an online implementation for the same class of dimensionality reduction algorithms is proposed for when the data arrives sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully to Drosophila gene expression pattern image annotation. The experimental results on benchmark data sets in multi-label learning also demonstrate the effectiveness and efficiency of the proposed algorithms.
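The classical CCA problem at the core of several of these algorithms can be sketched directly. The snippet below is a generic whitening-plus-SVD solution with a small ridge term (the regularized setting the abstract mentions); it does not reproduce the thesis's least-squares or two-stage implementations:

```python
import numpy as np

def first_canonical_corr(X, Y, reg=1e-6):
    """First canonical correlation between two views X (n, dx), Y (n, dy).

    Whitens each view's covariance with a Cholesky factor and takes
    the SVD of the whitened cross-covariance; the singular values are
    the canonical correlations. `reg` is a small ridge term for
    numerical stability (the regularized CCA setting).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))   # whitener: Wx Cxx Wx.T = I
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)[0]

# When Y is an exact linear function of X, the two views are perfectly
# correlated and the first canonical correlation is numerically 1.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
Y = X @ rng.standard_normal((3, 2))
rho = first_canonical_corr(X, Y)
```

In the multi-label setting, Y would hold the label indicator matrix, which is what connects CCA to the least-squares and hypergraph formulations studied in the thesis.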
ContributorsSun, Liang (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Liu, Huan (Committee member) / Mittelmann, Hans D. (Committee member) / Arizona State University (Publisher)
Created2011
Description
Real-world environments are characterized by non-stationary and continuously evolving data. Learning a classification model on such data requires a framework that is able to adapt itself to newer circumstances. Under such circumstances, transfer learning has come to be a dependable methodology for improving classification performance with reduced training costs and without the need for explicit relearning from scratch. In this thesis, a novel instance transfer technique that adapts a "Cost-sensitive" variation of AdaBoost is presented. The method capitalizes on the theoretical and functional properties of AdaBoost to selectively reuse outdated training instances obtained from a "source" domain to effectively classify unseen instances occurring in a different, but related, "target" domain. The algorithm is evaluated on real-world classification problems, namely accelerometer-based 3D gesture recognition, smart home activity recognition and text categorization. The performance on these datasets is analyzed and evaluated against popular boosting-based instance transfer techniques. In addition, supporting empirical studies that investigate some of the less explored bottlenecks of boosting-based instance transfer methods are presented, to understand the suitability and effectiveness of this form of knowledge transfer.
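The instance-transfer idea can be illustrated with a TrAdaBoost-style weight update. This is a generic sketch of boosting-based instance transfer, not the thesis's cost-sensitive variation; the beta values and the toy data are invented for the example:

```python
import numpy as np

def transfer_reweight(w, err_mask, is_source, beta_src=0.5, beta_tgt=2.0):
    """One boosting-round weight update, TrAdaBoost-style.

    w: current instance weights; err_mask: True where the current weak
    learner misclassified; is_source: True for source-domain instances.
    Misclassified source instances are down-weighted (beta_src < 1),
    since they appear inconsistent with the target concept, while
    misclassified target instances are up-weighted (beta_tgt > 1) as
    in standard AdaBoost.
    """
    w = np.asarray(w, dtype=float).copy()
    w[is_source & err_mask] *= beta_src
    w[~is_source & err_mask] *= beta_tgt
    return w / w.sum()                      # renormalize to a distribution

# Four instances: two source, two target; the weak learner errs on
# source instance 0 and target instance 3.
w0 = np.full(4, 0.25)
is_source = np.array([True, True, False, False])
err_mask = np.array([True, False, False, True])
w1 = transfer_reweight(w0, err_mask, is_source)
```

Over rounds, this drives the learner to rely on source instances that remain consistent with the target domain and to discard the outdated ones, which is the selective-reuse behavior the abstract describes.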
ContributorsVenkatesan, Ashok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Li, Baoxin (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2011