Matching Items (229)

Description
The long-term impacts of bullying, stress, sexual prejudice, and stigma against members of the LGBTQ population are both worrisome and expansive. Bullying among adolescents is one of the clearest and most well-documented risks to adolescent health (Nansel et al., 2004; Wilkins-Shurmer et al., 2003; Wolke, Woods, Bloomfield, & Karstadt, 2001). The present study examined the influence of sexual orientation on the severity of bullying experiences, coping strategies, and emotion regulation, and the interaction of gender role endorsement in predicting coping and emotion regulation strategies. Extensive research supports high rates of victimization among LGBT individuals (Birkett et al., 2009; DuRant et al., n.d.; Kimmel & Mahler, 2003; Mishna et al., 2009), and separate research links gender role nonconformity to social stress and long-term coping skills (Galambos et al., 1990; Sánchez et al., 2010; Tolman, Striepe, & Harmon, 2003b). The goal of this study was to extend previous findings by establishing a relationship among three variables (sexual orientation, victimization history, and non-traditional gender role endorsement) and using those traits as predictors of future emotion regulation and coping strategies. The data suggest that, as a whole, LGBT-identified individuals experience bullying at a significantly higher rate than their heterosexual counterparts. By incorporating gender role endorsement, the relationship can be extended to predict maladaptive emotion regulation skills, higher rates of perceived stress, and increased fear of negative evaluation in lesbian women and gay men. The data were consistent with all hypotheses in the model: sexual identity significantly predicts higher bullying scores, and atypical gender role endorsement moderates victimization in LGBT individuals. The findings indicate that high masculine endorsement in lesbians and high feminine endorsement in gay males can significantly predict victimization, as well as maladaptive coping skills, emotion dysregulation, increased stress, and lack of emotional awareness.
ContributorsPuckett, Yesmina N (Author) / Newman, Matthew L. (Thesis advisor) / Hall, Deborah (Committee member) / Risko, Evan (Committee member) / Arizona State University (Publisher)
Created2012
Description
This project will attempt to supplement the current registry of lesbian inquiry in literature by exploring a very specific topos important to the Modern era: woman and her intellect. Under this umbrella, the project will perform two tasks. First, it will argue that the Modern turn that accentuates what I call negative valence mimesis is a moment of change that enables the general public to perceive lesbianism in representations of women that before, perhaps, remained unacknowledged. Second, it will argue that the intersection of thought and resistance to heteronormative structures, such as heterosexual desire/sex, childbirth, marriage, religion, and feminine performance, generates topoi of lesbianism that lesbian studies should continuously critique in order to index the myriad and creative ways through which fictional representations of women have evaded their proper roles in society. The two tasks above will be performed against the backdrop of a crucial moment in history in which lesbianism jumped from fiction to fact through the publication and obscenity trial of Radclyffe Hall's novel, The Well of Loneliness. Deconstructive feminist and queer inquiry into under-researched novels by women from the UK and the US, written within the decade surrounding the trial, reveals the possibilities of lesbianism in novels where the protagonists' investment in heteronormativity has remained unquestioned. In those texts where the protagonists have been questioned, the analysis of lesbianism will be pursued more deeply in order to illustrate new ways of reading these texts. I will focus on women writers who, as Terry Castle suggests, "both usurped and deepened the [lesbian] genre" with the arrival of the new century (Literature 29). My attempt is to combat heteronormativity through a more positive approach. As Michael Warner asserts, "heteronormativity can be overcome only by actively imagining a necessarily and desirably queer world" (xvi). This is not to say this study will be all roses and no thorns; a desirably queer world is not about a wish for a utopia. For this project, it is about rigorously engaging the lesbianism of literature while acknowledging how a lesbian reading, a reading for lesbianism, can continue to both expand and enrich the critical tradition of a text and the customary interpretation of various characters.
ContributorsWagner, Johanna M. (Author) / Clarke, Deborah (Thesis advisor) / Lussier, Mark (Thesis advisor) / Mallot, Edward (Committee member) / Arizona State University (Publisher)
Created2012
Description
Alzheimer's Disease (AD) is the most common form of dementia observed in elderly patients and has significant socio-economic impact. There are many initiatives that aim to identify the leading causes of AD. Several genetic, imaging, and biochemical markers are being explored to monitor the progression of AD and to explore treatment and detection options. The primary focus of this thesis is to identify key biomarkers to understand the pathogenesis and prognosis of Alzheimer's Disease. Feature selection is the process of finding a subset of relevant features with which to develop efficient and robust learning models. It is an active research topic in diverse areas such as computer vision, bioinformatics, information retrieval, chemical informatics, and computational finance. In this work, state-of-the-art feature selection algorithms, such as Student's t-test, Relief-F, Information Gain, Gini Index, Chi-Square, Fisher Kernel Score, Kruskal-Wallis, Minimum Redundancy Maximum Relevance, and Sparse Logistic Regression with Stability Selection, have been extensively applied to identify informative features for AD using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). An integrative approach that uses blood plasma protein, Magnetic Resonance Imaging, and psychometric assessment score biomarkers has been explored. This work also analyzes techniques for handling unbalanced data and evaluates the efficacy of sampling techniques. The performance of each feature selection algorithm is evaluated using the relevance of the derived features and their predictive power with Random Forest and Support Vector Machine classifiers. Performance metrics such as Accuracy, Sensitivity, Specificity, and area under the Receiver Operating Characteristic curve (AUC) have been used for evaluation. The feature selection algorithms best suited to analyzing AD proteomics data have been proposed, and the key biomarkers distinguishing healthy and AD patients, Mild Cognitive Impairment (MCI) converters and non-converters, and healthy and MCI patients have been identified.
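As an illustration of the kind of workflow this abstract describes, the following is a minimal, hedged sketch (not the thesis code): features are ranked with two univariate criteria and their predictive power is scored with a Random Forest and AUC. The synthetic data and the choice of k are placeholders standing in for ADNI proteomics measurements.

```python
# Minimal sketch: univariate feature selection + Random Forest evaluation with AUC.
# Synthetic, unbalanced data stand in for plasma-protein measurements.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# 200 "subjects", 150 candidate markers, only a few informative, 70/30 class split.
X, y = make_classification(n_samples=200, n_features=150, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, score_fn in [("F-score (t-test style)", f_classif),
                       ("mutual information", mutual_info_classif)]:
    model = make_pipeline(SelectKBest(score_fn, k=20),
                          RandomForestClassifier(n_estimators=200, random_state=0))
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```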
ContributorsDubey, Rashmi (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created2012
Description
Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Oftentimes we have very few or no labeled data from the test (target) distribution, but we may have plenty of labeled data from one or multiple related sources with different distributions. Due to its capability of migrating knowledge from related domains, transfer learning has been shown to be effective for cross-domain learning problems. In this dissertation, I carry out research along this direction with a particular focus on designing efficient and effective algorithms for BioImaging and Bilingual applications. Specifically, I propose deep transfer learning algorithms which combine transfer learning and deep learning to improve image annotation performance. First, I propose generating deep features for the Drosophila embryo images via pretrained deep models and building linear classifiers on top of the deep features. Second, I propose fine-tuning the pretrained model with a small amount of labeled images. The time complexity and performance of the deep transfer learning methodologies are investigated, and promising results demonstrate the knowledge transfer ability of the proposed deep transfer algorithms. Moreover, I propose a novel Robust Principal Component Analysis (RPCA) approach to process the noisy images in advance. In addition, I present a two-stage re-weighting framework for general domain adaptation problems. The distribution of the source domain is mapped towards the target domain in the first stage, and an adaptive learning model is proposed in the second stage to incorporate label information from the target domain when it is available. The proposed model is then applied to tackle the cross-lingual spam detection problem on LinkedIn's website. Our experimental results on real data demonstrate the efficiency and effectiveness of the proposed algorithms.
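As a hedged illustration of the feature-extraction flavor of transfer learning described above (a sketch, not the dissertation's implementation), the snippet below pulls deep features from a pretrained ResNet and fits a linear classifier on top. The image-folder path and the choice of ResNet-18 are assumptions for illustration only.

```python
# Minimal sketch: deep features from a pretrained CNN + a linear classifier on top.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # drop the ImageNet head; keep 512-d features
backbone.eval().to(device)

preprocess = models.ResNet18_Weights.DEFAULT.transforms()
dataset = datasets.ImageFolder("embryo_images/", transform=preprocess)  # hypothetical path
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

feats, labels = [], []
with torch.no_grad():
    for x, y in loader:
        feats.append(backbone(x.to(device)).cpu())   # extract frozen deep features
        labels.append(y)
X = torch.cat(feats).numpy()
y = torch.cat(labels).numpy()

clf = LogisticRegression(max_iter=1000).fit(X, y)    # linear classifier on deep features
print("training accuracy:", clf.score(X, y))
```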
ContributorsSun, Qian (Author) / Ye, Jieping (Committee member) / Xue, Guoliang (Committee member) / Liu, Huan (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created2015
Description
With the advent of the Internet, the amount of data being added online is increasing at an enormous rate. Though search engines use IR techniques to handle search requests from users, the results are often not well matched to the user's query, and the user has to go through several webpages before reaching the one they wanted. This problem of Information Overload can be addressed using Automatic Text Summarization. Summarization is the process of producing an abridged version of a document so that a user can get a quick view of what the document is about. Email threads from W3C are used in this system. Apart from common IR features such as Term Frequency and Inverse Document Frequency, the system implements Term Rank, a variation of PageRank based on a graph model that can cluster words with respect to word ambiguity. Term Rank also considers the possibility of co-occurrence of words within the corpus and ranks each word accordingly. Sentences in the email threads are ranked according to these features and summaries are generated. The system implements the concept of pyramid evaluation for content selection and can be considered a framework for unsupervised learning in text summarization.
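The following is a minimal sketch, not the thesis system, of graph-based extractive summarization in the same spirit: sentences of an invented email thread are ranked by PageRank over a TF-IDF similarity graph and the top-ranked sentences are kept.

```python
# Minimal sketch: TextRank-style extractive summarization of an email thread.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n_keep=2):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    sim = cosine_similarity(tfidf)                    # sentence-to-sentence similarity
    np.fill_diagonal(sim, 0.0)
    scores = nx.pagerank(nx.from_numpy_array(sim))    # rank sentences by graph centrality
    top = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(top)]        # keep original order

thread = [
    "The working group discussed the new draft of the specification.",
    "Several members raised concerns about backwards compatibility.",
    "A follow-up call was scheduled for next Tuesday.",
    "Lunch options near the venue were also mentioned briefly.",
]
print(summarize(thread))
```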
ContributorsNadella, Sravan (Author) / Davulcu, Hasan (Thesis advisor) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2015
Description
Robust and stable decoding of neural signals is imperative for implementing a useful neuroprosthesis capable of carrying out dexterous tasks. A nonhuman primate (NHP) was trained to perform combined flexions of the thumb, index, and middle fingers in addition to individual flexions and extensions of the same digits. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon action potential firing rates. The effects of four feature selection techniques (Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis, and Mutual Information Maximization) were compared on the basis of SVM classification performance. SVM classification was used to examine the functional parameters of (i) efficacy, (ii) endurance to simulated failure, and (iii) longevity of classification. The effect of using isolated-neuron versus multi-unit firing rates as the feature vector supplied to the SVM was also compared. The best classification performance was obtained on post-implantation day 36. On that day, when using multi-unit firing rates, the worst classification accuracy resulted from features selected with the Wilcoxon signed-rank test (51.12 ± 0.65%) and the best from Mutual Information Maximization (93.74 ± 0.32%); when using single-unit firing rates, the classification accuracy with the Wilcoxon signed-rank test was 88.85 ± 0.61% and with Mutual Information Maximization was 95.60 ± 0.52% (degrees of freedom = 10, level of chance = 10%).
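A hedged sketch of the decoding setup described above follows: synthetic firing rates and class labels stand in for the recorded neural data, features are selected by mutual information (one of the four techniques compared), and an SVM is evaluated by cross-validation.

```python
# Minimal sketch: mutual-information feature selection + SVM decoding of 10 movement
# classes from synthetic "firing rate" features (chance level = 10%, as in the abstract).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# 500 trials x 96 channels of firing rates, 10 finger-movement classes.
X, y = make_classification(n_samples=500, n_features=96, n_informative=30,
                           n_classes=10, n_clusters_per_class=1, random_state=0)

decoder = make_pipeline(SelectKBest(mutual_info_classif, k=40),
                        SVC(kernel="rbf", C=1.0))
acc = cross_val_score(decoder, X, y, cv=5)
print(f"decoding accuracy: {acc.mean():.2%} ± {acc.std():.2%} (chance = 10%)")
```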
ContributorsPadmanaban, Subash (Author) / Greger, Bradley (Thesis advisor) / Santello, Marco (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created2015
Description
The rate of progress in improving survival of patients with solid tumors is slow due to late-stage diagnosis and poor tumor characterization processes that fail to effectively reflect the nature of the tumor before treatment or the subsequent change in its dynamics in response to treatment. Further advancement of targeted therapies relies on advancements in biomarker research. In the context of solid tumors, bio-specimen samples such as biopsies serve as the main source of biomarkers used in the treatment and monitoring of cancer, even though biopsy samples are susceptible to sampling error and, more importantly, are local and offer a narrow temporal scope.

Because of its established role in cancer care and its non-invasive nature, imaging offers the potential to complement the findings of cancer biology. Over the past decade, a compelling body of literature has emerged suggesting a more pivotal role for imaging in the diagnosis, prognosis, and monitoring of diseases. These advances have facilitated the rise of an emerging practice known as Radiomics: the extraction and analysis of large numbers of quantitative features from medical images to improve disease characterization and prediction of outcome. It has been suggested that radiomics can contribute to biomarker discovery by detecting imaging traits that are complementary to or interchangeable with other markers.

This thesis seeks to further advance imaging biomarker discovery. The research unfolds over two aims: I) developing a comprehensive methodological pipeline for converting diagnostic imaging data into mineable sources of information, and II) investigating the utility of imaging data in clinical diagnostic applications. Four validation studies were conducted using the radiomics pipeline developed in aim I, with the following goals: (1) distinguishing between benign and malignant head and neck lesions, (2) differentiating between benign and malignant breast cancers, (3) predicting the status of Human Papillomavirus in head and neck cancers, and (4) predicting neuropsychological performance as it relates to Alzheimer's disease progression. The long-term objective of this thesis is to improve patient outcomes and survival by facilitating the incorporation of routine-care imaging data into decision-making processes.
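To make the idea of converting diagnostic imaging data into mineable sources of information concrete, here is a minimal, hedged sketch (not the thesis pipeline) that computes a handful of radiomics-style intensity and texture features from a synthetic region of interest with scikit-image; a real pipeline would read DICOM/NIfTI volumes and apply a lesion mask.

```python
# Minimal sketch: simple intensity + GLCM texture features from a 2-D ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # placeholder ROI

features = {
    "mean_intensity": float(roi.mean()),
    "std_intensity": float(roi.std()),
}
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    features[f"glcm_{prop}"] = float(graycoprops(glcm, prop).mean())

print(features)    # one row of a mineable feature table per lesion
```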
ContributorsRanjbar, Sara (Author) / Kaufman, David (Thesis advisor) / Mitchell, Joseph R. (Thesis advisor) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created2017
Description
Our ability to understand networks is important to many applications, from the analysis and modeling of biological networks to the analysis of social networks. Unveiling network dynamics allows us to make predictions and decisions. Moreover, network dynamics models have inspired new ideas for computational methods involving multi-agent cooperation, offering effective solutions for optimization tasks. This dissertation presents new theoretical results on network inference and multi-agent optimization, split into two parts.

The first part deals with modeling and identification of network dynamics. I study two types of network dynamics arising from social and gene networks. Based on the network dynamics, the proposed network identification method works like a "network RADAR": interaction strengths between agents are inferred by injecting a "signal" into the network and observing the resultant reverberation. In social networks, this is accomplished by stubborn agents whose opinions do not change throughout a discussion. In gene networks, genes are suppressed to create desired perturbations. The steady states under these perturbations are characterized. In contrast to the common assumption of full-rank input, I adopt the weaker assumption of low-rank input, which better models empirical network data. Importantly, a network is proven to be identifiable from low-rank data whose rank grows in proportion to the network's sparsity. The proposed method is applied to synthetic and empirical data and is shown to offer superior performance compared to prior work. The second part is concerned with algorithms on networks. I develop three consensus-based algorithms for multi-agent optimization. The first method is a decentralized Frank-Wolfe (DeFW) algorithm. The main advantage of DeFW lies in its projection-free nature: the costly projection step in traditional algorithms is replaced by a low-cost linear optimization step. I prove the convergence rates of DeFW for convex and non-convex problems. I also develop two consensus-based alternating optimization algorithms, one for least-squares problems and one for non-convex problems. These algorithms exploit the problem structure for faster convergence, and their efficacy is demonstrated by numerical simulations.

I conclude this dissertation by describing future research directions.
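For readers unfamiliar with projection-free methods, the following is a much-simplified, hedged sketch of a decentralized Frank-Wolfe style update (not the dissertation's DeFW algorithm): agents on a ring network mix their iterates and gradients with a doubly stochastic matrix and take Frank-Wolfe steps over the probability simplex, whose linear minimization oracle is just a coordinate vector, so no projection is needed. The quadratic losses and the mixing weights are synthetic placeholders.

```python
# Much-simplified decentralized Frank-Wolfe style sketch over the probability simplex.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 10
A = rng.normal(size=(n_agents, 20, dim))     # local quadratic losses 0.5*||A_i x - b_i||^2
b = rng.normal(size=(n_agents, 20))

# Doubly stochastic mixing matrix for a ring network (lazy Metropolis-style weights).
W = np.eye(n_agents) * 0.5
for i in range(n_agents):
    W[i, (i - 1) % n_agents] += 0.25
    W[i, (i + 1) % n_agents] += 0.25

x = np.full((n_agents, dim), 1.0 / dim)      # start every agent at the simplex centre
for t in range(200):
    x = W @ x                                # consensus averaging of iterates
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    g_avg = W @ grads                        # crude gradient mixing step
    s = np.zeros_like(x)
    s[np.arange(n_agents), g_avg.argmin(axis=1)] = 1.0   # linear oracle: best simplex vertex
    gamma = 2.0 / (t + 2)
    x = (1 - gamma) * x + gamma * s          # projection-free Frank-Wolfe update

global_loss = np.mean([0.5 * np.sum((A[i] @ x.mean(0) - b[i]) ** 2)
                       for i in range(n_agents)])
print("consensus disagreement:", np.linalg.norm(x - x.mean(0)))
print("global loss at average iterate:", round(global_loss, 3))
```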
ContributorsWai, Hoi To (Author) / Scaglione, Anna (Thesis advisor) / Berisha, Visar (Committee member) / Nedich, Angelia (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created2017
Description
The subliminal impact of the framing of social, political, and environmental issues such as climate change has been studied for decades in political science and communications research. Media framing offers an "interpretative package" for average citizens on how to make sense of climate change and its consequences for their livelihoods, how to deal with its negative impacts, and which mitigation or adaptation policies to support. A line of related work has used bag-of-words and word-level features to detect frames automatically in text. Such works face limitations, since standard keyword-based features may not generalize well to accommodate surface variations in text when different keywords are used for similar concepts.

This thesis develops a unique type of textual feature that generalizes triplets extracted from text by clustering them into high-level concepts. These concepts are used as features to detect frames in text. Compared to unigram- and bigram-based models, classification and clustering using generalized concepts yield better discriminating features and higher classification accuracy, with a 12% boost (i.e., from 74% to 83% F-measure) and a clustering purity of 0.91 for Frame/Non-Frame detection.

The automatic discovery of complex causal chains among interlinked events and their participating actors has not yet been thoroughly studied. Previous studies on extracting causal relationships from text relied on laborious and incomplete hand-developed lists of explicit causal verbs, such as "causes" and "results in." Such approaches have limited recall because standard causal verbs may not generalize well to accommodate surface variations in text when different keywords and phrases are used to express similar causal effects. Therefore, I present a system that utilizes generalized concepts to extract causal relationships. The proposed algorithms overcome surface variations in written expressions of causal relationships and discover the domino effects between climate events and human security. This semi-supervised approach alleviates the need for labor-intensive keyword list development and annotated datasets. Experimental evaluations by domain experts achieve an average precision of 82%, and qualitative assessments of causal chains show that the results are consistent with the 2014 IPCC report, illuminating the causal mechanisms underlying the linkages between climatic stresses and social instability.
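As a hedged illustration of the triplet-extraction step only (the generalized-concept clustering and the evaluation are not reproduced here), the sketch below pulls (subject, verb, object) triplets from text with spaCy's dependency parse and flags verbs found in a small seed list of causal verbs; the seed list and example sentences are invented for illustration.

```python
# Minimal sketch: (subject, verb, object) triplet extraction with a causal-verb seed list.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed
CAUSAL_SEEDS = {"cause", "trigger", "result", "lead", "drive"}

def extract_triplets(text):
    triplets = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subj = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                if subj and obj:
                    triplets.append((subj[0].text, token.lemma_, obj[0].text,
                                     token.lemma_ in CAUSAL_SEEDS))
    return triplets

text = "Prolonged drought triggered crop failures. Farmers protested the rising prices."
for subj, verb, obj, causal in extract_triplets(text):
    print(f"({subj}, {verb}, {obj})  causal_seed={causal}")
```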
ContributorsAlashri, Saud (Author) / Davulcu, Hasan (Thesis advisor) / Desouza, Kevin C. (Committee member) / Maciejewski, Ross (Committee member) / Hsiao, Sharon (Committee member) / Arizona State University (Publisher)
Created2018
Description
The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired, external factors of variation, and should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos.

The feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires the knowledge of domain experts and manual labor; however, the resulting feature extraction process is interpretable and explainable. The next group contains the latent-feature extraction processes. While the original features lie in a high-dimensional space, the relevant factors for a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose the underlying data properties that cannot be directly measured from the input. Latent features impose a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, which are obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.

In this dissertation, I present four pieces of work in which I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks pairs of images based on their aesthetic quality. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For both of these tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations, augmented with an appropriate learning approach, can increase performance for most visual computing tasks.
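As a concrete example of the first, hand-crafted category of features mentioned above (a sketch, not the dissertation code), the snippet below computes a colour histogram and a HOG descriptor for a built-in sample image with scikit-image; these are the kind of interpretable, manually designed features that contrast with latent and deep features.

```python
# Minimal sketch: two classic hand-crafted image features (colour histogram, HOG).
import numpy as np
from skimage import data, color
from skimage.feature import hog

image = data.astronaut()                         # built-in RGB sample image

# Colour histogram: 8 bins per channel, concatenated and normalised.
hist = np.concatenate([np.histogram(image[..., c], bins=8, range=(0, 255))[0]
                       for c in range(3)]).astype(float)
hist /= hist.sum()

# HOG descriptor on the grayscale image: local gradient-orientation statistics.
hog_vec = hog(color.rgb2gray(image), orientations=9,
              pixels_per_cell=(16, 16), cells_per_block=(2, 2))

print("colour histogram length:", hist.shape[0])   # 24
print("HOG descriptor length:", hog_vec.shape[0])
```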
ContributorsChandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2017