Matching Items (1,090)
Description
The Open Services Gateway initiative (OSGi) framework is a module-system and service-platform standard that implements a complete and dynamic component model. Most OSGi implementations are written in Java, which is closely related to the Android programming language. With the emergence of the Android operating system, these similarities have drawn growing attention to bringing OSGi's module system and service platform to the Android system. Making OSGi run on Android is a hot topic; going further, finding a mechanism that enables communication between OSGi and the Android system is a more advanced problem than simply running OSGi on Android. This paper, which aims to realize SOA (Service Oriented Architecture) and CBA (Component Based Architecture), proposes a solution for integrating the Felix OSGi platform with the Android system in order to build a distributed OSGi framework between mobile phones over the XMPP protocol. The work not only successfully makes OSGi run on Android, but also introduces a mechanism for seamless collaboration between the two platforms.
ContributorsDong, Xinyi (Author) / Huang, Dijiang (Thesis advisor) / Dasgupta, Partha (Committee member) / Chen, Yinong (Committee member) / Arizona State University (Publisher)
Created2012
Description
Social networking platforms have redefined communication, serving as conduits for swift global information dissemination on contemporary topics and trends. This research probes information cascade (IC) dynamics, focusing on viral IC, where user-shared information gains rapid, widespread attention. Implications of IC span advertising, persuasion, opinion-shaping, and crisis response. First, this dissertation aims to unravel the context behind viral content, particularly in the realm of the digital world, introducing a semi-supervised taxonomy induction framework (STIF). STIF employs state-of-the-art term representation, topical phrase detection, and clustering to organize terms into a two-level topic taxonomy. Social scientists then assess the topic clusters for coherence and completeness. STIF proves effective, significantly reducing human coding efforts (up to 74%) while accurately inducing taxonomies and term-to-topic mappings due to the high purity of its topics. Second, to profile the drivers of virality, this study investigates messaging strategies influencing message virality. Three content-based hypotheses are formulated and tested, demonstrating that incorporation of "negativity bias," "causal arguments," and "threats to personal or societal core values" - singularly and jointly - significantly enhances message virality on social media, quantified by retweet counts. Furthermore, the study highlights framing narratives' pivotal role in shaping discourse, particularly in adversarial campaigns. An innovative pipeline for automatic framing detection is introduced and tested on a collection of texts on the Russia-Ukraine conflict. Integrating representation learning, overlapping graph-clustering, and a unique Topic Actor Graph (TAG) synthesis method, the study achieves remarkable framing detection accuracy. The developed scoring mechanism maps sentences to automatically detect framing signatures.
This pipeline attains an impressive F1 score of 92% and a 95% weighted accuracy for framing detection on a real-world dataset. In essence, this dissertation focuses on the multidimensional exploration of information cascade, uncovering the context and drivers of content virality, and automating framing detection. Through innovative methodologies like STIF, messaging strategy analysis, and TAG Frames, the research contributes valuable insights into the mechanics of viral content spread and framing nuances within the digital landscape, enriching fields such as advertisement, communication, public discourse, and crisis response strategies.
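As a reminder of how the reported F1 figure is computed, here is a minimal sketch of the F1 score from confusion-matrix counts; the counts below are hypothetical and not the dissertation's data:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 9 true positives, 1 false positive, 1 false negative.
score = f1_score(9, 1, 1)  # ≈ 0.9
```

An F1 of 92% therefore implies that both precision and recall are high, not merely raw accuracy.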
ContributorsMousavi, Maryam (Author) / Davulcu, Hasan HD (Thesis advisor) / Li, Baoxin (Committee member) / Corman, Steven (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created2023
Description
This project analyzed the sequencing results of 230 bat samples to investigate the appearance of novel coronaviruses (CoVs). A bioinformatics workflow was developed to process the Next-Generation Sequencing (NGS) data and identify novel CoV genomes, and a parallel computing scheme was implemented to enhance performance. Among the 230 bat samples, 14 had previously tested positive for CoV by a pan-CoV quantitative polymerase chain reaction (qPCR). Illumina NGS techniques were used to generate the shotgun reads. With the newly developed bioinformatics pipeline, the sequencing reads from each bat sample and a positive control sample were quality-controlled and assembled to generate longer viral contigs. These then went through a Basic Local Alignment Search Tool X (BLASTx) query against a customized CoV database built from National Center for Biotechnology Information (NCBI) databases. After further filtering with BLASTx and megaBLAST against the NCBI nucleotide collection (nr/nt) database, the confirmed CoV contigs were used to build bootstrapped phylogenetic trees with several representative Alpha-, Beta-, and Gamma-CoV genomes. Two bat samples contained potentially novel CoV fragments corresponding to the Open Reading Frame 1ab (ORF1ab), ORF7, and Nucleocapsid (N) gene regions. The phylogenetic trees showed that the fragments are Alpha-CoVs, closely related to Eptesicus Bat Coronavirus, Pipistrellus Bat Coronavirus, and Tadarida Brasiliensis Bat Alphacoronavirus 1.
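The filtering step of such a pipeline - keeping only contigs whose best BLASTx hit is a coronavirus below an e-value cutoff - can be sketched as follows; the hit records and cutoff are illustrative, not the project's actual data or thresholds:

```python
# Hypothetical best-hit records: (contig_id, best_hit_taxon, e_value).
hits = [
    ("contig_1", "Alphacoronavirus", 1e-30),
    ("contig_2", "Bacteriophage", 1e-5),
    ("contig_3", "Betacoronavirus", 1e-12),
]

def filter_cov_contigs(hits, e_cutoff=1e-10):
    """Keep contigs whose best hit is a coronavirus at or below the e-value cutoff."""
    return [cid for cid, taxon, e in hits
            if "coronavirus" in taxon.lower() and e <= e_cutoff]

confirmed = filter_cov_contigs(hits)
```

Only the confirmed contigs would then proceed to the megaBLAST re-check and phylogenetic-tree construction.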
ContributorsMu, Tianchen Nil (Author) / Lim, Efrem EL (Thesis advisor) / Lee, Kookjin KL (Thesis advisor) / Chung, Yunro YC (Committee member) / Arizona State University (Publisher)
Created2023
Description
Speech analysis for clinical applications has emerged as a burgeoning field, providing valuable insights into an individual's physical and physiological state. Researchers have explored speech features for clinical applications, such as diagnosing, predicting, and monitoring various pathologies. Before presenting the new deep learning frameworks, this thesis introduces a study on conventional acoustic feature changes in subjects with post-traumatic headache (PTH) attributed to mild traumatic brain injury (mTBI). This work demonstrates the effectiveness of using speech signals to assess the pathological status of individuals. At the same time, it highlights some of the limitations of conventional acoustic and linguistic features, such as low repeatability and generalizability. Two critical characteristics of speech features are (1) good robustness, as speech features need to generalize across different corpora, and (2) high repeatability, as speech features need to be invariant to all confounding factors except the pathological state of targets. This thesis presents two research thrusts in the context of speech signals in clinical applications that focus on improving the robustness and repeatability of speech features, respectively. The first thrust introduces a deep learning framework to generate acoustic feature embeddings sensitive to vocal quality and robust across different corpora. A contrastive loss combined with a classification loss is used to train the model jointly, and data-warping techniques are employed to improve the robustness of embeddings. Empirical results demonstrate that the proposed method achieves high in-corpus and cross-corpus classification accuracy and generates good embeddings sensitive to voice quality and robust across different corpora. The second thrust introduces using the intra-class correlation coefficient (ICC) to evaluate the repeatability of embeddings. 
A novel regularizer, the ICC regularizer, is proposed to regularize deep neural networks to produce embeddings with higher repeatability. This ICC regularizer is implemented and applied to three speech applications: a clinical application, speaker verification, and voice style conversion. The experimental results reveal that the ICC regularizer improves the repeatability of learned embeddings compared to the contrastive loss, leading to enhanced performance in downstream tasks.
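For reference, the plain intra-class correlation statistic (the quantity the ICC regularizer is built around, not the regularizer itself) can be sketched for equally sized groups of repeated measurements; this is a one-way random-effects formulation under illustrative assumptions:

```python
def icc_oneway(groups):
    """One-way random-effects ICC for n subjects with k repeated measurements each.

    groups: list of n lists, each of length k. High ICC means measurements of the
    same subject agree closely relative to between-subject variation.
    """
    k = len(groups[0])                      # measurements per subject
    n = len(groups)                         # number of subjects
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfectly repeatable embeddings (identical measurements per subject) give an ICC of 1.0; within-subject noise pulls the value down, which is what the regularizer penalizes.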
ContributorsZhang, Jianwei (Author) / Jayasuriya, Suren (Thesis advisor) / Berisha, Visar (Thesis advisor) / Liss, Julie (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2023
Description
Visual Question Answering (VQA) is an increasingly important multi-modal task where models must answer textual questions based on visual image inputs. Numerous VQA datasets have been proposed to train and evaluate models. However, existing benchmarks exhibit a unilateral focus on textual distribution shifts rather than joint shifts across modalities, which is suboptimal for properly assessing model robustness and generalization. To address this gap, a novel multi-modal VQA benchmark dataset is introduced that combines both visual and textual distribution shifts across training and test sets. Using this challenging benchmark exposes vulnerabilities in existing models that rely on spurious correlations and overfit to dataset biases. The novel dataset advances the field by enabling more robust model training and rigorous evaluation of multi-modal distribution-shift generalization. In addition, a new few-shot multi-modal prompt fusion model is proposed to better adapt models for downstream VQA tasks. The model incorporates a prompt encoder module and a dual-path design to align and fuse image and text prompts, representing a novel prompt learning approach tailored for multi-modal learning across vision and language. Together, the introduced benchmark dataset and prompt fusion model address key limitations in evaluating and improving VQA model robustness. The work expands the methodology for training models resilient to multi-modal distribution shifts.
ContributorsJyothi Unni, Suraj (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created2023
Description
In this thesis, applications of sparsity, specifically sparse tensors, are motivated in physics. An algorithm is introduced to natively compute partial traces of sparse tensors, along with direct implementations in popular Python libraries for immediate use. These applications include the infamous exponentially scaling (with system size) quantum many-body problems (both Heisenberg/spin-chain-like and chemical Hamiltonian models). This sparsity aspect is stressed as an important and essential feature in solving many real-world physical problems approximately and numerically, including the original motivation of answering radiation-damage questions for ultrafast light and electron sources.
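The idea of a "native" sparse partial trace - visiting only stored entries rather than the full exponentially large matrix - can be sketched with a dictionary-backed coordinate representation; the representation and dimensions below are illustrative, not the thesis's actual implementation:

```python
def partial_trace_b(coo, d_a, d_b):
    """Trace out subsystem B from a sparse bipartite (d_a*d_b) x (d_a*d_b) matrix.

    coo: dict mapping (row, col) -> value, with row = i_a*d_b + i_b and
    col = j_a*d_b + j_b. Cost scales with the number of stored entries,
    not with the full matrix dimension.
    """
    out = {}
    for (row, col), val in coo.items():
        i_a, i_b = divmod(row, d_b)
        j_a, j_b = divmod(col, d_b)
        if i_b == j_b:  # only entries diagonal in B survive the trace
            out[(i_a, j_a)] = out.get((i_a, j_a), 0.0) + val
    return out

# Example: the maximally mixed two-qubit state I/4, stored sparsely.
rho = {(i, i): 0.25 for i in range(4)}
reduced = partial_trace_b(rho, 2, 2)  # I/2 on subsystem A
```

Four stored entries are visited instead of sixteen dense ones; for spin chains the gap between nonzeros and dense size grows exponentially with system size, which is the point the thesis stresses.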
ContributorsCandanedo, Julio (Author) / Beckstein, Oliver (Thesis advisor) / Arenz, Christian (Thesis advisor) / Keeler, Cynthia (Committee member) / Erten, Onur (Committee member) / Arizona State University (Publisher)
Created2023
Description
The proposed research is motivated by a colon cancer biomarker study, which recruited case (colon cancer) and healthy control samples and quantified a large number of candidate biomarkers using a high-throughput technology called nucleic-acid-programmable protein array (NAPPA). The study aimed to identify a panel of biomarkers that accurately distinguishes between the cases and controls. A major challenge in analyzing this study was biomarker heterogeneity, where biomarker responses differ from sample to sample. The goal of this research is to improve prediction accuracy for the motivating study and similar studies. Most machine learning (ML) algorithms, developed under a one-size-fits-all strategy, were not able to analyze such heterogeneous data. Failing to capture the individuality of each subject, several standard ML algorithms tested against this dataset performed poorly, resulting in 55-61% accuracy. Alternatively, the proposed personalized ML (PML) strategy tailors an optimal ML model to each subject according to their individual characteristics, yielding a best accuracy of 72%.
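The core contrast with one-size-fits-all training can be sketched as per-subject model selection; the subject names, candidate models, and cross-validated scores below are hypothetical, and the actual PML strategy in the thesis may select models quite differently:

```python
# Hypothetical per-subject cross-validated accuracies for three candidate models.
cv_acc = {
    "subject_1": {"logreg": 0.55, "svm": 0.70, "tree": 0.60},
    "subject_2": {"logreg": 0.72, "svm": 0.58, "tree": 0.61},
}

def pick_models(cv_acc):
    """Tailor a model to each subject: choose the best-scoring candidate per subject."""
    return {subj: max(scores, key=scores.get) for subj, scores in cv_acc.items()}

chosen = pick_models(cv_acc)
```

A one-size-fits-all approach would force a single model on both subjects; here each subject keeps whichever model suits their individual response pattern.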
ContributorsShah, Nishtha (Author) / Chung, Yunro (Thesis advisor) / Lee, Kookjin (Thesis advisor) / Ghasemzadeh, Hassan (Committee member) / Arizona State University (Publisher)
Created2023
Description
In contrast to traditional chemotherapy for cancer, which fails to address tumor heterogeneity, raises patients' levels of toxicity, and selects for drug-resistant cells, adaptive therapy applies ideas from cancer ecology, employing low-dose drugs to encourage competition between cancerous cells, reducing toxicity and potentially delaying disease progression. Despite promising results in some clinical trials, optimizing adaptive therapy routines involves navigating a vast space of combinatorial possibilities, including the number of drugs, drug holiday duration, and drug dosages. Computational models can serve as precursors to efficiently explore this space, narrowing the scope of possibilities for in-vivo and in-vitro experiments, which are time-consuming, expensive, and specific to tumor types. Among the existing modeling techniques, agent-based models are particularly suited for studying the spatial interactions critical to successful adaptive therapy. In this thesis, I introduce CancerSim, a three-dimensional agent-based model fully implemented in C++ that is designed to simulate tumorigenesis, angiogenesis, drug resistance, and resource competition within a tissue. Additionally, the model is equipped to assess the effectiveness of various adaptive therapy regimens. The thesis provides detailed insights into the biological motivation and calibration of different model parameters. Lastly, I propose a series of research questions and experiments for adaptive therapy that CancerSim can address in the pursuit of advancing cancer treatment strategies.
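The adaptive-therapy dosing logic - switch the drug on when tumor burden crosses an upper threshold, off when it falls below a lower one - can be sketched in a toy, non-spatial form; all parameters, growth rates, and thresholds below are illustrative and unrelated to CancerSim's actual three-dimensional model:

```python
def adaptive_therapy(sensitive, resistant, steps, on_frac=0.5, off_frac=0.25):
    """Toy (non-spatial) adaptive-therapy loop with hypothetical rates.

    The drug switches on when total burden exceeds on_frac of a carrying
    capacity and off when it falls below off_frac, sparing sensitive cells
    so they keep competing with (and suppressing) resistant ones.
    """
    cap = 1000.0
    drug_on = False
    for _ in range(steps):
        burden = sensitive + resistant
        if burden > on_frac * cap:
            drug_on = True
        elif burden < off_frac * cap:
            drug_on = False
        room = max(0.0, 1 - burden / cap)          # logistic growth limit
        sensitive += sensitive * (-0.3 if drug_on else 0.1 * room)
        resistant += resistant * 0.05 * room       # resistant cells grow slower
    return sensitive, resistant
```

Even in this caricature, the drug holiday keeps a sensitive population alive, illustrating why the timing parameters form the combinatorial space the thesis explores.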
ContributorsShah, Sanjana Saurin (Author) / Daymude, Joshua J (Thesis advisor) / Forrest, Stephanie (Committee member) / Maley, Carlo C (Committee member) / Arizona State University (Publisher)
Created2023
Description
This thesis introduces a requirement-based regression test selection approach in an agile development context. Regression testing is critical in ensuring software quality but demands substantial time and resources. The rise of agile methodologies emphasizes the need for swift, iterative software delivery, requiring efficient regression testing. Although executing all existing test cases is the most thorough approach, it becomes impractical and resource-intensive for large real-world projects. Regression test selection emerges as a solution to this challenge, focusing on identifying a subset of test cases that efficiently uncover potential faults due to changes in the existing code. Existing literature on regression test selection in agile settings presents strategies that may only partially embrace agile characteristics. This research proposes a regression test selection method by utilizing data from user stories—agile's equivalent of requirements—and the associated business value spanning successive releases to pinpoint regression test cases. Given that value is a chief metric in agile, and testing—particularly regression testing—is often viewed more as value preservation than creation, the approach in this thesis demonstrates that integrating user stories and business value can lead to notable advancements in agile regression testing efficiency.
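The selection idea - pick the tests that cover changed user stories, prioritized by business value - can be sketched as follows; the story IDs, values, and test-to-story mapping are hypothetical, not the thesis's actual method or data:

```python
# Hypothetical user stories with business value and a changed-in-this-release flag,
# plus a mapping from test cases to the stories they cover.
stories = {
    "US-1": {"value": 8, "changed": True},
    "US-2": {"value": 3, "changed": False},
    "US-3": {"value": 5, "changed": True},
}
tests = {
    "test_login": ["US-1"],
    "test_report": ["US-2"],
    "test_checkout": ["US-1", "US-3"],
}

def select_tests(tests, stories):
    """Select tests covering changed stories, ordered by total business value covered."""
    scores = {}
    for name, covered in tests.items():
        changed = [s for s in covered if stories[s]["changed"]]
        if changed:
            scores[name] = sum(stories[s]["value"] for s in changed)
    return sorted(scores, key=scores.get, reverse=True)

selected = select_tests(tests, stories)
```

Tests touching only unchanged stories are skipped entirely, and the value ordering lets a time-boxed sprint run the highest-value regression tests first.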
ContributorsMondal, Aniruddha (Author) / Gary, Kevin KG (Thesis advisor) / Bansal, Srividya SB (Thesis advisor) / Tuzmen, Ayca AT (Committee member) / Arizona State University (Publisher)
Created2023
Description
From the earliest operatic spectacles to the towering Coachella-esque stages that dominate today’s music industry, there is no shortage of successful examples of artists combining music and visual art. The advancement of technology has created greater potential for these combinations today. Music curriculums that wish to produce well-rounded graduates capable of realizing this potential need to adapt and teach students how to incorporate technology into performance. This paper presents two new courses that integrate technology with performance: Sound & Sight: A Practical Approach to Audio-Visual Performances; and Phase Music: An Introduction to Design and Fabrication. In Sound & Sight, students will learn how to “storyboard” pieces of music, realize that vision through object-oriented programming in Processing, and synchronize audio and visual elements in live performance settings using Ableton Live and Max. In Phase Music, students will be introduced to phase music, learn how to use Ableton Live to perform one of Steve Reich’s phase pieces or to compose and perform their own, and design and build a custom Musical Instrument Digital Interface (MIDI) controller using Arduino, Adobe Illustrator, and Max. The document includes complete fifteen-week lesson plans for each course, detailing learning objectives, assignments, use of class time, original video coding tutorials, and lecture notes.
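For a flavor of what the MIDI-controller unit involves at the protocol level, a channel-voice Note On message is just three bytes (status byte 0x90 OR'd with the channel, then pitch and velocity); this sketch builds one in Python, though the course itself targets Arduino and Max:

```python
def note_on(channel, pitch, velocity):
    """Build a 3-byte MIDI Note On message: status (0x90 | channel), pitch, velocity.

    channel is 0-15; pitch and velocity are 0-127 (data bytes keep the high bit clear).
    """
    assert 0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, pitch, velocity])

msg = note_on(0, 60, 100)  # middle C, channel 1, moderately loud
```

A custom controller built in the course would emit messages of exactly this shape whenever a pad or key is pressed.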
ContributorsNguyen, Julian Tuan Anh (Author) / Swartz, Jonathan (Thesis advisor) / Thorn, Seth (Thesis advisor) / Navarro, Fernanda (Committee member) / Arizona State University (Publisher)
Created2023