Matching Items (1,088)
Description
The presence of a rich set of embedded sensors on mobile devices has been fuelling various sensing applications regarding the activities of individuals and their surrounding environment, and these ubiquitous sensing-capable mobile devices are pushing the new paradigm of Mobile Crowd Sensing (MCS) from concept to reality. MCS aims to outsource sensing data collection to mobile users, and it could revolutionize the traditional ways of collecting and processing sensing data. In the meantime, cloud computing provides cloud-backed infrastructure through which network-connected mobile devices can provision their capabilities. With enormous computational and storage resources along with sufficient bandwidth, the cloud functions as the hub that handles sensing service requests from sensing service consumers and coordinates sensing task assignment among eligible mobile users to reach a desired quality of sensing service. This paper studies the problem of assigning sensing tasks to mobile device owners with specific spatio-temporal traits so as to minimize cost and maximize utility in MCS while adhering to QoS constraints. Greedy approaches and hybrid solutions combined with bee algorithms are explored to address the problem.
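The thesis's greedy and bee-algorithm formulations are not reproduced here, but a minimal sketch of one greedy strategy for this kind of assignment problem might look as follows (the data model and coverage rule are illustrative assumptions): each task is assigned to its cheapest eligible users until its coverage requirement, a stand-in for the QoS constraint, is met.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: str
    required: int  # users needed to reach the desired sensing quality (assumed QoS proxy)
    eligible: dict = field(default_factory=dict)  # user id -> cost of serving this task

def greedy_assign(tasks):
    """Assign each task to its cheapest eligible users until coverage is met."""
    assignment, total_cost = {}, 0.0
    for task in sorted(tasks, key=lambda t: t.required, reverse=True):
        cheapest = sorted(task.eligible.items(), key=lambda kv: kv[1])[:task.required]
        if len(cheapest) < task.required:
            continue  # coverage unsatisfiable; a real solver would relax or report it
        assignment[task.tid] = [user for user, _ in cheapest]
        total_cost += sum(cost for _, cost in cheapest)
    return assignment, total_cost

tasks = [
    Task("t1", required=2, eligible={"u1": 1.0, "u2": 2.5, "u3": 0.8}),
    Task("t2", required=1, eligible={"u2": 1.2, "u3": 3.0}),
]
print(greedy_assign(tasks))  # ({'t1': ['u3', 'u1'], 't2': ['u2']}, 3.0)
```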

Moreover, privacy concerns arise with the widespread deployment of MCS, affecting both the data contributors and the sensing service consumers. The uploaded sensing data, especially data tagged with spatio-temporal information, can disclose personal information about the data contributors. In addition, sensing service requests can reveal the personal interests of service consumers. To address these privacy issues, this paper constructs a new framework named Privacy-Preserving Mobile Crowd Sensing (PP-MCS) to leverage the sensing capabilities of ubiquitous mobile devices and cloud infrastructures. PP-MCS has a distributed architecture and does not rely on trusted third parties for privacy preservation. In PP-MCS, sensing service consumers can retrieve data without learning who the real data contributors are. Furthermore, individual sensing records can be compared against the aggregation result while the values of the sensing records remain unknown, and the k-nearest neighbors can be approximately identified without privacy leaks. As such, the privacy of the data contributors and the sensing service consumers can be protected to the greatest extent possible.
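PP-MCS's actual protocols are not detailed in this abstract, so the following is only a generic illustration of one underlying idea, aggregating contributions without exposing individual values, via textbook pairwise additive masking in which the masks cancel in the sum (all names and the modulus are assumptions):

```python
import random

def masked_reports(values, modulus=2**31):
    """Each contributor adds pairwise masks that cancel in the aggregate,
    so no single report reveals its underlying sensing value."""
    n = len(values)
    # masks[(i, j)] is a secret shared between contributors i and j (i < j)
    masks = {(i, j): random.randrange(modulus) for i in range(n) for j in range(i + 1, n)}
    reports = []
    for i, v in enumerate(values):
        r = v
        for j in range(n):
            if i < j:
                r = (r + masks[(i, j)]) % modulus
            elif j < i:
                r = (r - masks[(j, i)]) % modulus
        reports.append(r)
    return reports

values = [23, 17, 42]             # private sensing records
reports = masked_reports(values)  # individually meaningless
aggregate = sum(reports) % 2**31  # pairwise masks cancel in the sum
assert aggregate == sum(values)
print(reports, aggregate)
```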
Contributors: Wang, Zhijie (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A well-defined software complexity theory that captures the cognitive aspects of comprehending algorithmic information is needed in the domain of cognitive informatics and computing. Existing complexity heuristics are vague and empirical. Industrial software is a combination of implemented algorithms; however, it would be wrong to conclude that algorithmic space and time complexity is software complexity. An algorithm spanning many lines of pseudocode can sometimes be simpler to understand than one with fewer lines. It is therefore crucial to determine the understandability of an algorithm in order to better understand software complexity. This work approaches software complexity from a cognitive angle, and it also measures the effect of reducing cognitive complexity. The work aims to establish three statements: first, that while algorithmic complexity is part of software complexity, software complexity does not solely and entirely mean algorithmic complexity; second, that the cognitive understandability of algorithms deserves attention; and third, the impact that reducing cognitive complexity would have on software design and development.
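A small hypothetical illustration of the claim that fewer lines do not imply lower cognitive load: the two functions below compute the same quantity, yet many readers will find the longer one easier to verify step by step.

```python
# Terse: one line, but the reader must unpack three operations at once
# (and the repeated sum(xs)/len(xs) even makes it algorithmically worse).
def mean_abs_dev_terse(xs):
    return sum(abs(x - sum(xs) / len(xs)) for x in xs) / len(xs)

# Longer "pseudocode": more lines, yet each step carries exactly one idea.
def mean_abs_dev_clear(xs):
    mean = sum(xs) / len(xs)
    deviations = [abs(x - mean) for x in xs]
    return sum(deviations) / len(deviations)

assert mean_abs_dev_terse([1, 2, 3, 6]) == mean_abs_dev_clear([1, 2, 3, 6])  # both 1.5
```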
Contributors: Mannava, Manasa Priyamvada (Author) / Ghazarian, Arbi (Thesis advisor) / Gaffar, Ashraf (Committee member) / Bansal, Ajay (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
This paper provides a comprehensive study of Italian liturgical organ works from the 15th to 17th centuries. This music was composed for the Catholic Mass, and it demonstrates the development of Italian keyboard style and the incorporation of new genres into the organ Mass, such as a Toccata before the Mass, music for the Offertory, and the Elevation Toccata. This often neglected corpus of music deserves greater scholarly attention.

The Italian organ Mass begins with the Faenza Codex of c.1430, which contains the earliest surviving liturgical music for organ. Over a century would pass before Girolamo Cavazzoni published his three organ Masses in 1543: Mass IV (for feasts of apostles), Mass IX (for Marian feasts) and Mass XI (for typical Sundays of the year). The prevalence of publishing in Venice and the flourishing liturgical culture at San Marco led two notable organists, Andrea Gabrieli and Claudio Merulo, to publish their own Masses in 1563 and 1568. Both composers cultivated imitation and figurative lines which were often replete with ornamentation.

Frescobaldi’s Fiori musicali, published in Venice in 1635, represents the pinnacle of the Italian organ Mass. Reflecting the type of music he performed liturgically at San Pietro in Rome, this publication includes several new genres: canzonas after the reading of the Epistle and after Communion; ricercars after the Credo; and toccatas to be played during the Elevation of the Host. Frescobaldi’s music shows unparalleled mastery of counterpoint and invention of figuration. His liturgical music casts a long shadow over the three composers who published organ Masses in the decade following Fiori musicali: Giovanni Salvatore, Fra Antonio Croci and Giovanni Battista Fasolo.

This comprehensive look at Italian organ Masses from the 15th to the 17th centuries reveals the musical creativity inspired by the Catholic liturgy. Perhaps because of their practical use, these organ works are often neglected, mentioned merely as addenda to the other accomplishments of these composers. It is hoped that insight into the contents of each organ Mass, along with information about their style and aspects of performance practice, will make these musical gems more accessible to contemporary organists.
Contributors: Holton Prouty, Kristin Michelle (Author) / Marshall, Kimberly (Thesis advisor) / Ryan, Russell (Committee member) / Solis, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Computer supported collaborative learning (CSCL) has made great inroads in classroom teaching, marked by the use of tools and technologies to support and enhance collaborative learning. Computer mediated learning environments produce large amounts of data capturing student interactions, which can be used to analyze students’ learning behaviors (Martinez-Maldonado et al., 2013a). The analysis of the process of collaboration is an active area of research in CSCL. Contributing to this area, Meier et al. (2007) defined nine dimensions and gave a rating scheme to assess the quality of collaboration. This thesis aims to extract and examine frequent patterns of students’ interactions that characterize strong and weak groups across these dimensions. To achieve this, an exploratory data mining technique, differential sequence mining, was employed on data from a collaborative concept mapping activity in which collaboration among students was facilitated by an interactive tabletop. The results associate frequent patterns of the collaborative concept mapping process with some of the dimensions assessing the quality of collaboration. The association of these patterns with the dimensions of collaboration is theoretically grounded, considering aspects of collaborative learning, concept mapping, communication, group cognition, and information processing. The results are preliminary but still demonstrate the potential of associating frequent interaction patterns with strong and weak groups across specific dimensions of collaboration, which is relevant for students, teachers, and researchers monitoring the process of collaborative learning. The frequent patterns for strong groups reflected conformance to the process of conversation for dimensions related to the “communication” aspect of collaboration. In terms of concept mapping sub-processes, the frequent patterns for strong groups reflect the presentation phase of conversation, with processes such as talking and sharing individual maps while constructing the group’s concept map, followed by short utterances that represent the acceptance phase. For the “joint information processing” aspect of collaboration, the frequent patterns for strong groups were marked by learners building more upon each other’s work; in terms of concept mapping sub-processes, these patterns were marked by learners adding links to each other’s concepts or working with each other’s concepts while revising the group concept map.
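As a rough sketch of the differential sequence mining step (the action codes, pattern length, and threshold below are invented for illustration; the thesis's pipeline is richer), one can count the support of short action patterns in strong and weak groups and rank patterns by the support gap:

```python
def ngrams(seq, n=2):
    """All length-n action patterns occurring in one group's interaction log."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def support(groups, pattern, n=2):
    """Fraction of groups whose interaction sequence contains the pattern."""
    return sum(pattern in ngrams(g, n) for g in groups) / len(groups)

def differential_patterns(strong, weak, n=2, min_gap=0.3):
    """Patterns whose support differs most between strong and weak groups."""
    candidates = {p for g in strong + weak for p in ngrams(g, n)}
    scored = [(p, support(strong, p, n) - support(weak, p, n)) for p in candidates]
    return sorted((x for x in scored if abs(x[1]) >= min_gap), key=lambda x: -abs(x[1]))

# Hypothetical action codes: T=talk, S=share map, A=add link, U=short utterance
strong = [["T", "S", "U", "A"], ["T", "S", "A", "U"]]
weak   = [["A", "A", "U"], ["S", "A", "A"]]
print(differential_patterns(strong, weak))  # ('T', 'S') stands out for strong groups
```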
Contributors: Chaudhry, Rishabh (Author) / Walker, Erin A. (Thesis advisor) / Martinez-Maldonado, Roberto (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Software-as-a-Service (SaaS) has received significant attention in recent years as major computer companies such as Google, Microsoft, Amazon, and Salesforce adopt this new approach to developing software and systems. Cloud computing is a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamic, scalable, and virtualized manner. Computer simulation is widely utilized to analyze the behavior of software and to test it before full implementation. Simulation can further benefit SaaS applications in a cost-effective way by taking advantage of cloud properties such as customizability, configurability, and multi-tenancy.

This research introduces modeling, simulation, and analysis for Software-as-a-Service in the cloud. The research covers the following topics: service modeling, policy specification, code generation, dynamic simulation, timing analysis, and event and log analysis. Moreover, the framework integrates key advantages of the cloud: configurability, multi-tenancy, scalability, and recoverability.

The architecture is developed across the following chapters:

Multi-Tenancy Simulation Software-as-a-Service.

Policy Specification for MTA simulation environment.

Model Driven PaaS Based SaaS modeling.

Dynamic analysis and dynamic calibration for timing analysis.

Event-driven Service-Oriented Simulation Framework (a minimal event-loop sketch follows this list).

LTBD: A Triage Solution for SaaS.
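To make the event-driven simulation chapter above concrete, here is the kind of minimal discrete-event loop such a framework is typically built around; this is a generic sketch under assumed event names and handler behavior, not the dissertation's framework:

```python
import heapq

def simulate(events, handlers, horizon=100.0):
    """Minimal discrete-event loop: pop the earliest event, let its handler
    schedule follow-up events, and keep a log for later timing analysis."""
    queue = list(events)  # entries are (time, event_name, payload)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name, payload = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, name))
        # each handler returns a list of (delay, next_event, payload) to schedule
        for dt, nxt, p in handlers.get(name, lambda pl: [])(payload):
            heapq.heappush(queue, (t + dt, nxt, p))
    return log

handlers = {
    "request": lambda p: [(0.5, "service", p)],                     # request triggers service
    "service": lambda p: [(1.0, "response", p)] if p["ok"] else [],  # service may respond
}
print(simulate([(0.0, "request", {"ok": True})], handlers))
# [(0.0, 'request'), (0.5, 'service'), (1.5, 'response')]
```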
Contributors: Li, Wu (Author) / Tsai, Wei-Tek (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Ye, Jieping (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation is focused on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation, and classification. Imaging modalities including digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) are studied in the dissertation for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes other modalities as additional feature sources; both the original and the synthetic images are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis with an advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and transferring features across modalities. The applicability of RIED-Net is validated in breast cancer diagnosis and Alzheimer’s disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation, and classification, the third phase of the research develops a multi-task deep learning model. Specifically, a feature transfer enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task. The application of FT-MTL-Net to breast cancer detection, segmentation, and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new task of predicting the conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients’ conversion to AD with 3D MRI images.
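The residual encoder-decoder idea attributed to RIED-Net (skip connections preserving pixel-level information during cross-modality synthesis) can be caricatured in a few lines of PyTorch; this toy network is an assumption-heavy sketch, not the published architecture:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Conv block whose input is added back to its output (residual learning)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class TinyRIED(nn.Module):
    """Toy residual encoder-decoder: downsample, transform, upsample,
    with a long skip so pixel-level detail survives to the synthesized output."""
    def __init__(self, ch=16):
        super().__init__()
        self.inc = nn.Conv2d(1, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.mid = ResBlock(ch)
        self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)
    def forward(self, x):
        e = self.inc(x)
        d = self.up(self.mid(self.down(e)))
        return self.out(d + e)  # long skip: source-modality detail feeds the synthesis

x = torch.randn(1, 1, 64, 64)  # stand-in for one single-channel image patch
print(TinyRIED()(x).shape)     # torch.Size([1, 1, 64, 64])
```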
Contributors: Gao, Fei (Author) / Wu, Teresa (Thesis advisor) / Li, Jing (Committee member) / Yan, Hao (Committee member) / Patel, Bhavika (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Millions of users leave digital traces of their political engagements on social media platforms every day. Users form networks of interactions, produce textual content, and like and share each others' content. This creates an invaluable opportunity to better understand the political engagements of internet users. In this proposal, I present three algorithmic solutions to three facets of online political networks: detection of communities, detection of antagonisms, and quantification of the impact of certain types of accounts on political polarization. First, I develop a multi-view community detection algorithm to find politically pure communities. I find that, among the content types considered (e.g., hashtags, URLs), word usage best complements user interactions in accurately detecting communities.
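A bare-bones sketch of the multi-view idea (naive fusion by weighted averaging of view-specific graphs; the dissertation's algorithm is more principled, and the views and weights below are invented):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def fuse_views(views, weights):
    """Average edge weights across interaction/content views into one graph."""
    fused = nx.Graph()
    for view, w in zip(views, weights):
        for u, v, d in view.edges(data=True):
            prev = fused.get_edge_data(u, v, {"weight": 0.0})["weight"]
            fused.add_edge(u, v, weight=prev + w * d.get("weight", 1.0))
    return fused

interactions = nx.Graph([("a", "b"), ("c", "d")])            # e.g. retweet/mention view
word_usage = nx.Graph([("a", "b"), ("b", "c"), ("c", "d")])  # e.g. content-similarity view
fused = fuse_views([interactions, word_usage], weights=[0.6, 0.4])
print([sorted(c) for c in greedy_modularity_communities(fused, weight="weight")])
```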

Second, I focus on detecting negative linkages between politically motivated social media users. Major social media platforms do not offer their users built-in options for negative interactions, yet many political network analysis tasks rely on negative as well as positive linkages. Here, I present the SocLSFact framework to detect negative linkages among social media users. It utilizes three pieces of information: sentiment cues of textual interactions, positive interactions, and socially balanced triads. I evaluate the contribution of each of the three aspects to negative link detection performance on multiple tasks.

Third, I propose an experimental setup that quantifies the polarization impact of automated accounts on Twitter retweet networks. I focus on a dataset covering the tragic Parkland shooting and its aftermath. I show that when automated accounts are removed from the retweet network, network polarization decreases significantly, whereas when the same number of accounts is removed at random, the difference is not significant. I also find that the prominent predictors of engagement with automatically generated content differ little from what previous studies identify for engaging content on social media in general. Last but not least, I identify accounts that self-disclose their automated nature in their profiles using expressions such as bot, chat-bot, or robot. I find that human engagement with self-disclosing accounts is much smaller than with non-disclosing automated accounts. This observational finding can motivate further efforts in automated account detection research to prevent such accounts' unintended impact.
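The removal experiment can be sketched as follows, with a toy polarization score (modularity of a two-way graph partition) and an invented toy network; the dissertation's measurement is more careful, and this only demonstrates the flagged-vs-random comparison:

```python
import random
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection, modularity

def polarization(g):
    """Toy polarization score: modularity of a two-way partition of the network."""
    if g.number_of_edges() == 0:
        return 0.0
    return modularity(g, kernighan_lin_bisection(g))

def removal_effect(g, flagged, trials=20):
    """Polarization change from removing flagged accounts vs. random accounts."""
    base = polarization(g)
    flagged_drop = base - polarization(g.subgraph(set(g) - set(flagged)))
    rand_drops = []
    for _ in range(trials):
        removed = set(random.sample(list(g), len(flagged)))
        rand_drops.append(base - polarization(g.subgraph(set(g) - removed)))
    return flagged_drop, sum(rand_drops) / trials

g = nx.barbell_graph(6, 0)  # two dense camps joined by one bridge edge (toy stand-in)
bots = [5, 6]               # hypothetical accounts flagged as automated
print(removal_effect(g, bots))  # (change from removing bots, mean change from random)
```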
Contributors: Ozer, Mert (Author) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Sen, Arunabha (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The graph is a ubiquitous data structure that appears in a broad range of real-world scenarios. Accordingly, there has been a surge of research on representing and learning from graphs in order to accomplish various machine learning and graph analysis tasks. However, most of these efforts utilize only the graph structure, while nodes in real-world graphs usually come with a rich set of attributes. Typical examples of such nodes and their attributes are users and their profiles in social networks, scientific articles and their content in citation networks, protein molecules and their gene sets in biological networks, as well as web pages and their content on the Web. Utilizing node features in such graphs---attributed graphs---can alleviate the graph sparsity problem and help explain various phenomena (e.g., the motives behind the formation of communities in social networks). Therefore, further study of attributed graphs is required to take full advantage of node attributes.

In the wild, attributed graphs are usually unlabeled. Moreover, annotating data is an expensive and time-consuming process that suffers from many limitations, such as annotators’ subjectivity and issues of reproducibility and consistency. The challenges of data annotation and the growing abundance of unlabeled attributed graphs in various real-world applications create a significant demand for unsupervised learning on attributed graphs.

In this dissertation, I propose a set of novel models to learn from attributed graphs in an unsupervised manner. To better understand and represent nodes and communities in attributed graphs, I present different models at the node and community levels. At the node level, I utilize node features as well as the graph structure of attributed graphs to learn distributed representations of nodes, which can be useful in a variety of downstream machine learning applications. At the community level, with a focus on social media, I take advantage of both node attributes and the graph structure to discover not only communities but also their sentiment-driven profiles and inter-community relations (i.e., alliance, antagonism, or no relation). The discovered community profiles and relations help to better understand the structure and dynamics of social media.
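One bare-bones way to fold structure and attributes into unsupervised node representations (a sketch under simple linear-algebra assumptions, not the dissertation's models): stack a row-normalized adjacency matrix next to the attribute matrix and keep the top singular directions.

```python
import numpy as np

def embed_attributed_graph(adj, attrs, dim=2):
    """Unsupervised node embeddings from structure + attributes:
    row-normalize the adjacency, concatenate node attributes,
    and keep the top singular directions of the stacked matrix."""
    deg = adj.sum(axis=1, keepdims=True)
    walk = adj / np.maximum(deg, 1)   # one-step transition probabilities
    feats = np.hstack([walk, attrs])  # structure view next to attribute view
    u, s, _ = np.linalg.svd(feats, full_matrices=False)
    return u[:, :dim] * s[:dim]

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
attrs = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # e.g. profile keywords
print(embed_attributed_graph(adj, attrs))
```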
Contributors: Salehi, Amin (Author) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
There are more than 20 active missions exploring planets and small bodies beyond Earth in our solar system today. Many more have completed their journeys or will soon begin. Each spacecraft has a suite of instruments and sensors that provide a treasure trove of data that scientists use to advance our understanding of the past, present, and future of the solar system and universe. As more missions come online and the volume of data increases, it becomes more difficult for scientists to analyze these complex data at the desired pace. There is a need for systems that can rapidly and intelligently extract information from planetary instrument datasets and prioritize the most promising, novel, or relevant observations for scientific analysis. Machine learning methods can serve this need in a variety of ways: by uncovering patterns or features of interest in large, complex datasets that are difficult for humans to analyze; by inspiring new hypotheses based on structure and patterns revealed in data; or by automating tedious or time-consuming tasks. In this dissertation, I present machine learning solutions to enhance the tactical planning process for the Mars Science Laboratory Curiosity rover and future tactically-planned missions, as well as the science analysis process for archived and ongoing orbital imaging investigations such as the High Resolution Imaging Science Experiment (HiRISE) at Mars. These include detecting novel geology in multispectral images and active nuclear spectroscopy data, analyzing the intrinsic variability in active nuclear spectroscopy data with respect to elemental geochemistry, automating tedious image review processes, and monitoring changes in surface features such as impact craters in orbital remote sensing images. Collectively, this dissertation shows how machine learning can be a powerful tool for facilitating scientific discovery during active exploration missions and in retrospective analysis of archived data.
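One classic recipe for prioritizing novel observations in instrument data, in the spirit of this line of work though not necessarily the dissertation's exact method: model "typical" samples with PCA and rank candidates by how poorly the low-dimensional model reconstructs them.

```python
import numpy as np
from sklearn.decomposition import PCA

def novelty_scores(train, candidates, n_components=3):
    """Fit PCA on known-typical samples; candidates that the low-dimensional
    model reconstructs poorly are flagged for scientist review."""
    pca = PCA(n_components=n_components).fit(train)
    recon = pca.inverse_transform(pca.transform(candidates))
    return np.linalg.norm(candidates - recon, axis=1)

rng = np.random.default_rng(0)
typical = rng.normal(size=(200, 8))  # stand-in for typical multi-channel spectra
novel = typical[:5] + np.array([0, 0, 0, 0, 0, 0, 0, 6.0])  # anomalous last channel
scores = novelty_scores(typical, np.vstack([typical[:5], novel]))
print(scores.round(2))  # the perturbed rows should score higher
```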
Contributors: Kerner, Hannah Rae (Author) / Bell, James F. (Thesis advisor) / Ben Amor, Heni (Thesis advisor) / Wagstaff, Kiri L. (Committee member) / Hardgrove, Craig J. (Committee member) / Shirzaei, Manoochehr (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Causality analysis is the process of identifying cause-effect relationships among variables. This process is challenging because causal relationships cannot be tested solely on the basis of statistical indicators; additional information is always needed to reduce the ambiguity caused by factors beyond those covered by the statistical test. Traditionally, controlled experiments are carried out to identify causal relationships, but recently there has been growing interest in causality analysis with observational data due to the increasing availability of data and tools. This type of analysis often involves automatic algorithms that extract causal relations from large amounts of data and relies on expert judgment to scrutinize and verify the relations. Over-reliance on these automatic algorithms is dangerous because models trained on observational data are susceptible to bias that can be difficult to spot even with expert oversight. Visualization has proven to be effective at bridging the gap between human experts and statistical models by enabling interactive exploration and manipulation of the data and models. This thesis develops a visual analytics framework to support the interaction between human experts and automatic models in causality analysis. Three case studies were conducted to demonstrate the application of the visual analytics framework, showcasing feature engineering, insight generation, correlation analysis, and causality inspection.
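A tiny numerical illustration (with invented variables) of the point that statistical indicators alone cannot establish causation: a hidden confounder induces a strong correlation between two variables that have no causal link, and the association collapses once the confounder is controlled for.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=5000)             # hidden confounder
x = z + 0.3 * rng.normal(size=5000)   # caused by z
y = z + 0.3 * rng.normal(size=5000)   # also caused by z; no x -> y link

print(np.corrcoef(x, y)[0, 1])        # strong "raw" correlation (~0.9)

# Control for z by regressing it out of both variables: the residual
# correlation collapses, exposing the association as confounding.
bx = np.polyfit(z, x, 1)
by = np.polyfit(z, y, 1)
rx = x - (bx[0] * z + bx[1])
ry = y - (by[0] * z + by[1])
print(np.corrcoef(rx, ry)[0, 1])      # near zero
```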
Contributors: Wang, Hong, Ph.D. (Author) / Maciejewski, Ross (Thesis advisor) / He, Jingrui (Committee member) / Davulcu, Hasan (Committee member) / Thies, Cameron (Committee member) / Arizona State University (Publisher)
Created: 2019