Description
This research studies the computational performance of four different mixed integer programming (MIP) formulations for single machine scheduling problems with varying complexity. These formulations are based on (1) start and completion time variables, (2) time index variables, (3) linear ordering variables, and (4) assignment and positional date variables. The objective functions studied in this paper are total weighted completion time, maximum lateness, number of tardy jobs, and total weighted tardiness. Based on the computational results, discussion and recommendations are made on which MIP formulation might work best for these problems. The performance of these formulations depends strongly on the objective function, the number of jobs, and the sum of the processing times of all the jobs. Two sets of inequalities are presented that can be used to improve the performance of the formulation with assignment and positional date variables. Further, this research is extended to single machine bicriteria scheduling problems in which jobs belong to either of two disjoint sets, each set having its own performance measure. These problems have been referred to as interfering job sets in the scheduling literature and have also been called multi-agent scheduling, where each agent's objective function is to be minimized. In the first single machine interfering problem (P1), the criteria of minimizing total completion time and number of tardy jobs for the two sets of jobs are studied. A Forward SPT-EDD heuristic is presented that attempts to generate a set of non-dominated solutions. This specific problem is NP-hard. The computational efficiency of the heuristic is compared against the pseudo-polynomial algorithm proposed by Ng et al. [2006]. In the second single machine interfering job sets problem (P2), the criteria of minimizing total weighted completion time and maximum lateness are studied. This is an established NP-hard problem, for which a Forward WSPT-EDD heuristic is presented that attempts to generate a set of supported points; the solution quality is compared with MIP formulations. For both of these problems, all jobs are available at time zero and the jobs are not allowed to be preempted.
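As a concrete illustration of the first formulation class, below is a minimal sketch (not the dissertation's exact models) of a completion-time-variable MIP with big-M disjunctive ordering constraints for minimizing total weighted completion time on one machine; the job data and the use of the PuLP solver interface are assumptions of the example.

```python
# Sketch: 1 || sum w_j C_j via completion-time variables and big-M disjunctions.
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

p = {1: 3, 2: 2, 3: 4}   # processing times (illustrative)
w = {1: 2, 2: 1, 3: 3}   # weights (illustrative)
jobs = list(p)
M = sum(p.values())      # big-M: an upper bound on the schedule length

prob = LpProblem("single_machine_wct", LpMinimize)
C = {j: LpVariable(f"C_{j}", lowBound=p[j]) for j in jobs}   # completion times, C_j >= p_j
y = {(i, j): LpVariable(f"y_{i}_{j}", cat="Binary")          # 1 iff job i precedes job j
     for i, j in combinations(jobs, 2)}

prob += lpSum(w[j] * C[j] for j in jobs)                     # total weighted completion time
for i, j in combinations(jobs, 2):
    # Exactly one ordering holds; big-M deactivates the other constraint.
    prob += C[i] + p[j] <= C[j] + M * (1 - y[(i, j)])
    prob += C[j] + p[i] <= C[i] + M * y[(i, j)]

prob.solve()
print({j: C[j].value() for j in jobs})
```

The other three formulation classes trade this big-M structure for more variables (e.g., one binary per job-time pair in the time-index model) in exchange for tighter linear relaxations, which is one reason relative performance varies with the sum of processing times.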
Contributors: Khowala, Ketan (Author) / Fowler, John (Thesis advisor) / Keha, Ahmet (Thesis advisor) / Balasubramanian, Hari J (Committee member) / Wu, Teresa (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
As we migrate into an era of personalized medicine, understanding how bio-molecules interact with one another to form cellular systems is one of the key focus areas of systems biology. Several challenges, such as the dynamic nature of cellular systems, uncertainty due to environmental influences, and the heterogeneity between individual patients, render this a difficult task. In the last decade, several algorithms have been proposed to elucidate cellular systems from data, resulting in numerous data-driven hypotheses. However, due to the large number of variables involved in the process, many of which are unknown or not measurable, such computational approaches often lead to a high proportion of false positives. This renders interpretation of the data-driven hypotheses extremely difficult. Consequently, only a small proportion of these hypotheses are subjected to further experimental validation, eventually limiting their potential to augment existing biological knowledge. This dissertation develops a framework of computational methods for the analysis of such data-driven hypotheses, leveraging existing biological knowledge. Specifically, I show how biological knowledge can be mapped onto these hypotheses and subsequently augmented through novel hypotheses. Biological hypotheses are learnt at three levels of abstraction -- individual interactions, functional modules, and relationships between pathways -- corresponding to three complementary aspects of biological systems. The computational methods developed in this dissertation are applied to high-throughput cancer data, resulting in novel hypotheses with potentially significant biological impact.
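One simple way to make "mapping biological knowledge onto data-driven hypotheses" concrete, offered here as an illustrative sketch rather than the dissertation's actual framework, is to test whether a set of predicted interactions is enriched for interactions already present in a curated knowledge base; all counts below are invented.

```python
# Sketch: hypergeometric enrichment of predicted interactions against prior knowledge.
from scipy.stats import hypergeom

universe = 10_000   # candidate interactions considered (assumed)
known = 400         # interactions present in the curated knowledge base (assumed)
predicted = 50      # data-driven hypotheses produced by some algorithm (assumed)
overlap = 12        # predictions that match known interactions (assumed)

# P(overlap >= 12) if the 50 predictions were drawn at random from the universe;
# a small p-value suggests the hypotheses recover real, known biology.
p_value = hypergeom.sf(overlap - 1, universe, known, predicted)
print(f"enrichment p-value: {p_value:.3e}")
```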
Contributors: Ramesh, Archana (Author) / Kim, Seungchan (Thesis advisor) / Langley, Patrick W (Committee member) / Baral, Chitta (Committee member) / Kiefer, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Social networking platforms have redefined communication, serving as conduits for swift global information dissemination on contemporary topics and trends. This research probes information cascade (IC) dynamics, focusing on viral ICs, where user-shared information gains rapid, widespread attention. Implications of ICs span advertising, persuasion, opinion-shaping, and crisis response. First, this dissertation aims to unravel the context behind viral content, particularly in the realm of the digital world, introducing a semi-supervised taxonomy induction framework (STIF). STIF employs state-of-the-art term representation, topical phrase detection, and clustering to organize terms into a two-level topic taxonomy. Social scientists then assess the topic clusters for coherence and completeness. STIF proves effective, significantly reducing human coding effort (by up to 74%) while accurately inducing taxonomies and term-to-topic mappings due to the high purity of its topics. Second, to profile the drivers of virality, this study investigates messaging strategies influencing message virality. Three content-based hypotheses are formulated and tested, demonstrating that incorporation of "negativity bias," "causal arguments," and "threats to personal or societal core values" - singularly and jointly - significantly enhances message virality on social media, quantified by retweet counts. Furthermore, the study highlights framing narratives' pivotal role in shaping discourse, particularly in adversarial campaigns. An innovative pipeline for automatic framing detection is introduced and tested on a collection of texts on the Russia-Ukraine conflict. Integrating representation learning, overlapping graph clustering, and a unique Topic Actor Graph (TAG) synthesis method, the study achieves strong framing detection accuracy. The developed scoring mechanism maps sentences to automatically detect framing signatures. This pipeline attains an F1 score of 92% and a 95% weighted accuracy for framing detection on a real-world dataset. In essence, this dissertation focuses on the multidimensional exploration of information cascades, uncovering the context and drivers of content virality, and automating framing detection. Through innovative methodologies like STIF, messaging strategy analysis, and TAG Frames, the research contributes valuable insights into the mechanics of viral content spread and framing nuances within the digital landscape, enriching fields such as advertising, communication, public discourse, and crisis response strategies.
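As a rough illustration of two-level topic taxonomy induction (STIF itself uses state-of-the-art term representations and topical phrase detection; the TF-IDF vectors and toy phrase list below are stand-in assumptions), one can cluster terms into top-level topics and then subcluster within each:

```python
# Sketch: two-level taxonomy by nested clustering of term vectors.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

phrases = ["election fraud", "ballot audit", "vaccine mandate",
           "booster shot", "border wall", "asylum policy"]   # illustrative terms
X = TfidfVectorizer().fit_transform(phrases)

top = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
taxonomy = {}
for c in set(top):
    members = [i for i, t in enumerate(top) if t == c]
    # Second level: split each top-level topic into finer subtopics.
    sub = KMeans(n_clusters=min(2, len(members)), n_init=10,
                 random_state=0).fit_predict(X[members])
    taxonomy[c] = {s: [phrases[members[i]] for i in range(len(members))
                       if sub[i] == s] for s in set(sub)}
print(taxonomy)
```

In the described workflow, the human effort saved comes from coders reviewing and labeling these induced clusters rather than coding terms one by one.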
Contributors: Mousavi, Maryam (Author) / Davulcu, Hasan HD (Thesis advisor) / Li, Baoxin (Committee member) / Corman, Steven (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Speech analysis for clinical applications has emerged as a burgeoning field, providing valuable insights into an individual's physical and physiological state. Researchers have explored speech features for clinical applications, such as diagnosing, predicting, and monitoring various pathologies. Before presenting the new deep learning frameworks, this thesis introduces a study on conventional acoustic feature changes in subjects with post-traumatic headache (PTH) attributed to mild traumatic brain injury (mTBI). This work demonstrates the effectiveness of using speech signals to assess the pathological status of individuals. At the same time, it highlights some of the limitations of conventional acoustic and linguistic features, such as low repeatability and generalizability. Two critical characteristics of speech features are (1) good robustness, as speech features need to generalize across different corpora, and (2) high repeatability, as speech features need to be invariant to all confounding factors except the pathological state of targets. This thesis presents two research thrusts in the context of speech signals in clinical applications that focus on improving the robustness and repeatability of speech features, respectively. The first thrust introduces a deep learning framework to generate acoustic feature embeddings sensitive to vocal quality and robust across different corpora. A contrastive loss combined with a classification loss is used to train the model jointly, and data-warping techniques are employed to improve the robustness of embeddings. Empirical results demonstrate that the proposed method achieves high in-corpus and cross-corpus classification accuracy and generates good embeddings sensitive to voice quality and robust across different corpora. The second thrust introduces using the intra-class correlation coefficient (ICC) to evaluate the repeatability of embeddings. A novel regularizer, the ICC regularizer, is proposed to regularize deep neural networks to produce embeddings with higher repeatability. This ICC regularizer is implemented and applied to three speech applications: a clinical application, speaker verification, and voice style conversion. The experimental results reveal that the ICC regularizer improves the repeatability of learned embeddings compared to the contrastive loss, leading to enhanced performance in downstream tasks.
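The repeatability metric the second thrust builds on can be computed directly; below is a minimal NumPy sketch of ICC(2,1) (two-way random effects, absolute agreement) for one embedding dimension measured repeatedly per speaker. This illustrates the metric only; the ICC regularizer itself is a training-time variant, and the matrix layout and simulated data here are assumptions.

```python
# Sketch: ICC(2,1) from a (subjects x repeated measurements) matrix.
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    ss_rows = k * ((row_m - grand) ** 2).sum()    # between-subject variation
    ss_cols = n * ((col_m - grand) ** 2).sum()    # between-session variation
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
speakers = rng.normal(0, 1, (20, 1))              # per-speaker "true" value
Y = speakers + rng.normal(0, 0.3, (20, 5))        # 5 noisy repeated sessions each
print(icc_2_1(Y))                                 # near 1 => highly repeatable feature
```

An embedding dimension with ICC near 1 varies mostly between speakers (or pathological states) rather than between repeated sessions of the same speaker, which is exactly the invariance to confounds described above.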
Contributors: Zhang, Jianwei (Author) / Jayasuriya, Suren (Thesis advisor) / Berisha, Visar (Thesis advisor) / Liss, Julie (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In this thesis, applications of sparsity, specifically sparse tensors, are motivated in physics. An algorithm is introduced to natively compute partial traces of sparse tensors, along with direct implementations in popular Python libraries for immediate use. These applications include the infamous exponentially scaling (with system size) quantum many-body problems (both Heisenberg/spin-chain-like and chemical Hamiltonian models). This sparsity aspect is stressed as an important and essential feature in solving many real-world physical problems approximately and numerically, including the original motivation of solving radiation-damage questions for ultrafast light and electron sources.
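As a sketch of what a native sparse partial trace can look like (illustrative, not the thesis's implementation), the function below traces out the second subsystem of a density matrix stored in SciPy's COO format, touching only the stored nonzero entries; the bipartite dA x dB index layout is an assumption of the example.

```python
# Sketch: partial trace over subsystem B of a sparse dA*dB x dA*dB matrix.
from scipy.sparse import coo_matrix, random as sparse_random

def partial_trace_B(rho, dA, dB):
    rho = rho.tocoo()
    a,  b  = divmod(rho.row, dB)          # row index -> (A index, B index)
    a_, b_ = divmod(rho.col, dB)
    keep = b == b_                        # only B-diagonal entries survive the trace
    # Duplicate (a, a') pairs are summed on conversion, completing the sum over b.
    return coo_matrix((rho.data[keep], (a[keep], a_[keep])),
                      shape=(dA, dA)).tocsr()

rho = sparse_random(6, 6, density=0.3, random_state=0)   # dA=2, dB=3 (not normalized)
rA = partial_trace_B(rho, 2, 3)
assert abs(rA.diagonal().sum() - rho.diagonal().sum()) < 1e-12  # trace is preserved
```

Because the cost scales with the number of nonzeros rather than with (dA*dB)^2, this kind of operation stays tractable for the sparse many-body operators described above.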
Contributors: Candanedo, Julio (Author) / Beckstein, Oliver (Thesis advisor) / Arenz, Christian (Thesis advisor) / Keeler, Cynthia (Committee member) / Erten, Onur (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
From the earliest operatic spectacles to the towering Coachella-esque stages that dominate today’s music industry, there is no shortage of successful examples of artists combining music and visual art. The advancement of technology has created greater potential for these combinations today. Music curricula that aim to produce well-rounded graduates capable of realizing this potential need to adapt and teach students how to incorporate technology into performance. This paper presents two new courses that integrate technology with performance: Sound & Sight: A Practical Approach to Audio-Visual Performances; and Phase Music: An Introduction to Design and Fabrication. In Sound & Sight, students will learn how to “storyboard” pieces of music, realize that vision through object-oriented programming in Processing, and synchronize audio and visual elements in live performance settings using Ableton Live and Max. In Phase Music, students will be introduced to phase music, learn how to use Ableton Live to perform one of Steve Reich’s phase pieces or compose and perform their own piece of phase music, and design and build a custom Musical Instrument Digital Interface (MIDI) controller using Arduino, Adobe Illustrator, and Max. The document includes complete fifteen-week lesson plans for each course, detailing learning objectives, assignments, use of class time, original video coding tutorials, and lecture notes.
Contributors: Nguyen, Julian Tuan Anh (Author) / Swartz, Jonathan (Thesis advisor) / Thorn, Seth (Thesis advisor) / Navarro, Fernanda (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The management of underground utilities is a complex and challenging task due to the uncertainty regarding the location of existing infrastructure. The lack of accurate information often leads to excavation-related damage, which poses a threat to public safety. In recent years, advanced underground utilities management systems have been developed to improve the safety and efficiency of excavation work. This dissertation aims to explore the potential applications of blockchain technology in the management of underground utilities and the reduction of excavation-related damage. The literature review provides an overview of the current systems for managing underground infrastructure, including Underground Infrastructure Management (UIM) and 811, and highlights the benefits of advanced underground utilities management systems in enhancing safety and efficiency on construction sites. The review also examines the limitations and challenges of the existing systems and identifies opportunities for integrating blockchain technology to improve their performance. The proposed application involves the creation of a shared database of information about the location and condition of pipes, cables, and other underground infrastructure, which can be updated in real time by authorized users such as utility companies and government agencies. Blockchain technology can provide an additional layer of security and transparency to the system, ensuring the reliability and accuracy of the information. Contractors and excavation companies can access this information before commencing work, reducing the risk of accidental damage to underground utilities.
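To make the shared-ledger idea concrete, here is a toy hash-chain sketch, an illustration of tamper-evident record keeping rather than the proposed system's implementation; the record fields and identifiers are invented for the example.

```python
# Sketch: each utility-location update commits to the previous entry's hash,
# so any later tampering with earlier records is detectable.
import hashlib
import json
import time

def add_record(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

chain = []
add_record(chain, {"utility": "gas main", "depth_m": 1.2,
                   "lat": 33.42, "lon": -111.93, "updated_by": "utility_co_7"})
add_record(chain, {"utility": "fiber duct", "depth_m": 0.8,
                   "lat": 33.43, "lon": -111.94, "updated_by": "city_agency_2"})
print(chain[-1]["hash"])   # changing any earlier record breaks this hash
```

A permissioned blockchain extends this idea with distributed validation, so no single party (utility, agency, or contractor) can silently rewrite the location history.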
Contributors: Alnahari, Mohammed S (Author) / Ariaratnam, Samuel T (Thesis advisor) / El Asmar, Mounir (Committee member) / Czerniawski, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation is an examination of collective systems of computationally limited agents that require coordination to achieve complex ensemble behaviors or goals. The design of coordination strategies can be framed as multiagent optimization problems, which are addressed in this work from both theoretical and practical perspectives. The primary foci of this study are models where computation is distributed over the agents themselves, which are assumed to possess onboard computational capabilities. There exist many assumption variants for distributed models, including fairness and concurrency properties. In general, there is a fundamental trade-off whereby weakening model assumptions increases the applicability of proposed solutions, while also increasing the difficulty of proving theoretical guarantees. This dissertation aims to produce a deeper understanding of this trade-off with respect to multiagent optimization and scalability in distributed settings. This study considers four multiagent optimization problems. The model assumptions begin with fully centralized computation for the all-or-nothing multicommodity flow problem, then progress to synchronous distributed models through examination of the unmapped multivehicle routing problem and the distributed target localization problem. The final model is again distributed but assumes an unfair asynchronous adversary in the context of the energy distribution problem for programmable matter. For these problems, a variety of algorithms are presented, each of which is grounded in a theoretical foundation that permits formal guarantees regarding correctness, running time, and other critical properties. These guarantees are then validated with in silico simulations and (in some cases) physical experiments, demonstrating empirically that they may carry over to the real world. Hence, this dissertation bridges a portion of the predictability-practicality gap with respect to multiagent optimization problems.
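As an illustration of the synchronous distributed setting assumed for the middle two problems (a generic consensus sketch, not one of the dissertation's algorithms), the agents below fuse noisy target-position readings by iterated local averaging over an assumed ring topology.

```python
# Sketch: synchronous rounds of neighbor averaging for distributed target localization.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([3.0, -1.0])
estimates = target + rng.normal(0, 1.0, size=(6, 2))      # one noisy reading per agent
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # assumed communication graph

for _ in range(50):   # each round: exchange with neighbors, then update in lockstep
    estimates = np.stack([
        (estimates[i] + estimates[ring[i]].sum(axis=0)) / 3
        for i in range(6)])
print(estimates.round(3))   # all agents converge to the mean of the initial readings
```

The uniform symmetric weights make the update doubly stochastic, which is what guarantees convergence to the average here; under the unfair asynchronous adversary of the final problem, such lockstep rounds are no longer available and the analysis becomes substantially harder.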
Contributors: Weber, Jamison Wayne (Author) / Richa, Andréa W (Thesis advisor) / Bertsekas, Dimitri P (Committee member) / Murphey, Todd D (Committee member) / Jiang, Zilin (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Social media platforms provide a rich environment for analyzing user behavior. Recently, deep learning-based methods have been a mainstream approach for social media analysis models involving complex patterns. However, these methods are susceptible to biases in the training data, such as participation inequality: a mere 1% of users generate the majority of the content on social networking sites, while the remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent. These silent users consume and listen to information that is propagated on the platform. However, their voice, attitude, and interests are not reflected in the online content, predisposing the decisions of current methods toward the opinions of the active users; models can thus mistake the loudest users for the majority. To make the silent majority heard is to reveal the true landscape of the platform. In this dissertation, to compensate for this bias in the data, which stems from user-level data scarcity, I introduce three pieces of research work. Two of the proposed solutions deal with the data on hand, while the third tries to augment the current data. Specifically, the first proposed approach modifies the weight of users' activity/interaction in the input space, while the second approach re-weights the loss based on the users' activity levels during downstream task training. Lastly, the third approach uses large language models (LLMs) and learns the user's writing behavior to expand the current data. In other words, by utilizing LLMs as a sophisticated knowledge base, this method aims to augment the silent users' data.
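A minimal sketch of the second approach as described, re-weighting the downstream training loss by user activity level so that silent users are not drowned out; the damped inverse-activity weighting and the PyTorch framing are assumed concrete choices, not the dissertation's exact scheme.

```python
# Sketch: per-sample loss re-weighting by (inverse) user activity.
import torch
import torch.nn.functional as F

def activity_weighted_loss(logits, labels, activity_counts):
    # activity_counts: posts/interactions per user behind each sample
    w = 1.0 / torch.log1p(activity_counts.float())   # damped inverse activity
    w = w / w.mean()                                 # keep the loss scale stable
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (w * per_sample).mean()

logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
activity = torch.tensor([500, 3, 1, 120])            # loud vs. silent users
print(activity_weighted_loss(logits, labels, activity))
```

The effect is that errors on samples from low-activity users cost more, pushing the model away from simply fitting the most prolific 1%.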
Contributors: Karami, Mansooreh (Author) / Liu, Huan (Thesis advisor) / Sen, Arunabha (Committee member) / Davulcu, Hasan (Committee member) / Mancenido, Michelle V. (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Human civilization within the last two decades has largely transformed into an online one, with many of its associated activities taking place on computers and complex networked systems -- their analog and real-world equivalents having been rendered obsolete. These activities run the gamut from the ordinary and mundane, like ordering food, to the complex and large-scale, such as those involving critical infrastructure or global trade and communications. Unfortunately, the activities of human civilization also include criminal, adversarial, and malicious ones, with the result that these too now have their digital equivalents. Ransomware, malware, and targeted cyberattacks are a fact of life today and are instigated not only by organized criminal gangs but by adversarial nation-states and organizations as well. Needless to say, such actions result in disastrous and harmful real-world consequences. As the complexity and variety of software has evolved, so too has the ingenuity of attacks that exploit it; for example, modern cyberattacks typically involve sequential exploitation of multiple software vulnerabilities. Compared to a decade ago, modern software stacks on personal computers, laptops, servers, mobile phones, and even Internet of Things (IoT) devices involve a dizzying array of interdependent programs and software libraries, with each of these components presenting attractive attack surfaces for adversarial actors. However, the responses to this still rely on paradigms that can neither react quickly enough nor scale to increasingly dynamic, ever-changing, and complex software environments. Better approaches are therefore needed that can assess system readiness and vulnerabilities, identify potential attack vectors and strategies (including ways to counter them), and proactively detect vulnerabilities in complex software before they can be exploited. In this dissertation, I first present a mathematical model and associated algorithms to identify attacker strategies for sequential cyberattacks based on attacker state, attributes, and publicly available vulnerability information. Second, I extend the model and design algorithms to help identify defensive courses of action against attacker strategies. Finally, I present my work to enhance the ability of coverage-based fuzzers to identify software vulnerabilities by providing visibility into complex, internal program states.
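As a toy illustration of sequential attack-strategy identification (a simplification; the dissertation's model incorporates richer attacker attributes and public vulnerability data), the sketch below searches for the shortest exploit chain over attacker capability states, with invented exploit names and pre/postconditions.

```python
# Sketch: BFS over attacker states to find a shortest sequential exploit chain.
from collections import deque

exploits = {  # name: (preconditions, postconditions) -- all illustrative
    "phish":      (set(),            {"user_shell"}),
    "local_priv": ({"user_shell"},   {"root"}),
    "dump_creds": ({"root"},         {"db_creds"}),
    "db_exfil":   ({"db_creds"},     {"data_stolen"}),
}

def shortest_chain(start, goal):
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, chain = queue.popleft()
        if goal <= state:                      # goal capabilities achieved
            return chain
        for name, (pre, post) in exploits.items():
            if pre <= state:                   # exploit is applicable here
                nxt = frozenset(state | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [name]))
    return None

print(shortest_chain(set(), {"data_stolen"}))
# -> ['phish', 'local_priv', 'dump_creds', 'db_exfil']
```

Defensive course-of-action analysis can then be framed over the same state space, e.g., finding a minimal set of exploits to neutralize so that no chain to the goal remains.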
Contributors: Paliath, Vivin Suresh (Author) / Doupe, Adam (Thesis advisor) / Shoshitaishvili, Yan (Thesis advisor) / Wang, Ruoyu (Committee member) / Shakarian, Paulo (Committee member) / Arizona State University (Publisher)
Created: 2023