Matching Items (580)

Description
Finding the optimal solution to a problem with an enormous search space can be challenging. Unless a combinatorial construction technique is found that also guarantees the optimality of the resulting solution, this could be an infeasible task. If such a technique is unavailable, different heuristic methods are generally used to improve the upper bound on the size of the optimal solution. This dissertation presents an alternative method that can be used to improve a solution to a problem rather than construct a solution from scratch. Necessity analysis, which is the key to this approach, is the process of analyzing the necessity of each element in a solution. The post-optimization algorithm presented here utilizes the result of the necessity analysis to improve the quality of the solution by eliminating unnecessary objects from the solution. While this technique could potentially be applied to different domains, this dissertation focuses on k-restriction problems, where a solution to the problem can be represented as an array. A scalable post-optimization algorithm for covering arrays is described, which starts from a valid solution and performs necessity analysis to iteratively improve the quality of the solution. It is shown that not only can this technique improve upon the previously best known results, but it can also be added as a refinement step to any construction technique, and in most cases further improvements are expected. The post-optimization algorithm is then modified to accommodate any k-restriction problem, and this generic algorithm can be used as a starting point to create a reasonably sized solution for any such problem. This generic algorithm is then further refined for hash family problems by adding a conflict graph analysis to the necessity analysis phase. By recoloring the conflict graphs, a new degree of flexibility is explored, which can further improve the quality of the solution.
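The following Python sketch is an editorial illustration of the necessity-analysis idea only, not code from the dissertation: for a strength-t covering array, a row is unnecessary when every t-way (column, value) interaction it covers also appears in some other row, so it can be deleted. All names are hypothetical, and the actual post-optimization algorithm is far more elaborate and scalable.

```python
from itertools import combinations

def covered_interactions(row, t=2):
    """All t-way (column, value) interactions covered by a single row."""
    return {tuple((c, row[c]) for c in cols)
            for cols in combinations(range(len(row)), t)}

def post_optimize(array, t=2):
    """Greedy necessity analysis: repeatedly drop any row whose t-way
    interactions are all covered by the remaining rows."""
    rows = [list(r) for r in array]
    changed = True
    while changed:
        changed = False
        for i in range(len(rows)):
            others = rows[:i] + rows[i + 1:]
            elsewhere = set().union(*(covered_interactions(r, t) for r in others))
            if covered_interactions(rows[i], t) <= elsewhere:
                del rows[i]          # row i is unnecessary; remove it and restart
                changed = True
                break
    return rows

# Tiny example: the last row of this strength-2 array is redundant and is removed.
ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
print(post_optimize(ca))   # 4 rows remain
```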
Contributors: Nayeri, Peyman (Author) / Colbourn, Charles (Thesis advisor) / Konjevod, Goran (Thesis advisor) / Sen, Arunabha (Committee member) / Stanzione Jr, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Reverse engineering gene regulatory networks (GRNs) is an important problem in the domain of Systems Biology. Learning GRNs is challenging due to the inherent complexity of the real regulatory networks and the heterogeneity of samples in available biomedical data. Real-world biological data are commonly collected from broad surveys (profiling studies) and aggregate highly heterogeneous biological samples. Popular methods to learn GRNs simplistically assume a single universal regulatory network corresponding to available data. They neglect regulatory network adaptation due to changes in underlying conditions, cellular phenotype, or both. This dissertation presents a novel computational framework to learn common regulatory interactions and networks underlying the different sets of relatively homogeneous samples from real-world biological data. The characteristic set of samples/conditions and corresponding regulatory interactions defines the cellular context (context). Context, in this dissertation, represents the deterministic transcriptional activity within the specific cellular regulatory mechanism. The major contributions of this framework include: modeling and learning context-specific GRNs; associating enriched samples with contexts to interpret contextual interactions using biological knowledge; pruning extraneous edges from the context-specific GRN to improve the precision of the final GRNs; integrating multisource data to learn inter- and intra-domain interactions and increase confidence in obtained GRNs; and finally, learning combinatorial conditioning factors from the data to identify regulatory cofactors. The framework, Expattern, was applied to both real world and synthetic data. Interesting insights into the mechanism of action of drugs were obtained from analysis of NCI60 drug activity and gene expression data. Application to refractory cancer data and Glioblastoma multiforme yielded GRNs that were readily annotated with context-specific phenotypic information. Refractory cancer GRNs also displayed associations between distinct cancers, not observed through clustering alone. Performance comparisons on multi-context synthetic data show that Expattern performs better than other comparable methods.
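As a rough, editorial illustration of the general idea of context-specific network learning (this is not the Expattern framework; every name, the clustering step, and the correlation threshold are assumptions), one could partition samples into putative contexts and infer a simple co-expression network within each:

```python
import numpy as np
from sklearn.cluster import KMeans

def context_specific_networks(expr, n_contexts=3, edge_threshold=0.8):
    """Toy sketch: cluster samples into putative contexts, then keep gene pairs
    whose within-context Pearson correlation exceeds a threshold.
    expr: genes x samples expression matrix (NumPy array)."""
    labels = KMeans(n_clusters=n_contexts, n_init=10).fit_predict(expr.T)
    networks = {}
    for ctx in range(n_contexts):
        sub = expr[:, labels == ctx]
        if sub.shape[1] < 3:              # too few samples to correlate
            networks[ctx] = []
            continue
        corr = np.corrcoef(sub)
        n_genes = corr.shape[0]
        networks[ctx] = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
                         if abs(corr[i, j]) >= edge_threshold]
    return labels, networks
```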
Contributors: Sen, Ina (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Bittner, Michael (Committee member) / Konjevod, Goran (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation studies routing in small-world networks such as grids plus long-range edges and real networks. Kleinberg showed that geography-based greedy routing in a grid-based network takes an expected number of steps polylogarithmic in the network size, thus justifying the empirical efficiency observed beginning with Milgram. A counterpart for the grid-based model is provided; it creates all edges deterministically and shows an asymptotically matching upper bound on the route length. The main goal is to improve greedy routing through a decentralized machine learning process. The two methods considered are based on weighted majority and on an algorithm of de Farias and Megiddo, both learning from feedback using ensembles of experts. Tests are run on both artificial and real networks, with decentralized spectral graph embedding supplying geometric information for real networks where it is not intrinsically available. An important measure analyzed in this work is overpayment, the difference between the cost of the method and that of the shortest path. Adaptive routing overtakes greedy after about a hundred or fewer searches per node, consistently across different network sizes and types. Learning stabilizes, typically at an overpayment of one third to one half of that incurred by greedy routing. The problem is made more difficult by eliminating the knowledge of neighbors' locations or by introducing uncooperative nodes. Even under these conditions, the learned routes are usually better than the greedy routes. The second part of the dissertation is related to the community structure of unannotated networks. A modularity-based algorithm of Newman is extended to work with overlapping communities (including considerably overlapping communities), where each node locally decides which potential communities it belongs to. To measure the quality of a cover of overlapping communities, a notion of a node contribution to modularity is introduced, and subsequently the notion of modularity is extended from partitions to covers. The final part considers a problem of network anonymization, mostly by means of edge deletion. The point of interest is utility preservation. It is shown that a concentration on the preservation of routing abilities might damage the preservation of community structure, and vice versa.
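For reference, Kleinberg-style greedy geographic routing on an n x n grid with one long-range contact per node can be sketched as below. This is an editorial illustration only; `long_range` is a hypothetical table mapping each node to its long-range contact (in Kleinberg's model, drawn with probability proportional to an inverse power of distance). Overpayment, as defined in the abstract, would then be the path length minus the true shortest-path length.

```python
def greedy_route(source, target, long_range, n):
    """Greedy routing on an n x n grid augmented with one long-range contact
    per node: always forward to the neighbour closest to the target in
    Manhattan distance. Distance strictly decreases, so the walk terminates."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    path, cur = [source], source
    while cur != target:
        x, y = cur
        candidates = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < n and 0 <= y + dy < n]
        candidates.append(long_range[cur])     # the node's single long-range edge
        cur = min(candidates, key=lambda v: dist(v, target))
        path.append(cur)
    return path
```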
Contributors: Bakun, Oleg (Author) / Konjevod, Goran (Thesis advisor) / Richa, Andrea (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / Czygrinow, Andrzej (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For over a century, researchers have been investigating collective cognition, in which a group of individuals together process information and act as a single cognitive unit. However, little is still known about the circumstances under which groups achieve better (or worse) decisions than individuals. My dissertation research directly addressed this longstanding question, using the house-hunting ant Temnothorax rugatulus as a model system. Here I applied concepts and methods developed in psychology not only to individuals but also to colonies in order to investigate differences in their cognitive abilities. This approach is inspired by the superorganism concept, which sees a tightly integrated insect society as the analog of a single organism. I combined experimental manipulations and models to elucidate the emergent processes of collective cognition. My studies show that groups can achieve superior cognition by sharing the burden of option assessment among members and by integrating information from members using positive feedback. However, the same positive feedback can lock the group into a suboptimal choice in certain circumstances. Although ants are obligately social, my results show that they can be isolated and individually tested on cognitive tasks. In the future, this novel approach will help the field of animal behavior move towards a better understanding of collective cognition.
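A minimal positive-feedback simulation (an editorial toy model, not the dissertation's model; the qualities, the recruitment gain k, and the colony size are made-up parameters) shows how recruitment can amplify early random fluctuations and occasionally commit the whole group to the poorer option:

```python
import random

def collective_choice(quality_a=0.6, quality_b=0.4, n_ants=100, k=2.0):
    """Each step, one uncommitted ant commits to a nest with probability
    proportional to the nest's quality plus k times the fraction of
    nestmates already committed there (recruitment = positive feedback)."""
    committed = {"A": 0, "B": 0}
    while committed["A"] + committed["B"] < n_ants:
        attract_a = quality_a + k * committed["A"] / n_ants
        attract_b = quality_b + k * committed["B"] / n_ants
        choice = "A" if random.random() < attract_a / (attract_a + attract_b) else "B"
        committed[choice] += 1
    return committed

# With strong feedback (large k), repeated runs occasionally lock onto nest B
# even though its intrinsic quality is lower.
print(collective_choice())
```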
Contributors: Sasaki, Takao (Author) / Pratt, Stephen C (Thesis advisor) / Amazeen, Polemnia (Committee member) / Liebig, Jürgen (Committee member) / Janssen, Marco (Committee member) / Fewell, Jennifer (Committee member) / Hölldobler, Bert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The primary function of the medium access control (MAC) protocol is managing access to a shared communication channel. From the viewpoint of transmitters, the MAC protocol determines each transmitter's persistence, the fraction of time it is permitted to spend transmitting. Schedule-based schemes implement stable persistences, achieving low variation in delay and throughput, and sometimes bounding maximum delay. However, they adapt slowly, if at all, to changes in the network. Contention-based schemes are agile, adapting quickly to changes in perceived contention, but suffer from short-term unfairness, large variations in packet delay, and poor performance at high load. The perfect MAC protocol, it seems, embodies the strengths of both contention- and schedule-based approaches while avoiding their weaknesses. This thesis culminates in the design of a Variable-Weight and Adaptive Topology Transparent (VWATT) MAC protocol. The design of VWATT first required answers for two questions: (1) If a node is equipped with schedules of different weights, which weight should it employ? (2) How is the node to compute the desired weight in a network lacking centralized control? The first question is answered by the Topology- and Load-Aware (TLA) allocation which defines target persistences that conform to both network topology and traffic load. Simulations show the TLA allocation to outperform IEEE 802.11, improving on the expectation and variation of delay, throughput, and drop rate. The second question is answered in the design of an Adaptive Topology- and Load-Aware Scheduled (ATLAS) MAC that computes the TLA allocation in a decentralized and adaptive manner. Simulation results show that ATLAS converges quickly on the TLA allocation, supporting highly dynamic networks. With these questions answered, a construction based on transversal designs is given for a variable-weight topology transparent schedule that allows nodes to dynamically and independently select weights to accommodate local topology and traffic load. The schedule maintains a guarantee on maximum delay when the maximum neighbourhood size is not too large. The schedule is integrated with the distributed computation of ATLAS to create VWATT. Simulations indicate that VWATT offers the stable performance characteristics of a scheduled MAC while adapting quickly to changes in topology and traffic load.
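As a point of reference, a fixed-weight topology-transparent schedule can be illustrated with a classic polynomial-based construction over GF(p). This editorial sketch is closely related to, but not identical with, the transversal-design construction described above, and the function name is hypothetical:

```python
def topology_transparent_schedule(coeffs, p):
    """A node identified by a polynomial over GF(p) (coefficients in `coeffs`,
    lowest degree first) transmits in slot f(i) mod p during frame i, for p
    frames of p slots each. Two distinct polynomials of degree <= k agree in
    at most k frames, which bounds how often any two nodes can collide and
    underlies the kind of maximum-delay guarantee mentioned in the abstract."""
    def f(x):
        return sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p
    return [f(i) for i in range(p)]   # transmission slot in each of the p frames

# Example: the node with polynomial f(x) = 1 + 2x over GF(5)
print(topology_transparent_schedule([1, 2], 5))   # [1, 3, 0, 2, 4]
```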
Contributors: Lutz, Jonathan (Author) / Colbourn, Charles J (Thesis advisor) / Syrotiuk, Violet R. (Thesis advisor) / Konjevod, Goran (Committee member) / Lloyd, Errol L. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Semiconductor scaling technology has led to a sharp growth in transistor counts. This has resulted in an exponential increase in both power dissipation and heat flux (or power density) in modern microprocessors. These microprocessors are integrated as the major components in many modern embedded devices, which offer richer features and attain higher performance than ever before. Therefore, power and thermal management have become significant design considerations for modern embedded devices. Dynamic voltage/frequency scaling (DVFS) and dynamic power management (DPM) are two well-known hardware capabilities offered by modern embedded processors. However, power- or thermal-aware performance optimization has not been fully explored for mainstream embedded processors with discrete DVFS and DPM capabilities. Many key questions have not yet been answered. What is the maximum performance that an embedded processor can achieve under a power or thermal constraint for a periodic application? Does there exist an efficient algorithm for the power or thermal management problems with a guaranteed quality bound? These questions are hard to answer because the discrete settings of DVFS and DPM increase the complexity of many power and thermal management problems, which are generally NP-hard. The dissertation presents a comprehensive study on these NP-hard power and thermal management problems for embedded processors with discrete DVFS and DPM capabilities. In the domain of power management, the dissertation addresses the power minimization problem for real-time schedules, the energy-constrained make-span minimization problem on homogeneous and heterogeneous chip multiprocessor (CMP) architectures, and the battery-aware energy management problem with a nonlinear battery discharging model. In the domain of thermal management, the work addresses several thermal-constrained performance maximization problems for periodic embedded applications. All of the addressed problems are proved to be NP-hard or strongly NP-hard. The work then focuses on the design of off-line optimal algorithms or polynomial-time approximation algorithms as solutions across the problem design space. Several of the NP-hard problems are tackled by dynamic programming, yielding optimal solutions with pseudo-polynomial running time. Because the optimal algorithms are not efficient in the worst case, fully polynomial-time approximation schemes are provided as more efficient solutions. Some efficient heuristic algorithms are also presented as solutions to several of the problems. The comprehensive study answers the key questions in order to fully explore the power and thermal management potential of embedded processors with discrete DVFS and DPM capabilities. The provided solutions enable the theoretical analysis of the maximum performance for periodic embedded applications under power or thermal constraints.
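As a generic illustration of the kind of pseudo-polynomial dynamic program mentioned above (an editorial sketch, not the dissertation's formulation; the task model, the (frequency, power) level pairs, and all names are assumptions), one can pick a discrete DVFS level per task so as to minimize energy under an integer deadline:

```python
import math

def min_energy_schedule(tasks, levels, deadline):
    """tasks: cycle counts; levels: (frequency, power) pairs; deadline: integer
    time budget. best[t] = minimum energy of the tasks processed so far when
    their total (integer) execution time is exactly t."""
    INF = float("inf")
    best = [INF] * (deadline + 1)
    best[0] = 0.0
    for cycles in tasks:
        nxt = [INF] * (deadline + 1)
        for t in range(deadline + 1):
            if best[t] == INF:
                continue
            for freq, power in levels:
                dur = math.ceil(cycles / freq)        # discretised execution time
                energy = power * (cycles / freq)      # power x time at this level
                if t + dur <= deadline:
                    nxt[t + dur] = min(nxt[t + dur], best[t] + energy)
        best = nxt
    return min(best)   # infinity means the deadline cannot be met

# Two tasks, two DVFS levels, deadline of 10 time units -> 10.0 (both run slow)
print(min_energy_schedule([6, 4], [(1.0, 1.0), (2.0, 4.0)], 10))
```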
Contributors: Zhang, Sushu (Author) / Chatha, Karam S (Thesis advisor) / Cao, Yu (Committee member) / Konjevod, Goran (Committee member) / Vrudhula, Sarma (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The spread of invasive species may be greatly affected by human responses to prior species spread, but models and estimation methods seldom explicitly consider human responses. I investigate the effects of management responses on estimates of invasive species spread rates. To do this, I create an agent-based simulation model of an insect invasion across a county-level citrus landscape. My model provides an approximation of a complex spatial environment while allowing the "truth" to be known. The modeled environment consists of citrus orchards with insect pests dispersing among them. Insects move across the simulation environment, infesting orchards, while orchard managers respond by administering insecticide according to analyst-selected behavior profiles; management responses may depend on prior invasion states. Dispersal data is generated in each simulation and used to calculate spread rate via a set of estimators selected for their predominance in the empirical literature. Spread rate is a mechanistic, emergent phenomenon measured at the population level, caused by a suite of latent biological, environmental, and anthropogenic factors. I test the effectiveness of orchard behavior profiles on invasion suppression and evaluate the robustness of the estimators given orchard responses. I find that allowing growers to use future expectations of spread in management decisions leads to reduced spread rates. Acting in a preventative manner by applying insecticide before insects are actually present, orchards are able to lower spread rates more than by reactive behavior alone. Spread rates are highly sensitive to spatial configuration. Spatial configuration is hardly a random process, consisting of many latent factors often not accounted for in spread rate estimation. Not considering these factors may lead to omitted-variable bias and skew estimation results. The ability of spread rate estimators to predict future spread varies considerably between estimators, and with spatial configuration, invader biological parameters, and orchard behavior profile. The model suggests that understanding the latent factors inherent to dispersal is important for selecting phenomenological models of spread and interpreting estimation results. This indicates a need for caution when evaluating spread. Although standard practice, current empirical estimators may both over- and underestimate spread rate in the simulation.
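Two spread-rate estimators that are common in the empirical literature are shown below purely as illustrations (the dissertation's selected set of estimators is not reproduced here, and the function names are hypothetical): regressing the maximum invasion distance, or an effective radius derived from invaded area, against time.

```python
import numpy as np

def distance_regression_rate(years, max_distances):
    """Slope of a linear regression of maximum observed invasion distance on
    time, interpreted as a radial spread rate (distance per year)."""
    slope, _intercept = np.polyfit(years, max_distances, 1)
    return slope

def area_regression_rate(years, invaded_areas):
    """Slope of a linear regression of sqrt(area / pi) (an effective radius)
    on time, another common spread-rate estimator."""
    radii = np.sqrt(np.asarray(invaded_areas) / np.pi)
    slope, _intercept = np.polyfit(years, radii, 1)
    return slope
```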
Contributors: Shanafelt, David William (Author) / Fenichel, Eli P (Thesis advisor) / Richards, Timothy (Committee member) / Janssen, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This paper provides an analysis of the differences in impacts made by companies that promote their sustainability efforts. A comparison of companies reveals that those with greater supply chain influence and larger consumer bases can make more concrete progress in the sustainability realm.
Contributors: Beaubien, Courtney Lynn (Author) / Anderies, John (Thesis director) / Allenby, Brad (Committee member) / Janssen, Marco (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor)
Created: 2013-05
Description

Despite the fact that seizures are commonly associated with autism spectrum disorder (ASD), the effectiveness of treatments for seizures has not been well studied in individuals with ASD. This manuscript reviews both traditional and novel treatments for seizures associated with ASD. Studies were selected by systematically searching major electronic databases and by a panel of experts who treat individuals with ASD. Only a few anti-epileptic drugs (AEDs) have undergone carefully controlled trials in ASD, but these trials examined outcomes other than seizures. Several lines of evidence point to valproate, lamotrigine, and levetiracetam as the most effective and tolerable AEDs for individuals with ASD. Limited evidence supports the use of traditional non-AED treatments, such as the ketogenic and modified Atkins diets, multiple subpial transections, immunomodulation, and neurofeedback treatments. Although specific treatments may be more appropriate for specific genetic and metabolic syndromes associated with ASD and seizures, there are few studies that have documented the effectiveness of treatments for seizures for specific syndromes. Limited evidence supports l-carnitine, multivitamins, and N-acetyl-l-cysteine in mitochondrial disease and dysfunction, folinic acid in cerebral folate abnormalities, and early treatment with vigabatrin in tuberous sclerosis complex. Finally, there is limited evidence for a number of novel treatments, particularly magnesium with pyridoxine, omega-3 fatty acids, the gluten-free casein-free diet, and low-frequency repetitive transcranial magnetic stimulation. Zinc and l-carnosine are potential novel treatments supported by basic research but not clinical studies. This review demonstrates the wide variety of treatments used to treat seizures in individuals with ASD as well as the striking lack of clinical trials performed to support the use of these treatments. Additional studies concerning these treatments for controlling seizures in individuals with ASD are warranted.

Contributors: Frye, Richard E. (Author) / Rossignol, Daniel (Author) / Casanova, Manuel F. (Author) / Brown, Gregory L. (Author) / Martin, Victoria (Author) / Edelson, Stephen (Author) / Coben, Robert (Author) / Lewine, Jeffrey (Author) / Slattery, John C. (Author) / Lau, Chrystal (Author) / Hardy, Paul (Author) / Fatemi, S. Hossein (Author) / Folsom, Timothy D. (Author) / MacFabe, Derrick (Author) / Adams, James (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2013-09-13
Description

There is a growing body of scientific evidence that the health of the microbiome (the trillions of microbes that inhabit the human host) plays an important role in maintaining the health of the host and that disruptions in the microbiome may play a role in certain disease processes. An increasing number of research studies have provided evidence that the composition of the gut (enteric) microbiome (GM) in at least a subset of individuals with autism spectrum disorder (ASD) deviates from what is usually observed in typically developing individuals. Several lines of research suggest that specific changes in the GM could be causative of, or highly associated with driving, core and associated ASD symptoms, pathology, and comorbidities, which include gastrointestinal symptoms, although it is also possible that these changes, in whole or in part, are a consequence of underlying pathophysiological features associated with ASD. However, if the GM truly plays a causative role in ASD, then manipulation of the GM could potentially be leveraged as a therapeutic approach to improve ASD symptoms and/or comorbidities, including gastrointestinal symptoms.

One approach to investigating this possibility in greater detail includes a highly controlled clinical trial in which the GM is systematically manipulated to determine its significance in individuals with ASD. To outline the important issues that would be required to design such a study, a group of clinicians, research scientists, and parents of children with ASD participated in an interdisciplinary daylong workshop as an extension of the 1st International Symposium on the Microbiome in Health and Disease with a Special Focus on Autism (www.microbiome-autism.com). The group considered several aspects of designing clinical studies, including clinical trial design, treatments that could potentially be used in a clinical trial, appropriate ASD participants for the clinical trial, behavioral and cognitive assessments, important biomarkers, safety concerns, and ethical considerations. Overall, the group not only felt that this was a promising area of research for the ASD population and a promising avenue for potential treatment but also felt that further basic and translational research was needed to clarify the clinical utility of such treatments and to elucidate possible mechanisms responsible for a clinical response, so that new treatments and approaches may be discovered and/or fostered in the future.

Contributors: Frye, Richard E. (Author) / Slattery, John (Author) / MacFabe, Derrick F. (Author) / Allen-Vercoe, Emma (Author) / Parker, William (Author) / Rodakis, John (Author) / Adams, James (Author) / Krajmalnik-Brown, Rosa (Author) / Bolte, Ellen (Author) / Kahler, Stephen (Author) / Jennings, Jana (Author) / James, Jill (Author) / Cerniglia, Carl E. (Author) / Midtvedt, Tore (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-05-07