Matching Items (726)

Description

In an effort to address the lack of literature on on-campus active travel, this study investigates the following primary questions:
• What modes do students use to travel on campus?
• What motivations underlie students' mode choice on campus?
My first stage of research involved a series of qualitative investigations. I held one-on-one virtual interviews with students, asking about the mode they use and why they feel their chosen mode works best for them. These interviews served two functions. First, they provided insight into the various motivations underlying student mode choice. Second, they indicated which explanatory variables should be included in a model of mode choice on campus.
The first half of the project informed a quantitative survey, released via the Honors Digest to attract student respondents, which gathered data on travel behavior as well as the relevant explanatory variables.
My analysis involved developing a logit model to predict student mode choice on campus and presenting the model estimation alongside a discussion of student travel motivations based on the qualitative interviews. I use this information to recommend how campus infrastructure could be modified to better support the needs of the student population.
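
As a rough sketch of the modeling step described above, the example below fits a multinomial logit to synthetic data using Python's statsmodels. The predictors (distance_mi, owns_bike) and the three mode categories are invented stand-ins, not the survey's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey data: one row per student respondent.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "distance_mi": rng.uniform(0.1, 1.5, n),   # hypothetical trip distance
    "owns_bike": rng.integers(0, 2, n),        # hypothetical bike ownership
})

# Simulate mode choice (0 = walk, 1 = bike, 2 = scooter) from toy utilities.
utility = np.column_stack([
    -2.0 * df["distance_mi"],
    -0.5 * df["distance_mi"] + 1.5 * df["owns_bike"] - 1.0,
    -0.8 * df["distance_mi"] - 0.5,
])
probs = np.exp(utility) / np.exp(utility).sum(axis=1, keepdims=True)
df["mode"] = [rng.choice(3, p=p) for p in probs]

# Multinomial logit: P(mode = k | x) = exp(x'b_k) / sum_j exp(x'b_j).
X = sm.add_constant(df[["distance_mi", "owns_bike"]])
model = sm.MNLogit(df["mode"], X).fit(disp=False)
print(model.summary())
```

With real survey responses, the estimated coefficients would indicate how strongly each variable pushes students toward or away from each mode.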

Contributors: Mirtich, Laura Christine (Author) / Salon, Deborah (Thesis director) / Fang, Kevin (Committee member) / School of Public Affairs (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In this thesis I introduce a new direction in computing based on nonlinear chaotic dynamics. The main idea is that the rich dynamics of a chaotic system enable us to (1) build better computers that have a flexible instruction set, and (2) carry out computation that conventional computers are not good at. I start from the theory, explaining how one can build a computing logic block using a chaotic system, and then introduce a new theoretical analysis for chaos computing. Specifically, I demonstrate how unstable periodic orbits, and a model based on them, explain and predict how and how well a chaotic system can do computation. Furthermore, since unstable periodic orbits and their stability measures, in terms of eigenvalues, are extractable from experimental time series, I develop a time series technique for modeling and predicting chaos computing from a given time series of a chaotic system. After building a theoretical framework for chaos computing, I proceed to the architecture of these chaos-computing blocks, describing how one can arrange and organize them to build a sophisticated computing system. I propose a brand-new computer architecture based on chaos computing, which shifts the limits of conventional computers by introducing a flexible instruction set: the user can load a desired instruction set into the computer to reconfigure it into an implementation of that instruction set. Apart from the direct application of chaos theory to generic computation, the application of chaos theory to speech processing is explained, and a novel application of chaos theory to speech coding and synthesis is introduced. More specifically, it is demonstrated how a chaotic system can model the natural turbulent flow of air in the human speech production system, and how chaotic orbits can be used to excite a vocal tract model. Finally, as another approach to building computing systems from nonlinear systems, the idea of logical stochastic resonance is studied and adapted to an autoregulatory gene network in the bacteriophage λ.
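
As a toy illustration of the threshold-based chaos-computing idea (not the dissertation's actual construction), the sketch below encodes two input bits as perturbations of the initial condition of the fully chaotic logistic map and reads the logic output by thresholding one iterate; a brute-force search finds parameters realizing a NOR gate, assumed here only as an example target.

```python
import itertools
import numpy as np

def logistic(x, n):
    """Iterate the fully chaotic logistic map x -> 4x(1 - x) for n steps."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

def gate_output(x0, delta, a, b):
    """Encode bits a, b as a perturbation of x0; threshold one iterate."""
    return int(logistic(x0 + delta * (a + b), 1) > 0.5)

# Brute-force search for parameters realizing a chosen truth table (NOR).
NOR = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}
for x0, delta in itertools.product(np.linspace(0.05, 0.90, 86),
                                   np.linspace(0.01, 0.30, 30)):
    in_range = x0 + 2 * delta <= 1.0 + 1e-9   # keep the state inside [0, 1]
    if in_range and all(gate_output(x0, delta, a, b) == out
                        for (a, b), out in NOR.items()):
        print(f"NOR realized: x0 = {x0:.2f}, delta = {delta:.2f}")
        break
```

Because NOR is universal, a block like this can in principle be morphed into any other gate just by changing x0 and delta, which is the sense in which a chaotic element provides a flexible logic block.
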
Contributors: Kia, Behnam (Author) / Ditto, William (Thesis advisor) / Huang, Liang (Committee member) / Lai, Ying-Cheng (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Finding the optimal solution to a problem with an enormous search space can be challenging. Unless a combinatorial construction technique is found that also guarantees the optimality of the resulting solution, this can be an infeasible task. If such a technique is unavailable, different heuristic methods are generally used to improve the upper bound on the size of the optimal solution. This dissertation presents an alternative method, which can be used to improve a solution to a problem rather than construct a solution from scratch. Necessity analysis, the key to this approach, is the process of analyzing the necessity of each element in a solution. The post-optimization algorithm presented here utilizes the result of the necessity analysis to improve the quality of the solution by eliminating unnecessary objects from it. While this technique could potentially be applied to different domains, this dissertation focuses on k-restriction problems, where a solution to the problem can be presented as an array. A scalable post-optimization algorithm for covering arrays is described, which starts from a valid solution and performs necessity analysis to iteratively improve the quality of the solution. It is shown that not only can this technique improve upon the previously best known results, it can also be added as a refinement step to any construction technique, and in most cases further improvements can be expected. The post-optimization algorithm is then modified to accommodate every k-restriction problem, and this generic algorithm can be used as a starting point to create a reasonably sized solution for any such problem. The generic algorithm is then further refined for hash family problems by adding a conflict graph analysis to the necessity analysis phase. By recoloring the conflict graphs, a new degree of flexibility is explored, which can further improve the quality of the solution.
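
A minimal sketch of the necessity-analysis idea, specialized to covering arrays, is given below: a row is unnecessary if deleting it still leaves every t-way interaction covered. The brute-force coverage check is illustrative only, and the abstract's element-level analysis is finer-grained than this row-level toy; the dissertation's algorithm is designed to scale well beyond it.

```python
import itertools
import numpy as np

def covers_all(arr, t, v):
    """True if every t-way interaction of v symbols appears in some row."""
    k = arr.shape[1]
    for cols in itertools.combinations(range(k), t):
        seen = {tuple(row[list(cols)]) for row in arr}
        if len(seen) < v ** t:
            return False
    return True

def post_optimize(arr, t, v):
    """Greedy necessity analysis: drop any row whose removal keeps coverage."""
    for r in range(arr.shape[0] - 1, -1, -1):
        candidate = np.delete(arr, r, axis=0)
        if covers_all(candidate, t, v):
            arr = candidate
    return arr

# A strength-2 binary covering array on 3 columns, padded with a redundant row.
ca = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], [0, 0, 0]])
print(post_optimize(ca, t=2, v=2))   # the duplicate row is eliminated
```
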
Contributors: Nayeri, Peyman (Author) / Colbourn, Charles (Thesis advisor) / Konjevod, Goran (Thesis advisor) / Sen, Arunabha (Committee member) / Stanzione Jr, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Reverse engineering gene regulatory networks (GRNs) is an important problem in the domain of systems biology. Learning GRNs is challenging due to the inherent complexity of real regulatory networks and the heterogeneity of samples in available biomedical data. Real-world biological data are commonly collected from broad surveys (profiling studies) and aggregate highly heterogeneous biological samples. Popular methods to learn GRNs simplistically assume a single universal regulatory network corresponding to the available data; they neglect regulatory network adaptation due to changes in underlying conditions, cellular phenotype, or both. This dissertation presents a novel computational framework to learn the common regulatory interactions and networks underlying different sets of relatively homogeneous samples from real-world biological data. The characteristic set of samples/conditions and corresponding regulatory interactions defines the cellular context; context, in this dissertation, represents the deterministic transcriptional activity within a specific cellular regulatory mechanism. The major contributions of this framework include: modeling and learning context-specific GRNs; associating enriched samples with contexts to interpret contextual interactions using biological knowledge; pruning extraneous edges from the context-specific GRN to improve the precision of the final GRNs; integrating multisource data to learn inter- and intra-domain interactions and increase confidence in the obtained GRNs; and, finally, learning combinatorial conditioning factors from the data to identify regulatory cofactors. The framework, Expattern, was applied to both real-world and synthetic data. Interesting insights were obtained into the mechanism of action of drugs from analysis of NCI60 drug activity and gene expression data. Application to refractory cancer data and glioblastoma multiforme yields GRNs that are readily annotated with context-specific phenotypic information. The refractory cancer GRNs also displayed associations between distinct cancers that are not observed through clustering alone. Performance comparisons on multi-context synthetic data show that Expattern performs better than other comparable methods.
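
As a toy analogue of context-specific network learning (a simplification, not Expattern itself), the sketch below first clusters samples into putative contexts and then infers a separate thresholded co-expression network per context; the synthetic data, cluster count, and correlation threshold are all invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def context_specific_grns(expr, n_contexts=2, corr_thresh=0.8):
    """Cluster samples into contexts, then build one co-expression
    network (thresholded gene-gene correlation) per context."""
    contexts = KMeans(n_clusters=n_contexts, n_init=10,
                      random_state=0).fit_predict(expr)
    networks = {}
    for c in range(n_contexts):
        sub = expr[contexts == c]               # samples in this context
        corr = np.corrcoef(sub, rowvar=False)   # gene-gene correlation
        edges = np.argwhere(np.triu(np.abs(corr) > corr_thresh, k=1))
        networks[c] = [tuple(int(i) for i in e) for e in edges]
    return contexts, networks

# Synthetic data: 100 samples x 5 genes, two contexts with different couplings.
rng = np.random.default_rng(1)
a = rng.normal(loc=3.0, size=(50, 5))
a[:, 1] = a[:, 0] + 0.1 * rng.normal(size=50)   # genes 0-1 coupled in context A
b = rng.normal(size=(50, 5))
b[:, 3] = b[:, 2] + 0.1 * rng.normal(size=50)   # genes 2-3 coupled in context B
_, nets = context_specific_grns(np.vstack([a, b]))
print(nets)
```

Running it prints one edge list per recovered context: the (0, 1) coupling appears in only one context and (2, 3) in only the other, which a single universal network would conflate.
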
Contributors: Sen, Ina (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Bittner, Michael (Committee member) / Konjevod, Goran (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation studies routing in small-world networks such as grids augmented with long-range edges, as well as real networks. Kleinberg showed that geography-based greedy routing in a grid-based network takes an expected number of steps polylogarithmic in the network size, thus justifying the empirical efficiency observed since Milgram's experiments. A counterpart for the grid-based model is provided; it creates all edges deterministically and admits an asymptotically matching upper bound on the route length. The main goal is to improve greedy routing through a decentralized machine learning process. The two methods considered are based on weighted majority and on an algorithm of de Farias and Megiddo, both of which learn from feedback using ensembles of experts. Tests are run on both artificial and real networks, with decentralized spectral graph embedding supplying geometric information for real networks where it is not intrinsically available. An important measure analyzed in this work is overpayment, the difference between the cost of the method and that of the shortest path. Adaptive routing overtakes greedy routing after about a hundred or fewer searches per node, consistently across different network sizes and types. Learning stabilizes, typically at an overpayment of one third to one half of greedy's. The problem is made more difficult by eliminating knowledge of neighbors' locations or by introducing uncooperative nodes; even under these conditions, the learned routes are usually better than the greedy routes. The second part of the dissertation concerns the community structure of unannotated networks. A modularity-based algorithm of Newman is extended to work with overlapping communities (including considerably overlapping ones), where each node locally decides which potential communities it belongs to. To measure the quality of a cover of overlapping communities, a notion of a node's contribution to modularity is introduced, and the notion of modularity is then extended from partitions to covers. The final part considers the problem of network anonymization, mostly by means of edge deletion, with utility preservation as the point of interest. It is shown that concentrating on the preservation of routing abilities can damage the preservation of community structure, and vice versa.
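
The sketch below illustrates the baseline being improved upon: greedy geographic routing on a Kleinberg-style grid, where each node receives one long-range edge drawn with probability proportional to d^-2 and every step forwards to the neighbor closest to the destination. The grid size and random seed are arbitrary choices for the demo.

```python
import random

def greedy_route(n, long_edges, src, dst):
    """Forward greedily: always hand off to the neighbor closest to dst."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance
    path, cur = [src], src
    while cur != dst:
        x, y = cur
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        nbrs += long_edges.get(cur, [])
        cur = min(nbrs, key=lambda v: dist(v, dst))
        path.append(cur)
    return path

# Kleinberg's model: each node gets one long-range edge whose endpoint is
# drawn with probability proportional to distance^-2, the exponent for
# which greedy routing is polylogarithmic.
random.seed(0)
n = 20
nodes = [(x, y) for x in range(n) for y in range(n)]
long_edges = {}
for u in nodes:
    others = [v for v in nodes if v != u]
    weights = [(abs(u[0] - v[0]) + abs(u[1] - v[1])) ** -2 for v in others]
    long_edges[u] = [random.choices(others, weights=weights)[0]]

print(greedy_route(n, long_edges, (0, 0), (n - 1, n - 1)))
```

The learned-routing methods described above would replace the fixed min-distance rule with a feedback-weighted ensemble of forwarding heuristics.
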
Contributors: Bakun, Oleg (Author) / Konjevod, Goran (Thesis advisor) / Richa, Andrea (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / Czygrinow, Andrzej (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Defines the concept of the arcology as conceived by architect Paolo Soleri. Arcology combines "architecture" and "ecology" and explores a visionary notion of a self-contained urban community that has agricultural, commercial, and residential facilities under one roof. Two real-world examples of these projects are explored: Arcosanti, AZ and Masdar City, Abu Dhabi, UAE. Key aspects of the arcology that could be applied to an existing urban fabric are identified, such as urban design fostering social interaction, reduction of automobile dependency, and a development pattern that combats sprawl. Through interviews with local representatives, a holistic approach to applying arcology concepts to the Phoenix Metro Area is devised.
Contributors: Spencer, Sarah Anne (Author) / Manuel-Navarrete, David (Thesis director) / Salon, Deborah (Committee member) / Barrett, The Honors College (Contributor) / School of Geographical Sciences and Urban Planning (Contributor) / School of Sustainability (Contributor)
Created: 2015-05
Description
The primary function of the medium access control (MAC) protocol is managing access to a shared communication channel. From the viewpoint of transmitters, the MAC protocol determines each transmitter's persistence, the fraction of time it is permitted to spend transmitting. Schedule-based schemes implement stable persistences, achieving low variation in delay and throughput, and sometimes bounding maximum delay. However, they adapt slowly, if at all, to changes in the network. Contention-based schemes are agile, adapting quickly to changes in perceived contention, but suffer from short-term unfairness, large variations in packet delay, and poor performance at high load. The perfect MAC protocol, it seems, embodies the strengths of both contention- and schedule-based approaches while avoiding their weaknesses. This thesis culminates in the design of a Variable-Weight and Adaptive Topology Transparent (VWATT) MAC protocol. The design of VWATT first required answers for two questions: (1) If a node is equipped with schedules of different weights, which weight should it employ? (2) How is the node to compute the desired weight in a network lacking centralized control? The first question is answered by the Topology- and Load-Aware (TLA) allocation which defines target persistences that conform to both network topology and traffic load. Simulations show the TLA allocation to outperform IEEE 802.11, improving on the expectation and variation of delay, throughput, and drop rate. The second question is answered in the design of an Adaptive Topology- and Load-Aware Scheduled (ATLAS) MAC that computes the TLA allocation in a decentralized and adaptive manner. Simulation results show that ATLAS converges quickly on the TLA allocation, supporting highly dynamic networks. With these questions answered, a construction based on transversal designs is given for a variable-weight topology transparent schedule that allows nodes to dynamically and independently select weights to accommodate local topology and traffic load. The schedule maintains a guarantee on maximum delay when the maximum neighbourhood size is not too large. The schedule is integrated with the distributed computation of ATLAS to create VWATT. Simulations indicate that VWATT offers the stable performance characteristics of a scheduled MAC while adapting quickly to changes in topology and traffic load.
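
For intuition about how topology-transparent schedules guarantee collision-free slots, the sketch below uses the classic polynomial-over-GF(p) construction that transversal-design schedules generalize: two distinct polynomials of degree at most d agree on at most d points, so any two nodes collide in at most d of the p subframes. This is background illustration under assumed parameters, not VWATT's variable-weight construction.

```python
import itertools

def tt_schedule(p, d, node_id):
    """Slot assignment from a degree-<=d polynomial over GF(p): the node's
    id digits (base p) are the coefficients, and in subframe j the node
    transmits in slot f(j) mod p."""
    coeffs = [(node_id // p ** i) % p for i in range(d + 1)]
    return [sum(c * j ** i for i, c in enumerate(coeffs)) % p for j in range(p)]

# Distinct polynomials of degree <= d agree on at most d points, so any two
# nodes collide in at most d of the p subframes.
p, d = 5, 1
schedules = {i: tt_schedule(p, d, i) for i in range(p ** (d + 1))}
worst = max(sum(s == t for s, t in zip(schedules[u], schedules[v]))
            for u, v in itertools.combinations(schedules, 2))
print(f"max pairwise collisions: {worst} (theory bound: {d})")
```

A node with D neighbors therefore keeps at least p - D*d collision-free transmissions per frame whenever D*d < p, which is the kind of maximum-delay guarantee mentioned above.
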
Contributors: Lutz, Jonathan (Author) / Colbourn, Charles J (Thesis advisor) / Syrotiuk, Violet R. (Thesis advisor) / Konjevod, Goran (Committee member) / Lloyd, Errol L. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Contemporary architectural pedagogy is far removed from its ancestry: the classical Beaux-Arts and polytechnic schools of the 19th century and the Bauhaus and Vkhutemas models of the modern period. Today, the "digital" has invaded the academy and shapes the pedagogical practices, epistemologies, and ontologies within it, an invasion reflected in teaching practices, principles, and tools. Much of this digital integration goes unremarked and may not even be explicitly taught. In this qualitative research project, interviews were conducted with 18 leading architecture lecturers, professors, and deans from programs across the United States. These interviews focused on advanced practices of digital architecture, such as the use of digital tools, and on how these practices are viewed. They yielded a wealth of information about the uses (and abuses) of advanced digital technologies within the architectural academy, and the results were analyzed using the methods of phenomenology and grounded theory. Most schools use digital technologies to some extent, although that extent varies greatly. While some schools have abandoned hand-drawing and other hand-based craft almost entirely, others have retained traditional techniques and use digital technologies sparingly. Reasons for using digital design processes include industry pressure as well as the increased ability to solve problems and the speed with which they can be solved. Despite the prevalence of digital design, most programs did not teach the related design software explicitly, if at all, instead requiring students (especially graduate students) to learn these tools outside the design studio. Problems with digital design identified in the interviews include social problems such as alienation, as well as issues with understanding scale and the embodiment of skill.
Contributors: Alqabandy, Hamad (Author) / Brandt, Beverly (Thesis advisor) / Mesch, Claudia (Committee member) / Newton, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Semiconductor scaling technology has led to a sharp growth in transistor counts. This has resulted in an exponential increase in both power dissipation and heat flux (or power density) in modern microprocessors. These microprocessors are integrated as the major components in many modern embedded devices, which offer richer features and attain higher performance than ever before. Power and thermal management have therefore become significant design considerations for modern embedded devices. Dynamic voltage/frequency scaling (DVFS) and dynamic power management (DPM) are two well-known hardware capabilities offered by modern embedded processors. However, power- and thermal-aware performance optimization is not fully explored for mainstream embedded processors with discrete DVFS and DPM capabilities, and many key questions remain unanswered. What is the maximum performance that an embedded processor can achieve under a power or thermal constraint for a periodic application? Does there exist an efficient algorithm with a guaranteed quality bound for these power or thermal management problems? These questions are hard to answer because the discrete settings of DVFS and DPM increase the complexity of many power and thermal management problems, which are generally NP-hard. This dissertation presents a comprehensive study of these NP-hard power and thermal management problems for embedded processors with discrete DVFS and DPM capabilities. In the domain of power management, the dissertation addresses the power minimization problem for real-time schedules, the energy-constrained makespan minimization problem on homogeneous and heterogeneous chip multiprocessor (CMP) architectures, and the battery-aware energy management problem with a nonlinear battery discharging model. In the domain of thermal management, the work addresses several thermal-constrained performance maximization problems for periodic embedded applications. All the addressed problems are proved to be NP-hard or strongly NP-hard. The work then focuses on the design of off-line optimal algorithms and polynomial-time approximation algorithms. Several of the NP-hard problems are tackled by dynamic programming, yielding optimal solutions with pseudo-polynomial running time. Because the optimal algorithms are inefficient in the worst case, fully polynomial-time approximation algorithms are provided as more efficient solutions, and efficient heuristic algorithms are also presented for several of the addressed problems. The comprehensive study answers the key questions needed to fully explore the power and thermal management potential of embedded processors with discrete DVFS and DPM capabilities, and the provided solutions enable theoretical analysis of the maximum performance of periodic embedded applications under power or thermal constraints.
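
As a small example of the pseudo-polynomial dynamic programming flavor used for several of these problems, the sketch below selects a discrete DVFS level per task to minimize energy under a hard deadline; the task set, frequency levels, and power values are invented, and real formulations also model DPM sleep states.

```python
TASK_CYCLES = [8, 6, 10]        # cycles per task (hypothetical)
LEVELS = [(1, 1.0), (2, 4.0)]   # (frequency, power); power grows superlinearly
DEADLINE = 16                   # available time units (hypothetical)

INF = float("inf")

def min_energy(tasks, levels, deadline):
    """dp[t] = minimum energy to finish the tasks processed so far in time t."""
    dp = [0.0] + [INF] * deadline
    for cycles in tasks:
        nxt = [INF] * (deadline + 1)
        for t, e in enumerate(dp):
            if e == INF:
                continue
            for freq, power in levels:
                dur = -(-cycles // freq)        # ceil(cycles / freq)
                if t + dur <= deadline and e + power * dur < nxt[t + dur]:
                    nxt[t + dur] = e + power * dur
        dp = nxt
    best = min(dp)
    return None if best == INF else best

print(min_energy(TASK_CYCLES, LEVELS, DEADLINE))   # -> 40.0
```

The table indexed by elapsed time is what makes the running time pseudo-polynomial: it grows with the numeric value of the deadline rather than with the input length.
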
Contributors: Zhang, Sushu (Author) / Chatha, Karam S (Thesis advisor) / Cao, Yu (Committee member) / Konjevod, Goran (Committee member) / Vrudhula, Sarma (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In the middle of the 20th century, juried annuals of Native American painting in art museums were unique opportunities because of their select focus on two-dimensional art as opposed to "craft" objects and their inclusion of artists from across the United States. Their first fifteen years were critical for patronage and for widespread acceptance of modern easel painting. Held at the Philbrook Art Center in Tulsa (1946-1979), the Denver Art Museum (1951-1954), and the Museum of New Mexico Art Gallery in Santa Fe (1956-1965), they were significant not only for the accolades and prestige they garnered for award winners, but also for setting the standards of quality and style of the time. During the early years of the annuals, the art was changing, with some artists moving away from conventional forms derived from the early art training of the 1920s and '30s in the Southwest and Oklahoma and incorporating modern themes and styles acquired through expanded opportunities for travel and education. The competitions reinforced and reflected a variety of attitudes about contemporary art, which ranged from preserving the authenticity of the traditional style to encouraging experimentation. The museums that hosted annuals ultimately became sites of conflict as they contested the directions in which artists were working. Exhibition catalogs, archived documents, and newspaper and magazine articles about the annuals provide details on the exhibits and on the changes that occurred over time. The museums' guidelines and motivations, and the statistics on the award winners, reveal attitudes toward the art. The institutions' reactions in the face of controversy and their adjustments to the annuals' guidelines impart the compromises each made as it adapted to new trends in Native American painting over the fifteen-year period. This thesis compares the approaches of the three museums to their juried annuals and establishes the existence of a variety of attitudes toward contemporary Native American painting from 1946 to 1960. Through this collection of institutional views, the competitions maintained a patronage base for traditional-style painting while providing opportunities for experimentation, paving the way for the great variety and artistic progress of Native American painting today.
Contributors: Peters, Stephanie (Author) / Duncan, Kate (Thesis advisor) / Fahlman, Betsy (Thesis advisor) / Mesch, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2012