Matching Items (87)
Description

Imaging genetics is an emerging and promising technique that investigates how genetic variations affect brain development, structure, and function. By exploiting disorder-related neuroimaging phenotypes, this class of studies provides a novel direction for revealing and understanding complex genetic mechanisms. Imaging genetics studies are often challenging due to the relatively small number of subjects and the extremely high dimensionality of both the imaging and genomic data. In this dissertation, I carry out research on imaging genetics with a particular focus on two tasks: building predictive models between neuroimaging data and genomic data, and identifying disorder-related genetic risk factors through image-based biomarkers. To this end, I consider a suite of structured sparse methods, which produce interpretable models and are robust to overfitting, for imaging genetics. With carefully designed sparsity-inducing regularizers, different biological priors are incorporated into the learning models. More specifically, in the Allen brain image-gene expression study, I adopt an advanced sparse coding approach for image feature extraction and employ a multi-task learning approach for multi-class annotation. Moreover, I propose a label-structure-based two-stage learning framework, which utilizes the hierarchical structure among labels, for multi-label annotation. In the Alzheimer's Disease Neuroimaging Initiative (ADNI) imaging genetics study, I employ the Lasso together with EDPP (enhanced dual polytope projections) screening rules to rapidly identify Alzheimer's disease risk SNPs. I also adopt the tree-structured group Lasso with MLFre (multi-layer feature reduction) screening rules to incorporate linkage disequilibrium information into the modeling. Moreover, I propose a novel absolute fused Lasso model for ADNI imaging genetics. This method utilizes SNP spatial structure and is robust to the choice of reference alleles in genotype coding. In addition, I propose a two-level structured sparse model that incorporates gene-level networks, through a graph penalty, into SNP-level model construction. Lastly, I explore a convolutional neural network approach for accurately predicting Alzheimer's disease-related imaging phenotypes. Experimental results on real-world imaging genetics applications demonstrate the efficiency and effectiveness of the proposed structured sparse methods.
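As a rough illustration of the safe screening idea, the sketch below implements the EDPP test for the Lasso following the published rule of Wang, Wonka, and Ye; the NumPy layout and function name are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def edpp_discard(X, y, lam, lam0, theta0):
    """EDPP safe screening for the Lasso (1/2)||y - Xb||^2 + lam*||b||_1.
    Given the dual optimum theta0 at a larger parameter lam0 >= lam, returns
    a boolean mask of features guaranteed to have zero coefficients at lam."""
    lam_max = np.max(np.abs(X.T @ y))
    if np.isclose(lam0, lam_max):
        j = np.argmax(np.abs(X.T @ y))            # most correlated feature
        v1 = np.sign(X[:, j] @ y) * X[:, j]
    else:
        v1 = y / lam0 - theta0
    v2 = y / lam - theta0
    v2_perp = v2 - (v1 @ v2) / (v1 @ v1) * v1     # component orthogonal to v1
    center = theta0 + 0.5 * v2_perp               # ball containing the dual optimum
    radius = 0.5 * np.linalg.norm(v2_perp)
    return np.abs(X.T @ center) < 1.0 - radius * np.linalg.norm(X, axis=0)
```

Features flagged by the test can be dropped before solving, which is what makes screening attractive for genome-scale SNP data.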
Contributors: Yang, Tao (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Thesis advisor) / He, Jingrui (Committee member) / Li, Baoxin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Volumetric cell imaging using 3D optical computed tomography (cell CT) is advantageous for the identification and characterization of cancer cells. Many diseases arise from genomic changes, some of which are manifest at the cellular level in cytostructural and protein expression (functional) features that can be resolved, captured, and quantified in 3D far more sensitively and specifically than in traditional 2D microscopy. Live single cells were rotated about an axis perpendicular to the optical axis to facilitate data acquisition for functional live cell CT imaging. The goal of this thesis research was to optimize and characterize the microvortex rotation chip. Initial efforts concentrated on optimizing the microfabrication process in terms of time (6-8 hours vs. 12-16 hours), yield (100% vs. 40-60%), and ease of repeatability. This was done using a tilted exposure lithography technique, as opposed to the backside diffuser photolithography (BDPL) method used previously (Myers 2012) (Chang and Yoon 2004). The fabrication parameters for the earlier BDPL technique were also optimized to improve its reliability. A new PDMS-to-PDMS demolding process (soft lithography) was implemented, greatly improving flexibility in demolding and raising the yield to 100%, up from 20-40%. A new pump and flow sensor assembly was specified, tested, procured, and set up, allowing for both pressure-control and flow-control (feedback-control) modes, while retaining the best features of a previous, purpose-built pump assembly. Pilot experiments were performed to obtain the flow rate regime required for cell rotation. These experiments also allowed for the determination of the optimal trapezoidal neck widths (opening to the main flow channel) to be used for cell rotation characterization. The optimal optical trap forces were experimentally estimated in order to minimize the required optical power incident on the cell. Finally, the relationships between (main channel) flow rates and cell rotation rates were quantified for different trapezoidal chamber dimensions and at predetermined constant values of laser trapping strength, allowing for parametric characterization of the system.
Contributors: Shetty, Rishabh M (Author) / Meldrum, Deirdre R (Thesis advisor) / Johnson, Roger H (Committee member) / Tillery, Stephen H (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

While network problems have traditionally been addressed within a central administrative domain with a single objective, the devices in most networks are not owned by a single entity but by many individual entities. These entities make their decisions independently and selfishly, and may cooperate with a small group of other entities only when this form of coalition yields a better return. The interaction among multiple independent decision-makers necessitates the use of game theory, including economic notions related to markets and incentives. In this dissertation, we are interested in modeling, analyzing, and addressing network problems caused by the selfish behavior of network entities. First, we study how the selfish behavior of network entities affects system performance when users compete for limited resources. In this resource allocation domain, we study the selfish routing problem in networks with fair queuing on links, the relay assignment problem in cooperative networks, and the channel allocation problem in wireless networks. Another important aspect of this dissertation is the design of efficient mechanisms that incentivize network entities to achieve a given system objective. In this incentive mechanism domain, we aim to motivate wireless devices to serve as relays for cooperative communication and to recruit smartphones for crowdsourcing. In addition, we apply different game-theoretic approaches to problems in the security and privacy domain. Here, we analyze how a user can defend against a smart jammer, who can quickly learn about the user's transmission power. We also design mechanisms that encourage mobile phone users to participate in location privacy protection, in order to achieve k-anonymity.
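To make the selfish-routing setting concrete, here is a toy sketch, not from the dissertation, of best-response dynamics in a two-link congestion game; the linear latency functions and all parameters are invented for illustration.

```python
import random

def best_response_routing(n_players=10, rounds=100, seed=0):
    """Toy selfish routing: each player picks one of two parallel links whose
    latency grows with its load; repeated unilateral best responses converge
    to a pure Nash equilibrium (congestion games are potential games)."""
    random.seed(seed)
    latency = [lambda x: 1.0 * x, lambda x: 2.0 * x + 1.0]
    choice = [random.randrange(2) for _ in range(n_players)]
    for _ in range(rounds):
        moved = False
        for i in range(n_players):
            load = [choice.count(0), choice.count(1)]
            cur, alt = choice[i], 1 - choice[i]
            stay = latency[cur](load[cur])        # cost if player i stays put
            move = latency[alt](load[alt] + 1)    # cost after deviating
            if move < stay:
                choice[i], moved = alt, True
        if not moved:                             # no profitable deviation: Nash
            break
    return choice
```

The equilibrium reached this way is generally not the socially optimal assignment, which is precisely the gap that game-theoretic analyses of selfish behavior quantify.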
Contributors: Yang, Dejun (Author) / Xue, Guoliang (Thesis advisor) / Richa, Andrea (Committee member) / Sen, Arunabha (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Optimization of surgical operations is a challenging managerial problem for surgical suite directors. This dissertation presents modeling and solution techniques for operating room (OR) planning and scheduling problems. First, several sequencing and patient appointment time setting heuristics are proposed for scheduling an outpatient procedure center. A discrete event simulation model is used to evaluate how the scheduling heuristics perform, relative to current practice, with respect to the competing criteria of expected patient waiting time and expected surgical suite overtime for a single day. Next, a bi-criteria genetic algorithm is used to determine whether better solutions can be obtained for this single-day scheduling problem. The efficacy of the bi-criteria genetic algorithm when surgeries are allowed to be moved to other days is also investigated. Numerical experiments based on real data from a large health care provider are presented. The analysis provides insight into the best scheduling heuristics and the tradeoff between patient-based and provider-based criteria. Second, a multi-stage stochastic mixed integer programming formulation for the allocation of surgeries to ORs over a finite planning horizon is studied. The demand for surgery and the surgical durations are random variables. The objective is to minimize two competing criteria: expected surgery cancellations and OR overtime. A decomposition method, progressive hedging, is implemented to find near-optimal surgery plans. Finally, properties of the model are discussed, and methods are proposed to improve the performance of the algorithm based on the special structure of the model. It is found that simple rules can improve schedules used in practice: sequencing surgeries from the longest to the shortest mean duration causes high expected overtime and should be avoided, while sequencing from the shortest to the longest mean duration performed quite well in our experiments. Expending greater computational effort with more sophisticated optimization methods does not lead to substantial improvements; however, controlling the daily procedure mix may achieve substantial improvements in performance. A novel stochastic programming model for a dynamic surgery planning problem is also proposed. The efficacy of the progressive hedging algorithm is investigated, and a significant correlation is found between the performance of the algorithm and the type and number of scenario bundles in a problem instance. The computational time spent solving scenario subproblems is among the most significant factors affecting the performance of the algorithm. The quality of the solutions can be improved by detecting and preventing cyclical behavior.
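As a toy illustration of the sequencing finding, the following Monte Carlo sketch, which is not the dissertation's simulation model, estimates expected patient waiting time and overtime for shortest-first versus longest-first sequencing; the lognormal durations, 480-minute session, and all parameters are invented assumptions.

```python
import math
import random

def simulate_day(mean_durs, sigma=0.3, session_len=480.0, reps=2000, seed=1):
    """Expected patient waiting time and OR overtime for one OR-day, with
    appointments set at cumulative mean durations and lognormal actuals."""
    random.seed(seed)
    tot_wait = tot_over = 0.0
    for _ in range(reps):
        sched = clock = wait = 0.0
        for m in mean_durs:
            start = max(clock, sched)           # patient waits if OR still busy
            wait += start - sched
            mu = math.log(m) - sigma ** 2 / 2   # lognormal with mean m
            clock = start + random.lognormvariate(mu, sigma)
            sched += m                          # next appointment at cumulative mean
        tot_wait += wait
        tot_over += max(0.0, clock - session_len)
    return tot_wait / reps, tot_over / reps

means = [30, 45, 60, 90, 120]
print("shortest first:", simulate_day(sorted(means)))
print("longest first:", simulate_day(sorted(means, reverse=True)))
```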
Contributors: Gul, Serhat (Author) / Fowler, John W. (Thesis advisor) / Denton, Brian T. (Thesis advisor) / Wu, Teresa (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

Surgery is one of the most important functions in a hospital with respect to operational cost, patient flow, and resource utilization. Planning and scheduling the operating room (OR) is important for hospitals seeking to improve efficiency and achieve a high quality of service. At the same time, it is a complex task due to conflicting objectives and the uncertain nature of surgeries. In this dissertation, three methodologies are developed to address the OR planning and scheduling problem. First, a simulation-based framework is constructed to analyze the factors that affect the utilization of a catheterization lab and to provide decision support for improving the efficiency of operations in a hospital with different patient priorities. Both operational costs and patient satisfaction metrics are considered. A detailed parametric analysis is performed to provide generic recommendations. Overall, it is found that the 75th percentile of process duration is always on the efficient frontier and offers a good compromise between the two objectives. Next, the general OR planning and scheduling problem is formulated as a mixed integer program. The objectives include reducing staff overtime, OR idle time, and patient waiting time, as well as satisfying surgeon preferences and regulating patient flow from the OR to the post-anesthesia care unit (PACU). Exact solutions are obtained using real data. Heuristics and a random keys genetic algorithm (RKGA) are used in the scheduling phase and compared with the optimal solutions. Interacting effects between planning and scheduling are also investigated. Lastly, a multi-objective simulation optimization approach is developed. It relaxes the deterministic assumption of the second study by integrating an optimization module, an RKGA implementation of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) that searches for Pareto optimal solutions, with a simulation module that evaluates the performance of a given schedule. The approach is experimentally shown to be an effective technique for finding Pareto optimal solutions.
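The random-keys encoding at the heart of an RKGA is sketched below; the elite and mutant fractions and the biased crossover probability are generic textbook choices, not the dissertation's specific configuration.

```python
import random

def decode(keys):
    """Random-keys decoding: sorting indices by key value turns any
    real-valued chromosome into a feasible surgery sequence."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def rkga_step(pop, fitness, elite=0.2, mutant=0.1, bias=0.7):
    """One RKGA generation: keep elites, inject random mutants, and fill the
    rest by biased uniform crossover between an elite and a non-elite parent."""
    pop = sorted(pop, key=fitness)                 # ascending cost
    n, k = len(pop), len(pop[0])
    n_e, n_m = max(1, int(elite * n)), int(mutant * n)
    nxt = pop[:n_e]
    nxt += [[random.random() for _ in range(k)] for _ in range(n_m)]
    while len(nxt) < n:
        a, b = random.choice(pop[:n_e]), random.choice(pop[n_e:])
        nxt.append([a[j] if random.random() < bias else b[j] for j in range(k)])
    return nxt
```

Because every chromosome decodes to a valid permutation, the same machinery works whether the fitness comes from a deterministic schedule evaluator or, as in the third study, a stochastic simulation.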
Contributors: Li, Qing (Author) / Fowler, John W (Thesis advisor) / Mohan, Srimathy (Thesis advisor) / Gopalakrishnan, Mohan (Committee member) / Askin, Ronald G. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

NGExtract 2 is a complete transistor (MOSFET) parameter extraction solution based upon the original NGExtract program written by Rahul Shringarpure in February 2007. NGExtract 2 is written in Java and built around the circuit simulator NGSpice. The goal of the program is to produce accurate transistor models from real-world transistor data. The program contains numerous improvements over the original:
• Completely rewritten with performance and usability in mind
• Cross-Platform vs. Linux Only
• Simple installation procedure vs. compilation and manual library configuration
• Self-contained, single file runtime
• Particle Swarm Optimization routine
NGExtract 2 works by plotting the Ids vs. Vds and Ids vs. Vgs curves of the simulation model against the measured, real-world data. The user can adjust model parameters and re-simulate to attempt to match the curves. The included particle swarm optimization routine attempts to automate this process by iteratively improving a candidate solution, using its sum-squared error against the user's measured data as the fitness value.
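For illustration, here is a minimal particle swarm optimizer of the kind described; in NGExtract 2 the objective would wrap an NGSpice run and score the simulated Ids curves against the measured data, but the function below and its constants are generic placeholder choices.

```python
import random

def pso(sse, dim, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm: minimizes sse(params) over dim model
    parameters, each clamped to [lo, hi]."""
    random.seed(seed)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest, pcost = [p[:] for p in pos], [sse(p) for p in pos]
    gcost = min(pcost)
    gbest = pbest[pcost.index(gcost)][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = sse(pos[i])
            if c < pcost[i]:                      # new personal best
                pcost[i], pbest[i] = c, pos[i][:]
                if c < gcost:                     # new global best
                    gcost, gbest = c, pos[i][:]
    return gbest, gcost
```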
Contributors: Vetrano, Michael Thomas (Author) / Allee, David (Thesis director) / Gorur, Ravi (Committee member) / Bakkaloglu, Bertan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description

The Resource Description Framework (RDF) is a specification that aims to support the conceptual modeling of metadata, or information about resources, in the form of a directed graph composed of triples of knowledge (facts). RDF also provides mechanisms to encode meta-information (such as source, trust, and certainty) about facts already existing in a knowledge base through a process called reification. In this thesis, an extension to the current RDF specification is proposed in order to enhance RDF triples with an application-specific weight (cost). Unlike reification, this extension treats these additional weights as first-class knowledge attributes in the RDF model, which can be leveraged by the underlying query engine. Additionally, current RDF query languages, such as SPARQL, have limited expressive power, which limits the capabilities of applications that use them. Moreover, even with language extensions, current RDF stores do not provide methods and tools to process such extended queries efficiently and effectively. To overcome these limitations, a set of novel primitives for the SPARQL language is proposed to express top-k queries using traditional query patterns as well as novel predicates inspired by those of the XPath language. An extended query processor engine is also developed to support efficient ranked path search, join, and indexing. In addition, several query optimization strategies are proposed, which employ heuristics, advanced indexing tools, and two graph metrics: proximity and sub-result inter-arrival time. These strategies aim to find join orders that reduce the total query execution time while avoiding worst-case pattern combinations. Finally, extensive experimental evaluation shows that using these two metrics in query optimization has a significant impact on the performance and efficiency of top-k queries. Further experiments also show that proximity and inter-arrival time have an even greater, although sometimes undesirable, impact when combined through aggregation functions. Based on these results, a hybrid algorithm is proposed which acknowledges that proximity is more important than inter-arrival time, due to its more complete nature, and performs a fine-grained combination of both metrics by analyzing the differences between their individual scores and performing the aggregation only when these differences are negligible.
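To sketch what ranked path search over weighted triples might look like, the following toy implementation, invented for illustration rather than taken from the thesis's engine, runs a k-shortest-path expansion over triples extended with a first-class weight.

```python
import heapq
from collections import defaultdict

def top_k_paths(triples, source, target, k=3):
    """Top-k cheapest paths over weighted RDF triples (s, p, o, w), treating
    the weight as a first-class edge cost; a priority-queue expansion with
    per-node visit caps, the standard trick for k-shortest paths."""
    adj = defaultdict(list)
    for s, p, o, w in triples:
        adj[s].append((o, p, w))
    heap = [(0.0, source, [source])]
    found, visits = [], defaultdict(int)
    while heap and len(found) < k:
        cost, node, path = heapq.heappop(heap)
        visits[node] += 1
        if node == target:
            found.append((cost, path))
            continue
        if visits[node] > k:          # a node lies on at most k best paths
            continue
        for o, p, w in adj[node]:
            heapq.heappush(heap, (cost + w, o, path + [p, o]))
    return found
```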
Contributors: Cedeno, Juan Pablo (Author) / Candan, Kasim S (Thesis advisor) / Chen, Yi (Committee member) / Sapino, Maria L (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

Fluctuating flow releases on regulated rivers destabilize downstream riverbanks, causing unintended, unnatural, and uncontrolled geomorphologic changes. These flow releases, usually a result of upstream hydroelectric dam operations, create manmade tidal effects that cause significant environmental damage; harm fish, vegetation, mammal, and avian habitats; and destroy riverbank camping and boating areas. This work focuses on rivers that are regulated by hydroelectric dams and have banks formed by sediment processes. For these systems, bank failures can be reduced, but not eliminated, by modifying flow release schedules. Unfortunately, comprehensive mitigation can only be accomplished with expensive rebuilding floods, which release trapped sediment back into the river. The contribution of this research is to optimize weekly hydroelectric dam releases so as to minimize the cost of annually mitigating downstream bank failures. Physical process modeling of dynamic seepage effects is achieved through a new analytical unsaturated porewater response model that accommodates arbitrary periodic stage loading by Fourier series. This model is incorporated into a derived bank failure risk model that utilizes stochastic parameters identified through a meta-analysis of more than 150 documented slope failures. The risk model is then extended to the river reach level by a Monte Carlo simulation and nonlinear regression of measured attenuation effects. Finally, the comprehensive risk model is subjected to a simulated annealing (SA) optimization scheme that accounts for physical, environmental, mechanical, operational, and flow constraints. The complete risk model is used to optimize the weekly flow release schedule of the Glen Canyon Dam, which regulates flow in the Colorado River within the Grand Canyon. A solution was obtained that reduces downstream failure risk, allows annual rebuilding floods, and predicts a hydroelectric revenue increase of more than 2%.
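A generic simulated annealing loop of the kind described is sketched below; the cost and step callables stand in for the dissertation's bank failure risk model, revenue terms, and release-schedule perturbations, and every constant is an illustrative assumption.

```python
import math
import random

def anneal(cost, step, x0, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Simulated annealing: cost() scores a candidate weekly release schedule
    (e.g., mitigation cost minus hydropower revenue plus constraint penalties),
    step() proposes a perturbed schedule, and worse moves are accepted with
    probability exp(-delta/T) so the search can escape local minima."""
    random.seed(seed)
    x, c = x0, cost(x0)
    best, best_c, t = x, c, t0
    for _ in range(iters):
        y = step(x)
        delta = cost(y) - c
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, c = y, c + delta                   # accept the move
            if c < best_c:
                best, best_c = x, c
        t *= cooling                              # geometric cooling schedule
    return best, best_c
```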
Contributors: Travis, Quentin Brent (Author) / Mays, Larry (Thesis advisor) / Schmeeckle, Mark (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

The effect of conflicting sensorimotor memories on optimal force strategies was explored. Subjects operated a virtual object controlled by a physical handle to complete a simple straight-line task. Perturbations applied to the handle induced a period of increased error in subject accuracy. After two blocks of 33 trials, perturbations switched direction, inducing increased error relative to the previous trials. Subjects returned after a 24-hour period to complete a similar protocol, but beginning with the second context and ending with the first. Interference from the first context on each day caused an increase in initial error for the second (P < 0.05). Following the rest period, subjects showed retention of the sensorimotor memory from the previous day through significantly decreased initial error (P = 3×10⁻⁶). However, subjects showed an increase in forces for each new context, resulting from a sub-optimal motor strategy. Higher levels of total effort (P < 0.05) and a lack of separation between force values for opposing and non-opposing digits (P > 0.05) indicated a strategy that used more energy to complete the task, even when rates of learning appeared identical or improved. Two possible mechanisms for this lack of energy conservation are proposed.
Contributors: Smith, Michael David (Author) / Santello, Marco (Thesis director) / Kleim, Jeffrey (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This paper is an exploration of numerical optimization as it applies to the consumer choice problem. The suggested algorithms are intended to compute solutions to the Marshallian problem, and some extend to the dual problem given the suggested modifications. Each method seeks to weaken the sufficient conditions for optimization, to converge to a solution more efficiently, or to describe additional properties of the decision space. The purpose of this paper is to explore constrained quasiconvex programming in a less complicated environment afforded by the design of the Marshallian constraints.
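As a concrete instance of the Marshallian problem the paper targets, here is a minimal sketch, invented for illustration, that solves a Cobb-Douglas utility maximization under a linear budget constraint with SciPy's SLSQP solver and checks the answer against the closed form.

```python
import numpy as np
from scipy.optimize import minimize

# Cobb-Douglas utility, a standard quasiconcave test case for
# max u(x) subject to p.x <= m, x >= 0; all values are made up.
alpha = np.array([0.3, 0.7])
p, m = np.array([2.0, 5.0]), 100.0

def neg_u(x):
    return -np.prod(x ** alpha)          # minimize the negative utility

res = minimize(
    neg_u,
    x0=np.full(2, m / (2 * p.mean())),   # feasible starting bundle
    method="SLSQP",
    bounds=[(1e-9, None)] * 2,           # nonnegative consumption
    constraints=[{"type": "ineq", "fun": lambda x: m - p @ x}],  # budget
)
print(res.x, alpha * m / p)              # closed form: x_i* = alpha_i * m / p_i
```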

Contributors: Knipp, Charles (Author) / Reffett, Kevin (Thesis director) / Leiva-Bertran, Fernando (Committee member) / Department of Economics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05