This collection includes most ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Each record includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view of these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
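To give a sense of the generalized-quantifier view (the notation below is assumed for exposition and is not taken from the dissertation), a cardinality aggregate can be read as a formula built from a generalized quantifier:

```latex
% Illustrative only: a cardinality (count) aggregate read as a generalized
% quantifier. The symbol Q_{\ge 2} ("there exist at least two") is assumed
% notation for exposition.
\[
  q \;\leftarrow\; 2 \le \#\{x : p(x)\}
  \qquad\text{read as}\qquad
  Q_{\ge 2}\,x\; p(x) \;\rightarrow\; q .
\]
% Under an extension of the first-order stable model semantics that admits
% such quantifiers, constraint atoms, DL-atoms, and SMT theory atoms can be
% treated uniformly as further instances of generalized quantifiers.
```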
ContributorsMeng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created2013
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectrically controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
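As a rough illustration of how multi-muscle coordinations (synergies) can be extracted from surface EMG, the sketch below uses non-negative matrix factorization over rectified, smoothed EMG envelopes; the preprocessing choices, channel count, and number of synergies are assumptions for exposition and are not the dissertation's exact pipeline.

```python
# Illustrative sketch only: extracting muscle synergies from EMG envelopes
# with non-negative matrix factorization (NMF). The rectification, smoothing,
# and number of synergies are assumptions, not the dissertation's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import NMF

def emg_envelopes(raw_emg, fs=1000.0, cutoff_hz=5.0):
    """Rectify raw EMG (samples x channels) and low-pass filter to get envelopes."""
    rectified = np.abs(raw_emg)
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    # Clip small negative ripples from filtering so the input to NMF stays non-negative.
    return np.clip(filtfilt(b, a, rectified, axis=0), 0.0, None)

def extract_synergies(envelopes, n_synergies=4):
    """Factor envelopes ~ activations @ synergy_vectors, both non-negative."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
    activations = model.fit_transform(envelopes)   # time-varying coefficients
    synergy_vectors = model.components_            # per-muscle weightings
    return activations, synergy_vectors

if __name__ == "__main__":
    fake_emg = np.random.randn(5000, 8)            # 5 s of 8-channel EMG at 1 kHz
    env = emg_envelopes(fake_emg)
    acts, syns = extract_synergies(env)
    print(acts.shape, syns.shape)                  # (5000, 4) (4, 8)
```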
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015
Description
In accordance with Principal-Agent Theory, Property Rights Theory, Incentive Theory, and Human Capital Theory, firms face agency problems due to the "separation of ownership and management", which call for effective corporate governance. Ownership structure is a core element of corporate governance. Differences in ownership structures may thus result in differential incentives in governance through the selection of senior management and in the design of the senior management compensation system. This thesis investigates four firms with four different types of ownership structures: a publicly listed firm with a state-owned controlling interest, a publicly listed firm with a non-state-owned controlling interest, a publicly listed firm with a family-owned controlling interest, and a Sino-foreign joint venture. Using a case study approach, I focus on two dimensions of ownership structure characteristics – ownership diversification and differences in property rights – to document whether there are systematic differences in governance participation and executive compensation design. Specifically, I focus on whether such differences are reflected in management selection (which is linked to adverse selection and moral hazard problems) and in compensation design (the choices of performance measurements, performance pay, and stock options or restricted stock). The results are consistent with my expectation – the nature of the ownership structure does affect senior management compensation design. Policy implications are discussed accordingly.
ContributorsGao, Shenghua (Author) / Pei, Ker-Wei (Thesis advisor) / Li, Feng (Committee member) / Shen, Wei (Committee member) / Arizona State University (Publisher)
Created2015
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization and their applications in medical imaging and source localization using sensor array processing. Owing to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increased attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties in solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm with the aid of the $\ell_p$-norm can resolve closely distributed sources with higher accuracy than other existing methods.
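For readers unfamiliar with the model, a commonly used form of the TV-$\ell_1$ regularized inverse problem is sketched below; the specific functional, weights, and operators used in the dissertation may differ.

```latex
% A common form of the TV-\ell_1 regularized inverse problem (the weights and
% operators here are generic; the dissertation's exact formulation may differ):
\[
  \min_{u} \;\; \alpha\,\mathrm{TV}(u) \;+\; \beta\,\|\Psi u\|_{1}
  \;+\; \tfrac{1}{2}\,\|A u - b\|_{2}^{2},
\]
% where u is the image to be reconstructed, A is the sampling operator
% (e.g., a partial Fourier transform in MR imaging), b holds the measurements,
% \Psi is a sparsifying transform, and TV(u) is the total variation of u.
% Augmented Lagrangian / variable-splitting schemes introduce auxiliary
% variables so that each resulting subproblem has a closed-form solution.
```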
ContributorsShen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created2011
Description
Dynamic software update (DSU) enables a program to be updated while it is running. DSU aims to minimize the loss due to program downtime during updates. DSU is usually done in three steps: suspending the execution of the old program, mapping the execution state from the old program to the new one, and resuming execution of the new program with the mapped state. The semantic correctness of DSU depends largely on the state mapping, which nowadays is mostly constructed manually by developers. However, manual construction does not necessarily produce a sound and dependable state mapping. This dissertation presents a methodology to assist developers by automating the construction of a partial state mapping with a guarantee of correctness.
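A minimal sketch of the three-step DSU flow described above is given below, using a toy state layout; the structures and the mapping function are hypothetical and are not SitBack's interface or output.

```python
# Minimal illustrative sketch of the three-step DSU flow: suspend the old
# program, map its state to the new version, resume with the mapped state.
# The state layout and mapping function are hypothetical examples.

# Old version keeps a flat counter of handled requests.
old_state = {"requests_handled": 1432, "clients": ["a", "b"]}

def map_state(old):
    """Map the old version's state to the new version's richer layout.

    In practice this is the (mostly manual) state mapping that the
    dissertation proposes to partially automate with a correctness guarantee.
    """
    return {
        "stats": {"requests_handled": old["requests_handled"], "errors": 0},
        "clients": list(old["clients"]),   # unchanged field carried over
    }

def dynamic_update(old, mapper):
    # 1. Suspend: reach a quiescent point where no request is in flight.
    # 2. Map:     transform the old execution state to the new layout.
    new = mapper(old)
    # 3. Resume:  continue execution of the new version with the mapped state.
    return new

new_state = dynamic_update(old_state, map_state)
print(new_state["stats"]["requests_handled"])   # 1432
```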

This dissertation includes a detailed study of DSU correctness and automatic state mapping for server programs with an established user base. First, the dissertation presents a formal treatment of DSU correctness and of the state mapping problem. It then argues that, for programs with an established user base, dynamic updates must be backward compatible. The dissertation next presents a general definition of backward compatibility that specifies the allowed changes in program interaction between an old version and a new version, and identifies patterns of code evolution that result in backward compatible behavior. Thereafter the dissertation presents formal definitions of these patterns together with proofs that any changes to programs following these patterns result in a backward compatible update. To show the applicability of the results, the dissertation presents SitBack, a program analysis tool that takes an old version of a program and a new one as input and computes a partial state mapping under the assumption that the new version is backward compatible with the old version.

SitBack does not handle all kinds of changes; it reports the incomplete parts of a state mapping to the user. The dissertation presents a detailed evaluation of SitBack which shows that the methodology of automatic state mapping is promising in dealing with real-world program updates. For example, SitBack produces state mappings for 17-75% of the changed functions. Furthermore, SitBack generates automatic state mappings that lead to successful DSU. In conclusion, the study presented in this dissertation assists developers in developing state mappings for DSU by automating the construction of state mappings with a correctness guarantee, which ultimately helps the adoption of DSU.
ContributorsShen, Jun (Author) / Bazzi, Rida A (Thesis advisor) / Fainekos, Georgios (Committee member) / Neamtiu, Iulian (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2015
Description
Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of these programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes dynamic analysis difficult. In addition, the instrumentation overhead for gathering execution information may change the execution of a program and lead to distorted analysis results, i.e., the probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis largely consists of three parts. First, it discusses a deterministic replay framework that provides reproducible execution. Once a program execution is recorded, software instrumentation can be safely applied during replay without the probe effect. Second, a discussion of the probe effect is presented and a simulation-based analysis is proposed to detect execution changes of a program caused by instrumentation overhead. The simulation-based analysis examines whether the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help remove data races for correctness of the replay and the simulation-based analysis. The focus is to make the detection efficient for C/C++ programs and to increase the scalability of the detection on multi-core machines.
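As a point of reference for what a data race detector tracks, the sketch below shows a textbook-style lockset check in the spirit of Eraser; it is illustrative only and is not claimed to be the thesis's detection algorithm.

```python
# Textbook-style lockset check (in the spirit of Eraser), given only to
# illustrate what a race detector tracks; not the thesis's algorithm.
# For each shared variable, keep the intersection of lock sets held at its
# accesses; if that intersection becomes empty, no common lock protects it.

candidate_locks = {}   # variable -> set of locks that protected every access so far

def on_access(var, locks_held):
    """Record an access to `var` while holding `locks_held`; warn on empty lockset."""
    if var not in candidate_locks:
        candidate_locks[var] = set(locks_held)
    else:
        candidate_locks[var] &= set(locks_held)
    if not candidate_locks[var]:
        print(f"possible data race on {var!r}: no common lock across accesses")

# Thread 1 accesses x under lock L1, thread 2 under lock L2: lockset empties.
on_access("x", {"L1"})
on_access("x", {"L2"})   # -> possible data race on 'x'
```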
ContributorsSong, Young Wn (Author) / Lee, Yann-Hang (Thesis advisor) / Shrivastava, Aviral (Committee member) / Fainekos, Georgios (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created2015
Description
Cyber-Physical Systems (CPS) are used in many safety-critical applications. Due to their important role in virtually every aspect of human life, it is crucial to make sure that a CPS works properly before its deployment. However, formal verification of CPS is a computationally hard problem. Therefore, lightweight verification methods such as testing and monitoring of the CPS are considered in industry. The formal representation of CPS requirements is a challenging task. In addition, checking the system outputs with respect to requirements is a computationally complex problem. In this dissertation, these problems for the verification of CPS are addressed. The first method provides a formal requirement analysis framework which can find logical issues in the requirements and help engineers correct them. Also, a method is provided to detect tests that vacuously satisfy a requirement because of the requirement's structure. This method is used to improve the test generation framework for CPS. Finally, two runtime verification algorithms are developed for off-line/on-line monitoring with respect to real-time requirements. These monitoring algorithms are computationally efficient, and they can be used in practical applications for monitoring CPS with low runtime overhead.
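As a minimal illustration of off-line monitoring, the sketch below checks a simple bounded "always" requirement over a recorded trace; the property, trace format, and function names are assumptions for exposition, not the dissertation's algorithms.

```python
# Minimal illustrative off-line monitor: check a bounded "always" requirement
# ("for every sample within the first `horizon` seconds, the signal stays below
# a threshold") over a recorded trace. The property and trace format are
# assumed for exposition; the dissertation's monitoring algorithms are more general.

def always_below(trace, threshold, horizon):
    """trace: list of (time_s, value). True if value < threshold for all
    samples with time_s <= horizon."""
    return all(value < threshold for time_s, value in trace if time_s <= horizon)

trace = [(0.0, 10.2), (0.5, 11.0), (1.0, 12.7), (1.5, 14.9)]
print(always_below(trace, threshold=15.0, horizon=1.0))   # True
print(always_below(trace, threshold=12.0, horizon=1.5))   # False
```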
ContributorsDokhanchi, Adel (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2017
Description
Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. Similar to software engineering practices, incremental model design is required for complex system design. As a result, models at early increments are significantly simpler relative to real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation.

A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed using various modeling frameworks. It is useful to develop frameworks that can formalize the relationships among these models. Fine-grain models are derived from their coarse-grain counterparts. Moreover, validation and verification capabilities at various design stages, enabled through disciplined model conversion, are very beneficial.

In this research, Multiresolution Modeling (MRM) is used for system level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed which supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers which can be applied seamlessly for model checking and simulation purposes.

Multiresolution Modeling and the verification and validation capabilities of this framework complement one another. MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort; conversely, V&V helps ensure the correctness of models at multiple resolutions. This framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated for exemplar NoC models.
ContributorsGholami, Soroosh (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Fainekos, Georgios (Committee member) / Ogras, Umit Y. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2017
Description
Cold chain logistics refers to keeping food in a suitable temperature environment from production until it reaches the consumer, in order to preserve food quality and reduce losses during distribution. Compared with traditional logistics, cold chain logistics is a more complex, systematic undertaking, and driven by policy and market demand it has been developing rapidly. However, cold chain logistics enterprises have long struggled to grow because of their small scale, limited fixed assets, narrow service scope, and weak service standardization; the core problem is funding. The government guides and encourages the creation of cold chain logistics industrial parks, pushing park investors and builders to establish platforms that generate agglomeration effects for the cold chain enterprises within the parks and use financial services to solve the funding problems of enterprise development. Supporting the development of cold chain logistics enterprises through the integration of industry and finance has become the main approach and future trend of the cold chain logistics industry.

This study focuses on how the financial services of cold chain logistics industrial parks support the development of cold chain logistics enterprises. The main research content includes: first, based on the theory of industry-finance integration, sorting out the relationship between cold chain logistics enterprises and industrial parks, and exploring the scope, types, and characteristics of their financial services from both the supply and demand sides; second, based on platform theory, constructing a research model of cold chain logistics enterprises' adoption of industrial park financial services, exploring the operational factors through which financial services affect these enterprises, and analyzing the factors and paths of their adoption of such services; third, based on information asymmetry theory, examining the moderating roles of information technology support and knowledge sharing in the process by which cold chain logistics enterprises adopt the financial services provided by industrial parks. In addition, the study identifies the risks industrial parks may face in providing financial services and formulates standards for admitting cold chain logistics enterprises into the parks in order to guard against those risks.

This thesis uses an empirical research approach. Building on field visits and expert interviews at 18 domestic enterprises related to cold chain logistics, including industrial parks, logistics parks, cold chain logistics firms, commercial distribution firms, and financial firms, a questionnaire was developed and survey data were collected from 268 enterprises; structural equation modeling was used to test the hypotheses. The study finds that the tangibility, reliability, empathy, and economy of financial services have significant effects on cold chain logistics enterprises' adoption of industrial park financial services, while responsiveness does not, and the moderating effects of information technology support and knowledge sharing are not significant. Finally, preventive strategies and measures are proposed for the risks industrial parks face in attracting cold chain logistics enterprises with financial services and the risks these enterprises face in adopting such services.
ContributorsYang, Su (Author) / Shen, Wei (Thesis advisor) / Chen, Xinlei (Thesis advisor) / Gu, Bin (Committee member) / Arizona State University (Publisher)
Created2019
Description
Wealth management is an industry with a high degree of information asymmetry, so investors need to reduce their own uncertainty as much as possible when making investment decisions. Through a literature review, this thesis finds that building trust to eliminate uncertainty is a method many investors use to support their investment decisions. Historically, the 2007-2008 financial crisis in the United States led to a severe loss of investor trust in wealth management institutions, and the same situation could occur in China's wealth management market. This thesis therefore takes this as its research focus and studies in depth the trust that investors in wealth management companies place in their financial advisers, reaching a series of conclusions. The thesis finds that, comparing the platform and the adviser, investors place more weight on the platform's reputation. Most investors believe the platform's reputation is higher than the adviser's, but this does not mean the adviser is unimportant. Further analysis finds that most investors build a personal relationship with their adviser, and this personal relationship helps strengthen the client's connection with the platform. Investors regard industry experience, sincerity, trustworthiness in what the adviser says, and a sense of responsibility as important factors in strengthening this personal relationship. Finally, investors' trust in the Jupai platform is mainly sustained by their trust in the adviser, and trust in the adviser mainly comes from affective trust. These findings have strategic implications for wealth management platforms.
ContributorsWu, Qimin (Author) / Shen, Wei (Thesis advisor) / Chang, Chun (Thesis advisor) / Zhu, Hongquan (Committee member) / Arizona State University (Publisher)
Created2019