ASU Electronic Theses and Dissertations
This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation/thesis includes the degree awarded, committee members, an abstract, and any supporting data or media.
In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.
Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.
This dissertation proposes the Problem Map (P-maps) ontological framework. P-maps represent designers' problem formulation in terms of six groups of entities (requirement, use scenario, function, artifact, behavior, and issue). Entities have hierarchies within each group and links among groups. Variables extracted from P-maps characterize problem formulation.
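As a sketch of how a P-map might be represented in code, the structure described above — six entity groups, within-group hierarchies, and cross-group links — maps naturally onto a small graph data type. The class and field names below are illustrative assumptions, not the dissertation's implementation:

```python
# Hypothetical sketch of a P-map: six entity groups, hierarchies within
# a group (parent links), and links among different groups.
GROUPS = {"requirement", "use_scenario", "function", "artifact", "behavior", "issue"}

class Entity:
    def __init__(self, name, group, parent=None):
        assert group in GROUPS
        self.name, self.group, self.parent = name, group, parent
        self.links = []  # cross-group links to other entities

    def link(self, other):
        # P-map links connect entities in *different* groups
        assert other.group != self.group
        self.links.append(other)
        other.links.append(self)

# Tiny example: a requirement linked to a function with a sub-function
req = Entity("carry 5 kg", "requirement")
fn = Entity("lift load", "function")
sub = Entity("grip handle", "function", parent=fn)  # within-group hierarchy
req.link(fn)
```

Variables such as entity counts, hierarchy depths, and link densities could then be read off such a graph to characterize a formulation.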
Three experiments were conducted. The first experiment was to study the similarities and differences between novice and expert designers. Results show that experts use more abstraction than novices do and novices are more likely to add entities in a specific order. Experts also discover more issues.
The second experiment examined how problem formulation relates to creativity. Ideation metrics were used to characterize the creative outcome. Results include, but are not limited to, positive correlations between adding more issues in an unorganized way and both quantity and variety; between more use scenarios and functions and novelty; between more behaviors and identified conflicts and quality; and between depth-first exploration and all ideation metrics. Fewer hierarchies in use scenarios lower novelty, and fewer links to requirements and issues lower the quality of ideas.
The third experiment tested whether problem formulation can predict creative outcome. Models trained on one problem were used to predict creativity on another, and the predicted scores were compared to the assessments of independent judges. Quality and novelty are predicted more accurately than variety and quantity. Backward elimination improves model fit, though it reduces prediction accuracy.
P-maps provide a theoretical framework for formalizing, tracing, and quantifying conceptual design strategies. Other potential applications are developing a test of problem formulation skill, tracking students' learning of formulation skills in a course, and reproducing other researchers’ observations about designer thinking.
Intelligent agents learn from experiences and, in times of uncertainty, use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while individual costs are minimized. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experiences, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.
Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing the actuation (control) that drives them toward their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed so that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state that the multi-agent system as a whole is trying to reach.
In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots may have different control preferences, for example, different actuation abilities, but are still required to coordinate and perform the load transport. The control preference of each robot is characterized by a scalar parameter θᵢ that is unique to that robot and unknown to the other robots. With the aid of state and control-input observations, agents learn the control preferences of other agents, optimize their individual costs, and drive the multi-agent system to a goal state.
Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent computes control by optimizing a cost function similar to a 1-step receding horizon optimal control problem. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.
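A minimal sketch of recursive least squares for estimating another agent's scalar preference parameter may clarify the learning step. The signal model y_k = φ_k·θ and all numeric values below are illustrative assumptions, not the thesis setup:

```python
# Scalar recursive least squares (RLS): estimate an unknown parameter
# theta from observations y_k = phi_k * theta.
def rls(observations, theta0=0.0, p0=1e6):
    theta, p = theta0, p0  # estimate and its covariance
    for phi, y in observations:
        k = p * phi / (1.0 + phi * phi * p)  # gain
        theta += k * (y - phi * theta)       # correct with the innovation
        p *= (1.0 - k * phi)                 # shrink covariance
    return theta

theta_true = 2.0
data = [(1.0, 1.0 * theta_true), (0.5, 0.5 * theta_true)]
theta_hat = rls(data)  # noiseless scalar case: essentially exact after two steps
```

In this noiseless scalar case the estimate is pinned down after two informative observations, which echoes the two-time-step learning guarantee quoted for LCA-1.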
A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite-horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line-search methods and guarantees learning convergence to the true values asymptotically.
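For illustration, an infinite-horizon LQR gain for a scalar system can be computed by iterating the discrete algebraic Riccati equation to a fixed point. The system x_{k+1} = a·x_k + b·u_k and the weights below are made-up numbers, not the thesis model:

```python
# Infinite-horizon LQR for a scalar system x_{k+1} = a*x + b*u with
# cost sum q*x^2 + r*u^2: iterate the discrete Riccati equation to a
# fixed point p, then form the state-feedback gain K (control u = -K*x).
def lqr_scalar(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

K = lqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)

# Closed loop: the unstable open-loop pole a = 1.2 is pulled inside the
# unit circle by the feedback, so the state decays toward zero.
x = 1.0
for _ in range(50):
    x = (1.2 - 1.0 * K) * x
```

The fixed-point iteration converges here because the scalar system is stabilizable and detectable; production code would use a dedicated Riccati solver instead.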
Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running LCA-2 are able to resist disturbances and balance the assumed load better than LCA-1.
Non-destructive testing (NDT) and structural health monitoring (SHM) are widely used for damage detection. Different types of NDT techniques have been proposed for this purpose, such as optical imaging, ultrasound waves, thermography, eddy current, and microwave. The focus of this study is on wave-based detection methods, which fall into two major categories: feature-based damage detection and model-assisted damage detection. Both approaches have their own pros and cons. Feature-based damage detection is usually very fast and does not involve the solution of the physical model. The key idea is the dimension reduction of signals to achieve efficient damage detection. The disadvantage is that the loss of information due to feature extraction can induce significant uncertainties and reduce the resolution, which depends strongly on the sensing-path density. Model-assisted damage detection has the opposite trade-off: it is capable of high-resolution imaging with a limited number of sensing paths, since the entire signal histories are used for damage identification. However, model-based methods are time-consuming due to the required inverse wave propagation solution, especially for large 3D structures.
The motivation of the proposed method is to develop an efficient and accurate model-based damage imaging technique with limited data. The special focus is on the efficiency of the damage imaging algorithm, as it is the major bottleneck of the model-assisted approach. Computational efficiency is achieved by two complementary components. First, a fast forward wave propagation solver is developed, which is verified against the classical finite element (FEM) solution and runs 10-20 times faster. Next, an efficient inverse wave propagation algorithm is proposed. Classical gradient-based optimization algorithms usually require the finite difference method for gradient calculation, which is prohibitively expensive for large numbers of degrees of freedom. An adjoint method-based optimization algorithm is therefore proposed, which avoids the repetitive finite difference calculations for every imaging variable. Superior computational efficiency is thus achieved by combining these two methods for damage imaging. A coupled piezoelectric (PZT) damage imaging model is also proposed to include the interaction between the PZT transducers and the host structure. Following the formulation of the framework, experimental validation is performed on isotropic and anisotropic materials with defects such as cracks, delamination, and voids. The results show that the proposed method can detect and reconstruct multiple defects simultaneously and efficiently, which makes it promising for application to complex large-scale engineering structures.
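The efficiency argument for the adjoint method can be seen in a toy example: for a forward model A(m)u = f with misfit J = ½‖u − d‖², a single adjoint solve yields the gradient with respect to every imaging variable at once, whereas finite differences need one extra forward solve per variable. The diagonal system and all numbers below are illustrative assumptions, not the wave propagation model of this work:

```python
# Toy adjoint-gradient demo with a diagonal "forward model" A(m) = diag(m):
# forward solve u_i = f_i / m_i, misfit J(m) = 0.5 * sum((u_i - d_i)^2).
f = [4.0, 9.0]   # source terms (illustrative)
d = [1.5, 2.5]   # measured data (illustrative)

def forward(m):
    return [fi / mi for fi, mi in zip(f, m)]

def misfit(m):
    return 0.5 * sum((ui - di) ** 2 for ui, di in zip(forward(m), d))

def adjoint_grad(m):
    u = forward(m)                                          # ONE forward solve
    lam = [(ui - di) / mi for ui, di, mi in zip(u, d, m)]   # ONE adjoint solve (A symmetric)
    return [-li * ui for li, ui in zip(lam, u)]             # dJ/dm_i = -lam_i * u_i

m = [2.0, 3.0]
g_adj = adjoint_grad(m)

# Finite differences for comparison: one extra forward solve PER variable.
h = 1e-6
g_fd = []
for i in range(len(m)):
    mp = list(m)
    mp[i] += h
    g_fd.append((misfit(mp) - misfit(m)) / h)
```

With n imaging variables, the finite-difference loop costs n extra forward solves while the adjoint path stays at two solves total, which is the source of the claimed speedup for large models.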
Performance evaluation and characterization of lithium-ion cells under simulated PHEVs' drive cycles
In PHEVs, batteries operate under charge-sustaining and charge-depleting modes based on torque requirement and state of charge. In the current article, 26650 lithium-ion cells were cycled extensively at 25 and 50 °C under charge-sustaining mode to monitor capacity and cell impedance values, followed by analysis of the lithium iron phosphate (LiFePO4) cathode material by X-ray diffraction (XRD). High-frequency resistance measured by electrochemical impedance spectroscopy was found to increase significantly under high-temperature cycling, leading to power fading. No phase change in the LiFePO4 cathode material was observed by XRD after 330 cycles at elevated temperature under charge-sustaining mode. However, there was a significant change in the crystallite size of the cathode active material after charge/discharge cycling in charge-sustaining mode. Additionally, 18650 lithium-ion cells were tested under charge-depleting mode to monitor capacity values.
The objective of this research is to allocate tolerance values so that assemblability conditions are satisfied. Assemblability refers to "the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances". Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops, and hence the tolerance loops are extracted first. Once the tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation will satisfy the assemblability requirements. Overlapping loops have to be satisfied simultaneously, which is done progressively; hence, tolerances need to be re-allocated iteratively with the help of a tolerance analysis module.
The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints, from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to the associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using sensitivities and the weights associated with the contributors in the stack.
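The initial allocation step described above can be sketched as a simple proportional split: each contributor receives a share of the loop's tolerance budget in proportion to its nominal dimension. The function name and the numbers are illustrative, not the actual module's interface:

```python
# Initial tolerance allocation for one loop: distribute the tolerance
# budget (derived from the clearance available in the loop) among the
# contributors in proportion to their nominal dimensions.
def allocate(budget, nominals):
    total = sum(nominals)
    return [budget * n / total for n in nominals]

# Example loop: three contributors with nominal dimensions 10, 20, 30 mm
# sharing a 0.6 mm budget -> tolerances 0.1, 0.2, 0.3 mm.
tols = allocate(0.6, [10.0, 20.0, 30.0])
```

The subsequent re-allocation iterations would then reweight these shares using the sensitivities and weights mentioned in the text.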
Several test cases have been run with this software, and the desired user-input acceptance rates are achieved. Three test cases are presented and the output of each module is discussed.
A large fraction of the total energy consumption in the world comes from the heating and cooling of buildings. Improving the energy efficiency of buildings to reduce the need for seasonal heating and cooling is one of the major challenges in sustainable development. In general, energy efficiency depends on the geometry and materials of a building. To explore a framework for accurately assessing this dependence, detailed 3-D thermofluid simulations are performed by systematically sweeping the parameter space spanned by four parameters: the size of the building, the thickness and material of the walls, and the fractional size of the windows. The simulations incorporate realistic boundary conditions of diurnally varying temperatures from observations, and the effect of fluid flow with explicit thermal convection inside the building. The outcome of the numerical simulations is synthesized into a simple map of an energy efficiency index over the parameter space, which stakeholders can use to quickly look up the energy efficiency of a proposed building design before construction. Although this study considers only a special prototype of buildings, the framework developed here can potentially be used for a wide range of buildings and applications.
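The sweep-and-lookup framework might be organized as below. The parameter grids are made-up values, and the efficiency index function is a placeholder stub standing in for the 3-D thermofluid simulations described in the text:

```python
# Sketch of the parameter sweep: every combination of (building size,
# wall thickness, wall material, window fraction) maps to an energy
# efficiency index stored in a lookup table.
from itertools import product

sizes = [10.0, 20.0]        # building size (m), illustrative grid
thicknesses = [0.2, 0.4]    # wall thickness (m)
materials = ["brick", "concrete"]
win_fracs = [0.1, 0.3]      # fractional window size

def efficiency_index(size, thick, material, wf):
    # PLACEHOLDER -- the real index comes from the thermofluid simulation.
    return 0.0

lookup = {
    (s, t, m, w): efficiency_index(s, t, m, w)
    for s, t, m, w in product(sizes, thicknesses, materials, win_fracs)
}

# A stakeholder can then quickly look up a proposed design:
value = lookup[(10.0, 0.2, "brick", 0.1)]
```

The point of the structure is that the expensive simulations run once, offline, while design-time queries reduce to a dictionary lookup.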
The Setup Map (S-Map) is a point space in six dimensions, where each of the six orthogonal coordinates corresponds to one of the rigid-body displacements in three-dimensional space: three rotations and three translations. Any point within the boundaries of the S-Map corresponds to a small displacement of the part that satisfies the condition that each feature will lie within its associated tolerance zone after machining. The process for creating the S-Map involves representing the constraints imposed by the tolerances in simple coordinate systems for each to-be-machined feature. The constraints are then transformed to a single coordinate system, where their intersection reveals the common allowable 'setup' points. Should an intersection of the six-dimensional constraints exist, an optimization scheme is used to choose the single setup that gives the best chance for machining to be completed successfully. Should no intersection exist, the particular part cannot be machined to specification or must be re-worked with weld metal added to specific locations.
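The intersection step can be illustrated with a simplified model in which each feature's tolerance constraint bounds the six small-displacement coordinates by an axis-aligned box; the real S-Map constraints are more general, and the bounds below are made-up numbers:

```python
# Simplified S-Map intersection: model each feature's tolerance
# constraint as a box of allowable small displacements in 6-D
# (three rotations + three translations). The common setup region is
# the intersection of all boxes; an empty intersection means no single
# setup can machine every feature to specification.
def intersect(boxes):
    lows = [max(b[0][i] for b in boxes) for i in range(6)]
    highs = [min(b[1][i] for b in boxes) for i in range(6)]
    feasible = all(lo <= hi for lo, hi in zip(lows, highs))
    return feasible, (lows, highs)

box_a = ([-1, -1, -1, -2, -2, -2], [1, 1, 1, 2, 2, 2])
box_b = ([-0.5, -2, 0, -1, -1, -1], [2, 0.5, 3, 1, 1, 1])
ok, region = intersect([box_a, box_b])  # feasible: lows <= highs on all 6 axes
```

In the feasible case an optimizer would then pick a single point inside `region`; an empty intersection corresponds to the re-work case described above.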