Matching Items (29)
Filtering by
- Genre: Masters Thesis
Description
The increasing availability of data and advances in computation have spurred the development of data-driven approaches for modeling complex dynamical systems. These approaches rest on the idea that the underlying structure of a complex system can be discovered from data using mathematical and computational techniques, and they show promise for addressing the challenges of modeling high-dimensional, nonlinear systems with limited data. This research exposition surveys the state of the art in data-driven approaches for modeling complex dynamical systems in a systematic way. First, the general formulation of data-driven modeling of dynamical systems is discussed. Then several representative methods in feature engineering and system identification/prediction are reviewed, including recent advances and key challenges.
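As a toy instance of this general formulation (an illustration, not one of the surveyed methods), the sketch below identifies a linear discrete-time model x_{t+1} = A x_t from trajectory data alone via ordinary least squares; the dynamics matrix and initial state are invented for the example.

```python
import numpy as np

# Hypothetical example: recover an unknown linear dynamics matrix A
# from observed trajectory data, the simplest form of data-driven
# system identification. A_true and the initial state are made up.
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])

x = np.zeros((50, 2))
x[0] = [1.0, -1.0]
for t in range(49):
    x[t + 1] = A_true @ x[t]          # simulate the "measured" data

# Each row satisfies x[t+1]^T = x[t]^T A^T, so solve the stacked
# least-squares problem  X_prev @ A^T ~= X_next.
A_est = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
print(np.allclose(A_est, A_true))
```

With noisy data the same least-squares step yields the best linear fit rather than an exact recovery; nonlinear extensions replace the raw state with a library of candidate features.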
Contributors: Shi, Wenlong (Author) / Ren, Yi (Thesis advisor) / Hong, Qijun (Committee member) / Jiao, Yang (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The need for autonomous cars has never been more vital, and for a vehicle to be completely autonomous, multiple components must work together, one of which is the capacity to park at the end of a mission. This thesis project aims to design and implement an automated parking assist system (APAS). Traditional APASs may be ineffective in some constrained urban parking environments because of parking-space dimensions. This thesis proposes a novel four-wheel-steering (4WS) vehicle for automated parallel parking to overcome such challenges. Benefiting from the maneuverability enabled by the 4WS system, the feasible initial parking area is vastly expanded relative to that of conventional two-wheel-steering (2WS) vehicles. The expanded initial area is divided into four regions, for which different paths are planned correspondingly. In the proposed APAS, a suitable parking space is first identified through ultrasonic sensors mounted around the vehicle; then, depending on the vehicle's initial position, various compact and smooth parallel parking paths are generated. An optimization function is built to obtain the smoothest parallel parking path, i.e., the one with the smallest steering-angle change and the shortest length. With full utilization of the 4WS system, the proposed path-planning algorithm allows a larger initial parking area that can be easily tracked by 4WS vehicles, making the automatic parking process in restricted spaces efficient. To verify the feasibility and effectiveness of the proposed APAS, a 4WS vehicle prototype is used for validation through both simulation and experimental results.
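The smoothness objective described above can be sketched as a weighted sum of total steering-angle change and path length; the weights, candidate paths, and function names below are illustrative assumptions, not the thesis formulation.

```python
import numpy as np

# Hypothetical sketch of a path-smoothness cost: among candidate
# parking paths (sequences of steering angles at fixed arc-length
# steps), pick the one minimizing steering change plus path length.
def path_cost(steering, step_len, w_steer=1.0, w_len=0.1):
    steer_change = np.sum(np.abs(np.diff(steering)))  # smoothness term
    length = step_len * len(steering)                 # path-length term
    return w_steer * steer_change + w_len * length

candidates = {
    "jerky":  np.deg2rad([0, 30, -30, 30, -30, 0]),
    "smooth": np.deg2rad([0, 10,  20, 20,  10, 0]),
}
best = min(candidates, key=lambda k: path_cost(candidates[k], step_len=0.5))
print(best)
```

A full planner would generate the candidates geometrically (e.g., arcs and clothoids reachable by the 4WS kinematics) and enforce collision constraints before ranking them by this kind of cost.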
Contributors: Gujarathi, Kaushik Kumar (Author) / Chen, Yan (Thesis advisor) / Yong, Sze Zheng (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The distribution and transport of mercury in the human body are poorly constrained. For instance, the long-term persistence and intra-individual distribution of mercury in bones from dental amalgams or environmental exposure have not been studied. A robust method validated for accuracy and precision specifically for mercury in human bone would facilitate anthropological, forensic, and medical studies. I present a highly precise, accurate method for measuring mercury concentration targeted at human bone samples. The method uses commonly available, reliable commercial instruments that are not limited to elemental Hg analysis. It requires significantly smaller sample amounts than existing methods because its limit of detection is much lower than those of the best mercury analyzers on the market and of other analytical methods. With this low limit of detection, the protocol is an excellent fit for studies with limited sample material available for destructive analysis. I then use the method to analyze the distribution of mercury concentration in modern skeletal collections provided by three U.S. anthropological research facilities. Mercury concentration and distribution were analyzed in the skeletons of 35 donors, with 18 different skeletal elements (bones) per donor, to evaluate both intra-individual and inter-individual variation in mercury concentration. Factors considered include geological differences among decomposition sites and the presence of dental amalgam fillings. Geological differences among decomposition sites did not statistically affect mercury concentration in the donors' skeletons. The presence of dental amalgam, however, significantly affected both inter-individual and intra-individual variation: individuals with dental amalgam had significantly higher mercury concentrations in their skeletons than individuals without (p-value < 0.01). Mercury concentration in the mandible, occipital bone, patella, and proximal phalanx (foot) was significantly affected by the presence of dental amalgam.
Contributors: Ren, Yi (Author) / Gordon, Gwyneth GG (Thesis advisor) / Anbar, Ariel AD (Thesis advisor) / Shock, Everett ES (Committee member) / Knudson, Kelly KJ (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This research aims to develop a single-phase immersion cooling system for CPU (Central Processing Unit) processors. To achieve this, a heat pipe with a dielectric liquid is designed to cool the CPU, relying only on natural convection. The Tesla valve phenomenon is used to achieve a one-directional, recirculating system. A comparative study was conducted between two single-phase dielectric fluids, mineral oil and FC-3283 (a fluorocarbon), using natural convection and Boussinesq correlations. ANSYS Fluent was used for CFD (Computational Fluid Dynamics) analysis, demonstrating natural convection and recirculating flow in the heating direction. A comparison between the traditional air-cooling method and the developed immersion cooling system indicates that the system can reduce the operating temperature of the CPU by 40 to 50 degrees Celsius, depending on power consumption. The experimental results showed that a processor cooled by mineral oil would operate at 56 degrees Celsius and a processor cooled by FC-3283 at 47 degrees Celsius, whereas a processor cooled by a traditional air-cooled system would operate between 80 and 100 degrees Celsius. These results demonstrate that the mineral oil and FC-3283 cooling systems are significantly more efficient than traditional air cooling and could prove valuable in the development of more efficient cooling systems. Further research is needed to evaluate the longevity, cost-effectiveness, and benefits of these systems in comparison to traditional air cooling.
Contributors: Gajjar, Kathan Malaybhai (Author) / Huang, Huei Ping (Thesis advisor) / Chen, Kangping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Least-squares fitting in 3D is applied to produce higher-level geometric parameters that describe the optimum location of a line profile through many nodal points. The points are derived from Finite Element Analysis (FEA) simulations of elastic spring-back, both of features on stamped sheet-metal components after they have been plastically deformed in a press and released, and of simple assemblies made from them. Although the traditional Moore-Penrose inverse was used to solve the superabundant linear equations, the formulation of these equations was distinct: it was based on virtual work and statics applied to parallel-actuated robots, in order to allow both more complex profiles and a change in profile size. The output, a small displacement torsor (SDT), describes the displacement of the profile from its nominal location. It may be regarded as a generalization of the slope and intercept parameters of a line that result from a Gauss-Markov regression fit of points in a plane. Additionally, minimum-zone magnitudes were computed that just capture the points along the profile. Finally, algorithms were created to compute simple parameters for cross-sectional shapes of components from sprung-back data points, according to the protocol of simulations and benchmark experiments conducted by the metal-forming community 30 years ago, although it was necessary to modify that protocol for some geometries that differed from the benchmark.
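For intuition, the planar special case that the SDT generalizes can be sketched as follows: superabundant equations y ≈ m·x + b for many points are solved with the Moore-Penrose pseudoinverse, yielding the Gauss-Markov slope and intercept. The sample points are invented for illustration; the thesis formulation fits a 3-D line profile instead.

```python
import numpy as np

# Sketch of the Gauss-Markov line fit: more equations than unknowns,
# solved with the Moore-Penrose pseudoinverse. Points are made up.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.1, 5.9, 8.0])  # points near y = 2x

# Design matrix for the unknowns [slope, intercept].
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.pinv(A) @ y
print(round(slope, 3), round(intercept, 3))
```

The 3-D formulation replaces [slope, intercept] with the six small-displacement-torsor components, but the pseudoinverse step is the same idea.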
Contributors: Sunkara, Sai Chandu (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Generative models in various domains such as images, speech, and video have been actively developed over the last decades, and recent deep generative models can now synthesize multimedia content that is difficult to distinguish from authentic content. Such capabilities raise concerns including malicious impersonation, intellectual property (IP) theft, and copyright infringement.
One way to counter these threats is to embed attributable watermarks in synthesized content so that users can identify the user-end models from which the content was generated. This thesis investigates a solution for model attribution, i.e., the classification of synthetic content by its source model via watermarks embedded in the content. Existing studies showed the feasibility of model attribution in the image domain, and the tradeoff between attribution accuracy and generation quality under various adversarial attacks, but not in the speech domain.
This work discusses the feasibility of model attribution in the speech domain and algorithmic improvements for generating user-end speech models that empirically achieve high attribution accuracy while maintaining high generation quality. Lastly, several experiments are conducted to show the tradeoff between attributability and generation quality under a variety of attacks on generated speech signals attempting to remove the watermarks.
Contributors: Cho, Yongbaek (Author) / Yang, Yezhou (Thesis advisor) / Ren, Yi (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Advanced driver-assistance systems (ADAS) are among the latest automotive technologies for improving vehicle safety. An efficient way to ensure vehicle safety is to keep vehicle states within a predefined stability region at all times. Hence, this thesis aims to design a model predictive control (MPC) scheme with non-overshooting constraints that confines vehicle states to a predefined lateral stability region. To address the feasibility and stability of the MPC, terminal costs and constraints are investigated to guarantee the stability and recursive feasibility of the proposed non-overshooting MPC. The proposed controller is first verified using numerical examples of linear and nonlinear systems. Finally, it is applied to guarantee vehicle lateral stability, based on a nonlinear vehicle model, during a cornering maneuver. The simulation results are presented and discussed through co-simulation of CarSim® and MATLAB/Simulink.
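The receding-horizon idea behind constraining states to a stability region can be sketched with a deliberately simple one-step MPC (an illustration only, not the thesis controller): a scalar system x_{t+1} = a·x + b·u is kept inside |x| ≤ x_max by searching a discretized input set for the feasible input of lowest quadratic cost. All numbers and names are assumptions.

```python
import numpy as np

# Illustrative one-step MPC for a scalar, open-loop-unstable system,
# with a hard state constraint standing in for a "stability region".
a, b = 1.2, 1.0          # dynamics x_next = a*x + b*u
x_max, u_max = 1.0, 2.0  # state constraint and input bound
Q, R = 1.0, 0.1          # stage-cost weights

def one_step_mpc(x):
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(-u_max, u_max, 401):
        x_next = a * x + b * u
        if abs(x_next) > x_max:       # reject constraint violations
            continue
        cost = Q * x_next**2 + R * u**2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop starting on the constraint boundary.
x = 1.0
for _ in range(10):
    x = a * x + b * one_step_mpc(x)
print(abs(x) <= x_max)
```

A real MPC, as in the thesis, solves a constrained optimization over a multi-step horizon with terminal cost and terminal constraints to guarantee recursive feasibility; the grid search here only conveys the constrained receding-horizon idea.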
Contributors: Sudhakhar, Monish Dev (Author) / Chen, Yan (Thesis advisor) / Ren, Yi (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experience and, in times of uncertainty, use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while individual costs are minimized. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experience, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.
Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing the actuation (control) that drives them toward their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed so that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the point at which the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach.
In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots may have different control preferences, for example different actuation abilities, but are still required to coordinate and perform load transport. The control preference of each robot is characterized by a scalar parameter θ_i unique to that robot and unknown to the other robots. With the aid of state and control-input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state.
Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a 1-step receding-horizon optimal control problem. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.
A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite-horizon linear quadratic regulator (LQR) problem to compute its control. LCA-2 uses a learning algorithm similar to line-search methods and guarantees learning convergence to the true values asymptotically.
Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running LCA-2 are able to resist disturbances and balance the assumed load better than LCA-1.
Contributors: Kambam, Karthik (Author) / Zhang, Wenlong (Thesis advisor) / Nedich, Angelia (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Performance evaluation and characterization of lithium-ion cells under simulated PHEVs' drive cycles
Description
Increasing demand for reducing the stress on fossil fuels has motivated the automotive industry to shift toward sustainable modes of transport through electric and hybrid electric vehicles. The most fuel-efficient cars of 2016 are hybrid vehicles, as reported by the Environmental Protection Agency. Hybrid vehicles operate with an internal combustion engine and electric motors powered by batteries, and can significantly improve fuel economy through downsizing of the engine. Plug-in hybrids (PHEVs) have an additional feature compared to hybrid vehicles: recharging the batteries through external power outlets. Among hybrid powertrains, lithium-ion batteries have emerged as the major electrochemical storage source for vehicle propulsion.
In PHEVs, batteries operate in charge-sustaining and charge-depleting modes based on torque requirements and state of charge. In the current work, 26650 lithium-ion cells were cycled extensively at 25 and 50 °C in charge-sustaining mode to monitor capacity and cell impedance values, followed by analysis of the lithium iron phosphate (LiFePO4) cathode material by X-ray diffraction (XRD). The high-frequency resistance measured by electrochemical impedance spectroscopy was found to increase significantly under high-temperature cycling, leading to power fading. No phase change in the LiFePO4 cathode material was observed after 330 cycles at elevated temperature in charge-sustaining mode from the XRD analysis. However, there was a significant change in the crystallite size of the cathode active material after charge/discharge cycling in charge-sustaining mode. Additionally, 18650 lithium-ion cells were tested in charge-depleting mode to monitor capacity values.
Contributors: Badami, Pavan Pramod (Author) / Kannan, Arunachala Mada (Thesis advisor) / Huang, Huei Ping (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A process plan is an instruction set for the manufacture of parts, generated from detailed design drawings or CAD models. While these plans are highly detailed about machines, tools, fixtures, and operation parameters, tolerances typically show up in a less formal manner in such plans, if at all. It is not uncommon to see only dimensional plus/minus values on rough sketches accompanying the instructions. Design drawings, on the other hand, use standard GD&T (Geometric Dimensioning and Tolerancing) symbols with datums and DRFs (Datum Reference Frames) clearly specified. This is not to say that process planners do not consider tolerances; they are implied by the choices of fixtures, tools, machines, and operations. When converting design tolerances to the manufacturing datum flow, process planners do tolerance charting, which is based on the operation sequence, but the resulting plans cannot be audited for conformance to design specifications.
In this thesis, I present a framework for explicating the GD&T schema implied by machining process plans. The first step is to derive the DRFs from the fixturing method in each set-up. Then, basic dimensions for the features to be machined in each set-up are determined with respect to the extracted DRF. Using shop data for the machines and operations involved, the range of possible geometric variations is estimated for each type of tolerance (form, size, orientation, and position). The sequence of manufacturing operations determines the datum flow chain. Once we have a formal manufacturing GD&T schema, we can analyze it and compare it to tolerance specifications from design using the T-map math model. Since the model is based on the manufacturing process plan, it is called the resulting T-map, or m-map. The process plan can then be validated by adjusting parameters so that the m-map lies within the T-map created for the design drawing. How the m-map is created so that it can be compared with the T-map is the focus of this research.
Contributors: Haghighi, Payam (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015