Description
Least squares fitting in 3D is applied to produce higher-level geometric parameters that describe the optimum location of a line-profile through many nodal points derived from Finite Element Analysis (FEA) simulations of elastic spring-back of features both on stamped sheet metal components, after they have been plastically deformed in a press and released, and on simple assemblies made from them. Although the traditional Moore-Penrose inverse was used to solve the superabundant linear equations, the formulation of these equations was distinct and based on virtual work and statics applied to parallel-actuated robots, in order to allow for both more complex profiles and a change in profile size. The output, a small displacement torsor (SDT), is used to describe the displacement of the profile from its nominal location. It may be regarded as a generalization of the slope and intercept parameters of a line which result from a Gauss-Markov regression fit of points in a plane. Additionally, minimum-zone magnitudes were computed that just capture the points along the profile. Finally, algorithms were created to compute simple parameters for cross-sectional shapes of components from sprung-back data points, according to the protocol of simulations and benchmark experiments conducted by the metal forming community 30 years ago, although it was necessary to modify that protocol for some geometries that differed from the benchmark.
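The planar analogue mentioned in the abstract makes the idea concrete. Below is a minimal sketch (not code from the thesis) that fits slope and intercept to noisy points with the Moore-Penrose pseudoinverse; the SDT components play the corresponding role for a 3-D line-profile.

```python
# Minimal sketch (not the thesis code): the planar Gauss-Markov analogue of the
# fit described above. A superabundant (overdetermined) linear system A*x = b
# is solved with the Moore-Penrose pseudoinverse; the resulting slope and
# intercept play the role that the small displacement torsor (SDT) components
# play for a 3-D line-profile.
import numpy as np

# Noisy sample points scattered about a nominal line y = 0.5*x + 1.0
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 0.5 * x + 1.0 + 0.05 * rng.standard_normal(x.size)

# Design matrix: one row per point, columns for slope and intercept
A = np.column_stack([x, np.ones_like(x)])

# Least-squares solution via the Moore-Penrose inverse
slope, intercept = np.linalg.pinv(A) @ y
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```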
Contributors: Sunkara, Sai Chandu (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Advanced driver assistance systems (ADAS) are one of the latest automotive technologies for improving vehicle safety. An efficient method to ensure vehicle safety is to always keep the vehicle states within a predefined stability region. Hence, this thesis aims at designing a model predictive control (MPC) scheme with non-overshooting constraints that always confines the vehicle states within a predefined lateral stability region. To address the feasibility and stability of the MPC, a terminal cost and terminal constraints are investigated that guarantee the stability and recursive feasibility of the proposed non-overshooting MPC. The proposed non-overshooting MPC is first verified using numerical examples of linear and nonlinear systems. Finally, the non-overshooting MPC is applied to guarantee vehicle lateral stability based on a nonlinear vehicle model for a cornering maneuver. The simulation results are presented and discussed through co-simulation of CarSim® and MATLAB/Simulink.
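A minimal sketch of the kind of constrained MPC step described above, using a toy linear model and cvxpy; the box bound on the predicted states stands in for the lateral stability region, and all matrices and limits are placeholder values, not the thesis vehicle model.

```python
# Minimal sketch (assumed, not the thesis implementation): one linear MPC step
# in which the predicted states are constrained to stay inside a box |x| <= x_max
# over the whole horizon, a simple stand-in for the "non-overshooting"
# stability-region constraint, with a terminal cost on the final state.
import numpy as np
import cvxpy as cp

# Toy 2-state, 1-input discrete-time model (placeholder dynamics)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                         # prediction horizon
x_max = np.array([1.0, 0.5])   # stand-in stability-region bounds
u_max = 2.0

x0 = np.array([0.8, -0.3])     # current state

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(x[:, k + 1]) <= x_max,   # keep states in the region
                    cp.abs(u[:, k]) <= u_max]
cost += 10 * cp.sum_squares(x[:, N])                # terminal cost

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])
```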
Contributors: Sudhakhar, Monish Dev (Author) / Chen, Yan (Thesis advisor) / Ren, Yi (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experience, and in times of uncertainty use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while minimizing individual costs. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experiences, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.

Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing the actuation (control) that drives them towards their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used in control algorithms for computing control signals. Learning and control algorithms are designed in such a way that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the event/point where the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach.

In collective load transport, two or more agents (robots) carry a load from point A to point B in space. The robots could have different control preferences, for example, different actuation abilities, but are still required to coordinate and perform the load transport. Control preferences for each robot are characterized using a scalar parameter θi, unique to the robot being considered and unknown to the other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state.

Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent optimizes a cost function similar to a 1-step receding horizon optimal control problem for control. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.

A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line search methods, and guarantees learning convergence to the true values asymptotically.

Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running algorithm LCA-2 are able to resist disturbances and balance the assumed load better than LCA-1.
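As an illustration of the learning step in LCA-1, the sketch below applies recursive least squares to estimate a neighbouring agent's unknown preference parameters from observed data; the regression model and numbers are assumed for illustration only, not taken from the thesis.

```python
# Minimal sketch (assumed, not the thesis code): recursive least squares (RLS),
# the learning algorithm used in LCA-1, applied to estimating another agent's
# unknown preference parameters from its observed state/control pairs. The
# regression model y_k = phi_k . theta is a stand-in for how a neighbour's
# control depends on its (unknown) preference theta.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.7, -1.3])        # unknown preference (to be learned)

theta_hat = np.zeros(2)                   # current estimate
P = 1e3 * np.eye(2)                       # estimate covariance

for k in range(20):
    phi = rng.standard_normal(2)          # regressor built from observed states
    y = phi @ theta_true + 0.01 * rng.standard_normal()   # observed control

    # Standard RLS update
    K = P @ phi / (1.0 + phi @ P @ phi)   # gain
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = P - np.outer(K, phi @ P)

print("estimate:", theta_hat, "true:", theta_true)
```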
Contributors: KAMBAM, KARTHIK (Author) / Zhang, Wenlong (Thesis advisor) / Nedich, Angelia (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Increasing demand for reducing the stress on fossil fuels has motivated automotive industries to shift towards sustainable modes of transport through electric and hybrid electric vehicles. The most fuel-efficient cars of the year 2016 are hybrid vehicles, as reported by the Environmental Protection Agency. Hybrid vehicles operate with an internal combustion engine and electric motors powered by batteries, and can significantly improve fuel economy due to downsizing of the engine. Plug-in hybrids (PHEVs) have an additional feature compared to hybrid vehicles: their batteries can be recharged through external power outlets. Among hybrid powertrains, lithium-ion batteries have emerged as a major electrochemical storage source for propulsion of vehicles.

In PHEVs, batteries operate under charge-sustaining and charge-depleting modes based on torque requirement and state of charge. In the current work, 26650 lithium-ion cells were cycled extensively at 25 and 50 °C under charge-sustaining mode to monitor capacity and cell impedance, followed by analysis of the lithium iron phosphate (LiFePO4) cathode material by X-ray diffraction (XRD). The high-frequency resistance measured by electrochemical impedance spectroscopy was found to increase significantly under high-temperature cycling, leading to power fading. No phase change in the LiFePO4 cathode material is observed after 330 cycles at elevated temperature under charge-sustaining mode from the XRD analysis. However, there was a significant change in the crystallite size of the cathode active material after charge/discharge cycling under charge-sustaining mode. Additionally, 18650 lithium-ion cells were tested under charge-depleting mode to monitor capacity values.
Contributors: Badami, Pavan Pramod (Author) / Kannan, Arunachala Mada (Thesis advisor) / Huang, Huei Ping (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A process plan is an instruction set for the manufacture of parts, generated from detailed design drawings or CAD models. While these plans are highly detailed about machines, tools, fixtures, and operation parameters, tolerances typically show up in a less formal manner in such plans, if at all. It is not uncommon to see only dimensional plus/minus values on rough sketches accompanying the instructions. On the other hand, design drawings use standard GD&T (Geometric Dimensioning and Tolerancing) symbols with datums and DRFs (Datum Reference Frames) clearly specified. This is not to say that process planners do not consider tolerances; they are implied by way of choices of fixtures, tools, machines, and operations. When converting design tolerances to the manufacturing datum flow, process planners do tolerance charting, which is based on the operation sequence, but the resulting plans cannot be audited for conformance to design specifications.

In this thesis, I will present a framework for explicating the GD&T schema implied by machining process plans. The first step is to derive the DRFs from the fixturing method in each set-up. Then the basic dimensions for the features to be machined in each set-up are determined with respect to the extracted DRF. Using shop data for the machines and operations involved, the range of possible geometric variations is estimated for each type of tolerance (form, size, orientation, and position). The sequence of manufacturing operations determines the datum flow chain. Once we have a formal manufacturing GD&T schema, we can analyze and compare it to the tolerance specifications from design using the T-map math model. Since this model is based on the manufacturing process plan, it is called the resulting T-map, or m-map. The process plan can then be validated by adjusting parameters so that the m-map lies within the T-map created for the design drawing. How the m-map is created to be compared with the T-map is the focus of this research.
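A toy illustration of the conformance test (an assumed representation, not the thesis method): if both maps are idealized as convex polytopes, the m-map lies within the T-map when all of its vertices satisfy the T-map's half-space inequalities.

```python
# Minimal sketch (assumed, not the thesis method): a containment check in the
# spirit of "the m-map must lie within the T-map". Both maps are idealized
# here as convex polytopes, the T-map in half-space form {p : A p <= b} and the
# m-map as a set of vertices; containment is tested vertex by vertex.
import numpy as np

def contains(A, b, points, tol=1e-9):
    """True if every point satisfies A @ p <= b (within tolerance)."""
    return bool(np.all(A @ points.T <= b[:, None] + tol))

# Toy 2-D "T-map": the box |x| <= 1, |y| <= 1 in half-space form
A_t = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b_t = np.array([1.0, 1.0, 1.0, 1.0])

# Toy "m-map" vertices produced by a hypothetical process plan
m_map_vertices = np.array([[0.4, 0.2], [-0.3, 0.5], [0.1, -0.6]])

print("process plan conforms:", contains(A_t, b_t, m_map_vertices))
```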
Contributors: Haghighi, Payam (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
A large fraction of the total energy consumption in the world comes from heating and cooling of buildings. Improving the energy efficiency of buildings to reduce the need for seasonal heating and cooling is one of the major challenges in sustainable development. In general, the energy efficiency depends on the geometry and material of the building. To explore a framework for accurately assessing this dependence, detailed 3-D thermofluid simulations are performed by systematically sweeping the parameter space spanned by four parameters: the size of the building, the thickness and material of the wall, and the fractional size of the window. The simulations incorporate realistic boundary conditions of diurnally varying temperatures from observations, and the effect of fluid flow with explicit thermal convection inside the building. The outcome of the numerical simulations is synthesized into a simple map of an index of energy efficiency in the parameter space, which stakeholders can use to quickly look up the energy efficiency of a proposed building design before its construction. Although this study only considers a special prototype of buildings, the framework developed in this work can potentially be used for a wide range of buildings and applications.
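A minimal sketch of how such a look-up map might be consumed (assumed, not the thesis code); only three of the four swept parameters are shown, and the grid values and efficiency indices are placeholders.

```python
# Minimal sketch (assumed, not the thesis code): once the parameter sweep has
# produced an energy-efficiency index on a regular grid, a designer can look up
# a proposed design by snapping each parameter to its nearest grid value.
# Parameter names and grid values here are placeholders.
import numpy as np

# Swept parameter values (placeholders)
size_m   = np.array([5.0, 10.0, 15.0])      # building size
wall_mm  = np.array([100.0, 200.0, 300.0])  # wall thickness
window_f = np.array([0.1, 0.3, 0.5])        # fractional window size

# Efficiency index from the simulations, indexed [size, wall, window]
rng = np.random.default_rng(2)
efficiency = rng.uniform(0.4, 0.9, size=(3, 3, 3))  # stand-in for real results

def look_up(size, wall, window):
    """Nearest-grid-point lookup of the efficiency index."""
    i = np.abs(size_m - size).argmin()
    j = np.abs(wall_mm - wall).argmin()
    k = np.abs(window_f - window).argmin()
    return efficiency[i, j, k]

print("index for a 12 m building, 250 mm wall, 0.25 window:",
      look_up(12.0, 250.0, 0.25))
```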
Contributors: Jain, Gaurav (Author) / Huang, Huei-Ping (Thesis advisor) / Ren, Yi (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The majority of the environmental issues the world is confronting today are due to our dependence on fossil fuels and the increase in CO2 emissions. An alternative solution to this problem is the use of renewable energy for energy production, but these are uncertain energy sources. Thus, combining the reduction of carbon dioxide with the use of renewable energy sources is the best way to mitigate this problem. Electrochemical reduction of carbon dioxide (ERC) is a reasonable approach, as it both eliminates carbon dioxide and utilizes it as a source for generating valuable products.

In this study, the development of an electrochemical reactor, the characterization of the membrane electrode assembly (MEA), and the analysis of the electrochemical reduction of carbon dioxide (ERC) are discussed. Electrodes using various catalyst materials on a solid polymer electrolyte (SPE) along with a gas diffusion layer (GDL) are developed. The prepared membrane electrodes are characterized under ex-situ conditions using scanning electron microscopy (SEM). The membranes are later placed in the electrochemical reactor for in-situ characterization to assess the performance of the membrane electrode assembly.

The electrodes are processed by airbrushing the metal particles onto the Nafion membrane and are then electrochemically characterized by linear sweep voltammetry. The anode was kept constant with platinum, whereas the cathode was examined with different compositions of metal catalysts. The products formed are subsequently analyzed using gas chromatography (GC) and residual gas analysis (RGA). Hydrogen (H2) and carbon monoxide (CO) are detected using GC, while the hydrocarbons are detected by performing quantitative analysis using RGA. The preliminary experiments gave very encouraging results; however, more work needs to be done to achieve new heights.
Contributors: Venka, Rishika (Author) / Kannan, Arunachala Mada (Thesis advisor) / Huang, Huei-Ping (Thesis advisor) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
When manufacturing large or complex parts, often a rough operation such as casting is used to create the majority of the part geometry. Due to the highly variable nature of the casting process, for mechanical components that require precision surfaces for functionality or assembly with others, some of the important features are machined to specification. Depending on the relative locations of as-cast to-be-machined features and the amount of material at each, the part may be positioned or ‘set up’ on a fixture in a configuration that will ensure that the pre-specified machining operations will successfully clean up the rough surfaces and produce a part that conforms to any assigned tolerances. For a particular part whose features incur excessive deviation in the casting process, it may be that no setup would yield an acceptable final part. The proposed Setup-Map (S-Map) describes the positions and orientations of a part that will allow it to be successfully machined, and can be used to determine whether a particular part cannot be made to specification.

The Setup Map is a point space in six dimensions where each of the six orthogonal coordinates corresponds to one of the rigid-body displacements in three dimensional space: three rotations and three translations. Any point within the boundaries of the Setup-Map (S-Map) corresponds to a small displacement of the part that satisfies the condition that each feature will lie within its associated tolerance zone after machining. The process for creating the S-Map involves the representation of constraints imposed by the tolerances in simple coordinate systems for each to-be-machined feature. Constraints are then transformed to a single coordinate system where the intersection reveals the common allowable ‘setup’ points. Should an intersection of the six-dimensional constraints exist, an optimization scheme is used to choose a single setup that gives the best chance for machining to be completed successfully. Should no intersection exist, the particular part cannot be machined to specification or must be re-worked with weld metal added to specific locations.
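A minimal sketch of the feasibility question at the heart of the S-Map (assumed, not the thesis algorithm): with the tolerance constraints idealized as linear half-spaces in the six displacement coordinates, a zero-objective linear program reports whether their intersection is non-empty.

```python
# Minimal sketch (assumed, not the thesis algorithm): the core S-Map question,
# "does any small displacement of the part satisfy every feature's constraint?",
# posed as a feasibility check over the six rigid-body displacement coordinates
# (three rotations, three translations). Constraints are idealized here as
# linear half-spaces A d <= b; a linear program with a zero objective reports
# whether their intersection is non-empty.
import numpy as np
from scipy.optimize import linprog

n = 6  # (rx, ry, rz, tx, ty, tz)

# Placeholder tolerance-derived half-spaces: |d_i| <= limit_i for each coordinate
limits = np.array([0.002, 0.002, 0.002, 0.1, 0.1, 0.1])
A = np.vstack([np.eye(n), -np.eye(n)])
b = np.concatenate([limits, limits])

res = linprog(c=np.zeros(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n,
              method="highs")
if res.success:
    print("a feasible setup exists, e.g. displacement:", res.x)
else:
    print("no setup can machine this part to specification")
```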
Contributors: Kalish, Nathan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of “detailers”. The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis will focus on the latter part of automated tolerance specification, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint tolerance feature graph file developed previously at the Design Automation Lab (DAL) and is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values so as to ensure that the assemblability conditions are satisfied. Assemblability refers to “the ability to assemble/fit a set of parts in specified configuration given a nominal geometry and its corresponding tolerances”. Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops; hence, the tolerance loops are extracted first. Once the tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation would satisfy the assemblability requirements. Overlapping loops have to be satisfied simultaneously and progressively. Hence, tolerances need to be re-allocated iteratively. This is done with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints, from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to the associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using the sensitivities and the weights associated with the contributors in the stack.
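A minimal sketch of the initial allocation rule described above (assumed, not the DAL module): the loop's budget is split among contributors in proportion to their nominal dimensions.

```python
# Minimal sketch (assumed, not the DAL module): the initial allocation step --
# a loop's tolerance budget, taken from the available clearance, is distributed
# among the loop's contributors in proportion to their nominal dimensions.
def allocate_initial(clearance_budget, nominal_dims):
    """Split the budget across contributors proportionally to nominal size."""
    total = sum(nominal_dims)
    return [clearance_budget * d / total for d in nominal_dims]

# Hypothetical loop: three contributing dimensions sharing a 0.30 mm budget
tolerances = allocate_initial(0.30, [50.0, 20.0, 10.0])
print(tolerances)   # [0.1875, 0.075, 0.0375] mm
```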

Several test cases have been run with this software and the desired user input acceptance rates are achieved. Three test cases are presented and the output of each module is discussed.
Contributors: Biswas, Deepanjan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The environmental impact of fossil fuels has increased tremendously in the last decade and is one of the most significant contributors to global warming. This research aims to reduce the amount of fuel consumed by vehicles by optimizing the control scheme using future route information. Taking advantage of the additional degrees of freedom available within PHEV, HEV, and FCHEV energy management allows more margin to maximize efficiency in the propulsion systems. The application focuses on reducing the energy consumption of vehicles by acquiring information about the road grade. Road elevations are obtained from Geographic Information System (GIS) maps to optimize the controller. The optimization is then reflected in the powertrain of the vehicle. The approach uses a Model Predictive Control (MPC) algorithm that allows the energy management strategy to leverage road grade to prepare the vehicle for minimizing energy consumption during an uphill and for potential energy harvesting during a downhill. The control algorithm predicts future energy/power requirements of the vehicle and optimizes performance by instructing the power split between the internal combustion engine (ICE) and the electric-drive system, allowing for more efficient operation and higher performance of the PHEV and HEV. Implementation of different strategies, such as MPC and Dynamic Programming (DP), is considered for optimizing energy management systems. These strategies are utilized so as to keep processing time low, which allows the optimization to be integrated with ADAS applications using current technology for implementable real-time applications.

The thesis presents multiple control strategies designed, implemented, and tested using real-world road elevation data from three different routes. Initial simulation-based results show significant energy savings: the savings range between 11.84% and 25.5% for both the Rule-Based (RB) and DP strategies on the real-world tested routes. Future work will take advantage of vehicle connectivity and ADAS systems, utilizing Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication, traffic information, and sensor fusion to further optimize the PHEV and HEV towards more energy-efficient operation.
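A minimal sketch of the DP idea (assumed, not the thesis controller): a backward recursion over a discretized state of charge chooses the engine/motor power split on each road segment, with grade entering through the power demand; all numbers are placeholders used only to illustrate the structure.

```python
# Minimal sketch (assumed, not the thesis controller): a tiny backward dynamic
# program over battery state of charge (SOC) that chooses the engine/motor
# power split at each route segment. Grades, power demand, and efficiencies
# are placeholder numbers used only to illustrate the DP structure.
import numpy as np

grades = np.array([0.00, 0.03, 0.05, -0.02, -0.04, 0.00])   # route profile
p_demand = 10.0 + 400.0 * grades          # kW demanded per segment (toy model)
soc_grid = np.linspace(0.3, 0.9, 25)      # discretized SOC states
splits = np.linspace(0.0, 1.0, 11)        # fraction of demand met by the motor
soc_per_kw = 0.002                        # SOC change per kW of motor power (toy)

cost_to_go = np.zeros(soc_grid.size)      # terminal cost
for p in p_demand[::-1]:                  # backward over the route segments
    new_cost = np.full(soc_grid.size, np.inf)
    for i, soc in enumerate(soc_grid):
        for s in splits:
            motor_kw = s * p
            engine_kw = p - motor_kw
            fuel = max(engine_kw, 0.0) * 0.08          # toy fuel cost
            soc_next = soc - soc_per_kw * motor_kw     # regen when motor_kw < 0
            if not (soc_grid[0] <= soc_next <= soc_grid[-1]):
                continue
            j = np.abs(soc_grid - soc_next).argmin()   # snap to the SOC grid
            new_cost[i] = min(new_cost[i], fuel + cost_to_go[j])
    cost_to_go = new_cost

print("minimum fuel cost from SOC = 0.6:",
      cost_to_go[np.abs(soc_grid - 0.6).argmin()])
```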
Contributors: Alzorgan, Mohammad (Author) / Mayyas, Abdel Ra’ouf (Thesis advisor) / Berman, Spring (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016