Matching Items (22)

Description
Coordination and control of intelligent agents as a team is considered in this thesis. Intelligent agents learn from experience and, in times of uncertainty, use the knowledge acquired to make decisions and accomplish their individual or team objectives. Agent objectives are defined using cost functions designed uniquely for the collective task being performed. Individual agent costs are coupled in such a way that the group objective is attained while individual costs are minimized. Information asymmetry refers to situations where interacting agents have no knowledge, or only partial knowledge, of the cost functions of other agents. By virtue of their intelligence, i.e., by learning from past experience, agents learn the cost functions of other agents, predict their responses, and act adaptively to accomplish the team's goal.

Algorithms that agents use for learning others' cost functions are called learning algorithms, and algorithms agents use for computing the actuation (control) that drives them toward their goal and minimizes their cost functions are called control algorithms. Typically, knowledge acquired using learning algorithms is used by control algorithms for computing control signals. Learning and control algorithms are designed so that the multi-agent system as a whole remains stable during learning and later at an equilibrium. An equilibrium is defined as the point at which the cost functions of all agents are optimized simultaneously. Cost functions are designed so that the equilibrium coincides with the goal state the multi-agent system as a whole is trying to reach.

In collective load transport, two or more agents (robots) carry a load from point A to point B in space. Robots may have different control preferences, for example different actuation abilities, yet are still required to coordinate and perform the load transport. Control preferences for each robot are characterized using a scalar parameter θᵢ, unique to the robot being considered and unknown to the other robots. With the aid of state and control input observations, agents learn the control preferences of other agents, optimize individual costs, and drive the multi-agent system to a goal state.

Two learning and control algorithms are presented. In the first algorithm (LCA-1), an existing work, each agent computes control by optimizing a cost function similar to a 1-step receding horizon optimal control problem. LCA-1 uses recursive least squares as the learning algorithm and guarantees complete learning in two time steps. LCA-1 is experimentally verified as part of this thesis.

A novel learning and control algorithm (LCA-2) is proposed and verified in simulations and on hardware. In LCA-2, each agent solves an infinite horizon linear quadratic regulator (LQR) problem for computing control. LCA-2 uses a learning algorithm similar to line search methods and guarantees learning convergence to the true values asymptotically.

Simulations and hardware implementation show that LCA-2 is stable for a variety of systems. Load transport is demonstrated using both algorithms. Experiments running algorithm LCA-2 resist disturbances and balance the assumed load better than LCA-1.
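
As a rough illustration of the recursive least squares learning step used in LCA-1 (this is a generic RLS update with a placeholder regressor and observation model, not the exact formulation from the thesis), one update might look like:

```python
import numpy as np

def rls_update(theta_hat, P, phi, y, lam=1.0):
    """One recursive least squares update of a parameter estimate.

    theta_hat : current parameter estimate, shape (n,)
    P         : estimate covariance matrix, shape (n, n)
    phi       : regressor vector built from observed states/inputs, shape (n,)
    y         : scalar observation modeled as phi . theta
    lam       : forgetting factor (1.0 = standard RLS)
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + float(phi.T @ P @ phi))        # gain vector, shape (n, 1)
    err = y - float(phi.T @ theta_hat.reshape(-1, 1))   # prediction error
    theta_hat = theta_hat + K.flatten() * err           # update estimate
    P = (P - K @ phi.T @ P) / lam                        # update covariance
    return theta_hat, P

# Hypothetical use: estimate another robot's scalar preference parameter
theta_hat, P = np.zeros(1), np.eye(1) * 100.0
for phi_k, y_k in [(np.array([1.0]), 2.1), (np.array([2.0]), 3.9)]:
    theta_hat, P = rls_update(theta_hat, P, phi_k, y_k)
print(theta_hat)  # converges toward ~2, since y ≈ 2 * phi in this toy data
```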
Contributors: KAMBAM, KARTHIK (Author) / Zhang, Wenlong (Thesis advisor) / Nedich, Angelia (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Increasing demand for reducing the stress on fossil fuels has motivated the automotive industry to shift towards sustainable modes of transport through electric and hybrid electric vehicles. The most fuel-efficient cars of 2016 are hybrid vehicles, as reported by the Environmental Protection Agency. Hybrid vehicles operate with an internal combustion engine and electric motors powered by batteries, and can significantly improve fuel economy due to downsizing of the engine. Plug-in hybrids (PHEVs) have an additional feature compared to hybrid vehicles: the ability to recharge batteries through external power outlets. Among hybrid powertrains, lithium-ion batteries have emerged as a major electrochemical storage source for vehicle propulsion.

In PHEVs, batteries operate under charge-sustaining and charge-depleting modes based on torque requirement and state of charge. In the current work, 26650 lithium-ion cells were cycled extensively at 25 and 50 °C under charge-sustaining mode to monitor capacity and cell impedance values, followed by analysis of the lithium iron phosphate (LiFePO4) cathode material by X-ray diffraction (XRD). High-frequency resistance measured by electrochemical impedance spectroscopy was found to increase significantly under high-temperature cycling, leading to power fading. No phase change in the LiFePO4 cathode material is observed after 330 cycles at elevated temperature under charge-sustaining mode in the XRD analysis. However, there was a significant change in the crystallite size of the cathode active material after charge/discharge cycling in charge-sustaining mode. Additionally, 18650 lithium-ion cells were tested under charge-depleting mode to monitor capacity values.
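
Crystallite size is commonly estimated from XRD peak broadening with the Scherrer equation, D = Kλ/(β cos θ). The sketch below only illustrates that standard calculation; the peak position, width, wavelength, and shape factor are placeholder assumptions, not data or methods taken from the thesis.

```python
import math

def scherrer_crystallite_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Estimate crystallite size (nm) from XRD peak broadening.

    fwhm_deg      : full width at half maximum of the peak, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    wavelength_nm : X-ray wavelength (Cu K-alpha by default)
    K             : Scherrer shape factor (~0.9 for roughly equiaxed crystallites)
    """
    beta = math.radians(fwhm_deg)               # peak width in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical LiFePO4 reflection near 35.6 deg 2-theta with 0.25 deg FWHM
print(scherrer_crystallite_size(fwhm_deg=0.25, two_theta_deg=35.6))  # ~33 nm
```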
Contributors: Badami, Pavan Pramod (Author) / Kannan, Arunachala Mada (Thesis advisor) / Huang, Huei Ping (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A process plan is an instruction set for the manufacture of parts, generated from detailed design drawings or CAD models. While these plans are highly detailed about machines, tools, fixtures, and operation parameters, tolerances typically show up in a less formal manner in such plans, if at all. It is not uncommon to see only dimensional plus/minus values on rough sketches accompanying the instructions. On the other hand, design drawings use standard GD&T (Geometric Dimensioning and Tolerancing) symbols with datums and DRFs (Datum Reference Frames) clearly specified. This is not to say that process planners do not consider tolerances; they are implied by way of choices of fixtures, tools, machines, and operations. When converting design tolerances to the manufacturing datum flow, process planners do tolerance charting, which is based on the operation sequence, but the resulting plans cannot be audited for conformance to design specifications.

In this thesis, I present a framework for explicating the GD&T schema implied by machining process plans. The first step is to derive the DRFs from the fixturing method in each set-up. Then basic dimensions for the features to be machined in each set-up are determined with respect to the extracted DRF. Using shop data for the machines and operations involved, the range of possible geometric variations is estimated for each type of tolerance (form, size, orientation, and position). The sequence of manufacturing operations determines the datum flow chain. Once we have a formal manufacturing GD&T schema, we can analyze and compare it to tolerance specifications from design using the T-map math model. Since the model is based on the manufacturing process plan, it is called the resulting T-map, or m-map. The process plan can then be validated by adjusting parameters so that the m-map lies within the T-map created for the design drawing. How the m-map is created so that it can be compared with the T-map is the focus of this research.
Contributors: Haghighi, Payam (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
ABSTRACT

A large fraction of the total energy consumption in the world comes from heating and cooling of buildings. Improving the energy efficiency of buildings to reduce the need for seasonal heating and cooling is one of the major challenges in sustainable development. In general, the energy efficiency depends on the geometry and material of the building. To explore a framework for accurately assessing this dependence, detailed 3-D thermofluid simulations are performed by systematically sweeping the parameter space spanned by four parameters: the size of the building, the thickness and material of the wall, and the fractional size of the window. The simulations incorporate realistic boundary conditions of diurnally varying temperatures from observation, and the effect of fluid flow with explicit thermal convection inside the building. The outcome of the numerical simulations is synthesized into a simple map of an index of energy efficiency in the parameter space, which stakeholders can use to quickly look up the energy efficiency of a proposed building design before its construction. Although this study only considers a special prototype of buildings, the framework developed in this work can potentially be used for a wide range of buildings and applications.
Contributors: Jain, Gaurav (Author) / Huang, Huei-Ping (Thesis advisor) / Ren, Yi (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
When manufacturing large or complex parts, a rough operation such as casting is often used to create the majority of the part geometry. Due to the highly variable nature of the casting process, for mechanical components that require precision surfaces for functionality or assembly with others, some of the important features are machined to specification. Depending on the relative locations of as-cast to-be-machined features and the amount of material at each, the part may be positioned or ‘set up’ on a fixture in a configuration that will ensure that the pre-specified machining operations successfully clean up the rough surfaces and produce a part that conforms to any assigned tolerances. For a particular part whose features incur excessive deviation in the casting process, it may be that no setup would yield an acceptable final part. The proposed Setup-Map (S-Map) describes the positions and orientations of a part that allow it to be successfully machined, and can be used to determine whether a particular part cannot be made to specification.

The Setup-Map is a point space in six dimensions where each of the six orthogonal coordinates corresponds to one of the rigid-body displacements in three-dimensional space: three rotations and three translations. Any point within the boundaries of the S-Map corresponds to a small displacement of the part that satisfies the condition that each feature will lie within its associated tolerance zone after machining. The process for creating the S-Map involves representing the constraints imposed by the tolerances in simple coordinate systems for each to-be-machined feature. Constraints are then transformed to a single coordinate system where their intersection reveals the common allowable ‘setup’ points. Should an intersection of the six-dimensional constraints exist, an optimization scheme is used to choose a single setup that gives the best chance for machining to be completed successfully. Should no intersection exist, the particular part cannot be machined to specification or must be re-worked with weld metal added to specific locations.
Contributors: Kalish, Nathan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of “detailers”. The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis focuses on the latter part of automated tolerance specification, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint tolerance feature graph file developed previously at the Design Automation Lab (DAL) and is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values so that the assemblability conditions are satisfied. Assemblability refers to “the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances”. Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops, and hence the tolerance loops are extracted first. Once the tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation would satisfy the assemblability requirements, and overlapping loops have to be satisfied simultaneously and progressively. Hence, tolerances need to be re-allocated iteratively. This is done with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to the associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using the sensitivities and the weights associated with the contributors in the stack.
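
As a minimal sketch of the proportional initial allocation described above (a simplified illustration, not the DAL implementation; the budget definition and numerical values are assumed):

```python
def allocate_initial_tolerances(clearance_budget, nominal_dims):
    """Distribute a loop's tolerance budget among its contributors in
    proportion to their nominal dimensions (simplified illustration).

    clearance_budget : total tolerance budget available in the loop
    nominal_dims     : nominal dimensions of the loop contributors
    """
    total = sum(nominal_dims)
    return [clearance_budget * d / total for d in nominal_dims]

# Hypothetical loop: 0.6 mm of clearance shared by three contributors
print(allocate_initial_tolerances(0.6, [50.0, 25.0, 25.0]))
# -> [0.3, 0.15, 0.15]
```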

Several test cases have been run with this software, and the desired user input acceptance rates are achieved. Three test cases are presented, and the output of each module is discussed.
Contributors: Biswas, Deepanjan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The environmental impact of fossil fuels has increased tremendously in the last decade and is one of the main contributing factors to global warming. This research aims to reduce the amount of fuel consumed by vehicles by optimizing the control scheme for future route information. Taking advantage of the additional degrees of freedom available within PHEV, HEV, and FCHEV energy management allows more margin to maximize efficiency in the propulsion system. The application focuses on reducing the energy consumption of vehicles by acquiring information about the road grade. Road elevations are obtained from Geographic Information System (GIS) maps to optimize the controller, and the optimization is then reflected on the powertrain of the vehicle. The approach uses a Model Predictive Control (MPC) algorithm that allows the energy management strategy to leverage road grade to prepare the vehicle for minimizing energy consumption during an uphill and potential energy harvesting during a downhill. The control algorithm predicts future energy/power requirements of the vehicle and optimizes performance by commanding the power split between the internal combustion engine (ICE) and the electric-drive system, allowing for more efficient operation and higher performance of the PHEV and HEV. Implementation of different strategies, such as MPC and Dynamic Programming (DP), is considered for optimizing energy management systems. These strategies are designed to have a low processing time, which allows the optimization to be integrated with ADAS applications using current technology for implementable real-time applications.
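
For illustration, the road-load power that such a controller would anticipate from GIS elevation data can be estimated with the standard road-load equation; the vehicle parameters below are placeholder values, not those used in the thesis.

```python
import math

def road_load_power(speed_mps, grade_rad, mass_kg=1600.0, c_rr=0.01,
                    rho_air=1.2, cd_a=0.7, g=9.81):
    """Estimate tractive power (W) needed at a given speed and road grade.

    speed_mps : vehicle speed in m/s
    grade_rad : road grade angle in radians (positive = uphill)
    mass_kg   : vehicle mass (placeholder)
    c_rr      : rolling resistance coefficient (placeholder)
    cd_a      : drag coefficient times frontal area, m^2 (placeholder)
    """
    f_grade = mass_kg * g * math.sin(grade_rad)         # grade force
    f_roll = mass_kg * g * c_rr * math.cos(grade_rad)   # rolling resistance
    f_aero = 0.5 * rho_air * cd_a * speed_mps ** 2       # aerodynamic drag
    return (f_grade + f_roll + f_aero) * speed_mps

# Hypothetical point: 25 m/s (~56 mph) on a 3% uphill grade -> ~22 kW
print(road_load_power(25.0, math.atan(0.03)))
```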

This thesis presents multiple control strategies designed, implemented, and tested using real-world road elevation data from three different routes. Initial simulation-based results show significant energy savings: the savings range between 11.84% and 25.5% for both Rule-Based (RB) and DP strategies on the real-world test routes. Future work will take advantage of vehicle connectivity and ADAS systems, utilizing Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication, traffic information, and sensor fusion to further optimize the PHEV and HEV toward more energy-efficient operation.
Contributors: Alzorgan, Mohammad (Author) / Mayyas, Abdel Ra’ouf (Thesis advisor) / Berman, Spring (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The greenhouse gases in the atmosphere have reached their highest level due to the high number of vehicles. A Fuel Cell Hybrid Electric Vehicle (FCHEV) has zero greenhouse gas emissions compared to conventional ICE vehicles or Hybrid Electric Vehicles and hence is a better alternative. All-Electric Vehicles (AEVs) have a longer charging time, which is unfavorable, and a fully charged battery gives less range than a FCHEV with a full hydrogen tank. A FCHEV therefore has the advantage of quick refueling and more mileage than AEVs. A Proton Exchange Membrane Fuel Cell (PEMFC) is the type commonly used in fuel cell vehicles, but it possesses slow current dynamics and hence is not suitable to be the sole power source in a vehicle. Therefore, improving the transient power capability of the fuel cell to satisfy the road load demand is critical.

This research studies the integration of ultra-capacitors (UCs) into a FCHEV. The objective is to analyze the effect of integrating UCs on the transient response of the FCHEV powertrain. UCs have a higher power density, which can overcome the slow dynamics of the fuel cell. A power management strategy utilizing peak power shaving is implemented. The goal is to decrease the power load on the batteries and operate the fuel cell stack in its most efficient region. A complete model to simulate the physical behavior of the UC-integrated FCHEV (UC-FCHEV) is developed using Matlab/Simulink. The fuel cell polarization curve is utilized to devise operating points of the fuel cell that maintain its operation in the most efficient region. Results show a reduction of hydrogen consumption in the aggressive US06 drive cycle from 0.29 kg per drive cycle to 0.12 kg. The maximum charge/discharge battery current was reduced from 286 amperes to 110 amperes in the US06 drive cycle. Results for the FUDS drive cycle show a reduction in fuel consumption from 0.18 kg to 0.05 kg in one drive cycle. This reduction in current increases the life of the battery since it is protected from overcurrent. The SOC profile of the battery also shows that the battery is not discharged to its minimum threshold, which increases the health of the battery based on the number of charge/discharge cycles.
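
As a rough sketch of a peak power shaving split (not the thesis's power management controller; the fuel cell power band and names below are assumptions for illustration), the fuel cell can be held within an efficient band while the ultra-capacitor absorbs or supplies the transient difference:

```python
def peak_shave_split(p_demand_w, fc_max_eff_w=30000.0, fc_min_w=2000.0):
    """Split a power request between the fuel cell and the ultra-capacitor.

    The fuel cell is kept within an efficient band [fc_min_w, fc_max_eff_w];
    the ultra-capacitor covers peaks above the band and absorbs power
    (negative share) during braking or light load.
    """
    fc_power = min(max(p_demand_w, fc_min_w), fc_max_eff_w)
    uc_power = p_demand_w - fc_power  # positive: UC discharges, negative: UC charges
    return fc_power, uc_power

# Hypothetical transient demand samples in watts
for p in (5000.0, 45000.0, -8000.0):
    print(p, peak_shave_split(p))
```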
Contributors: Jethani, Puneet V. (Author) / Mayyas, Abdel (Thesis advisor) / Berman, Spring (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Semiconductor manufacturing is one of the most complex manufacturing systems today. Since the semiconductor industry is extremely consumer driven, market demands within this industry change rapidly. It is therefore crucial for these industries to predict cycle time accurately in order to quote accurate delivery dates. Discrete Event Simulation (DES) models are often used to model these complex manufacturing systems in order to generate estimates of the cycle time distribution. However, building and executing such models consumes significant time and resources. The objective of this research is to determine the influence of input parameters on the cycle time distribution of a semiconductor or high-volume electronics manufacturing system. This will help decision makers implement system changes to improve the predictability of their cycle time distribution without having to run simulation models. To understand how input parameters impact the cycle time, Design of Experiments (DOE) is performed. The response variables considered are the attributes of the cycle time distribution, which include the four moments and percentiles. The input to this DOE is the output from the simulation runs. Main effects and two-way and three-way interactions of these input variables are analyzed. The implications of these results for real-world scenarios are explained, which helps manufacturers understand the effects of the interactions between the input factors on the estimates of the cycle time distribution. The shape of the cycle time distribution differs for different types of systems, and DES requires substantial resources and time to run. In an effort to generalize the results obtained in the semiconductor manufacturing analysis, a non-complex system is also considered.
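
As a small illustration of how a main effect is computed in a two-level DOE (the factor names and response values below are invented, not taken from the thesis), the effect of a factor is the mean response at its high level minus the mean response at its low level:

```python
from statistics import mean

def main_effect(runs, factor):
    """Main effect of a two-level factor.

    runs   : list of (levels, response) where levels maps factor name -> +1/-1
    factor : name of the factor of interest
    """
    high = [y for levels, y in runs if levels[factor] == +1]
    low = [y for levels, y in runs if levels[factor] == -1]
    return mean(high) - mean(low)

# Hypothetical 2^2 design on lot size (A) and machine utilization (B);
# response = mean cycle time in hours from the simulation runs
runs = [
    ({"A": -1, "B": -1}, 40.0),
    ({"A": +1, "B": -1}, 48.0),
    ({"A": -1, "B": +1}, 55.0),
    ({"A": +1, "B": +1}, 67.0),
]
print(main_effect(runs, "A"))  # (48+67)/2 - (40+55)/2 = 10.0
print(main_effect(runs, "B"))  # (55+67)/2 - (40+48)/2 = 17.0
```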
Contributors: Salvi, Tanushree Ashutosh (Author) / Bekki, Jennifer M (Thesis advisor) / Sodemann, Angela (Thesis advisor) / Shuaib, Abdelrahman (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Almost all mechanical and electro-mechanical products are assemblies of multiple parts, whether because of requirements for relative motion, the use of different materials, or shape/size differences. Thus, assembly design is the very crux of engineering design. In addition to the nominal design of an assembly, there is also tolerance design to determine allowable manufacturing variations that ensure proper functioning and assemblability. Most flexible assemblies are made by stamping sheet metal. The sheet metal stamping process involves plastically deforming sheet metal using dies. Sub-assemblies of two or more components are made with either spot-welding or riveting operations, and the various sub-assemblies are finally joined, using spot welds or rivets, to create the desired assembly. When two components are brought together for assembly, they do not align exactly; this causes gaps and irregularities in assemblies, and as multiple parts are stacked, errors accumulate further. Stamping leads to variable deformations due to residual stresses and elastic recovery from the plastic strain of the metal; this is called the ‘spring-back’ effect. When multiple components are stacked or assembled using spot welds, variations in input parameters, such as sheet metal thickness and the number and order of spot welds, cause variations in the exact shape of the final assembly in its free state. It is essential to understand the influence of these input parameters on the geometric variations of both the individual components and the assembly created using these components. Design of Experiments is used to generate a main effect study, which evaluates the influence of the input parameters on the output parameters. The scope of this study is to quantify the geometric variations of a flexible assembly and evaluate their dependence on specific input variables. The three input variables considered are the thickness of the sheet material, the number of spot welds used, and the spot-welding order used to create the assembly. To quantify the geometric variations, sprung-back nodal points along lines, circular arcs, combinations of these, and a specific profile are reduced to metrologically simulated features.
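
One simple way to reduce a set of sprung-back nodal points to a metrologically simulated line feature is a least-squares fit followed by a deviation measure; the sketch below is only an illustration under that assumption (the point data and the choice of fit are not taken from the thesis).

```python
import numpy as np

def fit_line_and_deviation(points):
    """Fit a least-squares line y = a*x + b to 2-D nodal points and report
    the maximum perpendicular deviation (a straightness-like measure)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a, b = np.polyfit(x, y, 1)                       # slope and intercept
    dev = np.abs(a * x - y + b) / np.hypot(a, 1.0)   # perpendicular distances
    return (a, b), float(dev.max())

# Hypothetical sprung-back nodes along a nominally straight edge (mm)
nodes = [(0.0, 0.00), (10.0, 0.04), (20.0, 0.11), (30.0, 0.13), (40.0, 0.22)]
print(fit_line_and_deviation(nodes))
```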
Contributors: Joshi, Abhishek (Author) / Ren, Yi (Thesis advisor) / Davidson, Joseph (Committee member) / Shah, Jami (Committee member) / Arizona State University (Publisher)
Created: 2020