Matching Items (7)
Description
As robots increasingly migrate out of factories and research laboratories and into our everyday lives, they must move and act in environments designed for humans. For this reason, the need for anthropomorphic movement is of utmost importance. The objective of this thesis is to solve the inverse kinematics problem of redundant robot arms in a way that results in anthropomorphic configurations. The swivel angle of the elbow was used as the human arm motion parameter for the robot arm to mimic. The swivel angle is defined as the rotation angle of the plane formed by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Using kinematic data recorded from human subjects during everyday tasks, the linear sensorimotor transformation model was validated and used to estimate the swivel angle from the desired end-effector position. Specifying the desired swivel angle resolves the kinematic redundancy of the robot arm. The proposed method was tested on an anthropomorphic redundant robot arm, and the computed motion profiles were compared to those of the human subjects. This thesis shows that the method computes anthropomorphic configurations for the robot arm even when the robot arm's link lengths differ from those of the human arm and its motion starts from random configurations.
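The swivel angle as defined above can be computed directly from joint positions. Below is a minimal sketch, assuming 3-D positions for the shoulder, elbow, and wrist and a reference direction for the zero angle (here gravity-down; the abstract does not specify the convention used in the thesis):

```python
import numpy as np

def swivel_angle(shoulder, elbow, wrist, ref=np.array([0.0, 0.0, -1.0])):
    """Angle of the arm plane (shoulder-elbow-wrist) about the virtual
    shoulder-wrist axis, measured from a reference direction `ref`."""
    axis = wrist - shoulder
    axis /= np.linalg.norm(axis)              # unit shoulder-wrist axis
    se = elbow - shoulder
    e_perp = se - np.dot(se, axis) * axis     # elbow offset within the rotation plane
    r_perp = ref - np.dot(ref, axis) * axis   # reference projected into the same plane
    # Signed angle between the two in-plane vectors, about `axis`
    return np.arctan2(np.dot(np.cross(r_perp, e_perp), axis),
                      np.dot(e_perp, r_perp))
```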
Contributors: Wang, Yuting (Author) / Artemiadis, Panagiotis (Thesis advisor) / Mignolet, Marc (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
All structures suffer wear and tear from impact, excessive load, fatigue, corrosion, and similar causes, in addition to inherent defects introduced during manufacturing and exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component and thereby decrease its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of individual parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have approached SHM from a systematic perspective: the development of the statistical model.

The first part of this dissertation reviews ultrasonic guided wave-based structural health monitoring problems in light of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the analysis domain are investigated by analytically searching for the conditions under which solutions are unique despite the ill-posedness, and the results are validated experimentally.

Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. The method creates time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features carrying information about wave propagation and scattering.
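The WPT procedure itself is not detailed in the abstract; as a hedged illustration of the kind of time-space data it operates on, the sketch below applies a standard frequency-wavenumber (f-k) transform to a map u[position, time] assembled from the densely spaced measurements. Each Lamb mode then appears as a ridge whose phase velocity is c_p = 2*pi*f / k:

```python
import numpy as np

def fk_spectrum(u, dt, dx):
    """u[position, time]: waveforms stacked along a dense scan line.
    Returns frequency (Hz), wavenumber (rad/m), and spectrum magnitude;
    dispersive modes show up as ridges in the (k, f) plane."""
    U = np.fft.fftshift(np.fft.fft2(u))
    f = np.fft.fftshift(np.fft.fftfreq(u.shape[1], d=dt))               # Hz, time axis
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(u.shape[0], d=dx))   # rad/m, space axis
    return f, k, np.abs(U)   # U[i, j] pairs wavenumber k[i] with frequency f[j]
```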

The next part of this dissertation presents a novel damage-localization framework using a fully automated process. To construct the statistical model for autonomous damage localization, deep-learning techniques such as restricted Boltzmann machines and deep belief networks are trained and used to interpret nonlinear far-field wave patterns.
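As a rough, generic illustration of the first technique named above (not the dissertation's trained network), here is a minimal restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) on binary feature vectors; such RBMs can be stacked and fine-tuned to form a deep belief network:

```python
import numpy as np

def train_rbm(X, n_hidden, lr=0.05, epochs=10, seed=0):
    """Minimal RBM trained with CD-1. X: (n_samples, n_visible) binary array."""
    rng = np.random.default_rng(seed)
    n_visible = X.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b, c = np.zeros(n_visible), np.zeros(n_hidden)       # visible / hidden biases
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for v0 in X:
            ph0 = sigmoid(v0 @ W + c)                        # P(h=1 | v0)
            h0 = (rng.random(n_hidden) < ph0).astype(float)  # sample hidden layer
            pv1 = sigmoid(h0 @ W.T + b)                      # reconstruction
            ph1 = sigmoid(pv1 @ W + c)
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))  # CD-1 update
            b += lr * (v0 - pv1)
            c += lr * (ph0 - ph1)
    return W, b, c
```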

Next, a novel bridge scour estimation approach that combines the advantages of both empirical and data-driven models is developed. Two field datasets from the literature are used, and a Support Vector Machine (SVM), a machine-learning algorithm, fuses the field data samples and classifies them according to the underlying physical phenomena. The Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) is then applied to the model-performance objective functions to search for Pareto-optimal fronts.
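A hedged sketch of the SVM step using scikit-learn; the feature names, values, and class labels below are placeholders, since the actual field variables come from the two literature datasets:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-sample features: flow velocity, flow depth, pier width,
# median grain size; hypothetical labels: 0 = clear-water, 1 = live-bed scour.
X = np.array([[1.2, 3.0, 1.5, 0.8],
              [0.6, 1.8, 1.0, 1.2],
              [2.1, 4.5, 2.0, 0.5],
              [0.4, 1.2, 0.8, 1.5]])
y = np.array([1, 0, 1, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)                                  # fuse and classify field samples
print(clf.predict([[1.0, 2.5, 1.2, 0.9]]))     # predicted scour regime
```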
Contributors: Kim, Inho (Author) / Chattopadhyay, Aditi (Thesis advisor) / Jiang, Hanqing (Committee member) / Liu, Yongming (Committee member) / Mignolet, Marc (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
NGExtract 2 is a complete transistor (MOSFET) parameter extraction solution based on the original program NGExtract, written by Rahul Shringarpure in February 2007. NGExtract 2 is written in Java and built around the circuit simulator NGSpice. The goal of the program is to produce accurate transistor models from real-world transistor data. The program contains numerous improvements over the original:
• Completely rewritten with performance and usability in mind
• Cross-Platform vs. Linux Only
• Simple installation procedure vs. compilation and manual library configuration
• Self-contained, single file runtime
• Particle Swarm Optimization routine
NGExtract 2 works by plotting the Ids vs. Vds and Ids vs. Vgs curves of a simulation model alongside the measured, real-world data. The user can adjust model parameters and re-simulate to try to match the curves. The included Particle Swarm Optimization routine automates this process, iteratively improving a candidate solution by measuring its sum-squared error against the real-world data the user has provided.
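A minimal, generic particle swarm loop of the kind described (a sketch, not NGExtract 2's actual Java routine): `cost` would wrap an NGSpice simulation and return the sum-squared error between the simulated and measured I-V curves, with `lo` and `hi` bounding the model parameters:

```python
import numpy as np

def pso(cost, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize cost(params) over the box [lo, hi] with a basic PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))      # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()                                     # per-particle best
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pcost)]                      # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([cost(p) for p in x])
        better = fx < pcost
        pbest[better], pcost[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pcost)]
    return gbest, pcost.min()
```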
Contributors: Vetrano, Michael Thomas (Author) / Allee, David (Thesis director) / Gorur, Ravi (Committee member) / Bakkaloglu, Bertan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description
The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal.
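To make the correspondence concrete, here is a small sketch (with an assumed three-state transition matrix) comparing many independently transitioning agents to the discrete-time mean-field model x_{k+1} = P x_k, where column i of P holds the transition probabilities out of state i:

```python
import numpy as np

# Assumed column-stochastic matrix: P[j, i] = P(next state = j | current = i)
P = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.3],
              [0.0, 0.1, 0.7]])

rng = np.random.default_rng(0)
n_agents, steps = 1000, 50
states = np.zeros(n_agents, dtype=int)   # every agent starts in state 0
x = np.array([1.0, 0.0, 0.0])            # mean-field density over states

for _ in range(steps):
    states = np.array([rng.choice(3, p=P[:, s]) for s in states])  # agent level
    x = P @ x                                                      # mean-field level

empirical = np.bincount(states, minlength=3) / n_agents
print(empirical, x)  # the two should agree closely for large n_agents
```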

This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed: the design of the transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, convergence of the multi-agent distribution to the designed equilibrium does not imply convergence of the individual agents to fixed states. To prevent the agents from continuing to transition between states once the target distribution is reached, and thus potentially wasting energy, the second problem addressed is the construction of feedback control laws that stop agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
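The abstract does not spell out the dissertation's stabilization construction; as one standard point of reference for the first problem, the Metropolis-Hastings rule below designs transition probabilities over a state graph (adjacency matrix A) that make a prescribed positive target distribution pi invariant:

```python
import numpy as np

def metropolis_transitions(A, pi):
    """Row-stochastic P with P[i, j] = P(i -> j), supported on the edges of A,
    whose stationary distribution is the target pi (Metropolis-Hastings)."""
    n = len(pi)
    deg = A.sum(axis=1)                    # number of neighbors of each state
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                # Uniform proposal over neighbors, then accept/reject
                P[i, j] = (1.0 / deg[i]) * min(1.0, pi[j] * deg[i] / (pi[i] * deg[j]))
        P[i, i] = 1.0 - P[i].sum()         # remaining mass: stay in place
    return P
```

Detailed balance, pi[i] * P[i, j] = pi[j] * P[j, i], then guarantees that pi is invariant under P.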
Contributors: Biswal, Shiba (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios (Committee member) / Lanchier, Nicolas (Committee member) / Mignolet, Marc (Committee member) / Peet, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The focus of this dissertation is first on understanding the difficulties involved in constructing reduced order models of structures that exhibit strong nonlinearity, i.e., strongly nonlinear events such as snap-through, buckling (local or global), mode switching, and symmetry breaking. Next, based on this understanding, the goal is to modify/extend the current Nonlinear Reduced Order Modeling (NLROM) methodology, including the basis selection and/or identification methodology, to obtain reliable reduced order models of these structures. Toward these goals, the work carried out addressed more specifically the following issues:

i) optimization of the basis to capture at best the response in the smallest number of modes,

ii) improved identification of the reduced order model stiffness coefficients,

iii) detection of strongly nonlinear events using NLROM.

For the first issue, an approach was proposed to rotate a limited number of linear modes so that they become more dominant in the response of the structure. This rotation was achieved through a proper orthogonal decomposition of the projection of a series of representative nonlinear displacements onto these linear modes. The rotation does not expand the modal space but renders that part of the basis more efficient, makes the identification of stiffness coefficients more reliable, and makes the selection of dual modes more compact. In fact, a separate approach was also proposed for an independent optimization of the duals. Regarding the second issue, two approaches for tuning the stiffness coefficients were proposed to improve the identification of a limited set of critical coefficients from independent response data of the structure. Both approaches led to a significant improvement of the static prediction for the clamped-clamped curved beam model. Extensive validations of the NLROMs based on these novel approaches were carried out through comparisons with full finite element response data. The third issue, the detection of nonlinear events, was addressed by building connections between the eigenvalues of the finite element (Nastran) and NLROM tangent stiffness matrices and the occurrence of the 'events'; this analysis was further extended to assess the accuracy with which the NLROM captures the full finite element behavior after the event has occurred.
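A minimal sketch of the basis-rotation step for the first issue, assuming Phi (n_dof x m) holds the selected linear modes and U (n_dof x n_snap) holds representative nonlinear displacement snapshots: a POD (via SVD) of the snapshots' modal coordinates yields a rotation that concentrates the projected response in the leading rotated modes while spanning the same subspace:

```python
import numpy as np

def rotate_basis(Phi, U):
    """POD-based rotation of a linear mode basis Phi using snapshots U."""
    # Modal coordinates of the snapshots: least-squares projection on Phi
    Q = np.linalg.lstsq(Phi, U, rcond=None)[0]       # shape (m, n_snap)
    # POD of the projected response; columns of L order the dominant directions
    L, S, _ = np.linalg.svd(Q, full_matrices=False)
    return Phi @ L   # rotated modes: same span, response concentrated up front
```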
Contributors: Lin, Jinshan (Author) / Mignolet, Marc (Thesis advisor) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Spottswood, Stephen (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Customers in the modern world are accustomed to immediate, simple access to an immense amount of information, and they demand this immediacy from all businesses, especially in the restaurant industry. Now more than ever, restaurants rely on third-party delivery services such as UberEATS, Postmates, and GrubHub to satiate the appetite of their delivery market, and while this may seem like the natural progression, not all restaurant owners are comfortable moving in this direction. Pain points range from not wanting a third party to represent their business, to the lack of supervision over food in transit, to the time it takes to navigate the delivery landscape, to the fact that some food just doesn’t “travel” well. In addition, food delivery services can put increased stress on a kitchen and dig into the bottom line of an already slim restaurant margin. Simply put, customer reliance on these applications puts apprehensive restaurant owners at a competitive disadvantage.

Our solution is simple: we want business owners to be able to take advantage of the huge market provided by third-party delivery services without the fear of compromising their brand. At DLVR Consulting, we listen to the specific pain points of a customer and alleviate them through solutions developed by our in-house food, restaurant, and branding experts. Whether it is an entirely new “delivery” brand, menu curation, or a payment processing service, we give the customer exactly what they need to feel comfortable using third-party delivery applications.

In this plan, we first take a deep dive into the problem and opportunity identified through both third-party research and first-hand interviews with successful restaurant owners and operators. After exploring the problem, we propose our solution, identify whom we will target with it, and explain what makes it unique and sellable. From there we explore the execution of our ideas, including the sales and marketing plans that will work in conjunction with our go-to-market strategy. We lay out the key milestones and metrics we hope to meet in the coming year, as well as the team that will take DLVR from a plan to an implemented business. We present our three-year financial forecast, broken down into monthly revenue, direct costs, and expenses. We finish with our required funding and how we will attempt to obtain it.
Contributors: Clancy, Kevin (Co-author) / Sebold, Brent (Thesis director) / Clancy, Keith (Committee member) / Computer Science and Engineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Low-level optimization is the process of handwriting key parts of an application in assembly code that outperforms what can be generated from a higher-level language. In performance-intensive applications, this is key to ensuring efficient code. It is generally taught through on-the-job training, but knowledge of it broadens college students’ skill sets and makes them more desirable employees. I have created material for a course teaching this kind of low-level optimization with assembly code. I focus specifically on the x86 architecture, as it is one of the most widely used computer architectures. The course contains a series of lecture videos, live coding videos, and structured programming assignments that support the learning objectives. The material is presented in an entirely self-paced way, so it serves as remote learning material and can easily be added as supplemental material to an existing course.
Contributors: Abraham, Jacob (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05