Matching Items (255)

Description

This thesis contends that if the designer of a non-biological machine (android) can establish that the machine exhibits certain specified behaviors or characteristics, then there is no principled reason to deny that the machine can be considered a legal person. The thesis also states that, given a related but not necessarily identical set of characteristics, there is no principled reason to deny that the non-biological machine can make a claim to a level of moral personhood. The purpose of my analysis is to delineate some of the specified behaviors required for each of these conditions, so as to provide guidance and understanding to designers seeking to establish criteria for the creation of such machines. Implicit in the stated thesis are assumptions concerning what is meant by a non-biological machine. I use analytic functionalism as a mechanism to establish a framework within which to operate. In order to develop this framework, it is necessary to analyze what currently constitutes the attributes of a legal person and to likewise examine the roots of the claim to moral personhood. This analysis consists of a treatment of the concept of legal personhood starting with the Greek and Roman views and tracing the line of development through the modern era. The examination then explores, at a more abstract level, what it means to be a person. Next, I examine the law's role as a normative system, placing it within the context of the previous discussions. Then, criteria such as autonomy and intentionality are discussed in detail and related to the overall analysis of the thesis. Following this, moral personhood is examined using the animal rights movement of the last thirty years as an argument by analogy to the question posed by the thesis. Finally, all of the above concepts are combined in a way that provides a basis for analyzing and testing future assertions that a non-biological entity has a plausible claim to legal or moral personhood. If such an entity exhibits the type of intentionality and autonomy which humans view as the foundation of practical reason, in combination with other indicia of sentience described by "folk psychology", analytic functionalism suggests that there is no principled reason to deny the android's claim to rights.
Contributors: Calverley, David J (Author) / Armendt, Brad (Thesis advisor) / McGregor, Joan (Committee member) / Askland, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Cyber-Physical Systems (CPSs) comprise computational systems that interact with the physical world to perform sensing, communication, computation, and actuation. Common examples include Body Area Networks (BANs), Autonomous Vehicles (AVs), and power distribution systems. The close coupling between the cyber and physical worlds in a CPS manifests in two types of interactions between computing systems and the physical world: intentional and unintentional. Unintentional interactions result from the physical characteristics of the computing systems and often cause harm to the physical world; if the computing nodes are close to each other, these interactions may overlap, increasing the chances of causing a safety hazard. Similarly, due to the mobile nature of computing nodes in a CPS, both planned and unplanned interactions with the physical world occur. These interactions represent the behavior of a computing node while it is following a planned path and during faulty operations. Both of these interactions change over time due to the dynamics (motion) of the computing node and may overlap, thereby causing harm to the physical world. The lack of proper modeling and analysis frameworks for these systems forces system designers to use ad hoc techniques, further increasing design and development time. This thesis addresses these problems by taking a holistic approach to modeling the computational, physical, and Cyber-Physical Interaction (CPI) aspects of a CPS and proposes modeling constructs for them. These constructs are analyzed using a safety analysis algorithm developed as part of the thesis. The algorithm computes the intersection of CPIs for both mobile and static computing nodes and determines the safety of the physical system. A framework is developed by extending AADL to support these modeling constructs, and the safety analysis algorithm is implemented as an OSATE plug-in. The applicability of the proposed approach is demonstrated by considering the safety of human tissue during the operation of a BAN and the safety of passengers traveling in an autonomous vehicle.
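To make the intersection computation concrete, here is a minimal Python sketch of the kind of overlap check such a safety analysis performs, assuming each CPI is abstracted as a circular region around a node; the `Node` fields, radii, and BAN sensor names are illustrative assumptions, not the thesis's actual AADL/OSATE constructs.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    """A computing node with a circular cyber-physical interaction (CPI) region."""
    name: str
    x: float           # position (m)
    y: float
    cpi_radius: float  # radius of the physical region the node affects (m)

def cpis_overlap(a: Node, b: Node) -> bool:
    """Two CPI regions intersect when the distance between node centers
    is less than the sum of their radii."""
    return math.hypot(a.x - b.x, a.y - b.y) < a.cpi_radius + b.cpi_radius

def safety_hazards(nodes):
    """Return every pair of nodes whose CPIs overlap (a potential hazard)."""
    return [(a.name, b.name)
            for i, a in enumerate(nodes)
            for b in nodes[i + 1:]
            if cpis_overlap(a, b)]

# Hypothetical BAN sensors placed on human tissue:
sensors = [Node("ecg", 0.0, 0.0, 0.05), Node("eeg", 0.07, 0.0, 0.05)]
print(safety_hazards(sensors))  # [('ecg', 'eeg')] -- the regions overlap
```

For a mobile node, the same check would be re-evaluated along the planned path, with the position fields updated at each time step.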
Contributors: Kandula, Sailesh Umamaheswara (Author) / Gupta, Sandeep (Thesis advisor) / Lee, Yann Hang (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

The microelectronics industry is continuously moving toward smaller and smaller devices and reduced form factors, resulting in new challenges. The reduction in device and interconnect solder bump sizes has led to increased current density in these small solder joints. The higher level of electromigration occurring due to increased current density is of great concern, as it affects the reliability of entire microelectronic systems. This paper reviews electromigration in Pb-free solders, focusing specifically on Sn-0.7 wt.% Cu solder joints. The effects of texture, grain orientation, and grain-boundary misorientation angle on electromigration and intermetallic compound (IMC) formation are studied through EBSD analysis performed on actual C4 bumps.
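As a back-of-the-envelope illustration of why shrinking bumps raise current density (j = I / A for a circular cross-section), the Python sketch below uses hypothetical current and bump-diameter values, not measurements from this work.

```python
import math

def current_density(current_amps: float, bump_diameter_m: float) -> float:
    """Current density j = I / A through a circular solder bump cross-section."""
    area = math.pi * (bump_diameter_m / 2) ** 2
    return current_amps / area

# The same 0.2 A through a 100 um bump vs. a 50 um bump (illustrative values):
j_large = current_density(0.2, 100e-6)
j_small = current_density(0.2, 50e-6)
print(f"{j_large:.2e} A/m^2 -> {j_small:.2e} A/m^2 ({j_small / j_large:.0f}x)")
# Halving the bump diameter quadruples the current density.
```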
Contributors: Lara, Leticia (Author) / Tasooji, Amaneh (Thesis advisor) / Lee, Kyuoh (Committee member) / Krause, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Visual object recognition has achieved great success with advancements in deep learning technologies. Notably, existing recognition models have reached human-level performance on many recognition tasks. However, these models are data-hungry, and their performance is constrained by the amount of training data. Inspired by the human ability to recognize object categories based on textual descriptions of objects and previous visual knowledge, the research community has extensively pursued the area of zero-shot learning. In this area of research, machine vision models are trained to recognize object categories that are not observed during the training process. Zero-shot learning models leverage textual information to transfer visual knowledge from seen object categories in order to recognize unseen object categories.

Generative models have recently gained popularity as they synthesize unseen visual features and convert zero-shot learning into a classical supervised learning problem. These generative models are trained using seen classes and are expected to implicitly transfer the knowledge from seen to unseen classes. However, they tend to overfit to the seen classes, which leads to substandard performance in generalized zero-shot learning. To address this concern, this dissertation proposes a novel generative model that leverages the semantic relationship between seen and unseen categories and explicitly performs knowledge transfer from seen categories to unseen categories. Experiments were conducted on several benchmark datasets to demonstrate the efficacy of the proposed model for both zero-shot learning and generalized zero-shot learning. The dissertation also provides a unique Student-Teacher based generative model for zero-shot learning and concludes with future research directions in this area.
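A minimal PyTorch sketch of the generative idea described above, assuming a simple conditional feature generator; the layer sizes, semantic dimension, and feature dimension are placeholders, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Maps a class-semantic vector plus Gaussian noise to a synthetic
    visual feature, so unseen-class features can be sampled at test time."""
    def __init__(self, sem_dim=85, noise_dim=32, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
            nn.ReLU(),  # CNN features (e.g., ResNet activations) are non-negative
        )

    def forward(self, semantics, noise):
        return self.net(torch.cat([semantics, noise], dim=1))

gen = FeatureGenerator()
unseen_sem = torch.rand(64, 85)    # semantic vectors of a hypothetical unseen class
z = torch.randn(64, 32)
fake_feats = gen(unseen_sem, z)    # synthetic visual features for that class
# Labeling these synthetic features with the unseen class turns zero-shot
# learning into ordinary supervised classification.
```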
Contributors: Vyas, Maunil Rohitbhai (Author) / Panchanathan, Sethuraman (Thesis advisor) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Feature embeddings differ from raw features in that the former obey certain properties, such as a notion of similarity/dissimilarity, in their embedding space. word2vec is a preeminent example in this direction, where similarity in the embedding space is measured by cosine similarity. Such language embedding models have seen numerous applications in both the language and vision communities, as they capture the information in their modality (the English language) efficiently. Inspired by these language models, this work focuses on learning embedding spaces for two visual computing tasks: 1. Image Hashing and 2. Zero-Shot Learning. The training set was used to learn embedding spaces over which similarity/dissimilarity is measured using several distance metrics, such as Hamming, Euclidean, and cosine distances. While the above-mentioned language models learn generic word embeddings, in this work task-specific embeddings were learned that can be used for image retrieval and classification separately.
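For concreteness, a small Python sketch of two of the distance metrics named above; the vectors are toy values, not learned embeddings.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """cos(u, v) = <u, v> / (||u|| * ||v||), the word2vec-style similarity."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two binary codes."""
    return int(np.sum(a != b))

u, v = np.array([1.0, 2.0, 0.5]), np.array([0.9, 1.8, 0.7])
print(cosine_similarity(u, v))   # close to 1 -> similar embeddings
print(hamming_distance(np.array([1, 0, 1, 1]), np.array([1, 1, 1, 0])))  # 2
```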

Image hashing is the task of mapping images to binary codes such that some notion of user-defined similarity is preserved. The first part of this work focuses on designing a new framework that uses the hash-tags associated with web images to learn the binary codes. Such codes can be used in several applications, such as image retrieval and image classification. Further, this framework requires no labelled data, making it very inexpensive. Results show that the proposed approach surpasses state-of-the-art approaches by a significant margin.
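The following Python sketch shows the general shape of hashing-based retrieval, using random sign projections as a stand-in for the learned hash function; the thesis's actual framework learns the codes from hash-tags, which this illustrative baseline does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_codes(features, projection):
    """Hash real-valued image features to binary codes by thresholding
    projections at zero (an LSH-style stand-in for a learned hash function)."""
    return (features @ projection > 0).astype(np.uint8)

def retrieve(query_code, db_codes, k=5):
    """Indices of the k database codes nearest to the query in Hamming distance."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists)[:k]

feats = rng.standard_normal((1000, 512))   # stand-in for CNN image features
W = rng.standard_normal((512, 16))         # 16-bit codes
codes = binary_codes(feats, W)
query = binary_codes(feats[0:1], W)[0]
print(retrieve(query, codes, k=3))         # index 0 ranks first (distance 0)
```

Because comparison happens on short binary codes rather than 512-dimensional floats, retrieval over millions of images stays fast and memory-light.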

Zero-shot classification is the task of classifying a test sample into a new class that was not seen during training. This is possible by establishing a relationship between the training and testing classes using auxiliary information. In the second part of this thesis, a framework is designed that trains using hand-crafted attribute vectors and word vectors but does not require the expensive attribute vectors at test time. More specifically, an intermediate space is learned between the word vector space and the image feature space using the hand-crafted attribute vectors. Preliminary results on two zero-shot classification datasets show that this is a promising direction to explore.
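One simple way to realize such an intermediate mapping is closed-form ridge regression from word vectors to the attribute-derived space; this is an assumption for illustration, not the framework's actual training procedure, and all dimensions below are placeholders.

```python
import numpy as np

def fit_linear_map(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Closed-form ridge regression W = (X^T X + lam I)^{-1} X^T Y, mapping
    word vectors X to targets Y in the intermediate (attribute-derived) space."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
word_vecs = rng.standard_normal((40, 300))  # 40 training classes, word2vec-sized
attr_space = rng.standard_normal((40, 85))  # hand-crafted attribute targets
W = fit_linear_map(word_vecs, attr_space)

# At test time only the (cheap) word vector of a new class is needed:
unseen_embedding = rng.standard_normal(300) @ W
```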
Contributors: Gattupalli, Jaya Vijetha (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Fraud is defined as the use of deception for illegal gain by hiding the true nature of the activity. Organizations lose around $3.7 trillion in revenue worldwide due to financial crimes and fraud, which can significantly affect all levels of society. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction comes with a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants utilize various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Many proposed solutions focus mostly on fraud detection accuracy and ignore financial considerations; the highly effective manual review process is also overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes optimal collaboration between machine learning models and human expertise under industrial constraints and (b) is cost- and profit-sensitive. I suggest directions on how to characterize fraudulent behavior and assess the risk of a transaction. I show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM is able to work with many supervised learners and obtain convincing results, utilizing probability outputs directly from the trained model itself can pose problems, especially in deep learning, as the softmax output is not a true uncertainty measure. This phenomenon, together with the wide and rapid adoption of deep learning by practitioners, has brought unintended consequences in many situations, as in the infamous case of Google Photos' racist image recognition algorithm; this has necessitated the utilization of quantified uncertainty for each prediction. There have been recent efforts toward quantifying uncertainty in conventional deep learning methods (e.g., dropout as Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present a mixed-integer programming framework for selective classification, called MIPSC, that investigates and combines model uncertainty and the predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC) and focus on a critical real-world problem, online fraud management, showing that my approach significantly outperforms industry-standard methods for online fraud management in real-world settings.
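A minimal sketch of the selective-classification idea that PONRM and MIPSC build on: automatically decide confident cases and abstain (routing to human fraud analysts) on uncertain ones. The thresholds and rejection band here are illustrative assumptions; the actual frameworks learn their classification and rejection regions via optimization.

```python
import numpy as np

def route_transactions(fraud_probs, uncertainties,
                       reject_band=(0.2, 0.8), max_uncertainty=0.15):
    """Selective classification: auto-decide confident transactions and
    send the rest to manual review (the 'reject' option)."""
    decisions = []
    for p, u in zip(fraud_probs, uncertainties):
        if u > max_uncertainty or reject_band[0] < p < reject_band[1]:
            decisions.append("manual_review")   # the model abstains
        elif p >= reject_band[1]:
            decisions.append("block")
        else:
            decisions.append("approve")
    return decisions

probs = np.array([0.05, 0.55, 0.95, 0.30])   # predicted fraud probabilities
uncert = np.array([0.02, 0.05, 0.01, 0.30])  # per-prediction uncertainty
print(route_transactions(probs, uncert))
# ['approve', 'manual_review', 'block', 'manual_review']
```

In a cost-sensitive setting, the band and uncertainty cutoff would be chosen to trade review cost against the expected loss of a wrong automatic decision.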
Contributors: Yildirim, Mehmet Yigit (Author) / Davulcu, Hasan (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Huang, Dijiang (Committee member) / Hsiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

One persistent problem in Massive Open Online Courses (MOOCs) is student dropout. Predicting student dropout from MOOC courses can identify the factors responsible for such an event and can enable intervention before it occurs to increase student success. There are different approaches and various features available for predicting student dropout in MOOC courses. In this research, data derived from the self-paced math course 'College Algebra and Problem Solving', offered by Arizona State University (ASU) on the MOOC platform Open edX from 2016 to 2020, was considered. This research aims to predict student dropout from a MOOC course given a set of features engineered from the students' learning in a day. The machine learning model used is a Random Forest (RF), and the model is evaluated using validation metrics such as accuracy, precision, recall, F1-score, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve. The average rate of student learning progress was found to have more impact than other features. The model developed can predict the dropout or continuation of students on any given day in the MOOC course with an accuracy of 87.5%, an AUC of 94.5%, a precision of 88%, a recall of 87.5%, and an F1-score of 87.5%. The contributing features and interactions in the model's predictions were explained using Shapley values. The features engineered in this research are predictive of student dropout and could be used for similar courses to predict dropout. This model can also help in making interventions at a critical time to help students succeed in this MOOC course.
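A hedged sketch of this modeling-and-evaluation pipeline using scikit-learn; the synthetic features and labels below merely stand in for the per-day engineered features from the course logs, so the printed metrics will not match the reported ones.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Stand-in for per-day engineered features (e.g., progress rate, time on task):
X = rng.standard_normal((2000, 8))
y = (X[:, 0] + 0.5 * rng.standard_normal(2000) < 0).astype(int)  # 1 = dropout

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print(f"accuracy  {accuracy_score(y_te, pred):.3f}")
print(f"precision {precision_score(y_te, pred):.3f}")
print(f"recall    {recall_score(y_te, pred):.3f}")
print(f"F1        {f1_score(y_te, pred):.3f}")
print(f"AUC       {roc_auc_score(y_te, proba):.3f}")
```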
Contributors: Dominic Ravichandran, Sheran Dass (Author) / Gary, Kevin (Thesis advisor) / Bansal, Ajay (Committee member) / Cunningham, James (Committee member) / Sannier, Adrian (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Trajectory forecasting is used in many fields, such as vehicle trajectory prediction, stock market price prediction, and human motion prediction. Robots having the capability to reason about human behavior is also an important aspect of human-robot interaction. In trajectory prediction with regard to human motion, the implicit learning and reproduction of human behavior is the major challenge. This work compares some of the recent advances that take a phenomenological approach to trajectory prediction.

This work mainly targets generating future events or trajectories based on previous data observed across many time intervals. In particular, it presents and compares machine learning models that generate various human handwriting trajectories. Although the behavior of every individual is unique, it is still possible to broadly generalize and learn the underlying human behavior from current observations to predict future writing trajectories. This enables the machine or robot to generate future handwriting trajectories given an initial trajectory from the individual, thus helping the person fill in the rest of the letter or curve. This work tests and compares the performance of Conditional Variational Autoencoder and Sinusoidal Representation Network models on handwriting trajectory prediction and reconstruction.
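A minimal PyTorch sketch of one Sinusoidal Representation Network (SIREN) layer of the kind compared in this work, mapping time stamps to pen positions; the network width, omega_0, and the time-to-(x, y) setup are illustrative assumptions, not the thesis's exact configuration.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: y = sin(omega_0 * (Wx + b)). The sine activation
    lets the network fit smooth, curve-like signals such as pen strokes."""
    def __init__(self, in_dim, out_dim, omega_0=30.0, first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():  # initialization scheme from the SIREN paper
            bound = 1 / in_dim if first else math.sqrt(6 / in_dim) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# t -> (x, y): map time stamps to pen positions along a handwriting curve.
siren = nn.Sequential(SineLayer(1, 64, first=True),
                      SineLayer(64, 64),
                      nn.Linear(64, 2))
t = torch.linspace(0, 1, 100).unsqueeze(1)
xy = siren(t)  # predicted trajectory points, trained with MSE against the stroke
```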
Contributors: Kota, Venkata Anil (Author) / Ben Amor, Hani (Thesis advisor) / Venkateswara, Hemanth Kumar Demakethepalli (Committee member) / Redkar, Sangram (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Chemical Reaction Networks (CRNs) provide a useful framework for modeling and controlling large numbers of agents that undergo stochastic transitions between a set of states in a manner similar to chemical compounds. By utilizing CRN models to design agent control policies, some of the computational challenges in the coordination of multi-agent systems can be overcome. In this thesis, a CRN model is developed that defines agent control policies for a multi-agent construction task. The use of surface CRNs to overcome the tradeoff between speed and accuracy of task performance is explained. The computational difficulties involved in coordinating multiple agents to complete collective construction tasks are then discussed. A method for stochastic task and motion planning (TAMP) is proposed to explain how a TAMP solver can be applied with CRNs to coordinate multiple agents. This work defines a collective construction scenario in which a group of non-communicating agents must rearrange blocks on a discrete domain with obstacles into a predefined target distribution. Four different construction tasks are considered, with 10, 20, 30, or 40 blocks, and a simulation of each scenario with 2, 4, 6, or 8 agents is performed. As the number of blocks increases, the construction problem becomes more complex, and a given population of agents requires more time to complete the task. Populations of fewer than 8 agents are unable to solve the 30-block and 40-block problems in the allotted simulation time, suggesting an inflection point for computational feasibility beyond which the solution times for fewer than 8 agents would be expected to increase significantly. For a group of 8 agents, the time to complete the task generally increases with the number of blocks, except for the 30-block problem, whose specifications make the task slightly easier for the agents to complete than the 20-block problem. For the 10-block and 20-block problems, the time to complete the task decreases as the number of agents increases; however, the marginal effect of each additional two agents on this time decreases. This can be explained through the pigeonhole principle: since there are a finite number of states, when the number of agents is greater than the number of available spaces, deadlocks start to occur, and the overall solution time is expected to tend to infinity.
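A Gillespie-style Python sketch of how a CRN with agent states as its "species" can be simulated stochastically; the two states and rate constants below are hypothetical stand-ins, not the thesis's actual construction CRN.

```python
import random

def simulate_crn(rates, counts, t_end=10.0, seed=0):
    """Stochastic simulation (Gillespie SSA) of a CRN whose species are agent
    states. `rates` maps (from_state, to_state) -> rate constant; `counts`
    holds the number of agents currently in each state."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # Propensity of each unimolecular transition = rate * population.
        props = {r: k * counts[r[0]] for r, k in rates.items() if counts[r[0]] > 0}
        total = sum(props.values())
        if total == 0:
            break
        t += rng.expovariate(total)    # exponentially distributed waiting time
        pick = rng.uniform(0, total)   # choose which transition fires
        for (src, dst), p in props.items():
            pick -= p
            if pick <= 0:
                counts[src] -= 1
                counts[dst] += 1
                break
    return counts

# 8 hypothetical agents that switch between searching for and carrying blocks:
print(simulate_crn({("search", "carry"): 0.5, ("carry", "search"): 0.3},
                   {"search": 8, "carry": 0}))
```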
Contributors: Kamojjhala, Pranav (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios E (Thesis advisor) / Pavlic, Theodore P (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

A Graph Neural Network (GNN) is a type of neural network architecture that operates on data consisting of objects and their relationships, which are represented by a graph. Within the graph, nodes represent objects and edges represent associations between those objects. The representation of relationships and correlations between data is unique to graph structures. GNNs exploit this feature of graphs by augmenting both forms of data, individual and relational, and are designed to allow communication and sharing of data within each neural network layer. These benefits give each node an enriched perspective, or a better understanding, of its neighbouring nodes and its connections to those nodes. The ability of GNNs to efficiently process high-dimensional node data and multi-faceted relationships among nodes gives them an advantage over neural network architectures such as Convolutional Neural Networks (CNNs) that do not implicitly handle relational data. These characteristics make GNN models suitable for solving problems in which the correspondences among input data are needed to produce an accurate and precise representation of the data. GNN frameworks may significantly improve existing communication and control techniques for multi-agent tasks by implicitly representing not only information associated with the individual agents, such as agent position, velocity, and camera data, but also their relationships with one another, such as distances between the agents and their ability to communicate. One such task is a multi-agent navigation problem in which the agents must coordinate with one another in a decentralized manner, using proximity sensors only, to navigate safely to their intended goal positions in the environment without collisions or deadlocks. The contribution of this thesis is the design of an end-to-end decentralized control scheme for multi-agent navigation that utilizes GNNs to prevent inter-agent collisions and deadlocks. The contributions consist of the development, simulation, and evaluation of the performance of an advantage actor-critic (A2C) reinforcement learning algorithm that employs actor and critic networks, which simultaneously approximate the policy function and value function, respectively. These networks are implemented using GNN frameworks for navigation by groups of 3, 5, 10, and 15 agents in simulated two-dimensional environments. It is observed that in 40% to 50% of the simulation trials, 70% to 80% of the agents reach their goal positions without colliding with other agents or becoming trapped in deadlocks. The model is also compared to a random-run simulation, in which actions are chosen randomly for the agents, and the model is observed to perform notably well for smaller groups of agents.
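A minimal PyTorch sketch of one message-passing layer of the kind a GNN-based actor or critic might use, assuming mean aggregation over a proximity-based adjacency matrix; the names, dimensions, and random adjacency are illustrative, not the thesis's implementation.

```python
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """One message-passing layer: each agent's feature vector is updated
    from its own state plus the mean of its neighbors' states, so relational
    information (e.g., proximity) flows between agents."""
    def __init__(self, dim):
        super().__init__()
        self.self_lin = nn.Linear(dim, dim)
        self.neigh_lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # adj: (N, N) 0/1 adjacency from proximity sensing; row-normalize it.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = (adj @ x) / deg
        return torch.relu(self.self_lin(x) + self.neigh_lin(neigh_mean))

n_agents, dim = 5, 16
x = torch.randn(n_agents, dim)   # per-agent observations (position, velocity, ...)
adj = (torch.rand(n_agents, n_agents) > 0.5).float().fill_diagonal_(0)
h = GraphLayer(dim)(x, adj)      # relationally enriched agent embeddings
# An actor head on h would output each agent's action; a critic head, its value.
```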
Contributors: Ayalasomayajula, Manaswini (Author) / Berman, Spring (Thesis advisor) / Mian, Sami (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2022