Matching Items (92)

Description
Humans have an inherent capability of performing highly dexterous and skillful tasks with their arms, which involves maintaining posture, executing movements, and interacting with the environment. The latter requires them to control the dynamic characteristics of the upper-limb musculoskeletal system. Inertia, damping, and stiffness, which together constitute mechanical impedance, give a strong representation of these characteristics. Many previous studies have shown that arm posture is a dominant factor in determining the end-point impedance in the horizontal (transverse) plane. The objective of this thesis is to characterize the end-point impedance of the human arm in three-dimensional (3D) space. It also investigates and models the control of arm impedance under increasing levels of muscle co-contraction. The characterization is done through experimental trials in which human subjects maintained arm posture while being perturbed by a robot arm. The subjects were also asked to control the level of their arm muscles' co-contraction, using visual feedback of their muscles' activation, in order to investigate the effect of co-contraction on arm impedance. The results showed an anisotropic increase of arm stiffness with muscle co-contraction. This finding leads to useful conclusions about arm biomechanics and has implications for human motor control, specifically the control of arm impedance through muscle co-contraction, as well as for the EMG-based control of robots that physically interact with humans.
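
As a rough illustration of how such an impedance characterization works, the sketch below estimates inertia, damping, and stiffness from perturbation data by least squares, assuming the standard second-order model F = M*x_ddot + B*x_dot + K*x; the one-dimensional setup and all numbers are assumptions of this sketch, not the thesis's apparatus or measurements.

# Minimal sketch: estimating endpoint impedance (inertia M, damping B, stiffness K)
# from position-perturbation data via least squares. The data and sampling rate are
# hypothetical placeholders; the real characterization is done in 3D with a robot arm.
import numpy as np

def estimate_impedance(x, f, dt):
    """x: endpoint displacement from equilibrium, f: measured force, dt: sample period."""
    x_dot = np.gradient(x, dt)          # numerical velocity
    x_ddot = np.gradient(x_dot, dt)     # numerical acceleration
    A = np.column_stack([x_ddot, x_dot, x])
    (m, b, k), *_ = np.linalg.lstsq(A, f, rcond=None)
    return m, b, k

# Hypothetical example: a stiff, lightly damped endpoint under a 5 Hz, 5 mm perturbation.
dt = 0.001
t = np.arange(0, 1.0, dt)
x = 0.005 * np.sin(2 * np.pi * 5 * t)
f = 1.5 * np.gradient(np.gradient(x, dt), dt) + 20 * np.gradient(x, dt) + 400 * x
print(estimate_impedance(x, f, dt))   # approximately (1.5, 20, 400)
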
Contributors: Patel, Harshil Naresh (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring (Committee member) / Helms Tillery, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A smart home system (SHS) is a kind of information system aimed at realizing home automation. The SHS can connect with almost any kind of electronic or electric device used in a home so that these devices can be controlled and monitored centrally. Today's technology also allows home owners to control and monitor the SHS installed in their homes remotely, typically by giving the SHS network access. Although the SHS's network access brings many conveniences to home owners, it also exposes the SHS to more security threats than ever before. As a result, when designing a SHS, the security threats it might face should be given careful consideration. Security threats can be addressed properly by first understanding them and identifying the parts of the system that must be protected against them. This leads to the idea of addressing the security threats a SHS might face at the requirements engineering level. Following this idea, this thesis proposes a systematic approach to generate the security requirements specifications for the SHS. It can be viewed as the first step toward a complete SHS security requirements engineering process.
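
As a loose illustration of what a generated security requirements specification entry might contain, the sketch below records requirements as structured asset/threat/mitigation entries; the field names and sample requirements are hypothetical and are not taken from the thesis's approach.

# Illustrative sketch only: one way to record generated security requirements for a
# smart home system as structured entries (asset, threat, mitigation requirement).
# The fields and sample entries are hypothetical, not the template from the thesis.
from dataclasses import dataclass

@dataclass
class SecurityRequirement:
    req_id: str
    asset: str          # part of the SHS to protect (e.g., remote-access gateway)
    threat: str         # threat the requirement addresses
    requirement: str    # the "shall" statement placed in the specification

catalog = [
    SecurityRequirement("SR-01", "remote access gateway", "credential theft",
                        "The SHS shall lock remote access after five failed login attempts."),
    SecurityRequirement("SR-02", "device control channel", "eavesdropping",
                        "The SHS shall encrypt all traffic between the controller and devices."),
]
for r in catalog:
    print(f"{r.req_id}: {r.requirement}")
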
Contributors: Xu, Rongcao (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Ajay (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Gathering and managing software requirements, known as Requirements Engineering (RE), is a significant and fundamental step in the Software Development Life Cycle (SDLC). Any error or defect introduced during the RE step propagates to later steps of the SDLC, where resolving it is more costly than resolving a defect introduced in other steps. In order to produce better quality software, the requirements have to be free of defects. Verification and Validation (V&V) of requirements is performed to improve their quality by applying the V&V process to the Software Requirement Specification (SRS) document. V&V of software requirements focused on a specific domain helps improve quality. A large database of software requirements from software projects of different domains is created. Software requirements from commercial applications are the focus of this project; other domains (embedded, mobile, e-commerce, etc.) can be the focus of future efforts. The V&V is done to inspect the requirements and improve their quality. Inspections are done to detect defects in the requirements, and three approaches for inspection of software requirements are discussed: ad-hoc techniques, checklists, and scenario-based techniques. A more systematic domain-specific technique is presented for performing V&V of requirements.
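
To make the checklist-based inspection idea concrete, the sketch below flags common requirement defects (missing "shall" statements, ambiguous terms) in SRS statements; the checklist items and example requirements are generic illustrations, not the domain-specific technique developed in this work.

# Illustrative sketch of a checklist-based requirement inspection: each requirement
# statement is checked against simple defect indicators. The checklist below is a
# generic example, not the domain-specific checklist from the thesis.
AMBIGUOUS_TERMS = {"fast", "user-friendly", "appropriate", "flexible", "etc."}

def inspect_requirement(text: str) -> list[str]:
    findings = []
    lowered = text.lower()
    if "shall" not in lowered:
        findings.append("no 'shall' statement (verifiability)")
    for term in AMBIGUOUS_TERMS:
        if term in lowered:
            findings.append(f"ambiguous term: '{term}'")
    return findings

srs = [
    "The system shall respond to search queries within 2 seconds.",
    "The checkout page should be fast and user-friendly.",
]
for req in srs:
    print(req, "->", inspect_requirement(req) or "no defects flagged")
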
Contributors: Chughtai, Rehman (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Ajay (Committee member) / Millard, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Calculus is an important course that students need to succeed in to pursue STEM majors. This thesis focuses on the early detection of at-risk students in a calculus course so that proper intervention might help them succeed. Calculus has high failure rates, which is corroborated by data collected from Arizona State University showing that 40% of the 3,266 students whose data were used failed their calculus course. This thesis proposes to utilize educational big data to detect students at high risk of failure, whose early detection and subsequent intervention can be useful. Some existing studies similar to this thesis make use of open-scale data that are lower in data count and perform predictions on low-impact courses based on Massive Open Online Courses (MOOCs). In this thesis, an automatic detection method for academically at-risk students is developed using learning management system (LMS) activity data along with student information system (SIS) data from Arizona State University (ASU) for the course Calculus for Engineers I (MAT 265). The method detects students at risk by employing machine learning to identify key features that contribute to the success of a student. This thesis also proposes a new technique to convert button-click data into button-click sequences that can be used as inputs to classifiers. In addition, advancements in the Natural Language Processing field can be leveraged by adopting methods such as part-of-speech (POS) tagging and tools such as Facebook's fastText word embeddings to convert these button-click sequences into numeric vectors before feeding them into the classifiers. The thesis proposes two preprocessing techniques and evaluates them on three different machine learning ensembles to determine their performance across the two modalities of the class.
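
A minimal sketch of the button-click-sequence idea: each student's ordered LMS clicks become a token sequence, the tokens are embedded (fastText via gensim is used here as one possible choice), and the averaged vectors feed a classifier. The event names, data, and model sizes are hypothetical, not the MAT 265 data or the thesis's exact pipeline.

# Sketch: per-student click sequences -> token embeddings -> averaged feature vector
# -> classifier. All tokens, labels, and sizes are toy placeholders.
import numpy as np
from gensim.models import FastText
from sklearn.ensemble import RandomForestClassifier

click_logs = {            # student_id -> ordered button-click tokens
    "s1": ["open_module", "play_video", "pause_video", "submit_quiz"],
    "s2": ["open_module", "open_module", "close_tab"],
    "s3": ["play_video", "submit_quiz", "view_grade", "submit_quiz"],
}
labels = {"s1": 1, "s2": 0, "s3": 1}     # 1 = passed, 0 = at risk (toy labels)

sequences = list(click_logs.values())
emb = FastText(sentences=sequences, vector_size=16, window=3, min_count=1, epochs=20)

def student_vector(seq):
    return np.mean([emb.wv[tok] for tok in seq], axis=0)

X = np.array([student_vector(seq) for seq in click_logs.values()])
y = np.array([labels[s] for s in click_logs])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))
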
Contributors: Dileep, Akshay Kumar (Author) / Bansal, Ajay (Thesis advisor) / Cunningham, James (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
One persisting problem in Massive Open Online Courses (MOOCs) is student dropout. Predicting student dropout from MOOC courses can identify the factors responsible for such an event and can inform intervention before it occurs to increase student success in MOOCs. There are different approaches and various features available for the prediction of student dropout in MOOC courses. This research considers data derived from the self-paced math course 'College Algebra and Problem Solving' offered by Arizona State University (ASU) on the MOOC platform Open edX from 2016 to 2020. The aim is to predict student dropout from the MOOC course given a set of features engineered from students' learning in a day. The machine learning (ML) model used is Random Forest (RF), evaluated using validation metrics including accuracy, precision, recall, F1-score, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve. The average rate of student learning progress was found to have more impact than other features. The model developed can predict the dropout or continuation of students on any given day in the MOOC course with an accuracy of 87.5%, AUC of 94.5%, precision of 88%, recall of 87.5%, and F1-score of 87.5%. The contributing features and interactions in the model's predictions were explained using Shapley values. The features engineered in this research are predictive of student dropout and could be used for similar courses to predict dropout. This model can also help in making interventions at a critical time to help students succeed in this MOOC course.
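
The sketch below mirrors the modeling setup described above (a Random Forest on per-day student features, evaluated with the listed metrics) using synthetic placeholder data; the feature names and labels are illustrative assumptions, not the Open edX course data.

# Sketch: Random Forest on per-student, per-day features with the listed validation
# metrics, trained on synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(size=n),        # e.g., average rate of learning progress
    rng.poisson(5, size=n),    # e.g., problems attempted that day
    rng.uniform(0, 120, n),    # e.g., minutes active on the platform
])
y = (X[:, 0] + 0.1 * rng.normal(size=n) < 0).astype(int)   # 1 = drops out (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]
print("accuracy", accuracy_score(y_te, pred),
      "precision", precision_score(y_te, pred),
      "recall", recall_score(y_te, pred),
      "F1", f1_score(y_te, pred),
      "AUC", roc_auc_score(y_te, prob))
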
Contributors: Dominic Ravichandran, Sheran Dass (Author) / Gary, Kevin (Thesis advisor) / Bansal, Ajay (Committee member) / Cunningham, James (Committee member) / Sannier, Adrian (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Building and optimizing a design for locomotion in deformable media can be extremely costly. However, granular scaling laws make it possible to predict system velocity and mobility power consumption by testing at a smaller scale in the same environment. The validity of the granular scaling laws for arbitrarily shaped wheels and screws was evaluated in materials such as silica sand and BP-1, a lunar simulant. Different wheel geometries, including non-grousered wheels and straight and bihelically grousered wheels, were created with 3D printing and tested. Using the granular scaling laws and the empirical data from initial experiments, power and velocity were predicted for a larger-scale version and then validated experimentally on a dynamic mobility platform. Granular media exhibit high variability in material properties depending on initial environmental conditions, so particular emphasis was placed on consistency in the testing methodology. Through these experiments, the scaling laws have been validated, with defined use cases and limitations.
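
As a rough sketch of how a scaling-law prediction is applied in practice, the snippet below maps a small-wheel measurement to a larger wheel; the specific exponents used (velocity scaling with the square root of the length factor, power with mass times that square root, and drive rate scaled down by the same square root) follow one commonly quoted form of the granular scaling laws and are an assumption of this sketch, not values or results reported in the thesis.

# Sketch of scaling-law prediction: measure velocity and mobility power on a small
# wheel, then map to a larger wheel. The exponents below are an assumption of this
# sketch (one commonly quoted form of the granular scaling laws).
import math

def predict_scaled(v_small, p_small, omega_small, r, s):
    """r: length scale factor, s: mass scale factor between small and large wheels."""
    omega_large = omega_small / math.sqrt(r)   # drive the larger wheel more slowly
    v_large = math.sqrt(r) * v_small
    p_large = s * math.sqrt(r) * p_small
    return omega_large, v_large, p_large

# Hypothetical small-wheel measurement in silica sand: 0.05 m/s at 4 W, 2 rad/s drive.
print(predict_scaled(v_small=0.05, p_small=4.0, omega_small=2.0, r=2.0, s=8.0))
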
Contributors: Mcbryan, Teresa (Author) / Marvi, Hamidreza (Thesis advisor) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Chemical Reaction Networks (CRNs) provide a useful framework for modeling and controlling large numbers of agents that undergo stochastic transitions between a set of states in a manner similar to chemical compounds. By utilizing CRN models to design agent control policies, some of the computational challenges in the coordination of multi-agent systems can be overcome. In this thesis, a CRN model is developed that defines agent control policies for a multi-agent construction task. The use of surface CRNs to overcome the tradeoff between speed and accuracy of task performance is explained. The computational difficulties involved in coordinating multiple agents to complete collective construction tasks are then discussed. A method for stochastic task and motion planning (TAMP) is proposed to explain how a TAMP solver can be applied with CRNs to coordinate multiple agents. This work defines a collective construction scenario in which a group of non-communicating agents must rearrange blocks on a discrete domain with obstacles into a predefined target distribution. Four different construction tasks are considered, with 10, 20, 30, or 40 blocks, and each scenario is simulated with 2, 4, 6, or 8 agents. As the number of blocks increases, the construction problem becomes more complex, and a given population of agents requires more time to complete the task. Populations of fewer than 8 agents are unable to solve the 30-block and 40-block problems in the allotted simulation time, suggesting an inflection point for computational feasibility: beyond that point, the solution times for fewer than 8 agents would be expected to increase significantly. For a group of 8 agents, the time to complete the task generally increases as the number of blocks increases, except for the 30-block problem, whose specifications make the task slightly easier for the agents to complete than the 20-block problem. For the 10-block and 20-block problems, the time to complete the task decreases as the number of agents increases; however, the marginal effect of each additional two agents on this time decreases. This can be explained by the pigeonhole principle: since there are a finite number of states, when the number of agents is greater than the number of available spaces, deadlocks start to occur, and the overall solution time is expected to tend to infinity.
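
To illustrate how a CRN prescribes stochastic agent behavior, the sketch below runs a Gillespie-style simulation of a toy two-reaction network in which searching agents pick up free blocks and carrying agents place them; the species, reactions, and rates are invented for illustration and are not the construction CRN developed in this work.

# Gillespie-style simulation of a toy CRN: "searcher + free_block -> carrier" and
# "carrier -> searcher + placed_block". Species, reactions, and rates are illustrative.
import random

state = {"searcher": 8, "carrier": 0, "free_block": 20, "placed_block": 0}
reactions = [
    # (rate constant, reactants, products)
    (0.05, {"searcher": 1, "free_block": 1}, {"carrier": 1}),       # pick up a block
    (0.10, {"carrier": 1}, {"searcher": 1, "placed_block": 1}),     # place the block
]

def propensity(rate, reactants):
    a = rate
    for sp, n in reactants.items():
        a *= state[sp] ** n
    return a

t, t_end = 0.0, 200.0
while t < t_end:
    props = [propensity(k, re) for k, re, _ in reactions]
    total = sum(props)
    if total == 0:
        break                                       # no reaction can fire
    t += random.expovariate(total)                  # time to next reaction
    pick = random.uniform(0, total)
    idx = 0 if pick < props[0] else 1
    _, reactants, products = reactions[idx]
    for sp, n in reactants.items():
        state[sp] -= n
    for sp, n in products.items():
        state[sp] = state.get(sp, 0) + n
print(t, state)
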
Contributors: Kamojjhala, Pranav (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios E (Thesis advisor) / Pavlic, Theodore P (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
A Graph Neural Network (GNN) is a type of neural network architecture that operates on data consisting of objects and their relationships, which are represented by a graph. Within the graph, nodes represent objects and edges represent associations between those objects. The representation of relationships and correlations between data is unique to graph structures. GNNs exploit this feature of graphs by augmenting both forms of data, individual and relational, and have been designed to allow for communication and sharing of data within each neural network layer. These benefits allow each node to have an enriched perspective, or a better understanding, of its neighbouring nodes and its connections to those nodes. The ability of GNNs to efficiently process high-dimensional node data and multi-faceted relationships among nodes gives them advantages over neural network architectures such as Convolutional Neural Networks (CNNs) that do not implicitly handle relational data. These quintessential characteristics of GNN models make them suitable for solving problems in which correspondences among input data are needed to produce an accurate and precise representation of the data. GNN frameworks may significantly improve existing communication and control techniques for multi-agent tasks by implicitly representing not only information associated with the individual agents, such as agent position, velocity, and camera data, but also their relationships with one another, such as distances between the agents and their ability to communicate with one another. One such task is a multi-agent navigation problem in which the agents must coordinate with one another in a decentralized manner, using proximity sensors only, to navigate safely to their intended goal positions in the environment without collisions or deadlocks. The contribution of this thesis is the design of an end-to-end decentralized control scheme for multi-agent navigation that utilizes GNNs to prevent inter-agent collisions and deadlocks. The contributions consist of the development, simulation, and evaluation of an advantage actor-critic (A2C) reinforcement learning algorithm that employs actor and critic networks which simultaneously approximate the policy function and value function, respectively. These networks are implemented using GNN frameworks for navigation by groups of 3, 5, 10, and 15 agents in simulated two-dimensional environments. It is observed that in 40% to 50% of the simulation trials, 70% to 80% of the agents reach their goal positions without colliding with other agents or becoming trapped in deadlocks. The model is also compared to a simulation in which actions are chosen randomly for the agents, and it is observed that the model performs notably well for smaller groups of agents.
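
A minimal sketch of the message-passing mechanism underlying a GNN layer is shown below: each agent (node) averages its neighbors' features over a proximity graph and combines them with its own. It is written in plain PyTorch with toy sizes and is only an illustration of the mechanism, not the actor or critic networks trained in this work.

# One message-passing layer: aggregate neighbor features over an adjacency matrix
# and combine them with each node's own features. Sizes and inputs are toy values.
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_agents, in_dim) node features; adj: (num_agents, num_agents) 0/1 proximity matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ x / deg                    # average neighbor features
        return torch.relu(self.w_self(x) + self.w_neigh(neigh_mean))

num_agents, feat = 5, 4
x = torch.randn(num_agents, feat)                     # e.g., relative position and velocity
adj = (torch.rand(num_agents, num_agents) > 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)   # symmetric proximity graph, no self-loops
layer = GraphLayer(feat, 8)
print(layer(x, adj).shape)                            # torch.Size([5, 8])
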
Contributors: Ayalasomayajula, Manaswini (Author) / Berman, Spring (Thesis advisor) / Mian, Sami (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Recently, Generative Adversarial Networks (GANs) have been applied to the problem of cold-start recommendation, but the training performance of these models is hampered by the extreme sparsity in warm users' purchase behavior. This thesis introduces a novel representation for user vectors that combines user demographics and user preferences, making the model a hybrid system that uses both collaborative filtering and content-based recommendation. The system models user purchase behavior using weighted user-product preferences (explicit feedback) rather than binary user-product interactions (implicit feedback). Using this representation, a novel sparse adversarial model, the Sparse ReguLarized Generative Adversarial Network (SRLGAN), is developed for cold-start recommendation. SRLGAN leverages the sparse user-purchase behavior, which ensures training stability and avoids overfitting on warm users. The performance of SRLGAN is evaluated on two popular datasets and demonstrates state-of-the-art results.
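
A small sketch of the hybrid user-vector idea: demographic attributes are one-hot encoded and concatenated with a weighted product-preference vector built from explicit feedback. The attribute names, catalog size, and normalization are hypothetical; the SRLGAN model itself is not reproduced here.

# Hybrid user representation: one-hot demographics concatenated with weighted
# (explicit-feedback) product preferences. All fields and sizes are placeholders.
import numpy as np

GENDERS = ["female", "male", "other"]
AGE_BUCKETS = ["<25", "25-40", ">40"]
NUM_PRODUCTS = 6

def user_vector(gender, age_bucket, ratings):
    """ratings: {product_index: preference weight, e.g. a 1-5 star rating}."""
    demo = np.zeros(len(GENDERS) + len(AGE_BUCKETS))
    demo[GENDERS.index(gender)] = 1.0
    demo[len(GENDERS) + AGE_BUCKETS.index(age_bucket)] = 1.0
    prefs = np.zeros(NUM_PRODUCTS)
    for idx, weight in ratings.items():
        prefs[idx] = weight / 5.0        # normalize explicit feedback to [0, 1]
    return np.concatenate([demo, prefs])

# A warm user with a few rated products, and a cold-start user with demographics only.
print(user_vector("female", "25-40", {0: 5, 3: 2}))
print(user_vector("other", "<25", {}))
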
Contributors: Shah, Aksheshkumar Ajaykumar (Author) / Venkateswara, Hemanth (Thesis advisor) / Berman, Spring (Thesis advisor) / Ladani, Leila J (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
A significant proportion of medical errors involves crucial medical information, and most stem from misinterpreting non-standardized clinical notes. The Clinical Skills exam offered by the United States Medical Licensing Examination (USMLE) was put in place to certify patient note-taking skills before medical students join professional practice, offering the first line of defense in protecting patients from medical errors. Nonetheless, the exam was discontinued in 2021 owing to the high costs and resource usage involved in scoring it. This thesis compares four transformer-based models, namely BERT (Bidirectional Encoder Representations from Transformers) Base Uncased, emilyalsentzer/Bio_ClinicalBERT, RoBERTa (Robustly Optimized BERT Pre-training Approach), and DeBERTa (Decoding-enhanced BERT with disentangled attention), with the goal of mapping free text in patient notes to clinical concepts present in the exam rubric. The impact of context-specific embeddings on BERT was also studied to determine the need for a clinical BERT in the Clinical Skills exam. After comparing DeBERTa with the three other transformer models, this thesis proposes its use as the backbone model for patient note scoring in the USMLE Clinical Skills exam. The disentangled attention and enhanced mask decoder integrated into DeBERTa are credited for its higher performance compared to the other models. In addition, the effect of meta pseudo labeling was investigated and found to further enhance DeBERTa's performance.
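
As a rough illustration of mapping note text to rubric concepts with a DeBERTa encoder, the sketch below embeds phrases with a pretrained checkpoint from the Hugging Face hub and matches them by cosine similarity; the mean pooling, the microsoft/deberta-base checkpoint, and the example phrases are simplifying assumptions, not the scoring pipeline or data used in the thesis.

# Embed patient-note phrases and rubric concepts with a pretrained DeBERTa encoder,
# then compare them by cosine similarity. Pooling, checkpoint, and phrases are
# illustrative choices only.
import torch
from transformers import AutoModel, AutoTokenizer

name = "microsoft/deberta-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch).last_hidden_state        # (batch, seq, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)          # mean-pool over real tokens

note_phrases = ["pt reports chest pain radiating to left arm", "no fever or chills"]
rubric_concepts = ["chest pain", "radiation to arm", "fever"]
a = torch.nn.functional.normalize(embed(note_phrases), dim=-1)
b = torch.nn.functional.normalize(embed(rubric_concepts), dim=-1)
print(a @ b.T)   # one row per note phrase, one column per rubric concept
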
Contributors: Ganesh, Jay (Author) / Bansal, Ajay (Thesis advisor) / Mehlhase, Alexandra (Committee member) / Findler, Michael (Committee member) / Arizona State University (Publisher)
Created: 2022