Matching Items (405)

Description

For as long as humans have been working, they have been looking for ways to get that work done better, faster, and more efficiently. Over the course of human history, mankind has created innumerable spectacular inventions, all with the goal of making the economy and daily life more efficient. Today, innovations and technological advancements are happening at a pace never seen before, and technologies like automation and artificial intelligence are poised to once again fundamentally alter the way people live and work in society. Whether society is prepared or not, robots are coming to replace human labor, and they are coming fast. In many areas, artificial intelligence has already disrupted entire industries of the economy. As people continue to make advancements in artificial intelligence, more industries will be disrupted, more jobs will be lost, and entirely new industries and professions will be created in their wake. The future of the economy and society will be determined by how humans adapt to the rapid innovations that are taking place every single day. In this paper I will examine the extent to which automation will take the place of human labor in the future, project the potential effect of automation on future unemployment, and consider what individuals and society will need to do to adapt to keep pace with rapidly advancing technology. I will also look at the history of automation in the economy. For centuries humans have been advancing technology to make their everyday work more productive and efficient, and for centuries this has forced humans to adapt to new technology through training and education. The thesis will additionally examine the ways in which the U.S. education system will have to adapt to meet the demands of the advancing economy, and how job retraining programs must be modernized to prepare workers for the changing economy.
Contributors: Cunningham, Reed P. (Author) / DeSerpa, Allan (Thesis director) / Haglin, Brett (Committee member) / School of International Letters and Cultures (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new method of localization that uses networks with several wireless transceivers and can be implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks depend heavily upon an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from location to location. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm, which is trained to associate channels with their respective locations. A device in need of localization would then only need to compute a channel estimate and present it to this algorithm to obtain its location.
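
To make the fingerprinting idea concrete, here is a minimal C sketch, assuming a table of previously measured (channel estimate, location) pairs stands in for the trained model and a query channel is matched to its nearest stored neighbor. Nearest-neighbor matching is a simple stand-in for the learning algorithm the project actually trains; all names and toy values here are hypothetical.

```c
/* Minimal channel-fingerprint localization sketch: match a query channel
 * estimate to the closest previously measured one and return its location. */
#include <stdio.h>
#include <float.h>

#define N_TAPS 4    /* length of each channel estimate (tap magnitudes) */
#define N_SAMPLES 3 /* fingerprints collected at known locations */

typedef struct {
    double channel[N_TAPS]; /* channel estimate measured at this spot */
    double x, y;            /* known location label */
} Fingerprint;

/* Squared Euclidean distance between two channel estimates. */
static double channel_dist(const double *a, const double *b) {
    double d = 0.0;
    for (int i = 0; i < N_TAPS; i++) {
        double diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

int main(void) {
    /* Hypothetical fingerprint database built during a training phase. */
    Fingerprint db[N_SAMPLES] = {
        {{1.0, 0.5, 0.2, 0.1}, 0.0, 0.0},
        {{0.3, 0.9, 0.4, 0.2}, 5.0, 0.0},
        {{0.1, 0.2, 0.8, 0.6}, 0.0, 5.0},
    };
    /* A device computes this channel estimate and asks for its location. */
    double query[N_TAPS] = {0.2, 0.85, 0.45, 0.25};

    int best = 0;
    double best_d = DBL_MAX;
    for (int i = 0; i < N_SAMPLES; i++) {
        double d = channel_dist(query, db[i].channel);
        if (d < best_d) { best_d = d; best = i; }
    }
    printf("estimated location: (%.1f, %.1f)\n", db[best].x, db[best].y);
    return 0;
}
```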

As an additional step, the effect of location noise is investigated in this report. After the localization system described above demonstrated promising results, the team showed that the system is robust to noise on its location labels. This suggests the system could be implemented in a continued-learning environment, in which some user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be deployed without extensive data collection prior to release.
Contributors: Chang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

At present, the vast majority of human subjects with neurological disease are still diagnosed through in-person assessments and qualitative analysis of patient data. In this paper, we propose to use Topological Data Analysis (TDA) together with machine learning tools to automate the process of Parkinson's disease classification and severity assessment. An automated, stable, and accurate method to evaluate Parkinson's would be significant in streamlining diagnoses of patients and providing families more time for corrective measures. We propose a methodology which incorporates TDA into the analysis of Parkinson's disease postural shift data through the representation of persistence images. Topological features are largely invariant to small changes in data and have been shown to perform well in discrimination tasks. The contributions of the paper are twofold. We propose a method to 1) distinguish healthy patients from those afflicted by disease and 2) assess the severity of disease. We explore the use of the proposed method in an application involving a Parkinson's disease dataset composed of healthy-elderly, healthy-young, and Parkinson's disease patients.
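
As a concrete illustration of the persistence-image representation named above, the following C sketch maps each (birth, death) point of a persistence diagram to (birth, persistence) coordinates and splats it onto a fixed grid as a persistence-weighted Gaussian. The diagram values, grid size, and spread are hypothetical; in a pipeline like the one proposed, such an image would be flattened and fed to a classifier.

```c
/* Minimal persistence-image sketch: rasterize a persistence diagram onto a
 * grid of persistence-weighted Gaussians. Compile with -lm. */
#include <stdio.h>
#include <math.h>

#define GRID 8      /* resolution of the persistence image */
#define SIGMA 0.15  /* Gaussian spread (hypothetical) */
#define N_POINTS 3

int main(void) {
    /* Hypothetical persistence diagram: (birth, death) pairs in [0, 1]. */
    double birth[N_POINTS] = {0.10, 0.20, 0.40};
    double death[N_POINTS] = {0.50, 0.35, 0.90};

    double img[GRID][GRID] = {{0}};
    for (int k = 0; k < N_POINTS; k++) {
        double b = birth[k];
        double p = death[k] - birth[k]; /* persistence of this feature */
        for (int i = 0; i < GRID; i++) {
            for (int j = 0; j < GRID; j++) {
                /* Center of pixel (i, j) in (birth, persistence) space. */
                double cb = (j + 0.5) / GRID;
                double cp = (i + 0.5) / GRID;
                double d2 = (cb - b) * (cb - b) + (cp - p) * (cp - p);
                /* Persistence-weighted Gaussian splat. */
                img[i][j] += p * exp(-d2 / (2.0 * SIGMA * SIGMA));
            }
        }
    }
    /* Print the image (persistence axis upward); a classifier would take
     * these GRID*GRID values as its input vector. */
    for (int i = GRID - 1; i >= 0; i--) {
        for (int j = 0; j < GRID; j++) printf("%5.2f ", img[i][j]);
        printf("\n");
    }
    return 0;
}
```
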
Contributors: Rahman, Farhan Nadir (Co-author) / Nawar, Afra (Co-author) / Turaga, Pavan (Thesis director) / Krishnamurthi, Narayanan (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

In this project, the use of deep neural networks for the process of selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent is able to effectively utilize the tree of dependencies and its environmental surroundings to make judgments about which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network in combination with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (with a standard deviation of 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment that has recursive sub-goal dependencies and little pre-known information.
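
For readers unfamiliar with the Q-learning half of this pairing, the following C sketch replaces the deep network with a plain tabular agent on a toy chain of sub-goals, where each action is one way of completing the next sub-goal at some cost (a negative reward). The environment, costs, and hyperparameters are hypothetical; only the update rule is the standard Q-learning one.

```c
/* Tabular Q-learning on a toy sub-goal chain: learn the cheapest way to
 * complete each sub-goal en route to the final goal. */
#include <stdio.h>
#include <stdlib.h>

#define N_STATES 5  /* state = sub-goals completed; N_STATES-1 is the goal */
#define N_ACTIONS 2 /* two ways of pursuing the next sub-goal */
#define EPISODES 5000
#define ALPHA 0.1   /* learning rate */
#define GAMMA 0.95  /* discount factor */
#define EPSILON 0.1 /* exploration rate */

/* Hypothetical cost of completing sub-goal s with action a. */
static const double cost[N_STATES - 1][N_ACTIONS] = {
    {3.0, 1.0}, {2.0, 4.0}, {1.0, 1.5}, {5.0, 2.0}
};

static double Q[N_STATES][N_ACTIONS]; /* Q-table; terminal row stays 0 */

static int greedy(int s) { return Q[s][0] >= Q[s][1] ? 0 : 1; }

int main(void) {
    srand(42);
    for (int ep = 0; ep < EPISODES; ep++) {
        int s = 0;
        while (s < N_STATES - 1) {
            /* Epsilon-greedy action selection. */
            int a = (rand() / (double)RAND_MAX < EPSILON)
                        ? rand() % N_ACTIONS
                        : greedy(s);
            double r = -cost[s][a]; /* minimizing cost = maximizing reward */
            int s2 = s + 1;         /* completing a sub-goal advances the chain */
            /* Q-learning update: Q(s,a) += a * (r + g * max Q(s') - Q(s,a)). */
            Q[s][a] += ALPHA * (r + GAMMA * Q[s2][greedy(s2)] - Q[s][a]);
            s = s2;
        }
    }
    for (int s = 0; s < N_STATES - 1; s++)
        printf("sub-goal %d: prefer action %d\n", s, greedy(s));
    return 0;
}
```
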
Contributors: Koleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Pandora is a play exploring our relationship with gendered technology through the lens of artificial intelligence. Can women be subjective under patriarchy? Do robots who look like women have subjectivity? Hoping to create a better version of ourselves, The Engineer must navigate the loss of her creation, and Pandora must navigate their new world. The original premiere run was March 27-28, 2018, with an original cast of Caitlin Andelora, Rikki Tremblay, and Michael Tristano Jr.
Contributors: Toye, Abigail Elizabeth (Author) / Linde, Jennifer (Thesis director) / Abele, Kelsey (Committee member) / Department of Information Systems (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing price classification example, while simultaneously considering its impact from a data management perspective on both the software and hardware level. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components and then analyzed for functionality by relating back to its biological counterpart. The role of a neuron is then described in the context of a neural network, with equal emphasis placed on how it undergoes training individually and as part of an entire network. Using the technique of supervised learning, the neural network is trained with three main factors for housing price classification: the total number of rooms, the number of bathrooms, and the square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs compares to the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, which makes the collected profiler data more precise during the program's execution. The program is designed to break down each stage of the neuron's training process into distinct functions. In addition to this functional decomposition, the struct data type is used as the underlying data structure for this project, representing not only the neuron but also its training and test data. Once fully trained, the neuron's test results are graphed to visually depict how well the neuron learned from its sample training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective on the software and hardware level.
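
The abstract does not reproduce the thesis code, but a minimal sketch in the same spirit might look like the following: a single neuron held in a C struct, trained by gradient descent to classify houses as expensive or affordable from room count, bathroom count, and (scaled) square footage. The data, labels, and hyperparameters are hypothetical stand-ins for the thesis's generated data set.

```c
/* Single artificial neuron as a C struct, trained by supervised learning
 * on three housing features. Compile with -lm. */
#include <stdio.h>
#include <math.h>

#define N_INPUTS 3
#define N_TRAIN 4
#define EPOCHS 2000
#define RATE 0.1

typedef struct {
    double weights[N_INPUTS];
    double bias;
} Neuron;

typedef struct {
    double inputs[N_INPUTS]; /* rooms, bathrooms, sq. footage (thousands) */
    double target;           /* 1 = expensive, 0 = affordable */
} Sample;

/* Weighted sum of inputs passed through a sigmoid activation. */
static double activate(const Neuron *n, const double *x) {
    double sum = n->bias;
    for (int i = 0; i < N_INPUTS; i++) sum += n->weights[i] * x[i];
    return 1.0 / (1.0 + exp(-sum));
}

int main(void) {
    /* Hypothetical training samples. */
    Sample train[N_TRAIN] = {
        {{3, 1, 0.9}, 0}, {{4, 2, 1.4}, 0},
        {{5, 3, 2.2}, 1}, {{6, 4, 3.0}, 1},
    };
    Neuron n = {{0.0, 0.0, 0.0}, 0.0};

    /* Gradient-descent training: nudge each weight against the error. */
    for (int e = 0; e < EPOCHS; e++) {
        for (int s = 0; s < N_TRAIN; s++) {
            double out = activate(&n, train[s].inputs);
            double err = train[s].target - out;
            double grad = err * out * (1.0 - out); /* sigmoid derivative */
            for (int i = 0; i < N_INPUTS; i++)
                n.weights[i] += RATE * grad * train[s].inputs[i];
            n.bias += RATE * grad;
        }
    }
    double test[N_INPUTS] = {5, 2, 2.0}; /* unseen house */
    printf("P(expensive) = %.3f\n", activate(&n, test));
    return 0;
}
```
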
Contributors: Richards, Nicholas Giovanni (Author) / Miller, Phillip (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Due to artificial selection, dogs have high levels of phenotypic diversity, yet there appears to be low genetic diversity within individual breeds. Through their domestication from wolves, dogs have gone through a series of population bottlenecks, which has resulted in a reduction in genetic diversity, with a large amount of linkage disequilibrium and the persistence of deleterious mutations. This has led to an increased susceptibility to a multitude of diseases, including cancer. To study the effects of artificial selection and life history characteristics on the risk of cancer mortality, we collected cancer mortality data from four studies as well as the percent heterozygosity, body size, lifespan, and breed group for 201 dog breeds. We also collected the specific types of cancer each breed was susceptible to and compared the dog cancer mortality patterns to the patterns observed in other mammals. We found a relationship between cancer mortality rate and heterozygosity, body size, lifespan, and breed group. Higher levels of heterozygosity were also associated with longer lifespan. These results indicate that larger breeds, such as Irish Water Spaniels, Flat-coated Retrievers, and Bernese Mountain Dogs, are more susceptible to cancer and have lower heterozygosity and shorter lifespans. These breeds are also more susceptible to sarcomas, as opposed to the carcinomas seen in smaller breeds, such as Miniature Pinschers, Chihuahuas, and Pekingese. In other mammals, larger and longer-lived animals show decreased cancer mortality; within dog breeds, however, the opposite relationship is observed. These relationships could be due to the trade-off between cellular maintenance and growing fast and large, with higher expression of growth factors such as IGF-1. This study further demonstrates the relationships between cancer mortality, heterozygosity, and life history traits, and exhibits dogs as an important model organism for understanding the relationship between genetics and health.
Contributors: Balsley, Cassandra Sierra (Author) / Maley, Carlo (Thesis director) / Wynne, Clive (Committee member) / Tollis, Marc (Committee member) / School of Life Sciences (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description

The purpose of our research was to develop recommendations and/or strategies for Company A's data center group in the context of the server CPU chip industry. We used data collected from the International Data Corporation (IDC) that was provided by our team coaches, as well as data that is publicly accessible on the internet. As the server CPU industry expands and transitions to cloud computing, Company A's Data Center Group will need to expand their server CPU chip product mix to meet new demands of the cloud industry and to maintain high market share. Company A boasts leading performance with their x86 server chips and 95% market segment share. The cloud industry is dominated by seven companies that Company A calls "The Super 7": Amazon, Google, Microsoft, Facebook, Alibaba, Tencent, and Baidu. In the long run, the growing market share of the Super 7 could give them substantial buying power over Company A, which could lead to discounts and margin compression for Company A's main growth engine. Additionally, the substantial growth of the Super 7 could fuel the development of their own design teams and a push to make their own server chips internally, which would be detrimental to Company A's data center revenue. We first researched the server industry and key terminology relevant to our project. We narrowed our scope by focusing on the cloud computing aspect of the server industry. We then researched what Company A has already been doing in the context of cloud computing and what they are currently doing to address the problem. Next, using our market analysis, we identified key areas we think Company A's data center group should focus on. Using the information available to us, we developed strategies and recommendations that we think will help Company A's Data Center Group position itself well in an extremely fast-growing cloud computing industry.
Contributors: Jurgenson, Alex (Co-author) / Nguyen, Duy (Co-author) / Kolder, Sean (Co-author) / Wang, Chenxi (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Department of Management (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Breast cancer is the leading cause of cancer-related deaths of women in the United States. Traditionally, breast cancer is predominantly treated by a combination of surgery, chemotherapy, and radiation therapy. However, due to the significant negative side effects associated with these traditional treatments, there have been substantial efforts to develop alternative therapies to treat cancer. One such alternative therapy is a peptide-based therapeutic cancer vaccine. Therapeutic cancer vaccines enhance an individual's immune response to a specific tumor. They are capable of doing this through artificial activation of tumor-specific CTLs (cytotoxic T lymphocytes). However, in order to artificially activate tumor-specific CTLs, a patient must be treated with immunogenic epitopes derived from their specific cancer type. We have identified the tumor-associated antigen TPD52 as an ideal target for a therapeutic cancer vaccine, owing to the overexpression of TPD52 in a variety of different cancer types. In order to start the development of a therapeutic cancer vaccine for TPD52-related cancers, we devised a two-step strategy. First, we planned to create a list of potential TPD52 epitopes by using epitope binding and processing prediction tools. Second, we planned to attempt to experimentally identify MHC class I TPD52 epitopes in vitro. We identified 942 potential 9- and 10-amino-acid epitopes for the HLAs A1, A2, A3, A11, A24, B07, B27, B35, and B44. These epitopes were predicted by using a combination of 3 binding prediction tools and 2 processing prediction tools. From these 942 potential epitopes, we selected the top 50 epitopes ranked by a combination of binding and processing scores. Because some predicted epitopes were promiscuous across multiple HLAs, we ordered 38 synthetic epitopes from the list of the top 50. We also performed a frequency analysis of the TPD52 protein sequence and identified 3 regions of high epitope production. After the epitope predictions were completed, we proceeded to attempt to experimentally detect presented TPD52 epitopes. First, we successfully transduced parental K562 cells with TPD52. After transduction, we started optimizing the immunoprecipitation protocol. This optimization proved to be more difficult than originally believed and was the main reason we were unable to progress past the transduction of the parental cells. However, we believe that we have identified the issues and will be able to complete the experiment in the coming months.
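
The ranking step described above, combining binding and processing predictions and keeping the top-scoring candidates, can be sketched in C as follows. The peptide sequences, scores, and equal-weight combination scheme are hypothetical stand-ins for the output of the external prediction tools.

```c
/* Rank candidate epitopes by a combined binding + processing score and
 * keep the top of the list. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char peptide[16];  /* 9- or 10-mer amino-acid sequence (hypothetical) */
    double binding;    /* binding-prediction score, higher = better */
    double processing; /* processing-prediction score */
    double combined;
} Epitope;

/* qsort comparator: descending by combined score. */
static int by_combined_desc(const void *a, const void *b) {
    double da = ((const Epitope *)a)->combined;
    double db = ((const Epitope *)b)->combined;
    return (da < db) - (da > db);
}

int main(void) {
    Epitope cands[] = { /* hypothetical candidates and scores */
        {"LLDRFLATV", 0.82, 0.61, 0},
        {"KMFDSLLAV", 0.55, 0.90, 0},
        {"GLMDKTSRV", 0.91, 0.40, 0},
        {"ALNESLVDL", 0.47, 0.52, 0},
    };
    int n = sizeof cands / sizeof cands[0];
    int top_k = 2;

    /* Equal-weight average of binding and processing scores. */
    for (int i = 0; i < n; i++)
        cands[i].combined = 0.5 * cands[i].binding + 0.5 * cands[i].processing;

    qsort(cands, n, sizeof(Epitope), by_combined_desc);
    for (int i = 0; i < top_k && i < n; i++)
        printf("%d. %s (score %.2f)\n", i + 1, cands[i].peptide,
               cands[i].combined);
    return 0;
}
```
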
Contributors: Wilson, Eric Andrew (Author) / Anderson, Karen (Thesis director) / Borges, Chad (Committee member) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The purpose of this thesis study was to examine the effects of the "War on Cancer" frame, loss-gain prime, and patient gender on treatment decisions for advanced cancer patients. A total of 291 participants (141 females) took part in an online survey experiment and were randomly assigned to one of eight possible conditions, each of which consisted of a combination of one of two levels of each of three independent variables: war frame ("War on Cancer" frame or neutral frame), loss-gain prime (loss prime or gain prime), and patient gender (female or male). Each of the three variables was operationalized to determine whether exposure to the war-on-cancer paradigm, loss-framed language, or male patient gender would increase the likelihood of a participant choosing a more aggressive cancer treatment. Participants read a patient scenario and were asked to respond to questions related to motivating factors. Participants were then asked to report a preference for one of two treatment decisions. Finally, participants provided brief demographic information and responded to questions about military history, war attitudes, and cancer history. The aforementioned manipulations sought to determine whether exposure to various factors would make a substantive difference in the final treatment decision. Contrary to the predicted results, participants in the war frame condition (M = 3.85, SD = 1.48) were more likely to choose the pursuit of palliative care (as opposed to aggressive treatment) than participants in the neutral frame condition (M = 3.54, SD = 1.23). Ultimately, these significant findings suggest that there is practical information to be gained from treatment presentation manipulations. By arming healthcare providers with a more pointed understanding of the nuances of treatment presentation, we can hope to empower patients, their loved ones, and healthcare providers entrenched in the world of cancer treatment.
Contributors: Knowles, Madelyn Ann (Author) / Kwan, Virginia S. Y. (Thesis director) / Presson, Clark (Committee member) / Salamone, Damien (Committee member) / Department of Psychology (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05