Matching Items (216)

Description

The inherent risk in testing drugs has been hotly debated since the government first started regulating the drug industry in the early 1900s. From society's perspective, it is unclear who can assume the risks associated with trying new pharmaceuticals. In the mid-twentieth century, the US Food and Drug Administration (FDA) published several guidance documents encouraging researchers to exclude women from early clinical drug research. The motivation to publish those documents, and the subsequent guidance in which the FDA and other regulatory offices established their positions on women in drug research, may have been connected to current events at the time. The problem of whether women should be involved in drug research is a question of who can assume risk and who is responsible for disseminating what specific kinds of information. The problem tends to be framed as one that juxtaposes the health of women and the health of fetuses, setting the two in opposition. That opposition, coupled with the inherent uncertainty in testing drugs, produces a complex set of issues surrounding consent and access to information.
Contributors: Meek, Caroline Jane (Author) / Maienschein, Jane (Thesis director) / Brian, Jennifer (Committee member) / School of Life Sciences (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new method of localization that uses networks with several wireless transceivers and can be implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks depend heavily upon an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from one location to another. This localization system seeks to take advantage of that distinctness by feeding channel information into a machine learning algorithm trained to associate channels with their respective locations. A device in need of localization would then only need to compute a channel estimate and present it to the algorithm to obtain its location.

As an additional step, the effect of location noise is investigated in this report. Once the localization system described above demonstrates promising results, the team shows that the system is robust to noise on its location labels. In doing so, the team demonstrates that the system could be deployed in a continued-learning environment, in which user agents report their estimated (noisy) locations over a wireless communication network, so that the model can be used without extensive data collection prior to release.
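
The following is a minimal sketch of the channel-fingerprinting idea described above, not the thesis prototype: it assumes synthetic channel estimates from a toy multipath model and uses scikit-learn's MLPRegressor; the actual transceiver setup, dataset, and network architecture are not specified here.

```python
# A toy illustration (not the thesis prototype): map simulated channel
# estimates to 2-D positions with a small neural network.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate_channel(pos, n_subcarriers=64):
    """Crude multipath stand-in: the frequency response depends on the
    distances to a few fixed 'transceivers', plus measurement noise."""
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    dists = np.linalg.norm(anchors - pos, axis=1)
    freqs = np.arange(n_subcarriers)
    h = sum(np.exp(-2j * np.pi * freqs * d / 30.0) / (1 + d) for d in dists)
    h = h + 0.05 * (rng.standard_normal(n_subcarriers)
                    + 1j * rng.standard_normal(n_subcarriers))
    return np.abs(h)  # channel magnitude is the "fingerprint" fed to the model

positions = rng.uniform(0, 10, size=(2000, 2))                 # ground-truth locations
channels = np.array([simulate_channel(p) for p in positions])  # one estimate per location

X_tr, X_te, y_tr, y_te = train_test_split(channels, positions, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)                                          # learn channel -> location
err = np.linalg.norm(model.predict(X_te) - y_te, axis=1)
print(f"median localization error: {np.median(err):.2f} (arbitrary units)")
```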
Contributors: Chang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

At present, the vast majority of human subjects with neurological disease are still diagnosed through in-person assessments and qualitative analysis of patient data. In this paper, we propose to use Topological Data Analysis (TDA) together with machine learning tools to automate the process of Parkinson’s disease classification and severity assessment. An automated, stable, and accurate method to evaluate Parkinson’s would be significant in streamlining diagnoses and giving families more time for corrective measures. We propose a methodology that incorporates TDA into the analysis of Parkinson’s disease postural-shift data through the representation of persistence images. Topological features of a system have proven to be stable under small changes in the data and have been shown to perform well in discrimination tasks. The contributions of the paper are twofold: we propose a method to 1) distinguish healthy subjects from those afflicted by the disease and 2) assess the severity of the disease. We explore the use of the proposed method on a Parkinson’s disease dataset comprising healthy-elderly, healthy-young, and Parkinson’s disease patients.
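
As a rough illustration of this kind of pipeline (and not the thesis code), the sketch below assumes the ripser and persim packages and synthetic stand-ins for postural-sway signals: each signal is delay-embedded, summarized as a persistence image, and classified with a support vector machine.

```python
# A schematic pipeline (not the thesis code), assuming the ripser and persim
# packages are installed: delay-embed each 1-D "postural sway" signal, compute
# its H1 persistence diagram, render a persistence image, and classify.
import numpy as np
from ripser import ripser
from persim import PersistenceImager
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def delay_embed(signal, dim=3, tau=5):
    """Takens-style delay embedding of a 1-D time series into R^dim."""
    n = len(signal) - (dim - 1) * tau
    return np.stack([signal[i * tau: i * tau + n] for i in range(dim)], axis=1)

# Hypothetical stand-ins for postural-shift recordings: cleaner oscillations
# ("healthy") versus noisier, irregular ones ("patient").
t = np.linspace(0, 20, 400)
signals = [np.sin(t) + 0.1 * rng.standard_normal(400) for _ in range(20)] + \
          [np.sin(t) + 0.5 * rng.standard_normal(400) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

diagrams = [ripser(delay_embed(s))['dgms'][1] for s in signals]  # H1 (loop) features
pimgr = PersistenceImager(pixel_size=0.05)
pimgr.fit(diagrams)                                              # calibrate image extent
features = np.array([pimgr.transform(d).ravel() for d in diagrams])

clf = SVC().fit(features[::2], labels[::2])                      # train on half
print("held-out accuracy:", clf.score(features[1::2], labels[1::2]))
```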
Contributors: Rahman, Farhan Nadir (Co-author) / Nawar, Afra (Co-author) / Turaga, Pavan (Thesis director) / Krishnamurthi, Narayanan (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

In this project, the use of deep neural networks for selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies, which form a dependency tree. An agent operating within these environments has access to little information about the environment before interacting with it, so it is crucial that the agent is able to effectively use the tree of dependencies and its environmental surroundings to judge which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network combined with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (with a standard deviation of 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little prior information.
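
To make the setup concrete, here is a toy sketch of sub-goal selection over a recursive dependency tree. The recipe tree, costs, and reward shaping are invented for illustration, and a plain Q-table stands in for the deep Q-network used in the thesis.

```python
# A toy version of the sub-goal selection problem: items have recursive
# dependencies and per-item costs, and an agent learns which item to craft
# next to reach the goal cheaply. A plain Q-table stands in for the deep
# Q-network used in the thesis; the recipes and costs are invented.
import random
from collections import defaultdict

RECIPES = {"plank": [], "stick": ["plank"], "table": ["plank", "stick"],
           "tool": ["stick", "table"]}            # hypothetical dependency tree
COSTS = {"plank": 1, "stick": 1, "table": 3, "tool": 2}
GOAL = "tool"

def actions(inventory):
    """Items craftable now: not yet owned and all dependencies satisfied."""
    return [i for i in RECIPES if i not in inventory
            and all(d in inventory for d in RECIPES[i])]

Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(5000):                             # training episodes
    inv = frozenset()
    while GOAL not in inv:
        acts = actions(inv)
        a = (random.choice(acts) if random.random() < eps
             else max(acts, key=lambda x: Q[(inv, x)]))
        nxt = inv | {a}
        reward = -COSTS[a] + (10 if a == GOAL else 0)
        future = 0.0 if a == GOAL else max(Q[(nxt, b)] for b in actions(nxt))
        Q[(inv, a)] += alpha * (reward + gamma * future - Q[(inv, a)])
        inv = nxt

inv, plan = frozenset(), []                       # greedy rollout of learned policy
while GOAL not in inv:
    a = max(actions(inv), key=lambda x: Q[(inv, x)])
    plan.append(a)
    inv = inv | {a}
print("learned crafting order:", plan)
```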
Contributors: Koleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing price classification example, while simultaneously considering its impact from a data management perspective at both the software and hardware level. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components and analyzed for functionality by relating it back to its biological counterpart. The role of a neuron is then described in the context of a neural network, with equal emphasis placed on how it undergoes training individually and as part of an entire network. Using the technique of supervised learning, the neural network is trained on three main factors for housing price classification: total number of rooms, bathrooms, and square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs compares to the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, which makes the collected profiler data more precise during the program's execution. The program is designed to break down each stage of the neuron's training process into distinct functions. In addition to this functional decomposition, the struct data type is used as the underlying data structure, both to represent the neuron and to hold its training and test data. Once the neuron is fully trained, its test results are graphed to visually depict how well it learned from the training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective at the software and hardware level.
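
A compact sketch of the single-neuron training loop described above is given below in Python for brevity (the thesis itself implements it in C with structs and profiles the execution); the synthetic housing data and labeling rule are illustrative assumptions, not the thesis data set.

```python
# A Python sketch of the single-neuron training loop (the thesis implements
# this in C with structs and profiles it); the synthetic housing data and
# the labeling rule below are illustrative assumptions, not the thesis data.
import numpy as np

rng = np.random.default_rng(0)
# Features: total rooms, bathrooms, square footage; label: 1 if "expensive".
X = np.column_stack([rng.integers(2, 10, 1000),
                     rng.integers(1, 5, 1000),
                     rng.uniform(500, 4000, 1000)]).astype(float)
y = (X[:, 0] + 2 * X[:, 1] + X[:, 2] / 500 > 10).astype(float)   # toy labeling rule
X = (X - X.mean(axis=0)) / X.std(axis=0)                          # normalize inputs

split = int(0.8 * len(X))                                         # train on most of the set
w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(200):                                              # gradient-descent epochs
    z = 1 / (1 + np.exp(-(X[:split] @ w + b)))                    # sigmoid activation
    grad = z - y[:split]                                          # error signal
    w -= lr * X[:split].T @ grad / split
    b -= lr * grad.mean()

pred = 1 / (1 + np.exp(-(X[split:] @ w + b))) > 0.5               # test on the remainder
print("test accuracy:", (pred == (y[split:] > 0.5)).mean())
```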
Contributors: Richards, Nicholas Giovanni (Author) / Miller, Phillip (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

In the medical industry, there have been promising advances in delivering new types of healthcare to the public. As of 2015, there was a 98% Premarket Approval rate, a 38% increase since 2010. In addition, 41 new novel drugs were approved for clinical usage in 2014, whereas the average over the previous years, 2005-2013, was 25. However, the research process of creating and delivering new healthcare to the public remains remarkably inefficient. Discovering a novel drug for clinical usage takes on average 15 years, costs over $900 million by one estimate, and succeeds less than 12% of the time. Medical devices do not fare much better: between 2005 and 2009, there were over 700 recalls per year, and a 510(k)-exempt premarket approval takes at minimum 3.25 years. A time lag also exists in which only 14% of medical discoveries are ever implemented clinically, and doing so takes 17 years. Coupled with these inefficiencies, government funding for medical research has been decreasing since 2002 (2.5% of Gross Domestic Product) and is predicted to fall to 1.5% of Gross Domestic Product by 2019. Translational research, defined simply as the conversion of bench-side discoveries into clinical usage, has been on the rise since the 1990s. This may be driving the increased premarket approvals and new novel drug approvals; at the very least, it is worth considering, as translational research is directly related to healthcare practices. In this paper, I propose to improve the outcomes of translational research in order to better deliver advancing healthcare to the public. I suggest that the Best Value Performance Information Procurement System (BV PIPS) should be adapted to the process of selecting which translational research projects to fund. BV PIPS has been shown to increase the efficiency and success rate of delivering projects and services. Over 17 years of research covering $6.3 billion of delivered projects and services shows that BV PIPS achieves 98% customer satisfaction, minimizes management effort by 90%, and uses 50% less manpower and effort. Using the University of Michigan-Coulter Foundation Program's funding process as a baseline and standard for the current selection of translational research projects to fund, I offer changes to this process based on BV PIPS that may ameliorate it. Because the concepts implemented in this process are congruent with the literature on successful translational research, this new model for selecting translational research projects to fund may reduce costs, increase efficiency, and increase success. This may in turn lead to more Premarket Approvals, more new novel drug approvals, quicker delivery to market, and fewer recalls.
Contributors: Del Rosario, Joseph Paul (Author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This project explores the dimensions that affect the success of a nonprofit organization's web presence, using a dance and health nonprofit's website as the foundation of the research and redesign. This report includes literature and design research, analysis, recommendations, and a journal of the web design process. Through research, three categories were identified as the primary dimensions affecting the success of a website: content, technical adequacy, and appearance. Furthermore, website success is influenced by how the strength of each category relies on the others. To improve the web design of Dancers and Health Together Inc., content implementations and redesign elements were based on both research and personal preference. The redesigned website can be found at www.collaydennis.com and will become inactive after May 31, 2015.
Contributors: Dennis, Collay Carole (Author) / Coleman, Grisha (Thesis director) / Hosmer, Anthony Ryan (Committee member) / Barrett, The Honors College (Contributor) / Walter Cronkite School of Journalism and Mass Communication (Contributor)
Created: 2015-05
Description

Epilepsy affects numerous people around the world and is characterized by recurring seizures, motivating efforts to predict seizures so that precautionary measures may be employed. One promising algorithm extracts spatiotemporal correlation-based features from intracranial electroencephalography signals for use with support vector machines. The robustness of this methodology is tested through a sensitivity analysis. Doing so also provides insight into how to construct more effective feature vectors.
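
A schematic example of this kind of pipeline, with synthetic multichannel data standing in for intracranial EEG, is sketched below; the correlation features, window length, and noise levels are assumptions for illustration, not the thesis parameters.

```python
# An illustrative sketch, not the thesis algorithm: per-window pairwise channel
# correlations from a synthetic multichannel signal become the feature vector
# for an SVM, followed by a simple sensitivity check on the features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_CH, WIN = 8, 256                                # channels, samples per window

def corr_features(window):
    """Upper triangle of the channel-by-channel correlation matrix."""
    c = np.corrcoef(window)
    return c[np.triu_indices(N_CH, k=1)]

def make_window(synchronized):
    base = rng.standard_normal((N_CH, WIN))
    if synchronized:                              # crude preictal stand-in: channels co-vary
        base += 0.8 * rng.standard_normal(WIN)
    return base

y = np.array([0] * 200 + [1] * 200)
X = np.array([corr_features(make_window(label)) for label in y])

clf = SVC(kernel="rbf").fit(X[::2], y[::2])       # train on half, test on the rest
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))

# Sensitivity analysis in miniature: perturb the features, watch accuracy degrade.
for sigma in (0.0, 0.05, 0.1, 0.2):
    noisy = X[1::2] + sigma * rng.standard_normal(X[1::2].shape)
    print(f"noise sigma={sigma}: accuracy={clf.score(noisy, y[1::2]):.2f}")
```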
Contributors: Ma, Owen (Author) / Bliss, Daniel (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05
Description

Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, detail different motivations for bots, describe previous work in bot detection and observation, and then perform bot detection of our own. For our bot detection, we are interested in Twitter bots that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a largely untapped focus in bot detection: the diffusion of extremist ideals through bots.
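
Purely as an illustration of combining per-account heuristics (the five heuristics and thresholds below are invented examples, not the ones evaluated in this work), a toy scorer might look like the following.

```python
# Illustrative only: the five heuristics and thresholds below are NOT the ones
# evaluated in this work; they just show how per-account signals can be
# combined into a simple bot-likelihood score.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    duplicate_tweet_ratio: float      # share of tweets that are near-duplicates
    followers: int
    following: int
    has_default_profile_image: bool

HEURISTICS = [
    ("high tweet rate",    lambda a: a.tweets_per_day > 100),
    ("repetitive content", lambda a: a.duplicate_tweet_ratio > 0.5),
    ("follow imbalance",   lambda a: a.following > 10 * max(a.followers, 1)),
    ("default profile",    lambda a: a.has_default_profile_image),
    ("no audience",        lambda a: a.followers < 5),
]

def bot_score(account):
    """Fraction of heuristics triggered; higher means more bot-like."""
    hits = [name for name, rule in HEURISTICS if rule(account)]
    return len(hits) / len(HEURISTICS), hits

suspect = Account(tweets_per_day=300, duplicate_tweet_ratio=0.8,
                  followers=3, following=900, has_default_profile_image=True)
score, hits = bot_score(suspect)
print(f"bot score {score:.2f}, triggered: {hits}")
```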
Contributors: Karlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

The OMFIT (One Modeling Framework for Integrated Tasks) modeling environment and the BRAINFUSE module have been deployed on the PPPL (Princeton Plasma Physics Laboratory) computing cluster with modifications that make it possible to apply artificial neural networks (NNs) to the TRANSP databases for the JET (Joint European Torus), TFTR (Tokamak Fusion Test Reactor), and NSTX (National Spherical Torus Experiment) devices. This development has facilitated the investigation of NNs for predicting heat transport profiles in JET, TFTR, and NSTX, and has prompted additional investigations into how else NNs may be of use to scientists at PPPL. In applying NNs to the aforementioned devices for predicting heat transport, the primary goal of this endeavor is to reproduce the success shown in Meneghini et al. in using NNs for heat transport prediction in DIII-D. Reproducing those results is important because it would provide scientists at PPPL with a quick and efficient toolset for reliably predicting heat transport profiles much faster than any existing computational methods allow; the progress towards this goal is outlined in this report, and potential additional applications of the NN framework are presented.
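
As a schematic stand-in for this kind of profile prediction (not OMFIT/BRAINFUSE code, and with invented inputs rather than TRANSP quantities), a small neural network can be trained to map a few scalar parameters to a radial transport profile.

```python
# A schematic stand-in (not OMFIT/BRAINFUSE code): regress a radial transport
# profile from a few scalar plasma parameters with a small neural network.
# The parameters, profile shape, and units are invented for illustration,
# not taken from the TRANSP databases.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rho = np.linspace(0, 1, 32)                        # normalized radius grid

def toy_profile(amp, width):
    """Made-up 'heat diffusivity' profile driven by two scalar parameters."""
    return amp * np.exp(-((rho - 0.7) / width) ** 2) + 0.1

params = rng.uniform([0.5, 0.1], [3.0, 0.4], size=(1000, 2))    # (amplitude, width) pairs
profiles = np.array([toy_profile(a, w) for a, w in params])
profiles += 0.02 * rng.standard_normal(profiles.shape)          # measurement-like noise

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(params[:800], profiles[:800])                           # train on most "shots"
rmse = np.sqrt(np.mean((net.predict(params[800:]) - profiles[800:]) ** 2))
print(f"held-out profile RMSE: {rmse:.3f}")
```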
Contributors: Luna, Christopher Joseph (Author) / Tang, Wenbo (Thesis director) / Treacy, Michael (Committee member) / Meneghini, Orso (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2015-05