Matching Items (176)
131527-Thumbnail Image.png
Description
Object localization is used to determine the location of a device, an important aspect of applications ranging from autonomous driving to augmented reality. Commonly used localization techniques include global positioning systems (GPS), simultaneous localization and mapping (SLAM), and positional tracking, but all of these methodologies have drawbacks, especially in high-traffic indoor or urban environments. Using recent improvements in the field of machine learning, this project proposes a new method of localization that uses networks with several wireless transceivers and can be implemented without heavy computational loads or high costs. This project aims to build a proof-of-concept prototype and demonstrate that the proposed technique is feasible and accurate.

Modern communication networks depend heavily upon an estimate of the communication channel, which represents the distortions a transmitted signal undergoes as it travels to a receiver. A channel can become quite complicated due to signal reflections, delays, and other undesirable effects and, as a result, varies significantly from one location to another. This localization system seeks to take advantage of this distinctness by feeding channel information into a machine learning algorithm, which is trained to associate channels with their respective locations. A device in need of localization would then only need to calculate a channel estimate and pose it to this algorithm to obtain its location.
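The channel-fingerprinting idea described above can be sketched end to end. Everything in this sketch is an illustrative assumption rather than the thesis's actual setup: four fixed anchors in a 10 m square, an amplitude-only free-space channel model, and a nearest-neighbour regressor standing in for the trained network.

```python
# Sketch: map simulated channel "fingerprints" back to (x, y) position.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Four hypothetical fixed transceivers at the corners of a 10 m x 10 m area.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def channel_features(pos):
    """Per-anchor amplitude gain (~1/distance) as the channel 'fingerprint'."""
    return 1.0 / np.linalg.norm(anchors - pos, axis=1)

positions = rng.uniform(1.0, 9.0, size=(2000, 2))
X = np.array([channel_features(p) for p in positions])
X += 0.005 * rng.standard_normal(X.shape)  # channel-estimation noise

# Fingerprint matching: look up the closest known channels, average positions.
model = KNeighborsRegressor(n_neighbors=3).fit(X[:1500], positions[:1500])
err = np.linalg.norm(model.predict(X[1500:]) - positions[1500:], axis=1)
median_err = float(np.median(err))
```

In the real system, the channel estimates would come from the wireless transceivers themselves, and the learned model would replace this toy simulator and lookup.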

As an additional step, this report investigates the effect of location noise. Once the localization system described above demonstrates promising results, the team shows that the system is robust to noise on its location labels. This robustness means the system could be deployed in a continued-learning setting, in which some user agents report their estimated (noisy) locations over the wireless communication network, so that the model can be fielded without extensive data collection prior to release.
ContributorsChang, Roger (Co-author) / Kann, Trevor (Co-author) / Alkhateeb, Ahmed (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
131537-Thumbnail Image.png
Description
At present, the vast majority of human subjects with neurological disease are still diagnosed through in-person assessments and qualitative analysis of patient data. In this paper, we propose to use Topological Data Analysis (TDA) together with machine learning tools to automate the process of Parkinson’s disease classification and severity assessment. An automated, stable, and accurate method to evaluate Parkinson’s would be significant in streamlining diagnoses of patients and providing families more time for corrective measures. We propose a methodology which incorporates TDA into analyzing Parkinson’s disease postural shifts data through the representation of persistence images. Studying the topology of a system has proven to be invariant to small changes in data and has been shown to perform well in discrimination tasks. The contributions of the paper are twofold. We propose a method to 1) classify healthy patients from those afflicted by disease and 2) diagnose the severity of disease. We explore the use of the proposed method in an application involving a Parkinson’s disease dataset comprised of healthy-elderly, healthy-young and Parkinson’s disease patients.
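The persistence-image pipeline sketched in the abstract can be illustrated in a few lines: rasterize (birth, persistence) pairs into a stable, fixed-length vector, then feed a standard classifier. The synthetic diagrams and the class difference below are invented for illustration; the thesis computes real persistence diagrams from postural-sway data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def persistence_image(pairs, grid=10, sigma=0.1, extent=1.0):
    """Persistence-weighted sum of Gaussians sampled on a grid, flattened."""
    xs = np.linspace(0.0, extent, grid)
    gx, gy = np.meshgrid(xs, xs)
    img = np.zeros((grid, grid))
    for b, p in pairs:
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img.ravel()

def fake_diagram(diseased):
    # Invented contrast: "diseased" diagrams have longer-lived features.
    n = rng.integers(5, 15)
    births = rng.uniform(0.0, 0.5, n)
    pers = rng.exponential(0.3 if diseased else 0.1, n)
    return np.column_stack([births, pers])

labels = np.array([0] * 100 + [1] * 100)
X = np.array([persistence_image(fake_diagram(y)) for y in labels])
clf = LogisticRegression(max_iter=1000).fit(X[::2], labels[::2])
acc = clf.score(X[1::2], labels[1::2])
```

The key property being exercised is that small perturbations of the diagram move the image only slightly, which is what makes the representation suitable for downstream discrimination.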
ContributorsRahman, Farhan Nadir (Co-author) / Nawar, Afra (Co-author) / Turaga, Pavan (Thesis director) / Krishnamurthi, Narayanan (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
133880-Thumbnail Image.png
Description
In this project, the use of deep neural networks for selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies, which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent effectively utilize the tree of dependencies and its environmental surroundings to judge which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network combined with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (standard deviation 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little prior information.
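The recursive sub-goal structure can be made concrete with a toy crafting world. This sketch uses tabular Q-learning rather than the deep network the thesis employs, and the recipes, rewards, and hyperparameters are all invented for illustration.

```python
import random
random.seed(0)

# Toy recursive dependency: "tool" needs "plank" and "stone";
# "plank" in turn needs "wood".
RECIPES = {"plank": {"wood"}, "tool": {"plank", "stone"}}
ACTIONS = ["gather wood", "gather stone", "craft plank", "craft tool"]

def step(inv, action):
    inv = set(inv)
    verb, item = action.split()
    if verb == "gather":
        inv.add(item)
    elif RECIPES[item] <= inv:        # craft only if ingredients are present
        inv -= RECIPES[item]
        inv.add(item)
    done = "tool" in inv
    return frozenset(inv), (10.0 if done else -1.0), done

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(2000):                 # epsilon-greedy tabular Q-learning
    s, done, steps = frozenset(), False, 0
    while not done and steps < 20:
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q(s, x))
        s2, r, done = step(s, a)
        target = r + 0.9 * max(q(s2, x) for x in ACTIONS) * (not done)
        Q[(s, a)] = q(s, a) + 0.5 * (target - q(s, a))
        s, steps = s2, steps + 1

s, done, plan = frozenset(), False, []  # greedy rollout of the learned policy
while not done and len(plan) < 10:
    a = max(ACTIONS, key=lambda x: q(s, x))
    s, _, done = step(s, a)
    plan.append(a)
```

The per-step cost of -1 is what pushes the learned policy toward the shortest crafting sequence (gather wood, craft plank, gather stone, craft tool, in some valid order).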
ContributorsKoleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
133901-Thumbnail Image.png
Description
This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing-price classification example, while simultaneously considering its impact from a data-management perspective at both the software and hardware level. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components, which are then analyzed for functionality by relating back to their biological counterparts. The role of a neuron is then described in the context of a neural network, with equal emphasis placed on how it is trained individually and as part of an entire network. Using supervised learning, the neural network is trained on three main factors for housing-price classification: total number of rooms, bathrooms, and square footage. Once trained on most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs matches the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, making the profiler data collected during the program's execution more precise. The program is designed to break each stage of the neuron's training process into distinct functions. In addition to this functional decomposition, the struct data type is used as the underlying data structure, both to represent the neuron and to hold its training and test data. Once the neuron is fully trained, its test results are graphed to visually depict how well it learned from the sample training set. Finally, the profiler data is analyzed to describe how the program operated, from a data-management perspective, at the software and hardware level.
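The thesis implements the neuron in C; the idea itself fits in a compact Python sketch: one sigmoid neuron on the three housing features, trained by gradient descent and checked on held-out data. The synthetic data, weights, and "expensive" threshold are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Features: [rooms, bathrooms, square footage]; label 1 = "expensive".
X = np.column_stack([rng.integers(1, 8, 500).astype(float),
                     rng.integers(1, 4, 500).astype(float),
                     rng.uniform(500, 4000, 500)])
y = (X @ np.array([5.0, 10.0, 0.05]) + rng.normal(0, 5, 500) > 120).astype(float)

Xn = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize each feature
w, b, lr = np.zeros(3), 0.0, 0.5

for _ in range(1000):                        # full-batch gradient descent
    z = 1.0 / (1.0 + np.exp(-(Xn[:400] @ w + b)))   # sigmoid activation
    grad = z - y[:400]                               # log-loss gradient
    w -= lr * Xn[:400].T @ grad / 400
    b -= lr * grad.mean()

# Test on the remaining fifth of the data set.
pred = 1.0 / (1.0 + np.exp(-(Xn[400:] @ w + b))) > 0.5
acc = float((pred == (y[400:] > 0.5)).mean())
```

The C version described in the abstract follows the same structure, with each training stage (activation, gradient, update) in its own function and the neuron held in a struct.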
ContributorsRichards, Nicholas Giovanni (Author) / Miller, Phillip (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
135426-Thumbnail Image.png
Description
Company X is one of the world's largest manufacturers of semiconductors. The company relies on various suppliers in the U.S. and around the globe for its manufacturing process. The financial health of these suppliers is vital to the continuation of Company X's business without material interruption, so it is in Company X's interest to monitor its suppliers' financial performance. Company X has a supplier financial health model currently in use. Having been developed prior to watershed events like the Great Recession, the current model may not reflect the significant changes in the economic environment those events produced. Company X wants to know if there is a more accurate model for evaluating supplier health that better indicates business risk. The scope of this project is limited to a sample of 24 suppliers, representative of Company X's supplier base, that are public companies. While Company X's suppliers include both private and public companies, using exclusively public companies ensures that we have sufficient and appropriate data for the necessary analysis. The goal of this project is to discover whether there is a more accurate model for evaluating the financial health of publicly traded suppliers that better indicates business risk. Analyzing this problem requires a comprehensive understanding of the various financial health models available and their components; the team will study both best practices and the academic literature. This understanding will allow us to customize a model incorporating metrics that allow greater accuracy in evaluating supplier financial health, in accordance with Company X's values.
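One classical model of the kind such a study would examine is Altman's (1968) Z-score for publicly traded manufacturers. The formula is standard; the supplier figures below are placeholders for a hypothetical company, not data from the project.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Altman's (1968) Z-score for public manufacturing firms.
    Z > 2.99 ~ 'safe zone'; Z < 1.81 ~ 'distress'; in between is grey."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / total_assets)

# Placeholder figures (in $M) for a hypothetical supplier:
z = altman_z(working_capital=150, retained_earnings=300, ebit=120,
             market_value_equity=900, sales=1100, total_assets=1000,
             total_liabilities=400)
```

A customized model of the kind the abstract describes would tune both the ratio set and the weights against the supplier base being monitored.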
ContributorsLi, Tong (Co-author) / Gonzalez, Alexandra (Co-author) / Park, Zoon Beom (Co-author) / Vogelsang, Meridith (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
135671-Thumbnail Image.png
Description
Financial statements are one of the most important, if not the most important, documents for investors. These statements are prepared quarterly and yearly by a company's accounting department and are then audited in detail by a large external accounting firm. Investors use these documents to determine the value of the company, trusting that the company was truthful in its statements and that the auditing firm correctly audited the financial statements for any mistakes in the books and balances. Mistakes on a company's financial statements can be costly, but financial fraud on the statements can be outright disastrous. Penalties for accounting fraud can include lifetime prison sentences for individuals as well as company fines of billions of dollars. As students in the accounting major, it is our responsibility to ensure that financial statements are accurate and truthful in order to protect ourselves, other stakeholders, and the companies we work for. This ethics game takes the stories of Enron, WorldCom, and Lehman Brothers and uses them to help students identify financial fraud and how it can be prevented, as well as the consequences of unethical decisions in financial reporting. The Enron scandal involved CEO Kenneth Lay and his successor Jeffrey Skilling hiding losses in their financial statements with the help of their auditing firm, Arthur Andersen. Enron collapsed in late 2001; Skilling was sentenced to 24 years in prison, while Lay was convicted of fraud and conspiracy but died before sentencing. In the WorldCom scandal, CEO Bernard "Bernie" Ebbers booked line costs as capital expenses (overstating WorldCom's assets) and created fraudulent accounts to inflate revenue and profit. Ebbers was sentenced to 25 years in prison and lost his title as WorldCom's Chief Executive Officer. Lehman Brothers took advantage of an accounting maneuver known as Repo 105, which let the firm temporarily move roughly $50 billion in liabilities off its balance sheet. No one at Lehman Brothers was sentenced to jail, since the transactions were technically considered legal, but Lehman became the largest investment bank to fail and the only large financial institution that was not bailed out by the U.S. government.
ContributorsPanikkar, Manoj Madhuraj (Author) / Samuelson, Melissa (Thesis director) / Ahmad, Altaf (Committee member) / Department of Information Systems (Contributor) / School of Accountancy (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
136616-Thumbnail Image.png
Description
The author is an accounting major headed into the public accounting industry. As a tax intern during his senior year, he worked in the thick of "busy season," when tax returns are due and the workload is heavy. The author tired of the long hours and of the continual talk among his accounting friends about how working Saturdays and long weeknights was simply accepted. Best-value principles from Dr. Dean Kashiwagi's Information Measurement Theory were applied to examine how to maximize efficiency in public accounting and reduce the workload. After reviewing how Information Measurement Theory applies to public accounting, the author identified three possible ways to improve the working conditions of public accountants. First, to decrease the workload during busy season, tax organizers should be sent out earlier, with staff assigned to oversee this information gathering. Second, to better prepare new hires to become partners, the career path should be outlined on day one with a career guide. Finally, to onboard new hires more successfully given the steep learning curve in public accounting, firms should use buddy systems and encourage organic mentoring.
ContributorsBohmke, Scott (Author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / WPC Graduate Programs (Contributor) / School of Accountancy (Contributor)
Created2015-05
136475-Thumbnail Image.png
Description
Epilepsy affects numerous people around the world and is characterized by recurring seizures, prompting the ability to predict them so precautionary measures may be employed. One promising algorithm extracts spatiotemporal correlation based features from intracranial electroencephalography signals for use with support vector machines. The robustness of this methodology is tested through a sensitivity analysis. Doing so also provides insight about how to construct more effective feature vectors.
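The feature pipeline the abstract names (spatiotemporal correlation features fed to a support vector machine) can be sketched on synthetic signals. The "heightened synchrony before a seizure" contrast below is an invented stand-in for real intracranial EEG, and the channel count and window length are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def corr_features(window):
    """Upper-triangular entries of the inter-channel correlation matrix."""
    c = np.corrcoef(window)
    return c[np.triu_indices_from(c, k=1)]

def make_window(preictal, channels=6, samples=200):
    # Invented model: pre-seizure windows share more of a common source,
    # i.e. higher spatial synchrony. Real iEEG would replace this.
    common = rng.standard_normal((1, samples))
    mix = 0.8 if preictal else 0.2
    return mix * common + (1 - mix) * rng.standard_normal((channels, samples))

labels = np.array([0] * 150 + [1] * 150)
X = np.array([corr_features(make_window(bool(y))) for y in labels])
clf = SVC(kernel="rbf").fit(X[::2], labels[::2])
acc = clf.score(X[1::2], labels[1::2])
```

A sensitivity analysis of the kind the thesis performs would perturb choices like the window length, channel subset, and kernel, and observe how the classifier's performance responds.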
ContributorsMa, Owen (Author) / Bliss, Daniel (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created2015-05
136516-Thumbnail Image.png
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, detail different motivations for bots, describe previous work in bot detection and observation, and then perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a largely untapped focus in bot detection: the diffusion of extremist ideas through bots.
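The abstract does not list the five heuristics it measured, so the ones below are hypothetical examples of the genre: simple per-account rules combined into a score. Every feature name and threshold here is an invented assumption.

```python
# Hypothetical account-level heuristics of the kind bot-detection work uses;
# real features and thresholds would be fit to labeled (honeypot) data.
HEURISTICS = {
    "tweets_per_day":   lambda a: a["tweets"] / max(a["account_age_days"], 1) > 50,
    "follower_ratio":   lambda a: a["following"] > 20 * max(a["followers"], 1),
    "duplicate_tweets": lambda a: a["duplicate_fraction"] > 0.5,
    "default_profile":  lambda a: a["default_profile_image"],
    "link_heavy":       lambda a: a["link_fraction"] > 0.8,
}

def bot_score(account):
    """Fraction of heuristics an account trips (1.0 = trips all of them)."""
    return sum(h(account) for h in HEURISTICS.values()) / len(HEURISTICS)

suspect = {"tweets": 40000, "account_age_days": 90, "following": 5000,
           "followers": 12, "duplicate_fraction": 0.7,
           "default_profile_image": True, "link_fraction": 0.9}
benign = {"tweets": 500, "account_age_days": 1000, "following": 100,
          "followers": 150, "duplicate_fraction": 0.05,
          "default_profile_image": False, "link_fraction": 0.1}
```

Measuring each heuristic's effectiveness, as the paper does, amounts to checking how well each rule alone separates the labeled bot and non-bot accounts.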
ContributorsKarlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
135818-Thumbnail Image.png
Description
In A Comparative Analysis of Indoor and Greenhouse Cannabis Cultivation Systems, the two most common systems for commercial cannabis cultivation are compared using an operational and capital expenditure model combined with a collection of relevant industry sources to ascertain conclusions about the two systems' relative competitiveness. The cannabis industry is one of the fastest growing nascent industries in the United States, and, as it evolves into a mature market, it will require more sophisticated considerations of resource deployment in order to maximize efficiency and maintain competitive advantage. Through drawing on leading assumptions by industry experts, we constructed a model of each system to demonstrate the dynamics of typical capital deployment and cost flow in each system. The systems are remarkably similar in many respects, with notable reductions in construction costs, electrical costs, and debt servicing for greenhouses. Although the differences are somewhat particular, they make up a large portion of the total costs and capital expenditures, causing a marked separation between the two systems in their attractiveness to operators. Besides financial efficiency, we examined quality control, security, and historical norms as relevant considerations for cannabis decision makers, using industry sources to reach conclusions about the validity of each of these concerns as a reason for resistance to implementation of greenhouse systems. In our opinion, these points of contention will become less pertinent with the technological and legislative changes surrounding market maturation. When taking into account the total mix of information, we conclude that the greenhouse system is positioned to become the preeminent method of production for future commercial cannabis cultivators.
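The operational and capital expenditure comparison can be sketched as a simple annualized-cost model. Every number below is a placeholder invented for illustration, not a figure from the thesis; the point is only the shape of the comparison (higher construction and electrical costs for indoor, similar other operating costs).

```python
# Annualized cost of a cultivation system: straight-line depreciation of
# capital, interest on the capital tied up, electricity, and other opex.
def annual_cost(capex, capex_life_years, electricity_kwh, price_per_kwh,
                other_opex, interest_rate):
    depreciation = capex / capex_life_years
    debt_service = capex * interest_rate
    return depreciation + debt_service + electricity_kwh * price_per_kwh + other_opex

indoor = annual_cost(capex=4_000_000, capex_life_years=10,
                     electricity_kwh=2_500_000, price_per_kwh=0.12,
                     other_opex=800_000, interest_rate=0.08)
greenhouse = annual_cost(capex=2_500_000, capex_life_years=10,
                         electricity_kwh=1_000_000, price_per_kwh=0.12,
                         other_opex=800_000, interest_rate=0.08)
```

Because the capex and electricity lines are both large shares of total cost, even modest percentage reductions in them separate the two systems materially, which mirrors the report's conclusion.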
ContributorsShouse, Corbin (Co-author) / Nichols, Nathaniel (Co-author) / Swenson, Dan (Thesis director) / Cassidy, Nancy (Committee member) / Feltham, Joe (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05