Matching Items (996)

Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependence on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to address maintenance issues before they fully appear or disrupt processes and daily operations. One of the most important parts is being able to predict failures in the system so that they are fixed before they turn into large issues. One area where preventive maintenance is a major part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when the car signals that there is an issue (low oil levels, for example). Although this level of maintenance is sufficient when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Instead of a human looking at a car and diagnosing any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system designed to meet all of these criteria and more. The EPMS comprises a central computer that monitors all major electronic components in an autonomous vehicle through standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data are run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component are compared to pre-set operating conditions, which are chosen to encompass all normal ranges of output. If the sensor data fall outside these margins, the warning and the deviation are recorded and a severity level is calculated. In addition to the component-level models, there is also a vehicle-wide model, which predicts how urgently the vehicle needs maintenance. All of these results are analyzed by a simple heuristic algorithm, and a decision on the vehicle's health status is made and sent to the Fleet Management System. This system allows accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that lets the system determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System oversees inspections, and the system operator sets parameters that decide when to send cars for maintenance. All of the models used for the sensor and component analysis are tailored specifically to the vehicle; the models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used on a variety of vehicle platforms, including autonomous underwater and aerial vehicles.
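The abstract does not include the EPMS implementation itself; the sketch below is a hypothetical illustration of the component-level check it describes, in which a sensor reading is compared against pre-set operating margins and any deviation is recorded together with a calculated severity. The component names, margin values, and severity formula are assumptions, not the actual EPMS code.

```python
# Hypothetical sketch of the component-level check described above: compare a
# sensor reading against pre-set operating margins and, if it falls outside
# them, record the deviation and a severity level. Names, margins, and the
# severity formula are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class OperatingMargin:
    low: float
    high: float

# Pre-set operating conditions chosen to cover the normal output range.
MARGINS = {
    "battery_voltage_V": OperatingMargin(11.5, 14.8),
    "controller_temp_C": OperatingMargin(-10.0, 85.0),
}

def check_component(name: str, reading: float):
    """Return None if the reading is in range, else (deviation, severity)."""
    margin = MARGINS[name]
    if margin.low <= reading <= margin.high:
        return None
    # Deviation: distance outside the nearest margin boundary.
    deviation = (margin.low - reading) if reading < margin.low else (reading - margin.high)
    # Severity: deviation relative to the width of the normal range (assumed formula).
    severity = deviation / (margin.high - margin.low)
    return deviation, severity

print(check_component("controller_temp_C", 103.0))  # e.g. (18.0, 0.189...)
```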
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Glioblastoma multiforme (GBM) is a malignant, aggressive, and infiltrative cancer of the central nervous system with a median survival of 14.6 months with standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors. GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will result in a recurrent tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. In particular, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability was found to only marginally affect tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain stochastic parameterizations improved volume accuracy, they failed to significantly improve simulation accuracy overall. Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
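The abstract refers to "an established proliferation-diffusion model" without stating its form; a standard model of this type (not necessarily the exact parameterization used in the thesis) is the reaction-diffusion equation below, where a stochastic parameterization would replace the diffusion coefficient and/or proliferation rate with random draws.

```latex
% A standard proliferation-diffusion (Fisher-KPP type) model of tumor cell
% density c(x,t); the exact form and parameter values used in the thesis are
% not given in the abstract.
\begin{equation}
  \frac{\partial c}{\partial t}
    = \nabla \cdot \left( D \, \nabla c \right)
    + \rho \, c \left( 1 - \frac{c}{K} \right)
\end{equation}
% Here D is the diffusion coefficient (cell motility), \rho the proliferation
% rate, and K the carrying capacity. A stochastic parameterization treats D
% and/or \rho as random variables rather than fixed constants.
```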
Contributors: Anderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Aberrant glycosylation has been shown to be linked to specific cancers, and using this idea, it was proposed that the levels of glycans in the blood could predict stage I adenocarcinoma. To track this glycosylation, glycans were broken down into glycan nodes via methylation analysis. This analysis utilized information from N-, O-, and lipid-linked glycans detected by gas chromatography-mass spectrometry. The resulting glycan node ratios represent the initial quantitative data used in this experiment.
For this experiment, two sets of 50 µl blood plasma samples were provided by NYU Medical School. These samples were analyzed by Dr. Borges's lab to obtain normalized biomarker levels from patients with stage I adenocarcinoma and from control patients matched for age, smoking status, and gender. An ROC curve was constructed under individual and paired conditions, and the AUC was calculated in Wolfram Mathematica 10.2. Methods such as increasing the size of the training set, using hard versus soft margins, and processing biomarkers together and individually were used in order to increase the AUC. Using a soft margin for this particular data set proved to be most useful compared to the initial hard margin, raising the AUC from 0.6013 to 0.6585. Among the individual biomarkers, the 6-Glc/6-Man and 3,6-Gal glycan node ratios performed best, with an AUC of 0.7687, a sensitivity of 0.7684, and a specificity of 0.6051. While this is not enough accuracy to serve as a primary diagnostic tool for stage I adenocarcinoma, the methods examined in this paper should be evaluated further. By comparison, the current clinical standard blood test for prostate cancer has an AUC of only 0.67.
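The abstract contrasts hard and soft classification margins and reports AUCs; the sketch below shows, with synthetic data and assumed parameters, how a soft-margin SVM score and its ROC AUC could be computed. The actual analysis in the thesis was performed in Wolfram Mathematica on patient data; the feature names, C value, and data here are illustrative assumptions.

```python
# Illustrative only: a soft-margin SVM trained on synthetic "glycan node ratio"
# features and scored with ROC AUC. The real analysis used Mathematica 10.2
# and patient samples; everything below is invented for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Two synthetic biomarkers standing in for the 6-Glc/6-Man and 3,6-Gal ratios.
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Smaller C -> softer margin (more tolerance for points inside the margin).
clf = SVC(kernel="linear", C=0.1).fit(X_train, y_train)
scores = clf.decision_function(X_test)
print("ROC AUC:", roc_auc_score(y_test, scores))
```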
Contributors: De Jesus, Celine Spicer (Author) / Taylor, Thomas (Thesis director) / Borges, Chad (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This thesis looks into the current method a particular company uses to value its inventory carrying costs (ICC). By identifying the costs incurred during all stages of production, incorporating industry standards and academic research, and avoiding the shortcomings of the company's current method, we were able to derive a more comprehensive and manageable tool for measuring ICC. Our findings led to concrete recommendations that will provide real value to company managers by improving the accuracy of project finance calculations, supply chain optimization modeling, and numerous other decisions that rely on accurate inventory data inputs.
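The company's actual ICC tool is not described in the abstract; the sketch below shows a generic carrying-cost calculation of the kind such a tool might automate. All rates, components, and values are assumptions for illustration, not company data.

```python
# Generic inventory carrying cost (ICC) calculation, illustrative only.
# The rates and inventory value below are assumptions, not company figures.
average_inventory_value = 2_500_000.00  # average value of inventory on hand ($)

# Annual carrying-cost components, each expressed as a fraction of inventory value.
carrying_rate = {
    "cost_of_capital": 0.08,
    "storage_and_handling": 0.04,
    "insurance_and_taxes": 0.02,
    "obsolescence_and_shrinkage": 0.03,
}

annual_icc = average_inventory_value * sum(carrying_rate.values())
print(f"Estimated annual ICC: ${annual_icc:,.0f}")  # $425,000 at a 17% total rate
```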
Contributors: Dougherty, Mitch (Co-author) / Marshall, Jeffrey (Co-author) / Zieler, Jason (Co-author) / Gilmore, Eric (Co-author) / Hertzel, Michael (Thesis director) / Simonson, Mark (Committee member) / Yarn, James (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2014-05
Description
The detection and characterization of transients in signals is important in many wide-ranging applications, from computer vision to audio processing. Edge detection on images is typically realized using small, local, discrete convolution kernels, but this is not possible when samples are measured directly in the frequency domain. The concentration factor edge detection method was therefore developed to realize an edge detector directly from spectral data. This thesis explores the possibilities of detecting edges from the phase of the spectral data, that is, without the magnitude of the sampled spectral data. Prior work has demonstrated that the spectral phase contains particularly important information about underlying features in a signal. Furthermore, the concentration factor method yields some insight into the detection of edges in spectral phase data. An iterative design approach was taken to realize an edge detector using only the spectral phase data, also allowing for the design of an edge detector when phase data are intermittent or corrupted. Problem formulations showing the power of the design approach are given throughout. A post-processing scheme relying on the difference of multiple edge approximations yields a strong edge detector, which is shown to be resilient under noisy, intermittent phase data. Lastly, a thresholding technique is applied to give an explicit enhanced edge detector ready for use. Examples throughout are demonstrated on both signals and images.
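The thesis builds on the concentration factor method; the sketch below illustrates the basic structure of that method, a conjugate partial Fourier sum weighted by a concentration factor so that it concentrates at jump discontinuities, applied to a simple step function. The particular factor, grid, and test signal are assumptions for illustration, not the phase-based detectors designed in the thesis.

```python
# Illustrative sketch of concentration-factor edge detection from Fourier data.
# The jump (edge) approximation is the conjugate partial sum
#   S[f](x) = i * sum_{k != 0} sgn(k) * sigma(|k|/N) * fhat(k) * exp(i*k*x),
# which concentrates at the discontinuities of f. The factor sigma used here
# (a simple first-order polynomial factor) is one common choice.
import numpy as np

M = 256
x = 2 * np.pi * np.arange(M) / M            # equispaced grid on [0, 2*pi)
f = np.where(x < np.pi, 1.0, -1.0)          # step function: jumps at x = 0 and x = pi

fhat = np.fft.fft(f) / M                    # Fourier coefficients fhat(k)
k = np.fft.fftfreq(M, d=1.0 / M)            # integer wavenumbers
N = M // 2
sigma = np.pi * np.abs(k) / N               # first-order polynomial concentration factor

weights = 1j * np.sign(k) * sigma * fhat    # one weight per Fourier mode
jump = np.real(weights @ np.exp(1j * np.outer(k, x)))

print("strongest edge response at x =", x[np.argmax(np.abs(jump))])
```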
Contributors: Reynolds, Alexander Bryce (Author) / Gelb, Anne (Thesis director) / Cochran, Douglas (Committee member) / Viswanathan, Adityavikram (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Company X is one of the world's largest manufacturer of semiconductors. The company relies on various suppliers in the U.S. and around the globe for its manufacturing process. The financial health of these suppliers is vital to the continuation of Company X's business without any material interruption. Therefore, it is in Company X's interest to monitor its supplier's financial performance. Company X has a supplier financial health model currently in use. Having been developed prior to watershed events like the Great Recession, the current model may not reflect the significant changes in the economic environment due to these events. Company X wants to know if there is a more accurate model for evaluating supplier health that better indicates business risk. The scope of this project will be limited to a sample of 24 suppliers representative of Company X's supplier base that are public companies. While Company X's suppliers consist of both private and public companies, the used of exclusively public companies ensures that we will have sufficient and appropriate data for the necessary analysis. The goal of this project is to discover if there is a more accurate model for evaluating the financial health of publicly traded suppliers that better indicates business risk. Analyzing this problem will require a comprehensive understanding of various financial health models available and their components. The team will study best practice and academia. This comprehension will allow us to customize a model by incorporating metrics that allows greater accuracy in evaluating supplier financial health in accordance with Company X's values.
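The thesis surveys "various financial health models"; one widely used public benchmark of this kind is the original Altman Z-score, sketched below as an example only. This is not Company X's current or proposed model, and the sample inputs are invented.

```python
# The original (1968) Altman Z-score for publicly traded manufacturers, shown
# only as an example of a standard financial health model; it is not the model
# Company X uses or the one proposed in the thesis. Inputs are invented.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Z above 2.99 is commonly read as the "safe" zone, below 1.81 as distress.
z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_value_equity=800, sales=1_000, total_assets=1_200,
             total_liabilities=500)
print(f"Z-score: {z:.2f}")
```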
Contributors: Li, Tong (Co-author) / Gonzalez, Alexandra (Co-author) / Park, Zoon Beom (Co-author) / Vogelsang, Meridith (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In the aftermath of the 2008 financial crisis, banking regulators have been taking a more active role in pursuing greater financial stability. One area of focus has been Wall Street banks' leveraged lending practices, which include lending to fund leveraged buyouts. In March 2013, the Federal Reserve and the Office of the Comptroller of the Currency issued guidance urging banks to avoid financing leveraged buyouts in most industries that would put total debt on a company of more than six times its earnings before interest, taxes, depreciation, and amortization (EBITDA). Our research, using data on all leveraged buyouts (with EBITDA > $20 million) issued after the guidance, sets out to explain the elements banks consider when exceeding these leverage limitations. Initially, we hypothesized that since deals over 6x leverage carry more debt, they were riskier deals, and that this would carry over to other risk measures such as yield to maturity on the debt and company credit ratings. To analyze this, we obtained a large data set of all LBO deals in the past three years and ran difference-in-means tests on a number of variables, such as deal size, credit rating, and yield to maturity, to determine whether deals over 6x leverage had significantly different risk characteristics than deals under 6x leverage. Contrary to our hypothesis, we found that deals over 6x leverage had significantly less risk, mainly demonstrated by lower average YTMs, than deals under 6x. One possible explanation is that banks, wanting to ensure they are not fined, will only go through with a deal over 6x leverage if other risk metrics such as yield to maturity are well below average.
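The abstract describes difference-in-means tests on variables such as yield to maturity for deals above versus below 6x leverage; the sketch below shows one common way such a test can be run (Welch's t-test), with synthetic data standing in for the actual LBO sample.

```python
# Difference-in-means test of the kind described above, illustrated with
# synthetic data. The real study used actual LBO deal data (EBITDA > $20M,
# issued after the 2013 guidance); the numbers here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Yield to maturity (%) for deals levered above vs. below 6x EBITDA.
ytm_over_6x = rng.normal(loc=6.8, scale=1.0, size=60)
ytm_under_6x = rng.normal(loc=7.5, scale=1.2, size=90)

# Welch's t-test (does not assume equal variances across the two groups).
t_stat, p_value = stats.ttest_ind(ytm_over_6x, ytm_under_6x, equal_var=False)
print(f"mean over 6x: {ytm_over_6x.mean():.2f}%  "
      f"mean under 6x: {ytm_under_6x.mean():.2f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```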
Contributors: King, Adam (Co-author) / Lukemire, Sean (Co-author) / McAleer, Stephen (Co-author) / Simonson, Mark (Thesis director) / Bonadurer, Werner (Committee member) / Department of Finance (Contributor) / Department of Economics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
For our collaborative thesis we explored the US electric utility market and how the Internet of Things (IoT) technology movement could enable an advancement of the existing grid. The objective of this project was to understand the market trends in the utility space and identify where a semiconductor manufacturing company with a focus on IoT technology could penetrate the market using its products. Our research methodology was to conduct industry interviews to identify common trends in the utility and industrial hardware manufacturing industries. From there, we composed various strategies that The Company should explore. These strategies were supported by qualitative reasoning and by forecasted discounted cash flow and net present value analysis. We concluded that The Company should use specific silicon microprocessors and microcontrollers matched to the analytics demand of each of the four devices. Along with this silicon strategy, our group believes there is a strong argument for a data analytics software package, delivered by forming strategic partnerships in this space.
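The strategies above were evaluated with forecasted discounted cash flow and net present value analysis; the sketch below shows the standard NPV calculation underlying such an analysis, with invented cash flows and discount rate rather than the thesis's actual forecasts.

```python
# Standard NPV of a forecasted cash flow stream, of the kind used to compare
# the strategies described above. Cash flows and discount rate are invented.
def npv(rate, cash_flows):
    """cash_flows[0] occurs today (t=0); cash_flows[t] at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Example: $5M upfront investment followed by five years of forecasted inflows.
forecast = [-5_000_000, 1_200_000, 1_500_000, 1_800_000, 2_000_000, 2_200_000]
print(f"NPV at a 10% discount rate: ${npv(0.10, forecast):,.0f}")
```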
Contributors: Llazani, Loris (Co-author) / Ruland, Matthew (Co-author) / Medl, Jordan (Co-author) / Crowe, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / Department of Economics (Contributor) / Department of Finance (Contributor) / Department of Supply Chain Management (Contributor) / Department of Information Systems (Contributor) / Hugh Downs School of Human Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Chebfun is a collection of algorithms and an open-source software system in object-oriented MATLAB that extends familiar, powerful methods of numerical computation from numbers to continuous or piecewise-continuous functions. The success of this strategy is based on the mathematical fact that smooth functions can be represented very efficiently by polynomial interpolation at Chebyshev points or, for periodic functions, by trigonometric interpolation at equispaced points. More recently, the system has been extended to handle bivariate functions and vector fields. These two new classes of objects are called Chebfun2 and Chebfun2v, respectively. We will show that Chebfun2 and Chebfun2v can be used to accurately and efficiently perform various computations on parametric surfaces in two or three dimensions, including path trajectories and mean and Gaussian curvatures. More advanced surface computations, such as mean curvature flows, are also explored. This is also the first work to use the newly implemented trigonometric representation, namely Trigfun, for computations on surfaces.
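The surface computations mentioned above rest on the standard fundamental forms of a parametric surface; for reference, the classical formulas for Gaussian and mean curvature, which such computations evaluate numerically, are given below (standard differential geometry, not a statement of the thesis's specific algorithms).

```latex
% Gaussian curvature K and mean curvature H of a parametric surface r(u,v),
% written in terms of the first (E, F, G) and second (L, M, N) fundamental
% form coefficients.
\begin{align}
  E &= \mathbf{r}_u \cdot \mathbf{r}_u, \quad
  F  = \mathbf{r}_u \cdot \mathbf{r}_v, \quad
  G  = \mathbf{r}_v \cdot \mathbf{r}_v, \\
  L &= \mathbf{r}_{uu} \cdot \mathbf{n}, \quad
  M  = \mathbf{r}_{uv} \cdot \mathbf{n}, \quad
  N  = \mathbf{r}_{vv} \cdot \mathbf{n},
  \qquad
  \mathbf{n} = \frac{\mathbf{r}_u \times \mathbf{r}_v}
                    {\lVert \mathbf{r}_u \times \mathbf{r}_v \rVert}, \\
  K &= \frac{LN - M^2}{EG - F^2}, \qquad
  H  = \frac{EN - 2FM + GL}{2\,(EG - F^2)}.
\end{align}
```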
Contributors: Page-Bottorff, Courtney Michelle (Author) / Platte, Rodrigo (Thesis director) / Kostelich, Eric (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
As the Internet of Things (IoT) market continues to grow, Company X needs to find a way to penetrate the market and establish larger market share. The problem with Company X's current strategy and cost structure lies in the fact that the fastest-growing portion of the IoT market is microcontrollers (MCUs). Because Company X currently focuses on manufacturing microprocessors (MPUs), its current manufacturing strategy is not optimal for entering competitively into the MCU space. Within the MCU space, the companies that compete best do not use such advanced manufacturing processes, because these low-cost products do not demand them. Given that the MCU market is largely untested by Company X and its products would need to be manufactured at increasingly lower costs, the company runs the risk of overproducing and holding obsolete inventory that is either scrapped or sold at or below cost. In order to eliminate that risk, we explore alternative manufacturing strategies specifically for Company X's MCU products, which will allow for a more optimal cost structure and ultimately a more profitable Internet of Things Group (IoTG). The IoT MCU ecosystem does not require the high-powered technology Company X currently manufactures, so Company X loses large margins to its unnecessarily advanced technology. Since cash is king, pursuing a fully external model for MCU design and manufacturing will generate the highest NPV for Company X. It will also increase Company X's market share, which is extremely important given that nearly every technology company in the world is trying to enter the IoT market. It is possible that ten to thirty years down the road Company X could manufacture enough units to keep its products in-house, but this is not feasible in the foreseeable future. For now, Company X should focus on the cost-driven segment of the MCU market by driving its prices down while keeping COGS and R&D costs low under the fully external strategy.
Contributors: Kadi, Bengimen (Co-author) / Peterson, Tyler (Co-author) / Langmack, Haley (Co-author) / Quintana, Vince (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Supply Chain Management (Contributor) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / Department of Marketing (Contributor) / School of Accountancy (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05