Matching Items (10)
Description
The purpose of our research was to develop recommendations and strategies for Company A's data center group in the context of the server CPU chip industry. We used data collected from the International Data Corporation (IDC) that was provided by our team coaches, as well as data that is accessible on the internet. As the server CPU industry expands and transitions to cloud computing, Company A's Data Center Group will need to expand its server CPU chip product mix to meet the new demands of the cloud industry and to maintain high market share. Company A boasts leading performance with its x86 server chips and 95% market segment share. The cloud industry is dominated by seven companies that Company A calls "The Super 7": Amazon, Google, Microsoft, Facebook, Alibaba, Tencent, and Baidu. In the long run, the growing market share of the Super 7 could give them substantial buying power over Company A, which could lead to discounts and margin compression for Company A's main growth engine. Additionally, the substantial growth of the Super 7 could fuel the development of their own design teams and a push to make their own server chips internally, which would be detrimental to Company A's data center revenue. We first researched the server industry and key terminology relevant to our project, narrowing our scope to focus primarily on the cloud computing segment of the server industry. We then researched what Company A has already been doing in the context of cloud computing and how it is currently addressing the problem. Next, using our market analysis, we identified key areas we think Company A's Data Center Group should focus on. Using the information available to us, we developed strategies and recommendations that we believe will help Company A's Data Center Group position itself well in an extremely fast-growing cloud computing industry.
Contributors: Jurgenson, Alex (Co-author) / Nguyen, Duy (Co-author) / Kolder, Sean (Co-author) / Wang, Chenxi (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Department of Management (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In an effort to gauge on-campus residents' satisfaction with services provided by CenturyLink and the University Technology Office, as well as to understand residents' technology usage habits, the Performance Based Studies Research Group at ASU conducted a survey to collect the data needed to initiate improvements. Unlike in previous years, the 2015 edition of the survey was distributed more efficiently by engaging University Housing staff members (those who work closest with the residents). The result was a 288% increase in responses from the previous year, totaling 2,352 respondents, and a 167% increase in the number of residential halls surveyed, totaling 24. As a primary concern, on a scale of zero to five, the average Internet satisfaction rating was 2.42; in the comments section, residents reported issues with the reliability and speed of the ASU networks. Residents were also dissatisfied with the television services, giving an average satisfaction rating of 2.91, and the vast majority of comments regarding television services demanded that the ESPN channels be provided. In addition to the metrics on resident satisfaction, it was found that the majority of on-campus residents do not use the hard-wired ports. Based on the information gathered from this survey, it is recommended that the University Technology Office: 1) focus efforts on upgrading, expanding, and improving the existing ASU networks, in particular their reliability and speed; 2) invest in a broader channel line-up that, at minimum, includes the ESPN channels; and 3) start an awareness campaign to educate residents on the use of hard-wired ports, with the goal of increasing hard-wired port usage. As a corollary, the information gathered from the survey makes it possible to begin building technology usage profiles for each building, and even for each residential college and academic unit, to better understand the clientele and adapt services as necessary.
Contributors: Mcculloch, John Patrick (Author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / School of Earth and Space Exploration (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
Accurate pose initialization and pose estimation are crucial requirements in on-orbit space assembly and various other autonomous on-orbit tasks. However, pose initialization and pose estimation are much more difficult to perform accurately and consistently in space, primarily because of both the variable lighting conditions present in space and the power requirements mandated by space-flyable hardware. This thesis investigates leveraging a deep learning approach for monocular one-shot pose initialization and pose estimation. A convolutional neural network was used to estimate the 6D pose of an assembly truss object. This network was trained on synthetic imagery generated from a simulation testbed. Furthermore, techniques to quantify the uncertainty of the deep learning model were investigated and applied to the task of in-space pose estimation and pose initialization, and the feasibility of this approach on low-power computational platforms was also tested. The results demonstrate that accurate pose initialization and pose estimation can be conducted using a convolutional neural network. In addition, the results show that model uncertainty can be obtained from the network. Lastly, the use of deep learning for pose initialization and pose estimation, together with uncertainty quantification, was demonstrated to be feasible on low-power compute platforms.
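As a rough illustration of how a convolutional network can regress a 6D pose and also yield a measure of model uncertainty, the PyTorch sketch below uses Monte Carlo dropout. The thesis does not specify its architecture or uncertainty technique; the layer sizes, dropout rate, input resolution, and sample count here are illustrative assumptions only.

```python
# Illustrative sketch only: not the thesis architecture or uncertainty method.
# Monte Carlo dropout is one common way to approximate model uncertainty for a
# CNN regressing a 6D pose (3 translation + 3 rotation parameters).
import torch
import torch.nn as nn

class PoseCNN(nn.Module):
    """Toy monocular pose regressor: image -> 6D pose (hypothetical layer sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.2),           # kept stochastic at test time for MC dropout
            nn.Linear(32, 64), nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(64, 6),            # 3 translation + 3 rotation parameters
        )

    def forward(self, x):
        return self.head(self.features(x))

def mc_dropout_pose(model, image, n_samples=30):
    """Repeated stochastic forward passes; return mean pose and per-output variance."""
    model.train()                        # keep dropout layers active
    with torch.no_grad():
        preds = torch.stack([model(image) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)

model = PoseCNN()
image = torch.rand(1, 3, 128, 128)       # stand-in for a synthetic truss render
pose, uncertainty = mc_dropout_pose(model, image)
```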
Contributors: Kailas, Siva Maneparambil (Author) / Ben Amor, Heni (Thesis director) / Detry, Renaud (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
In shotgun proteomics, liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) is used to identify and quantify peptides and proteins. LC-MS/MS produces mass spectra, which must be searched by one or more engines that employ algorithms to match experimental spectra to theoretical spectra derived from a reference database. These engines identify and characterize proteins and their component peptides. By training a convolutional neural network on a dataset of over 6 million MS/MS spectra derived from human proteins, we aim to create a tool that can quickly and effectively identify spectra as peptides prior to database searching. This can significantly reduce the search space, and thus the run time, for database searches, thereby accelerating LC-MS/MS-based proteomics data acquisition. Additionally, by training neural networks on labels derived from the search results of three different database search engines, we aim to examine and compare which features are best identified by individual search engines, a neural network, or a combination of these.
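As a hedged sketch of how a convolutional network might screen spectra before database searching (this is not the thesis's model; the m/z binning scheme, bin count, and layer sizes below are assumptions), one plausible setup bins each peak list into a fixed-length vector and feeds it to a small 1-D CNN that outputs a single "peptide vs. non-peptide" logit:

```python
# Hedged sketch only: bin an MS/MS peak list into a fixed-length intensity
# vector, then classify it with a small 1-D CNN prior to database searching.
# Bin width, network sizes, and the example peaks are illustrative guesses.
import numpy as np
import torch
import torch.nn as nn

N_BINS = 2000   # hypothetical m/z binning of each spectrum

def bin_spectrum(mz, intensity, max_mz=2000.0, n_bins=N_BINS):
    """Collapse a peak list into a fixed-length, max-normalized intensity vector."""
    binned = np.zeros(n_bins, dtype=np.float32)
    idx = np.clip((np.asarray(mz) / max_mz * n_bins).astype(int), 0, n_bins - 1)
    np.add.at(binned, idx, intensity)
    return binned / (binned.max() + 1e-8)

classifier = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * (N_BINS // 16), 1),      # single logit: peptide vs. not
)

spectrum = bin_spectrum(mz=[175.1, 402.2, 958.5], intensity=[1.0, 0.4, 0.7])
x = torch.from_numpy(spectrum).reshape(1, 1, -1)
prob_peptide = torch.sigmoid(classifier(x))   # screen before database search
```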
Contributors: Whyte, Cameron Stafford (Author) / Jayasuriya, Suren (Thesis director) / Speyer, Gil (Committee member) / Pirrotte, Patrick (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Convolutional neural networks boast a myriad of applications in artificial intelligence, but one of the most common uses for such networks is feature extraction from images. The ability of convolutional layers to extract and combine data features for the purpose of image analysis can be leveraged for pose estimation on an object: detecting the presence and attitude of corners and edges allows a convolutional neural network to identify how an object is positioned. This capability can help a robot grasp an object correctly, or track an object more accurately in 3D space. However, the effectiveness of pose estimation may change based on properties of the object; the pose of a complex object, with complexity determined by internal occlusions, similar faces, and so on, can be difficult to resolve.
This thesis is part of a collaboration between ASU’s Interactive Robotics Laboratory and NASA’s Jet Propulsion Laboratory. In this thesis, the training pipeline from Sharma’s paper “Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using Convolutional Neural Networks” was modified to perform pose estimation on a complex object: specifically, a segment of a hollow truss. After initial attempts to replicate the architecture used in the paper and to train solely on synthetic images, a combination of synthetic dataset generation and transfer learning on an ImageNet-pretrained AlexNet model was implemented to mitigate the difficulty of gathering large amounts of real-world data. Experimentation with pose estimation accuracy and model hyperparameters resulted in gradual test accuracy improvement, and future work is suggested to improve pose estimation for complex objects with some form of rotational symmetry.
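A minimal transfer-learning sketch along these lines is shown below: load an ImageNet-pretrained AlexNet from torchvision, freeze the convolutional features, and swap the final layer for a pose head. The number of discrete pose bins, the optimizer settings, and the single synthetic training step are assumptions for illustration, not the thesis's configuration.

```python
# Transfer-learning sketch under stated assumptions, not the thesis pipeline:
# freeze AlexNet's pretrained convolutional features and retrain only a new
# final layer that classifies the image into a discretized pose bin.
import torch
import torch.nn as nn
from torchvision import models

N_POSE_BINS = 24            # hypothetical discretization of the pose space

# Downloads ImageNet weights on first use.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

for p in model.features.parameters():    # keep pretrained conv features fixed
    p.requires_grad = False

model.classifier[6] = nn.Linear(4096, N_POSE_BINS)   # new pose head

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a synthetic batch of truss renders.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, N_POSE_BINS, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```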
Contributors: Dsouza, Susanna Roshini (Author) / Ben Amor, Heni (Thesis director) / Maneparambil, Kailasnath (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The study tested the parameterized neural ordinary differential equation (PNODE) framework with a physical system exhibiting only advective phenomena. Existing deep learning methods have difficulty learning multiple dynamic, continuous-time processes. PNODE encodes the input data and an initial parameter into a set of reduced states within the latent space. The reduced states are then fitted to a system of ordinary differential equations, and the outputs from the model are decoded back to the data space for a desired input parameter and time. Applying the PNODE formalism to different types of physical systems is important for testing the method's robustness. The linear advection data were generated with a high-fidelity numerical tool for multiple velocity parameters. The PNODE code was modified for the advection dataset, whose temporal domain and spatial discretization differed from the original study configuration. The L2 norm between the reconstruction and the surrogate model, along with reconstruction plots, was used to analyze the PNODE model's performance. The model reconstructions presented mixed results. For a temporal domain of 20 time units, in which multiple advection cycles were completed for each advection speed, the reconstructions did not agree with the surrogate model. For a reduced temporal domain of 5 time units, the reconstructions and surrogate models were in close agreement; near the end of the temporal domain, deviations occurred, likely resulting from the accumulation of numerical errors. Note that over the 5 time units, smaller advection speed parameters were unable to complete a cycle. The behavior over the 20 time units highlighted potential issues with imbalanced datasets and repeated features, while the 5-time-unit model illustrates PNODE's adaptability to this class of problems when the dataset is better posed.
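The encode, integrate, decode structure described above can be sketched as follows using the torchdiffeq library; this is not the PNODE code used in the study, and the latent dimension, network sizes, and the way the advection-speed parameter enters the dynamics are illustrative assumptions.

```python
# Schematic encode -> latent ODE -> decode sketch, under stated assumptions;
# not the PNODE implementation from the study.
import torch
import torch.nn as nn
from torchdiffeq import odeint   # pip install torchdiffeq

LATENT_DIM, STATE_DIM = 8, 256   # hypothetical reduced and full state sizes

class LatentDynamics(nn.Module):
    """ODE right-hand side in latent space, conditioned on the advection speed."""
    def __init__(self, speed):
        super().__init__()
        self.speed = speed
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 64), nn.Tanh(),
                                 nn.Linear(64, LATENT_DIM))

    def forward(self, t, z):
        c = self.speed.expand(z.shape[0], 1)        # parameter enters the dynamics
        return self.net(torch.cat([z, c], dim=-1))

encoder = nn.Linear(STATE_DIM, LATENT_DIM)          # data space -> reduced states
decoder = nn.Linear(LATENT_DIM, STATE_DIM)          # reduced states -> data space

u0 = torch.rand(1, STATE_DIM)                       # initial advected field snapshot
speed = torch.tensor([[0.5]])                       # advection speed parameter
times = torch.linspace(0.0, 5.0, 50)                # e.g. a 5-time-unit domain

z0 = encoder(u0)
z_traj = odeint(LatentDynamics(speed), z0, times)   # integrate the latent ODE system
u_traj = decoder(z_traj)                            # decode back to the data space
```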
Contributors: Reithal, Richard Robert (Author) / Kim, Jeonglae (Thesis director) / Lee, Kookjin (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-12
Description

Breast cancer is one of the most common types of cancer worldwide. Early detection and diagnosis are crucial for improving the chances of successful treatment and survival. In this thesis, many different machine learning algorithms were evaluated and compared to predict breast cancer malignancy from diagnostic features extracted from digitized images of breast tissue samples, called fine-needle aspirates. Breast cancer diagnosis typically involves a combination of mammography, ultrasound, and biopsy. However, machine learning algorithms can assist in the detection and diagnosis of breast cancer by analyzing large amounts of data and identifying patterns that may not be discernible to the human eye. By using these algorithms, healthcare professionals can potentially detect breast cancer at an earlier stage, leading to more effective treatment and better patient outcomes. The results showed that the gradient boosting classifier performed the best, achieving an accuracy of 96% on the test set. This indicates that this algorithm can be a useful tool for healthcare professionals in the early detection and diagnosis of breast cancer, potentially leading to improved patient outcomes.
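The features described match the Wisconsin Diagnostic Breast Cancer dataset bundled with scikit-learn, so a result in this spirit can be sketched as below; the exact preprocessing, train/test split, and hyperparameters used in the thesis are not reproduced here and are assumptions.

```python
# Minimal sketch, assuming the Wisconsin Diagnostic Breast Cancer data
# (fine-needle aspirate features available in scikit-learn); default
# GradientBoostingClassifier settings, not the thesis's configuration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0)   # default hyperparameters
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```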

Contributors: Mallya, Aatmik (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
Hydrologic modeling in snowfed karst watersheds is important for the many communities that rely on their water for municipal and agricultural use, but the complexities of karst hydrology have made this task historically difficult. Here, two Long Short-Term Memory (LSTM) models are compared to investigate this problem from a deep-learning perspective within the context of the Logan River Canyon watershed, which supplies water to Logan City, UT. One model is spatially lumped and the other spatially distributed, the latter with the potential to reveal underlying spatial watershed dynamics. Both use snowmelt and rainfall to predict daily streamflow downstream. I find that distributed LSTMs consistently outperform lumped LSTMs in this task. Additionally, I find that a spatial sensitivity analysis of distributed LSTMs is unpromising for revealing spatial watershed dynamics, though it warrants further investigation.
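A minimal PyTorch sketch of the lumped configuration is given below: a single LSTM maps a sequence of basin-averaged daily snowmelt and rainfall to the streamflow on the final day of the window. The sequence length, hidden size, loss, and synthetic data are assumptions, not the thesis's setup.

```python
# Sketch of a "lumped" LSTM rainfall/snowmelt-runoff model under stated
# assumptions; the distributed variant would feed per-subbasin forcings instead.
import torch
import torch.nn as nn

class LumpedLSTM(nn.Module):
    def __init__(self, n_inputs=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, days, [snowmelt, rainfall])
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # streamflow on the last day of the window

model = LumpedLSTM()
forcings = torch.rand(16, 365, 2)          # one year of daily forcings per sample
observed_q = torch.rand(16, 1)             # observed streamflow targets
loss = nn.MSELoss()(model(forcings), observed_q)
loss.backward()                            # one illustrative training step
```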
Contributors: Shaver, Ryan (Author) / Xu, Tianfang (Thesis director) / Jones, Don (Committee member) / Barrett, The Honors College (Contributor) / School of Earth and Space Exploration (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description
Due to the prevalence of digital communication, its importance for romantic relationship formation and maintenance, and the associations between online behavior and romantic conflict, it is important to investigate conflict enabled by and conducted through digital communication platforms. Additionally, because of the overrepresentation of self-report measures in studies of online relational behavior, it is not known whether current methods of studying in-person conflict apply to digital conflict. The present study thus aimed to examine 1) the efficacy of participant-uploaded screenshots for observing online relationship experiences, and 2) the applicability of the adapted SPAFF coding system (D-SPAFF) to romantic dyadic digital communication. We found acceptable participant compliance, and rich data were acquired using this method. We also found that affective behavior in screenshots was related to concurrent and prospective relationship outcomes similar to those found in the literature. Finally, a few unexpected affective behaviors were related to relationship outcomes. Our study supports a nuanced theoretical framework for the investigation of online relationship interactions. Future research should continue to validate this method and investigate the unique affordances and mechanisms of digital interactions.
Contributors: Trimble, Ava (Author) / Mukarram, Maheeyah (Co-author) / Ha, Thao (Thesis director) / Quiroz, Selena (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description
Historical trends in artificial intelligence, as documented by recent quantitative and qualitative studies, show that the threats reported to and understood by the general public are vastly different from the tech industry's most pressing and vital concerns. The modern AI systems that most people interact with on a daily basis are mostly helpful commercialized products or generative AI, leading to a cultural mindset in which AI is seen as an assistant capable of autonomous tasks. Popular fictional depictions of artificial intelligence clearly demonstrate that those perceptions of threats fall closely in line with the sorts of actions portrayed by AI characters, suggesting that pop media has a significant influence over its audience's understanding of AI technology and its potential ramifications. To mitigate the harm that AI tools can inflict upon the general public, there is an immediate need for technology-specific legislation, incentives and deterrents, and oversight so that artificial intelligence can be regulated and controlled.
Contributors: Crowe, Katlynn (Author) / Martin, Thomas (Thesis director) / Anderson, Lisa (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2024-05