Matching Items (119)
Description
In classification applications such as medical disease diagnosis, the cost of one type of error (a false negative) can greatly outweigh that of the other (a false positive), creating the need for asymmetric error control. Because of this, traditional machine learning techniques, even with much improved accuracy, may not be ideal, as they provide no way to keep false negatives below a certain threshold. To address this need, a classification algorithm that provides asymmetric error control is proposed. Its theoretical foundation is the Neyman-Pearson (NP) Lemma, complemented with sample splitting and order statistics to pick a threshold that enforces an upper bound on the number of false negatives. Additionally, this classifier addresses the class imbalance common in medical datasets by using the Hellinger distance as the splitting criterion, eliminating the need for sampling methods, which add complexity and require parameter selection. This approach is used to create a novel tree-based classifier that enables asymmetric error control. Applications such as predicting the severity of cardiac arrhythmia require classification over multiple classes. The NP oracle inequalities for binary classification are not immediately applicable to multiclass NP classification, so this dissertation proposes a multi-step procedure that extends the algorithm to multiple classes. The classifier is used to predict various forms of cardiac disease in both binary and multi-class classification problems, achieving not only comparable accuracy metrics but also full control over the number of false negatives. Moreover, this research allows the classifier's threshold to be picked in a data-adaptive way. This dissertation also shows that the methodology extends to non-medical applications that require classification with asymmetric error control.
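As a concrete illustration of the thresholding idea, the sketch below applies Neyman-Pearson style order-statistic thresholding on top of a generic scikit-learn classifier. It is a minimal approximation of the approach described above, not the dissertation's tree-based algorithm: the random forest, the synthetic data, and the alpha/delta values are all illustrative stand-ins.

```python
# Minimal sketch: train any scorer, then pick the decision threshold from
# held-out scores of the disease class via an order statistic, so that the
# false-negative rate stays <= alpha with probability >= 1 - delta.
import numpy as np
from scipy.stats import binom
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def np_threshold(pos_scores, alpha=0.05, delta=0.05):
    """Largest threshold whose false-negative rate is <= alpha w.p. >= 1 - delta.

    pos_scores: held-out scores (P(disease)) for known disease cases.
    Uses the k-th smallest score, with k the largest rank satisfying
    P(Binomial(n, alpha) <= k - 1) <= delta.
    """
    s = np.sort(pos_scores)
    n = len(s)
    ks = [k for k in range(1, n + 1) if binom.cdf(k - 1, n, alpha) <= delta]
    if not ks:
        raise ValueError("too few held-out positives for this alpha/delta")
    return s[max(ks) - 1]

# Synthetic imbalanced data standing in for a medical dataset.
X, y = make_classification(n_samples=4000, weights=[0.9, 0.1], random_state=0)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
t = np_threshold(clf.predict_proba(X_ho)[y_ho == 1][:, 1])
print(f"predict 'disease' when P(disease) >= {t:.3f}")
```

The key design point is that the threshold comes from held-out data only, which is what makes the high-probability bound on false negatives valid regardless of which base classifier produced the scores.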
Contributors: Bokhari, Wasif (Author) / Bansal, Ajay (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Bahadur, Faisal (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The proliferation of semantic data in the form of RDF (Resource Description Framework) triples demands efficient, scalable, distributed storage along with a highly available and fault-tolerant parallel processing strategy. Existing distributed RDF data management systems leave three open issues that are not well addressed together: querying efficiency; solutions that are optimized for certain query patterns and do not necessarily work well for all types; and pre-processing cost. The rapid growth of RDF data therefore calls for a partitioning strategy over distributed data management systems that improves SPARQL (SPARQL Protocol and RDF Query Language) query performance regardless of query pattern shape, with minimal pre-processing overhead. In this context, the first contribution of this work is a distributed RDF data partitioning schema called 3CStore, which extends the existing VP (Vertical Partitioning) approach by using subsets of triples from the VP tables based on different join correlations. This approach speeds up queries at the cost of additional pre-processing overhead. To address this, a relational partitioning schema called VPExp was developed that splits predicates based on explicit type information of objects. This approach gains significant query performance only for the specific type of query where the object is bound to a value for a particular predicate. To achieve efficient performance on a wide range of query patterns, an improved solution extends the existing Property Table approach to a Subset-Property Table and combines it with the VP approach. Further investigation of distributed RDF processing and querying systems based on typical use cases led to a novel relational partitioning schema called PTP (Property Table Partitioning), which partitions the whole Property Table into one table per unique property to minimize query input size and join operations during query evaluation. Finally, an RDF data management system based on the SPARQL-over-SQL approach, called S3QLRDF, is developed; it generates an optimized query execution plan using statistics of the PTP tables to provide efficient SPARQL query processing on a distributed system.
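To make the vertical partitioning idea concrete, the toy sketch below (not S3QLRDF itself; the predicates, data, and table names are invented) stores each RDF predicate in its own two-column subject/object table, so a SPARQL star query becomes subject-subject joins over only the tables it touches. PTP pushes the same idea further by splitting the Property Table per unique property.

```python
# Toy vertical partitioning: one (subject, object) table per RDF predicate.
import sqlite3

triples = [
    ("alice", "worksAt", "asu"),
    ("alice", "knows", "bob"),
    ("bob",   "worksAt", "asu"),
]

db = sqlite3.connect(":memory:")
for s, p, o in triples:
    db.execute(f"CREATE TABLE IF NOT EXISTS vp_{p} (s TEXT, o TEXT)")
    db.execute(f"INSERT INTO vp_{p} VALUES (?, ?)", (s, o))

# SPARQL: SELECT ?x ?y WHERE { ?x worksAt "asu" . ?x knows ?y }
# becomes a join over just the two predicate tables it references:
rows = db.execute(
    "SELECT w.s, k.o FROM vp_worksAt w JOIN vp_knows k ON w.s = k.s "
    "WHERE w.o = 'asu'"
).fetchall()
print(rows)  # [('alice', 'bob')]
```

The query never scans predicates it does not mention, which is the source of VP's efficiency; the trade-off the abstract describes is the pre-processing cost of building and maintaining these per-predicate (or per-property) tables.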
Contributors: Hassan, P M Mahmudul (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Davulcu, Hasan (Committee member) / Sarwat Abdelghany Aly Elsayed, Mohamed (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Traditionally, databases have been categorized as either row-oriented or column-oriented. Row-oriented databases store each row of a table contiguously on disk, whereas column-oriented databases store each column contiguously on disk. In recent years, columnar database management systems have become increasingly popular because deep and narrow queries run faster on them; column-oriented databases are therefore highly optimized for analytical (OLAP) workloads (Mike Freedman 2019). That is why they are frequently used in business intelligence (BI), data warehouses, and similar settings that involve large data sets, intensive queries, and aggregated computing. As data sizes keep growing, efficient compression becomes an important consideration for these databases, both to optimize storage and to improve query performance. Since column-oriented databases store data of the same type contiguously, most modern compression techniques achieve better compression ratios on them than on row-oriented databases. This thesis introduces SA128, a multi-stage compression technique for column-oriented databases that performs a column-wise compression followed by a table-wide compression of database tables. In the first stage, SA128 analyzes each column's characteristics (such as data type and distribution) and determines which combination of lossless compression algorithms would yield the best compression ratio. In the second stage, SA128 uses an entropy encoding technique such as rANS (Duda, J., 2013) to further improve the compression ratio.
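The sketch below illustrates the two-stage structure under loose assumptions: Python's standard-library codecs (zlib, bz2, lzma) stand in for SA128's actual per-column algorithm set, and lzma stands in for the rANS entropy stage, since rANS is not in the standard library.

```python
# Two-stage column compression sketch: best codec per column, then a
# table-wide pass over the concatenated result.
import bz2, lzma, pickle, zlib

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def compress_column(values):
    """Stage 1: serialize a column and keep whichever codec compresses best."""
    raw = pickle.dumps(values)
    name, blob = min(
        ((n, c(raw)) for n, c in CODECS.items()), key=lambda t: len(t[1])
    )
    return name, blob

def compress_table(columns):
    """columns: {column_name: list of values}. Stage 2 re-compresses the whole
    per-column result (lzma here, standing in for an rANS entropy coder)."""
    stage1 = {name: compress_column(vals) for name, vals in columns.items()}
    return lzma.compress(pickle.dumps(stage1))

table = {"id": list(range(10000)), "flag": [i % 2 for i in range(10000)]}
print(len(compress_table(table)), "bytes")
```

Per-column codec selection works because each column holds a single data type with its own distribution, which is exactly the property of columnar layout the abstract identifies.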
Contributors: Anand, Sukhpreet Singh (Author) / Bansal, Ajay (Thesis advisor) / Heinrichs, Robert R (Committee member) / Gonzalez-Sanchez, Javier (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Serious or educational games have long been a subject of research. They usually tie game mechanics, game content, and content assessment together into a specialized game intended to impart learning of the associated content to its players. While this approach is good for developing games that teach highly specific topics, it consumes a lot of time and money. Reusing the same mechanics and assessment to create games that teach different content would yield large savings in both. The Content Agnostic Game Engineering (CAGE) Architecture mitigates the problem by decoupling the content from the game mechanics. Moreover, content assessment in games is often so explicit that it disturbs players' flow and thus hampers learning, because it is not integrated into the game flow. Stealth assessment alleviates this problem by keeping players engaged while assessing them at the same time. Integrating stealth assessment into the CAGE framework in a content-agnostic way increases its usability and further decreases game and assessment development time and cost. This research presents an evaluation of the learning outcomes in content-agnostic game-based assessment developed using the CAGE framework.
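The sketch below illustrates the two ideas in miniature: one content-agnostic mechanics loop reused across swappable content modules, with a stealth-assessment hook that records evidence without interrupting play. Class and function names here are illustrative, not the CAGE framework's actual API.

```python
# Content-agnostic mechanics with a stealth-assessment observer.
from dataclasses import dataclass, field

@dataclass
class ContentItem:            # swappable subject matter (math, chemistry, ...)
    prompt: str
    answer: str

@dataclass
class StealthAssessor:        # gathers evidence instead of interrupting play
    evidence: list = field(default_factory=list)

    def observe(self, item, response, correct):
        self.evidence.append((item.prompt, response, correct))

def play_round(content_items, get_response, assessor):
    """One mechanics loop, reused unchanged for any content module."""
    score = 0
    for item in content_items:
        response = get_response(item.prompt)
        correct = response == item.answer
        assessor.observe(item, response, correct)  # assessment stays hidden
        score += correct
    return score

items = [ContentItem("2+2", "4"), ContentItem("capital of France", "Paris")]
assessor = StealthAssessor()
print(play_round(items, lambda prompt: "4", assessor), assessor.evidence)
```

Because the mechanics loop and the assessor never reference a specific subject, swapping `items` for a different content module changes what is taught without touching the game or assessment code, which is the cost saving the abstract describes.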
Contributors: Verma, Vipin (Author) / Craig, Scotty D (Thesis advisor) / Bansal, Ajay (Thesis advisor) / Amresh, Ashish (Committee member) / Baron, Tyler (Committee member) / Levy, Roy (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

This project incorporates sentiment analysis into traditional stock analysis to enhance stock rating predictions by drawing on opinions about various stocks from the Internet. Headlines from eight major news publications and conversations from Yahoo! Finance’s “Conversations” feature were parsed through the Valence Aware Dictionary for Sentiment Reasoning (VADER) natural language processing package to determine numerical polarities representing positivity or negativity for a given stock ticker. These polarities were paired with stock metrics typically observed by stock analysts to form the feature set for a logistic regression machine learning model. The model was trained on roughly 1,500 major stocks to produce a binary classification between a “Buy” or “Not Buy” rating for each stock, and its results were inserted into the back end of the Agora Web UI, which emulates search engine behavior specifically for stocks listed on the NYSE and NASDAQ. The model reported an accuracy of 82.5%, and for most major stocks its predictions correlated with stock analysts’ ratings. Given the volatility of the stock market and the propensity for hive-mind behavior in online forums, the performance of the logistic regression model would benefit from incorporating historical stock data and more sources of opinion to balance any subjectivity in the model.
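A condensed sketch of this pipeline is shown below. It assumes the `vaderSentiment` package is installed, and the headlines and metric values are made up for illustration. VADER maps text to a compound polarity in [-1, 1], which is then joined with numeric stock features for the binary Buy / Not Buy logistic regression.

```python
# Sentiment polarity + stock metrics -> binary Buy / Not Buy classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def features(headlines, pe_ratio, eps_growth):
    # Average VADER compound polarity over all text about the ticker.
    polarity = np.mean(
        [analyzer.polarity_scores(h)["compound"] for h in headlines]
    )
    return [polarity, pe_ratio, eps_growth]

# Two illustrative training rows (real inputs were parsed headlines plus
# analyst-style metrics for ~1,500 stocks).
X = np.array([
    features(["Record earnings beat expectations"], 18.0, 0.12),
    features(["Regulators probe accounting irregularities"], 45.0, -0.05),
])
y = np.array([1, 0])  # 1 = Buy, 0 = Not Buy
model = LogisticRegression().fit(X, y)
print(model.predict(X))
```

Logistic regression is a natural fit here because its coefficients expose how much the sentiment polarity shifts the Buy probability relative to the traditional metrics.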

Contributors: Ramaraju, Venkat (Author) / Rao, Jayanth (Co-author) / Bansal, Ajay (Thesis director) / Smith, James (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2021-12
Description

This project incorporates sentiment analysis into traditional stock analysis to enhance stock rating predictions by drawing on opinions about various stocks from the Internet. Headlines from eight major news publications and conversations from Yahoo! Finance’s “Conversations” feature were parsed through the Valence Aware Dictionary for Sentiment Reasoning (VADER) natural language processing package to determine numerical polarities representing positivity or negativity for a given stock ticker. These polarities were paired with stock metrics typically observed by stock analysts to form the feature set for a logistic regression machine learning model. The model was trained on roughly 1,500 major stocks to produce a binary classification between a “Buy” or “Not Buy” rating for each stock, and its results were inserted into the back end of the Agora Web UI, which emulates search engine behavior specifically for stocks listed on the NYSE and NASDAQ. The model reported an accuracy of 82.5%, and for most major stocks its predictions correlated with stock analysts’ ratings. Given the volatility of the stock market and the propensity for hive-mind behavior in online forums, the performance of the logistic regression model would benefit from incorporating historical stock data and more sources of opinion to balance any subjectivity in the model.

Contributors: Rao, Jayanth (Author) / Ramaraju, Venkat (Co-author) / Bansal, Ajay (Thesis director) / Smith, James (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2021-12
Description

Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial and error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such a learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. Rats' movement parameters during performance of the directional choice task quickly became stereotyped during the first 2–3 days or sessions, but learning the directional choice problem took weeks. Accompanying the rats' behavioral adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for, and contribute to, learning the directional choice task.
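The kind of comparison described above can be sketched as follows, using synthetic spike counts in place of the recorded AGm/AGl units: decoding the choice from ensemble mean rates (which carry no signal here by construction) versus from the full spatiotemporal neuron-by-time-bin patterns.

```python
# Mean-rate vs spatiotemporal decoding of a binary directional choice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
trials, neurons, bins = 200, 20, 10
choice = rng.integers(0, 2, trials)                  # 0 = left, 1 = right
counts = rng.poisson(5.0, (trials, neurons, bins)).astype(float)

# Embed the choice signal in *when* neurons fire, not in overall rate:
# right-choice trials fire earlier, leaving the trial-mean rate unchanged.
counts[choice == 1, :, : bins // 2] += 1.0
counts[choice == 1, :, bins // 2 :] -= 1.0

mean_rates = counts.mean(axis=2)                     # neurons only
patterns = counts.reshape(trials, -1)                # neuron x time bins
for name, X in [("mean rates", mean_rates), ("spatiotemporal", patterns)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, choice, cv=5)
    print(name, round(acc.mean(), 2))
```

In this toy setup the mean-rate decoder sits at chance while the spatiotemporal decoder succeeds, mirroring the study's finding that discriminability lives in the ensemble's temporal structure rather than its average firing rate.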

Contributors: Mao, Hongwei (Author) / Yuan, Yuan (Author) / Si, Jennie (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-03-06
Description
Alzheimer’s disease (AD) is a chronic neurodegenerative disease that usually starts slowly and worsens over time. It is the cause of 60% to 70% of cases of dementia. There is growing interest in identifying brain image biomarkers that help evaluate AD risk pre-symptomatically. High-dimensional non-linear pattern classification methods have been applied to structural magnetic resonance images (MRIs) and used to discriminate between clinical groups in Alzheimer’s progression. Using fluorodeoxyglucose (FDG) positron emission tomography (PET) as the preferred imaging modality, this thesis develops two independent machine learning based patch analysis methods and uses them to perform six binary classification experiments across different AD diagnostic categories. Specifically, features were extracted and learned using dimensionality reduction and dictionary learning with sparse coding, taking overlapping patches in and around the cerebral cortex as features. Using AdaBoost as the classifier, both methods aim to utilize 18F-FDG PET as a biological marker in the early diagnosis of Alzheimer’s. Additionally, we investigate the contribution of rich demographic features (ApoE3, ApoE4, and Functional Activities Questionnaire (FAQ) scores) to classification. The experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of both proposed systems. The use of 18F-FDG PET may offer a new sensitive biomarker and enrich the brain imaging analysis toolset for studying the diagnosis and prognosis of AD.
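A simplified 2D stand-in for this pipeline appears below: synthetic images replace the 18F-FDG PET volumes, and the patch size, PCA dimensionality, and labels are illustrative. It follows the shape of one of the two methods: overlapping patches, dimensionality reduction, then an AdaBoost classifier.

```python
# Overlapping patches -> PCA features -> AdaBoost binary classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)

def patch_features(scan, pca):
    """Average the PCA codes of overlapping patches from one scan."""
    patches = extract_patches_2d(scan, (8, 8), max_patches=50, random_state=0)
    return pca.transform(patches.reshape(len(patches), -1)).mean(axis=0)

scans = rng.normal(size=(40, 64, 64))     # 40 synthetic 2D "scans"
labels = rng.integers(0, 2, 40)           # AD vs control (illustrative)
scans[labels == 1] += 0.3                 # inject a crude group difference

all_patches = np.vstack([
    extract_patches_2d(s, (8, 8), max_patches=50, random_state=0).reshape(50, -1)
    for s in scans
])
pca = PCA(n_components=16).fit(all_patches)
X = np.array([patch_features(s, pca) for s in scans])
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```

Swapping the PCA step for dictionary learning with sparse coding (e.g. scikit-learn's `DictionaryLearning`) would give the thesis's second feature-learning variant while the rest of the pipeline stays the same.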
Contributors: Srivastava, Anant (Author) / Wang, Yalin (Thesis advisor) / Bansal, Ajay (Thesis advisor) / Liang, Jianming (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
When software design teams attempt to collaborate on different design documents, they suffer from a serious collaboration problem. Designers collaborate either in person or remotely. In-person collaboration is expensive but effective; remote collaboration is inexpensive but inefficient. To gain the most benefit from collaboration, there needs to be remote collaboration that is not only cheap but also as efficient as physical collaboration.

Remote collaboration on software design relies on general tools such as Word and Excel. These tools are then shared in an inefficient manner, using email, cloud-based file-locking tools, or something like Google Docs. Because these tools either increase the number of design building blocks or limit the times at which one can work on a specific document, they drastically decrease productivity.

This thesis outlines a new methodology to increase design productivity, accomplished by providing design-specific collaboration. Using version control systems, this methodology allows for effective project collaboration between remotely located design teams. The methodology encompasses role management, policy management, and design artifact management, including nonfunctional requirements. Version control can be used for different design products, improving communication and productivity among design teams. This thesis presents the methodology and a proof-of-concept tool that embodies its core principles.
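As a toy sketch of the three concerns named above (role, policy, and design artifact management; all names here are invented, not the proof-of-concept tool's API), a design artifact can be modeled as an append-only version history guarded by a role policy:

```python
# Versioned design artifacts with a role-based edit policy.
from dataclasses import dataclass, field

# Policy management: which roles may edit which design artifacts.
POLICY = {
    "architect": {"architecture.doc", "requirements.doc"},
    "designer": {"ui-mockups.doc"},
}

@dataclass
class Artifact:
    name: str
    versions: list = field(default_factory=list)  # append-only history

    def commit(self, author, role, content):
        # Role management: reject edits the policy does not allow.
        if self.name not in POLICY.get(role, set()):
            raise PermissionError(f"{role} may not edit {self.name}")
        self.versions.append((author, content))

doc = Artifact("architecture.doc")
doc.commit("shawn", "architect", "v1: layered architecture")
doc.commit("shawn", "architect", "v2: added NFR section")
print(len(doc.versions), "versions; latest:", doc.versions[-1][1])
```

Keeping every version rather than locking the file is what lets remotely located designers work on the same artifact concurrently, which is the productivity gain the methodology targets.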
Contributors: Pike, Shawn (Author) / Gaffar, Ashraf (Thesis advisor) / Lindquist, Timothy (Committee member) / Whitehouse, Richard (Committee member) / Arizona State University (Publisher)
Created: 2016