Description

Text classification, in the artificial intelligence domain, is the task of automatically assigning text documents to predefined categories using machine learning techniques. An example is classifying uncategorized news articles into predefined categories such as "Business", "Politics", "Education", "Technology", etc. In this thesis, a supervised machine learning approach is followed, in which a model is first trained with pre-classified training data and the classes of test data are then predicted. Good feature extraction is an important step in the machine learning approach, and hence the main component of this text classifier is semantic triplet-based features, in addition to traditional features such as standard keyword-based features and statistical features based on shallow parsing (such as the density of POS tags and named entities). A triplet {Subject, Verb, Object} in a sentence is defined as a relation between the subject and the object, the relation being the predicate (verb). Triplet extraction is a five-step process whose input corpus consists of web text documents, each containing one or many paragraphs, ranging from RSS feeds to lists of extremist websites. The input corpus feeds into the "Pronoun Resolution" step, which uses a heuristic approach to identify the noun phrases referenced by pronouns. The next step, the "SRL Parser", is a shallow semantic parser that converts the pronoun-resolved paragraphs into an annotated predicate-argument format. The output of the SRL parser is processed by the "Triplet Extractor" algorithm, which forms triplets of the form {Subject, Verb, Object}. Generalization and reduction of the triplet features is the next step; the reduced feature representation lowers computing time, yields better discriminatory behavior, and mitigates the curse of dimensionality. For training and testing, a ten-fold cross-validation approach is followed: in each round an SVM classifier is trained with 90% of the labeled (training) data, and in the testing phase the classes of the remaining 10% of unlabeled (testing) data are predicted. In conclusion, this thesis proposes a model with semantic triplet-based features for story classification and demonstrates its effectiveness against other traditional features used in the literature for text classification tasks.
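The mapping from SRL predicate-argument annotations to {Subject, Verb, Object} triplets can be illustrated with a short sketch. The frame schema below (ARG0/ARG1 roles) is a hypothetical simplification, not the thesis's exact SRL output format; a common convention, followed here, is to take ARG0 as the subject and ARG1 as the object.

```python
# Minimal sketch: forming {Subject, Verb, Object} triplets from SRL-style
# predicate-argument frames. The frame schema (ARG0/ARG1 roles) is an
# assumption for illustration; real SRL parsers emit richer annotations.

def extract_triplets(frames):
    """Each frame is a dict like {"verb": ..., "ARG0": ..., "ARG1": ...}."""
    triplets = []
    for frame in frames:
        subject = frame.get("ARG0")   # agent role, taken as the subject
        obj = frame.get("ARG1")       # patient/theme role, taken as the object
        if subject and obj:
            triplets.append((subject, frame["verb"], obj))
    return triplets

# Toy example on a pronoun-resolved sentence:
# "The committee approved the proposal."
frames = [{"verb": "approved", "ARG0": "The committee", "ARG1": "the proposal"}]
print(extract_triplets(frames))  # [('The committee', 'approved', 'the proposal')]
```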
Contributors: Karad, Ravi Chandravadan (Author) / Davulcu, Hasan (Thesis advisor) / Corman, Steven (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The overall contribution of the Minerva Initiative at ASU is to map social organizations in a multidimensional space that provides a measure of their radical or counter-radical influence over the demographics of a nation. This tool serves as a simple content management system to store and track project resources such as documents, images, videos, and web links. It provides centralized and secure access to email conversations among project team members. Conversations are categorized into one of seven pre-defined categories. Each category is associated with a certain set of keywords, and we follow a frequency-based approach for matching email conversations with the categories. The interface is hosted as a web application which can be accessed by the project team.
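A minimal sketch of the frequency-based matching described above, assuming each category is represented by a keyword set; the category names and keywords below are illustrative assumptions, not the tool's actual categories.

```python
# Hedged sketch: score each category by the frequency of its keywords
# in an email conversation, then assign the highest-scoring category.
# Category names and keyword sets are illustrative assumptions.
import re
from collections import Counter

CATEGORIES = {
    "logistics": {"meeting", "schedule", "agenda"},
    "data":      {"dataset", "corpus", "annotation"},
}

def categorize(text):
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    scores = {cat: sum(tokens[kw] for kw in kws) for cat, kws in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(categorize("Please share the dataset and annotation guidelines."))  # data
```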
Contributors: Nair, Apurva Aravindakshan (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This thesis is developed in the context of biomanufacturing of modern products that have the following properties: they require a short design-to-manufacturing time, they have high variability due to a high desired level of patient personalization, and, as a result, they may be manufactured in low volumes. This area at the intersection of therapeutics and biomanufacturing has become increasingly important: (i) a huge push toward the design of new RNA nanoparticles has revolutionized the science of vaccines during the COVID-19 pandemic; (ii) while the technology to produce personalized cancer medications is available, efficient design and operation of the manufacturing systems is not yet agreed upon. In this work, the focus is on operations research methodologies that can support faster design of novel products, specifically RNA, and on methods for enabling personalization in biomanufacturing, looking specifically at the problem of cancer therapy manufacturing. Across both areas, methods are presented that attempt to embed pre-existing knowledge (e.g., constraints characterizing good molecules, comparisons between structures) as well as to learn problem structure (e.g., the landscape of the reward function while synthesizing the control for a single-use bioreactor). This thesis produced three key outcomes: (i) ExpertRNA, for the prediction of the structure of an RNA molecule given a sequence. RNA structure is fundamental in determining its function, so efficient prediction tools can make all the difference for a scientist trying to understand the optimal molecule configuration. For the first time, the algorithm allows expert evaluation in the loop to judge the partial predictions that the tool produces; (ii) BioMAN, a discrete-event simulation tool for the study of single-use biomanufacturing of personalized cancer therapies. The discrete-event simulation engine was tailored to handle the efficient scheduling of the many parallel events caused by the presence of single-use resources. This is the first simulator of this type for individual therapies; (iii) Part-MCTS, a novel sequential decision-making algorithm to support the control of single-use systems. This tool integrates, for the first time, simulation, Monte Carlo tree search, and optimal computing budget allocation for managing the computational effort.
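As a rough illustration of the discrete-event simulation pattern underlying a tool like BioMAN (this sketch is not BioMAN's actual engine), events can be ordered in a priority queue and single-use resources modeled as a finite pool that is consumed but never released. All names and parameters here are illustrative assumptions.

```python
# Minimal discrete-event simulation loop: events are (time, id, name)
# tuples ordered in a priority queue. Single-use resources are a finite
# pool that is consumed, never released. Illustrative assumption only,
# not BioMAN's API.
import heapq
import itertools

def run(events, single_use_bioreactors=2):
    counter, pool = itertools.count(), single_use_bioreactors
    queue = [(t, next(counter), name) for t, name in events]
    heapq.heapify(queue)
    while queue:
        clock, _, name = heapq.heappop(queue)
        if name == "start_batch":
            if pool > 0:
                pool -= 1                     # consume a single-use unit
                heapq.heappush(queue, (clock + 8.0, next(counter), "batch_done"))
                print(f"t={clock:5.1f}: batch started ({pool} units left)")
            else:
                print(f"t={clock:5.1f}: batch blocked, no single-use units")
        else:
            print(f"t={clock:5.1f}: {name}")

run([(0.0, "start_batch"), (1.0, "start_batch"), (2.0, "start_batch")])
```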
Contributors: Liu, Menghan (Author) / Pedrielli, Giulia (Thesis advisor) / Bertsekas, Dimitri (Committee member) / Pan, Rong (Committee member) / Sulc, Petr (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Social media platforms provide a rich environment for analyzing user behavior. Recently, deep learning-based methods have been the mainstream approach for social media analysis models involving complex patterns. However, these methods are susceptible to biases in the training data, such as participation inequality: a mere 1% of users generate the majority of the content on social networking sites, while the remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent. These silent users consume and listen to information that is propagated on the platform. However, their voice, attitude, and interests are not reflected in the online content, predisposing the decisions of current methods toward the opinions of the active users, so models can mistake the loudest users for the majority. To make the silent majority heard is to reveal the true landscape of the platform. In this dissertation, to compensate for this bias in the data, which is related to user-level data scarcity, I introduce three pieces of research work. Two of the proposed solutions deal with the data on hand, while the third tries to augment the current data. Specifically, the first approach modifies the weight of users' activity/interactions in the input space; the second approach re-weights the loss based on the users' activity levels during downstream task training; and the third approach uses large language models (LLMs) to learn users' writing behavior and expand the current data. In other words, by utilizing LLMs as a sophisticated knowledge base, this method aims to augment the silent users' data.
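A hedged sketch of the second approach, re-weighting the training loss by user activity level; the inverse-activity weighting rule below is an assumption for illustration, not necessarily the dissertation's exact scheme.

```python
# Sketch: per-sample loss re-weighting by user activity. Samples from
# low-activity (silent) users receive larger weights so that active
# users do not dominate the gradient. The inverse-activity rule is an
# illustrative assumption.
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, user_activity):
    """user_activity: per-sample post counts of the authoring users."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    weights = 1.0 / (1.0 + user_activity.float())   # silent users weigh more
    weights = weights / weights.sum()               # normalize to sum to 1
    return (weights * per_sample).sum()

logits = torch.randn(4, 3)                 # 4 samples, 3 classes
labels = torch.tensor([0, 2, 1, 0])
activity = torch.tensor([250, 3, 0, 40])   # posts per user (toy numbers)
print(reweighted_loss(logits, labels, activity))
```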
Contributors: Karami, Mansooreh (Author) / Liu, Huan (Thesis advisor) / Sen, Arunabha (Committee member) / Davulcu, Hasan (Committee member) / Mancenido, Michelle V. (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

In this paper, we present a Bayesian analysis for the Weibull proportional hazards (PH) model used in step-stress accelerated life testing. The key mathematical and graphical differences between the Weibull cumulative exposure (CE) model and the PH model are illustrated. Compared with the CE model, the PH model provides more flexibility in fitting step-stress testing data and has attractive mathematical properties that make it desirable in the Bayesian framework. A Markov chain Monte Carlo algorithm with an adaptive rejection sampling technique is used for posterior inference. We demonstrate the performance of this method on both simulated and real datasets.
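For intuition, a sketch of a Weibull PH hazard under a step-stress profile: the baseline Weibull hazard is scaled by exp(beta * x(t)), where x(t) is a piecewise-constant stress level. The parameterization and all numbers below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a Weibull proportional hazards model under step-stress:
# baseline hazard h0(t) = lam * a * t**(a - 1), scaled by exp(beta * x(t)),
# where x(t) is the (piecewise-constant) stress level. Illustrative
# assumption, not the paper's exact formulation.
import numpy as np

def stress(t, change_time=50.0, low=1.0, high=2.0):
    """Step-stress profile: low stress before change_time, high after."""
    return np.where(t < change_time, low, high)

def ph_hazard(t, a=1.5, lam=0.01, beta=0.8):
    return lam * np.exp(beta * stress(t)) * a * t ** (a - 1.0)

t = np.linspace(1.0, 100.0, 5)
print(ph_hazard(t))   # the hazard jumps at the stress change point
```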

Contributors: Sha, Naijun (Author) / Pan, Rong (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-08-01
Description

Accelerated life test (ALT) planning in a Bayesian framework is studied in this paper with a focus on differentiating between competing acceleration models, when there is uncertainty as to whether the relationship between log mean life and the stress variable is linear or exhibits some curvature. The proposed criterion is based on the Hellinger distance between predictive distributions. The optimal stress-factor setup and unit allocation are determined at three stress levels, subject to test-lab equipment and test-duration constraints. Optimal designs are validated by their recovery rates, where the true, data-generating model is selected under the DIC (Deviance Information Criterion) model selection rule, and by comparing their performance with other test plans. Results show that the proposed optimal design method substantially increases a test plan's ability to distinguish among competing ALT models, thus providing better guidance as to which model is appropriate for the follow-on testing phase of the experiment.
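The Hellinger distance between two predictive densities, the core of the proposed criterion, can be approximated numerically on a grid; this sketch uses simple normal densities as an illustrative stand-in for the paper's ALT predictive distributions.

```python
# Sketch: Hellinger distance between two predictive densities,
# H^2(p, q) = 0.5 * integral of (sqrt(p) - sqrt(q))^2 dx, approximated
# on a grid with the trapezoidal rule. Normal densities are an
# illustrative stand-in for the paper's predictive distributions.
import numpy as np
from scipy.stats import norm

def hellinger(p, q, x):
    return np.sqrt(0.5 * np.trapz((np.sqrt(p) - np.sqrt(q)) ** 2, x))

x = np.linspace(-10, 10, 2001)
p = norm.pdf(x, loc=0.0, scale=1.0)   # predictive density under model 1
q = norm.pdf(x, loc=1.0, scale=1.5)   # predictive density under model 2
print(hellinger(p, q, x))             # larger distance = easier to tell apart
```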

Contributors: Nasir, Ehab A. (Author) / Pan, Rong (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-02-01