Matching Items (67)

Item 149708
Description
Semiconductor manufacturing facilities are highly complex and capital intensive. During the lifecycle of these facilities, various disciplines come together and generate and use a tremendous amount of building and process information to support the decisions that enable them to successfully design, build, and sustain these advanced facilities. However, a majority of the information generated and the processes involved are neither integrated nor interoperable, resulting in a high degree of redundancy. The objective of this thesis is to build an interoperable Building Information Model (BIM) for the Base-Build and Tool Installation in a semiconductor manufacturing facility. It examines existing processes and data exchange standards available to facilitate the implementation of BIM, and provides a framework for the development of processes and standards that can help in building an intelligent information model for a semiconductor manufacturing facility. To understand the nature of the information exchanged between the various stakeholders, the flow of information between the facility designer, process tool manufacturer, and tool layout designer is examined. An information model for the base build and process tool is built, using the industry standards SEMI E6 and SEMI E51 as the basis for modeling the information. It is found that the applications used to create information models support interoperable industry-standard formats such as the Industry Foundation Classes (IFC) and ISO 15926 only in a limited manner. A gap analysis has revealed that interoperability standards applicable to the semiconductor manufacturing industry, such as IFC and ISO 15926, need to be expanded to support information transfers unique to the industry. Information modeling for a semiconductor manufacturing facility is unique in that it is a process model (Process Tool Information Model) within a building model (Building Information Model), each of them supported more robustly by different interoperability standards. Applications support interoperability data standards specific to the domain or industry they serve, but information transfers need to occur between the various domains. To facilitate the flow of information between the different domains, it is recommended that a mapping of the industry standards be undertaken and that translators between them be developed for business use.
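As a purely illustrative sketch of that recommendation (the attribute names below are invented; neither the IFC schema nor the ISO 15926 data model is reproduced here), such a translator can be driven by an explicit mapping between fields of the two standards:

```python
# Hypothetical field mapping between an IFC-style record and an
# ISO 15926-style record; the real schemas are far richer than this.
IFC_TO_ISO15926 = {
    "GlobalId": "identifier",
    "Name": "designation",
    "ObjectType": "class_of_individual",
}

def translate(record, mapping):
    """Rename mapped fields; park unmapped fields for manual review."""
    out = {mapping[key]: value for key, value in record.items() if key in mapping}
    out["untranslated"] = {k: v for k, v in record.items() if k not in mapping}
    return out

tool = {"GlobalId": "2N1a7kQ9", "Name": "Etch Tool 07", "ObjectType": "ProcessTool"}
print(translate(tool, IFC_TO_ISO15926))
```

Collecting the unmapped fields explicitly is one way to surface the gaps between standards that the thesis's gap analysis identifies.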
Contributors: Pindukuri, Shruthi (Author) / Chasey, Allan D (Thesis advisor) / Wiezel, Avi (Committee member) / Mamlouk, Michael (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149864
Description
Rapidly increasing demand for technology support services, and often shrinking budgetary and staff resources, create enormous challenges for information technology (IT) departments in public sector higher education. To address these difficult circumstances, the researcher developed a network of IT professionals from schools in a local community college system and from a research university in the southwest into an interorganizational community of practice (CoP). This collaboration allowed members from participating institutions to share knowledge and ideas relating to shared technical problems. This study examines the extent to which the community developed, the factors that contributed to its development and the value of such an endeavor. The researcher used a mixed methods approach to gather data and insights relative to these research questions. Data were collected through online surveys, meeting notes and transcripts, post-meeting questionnaires, semi-structured interviews with key informants, and web analytics. The results from this research indicate that the group did coalesce into a CoP. The researcher identified two crucial roles that aided this development: community coordinator and technology steward. Furthermore, the IT professionals who participated and the leaders from their organizations reported that developing the community was a worthwhile venture. They also reported that while the technical collaboration component was very valuable, the non-technical topics and interactions were also very beneficial. Indicators also suggest that the community made progress toward self-sustainability and is likely to continue. There is also discussion of a third leadership role that appears important for developing CoPs that span organizational boundaries, that of the community catalyst. Implications from this study suggest that other higher education IT organizations faced with similar circumstances may be able to follow the model presented here and also achieve positive results.
Contributors: Koan, R. Mark (Russel Mark) (Author) / Puckett, Kathleen S (Thesis advisor) / Foulger, Teresa S (Committee member) / Carmean, Colleen M (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 149723
Description
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset then can be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear, Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods in the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time-ordering of the data to extract features, and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve the bias problem: one uses an out-of-bag sampling method, called OOBForest, and the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
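A minimal sketch of the interval-feature idea behind TSF (hypothetical code, not the dissertation's implementation; the actual TSF also uses its own tree construction and split criterion rather than a stock random forest): extract mean, standard deviation, and slope over random intervals, then train a standard tree ensemble on those features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interval_features(series, intervals):
    """Mean, standard deviation, and slope over each interval --
    the three interpretable feature types used by interval-based forests."""
    feats = []
    for start, end in intervals:
        window = series[start:end]
        t = np.arange(len(window))
        slope = np.polyfit(t, window, 1)[0] if len(window) > 1 else 0.0
        feats.extend([window.mean(), window.std(), slope])
    return feats

rng = np.random.default_rng(0)
n_series, length = 100, 50
X_raw = rng.normal(size=(n_series, length))
y = (X_raw[:, 10:20].mean(axis=1) > 0).astype(int)   # toy labels

# Sample random intervals; feature extraction is linear in series length.
starts = rng.integers(0, length - 5, size=8)
widths = rng.integers(5, 20, size=8)
intervals = [(s, min(s + w, length)) for s, w in zip(starts, widths)]

X = np.array([interval_features(x, intervals) for x in X_raw])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```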
Contributors: Deng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2011
Item 151811
Description
This document builds a model, the Resilience Engine, of how a given sociotechnical innovation contributes to the resilience of its society, where the failure points of that process might be, and what outcomes, resilient or entropic, can be generated by the uptake of a particular innovation. Closed systems, which tend towards stagnation and collapse, are distinguished from open systems, which, through ongoing encounters with external novelty, tend towards enduring resilience. Heterotopia, a space bounded from the dominant order in which novelty is generated and defended, is put forth as the locus of innovation for systemic resilience, defined as the capacity to adapt to environmental changes. The generative aspect of the Resilience Engine lies in a dialectic between a heterotopia and the dominant system across a membrane which permits interaction while maintaining the autonomy of the new space. With a model of how innovation, taken up by agents seeking power outside the dominant order, leads to resilience, and of what generates failures of the Resilience Engine as well as successes, the model is tested against cases drawn from two key virtual worlds of the mid-2000s. The cases presented largely validate the model but generate a crucial surprise. Within those worlds, 2008-2010 saw an abrupt cultural transformation as the dialectic stage of the Resilience Engine's operation generated victories for the dominant order over promising emergent attributes of virtual heterotopia. At least one emergent practice, that of the conference backchannel, has been assimilated, generating systemic resilience. A surprise, however, comes from extensive evidence that one element never problematized in thinking about innovation, the discontent agent, was largely absent from virtual worlds. Rather, what users sought was not greater agency but the comfort of submission over the burdens of self-governance. Thus, aside from minor cases, the outcome of the operation of the Resilience Engine within the virtual worlds studied was the colonization of the heterotopic space for the metropolis, along with attempts by agents both external and internal to generate maximum order. Pursuant to the Resilience Engine model, this outcome is a recipe for entropic collapse and for preventing new heterotopias from arising under the current dominant means of production.
Contributors: McKnight, John Carter (Author) / Miller, Clark (Thesis advisor) / Hayes, Elisabeth (Committee member) / Allenby, Braden (Committee member) / Daer, Alice (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151653
Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from techniques for building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to study and extend non-monotonic languages.
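For context, the first-order stable model semantics being extended is standardly presented as a second-order operator (this is the usual Ferraris-Lee-Lifschitz formulation as I recall it; the dissertation's generalization modifies the recursive definition of F* to handle generalized quantifiers):

```latex
% Stable models of a first-order sentence F, relative to a list p of
% intensional predicates:
\mathrm{SM}[F;\,\mathbf{p}] \;=\; F \,\land\,
  \neg\exists\mathbf{u}\,\bigl((\mathbf{u} < \mathbf{p}) \land F^{*}(\mathbf{u})\bigr)
% where u < p says the predicates u are pointwise weaker than p and not
% all equivalent, and F*(u) is defined by recursion on the structure of F.
```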
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151718
Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose RAProp, a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision of baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation against the current state-of-the-art method.
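The exact RAProp propagation is not reproduced here; as an illustrative sketch of spreading scores over an agreement graph (hypothetical function and parameter names), a damped, PageRank-style iteration might look like this:

```python
import numpy as np

def propagate_reputation(initial, agreement, alpha=0.85, iters=50):
    """Hypothetical sketch: spread base reputation over an agreement graph
    so tweets corroborated by reputable tweets gain score.

    initial:   (n,) base scores, e.g. from the three-layer user/tweet/page graph
    agreement: (n, n) nonnegative content-similarity weights
    alpha:     fraction of each score drawn from agreeing neighbors
    """
    row_sums = agreement.sum(axis=1, keepdims=True)
    P = np.zeros_like(agreement)
    np.divide(agreement, row_sums, out=P, where=row_sums > 0)  # row-normalize
    score = initial.astype(float).copy()
    for _ in range(iters):
        score = (1 - alpha) * initial + alpha * P.T @ score
    return score

base = np.array([0.9, 0.2, 0.5, 0.1])
agree = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=float)
print(propagate_reputation(base, agree))
```

Under this kind of update, a spam tweet that no reputable tweet agrees with (the last row above) keeps only its low base score, which is one intuition for why an agreement graph can counter spam.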
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151307
Description
This study explores the impact of feedback, feedforward, and personality on computer-mediated behavior change. These effects were studied using subjects who entered information relevant to their diet and exercise into an online tool. Subjects were divided into four experimental groups: those receiving only feedback, those receiving only feedforward, those receiving both, and those receiving neither. Results were analyzed using regression analysis. Results indicate that both feedforward and feedback impact behavior change, and that individuals ranking low in conscientiousness experienced behavior change equivalent to that of individuals ranking high in conscientiousness in the presence of feedforward and/or feedback.
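The abstract does not give the exact model specification, but a moderated-regression setup of the kind it describes might be sketched as follows (invented data and variable names; the interaction terms test whether feedback and feedforward close the gap for low-conscientiousness subjects):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per subject, with a behavior-change score,
# treatment indicators, and a conscientiousness measure (1-5 scale).
df = pd.DataFrame({
    "change":        [2.1, 0.4, 1.8, 0.3, 2.5, 0.9, 1.6, 0.5, 2.2, 0.7, 1.9, 0.2],
    "feedback":      [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "feedforward":   [1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1],
    "conscientious": [3.2, 4.5, 2.1, 2.8, 4.9, 3.7, 2.4, 4.1, 3.0, 3.9, 2.6, 4.4],
})

# '*' expands to main effects plus the interaction term.
model = smf.ols(
    "change ~ feedback * conscientious + feedforward * conscientious",
    data=df,
).fit()
print(model.params)
```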
Contributors: McCreless, Tamuchin (Author) / St. Louis, Robert (Thesis advisor) / St. Louis, Robert D. (Committee member) / Goul, Kenneth M (Committee member) / Shao, Benjamin B (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151508
Description
Forrester Research estimated that revenues derived from mobile devices will grow at an annual rate of 39% to reach $31 billion by 2016. With this tremendous market growth, mobile banking, mobile marketing, and mobile retailing have recently been introduced to satisfy customer needs. Academic and practical articles have widely discussed unique features of m-commerce. For instance, hardware constraints such as small screens have led to discussion of the tradeoff between usability and mobility. Needs for personalization and entertainment foster the development of new mobile data services. Given the distinct features of mobile data services, the existing empirical literature on m-commerce is mostly from the consumer side and focuses on consumer perceptions of these features and their adoption intentions. From the supply side, limited data availability in early years explains the lack of firm-level studies on m-commerce. Prior studies have shown that unclear market demand is a major reason that hinders firms' adoption of m-commerce. Given the advances in smartphones, especially the introduction of the iPhone in 2007, firms have recently started to incorporate various mobile information systems in their business operations. This study uses mobile retailing as the context and empirically assesses firms' migration to this new sales venue with a unique cross-sectional dataset. Despite the distinct features of m-commerce, m-Retailing is essentially an extended arm of e-Retailing. Thus, a dependency perspective is used to explore the link between a firm's e-Retail characteristics and its migration to m-Retailing. Rooted in innovation diffusion theory, the first stage of my study assesses the adoption decision, which indicates whether a firm moves to m-Retailing, and the extent of adoption, which shows a firm's commitment to m-Retailing in terms of system implementation choices. In this first stage, I take a dependency perspective to examine the impacts of e-Retail characteristics on m-Retailing adoption. The second stage of my study analyzes conditions that affect the business value of the m-Retail channel. I examine the association between system implementation choices and m-Retail performance while analyzing the effects of e-Retail characteristics on value realization. The two-stage analysis provides an exploratory assessment of firms' migration from e-Retailing to m-Retailing.
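As a schematic illustration only (invented variables and synthetic data; the dissertation's actual econometric specification is not reproduced), the two stages might be sketched as a logit adoption model followed by a performance regression on adopters:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
# Hypothetical firm-level data: e-Retail characteristics, m-Retail
# adoption (0/1), and channel performance for adopters.
df = pd.DataFrame({
    "eretail_sales": rng.lognormal(3, 1, n),   # e-Retail channel size
    "assortment":    rng.integers(1, 6, n),    # product-line breadth
})
df["adopt"] = (0.3 * np.log(df.eretail_sales) + 0.2 * df.assortment
               + rng.normal(0, 1, n) > 1.5).astype(int)
df["m_perf"] = np.where(df.adopt == 1,
                        5 + 0.5 * np.log(df.eretail_sales) + rng.normal(0, 1, n),
                        np.nan)

# Stage 1: adoption decision as a function of e-Retail characteristics.
stage1 = smf.logit("adopt ~ np.log(eretail_sales) + assortment", data=df).fit()
# Stage 2: channel performance, conditional on adoption.
stage2 = smf.ols("m_perf ~ np.log(eretail_sales) + assortment",
                 data=df[df.adopt == 1]).fit()
print(stage1.params, stage2.params, sep="\n")
```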
Contributors: Chou, Yen-Chun (Author) / Shao, Benjamin (Thesis advisor) / St. Louis, Robert (Committee member) / Goul, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Item 151349
Description
This dissertation addresses the research challenge of developing efficient new methods for discovering useful patterns and knowledge in large volumes of electronically collected spatiotemporal activity data. I propose to analyze three types of such spatiotemporal activity data in a methodological framework that integrates spatial analysis, data mining, machine learning, and geovisualization techniques. Three different types of spatiotemporal activity data were collected through different data collection approaches: (1) crowdsourced geo-tagged digital photos, representing people's travel activity, were retrieved from the website Panoramio.com through information retrieval techniques; (2) the same techniques were used to crawl crowdsourced GPS trajectory data, and related metadata on users' daily activities, from the website OpenStreetMap.org; and finally (3) preschool children's daily activities and interactions, tagged with time and geographical location, were collected with a novel TabletPC-based behavioral coding system. The proposed methodology is applied to these data to (1) automatically recommend optimal multi-day and multi-stay travel itineraries for travelers based on attractions discovered from geo-tagged photos, (2) automatically detect the movement types of unknown moving objects from GPS trajectories, and (3) explore dynamic social and socio-spatial patterns of preschool children's behavior from both geographic and social perspectives.
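Detecting movement types from raw GPS fixes typically starts from kinematic features. A hypothetical sketch (not the dissertation's code) of two such features, mean speed and mean absolute heading change, computed with the standard haversine and bearing formulas:

```python
import math

def movement_features(track):
    """From (t, lat, lon) fixes, compute mean speed (m/s) and mean
    absolute heading change (degrees) -- inputs to a movement classifier."""
    R = 6371000  # Earth radius, meters
    speeds, turns, prev_bearing = [], [], None
    for (t1, la1, lo1), (t2, la2, lo2) in zip(track, track[1:]):
        phi1, phi2 = math.radians(la1), math.radians(la2)
        dphi, dlmb = phi2 - phi1, math.radians(lo2 - lo1)
        a = math.sin(dphi/2)**2 + math.cos(phi1)*math.cos(phi2)*math.sin(dlmb/2)**2
        dist = 2 * R * math.asin(math.sqrt(a))          # haversine distance
        speeds.append(dist / max(t2 - t1, 1e-9))
        bearing = math.degrees(math.atan2(
            math.sin(dlmb) * math.cos(phi2),
            math.cos(phi1)*math.sin(phi2) - math.sin(phi1)*math.cos(phi2)*math.cos(dlmb)))
        if prev_bearing is not None:
            turns.append(abs((bearing - prev_bearing + 180) % 360 - 180))
        prev_bearing = bearing
    return sum(speeds)/len(speeds), (sum(turns)/len(turns) if turns else 0.0)

walk = [(0, 33.4484, -112.0740), (60, 33.4490, -112.0738), (120, 33.4495, -112.0741)]
print(movement_features(walk))
```

Features like these could then feed any classifier that labels a trajectory as, say, walking, cycling, or driving.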
Contributors: Li, Xun (Author) / Anselin, Luc (Thesis advisor) / Koschinsky, Julia (Committee member) / Maciejewski, Ross (Committee member) / Rey, Sergio (Committee member) / Griffin, William (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 151499
Description
Parkinson's disease, the most prevalent movement disorder of the central nervous system, is a chronic condition that affects more than 1,000,000 U.S. residents and about 3% of the population over the age of 65. The characteristic symptoms include tremors, bradykinesia, rigidity, and impaired postural stability. Current therapy based on augmentation or replacement of dopamine is designed to improve patients' motor performance but often leads to levodopa-induced complications, such as dyskinesia and motor fluctuation. As the disease progresses, clinicians must closely monitor patients in order to identify any complications or decline in motor function as soon as possible. Unfortunately, current clinical assessment for Parkinson's is subjective and mostly influenced by brief observations during patient visits, so improvement or decline in patients' motor function between visits is extremely difficult to assess. This may hamper clinicians in making informed decisions about the course of therapy for Parkinson's patients and could negatively impact clinical care. In this study, we explored new approaches that aim to provide home-based PD assessment and monitoring. By extending disease assessment to the home, the healthcare burden on patients and their families can be reduced, and disease progression can be more closely monitored by physicians. To achieve these aims, two novel approaches have been designed, developed, and validated. The first is a questionnaire-based self-evaluation metric, which estimates PD severity using self-evaluation scores on pre-designed questions. Building on the results of the first approach, a smartphone-based approach was developed. It takes advantage of mobile computing technology and clinical decision support to evaluate motor performance during patients' daily activities and to provide longitudinal disease assessment and monitoring. Both approaches have been validated on PD patients recruited at the movement disorder program of Barrow Neurological Clinic (BNC) at St Joseph's Hospital and Medical Center. The results of the validation tests showed favorable accuracy in detecting and assessing critical symptoms of PD, and point to a promising future for mobile-platform-based PD evaluation and monitoring tools in PD management.
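As one hedged illustration of what a smartphone-based motor assessment might compute (hypothetical code, not the system validated in this study), the fraction of accelerometer signal power in the typical 4-6 Hz parkinsonian rest-tremor band can be estimated with an FFT:

```python
import numpy as np

def tremor_power(accel, fs, band=(4.0, 6.0)):
    """Fraction of accelerometer signal power in the typical
    parkinsonian rest-tremor band (roughly 4-6 Hz)."""
    accel = accel - accel.mean()            # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum[1:].sum()              # ignore residual DC bin
    return spectrum[in_band].sum() / total if total > 0 else 0.0

fs = 100.0                                  # 100 Hz sampling, hypothetical
t = np.arange(0, 10, 1 / fs)
signal = 0.5 * np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
print(f"tremor-band power fraction: {tremor_power(signal, fs):.2f}")
```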
Contributors: Pan, Di (Author) / Petitti, Diana (Thesis advisor) / Greenes, Robert (Committee member) / Johnson, William (Committee member) / Dhall, Rohit (Committee member) / Arizona State University (Publisher)
Created: 2013