Matching Items (78)
149307-Thumbnail Image.png
Description
Continuous advancements in biomedical research have resulted in the production of vast amounts of scientific data and literature discussing them. The ultimate goal of computational biology is to translate these large amounts of data into actual knowledge of the complex biological processes and accurate life science models. The ability to rapidly and effectively survey the literature is necessary for the creation of large-scale models of the relationships among biomedical entities, as well as for hypothesis generation to guide biomedical research. To reduce the effort and time spent in performing these activities, an intelligent search system is required. Even though many systems aid in navigating through this wide collection of documents, the vastness and depth of this information overload can be overwhelming. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort, but also facilitate discovery of the unknown information implicitly conveyed in the texts. This thesis presents the different approaches used for large-scale biomedical named entity recognition, and the challenges faced in each. It also proposes BioEve: an integrative framework that fuses faceted search with information extraction to provide a search service addressing the user's desire for "completeness" of the query results, not just the top-ranked ones. This information extraction system enables discovery of important semantic relationships between entities such as genes, diseases, drugs, and cell lines, as well as events, from biomedical text on MEDLINE, the largest publicly available database of the world's biomedical journal literature. It is an innovative search and discovery service that makes it easier to search, navigate, and discover knowledge hidden in life sciences literature. To demonstrate the utility of this system, this thesis also details a prototype enterprise-quality search and discovery service that helps researchers with guided, step-by-step query refinement by suggesting concepts enriched in intermediate results, thereby facilitating a "discover more as you search" paradigm.
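The "suggesting concepts enriched in intermediate results" step can be illustrated with a small sketch. This is not BioEve's actual ranking method; the function name, the lift score, and the toy corpus are all illustrative assumptions, presuming entity mentions have already been extracted per document:

```python
from collections import Counter

def suggest_enriched_concepts(corpus, result_ids, top_k=3):
    """Rank concepts by how over-represented they are in the current
    result set relative to the whole corpus (a simple lift score)."""
    corpus_counts = Counter(c for ents in corpus.values() for c in ents)
    result_counts = Counter(c for doc_id in result_ids for c in corpus[doc_id])
    n_corpus = len(corpus)
    n_results = len(result_ids)
    # lift = P(doc mentions concept | results) / P(doc mentions concept | corpus)
    lift = {
        c: (result_counts[c] / n_results) / (corpus_counts[c] / n_corpus)
        for c in result_counts
    }
    # Highest lift first; ties broken alphabetically for determinism
    return sorted(lift, key=lambda c: (-lift[c], c))[:top_k]

# Toy corpus: document id -> extracted entity mentions
corpus = {
    "d1": ["BRCA1", "breast cancer", "human"],
    "d2": ["BRCA1", "human"],
    "d3": ["TP53", "human"],
    "d4": ["apoptosis", "human"],
}
# Suppose an intermediate query matched d1 and d2: the ubiquitous
# concept "human" is deprioritized, while the enriched ones surface.
print(suggest_enriched_concepts(corpus, {"d1", "d2"}))
# ['BRCA1', 'breast cancer', 'human']
```

A guided refinement loop would then let the user click an enriched concept to filter the result set and repeat.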
ContributorsKanwar, Pradeep (Author) / Davulcu, Hasan (Thesis advisor) / Dinu, Valentin (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2010
147677-Thumbnail Image.png
Description

As much as SARS-CoV-2 has altered the way humans live since the beginning of 2020, this virus's deadly nature has required clinical testing to meet 2020's demands of higher throughput, higher accuracy and higher efficiency. Information technology has allowed institutions, like Arizona State University (ASU), to make strategic and operational changes to combat the SARS-CoV-2 pandemic. At ASU, information technology was one of the six facets identified in the ongoing review of the ASU Biodesign Clinical Testing Laboratory (ABCTL) among business, communications, management/training, law, and clinical analysis. The first chapter of this manuscript covers the background of clinical laboratory automation and details the automated laboratory workflow to perform ABCTL’s COVID-19 diagnostic testing. The second chapter discusses the usability and efficiency of key information technology systems of the ABCTL. The third chapter explains the role of quality control and data management within ABCTL’s use of information technology. The fourth chapter highlights the importance of data modeling and 10 best practices when responding to future public health emergencies.

ContributorsWoo, Sabrina (Co-author) / Leung, Michael (Co-author) / Kandan, Mani (Co-author) / Knox, Garrett (Co-author) / Compton, Carolyn (Thesis director) / Dudley, Sean (Committee member) / School of Life Sciences (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
147542-Thumbnail Image.png
Description

As much as SARS-CoV-2 has altered the way humans live since the beginning of 2020, this virus's deadly nature has required clinical testing to meet 2020's demands of higher throughput, higher accuracy and higher efficiency. Information technology has allowed institutions, like Arizona State University (ASU), to make strategic and operational changes to combat the SARS-CoV-2 pandemic. At ASU, information technology was one of the six facets identified in the ongoing review of the ASU Biodesign Clinical Testing Laboratory (ABCTL) among business, communications, management/training, law, and clinical analysis. The first chapter of this manuscript covers the background of clinical laboratory automation and details the automated laboratory workflow to perform ABCTL’s COVID-19 diagnostic testing. The second chapter discusses the usability and efficiency of key information technology systems of the ABCTL. The third chapter explains the role of quality control and data management within ABCTL’s use of information technology. The fourth chapter highlights the importance of data modeling and 10 best practices when responding to future public health emergencies.

ContributorsLeung, Michael (Co-author) / Kandan, Mani (Co-author) / Knox, Garrett (Co-author) / Woo, Sabrina (Co-author) / Compton, Carolyn (Thesis director) / Dudley, Sean (Committee member) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
147552-Thumbnail Image.png
Description

This project is designed as part of the multi-student ASU Biodesign Clinical Testing Laboratory (ABCTL) thesis project sponsored and organized by Dr. Carolyn Compton, professor of Life Sciences at ASU and medical director with the ABCTL. This project divides students into teams with Business, Law, Laboratory, IT, and Documentary focused groups, with the goal of providing a comprehensive overview of the operations of the ABCTL as a reference for other institutions and to produce a documentary film about the laboratory. As a member of the IT team, this writeup will focus on quality control throughout the transfer of data in the testing process, security and privacy of data, HIPAA and regulatory compliance, and accessibility of data while maintaining such restrictions.

ContributorsKnox, Garrett (Co-author) / Leung, Michael (Co-author) / Kandan, Mani (Co-author) / Woo, Sabrina (Co-author) / Compton, Carolyn (Thesis director) / Dudley, Sean (Committee member) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
147796-Thumbnail Image.png
Description

As much as SARS-CoV-2 has altered the way humans live since the beginning of 2020, this virus's deadly nature has required clinical testing to meet 2020's demands of higher throughput, higher accuracy and higher efficiency. Information technology has allowed institutions, like Arizona State University (ASU), to make strategic and operational changes to combat the SARS-CoV-2 pandemic. At ASU, information technology was one of the six facets identified in the ongoing review of the ASU Biodesign Clinical Testing Laboratory (ABCTL) among business, communications, management/training, law, and clinical analysis. The first chapter of this manuscript covers the background of clinical laboratory automation and details the automated laboratory workflow to perform ABCTL’s COVID-19 diagnostic testing. The second chapter discusses the usability and efficiency of key information technology systems of the ABCTL. The third chapter explains the role of quality control and data management within ABCTL’s use of information technology. The fourth chapter highlights the importance of data modeling and 10 best practices when responding to future public health emergencies.

ContributorsKandan, Mani (Co-author) / Leung, Michael (Co-author) / Woo, Sabrina (Co-author) / Knox, Garrett (Co-author) / Compton, Carolyn (Thesis director) / Dudley, Sean (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
168708-Thumbnail Image.png
Description
Software systems can exacerbate and cause contemporary social inequities. As such, scholars and activists have scrutinized sociotechnical systems like those used in facial recognition technology or predictive policing using the frameworks of algorithmic bias and dataset bias. However, these conversations are incomplete without study of data models: the structural, epistemological, and technical frameworks that shape data. In Modeling Power: Data Models and the Production of Social Inequality, I elucidate the connections between relational data modeling techniques and manifestations of systems of power in the United States, specifically white supremacy and cisgender normativity. This project has three distinct parts. First, I historicize early publications by E. F. Codd, Peter Chen, Miles Smith & Diane Smith, and J. R. Abrial to demonstrate that now-taken-for-granted data modeling techniques were products of their social and technical moments and, as such, reinforced dominant systems of power. I further connect database reification techniques to contemporary racial analyses of reification via the work of Cheryl Harris. Second, I reverse engineer Android applications (with Jadx and apktool) to uncover the relational data models within. I analyze DAO annotations, create entity-relationship diagrams, and then examine those resultant models, again linking them back to systems of race and gender power. I craft a method for performing a reverse engineering investigation within a specific sociotechnical context -- a situated analysis of the contextual epistemological frames embedded within relational paradigms. Finally, I develop a relational data model that integrates insights from the project’s reverse and historical engineering phases. In my speculative engineering process, I suggest that the temporality of modern digital computing is incommensurate with the temporality of modern transgender lives. 
Following this, I speculate and build a trans-inclusive data model that demonstrates uses of reification to actively subvert systems of racialized and gendered power. By promoting aspects of social identity to first-order objects within a data model, I show that additional “intellectual manageability” is possible through reification. Through each part, I argue that contemporary approaches to the social impacts of software systems are incomplete without data models. Data models structure algorithmic opportunities. As algorithms continue to reinforce systems of inequality, data models provide opportunities for intervention and subversion.
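The idea of promoting an aspect of identity to a first-order object can be sketched schematically. This is not the dissertation's actual model; the schema, table and column names, and the validity-date design are hypothetical, shown only to illustrate reifying an identity claim as its own row (which can change over time) rather than a fixed column on a person record:

```python
import sqlite3

# Hypothetical sketch: gender is not a column frozen onto `person`;
# each identity claim is its own first-order row with an effective date.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE gender_identity (
    id         INTEGER PRIMARY KEY,
    person_id  INTEGER REFERENCES person(id),
    label      TEXT,        -- self-described, not an enumerated type
    valid_from TEXT         -- ISO date the identity claim takes effect
);
""")
conn.execute("INSERT INTO person VALUES (1, 'A. Example')")
conn.executemany(
    "INSERT INTO gender_identity (person_id, label, valid_from) VALUES (?, ?, ?)",
    [(1, "woman", "2015-01-01"), (1, "nonbinary", "2020-06-01")],
)
# Current identity = the claim with the most recent effective date;
# earlier claims remain in the model rather than being overwritten.
row = conn.execute("""
    SELECT label FROM gender_identity
    WHERE person_id = 1 ORDER BY valid_from DESC LIMIT 1
""").fetchone()
print(row[0])  # nonbinary
```

The design choice the sketch illustrates: because identity is a first-order object, history is preserved and queries choose which claim to surface, rather than the schema enforcing a single immutable value.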
ContributorsStevens, Nikki Lane (Author) / Wernimont, Jacqueline D (Thesis advisor) / Michael, Katina (Thesis advisor) / Richter, Jennifer (Committee member) / Duarte, Marisa E. (Committee member) / Arizona State University (Publisher)
Created2022
Description
There exists extensive research on the use of twisty puzzles, such as the Rubik's Cube, in educational contexts to assist in developing critical thinking skills and in teaching abstract concepts, such as group theory. However, the existing research does not consider the use of twisty puzzles in developing language proficiency. Furthermore, there remain methodological issues in integrating standard twisty puzzles into a class curriculum due to the ease with which erroneous cube twists occur, leading to a puzzle scramble that deviates from the intended teaching goal. To address these issues, an extensive examination of the "smart cube" market took place in order to determine whether a device that virtualizes twisty puzzles while maintaining the intuitive tactility of manipulating such puzzles can be employed both to fill the language education void and to mitigate the potential frustration experienced by students who unintentionally scramble a puzzle due to executing the wrong moves. This examination revealed the presence of Bluetooth smart cubes, which are capable of interfacing with a companion web or mobile application that visualizes and reacts to puzzle manipulations. This examination also revealed the presence of a device called the WOWCube, which is a 2x2x2 smart cube entertainment system that has 24 Liquid Crystal Display (LCD) screens, one for each face's square, enabling better integration of the application with the puzzle hardware. Developing applications both for the Bluetooth smart cube using React Native and for the WOWCube demonstrated the higher feasibility of developing with the WOWCube due to its streamlined development kit as well as its ability to tie the application to the device hardware, enhancing the tactile immersion of the players with the application itself. Using the WOWCube, a word puzzle game featuring three game modes was implemented to assist in teaching players English vocabulary. 
Because the game incorporates features that enable dynamic puzzle generation and resetting, players who participated in a user survey found the game compelling and felt that it exercised their critical thinking skills. This demonstrates the feasibility of smart cube applications in developing both critical thinking and language skills.
ContributorsHreshchyshyn, Jacob (Author) / Bansal, Ajay (Thesis advisor) / Mehlhase, Alexandra (Committee member) / Baron, Tyler (Committee member) / Arizona State University (Publisher)
Created2023
168644-Thumbnail Image.png
Description
The purpose of this study was to evaluate the role a peer-driven technology acceptance model (PDTAM) in the form of a Community of Practice (CoP) played in assisting users in the acceptance of Trellis technologies at the University of Arizona. Constituent Relationship Management (CRM) technologies are becoming more common in higher education, helping to track interactions, streamline processes, and support customized experiences for students. Unfortunately, not all users are receptive to new technologies, and subsequent adoption can be slow. While the technology adoption literature provides insight into what motivates individuals to accept or reject new technologies, the most prevalent technology adoption theory, the Technology Acceptance Model (TAM; Davis, 1986), was used herein. I used TAM to explore technology acceptance, specifically a user's Perceived Ease of Use (PEU) and Perceived Usefulness (PU). In this MMAR study, I used TAM (Davis, 1986) as well as Everett Rogers's (1983) Diffusion of Innovations Theory (DOI) to evaluate the impact of the CoP mentioned above on user adoption. Additionally, I added Perceived Value (PV) as a third construct to the TAM. Using pre- and post-intervention surveys, observation, and interviews to both collect and analyze data on the impacts of my CoP intervention, I determined that the CoPs did assist in more thoroughly diffusing knowledge share, which reportedly led to improved PEU, PU, and PV in the treatment group. Specifically, the peer-to-peer mentoring that occurred in the CoPs helped users feel empowered to use the capabilities. Additionally, while the CoPs reportedly improved PEU, PU, and PV, the peer-to-peer model and the Trellis technologies still have not matured enough to realize their total value to campus.
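Scoring the three TAM constructs from pre- and post-intervention surveys can be sketched as follows. The item names, Likert values, and the scoring rule (mean of a construct's items across respondents) are illustrative assumptions, not the study's actual instrument:

```python
from statistics import mean

# Hypothetical survey data: each respondent rates several 1-5 Likert
# items per construct; a construct's score is the mean of its items.
def construct_scores(responses, items_by_construct):
    scores = {}
    for construct, items in items_by_construct.items():
        scores[construct] = round(
            mean(r[i] for r in responses for i in items), 2
        )
    return scores

items = {"PEU": ["peu1", "peu2"], "PU": ["pu1", "pu2"], "PV": ["pv1"]}
pre  = [{"peu1": 2, "peu2": 3, "pu1": 3, "pu2": 2, "pv1": 2},
        {"peu1": 3, "peu2": 3, "pu1": 4, "pu2": 3, "pv1": 3}]
post = [{"peu1": 4, "peu2": 4, "pu1": 4, "pu2": 5, "pv1": 4},
        {"peu1": 5, "peu2": 4, "pu1": 5, "pu2": 4, "pv1": 4}]

print(construct_scores(pre, items))   # {'PEU': 2.75, 'PU': 3.0, 'PV': 2.5}
print(construct_scores(post, items))  # {'PEU': 4.25, 'PU': 4.5, 'PV': 4.0}
```

Comparing the pre- and post-intervention means per construct is the simplest way to summarize the reported PEU/PU/PV improvement; the study's actual analysis may differ.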
ContributorsHodge, Nikolas (Author) / Beardsley, Audrey (Thesis advisor) / Neumann, William (Committee member) / Wolf, Leigh (Committee member) / Arizona State University (Publisher)
Created2022
158767-Thumbnail Image.png
Description
The difficulty of demonstrating a significant return on investment from the use of advanced data analytics has led to a lack of utilization of this tool. The most likely explanation for this phenomenon is the difficulty of incorporating non-financial metrics in the higher levels of analysis that are fully salient and derived in a manner that can be understood and trusted by organizational leaders. Another challenge that has confounded the use of advanced analytics by the leadership of organizations is the widely accepted belief that models are oftentimes developed with an insufficient number of variables that are expected to have an impact, which inhibits extrapolation of results for use in real-world decision making. This research identifies factors that contribute to the underutilization of analytics models in managerial decisions by leadership of the produce industry, and explores a variety of potential tools including descriptive analytics and dashboards that are able to provide predictive, prescriptive, and more advanced cognitive methods of decision making for use by organizational leadership. By understanding the disconnect between availability of the advanced data analysis tools and use of such tools by organizational leadership, this research assists in identifying the programs and resources that should be developed and presented as opportunities for support in the industrial decision-making process. This dissertation explores why managers within the produce industry underutilize higher levels of data analytics and whether it is possible to increase their levels of cognitive comfort. It shows that by providing leadership with digestible and rudimentary business experiments, they become more comfortable with more complex data analytics and then are better able to utilize dashboards and other tools within their decision-making models. 
As experiments are explained to managers, they become as comfortable conducting experiments as they are using dashboards, and thus better able to evaluate the benefits of each.
ContributorsGlassman, Jeremy Britz (Author) / St. Louis, Robert (Thesis advisor) / Shao, Benjamin (Committee member) / Manfredo, Mark (Committee member) / Arizona State University (Publisher)
Created2020
189217-Thumbnail Image.png
Description
Component-based models are commonly employed to simulate discrete dynamical systems. These models lend themselves to formalizing the structures of systems at multiple levels of granularity. Visual development of component-based models serves to simplify the iterative and incremental model specification activities. The Parallel Discrete Event System Specification (DEVS) formalism offers a flexible yet rigorous approach for decomposing a whole model into its components or, alternatively, composing a whole model from components. While different concepts, frameworks, and tools offer a variety of visual modeling capabilities, most pose limitations, such as the inability to visualize multiple model hierarchies at any level with arbitrary depths. Ideally, the visual and persistent layout of any number of hierarchy levels of models can be maintained and navigated seamlessly. Persistent storage is another capability needed for the modeling, simulating, verifying, and validating lifecycle. These are important features to improve the demanding task of creating and changing modular, hierarchical simulation models. This thesis proposes a new approach and develops a tool for the visual development of models. This tool supports storing and reconstructing graphical models using a NoSQL database. It offers unique capabilities important for developing increasingly larger and more complex models essential for analyzing, designing, and building Digital Twins.
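Storing and reconstructing a hierarchical model as a single document can be sketched minimally. This is not the tool's actual schema; the helper names and JSON layout are assumptions, illustrating how a coupled model's arbitrary hierarchy depth nests naturally in a document-oriented (NoSQL-style) representation:

```python
import json

# Hypothetical helpers: an atomic model is a leaf document; a coupled
# model holds its components inline, so hierarchy nests to any depth.
def atomic(name):
    return {"name": name, "kind": "atomic"}

def coupled(name, components):
    return {"name": name, "kind": "coupled", "components": list(components)}

model = coupled("factory", [
    coupled("line1", [atomic("press"), atomic("welder")]),
    atomic("inspector"),
])

doc = json.dumps(model)      # persist the whole hierarchy as one document
restored = json.loads(doc)   # reconstruct it losslessly for the visual editor

def depth(m):
    """Number of hierarchy levels below and including this model."""
    if m["kind"] == "atomic":
        return 1
    return 1 + max(depth(c) for c in m["components"])

print(depth(restored))  # 3
```

A document database would store `doc` as a single record; the round trip shown here is the essential property a visual editor needs to save and reopen a model at any hierarchy level.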
ContributorsMohite, Sheetal Chandrakant (Author) / Sarjoughian, Hessam S (Thesis advisor) / Bryan, Chris (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created2023