Matching Items (14)
Description
Medical students acquire and enhance their clinical skills using various available techniques and resources. As the health care profession has moved toward team-based practice, students and trainees need to practice team-based procedures that involve timely management of clinical tasks and adequate communication with other members of the team. Such team-based procedures include surgical and clinical procedures, some of which are protocol-driven. Cost and time required for individual team-based training sessions, along with other factors, contribute to making the training complex and challenging. A great deal of research has been done on medically focused collaborative virtual reality (VR)-based training for protocol-driven procedures as a cost-effective and time-efficient solution. Most VR-based simulators focus on training individual personnel. Those that do focus on team training provide interactive simulation for only a few scenarios in a collaborative virtual environment (CVE). These simulators are suited for didactic training for cognitive skills development. Training sessions in these simulators require the presence of mentors: the mentor must be present at the training location (either physically or virtually) to evaluate the performance of the team (or an individual). Another issue is that no efficient methodology exists to provide feedback to the trainees during the training session itself (formative feedback). Furthermore, they lack the ability to provide training in acquisition or improvement of psychomotor skills for tasks that require force or touch feedback, such as cardiopulmonary resuscitation (CPR). To address some of these concerns, a novel training system was designed and developed that integrates sensors into a CVE for time-critical medical procedures. The system allows the participants to simultaneously access the CVE and receive training from geographically diverse locations. The system also provides real-time feedback and stores important data during each training/testing session. Finally, this study presents a generalizable collaborative team-training system that can be used across various team-based procedures in medical as well as non-medical domains.
Contributors: Khanal, Prabal (Author) / Greenes, Robert (Thesis advisor) / Patel, Vimla (Thesis advisor) / Smith, Marshall (Committee member) / Gupta, Ashish (Committee member) / Kaufman, David (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency for judgment of their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated that augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but the use of sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. However, having a method to judge skills automatically in real-life conditions should be the ultimate goal, since only with such features would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because our system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a computing system composed of a series of computer vision algorithms and provides on-screen, real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.
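As a rough illustration of the observer-independent composite scoring idea described in this abstract, the sketch below fits a regression model that maps quantitative motion features extracted from video to expert-assigned ratings. The feature names, model choice, and numbers are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical per-trial motion features extracted from surgical task videos:
# [tool path length (cm), number of hand movements, task time (s)]
X = np.array([
    [310.0, 142, 95.0],   # novice trial
    [180.5,  88, 61.0],   # intermediate trial
    [120.2,  54, 43.5],   # expert trial
])
# Expert-assigned global rating scores (illustrative values)
y = np.array([12.0, 19.5, 27.0])

# A simple ridge regression serves as the composite scoring model:
# learned weights combine the individual skill measures into one score.
model = Ridge(alpha=1.0).fit(X, y)
print("learned feature weights:", model.coef_)
print("predicted score for a new trial:", model.predict([[150.0, 70, 50.0]]))
```

In a real system, the feature vectors would come from the computer vision modules and the model would be validated against independent expert ratings before being used for feedback.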
Contributors: Islam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A core principle in multiple national quality improvement strategies is the engagement of chronically ill patients in the creation and execution of their treatment plans. Numerous initiatives are underway to use health information technology (HIT) to support patient engagement; however, the use of HIT and other factors such as health literacy may be significant barriers to engagement for older adults. This qualitative descriptive study sought to explore the ways that older adults with multi-morbidities engaged with their plan of care. Forty participants were recruited through multiple case sampling from two ambulatory cardiology practices. Participants were English-speaking, without a dementia-related diagnosis, and between the ages of 65 and 86. The older adults in this study performed many behaviors to engage in the plan of care, including acting in ways to support health, managing health-related information, attending routine visits with their doctors, and participating in treatment planning. A subset of patients engaged in active decision-making because of the point they had reached in their chronic disease; at that crossroads, they expressed uncertainty over which road to travel. Two factors influenced the engagement of older adults: a relationship with the provider that met the patient's needs, and the distribution of a Meaningful Use clinical summary at the conclusion of the provider visit. Participants described the ways in which the clinical summary helped and hindered their understanding of the care plan.

Insights gained as a result of this study include an understanding of the discrepancies between what the healthcare system expects of patients and their actual behavior when it comes to the creation of a care plan and the ways in which they take care of their health. Further research should examine the ability of various factors to enhance patient engagement. For example, it may be useful to focus on ways to improve the clinical summary to enhance engagement with the care plan and meet standards for a health literate document. Recommendations for the improvement of the clinical summary are provided. Finally, this study explored potential reasons for the infrequent use of online health information by older adults including the trusting relationship they enjoyed with their cardiologist.
Contributors: Jiggins Colorafi, Karen (Author) / Lamb, Gerri (Thesis advisor) / Marek, Karen (Committee member) / Greenes, Robert (Committee member) / Evans, Bronwynne (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This work involved the analysis of a public health system and the design, development, and deployment of an enterprise informatics architecture and sustainable community methods to address problems with the current system. Specifically, assessment of the Nationally Notifiable Disease Surveillance System (NNDSS) was instrumental in shaping the design of the current implementation at the Southern Nevada Health District (SNHD). The result of the system deployment at SNHD was used as a basis for projecting the practical application and benefits of an enterprise architecture. This approach has resulted in a sustainable platform that enhances the practice of public health by improving the quality and timeliness of data, the effectiveness of investigations, and reporting across the continuum.
Contributors: Kriseman, Jeffrey Michael (Author) / Dinu, Valentin (Thesis advisor) / Greenes, Robert (Committee member) / Johnson, William (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Technology in the modern day has ensured that learning of skills and behavior may be both widely disseminated and cheaply available. An example of this is the concept of virtual reality (VR) training. Virtual reality training ensures that learning can be provided often, in a safe simulated setting, and it may be delivered in a manner that makes it engaging while negating the need to purchase special equipment. This thesis presents a case study in the form of a time-critical, team-based medical scenario known as Advanced Cardiac Life Support (ACLS). A framework and methodology associated with the design of a VR trainer for ACLS is detailed. In addition, in order to provide a potentially engaging experience, the simulator was designed to incorporate immersive elements and a multimodal interface (haptic, visual, and auditory). A study was conducted to test two primary hypotheses: that a meaningful transfer of skill is achieved from virtual reality training to real-world mock codes, and that the presence of immersive components in virtual reality leads to an increase in the performance gained. The participant pool consisted of 54 clinicians divided into 9 teams of 6 members each. The teams were categorized into three treatment groups: immersive VR (3 teams), minimally immersive VR (3 teams), and control (3 teams). The study was conducted in 4 phases, from a real-world mock code pretest to assess baselines, through a 30-minute VR training session, to a final mock code to assess the performance change from the baseline. The minimally immersive team was treated as control for the immersive components. The teams were graded, in both VR and mock code sessions, using the evaluation metric used in real-world mock codes. The study revealed that the immersive VR groups saw a greater performance gain from pretest to posttest than the minimally immersive and control groups in the case of the VFib/VTach scenario (~20% to ~5%). Also, the immersive VR groups had a greater performance gain than the minimally immersive groups from the first to the final session of VFib/VTach (29% to -13%) and PEA (27% to 15%).
Contributors: Vankipuram, Akshay (Author) / Li, Baoxin (Thesis advisor) / Burleson, Winslow (Committee member) / Kahol, Kanav (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an “implementation concern” requiring unscalable manual human efforts and governance coordination. Drafting a questionnaire engineered to meet the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it may be imported and deployed into a live system without significant burdens. Dramatic reduction of the time and effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and formalization of the human processes by which this occurs.

This research introduces ARTAKA: Architecture for Real-Time Application of Knowledge Artifacts, as a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally introduce value into existing systems dynamically. This is made possible by service-izing curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form has been left flexible, insofar as implementations comply with ARTAKA’s Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards.
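To make the idea of service-izing a knowledge artifact more concrete, the following sketch shows what a deployment manifest for such an artifact might look like. All field names, identifiers, and values are hypothetical illustrations; they are not drawn from the CES or HSP Marketplace specifications described in the dissertation.

```python
import json

# A purely illustrative manifest for packaging a curated knowledge artifact
# as a deployable service. Field names are hypothetical and are NOT taken
# from the CES or HSP Marketplace standards.
artifact_manifest = {
    "artifact_id": "example.hypertension-screening-questionnaire",
    "version": "1.0.0",
    "knowledge_format": "questionnaire",                       # structured form definition
    "service_endpoint": "/knowledge/hypertension-screening",   # where the service is exposed
    "context_events": ["patient-admitted", "encounter-started"],  # events that would trigger it
    "runtime": "container",                                     # how an orchestrator deploys it
}

# Serializing the manifest is the kind of step an automated orchestration
# pipeline could perform before publishing the artifact to a marketplace.
print(json.dumps(artifact_manifest, indent=2))
```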

Towards the goal of interoperable human processes, ARTAKA’s treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offer immediate benefit: in some cases, with remarkable parity. Analyses of experimentation are provided, with guidelines on how choice aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment.

Portions of this culminating document have also been initiated with Standards Developing Organizations (SDOs), with the intent of ultimately producing normative standards, as have active relationships with other bodies.
Contributors: Lee, Preston Victor (Author) / Dinu, Valentin (Thesis advisor) / Sottara, Davide (Committee member) / Greenes, Robert (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Text mining of biomedical literature and clinical notes is a very active field of research in biomedical science. Semantic analysis is one of the core modules for different Natural Language Processing (NLP) solutions. Methods for calculating the semantic relatedness of two concepts can be very useful in solutions to different problems such as relationship extraction, ontology creation, and question answering [1–6]. Several techniques exist for calculating the semantic relatedness of two concepts, utilizing different knowledge sources and corpora. So far, researchers have attempted to find the best hybrid method for each domain by combining semantic relatedness techniques and data sources manually. This work attempts to eliminate the need to manually combine semantic relatedness methods for any new context or resource by proposing an automated method that finds the best combination of semantic relatedness techniques and resources to achieve the best semantic relatedness score in every context. This may help the research community find the best hybrid method for each context, given the available algorithms and resources.
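A minimal sketch of the automated-combination idea: enumerate combinations of candidate relatedness measures, score each combination against a human-rated gold standard, and keep the best one. The stand-in measures, concept pairs, and averaging scheme below are illustrative assumptions, not the thesis's actual algorithms or data sources.

```python
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

# Stand-in relatedness measures; in practice each would wrap a different
# algorithm and knowledge source (path-based, corpus-based, gloss-based, ...).
def measure_a(c1, c2):  # crude length similarity
    return min(len(c1), len(c2)) / max(len(c1), len(c2))

def measure_b(c1, c2):  # character-set Jaccard
    return len(set(c1) & set(c2)) / max(len(set(c1) | set(c2)), 1)

def measure_c(c1, c2):  # shared-word ratio
    return len(set(c1.split()) & set(c2.split())) / max(len(set((c1 + " " + c2).split())), 1)

measures = {"A": measure_a, "B": measure_b, "C": measure_c}

# Illustrative gold standard: concept pairs with human-assigned relatedness.
gold = [("kidney failure", "renal failure", 0.95),
        ("diabetes", "insulin", 0.80),
        ("fracture", "aspirin", 0.20)]

best = None
for k in range(1, len(measures) + 1):
    for combo in combinations(measures, k):
        # Combine the selected measures by simple averaging.
        scores = [np.mean([measures[m](c1, c2) for m in combo]) for c1, c2, _ in gold]
        rho, _ = spearmanr(scores, [g for _, _, g in gold])
        if np.isnan(rho):
            continue  # skip degenerate (constant-score) combinations
        if best is None or rho > best[1]:
            best = (combo, rho)

print("best combination:", best[0], "with Spearman rho:", round(best[1], 3))
```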
Contributors: Emadzadeh, Ehsan (Author) / Gonzalez, Graciela (Thesis advisor) / Greenes, Robert (Committee member) / Scotch, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, such as pharmacovigilance, via the use of Natural Language Processing (NLP) techniques. One of the critical steps in information extraction pipelines is Named Entity Recognition (NER), where mentions of entities such as diseases are located in text and their entity types are identified. However, the language in social media is highly informal, and user-expressed health-related concepts are often non-technical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and advanced machine learning-based NLP techniques have been underutilized. This work explores the effectiveness of different machine learning techniques, and particularly deep learning, in addressing the challenges associated with extraction of health-related concepts from social media. Deep learning has recently attracted a lot of attention in machine learning research and has shown remarkable success in several applications, particularly imaging and speech recognition. However, thus far, deep learning techniques have been relatively unexplored for biomedical text mining; in particular, this is the first attempt at applying deep learning to health information extraction from social media.

This work presents ADRMine, which uses a Conditional Random Field (CRF) sequence tagger for extraction of complex health-related concepts. It utilizes a large volume of unlabeled user posts for automatic learning of embedding cluster features, a novel application of deep learning in modeling the similarity between tokens. ADRMine significantly improved medical NER performance compared to the baseline systems.
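A generic sketch of the embedding-cluster feature idea: token embeddings learned from unlabeled posts are clustered, and each token's cluster ID is added to the feature dictionary that a CRF sequence tagger would consume. The vocabulary, random embeddings, and feature names below are illustrative, not ADRMine's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy embeddings standing in for vectors learned from a large volume of
# unlabeled user posts (e.g., via word2vec); values here are illustrative.
vocab = ["headache", "migraine", "tylenol", "ibuprofen", "cant", "sleep"]
embeddings = np.random.RandomState(0).rand(len(vocab), 50)

# Cluster the embedding space; each token's cluster ID becomes a feature.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
cluster_of = dict(zip(vocab, kmeans.labels_))

def token_features(tokens, i):
    """Features for token i, in the dict form a CRF tagger (e.g., CRFsuite) expects."""
    tok = tokens[i].lower()
    return {
        "word.lower": tok,
        "emb.cluster": str(cluster_of.get(tok, "UNK")),
        "prev.cluster": str(cluster_of.get(tokens[i - 1].lower(), "UNK")) if i > 0 else "BOS",
    }

sentence = ["cant", "sleep", "after", "ibuprofen"]
print([token_features(sentence, i) for i in range(len(sentence))])
```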

This work also presents DeepHealthMiner, a deep learning pipeline for health-related concept extraction. Most machine learning methods require sophisticated, task-specific manual feature design, which is a challenging step in processing the informal and noisy content of social media. DeepHealthMiner automatically learns classification features using neural networks and a large volume of unlabeled user posts. Using a relatively small labeled training set, DeepHealthMiner could accurately identify most of the concepts, including consumer expressions that were not observed in the training data or in standard medical lexicons, outperforming the state-of-the-art baseline techniques.
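In the same spirit, a minimal neural token classifier might look like the sketch below: each token is represented by an embedding vector and classified into an IOB-style concept label by a small feedforward network. The data, labels, and architecture are placeholders, not DeepHealthMiner's actual pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy token vectors standing in for embeddings learned from unlabeled posts.
rng = np.random.RandomState(1)
X_train = rng.rand(40, 50)                               # 40 tokens, 50-dim embeddings
y_train = rng.choice(["O", "B-ADR", "I-ADR"], size=40)   # illustrative IOB labels

# A small feedforward network classifies each token independently;
# a sequence-aware model would add context windows or recurrence.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1)
clf.fit(X_train, y_train)

X_new = rng.rand(3, 50)
print(clf.predict(X_new))
```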
Contributors: Nikfarjam, Azadeh (Author) / Gonzalez, Graciela (Thesis advisor) / Greenes, Robert (Committee member) / Scotch, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In the modern age, where teams consist of people from disparate locations, remote team training is highly desired. Moreover, team members' overlapping schedules force their mentors to focus on individual training instead of team training. Team training is an integral part of collaborative team work. With the advent of modern technologies such as Web 2.0 and cloud computing, it is possible to revolutionize the delivery of time-critical team training in varied domains such as healthcare, the military, and education. Collaborative Virtual Environments (CVEs), also known as virtual worlds, combined with the existing worldwide footprint of high-speed internet, could make remote team training ubiquitous. Such an integrated system would potentially help actual mentors overcome the challenges in team training. Advanced Cardiac Life Support (ACLS) is a time-critical activity that requires a high-performance team effort. This thesis proposes a system that leverages a virtual world (VW) and provides an integrated learning platform for ACLS case scenarios. The system integrates feedback devices, such as a haptic device, so that real-time feedback can be provided. Participants can log in remotely and work in a team to diagnose the given scenario. They can be trained and tested for ACLS within the virtual world. The system is equipped with persuasive elements that aid in learning. The simulated training in this system was validated for teaching novices the procedural aspects of ACLS. Sixteen participants were divided into four groups (two control groups and two experimental groups) of four participants each. All four groups went through a didactic session where they learned about ACLS and its procedures. A quiz after the didactic session revealed that all four groups had equal knowledge about ACLS. The two experimental groups then went through training and testing in the virtual world. Experimental group 2, which was aided by the persuasive elements, performed better than the control group. To validate the training capabilities of the virtual world system, a final transfer test was conducted in a real-world setting at the Banner Simulation Center on high-fidelity mannequins. The test revealed that the experimental groups (average score 65/100) performed better than the control groups (average score 16/100). Experimental group 2, which was aided by the persuasive elements (average score 70/100), performed better than experimental group 1 (average score 55/100). This shows that persuasive technology can be useful for training purposes.
Contributors: Parab, Sainath (Author) / Kahol, Kanav (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Colorectal cancer is the second-highest cause of cancer-related deaths in the United States, with approximately 50,000 estimated deaths in 2015. The advanced stages of colorectal cancer have a poor five-year survival rate of 10%, whereas diagnosis in the early stages of development has shown a more favorable five-year survival rate of 90%. Early diagnosis of colorectal cancer is achievable if colorectal polyps, a possible precursor to cancer, are detected and removed before developing into malignancy.

The preferred method for polyp detection and removal is optical colonoscopy. A colonoscopic procedure consists of two phases: (1) an insertion phase, during which a flexible endoscope (a flexible tube with a tiny video camera at the tip) is advanced via the anus and then gradually to the end of the colon, called the cecum, and (2) a withdrawal phase, during which the endoscope is gradually withdrawn while colonoscopists examine the colon wall to find and remove polyps. Colonoscopy is an effective procedure and has led to a significant decline in the incidence and mortality of colon cancer. However, despite many screening and therapeutic advantages, 1 out of every 4 polyps and 1 out of 13 colon cancers are missed during colonoscopy.

There are many factors that contribute to missed polyps and cancers, including poor colon preparation, inadequate navigational skills, and fatigue. Poor colon preparation results in a substantial portion of the colon being covered with fecal content, hindering a careful examination of the colon. Inadequate navigational skills can prevent a colonoscopist from examining hard-to-reach regions of the colon that may contain a polyp. Fatigue can manifest itself in the performance of a colonoscopist by decreasing diligence and vigilance during procedures. Lack of vigilance may prevent a colonoscopist from detecting the polyps that briefly appear in the colonoscopy videos. Lack of diligence may result in a hasty examination of the colon that is likely to miss polyps and lesions.

To reduce polyp and cancer miss rates, this research presents a quality assurance system with three components. The first component is an automatic polyp detection system that highlights the regions with suspected polyps in colonoscopy videos. The goal is to encourage more vigilance during procedures. The suggested polyp detection system consists of several novel modules: (1) a new patch descriptor that characterizes image appearance around boundaries more accurately and more efficiently than widely used patch descriptors such as HoG, LBP, and Daisy; (2) a 2-stage classification framework that is able to enhance low-level image features prior to classification. Unlike the traditional way of image classification, where a single patch undergoes the processing pipeline, our system fuses the information extracted from a pair of patches for more accurate edge classification; (3) a new vote accumulation scheme that robustly localizes objects with curvy boundaries in fragmented edge maps. Our voting scheme produces a probabilistic output for each polyp candidate but, unlike existing methods (e.g., Hough transform), does not require any predefined parametric model of the object of interest; and (4) a unique three-way image representation coupled with convolutional neural networks (CNNs) for classifying the polyp candidates. Our image representation efficiently captures a variety of features such as color, texture, shape, and temporal information and significantly improves the performance of the subsequent CNNs for candidate classification. This contrasts with the existing methods that mainly rely on a subset of the above image features for polyp detection. Furthermore, this research is the first to investigate the use of CNNs for polyp detection in colonoscopy videos.
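The sketch below illustrates, in generic terms, how a multi-channel representation of a polyp candidate patch could be fed to a small CNN for polyp vs. non-polyp classification. The channel choices, patch size, and architecture are illustrative assumptions, not the networks described in the dissertation.

```python
import torch
import torch.nn as nn

class CandidateClassifier(nn.Module):
    """A small CNN over a multi-channel candidate patch (illustrative architecture)."""
    def __init__(self, in_channels=3, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (patch_size // 4) ** 2, 2)  # polyp vs. non-polyp

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Each candidate is represented by stacked channels, e.g., an intensity map,
# an edge/shape map, and a temporal-difference map (channel choices are assumptions).
batch = torch.randn(4, 3, 32, 32)
logits = CandidateClassifier()(batch)
print(logits.shape)  # torch.Size([4, 2])
```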

The second component of our quality assurance system is automatic image quality assessment for colonoscopy. The goal is to encourage more diligence during procedures by warning against hasty and low-quality colon examination. We detect a low-quality colon examination by identifying a number of consecutive non-informative frames in videos. We base our methodology for detecting non-informative frames on two key observations: (1) non-informative frames most often show an unrecognizable scene with few details and blurry edges, and thus their information can be locally compressed in a few Discrete Cosine Transform (DCT) coefficients, whereas informative images include many more details and their information content cannot be summarized by a small subset of DCT coefficients; (2) information content is spread all over the image in the case of informative frames, whereas in non-informative frames, depending on image artifacts and degradation factors, details may appear in only a few regions. We use the former observation in designing our global features and the latter in designing our local image features. We demonstrated that the suggested new features are superior to the existing features based on wavelet and Fourier transforms.
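A minimal sketch of the first observation: for a low-detail frame, most DCT energy concentrates in a handful of coefficients, while a detailed frame spreads its energy widely. The feature definition, frame data, and the choice of k are illustrative, not the dissertation's exact global features.

```python
import numpy as np
from scipy.fft import dctn

def dct_energy_concentration(gray_frame, k=64):
    """Fraction of spectral energy captured by the k largest-magnitude DCT
    coefficients; a value near 1.0 suggests a blurry, low-detail frame."""
    coeffs = dctn(gray_frame.astype(float), norm="ortho")
    energy = np.sort(np.abs(coeffs).ravel())[::-1] ** 2
    return energy[:k].sum() / max(energy.sum(), 1e-12)

# Illustrative frames: a nearly uniform (non-informative) one and a detailed one.
rng = np.random.RandomState(0)
blurry = np.full((128, 128), 120.0) + rng.rand(128, 128)        # little detail
detailed = rng.randint(0, 256, size=(128, 128)).astype(float)   # lots of detail

print("blurry frame concentration:  ", round(dct_energy_concentration(blurry), 3))
print("detailed frame concentration:", round(dct_energy_concentration(detailed), 3))
```

Thresholding such a feature (together with local features) over consecutive frames is one way a run of non-informative frames could be flagged.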

The third component of our quality assurance system is a 3D visualization system. The goal is to provide colonoscopists with feedback about the regions of the colon that have remained unexamined during colonoscopy, thereby helping them improve their navigational skills. The suggested system is based on a new 3D reconstruction algorithm that combines depth and position information. We propose to use a depth camera and a tracking sensor to obtain depth and position information. Our system contrasts with existing work in which the depth and position information are unreliably estimated from the colonoscopy frames. We conducted a use case experiment demonstrating that the suggested 3D visualization system can determine the unseen regions of the navigated environment. However, due to technology limitations, we were not able to evaluate our 3D visualization system using a phantom model of the colon.
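To illustrate the core geometric step of combining depth and position information, the sketch below back-projects a depth image into world-space 3D points using camera intrinsics and a tracked camera pose. The intrinsics, pose, and depth values are placeholders; the actual reconstruction algorithm in the dissertation is more involved.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, pose):
    """Convert a depth image plus a camera pose (4x4 world-from-camera matrix)
    into world-space 3D points. Intrinsics and pose here are illustrative."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    return (pts_cam @ pose.T)[:, :3]

# Toy inputs: a flat depth map 0.5 m away and an identity pose from the tracker.
depth = np.full((4, 4), 0.5)
pose = np.eye(4)
points = backproject(depth, fx=300.0, fy=300.0, cx=2.0, cy=2.0, pose=pose)
print(points.shape)   # (16, 3) world-space points to accumulate into a 3D model
```

Accumulating such point sets over time, and marking which surface regions they cover, is one way the unexamined regions of the colon could be visualized.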
Contributors: Tajbakhsh, Nima (Author) / Liang, Jianming (Thesis advisor) / Greenes, Robert (Committee member) / Scotch, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2015