This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 1 - 5 of 5
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for the evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency so that their skills can be judged. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but the use of sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. Nevertheless, a method to judge skills automatically in real-life conditions should be the ultimate goal, since only with such features would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical-tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because the system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a series of computer vision algorithms and provides on-screen, real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.
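The composite scoring idea, collapsing several quantitative motion measurements into one observer-independent score, can be sketched with a simple regression model. The features below (tool-path length, motion smoothness, completion time) and the ridge-regression fit are illustrative assumptions, not the thesis's actual feature set or learner:

```python
import numpy as np

# Illustrative stand-ins for skill-measuring features detected in task videos,
# e.g. tool-path length, motion smoothness, completion time (hypothetical
# names; the thesis defines its own feature set).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                      # 40 recorded tasks, 3 features
true_w = np.array([-0.8, 1.2, -0.5])              # simulated expert-rating weights
y = X @ true_w + rng.normal(scale=0.1, size=40)   # noisy expert scores

# Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

def composite_score(features):
    """Collapse a feature vector for one task video into a single skill score."""
    return float(features @ w)
```

Any regressor trained against expert ratings would serve here; the point is that one number summarizes many objective measurements.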
Contributors: Islam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Technology in the modern day has ensured that the learning of skills and behaviors can be both widely disseminated and cheaply available. An example of this is virtual reality (VR) training, which ensures that learning can be provided often, in a safe simulated setting, and delivered in an engaging manner while negating the need to purchase special equipment. This thesis presents a case study in the form of a time-critical, team-based medical scenario known as Advanced Cardiac Life Support (ACLS). A framework and methodology associated with the design of a VR trainer for ACLS are detailed. In addition, to potentially provide an engaging experience, the simulator was designed to incorporate immersive elements and a multimodal interface (haptic, visual, and auditory). A study was conducted to test two primary hypotheses: first, that a meaningful transfer of skill is achieved from virtual reality training to real-world mock codes; and second, that the presence of immersive components in virtual reality leads to an increase in the performance gained. The participant pool consisted of 54 clinicians divided into 9 teams of 6 members each. The teams were categorized into three treatment groups: immersive VR (3 teams), minimally immersive VR (3 teams), and control (3 teams). The study was conducted in 4 phases, from a real-world mock-code pretest to assess baselines, through a 30-minute VR training session, to a final mock code to assess the performance change from the baseline. The minimally immersive group was treated as a control for the immersive components. The teams were graded, in both VR and mock-code sessions, using the evaluation metric used in real-world mock codes. The study revealed that the immersive VR groups saw a greater performance gain from pretest to posttest than the minimally immersive and control groups in the case of the VFib/VTach scenario (~20% vs. ~5%). The immersive VR groups also had a greater performance gain than the minimally immersive groups from the first to the final session for VFib/VTach (29% vs. -13%) and PEA (27% vs. 15%).
Contributors: Vankipuram, Akshay (Author) / Li, Baoxin (Thesis advisor) / Burleson, Winslow (Committee member) / Kahol, Kanav (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Blur is an important attribute in the study and modeling of the human visual system. In this work, 3D blur discrimination experiments are conducted to measure the just-noticeable additional blur required to differentiate a target blur from a reference blur level. Past studies on blur discrimination have measured the sensitivity of the human visual system to blur using 2D test patterns. In this dissertation, subjective tests are performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. The results indicate that, in the symmetric stereo viewing case, binocular disparity does not affect the blur discrimination thresholds for the selected 3D test patterns. In the asymmetric viewing case, the blur discrimination thresholds decreased, and the decrease in threshold values was found to be dominated by the eye observing the higher blur.

The second part of the dissertation focuses on texture granularity in the context of 2D images. A texture granularity database, referred to as GranTEX, consisting of textures with varying granularity levels, is constructed. A subjective study is conducted to measure the perceived granularity level of the textures in the GranTEX database. An objective index that automatically measures the perceived granularity level of textures is also presented. It is shown that the proposed granularity metric correlates well with the subjective granularity scores and outperforms the other methods presented in the literature.

A subjective study is conducted to assess the effect of compression on textures with varying degrees of granularity. A logarithmic function model is proposed as a fit to the subjective test data. It is demonstrated that the proposed model can be used for rate-distortion control by allowing the automatic selection of the needed compression ratio for a target visual quality. The proposed model can also be used for visual quality assessment by providing a measure of the visual quality for a target compression ratio.
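A logarithmic quality model of this kind can be illustrated with a short fit-and-invert sketch. The sample (compression ratio, mean opinion score) pairs below are made up for illustration; only the functional form Q = a·ln(r) + b follows the text:

```python
import numpy as np

# Made-up (compression ratio, mean opinion score) pairs for illustration.
ratios = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
mos = np.array([4.6, 4.1, 3.5, 2.9, 2.4])        # quality falls as compression rises

# Least-squares fit of Q = a*ln(r) + b
A = np.column_stack([np.log(ratios), np.ones_like(ratios)])
a, b = np.linalg.lstsq(A, mos, rcond=None)[0]

def quality_at(ratio):
    """Visual quality assessment: predicted quality for a compression ratio."""
    return a * np.log(ratio) + b

def ratio_for_quality(q_target):
    """Rate-distortion control: invert the model to select the compression
    ratio that should achieve a target visual quality."""
    return float(np.exp((q_target - b) / a))
```

This mirrors the two uses named in the text: the forward model gives a quality estimate for a target compression ratio, and its inverse selects the compression ratio needed for a target quality.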

The effect of texture granularity on the quality of synthesized textures is studied. A subjective study is presented to assess the quality of synthesized textures with varying levels of texture granularity, using different types of texture synthesis methods. This work also proposes a reduced-reference visual quality index, referred to as the delta texture granularity index, for assessing the visual quality of synthesized textures.
Contributors: Subedar, Mahesh M (Author) / Karam, Lina (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Visual attention (VA) is the study of mechanisms that allow the human visual system (HVS) to selectively process relevant visual information. This work focuses on the subjective and objective evaluation of computational VA models for the distortion-free case as well as in the presence of image distortions.

Existing VA models are traditionally evaluated using VA metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though there is a considerable number of objective VA metrics, no study has validated that these metrics are adequate for the evaluation of VA models. This work constructs a VA Quality (VAQ) Database by subjectively assessing the prediction performance of VA models on distortion-free images. Additionally, shortcomings in existing metrics are discussed through illustrative examples, and a new metric is proposed that uses local weights based on fixation density and overcomes these flaws. The proposed VA metric outperforms all other popular existing metrics in terms of correlation with subjective ratings.
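One way to realize local weighting by fixation density is a weighted correlation between the predicted saliency map and the ground-truth fixation density map, with the weights drawn from the density itself. This is a sketch of the idea, not the thesis's actual metric:

```python
import numpy as np

def weighted_similarity(saliency, fixation_density, eps=1e-8):
    """Weighted correlation between a predicted saliency map and a
    ground-truth fixation density map. Local weights come from the
    fixation density itself, so regions that attracted many fixations
    dominate the score (a sketch of the idea, not the thesis metric)."""
    s = saliency.ravel().astype(float)
    f = fixation_density.ravel().astype(float)
    w = f / (f.sum() + eps)                 # weights from local fixation density
    ms, mf = np.sum(w * s), np.sum(w * f)   # weighted means
    cov = np.sum(w * (s - ms) * (f - mf))
    norm = np.sqrt(np.sum(w * (s - ms) ** 2) * np.sum(w * (f - mf) ** 2))
    return cov / (norm + eps)
```

A perfect prediction scores 1; an unrelated saliency map scores near 0, and errors in densely fixated regions are penalized more than errors elsewhere.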

In practice, image quality is affected by a host of factors at several stages of the image processing pipeline, such as acquisition, compression, and transmission. However, no existing study has discussed the subjective and objective evaluation of visual saliency models in the presence of distortion. In this work, a Distortion-based Visual Attention Quality (DVAQ) subjective database is constructed to evaluate the quality of VA maps for images in the presence of distortions. To create this database, saliency maps obtained from images subjected to various types of distortions, including blur, noise, and compression, at varying levels of severity are rated by human observers in terms of their visual resemblance to the corresponding ground-truth fixation density maps. The performance of traditionally used as well as recently proposed VA metrics is evaluated by correlating their scores with the human subjective ratings. In addition, an objective evaluation of 20 state-of-the-art VA models is performed using the top-performing VA metrics, together with a study of how the VA models' prediction performance changes with different types and levels of distortions.
Contributors: Gide, Milind Subhash (Author) / Karam, Lina J (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The depth richness of a scene translates into a spatially variable defocus blur in the acquired image. Blurring can mislead computational image understanding; therefore, blur detection can be used for selective image enhancement of blurred regions and the application of image understanding algorithms to sharp regions. This work focuses on blur detection and its application to image enhancement.

This work proposes a spatially varying defocus blur detection method based on the quotient of spectral bands; additionally, to avoid computationally intensive algorithms for segmenting foreground and background regions, a global threshold defined using weakly textured regions of the input image is proposed. Quantitative results in the precision-recall space, as well as qualitative results, outperform current state-of-the-art algorithms while keeping computational requirements at competitive levels.
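The band-quotient idea can be sketched per patch: defocus blur suppresses high spatial frequencies, so the ratio of high-band to low-band spectral energy drops in blurred regions. The radial frequency split below is an illustrative assumption; the thesis defines its own bands and threshold:

```python
import numpy as np

def band_quotient(patch):
    """Quotient of spectral bands for one patch: high-frequency energy
    divided by low-frequency energy. Blurred patches score lower because
    defocus attenuates high frequencies. (A sketch; the thesis defines
    its own spectral bands and global threshold.)"""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)    # radial frequency of each bin
    cutoff = min(h, w) / 4.0
    low = spec[r <= cutoff].sum()
    high = spec[r > cutoff].sum()
    return high / (low + 1e-8)
```

A map of per-patch quotients, compared against a global threshold estimated from weakly textured regions, then yields the spatially varying blur mask.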

Imperfections in the curvature of lenses can lead to image radial distortion (IRD). Computer vision applications can be drastically affected by IRD. This work proposes a novel robust radial distortion correction algorithm based on alternate optimization using two cost functions tailored for the estimation of the center of distortion and radial distortion coefficients. Qualitative and quantitative results show the competitiveness of the proposed algorithm.
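The alternate-optimization structure can be sketched with the single-coefficient division model (an assumption for illustration; the thesis uses its own cost functions and coefficients): hold the center of distortion fixed and search the radial coefficient, then hold the coefficient fixed and refine the center, using the straightness of undistorted lines as the cost.

```python
import numpy as np

def undistort(pts, center, k):
    """Single-coefficient division model: distorted -> undistorted points."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1)
    return center + d / (1.0 + k * r2)[:, None]

def line_residual(pts):
    """Cost: squared distance of points to their best-fit line (via SVD)."""
    c = pts - pts.mean(axis=0)
    return np.linalg.svd(c, compute_uv=False)[-1] ** 2

# Synthesize observations of a straight line seen through the division model.
k_true, center_true = 1e-6, np.array([320.0, 240.0])
xs = np.linspace(0.0, 600.0, 30)
line = np.column_stack([xs, 0.4 * xs + 50.0])            # collinear targets
d = line - center_true
r_u = np.sqrt(np.sum(d ** 2, axis=1))
r_d = (1.0 - np.sqrt(1.0 - 4.0 * k_true * r_u ** 2)) / (2.0 * k_true * r_u)
distorted = center_true + d * (r_d / r_u)[:, None]

# Alternate optimization: k-step with the center fixed, then a center-step.
center, k = np.array([300.0, 250.0]), 0.0                # rough initial guesses
for _ in range(5):
    ks = np.linspace(0.0, 2e-6, 201)
    k = min(ks, key=lambda kk: line_residual(undistort(distorted, center, kk)))
    steps = [np.array([dx, dy]) for dx in (-5.0, 0.0, 5.0) for dy in (-5.0, 0.0, 5.0)]
    center = min((center + s for s in steps),
                 key=lambda c: line_residual(undistort(distorted, c, k)))
```

Grid search keeps the sketch short; each half-step could equally use a 1-D or 2-D continuous optimizer, and the alternation is what makes the joint center-plus-coefficient estimation tractable.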

Blur is one of the causes of visual discomfort in stereopsis. Sharpening with traditional algorithms can produce an interdifference between the two views, which causes eyestrain and visual fatigue for the viewer. A sharpness enhancement method for stereo images that incorporates binocular vision cues and depth information is presented. A perceptual evaluation and quantitative results based on the metric of interdifference deviation are reported; the results of the proposed algorithm are competitive with state-of-the-art stereo algorithms.

Digital images and videos are produced every day in astonishing amounts. Consequently, the market-driven demand for higher-quality content is constantly increasing, which leads to the need for image quality assessment (IQA) methods. A training-free, no-reference image sharpness assessment method is proposed, based on the singular value decomposition of perceptually weighted, normalized gradients of relevant pixels in the input image. Results on six subject-rated, publicly available databases show competitive performance compared with state-of-the-art algorithms.
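The gradient-SVD core of such a sharpness measure can be sketched as follows; the perceptual weighting, gradient normalization, and selection of relevant pixels described in the text are omitted here, so this is only the skeleton of the idea:

```python
import numpy as np

def sharpness_index(img):
    """No-reference sharpness proxy: SVD of the stacked image gradients.
    Sharper images have stronger gradients and hence a larger dominant
    singular value. (A skeleton of the gradient-SVD idea; the thesis adds
    perceptual weighting and selection of relevant pixels.)"""
    gy, gx = np.gradient(img.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])   # N x 2 gradient matrix
    s = np.linalg.svd(G, compute_uv=False)          # two singular values
    return s[0] / np.sqrt(G.shape[0])               # normalize by pixel count
```

Being training-free, the index needs no reference image or learned model: blurring an image weakens its gradients and lowers the dominant singular value.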
Contributors: Andrade Rodas, Juan Manuel (Author) / Spanias, Andreas (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2019