Matching Items (64)

Multiple-Channel Detection in Active Sensing

Description

The problem of detecting the presence of a known signal in multiple channels of additive white Gaussian noise, such as occurs in active radar with a single transmitter and multiple geographically distributed receivers, is addressed via coherent multiple-channel techniques. A replica of the transmitted signal is treated as one channel in an M-channel detector, with the remaining M-1 channels comprising data from the receivers. It is shown that the distribution of the eigenvalues of a Gram matrix is invariant to the presence of the signal replica on one channel, provided the other M-1 channels are independent and contain only white Gaussian noise. Thus, the thresholds representing false alarm probabilities for detectors based on functions of these eigenvalues remain valid when one channel is known to contain more than noise. The derivation is supported by results from Monte Carlo simulations. The performance of the largest eigenvalue as a detection statistic in the active case is examined and compared with the normalized matched filter detector in two- and three-channel cases.
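The invariance claim can be checked with a small Monte Carlo sketch. All sizes, the replica waveform, and the choice of channel normalization below are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials = 3, 64, 2000  # channels, samples per channel, Monte Carlo trials

def largest_gram_eigenvalue(X):
    # Gram matrix of the normalized channel vectors; detector thresholds
    # are set on functions of its eigenvalues.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return np.linalg.eigvalsh(Xn @ Xn.T)[-1]  # eigvalsh sorts ascending

replica = rng.standard_normal(N)  # stand-in for the known transmit waveform

# H0, case 1: all M channels are white Gaussian noise.
noise_only = [largest_gram_eigenvalue(rng.standard_normal((M, N)))
              for _ in range(trials)]

# H0, case 2: one channel carries the deterministic replica, the rest noise.
with_replica = []
for _ in range(trials):
    X = rng.standard_normal((M, N))
    X[0] = replica
    with_replica.append(largest_gram_eigenvalue(X))

# The two empirical distributions should agree closely, so false-alarm
# thresholds computed for the all-noise case remain valid with the replica.
print(np.mean(noise_only), np.mean(with_replica))
```

Because white Gaussian noise is isotropic, the inner product of any fixed unit vector with a random noise direction has the same distribution as that of two noise directions, which is why the two cases match.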

Date Created
  • 2013-05

Intervention Strategies for the DoD Acquisition Process Using Simulation

Description

The current Enterprise Requirements and Acquisition Model (ERAM), a discrete event simulation of the major tasks and decisions within the DoD acquisition system, identifies several what-if intervention strategies to improve program completion time. However, processes that contribute to the program acquisition completion time were not explicitly identified in the simulation study. This research seeks to determine the acquisition processes that contribute significantly to total simulated program time in the acquisition system for all programs reaching Milestone C. Specifically, this research examines the effect of increased scope management, technology maturity, and decreased variation and mean process times in post-Design Readiness Review contractor activities by performing additional simulation analyses. Potential policies are formulated from the results to further improve program acquisition completion time.
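The kind of intervention studied here, reducing the mean and variance of post-Design Readiness Review process times, can be illustrated with a toy Monte Carlo. The phase structure and lognormal durations below are stand-ins, not ERAM's actual task network:

```python
import numpy as np

rng = np.random.default_rng(5)

def program_time(mean_scale=1.0, cv=0.5, n=10_000):
    # Toy stand-in for ERAM: total acquisition time as a sum of sequential
    # phase durations (lognormal). Phase means and count are illustrative.
    means = np.array([12.0, 18.0, 24.0]) * mean_scale  # months per phase
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(means) - sigma**2 / 2  # so each phase's mean equals `means`
    return rng.lognormal(mu, sigma, size=(n, len(means))).sum(axis=1)

baseline = program_time()
# Intervention: shorter, less variable contractor activities.
intervened = program_time(mean_scale=0.9, cv=0.3)
print(baseline.mean().round(1), intervened.mean().round(1))
```

Comparing the two simulated distributions, rather than single runs, is what lets a study like this attribute completion-time improvements to specific processes.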

Date Created
  • 2013-05

Open-Source Feature Selection Tool for Medical Imaging Diagnosis

Description

Open-source image analytics and data mining software is widely available but can be overly complicated and non-intuitive for physicians and medical researchers to use. The ASU-Mayo Clinic Imaging Informatics Lab has developed an in-house pipeline to process medical images, extract imaging features, and develop multi-parametric models to assist disease staging and diagnosis. The tools have been used extensively in medical studies of brain tumors, breast cancer, liver cancer, Alzheimer's disease, and migraine. Recognizing the need among users in the medical field for a simplified interface and streamlined functionality, this project aims to democratize this pipeline so that it is more readily available to health practitioners and third-party developers.

Date Created
  • 2016-12

MRI-Based Texture Analysis to Differentiate Sinonasal Squamous Cell Carcinoma from Inverted Papilloma

Description

ABSTRACT BACKGROUND AND PURPOSE: Sinonasal inverted papilloma (IP) can harbor squamous cell carcinoma (SCC). Consequently, differentiating these tumors is important. The objective of this study was to determine if MRI-based texture analysis can differentiate SCC from IP and provide supplementary information to the radiologist. MATERIALS AND METHODS: Adult patients who had IP or SCC resected were eligible (coexistent IP and SCC were excluded). Inclusion required tumor size greater than 1.5 cm and a pre-operative MRI with axial T1, axial T2, and axial T1 post-contrast sequences. Five well-established texture analysis algorithms were applied to an ROI from the largest tumor cross-section. For a training dataset, machine-learning algorithms were used to identify the most accurate model, and performance was also evaluated in a validation dataset. Based on three separate blinded reviews of the ROI, isolated tumor, and entire images, two neuroradiologists predicted tumor type in consensus. RESULTS: The IP and SCC cohorts were matched for age and gender, while SCC tumor volume was larger (p=0.001). The best classification model achieved similar accuracies for training (17 SCC, 16 IP) and validation (7 SCC, 6 IP) datasets of 90.9% and 84.6% respectively (p=0.537). The machine-learning accuracy for the entire cohort (89.1%) was better than that of the neuroradiologist ROI review (56.5%, p=0.0004) but not significantly different from the neuroradiologist review of the tumors (73.9%, p=0.060) or entire images (87.0%, p=0.748). CONCLUSION: MRI-based texture analysis has potential to differentiate SCC from IP and may provide incremental information to the neuroradiologist, particularly for small or heterogeneous tumors.
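The train-then-validate classification step described above can be sketched with scikit-learn. The feature values, feature count, and choice of random forest below are assumptions for illustration; only the cohort sizes (17/16 training, 7/6 validation) come from the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-lesion texture feature vectors extracted
# from the ROI on the largest tumor cross-section.
n_features = 20
def cohort(n, shift):
    return rng.standard_normal((n, n_features)) + shift

X_train = np.vstack([cohort(17, 0.8), cohort(16, 0.0)])  # 17 SCC, 16 IP
y_train = np.array([1] * 17 + [0] * 16)
X_val = np.vstack([cohort(7, 0.8), cohort(6, 0.0)])      # 7 SCC, 6 IP
y_val = np.array([1] * 7 + [0] * 6)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```

Holding out a separate validation cohort, as the study does, guards against the optimistic bias of reporting training accuracy alone.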

Date Created
  • 2016-12

An Integrated Framework for Patient Access Staffing Decision

Description

The challenge of healthcare delivery has attracted widespread attention since the report published by the World Health Organization in 2000, which ranked the US 37th in overall health system performance among 191 Member States. In addition, Davis et al. (2007) demonstrated that healthcare costs in the US were higher than in all other countries studied, even though the care provided was not correspondingly better. The growing population in the US, combined with continued medical advances, has increased the demand for quality healthcare services. With this growth, however, comes the challenge of managing rising costs and maintaining efficient operations while meeting patients' service-level expectations. Research has explored methods from systems engineering, lean process improvement, and mathematical programming to improve healthcare operations. In this project, we are interested in a patient access (patient registration) problem. The key research question is: what is the optimal number of patient admitting points, considering both hospital cost and the service level of patient access? To answer this question, we propose using queueing theory to evaluate scenarios in a multi-objective decision setting implemented in Excel VBA (Visual Basic for Applications). The first objective is to provide a generic Excel-based model with a user-friendly interface so that users can visualize outcomes by changing chosen parameters and understand model sensitivities. The second objective is to evaluate the use of queueing models in this patient access staffing decision. The data were provided by the Healthcare Excellence Institute (HEI), a Phoenix-based consulting company with more than 8 years of experience improving healthcare operations. HEI has several hospital clients interested in determining the optimal number of admitting points, which motivated this research project.
Please note that, due to business confidentiality, the data used in this thesis have been modified.
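A minimal version of this staffing decision can be posed as an M/M/c queue and solved with the Erlang C formula. The arrival rate, service rate, and 5-minute service-level target below are hypothetical figures, not HEI's data:

```python
import math

def erlang_c(c, a):
    # M/M/c probability that an arriving patient must wait,
    # with offered load a = lambda / mu (requires a < c).
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * c / (c - a)
    return top / (s + top)

def avg_wait(c, lam, mu):
    # Mean time in queue (hours) with c admitting points.
    a = lam / mu
    if a >= c:
        return float("inf")  # unstable: too few admitting points
    return erlang_c(c, a) / (c * mu - lam)

lam, mu = 18.0, 4.0  # hypothetical arrivals/hour, registrations/hour per point
target = 5 / 60      # service-level goal: mean wait under 5 minutes

# Smallest staffing level that meets the target.
c_star = next(c for c in range(1, 20) if avg_wait(c, lam, mu) < target)
print("admitting points needed:", c_star)
```

Sweeping c and reporting the resulting waits is exactly the kind of what-if analysis that maps naturally onto a parameterized Excel VBA interface.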

Date Created
  • 2012-05

Cost Driven Agent Based Simulation of the Department of Defense Acquisition System

Description

The Department of Defense (DoD) acquisition system is a complex system riddled with cost and schedule overruns. These overruns are serious issues because the acquisition system is responsible for aiding U.S. warfighters; a failing acquisition process is therefore a potential threat to our nation's security. Furthermore, the DoD acquisition system is responsible for properly allocating billions of taxpayers' dollars and employs many civilians and military personnel. Much research has been done on the acquisition system in the past with little impact or success. One reason for this lack of success is the absence of accurate models with which to test theories. This research continues the effort on the Enterprise Requirements and Acquisition Model (ERAM), a discrete event simulation of the DoD acquisition system. We propose to extend ERAM using agent-based simulation principles, given the many interactions among the subsystems of the acquisition system. We initially identify ten sub-models needed to simulate the acquisition system. This research focuses on three sub-models related to the budget of acquisition programs. In this thesis, we present the data collection, data analysis, initial implementation, and initial validation needed to facilitate these sub-models and lay the groundwork for a full agent-based simulation of the DoD acquisition system.

Date Created
  • 2016-05

Multi-Parametric MRI and Texture Analysis to Visualize Spatial Histologic Heterogeneity and Tumor Extent in Glioblastoma

Description

Background
Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM.
Methods
We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set.
Results
We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients).
Conclusion
Multi-parametric MRI and texture analysis can help characterize and visualize GBM’s spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.
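The cross-validated training step in the Methods can be sketched with scikit-learn. The feature vectors, their dimensionality, and the SVM choice are illustrative assumptions; the study's own features come from CE-MRI, DSC-MRI, DTI, and texture algorithms:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-ins for per-biopsy MRI-texture feature vectors.
X = np.vstack([rng.standard_normal((30, 12)) + 0.9,  # high tumor content (>=80% nuclei)
               rng.standard_normal((30, 12))])       # low tumor content (<80% nuclei)
y = np.array([1] * 30 + [0] * 30)

# Cross-validated accuracy on the training cohort, analogous in spirit
# to the paper's 85% cross-validated figure.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("mean CV accuracy:", round(scores.mean(), 3))
```

Confirming the model on a held-out validation set afterwards, as the study does, checks that the cross-validated figure was not an artifact of feature selection on the training data.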

Date Created
  • 2015-11-24

AI in Radiology: How the Adoption of an Accountability Framework can Impact Technology Integration in the Expert-Decision-Making Job Space

Description

Rapid advancements in Artificial Intelligence (AI), Machine Learning, and Deep Learning technologies are widening the playing field for automated decision assistants in healthcare. The field of radiology offers a unique platform for this technology due to its repetitive work structure, its ability to leverage large data sets, and its high potential for clinical and social impact. Several technologies in cancer screening, such as Computer Aided Detection (CAD), have crossed the barrier from research into practice through successful outcomes with patient data (Morton, Whaley, Brandt, & Amrami, 2006; Patel et al., 2018). Technologies such as the IBM Medical Sieve are generating excitement over the potential for increased impact through the addition of medical record information ("Medical Sieve Radiology Grand Challenge", 2018). As the capabilities of automation increase and become part of expert-decision-making jobs, however, the careful consideration of its integration into human systems is often overlooked. This paper aims to identify how healthcare professionals and systems engineers implementing and interacting with automated decision-making aids in radiology should take bureaucratic, legal, professional, and political accountability concerns into consideration. This Accountability Framework is modeled after Romzek and Dubnick's (1987) public administration framework and expanded through an analysis of literature on accountability definitions and examples in the military, healthcare, and research sectors. A cohesive understanding of this framework and the human concerns it raises helps drive the questions that, if fully addressed, create the potential for a successful integration and adoption of AI in radiology and ultimately the care environment.

Date Created
  • 2019-05

Fault Detection and Simulation for Large Building HVAC Systems

Description

The primary purpose of this paper is to evaluate the energy impacts of faults in building heating, ventilation, and air conditioning systems and to determine which systems' faults have the greatest effect on energy consumption. With this knowledge, building engineers and technicians will be better able to implement data-driven solutions for building fault detection and diagnostics.

In the United States alone, commercial buildings consume 18% of the country’s energy. Due to this high percentage of energy consumption, many efforts are being made to make buildings more energy efficient. Heating, ventilation, and air conditioning (HVAC) systems are made to provide acceptable air quality and thermal comfort to building occupants. In large buildings, a demand-controlled HVAC system is used to save energy by dynamically adjusting the ventilation of the building. These systems rely on a multitude of sensors, actuators, dampers, and valves in order to keep the building ventilation efficient. Using a fault analysis framework developed by the University of Alabama and the National Renewable Energy Laboratory, building fault modes were simulated in the EnergyPlus whole building energy simulation program. The model and framework are based on the Department of Energy’s Commercial Prototype Building – Medium Office variant. A total of 3,002 simulations were performed in the Atlanta climate zone, with 129 fault cases and 41 fault types. These simulations serve two purposes: to validate the previously developed fault simulation framework, and to analyze how each fault mode affects the building over the simulation period.
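A simple way to analyze a batch of such simulations is to rank each fault case by its energy penalty relative to a fault-free baseline run. The fault names and annual totals below are hypothetical placeholders, not the framework's actual labels or EnergyPlus outputs:

```python
# Hypothetical post-processing of EnergyPlus runs: rank fault cases by
# percentage increase in annual energy use over a fault-free baseline.
baseline_kwh = 1_000_000.0
fault_runs_kwh = {
    "chilled_water_valve_stuck": 1_180_000.0,
    "outdoor_air_damper_leak": 1_060_000.0,
    "supply_air_temp_sensor_bias": 1_025_000.0,
}

impact = {name: 100 * (kwh - baseline_kwh) / baseline_kwh
          for name, kwh in fault_runs_kwh.items()}
for name, pct in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: +{pct:.1f}% energy use")
```

Sorting the 129 fault cases this way is what surfaces the finding below that water-system faults dominate energy cost and comfort impacts.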

The results demonstrate the effects of faults on HVAC systems, and validate the scalability of the framework. The most critical fault cases for the Medium Office building are those that affect the water systems of the building, as they cause the most harm to overall energy costs and occupant comfort.

Date Created
  • 2019-12

Prediction of near-term risk of developing breast cancer using computerized features from bilateral mammograms

Description

Asymmetry of bilateral mammographic tissue density and patterns is a potentially strong indicator of having or developing breast abnormalities or early cancers. The purpose of this study is to design and test global asymmetry features from bilateral mammograms to predict the near-term risk of women developing detectable high-risk breast lesions or cancer at the next sequential screening mammography examination. The image dataset includes mammograms acquired from 90 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme comprising four image processing modules (image preprocessing, suspicious region segmentation, image feature extraction, and classification) was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.754 ± 0.024 when applying the new computer-aided diagnosis (CAD) scheme to our testing dataset. The positive predictive value and the negative predictive value were 0.58 and 0.80, respectively.
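The reported metrics (AUC, PPV, NPV) can be computed from per-woman scores as sketched below. The score distributions and the 45/45 split are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical CAD asymmetry scores: higher when bilateral mammographic
# asymmetry is larger.
pos = rng.normal(1.0, 1.0, 45)  # women who later developed detectable lesions
neg = rng.normal(0.0, 1.0, 45)  # women who remained negative

def auc(pos_scores, neg_scores):
    # Mann-Whitney formulation of AUC: P(positive score > negative score),
    # with ties counted half.
    diff = pos_scores[:, None] - neg_scores[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

threshold = 0.5
tp = int((pos > threshold).sum()); fn = len(pos) - tp
fp = int((neg > threshold).sum()); tn = len(neg) - fp
ppv = tp / (tp + fp)  # precision among flagged women
npv = tn / (tn + fn)  # reassurance among unflagged women
print(f"AUC={auc(pos, neg):.3f} PPV={ppv:.2f} NPV={npv:.2f}")
```

Unlike AUC, the PPV and NPV depend on the chosen operating threshold, which is why the abstract reports them as a separate pair of figures.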

Date Created
  • 2014-07-01