Capturing and presenting high-quality data can be challenging for free clinics due to a lack of resources and technology avoidance. If free clinics are unable to present impactful data to current and potential donors, funding may be limited, restricting the care provided to underserved and vulnerable populations. The following quality improvement project addresses the utilization of information systems within a free clinic. For one month, volunteer providers completed an appointment summary form for each patient seen in the clinic. Electronic form submissions (E = 110) were compared to paper form submissions (P = 196), with data quality measured by form completeness scores. Welch's t-test was used to determine the statistical significance of the difference between electronic and paper form completeness scores (E = 9.7, P = 8.5; p < .001). Findings suggest that the use of electronic data collection tools within a free clinic produces more complete and accurate data. Barriers associated with technology utilization in this under-resourced environment were substantial; findings on overcoming some of these barriers may be useful for future exploration of health information technology in under-resourced, technology-avoidant settings. The results warrant further investigation of the relationships among the quality of free clinic data, information management systems, provider willingness to use technology, and funding opportunities in free clinics.
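Welch's t-test is the appropriate choice here because the two groups have unequal sizes (110 vs. 196) and the test does not assume equal variances. A minimal sketch of the statistic and its degrees of freedom, using hypothetical completeness scores rather than the study's actual data:

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two
    independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se2 = va / na + vb / nb              # squared standard error of the mean difference
    t = (ma - mb) / se2 ** 0.5
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative only: made-up completeness scores, not the study's data
electronic = [10, 9, 10, 9, 10]
paper = [8, 9, 8, 8, 9]
t, df = welch_t(electronic, paper)
```

The resulting t statistic would be compared against a t distribution with df degrees of freedom to obtain the p-value; in practice a library routine such as SciPy's `ttest_ind(..., equal_var=False)` handles this step.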
Under the supervision of Professor Robert Hammond, I handled the programming and record-keeping needs of a project at the Arizona Public Service Solar Test and Research Center (STAR). During my first year there, I became aware that STAR's Data Management System (DMS) needed an overhaul due to an increasingly volatile data set that was quickly growing in size. STAR management was looking for a software system that would retrieve and store data automatically, provide a friendly user interface, minimize space usage on a crowded hard drive, offer quick access to charts, and generate statistical analyses of solar plant operation. STAR's current DMS consists of four top-level procedures, and its latest version began operation two and a half years ago. The goal of the following chapters is to document and critique the software development process I used to bring the Visual Basic for Excel version of the current software components into existence. In addition, the conclusion will look at the future of STAR's DMS as management introduces an Access database version of the system.
This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing price classification example, while simultaneously considering its impact from a data management perspective at both the software and hardware levels. To begin, the universally accepted model of an artificial neuron is broken down into its key components and analyzed for functionality by relating it back to its biological counterpart. The role of a neuron is then described in the context of a neural network, with equal emphasis on how training proceeds for an individual neuron and for an entire network. Using supervised learning, the neural network is trained on three main factors for housing price classification: total number of rooms, number of bathrooms, and square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs compares to the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, making the collected profiler data more precise during the program's execution. The program breaks each stage of the neuron's training process into distinct functions. In addition to this more functional code style, the struct data type serves as the underlying data structure for the project, not only representing the neuron but also holding its training and test data. Once the neuron is fully trained, its test results are graphed to visually depict how well it learned from its sample training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective at the software and hardware levels.
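The thesis implements the neuron in C with structs; as a language-neutral illustration of the same idea, the following is a minimal Python sketch of one sigmoid neuron trained by stochastic gradient descent on a toy, made-up binary classification of houses from three normalized features (rooms, bathrooms, square footage). The data and hyperparameters are hypothetical, not taken from the thesis:

```python
import math
import random

class Neuron:
    """One artificial neuron: weighted sum of inputs -> sigmoid activation."""

    def __init__(self, n_inputs):
        self.w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.b = 0.0

    def forward(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))        # sigmoid activation

    def train(self, data, lr=0.5, epochs=5000):
        """Stochastic gradient descent on squared error."""
        for _ in range(epochs):
            for x, target in data:
                y = self.forward(x)
                # dE/dz for squared error through the sigmoid: (y - t) * y * (1 - y)
                err = (y - target) * y * (1.0 - y)
                self.w = [wi - lr * err * xi for wi, xi in zip(self.w, x)]
                self.b -= lr * err
```

A usage sketch: features scaled to [0, 1], label 1 for "expensive" houses. After training, `round(neuron.forward(x))` gives the predicted class, and the per-epoch error can be logged to graph the learning curve as the thesis describes.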
This thesis investigates the use of MS Power BI in the case company's heterogeneous computing environment. The empirical evidence was collected through the authors' own observations and exposure to the modeling of dashboards, supported by external findings from interviews, published articles, and academic journals, and by conversations with leading experts at the 'Dynamic Talks Seattle/Redmond: Big Data Analytics' conference in Washington. Power BI modeling is effective for advancing the development of statistical thinking and data-retrieval skills, finding trends and patterns in data representations, and making predictions. Computer-based data modeling gave meaning to mathematical results and supported examining their implications with simple charts to improve perception. Querying and other add-ins that would count as affordances in other BI software, with some of their complexity removed in Power BI, make modeling data an easier undertaking for report builders. Using computer-based qualitative data analysis software, this paper details the opportunities and challenges of data modeling with dashboards. Simple linear regression is used for the case study only.
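The simple linear regression used in the case study fits a line y = a + b·x by ordinary least squares, which Power BI exposes through trend lines and DAX functions. A minimal sketch of the underlying computation, with illustrative data rather than the case company's:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for simple linear regression y = a + b*x.
    Returns the intercept a and slope b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance numerator
    sxx = sum((x - mx) ** 2 for x in xs)                    # variance numerator
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical example: fit a perfect line y = 1 + 2x
a, b = linear_fit([1, 2, 3, 4], [3, 5, 7, 9])
```

The fitted slope and intercept can then be used for the kind of trend-finding and prediction the abstract attributes to dashboard modeling.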