Matching Items (3)

A Framework for Measuring Human Uncertainty of Autonomous Vehicles with Specific Attention to the Inclusion of Empathy: Can Human Eyes Reveal Surprise?

Description

Currently, autonomous vehicles are evaluated by how well they interact with humans, without evaluating how well humans interact with them. Since people are not going to switch over to autonomous vehicles unanimously, attention must be given to how well these new vehicles signal intent to human drivers from the driver’s point of view. Ineffective communication will lead to unnecessary discomfort among drivers, caused by an underlying uncertainty about what an autonomous vehicle is or isn’t about to do. Recent studies suggest that humans tend to fixate on areas of higher uncertainty, so scenarios with a higher number of vehicle fixations can be reasoned to be more uncertain. We provide a framework for measuring human uncertainty and use it to measure the effect of empathetic vs. non-empathetic agents. We used a simulated driving environment to create recorded scenarios in which the autonomous vehicle was controlled by either an empathetic or a non-empathetic agent. The driving interaction consists of two vehicles approaching an uncontrolled intersection. These scenarios were played to twelve participants while their gaze was recorded to track what they were fixating on. The overall intent was to provide an analytical framework as a tool for evaluating autonomous driving features; in this case, we chose to evaluate how effective it was to include empathetic behaviors in the autonomous vehicle’s decision making. A t-test analysis of the gaze data indicated that empathy did not in fact reduce uncertainty, although additional testing of this hypothesis will be needed due to the small sample size.
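The t-test comparison described in this abstract can be sketched as follows. This is a minimal illustration only: the fixation counts are invented, and the paired design (each participant viewing both conditions) is an assumption, not a detail confirmed by the abstract.

```python
# Hypothetical sketch of the gaze-based uncertainty comparison: fixation
# counts on the autonomous vehicle per participant, under the empathetic
# vs. non-empathetic agent conditions. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 12

# Invented fixation counts for each condition (one value per participant).
empathetic = rng.poisson(lam=20, size=n_participants)
non_empathetic = rng.poisson(lam=22, size=n_participants)

# Paired t-test, assuming each participant viewed both conditions.
t_stat, p_value = stats.ttest_rel(empathetic, non_empathetic)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With only twelve participants, a non-significant result is weak evidence either way, which is why the abstract flags the small sample size.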


Date Created
2021-05

Competency Assessment in Nursing Using Simulation: A Generalizability Study and Scenario Validation Process

Description

The measurement of competency in nursing is critical to ensure safe and effective care of patients. This study had two purposes. First, the psychometric characteristics of the Nursing Performance Profile (NPP), an instrument used to measure nursing competency, were evaluated using generalizability theory and a sample of 18 nurses in the Measuring Competency with Simulation (MCWS) Phase I dataset. The relative magnitudes of various error sources and their interactions were estimated in a generalizability study involving a fully crossed, three-facet random design, with nurse participants as the object of measurement and scenarios, raters, and items as the three facets. A design corresponding to that of the MCWS Phase I data--involving three scenarios, three raters, and 41 items--showed nurse participants contributed the greatest proportion of total variance (50.00%), followed, in decreasing magnitude, by rater (19.40%), the two-way participant x scenario interaction (12.93%), and the two-way participant x rater interaction (8.62%). The generalizability (G) coefficient was .65 and the dependability coefficient was .50. In decision-study designs minimizing the number of scenarios, the desired G coefficients of .70 and .80 were reached with three scenarios and five raters, and with five scenarios and nine raters, respectively. In designs minimizing the number of raters, G coefficients of .72 and .80 were reached with three raters and five scenarios, and with four raters and nine scenarios, respectively. A dependability coefficient of .71 was attained with six scenarios and nine raters, or with seven raters and nine scenarios. Achieving high reliability with designs involving fewer raters may be possible with enhanced rater training to decrease the variance components for rater main and interaction effects.
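The decision-study arithmetic above can be sketched in a few lines. In a fully crossed person x scenario x rater design, the relative G coefficient divides person variance by person variance plus relative error, where each interaction component is divided by the number of levels of the facets involved. The variance components below are invented for illustration; they are not the MCWS estimates, and this is only the standard textbook formula, not the study's actual computation.

```python
# Sketch of a relative G coefficient in a fully crossed p x s x r design,
# and a small decision (D) study varying scenarios and raters.
# Variance components are invented, not the MCWS Phase I estimates.

def g_coefficient(var_p, var_ps, var_pr, var_psr_e, n_s, n_r):
    """Person variance over person variance plus relative error."""
    rel_error = var_ps / n_s + var_pr / n_r + var_psr_e / (n_s * n_r)
    return var_p / (var_p + rel_error)

# Invented components: person, person x scenario, person x rater,
# and person x scenario x rater confounded with residual.
components = dict(var_p=0.50, var_ps=0.13, var_pr=0.09, var_psr_e=0.28)

for n_s, n_r in [(3, 3), (3, 5), (5, 9)]:
    g = g_coefficient(**components, n_s=n_s, n_r=n_r)
    print(f"{n_s} scenarios, {n_r} raters: G = {g:.2f}")
```

Adding scenarios or raters shrinks the relative-error term, which is why the abstract's D-study designs trade off the two facets to reach target coefficients.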
The second part of this study involved the design and implementation of a validation process for evidence-based human patient simulation scenarios in assessment of nursing competency. A team of experts validated the new scenario using a modified Delphi technique, involving three rounds of iterative feedback and revisions. In tandem, the psychometric study of the NPP and the development of a validation process for human patient simulation scenarios both advance and encourage best practices for studying the validity of simulation-based assessments.


Date Created
2014

Assessing measurement invariance and latent mean differences with bifactor multidimensional data in structural equation modeling

Description

Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses, at both the population and sample levels. Datasets were generated using bifactor models with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and had minimal bias in latent mean estimation. However, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. The effects of fitting the misspecified single-factor models, on the other hand, differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplification of the analysis model could lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
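The bifactor data-generating model described above gives every indicator a loading on a general factor plus, for a subset of indicators, a loading on one orthogonal group factor. A minimal sketch of generating such data follows; the loadings, item structure, and sample size are invented for illustration and do not reproduce the study's simulation conditions.

```python
# Generate data from a simple bifactor model: each indicator loads on a
# general factor g; items 6-8 additionally load on one group factor s.
# All loadings and sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 500          # observations
n_items = 9      # indicators

lam_g = np.full(n_items, 0.6)   # general-factor loadings
lam_s = np.zeros(n_items)       # group-factor loadings
lam_s[6:] = 0.5                 # only items 6-8 load on the group factor

g = rng.standard_normal(n)      # general factor scores
s = rng.standard_normal(n)      # group factor scores, orthogonal to g

# Unique variances chosen so each indicator has unit total variance.
resid_var = 1.0 - lam_g**2 - lam_s**2
e = rng.standard_normal((n, n_items)) * np.sqrt(resid_var)

# Observed indicators: general part + group part + unique part.
y = np.outer(g, lam_g) + np.outer(s, lam_s) + e

# Items sharing a group factor correlate more strongly than items
# related only through the general factor.
corr = np.corrcoef(y, rowvar=False)
print(round(corr[6, 7], 2), round(corr[0, 1], 2))
```

Fitting a single-factor model to such data ignores the extra covariance among items 6-8, which is the kind of misspecification whose consequences for MI testing the study examines.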


Date Created
2018