A collection of scholarly work published by and supporting the Center for Earth Systems Engineering and Management (CESEM) at Arizona State University.

CESEM focuses on "earth systems engineering and management," providing a basis for understanding, designing, and managing the complex integrated built/human/natural systems that increasingly characterize our planet.

Works in this collection are particularly important in linking engineering, technology, and sustainability, and are increasingly intertwined with the work of ASU's Global Institute of Sustainability (GIOS).

Collaborating Institutions:
School of Sustainable Engineering and the Built Environment (SSEBE), Center for Earth Systems Engineering and Management

Description

After a brief introduction to Functional Magnetic Resonance Imaging (fMRI), this paper presents some common misunderstandings and problems that are frequently overlooked in the application of the technology. Then, in three progressively more involved examples, the paper demonstrates (a) how the use of fMRI in pre-surgical mapping shows promise, (b) how its use in lie detection seems questionable, and (c) how employing it to define personhood is useless and pointless. Finally, in making a case for emergentism, the paper concludes that fMRI cannot really tell us as much about ourselves as we had hoped. Since we are more than our brains, even a perfect fMRI would not be enough.

Description

Essay scoring is a difficult and contentious business. The problem is exacerbated when there are no “right” answers for the essay prompts. This research developed a simple toolset for essay analysis by integrating a freely available Latent Dirichlet Allocation (LDA) implementation into a homegrown assessment assistant. The complexity of the essay assessment problem is demonstrated and illustrated with a representative collection of open-ended essays. This research also explores the use of “expert vectors” or “keyword essays” for maximizing the utility of LDA with small corpora. While, by itself, LDA appears insufficient for adequately scoring essays, it is quite capable of classifying responses to open-ended essay prompts and providing insight into the responses. This research also reports some trends that might be useful in scoring essays once more data are available. Observations on these insights, together with a discussion of the use of LDA in qualitative assessment, lead to proposals that may assist other researchers in developing more complete essay assessment software.
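The approach described above — classifying open-ended responses by comparing their LDA topic distributions against an "expert vector" built from a keyword essay — can be sketched as follows. This is a minimal illustration, not the authors' toolset: the sample essays, the keyword essay, and the topic count are all hypothetical, and scikit-learn's LDA stands in for whatever freely available implementation the research used.

```python
# Sketch: classify open-ended essay responses with LDA by ranking their
# topic distributions against an "expert vector" (a keyword essay).
# All texts below are hypothetical examples, not data from the study.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

essays = [
    "Sustainability requires balancing economic growth with ecological limits.",
    "Engineered systems must adapt to changing climate conditions over time.",
    "Economic policy shapes how cities invest in green infrastructure.",
]
# Hypothetical "keyword essay" representing the topic an expert expects.
keyword_essay = "sustainability ecology climate green infrastructure limits"

# Build a term-count matrix over the responses plus the keyword essay,
# so the expert vector lives in the same topic space as the responses.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(essays + [keyword_essay])

# Fit LDA; each row of `topics` is a document's topic distribution.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)

expert = topics[-1]            # topic distribution of the keyword essay
scores = topics[:-1] @ expert  # similarity of each response to the expert vector

# Rank responses from most to least similar to the expert vector.
for i in np.argsort(scores)[::-1]:
    print(f"essay {i}: similarity {scores[i]:.3f}")
```

With a small corpus like this, the absolute similarity values are noisy, which is consistent with the abstract's observation that LDA alone is insufficient for scoring; the ranking, however, is the kind of classification signal the toolset could surface to a human assessor.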