Matching Items (2)
Description
In times of fast-paced technology, the ability to discern quality differences between a reproduction and an original work of art has new urgency. The use of digital reproductions in the classroom is a useful and convenient teaching tool, but it can convey visual distortions, particularly with regard to texture, size, and color. Art educators often struggle to strike a balance between incorporating digital technology and fostering an appreciation for experiences with original artworks. The purpose of this study was to examine the ways in which Dewey's theory of experiential learning explains how thoroughly high school students differentiate between a reproduction and an original artwork. This study also explored the influence of painting style (realistic or semi-abstract) and viewing sequence on a student's ability to identify the differences and select a preference between the reproduction and the original artwork. To gain insight into how students differentiate between a reproduction and an original artwork, this study engaged 27 high school student participants in viewing a digital reproduction and the respective original artwork of one realistic and one semi-abstract painting at the ASU Art Museum. Analysis of qualitative and quantitative data suggests that sequence influences a student's ability to differentiate between a reproduction and an original artwork. Students who saw reproductions before viewing the originals demonstrated a more comprehensive understanding of the differences between the two presentation formats. Implications of this study include the recommendation that art educators address definitional issues surrounding the terms original and reproduction in their teaching, and consider collaborative ways to prepare students for meaningful experiences with original artworks.
Contributors: Uscher, Dawn (Author) / Erickson, Mary (Thesis advisor) / Stokrocki, Mary (Committee member) / Young, Bernard (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There have been multiple attempts at coupling neural networks with external memory components for sequence-learning problems. Such architectures have demonstrated success in algorithmic, sequence-transduction, question-answering, and reinforcement-learning tasks. The most notable of these attempts is the Neural Turing Machine (NTM), an implementation of the Turing Machine with a neural-network controller that interacts with a continuous memory. Although the architecture is Turing complete and hence universally computational, it has seen limited success with complex real-world tasks.

In this thesis, I introduce an extension of the Neural Turing Machine, the Neural Harvard Machine, that implements a fully differentiable Harvard Machine framework with a feed-forward neural-network controller. Unlike the NTM, it has two different memories: a read-only program memory and a read-write data memory. A sufficiently complex task is divided into smaller, simpler sub-tasks, and the program memory stores the parameters of networks pre-trained on these sub-tasks. The controller reads inputs from an input tape, uses the data memory to store valuable signals, and writes correct symbols to an output tape. The output symbols are a function of the outputs of each sub-network and the state of the data memory. Hence, the controller learns to load the weights of the appropriate program network to generate output symbols.
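One controller step of the framework described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: all names, dimensions, and the soft program-selection mechanism are assumptions, and the actual interface between controller, program memory, and data memory is not specified in the abstract.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class HarvardMachineStep:
    """Illustrative single step (hypothetical interface): a feed-forward
    controller reads an input symbol, softly selects a pre-trained
    sub-network from read-only program memory, updates a read-write data
    memory, and emits an output symbol."""

    def __init__(self, program_memory, data_size, input_size, seed=0):
        rng = np.random.default_rng(seed)
        self.programs = program_memory        # list of (W, b): frozen sub-networks
        self.data = np.zeros(data_size)       # read-write data memory
        n = len(program_memory)
        # Controller parameters (hypothetical shapes)
        self.W_sel = rng.normal(size=(n, input_size + data_size))
        self.W_write = rng.normal(size=(data_size, input_size + data_size))

    def step(self, x):
        state = np.concatenate([x, self.data])
        # 1. Soft attention over program memory selects a sub-network
        #    (soft rather than hard so the whole step stays differentiable).
        attn = softmax(self.W_sel @ state)
        # 2. Run each program network; blend outputs by attention weight.
        outs = [np.tanh(W @ x + b) for W, b in self.programs]
        y = sum(a * o for a, o in zip(attn, outs))
        # 3. Write a new value into the data memory.
        self.data = np.tanh(self.W_write @ state)
        return y
```

The soft (attention-weighted) blend over all program networks stands in for "loading the weights of the appropriate program network": in the limit of a sharp attention distribution it reduces to executing a single sub-network, while remaining end-to-end differentiable.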

A wide range of experiments demonstrates that the Harvard Machine framework learns faster and performs better than the NTM and recurrent networks such as the LSTM as task complexity increases.
Contributors: Bhatt, Manthan Bharat (Author) / Ben Amor, Hani (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020