Barrett, The Honors College at Arizona State University proudly showcases the work of undergraduate honors students by sharing this collection exclusively with the ASU community.

Barrett accepts high-performing, academically engaged undergraduate students and works with them in collaboration with all of the other academic units at Arizona State University. All Barrett students complete a thesis or creative project, which is an opportunity to explore an intellectual interest and produce an original piece of scholarly research. The thesis or creative project is supervised by a faculty committee and defended before it. Students are able to engage with professors who are nationally recognized in their fields and committed to working with honors students. Completing a Barrett thesis or creative project is an opportunity for undergraduate honors students to contribute to the ASU academic community in a meaningful way.


Description
Commonly, image processing is handled on a CPU that is connected to the image sensor by a wire. In these far-sensor processing architectures, there is energy loss associated with sending data across an interconnect from the sensor to the CPU. In an effort to increase energy efficiency, near-sensor processing architectures have been developed, in which the sensor and processor are stacked directly on top of each other. This reduces energy loss associated with sending data off-sensor. However, processing near the image sensor causes the sensor to heat up. Reports of thermal noise in near-sensor processing architectures motivated us to study how temperature affects image quality on a commercial image sensor and how thermal noise affects computer vision task accuracy. We analyzed image noise across nine different temperatures and three sensor configurations to determine how image noise responds to an increase in temperature. Ultimately, our team used this information, along with transient analysis of a stacked image sensor’s thermal behavior, to advise thermal management strategies that leverage the benefits of near-sensor processing and prevent accuracy loss at problematic temperatures.
Contributors: Jones, Britton Steele (Author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Watts College of Public Service & Community Solutions (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
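The study above measures how image noise responds to temperature. As a hedged sketch (the data, noise levels, and frame counts below are synthetic and illustrative, not the study's measurements), one common way to quantify a sensor's temporal noise is the mean per-pixel standard deviation across a stack of frames captured under fixed settings:

```python
import numpy as np

def temporal_noise(frames):
    """Mean per-pixel standard deviation across a stack of frames
    captured under identical settings -- a simple summary of a
    sensor's temporal noise at one operating temperature."""
    return float(np.std(frames, axis=0).mean())

# Synthetic stand-ins for dark-frame stacks at two temperatures;
# the noise sigmas (2.0 and 6.0) are illustrative, not measured values.
rng = np.random.default_rng(0)
cool = 100 + rng.normal(0.0, 2.0, size=(50, 64, 64))  # e.g. an idle sensor
hot = 100 + rng.normal(0.0, 6.0, size=(50, 64, 64))   # e.g. a heated sensor

print(temporal_noise(cool), temporal_noise(hot))
```

Repeating such a measurement at each temperature and sensor configuration yields the noise-versus-temperature behavior the study analyzes.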
Description

Java Mission-planning and Analysis for Remote Sensing (JMARS) is a geospatial software application that provides mission-planning and data-analysis tools with access to orbital data for planetary bodies like Mars and Venus. Using JMARS, terrain scenes can be prepared with an assortment of data layers along with any additional data sets. These scenes can then be exported into the JMARS extended reality platform, which includes both augmented reality and virtual reality experiences. JMARS VR Viewer is a virtual reality experience that allows users to view three-dimensional terrain data in a fully immersive and interactive way. This tool also provides a collaborative environment in which users can host a terrain scene and analyze the data together. The purpose of the project is to design a set of interactions in virtual reality to address these questions: (1) how do we make sense of large, complex geospatial datasets; (2) how can we design interactions that assist users in understanding layered data in both individual and collaborative work environments; and (3) what are the effects of these interfaces on the user's cognitive load?

Contributors: Wang, Olivia (Author) / LiKamWa, Robert (Thesis director) / Gold, Lauren (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
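JMARS's own rendering pipeline is not described in the abstract above. As a hedged illustration of the kind of geometry step any 3D terrain viewer performs before uploading data to the GPU (not JMARS's actual code; the grid and function are hypothetical), here is a minimal heightmap-to-mesh conversion:

```python
import numpy as np

def heightmap_to_mesh(z):
    """Turn an H x W elevation grid into vertex positions and
    triangle indices -- two triangles per grid cell."""
    h, w = z.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.stack([xs.ravel(), ys.ravel(), z.ravel()], axis=1)
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return vertices, np.array(tris)

# A flat 3 x 4 elevation grid: 12 vertices, 6 cells, 12 triangles.
verts, tris = heightmap_to_mesh(np.zeros((3, 4)))
print(verts.shape, tris.shape)
```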
Description

Spatial audio can be especially useful for directing human attention. However, delivering spatial audio through speakers, rather than headphones that deliver audio directly to the ears, produces the issue of crosstalk, where sounds from each of the two speakers reach the opposite ear, inhibiting the spatialized effect. A research team at Meteor Studio has developed an algorithm called Xblock that solves this issue using a crosstalk cancellation technique. This thesis project expands upon the existing Xblock IoT system by providing a way to test the accuracy of the directionality of sounds generated with spatial audio. More specifically, the objective is to determine whether the usage of Xblock with smart speakers can provide generalized audio localization, which refers to the ability to detect a general direction of where a sound might be coming from. This project also expands upon the existing Xblock technique to integrate voice commands, where users can verbalize the name of a lost item using the phrase, “Find [item]”, and the IoT system will use spatial audio to guide them to it.

Contributors: Song, Lucy (Author) / LiKamWa, Robert (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
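The abstract does not publish Xblock's actual algorithm. The following is a textbook crosstalk-cancellation sketch at a single frequency, assuming a known (and here entirely hypothetical) 2x2 acoustic transfer matrix from the two speakers to the two ears:

```python
import numpy as np

# Hypothetical transfer matrix at one frequency: H[ear, speaker].
# Diagonal entries are the direct paths; off-diagonal entries are
# the crosstalk paths that reach the opposite ear.
H = np.array([[1.0, 0.4],
              [0.4, 1.0]])

def cancel_crosstalk(desired, H):
    """Solve H @ s = desired for the speaker signals s, so that
    after the acoustic paths mix them, each ear receives only
    its intended channel."""
    return np.linalg.solve(H, desired)

desired = np.array([1.0, 0.0])   # sound intended for the left ear only
s = cancel_crosstalk(desired, H)
at_ears = H @ s
# at_ears is (numerically) [1.0, 0.0]: the right-ear crosstalk is cancelled.
print(at_ears)
```

In practice the transfer matrix varies with frequency and listener position, which is part of what makes a real system like Xblock harder than this sketch.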
Description

Video playback is currently the primary method coaches and athletes use in sports training to give feedback on the athlete's form and timing. Athletes will commonly record themselves using a phone or camera when practicing a sports movement, such as shooting a basketball, to then send to their coach for feedback on how to improve. In this work, we present Augmented Coach, an augmented reality tool for coaches to give spatiotemporal feedback through a 3-dimensional point cloud of the athlete. The system allows coaches to view a pre-recorded video of their athlete in point cloud form, and provides them with the proper tools in order to go frame by frame to both analyze the athlete's form and correct it. The result is a fundamentally new concept of an interactive video player, where the coach can remotely view the athlete in a 3-dimensional form and create annotations to help improve their form. We then conduct a user study with subject matter experts to evaluate the usability and capabilities of our system. As indicated by the results, Augmented Coach successfully acts as a supplement to in-person coaching, since it allows coaches to break down the video recording in a 3-dimensional space and provide feedback spatiotemporally. The results also indicate that Augmented Coach can be a complete coaching solution in a remote setting. This technology will be extremely relevant in the future as coaches look for new ways to improve their feedback methods, especially in a remote setting.

Contributors: Dbeis, Yasser (Author) / Channar, Sameer (Co-author) / Richards, Connor (Co-author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
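Augmented Coach's capture pipeline is not detailed in the abstract. A standard way to obtain a point cloud of an athlete from a depth camera, shown here as a hedged sketch with made-up camera intrinsics, is pinhole back-projection of the depth image:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into 3D camera-space points
    using the pinhole model: X = (u - cx) * Z / fx, and
    Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    vs, us = np.mgrid[0:h, 0:w]
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat surface 2 m away, seen by a hypothetical 4 x 4 depth camera.
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(pts.shape)  # one 3D point per pixel
```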
Description

Video playback is currently the primary method coaches and athletes use in sports training to give feedback on the athlete’s form and timing. Athletes will commonly record themselves using a phone or camera when practicing a sports movement, such as shooting a basketball, to then send to their coach for feedback on how to improve. In this work, we present Augmented Coach, an augmented reality tool for coaches to give spatiotemporal feedback through a 3-dimensional point cloud of the athlete. The system allows coaches to view a pre-recorded video of their athlete in point cloud form, and provides them with the proper tools in order to go frame by frame to both analyze the athlete’s form and correct it. The result is a fundamentally new concept of an interactive video player, where the coach can remotely view the athlete in a 3-dimensional form and create annotations to help improve their form. We then conduct a user study with subject matter experts to evaluate the usability and capabilities of our system. As indicated by the results, Augmented Coach successfully acts as a supplement to in-person coaching, since it allows coaches to break down the video recording in a 3-dimensional space and provide feedback spatiotemporally. The results also indicate that Augmented Coach can be a complete coaching solution in a remote setting. This technology will be extremely relevant in the future as coaches look for new ways to improve their feedback methods, especially in a remote setting.

Contributors: Channar, Sameer (Author) / Dbeis, Yasser (Co-author) / Richards, Connor (Co-author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Barrett, The Honors College (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

Video playback is currently the primary method coaches and athletes use in sports training to give feedback on the athlete’s form and timing. Athletes will commonly record themselves using a phone or camera when practicing a sports movement, such as shooting a basketball, to then send to their coach for feedback on how to improve. In this work, we present Augmented Coach, an augmented reality tool for coaches to give spatiotemporal feedback through a 3-dimensional point cloud of the athlete. The system allows coaches to view a pre-recorded video of their athlete in point cloud form, and provides them with the proper tools in order to go frame by frame to both analyze the athlete’s form and correct it. The result is a fundamentally new concept of an interactive video player, where the coach can remotely view the athlete in a 3-dimensional form and create annotations to help improve their form. We then conduct a user study with subject matter experts to evaluate the usability and capabilities of our system. As indicated by the results, Augmented Coach successfully acts as a supplement to in-person coaching, since it allows coaches to break down the video recording in a 3-dimensional space and provide feedback spatiotemporally. The results also indicate that Augmented Coach can be a complete coaching solution in a remote setting. This technology will be extremely relevant in the future as coaches look for new ways to improve their feedback methods, especially in a remote setting.

Contributors: Richards, Connor (Author) / Dbeis, Yasser (Co-author) / Channar, Sameer (Co-author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2022-05
Description

Augmented Reality (AR), especially when used with mobile devices, enables the creation of applications that can help chemistry students learn anything from basic to advanced concepts. In chemistry specifically, the 3D representation of molecules and chemical structures is of vital importance to students, and yet when printed in 2D, as in textbooks and lecture notes, those vital 3D concepts can be quite hard to understand. ARsome Chemistry is an app that aims to utilize AR to display complex and simple molecules in 3D, actively teaching students these concepts through quizzes and other features. The ARsome Chemistry app uses image-target recognition to allow students to hand-draw or print line-angle structures or chemical formulas of molecules and then scan those targets to get a 3D representation of the molecule. Students can use their fingers and the touch screen to zoom, rotate, and highlight different portions of the molecule to gain a better understanding of its 3D structure. The app also features image recognition that lets students quiz themselves on drawing line-angle structures and show their work to the camera for the app to check. ARsome Chemistry is an accessible and cost-effective study-aid platform for on-demand, interactive, 3D representations of complex molecules.

Contributors: Evans, Brandon (Author) / LiKamWa, Robert (Thesis director) / Johnson, Mina (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
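The app's recognition internals are not described above. As a small, hedged sketch of the kind of parsing step that could sit behind recognizing a scanned chemical formula before looking up its 3D structure (illustrative only, not ARsome Chemistry's code; it handles no nested parentheses), here is a molecular-formula atom counter:

```python
import re
from collections import Counter

def parse_formula(formula):
    """Count atoms in a simple molecular formula such as 'C6H12O6'.
    Matches an element symbol (capital letter plus optional lowercase
    letter) followed by an optional count; a missing count means 1."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return dict(counts)

print(parse_formula("C6H12O6"))  # {'C': 6, 'H': 12, 'O': 6}
print(parse_formula("H2O"))      # {'H': 2, 'O': 1}
```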
Description

Spatial audio can be especially useful for directing human attention. However, delivering spatial audio through speakers, rather than headphones that deliver audio directly to the ears, produces the issue of crosstalk, where sounds from each of the two speakers reach the opposite ear, inhibiting the spatialized effect. A research team at Meteor Studio has developed an algorithm called Xblock that solves this issue using a crosstalk cancellation technique. This thesis project expands upon the existing Xblock IoT system by providing a way to test the accuracy of the directionality of sounds generated with spatial audio. More specifically, the objective is to determine whether the usage of Xblock with smart speakers can provide generalized audio localization, which refers to the ability to detect a general direction of where a sound might be coming from. This project also expands upon the existing Xblock technique to integrate voice commands, where users can verbalize the name of a lost item using the phrase, “Find [item]”, and the IoT system will use spatial audio to guide them to it.

Contributors: Song, Lucy (Author) / LiKamWa, Robert (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05