Matching Items (9)
157305-Thumbnail Image.png
Description
The Resistive Random Access Memory (ReRAM) is an emerging non-volatile memory technology because of its attractive attributes, including excellent scalability (< 10 nm), low programming voltage (< 3 V), fast switching speed (< 10 ns), high OFF/ON ratio (> 10), good endurance (up to 10^12 cycles) and great compatibility with silicon CMOS technology [1]. However, ReRAM suffers from higher write latency and energy, and from reliability issues, compared to Dynamic Random Access Memory (DRAM). To improve the energy efficiency, latency and reliability of ReRAM storage systems, a low-cost cross-layer approach that spans the device, circuit, architecture and system levels is proposed.

For the 1T1R 2D ReRAM system, the effect of both retention and endurance errors on ReRAM reliability is considered. The proposed approach is to design circuit-level and architecture-level techniques that significantly reduce the raw bit error rate, and then to employ low-cost Error Control Coding to achieve the desired lifetime.

For the 1S1R 2D ReRAM system, a cross-point array with “multi-bit per access” per subarray is designed for high energy efficiency and good reliability. The errors due to cell-level as well as array-level variations are analyzed, and a low-cost scheme that maintains reliability and latency with low energy consumption is proposed.

For the 1S1R 3D ReRAM system, access schemes that activate multiple subarrays, with multiple layers per subarray, are used to achieve high energy efficiency, since fewer subarrays need to be activated per access, and good reliability is achieved through innovative data organization.

Finally, a novel ReRAM-based accelerator design is proposed to support multiple Convolutional Neural Network (CNN) topologies, including VGGNet, AlexNet and ResNet. The multi-tiled architecture consists of 9 processing elements per tile, where each tile implements the dot-product operation using ReRAM as the computation unit. The processing elements operate in a systolic fashion, thereby maximizing input feature map reuse and minimizing interconnection cost. System-level evaluation on several network benchmarks shows that the proposed architecture improves computation efficiency and energy efficiency compared to a state-of-the-art ReRAM-based accelerator.
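
The core primitive described above, a dot product computed inside a ReRAM array, can be sketched in a few lines. The following is a minimal illustrative model, not the thesis's actual design: inputs are applied as word-line voltages, weights are stored as cell conductances, and each bit-line current is the analog sum of voltage-conductance products. The conductance values and the linear weight mapping are assumptions for illustration.

import numpy as np

def crossbar_dot_product(inputs, weights, g_on=1e-4, g_off=1e-6):
    """Toy model of a ReRAM crossbar computing y = x @ W.

    inputs  : 1-D array of word-line voltages (an input feature map slice)
    weights : 2-D array in [0, 1], shape (n_inputs, n_outputs); mapped
              linearly onto cell conductances between g_off (high-resistance
              state) and g_on (low-resistance state) -- an assumed mapping
    Returns the bit-line currents, each proportional to one dot product.
    """
    conductances = g_off + weights * (g_on - g_off)
    # Kirchhoff's current law: each bit-line collects the sum over rows of
    # (word-line voltage x cell conductance), i.e. an analog dot product.
    return inputs @ conductances

x = np.array([0.5, 1.0, 0.0, 1.0])   # input activations as voltages
W = np.random.rand(4, 3)             # one tile's weight matrix (illustrative)
print(crossbar_dot_product(x, W))    # three bit-line currents

In the systolic arrangement the abstract describes, each processing element would pass its input slice to a neighbor every cycle, so the same feature map values are reused across tiles instead of being re-fetched from memory.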
ContributorsMao, Manqing (Author) / Chakrabarti, Chaitali (Thesis advisor) / Yu, Shimeng (Committee member) / Cao, Yu (Committee member) / Ogras, Umit (Committee member) / Arizona State University (Publisher)
Created2019
133359-Thumbnail Image.png
Description
The current trend of interconnected devices, or the Internet of Things (IoT), has led to the popularization of single board computers (SBCs), primarily due to their form factor and low price. This has led to unique networks of devices that can have unstable network connections and minimal processing power. Many parallel programming libraries are intended for use in high performance computing (HPC) clusters; unlike the IoT environment described, HPC clusters generally expect very consistent network speeds and topologies. A significant number of software choices make up what is referred to as the HPC stack, or parallel processing stack. My thesis focused on building an HPC stack that would run on the SBC named the Raspberry Pi. The intention in making this Raspberry Pi cluster is to research the performance of MPI implementations in an IoT environment, which had an impact on the design choices of the cluster. This thesis is a compilation of my research efforts in creating this cluster, as well as an evaluation of the software that was chosen to create the parallel processing stack.
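
The thesis does not list its test programs, but a minimal MPI job of the kind such a stack runs is sketched below, using mpi4py, one common Python binding (the cluster may equally have used C MPI implementations such as MPICH or Open MPI; the workload here is a made-up example).

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total processes across the cluster

# Each node computes a partial sum; rank 0 gathers and combines them.
partial = sum(range(rank * 1000, (rank + 1) * 1000))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum computed across {size} processes: {total}")

Launched with, for example, mpiexec -n 4 python sum.py across the Pi nodes, a job like this exercises exactly what the cluster was built to measure: how an MPI implementation behaves over a slow or unstable interconnect.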
ContributorsO'Meara, Braedon Richard (Author) / Meuth, Ryan (Thesis director) / Dasgupta, Partha (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
134133-Thumbnail Image.png
Description
Hackathons are 24-36 hour events where participants are encouraged to learn, collaborate, and build technological inventions with leaders, companies, and peers in the tech community. Hackathons have been sweeping the nation in recent years, especially at the collegiate level; however, there is no substantial research or documentation of their actual effects. This makes justifying the use of valuable time and resources to host hackathons difficult for tech companies and academic institutions. This thesis specifically examines the effects of collegiate hackathons through running a collegiate hackathon known as Desert Hacks at Arizona State University (ASU). The participants of Desert Hacks were surveyed at the start and at the end of the event to analyze its effects. The results of the survey indicate that participants grew in basic computer programming skills, inclusion in the tech community, overall confidence, and motivation for the technological field. Through these results, this study can help justify the necessity of collegiate hackathons and similar events.
ContributorsLe, Peter Thuan (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-12
134266-Thumbnail Image.png
Description
Node.js is an extremely popular development framework for web applications. The appeal of its event-driven, asynchronous flow and the convenience of JavaScript as its programming language have driven its rapid growth, and it is currently deployed by leading companies in retail, finance, and other important sectors. However, the tools currently available for Node.js developers to secure their applications against malicious attackers are notably scarce. While a substantial number of security tools have been created for web applications in many other languages such as PHP and Java, very little exists for Node.js applications. This could compromise private information belonging to companies such as PayPal and Walmart. We propose a tool to statically analyze Node.js web applications for five popular vulnerabilities: cross-site scripting, SQL injection, server-side request forgery, command injection, and code injection. We base our tool on JSAI, a platform created to parse client-side JavaScript for security risks. JSAI is novel because of its configuration capabilities, which allow a user to choose between various analysis options at runtime in order to select the most thorough analysis with the least amount of processing time. We contribute to the development of our tool by rigorously analyzing and documenting vulnerable functions and objects in Node.js that are relevant to the vulnerabilities we have selected. We intend to use this documentation to build a robust Node.js static analysis tool, and we hope that other developers will also incorporate this analysis into their Node.js security projects.
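
As a rough illustration of what flagging documented dangerous sinks can look like, the toy Python scanner below matches a few of the vulnerability classes named above against Node.js source text. It is pattern-based only; JSAI and the proposed tool perform real static analysis over the parsed program, and the sink list here is an assumption, not the thesis's documentation.

import re

# A few illustrative Node.js sinks, keyed by the risk they suggest.
# (Assumed examples; the thesis documents its own list of functions.)
DANGEROUS_SINKS = {
    r"\beval\s*\(": "code injection",
    r"child_process\.exec\s*\(": "command injection",
    r"\.query\s*\(\s*['\"].*\+": "possible SQL injection via concatenation",
}

def scan_source(source: str):
    """Return (line number, risk, line) for every matched sink."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, risk in DANGEROUS_SINKS.items():
            if re.search(pattern, line):
                findings.append((lineno, risk, line.strip()))
    return findings

app_js = 'child_process.exec("ls " + userInput);'
for lineno, risk, line in scan_source(app_js):
    print(f"line {lineno}: {risk}: {line}")

A real tool would run on the AST and track data flow from user-controlled sources into these sinks, which is what distinguishes taint-style static analysis from pattern matching.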
ContributorsWasserman, Jonathan Kanter (Author) / Doupe, Adam (Thesis director) / Ahn, Gail-Joon (Committee member) / Zhao, Ziming (Committee member) / School of Historical, Philosophical and Religious Studies (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
153845-Thumbnail Image.png
Description
Hospital Emergency Departments (EDs) are frequently crowded. The Centers for Medicare and Medicaid Services (CMS) collects performance measurements from EDs, such as the door to clinician time: the time at which a patient is first seen by a clinician. Current methods for documenting the door to clinician time are in written form and may contain inaccuracies. The goal of this thesis is to provide a method for automatic and accurate retrieval and documentation of the door to clinician time. To automatically collect door to clinician times, single board computers were installed in patient rooms that logged the time whenever they saw a specific Bluetooth emission from a device that the clinician carried. The Bluetooth signal is used to calculate the distance of the clinician from the single board computer. The logged time and distance calculation are then sent to a server, where it is determined whether the clinician was in the room seeing the patient at the logged time. The times automatically collected were compared with the handwritten times recorded by clinicians and were shown to be accurate to the minute.
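
The abstract says the Bluetooth signal is used to calculate the clinician's distance from the single board computer but does not give the model. A standard choice is the log-distance path-loss formula, sketched below; the constants are illustrative assumptions, not values from the thesis.

def estimate_distance(rssi_dbm: float,
                      tx_power_dbm: float = -59.0,  # assumed RSSI at 1 m
                      path_loss_n: float = 2.0):    # 2 = free space; higher indoors
    """Estimate distance in meters from a Bluetooth RSSI reading,
    using the log-distance path-loss model:
        distance = 10 ** ((tx_power - rssi) / (10 * n))
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

# With these constants, a reading of -70 dBm implies roughly 3.5 m --
# coarse, but enough to decide whether the carrier is inside a patient room.
print(f"{estimate_distance(-70.0):.1f} m")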
ContributorsFrisby, Joshua (Author) / Nelson, Brian C (Thesis advisor) / Patel, Vimla L. (Thesis advisor) / Smith, Vernon (Committee member) / Kaufman, David R. (Committee member) / Arizona State University (Publisher)
Created2015
148081-Thumbnail Image.png
Description

This work describes the fundamentals of quantum mechanics in relation to quantum computing, as well as the architecture of quantum computing.

ContributorsDemaria, Rachel Emily (Author) / Foy, Joseph (Thesis director) / Hines, Taylor (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
141315-Thumbnail Image.png
Description

The majority of trust research has focused on the benefits trust can have for individual actors, institutions, and organizations. This “optimistic bias” is particularly evident in work focused on institutional trust, where concepts such as procedural justice, shared values, and moral responsibility have gained prominence. But trust in institutions may not be exclusively good. We reveal implications for the “dark side” of institutional trust by reviewing relevant theories and empirical research that can contribute to a more holistic understanding. We frame our discussion by suggesting there may be a “Goldilocks principle” of institutional trust, where trust that is too low (typically the focus) or too high (not usually considered by trust researchers) may be problematic. The chapter focuses on the issue of too-high trust and processes through which such too-high trust might emerge. Specifically, excessive trust might result from external, internal, and intersecting external-internal processes. External processes refer to the actions institutions take that affect public trust, while internal processes refer to intrapersonal factors affecting a trustor’s level of trust. We describe how the beneficial psychological and behavioral outcomes of trust can be mitigated or circumvented through these processes and highlight the implications of a “darkest” side of trust when they intersect. We draw upon research on organizations and legal, governmental, and political systems to demonstrate the dark side of trust in different contexts. The conclusion outlines directions for future research and encourages researchers to consider the ethical nuances of studying how to increase institutional trust.

ContributorsNeal, Tess M.S. (Author) / Shockley, Ellie (Author) / Schilke, Oliver (Author)
Created2016
131873-Thumbnail Image.png
Description
As structural engineers in practice continue to improve their methods and advance their analysis and design techniques through the use of new technology, how should structural engineering education programs evolve to match the increasing complexity of the industry? This thesis analyzes the many differing opinions and techniques on modernizing structural engineering education programs through a literature review of the content put out by active structural engineering education reform committees, articles and publications by well-known educators and practitioners, and a series of interviews conducted with key individuals specifically for this project. According to the opinions analyzed in this paper, structural engineering education should be a 5-year program that ends with a master's degree, so that students obtain enough necessary knowledge to begin their positions as structural engineers. Firms would rather continue the education of new hires themselves after this time than wait and pay more for students to finish longer graduate-type programs. Computer programs should be implemented further into education programs, and would be most productive not as a replacement for hand-calculation methods, but as a supplement. Students should be tasked with writing code, so that they are required to implement these calculations in computer programs themselves, and use classical methods to verify their answers. In this way, engineering programs will be creating critical thinkers who can adapt to any new structural analysis and design programs, not just training students on current programs that will become obsolete with time. It is the responsibility of educators to train current staff on how to implement these coding methods seamlessly into education as a supplement to hand-calculation methods. Students will be able to learn what is behind commercial software, develop their hand-calculation skills through code verification, and focus more on the ever-important modeling and interpretation phases of problem solving. Practitioners will have the responsibility of not expecting students to graduate with knowledge of specific software programs, but instead recruiting students who showcase critical thinking skills and understand the backbone of these programs. They will continue the education of recent graduates themselves, providing them with real-world experience that they cannot receive in school while training them to use company-specific analysis and design software.
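
As a concrete instance of the hand-calculation coding the abstract advocates, the short sketch below implements the classical formula for the midspan deflection of a simply supported beam under a uniform load, delta = 5wL^4 / (384EI). The numeric values are illustrative assumptions; the point is that a student who codes the formula can check it against commercial software output.

def midspan_deflection(w: float, L: float, E: float, I: float) -> float:
    """Midspan deflection (m) of a simply supported beam under uniform load.

    w : uniform load (N/m)     L : span (m)
    E : elastic modulus (Pa)   I : second moment of area (m^4)
    """
    return 5 * w * L**4 / (384 * E * I)

# Illustrative numbers: a 6 m steel beam (E = 200 GPa) under 10 kN/m.
print(f"{midspan_deflection(w=10e3, L=6.0, E=200e9, I=8.0e-5):.4f} m")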
ContributorsMaurer, Cole Chaon (Author) / Hjelmstad, Keith (Thesis director) / Chatziefstratiou, Efthalia (Committee member) / Civil, Environmental and Sustainable Eng Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
165594-Thumbnail Image.png
Description

With the recent focus of attention towards remote work and mobile computing, the possibility of taking a powerful workstation wherever needed is enticing. However, even today's laptops struggle to compete with desktops in terms of cost, maintenance, and future upgrades. The price of a powerful laptop is considerably higher than that of an equally powerful desktop computer, and most laptops are manufactured in a way that makes upgrading parts of the machine difficult or impossible, forcing a complete purchase in the event of a failure or a component needing an upgrade. In the case where someone already owns a desktop computer but must be mobile, instead of purchasing a second device at full price, it may be possible to develop a low-cost computer that has just enough power to connect to the existing desktop and run all processing there, using the mobile device only as a user interface. This thesis explores the development of a custom PCB that utilizes a Raspberry Pi Compute Module 4, as well as the development of a fork of the open-source project Moonlight to stream a host machine's screen to a remote client. This implementation is compared against other existing remote desktop solutions to analyze its performance and quality.

ContributorsLathrum, Dylan (Author) / Heinrichs, Robert (Thesis director) / Acuna, Ruben (Committee member) / Jordan, Shawn (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created2022-05