Matching Items (715)

Description

Concrete columns constitute the fundamental supports of buildings, bridges, and various other infrastructure, and their failure can lead to the collapse of the entire structure. As such, great effort goes into improving the fire resistance of such columns. In a time-sensitive fire situation, delaying the failure of critical load-bearing structures can increase the time available for evacuating occupants, recovering property, and accessing the fire. Much work has been done to improve the structural performance of concrete, including reducing column sizes and providing a safer structure. As a result, high-strength (HS) concrete has been developed to fulfill the needs of such improvements. HS concrete differs from normal-strength (NS) concrete in that it has higher stiffness, lower permeability, and greater durability. This, unfortunately, has resulted in poor performance under fire. The lower permeability allows water vapor to build up, causing HS concrete to suffer explosive spalling under rapid heating. In addition, the coefficient of thermal expansion (CTE) of HS concrete is lower than that of NS concrete. In this study, the effects of introducing a region of crumb rubber concrete into a steel-reinforced concrete column were analyzed. Including crumb rubber concrete in a column greatly increases the thermal resistivity of the overall column, reducing the core temperature as well as the rate at which the column is heated. Different cases were analyzed while varying the position of the crumb-rubber region to characterize the effect of position on the improvement in fire resistance. Computer-simulated finite element analysis was used to calculate the temperature and strain distributions over time across the column's cross-sectional area, with specific interest in the steel-concrete region. Of the several cases investigated, the improvement in time to failure ranged from 32 to 45 minutes.
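
As a rough illustration of the kind of transient heat-transfer calculation this abstract describes, the sketch below solves 1-D conduction through a cross-section containing a lower-diffusivity crumb-rubber band with an explicit finite-difference scheme. It is a simplification of the 2-D thermal-structural analysis in the thesis, and every geometric value, material property, and boundary temperature is an assumed placeholder, not data from the study.

```python
# Minimal 1-D explicit finite-difference sketch of transient heat conduction
# through a column cross-section containing a crumb-rubber concrete band.
# All properties, dimensions, and temperatures are illustrative placeholders.
import numpy as np

L = 0.30                              # half-width of the cross-section [m] (assumed)
nx = 60                               # number of grid points
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)

# Thermal diffusivity alpha = k / (rho * c) [m^2/s]; the crumb-rubber band
# (assumed between 0.05 m and 0.10 m from the exposed face) is given a lower
# diffusivity than plain high-strength concrete.
alpha = np.full(nx, 6.0e-7)
alpha[(x > 0.05) & (x < 0.10)] = 2.0e-7

dt = 0.4 * dx**2 / alpha.max()        # satisfies the explicit stability limit
T = np.full(nx, 20.0)                 # initial temperature [deg C]
T_fire = 900.0                        # assumed constant fire exposure temperature

t, t_end = 0.0, 2.0 * 3600.0          # simulate two hours of exposure
while t < t_end:
    T[0] = T_fire                     # exposed face follows the fire temperature
    T[-1] = T[-2]                     # symmetry (insulated) condition at the core
    T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    t += dt

print(f"Core temperature after 2 h: {T[-1]:.1f} deg C")
```
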
Contributors: Ziadeh, Bassam Mohammed (Author) / Phelan, Patrick (Thesis advisor) / Kaloush, Kamil (Thesis advisor) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This study presents the results of one of the first attempts to characterize the pore water pressure response of soils subjected to traffic loading under saturated and unsaturated conditions. It is widely known that pore water pressure develops within the soil pores in response to external stimulus. It has also been recognized that the development of pore water pressure contributes to the degradation of the resilient modulus of unbound materials. In recent decades, several efforts have been directed at modeling the effect of air and water pore pressures on the resilient modulus. However, none of them consider dynamic variations in pressures; rather, they are based on equilibrium values corresponding to initial conditions. Measuring this response is challenging, especially in soils under unsaturated conditions. Models are needed not only to overcome testing limitations but also to understand the dynamic behavior of internal pore pressures, which under critical conditions may even lead to failure. A testing program was conducted to characterize the pore water pressure response of a low-plasticity fine clayey sand subjected to dynamic loading. The bulk stress, initial matric suction, and dwelling time parameters were controlled and their effects analyzed. The results were used to develop models capable of predicting the accumulated excess pore pressure at any given time during the traffic loading and unloading phases. Important findings regarding the influence of the controlled variables challenge common beliefs. The accumulated excess pore water pressure was found to be higher for unsaturated soil specimens than for saturated soil specimens. The maximum pore water pressure always increased when the high bulk stress level was applied. Higher dwelling time was found to decelerate the accumulation of pore water pressure. In addition, it was found that the higher the dwelling time, the lower the maximum pore water pressure. It was concluded that, with further research, the proposed models may become a powerful tool not only to overcome testing limitations but also to enhance current design practices and to prevent soil failure due to excessive development of pore water pressure.
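
As a hedged illustration of the modeling goal described above (predicting accumulated excess pore pressure through repeated loading and rest phases), the sketch below uses a simple first-order build-up/dissipation rule. The model form and every parameter value are assumptions made for illustration, not the regression models developed in the study.

```python
# Minimal sketch of a build-up / dissipation model for accumulated excess pore
# water pressure under repeated traffic loading.  All constants are assumed.
import numpy as np

n_cycles = 2000          # number of load repetitions
dwell = 0.9              # dwelling (rest) time between pulses [s]   (assumed)
u_max = 35.0             # asymptotic excess pore pressure [kPa]     (assumed)
k_load = 0.004           # build-up rate per load pulse              (assumed)
k_diss = 0.05            # dissipation rate per second of rest       (assumed)

u = 0.0
history = []
for n in range(n_cycles):
    u += k_load * (u_max - u)        # each pulse drives u toward u_max
    u *= np.exp(-k_diss * dwell)     # partial dissipation during the rest period
    history.append(u)

print(f"Accumulated excess pore pressure after {n_cycles} cycles: {history[-1]:.1f} kPa")
```

A longer dwelling time enters the sketch only through the dissipation term, which is one simple way to reproduce the qualitative trend reported above (higher dwell time, lower accumulated pressure).
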

Contributors: Cary, Carlos (Author) / Zapata, Claudia E. (Thesis advisor) / Witczak, Matthew W. (Thesis advisor) / Kaloush, Kamil (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A recent joint study by Arizona State University and the Arizona Department of Transportation (ADOT) was conducted to evaluate certain Warm Mix Asphalt (WMA) properties in the laboratory. WMA material was taken from an actual ADOT project that involved two WMA sections. The first section used a foamed-based WMA admixture, and the second section used a chemical-based WMA admixture. The rest of the project used a control hot mix asphalt (HMA) mixture. The evaluation included testing of field-core specimens and laboratory-compacted specimens. The laboratory specimens were compacted at two different temperatures: 270 °F (132 °C) and 310 °F (154 °C). The experimental plan included four laboratory tests: the dynamic modulus (E*), indirect tensile strength (IDT), moisture damage evaluation using the AASHTO T-283 test, and the Hamburg Wheel-Track Test. The dynamic modulus (E*) results of the field cores at 70 °F showed similar E* values for the control HMA and foamed-based WMA mixtures; the E* values of the chemical-based WMA mixture were relatively higher. IDT test results of the field cores showed findings comparable to the E* results. For the laboratory-compacted specimens, both E* and IDT results indicated that decreasing the compaction temperature from 310 °F to 270 °F did not have any negative effect on material strength for either WMA mixture, while the control HMA strength was affected to some extent. It was noticed that the E* and IDT results of the chemical-based WMA field cores were high; however, the laboratory-compacted specimen results did not show the same tendency. The moisture sensitivity findings from the TSR test disagreed with those of the Hamburg test: while TSR results indicated relatively low values of about 60% for all three mixtures, Hamburg test results were excellent. In general, the results of this study indicated that both WMA mixes are best evaluated through field-compacted mixes/cores; the results of the laboratory-compacted specimens were helpful to a certain extent. The dynamic moduli of the field-core specimens were higher than those of the specimens compacted in the laboratory. The moisture damage findings indicated that more investigation is needed to evaluate moisture damage susceptibility in the field.
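
For readers unfamiliar with the IDT and TSR quantities referenced above, the sketch below shows the standard calculations: indirect tensile strength St = 2P/(π·t·D) and the AASHTO T-283 tensile strength ratio. The peak loads and specimen dimensions are illustrative numbers, not data from the study.

```python
# Minimal sketch of indirect tensile (IDT) strength and tensile strength ratio
# (TSR) calculations.  Loads and dimensions below are hypothetical examples.
import math

def idt_strength(peak_load_N, thickness_mm, diameter_mm):
    """Indirect tensile strength St = 2P / (pi * t * D), returned in kPa."""
    t = thickness_mm / 1000.0
    d = diameter_mm / 1000.0
    return 2.0 * peak_load_N / (math.pi * t * d) / 1000.0

# Hypothetical unconditioned (dry) and moisture-conditioned (wet) specimens,
# 150 mm diameter by 62 mm thick.
st_dry = idt_strength(peak_load_N=14500.0, thickness_mm=62.0, diameter_mm=150.0)
st_wet = idt_strength(peak_load_N=8900.0, thickness_mm=62.0, diameter_mm=150.0)

tsr = 100.0 * st_wet / st_dry    # AASHTO T-283 tensile strength ratio [%]
print(f"St dry = {st_dry:.0f} kPa, St wet = {st_wet:.0f} kPa, TSR = {tsr:.0f}%")
```
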

Contributors: Alossta, Abdulaziz (Author) / Kaloush, Kamil (Thesis advisor) / Witczak, Matthew W. (Committee member) / Mamlouk, Michael S. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The Electoral College, the current electoral system in the U.S., operates on a Winner-Take-All or First Past the Post (FPTP) principle, where the candidate with the most votes wins. This system, however, is problematic. According to Lani Guinier in Tyranny of the Majority, “the winner-take-all principle invariably wastes some votes” (121). This means that the majority group gets all of the power in an election while the votes of the minority groups are completely wasted and hold little to no significance. Additionally, FPTP systems reinforce a two-party system in which neither candidate may satisfy the majority of the electorate’s needs and issues, yet voters are forced to choose between the two dominant parties. Moreover, voting for a third-party candidate only hurts the voter, since it takes votes away from the party they might otherwise support and gives the victory to the party they prefer the least, ensuring that the two-party system is inescapable. Therefore, a winner-take-all system does not provide the electorate with fair or proportional representation and creates voter disenfranchisement: it offers them very few choices that appeal to their needs and forces them to choose a candidate they dislike. There are, however, alternative voting systems that remedy these issues, such as a Ranked voting system, in which voters rank their candidate choices in the order they prefer them, or a Proportional voting system, in which a political party acquires a number of seats based on the proportion of votes it receives from the voter base. Given these alternatives, we will implement a software simulation of one of these systems to demonstrate how it works in contrast to FPTP, and thereby provide evidence of how these alternative systems could work in practice and in place of the current electoral system.
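
As a minimal sketch of one of the alternatives mentioned above, the snippet below tallies ballots with an instant-runoff (ranked-choice) rule: last-place candidates are eliminated until someone holds a majority. The ballots are invented examples, and this is not the simulation developed for the thesis.

```python
# Minimal instant-runoff (ranked-choice) tally with made-up ballots.
from collections import Counter

def instant_runoff(ballots):
    """Each ballot is a list of candidates in order of preference."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot's highest-ranked candidate still in the race.
        tally = Counter(next(c for c in ballot if c in candidates)
                        for ballot in ballots
                        if any(c in candidates for c in ballot))
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return winner                                  # majority reached
        candidates.remove(min(tally, key=tally.get))       # drop last place

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
           ["C", "B", "A"], ["C", "B", "A"]]
print(instant_runoff(ballots))   # "C": after "B" is eliminated, C holds 3 of 5 votes
```
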

Contributors: Summers, Jack Gillespie (Co-author) / Martin, Autumn (Co-author) / Burger, Kevin (Thesis director) / Voorhees, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

System and software verification is a vital component in the development and reliability of cyber-physical systems, especially in critical domains where the margin of error is minimal. In the case of autonomous driving systems (ADS), the vision perception subsystem is a necessity to ensure correct maneuvering through the environment and identification of objects. The challenge posed in perception systems involves verifying the accuracy and rigidity of detections. The use of Spatio-Temporal Perception Logic (STPL) enables the user to express requirements for the perception system to verify, validate, and ensure its behavior; however, a drawback of STPL is its accessibility. It is limited to individuals with expert-level knowledge of temporal and spatial logics, and the formally written requirements become quite verbose as more restrictions are imposed. In this thesis, I propose a domain-specific language (DSL) catered to Spatio-Temporal Perception Logic that enables non-expert users to capture requirements for perception subsystems while reducing the need for an experienced background in the logic. The DSL is built upon the formal language with two abstractions. The main abstraction captures simple programming statements that are translated to lower-level STPL expressions accepted by the testing monitor. The STPL DSL provides a seamless interface for writing formal expressions while maintaining the power and expressiveness of STPL. These translated, equivalent expressions can serve as a standard for perception systems, helping to ensure safety and reduce the risks involved in ill-formed detections.
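
As a purely illustrative sketch of the translation idea described above, the snippet below expands one simple high-level statement into a more verbose formula string. Both the surface syntax and the generated notation are invented for illustration; they are not the actual STPL grammar or the DSL built in this thesis.

```python
# Toy translation from a high-level requirement statement to a verbose
# pseudo-formula string.  The grammar and output notation are invented and do
# not reflect real STPL syntax.
import re

def translate(statement: str) -> str:
    """Translate 'always every <class> has confidence above <p>' into a
    verbose pseudo-formula with an explicit quantifier and predicate."""
    m = re.fullmatch(r"always every (\w+) has confidence above ([0-9.]+)",
                     statement.strip())
    if not m:
        raise ValueError(f"unsupported statement: {statement!r}")
    cls, threshold = m.group(1), float(m.group(2))
    return (f"G ( forall obj . (class(obj) == {cls!r}) -> "
            f"(confidence(obj) > {threshold}) )")

print(translate("always every pedestrian has confidence above 0.8"))
# G ( forall obj . (class(obj) == 'pedestrian') -> (confidence(obj) > 0.8) )
```
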

Contributors: Anderson, Jacob (Author) / Fainekos, Georgios (Thesis director) / Yang, Yezhou (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The market for searching for food online is exploding. According to one expert at Google, “there are over 1 billion restaurant searches on Google every month” (Kelso, 2020). To capture this market and ride the general digital trend of internet personalization (as evidenced by Google search results, ads, YouTube and social media algorithms, etc.), we created Munch, an algorithm meant to help people find food they’ll love.

Munch offers the ability to search for food by restaurant or even something as specific as a menu item (e.g., searching for the best Pad Thai). The best part? It is customized to your preferences based on a quiz you take when you open the app, and from that point it continuously learns from your behavior.

This thesis documents the journey of the team who founded Munch: the progress we made and the reasoning behind our decisions, where this idea fits in a competitive marketplace, how much it could be worth, branding, and our recommendations for a successful app in the future.
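
As a hedged sketch of the personalization idea described above, the snippet below scores menu items from quiz-seeded cuisine weights and nudges those weights after each interaction. All names, weights, and the scoring rule are assumptions for illustration, not Munch's actual algorithm.

```python
# Toy preference-weighted recommender: quiz seeds per-cuisine weights, and
# each interaction nudges them.  All values are illustrative assumptions.
quiz_weights = {"thai": 0.8, "mexican": 0.5, "italian": 0.2}   # from onboarding quiz
learning_rate = 0.1

menu_items = [
    {"name": "Pad Thai",         "cuisine": "thai",    "rating": 4.6},
    {"name": "Carnitas Tacos",   "cuisine": "mexican", "rating": 4.4},
    {"name": "Margherita Pizza", "cuisine": "italian", "rating": 4.7},
]

def score(item):
    # Blend the user's cuisine preference with the item's community rating.
    return 0.7 * quiz_weights.get(item["cuisine"], 0.3) + 0.3 * (item["rating"] / 5.0)

def record_interaction(item, liked: bool):
    # Continuously learn from behavior: move the cuisine weight up or down.
    w = quiz_weights.get(item["cuisine"], 0.3)
    quiz_weights[item["cuisine"]] = w + learning_rate * ((1.0 if liked else 0.0) - w)

print([item["name"] for item in sorted(menu_items, key=score, reverse=True)])
record_interaction(menu_items[2], liked=True)   # user liked the pizza
# Pad Thai still ranks first here; the 'italian' weight has just been nudged up.
print(sorted(menu_items, key=score, reverse=True)[0]["name"])
```
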

Contributors: Inocencio, Phillippe Adriane (Co-author) / Rajan, Megha (Co-author) / Krug, Hayden (Co-author) / Byrne, Jared (Thesis director) / Sebold, Brent (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

CubeSats can encounter a myriad of difficulties in space, such as cosmic rays, temperature issues, and loss of control. By creating better, more reliable software, these problems can be mitigated, increasing the chance of success for the mission. This research sets out to answer the question of how to create reliable flight software for CubeSats by providing a concentrated list of the best flight software development practices. The CubeSat used in this research is the Deployable Optical Receiver Aperture (DORA) CubeSat, a 3U CubeSat that seeks to demonstrate optical communication data rates of 1 Gbps over long distances. We present an analysis of many of the flight software development practices currently in use in the industry, including those from industry leader NASA, and identify three key flight software development areas of focus: memory, concurrency, and error handling. Within each of these areas, best practices were defined for how to approach the area. These practices were also developed using experience from the creation of flight software for the DORA CubeSat in order to drive the design and testing of the system. We analyze DORA's effectiveness in the three areas of focus and discuss how following the identified best practices helped to create a more reliable flight software system for the DORA CubeSat.
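
As a hedged example of the error-handling focus area described above, the sketch below wraps a hypothetical radio driver call in bounded retries with explicit status codes and graceful degradation. Real flight software such as DORA's would typically be written in C or C++; Python is used here only for brevity, and the interface names are invented.

```python
# Error-handling pattern sketch: explicit status codes, bounded retries, and
# graceful degradation instead of crashing.  The radio API is hypothetical.
import enum
import logging
import time

class Status(enum.Enum):
    OK = 0
    TIMEOUT = 1
    HARDWARE_FAULT = 2

def radio_send(packet: bytes) -> Status:
    """Stand-in for a hardware driver call (hypothetical)."""
    return Status.TIMEOUT            # pretend the radio is temporarily busy

def send_with_retries(packet: bytes, max_attempts: int = 3) -> Status:
    for attempt in range(1, max_attempts + 1):
        status = radio_send(packet)
        if status is Status.OK:
            return status
        logging.warning("radio_send attempt %d failed: %s", attempt, status.name)
        time.sleep(0.1 * attempt)    # bounded back-off, never an infinite loop
    return status                    # caller decides how to degrade gracefully

if send_with_retries(b"telemetry-frame") is not Status.OK:
    logging.error("telemetry deferred; continuing nominal operations")
```
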

Contributors: Hoffmann, Zachary Christian (Author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Jacobs, Daniel (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

"No civil discourse, no cooperation; misinformation, mistruth." These were the words of former Facebook Vice President Chamath Palihapitiya who publicly expressed his regret in a 2017 interview over his role in co-creating Facebook. Palihapitiya shared that social media is ripping apart the social fabric of society and he also sounded

"No civil discourse, no cooperation; misinformation, mistruth." These were the words of former Facebook Vice President Chamath Palihapitiya who publicly expressed his regret in a 2017 interview over his role in co-creating Facebook. Palihapitiya shared that social media is ripping apart the social fabric of society and he also sounded the alarm regarding social media’s unavoidable global impact. He is only one of social media’s countless critics. The more disturbing issue resides in the empirical evidence supporting such notions. At least 95% of adolescents own a smartphone and spend an average time of two to four hours a day on social media. Moreover, 91% of 16-24-year-olds use social media, yet youth rate Instagram, Facebook, and Twitter as the worst social media platforms. However, the social, clinical, and neurodevelopment ramifications of using social media regularly are only beginning to emerge in research. Early research findings show that social media platforms trigger anxiety, depression, low self-esteem, and other negative mental health effects. These negative mental health symptoms are commonly reported by individuals from of 18-25-years old, a unique period of human development known as emerging adulthood. Although emerging adulthood is characterized by identity exploration, unbounded optimism, and freedom from most responsibilities, it also serves as a high-risk period for the onset of most psychological disorders. Despite social media’s adverse impacts, it retains its utility as it facilitates identity exploration and virtual socialization for emerging adults. Investigating the “user-centered” design and neuroscience underlying social media platforms can help reveal, and potentially mitigate, the onset of negative mental health consequences among emerging adults. Effectively deconstructing the Facebook, Twitter, and Instagram (i.e., hereafter referred to as “The Big Three”) will require an extensive analysis into common features across platforms. A few examples of these design features include: like and reaction counters, perpetual news feeds, and omnipresent banners and notifications surrounding the user’s viewport. Such social media features are inherently designed to stimulate specific neurotransmitters and hormones such as dopamine, serotonin, and cortisol. Identifying such predacious social media features that unknowingly manipulate and highjack emerging adults’ brain chemistry will serve as a first step in mitigating the negative mental health effects of today’s social media platforms. A second concrete step will involve altering or eliminating said features by creating a social media platform that supports and even enhances mental well-being.

Contributors: Gupta, Anay (Author) / Flores, Valerie (Thesis director) / Carrasquilla, Christina (Committee member) / Barnett, Jessica (Committee member) / The Sidney Poitier New American Film School (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one's palm. However, if an application does not require extensive computation power nor accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
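
As a hedged illustration of the local-rule idea described above, the sketch below simulates particles that sense only a noisy scalar signal and the positions of nearby neighbors, drifting toward a target. The update rule, communication radius, and constants are assumptions, not the algorithm developed in the thesis.

```python
# Local-communication convergence sketch: each particle either follows a
# neighbor with a stronger sensed signal or takes a greedy local step.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([8.0, 5.0])
positions = rng.uniform(0.0, 10.0, size=(30, 2))    # 30 particles in a 10 x 10 box

def signal(p):
    """Scalar signal that increases toward the target, with sensor noise (assumed)."""
    return -np.linalg.norm(p - target) + rng.normal(0.0, 0.05)

for step in range(400):
    for i in range(len(positions)):
        p = positions[i]
        dists = np.linalg.norm(positions - p, axis=1)
        neighbors = positions[(dists > 0.0) & (dists < 2.0)]   # local communication only
        if len(neighbors):
            best = max(neighbors, key=signal)
            if signal(best) > signal(p):
                positions[i] = p + 0.2 * (best - p)            # follow a better neighbor
                continue
        trial = p + rng.normal(0.0, 0.3, size=2)               # otherwise, greedy local step
        if signal(trial) > signal(p):
            positions[i] = trial

print("mean distance to target:",
      round(np.linalg.norm(positions - target, axis=1).mean(), 2))
```
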

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In this Barrett Honors Thesis, I developed a model to quantify the complexity of Sankey diagrams, a visualization technique that shows flow between groups. To do this, I created a carefully controlled dataset of synthetic Sankey diagrams of varying sizes as study stimuli. Then, a pair of online crowdsourced user studies was conducted and analyzed. User performance for Sankey diagrams of varying size and features (number of groups, number of timesteps, and number of flow crossings) was algorithmically modeled as a formula to quantify the complexity of these diagrams. Model accuracy was measured based on the performance of users in the second crowdsourced study. The results of my experiment conclusively demonstrate that the algorithmic complexity formula I created closely models the visual complexity of the Sankey diagrams in the dataset.
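
As a hedged sketch of the modeling step described above, the snippet below fits a linear complexity formula in the number of groups, timesteps, and flow crossings by ordinary least squares. Both the model form and the data points are placeholder assumptions, not the formula or results from the user studies.

```python
# Fit an illustrative linear complexity model:
#   complexity ~ a*groups + b*timesteps + c*crossings + d
# to hypothetical difficulty scores using ordinary least squares.
import numpy as np

# columns: groups, timesteps, flow crossings   (hypothetical stimuli)
features = np.array([[3, 2, 1],
                     [5, 3, 6],
                     [8, 4, 20],
                     [10, 5, 35],
                     [12, 6, 55]], dtype=float)
# hypothetical difficulty scores derived from error rates / response times
difficulty = np.array([1.2, 2.0, 3.9, 5.4, 7.1])

X = np.column_stack([features, np.ones(len(features))])   # add intercept term
coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
a, b, c, d = coef
print(f"complexity = {a:.2f}*groups + {b:.2f}*timesteps + {c:.2f}*crossings + {d:.2f}")
```
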

Contributors: Ginjpalli, Shashank (Author) / Bryan, Chris (Thesis director) / Hsiao, Sharon (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05