Matching Items (1,063)

Description
Current economic conditions necessitate the extension of service lives for a variety of aerospace systems. As a result, there is an increased need for structural health management (SHM) systems to increase safety, extend life, reduce maintenance costs, and minimize downtime, lowering life cycle costs for these aging systems. The implementation of such a system requires a collaborative research effort in a variety of areas such as novel sensing techniques, robust algorithms for damage interrogation, high fidelity probabilistic progressive damage models, and hybrid residual life estimation models. This dissertation focuses on the sensing and damage estimation aspects of this multidisciplinary topic for application in metallic and composite material systems. The primary means of interrogating a structure in this work is through the use of Lamb wave propagation, which works well for the thin structures used in aerospace applications. Piezoelectric transducers (PZTs) were selected for this application since they can be used as both sensors and actuators of guided waves. Placement of these transducers is an important issue in wave-based approaches, as Lamb waves are sensitive to changes in material properties, geometry, and boundary conditions which may obscure the presence of damage if they are not taken into account during sensor placement. The placement scheme proposed in this dissertation arranges piezoelectric transducers in a pitch-catch mode so the entire structure can be covered using a minimum number of sensors. The stress distribution of the structure is also considered so PZTs are placed in regions where they do not fail before the host structure. In order to process the data from these transducers, advanced signal processing techniques are employed to detect the presence of damage in complex structures. To provide a better estimate of the damage for accurate life estimation, machine learning techniques are used to classify the type of damage in the structure. A data structure analysis approach is used to reduce the amount of data collected and increase computational efficiency. In the case of low velocity impact damage, fiber Bragg grating (FBG) sensors were used with a nonlinear regression tool to reconstruct the loading at the impact site.
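
A minimal sketch of the baseline-comparison idea behind pitch-catch interrogation, assuming one actuator-sensor path with a stored baseline record: a drop in correlation between the baseline and current Lamb-wave signals flags a change along the path. The toneburst parameters and the simple correlation-based damage index below are illustrative stand-ins, not the dissertation's actual signal-processing chain.

```python
# Illustrative only: correlation-based damage index for one pitch-catch pair.
import numpy as np

def damage_index(baseline: np.ndarray, current: np.ndarray) -> float:
    """1 - normalized correlation; near 0 for an unchanged path,
    approaching 1 as the current signal departs from the baseline."""
    b = (baseline - baseline.mean()) / baseline.std()
    c = (current - current.mean()) / current.std()
    return 1.0 - float(np.dot(b, c)) / len(b)

# Hypothetical Hann-windowed toneburst at 100 kHz, sampled at 1 MHz.
fs, f0, n = 1e6, 100e3, 500
t = np.arange(n) / fs
baseline = np.hanning(n) * np.sin(2 * np.pi * f0 * t)
# A "damaged" record: attenuated, time-shifted, and noisy.
current = 0.8 * np.roll(baseline, 20) + 0.01 * np.random.randn(n)
print(f"damage index = {damage_index(baseline, current):.3f}")
```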
Contributors: Coelho, Clyde (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Wu, Tong (Committee member) / Das, Santanu (Committee member) / Rajadas, John (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. At times, however, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to derive data mining or pattern recognition criteria for manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, to improve, via screening, the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greatest influence on downstream failure outcomes.
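
A hedged sketch of the pipeline described above: each upstream test trace is summarized by its first four sample L-moments (computed from probability-weighted moments), and a linear SVM, i.e., a hyperplane classifier, is trained on those features. The traces, labels, and classifier settings are invented for illustration; the thesis's WECO-rule features and actual data are not reproduced here.

```python
# Illustrative sketch: L-moment features feeding an SVM (numpy + scikit-learn).
import numpy as np
from sklearn.svm import SVC

def l_moments(x):
    """First four sample L-moments (l1..l4) via probability-weighted moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    return np.array([b0, 2*b1 - b0, 6*b2 - 6*b1 + b0, 20*b3 - 30*b2 + 12*b1 - b0])

# Hypothetical data: 200 upstream test traces labeled by later field outcome.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, (100, 256))   # traces from units that survived
failing = rng.normal(0.2, 1.4, (100, 256))   # shifted, noisier traces
X = np.array([l_moments(trace) for trace in np.vstack([healthy, failing])])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="linear").fit(X, y)         # the hyperplane classifier
print("training accuracy:", clf.score(X, y))
```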
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Demands in file size and transfer rates for consumer-oriented products have escalated in recent times. This is primarily due to the emergence of high definition video content. Now factor in the consumer desire for convenience, and we find that wireless service is the most desired approach for inter-connectivity. Consumers expect wireless service to emulate wired service with little to virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high definition video applications. I then proceed to look at proposed solutions at the physical (PHY) and the media access control (MAC) layers as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap in published literature exists pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution that provides real-time compensation for varying channel conditions might actually be implemented. Further, no work has been found that shows results of such a model. This research proposes, develops, and implements in Matlab code an alternate cross-layer solution that will provide acceptable QoS for multimedia applications. Simulations using actual high definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach while a fully adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation has been conclusively shown to be superior to non-adaptive techniques and sufficiently superior to even quasi-adaptive algorithms.
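
The results above are reported in terms of average PSNR. For reference, a minimal sketch of that metric for 8-bit frames follows; the thesis implementation is in Matlab, so this Python version and its synthetic frames are illustrative only.

```python
# Minimal sketch of the PSNR metric; the frames below are hypothetical
# stand-ins for a reference frame and its received/decoded counterpart.
import numpy as np

def psnr(reference: np.ndarray, received: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between a reference frame and a received frame."""
    mse = np.mean((reference.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (720, 1280), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(frame, noisy):.1f} dB")  # the thesis's ~7 dB / ~18 dB
                                              # gains are averages of this metric
```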
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the error rate of classification by performing image registration of the images with the previously obtained images before performing classification. The motivation behind this is the fact that images that are obtained in the same region which need to be classified will not differ significantly in characteristics. Hence, registration will provide an image that matches more closely to the previously obtained image, thus providing better classification. To illustrate that the proposed method works, naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life data set called the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help in better classification with naïve Bayes by reducing the error rate by an average of about 10% in the synthetic data and by about 7% on the actual datasets used.
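
A toy sketch of the register-then-classify idea, assuming 2D point sets rather than imagery: a bare-bones ICP (nearest-neighbor matching plus a Kabsch rotation fit) aligns newly acquired data to a previously obtained scan before a Gaussian naïve Bayes classifier is applied. Everything below, data included, is hypothetical and only mirrors the pipeline's shape, not the thesis's image-based implementation.

```python
# Illustrative pipeline: ICP registration before naive Bayes classification.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.naive_bayes import GaussianNB

def icp_2d(src, dst, iters=20):
    """Rigidly align src (N,2) to dst (M,2); returns the transformed src.
    (Reflection check omitted for brevity.)"""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        nearest = dst[tree.query(cur)[1]]       # closest-point correspondences
        mu_s, mu_d = cur.mean(0), nearest.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (nearest - mu_d))
        R = (U @ Vt).T                          # best-fit rotation (Kabsch)
        cur = (cur - mu_s) @ R.T + mu_d         # apply rotation + translation
    return cur

rng = np.random.default_rng(2)
ref = rng.uniform(0, 10, (200, 2))              # previously obtained "scan"
theta = 0.2                                     # new scan: rotated and shifted
Rz = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
new = ref @ Rz.T + np.array([0.5, -0.3])

labels = (ref[:, 0] > 5).astype(int)            # toy "terrain" labels
clf = GaussianNB().fit(ref, labels)             # classifier from prior data
print("accuracy without registration:", clf.score(new, labels))
print("accuracy with registration:   ", clf.score(icp_2d(new, ref), labels))
```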
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Underwater acoustic communications face significant challenges unprecedented in radio terrestrial communications including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels significantly deteriorates due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; (4) the capacity evaluation for single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and emulations based on data collected in real-life UWA communications experiments. Particularly, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of different paths and/or users, overcome the disadvantages of the commonly-used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems. The unique design effectively mitigates interuser interference (IUI), opening up the possibility to exploit advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
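
A heavily simplified sketch of the multiple-resampling idea in (2): each receiver branch resamples the received waveform by its own hypothesized Doppler scaling factor before FFT demodulation, so the branch matched to a path's (or user's) scaling recovers its subcarriers cleanly. Real UWA receivers also need CFO correction, channel estimation, and equalization; the scaling factors and signal below are invented for illustration.

```python
# Illustrative multiple-resampling front end for one OFDM symbol.
import numpy as np
from scipy.signal import resample

def branch_outputs(rx, scale_factors, n_fft=1024):
    """One demodulated tone vector per Doppler-scaling hypothesis."""
    outs = []
    for a in scale_factors:
        n_out = int(round(len(rx) / (1.0 + a)))  # undo dilation by (1 + a)
        branch = resample(rx, n_out)             # per-branch resampling
        outs.append(np.fft.fft(branch[:n_fft]))  # demodulate one symbol
    return outs

# Hypothetical transmission: one OFDM symbol of unit-modulus tones,
# received with a Doppler time dilation of a = 1e-3 (no noise, no channel).
n_fft = 1024
rng = np.random.default_rng(3)
X = np.exp(1j * 2 * np.pi * rng.random(n_fft))   # known tones
rx = resample(np.fft.ifft(X), int(round(n_fft * 1.001)))
for a, Y in zip([0.0, 1e-3], branch_outputs(rx, [0.0, 1e-3], n_fft)):
    print(f"branch a={a:g}: tone MSE = {np.mean(np.abs(Y - X) ** 2):.4f}")
```

The mismatched branch (a = 0) suffers ICI from the uncompensated dilation, while the matched branch recovers the tones with far smaller error.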
Contributors: Tu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The Electoral College, the current electoral system in the U.S., operates on a Winner-Take-All or First Past the Post (FPTP) principle, where the candidate with the most votes wins. Despite the Electoral College being the current system, it is problematic. According to Lani Guinier in Tyranny of the Majority, “the winner-take-all principle invariably wastes some votes” (121). This means that the majority group gets all of the power in an election while the votes of the minority groups are completely wasted and hold little to no significance. Additionally, FPTP systems reinforce a two-party system in which neither candidate may satisfy the majority of the electorate’s needs and issues, yet voters are forced to choose between the two dominant parties. Moreover, voting for a third-party candidate only hurts the voter, since it takes votes away from the party they might otherwise support and gives the victory to the party they prefer the least, ensuring that the two-party system is inescapable. Therefore, a winner-take-all system does not provide the electorate with fair or proportional representation and creates voter disenfranchisement: it offers them very few choices that appeal to their needs and forces them to choose a candidate they dislike. There are, however, alternative voting systems that remedy these issues, such as a ranked voting system, in which voters can rank their candidate choices in the order they prefer them, or a proportional voting system, in which a political party acquires a number of seats based on the proportion of votes it receives from the voter base. Given these alternatives, we will implement a software simulation of one of these systems to demonstrate how they work in contrast to FPTP systems, and thereby provide evidence of how these alternative systems could work in practice and in place of the current electoral system.
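
In the spirit of the simulation proposed above, here is a minimal sketch contrasting an FPTP tally with instant-runoff (ranked-choice) counting on the same ballots; the ballot profile is invented to show how the two rules can disagree.

```python
# Illustrative comparison of FPTP vs. instant-runoff on ranked ballots.
from collections import Counter

ballots = (
    [["A", "C", "B"]] * 40 +   # 40 voters: A first, C second
    [["B", "C", "A"]] * 35 +   # 35 voters: B first, C second
    [["C", "B", "A"]] * 25     # 25 voters: C first, B second
)

def fptp(ballots):
    """Winner is whoever has the most first-choice votes (plurality)."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    """Repeatedly eliminate the last-place candidate, transferring each
    ballot to its highest-ranked remaining choice, until one has a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.remove(min(tally, key=tally.get))   # drop last place

print("FPTP winner:          ", fptp(ballots))
print("Instant-runoff winner:", instant_runoff(ballots))
```

Here A wins under FPTP with only a 40% plurality, while instant-runoff eliminates C and transfers those ballots to B, who wins with a 60% majority.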

Contributors: Summers, Jack Gillespie (Co-author) / Martin, Autumn (Co-author) / Burger, Kevin (Thesis director) / Voorhees, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

System and software verification is a vital component in the development and reliability of cyber-physical systems, especially in critical domains where the margin of error is minimal. In the case of autonomous driving systems (ADS), the vision perception subsystem is a necessity to ensure correct maneuvering of the environment and identification of objects. The challenge posed in perception systems involves verifying the accuracy and rigidity of detections. The use of Spatio-Temporal Perception Logic (STPL) enables the user to express requirements for the perception system to verify, validate, and ensure its behavior; however, a drawback to STPL involves its accessibility. It is limited to individuals with expert-level knowledge of temporal and spatial logics, and the formally written requirements become quite verbose as more restrictions are imposed. In this thesis, I propose a domain-specific language (DSL) catered to Spatio-Temporal Perception Logic to give non-expert users the ability to capture requirements for perception subsystems while reducing the need for an experienced background in said logic. The domain-specific language for the Spatio-Temporal Perception Logic is built upon the formal language with two abstractions. The main abstraction captures simple programming statements that are translated to a lower-level STPL expression accepted by the testing monitor. The STPL DSL provides a seamless interface for writing formal expressions while maintaining the power and expressiveness of STPL. These translated equivalent expressions are capable of setting a standard for perception systems, ensuring safety and reducing the risks involved in ill-formed detections.
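
A toy illustration of the DSL-to-formula translation step, with both the surface syntax and the target formulas invented for this sketch: they do not reproduce the actual STPL grammar or the thesis's DSL, only the pattern of expanding a short statement into a more verbose lower-level expression.

```python
# Hypothetical rule-based expansion of DSL statements into formula strings.
import re

RULES = [
    # "always confidence of car >= 0.9" -> a globally-quantified formula
    (re.compile(r"always confidence of (\w+) >= ([\d.]+)"),
     r"G (forall x : (class(x) == \1) -> (conf(x) >= \2))"),
    # "eventually detect pedestrian within 5 frames" -> a bounded-eventually formula
    (re.compile(r"eventually detect (\w+) within (\d+) frames"),
     r"F[0,\2] (exists x : class(x) == \1)"),
]

def translate(stmt: str) -> str:
    """Expand one DSL statement into its (hypothetical) low-level formula."""
    for pattern, template in RULES:
        if pattern.fullmatch(stmt):
            return pattern.sub(template, stmt)
    raise ValueError(f"no rule matches: {stmt!r}")

print(translate("always confidence of car >= 0.9"))
print(translate("eventually detect pedestrian within 5 frames"))
```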

Contributors: Anderson, Jacob (Author) / Fainekos, Georgios (Thesis director) / Yang, Yezhou (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

CubeSats can encounter a myriad of difficulties in space like cosmic rays, temperature issues, and loss of control. By creating better, more reliable software, these problems can be mitigated, increasing the chance of success for the mission. This research sets out to answer the question "How do we create reliable flight software for CubeSats?" by providing a concentrated list of the best flight software development practices. The CubeSat used in this research is the Deployable Optical Receiver Aperture (DORA) CubeSat, which is a 3U CubeSat that seeks to demonstrate optical communication data rates of 1 Gbps over long distances. We present an analysis of many of the flight software development practices currently in use in the industry, including those from industry leader NASA, and identify three key flight software development areas of focus: memory, concurrency, and error handling. Within each of these areas, best practices were defined for how to approach that area. These practices were also developed using experience from the creation of flight software for the DORA CubeSat in order to drive the design and testing of the system. We analyze DORA’s effectiveness in the three areas of focus, as well as discuss how following the identified best practices helped to create a more reliable flight software system for the DORA CubeSat.

Contributors: Hoffmann, Zachary Christian (Author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Jacobs, Daniel (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

"No civil discourse, no cooperation; misinformation, mistruth." These were the words of former Facebook Vice President Chamath Palihapitiya who publicly expressed his regret in a 2017 interview over his role in co-creating Facebook. Palihapitiya shared that social media is ripping apart the social fabric of society and he also sounded

"No civil discourse, no cooperation; misinformation, mistruth." These were the words of former Facebook Vice President Chamath Palihapitiya who publicly expressed his regret in a 2017 interview over his role in co-creating Facebook. Palihapitiya shared that social media is ripping apart the social fabric of society and he also sounded the alarm regarding social media’s unavoidable global impact. He is only one of social media’s countless critics. The more disturbing issue resides in the empirical evidence supporting such notions. At least 95% of adolescents own a smartphone and spend an average time of two to four hours a day on social media. Moreover, 91% of 16-24-year-olds use social media, yet youth rate Instagram, Facebook, and Twitter as the worst social media platforms. However, the social, clinical, and neurodevelopment ramifications of using social media regularly are only beginning to emerge in research. Early research findings show that social media platforms trigger anxiety, depression, low self-esteem, and other negative mental health effects. These negative mental health symptoms are commonly reported by individuals from of 18-25-years old, a unique period of human development known as emerging adulthood. Although emerging adulthood is characterized by identity exploration, unbounded optimism, and freedom from most responsibilities, it also serves as a high-risk period for the onset of most psychological disorders. Despite social media’s adverse impacts, it retains its utility as it facilitates identity exploration and virtual socialization for emerging adults. Investigating the “user-centered” design and neuroscience underlying social media platforms can help reveal, and potentially mitigate, the onset of negative mental health consequences among emerging adults. Effectively deconstructing the Facebook, Twitter, and Instagram (i.e., hereafter referred to as “The Big Three”) will require an extensive analysis into common features across platforms. A few examples of these design features include: like and reaction counters, perpetual news feeds, and omnipresent banners and notifications surrounding the user’s viewport. Such social media features are inherently designed to stimulate specific neurotransmitters and hormones such as dopamine, serotonin, and cortisol. Identifying such predacious social media features that unknowingly manipulate and highjack emerging adults’ brain chemistry will serve as a first step in mitigating the negative mental health effects of today’s social media platforms. A second concrete step will involve altering or eliminating said features by creating a social media platform that supports and even enhances mental well-being.

Contributors: Gupta, Anay (Author) / Flores, Valerie (Thesis director) / Carrasquilla, Christina (Committee member) / Barnett, Jessica (Committee member) / The Sidney Poitier New American Film School (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one’s palm. However, if an application does not require extensive computation power or accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
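
A minimal sketch of a local-communication convergence rule in the spirit of the algorithm above, assuming each particle can sense a scalar signal that peaks at the target and can compare readings only with neighbors inside its communication radius. The rule, radii, and step sizes are hypothetical, not the thesis's actual algorithm.

```python
# Illustrative swarm simulation: particles converge on a target using
# only local sensing and neighbor-to-neighbor comparison.
import numpy as np

rng = np.random.default_rng(4)
target = np.array([8.0, 6.0])                 # unknown to the particles' rule
pos = rng.uniform(0, 10, (50, 2))             # 50 particles, random start
COMM_RADIUS, STEP = 2.0, 0.15

def signal(p):
    """Sensed scalar signal; strongest at the target."""
    return -np.linalg.norm(p - target, axis=-1)

for _ in range(400):
    s = signal(pos)
    new_pos = pos.copy()
    for i in range(len(pos)):
        near = np.linalg.norm(pos - pos[i], axis=1) <= COMM_RADIUS
        best = int(np.argmax(np.where(near, s, -np.inf)))
        if best != i:                         # chase the best-reading neighbor
            d = pos[best] - pos[i]
            new_pos[i] += STEP * d / (np.linalg.norm(d) + 1e-9)
        else:                                 # locally best: greedy random probe
            probe = pos[i] + STEP * rng.normal(0.0, 1.0, 2)
            if signal(probe) > s[i]:          # keep the step only if it improves
                new_pos[i] = probe
    pos = new_pos

print("mean distance to target:", np.linalg.norm(pos - target, axis=1).mean())
```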

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05