Matching Items (607)
Description

With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis assays some applications of compressive sensing and sparse representation with regard to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of an image classification system based on unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to an analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art results in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches. Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
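To make the core operation concrete, here is a minimal sketch of the sparse-coding step common to all three applications: recovering a sparse coefficient vector for an image patch over a dictionary by solving an l1-regularized least-squares problem with the iterative shrinkage-thresholding algorithm (ISTA). The dictionary, patch, and parameter values below are toy placeholders, not the raw-sampled or trained dictionaries studied in the thesis.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||y - D@x||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the least-squares term
        z = x - grad / L                   # plain gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy usage: code a synthetic "patch" against a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)             # normalize atoms
y = D[:, :5] @ rng.standard_normal(5)      # patch built from 5 atoms
x_hat = ista_sparse_code(D, y)
print("nonzero coefficients:", int(np.sum(np.abs(x_hat) > 1e-6)))
```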
Contributors: Kulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast for a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan. The scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of different solution methods.
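As a toy illustration of this kind of formulation, here is a deliberately miniature MILP in PuLP that assigns lots to unrelated parallel machines under sequence-independent family setups and qualification constraints, minimizing a makespan bound. The instance data, variable names, and the reduction to a single stage are hypothetical simplifications, not the thesis's actual model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Hypothetical toy instance: 4 lots, 2 families, 2 unrelated parallel machines.
lots = {"A1": "famA", "A2": "famA", "B1": "famB", "B2": "famB"}
machines = ["m1", "m2"]
families = {"famA", "famB"}
proc = {("A1", "m1"): 3, ("A1", "m2"): 4, ("A2", "m1"): 3, ("A2", "m2"): 5,
        ("B1", "m1"): 6, ("B1", "m2"): 4, ("B2", "m1"): 6, ("B2", "m2"): 4}
setup = {"famA": 1, "famB": 2}          # sequence-independent family setups
disqualified = {("B2", "m1")}           # product-machine qualification

prob = LpProblem("backend_batch_schedule", LpMinimize)
x = {(l, m): LpVariable(f"x_{l}_{m}", cat=LpBinary) for l in lots for m in machines}
y = {(f, m): LpVariable(f"y_{f}_{m}", cat=LpBinary) for f in families for m in machines}
cmax = LpVariable("makespan", lowBound=0)

prob += cmax                                        # minimize the makespan bound
for l in lots:
    prob += lpSum(x[l, m] for m in machines) == 1   # every lot assigned once
for (l, m) in disqualified:
    prob += x[l, m] == 0                            # respect qualification
for l, f in lots.items():
    for m in machines:
        prob += x[l, m] <= y[f, m]                  # setup incurred if family runs
for m in machines:
    prob += cmax >= (lpSum(proc[l, m] * x[l, m] for l in lots)
                     + lpSum(setup[f] * y[f, m] for f in families))

prob.solve()
print("makespan:", value(cmax))
for (l, m), var in x.items():
    if var.value() == 1:
        print(l, "->", m)
```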
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W. (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Demands in file size and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high definition video content. Factor in the consumer desire for convenience, and wireless service becomes the most desired approach for inter-connectivity. Consumers expect wireless service to emulate wired service with little to virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high definition video applications. I then proceed to look at proposed solutions at the physical (PHY) and media access control (MAC) layers as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap exists in the published literature pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented, and no work has been found that reports results from such a model. This research proposes, develops, and implements in MATLAB code an alternative cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation has been conclusively shown to be superior to non-adaptive techniques and sufficiently superior even to quasi-adaptive algorithms.
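Since the evaluation hinges on the average PSNR metric, a short sketch of how per-frame PSNR is computed and averaged over a received video sequence may help; the frame data below is synthetic, and the function is the standard definition rather than code from this work.

```python
import numpy as np

def psnr(reference, degraded, max_val=255.0):
    """Peak signal-to-noise ratio between two frames, in dB."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Average PSNR over a (synthetic) sequence of received frames vs. the originals.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, size=(720, 1280), dtype=np.uint8) for _ in range(3)]
received = [np.clip(f + rng.normal(0, 5, f.shape), 0, 255).astype(np.uint8)
            for f in frames]
print("avg PSNR (dB):", np.mean([psnr(f, r) for f, r in zip(frames, received)]))
```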
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The Electoral College, the current electoral system in the U.S., operates on a Winner-Take-All or First Past the Post (FPTP) principle, where the candidate with the most votes wins. Despite being the current system, it is problematic. According to Lani Guinier in Tyranny of the Majority, "the winner-take-all principle invariably wastes some votes" (121). This means that the majority group gets all of the power in an election while the votes of the minority groups are wasted and hold little to no significance. Additionally, FPTP systems reinforce a two-party system in which neither candidate may satisfy the majority of the electorate's needs and issues, yet voters are forced to choose between the two dominant parties. Moreover, voting for a third-party candidate only hurts the voter, since it takes votes away from the party they might otherwise support and hands the victory to the party they prefer least, ensuring that the two-party system is inescapable. Therefore, a winner-take-all system does not provide the electorate with fair or proportional representation and creates voter disenfranchisement: it offers them very few choices that appeal to their needs and forces them to choose a candidate they dislike. There are, however, alternative voting systems that remedy these issues, such as a ranked voting system, in which voters rank their candidate choices in order of preference, or a proportional voting system, in which a political party acquires a number of seats based on the proportion of votes it receives from the voter base. Given these alternatives, we implement a software simulation of one of these systems to demonstrate how it works in contrast to FPTP, and thereby provide evidence of how these alternative systems could work in practice and in place of the current electoral system.
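As a hint of what such a simulation might look like, here is a minimal instant-runoff (ranked-choice) counter; the ballot data is a made-up toy election, not drawn from the thesis, chosen so that the IRV winner differs from the plurality (FPTP) winner.

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff (ranked-choice) count: repeatedly eliminate the
    candidate with the fewest first-choice votes and transfer those
    ballots until someone holds a strict majority."""
    ballots = [list(b) for b in ballots]           # independent working copies
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(tally, key=tally.get)          # fewest first-choice votes
        ballots = [[c for c in b if c != loser] for b in ballots]

# Toy election: B would win a plurality (FPTP) count 5-4-3, but once C is
# eliminated, C's voters transfer to A, who then wins the IRV majority.
ballots = ([["A", "C", "B"]] * 4 + [["B", "A", "C"]] * 5 + [["C", "A", "B"]] * 3)
print(instant_runoff(ballots))   # -> A
```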

Contributors: Summers, Jack Gillespie (Co-author) / Martin, Autumn (Co-author) / Burger, Kevin (Thesis director) / Voorhees, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

System and software verification is a vital component in the development and reliability of cyber-physical systems, especially in critical domains where the margin of error is minimal. In the case of autonomous driving systems (ADS), the vision perception subsystem is a necessity to ensure correct maneuvering through the environment and identification of objects. The challenge posed in perception systems involves verifying the accuracy and rigidity of detections. The use of Spatio-Temporal Perception Logic (STPL) enables the user to express requirements for the perception system in order to verify, validate, and ensure its behavior; however, a drawback of STPL is its accessibility. It is limited to individuals with expert or higher-level knowledge of temporal and spatial logics, and the formally written requirements become quite verbose as more restrictions are imposed. In this thesis, I propose a domain-specific language (DSL) catered to Spatio-Temporal Perception Logic that enables non-expert users to capture requirements for perception subsystems while reducing the need for an experienced background in the logic. The domain-specific language for Spatio-Temporal Perception Logic is built upon the formal language with two abstractions. The main abstraction captures simple programming statements that are translated to lower-level STPL expressions accepted by the testing monitor. The STPL DSL provides a seamless interface for writing formal expressions while maintaining the power and expressiveness of STPL. These translated, equivalent expressions can serve as a standard for perception systems, helping to ensure safety and reduce the risks involved in ill-formed detections.
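To illustrate the translation idea only, here is a toy rewriter that maps simple high-level statements to lower-level formal expressions; the mini-grammar, keywords, and output notation are hypothetical stand-ins and do not reflect the actual STPL syntax or the DSL grammar developed in the thesis.

```python
import re

# Hypothetical mini-DSL: one statement per line, rewritten into a
# lower-level temporal-logic-style string (illustrative notation only).
RULES = [
    (r"always (.+)",               r"G(\1)"),        # global invariant
    (r"eventually (.+)",           r"F(\1)"),        # liveness
    (r"(\w+) within (\d+) frames", r"F[0,\2](\1)"),  # bounded eventuality
]

def translate(stmt):
    """Rewrite one DSL statement into a formal expression string."""
    for pattern, template in RULES:
        m = re.fullmatch(pattern, stmt.strip())
        if m:
            return m.expand(template)
    raise ValueError(f"unrecognized statement: {stmt!r}")

print(translate("always car_detected"))           # G(car_detected)
print(translate("pedestrian within 10 frames"))   # F[0,10](pedestrian)
```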

Contributors: Anderson, Jacob (Author) / Fainekos, Georgios (Thesis director) / Yang, Yezhou (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

CubeSats can encounter a myriad of difficulties in space, such as cosmic rays, temperature issues, and loss of control. By creating better, more reliable software, these problems can be mitigated and the chance of mission success increased. This research sets out to answer the question of how to create reliable flight software for CubeSats by providing a concentrated list of the best flight software development practices. The CubeSat used in this research is the Deployable Optical Receiver Aperture (DORA) CubeSat, a 3U CubeSat that seeks to demonstrate optical communication data rates of 1 Gbps over long distances. We present an analysis of many of the flight software development practices currently in use in the industry, including those from industry leader NASA, and identify three key flight software development areas of focus: memory, concurrency, and error handling. Within each of these areas, best practices were defined for how to approach it. These practices were also developed using experience from the creation of flight software for the DORA CubeSat, driving the design and testing of the system. We analyze DORA's effectiveness in the three areas of focus and discuss how following the identified best practices helped to create a more reliable flight software system for the DORA CubeSat.
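For a flavor of the three focus areas, the sketch below shows one hypothetical pattern per area (a preallocated buffer pool for memory, queue-based thread communication for concurrency, and bounded-retry error handling), written in Python for readability; it illustrates the general practices discussed, not code from the DORA flight software.

```python
import queue
import threading
import time

# Memory: preallocate a fixed buffer pool at startup; never allocate in-loop.
free_buffers = queue.Queue()
for _ in range(32):
    free_buffers.put(bytearray(256))

# Concurrency: one owner per resource; threads talk via bounded queues.
commands = queue.Queue(maxsize=16)

def send_with_retries(buf, attempts=3, delay=0.1):
    """Error handling: bounded retries, then degrade instead of crashing."""
    for _ in range(attempts):
        try:
            # ... radio transmit of buf would go here ...
            return True
        except OSError:
            time.sleep(delay)
    return False                     # caller must handle the degraded result

def radio_task():
    while True:
        cmd = commands.get()         # blocks; no busy-waiting
        if cmd is None:
            return                   # shutdown sentinel
        buf = free_buffers.get()
        try:
            buf[:len(cmd)] = cmd     # reuse a pooled buffer
            send_with_retries(buf)
        finally:
            free_buffers.put(buf)    # always return the buffer to the pool

worker = threading.Thread(target=radio_task, daemon=True)
worker.start()
commands.put(b"PING")
commands.put(None)                   # ask the worker to exit cleanly
worker.join()
```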

Contributors: Hoffmann, Zachary Christian (Author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Jacobs, Daniel (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

"No civil discourse, no cooperation; misinformation, mistruth." These were the words of former Facebook Vice President Chamath Palihapitiya who publicly expressed his regret in a 2017 interview over his role in co-creating Facebook. Palihapitiya shared that social media is ripping apart the social fabric of society and he also sounded

"No civil discourse, no cooperation; misinformation, mistruth." These were the words of former Facebook Vice President Chamath Palihapitiya who publicly expressed his regret in a 2017 interview over his role in co-creating Facebook. Palihapitiya shared that social media is ripping apart the social fabric of society and he also sounded the alarm regarding social media’s unavoidable global impact. He is only one of social media’s countless critics. The more disturbing issue resides in the empirical evidence supporting such notions. At least 95% of adolescents own a smartphone and spend an average time of two to four hours a day on social media. Moreover, 91% of 16-24-year-olds use social media, yet youth rate Instagram, Facebook, and Twitter as the worst social media platforms. However, the social, clinical, and neurodevelopment ramifications of using social media regularly are only beginning to emerge in research. Early research findings show that social media platforms trigger anxiety, depression, low self-esteem, and other negative mental health effects. These negative mental health symptoms are commonly reported by individuals from of 18-25-years old, a unique period of human development known as emerging adulthood. Although emerging adulthood is characterized by identity exploration, unbounded optimism, and freedom from most responsibilities, it also serves as a high-risk period for the onset of most psychological disorders. Despite social media’s adverse impacts, it retains its utility as it facilitates identity exploration and virtual socialization for emerging adults. Investigating the “user-centered” design and neuroscience underlying social media platforms can help reveal, and potentially mitigate, the onset of negative mental health consequences among emerging adults. Effectively deconstructing the Facebook, Twitter, and Instagram (i.e., hereafter referred to as “The Big Three”) will require an extensive analysis into common features across platforms. A few examples of these design features include: like and reaction counters, perpetual news feeds, and omnipresent banners and notifications surrounding the user’s viewport. Such social media features are inherently designed to stimulate specific neurotransmitters and hormones such as dopamine, serotonin, and cortisol. Identifying such predacious social media features that unknowingly manipulate and highjack emerging adults’ brain chemistry will serve as a first step in mitigating the negative mental health effects of today’s social media platforms. A second concrete step will involve altering or eliminating said features by creating a social media platform that supports and even enhances mental well-being.

Contributors: Gupta, Anay (Author) / Flores, Valerie (Thesis director) / Carrasquilla, Christina (Committee member) / Barnett, Jessica (Committee member) / The Sidney Poitier New American Film School (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one's palm. However, if an application does not require extensive computation power or accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells toward a desired outcome, each must run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine: such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
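As a simplified picture of decentralized convergence, the sketch below has each simulated agent act only on a locally sensed signal that grows stronger near the target, following a finite-difference estimate of its gradient; this greedy rule and all parameters are illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
target = np.array([5.0, 5.0])                   # unknown to the agents
agents = rng.uniform(-10, 10, size=(20, 2))     # random starting positions
step, eps = 0.1, 1e-3

def sensed(p):
    """Local scalar signal; decays with distance to the target."""
    return 1.0 / (1.0 + np.linalg.norm(p - target))

for _ in range(2000):
    for i, p in enumerate(agents):
        # Finite-difference estimate of the local signal gradient.
        grad = np.array([
            (sensed(p + [eps, 0]) - sensed(p - [eps, 0])) / (2 * eps),
            (sensed(p + [0, eps]) - sensed(p - [0, eps])) / (2 * eps),
        ])
        # Take a small step in the direction of increasing signal.
        agents[i] = p + step * grad / (np.linalg.norm(grad) + 1e-12)

print("mean distance to target:",
      np.mean(np.linalg.norm(agents - target, axis=1)))
```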

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In this Barrett Honors Thesis, I developed a model to quantify the complexity of Sankey diagrams, a visualization technique that shows flow between groups. To do this, I created a carefully controlled dataset of synthetic Sankey diagrams of varying sizes as study stimuli. Two online crowdsourced user studies were then conducted and analyzed. User performance for Sankey diagrams of varying size and features (number of groups, number of timesteps, and number of flow crossings) was algorithmically modeled as a formula to quantify the complexity of these diagrams. Model accuracy was measured based on the performance of users in the second crowdsourced study. The results of my experiment demonstrate that the algorithmic complexity formula I created closely models the visual complexity of the Sankey diagrams in the dataset.
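By way of illustration, a complexity score of this kind might combine the three studied features as a weighted sum; the functional form and weights below are placeholders, not the fitted formula from the thesis.

```python
def sankey_complexity(n_groups, n_timesteps, n_crossings,
                      w_g=1.0, w_t=1.0, w_c=2.0):
    """Hypothetical complexity score; higher values would predict
    worse user task performance on the diagram."""
    return w_g * n_groups + w_t * n_timesteps + w_c * n_crossings

# Example: a 6-group, 4-timestep diagram with 9 flow crossings.
print(sankey_complexity(n_groups=6, n_timesteps=4, n_crossings=9))
```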

Contributors: Ginjpalli, Shashank (Author) / Bryan, Chris (Thesis director) / Hsiao, Sharon (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In theory, Electric Vehicle (EV) ownership and renewable energy seem like a perfect solution to our climate crisis; however, unless done properly, the effects can be less than ideal. We need to find a way to maximize the impact of our efforts to reduce carbon emissions, which is exactly what the heart of my paper gets to. Carbon emissions are bad for the environment because they comprise a large majority of greenhouse gases. Greenhouse gases have recently become dramatically out of balance, resulting in an increase in respiratory diseases from smog and air pollution, as well as extreme weather and an increase in wildfires. Getting these greenhouse gases back in balance and maintaining an ecological balance is the goal of sustainability. According to the Environmental Protection Agency (EPA), transportation makes up 29% of greenhouse gas emissions in the US, followed closely by electricity generation at 28%, which makes Electric Vehicles the perfect target for reducing greenhouse gas emissions.

Arizona has many unique constraints when it comes to its electric infrastructure and its electric generation energy mix, which means the impacts of EV ownership become extremely complicated. In my paper, I aim to address the question: what are the carbon impact effects of Electric Vehicles (EVs) in Arizona through the lens of 1) the time of day that charging occurs, 2) the infrastructure needed to support EV penetration, and 3) the incentives given to the public to help provide the impetus for making greener choices? Using the best available research on how EVs are being adopted to reduce emissions, I will provide conclusive recommendations and a framework for how Arizona can best reduce carbon emissions through EVs.

Contributors: Sherman, Jessica Janiece (Author) / Keeler, Lauren (Thesis director) / Shaeffer, Lisa (Committee member) / Computer Science and Engineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05