Matching Items (1,087)
161217-Thumbnail Image.png
Description

Among classes in the Computer Science curriculum at Arizona State University, Automata Theory is widely considered one of the most difficult. Many Computer Science concepts, such as binary trees, Dijkstra's algorithm, pointers, and even arrays, have strong visual components that make them easier to understand, and resources for them are abundantly available online. Automata Theory, on the other hand, is the first Computer Science course students encounter with a significant focus on deep theory. Many of its concepts are difficult to visualize, or at least take considerable effort to do so, and visualizers for finite state machines are hard to come by. Because I thoroughly enjoyed learning about Automata Theory and parsers, I wanted to create a program that combined the two. I also thought that a program for visualizing automata would help students who struggle with Automata Theory develop a stronger understanding of it.
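
To make the object of study concrete, the following is a minimal sketch (in Python, not taken from the thesis) of a deterministic finite automaton stored as a transition table; this is exactly the structure a visualizer would render as labeled states and edges.

    class DFA:
        """A deterministic finite automaton as a transition table."""
        def __init__(self, alphabet, delta, start, accepting):
            self.alphabet = set(alphabet)
            self.delta = dict(delta)        # (state, symbol) -> next state
            self.start = start
            self.accepting = set(accepting)

        def accepts(self, string):
            """Run the machine on the input and report acceptance."""
            state = self.start
            for symbol in string:
                state = self.delta[(state, symbol)]
            return state in self.accepting

    # Example: a two-state DFA accepting binary strings with an even number of 1s.
    even_ones = DFA(
        alphabet={'0', '1'},
        delta={('even', '0'): 'even', ('even', '1'): 'odd',
               ('odd', '0'): 'odd', ('odd', '1'): 'even'},
        start='even',
        accepting={'even'},
    )
    print(even_ones.accepts('1001'))   # True  (two 1s)
    print(even_ones.accepts('1011'))   # False (three 1s)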

Contributors: Smith, Andrew (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2021-12
161232-Thumbnail Image.png
Description
Many real-world problems, such as model- and data-driven computer simulation analysis, social and collaborative network analysis, brain data analysis, and so on, benefit from jointly modeling and analyzing the underlying patterns associated with complex, multi-relational data. Tensor decomposition is an ideal mathematical tool for this joint modeling, due to its simultaneous analysis of such multi-relational data, which is made possible by the data's multidimensional, array-based nature. A major challenge in tensor decomposition lies in its computational and space complexity, especially for dense datasets. While the process is comparatively faster for sparse tensors, decomposition is still a major bottleneck for many applications. The tensor decomposition process results in dense (hence, large) intermediate results, even when the input tensor is sparse (or small). Noise is another challenge for most data mining techniques; many tensor decomposition schemes are sensitive to noisy datasets, and since noise is inevitable in real-world data, it can lead to false conclusions. In this dissertation, I develop innovative tensor decomposition algorithms for mining both sparse and dense multi-relational data in a noise-resistant way. I present novel, scalable, parallelizable tensor decomposition algorithms, specifically tuned to be effective for dense, noisy tensors, and which maintain the quality of the resulting analysis. Furthermore, I present results on multi-relational data applications focusing on model- and data-driven computer simulation analysis, as well as social network and web mining, which demonstrate the effectiveness of these tensor decompositions.
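
As a small, self-contained illustration of the kind of joint modeling the abstract describes, the sketch below computes a CP (CANDECOMP/PARAFAC) decomposition of a dense 3-way tensor using plain NumPy alternating least squares; it is a textbook baseline for illustration only, not one of the scalable, noise-resistant algorithms developed in the dissertation.

    import numpy as np

    def unfold(X, mode):
        """Matricize a tensor along one mode (C-order columns)."""
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def khatri_rao(A, B):
        """Column-wise Khatri-Rao product of two factor matrices."""
        r = A.shape[1]
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

    def cp_als(X, rank, n_iter=200, seed=0):
        """Rank-R CP decomposition via alternating least squares."""
        rng = np.random.default_rng(seed)
        factors = [rng.standard_normal((d, rank)) for d in X.shape]
        for _ in range(n_iter):
            for mode in range(X.ndim):
                others = [f for m, f in enumerate(factors) if m != mode]
                kr = others[0]
                for f in others[1:]:
                    kr = khatri_rao(kr, f)
                gram = np.ones((rank, rank))
                for f in others:
                    gram *= f.T @ f
                factors[mode] = unfold(X, mode) @ kr @ np.linalg.pinv(gram)
        return factors

    # Synthetic noiseless rank-3 tensor (e.g. entity x relation x time).
    rng = np.random.default_rng(1)
    A, B, C = (rng.random((s, 3)) for s in (8, 6, 5))
    X = np.einsum('ir,jr,kr->ijk', A, B, C)
    A_hat, B_hat, C_hat = cp_als(X, rank=3)
    X_hat = np.einsum('ir,jr,kr->ijk', A_hat, B_hat, C_hat)
    print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
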
Contributors: Li, Xinsheng (Author) / Candan, Kasim S (Thesis advisor) / Davulcu, Hasan (Committee member) / Sapino, Maria L (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2019
161175-Thumbnail Image.png
Description

This thesis explores how large-scale cyber exercises work in the 21st century, going in depth on Exercise Cyber Shield, the Department of Defense's largest unclassified cyber defense exercise, run by the Army National Guard. It highlights why these cyber exercises are so relevant, reviewing several large-scale cyber attacks that occurred in the past year and the impact they caused. The research illuminates the intricacies of cyber exercise assessment with manual versus automated scoring systems, and applies those lessons to creating an automated scoring engine for Exercise Cyber Shield. The thesis provides an inside look at the operations of the largest unclassified cyber defense exercise in the United States, drawing on conversations with the Exercise Officer-in-Charge of Cyber Shield and with a cyber exercise expert working on its assessment, as well as information from past Cyber Shield final reports. Issues that these large-scale cyber exercises have faced over the years are brought to light, and attempted solutions are discussed.

Contributors: Zhao, Henry (Author) / Chavez Echeagaray, Maria Elena (Thesis director) / Rhodes, Brad (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2021-12
153968-Thumbnail Image.png
Description
The holy grail of computer hardware across all market segments has been to sustain performance improvement at the same pace as silicon technology scales. As the technology scales and the size of transistors shrinks, the power consumption and energy usage per transistor decrease. On the other hand, transistor density increases significantly with technology scaling. Due to technology factors, the reduction in power consumption per transistor is not sufficient to offset the increase in power consumption per unit area. Therefore, to improve performance, energy efficiency must be improved at all design levels, from the circuit level to the application and algorithm levels.

At the architectural level, one promising approach is to populate the system with hardware accelerators, each optimized for a specific task. One drawback of hardware accelerators is that they are not programmable; because each performs one specific function, their utilization can be low. Software-programmable accelerators are an alternative approach to achieving both high energy efficiency and programmability. Due to their intrinsic characteristics, software-programmable accelerators can exploit both instruction-level parallelism and data-level parallelism.

Coarse-Grained Reconfigurable Architecture (CGRA) is a software-programmable accelerator consisting of a number of word-level functional units. Motivated by the promising characteristics of software-programmable accelerators, the potential of CGRAs in future computing platforms is studied and an end-to-end CGRA research framework is developed. This framework covers three different aspects: CGRA architectural design, integration in a computing system, and the CGRA compiler. First, the design and implementation of a CGRA and its instruction set is presented. This design is then modeled in a cycle-accurate system simulator. The simulation platform enables us to investigate several problems associated with a CGRA when it is deployed as an accelerator in a computing system. Next, the problem of mapping a compute-intensive region of a program to CGRAs is formulated. From this formulation, several efficient algorithms are developed which make effective use of the CGRA's scarce resources to minimize the running time of input applications. Finally, these mapping algorithms are integrated into a compiler framework to construct a compiler for CGRAs.
Contributors: Hamzeh, Mahdi (Author) / Vrudhula, Sarma (Thesis advisor) / Gopalakrishnan, Kailash (Committee member) / Shrivastava, Aviral (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2015
153969-Thumbnail Image.png
Description
Emerging trends in cyber security breaches of critical cloud infrastructures show that attackers have abundant resources (human and computing power), expertise, and the support of large organizations and possibly foreign governments. To greatly improve the protection of critical cloud infrastructures, human behavior must be incorporated into the prediction of potential security breaches. To achieve such prediction, the envisioned approach is a probabilistic modeling framework capable of accurately capturing both the system-wide causal relationships among the observed operational behaviors of the critical cloud infrastructure and the probabilistic human (users') behaviors on the subsystems that interact directly with humans. In this conceptual approach, the system-wide causal relationships are captured by a Bayesian network, and the probabilistic human behavior in the subsystems is captured by Markov Decision Processes (MDPs). The interactions between the dynamically changing state graphs of the MDPs and the dynamic causal relationships in the Bayesian network are key components of such probabilistic modeling applications. In this thesis, two techniques are presented to support this vision of predicting potential security breaches in critical cloud infrastructures. The first technique evaluates the conformance of the Bayesian network with the multiple MDPs. The second technique evaluates the dynamically changing Bayesian network structure for conformance with the structural rules of a Bayesian network, using a graph checker algorithm. A case study and its simulation are presented to show how the two techniques support the corresponding parts of the conceptual approach to predicting system-wide security breaches in critical cloud infrastructures.
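
The thesis's graph checker is not reproduced here, but the central structural rule it must enforce is that a Bayesian network remain a directed acyclic graph; the following minimal Python sketch checks that rule with Kahn's algorithm on a hypothetical three-node network.

    from collections import deque

    def is_dag(nodes, edges):
        """True if the directed graph has no cycle (Kahn's algorithm)."""
        indegree = {n: 0 for n in nodes}
        children = {n: [] for n in nodes}
        for parent, child in edges:
            children[parent].append(child)
            indegree[child] += 1
        queue = deque(n for n in nodes if indegree[n] == 0)
        visited = 0
        while queue:
            node = queue.popleft()
            visited += 1
            for child in children[node]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    queue.append(child)
        return visited == len(nodes)   # every node removed => acyclic

    # Hypothetical observed-behavior nodes; adding the last edge creates a cycle.
    nodes = ['login', 'privilege_change', 'alert']
    edges = [('login', 'privilege_change'), ('privilege_change', 'alert')]
    print(is_dag(nodes, edges))                             # True
    print(is_dag(nodes, edges + [('alert', 'login')]))      # False
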
Contributors: Nagaraja, Vinjith (Author) / Yau, Stephen S. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2015
153557-Thumbnail Image.png
Description

The purpose of this research is to efficiently analyze certain data provided and to see if a useful trend can be observed as a result. This trend can be used to analyze certain probabilities. There are three main pieces of data analyzed in this research: the value of δ for the call and put option, the %B value of the stock, and the amount of time until expiration of the stock option. The %B value is the most important. The purpose of analyzing the data is to see the relationship between the variables and, given certain values, the probability that the trade makes money. This result is then used to find the probability that certain trades make money over a period of time.

Since options are so dependent on probability, this research specifically analyzes stock options rather than the stocks themselves. Stock options have value like stocks, except options are leveraged. The most common model used to calculate the value of an option is the Black-Scholes Model [1]. There are five main variables the Black-Scholes Model uses to calculate the overall value of an option: θ, δ, γ, v, and ρ. The variable θ is the rate of change in the price of the option due to time decay, δ is the rate of change of the option's price due to the stock's changing value, γ is the rate of change of δ, v represents the rate of change of the value of the option in relation to the stock's volatility, and ρ represents the rate of change in the value of the option in relation to the interest rate [2]. In this research, the %B value of the stock is analyzed along with the time until expiration of the option. All of the options analyzed have the same δ, because all of them are less than two months from expiration and the value of δ reveals how far in or out of the money an option is.
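
For reference, the sketch below evaluates the standard Black-Scholes formulas for a European call and the five quantities named above (δ, γ, v, θ, ρ); it is a generic textbook computation, not code from the thesis.

    from math import log, sqrt, exp, erf, pi

    def norm_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def norm_pdf(x):
        """Standard normal density."""
        return exp(-0.5 * x * x) / sqrt(2.0 * pi)

    def black_scholes_call(S, K, T, r, sigma):
        """Black-Scholes value and Greeks of a European call.

        S: stock price, K: strike, T: years to expiration,
        r: risk-free interest rate, sigma: volatility.
        """
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
        delta = norm_cdf(d1)                                  # δ: dV/dS
        gamma = norm_pdf(d1) / (S * sigma * sqrt(T))          # γ: dδ/dS
        vega = S * norm_pdf(d1) * sqrt(T)                     # v: dV/dσ
        theta = (-S * norm_pdf(d1) * sigma / (2.0 * sqrt(T))  # θ: dV/dt
                 - r * K * exp(-r * T) * norm_cdf(d2))
        rho = K * T * exp(-r * T) * norm_cdf(d2)              # ρ: dV/dr
        return price, delta, gamma, vega, theta, rho

    # Example: an at-the-money call two months (~1/6 year) from expiration.
    print(black_scholes_call(S=100.0, K=100.0, T=1.0 / 6.0, r=0.02, sigma=0.25))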

The machine learning technique used to analyze the data and the probability is support vector machines. Support vector machines analyze data that can be classified into one of two or more groups and attempt to find a pattern in the data in order to develop a model which reliably classifies similar, future data into the correct group. This is used to analyze the outcome of stock options.
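
A minimal illustration of this classification setup, sketched with scikit-learn on synthetic stand-ins for the two features discussed above (the data, labels, and labeling rule are fabricated purely for illustration):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    # Synthetic features: %B value and days until expiration; toy binary label.
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(0.0, 1.0, 500),     # %B value
                         rng.uniform(1.0, 60.0, 500)])   # days to expiration
    y = (X[:, 0] > 0.5).astype(int)                      # "trade made money"

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    clf.fit(X_train, y_train)                            # learn the separating model
    print("held-out accuracy:", clf.score(X_test, y_test))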

Contributors: Reeves, Michael (Author) / Richa, Andrea (Thesis advisor) / McCarville, Daniel R. (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2015
153926-Thumbnail Image.png
Description
One of the most remarkable outcomes of the evolution of the web into Web 2.0 has been the propelling of blogging into a widely adopted and globally accepted phenomenon. While the unprecedented growth of the Blogosphere has added diversity and enriched the media, it has also added complexity. To cope with the relentless expansion, many enthusiastic bloggers have embarked on voluntarily writing, tagging, labeling, and cataloguing their posts in hopes of reaching the widest possible audience. Unbeknownst to them, this reaching-for-others process triggers the generation of a new kind of collective wisdom: a result of shared collaboration and the exchange of ideas, purpose, and objectives through the formation of associations, links, and relations. A mastery of the Blogosphere can greatly help meet the needs of the ever-growing number of these users, as well as producers, service providers, and advertisers, by facilitating the categorization and navigation of this vast environment. This work explores a novel method to leverage the collective wisdom from the infused label space for blog search and discovery. The work demonstrates that the wisdom space can provide a unique and desirable framework in which to discover the highly sought-after background information that can aid in building classifiers. This insight is incorporated into the construction of a better clustering of blogs, which boosts the performance of classifiers in identifying more relevant labels for blogs and offers a mechanism for replacing spurious labels and mislabels in a multi-labeled space.
Contributors: Galan, Magdiel F (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2015
153607-Thumbnail Image.png
Description
Complex systems are pervasive in science and engineering. Some examples include complex engineered networks such as the internet, the power grid, and transportation networks. The complexity of such systems arises not just from their size, but also from their structure, their operation (including control and management), their evolution over time, and the fact that people are involved in their design and operation. Our understanding of such systems is limited because their behaviour cannot be characterized using traditional techniques of modelling and analysis.

As a step in model development, statistically designed screening experiments may be used to identify the main effects and interactions that are most significant to a system's response. However, traditional approaches to screening are ineffective for complex systems because of the size of the experimental design. Consequently, the factors considered are often restricted, but this automatically restricts the interactions that may be identified as well. Alternatively, the designs are restricted to identify only main effects, but this fails to consider any possible interactions among the factors.

To address this problem, a specific combinatorial design termed a locating array is proposed as a screening design for complex systems. Locating arrays exhibit logarithmic growth in the number of factors because their focus is on identification rather than on measurement. This makes practical the consideration of an order of magnitude more factors in experimentation than traditional screening designs.

As a proof of concept, a locating array is applied to screen for main effects and low-order interactions on the response of average Transmission Control Protocol (TCP) throughput in a simulation model of a mobile ad hoc network (MANET). A MANET is a collection of mobile wireless nodes that self-organize without the aid of any centralized control or fixed infrastructure. The full-factorial design for the MANET considered is infeasible (over 10^43 design points), yet a locating array has only 421 design points.

In conjunction with the locating array, a "heavy hitters" algorithm is developed to identify the influential main effects and two-way interactions, correcting for the non-normal distribution of the average throughput and the uneven coverage of terms in the locating array. The significance of the identified main effects and interactions is validated independently using the statistical software JMP.
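
The heavy-hitters algorithm itself is not reproduced here; as a rough stand-in for the identification task it performs, the sketch below expands a toy two-level design into main-effect and two-way-interaction columns and uses a sparse (Lasso) regression to surface the most influential terms. The design, response, and coefficients are all synthetic and illustrative.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Lasso

    # Toy screening setup: 12 two-level factors coded as +/-1; the response is
    # driven by two main effects and one two-way interaction, plus noise.
    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(421, 12))          # 421 runs, as in the text
    y = (3.0 * X[:, 0] - 2.0 * X[:, 4] + 1.5 * X[:, 0] * X[:, 7]
         + rng.normal(0.0, 0.5, size=len(X)))

    # Expand to main effects plus all two-way interactions, then fit a sparse model.
    expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
    Z = expand.fit_transform(X)
    model = Lasso(alpha=0.1).fit(Z, y)

    # Report the terms with the largest (most "influential") coefficients.
    names = expand.get_feature_names_out([f"f{i}" for i in range(12)])
    top = np.argsort(-np.abs(model.coef_))[:5]
    for idx in top:
        print(f"{names[idx]:>8s}  coef = {model.coef_[idx]: .2f}")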

The statistical characteristics used to evaluate traditional screening designs are also applied to locating arrays. These include the covariance matrix, fraction of design space, and aliasing, among others. The results lend additional support to the use of locating arrays as screening designs.

The use of locating arrays as screening designs for complex engineered systems is promising as they yield useful models. This facilitates quantitative evaluation of architectures and protocols and contributes to our understanding of complex engineered networks.
Contributors: Aldaco-Gastelum, Abraham Netzahualcoyotl (Author) / Syrotiuk, Violet R. (Thesis advisor) / Colbourn, Charles J. (Committee member) / Sen, Arunabha (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created: 2015