Among classes in the Computer Science curriculum at Arizona State University, Automata Theory is widely considered to be one of the most difficult. Many Computer Science concepts have strong visual components that make them easier to understand: binary trees, Dijkstra's algorithm, pointers, and even more basic concepts such as arrays all lend themselves to visualization, and resources for them are abundantly available online. Automata Theory, on the other hand, is the first Computer Science course students encounter with a significant focus on deep theory. Many of its concepts are difficult to visualize, or at least take considerable effort to do so, and visualizers for finite state machines are hard to come by. Because I thoroughly enjoyed learning about Automata Theory and parsers, I wanted to create a program that involved the two. Additionally, I thought that a program for visualizing automata would help students who struggle with Automata Theory develop a stronger understanding of it.
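To make the visualization target concrete, here is a minimal sketch, not taken from the thesis project, of how a deterministic finite automaton can be represented and simulated; the machine, state names, and test strings are all illustrative.

```python
# Minimal deterministic finite automaton (DFA): a transition table,
# a start state, and a set of accepting states. This toy machine
# accepts binary strings containing an even number of 1s; all names
# here are illustrative and not from the thesis project.

TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
START = "even"
ACCEPTING = {"even"}

def run_dfa(input_string: str) -> bool:
    """Simulate the DFA one symbol at a time and report acceptance."""
    state = START
    for symbol in input_string:
        state = TRANSITIONS[(state, symbol)]  # follow the labeled edge
    return state in ACCEPTING

if __name__ == "__main__":
    for s in ["", "1", "10", "1001"]:
        print(f"{s!r}: {'accept' if run_dfa(s) else 'reject'}")
```

A visualizer would render the same transition table as labeled states and edges, highlighting the current state as each input symbol is consumed.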
This thesis explores how large-scale cyber exercises work in the 21st century, with an in-depth look at Exercise Cyber Shield, the Department of Defense's largest unclassified cyber defense exercise, run by the Army National Guard. It highlights why these exercises are so relevant by reviewing several large-scale cyber attacks that occurred in the past year and the impact they caused. This research aims to illuminate the intricacies of cyber exercise assessment, particularly manual versus automated scoring systems; these insights inform work on creating an automated scoring engine for Exercise Cyber Shield. The thesis provides a behind-the-scenes look at the operations of the largest unclassified cyber defense exercise in the United States, drawing on conversations with the Exercise Officer-In-Charge of Cyber Shield and with a cyber exercise expert working on the assessment of the exercise, as well as information from past Cyber Shield final reports. Issues that these large-scale cyber exercises have faced over the years are brought to light, and attempted solutions are discussed.
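As a rough illustration of the manual-versus-automated distinction, the sketch below shows the kind of availability check an automated scoring engine might poll repeatedly during an exercise round; the hosts, ports, and point values are invented for illustration and are not drawn from Cyber Shield.

```python
# Hypothetical availability check of the kind an automated scoring engine
# might poll during an exercise round. Hosts, ports, and point values are
# invented for illustration; they are not from Cyber Shield.
import socket

def service_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def score_round(services, points_per_service: int = 10) -> int:
    """Award points for each scored service that is reachable this round."""
    return sum(points_per_service
               for host, port in services if service_up(host, port))

if __name__ == "__main__":
    services = [("10.0.0.5", 80), ("10.0.0.6", 25)]  # hypothetical targets
    print("Round score:", score_round(services))
```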
At the architectural level, one promising approach is to populate the system with hardware accelerators, each optimized for a specific task. One drawback of hardware accelerators is that they are not programmable; because each performs only one specific function, their utilization can be low. Using software-programmable accelerators is an alternative approach to achieving both high energy efficiency and programmability. Due to their intrinsic characteristics, software-programmable accelerators can exploit both instruction-level parallelism and data-level parallelism.
A Coarse-Grained Reconfigurable Architecture (CGRA) is a software-programmable accelerator consisting of a number of word-level functional units. Motivated by the promising characteristics of software-programmable accelerators, the potential of CGRAs in future computing platforms is studied and an end-to-end CGRA research framework is developed. This framework covers three aspects: CGRA architectural design, integration into a computing system, and the CGRA compiler. First, the design and implementation of a CGRA and its instruction set are presented. This design is then modeled in a cycle-accurate system simulator. The simulation platform enables the investigation of several problems that arise when a CGRA is deployed as an accelerator in a computing system. Next, the problem of mapping a compute-intensive region of a program onto a CGRA is formulated. From this formulation, several efficient algorithms are developed that make effective use of the CGRA's scarce resources to minimize the running time of input applications. Finally, these mapping algorithms are integrated into a compiler framework to construct a compiler for CGRAs.
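As a simplified illustration of the mapping problem, and not the algorithms developed in this work, the sketch below greedily list-schedules a small dataflow graph onto a fixed number of functional units, one cycle at a time, respecting data dependences; the graph and functional-unit count are invented.

```python
# Toy list-scheduling sketch: map a dataflow graph (DFG) onto a CGRA with a
# fixed number of functional units (FUs). Graph and FU count are invented;
# the thesis develops far more capable mapping algorithms.

def schedule(dfg, num_fus):
    """dfg maps each node to its list of predecessors.
    Returns {node: (cycle, fu)} placements."""
    placed = {}
    remaining = set(dfg)
    cycle = 0
    while remaining:
        # A node is ready once every predecessor finished in an earlier cycle.
        ready = sorted(n for n in remaining
                       if all(p in placed and placed[p][0] < cycle
                              for p in dfg[n]))
        for fu, node in enumerate(ready[:num_fus]):  # one op per FU per cycle
            placed[node] = (cycle, fu)
            remaining.discard(node)
        cycle += 1
    return placed

if __name__ == "__main__":
    # a and b feed c; c and d feed e (a tiny compute kernel's dataflow).
    dfg = {"a": [], "b": [], "c": ["a", "b"], "d": [], "e": ["c", "d"]}
    for node, (cycle, fu) in sorted(schedule(dfg, num_fus=2).items()):
        print(f"op {node}: cycle {cycle}, FU {fu}")
```

Real CGRA mappers must additionally account for the interconnect between functional units, register availability, and routing of values across cycles, which is what makes the problem difficult.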
The purpose of this research is to efficiently analyze certain provided data and to see if a useful trend can be observed as a result. This trend can then be used to analyze certain probabilities. Three main pieces of data are analyzed in this research: the δ value of the call and put options, the %B value of the stock, and the amount of time until expiration of the stock option. The %B value is the most important. The purpose of analyzing the data is to see the relationship between the variables and, given certain values, the probability that the trade makes money. This result will be used to find the probability that certain trades make money over a period of time.
Since options are so dependent on probability, this research specifically analyzes stock options rather than stocks themselves. Stock options have value like stocks, except that options are leveraged. The most common model used to calculate the value of an option is the Black-Scholes Model [1]. The Black-Scholes Model uses five main variables to calculate the overall value of an option: θ, δ, γ, v, and ρ. The variable θ is the rate of change in the price of the option due to time decay; δ is the rate of change of the option's price due to the stock's changing value; γ is the rate of change of δ; v represents the rate of change of the value of the option in relation to the stock's volatility; and ρ represents the rate of change in the value of the option in relation to the interest rate [2]. In this research, the %B value of the stock is analyzed along with the time until expiration of the option. All the options have the same δ, because all the options analyzed in this experiment are less than two months from expiration, and the value of δ reveals how far in or out of the money an option is.
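To ground the Greeks mentioned above, here is a short sketch computing the Black-Scholes price and δ of a European call; the parameter values are illustrative, not data from this research.

```python
# Black-Scholes price and delta for a European call option.
# S: spot price, K: strike, T: time to expiration in years,
# r: risk-free rate, sigma: volatility. Values below are illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution function

def call_price_and_delta(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * N(d1) - K * exp(-r * T) * N(d2)
    delta = N(d1)  # rate of change of option price w.r.t. the stock price
    return price, delta

if __name__ == "__main__":
    price, delta = call_price_and_delta(S=100, K=100, T=30/365,
                                        r=0.02, sigma=0.25)
    print(f"call price ≈ {price:.2f}, delta ≈ {delta:.3f}")
```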
The machine learning technique used to analyze the data and estimate the probability is the support vector machine. Support vector machines analyze data that can be classified into one of two or more groups and attempt to find a pattern in the data in order to develop a model that reliably classifies similar, future data into the correct group. This technique is used to analyze the outcome of stock options.
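A minimal sketch of this classification setup, using scikit-learn's SVC with synthetic stand-in features (%B and days to expiration) and a synthetic outcome label, is shown below; none of the data is from the research itself.

```python
# Sketch of classifying option outcomes with a support vector machine.
# Features and labels are synthetic stand-ins for the real data
# (%B value and days to expiration; label 1 = trade made money).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 1, 500),      # %B value
                     rng.integers(1, 60, 500)])   # days to expiration
# Synthetic rule standing in for the true outcome of each trade.
y = (X[:, 0] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```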
As a step in model development, statistically designed screening experiments may be used to identify the main effects and interactions that most significantly affect the response of a system. However, traditional approaches to screening are ineffective for complex systems because of the size of the experimental design. Consequently, the factors considered are often restricted, which automatically restricts the interactions that may be identified as well. Alternatively, the designs are restricted to identifying only main effects, but this fails to consider any possible interactions of the factors.
To address this problem, a specific combinatorial design termed a locating array is proposed as a screening design for complex systems. Locating arrays exhibit logarithmic growth in the number of factors because their focus is on identification rather than on measurement. This makes it practical to consider an order of magnitude more factors in experimentation than traditional screening designs allow.
As a proof of concept, a locating array is applied to screen for main effects and low-order interactions on the response of average Transmission Control Protocol (TCP) throughput in a simulation model of a mobile ad hoc network (MANET). A MANET is a collection of mobile wireless nodes that self-organize without the aid of any centralized control or fixed infrastructure. The full-factorial design for the MANET considered is infeasible (over 10^43 design points), yet a locating array has only 421 design points.
In conjunction with the locating array, a "heavy hitters" algorithm is developed to identify the influential main effects and two-way interactions, correcting for the non-normal distribution of the average throughput and the uneven coverage of terms in the locating array. The significance of the identified main effects and interactions is validated independently using the statistical software JMP.
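The "heavy hitters" algorithm itself is developed in the thesis; the sketch below conveys only the general flavor of such a screening pass, ranking main effects by the difference in mean response between factor levels, on a synthetic design matrix and response.

```python
# Illustrative screening pass in the spirit of a "heavy hitters" analysis:
# rank main effects by the gap in mean response between factor levels.
# Everything here is synthetic; the thesis algorithm also identifies
# two-way interactions and corrects for non-normality and the uneven
# coverage of terms in a locating array.
import numpy as np

rng = np.random.default_rng(1)
num_runs, num_factors = 421, 75  # 421 echoes the locating array above;
design = rng.integers(0, 2, size=(num_runs, num_factors))  # 75 is invented
# Synthetic response: only factors 3 and 17 truly matter, plus noise.
response = (2.0 * design[:, 3] - 1.5 * design[:, 17]
            + rng.normal(0, 0.5, num_runs))

effects = []
for f in range(num_factors):
    gap = abs(response[design[:, f] == 1].mean()
              - response[design[:, f] == 0].mean())
    effects.append((gap, f))

# Report the largest apparent main effects; the true factors should surface.
for gap, f in sorted(effects, reverse=True)[:5]:
    print(f"factor {f}: |effect| = {gap:.2f}")
```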
The statistical characteristics used to evaluate traditional screening designs are also applied to locating arrays. These include the covariance matrix, fraction of design space, and aliasing, among others. The results lend additional support to the use of locating arrays as screening designs.
The use of locating arrays as screening designs for complex engineered systems is promising, as they yield useful models. This facilitates quantitative evaluation of architectures and protocols and contributes to our understanding of complex engineered networks.