Matching Items (2)
Description
It is commonly known that High Performance Computing (HPC) systems are most frequently used by multiple users for parallel, batch-job computations. Less well known, however, are the numerous HPC systems servicing data so sensitive that administrators enforce either (a) sequential job processing, in which only one job runs at a time on the entire system, or (b) physical separation, in which an entire HPC system is devoted to a single project until recommissioned. The driving forces behind this type of security are numerous but share a common origin: data so sensitive that measures above and beyond industry standard are used to ensure information security. This paper presents a network security solution that provides information security above and beyond industry standard while still enabling multi-user computations on the system. The paper's main contribution is a mechanism designed to enforce high-level time division multiplexing of network access (Time Division Multiple Access, or TDMA) according to security groups. By dividing network access into time windows, interactions between applications over the network can be prevented in an easily verifiable way.
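The time-window mechanism can be illustrated with a minimal sketch. This is not the paper's implementation; the group names, window length, and function names (`SECURITY_GROUPS`, `WINDOW_SECONDS`, `active_group`, `may_transmit`) are illustrative assumptions, and the sketch assumes fixed, equal-length windows assigned round-robin to security groups.

```python
# Sketch of high-level TDMA for network access by security group.
# Assumes fixed, equal-length time windows assigned round-robin;
# all names and values here are illustrative, not from the paper.

SECURITY_GROUPS = ["group-a", "group-b", "group-c"]
WINDOW_SECONDS = 60  # length of each network-access window

def active_group(elapsed_seconds: int) -> str:
    """Return the security group allowed network access at this time."""
    window_index = (elapsed_seconds // WINDOW_SECONDS) % len(SECURITY_GROUPS)
    return SECURITY_GROUPS[window_index]

def may_transmit(group: str, elapsed_seconds: int) -> bool:
    """A group may use the network only inside its own window."""
    return group == active_group(elapsed_seconds)
```

Because the schedule is a pure function of time, an auditor can verify at a glance that no two security groups ever hold network access simultaneously, which is the "easily verifiable" property the abstract highlights.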
Contributors: Ferguson, Joshua (Author) / Gupta, Sandeep Ks (Thesis advisor) / Varsamopoulos, Georgios (Committee member) / Ball, George (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
One of the main challenges in testing artificial intelligence (AI) enabled cyber-physical systems (CPS), such as autonomous driving systems and internet-of-things (IoT) medical devices, is the presence of machine learning components, for which formal properties are difficult to establish. In addition, interactions among operational components, the inclusion of a human-in-the-loop, and environmental changes give rise to a myriad of safety concerns, all of which may not only be impossible to test comprehensively before deployment but may not even have been detected during the design and testing phase. This dissertation identifies major challenges in the safety verification of AI-enabled safety-critical systems and addresses the safety problem by proposing an operational safety verification technique that relies on solving the following subproblems:
1. Given input/output operational traces collected from sensors/actuators, automatically learn a hybrid automaton (HA) representation of the AI-enabled CPS.
2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field.
This dissertation presents novel approaches for learning a hybrid automaton model from time-series traces collected from the operation of the AI-enabled CPS in the real world, for both linear and nonlinear CPS. The learned model allows operational safety to be stringently evaluated by comparing the learned HA model against a reference specification model of the system. The proposed techniques are evaluated on the artificial pancreas control system.
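The comparison of a learned model against a reference specification can be sketched in miniature. This is not the dissertation's method; it assumes, purely for illustration, that each HA mode's dynamics are summarized by a vector of coefficients, and the mode names, data layout, and tolerance are invented for the example.

```python
# Sketch of checking a learned hybrid-automaton (HA) model against a
# reference specification model. Assumes each mode maps to a list of
# dynamics coefficients; this representation and the tolerance are
# illustrative assumptions, not the dissertation's actual formalism.

def conforms(learned: dict, reference: dict, tol: float = 0.1) -> bool:
    """The learned HA conforms if every learned mode exists in the
    reference and its coefficients stay within `tol` of the spec."""
    for mode, coeffs in learned.items():
        if mode not in reference:
            return False  # learned behavior outside the specified modes
        spec = reference[mode]
        if len(coeffs) != len(spec):
            return False  # dynamics of different dimension
        if any(abs(a - b) > tol for a, b in zip(coeffs, spec)):
            return False  # dynamics drifted beyond tolerance
    return True
```

In this toy form, a learned mode absent from the specification, or coefficients drifting beyond the tolerance, flags a potential operational safety violation in the field.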
Contributors: Lamrani, Imane (Author) / Gupta, Sandeep Ks (Thesis advisor) / Banerjee, Ayan (Committee member) / Zhang, Yi (Committee member) / Runger, George C. (Committee member) / Rodriguez, Armando (Committee member) / Arizona State University (Publisher)
Created: 2020