Matching Items (8)
Description
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically driven vapor-compression equipment. This paper proposes an alternative data center cooling architecture that is heat-driven, the source being the heat produced by the computer equipment itself. This dissertation details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a data center server blade. The experiments involve four liquid-cooling setups and associated heat-extraction methods, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes a useful simultaneous relationship among CPU temperature, power, and utilization level. In response to the system data, this project explores the heat, temperature, and power effects of adding insulation, varying water flow, varying CPU loading, and varying the cold plate-to-CPU clamping pressure. The goal is to provide the optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power, and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal energy of up to 67 Wth show a sufficiently high temperature and thermal energy to serve as the input temperature and heat medium for an absorption chiller. This dissertation performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work.
Exergy, as a source of realizable work, is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is the usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational, and efficiency values are calculated and plotted for our particular CPU, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
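As a rough sketch in our own notation (not the dissertation's), the two constituents can be written in their standard textbook forms: thermal exergy via the Carnot factor, and informational exergy via Landauer's bound on the minimum work associated with a bit operation:

```latex
% Thermal exergy of heat Q available at CPU temperature T, with ambient T_0:
X_{\mathrm{th}} = Q\left(1 - \frac{T_0}{T}\right)

% Informational exergy of N bit operations at temperature T (Landauer bound):
X_{\mathrm{inf}} = N \, k_B \, T \ln 2
```

For instance, taking the reported Q ≈ 67 Wth at T ≈ 93°C (366 K) against an assumed ambient of 25°C (298 K), the Carnot factor is about 0.19, giving roughly 12 W of thermal exergy; the actual ambient reference used in the dissertation may differ.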
Contributors: Haywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Android is currently the most widely used mobile operating system. The permission model in Android governs the resource access privileges of applications. The permission model however is amenable to various attacks, including re-delegation attacks, background snooping attacks and disclosure of private information. This thesis is aimed at understanding, analyzing and performing forensics on application behavior. This research sheds light on several security aspects, including the use of inter-process communications (IPC) to perform permission re-delegation attacks.

The Android permission system is app-driven rather than user-controlled: applications specify their permission requirements, and the only recourse a user has is to decline to install an application based on those requirements. Given this all-or-nothing choice, users succumb to pressure and need, and accept the requested permissions. This thesis proposes several ways of providing users finer-grained control over application privileges. The same methods can be used to defeat permission re-delegation attacks.

This thesis also proposes and implements a novel methodology in Android that can be used to control the access privileges of an Android application, taking into consideration the context of the running application. This application-context-based permission usage is further used to analyze a set of sample applications. We found evidence of applications spoofing or divulging user-sensitive information, such as location, contacts, and phone IDs and numbers, in the background. Such activities can be used to track users for a variety of privacy-intrusive purposes. We have developed implementations that minimize several forms of privacy leaks routinely committed by stock applications.
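The context-based permission control described above can be illustrated with a minimal default-deny policy model. The `Rule`/`ContextPolicy` names and the foreground/background contexts are our simplification for illustration, not the thesis's implementation:

```python
# Minimal model of context-aware permission mediation: a permission is
# granted only when the app's runtime context (e.g. foreground) matches
# the policy, which blocks background snooping on sensitive resources.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    permission: str
    allowed_contexts: frozenset  # e.g. frozenset({"foreground"})

class ContextPolicy:
    def __init__(self, rules):
        self._rules = {r.permission: r for r in rules}

    def is_allowed(self, permission: str, context: str) -> bool:
        rule = self._rules.get(permission)
        if rule is None:  # default-deny: unknown permissions are refused
            return False
        return context in rule.allowed_contexts

policy = ContextPolicy([
    Rule("ACCESS_FINE_LOCATION", frozenset({"foreground"})),
    Rule("READ_CONTACTS", frozenset({"foreground"})),
])

print(policy.is_allowed("ACCESS_FINE_LOCATION", "foreground"))  # True
print(policy.is_allowed("ACCESS_FINE_LOCATION", "background"))  # False
```

A real enforcement point would sit in the framework's IPC path rather than in application code, but the decision logic, permission plus context in, allow/deny out, has this shape.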
Contributors: Gollapudi, Narasimha Aditya (Author) / Dasgupta, Partha (Thesis advisor) / Xue, Guoliang (Committee member) / Doupe, Adam (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As the number of cores per chip increases, maintaining cache coherence becomes prohibitive for both power and performance. Non Coherent Cache (NCC) architectures do away with hardware-based cache coherence, but they become difficult to program. Some existing architectures provide a middle ground by providing some shared memory in the hardware. Specifically, the 48-core Intel Single-chip Cloud Computer (SCC) provides some off-chip (DRAM) shared memory and some on-chip (SRAM) shared memory. We call such architectures Hybrid Shared Memory, or HSM, manycore architectures. However, how to efficiently execute multi-threaded programs on HSM architectures is an open problem. To execute a multi-threaded program correctly on an HSM architecture, the compiler must: i) identify all the shared data and map it to the shared memory, and ii) map the frequently accessed shared data to the on-chip shared memory. This work presents a source-to-source translator written using CETUS that identifies a conservative superset of all the shared data in a multi-threaded application and maps it to the shared memory such that it enables execution on HSM architectures.
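The second compiler duty above, placing frequently accessed shared data on-chip, can be sketched as a simple greedy heuristic. This is an illustrative model under our own names, not the CETUS translator's actual algorithm:

```python
# Toy model of the mapping step: given identified shared variables with
# sizes (bytes) and access counts, greedily place the "hottest" ones
# (accesses per byte) in the small on-chip shared memory and spill the
# rest to the large off-chip shared memory.
def map_shared(variables, onchip_capacity):
    # variables: list of (name, size, accesses)
    by_heat = sorted(variables, key=lambda v: v[2] / v[1], reverse=True)
    onchip, offchip, used = [], [], 0
    for name, size, accesses in by_heat:
        if used + size <= onchip_capacity:
            onchip.append(name)
            used += size
        else:
            offchip.append(name)
    return onchip, offchip

shared = [("counter", 8, 1000), ("buffer", 4096, 1200), ("flags", 64, 900)]
onchip, offchip = map_shared(shared, onchip_capacity=1024)
print(onchip, offchip)  # ['counter', 'flags'] ['buffer']
```

The small, frequently touched variables win the scarce on-chip space even though the buffer has the highest absolute access count, which matches the intuition behind mapping hot shared data on-chip.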
Contributors: Rawat, Tushar (Author) / Shrivastava, Aviral (Thesis advisor) / Dasgupta, Partha (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Limited Local Memory (LLM) multicore architectures are promising power-efficient architectures with a scalable memory hierarchy. In LLM multicores, each core can access only a small local memory; accesses to a large shared global memory can only be made explicitly through Direct Memory Access (DMA) operations. The Standard Template Library (STL) is a powerful programming tool and is widely used for software development. The STL provides dynamic data structures, algorithms, and iterators for vector, deque (double-ended queue), list, map (red-black tree), etc. Since the size of the local memory in the cores of an LLM architecture is limited, and data transfer is not automatically supported by a hardware cache or the OS, the usability of current STL implementations on LLM multicores is limited; specifically, there is a hard limit on the amount of data they can handle. In this article, we propose and implement a framework which manages the STL container classes in the local memory of an LLM multicore architecture. Our proposal removes the data-size limitation of the STL and therefore improves programmability on LLM multicore architectures with little change to the original program. Our implementation results in only about a 12%-17% increase in static library code size and reasonable runtime overheads.
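The local-memory management such a framework performs can be caricatured in a few lines. This toy model is ours, not the dissertation's implementation: elements live in a large global store, only a small window is cached locally, and DMA transfers are modeled as explicit slice copies:

```python
# Toy model of a vector-like container for an LLM core: the full data
# lives in "global memory", and only a small window is cached in "local
# memory". A miss triggers a modeled DMA transfer of one window.
class PagedVector:
    def __init__(self, window_size=4):
        self.global_mem = []          # large shared global memory (DRAM)
        self.window = []              # small local-memory cache (SRAM)
        self.window_base = 0
        self.window_size = window_size
        self.dma_transfers = 0

    def push_back(self, value):
        self.global_mem.append(value)

    def _dma_fetch(self, index):
        # Evict the current window and fetch the page holding `index`.
        self.window_base = (index // self.window_size) * self.window_size
        end = self.window_base + self.window_size
        self.window = self.global_mem[self.window_base:end]
        self.dma_transfers += 1

    def __getitem__(self, index):
        if not (self.window_base <= index < self.window_base + len(self.window)):
            self._dma_fetch(index)
        return self.window[index - self.window_base]

v = PagedVector(window_size=4)
for i in range(10):
    v.push_back(i * i)
print(v[9], v.dma_transfers)  # prints: 81 1
```

The point of the model is that the container's capacity is bounded by global memory, not by the tiny local store, while repeated accesses within one window cost no additional transfers.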
Contributors: Lu, Di (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Performance improvements have largely followed Moore's Law with the help of technology scaling. To continue improving performance, power-efficiency must be improved. Better technology has improved power-efficiency, but this has a limit. Multi-core architectures have been shown to be an additional aid in this pursuit of increased power-efficiency. Accelerators are growing in popularity as the next means of achieving power-efficient performance. Accelerators such as Intel SSE are ideal but prove difficult to program. FPGAs, on the other hand, are less efficient due to their fine-grained reconfigurability. A middle ground is found in CGRAs, which are highly power-efficient yet largely programmable accelerators. Power-efficiencies in the 100s of GOPs/W have been estimated, more than two orders of magnitude greater than current processors. Currently, CGRAs are limited in their applicability because they can accelerate only a single thread at a time. This limitation becomes especially apparent as multi-core/multi-threaded processors have moved into the mainstream. This work removes that limitation by enabling multi-threading on CGRAs through a software-oriented approach. The key capability in this solution is quick run-time transformation of schedules to execute on targeted portions of the CGRA, which allows the CGRA to be shared among multiple threads simultaneously. Analysis shows that enabling multi-threading has very small costs but provides very large benefits (less than 1% single-threaded performance loss but nearly a 300% CGRA throughput increase). By increasing the dynamism of CGRA scheduling, the overall performance of an optimized system is shown to increase by almost 350% over that of a single-threaded CGRA, and to be nearly 20x faster than the same system with no CGRA in a highly threaded environment.
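The schedule-transformation idea can be caricatured as follows. This is a toy model under our own naming, not the actual scheduler: a kernel's placed schedule is shifted to a free region of the fabric so two threads occupy disjoint PE columns of the same CGRA:

```python
# Toy model of run-time schedule transformation: a kernel scheduled on
# columns 0..w-1 of the CGRA is relocated to a free column window so
# two threads can share the fabric simultaneously.
def relocate(schedule, col_offset):
    # schedule: list of (cycle, row, col, op) placements
    return [(cycle, row, col + col_offset, op)
            for cycle, row, col, op in schedule]

thread_a = [(0, 0, 0, "add"), (1, 1, 1, "mul")]
thread_b = relocate([(0, 0, 0, "sub"), (1, 0, 1, "ld")], col_offset=2)

# The two threads now occupy disjoint column ranges of the same CGRA:
cols_a = {c for _, _, c, _ in thread_a}
cols_b = {c for _, _, c, _ in thread_b}
print(sorted(cols_a), sorted(cols_b))  # [0, 1] [2, 3]
```

A real transformation must also respect routing and timing constraints of the fabric; the sketch only conveys why a cheap run-time remapping is what makes simultaneous sharing possible.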
Contributors: Pager, Jared (Author) / Shrivastava, Aviral (Thesis advisor) / Gupta, Sandeep (Committee member) / Speyer, Gil (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This report investigates the improvement in transmission throughput when fountain codes are used in opportunistic data routing for a proposed delay-tolerant network connecting remote and isolated communities in the Amazon region of Brazil to the main city of the area. To extend healthcare facilities to the remote and isolated communities on the banks of the Amazon River in Brazil, the network [7] utilizes regularly scheduled boats as data mules to carry data from one community to another.

Frequent thunderstorms and rain, the state of the infrastructure, and the harsh geographical terrain all increase the chance of messages not being delivered to their intended destination. These regions have access to medical facilities only through sporadic visits from a medical team from the main city in the region, Belem. With the proposed network, records of routine clinical examinations, such as ultrasounds on pregnant women, could be sent to doctors in Belem for evaluation.

However, the lack of modern communication infrastructure in these communities, unpredictable boat schedules due to delays and breakdowns, and high transmission failures caused by the harsh environment in the region mandate the design of robust delay-tolerant routing algorithms. The work presented here incorporates the unpredictability of the Amazon riverine scenario into the simulation model, accounting for mechanical failures leading to boat delays/breakdowns, possible decreases in transmission speed due to rain, and individual packet losses.

Extensive simulation results are presented to evaluate the proposed approach and to verify that the proposed solution [7] could be used as a viable mode of communication, given the lack of available options in the region. While the simulation results focus on remote healthcare applications in the Brazilian Amazon, we envision that our approach may also be used for other remote applications, such as distance education, and in other similar scenarios.
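As a hedged illustration of why fountain codes suit lossy data-mule links, a minimal XOR-based fountain code with a peeling decoder can be sketched as follows. The function names and the crude degree distribution are our stand-ins, not the specific codes or parameters used in the report:

```python
import random

def encode(blocks, n_droplets, seed=1):
    # Each droplet is the XOR of a random subset of source blocks.
    # The crude degree distribution below stands in for a soliton one.
    rng = random.Random(seed)
    droplets = []
    for _ in range(n_droplets):
        degree = rng.choice([1, 1, 2, 2, 3, 4])
        idxs = frozenset(rng.sample(range(len(blocks)), degree))
        value = 0
        for i in idxs:
            value ^= blocks[i]
        droplets.append((idxs, value))
    return droplets

def decode(droplets, n_blocks):
    # Peeling decoder: strip already-recovered blocks out of each
    # droplet and harvest any droplet reduced to a single unknown.
    pending = [[set(idxs), value] for idxs, value in droplets]
    recovered = {}
    progress = True
    while progress and len(recovered) < n_blocks:
        progress = False
        for entry in pending:
            idxs, value = entry
            for i in list(idxs):
                if i in recovered:
                    idxs.discard(i)
                    value ^= recovered[i]
            entry[1] = value
            if len(idxs) == 1:
                (i,) = idxs
                if i not in recovered:
                    recovered[i] = value
                    progress = True
    return [recovered.get(i) for i in range(n_blocks)]

blocks = [0x5A, 0x3C, 0x7E, 0x11]
decoded = decode(encode(blocks, n_droplets=16), len(blocks))
print(sum(d is not None for d in decoded), "of", len(blocks), "blocks recovered")
```

With enough received droplets the peeling decoder typically recovers every block, and droplets lost in transit simply shrink the received set rather than invalidating it, which is what makes rateless codes attractive for intermittently connected boats.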
Contributors: Agarwal, Rachit (Author) / Richa, Andrea (Thesis advisor) / Dasgupta, Partha (Committee member) / Johnson, Thienne (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
An increase in the usage of Internet of Things (IoT) devices across physical systems has provided a platform for continuous data collection, real-time monitoring, and the extraction of useful insights. Limited computing power and constrained resources on IoT devices have driven physical systems to rely on external resources, such as cloud computing, for handling compute-intensive and data-intensive processing. Recently, physical environments have begun to explore the usage of edge devices for handling complex processing. However, these environments may face many challenges, such as uncertainty of device availability, uncertainty of data relevance, and large sets of geographically dispersed devices. This research proposes the design of a reliable distributed management system with the following objectives: 1) improving the success rate of task completion in uncertain environments, 2) enhancing the reliability of applications, and 3) supporting latency-sensitive applications. The main modules of the proposed system include: 1) a novel proactive user recruitment approach to improve the success rate of task completion, 2) contextual data acquisition with integrated false-data detection to enhance the reliability of applications, and 3) novel distributed management of compute resources to achieve real-time monitoring and support highly responsive applications. User recruitment approaches select the devices for offloading computation; the proposed proactive user recruitment module selects an optimized set of devices that match the resource requirements of the application. The contextual data acquisition module relies on the contextual requirements to identify the data sources that are most useful to the application. The proposed reliable distributed management system can be used as a framework for offloading latency-sensitive applications across volunteer-computing edge devices.
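As an illustrative sketch of proactive recruitment (the names and the ranking criterion here are our assumptions, not the system's actual algorithm), devices that meet a task's resource requirements could be filtered and then ranked by predicted availability:

```python
# Toy sketch of proactive recruitment: keep the devices whose advertised
# resources meet the task's requirements, then rank them by predicted
# availability so that the k most dependable volunteers are chosen.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu: float            # available compute (arbitrary units)
    memory_mb: int
    availability: float   # predicted probability of staying reachable

def recruit(devices, cpu_needed, mem_needed, k):
    eligible = [d for d in devices
                if d.cpu >= cpu_needed and d.memory_mb >= mem_needed]
    ranked = sorted(eligible, key=lambda d: d.availability, reverse=True)
    return ranked[:k]

fleet = [
    Device("edge-a", cpu=2.0, memory_mb=512, availability=0.9),
    Device("edge-b", cpu=1.0, memory_mb=256, availability=0.99),
    Device("edge-c", cpu=4.0, memory_mb=1024, availability=0.6),
]
chosen = recruit(fleet, cpu_needed=1.5, mem_needed=400, k=2)
print([d.name for d in chosen])  # ['edge-a', 'edge-c']
```

The most available device is rejected for lacking resources, which is the essence of recruiting proactively on requirements rather than reacting to whichever volunteer responds first.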
Contributors: Chakati, Vinaya (Author) / Gupta, Sandeep K.S (Thesis advisor) / Dasgupta, Partha (Committee member) / Banerjee, Ayan (Committee member) / Pal, Anamitra (Committee member) / Kumar, Karthik (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In recent years, brain signals have gained attention as a potential trait for biometric security systems, and laboratory systems have been designed. A real-world brain-based security system must be usable, accurate, and robust. While there have been developments in these aspects, there are still challenges to be met. With regard to usability, users need to provide a lengthy amount of data, compared to other traits such as fingerprint and face, to be authenticated. Furthermore, the majority of works use medical-grade sensors, which are more accurate than commercial ones but have a tedious setup process and are not mobile. Performance-wise, the current state of the art can provide acceptable accuracy on a small pool of user data collected in a few sessions close to each other, but still falls behind on a large pool of subjects over a longer time period. Finally, a brain security system should be robust against presentation attacks, to prevent adversaries from gaining access to the system. This dissertation proposes E-BIAS (EEG-based Identification and Authentication System), a brain-mobile security system that makes contributions in three directions. First, it provides high performance on signals of shorter length collected by commercial sensors and processed with lightweight models, to meet the computation/energy capacity of mobile devices. Second, to evaluate the system's robustness, a novel presentation attack was designed, which challenged the literature's presumption of an intrinsic liveness property for brain signals. Third, to bridge the gap, I formulated and studied the brain liveness problem and proposed two solution approaches (model-aware and model-agnostic) to ensure liveness and enhance robustness against presentation attacks. Under each of the two solution approaches, several methods were suggested and evaluated against both synthetic and manipulative classes of attacks (a total of 43 different attack vectors).
Methods in both the model-aware and model-agnostic approaches succeeded in achieving an error rate of zero (0%). More importantly, such error rates were reached in the face of unseen attacks, which provides evidence of the generalization potential of the proposed solution approaches and methods. I suggest an adversarial workflow to facilitate attack and defense cycles, allowing enhanced generalization capacity for domains in which the decision-making process is non-deterministic, such as cyber-physical systems (e.g., biometric/medical monitoring, autonomous machines, etc.). I utilized this workflow for the brain liveness problem and was able to iteratively improve the performance of both the designed attacks and the proposed liveness detection methods.
Contributors: Sohankar Esfahani, Mohammad Javad (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Santello, Marco (Committee member) / Dasgupta, Partha (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2021