Matching Items (143)
Description
Cognitive Radios (CR) are designed to dynamically reconfigure their transmission and/or reception parameters to utilize the bandwidth efficiently. With a rapidly fluctuating radio environment, spectrum management becomes crucial for cognitive radios. In a Cognitive Radio Ad Hoc Network (CRAHN) setting, the sensing and transmission times of the cognitive radio play a particularly important role because of the decentralized nature of the network; they have a direct impact on the throughput. Because of the tradeoff between throughput and sensing time, finding optimal values for the sensing and transmission times is difficult. In this thesis, a method is proposed to improve the throughput of a CRAHN by dynamically changing the sensing and transmission times. To simulate the CRAHN setting, the network simulator ns-2 with an extension for CRAHNs is used. The CRAHN extension module implements the required Primary User (PU), Secondary User (SU), and other CR functionalities to simulate a realistic CRAHN scenario. First, this work presents a detailed analysis of various CR parameters, their interactions, and their individual contributions to the throughput, to understand how they affect transmissions in the network. Based on the results of this analysis, changes to the system model in the CRAHN extension are proposed. The instantaneous throughput of the network is introduced in the new model, which helps determine how the parameters should adapt based on the current throughput. Along with the instantaneous throughput, checks are performed for interference with the PUs and for their transmission power before modifying these CR parameters. Simulation results demonstrate that the throughput of the CRAHN with adaptive sensing and transmission times is significantly higher than with non-adaptive parameters.
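To illustrate the kind of adaptation described above, the following minimal Python sketch adjusts the SU's sensing and transmission times from the measured instantaneous throughput and a PU-interference check; the function name, step size, and bounds are hypothetical and are not taken from the thesis or the ns-2 CRAHN extension.

    # Illustrative adaptation loop (hypothetical names, step size, and bounds;
    # not the exact algorithm of the thesis or the ns-2 CRAHN extension).
    def adapt_times(t_sense, t_tx, thr_now, thr_prev, pu_interference,
                    step=0.001, t_sense_min=0.001, t_sense_max=0.05,
                    t_tx_min=0.01, t_tx_max=0.2):
        """Return updated (sensing time, transmission time) in seconds."""
        if pu_interference:
            # Protect the PU: sense longer, transmit for shorter periods.
            t_sense = min(t_sense + step, t_sense_max)
            t_tx = max(t_tx - step, t_tx_min)
        elif thr_now >= thr_prev:
            # Throughput improved: spend less time sensing, more transmitting.
            t_sense = max(t_sense - step, t_sense_min)
            t_tx = min(t_tx + step, t_tx_max)
        else:
            # Throughput dropped: back off to more conservative sensing.
            t_sense = min(t_sense + step, t_sense_max)
            t_tx = max(t_tx - step, t_tx_min)
        return t_sense, t_tx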
Contributors: Bapat, Namrata Arun (Author) / Syrotiuk, Violet R. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In today's world there is a great need for sensing methods that can provide critical information for security applications. Real-time detection of trace chemicals, such as explosives, in a complex environment containing various interferents, using a portable device that can be reliably deployed in the field, has been a difficult challenge. A hybrid nanosensor based on the electrochemical reduction of trinitrotoluene (TNT) and the interaction of the reduction products with conducting polymer nanojunctions in an ionic liquid was fabricated. The sensor simultaneously measures the electrochemical current from the reduction of TNT and the conductance change of the polymer nanojunction caused by the reduction product. The hybrid detection mechanism, together with the unique selective preconcentration capability of the ionic liquid, provides selective, fast, and sensitive detection of TNT. The sensor, in its current form, is capable of detecting parts-per-trillion levels of TNT in the presence of various interferents within a few minutes. A novel hybrid electrochemical-colorimetric (EC-C) sensing platform was also designed and fabricated to meet these challenges. The hybrid sensor is based on electrochemical reactions of trace explosives, colorimetric detection of the reaction products, and the unique properties of the explosives in an ionic liquid (IL). This approach affords not only increased sensitivity but also selectivity, as evident from the demonstrated zero false-positive rate and the low detection limits. Using an inexpensive webcam, a detection limit of parts per billion by volume (ppbV) has been achieved, and selective detection of explosives in the presence of common interferents (perfumes, mouthwash, cleaners, petroleum products, etc.) has been demonstrated. The work presented in this dissertation was published in the Journal of the American Chemical Society (JACS, 2009) and Nano Letters (2010), won first place in the National Defense Research contest (2009), and has been granted a patent (WO 2010/030874 A1). In addition, other work related to conducting polymer junctions and their sensing capabilities has been published in Applied Physics Letters (2005) and the IEEE Sensors Journal (2008).
Contributors: Diaz Aguilar, Alvaro (Author) / Tao, Nongjian (Thesis advisor) / Tsui, Raymond (Committee member) / Barnaby, Hugh (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The front end of almost all ADCs consists of a sample-and-hold circuit to ensure that a constant analog value is digitized by the ADC. The design of a Track and Hold Amplifier (THA) mainly focuses on the following parameters: input frequency, sampling frequency, dynamic range, hold pedestal, and feedthrough error. This thesis discusses the importance of these parameters of a THA to ADCs and the commonly used THA architectures. A new architecture using SiGe HBT transistors in a 130 nm BiCMOS technology is presented. The proposed topology, without complicated circuitry, achieves a high Spurious-Free Dynamic Range (SFDR) and low Total Harmonic Distortion (THD); these are important figures of merit for any THA because they give a measure of the non-linearity of the circuit. The proposed topology, implemented in the IBM 8HP 130 nm BiCMOS process, combines the typical emitter-follower switch of bipolar THAs with the output-steering technique proposed in previous work. With these techniques, and with a cascode transistor at the input that isolates the switch from the input during the hold mode, better results have been achieved. The THA is designed to work with a maximum input frequency of 250 MHz at a sampling frequency of 500 MHz, with input currents of no more than 5 mA, achieving an SFDR of 78.49 dB. Simulation results are presented, illustrating the advantages and trade-offs of the proposed topology.
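As background on the figures of merit mentioned above, the sketch below estimates SFDR and THD from a sampled single-tone output record. It is a generic post-processing example: the window choice, leakage-bin exclusion, and harmonic count are assumptions, not the analysis flow used in the thesis.

    # Generic SFDR/THD estimate from a sampled single-tone output record.
    # Window choice, leakage-bin exclusion, and harmonic count are assumptions.
    import numpy as np

    def sfdr_thd(samples, n_harmonics=5):
        n = len(samples)
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(n)))
        spectrum[0] = 0.0                      # ignore DC
        fund_bin = int(np.argmax(spectrum))
        p_fund = spectrum[fund_bin] ** 2

        spurs = spectrum.copy()
        lo, hi = max(fund_bin - 2, 0), min(fund_bin + 3, len(spurs))
        spurs[lo:hi] = 0.0                     # exclude fundamental and its leakage
        sfdr_db = 10 * np.log10(p_fund / np.max(spurs) ** 2)

        # Harmonic power at integer multiples of the fundamental bin
        # (aliased harmonics are ignored in this simplified sketch).
        p_harm = sum(spectrum[k * fund_bin] ** 2
                     for k in range(2, n_harmonics + 1)
                     if k * fund_bin < len(spectrum))
        thd_db = 10 * np.log10(max(p_harm, 1e-30) / p_fund)
        return sfdr_db, thd_db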
Contributors: Rao, Nishita Ramakrishna (Author) / Barnaby, Hugh (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The non-quasi-static (NQS) description of device behavior is useful in fast-switching and high-frequency circuit applications. Hence, it is necessary to develop a fast and accurate compact NQS model for both large-signal and small-signal simulations. A new relaxation-time-approximation-based NQS MOSFET model, consistent between transient and small-signal simulations, has been developed for surface-potential-based MOSFET compact models. The new model is valid for all regions of operation and is compatible with, and at low frequencies recovers, the quasi-static (QS) description of the MOSFET. The model is implemented in two widely used circuit simulators and tested for speed and convergence. It is verified by comparison with technology computer-aided design (TCAD) simulations and experimental data, and by application of a recently developed benchmark test for NQS MOSFET models. In addition, a new and simple technique to characterize the NQS and gate resistance (Rgate) MOS model parameters from measured data is presented. In the process of experimental model verification, the effects of bulk resistance on MOSFET characteristics are investigated both theoretically and experimentally, to separate them from the NQS effects.
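For reference, the relaxation-time approximation mentioned above is, in its generic textbook form (the model's actual formulation, including the bias dependence of the relaxation time, is more involved):

    \frac{\mathrm{d}q_i(t)}{\mathrm{d}t}
      = -\,\frac{q_i(t) - q_i^{\mathrm{QS}}\bigl(V_G(t),\,V_D(t),\,V_S(t),\,V_B(t)\bigr)}{\tau}

Here q_i^{QS} is the quasi-static terminal charge evaluated at the instantaneous biases and \tau is the channel relaxation time; for \omega\tau \ll 1 the charge tracks its quasi-static value, which is how such a model recovers the QS description at low frequencies.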
Contributors: Zhu, Zeqin (Author) / Gildenblat, Gennady (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Barnaby, Hugh (Committee member) / McAndrew, Colin C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In this dissertation, two interrelated problems of service-based systems (SBS) are addressed: protecting users' data confidentiality from service providers, and managing the performance of multiple workflows in SBS. Current SBSs pose serious limitations to protecting users' data confidentiality. Since users' sensitive data is sent in unencrypted form to remote machines owned and operated by third-party service providers, there are risks of unauthorized use of the users' sensitive data by service providers. Although there are many techniques for protecting users' data from outside attackers, currently there is no effective way to protect users' sensitive data from service providers. In this dissertation, an approach is presented for protecting the confidentiality of users' data from service providers, ensuring that service providers cannot collect users' confidential data while the data is processed or stored in cloud computing systems. The approach has four major features: (1) separation of software service providers and infrastructure service providers, (2) hiding the information of the owners of data, (3) data obfuscation, and (4) software module decomposition and distributed execution. Since the approach to protecting users' data confidentiality includes software module decomposition and distributed execution, it is very important to effectively allocate the resources of servers in SBS to each of the software modules to manage the overall performance of workflows in SBS. An approach to resource allocation for SBS is presented that adaptively allocates the system resources of servers to their software modules at runtime in order to satisfy the performance requirements of multiple workflows in SBS. Experimental results show that the dynamic resource allocation approach can substantially increase the throughput of an SBS and that the optimal resource allocation can be found in polynomial time.
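As a purely illustrative sketch of adaptive (not necessarily optimal) resource allocation across decomposed software modules, the following Python function divides a server's capacity among its modules in proportion to their measured demands; the interface and the proportional rule are assumptions, not the dissertation's algorithm.

    # Illustrative sketch only (not the dissertation's algorithm): split a
    # server's capacity among the software modules it hosts in proportion to
    # each module's measured demand; re-run whenever workflow performance
    # drifts from its requirement.
    def allocate(capacity, demands):
        """demands: dict mapping module name -> measured demand (e.g. CPU share)."""
        total = sum(demands.values())
        if total == 0:
            share = capacity / len(demands)
            return {m: share for m in demands}
        return {m: capacity * d / total for m, d in demands.items()}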
Contributors: An, Ho Geun (Author) / Yau, Sik-Sang (Thesis advisor) / Huang, Dijiang (Committee member) / Ahn, Gail-Joon (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Lateral Double-diffused MOS (LDMOS) transistors are commonly used in power management, high voltage/current, and RF circuits. Their characteristics include high breakdown voltage, low on-resistance, and compatibility with standard CMOS and BiCMOS manufacturing processes. As with other semiconductor devices, an accurate and physical compact model is critical for LDMOS-based circuit design. The goal of this research work is to advance the state-of-the-art by developing a physics-based scalable compact model of LDMOS transistors. The new model, SP-HV, is constructed from a surface-potential-based bulk MOSFET model, PSP, and a nonlinear resistor model, R3. The use of independently verified and mature sub-models leads to increased accuracy and robustness of the overall LDMOS model. Improved geometry scaling and simplified statistical modeling are other useful and practical consequences of the approach. Extensions are made to both PSP and R3 for improved modeling of LDMOS devices, and one internal node is introduced to connect the two component models. The presence of the lightly-doped drift region in LDMOS transistors causes some characteristic device effects which are usually not observed in conventional MOSFETs. These include quasi-saturation, a sharp peak in transconductance at low VD, gate capacitance exceeding the oxide capacitance at positive VD, negative transcapacitances CBG and CGB at positive VD, a "double-hump" IB(VG) current, and the expansion effect. SP-HV models these effects accurately. It also includes a scalable self-heating model, which is important for modeling the geometry dependence of the expansion effect. SP-HV, including its scalability, is verified extensively by comparison to both TCAD simulations and experimental data. The close agreement confirms the validity of the model structure. Circuit simulation examples are presented to demonstrate its convergence.
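The following toy Python sketch illustrates the structure of such a subcircuit model: an intrinsic MOSFET in series with a drift-region resistance, joined at a single internal node whose voltage is found from KCL. A square-law device and a fixed resistor stand in for PSP and R3 here, and all parameter values are hypothetical.

    # Toy illustration of the subcircuit structure: an intrinsic MOSFET in
    # series with a drift resistance, joined at one internal node V_di that is
    # solved from KCL.  A square-law device and a fixed resistor stand in for
    # PSP and R3; all parameter values are hypothetical.
    def ids_intrinsic(vg, vdi, vth=0.7, k=2e-3):
        vov = max(vg - vth, 0.0)
        vds = max(vdi, 0.0)
        if vds < vov:                          # triode region
            return k * (vov * vds - 0.5 * vds * vds)
        return 0.5 * k * vov * vov             # saturation region

    def solve_internal_node(vg, vd, r_drift=50.0, iters=60):
        lo, hi = 0.0, vd
        for _ in range(iters):
            vdi = 0.5 * (lo + hi)
            # KCL: intrinsic drain current vs. current through the drift resistor
            residual = ids_intrinsic(vg, vdi) - (vd - vdi) / r_drift
            if residual > 0.0:
                hi = vdi
            else:
                lo = vdi
        return vdi, ids_intrinsic(vg, vdi)

Even this toy version shows quasi-saturation qualitatively: at high gate bias the drain current is limited by the drift resistance rather than by the intrinsic device.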
Contributors: Yao, Wei (Author) / Gildenblat, Gennady (Thesis advisor) / Barnaby, Hugh (Committee member) / Cao, Yu (Committee member) / McAndrew, Colin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In modern healthcare environments, there is a strong need to create an infrastructure that reduces time-consuming efforts and costly operations to obtain a patient's complete medical record and that uniformly integrates this heterogeneous collection of medical data to deliver it to healthcare professionals. As a result, healthcare providers are more willing to shift their electronic medical record (EMR) systems to clouds that can remove the geographical distance barriers among providers and patients. Even though cloud-based EMRs have received considerable attention, since they would help achieve lower operational cost and better interoperability with other healthcare providers, the adoption of security-aware cloud systems has become an extremely important prerequisite for bringing interoperability and efficient management to the healthcare industry. Since a shared electronic health record (EHR) essentially represents a virtualized aggregation of distributed clinical records from multiple healthcare providers, sharing of such integrated EHRs must comply with the various authorization policies of these data providers. In this work, we focus on the authorized and selective sharing of EHRs among several parties with different duties and objectives that satisfies access control and compliance issues in healthcare cloud computing environments. We present a secure medical data sharing framework to support selective sharing of composite EHRs aggregated from various healthcare providers and compliance with HIPAA regulations. Our approach also ensures that privacy concerns are accommodated in processing access requests to patients' healthcare information. To realize our proposed approach, we design and implement a cloud-based EHR sharing system. In addition, we describe case studies and evaluation results to demonstrate the effectiveness and efficiency of our approach.
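As a highly simplified illustration of selective sharing, the Python sketch below releases only those fragments of a composite EHR whose provider-attached policies permit the requester's role and purpose; the data layout and policy attributes are hypothetical and do not reflect the framework's actual policy language or its HIPAA compliance checks.

    # Highly simplified illustration of selective sharing: release only those
    # fragments of a composite EHR whose provider-attached policy permits the
    # requester's role and purpose.  The data layout and policy attributes are
    # hypothetical, not the framework's actual policy language.
    def share_ehr(fragments, requester_role, purpose):
        """fragments: list of dicts with 'provider', 'data', and a 'policy'
           holding sets of permitted 'roles' and 'purposes'."""
        released = []
        for frag in fragments:
            policy = frag['policy']
            if requester_role in policy['roles'] and purpose in policy['purposes']:
                released.append({'provider': frag['provider'], 'data': frag['data']})
        return released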
Contributors: Wu, Ruoyu (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Access control is one of the most fundamental security mechanisms used in the design and management of modern information systems. However, there still exists an open question of how formal access control models can be automatically analyzed and fully realized in secure system development. Furthermore, specifying and managing access control policies are often error-prone due to the lack of effective analysis mechanisms and tools. In this dissertation, I present an Assurance Management Framework (AMF) that is designed to cope with various assurance management requirements from both access control system development and policy-based computing. On one hand, the AMF framework facilitates comprehensive analysis and thorough realization of formal access control models in secure system development. I demonstrate how this method can be applied to build role-based access control systems by adopting the NIST/ANSI RBAC standard as an underlying security model. On the other hand, the AMF framework ensures the correctness of access control policies in policy-based computing through automated reasoning techniques and anomaly management mechanisms. A systematic method is presented to formulate XACML in Answer Set Programming (ASP), which allows users to leverage off-the-shelf ASP solvers for a variety of analysis services. In addition, I introduce a novel anomaly management mechanism, along with a grid-based visualization approach, which enables systematic and effective detection and resolution of policy anomalies. I further evaluate the AMF framework through modeling and analyzing multiparty access control in Online Social Networks (OSNs). A MultiParty Access Control (MPAC) model is formulated to capture the essence of multiparty authorization requirements in OSNs. In particular, I show how AMF can be applied to OSNs for identifying and resolving privacy conflicts, and for representing and reasoning about the MPAC model and policies. To demonstrate the feasibility of the proposed methodology, a suite of proof-of-concept prototype systems is implemented as well.
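The following Python sketch conveys the flavor of pairwise policy-anomaly detection (conflicts and redundancies over overlapping subject/resource/action sets); it is a simplification for illustration and is not the AMF implementation, which reasons over XACML through Answer Set Programming.

    # Simplified flavor of pairwise policy-anomaly detection (conflict and
    # redundancy over overlapping subject/resource/action sets).  This is an
    # illustration only, not the AMF implementation.
    def find_anomalies(rules):
        """rules: ordered list of dicts with 'id', 'subjects', 'resources',
           'actions' (sets) and 'effect' ('Permit' or 'Deny')."""
        anomalies = []
        for i, r1 in enumerate(rules):
            for r2 in rules[i + 1:]:
                overlap = (r1['subjects'] & r2['subjects'] and
                           r1['resources'] & r2['resources'] and
                           r1['actions'] & r2['actions'])
                if not overlap:
                    continue
                if r1['effect'] != r2['effect']:
                    anomalies.append(('conflict', r1['id'], r2['id']))
                elif (r2['subjects'] <= r1['subjects'] and
                      r2['resources'] <= r1['resources'] and
                      r2['actions'] <= r1['actions']):
                    anomalies.append(('redundancy', r2['id'], r1['id']))
        return anomalies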
Contributors: Hu, Hongxin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Dasgupta, Partha (Committee member) / Ye, Nong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
As the technology feature size shrinks, achieving high gain is becoming very difficult in deep sub-micron technologies. As the supply voltages drop, cascodes are very difficult to implement, and cascaded amplifier stages are needed to achieve sufficient gain with the required output swing. This sets a fundamental limit on the SNR and hence the maximum resolution that can be achieved by an ADC. With the RSD algorithm and the range overlap, the sub-ADC can tolerate large comparator offsets, leaving the linearity and accuracy requirements to the DAC and the residue gain stage. Typically, the multiplying DAC requires a high-gain, wide-bandwidth op-amp, and the design of this high-gain op-amp becomes challenging in deep sub-micron technologies. This work presents a 12-bit 25 MSPS 1.2 V pipelined ADC using the split-CLS technique in the IBM 130 nm 8HP process, using only CMOS devices, for the Large Hadron Collider (LHC) application. The CLS (Correlated Level Shifting) technique relaxes the gain requirement of the op-amp and improves the signal-to-noise ratio, with rail-to-rail swing, without an increase in power or in the input sampling capacitor. An op-amp sharing technique has been incorporated with the split-CLS technique, which decreases the number of op-amps and hence the power further. The entire pipelined converter has been implemented as six 2.5-bit RSD stages, which decreases the latency associated with the pipelined architecture, one of the main requirements for the LHC along with the power requirement. Two different OTAs have been designed for use in the split-CLS technique. Bootstrapped switches and pass-gate switches are used in the circuit, along with a low-power, dynamic, kickback-compensated comparator.
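For readers unfamiliar with the 2.5-bit RSD scheme, the behavioral Python sketch below models a pipeline of such stages with inter-stage gain 4 and digital recombination; the thresholds and stage count follow the common textbook arrangement and are not claimed to match the converter implemented in the thesis.

    # Behavioral sketch of a pipeline of 2.5-bit RSD stages with inter-stage
    # gain 4 and digital recombination.  Thresholds and stage count follow the
    # common textbook arrangement; they are not claimed to match the thesis.
    def stage_25bit(vin, vref=1.0):
        """Return (digit in -3..+3, amplified residue) for one 2.5-bit stage."""
        thresholds = [-5/8, -3/8, -1/8, 1/8, 3/8, 5/8]    # x vref
        d = sum(vin > t * vref for t in thresholds) - 3   # sub-ADC decision
        residue = 4 * vin - d * vref                      # MDAC output
        return d, residue

    def pipeline_adc(vin, n_stages=6, vref=1.0):
        code, v = 0, vin
        for _ in range(n_stages):
            d, v = stage_25bit(v, vref)
            code = 4 * code + d   # radix-4 recombination; the digit overlap
                                  # is what absorbs comparator offsets
        return code               # signed code; the final residue is discarded
                                  # here (a real converter resolves it)

The recombined signed code is approximately 4096*(vin/vref), i.e. 12-bit resolution over a +/-vref input range.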
Contributors: Swaminathan, Visu Vaithiyanathan (Author) / Barnaby, Hugh (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cyber threats are growing in number and sophistication, making it important to continually study and improve all dimensions of digital forensics. Teamwork in forensic analysis has been overlooked in existing systems even though forensics relies on collaboration. Forensic analysis lacks a system that is flexible and available on the different electronic devices that are used and incorporated into everyday life, for instance cellphones or tablets that are easy to bring on the go to sites where the first steps of forensic analysis are done. Due to the present-day shift to online accessibility, most electronic devices connect to the internet. Squeegee is a proof of concept that forensic analysis can be done on the web. The expansion of forensic analysis to the web opens many doors to collaboration and accessibility.
Contributors: Juntiff, Samantha Maria (Author) / Ahn, Gail-Joon (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05