Matching Items (18)
Description
With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customer requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video; the recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second is that wireless transmission media typically introduce errors into the bit stream due to noise, congestion, and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In a scalable bit stream, some parts are more important than others. In the unequal error protection scheme, parity bytes are assigned to video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264 SVC single-layer video streams for long video sequences of different genres are considered in this study, which serves as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams, so a framework to obtain an H.264 SVC compatible bit stream is modeled in this study. It is concluded that assigning parity bytes based on the distribution of data across the different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
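The unequal vs. equal error protection comparison can be illustrated with a short sketch. The snippet below is a minimal illustration of the idea using the `reedsolo` Python package; the frame types, importance weights, and parity budget are assumed values for illustration, not figures from the thesis.

```python
# Minimal sketch of unequal vs. equal error protection (UEP vs. EEP) with
# Reed-Solomon codes. Frame types, importance weights, and the parity budget
# below are illustrative assumptions. Requires: pip install reedsolo
from reedsolo import RSCodec

# Hypothetical video packets: (frame_type, payload). In an SVC stream,
# I-frames matter most, then P-frames, then B-frames.
packets = [("I", b"\x00" * 188), ("P", b"\x01" * 188), ("B", b"\x02" * 188)]
PARITY_BUDGET = 48  # total parity bytes to spread across the packets

importance = {"I": 3, "P": 2, "B": 1}  # assumed weights, for illustration
total_w = sum(importance[t] for t, _ in packets)

def protect(pkts, parity_per_packet):
    """Channel-code each packet with its assigned number of parity bytes."""
    return [RSCodec(nsym).encode(data)
            for (_, data), nsym in zip(pkts, parity_per_packet)]

# UEP: split the budget in proportion to frame importance.
uep_parity = [PARITY_BUDGET * importance[t] // total_w for t, _ in packets]
# EEP: split the budget evenly (all payloads here have equal length).
eep_parity = [PARITY_BUDGET // len(packets)] * len(packets)

uep_stream = protect(packets, uep_parity)
eep_stream = protect(packets, eep_parity)
print("UEP parity per packet:", uep_parity)  # [24, 16, 8]
print("EEP parity per packet:", eep_parity)  # [16, 16, 16]
# RSCodec(nsym) corrects up to nsym // 2 byte errors per codeword, so under
# UEP the I-frame packet tolerates three times the errors of the B-frame.
```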
Contributors: Sundararaman, Hari (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation is focused on building scalable Attribute Based Security Systems (ABSS), including efficient and privacy-preserving attribute based encryption schemes and applications to group communications and cloud computing. First, a Constant Ciphertext Policy Attribute Based Encryption (CCP-ABE) scheme is proposed. Existing Attribute Based Encryption (ABE) schemes usually incur ciphertexts that grow linearly with the number of attributes; the proposed CCP-ABE reduces the ciphertext to a small, constant size and is the first ABE scheme to achieve constant ciphertext size. The proposed CCP-ABE scheme is also fully collusion-resistant, so users cannot combine their attributes to elevate their decryption capability. Next, efficient ABE schemes are applied to construct optimal group communication and broadcast encryption schemes. An attribute based Optimal Group Key (OGK) management scheme is presented that attains communication-storage optimality without collusion vulnerability. Then, a novel broadcast encryption model, Attribute Based Broadcast Encryption (ABBE), is introduced; it exploits the many-to-many nature of attributes to reduce the storage complexity from linear to logarithmic and to enable expressive attribute based access policies. Privacy issues are also considered and addressed in ABSS. Firstly, a hidden-policy ABE scheme is proposed to protect receivers' privacy by hiding the access policy. Secondly, a new concept, Gradual Identity Exposure (GIE), is introduced to address the restrictions of hidden-policy ABE schemes. GIE reveals the receivers' information gradually by allowing ciphertext recipients to decrypt the message using their possessed attributes one-by-one; if the receiver lacks one attribute in this procedure, the remaining attributes stay hidden. Compared to hidden-policy solutions, GIE provides significant performance improvement by reducing both computation and communication overhead. Last but not least, ABSS are incorporated into mobile cloud computing scenarios. In the proposed secure mobile cloud data management framework, lightweight mobile devices can securely outsource expensive ABE operations and data storage to untrusted cloud service providers. The reported scheme includes two components: (1) a Cloud-Assisted Attribute-Based Encryption/Decryption (CA-ABE) scheme and (2) an Attribute-Based Data Storage (ABDS) scheme that achieves information-theoretic optimality.
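The GIE behavior of peeling attributes one-by-one can be illustrated with an onion-encryption toy. The sketch below is emphatically not the pairing-based ABE construction proposed in the dissertation; it assumes a hypothetical AND-gate policy and a toy (insecure) hash-based key derivation purely to show "decrypt attribute-by-attribute; a missing attribute keeps the rest hidden."

```python
# Toy illustration of Gradual Identity Exposure (GIE) via layered symmetric
# encryption. NOT the dissertation's ABE scheme and NOT secure as-is.
# Requires: pip install cryptography
import base64, hashlib
from cryptography.fernet import Fernet

def attr_key(attribute):
    # Derive a symmetric key from an attribute string (toy KDF, not secure).
    digest = hashlib.sha256(attribute.encode()).digest()
    return Fernet(base64.urlsafe_b64encode(digest))

def gie_encrypt(message, policy_attrs):
    # Wrap the message in one layer per attribute (innermost = last attr).
    ct = message
    for attr in reversed(policy_attrs):
        ct = attr_key(attr).encrypt(ct)
    return ct

def gie_decrypt(ct, my_attrs):
    # Peel layers in order; failing on one attribute leaves the rest opaque.
    for attr in my_attrs:
        ct = attr_key(attr).decrypt(ct)
    return ct

policy = ["doctor", "cardiology", "on-call"]  # hypothetical AND-gate policy
ct = gie_encrypt(b"patient record", policy)
print(gie_decrypt(ct, policy))  # b'patient record'
# A user holding only ["doctor"] peels one layer and then stops: the
# remaining attributes in the policy are never revealed to them.
```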
Contributors: Zhou, Zhibin (Author) / Huang, Dijiang (Thesis advisor) / Yau, Sik-Sang (Committee member) / Ahn, Gail-Joon (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In recent years, deep learning systems have outperformed traditional machine learning systems in most domains. There has been much recent research in hand gesture recognition using wearable sensors, owing to the numerous advantages these systems have over vision-based ones. However, the lack of extensive datasets and the nature of Inertial Measurement Unit (IMU) data make it difficult to apply deep learning techniques to them. Although many machine learning models achieve good accuracy, most assume that training data is available for every user, while approaches that do not require per-user data achieve lower accuracies. MirrorGen is a technique that uses wearable sensor data to generate synthetic videos of hand movements; it mitigates the traditional challenges of vision-based recognition such as occlusion, lighting restrictions, lack of viewpoint variation, and environmental noise. In addition, MirrorGen allows for user-independent recognition with minimal human effort during data collection. It also leverages advances in vision-based recognition through techniques such as optical flow extraction and 3D convolution. Projecting the orientation (IMU) information onto a video helps recover position information of the hands. To validate these claims, we perform entropy analysis on various configurations: raw data, a stick model, a hand model, and real video. The human hand model is found to have an optimal entropy that helps in achieving user-independent recognition, and it serves as a pervasive alternative to video-based recognition. An average user-independent recognition accuracy of 99.03% was achieved on a sign language dataset with 59 different users and 20 different signs with 20 repetitions each, for a total of 23k training instances. Moreover, synthetic videos can be used to augment real videos to improve recognition accuracy.
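The core MirrorGen step, projecting IMU orientation onto synthetic video frames, can be sketched as follows. The single "stick" bone, frame size, and orthographic projection are simplifying assumptions; the actual pipeline renders a full hand model.

```python
# Minimal sketch: render an IMU quaternion stream as stick-model frames.
import numpy as np

def rotate(v, q):
    """Rotate 3-vector v by unit quaternion q = [w, x, y, z]."""
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def stick_frame(q, size=64):
    """Render one grayscale frame of a unit 'stick' bone oriented by q."""
    bone = rotate(np.array([0.0, 1.0, 0.0]), q)   # bone initially along +y
    frame = np.zeros((size, size), dtype=np.uint8)
    for t in np.linspace(0.0, 1.0, 200):          # rasterize the segment
        x, y, _ = t * bone                        # orthographic projection
        col = int((x + 1.0) / 2.0 * (size - 1))
        row = int((1.0 - (y + 1.0) / 2.0) * (size - 1))
        frame[row, col] = 255
    return frame

# One frame per IMU sample; stacking frames yields the synthetic video that
# downstream optical-flow / 3D-convolution models consume.
q = np.array([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)])  # 45 deg about z
video = np.stack([stick_frame(q)])   # extend with one frame per IMU reading
print(video.shape)                   # (1, 64, 64)
```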
Contributors: Ramesh, Arun Srivatsa (Author) / Gupta, Sandeep K S (Thesis advisor) / Banerjee, Ayan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Mobile devices have penetrated every aspect of the modern world. On the one hand, they are becoming ubiquitous in daily life; on the other, they are storing more and more data, including sensitive data. Therefore, the security and privacy of mobile devices are indispensable. This dissertation consists of five parts related to the security and privacy of mobile devices: two authentication schemes, two attacks, and one countermeasure.

Specifically, in Chapter 1, I give an overview of the challenges and existing solutions in these areas. In Chapter 2, a novel authentication scheme is presented, based on a user's tapping or sliding on the touchscreen of a mobile device. In Chapter 3, I focus on mobile app fingerprinting and propose a method based on analyzing the power profiles of targeted mobile devices. In Chapter 4, I explore a novel liveness detection method for face authentication on mobile devices. In Chapter 5, I investigate a novel keystroke inference attack on mobile devices based on user eye movements. In Chapter 6, a novel authentication scheme is proposed, based on detecting a user's finger gesture through acoustic sensing. In Chapter 7, I discuss future work.
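As a flavor of the Chapter 2 idea, the sketch below trains a one-class model on a legitimate user's tap features and tests it against an intruder. The feature set and the synthetic data are hypothetical stand-ins, not the dissertation's actual features or measurements.

```python
# Minimal sketch of tap-based behavioral authentication with a one-class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical per-tap features: [duration_ms, pressure, contact_area, x, y]
owner    = rng.normal([120, 0.55, 0.30, 200, 600],
                      [ 15, 0.05, 0.03,  20,  30], size=(200, 5))
intruder = rng.normal([ 90, 0.70, 0.40, 250, 500],
                      [ 15, 0.05, 0.03,  20,  30], size=(50, 5))

# Train only on the legitimate owner's taps; flag everything unlike them.
model = OneClassSVM(nu=0.05, gamma="scale").fit(owner)
verdicts = model.predict(intruder)            # +1 = accepted, -1 = rejected
print("intruder rejection rate:", np.mean(verdicts == -1))
```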
Contributors: Chen, Yimin (Author) / Zhang, Yanchao (Thesis advisor) / Zhang, Junshan (Committee member) / Reisslein, Martin (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
We live in a networked world with a multitude of networks, such as communication networks, the electric power grid, transportation networks, and water distribution networks, all around us. In addition to such physical (infrastructure) networks, recent years have seen a tremendous proliferation of social networks, such as Facebook, Twitter, LinkedIn, Instagram, Google+, and others. These powerful social networks are not only used for harnessing revenue from the infrastructure networks, but are also increasingly being used as "non-conventional sensors" for monitoring the infrastructure networks. Accordingly, analyses of social and infrastructure networks now go hand-in-hand. This dissertation studies resource allocation problems encountered in this set of diverse, heterogeneous, and interdependent networks. Three of the problems studied arise in the physical network domain, while the other three arise in the social network domain.

The first problem from the infrastructure network domain relates to a distributed file storage scheme whose goal is to enhance the robustness of data storage by making it tolerant of large-scale geographically correlated failures. The second problem concerns the placement of relay nodes in a deployment area containing multiple sensor nodes, with the goal of augmenting the connectivity of the resulting network while staying within a budget on the maximum number of relay nodes that can be deployed. The third problem relates to the complex interdependencies that exist between infrastructure networks, such as the power grid and the communication network. Here the progressive recovery problem in an interdependent network is studied, whose goal is to maximize system utility over time as failed entities are recovered sequentially.
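The relay placement problem lends itself to a small greedy sketch. The coordinates, communication radius, and myopic greedy heuristic below are illustrative assumptions; the dissertation treats the problem formally rather than with this toy.

```python
# Toy sketch of budgeted relay placement: greedily place relays from
# candidate sites to reduce the number of disconnected sensor components.
# Requires: pip install networkx
import itertools, math
import networkx as nx

sensors    = [(0, 0), (5, 0), (10, 0)]     # hypothetical sensor positions
candidates = [(2.5, 0), (7.5, 0)]          # possible relay sites
RADIUS, BUDGET = 3.0, 2                    # comm range and relay budget

def n_components(relays):
    """Components of the graph whose edges join nodes within comm range."""
    g = nx.Graph()
    nodes = sensors + relays
    g.add_nodes_from(nodes)
    g.add_edges_from((a, b) for a, b in itertools.combinations(nodes, 2)
                     if math.dist(a, b) <= RADIUS)
    return nx.number_connected_components(g)

placed = []
while len(placed) < BUDGET:
    remaining = [c for c in candidates if c not in placed]
    # Greedy step: pick the relay that merges the most components.
    best = min(remaining, key=lambda c: n_components(placed + [c]))
    if n_components(placed + [best]) >= n_components(placed):
        break                              # no candidate improves connectivity
    placed.append(best)

print(placed, "->", n_components(placed), "component(s)")  # -> 1 component
```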

The three problems studied from the social network domain relate to influence propagation in adversarial environments and to political sentiment assessment across the states of a country, with the goal of creating a "political heat map" of the country. In the first influence propagation problem, the goal of the second player is to restrict the influence of the first player, while in the second problem the goal of the second player is to capture a larger market share with the least initial investment.
Contributors: Mazumder, Anisha (Author) / Sen, Arunabha (Thesis advisor) / Richa, Andrea (Committee member) / Xue, Guoliang (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A medical control system is a real-time controller that uses a predictive model of human physiology to estimate and control drug concentration in the human body. The Artificial Pancreas (AP), which regulates blood glucose in T1D patients, is an example of such a control system. The predictive model in such a system, e.g., the Bergman Minimal Model (BMM), is based on a physiological modeling technique that separates the body into a number of anatomical compartments, each of whose effect on the body system is determined by its physiological parameters. These models are less accurate because of unaccounted physiological factors affecting the target values, and estimating a large number of physiological parameters through an optimization algorithm is computationally expensive and can become stuck in local minima. This work evaluates a machine learning (ML) framework in which an ML model is guided by physiological models. A support vector regression (SVR) model guided by a modified BMM is implemented for the estimation of blood glucose levels. Physical activity and endogenous glucose production are key factors contributing to increased hypoglycemia events; thus, this work modifies the Bergman Minimal Model (Bergman et al. 1981) for more accurate estimation of blood glucose levels. Results show that the SVR outperformed the BMM by 0.164 in average RMSE across 7 different patients in a free-living scenario. This computationally inexpensive, data-driven model can potentially learn parameters more accurately over time. In conclusion, the proposed prediction model is promising for modeling physiological elements in living systems.
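The model-guided ML idea, feeding a physiological model's prediction to an SVR so that data can correct the model's systematic error, can be sketched briefly. The toy glucose dynamics below are a placeholder, not the modified BMM from the thesis, and the evaluation is in-sample for brevity.

```python
# Minimal sketch of physiology-model-guided SVR for glucose estimation.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(0, 300, 5.0)                         # minutes

def toy_model(t, meal_at=60.0):
    """Placeholder glucose dynamics: baseline + decaying meal excursion."""
    rise = np.where(t >= meal_at,
                    (t - meal_at) * np.exp(-(t - meal_at) / 40.0), 0.0)
    return 90.0 + 0.8 * rise                        # mg/dL

g_model = toy_model(t)                              # physiological prior
g_true = g_model + 10 * np.sin(t / 50.0) + rng.normal(0, 2, t.size)  # "CGM"

# Guided SVR: inputs are the model's estimate plus time; the learner picks
# up the systematic part of the model's error from data.
X = np.column_stack([g_model, t])
svr = SVR(C=10.0, epsilon=0.5).fit(X, g_true)

rmse_model = np.sqrt(np.mean((g_model - g_true) ** 2))
rmse_svr = np.sqrt(np.mean((svr.predict(X) - g_true) ** 2))
print(f"model RMSE {rmse_model:.2f}  vs  guided-SVR RMSE {rmse_svr:.2f}")
```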
Contributors: Agrawal, Anurag (Author) / Gupta, Sandeep K. S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Kudva, Yogish (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Today, many wireless networks are single-channel systems. However, as interest in wireless services increases, contention among nodes to occupy the medium intensifies and interference worsens. One direction with the potential to increase system throughput is multi-channel systems, which have been shown to reduce collisions and increase concurrency, thus producing better bandwidth usage. However, the well-known hidden- and exposed-terminal problems inherited from single-channel systems remain, and a new channel selection problem is introduced. In this dissertation, multi-channel medium access control (MAC) protocols are proposed for mobile ad hoc networks (MANETs) whose nodes are equipped with a single half-duplex transceiver, using more sophisticated physical layer technologies: code division multiple access (CDMA), orthogonal frequency division multiple access (OFDMA), and diversity. CDMA increases channel reuse, while OFDMA enables communication by multiple users in parallel. Each technology is challenging to use in MANETs, where there is no fixed infrastructure or centralized control: CDMA suffers from the near-far problem, while OFDMA requires channel synchronization to decode the signal; as a result, neither is yet widely used. Cooperative (diversity) mechanisms provide vital information to facilitate communication set-up between source-destination node pairs and help overcome the limitations of these physical layer technologies in MANETs. In this dissertation, the Cooperative CDMA-based Multi-channel MAC (CCM-MAC) protocol uses CDMA to enable concurrent transmissions on each channel. The Power-controlled CDMA-based Multi-channel MAC (PCC-MAC) protocol uses transmission power control at each node and mitigates collisions of control packets on the control channel by using different spreading factors, and hence different processing gains, for the control signals. The Cooperative Dual-access Multi-channel MAC (CDM-MAC) protocol combines OFDMA and CDMA and minimizes channel interference via a resolvable balanced incomplete block design (BIBD). In each protocol, cooperating nodes supply information that reduces the incidence of the multi-channel hidden- and exposed-terminal problems and helps address the near-far problem of CDMA. Simulation results show that each of the proposed protocols achieves significantly better system performance than IEEE 802.11, other multi-channel protocols, and another CDMA-based protocol.
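The basic benefit of multiple channels, fewer collisions and more concurrency, can be seen in a tiny Monte Carlo sketch. The node and channel counts are arbitrary illustrations; the protocols above add CDMA/OFDMA and cooperation on top of this basic effect.

```python
# Toy Monte-Carlo sketch: N transmitters each pick a channel per slot; two
# pickers of the same channel collide, lone pickers succeed.
import random
from collections import Counter

def successful_txs(n_nodes, n_channels, trials=10_000):
    """Average number of collision-free transmissions per slot."""
    total = 0
    for _ in range(trials):
        picks = Counter(random.randrange(n_channels) for _ in range(n_nodes))
        total += sum(1 for count in picks.values() if count == 1)
    return total / trials

random.seed(7)
for channels in (1, 3, 8):
    print(f"{channels} channel(s): {successful_txs(10, channels):.2f} "
          "successful transmissions/slot on average")
```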
Contributors: Moon, Yuhan (Author) / Syrotiuk, Violet R. (Thesis advisor) / Huang, Dijiang (Committee member) / Reisslein, Martin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
With the advent of new advanced analysis tools and access to related published data, it is becoming more difficult for data owners to suppress private information in published data while still providing useful information. This dual problem of providing useful, accurate information while protecting it has been challenging, especially in healthcare. Data owners lack an automated resource that provides layers of protection on a published dataset along with validated statistical values for usability. Differential privacy (DP) has gained much attention in the past few years as a solution to this dual problem. DP is a statistical anonymity model that can protect data from adversarial observation while still supporting its intended use. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying the source data's intent. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, letting only good inexact clonesets through. The IDC mechanism depends on a set of privacy protection metrics, the differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data security product, a software tool-chain prototype, the differential privacy protection architecture (DPPA), was developed to utilize IDC. DPPA is a hub that facilitates a market for DP data security mechanisms: it incorporates standalone IDC mechanisms and provides automation, IDC-protected published datasets, and statistically verified IDC dataset diagnostic reports. DPPA currently runs functional and operational benchmark processes that quantify the DP protection of a given published dataset. The DPPA tool was recently used to test a couple of health datasets, and the test results further validate the feasibility of the IDC mechanism.
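IDC itself is novel to this dissertation, so no public implementation can be shown; for context, the sketch below implements the standard Laplace mechanism, the textbook primitive behind the differential privacy model mentioned above. The query, sensitivity, and epsilon are illustrative choices.

```python
# Standard Laplace mechanism: release a query answer plus Laplace noise
# scaled to sensitivity/epsilon, giving epsilon-differential privacy.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Lap(sensitivity / epsilon) noise."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
ages = np.array([34, 45, 29, 61, 50])        # toy health dataset
# The counting query "how many patients are over 40?" has sensitivity 1:
# adding or removing one record changes the count by at most 1.
true_count = int(np.sum(ages > 40))
private_count = laplace_mechanism(true_count, sensitivity=1.0,
                                  epsilon=0.5, rng=rng)
print(f"true: {true_count}, privately released: {private_count:.2f}")
```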
Contributors: Thomas, Zelpha (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Banerjee, Ayan (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Security requirements are at the heart of developing secure, invulnerable software. Without embedding security principles in the software development life cycle, the likelihood of producing insecure software increases, putting the consumers of that software at great risk. For large-scale software development, this problem is compounded, as there may be hundreds or thousands of security requirements to be met, and it only worsens when the project is developed by a distributed team. In this thesis, an approach is provided for software security requirement traceability in large-scale, complex software development projects built by distributed development teams. The approach utilizes blockchain technology to improve the automation of security requirement satisfaction and to create a more transparent and trustworthy development environment for distributed teams. It also introduces immutability, auditability, and non-repudiation into the security requirement traceability process. The approach is evaluated against existing software security requirement solutions.
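The immutability and auditability properties the approach relies on can be illustrated with a minimal hash-chained ledger of requirement-satisfaction records. The record fields are hypothetical; the thesis targets an actual blockchain, not this toy.

```python
# Minimal hash-chained ledger: each requirement-satisfaction record embeds
# the previous record's hash, so tampering anywhere breaks verification.
import hashlib, json, time

def add_record(chain, requirement_id, evidence):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"req": requirement_id, "evidence": evidence,
            "ts": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

ledger = []
add_record(ledger, "SR-101", "code review passed, commit abc123")
add_record(ledger, "SR-102", "static analysis clean")
print(verify(ledger))                      # True
ledger[0]["evidence"] = "forged"
print(verify(ledger))                      # False -- tampering detected
```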
Contributors: Kulkarni, Adi Deepak (Author) / Yau, Stephen S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Wang, Ruoyu (Committee member) / Baek, Jaejong (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Ontologies play an important role in storing and exchanging digitized data. As the need for semantic web information grows, organizations from around the globe have defined ontologies in different domains to better represent their data. But different organizations define ontologies of the same entity in their own ways, so finding ontologies of the same entity across fields and domains has become very important for unifying and improving the interoperability of data between these domains. Many different techniques have been used over the years, including human-assisted, automated, and hybrid approaches. In recent years, with the availability of many machine learning techniques, researchers have been trying to apply them to the ontology alignment problem across different domains. In this study I examine the use of machine learning techniques such as Support Vector Machines, Stochastic Gradient Descent, and Random Forests for solving the ontology alignment problem on some of the most commonly used datasets from the well-known Ontology Alignment Evaluation Initiative (OAEI). I propose a method, OntoAlign, which demonstrates the importance of using different types of similarity measures for feature extraction from ontology data in order to achieve better ontology alignment results.
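The OntoAlign emphasis on combining several similarity measures as features can be sketched as follows. The entity labels, the three features, and the tiny training set are invented for illustration and are not OAEI data.

```python
# Minimal sketch: string-similarity features + a classifier for deciding
# whether two ontology entity labels refer to the same concept.
# Requires: pip install scikit-learn
from difflib import SequenceMatcher
from sklearn.ensemble import RandomForestClassifier

def features(a, b):
    """Three toy similarity features between two entity labels."""
    a, b = a.lower(), b.lower()
    edit_sim = SequenceMatcher(None, a, b).ratio()               # char-level
    ta = set(a.replace("_", " ").split())
    tb = set(b.replace("_", " ").split())
    jaccard = len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0  # token-level
    prefix = float(a[:4] == b[:4])                               # prefix cue
    return [edit_sim, jaccard, prefix]

# Tiny invented training set of (label_A, label_B, is_match) pairs.
pairs = [("Author", "author", 1), ("Review", "Reviewer", 1),
         ("Chair", "Chairperson", 1), ("conference_paper", "Paper", 1),
         ("Author", "Conference", 0), ("Paper", "Person", 0),
         ("Topic", "Location", 0), ("Review", "Venue", 0)]
X = [features(a, b) for a, b, _ in pairs]
y = [m for _, _, m in pairs]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("conference_chair", "Chair")]))  # expect [1]
```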
Contributors: Nasim, Tariq M (Author) / Bansal, Srividya (Thesis advisor) / Mehlhase, Alexandra (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2022