Matching Items (13)
Description
Wireless technologies for health monitoring systems have attracted considerable interest in recent years owing to their potential to realize the vision of pervasive healthcare, that is, healthcare for anyone, anywhere, at any time. The development of wearable wireless medical devices that can sense, compute, and send physiological information to a mobile gateway, forming a Body Sensor Network (BSN), is considered a step towards achieving pervasive health monitoring systems (PHMS). A PHMS consisting of wearable body sensors enables unsupervised long-term monitoring, reducing frequent hospital visits and nursing costs. It is therefore of utmost importance that a PHMS operate reliably and safely and have a long lifetime. Model-based automatic code generation produces sensor and smartphone code from a high-level specification of a PHMS: the code generator takes a meta-model of the PHMS specification as input, draws on a codebase of code templates and algorithms, and generates platform-specific code. Health-Dev, a framework for model-based development of PHMS, uses such code generation to implement a PHMS on sensors and smartphones. As part of this thesis, model-based automatic code generation was evaluated and experimentally validated. The generated code was found to be safe, containing no race conditions or array- or pointer-related errors, and more optimized than hand-written BSN benchmark code in terms of less unreachable code.
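To make the code-generation idea concrete, the following minimal Python sketch shows how a template-based generator might turn a small meta-model into platform-specific sensing code. The meta-model fields, template syntax, and function names are illustrative assumptions, not the actual Health-Dev framework.

# Minimal sketch of template-based code generation from a PHMS meta-model.
# Names and template syntax are illustrative, not the actual Health-Dev API.
from string import Template

# Hypothetical meta-model: one sensing task on a wearable sensor node.
meta_model = {
    "sensor": "ecg",
    "sampling_rate_hz": 100,
    "algorithm": "heart_rate_peak_detection",
    "radio": "ble",
}

# Code template for a sensor-node sensing loop (platform-specific templates
# would be stored in the codebase alongside algorithm implementations).
SENSOR_TASK_TEMPLATE = Template(
    "void ${sensor}_task(void) {\n"
    "    sample_t s = read_${sensor}();          /* sample at ${rate} Hz */\n"
    "    result_t r = ${algorithm}(s);           /* on-node computation  */\n"
    "    ${radio}_send(&r, sizeof(r));           /* forward to gateway   */\n"
    "}\n"
)

def generate_sensor_code(model: dict) -> str:
    """Fill the platform template with values from the meta-model."""
    return SENSOR_TASK_TEMPLATE.substitute(
        sensor=model["sensor"],
        rate=model["sampling_rate_hz"],
        algorithm=model["algorithm"],
        radio=model["radio"],
    )

if __name__ == "__main__":
    print(generate_sensor_code(meta_model))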
Contributors: Verma, Sunit (Author) / Gupta, Sandeep (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The purpose of this paper is to introduce a new method of dividing wireless communication (such as the 802.11a/b/g and cellular UMTS MAC protocols) across multiple unreliable communication links (such as Ethernet), together with the hardware, software, and system architecture required to provide the basis for a wireless system (using 802.11a/b/g and cellular protocols as a model) that can scale to support thousands of simultaneous users (say, in a large office building or chain superstore) or operate in a small but very dense RF communication region. Elements of communication between a base station and a mobile station will be analyzed statistically to demonstrate higher throughput, fewer collisions, and lower bit error rates (BER) within the bandwidth defined by the 802.11n wireless specification (the use of MIMO channels will be evaluated). A new network nodal paradigm will be presented. Alternative link-layer communication techniques will be recommended and analyzed for their effect on mobile devices. The analysis will describe how the algorithms used by state machines implemented on mobile stations and Wi-Fi client devices will be influenced by the new base station transmission behavior. New hardware design techniques that can be used to optimize this architecture will be described, along with hardware design principles regarding the minimal hardware functional blocks required to support such a system design. Hardware design and verification simulation techniques will be used to show that the hardware design accommodates an acceptable level of performance and meets the strict timing requirements of this new system architecture.
Contributors: James, Frank (Author) / Reisslein, Martin (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Voice and other circuit-switched services in an LTE deployment can be based on a Circuit Switched Fallback mechanism or on the upcoming Voice over LTE option. Voice over LTE, with its SIP-based signaling, can route voice calls and other circuit-switched services over LTE's packet-switched core. The main issue faced, though, is validating this approach before deployment on a commercial network. The test strategy devised in this work covers corner scenarios and error-sensitive services, so that the signaling involved can be verified to ensure a robust deployment of the Voice over LTE network. The signaling test strategy is based on observations made during simulated Voice over LTE calls in a controlled lab environment. Emergency services are carefully studied to devise a robust test strategy that avoids any service failure. Other areas, where a service is routed via a different protocol stack layer than in a legacy circuit-switched core, are identified and brought into the scope of the test strategy.
Contributors: Thotton Veettil, Vinayak (Author) / Reisslein, Martin (Thesis advisor) / Ying, Lei (Committee member) / McGarry, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Dynamic channel selection in cognitive radio consists of two main phases. The first phase is spectrum sensing, during which the channels occupied by primary users are detected. The second phase is channel selection, during which the state of the channel to be used by the secondary user is estimated. The existing cognitive radio channel selection literature assumes perfect spectrum sensing. However, this assumption becomes problematic as the noise in the channels increases, resulting in high probabilities of false alarm and missed detection. This thesis proposes a solution to this problem by incorporating the estimated state of channel occupancy into a selection cost function. The problem of optimal single-channel selection in cognitive radio is considered. A unique approach to the channel selection problem is proposed which consists of first using a particle filter to estimate the state of channel occupancy and then using the estimated state with a cost function to select a single channel for transmission. The selection cost function provides a means of assessing the various combinations of unoccupied channels in terms of desirability. By minimizing the expected selection cost over all possible channel occupancy combinations, the optimal hypothesis, which identifies the optimal single channel, is obtained. Several variations of the proposed cost-based channel selection approach are discussed and simulated in a variety of environments, ranging from small to large numbers of primary-user channels, low to high signal-to-noise ratios, and light to heavy primary-user traffic.
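As a rough illustration of the cost-based selection step, the sketch below picks the single channel with the lowest expected cost given per-channel occupancy probabilities, which stand in for the posterior a particle filter would produce. The cost values and function names are illustrative assumptions, not the thesis's actual cost function.

# Simplified sketch of cost-based single-channel selection: the occupancy
# probabilities stand in for a particle filter's estimated channel state;
# the cost values are illustrative, not the thesis's actual cost function.
import numpy as np

def select_channel(p_occupied, cost_collision=10.0, cost_use_free=1.0):
    """Pick the single channel with the lowest expected selection cost.

    p_occupied[i] is the estimated probability that primary users occupy
    channel i (e.g., the posterior produced by a particle filter).
    """
    p = np.asarray(p_occupied, dtype=float)
    # Expected cost of transmitting on each channel: colliding with a primary
    # user is penalized much more heavily than using a genuinely free channel.
    expected_cost = p * cost_collision + (1.0 - p) * cost_use_free
    return int(np.argmin(expected_cost)), expected_cost

if __name__ == "__main__":
    # Five primary-user channels with noisy occupancy estimates.
    posterior = [0.9, 0.35, 0.05, 0.6, 0.2]
    best, costs = select_channel(posterior)
    print("selected channel:", best)      # channel 2 (lowest occupancy)
    print("expected costs:", np.round(costs, 2))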
Contributors: Zapp, Joseph (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
LTE (Long Term Evolution) is an emerging technology that will change how service providers backhaul user traffic to their infrastructure over IP networks. To support growing mobile bandwidth demand, an EPON backhaul infrastructure can make real-time, high-bandwidth applications possible. LTE backhaul planning and deployment scenarios are important factors in network success. In this thesis, we study the effect of LTE backhaul on the optical network, in an effort to interoperate fiber and wireless networks. The project is based on traffic forecasts for LTE networks; traffic models are gathered from the literature to reflect applications accurately. Careful capacity planning of the mobile backhaul will bring LTE users a better experience, in terms of the bit rates and latency they can expect, while allowing network operators to spend their funds effectively.
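As a back-of-the-envelope illustration of the capacity-planning question, the sketch below estimates how many LTE cell sites a single EPON upstream might backhaul. All rates and the efficiency factor are placeholder assumptions, not figures from the thesis.

# Back-of-the-envelope sketch of LTE-over-EPON backhaul dimensioning.
# All numbers are illustrative assumptions, not figures from the thesis.

EPON_UPSTREAM_GBPS = 1.0          # 1G-EPON upstream line rate
EPON_EFFICIENCY = 0.85            # rough allowance for DBA/guard-time overhead
ENODEB_BUSY_HOUR_MBPS = 150.0     # forecast busy-hour traffic per LTE cell site

def max_cell_sites(upstream_gbps=EPON_UPSTREAM_GBPS,
                   efficiency=EPON_EFFICIENCY,
                   per_site_mbps=ENODEB_BUSY_HOUR_MBPS):
    """How many eNodeB sites a single EPON upstream can backhaul."""
    usable_mbps = upstream_gbps * 1000.0 * efficiency
    return int(usable_mbps // per_site_mbps)

if __name__ == "__main__":
    print("cell sites per EPON:", max_cell_sites())   # 850 Mb/s // 150 Mb/s -> 5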
Contributors: Alharbi, Ziyad (Author) / Reisslein, Martin (Thesis advisor) / Zhang, Yanchao (Committee member) / McGarry, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A survey indicates an 81% rise in mobile data usage in 2013. A fair share of this total data demand can be attributed to video streaming. The encoding structure of videos introduces nuances that can be exploited to ensure fair and optimal streaming of the video data. This dissertation proposes a novel user and packet scheduling algorithm that guarantees a fair allocation of resources. The MS-SSIM index is used to calculate the differential mean opinion score (DMOS) to evaluate the quality of the received video. Simulations indicate that the proposed algorithm outperforms existing algorithms in the literature.
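For illustration, objective indices such as MS-SSIM are commonly related to a DMOS scale through a fitted logistic curve; the sketch below shows one such mapping with placeholder coefficients, not the values used in this dissertation.

# Illustrative logistic mapping from MS-SSIM to a DMOS-like quality score.
# The functional form is a common choice in video-quality literature; the
# coefficients here are placeholders, not values fitted in the dissertation.
import math

def msssim_to_dmos(msssim, a=100.0, b=-25.0, c=0.92):
    """Map an MS-SSIM value in [0, 1] to a DMOS-like score (lower = better)."""
    return a / (1.0 + math.exp(-b * (msssim - c)))

if __name__ == "__main__":
    for q in (0.99, 0.95, 0.90, 0.80):
        print(f"MS-SSIM {q:.2f} -> DMOS ~ {msssim_to_dmos(q):.1f}")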
Contributors: Choudhuri, Sabarna (Author) / Ying, Lei (Thesis advisor) / Bliss, Dan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A Fiber-Wireless (FiWi) network is a future network configuration that uses optical fiber as the backbone transmission medium and provides a wireless network to the end user. Our study focuses on Dynamic Bandwidth Allocation (DBA) algorithms for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components emerging in research, this thesis conducts a comprehensive study of DBA, adding Double Phase Polling coupled with a novel Limited with Share-credits Excess distribution method. By running a series of simulations of DBAs built from different components, we found that grant sizing has the strongest impact on average packet delay, while grant scheduling also has a significant impact on it; grant scheduling has the strongest impact on the stability limit, i.e., the maximum achievable channel utilization, whereas grant sizing has only a modest impact on the stability limit. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share-credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
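A minimal sketch of the "limited with excess distribution" idea is given below, assuming a simple equal split of the pooled excess: underloaded ONUs donate their unused share of the maximum grant, and overloaded ONUs receive the pooled credits. The variable names and distribution rule are our own simplifications, not the exact method studied in the thesis.

# Minimal sketch of "limited with excess distribution" grant sizing for EPON
# DBA: underloaded ONUs donate their unused share of the maximum grant, and
# the pooled excess is split among overloaded ONUs. Variable names are ours.

def limited_with_excess(requests, g_max):
    """Return a grant per ONU given reported queue sizes (bytes)."""
    grants = {}
    excess_pool = 0
    overloaded = []
    for onu, req in requests.items():
        if req <= g_max:                  # underloaded: grant the full request
            grants[onu] = req
            excess_pool += g_max - req    # donate the unused "share credit"
        else:                             # overloaded: start from the limit
            grants[onu] = g_max
            overloaded.append(onu)
    # Distribute the pooled excess equally among overloaded ONUs, capped by
    # what each of them actually requested.
    while excess_pool > 0 and overloaded:
        share = max(1, excess_pool // len(overloaded))
        for onu in list(overloaded):
            extra = min(share, requests[onu] - grants[onu], excess_pool)
            grants[onu] += extra
            excess_pool -= extra
            if grants[onu] >= requests[onu]:
                overloaded.remove(onu)
            if excess_pool == 0:
                break
    return grants

if __name__ == "__main__":
    print(limited_with_excess({"onu1": 4000, "onu2": 15000, "onu3": 9000},
                              g_max=8000))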
Contributors: Zhao, Du (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customers' requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second is that wireless transmission media typically introduce errors into the bit stream due to noise, congestion, and fading on the channel. Protection against these channel impairments can be realized through forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In a scalable bit stream, some parts are more important than others. In the unequal error protection scheme, parity bytes are assigned to the video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264/SVC single-layer video streams for long video sequences of different genres are considered in this study, which serve as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams; a framework to obtain an H.264/SVC-compatible bit stream is therefore developed in this study. It is concluded that assigning parity bytes based on the distribution of data across the different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
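To illustrate the difference between unequal and equal error protection, the sketch below allocates a parity-byte budget across packets in proportion to assumed importance weights; rs_encode is a placeholder stand-in for a real Reed-Solomon encoder, and the weights are illustrative, not the values used in this study.

# Sketch of unequal error protection (UEP): more Reed-Solomon parity bytes go
# to more important parts of the scalable bit stream. The layer weights and
# rs_encode() below are illustrative stand-ins, not the thesis's parameters
# or a real RS library call.

# Relative importance of packets by frame/layer type (assumed weights).
IMPORTANCE = {"base_I": 3.0, "base_P": 2.0, "enhancement": 1.0}

def allocate_parity(packets, total_parity_bytes):
    """Split a parity-byte budget across packets in proportion to importance."""
    weights = [IMPORTANCE[p["type"]] for p in packets]
    total_w = sum(weights)
    return [int(round(total_parity_bytes * w / total_w)) for w in weights]

def rs_encode(payload: bytes, nparity: int) -> bytes:
    """Placeholder for a Reed-Solomon encoder: appends nparity parity bytes."""
    return payload + bytes(nparity)   # real parity would be computed, not zeros

if __name__ == "__main__":
    packets = [
        {"type": "base_I", "data": b"I" * 120},
        {"type": "base_P", "data": b"P" * 120},
        {"type": "enhancement", "data": b"E" * 120},
    ]
    parity = allocate_parity(packets, total_parity_bytes=60)
    print("parity bytes per packet:", parity)        # [30, 20, 10]
    protected = [rs_encode(p["data"], n) for p, n in zip(packets, parity)]
    print("protected packet lengths:", [len(x) for x in protected])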
Contributors: Sundararaman, Hari (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With Internet traffic being bursty in nature, Dynamic Bandwidth Allocation (DBA) algorithms have always been very important for any broadband access network to utilize the available bandwidth efficiently. This is no different for Passive Optical Networks (PONs), which are networks based on fiber optics at the physical layer of the TCP/IP stack or OSI model and which thereby increase the bandwidth available to the upper layers. The work in this thesis covers a general description of basic DBA schemes and the mathematical derivations that have been established in the research literature. We introduce a novel survey topology that classifies DBA schemes based on their functionality; this perspective is useful in determining which scheme will best suit a consumer's needs. We classify DBA schemes as Direct, Intelligent, or Predictive based on their computation method, and we are able to qualitatively describe their delay and throughput bounds. We also describe a recently developed DBA scheme, Multi-Thread Polling (MTP), used in long-reach PONs (LRPONs), discuss its different viewpoints and issues, and consequently introduce a novel technique, Parallel Polling, that overcomes most of the issues faced in MTP and promises better delay performance for LRPONs.
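One plausible way to read the Direct versus Predictive distinction is sketched below: a direct (gated) scheme grants exactly the reported queue, while a predictive scheme adds a credit for traffic expected to arrive during the polling cycle. This is a toy illustration of the classification under our own assumptions, not code from the thesis.

# Illustrative contrast between a "direct" grant computation (gated: grant
# exactly what was requested) and a simple "predictive" one (add a credit for
# traffic expected to arrive before the grant is served). This is our own
# toy reading of the classification, not code from the thesis.

def gated_grant(reported_queue_bytes):
    """Direct scheme: the grant mirrors the reported queue occupancy."""
    return reported_queue_bytes

def predictive_grant(reported_queue_bytes, recent_arrival_rate_bps, cycle_s):
    """Predictive scheme: also grant the bytes expected during one cycle."""
    predicted_arrivals = int(recent_arrival_rate_bps / 8 * cycle_s)
    return reported_queue_bytes + predicted_arrivals

if __name__ == "__main__":
    queue = 12_000                       # bytes reported in the REPORT message
    print("gated grant:     ", gated_grant(queue))
    print("predictive grant:", predictive_grant(queue,
                                                recent_arrival_rate_bps=40e6,
                                                cycle_s=0.002))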
Contributors: Mercian, Anu (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Video deinterlacing is a key technique in digital video processing, particularly with the widespread use of LCD and plasma TVs. This thesis proposes a novel spatio-temporal, non-linear video deinterlacing technique that adaptively chooses among the results from one-dimensional control grid interpolation (1DCGI), vertical temporal filtering (VTF), and temporal line averaging (LA). The proposed method performs better than several popular benchmark methods in terms of both visual quality and peak signal-to-noise ratio (PSNR). The algorithm outperforms existing approaches such as edge-based line averaging (ELA) and spatio-temporal edge-based median filtering (STELA) on fine moving edges and semi-static regions of videos, which are recognized as particularly challenging deinterlacing cases. The proposed approach also performs better than the state-of-the-art content adaptive vertical temporal filtering (CAVTF) approach. Along with the main approach, several spin-off approaches are proposed, each with its own characteristics.
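As a toy illustration of the classical building blocks mentioned above, the sketch below interpolates a single missing scan line using plain line averaging and edge-based line averaging (ELA); it is a simplified example, not the proposed adaptive 1DCGI/VTF/LA algorithm.

# Toy deinterlacer for a single missing line: plain line averaging versus
# edge-based line averaging (ELA), which picks the interpolation direction
# with the smallest luminance difference. A simplified illustration, not the
# thesis's adaptive 1DCGI/VTF/LA algorithm.
import numpy as np

def deinterlace_missing_line(above, below, method="ela"):
    """Interpolate one missing scan line from its upper/lower neighbors."""
    above = above.astype(np.float32)
    below = below.astype(np.float32)
    if method == "la":                       # plain line averaging
        return (above + below) / 2.0
    # ELA: compare left-diagonal, vertical, and right-diagonal candidates
    # and keep, per pixel, the direction whose neighbors differ the least.
    cands, diffs = [], []
    for shift in (-1, 0, 1):
        a = np.roll(above, shift)
        b = np.roll(below, -shift)
        cands.append((a + b) / 2.0)
        diffs.append(np.abs(a - b))
    cands, diffs = np.stack(cands), np.stack(diffs)
    best = np.argmin(diffs, axis=0)
    return np.take_along_axis(cands, best[None, :], axis=0)[0]

if __name__ == "__main__":
    # A diagonal edge: ELA follows the edge, plain averaging blurs it.
    above = np.array([10, 10, 10, 200, 200, 200], dtype=np.uint8)
    below = np.array([10, 200, 200, 200, 200, 200], dtype=np.uint8)
    print("LA :", deinterlace_missing_line(above, below, "la"))
    print("ELA:", deinterlace_missing_line(above, below, "ela"))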
Contributors: Venkatesan, Ragav (Author) / Frakes, David H (Thesis advisor) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2012