Matching Items (35)
Description
Blur is an important attribute in the study and modeling of the human visual system. In this work, 3D blur discrimination experiments are conducted to measure the just-noticeable additional blur required to differentiate a target blur from a reference blur level. Past studies on blur discrimination measured the sensitivity of the human visual system to blur using 2D test patterns. In this dissertation, subjective tests are performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. The results of this study indicate that, in the symmetric stereo viewing case, binocular disparity does not affect the blur discrimination thresholds for the selected 3D test patterns. In the asymmetric viewing case, the blur discrimination thresholds decreased, and the decrease in threshold values was found to be dominated by the eye observing the higher blur.
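The abstract does not describe the threshold-measurement procedure itself. As a rough illustration of how a just-noticeable additional blur can be estimated, here is a minimal 1-up/2-down staircase sketch with a simulated observer; the observer model, lapse rate, step size, and starting value are all assumptions for the sketch, not the dissertation's protocol.

```python
import random

def simulated_observer(reference_blur, target_blur, true_jnd, lapse=0.1):
    """Hypothetical observer: reliably detects the extra blur only when it
    exceeds the assumed just-noticeable difference; otherwise responds
    'detected' 10% of the time by chance."""
    return (target_blur - reference_blur) > true_jnd or random.random() < lapse

def staircase_threshold(reference_blur, true_jnd, start_delta=2.0,
                        step=0.1, n_reversals=12):
    """1-up/2-down staircase, converging near the 70.7%-correct point.
    Returns the mean of the later reversal (turning-point) blur deltas."""
    delta, streak, direction, reversals = start_delta, 0, -1, []
    while len(reversals) < n_reversals:
        if simulated_observer(reference_blur, reference_blur + delta, true_jnd):
            streak += 1
            if streak == 2:                  # two detections -> decrease delta
                streak = 0
                if direction == +1:
                    reversals.append(delta)  # turning point: up -> down
                direction = -1
                delta = max(step, delta - step)
        else:
            streak = 0
            if direction == -1:
                reversals.append(delta)      # turning point: down -> up
            direction = +1
            delta += step
    # discard early reversals to let the staircase settle
    return sum(reversals[4:]) / len(reversals[4:])
```

The staircase hovers around the observer's true discrimination threshold, so the mean reversal value serves as the threshold estimate.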



The second part of the dissertation focuses on texture granularity in the context of 2D images. A texture granularity database, referred to as GranTEX and consisting of textures with varying granularity levels, is constructed. A subjective study is conducted to measure the perceived granularity level of the textures in the GranTEX database. An objective index that automatically measures the perceived granularity level of textures is also presented. It is shown that the proposed granularity metric correlates well with the subjective granularity scores and outperforms the other methods presented in the literature.
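The proposed GranTEX index itself is not specified in this abstract. As a rough illustration of one way a granularity score can be computed, the sketch below thresholds a grayscale texture into primitives and reports their mean size; the fixed threshold and 4-connectivity are assumptions for the sketch, not the dissertation's metric.

```python
import numpy as np

def granularity_index(gray, threshold=0.5):
    """Illustrative granularity score: segment the texture by thresholding,
    then return the mean size of the connected primitives (blobs). Larger
    primitives suggest a coarser, more granular texture."""
    binary = gray > threshold
    visited = np.zeros_like(binary, dtype=bool)
    sizes = []
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                # flood fill over 4-connected neighbours
                stack, size = [(i, j)], 0
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return float(np.mean(sizes)) if sizes else 0.0
```

On a texture with one large 4x4 primitive the index is 16, while four isolated single-pixel primitives give 1, matching the intuition that coarser textures score higher.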

A subjective study is conducted to assess the effect of compression on textures with varying degrees of granularity. A logarithmic function model is proposed as a fit to the subjective test data. It is demonstrated that the proposed model can be used for rate-distortion control by allowing the automatic selection of the needed compression ratio for a target visual quality. The proposed model can also be used for visual quality assessment by providing a measure of the visual quality for a target compression ratio.
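The fitted model from the subjective study is not reproduced in this abstract. The sketch below shows the general shape of such a logarithmic fit and its inversion for rate-distortion control; the quality scores and compression ratios are made-up illustrative data, not the study's measurements.

```python
import numpy as np

# Hypothetical subjective quality scores (higher = better) at several
# compression ratios -- illustrative stand-ins for the study's data.
ratios = np.array([2, 5, 10, 20, 40, 80], dtype=float)
quality = np.array([4.8, 4.5, 4.0, 3.4, 2.8, 2.1])

# Fit the logarithmic model  q(r) = a * ln(r) + b  by least squares.
a, b = np.polyfit(np.log(ratios), quality, 1)

def predicted_quality(ratio):
    """Visual quality predicted by the fitted model for a given ratio."""
    return a * np.log(ratio) + b

def ratio_for_quality(target_q):
    """Invert the model for rate-distortion control: the compression ratio
    expected to yield the target visual quality."""
    return float(np.exp((target_q - b) / a))
```

The forward direction gives quality assessment for a chosen compression ratio; the inverse gives the automatic ratio selection for a target quality, as described above.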

The effect of texture granularity on the quality of synthesized textures is studied. A subjective study is presented to assess the quality of synthesized textures with varying levels of texture granularity using different types of texture synthesis methods. This work also proposes a reduced-reference visual quality index referred to as delta texture granularity index for assessing the visual quality of synthesized textures.
Contributors: Subedar, Mahesh M (Author) / Karam, Lina (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Conceptual knowledge and self-efficacy are two research topics that are well established at universities; however, very little has been investigated about them at the community college. A sample of thirty-seven students enrolled in three introductory circuit analysis classes at a large southwestern community college was used to answer questions about the conceptual knowledge and self-efficacy of community college engineering students. Measures included a demographic survey and a pre/post three-tiered concept inventory to evaluate student conceptual knowledge of basic DC circuit analysis and self-efficacy for circuit analysis. A group effect was present in the data, so descriptive statistics were used to investigate the relationships among students' personal and academic characteristics and their conceptual knowledge of circuit analysis. The a priori attribute approach was used to qualitatively investigate the misconceptions students hold about circuit analysis. The results suggest that students who take more credit hours score higher on a test of conceptual knowledge of circuit analysis; however, due to the group effect, additional research is required to confirm this. No new misconceptions were identified. In addition, one group of students received more time to practice using the concepts; that group scored higher on the concept inventory, possibly indicating that students who have extra practice time may score higher on a test of conceptual knowledge of circuit analysis. Correlation analysis was used to identify relationships among students' personal and academic characteristics and self-efficacy for circuit analysis, as well as to investigate the relationship between self-efficacy for circuit analysis and conceptual knowledge of circuit analysis. Subjects' father's education level was found to be inversely correlated with self-efficacy for circuit analysis, and subjects' age was found to be directly correlated with self-efficacy for circuit analysis.
Finally, self-efficacy for circuit analysis was found to be positively correlated with conceptual knowledge of circuit analysis.
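The correlations reported above rest on the Pearson product-moment statistic; as a reminder of what is being computed, here is a minimal sketch on made-up score pairs (the data are illustrative, not the study's).

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples,
    the statistic behind findings such as 'age is directly correlated with
    self-efficacy'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 indicates a direct correlation (as found for age and self-efficacy), a value near -1 an inverse one (as found for father's education level).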
Contributors: Whitesel, Carl Arthur (Author) / Baker, Dale R. (Thesis advisor) / Reisslein, Martin (Committee member) / Carberry, Adam (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The purpose of this paper is to introduce a new method of dividing wireless communication (such as the 802.11a/b/g and cellular UMTS MAC protocols) across multiple unreliable communication links (such as Ethernet), together with the appropriate hardware, software, and system architecture required to provide the basis for a wireless system (using the 802.11a/b/g and cellular protocols as a model) that can scale to support thousands of users simultaneously (say, in a large office building or super chain store) or in a small but very dense RF communication region. Elements of communication between a base station and a mobile station will be analyzed statistically to demonstrate higher throughput, fewer collisions, and lower bit error rates (BER) within the bandwidth defined by the 802.11n wireless specification (the use of MIMO channels will be evaluated). A new network nodal paradigm will be presented. Alternative link layer communication techniques will be recommended and analyzed for their effect on mobile devices. The analysis will describe how the algorithms used by state machines implemented on mobile stations and Wi-Fi client devices will be influenced by the new base station transmission behavior. New hardware design techniques that can be used to optimize this architecture, as well as the minimal hardware functional blocks required to support such a system design, will be described. Hardware design and verification simulation techniques will be presented to prove that the hardware design accommodates an acceptable level of performance and meets the strict timing requirements of this new system architecture.
Contributors: James, Frank (Author) / Reisslein, Martin (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Recently, the locations of the nodes in wireless networks have been modeled as point processes. In this dissertation, various scenarios of wireless communications in large-scale networks modeled as point processes are considered. The first part of the dissertation considers signal reception and detection problems with symmetric alpha-stable noise, which arises from an interfering network modeled as a Poisson point process. For the signal reception problem, the performance of space-time coding (STC) over fading channels with alpha-stable noise is studied. We derive the pairwise error probability (PEP) of orthogonal STCs. For general STCs, we propose a maximum-likelihood (ML) receiver and its approximation. The resulting asymptotically optimal receiver (AOR) does not depend on the noise parameters, is computationally simple, and is close to the ML performance. Then, signal detection in coexisting wireless sensor networks (WSNs) is considered. We define a binary hypothesis testing problem for signal detection in coexisting WSNs. For this problem, we introduce the ML detector and simpler alternatives. The proposed mixed fractional lower order moment (FLOM) detector is computationally simple and close to the ML performance. Stochastic orders are binary relations defined on probability distributions. The second part of the dissertation introduces stochastic ordering of interferences in large-scale networks modeled as point processes. Since closed-form results for the interference distributions of such networks are only available in limited cases, it is of interest to compare network interferences using stochastic orders. In this dissertation, conditions on the fading distribution and path-loss model are given to establish stochastic ordering between interferences. Moreover, Laplace functional (LF) ordering is defined between point processes and applied to comparing interference. Then, the LF orderings of general classes of point processes are introduced.
It is also shown that the LF ordering is preserved when independent operations such as marking, thinning, random translation, and superposition are applied. The LF ordering of point processes is a useful tool for comparing spatial deployments of wireless networks and can be used to establish comparisons of several performance metrics such as coverage probability, achievable rate, and resource allocation even when closed form expressions for such metrics are unavailable.
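The ordering used above can be sketched with the standard definition of the Laplace functional; the notation here follows common point-process usage, and the precise conditions are in the dissertation itself.

```latex
% Laplace functional of a point process \Phi on \mathbb{R}^d,
% for non-negative measurable test functions u:
\mathcal{L}_{\Phi}(u)
  \;=\; \mathbb{E}\!\left[\exp\!\left(-\int_{\mathbb{R}^d} u(x)\,\Phi(\mathrm{d}x)\right)\right]

% LF ordering between point processes:
\Phi_1 \le_{\mathrm{LF}} \Phi_2
  \quad\Longleftrightarrow\quad
  \mathcal{L}_{\Phi_1}(u) \;\ge\; \mathcal{L}_{\Phi_2}(u)
  \quad \text{for all } u \ge 0
```

Since the aggregate interference is a sum of fading-weighted path-loss terms over the points of the process, an LF ordering of the deployments carries over to an ordering of the interference, which is what makes the comparisons of coverage probability and achievable rate possible without closed-form interference distributions.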
Contributors: Lee, Junghoon (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Committee member) / Reisslein, Martin (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Voice and other circuit-switched services in an LTE deployment can be based on a Circuit Switched Fallback mechanism or on the upcoming Voice over LTE option. Voice over LTE can be used, with its SIP-based signaling, to route voice calls and other circuit-switched services over LTE's packet-switched core. The main issue, however, is validating this approach before deployment over a commercial network. The test strategy devised as a result of this work covers corner scenarios and error-sensitive services, so that the signaling involved can be verified to ensure a robust deployment of the Voice over LTE network. The signaling test strategy is based on observations made during a simulated Voice over LTE call inside the lab in a controlled environment. The emergency services offered are carefully studied to devise a robust test strategy that avoids any service failure. Other areas, where a service is routed via a different protocol stack layer than it normally is in a legacy circuit-switched core, are identified and brought into the scope of the test strategy.
Contributors: Thotton Veettil, Vinayak (Author) / Reisslein, Martin (Thesis advisor) / Ying, Lei (Committee member) / McGarry, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Dynamic channel selection in cognitive radio consists of two main phases. The first phase is spectrum sensing, during which the channels that are occupied by the primary users are detected. The second phase is channel selection, during which the state of the channel to be used by the secondary user is estimated. The existing cognitive radio channel selection literature assumes perfect spectrum sensing. However, this assumption becomes problematic as the noise in the channels increases, resulting in a high probability of false alarm and a high probability of missed detection. This thesis proposes a solution to this problem by incorporating the estimated state of channel occupancy into a selection cost function. The problem of optimal single-channel selection in cognitive radio is considered. A unique approach to the channel selection problem is proposed, which consists of first using a particle filter to estimate the state of channel occupancy and then using the estimated state with a cost function to select a single channel for transmission. The selection cost function provides a means of assessing the various combinations of unoccupied channels in terms of desirability. By minimizing the expected selection cost function over all possible channel occupancy combinations, the optimal hypothesis, which identifies the optimal single channel, is obtained. Several variations of the proposed cost-based channel selection approach are discussed and simulated in a variety of environments, ranging from low to high numbers of primary user channels, low to high signal-to-noise ratios, and low to high levels of primary user traffic.
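As a simplified illustration of the estimate-then-select idea, the sketch below substitutes a discrete Bayes update for the thesis's particle filter and uses a toy expected-cost rule for the selection step; the false-alarm/missed-detection rates and cost weights are illustrative assumptions.

```python
def update_occupancy_belief(prior, sensed_busy, p_fa=0.1, p_md=0.2):
    """Bayes update of P(channel occupied) from one noisy binary sensing
    result, with false-alarm rate p_fa and missed-detection rate p_md.
    (A stand-in for the particle filter's state estimate.)"""
    if sensed_busy:
        num = (1 - p_md) * prior
        den = num + p_fa * (1 - prior)
    else:
        num = p_md * prior
        den = num + (1 - p_fa) * (1 - prior)
    return num / den

def select_channel(beliefs, collision_cost=10.0, idle_reward=1.0):
    """Pick the single channel minimising the toy expected selection cost
    E[cost] = P(occupied) * collision_cost - P(free) * idle_reward."""
    costs = [p * collision_cost - (1 - p) * idle_reward for p in beliefs]
    return min(range(len(beliefs)), key=costs.__getitem__)
```

Running the update repeatedly as sensing results arrive, and re-selecting each slot, mimics the two-phase structure described above: imperfect sensing is absorbed into the occupancy belief rather than being trusted outright.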
Contributors: Zapp, Joseph (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Surveys indicate an 81% rise in mobile data usage in 2013. A fair share of this total data demand can be attributed to video streaming. The encoding structure of videos introduces nuances that can be utilized to ensure a fair and optimal means of streaming the video data. This dissertation proposes a novel user and packet scheduling algorithm that guarantees a fair allocation of resources. The MS-SSIM index is used to calculate the differential mean opinion score (DMOS) to evaluate the quality of the received video. Simulations indicate that the proposed algorithm outperforms existing algorithms in the literature.
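The abstract does not give the MS-SSIM-to-DMOS mapping it uses; a common practice in video-quality studies is to fit a logistic function between the objective index and the subjective scores, sketched here with purely illustrative parameter values, not ones fitted to any real data.

```python
import math

def msssim_to_dmos(msssim, b1=100.0, b2=-12.0, b3=0.85):
    """Map an MS-SSIM value (0..1, higher = better quality) to a predicted
    DMOS (higher = worse quality) via a 3-parameter logistic. The parameter
    values b1, b2, b3 are illustrative placeholders; in practice they are
    fitted to subjective test data."""
    return b1 / (1.0 + math.exp(-b2 * (msssim - b3)))
```

With a negative slope parameter, a higher MS-SSIM (better received video) maps to a lower predicted DMOS, matching the usual orientation of difference mean opinion scores.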
Contributors: Choudhuri, Sabarna (Author) / Ying, Lei (Thesis advisor) / Bliss, Dan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
LTE-Advanced networks employ random access based on preambles transmitted according to multi-channel slotted Aloha principles. The random access is controlled through a limit W on the number of transmission attempts and a timeout period for uniform backoff after a collision. We model the LTE-Advanced random access system by formulating the equilibrium condition for the ratio of the number of requests successful within the permitted number of transmission attempts to those successful in one attempt. We prove that for W ≤ 8 there is only one equilibrium operating point, and for W ≥ 9 there are three operating points if the request load ρ is between load boundaries ρ1 and ρ2. We analytically identify these load boundaries as well as the corresponding system operating points. We analyze the throughput and delay of successful requests at the operating points and validate the analytical results through simulations. Further, we generalize the results using a steady-state equilibrium based approach and develop models for single-channel and multi-channel systems, incorporating the barring probability PB. Ultimately, we identify the de-correlating effect of the parameters O, PB, and Tomax, introduce the Poissonization effect due to the backlogged requests in a slot, and investigate the impact of Poissonization on different traffic patterns.
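The equilibrium analysis itself is in the thesis; as a companion illustration of the system being modeled, here is a small Monte-Carlo sketch of multi-channel slotted Aloha with an attempt limit W and uniform backoff after collisions. The preamble count, timeout range, and load values are illustrative assumptions, not the thesis's settings.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's method for one Poisson draw with mean lam."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def simulate_aloha(load, n_preambles=54, W=8, timeout=4, n_slots=2000, seed=1):
    """Multi-channel slotted Aloha: each request picks a random preamble;
    a lone transmission on a preamble succeeds, a collision triggers a
    uniform backoff, and a request is dropped after W failed attempts.
    Returns the fraction of arrived requests that eventually succeed."""
    rng = random.Random(seed)
    backlog = []                       # (remaining_attempts, backoff_slots)
    arrived = succeeded = 0
    for _ in range(n_slots):
        n_new = poisson_sample(rng, load)
        arrived += n_new
        backlog.extend([(W, 0)] * n_new)
        transmitting = [a for a, w in backlog if w == 0]
        waiting = [(a, w - 1) for a, w in backlog if w > 0]
        by_preamble = {}
        for attempts in transmitting:
            by_preamble.setdefault(rng.randrange(n_preambles), []).append(attempts)
        retries = []
        for group in by_preamble.values():
            if len(group) == 1:
                succeeded += 1         # lone transmission on this preamble
            else:                      # collision: back off, one fewer attempt
                for attempts in group:
                    if attempts - 1 > 0:
                        retries.append((attempts - 1, rng.randrange(1, timeout + 1)))
        backlog = waiting + retries
    return succeeded / max(arrived, 1)
```

At light load nearly every request succeeds, while far beyond the channel capacity the success ratio collapses; the coexistence of such regimes at intermediate loads is what the multiple equilibrium operating points above capture analytically.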
Contributors: Tyagi, Revak (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / McGarry, Michael (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
LTE (Long Term Evolution) represents an emerging technology that will change how service providers backhaul user traffic to their infrastructure over IP networks. To support the growing mobile bandwidth demand, an EPON backhaul infrastructure will make real-time high-bandwidth applications possible. LTE backhaul planning and deployment scenarios are important factors for network success. In this thesis, we study the effect of LTE backhaul on the optical network, in an attempt to interoperate fiber and wireless networks. This project is based on traffic forecasts for LTE networks. Traffic models are studied and gathered from the literature to reflect applications accurately. Careful capacity planning of the mobile backhaul will bring a better experience for LTE users, in terms of the bit rates and latency they can expect, while allowing network operators to spend their funds effectively.
Contributors: Alharbi, Ziyad (Author) / Reisslein, Martin (Thesis advisor) / Zhang, Yanchao (Committee member) / McGarry, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Small wireless cells have the potential to overcome bottlenecks in wireless access through the sharing of spectrum resources. A novel access backhaul network architecture based on a Smart Gateway (Sm-GW) between the small cell base stations, e.g., LTE eNBs, and the conventional backhaul gateways, e.g., LTE Serving/Packet Gateways (S/P-GWs), has been introduced to address the bottleneck. The Sm-GW flexibly schedules uplink transmissions for the eNBs. Based on software defined networking (SDN), a management mechanism has been proposed that allows multiple operators to flexibly inter-operate via multiple Sm-GWs with a multitude of small cells. This dissertation also comprehensively surveys the studies that examine the SDN paradigm in optical networks. Along with the PHY functional split improvements, the performance of the Distributed Converged Cable Access Platform (DCCAP) in cable architectures, especially for the Remote-PHY and Remote-MACPHY nodes, has been evaluated. For the PHY functional split, in addition to the re-use of infrastructure with a common FFT module for multiple technologies, a novel cross-functional split interaction has been proposed that caches the repetitive QAM symbols across time at the remote node to reduce the transmission rate requirement of the fronthaul link.
Contributors: Thyagaturu, Akhilesh (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Zhang, Yanchao (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2017