
Distributed inference has applications in fields as varied as source localization, evaluation of network quality, and remote monitoring of wildlife habitats. In this dissertation, distributed inference algorithms over multiple-access channels are considered. The performance of these algorithms and the effects of wireless communication channels on that performance are studied. In the first class of problems, distributed inference over fading Gaussian multiple-access channels with amplify-and-forward is considered. Sensors observe a phenomenon and transmit their observations to a fusion center (FC) using the amplify-and-forward scheme. Distributed estimation is considered with a single antenna at the FC, where performance is evaluated using the asymptotic variance of the estimator. The loss in performance due to varying assumptions on the limited amounts of channel information at the sensors is quantified. With multiple antennas at the FC, a distributed detection problem is also considered, where the error exponent is used to evaluate performance. It is shown that for zero-mean channels between the sensors and the FC, when there is no channel information at the sensors, arbitrarily large gains in the error exponent can be obtained with a sufficient increase in the number of antennas at the FC. In stark contrast, when there is channel information at the sensors, the gain in error exponent due to having multiple antennas at the FC is shown to be no more than a factor of 8/π for Rayleigh fading channels between the sensors and the FC, independent of the number of antennas at the FC or the correlation among noise samples across sensors. In the second class of problems, sensor observations are transmitted to the FC using constant-modulus phase modulation over Gaussian multiple-access channels. The phase modulation scheme allows for constant transmit power and for estimation of moments other than the mean with a single transmission from the sensors. Estimators are developed for the mean, variance, and signal-to-noise ratio (SNR) of the sensor observations. The performance of these estimators is studied for different distributions of the observations. It is proved that the estimator of the mean is asymptotically efficient if and only if the distribution of the sensor observations is Gaussian.
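As an illustrative aside (not from the dissertation itself), the phase-modulation scheme can be sketched in a few lines: each sensor transmits a constant-modulus signal whose phase encodes its reading, and the FC recovers the mean from the phase and the variance from the magnitude of the superimposed signal. All parameter and noise values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: N sensors observe a Gaussian phenomenon and
# phase-modulate their readings with modulation frequency omega.
N, mu, sigma, omega = 10_000, 1.3, 0.8, 0.5

x = rng.normal(mu, sigma, N)            # sensor observations
tx = np.exp(1j * omega * x)             # constant-modulus transmissions

# Superposition over the Gaussian multiple-access channel plus receiver
# noise (an assumed noise level, small relative to the coherent sum).
noise = 30 * (rng.standard_normal() + 1j * rng.standard_normal())
z = (tx.sum() + noise) / N              # normalized received signal

mu_hat = np.angle(z) / omega                   # mean from the phase
var_hat = -2 * np.log(np.abs(z)) / omega**2    # variance from the magnitude
snr_hat = mu_hat**2 / var_hat                  # SNR of the observations

print(f"mean    : true {mu:.3f}   est {mu_hat:.3f}")
print(f"variance: true {sigma**2:.3f}   est {var_hat:.3f}")
```

The separation works because, for Gaussian observations, E[e^{jωX}] = e^{jωμ − ω²σ²/2}: the phase of the normalized sum carries the mean while its magnitude carries the variance. The if-and-only-if efficiency result stated above is specific to the Gaussian case.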

Ethernet switching is provided to interconnect multiple Ethernets for the exchange of Ethernet data frames. Most Ethernet switches require data buffering and Ethernet signal regeneration at the switch, which incur substantial signal processing, power consumption, and transmission delay. To solve these problems, a crossbar-architecture switching system for 10GBASE-T Ethernet is proposed in this thesis. The switching system is considered the first step toward implementing a multi-stage interconnection network to achieve Terabit or Petabit switching. By routing customized headers of encapsulated Ethernet frames through an out-of-band control method, the proposed switching system transmits the original Ethernet frames with little processing, thereby making the system appear as a simple physical medium to different hosts. The switching system is designed and implemented using CMOS technology.
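To make the out-of-band idea concrete, here is a toy software model (my illustration, not the thesis's CMOS design): the control plane reads only the customized header, while the encapsulated Ethernet frame crosses the switch untouched, as if over a wire.

```python
from dataclasses import dataclass

@dataclass
class Capsule:
    out_port: int   # customized routing header, carried out-of-band
    frame: bytes    # original Ethernet frame, never parsed or buffered here

class Crossbar:
    def __init__(self, n_ports: int):
        self.n_ports = n_ports

    def switch(self, capsule: Capsule) -> tuple[int, bytes]:
        # Set the crosspoint from the header, then pass the frame through.
        assert 0 <= capsule.out_port < self.n_ports
        return capsule.out_port, capsule.frame

xbar = Crossbar(n_ports=4)
port, frame = xbar.switch(Capsule(out_port=2, frame=b"\xff\xff..."))
print(f"{len(frame)}-byte frame delivered to port {port}")
```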

This dissertation describes a novel, low-cost strategy of using particle streak (track) images for accurate micro-channel velocity field mapping. It is shown that 2-dimensional, 2-component fields can be efficiently obtained using the spatial variation of particle track lengths in micro-channels. The velocity field is a critical performance feature of many microfluidic devices. Since it is often the case that un-modeled micro-scale physics frustrates principled design methodologies, particle-based velocity field estimation is an essential design and validation tool. Current technologies that achieve this goal use particle constellation correlation strategies and rely heavily on costly, high-speed imaging hardware. The proposed image/video processing based method achieves comparable accuracy for a fraction of the cost. In the context of micro-channel velocimetry, the usability of particle streaks has so far been poorly studied. Their use has remained restricted mostly to bulk flow measurements and occasional ad-hoc uses in microfluidics. A second look at their usability in this work reveals that particle streak lengths can be used efficiently, approximately 15 years after their first use for micro-channel velocimetry. Particle tracks in steady, smooth microfluidic flows are mathematically modeled, and a framework for using experimentally observed particle track lengths for local velocity field estimation is introduced here, followed by algorithm implementation and quantitative verification. Further, experimental considerations and image processing techniques that can facilitate the proposed methods are also discussed in this dissertation. The unavailability of benchmarked particle track image data motivated the implementation of a simulation framework with the capability to generate exposure-time-controlled particle track image sequences for velocity vector fields. This dissertation also describes this work and shows that arbitrary velocity fields designed in computational fluid dynamics software tools can be used to obtain such images. Apart from aiding gold-standard data generation, such images would find use for quick microfluidic flow field visualization and help improve device designs.
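The core relation behind the streak approach can be sketched briefly (a simplified illustration, not the dissertation's full framework): in a steady flow imaged with a known exposure time, the streak length encodes the local particle speed. All values below are hypothetical.

```python
import numpy as np

exposure = 5e-3        # s, assumed camera exposure time
particle_diam = 2e-6   # m, assumed tracer diameter (a streak includes the dot)

# Hypothetical track lengths (m) measured at three channel locations.
track_lengths = np.array([52e-6, 27e-6, 12e-6])

# Speed ~ (streak length - particle diameter) / exposure time
speeds = (track_lengths - particle_diam) / exposure   # m/s
for length, v in zip(track_lengths, speeds):
    print(f"track {length * 1e6:5.1f} um  ->  speed {v * 1e3:5.1f} mm/s")
```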

Diffusion processes in networks can be used to model many real-world processes, such as the propagation of a rumor on social networks and cascading failures on power networks. Analysis of diffusion processes in networks can help us answer important questions, such as the role and importance of each node in spreading the diffusion and how to stop or contain a cascading failure in the network. This dissertation consists of three parts.
In the first part, we study the problem of locating multiple diffusion sources in networks under the Susceptible-Infected-Recovered (SIR) model. Given a complete snapshot of the network, we developed a sample-path-based algorithm, named clustering and localization, and proved that for regular trees, the estimators produced by the proposed algorithm are within a constant distance from the real sources with high probability. Then, we considered the case in which only a partial snapshot is observed and proposed a new algorithm, named Optimal-Jordan-Cover (OJC). The algorithm first extracts a subgraph using a candidate selection algorithm that selects source candidates based on the number of observed infected nodes in their neighborhoods. Then, in the extracted subgraph, OJC finds a set of nodes that "cover" all observed infected nodes with the minimum radius. This set of nodes is called the Jordan cover and is regarded as the set of diffusion sources. We proved that OJC can locate all sources with probability one asymptotically with partial observations in the Erdős-Rényi (ER) random graph. Experiments on multiple networks show that our algorithms outperform existing ones.
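For intuition, the Jordan-cover step can be brute-forced on small graphs (an illustrative sketch using networkx; OJC as described above additionally prunes candidates from the partial snapshot before this step): find k nodes whose maximum distance to any observed infected node is minimal.

```python
from itertools import combinations
import networkx as nx

def jordan_cover(G, infected, k):
    # all-pairs shortest-path lengths from every node
    dist = {v: nx.single_source_shortest_path_length(G, v) for v in G}
    best_cover, best_radius = None, float("inf")
    for cand in combinations(G.nodes, k):
        # each infected node is covered by its closest candidate source
        radius = max(min(dist[c].get(i, float("inf")) for c in cand)
                     for i in infected)
        if radius < best_radius:
            best_cover, best_radius = cand, radius
    return best_cover, best_radius

G = nx.erdos_renyi_graph(30, 0.15, seed=1)
print(jordan_cover(G, infected=[0, 3, 7, 12, 19], k=2))
```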
In the second part, we tackle the problem of reconstructing the diffusion history from partial observations. We formulated the diffusion history reconstruction problem as a maximum a posteriori (MAP) problem and proved that the problem is NP-hard. We then proposed a step-by-step reconstruction algorithm, which always produces a diffusion history that is consistent with the partial observations. Our experimental results based on synthetic and real networks show that the algorithm significantly outperforms some existing methods.
In the third part, we consider the problem of improving the robustness of an interdependent network by rewiring a small number of links during a cascading attack. We formulated the problem as a Markov decision process (MDP) problem. While the problem is NP-hard, we developed an effective and efficient algorithm, RealWire, to robustify the network and to mitigate the damage during the attack. Extensive experimental results show that our algorithm outperforms other algorithms on most of the robustness metrics.

The fundamental limits of fixed-to-variable (F-V) and variable-to-fixed (V-F) length universal source coding at short blocklengths are characterized. For F-V length coding, the Type Size (TS) code has previously been shown to be optimal up to the third-order rate for universal compression of all memoryless sources over finite alphabets. The TS code assigns sequences, ordered by their type class sizes, to binary strings ordered lexicographically.
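The assignment rule is easy to enumerate for a toy binary alphabet (my illustration of the ordering just stated, not an optimized implementation): the type of a binary sequence is its number of ones, so sequences in rare (small) type classes come first and receive the short codewords.

```python
from itertools import product
from math import comb

def ts_code(n):
    seqs = ["".join(s) for s in product("01", repeat=n)]
    # type class size of a sequence with k ones is C(n, k);
    # sort ascending so small classes get short codewords
    seqs.sort(key=lambda s: (comb(n, s.count("1")), s))

    def bin_strings():  # "", "0", "1", "00", "01", ... (by length, then lex)
        length = 0
        while True:
            yield from ("".join(b) for b in product("01", repeat=length))
            length += 1

    return dict(zip(seqs, bin_strings()))

for seq, codeword in ts_code(3).items():
    print(seq, "->", repr(codeword))
```

With n = 3, the two singleton classes {000} and {111} receive the codewords "" and "0", while the six sequences with one or two ones share the longer codewords.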
The universal F-V coding problem for the class of first-order stationary, irreducible, and aperiodic Markov sources is considered first. The third-order coding rate of the TS code for the Markov class is derived. A converse on the third-order coding rate for the general class of F-V codes is presented, which shows the optimality of the TS code for such Markov sources.
This type class approach is then generalized to compression of parametric sources. A natural scheme is to define two sequences to be in the same type class if and only if they are equiprobable under every model in the parametric class. This natural approach, however, is shown to be suboptimal. A variation of the Type Size code is introduced, where type classes are defined based on neighborhoods of minimal sufficient statistics. The asymptotics of the overflow rate of this variation are derived, and a converse result establishes its optimality up to the third-order term. These results are derived for parametric families of i.i.d. sources as well as Markov sources.
Finally, universal V-F length coding of the class of parametric sources is considered in the short-blocklength regime. The proposed dictionary, which is used to parse the source output stream, consists of sequences at the boundary of the transition from low to high quantized type complexity, hence the name Type Complexity (TC) code. For a large enough dictionary, the $\epsilon$-coding rate of the TC code is derived, and a converse result shows its optimality up to the third-order term.

We consider the problem of routing packets with end-to-end hard deadlines in multihop communication networks. This is a challenging problem due to the complex spatial-temporal correlation among flows with different deadlines, especially when significant traffic fluctuations exist. To tackle this problem, building on the spatial-temporal routing algorithm, which specifies where and when a packet should be routed using the concepts of virtual links and virtual routes, we propose a constrained resource-pooling heuristic that enhances the "work-conserving" capability and improves the delivery ratio. Our extensive simulations show that the resulting policies improve the performance of the spatial-temporal routing algorithm and outperform traditional policies such as backpressure and earliest-deadline-first (EDF) for more general traffic flows in multihop communication networks.
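For reference, the EDF baseline named above can be sketched in a few lines (illustrative only; the proposed policy additionally reasons over virtual links and virtual routes and pools constrained resources across flows).

```python
import heapq

class EDFQueue:
    def __init__(self):
        self._heap = []

    def push(self, deadline, packet):
        heapq.heappush(self._heap, (deadline, packet))

    def pop_for_slot(self, t):
        # hard deadlines: discard packets that can no longer be delivered
        while self._heap and self._heap[0][0] < t:
            heapq.heappop(self._heap)
        return heapq.heappop(self._heap)[1] if self._heap else None

q = EDFQueue()
for deadline, packet in [(5, "a"), (2, "b"), (9, "c")]:
    q.push(deadline, packet)
print(q.pop_for_slot(t=3))   # "b" has expired by slot 3, so "a" is served first
```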

Dynamic spectrum access (DSA) has great potential to address the worldwide spectrum shortage by enhancing spectrum efficiency. It allows unlicensed secondary users to access the under-utilized spectrum when the primary users are not transmitting. On the other hand, the open wireless medium subjects DSA systems to various security and privacy issues, which might hinder its practical deployment. This dissertation consists of two parts that discuss these challenges and potential solutions.
The first part consists of three chapters, with a focus on secondary-user authentication. Chapter One gives an overview of the challenges and existing solutions in spectrum-misuse detection. Chapter Two presents SpecGuard, the first crowdsourced spectrum-misuse detection framework for DSA systems. In SpecGuard, three novel schemes are proposed for embedding and detecting a spectrum permit at the physical layer. Chapter Three proposes SafeDSA, a novel PHY-based scheme utilizing temporal features for authenticating secondary users. In SafeDSA, the secondary user embeds his spectrum authorization into the cyclic prefix of each physical-layer symbol, which can be detected and authenticated by a verifier.
The second part also consists of three chapters, with a focus on crowdsourced spectrum sensing (CSS) with privacy considerations. CSS allows a spectrum sensing provider (SSP) to outsource spectrum sensing to distributed mobile users. Without strong incentives and location-privacy protection in place, however, mobile users are reluctant to act as crowdsourcing workers for spectrum-sensing tasks. Chapter Four gives an overview of the challenges and existing solutions. Chapter Five presents PriCSS, where the SSP selects participants based on the exponential mechanism such that the participants' sensing costs, which are associated with their locations, are kept private. Chapter Six further proposes DPSense, a framework that allows an honest-but-curious SSP to select mobile users for executing spatiotemporal spectrum-sensing tasks without violating their location privacy. By collecting perturbed location traces with differential-privacy guarantees from participants, the SSP assigns spectrum-sensing tasks to participants with consideration of both spatial and temporal factors.
Through theoretical analysis and simulations, the efficiency and effectiveness of the proposed schemes are validated.
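As an illustration of the exponential-mechanism selection underlying PriCSS (parameter names and values here are mine, not the dissertation's): a participant is sampled with probability proportional to exp(eps * utility / (2 * sensitivity)), so low-cost users are favored while any single user's cost stays differentially private.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_participant(costs, eps, sensitivity):
    utility = -np.asarray(costs, dtype=float)   # lower cost = higher utility
    scores = eps * utility / (2 * sensitivity)
    probs = np.exp(scores - scores.max())       # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(costs), p=probs)

costs = [3.0, 1.0, 4.0, 1.5]                    # hypothetical sensing costs
picks = [select_participant(costs, eps=1.0, sensitivity=1.0)
         for _ in range(10_000)]
print(np.bincount(picks) / len(picks))          # user 1 is chosen most often
```

Raising eps sharpens the preference for cheap participants at the expense of privacy; eps -> 0 approaches uniform selection.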

Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is especially beneficial when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and faulty-module information. Various topologies such as the series-parallel (SP), the total cross-tied (TCT), the bridge link (BL), and their bypassed versions are considered. The topology reconfiguration method compares the efficiencies of the topologies and evaluates the percentage gain in generated power that would be obtained by reconfiguring the array, among other factors, to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on the array performance are also studied. The simulations are carried out using a SPICE simulator, and the simulation results are validated with experimental data provided by the PACECO Company.
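The selection step itself is simple to sketch (hypothetical wattages, not the reported measurements): given each candidate topology's estimated output under the current shading pattern, report the percentage gain over the present wiring and pick the best topology.

```python
def best_topology(power_by_topology, current):
    base = power_by_topology[current]
    gains = {t: 100.0 * (p - base) / base
             for t, p in power_by_topology.items()}
    return max(gains, key=gains.get), gains

# hypothetical array powers (W) under one shading pattern
powers = {"SP": 812.0, "TCT": 878.0, "BL": 845.0, "TCT-bypassed": 901.0}
best, gains = best_topology(powers, current="SP")
print(best, {t: f"{g:+.1f}%" for t, g in gains.items()})
```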

Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth limitations. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple-access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communication experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of a different path and/or user, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems. The unique design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
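One branch of a multiple-resampling receiver can be sketched as follows (my simplification with hypothetical values, not the thesis's receiver): the branch resamples the received waveform by a rational approximation of 1 + a to undo the Doppler scaling factor a of a particular path or user before OFDM demodulation.

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

fs = 48_000                                # Hz, assumed sampling rate
a = 1.5e-3                                 # hypothetical Doppler scaling factor

t = np.arange(0, 0.1, 1 / fs)
x = np.cos(2 * np.pi * 12_000 * t)              # transmitted tone (stand-in for OFDM)
rx = np.cos(2 * np.pi * 12_000 * (1 + a) * t)   # Doppler-compressed arrival

# rational approximation of the resampling ratio 1 + a
frac = Fraction(1 + a).limit_denominator(10_000)
branch = resample_poly(rx, frac.numerator, frac.denominator)   # undo the scaling

sl = slice(200, 4_600)                     # compare away from filter edge effects
err = np.sqrt(np.mean((branch[sl] - x[sl]) ** 2))
print(f"residual RMS error after resampling: {err:.2e}")
```

A multiple-resampling receiver runs one such branch per resolvable Doppler scale, which is what allows path- or user-specific distortions to be compensated separately.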

In the deregulated power system, locational marginal prices are used in transmission engineering predominantly as near-real-time pricing signals. This work extends the concept to distribution engineering, so that a distribution-class locational marginal price (DLMP) might be used for real-time pricing and control of advanced control systems in distribution circuits. A formulation for the DLMP signal is presented that is based on power flow sensitivities in a distribution system, and a Jacobian-based sensitivity analysis has been developed for application in the distribution pricing method. Increasing deployment of distributed energy sources is being seen at the distribution level, and this trend is expected to continue. To facilitate optimal use of the distributed infrastructure, the control of the energy demand on a feeder node in the distribution system has been formulated as a multiobjective optimization problem, and a solution algorithm has been developed. In multiobjective problems the Pareto optimality criterion is generally applied, and commonly used solution algorithms are decision-based and heuristic. In contrast, a mathematically robust technique called normal boundary intersection has been adopted in this work, and the control variable is solved for via separable programming. The Roy Billinton Test System (RBTS) has predominantly been used to demonstrate the application of the formulation in distribution system control. A parallel processing environment has been used to replicate the distributed nature of controls at many points in the distribution system. Interactions between the real-time prices in a distribution feeder and the nodal prices at the aggregated load bus have been investigated, and the application of the formulation in an islanded operating condition has also been demonstrated. The DLMP formulation has been validated using the test-bed systems, and a practical framework for its application in distribution engineering has been presented. The multiobjective optimization yields excellent results and is found to be robust for finer time resolutions. The work shown in this report is applicable to, and has been researched under the aegis of, the Future Renewable Electric Energy Delivery and Management (FREEDM) center, a generation III National Science Foundation engineering research center headquartered at North Carolina State University.
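A minimal sketch of a sensitivity-based distribution price (my simplified decomposition under assumed values, not the dissertation's exact formulation): each node's DLMP combines the substation energy price with a Jacobian-derived marginal-loss sensitivity at that node.

```python
import numpy as np

lambda_sub = 42.0                            # $/MWh, assumed feeder-head price
dloss_dp = np.array([0.012, 0.031, 0.048])   # hypothetical dP_loss/dP_i per node

# energy component plus marginal-loss component
dlmp = lambda_sub * (1.0 + dloss_dp)
for i, price in enumerate(dlmp, start=1):
    print(f"node {i}: DLMP = {price:.2f} $/MWh")
```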