Matching Items (59)

Item 187813
Description
The presence of strategic agents can pose unique challenges to data collection and distributed learning. This dissertation first explores the social network dimension of data collection markets, and then focuses on how strategic agents can be efficiently and effectively incentivized to cooperate in distributed machine learning frameworks. The first problem explores the impact of social learning in collecting and trading unverifiable information, where a data collector purchases data from users through a payment mechanism. Each user starts with a personal signal that represents his knowledge about the underlying state the data collector desires to learn. Through social interactions, each user also acquires additional information from his neighbors in the social network. It is revealed that both the data collector and the users can benefit from social learning, which drives down privacy costs and helps to improve the state estimation for a given total payment budget.

In the second half, a federated learning scheme for training a global learning model with strategic agents, who are not bound to contribute their resources unconditionally, is considered. Since the agents are not obliged to provide their true stochastic gradient updates and the server is not capable of directly validating the authenticity of the reported updates, the learning process may reach a noncooperative equilibrium. First, the actions of the agents are assumed to be binary: cooperative or defective. If the cooperative action is taken, the agent sends a privacy-preserved version of its stochastic gradient signal; if the defective action is taken, the agent sends an arbitrary, uninformative noise signal. This setup is then extended to scenarios with more general action spaces, where the quality of the stochastic gradient updates has a range of discrete levels. The proposed methodology evaluates each agent's stochastic gradient against a reference gradient estimate constructed from the gradients provided by the other agents, and rewards the agent based on that evaluation.
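
To make the reward mechanism concrete, the hedged sketch below scores each agent's reported gradient against a leave-one-out reference built from the other agents' reports and pays a reward proportional to their agreement. The cosine-similarity score, the simple averaging rule, and all variable names are illustrative assumptions, not the exact evaluation used in the dissertation.

```python
import numpy as np

def leave_one_out_reference(reports, k):
    """Reference gradient for agent k: average of all other agents' reports."""
    others = [g for i, g in enumerate(reports) if i != k]
    return np.mean(others, axis=0)

def reward_agents(reports, scale=1.0):
    """Score each reported gradient against its leave-one-out reference.

    reports: list of 1-D numpy arrays, one reported (possibly noisy or
             uninformative) stochastic gradient per agent.
    Returns a list of rewards, one per agent (illustrative scoring rule).
    """
    rewards = []
    for k, g_k in enumerate(reports):
        ref = leave_one_out_reference(reports, k)
        # Cosine similarity (assumption): cooperative reports align with the
        # reference built from other agents; arbitrary noise does not, so a
        # defector earns a lower expected reward.
        sim = g_k @ ref / (np.linalg.norm(g_k) * np.linalg.norm(ref) + 1e-12)
        rewards.append(scale * sim)
    return rewards

# Toy example: 4 cooperative agents and 1 defector reporting pure noise.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=20)
reports = [true_grad + 0.3 * rng.normal(size=20) for _ in range(4)]
reports.append(rng.normal(size=20))          # defector: uninformative noise
print(np.round(reward_agents(reports), 3))   # defector's reward is near zero
```
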
Contributors: Akbay, Abdullah Basar (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Committee member) / Kosut, Oliver (Committee member) / Ewaisha, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 156306
Description
Software-defined radio provides users with a low-cost and flexible platform for implementing and studying advanced communications and remote sensing applications. Two such applications are the unmanned aerial system-to-ground communications channel and joint sensing and communication systems. In this work, these applications are studied.

In the first part, unmanned aerial system-to-ground communications channel models are derived from empirical data collected from software-defined radio transceivers in residential and mountainous desert environments, using a small (< 20 kg) unmanned aerial system during low-altitude flight (< 130 m). The Kullback-Leibler divergence is employed to characterize model mismatch from the empirical data, and under this measure the derived models are shown to accurately describe the underlying data.
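
As a rough illustration of the Kullback-Leibler model-mismatch check described above, the sketch below bins empirical envelope samples into a histogram and computes the divergence against candidate fitted distributions. The Rician/Rayleigh candidates, bin count, and synthetic sample generation are assumptions for illustration; they are not the measured UAS channel models.

```python
import numpy as np
from scipy import stats

def kl_divergence(samples, candidate_dist, bins=50):
    """D_KL(empirical || model) computed over a common histogram binning."""
    counts, edges = np.histogram(samples, bins=bins)
    p = counts / counts.sum()            # empirical probability per bin
    q = np.diff(candidate_dist.cdf(edges))  # model probability per bin
    mask = (p > 0) & (q > 0)             # avoid log(0)
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Toy example: synthetic fading-envelope samples and two candidate fits.
rng = np.random.default_rng(1)
samples = stats.rice.rvs(b=2.0, scale=1.0, size=5000, random_state=rng)

rice_fit = stats.rice(*stats.rice.fit(samples, floc=0))
rayl_fit = stats.rayleigh(*stats.rayleigh.fit(samples, floc=0))

print("KL vs Rician fit:  ", kl_divergence(samples, rice_fit))
print("KL vs Rayleigh fit:", kl_divergence(samples, rayl_fit))
```
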

In the second part, an experimental joint sensing and communications system is implemented using a network of software-defined radio transceivers. A novel co-design receiver architecture is presented and demonstrated within a three-node joint multiple-access system topology consisting of independent radar and communications transmitters along with a joint radar and communications receiver. The receiver tracks an emulated target moving along a predefined path and simultaneously decodes a communications message. Experimental system performance bounds are characterized jointly using the communications channel capacity and a novel estimation information rate.
Contributors: Gutierrez, Richard (Author) / Bliss, Daniel W. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Ogras, Umit Y. (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 187375
Description
With the rapid development of reflect-arrays and software-defined meta-surfaces, reconfigurable intelligent surfaces (RISs) have been envisioned as promising technologies for next-generation wireless communication and sensing systems. These surfaces comprise massive numbers of nearly-passive elements that interact with the incident signals in a smart way to improve the performance of such systems. In RIS-aided communication systems, however, designing this smart interaction requires acquiring large-dimensional channel knowledge between the RIS and the transmitter/receiver. Acquiring this knowledge is one of the most crucial challenges in RISs, as it entails large computational and hardware complexity. For RIS-aided sensing systems, it is interesting to first investigate scene depth perception based on millimeter wave (mmWave) multiple-input multiple-output (MIMO) sensing. While mmWave MIMO sensing systems address some critical limitations of optical sensors, realizing these systems poses several key challenges: communication-constrained sensing framework design, beam codebook design, and scene depth estimation. Given the high spatial resolution provided by RISs, RIS-aided mmWave sensing systems have the potential to improve scene depth perception, while introducing some key challenges of their own.

In this dissertation, for RIS-aided communication systems, efficient RIS interaction design solutions are proposed by leveraging tools from compressive sensing and deep learning. The achievable rates of these solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead. For RIS-aided sensing systems, a mmWave MIMO based sensing framework is first developed for building accurate depth maps under the constraints imposed by the communication transceivers. Then, a scene depth estimation framework based on RIS-aided sensing is developed for building high-resolution, accurate depth maps. Numerical simulations illustrate the promising performance of the proposed solutions, highlighting their potential for next-generation communication and sensing systems.
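
The channel-acquisition challenge above is commonly attacked with sparse recovery; the hedged sketch below illustrates only that generic idea, recovering a channel that is sparse in an angular (DFT) dictionary from a small number of training measurements using orthogonal matching pursuit. The dimensions, training patterns, and OMP recovery are illustrative assumptions, not the solutions proposed in the dissertation.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y ≈ A @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        j = np.argmax(np.abs(A.conj().T @ residual))   # most correlated atom
        support.append(j)
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[:] = 0
        x_hat[support] = x_ls
        residual = y - A @ x_hat
    return x_hat

# Toy setup (assumed sizes): N RIS elements, a channel that is K-sparse in the
# DFT (angular) basis, and M << N noisy projections from M training slots.
rng = np.random.default_rng(2)
N, M, K = 128, 32, 3
F = np.fft.fft(np.eye(N)) / np.sqrt(N)                 # angular dictionary
x_true = np.zeros(N, dtype=complex)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K) + 1j * rng.normal(size=K)
h = F @ x_true                                          # sparse-in-angle channel
Phi = (rng.choice([-1, 1], size=(M, N)) +
       1j * rng.choice([-1, 1], size=(M, N))) / np.sqrt(2 * M)  # training patterns
y = Phi @ h + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

x_hat = omp(Phi @ F, y, K)
print("relative error:", np.linalg.norm(F @ x_hat - h) / np.linalg.norm(h))
```
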
Contributors: Taha, Abdelrahman (Author) / Alkhateeb, Ahmed (Thesis advisor) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 187456
Description
The past decade witnessed the success of deep learning models in various applications of computer vision and natural language processing. This success can be predominantly attributed to (i) the availability of large amounts of training data; (ii) access to domain-aware knowledge; (iii) the i.i.d. assumption between the training and target distributions; and (iv) the belief that existing metrics are reliable indicators of performance. When any of these assumptions is violated, the models exhibit brittleness and produce adversely varied behavior. This dissertation focuses on methods for accurate model design and characterization that enhance process reliability when certain assumptions are not met.

With the need to safely adopt artificial intelligence tools in practice, it is vital to build reliable failure detectors that indicate regimes where the model must not be invoked. To that end, an error predictor trained with a self-calibration objective is developed to estimate loss consistent with the underlying model. The properties of the error predictor are described, and their utility in supporting introspection via feature importances and counterfactual explanations is elucidated. While such an approach can signal data regime changes, it is critical to calibrate models using regimes of inlier (training) and outlier data to prevent under- and over-generalization, i.e., incorrectly identifying inliers as outliers and vice versa. By identifying the space for specifying inliers and outliers, an anomaly detector that can effectively flag data of varying semantic complexities in medical imaging is next developed.

Uncertainty quantification in deep learning models involves identifying sources of failure and characterizing model confidence to enable actionability. A training strategy is developed that allows the accurate estimation of model uncertainties, and its benefits are demonstrated for active learning and generalization gap prediction. This helps identify insufficiently sampled regimes and representation insufficiency in models. In addition, the task of deep inversion under data-scarce scenarios is considered, which in practice requires a prior to control the optimization. By identifying limitations in existing work, data priors powered by generative models and deep model priors are designed for audio restoration. With relevant empirical studies on a variety of benchmarks, the need for such design strategies is demonstrated.
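
A minimal sketch of the failure-detector idea described above: an auxiliary regressor is trained to predict the base model's per-sample loss on held-out data, and samples with large predicted loss are flagged as regimes where the model should not be invoked. The logistic-regression base model, gradient-boosted error predictor, and quantile threshold are simplifying assumptions, not the dissertation's self-calibration objective.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Base task: a classifier whose per-sample loss we want to anticipate.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Per-sample cross-entropy of the base model on held-out calibration data.
p_cal = np.clip(base.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal], 1e-12, 1)
loss_cal = -np.log(p_cal)

# Error predictor (assumption): regress the base model's loss from the inputs.
err_pred = GradientBoostingRegressor(random_state=0).fit(X_cal, loss_cal)

# At deployment, abstain on samples whose predicted loss exceeds a threshold.
X_new, _ = make_classification(n_samples=5, n_features=20, random_state=1)
threshold = np.quantile(loss_cal, 0.95)
print("abstain:", err_pred.predict(X_new) > threshold)
```
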
Contributors: Narayanaswamy, Vivek Sivaraman (Author) / Spanias, Andreas (Thesis advisor) / Thiagarajan, Jayaraman J. (Committee member) / Berisha, Visar (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2023
Item 171997
Description
In recent years, deep learning has gained popularity for its ability to be utilized for several computer vision applications without any a priori knowledge. However, to introduce better inductive bias, incorporating prior knowledge along with learned information is critical. To that end, human intervention in deep learning pipelines, including the choice of algorithm, data, and model, can be considered a prior. Thus, it is extremely important to select effective priors for a given application. This dissertation explores different aspects of a deep learning pipeline and provides insights as to why a particular prior is effective for the corresponding application.

For analyzing the effect of model priors, three applications involving sequential modeling problems are chosen: Audio Source Separation, Clinical Time-series (Electroencephalogram (EEG)/Electrocardiogram (ECG)) based Differential Diagnosis, and Global Horizontal Irradiance Forecasting for Photovoltaic (PV) Applications. For data priors, the application of image classification is chosen, and a new algorithm titled "Invenio," which can effectively use data semantics for both task and distribution shift scenarios, is proposed. Finally, the effectiveness of a data selection prior is shown using the application of object tracking, wherein the aim is to maintain tracking performance while prolonging the battery life of image sensors by optimizing the data selected for reading from the environment. For every research contribution of this dissertation, several empirical studies are conducted on benchmark datasets. The proposed design choices demonstrate significant performance improvements in comparison to existing application-specific state-of-the-art deep learning strategies.
Contributors: Katoch, Sameeksha (Author) / Spanias, Andreas (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Thiagarajan, Jayaraman J. (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2022
Item 154022
Description
There has been a significant amount of work on the characterization of capacity and achievable rate regions, and rate-region outer bounds, for various multi-user channels of interest. Parallel to the developed information-theoretic results, practical codes have also been designed for some multi-user channels, such as multiple access channels, broadcast channels and relay channels; however, interference channels have not received much attention and only a limited amount of work has been conducted on them. With this motivation, this dissertation studies the design of practical and implementable channel codes for multi-user channels, with special emphasis on interference channels; in particular, irregular low-density parity-check (LDPC) codes are exploited for a variety of cases, and trellis-based codes are designed for short block lengths.

Novel code design approaches are first studied for the two-user Gaussian multiple access channel. Exploiting a Gaussian mixture approximation, new methods are proposed wherein the optimized codes are shown to improve upon the available designs and off-the-shelf point-to-point codes applied to the multiple access channel scenario. The code design is then examined for the two-user Gaussian interference channel implementing the Han-Kobayashi encoding and decoding strategy. Compared with point-to-point codes, the newly designed codes consistently offer better performance. Parallel to this work, code design is explored for discrete memoryless interference channels, wherein the channel inputs and outputs are taken from a finite alphabet, and it is demonstrated that the designed codes are superior to single-user codes used with time sharing. Finally, the code design principles are also investigated for the two-user Gaussian interference channel employing trellis-based codes with short block lengths for the case of strong and mixed interference levels.
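
For reference, the rate constraints that such multiple access code designs are measured against follow from the standard two-user Gaussian MAC capacity region; the sketch below simply evaluates those constraints for illustrative power and noise values (it is textbook background, not the proposed code design).

```python
import numpy as np

def gaussian_mac_region(P1, P2, N0):
    """Rate constraints (bits/channel use) for the two-user Gaussian MAC
    y = x1 + x2 + n, with powers P1, P2 and noise variance N0."""
    C = lambda snr: 0.5 * np.log2(1.0 + snr)
    return {
        "R1_max": C(P1 / N0),            # R1 <= C(P1/N0)
        "R2_max": C(P2 / N0),            # R2 <= C(P2/N0)
        "sum_max": C((P1 + P2) / N0),    # R1 + R2 <= C((P1+P2)/N0)
    }

# Illustrative operating point (assumed values): equal powers at 10 dB SNR each.
print(gaussian_mac_region(P1=10.0, P2=10.0, N0=1.0))
```
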
Contributors: Sharifi, Shahrouz (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2015
Item 154246
Description
The power of science lies in its ability to infer and predict the existence of objects from which no direct information can be obtained experimentally or observationally. A well-known example is to ascertain the existence of black holes of various masses in different parts of the universe from indirect evidence, such as X-ray emissions. In the field of complex networks, the problem of detecting hidden nodes can be stated as follows. Consider a network whose topology is completely unknown but whose nodes consist of two types: one accessible and another inaccessible from the outside world. The accessible nodes can be observed or monitored, and it is assumed that time series are available from each node in this group. The inaccessible nodes are shielded from the outside and are essentially "hidden." The question is, based solely on the available time series from the accessible nodes, can the existence and locations of the hidden nodes be inferred? A completely data-driven, compressive-sensing based method is developed to address this issue, utilizing complex weighted networks of nonlinear oscillators, evolutionary-game networks, and geospatial networks.

Both microbes and multicellular organisms actively regulate their cell fate determination to cope with changing environments or to ensure proper development. Here, synthetic biology approaches are used to engineer bistable gene networks to demonstrate that stochastic and permanent cell fate determination can be achieved by initializing gene regulatory networks (GRNs) at the boundary between dynamic attractors. This is experimentally realized by linking a synthetic GRN to a natural output of galactose metabolism regulation in yeast. Combining mathematical modeling and flow cytometry, the engineered systems are shown to be bistable, and inherent gene expression stochasticity is shown not to induce spontaneous state transitioning at steady state. By interfacing rationally designed synthetic GRNs with background gene regulation mechanisms, this work investigates intricate properties of networks that illuminate possible regulatory mechanisms for cell differentiation and development that can be initiated from points of instability.
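
A heavily simplified, hypothetical sketch of the data-driven idea in the first part: reconstruct each accessible node's dynamics from the observed time series via sparse regression and look for nodes whose fit quality is anomalously poor, hinting at an unobserved (hidden) neighbor. The linear toy network and the Lasso regression are assumptions for illustration; the dissertation's compressive-sensing formulation for nonlinear oscillator, evolutionary-game, and geospatial networks is more general.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy linear network x(t+1) = W x(t) + noise; node 0 is "hidden" (never
# observed) and drives nodes 1 and 2 (all couplings are assumed values).
rng = np.random.default_rng(3)
n, T = 8, 800
W = 0.1 * np.where(rng.random((n, n)) < 0.25, rng.normal(size=(n, n)), 0.0)
np.fill_diagonal(W, 0.6)
W[:, 0] = 0.0
W[0, :] = 0.0
W[0, 0] = 0.6                        # hidden node: autonomous AR(1) dynamics
W[1, 0] = W[2, 0] = 0.8              # ...strongly driving nodes 1 and 2
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = W @ X[t] + 0.05 * rng.normal(size=n)

observed = list(range(1, n))         # only nodes 1..n-1 provide time series
Xo = X[:, observed]

# Sparse regression of each accessible node's next value on the accessible
# nodes' current values; nodes coupled to the hidden node tend to fit worse.
for node in observed:
    model = Lasso(alpha=1e-3, max_iter=50000).fit(Xo[:-1], X[1:, node])
    r2 = model.score(Xo[:-1], X[1:, node])
    flag = "  <- driven by the hidden node" if W[node, 0] != 0 else ""
    print(f"node {node}: R^2 = {r2:.3f}{flag}")
```
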
Contributors: Su, Ri-Qi (Author) / Lai, Ying-Cheng (Thesis advisor) / Wang, Xiao (Thesis advisor) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2015
Item 154319
Description
In many applications, measured sensor data is meaningful only when the locations of the sensors are accurately known. Therefore, localization accuracy is crucial. In this dissertation, both location estimation and location detection problems are considered.

In location estimation problems, sensor nodes at known locations, called anchors, transmit signals to sensor nodes at unknown locations, called nodes, and these transmissions are used to estimate the locations of the nodes. Specifically, location estimation in the presence of fading channels using time-of-arrival (TOA) measurements with narrowband communication signals is considered. The Cramer-Rao lower bound (CRLB) on the localization error is derived under different assumptions, along with the corresponding maximum likelihood estimators (MLEs).
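
As a point of reference for the CRLB mentioned above, the sketch below evaluates the standard bound for the simplest case of unbiased range (c·TOA) estimates with i.i.d. Gaussian errors and no fading; the anchor layout and ranging noise level are arbitrary illustrative choices.

```python
import numpy as np

def toa_crlb(node, anchors, sigma_r):
    """CRLB on the position-error covariance for range (c * TOA) measurements
    r_i = ||node - anchor_i|| + w_i, with w_i ~ N(0, sigma_r^2) i.i.d."""
    diffs = node - anchors                               # (n_anchors, 2)
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    fim = units.T @ units / sigma_r**2                   # Fisher information matrix
    return np.linalg.inv(fim)                            # CRLB (covariance bound)

# Illustrative geometry: four corner anchors and a 2 m ranging standard deviation.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
node = np.array([30.0, 60.0])
bound = toa_crlb(node, anchors, sigma_r=2.0)
print("RMSE lower bound (m):", np.sqrt(np.trace(bound)))
```
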

In large wireless sensor networks (WSNs), distributed location estimation algorithms are more efficient than centralized ones. A sequential localization scheme, one such distributed location estimation algorithm, is considered. In addition, different localization methods, such as TOA, received signal strength (RSS), time difference of arrival (TDOA), direction of arrival (DOA), and large aperture array (LAA), are compared under different signal-to-noise ratio (SNR) conditions. Simulation results show that DOA is the preferred scheme in the low-SNR regime, while the LAA localization algorithm provides better performance for network discovery at high SNRs. The CRLB for the localization error using the TOA method is also derived.

A distributed location detection scheme is proposed, which allows each anchor to make a decision as to whether a node is active or not. Once an anchor makes a decision, a bit is transmitted to a fusion center (FC). The fusion center combines all the decisions and uses a design parameter K to make the final decision. Three scenarios are considered in this dissertation: first, location detection at a known location; second, detection of a node in a known region; and third, location detection in the presence of fading. The optimal thresholds are derived, along with the total probabilities of false alarm and detection under the different scenarios.
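
Assuming, for illustration, that the fusion rule is a K-out-of-N counting rule over independent and identical anchor decisions, the system-level false-alarm and detection probabilities follow directly from binomial tail probabilities, as in the sketch below; the local probabilities used here are placeholders, not values derived in the dissertation.

```python
from scipy.stats import binom

def fused_probabilities(n_anchors, K, p_fa_local, p_d_local):
    """K-out-of-N fusion: the fusion center declares the node present when at
    least K of the n_anchors one-bit decisions equal 1."""
    p_fa = binom.sf(K - 1, n_anchors, p_fa_local)   # P(#ones >= K | node absent)
    p_d = binom.sf(K - 1, n_anchors, p_d_local)     # P(#ones >= K | node present)
    return p_fa, p_d

# Sweep the design parameter K for 10 anchors with illustrative local rates.
for K in range(1, 11):
    p_fa, p_d = fused_probabilities(10, K, p_fa_local=0.05, p_d_local=0.8)
    print(f"K = {K:2d}: P_FA = {p_fa:.4f}, P_D = {p_d:.4f}")
```
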
Contributors: Zhang, Xue (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2016
Item 154202
Description
The recent proposal of two-way relaying has attracted much attention due to its promising features for many practical scenarios. In this setup, two users communicate simultaneously in both directions to exchange their messages with the help of a relay node. This doctoral study investigates various aspects of two-way relaying; specifically, the issues of asynchronism, lack of channel knowledge, transmission of correlated sources, and multi-way relaying techniques involving multiple users are explored.

With the motivation of developing enabling techniques for two-way relay (TWR) channels experiencing excessive synchronization errors, two conceptually different schemes are proposed to accommodate any relative misalignment between the signals received at any node. By designing a practical transmission/detection mechanism based on orthogonal frequency division multiplexing (OFDM), the proposed schemes perform significantly better than existing competing solutions. In a related direction, differential modulation is implemented for asynchronous TWR systems that lack channel state information (CSI). The challenge in this problem, compared to the conventional point-to-point counterpart, arises not only from the asynchrony but also from the existence of an interfering signal. Extensive numerical examples, supported by analytical work, are given to demonstrate the advantages of the proposed schemes.

Other important issues considered in this dissertation relate to the extension of the two-way relaying scheme to the multiple-user case, known as multi-way relaying. First, a distributed source coding solution based on Slepian-Wolf coding is proposed to compress correlated messages close to the information-theoretic limits in the context of multi-way relay (MWR) channels. Specifically, the syndrome approach based on low-density parity-check (LDPC) codes is implemented. A number of relaying strategies are considered for this problem, offering a tradeoff between performance and complexity. The proposed solutions show significant improvements over existing ones in terms of the achievable compression rates. On a different front, a novel approach to channel coding is proposed for the MWR channel based on the implementation of nested codes in a distributed manner. This approach ensures that each node decodes the messages of the other users without requiring complex operations at the relay, while providing substantial benefits compared to the traditional routing solution.
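
A toy illustration of the syndrome approach mentioned above: the source word is compressed to its syndrome under a parity-check matrix, and a decoder with correlated side information recovers it by locating the few positions of disagreement. The tiny (7,4) Hamming-style matrix and brute-force search stand in, purely for illustration, for the long LDPC codes and belief-propagation decoding a real design would use.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (columns are 1..7 in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(x):
    """Slepian-Wolf style encoder: send only the 3-bit syndrome of the 7-bit source."""
    return H @ x % 2

def decompress(s, y):
    """Decoder with side information y, assumed to differ from x in <= 1 bit."""
    for e in np.vstack([np.zeros(7, int), np.eye(7, dtype=int)]):  # weight <= 1
        if np.array_equal(H @ ((y + e) % 2) % 2, s):
            return (y + e) % 2
    raise ValueError("correlation assumption violated")

rng = np.random.default_rng(4)
x = rng.integers(0, 2, 7)                 # source word at one user
y = x.copy()
y[rng.integers(7)] ^= 1                   # correlated side information (one flip)
assert np.array_equal(decompress(compress(x), y), x)
print("recovered the 7-bit source from a 3-bit syndrome plus side information")
```
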
Contributors: Salīm, Aḥmad (Author) / Duman, Tolga M. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2015
Item 154232
Description
Access networks provide the backbone of the Internet, connecting end-users to the core network and thus forming the most important segment for connectivity. Access networks span multiple physical-layer media, ranging from fiber cables to DSL links and wireless nodes, creating practically-used hybrid access networks. We explore the hybrid access network at the Medium Access Control (MAC) layer, which receives packets segregated as data and control packets, thus providing the needed decoupling of the data and control planes. We utilize the Software Defined Networking (SDN) principle of centralized processing with segregated data and control planes to further extend the usability of our algorithms. This dissertation introduces novel techniques in dynamic bandwidth allocation, control-message scheduling policies, flow control, and grouping to provide improved performance in hybrid passive optical networks (PONs), such as PON-xDSL and FiWi. Finally, we study the different types of software-defined algorithms in access networks and describe the various open challenges and research directions.
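
As background for the dynamic bandwidth allocation theme above, the sketch below implements the classical limited-service grant-sizing baseline often used in PON DBA studies: each ONU's grant is its reported queue depth capped at a maximum and at the remaining cycle budget. This is a textbook baseline under assumed parameter values, not the algorithms proposed in the dissertation.

```python
def limited_service_grants(requests_bytes, max_grant_bytes, cycle_budget_bytes):
    """Limited-service DBA baseline: grant min(request, cap) per ONU, in order,
    without exceeding the remaining upstream budget for this polling cycle."""
    grants, remaining = [], cycle_budget_bytes
    for req in requests_bytes:
        g = min(req, max_grant_bytes, remaining)
        grants.append(g)
        remaining -= g
    return grants

# Illustrative cycle: four ONUs report their queue depths (bytes).
reports = [12000, 3000, 40000, 800]
print(limited_service_grants(reports, max_grant_bytes=15500, cycle_budget_bytes=38700))
```
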
Contributors: Mercian, Anu (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael P. (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2015