Matching Items (96)
Description
Existing radio access networks (RANs) allow only for very limited sharing of the communication and computation resources among wireless operators and heterogeneous wireless technologies. The introduced LayBack architecture facilitates communication and computation resource sharing among different wireless operators and technologies. LayBack organizes the RAN communication and multi-access edge computing (MEC) resources into layers, including a devices layer, a radio node (enhanced Node B and access point) layer, and a gateway layer. The LayBack optimization study addresses the problem of how a central SDN orchestrator can flexibly share the total backhaul capacity of the various wireless operators among their gateways and radio nodes (e.g., LTE enhanced Node Bs or Wi-Fi access points).

In order to facilitate flexible network service virtualization and migration, network functions (NFs) are increasingly executed by software modules as so-called "softwarized NFs" on General-Purpose Computing (GPC) platforms and infrastructures. GPC platforms are not specifically designed to efficiently execute NFs with their typically intense Input/Output (I/O) demands. Recently, numerous hardware-based accelerations have been developed to augment GPC platforms and infrastructures, e.g., the central processing unit (CPU) and memory, to efficiently execute NFs.

The computing capabilities of client devices are continuously increasing; at the same time, demands for ultra-low latency (ULL) services are increasing. These ULL services can be provided by migrating some micro-service container computations from the cloud and MEC to the client devices.
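
As a rough illustration of the backhaul-sharing problem described in the first paragraph above (not the LayBack algorithm itself), the sketch below shows a central orchestrator splitting a fixed total backhaul capacity among radio nodes of several operators in proportion to their demands; all node names and numbers are hypothetical.

```python
# Illustrative sketch (not the LayBack implementation): a central
# orchestrator shares a fixed total backhaul capacity among radio
# nodes of several operators in proportion to their current demand.
def share_backhaul(total_capacity_mbps, demands_mbps):
    """Weighted proportional share; demands_mbps maps node id -> demand."""
    total_demand = sum(demands_mbps.values())
    if total_demand <= total_capacity_mbps:
        # Enough capacity: every node gets exactly what it asked for.
        return dict(demands_mbps)
    # Otherwise scale every node's allocation by the same factor.
    scale = total_capacity_mbps / total_demand
    return {node: d * scale for node, d in demands_mbps.items()}

# Example: two operators' radio nodes competing for 1000 Mb/s of backhaul.
demands = {"opA_eNB1": 400, "opA_AP1": 300, "opB_eNB1": 600}
print(share_backhaul(1000, demands))
```
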
Contributors: Shantharama, Prateek (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Thyagaturu, Akhilesh (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Operational efficiency of solar energy farms requires detailed analytics and information on each panel regarding voltage, current, temperature, and irradiance. Monitoring utility-scale solar arrays was shown to minimize the cost of maintenance and help optimize the performance of photovoltaic (PV) arrays under various conditions. This dissertation describes a project that focuses on the development of machine learning and neural network algorithms. It also describes an 18 kW solar array testbed for PV monitoring and control. The 18 kW Sensor Signal and Information Processing (SenSIP) PV testbed, which consists of 104 modules fitted with smart monitoring devices (SMDs), is described in detail. Each SMD has an embedded wireless transceiver and relays that enable continuous monitoring, fault detection, and real-time connection topology changes. Data is obtained in real time using the SenSIP PV testbed. Machine learning and neural network algorithms for PV fault classification are studied in depth. More specifically, the development of a series of customized neural networks for the detection and classification of solar array faults, including soiling, shading, degradation, short circuits, and standard test conditions, is considered. Fault detection and classification methods are evaluated using metrics such as accuracy, confusion matrices, and the Risk Priority Number (RPN). The classification performance of customized neural networks with dropout regularizers is examined and assessed in detail. Neural network pruning strategies are developed and evaluated, illustrating the trade-off between fault classification model accuracy and algorithm complexity. This study includes data from the National Renewable Energy Laboratory (NREL) database and also real-time data collected from the SenSIP testbed at MTW under various loading and shading conditions. The overall approach for detection and classification promises to elevate the performance and robustness of PV arrays.
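
A minimal sketch of the kind of classifier described above, assuming PyTorch and a hypothetical four-feature input (voltage, current, temperature, irradiance) with five fault classes; the dropout regularizer and magnitude pruning illustrate the accuracy/complexity trade-off but are not the dissertation's exact networks.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical feature layout: [voltage, current, temperature, irradiance]
# per module; classes: 0=STC, 1=soiling, 2=shading, 3=degradation, 4=short circuit.
class FaultClassifier(nn.Module):
    def __init__(self, n_features=4, n_classes=5, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(32, n_classes),
        )
    def forward(self, x):
        return self.net(x)

model = FaultClassifier()
x = torch.randn(8, 4)            # stand-in for a batch of SMD measurements
logits = model(x)

# Magnitude pruning: zero out 50% of the smallest weights in each linear
# layer, trading some accuracy for lower algorithm complexity.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")  # make the pruning permanent
```
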
Contributors: Rao, Sunil (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Srinivasan, Devarajan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Dealing with relational data structures is central to a wide range of applications including social networks, epidemic modeling, molecular chemistry, medicine, energy distribution, and transportation. Machine learning models that can exploit the inherent structural/relational bias in graph-structured data have gained prominence in recent times. A recurring idea that appears in all approaches is to encode the nodes in the graph (or the entire graph) as low-dimensional vectors, also known as embeddings, prior to carrying out downstream task-specific learning. It is crucial to eliminate hand-crafted features and instead directly incorporate the structural inductive bias into the deep learning architectures. In this dissertation, deep learning models that directly operate on graph-structured data are proposed for effective representation learning. A literature review of existing graph representation learning approaches is provided at the beginning of the dissertation. The primary focus of the dissertation is on building novel graph neural network architectures that are robust against adversarial attacks. The proposed graph neural network models are extended to multiplex (heterogeneous) graphs. Finally, a relational neural network model is proposed to operate on a human structural connectome. For every research contribution of this dissertation, several empirical studies are conducted on benchmark datasets. The proposed graph neural network models, approaches, and architectures demonstrate significant performance improvements in comparison to the existing state-of-the-art graph embedding strategies.
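
For readers unfamiliar with how node embeddings are computed, a single graph-convolution layer (one common building block, not necessarily the architectures proposed in this dissertation) can be sketched as follows.

```python
import torch

def gcn_layer(adj, features, weight):
    """One graph-convolution step: symmetric normalization + linear map.
    adj: (N, N) adjacency (0/1), features: (N, F), weight: (F, F_out)."""
    n = adj.shape[0]
    a_hat = adj + torch.eye(n)                  # add self-loops
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt  # D^-1/2 (A+I) D^-1/2
    return torch.relu(norm_adj @ features @ weight)

# Toy 4-node graph with 3-dimensional node features.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
feats = torch.randn(4, 3)
w = torch.randn(3, 8)
embeddings = gcn_layer(adj, feats, w)  # low-dimensional node embeddings
```
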
Contributors: Shanthamallu, Uday Shankar (Author) / Spanias, Andreas (Thesis advisor) / Thiagarajan, Jayaraman J (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The presence of strategic agents can pose unique challenges to data collection and distributed learning. This dissertation first explores the social network dimension of data collection markets and then focuses on how strategic agents can be efficiently and effectively incentivized to cooperate in distributed machine learning frameworks. The first problem explores the impact of social learning in collecting and trading unverifiable information, where a data collector purchases data from users through a payment mechanism. Each user starts with a personal signal which represents the knowledge about the underlying state the data collector desires to learn. Through social interactions, each user also acquires additional information from his neighbors in the social network. It is revealed that both the data collector and the users can benefit from social learning, which drives down the privacy costs and helps to improve the state estimation for a given total payment budget. In the second half, a federated learning scheme to train a global learning model with strategic agents, who are not bound to contribute their resources unconditionally, is considered. Since the agents are not obliged to provide their true stochastic gradient updates and the server is not capable of directly validating the authenticity of the reported updates, the learning process may reach a noncooperative equilibrium. First, the actions of the agents are assumed to be binary: cooperative or defective. If the cooperative action is taken, the agent sends a privacy-preserved version of its stochastic gradient signal. If the defective action is taken, the agent sends an arbitrary uninformative noise signal. Furthermore, this setup is extended to scenarios with more general action spaces, where the quality of the stochastic gradient updates has a range of discrete levels. The proposed methodology evaluates each agent's stochastic gradient according to a reference gradient estimate which is constructed from the gradients provided by the other agents, and rewards the agent based on that evaluation.
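
A toy sketch of the evaluation idea in the last sentence, with a leave-one-out mean as the reference gradient and cosine similarity standing in for the dissertation's actual scoring rule (both are assumptions made for illustration).

```python
import numpy as np

def score_agents(gradients):
    """For each agent, compare its reported gradient to a reference built
    from the *other* agents' reports (leave-one-out mean) and use the
    cosine similarity as the basis for the reward."""
    g = np.asarray(gradients)                     # shape (n_agents, dim)
    n = g.shape[0]
    scores = []
    for i in range(n):
        ref = (g.sum(axis=0) - g[i]) / (n - 1)    # reference gradient estimate
        cos = g[i] @ ref / (np.linalg.norm(g[i]) * np.linalg.norm(ref) + 1e-12)
        scores.append(max(cos, 0.0))              # uninformative noise scores ~0
    return scores

# Toy example: 4 cooperative agents and 1 defector sending pure noise.
true_grad = np.ones(10)
reports = [true_grad + 0.1 * np.random.randn(10) for _ in range(4)]
reports.append(np.random.randn(10))               # defective action: noise signal
print(score_agents(reports))
```
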
Contributors: Akbay, Abdullah Basar (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Committee member) / Kosut, Oliver (Committee member) / Ewaisha, Ahmed (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Quantum computing has the potential to revolutionize the signal-processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing to signal analysis-synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks and presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis-synthesis using quantum algorithms. The two approaches are explained as follows. The quantum Fourier transform (QFT) demonstrates the potential for improved speed in quantum computing compared to classical methods. This thesis focuses on quantum encoding of signals and on designing quantum algorithms for signal analysis-synthesis and signal compression using QFTs. Comparative studies are conducted to evaluate quantum computations for Fourier transform applications, considering signal-to-noise ratio (SNR) results. The effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is also developed in the J-DSP simulation environment, providing hands-on laboratory experiences for signal-processing students. User-friendly simulation programs on QFT-based signal analysis-synthesis using peak picking and perceptual selection based on psychoacoustics are developed in J-DSP. Further, this research is extended to analyze the autocorrelation of the signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs (IQFTs) are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified and used to compute the solutions of the linear equations using quantum computing. The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis-synthesis, and comparisons are performed between QLP and classical linear prediction (CLP) results. The results demonstrate the following: effective quantum circuits for accurate QFT-based speech analysis-synthesis, evaluation of performance with quantum noise, design of accurate quantum autocorrelation, and development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to the research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis-synthesis.
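
For concreteness, the QFT acting on an amplitude-encoded signal can be simulated classically as a unitary matrix; the sketch below verifies the result against NumPy's FFT. This is a classical simulation of the transform, not the quantum-circuit designs developed in the thesis.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary of the quantum Fourier transform on n qubits:
    F[j, k] = omega^(j*k) / sqrt(N), with omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# Amplitude-encode a short real signal into a 3-qubit state ...
signal = np.array([1., 2, 3, 4, 3, 2, 1, 0])
state = signal / np.linalg.norm(signal)
# ... and apply the QFT by matrix multiplication (classical simulation).
qft_state = qft_matrix(3) @ state

# Sanity check: up to normalization, the QFT matches NumPy's inverse FFT.
assert np.allclose(qft_state, np.fft.ifft(state) * np.sqrt(len(state)))
```
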
Contributors: Sharma, Aradhita (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The past decade witnessed the success of deep learning models in various applications of computer vision and natural language processing. This success can be predominantly attributed to (i) the availability of large amounts of training data; (ii) access to domain-aware knowledge; (iii) the i.i.d. assumption between the train and target distributions; and (iv) the belief in existing metrics as reliable indicators of performance. When any of these assumptions is violated, the models exhibit brittleness, producing adversely varied behavior. This dissertation focuses on methods for accurate model design and characterization that enhance process reliability when certain assumptions are not met. With the need to safely adopt artificial intelligence tools in practice, it is vital to build reliable failure detectors that indicate regimes where the model must not be invoked. To that end, an error predictor trained with a self-calibration objective is developed to estimate loss consistent with the underlying model. The properties of the error predictor are described, and their utility in supporting introspection via feature importances and counterfactual explanations is elucidated. While such an approach can signal data regime changes, it is critical to calibrate models using regimes of inlier (training) and outlier data to prevent under- and over-generalization in models, i.e., incorrectly identifying inliers as outliers and vice versa. By identifying the space for specifying inliers and outliers, an anomaly detector that can effectively flag data of varying semantic complexities in medical imaging is next developed. Uncertainty quantification in deep learning models involves identifying sources of failure and characterizing model confidence to enable actionability. A training strategy is developed that allows the accurate estimation of model uncertainties, and its benefits are demonstrated for active learning and generalization gap prediction. This helps identify insufficiently sampled regimes and representation insufficiency in models. In addition, the task of deep inversion under data-scarce scenarios is considered, which in practice requires a prior to control the optimization. By identifying limitations in existing work, data priors powered by generative models and deep model priors are designed for audio restoration. With relevant empirical studies on a variety of benchmarks, the need for such design strategies is demonstrated.
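
A heavily simplified sketch of the failure-detector idea: an auxiliary head regresses the base model's per-sample loss and flags inputs whose predicted loss is high. Plain loss regression here stands in for the dissertation's self-calibration objective (an assumption), and all dimensions and thresholds are arbitrary.

```python
import torch
import torch.nn as nn

# Stand-ins: `base_model` plays the role of a pre-trained classifier;
# `error_head` learns to predict its per-sample loss from the inputs.
base_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
error_head = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
ce = nn.CrossEntropyLoss(reduction="none")
opt = torch.optim.Adam(error_head.parameters(), lr=1e-3)

x = torch.randn(256, 16)
y = torch.randint(0, 3, (256,))
with torch.no_grad():
    per_sample_loss = ce(base_model(x), y)      # regression targets

for _ in range(200):                            # fit the error predictor
    opt.zero_grad()
    pred = error_head(x).squeeze(-1)
    loss = ((pred - per_sample_loss) ** 2).mean()
    loss.backward()
    opt.step()

# At test time, refuse to invoke the model when the predicted loss is high.
with torch.no_grad():
    reject = error_head(x).squeeze(-1) > per_sample_loss.quantile(0.9)
```
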
Contributors: Narayanaswamy, Vivek Sivaraman (Author) / Spanias, Andreas (Thesis advisor) / J. Thiagarajan, Jayaraman (Committee member) / Berisha, Visar (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
A distributed framework is proposed for addressing resource sharing problems in communications, micro-economics, and various other network systems. The approach uses a hierarchical multi-layer decomposition for network utility maximization. This methodology uses central management and distributed computations to allocate resources, and in dynamic environments, it aims to efficiently respond to network changes. The main contributions include a comprehensive description of an exemplary unifying optimization framework to share resources across different operators and platforms, and a detailed analysis of the generalized methods under the assumption that the network changes are on the same time-scale as the convergence time of the algorithms employed for local computations. Assuming strong concavity and smoothness of the objective functions, and under some stability conditions for each layer, convergence rates and optimality bounds are presented. The effectiveness of the framework is demonstrated through numerical examples. Furthermore, a novel Federated Edge Network Utility Maximization (FEdg-NUM) architecture is proposed for solving large-scale distributed network utility maximization problems in a fully decentralized way. In FEdg-NUM, clients with private utilities communicate with a peer-to-peer network of edge servers. Convergence properties are examined both through analysis and numerical simulations, and potential applications are highlighted. Finally, problems in a complex stochastic dynamic environment, specifically motivated by resource sharing during disasters occurring in multiple areas, are studied. In a hierarchical management scenario, a method of applying a primal-dual algorithm at the higher layer along with deep reinforcement learning algorithms in the localities is presented. Analytical details as well as case studies, such as pandemic and wildfire response, are provided.
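
A minimal dual-decomposition sketch of network utility maximization, the classic building block underlying such frameworks (illustrative weights and capacity, not the FEdg-NUM system): a central manager iterates a price (dual variable), and each user solves its own one-dimensional problem, mirroring the split between central management and distributed local computations.

```python
import numpy as np

# maximize sum_i w_i * log(x_i)  subject to  sum_i x_i <= C.
w = np.array([1.0, 2.0, 3.0])   # user utility weights (illustrative)
C = 10.0                        # shared resource capacity
price, step = 1.0, 0.05

for _ in range(500):
    x = w / price                                     # users' local best responses
    price = max(price + step * (x.sum() - C), 1e-6)   # central price (dual) update

print(x, x.sum())   # converges to the proportional-fair allocation w * C / sum(w)
```
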
Contributors: Karakoc, Nurullah (Author) / Scaglione, Anna (Thesis advisor) / Reisslein, Martin (Thesis advisor) / Nedich, Angelia (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
With the rapid development of reflect-arrays and software-defined meta-surfaces, reconfigurable intelligent surfaces (RISs) have been envisioned as promising technologies for next-generation wireless communication and sensing systems. These surfaces comprise massive numbers of nearly-passive elements that interact with the incident signals in a smart way to improve the performance of such systems. In RIS-aided communication systems, designing this smart interaction, however, requires acquiring large-dimensional channel knowledge between the RIS and the transmitter/receiver. Acquiring this knowledge is one of the most crucial challenges in RISs, as it is associated with large computational and hardware complexity. For RIS-aided sensing systems, it is interesting to first investigate scene depth perception based on millimeter wave (mmWave) multiple-input multiple-output (MIMO) sensing. While mmWave MIMO sensing systems address some critical limitations suffered by optical sensors, realizing these systems poses several key challenges: communication-constrained sensing framework design, beam codebook design, and scene depth estimation. Given the high spatial resolution provided by RISs, RIS-aided mmWave sensing systems have the potential to improve scene depth perception, while also posing some key challenges. In this dissertation, for RIS-aided communication systems, efficient RIS interaction design solutions are proposed by leveraging tools from compressive sensing and deep learning. The achievable rates of these solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead. For RIS-aided sensing systems, a mmWave MIMO based sensing framework is first developed for building accurate depth maps under the constraints imposed by the communication transceivers. Then, a scene depth estimation framework based on RIS-aided sensing is developed for building high-resolution accurate depth maps. Numerical simulations illustrate the promising performance of the proposed solutions, highlighting their potential for next-generation communication and sensing systems.
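
As one example of the compressive-sensing tools mentioned above, orthogonal matching pursuit (OMP) can recover a sparse channel from few measurements. The dimensions and sparsity below are illustrative, and this is not necessarily the dissertation's estimator.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = A x + n."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        idx = np.argmax(np.abs(A.conj().T @ residual))
        support.append(idx)
        # Least-squares fit on the chosen support, then update the residual.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x_hat[support] = sol
    return x_hat

# Toy compressive channel estimation: 3 dominant paths, 20 measurements
# of a length-64 sparse channel (dimensions are illustrative only).
rng = np.random.default_rng(0)
A = (rng.standard_normal((20, 64)) + 1j * rng.standard_normal((20, 64))) / np.sqrt(40)
x = np.zeros(64, dtype=complex); x[[5, 23, 50]] = [1, 0.5j, -0.8]
y = A @ x + 0.01 * rng.standard_normal(20)
print(np.flatnonzero(np.round(np.abs(omp(A, y, 3)), 2)))  # recovers [5 23 50]
```
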
Contributors: Taha, Abdelrahman (Author) / Alkhateeb, Ahmed (Thesis advisor) / Bliss, Daniel (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Michelusi, Nicolò (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The primary objective of this thesis is to identify locations or regions where COVID-19 transmission is more prevalent, termed "hotspots," assess the likelihood of contracting the virus after visiting crowded areas or potential hotspots, and make predictions on confirmed COVID-19 cases and recoveries. A consensus algorithm is used to identify such hotspots; the SEIR epidemiological model tracks COVID-19 cases, allowing for a better understanding of the disease dynamics and enabling informed decision-making in public health strategies. Consensus-based distributed methodologies have been developed to estimate the magnitude, density, and locations of COVID-19 hotspots to provide well-informed alerts based on continuous data risk assessments. Assuming each agent owns a mobile device, transmission hotspots are identified using information from user devices with Bluetooth and WiFi. In a consensus-based distributed clustering algorithm, users are divided into smaller groups, and then the number of users is estimated in each group. This process allows for the determination of the population of an outdoor site and the distances between individuals. The proposed algorithm demonstrates versatility by being applicable not only in outdoor environments but also in indoor settings. To adapt to indoor environments, considerations are made for signal attenuation caused by walls and other barriers, and a wall detection algorithm is employed for this purpose. The clustering mechanism is designed to dynamically choose the appropriate clustering technique based on data-dependent patterns, ensuring that every node undergoes proper clustering. After the networks have been established and clustered, the output of the consensus algorithm is fed as one of many inputs into the SEIR model. SEIR, representing Susceptible, Exposed, Infectious, and Removed, forms the basis of a model designed to assess the probability of infection at a Point of Interest (POI). The SEIR model utilizes calculated parameters such as β (contact), σ (latency), γ (recovery), and ω (loss of immunity) along with current COVID-19 case data to precisely predict the infection spread in a specific area. The SEIR model is implemented with diverse methodologies for transitioning populations between compartments. Hence, the model identifies optimal parameter values under different conditions and scenarios and forecasts the number of infected and recovered cases for the upcoming days.
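
A minimal discrete-time SEIR simulation using the four parameters named above; the parameter values and population are illustrative, not the fitted ones from the thesis.

```python
import numpy as np

# SEIR compartments with beta (contact), sigma (latency), gamma (recovery),
# and omega (loss of immunity), stepped one day at a time.
def seir(S, E, I, R, beta, sigma, gamma, omega, days):
    N = S + E + I + R
    history = []
    for _ in range(days):
        new_exposed     = beta * S * I / N   # S -> E, driven by contacts
        new_infectious  = sigma * E          # E -> I after the latent period
        new_removed     = gamma * I          # I -> R via recovery/removal
        new_susceptible = omega * R          # R -> S as immunity wanes
        S += new_susceptible - new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_removed
        R += new_removed - new_susceptible
        history.append((S, E, I, R))
    return np.array(history)

traj = seir(S=9990, E=0, I=10, R=0,
            beta=0.4, sigma=1/5, gamma=1/7, omega=1/180, days=120)
print("peak infectious:", traj[:, 2].max())
```
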
Contributors: Patel, Bhavikkumar (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Banavar, Mahesh (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Both two-way relays (TWRs) and full-duplex (FD) radios are spectrally efficient, and their integration shows great potential to further improve the spectral efficiency, which offers a solution for fifth-generation wireless systems. High-quality channel state information (CSI) is a key component for the implementation and performance of the FD TWR system, making channel estimation in FD TWRs crucial.

The impact of channel estimation on spectral efficiency in half-duplex multiple-input multiple-output (MIMO) TWR systems is investigated. The trade-off between training and data energy is analyzed. In the case where the two sources are symmetric in power and number of antennas, a closed-form expression for the optimal ratio of data energy to total energy is derived. It can be shown that the achievable rate is a monotonically increasing function of the data length. The asymmetric case is discussed as well.
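
A generic numeric stand-in for this kind of trade-off (a single-antenna model with LMMSE channel estimation and the standard rate lower bound that treats estimation error as extra noise, not the dissertation's closed form):

```python
import numpy as np

# A total energy budget E is split into training energy (1-a)E and data
# energy aE. LMMSE estimation of h ~ CN(0,1) from the training leaves
# error variance 1/(1 + E_t); the rate bound treats that error as noise.
E = 20.0                                    # total energy budget (linear scale)
alphas = np.linspace(0.01, 0.99, 99)        # candidate data-energy fractions
E_t, E_d = (1 - alphas) * E, alphas * E
est_var = E_t / (1 + E_t)                   # variance of the LMMSE estimate
err_var = 1 / (1 + E_t)                     # estimation-error variance
rate = np.log2(1 + E_d * est_var / (1 + E_d * err_var))

best = alphas[np.argmax(rate)]
print(f"optimal data-energy fraction ~ {best:.2f}, rate ~ {rate.max():.2f} b/s/Hz")
```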

Efficient and accurate training schemes for FD TWRs are essential for profiting from the inherently spectrally efficient structures of both FD radios and TWRs. A novel one-block training scheme with a maximum likelihood (ML) estimator is proposed to estimate the channels between the nodes and the residual self-interference (RSI) channel simultaneously. Baseline training schemes are also considered for comparison with the one-block scheme. The Cramér-Rao bounds (CRBs) of the training schemes are derived and analyzed by using the asymptotic properties of Toeplitz matrices. The benefit of estimating the RSI channel is shown analytically in terms of Fisher information.
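
For the standard linear Gaussian training model y = Sh + n with n ~ CN(0, σ²I), the CRB on the channel estimate is σ²(SᴴS)⁻¹. The sketch below evaluates it for a hypothetical Toeplitz training matrix; the sequence length, tap count, and noise level are illustrative, not the schemes analyzed in the dissertation.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
seq = np.exp(2j * np.pi * rng.random(16))          # unit-modulus training symbols
L = 4                                              # number of channel taps
# Toeplitz convolution matrix S, so y = S @ h + n for an L-tap channel h.
S = toeplitz(seq, np.r_[seq[0], np.zeros(L - 1)])  # shape (16, 4)
sigma2 = 0.1                                       # noise variance

crb = sigma2 * np.linalg.inv(S.conj().T @ S)       # CRB = sigma^2 (S^H S)^-1
print("total CRB (trace):", np.trace(crb).real)
```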

To obtain fundamental and analytic results on how the RSI affects the spectral efficiency, one-way FD relay systems are studied. An optimal training design and ML channel estimation are proposed to estimate the RSI channel. The CRBs are derived and analyzed in closed form so that the optimal training sequence can be found by minimizing the CRB. Extensions of the training scheme to frequency-selective channels and multiple relays are also presented.

Simultaneous sensing and transmission in an FD cognitive radio system with MIMO is considered. The trade-off between the transmission rate and the detection accuracy is characterized by the sum-rate of the primary and secondary users. Different beamforming and combining schemes are proposed and compared.
Contributors: Li, Xiaofeng (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2018