Description
Diamond as a wide-bandgap (WBG) semiconductor material has distinct advantages for power electronics applications over Si and other WBG materials due to its high critical electric field (> 10 MV/cm), high electron and hole mobilities (μn = 4500 cm2/V-s, μp = 3800 cm2/V-s), high thermal conductivity (~22 W/cm-K) and large bandgap (5.47 eV). Owing to its remarkable properties, the application space of WBG materials has widened into areas requiring very high current, operating voltage and temperature. Remarkable progress has been made in demonstrating high-breakdown-voltage (> 10 kV), ultra-high-current-density (> 100 kA/cm2) and ultra-high-temperature (~1000 °C) diamond devices, giving further evidence of diamond's huge potential. However, despite this success, fabricated diamond devices have not yet delivered diamond's true potential. Some of the main reasons are high dopant activation energies, substantial bulk defect and trap densities, high contact resistance, and high leakage currents. A lack of complete understanding of diamond-specific device physics also impedes progress toward correct design approaches. The three main research focuses of this work are high power, high frequency and high temperature. Through the design, fabrication, testing, analysis and modeling of diamond p-i-n and Schottky diodes, milestones in diamond research are achieved and important theoretical understanding is gained. In particular, a record-high current density in diamond diodes of ~116 kA/cm2 is demonstrated, RF characterization of diamond diodes is performed from 0.1 GHz to 25 GHz, and diamond diodes are successfully tested in extreme environments of 500 °C and ~93 bar of CO2 pressure. Theoretical models are constructed analytically and in Silvaco ATLAS, including incomplete ionization and hopping mobility, to explain the space-charge-limited current phenomenon, the effects of traps, and Mott-Gurney-dominated diode on-resistance (RON). A new interpretation of the Baliga figure of merit for WBG materials is also formulated, and a new cubic relationship between RON and breakdown voltage is established. Through Silvaco ATLAS modeling, predictions on the power limitation of diamond diodes in receiver-protector circuits are made and a range of self-heating effects is established. Poole-Frenkel emission and hopping conduction models are also utilized to analyze the high-temperature (500 °C) leakage behavior of diamond diodes. Finally, diamond JFET simulations are performed and designs are proposed for high-temperature, extreme-environment applications.
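For context on the figures of merit discussed above, the conventional forms of the unipolar Baliga figure of merit and the Mott-Gurney (space-charge-limited current) law are sketched below; these are the textbook expressions, not the new interpretation or cubic relationship derived in the dissertation.

```latex
% Conventional Baliga figure of merit: specific on-resistance versus
% breakdown voltage (textbook form).
\[
  R_\mathrm{on,sp} \;=\; \frac{4\,BV^{2}}{\varepsilon_s\,\mu\,E_c^{3}},
  \qquad
  \mathrm{BFOM} \;=\; \varepsilon_s\,\mu\,E_c^{3}.
\]
% Mott--Gurney law for trap-free space-charge-limited current across a
% drift region of thickness $L$:
\[
  J_\mathrm{MG} \;=\; \frac{9}{8}\,\varepsilon_s\,\mu\,\frac{V^{2}}{L^{3}}.
\]
```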
ContributorsSurdi, Harshad (Author) / Goodnick, Stephen M (Thesis advisor) / Nemanich, Robert J (Committee member) / Thornton, Trevor J (Committee member) / Lyons, James R (Committee member) / Arizona State University (Publisher)
Created2022
Description
Operational efficiency of solar energy farms requires detailed analytics and information on each panel regarding voltage, current, temperature, and irradiance. Monitoring utility-scale solar arrays has been shown to minimize the cost of maintenance and help optimize the performance of photovoltaic (PV) arrays under various conditions. This dissertation describes a project that focuses on the development of machine learning and neural network algorithms. It also describes an 18 kW solar array testbed for the purpose of PV monitoring and control. The 18 kW Sensor Signal and Information Processing (SenSIP) PV testbed, which consists of 104 modules fitted with smart monitoring devices (SMDs), is described in detail. Each SMD has an embedded wireless transceiver and relays that enable continuous monitoring, fault detection, and real-time connection topology changes. Data is obtained in real time using the SenSIP PV testbed. Machine learning and neural network algorithms for PV fault classification are studied in depth. More specifically, the development of a series of customized neural networks for the detection and classification of solar array faults, including soiling, shading, degradation, short circuits and standard test conditions, is considered; a minimal sketch of such a classifier appears below. The evaluation of fault detection and classification methods using metrics such as accuracy, confusion matrices, and the Risk Priority Number (RPN) is performed. The classification performance of customized neural networks with dropout regularizers is examined and assessed in detail. Neural network pruning strategies are developed and evaluated, illustrating the trade-off between fault classification model accuracy and algorithm complexity. This study includes data from the National Renewable Energy Laboratory (NREL) database as well as real-time data collected from the SenSIP testbed at MTW under various loading and shading conditions. The overall approach for detection and classification promises to elevate the performance and robustness of PV arrays.
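A minimal sketch of the kind of dropout-regularized classifier described above, assuming hypothetical per-module inputs (voltage, current, temperature, irradiance) and the five operating conditions named in the abstract; layer sizes and the training loop are illustrative, not those used in the dissertation.

```python
import torch
import torch.nn as nn

# Hypothetical class ordering: standard test conditions, soiling, shading,
# degradation, short circuit.
NUM_CLASSES = 5
NUM_FEATURES = 4  # e.g., module voltage, current, temperature, irradiance

class PVFaultClassifier(nn.Module):
    """Small fully connected network with dropout regularization."""
    def __init__(self, p_drop: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_FEATURES, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(32, NUM_CLASSES),  # raw logits; paired with CrossEntropyLoss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    model = PVFaultClassifier()
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Synthetic stand-in data; real experiments would use NREL / SenSIP data.
    x = torch.randn(128, NUM_FEATURES)
    y = torch.randint(0, NUM_CLASSES, (128,))

    for _ in range(5):  # tiny illustrative training loop
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"final training loss: {loss.item():.3f}")
```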
ContributorsRao, Sunil (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Srinivasan, Devarajan (Committee member) / Arizona State University (Publisher)
Created2021
Description
Dealing with relational data structures is central to a wide range of applications including social networks, epidemic modeling, molecular chemistry, medicine, energy distribution, and transportation. Machine learning models that can exploit the inherent structural/relational bias in graph-structured data have gained prominence in recent times. A recurring idea that appears in all approaches is to encode the nodes in the graph (or the entire graph) as low-dimensional vectors, also known as embeddings, prior to carrying out downstream task-specific learning. It is crucial to eliminate hand-crafted features and instead directly incorporate the structural inductive bias into the deep learning architectures. In this dissertation, deep learning models that directly operate on graph-structured data are proposed for effective representation learning. A literature review on existing graph representation learning is provided at the beginning of the dissertation. The primary focus of the dissertation is on building novel graph neural network architectures that are robust against adversarial attacks. The proposed graph neural network models are extended to multiplex (heterogeneous) graphs. Finally, a relational neural network model is proposed to operate on a human structural connectome. For every research contribution of this dissertation, several empirical studies are conducted on benchmark datasets. The proposed graph neural network models, approaches, and architectures demonstrate significant performance improvements in comparison to existing state-of-the-art graph embedding strategies.
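A minimal sketch of the message-passing idea underlying such graph neural networks: a single graph convolution layer that propagates node features over a symmetrically normalized adjacency matrix (the standard GCN formulation, written in plain PyTorch; it is not the robust or multiplex architectures developed in the dissertation).

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))      # D^{-1/2}
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
        return torch.relu(norm_adj @ self.linear(feats))

if __name__ == "__main__":
    # Toy 4-node path graph 0-1-2-3 with 8-dimensional node features.
    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    feats = torch.randn(4, 8)
    layer = GCNLayer(8, 16)
    print(layer(adj, feats).shape)  # -> torch.Size([4, 16])
```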
ContributorsShanthamallu, Uday Shankar (Author) / Spanias, Andreas (Thesis advisor) / Thiagarajan, Jayaraman J (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created2021
Description
Speech analysis for clinical applications has emerged as a burgeoning field, providing valuable insights into an individual's physical and physiological state. Researchers have explored speech features for clinical applications such as diagnosing, predicting, and monitoring various pathologies. Before presenting the new deep learning frameworks, this thesis introduces a study on conventional acoustic feature changes in subjects with post-traumatic headache (PTH) attributed to mild traumatic brain injury (mTBI). This work demonstrates the effectiveness of using speech signals to assess the pathological status of individuals, while highlighting some of the limitations of conventional acoustic and linguistic features, such as low repeatability and generalizability. Two critical characteristics of speech features are (1) good robustness, as speech features need to generalize across different corpora, and (2) high repeatability, as speech features need to be invariant to all confounding factors except the pathological state of targets. This thesis presents two research thrusts in the context of speech signals in clinical applications that focus on improving the robustness and repeatability of speech features, respectively. The first thrust introduces a deep learning framework to generate acoustic feature embeddings that are sensitive to vocal quality and robust across different corpora. A contrastive loss combined with a classification loss is used to train the model jointly, and data-warping techniques are employed to improve the robustness of the embeddings. Empirical results demonstrate that the proposed method achieves high in-corpus and cross-corpus classification accuracy and generates embeddings that are sensitive to voice quality and robust across different corpora. The second thrust introduces the intra-class correlation coefficient (ICC) as a measure of the repeatability of embeddings. A novel regularizer, the ICC regularizer, is proposed to regularize deep neural networks to produce embeddings with higher repeatability. This ICC regularizer is implemented and applied to three speech applications: a clinical application, speaker verification, and voice style conversion. The experimental results reveal that the ICC regularizer improves the repeatability of learned embeddings compared to the contrastive loss, leading to enhanced performance in downstream tasks.
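For reference, a minimal NumPy sketch of a one-way random-effects intra-class correlation estimate, ICC(1,1), computed from repeated measurements per speaker. The dissertation's ICC regularizer is a differentiable training-time criterion; the specific ICC form shown here is an assumption for illustration only.

```python
import numpy as np

def icc_1_1(x: np.ndarray) -> float:
    """One-way random-effects ICC(1,1).

    x has shape (n_targets, k_measurements): each row holds k repeated
    measurements (e.g., one embedding dimension from k recordings) of the
    same target (e.g., one speaker).
    """
    n, k = x.shape
    grand_mean = x.mean()
    row_means = x.mean(axis=1)
    # Between-target and within-target mean squares (one-way ANOVA).
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speaker_effect = rng.normal(size=(20, 1))                        # 20 speakers
    sessions = speaker_effect + 0.3 * rng.normal(size=(20, 5))       # 5 sessions each
    print(f"ICC(1,1) = {icc_1_1(sessions):.3f}")  # close to 1 => highly repeatable
```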
ContributorsZhang, Jianwei (Author) / Jayasuriya, Suren (Thesis advisor) / Berisha, Visar (Thesis advisor) / Liss, Julie (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2023
Description
Machine learning has been increasingly integrated into several new areas, namely those related to vision processing and language models. Implementing these processes in new products has demanded increasingly expensive memory usage and computational requirements. Microcontrollers can lower this increasing cost. However, implementing such a system on a microcontroller is difficult, and the design must be pared down appropriately in order to find the right balance between optimization of the system and allocation of the resources present in the system. A proof of concept that these algorithms can be implemented on such a system is attempted in order to identify the points of contention in constructing such a system on limited hardware, as well as the steps taken to enable the use of machine learning on a limited system such as the general-purpose MSP430 from Texas Instruments.
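One common step when fitting a trained network onto a memory-constrained microcontroller is quantizing floating-point weights to 8-bit integers. The sketch below illustrates that idea in NumPy with made-up weights; it is an illustration of the general technique, not code drawn from the thesis.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~ scale * q, q in [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.5, size=(32, 16)).astype(np.float32)  # toy layer weights

    q, scale = quantize_int8(w)
    w_restored = q.astype(np.float32) * scale

    # 4x smaller storage (int8 vs float32) at the cost of a small rounding error.
    print(f"storage: {w.nbytes} B -> {q.nbytes} B")
    print(f"max abs error: {np.max(np.abs(w - w_restored)):.4f}")
```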
ContributorsMalcolm, Ian (Author) / Allee, David (Thesis director) / Spanias, Andreas (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created2024-05
Description
In the rapidly evolving field of computer vision, propelled by advancements in deep learning, the integration of hardware-software co-design has become crucial to overcome the limitations of traditional imaging systems. This dissertation explores the integration of hardware-software co-design in computational imaging, particularly in light transport acquisition and Non-Line-of-Sight (NLOS) imaging. By leveraging projector-camera systems and computational techniques, this thesis addresses critical challenges in imaging complex environments, such as adverse weather conditions, low-light scenarios, and the imaging of reflective or transparent objects. The first contribution of this thesis is the theory, design, and implementation of a slope disparity gating system, a vertically aligned configuration of a synchronized raster-scanning projector and rolling-shutter camera that facilitates selective imaging through disparity-based triangulation. This system introduces a novel, hardware-oriented approach to selective imaging, circumventing the limitations of post-capture processing. The second contribution of this thesis is the realization of two innovative approaches for spotlight optimization to improve localization and tracking for NLOS imaging. The first approach utilizes radiosity-based optimization to improve 3D localization and object identification in small-scale laboratory settings. The second approach introduces a learning-based illumination network along with a differentiable renderer and an NLOS estimation network to optimize human 2D localization and activity recognition. This approach is validated on a large, room-scale scene with complex line-of-sight geometries and occluders. The third contribution of this thesis is an attention-based neural network for passive NLOS settings where there is no controllable illumination. The thesis demonstrates real-time, dynamic NLOS human tracking where the camera is moving on a mobile robotic platform. In addition, this thesis contains an appendix featuring temporally consistent relighting for portrait videos, with applications in computer graphics and vision.
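The disparity-based selectivity mentioned above rests on ordinary projector-camera triangulation. The short sketch below shows, with illustrative numbers (focal length, baseline, and row offsets are assumptions, not values from the dissertation), how a fixed row offset between the scanning projector and the rolling shutter maps to a selected depth.

```python
# Illustrative projector-camera geometry (assumed values, not the actual rig).
FOCAL_PX = 1400.0    # focal length in pixels
BASELINE_M = 0.20    # vertical projector-camera baseline in meters

def depth_for_disparity(disparity_px: float) -> float:
    """Standard triangulation: depth = f * b / disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

if __name__ == "__main__":
    # Synchronizing the projector scan line and the camera's rolling shutter at
    # a fixed row offset exposes only points near the corresponding depth;
    # sweeping the offset sweeps the selected depth band.
    for offset in (40.0, 80.0, 160.0):
        print(f"row offset {offset:5.1f} px -> depth ~ {depth_for_disparity(offset):.2f} m")
```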
ContributorsChandran, Sreenithy (Author) / Jayasuriya, Suren (Thesis advisor) / Turaga, Pavan (Committee member) / Dasarathy, Gautam (Committee member) / Kubo, Hiroyuki (Committee member) / Arizona State University (Publisher)
Created2024
Description
Quantum computing is becoming more accessible through modern noisy intermediate-scale quantum (NISQ) devices. These devices require substantial error correction and scaling before they become capable of fulfilling many of the promises that quantum computing algorithms make. This work investigates the current state of NISQ devices by implementing multiple classical computing scenarios with a quantum analog to observe how current quantum technology can be leveraged to achieve different tasks. First, quantum homomorphic encryption (QHE) is applied to the quantum teleportation protocol to show that this form of algorithm security can be implemented with modern quantum computing simulators. QHE is capable of completely obscuring a teleported state with only a linear increase, O(n), in the number of qubit gates. Additionally, the circuit depth increases by only a constant factor, O(c), when using only stabilizer circuits. Quantum machine learning (QML) is another potential application of NISQ technology that can be used to modify classical AI. QML is investigated using quantum hybrid neural networks for the classification of spoken commands on live audio data. Additionally, an edge computing scenario is examined to profile the interactions between a quantum simulator acting as a cloud server and an embedded processor board at the network edge. It is not practical to embed NISQ processors at a network edge, so this paradigm is important to study for practical quantum computing systems. The quantum hybrid neural network (QNN) learned to classify audio with accuracy (~94%) equivalent to a classical recurrent neural network. Introducing quantum simulation slows the system's responsiveness because it takes significantly longer to process quantum simulations than a classical neural network. This work shows that it is viable to implement classical computing techniques with quantum algorithms, but that current NISQ processing is sub-optimal when compared to classical methods.
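A minimal Qiskit sketch of the standard quantum teleportation circuit, written with deferred measurement (classically controlled X/Z corrections replaced by CX/CZ) so it runs on a plain statevector simulation. The QHE layer described in the thesis is not implemented here; this only illustrates the baseline protocol it wraps.

```python
from math import pi
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

qc = QuantumCircuit(3)

# Prepare an arbitrary single-qubit state on qubit 0 (the state to teleport).
qc.ry(pi / 3, 0)

# Create a Bell pair between qubits 1 (sender) and 2 (receiver).
qc.h(1)
qc.cx(1, 2)

# Bell-basis interaction between the message qubit and the sender's half.
qc.cx(0, 1)
qc.h(0)

# Deferred-measurement corrections: conditional X/Z become CX/CZ gates.
qc.cx(1, 2)
qc.cz(0, 2)

# Qubit 2 now holds the teleported state; inspect its reduced density matrix.
state = Statevector.from_instruction(qc)
print(partial_trace(state, [0, 1]))
```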
ContributorsYarter, Maxwell (Author) / Spanias, Andreas (Thesis advisor) / Arenz, Christian (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created2023
Description
The presence of strategic agents can pose unique challenges to data collection and distributed learning. This dissertation first explores the social network dimension of data collection markets, and then focuses on how strategic agents can be efficiently and effectively incentivized to cooperate in distributed machine learning frameworks. The first problem explores the impact of social learning in collecting and trading unverifiable information, where a data collector purchases data from users through a payment mechanism. Each user starts with a personal signal which represents that user's knowledge about the underlying state the data collector desires to learn. Through social interactions, each user also acquires additional information from his neighbors in the social network. It is revealed that both the data collector and the users can benefit from social learning, which drives down the privacy costs and helps to improve the state estimation for a given total payment budget. In the second half, a federated learning scheme for training a global learning model with strategic agents, who are not bound to contribute their resources unconditionally, is considered. Since the agents are not obliged to provide their true stochastic gradient updates and the server is not capable of directly validating the authenticity of reported updates, the learning process may reach a noncooperative equilibrium. First, the actions of the agents are assumed to be binary: cooperative or defective. If the cooperative action is taken, the agent sends a privacy-preserved version of its stochastic gradient signal; if the defective action is taken, the agent sends an arbitrary uninformative noise signal. This setup is then extended to scenarios with more general action spaces, where the quality of the stochastic gradient updates has a range of discrete levels. The proposed methodology evaluates each agent's stochastic gradient against a reference gradient estimate constructed from the gradients provided by the other agents, and rewards the agent based on that evaluation.
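A minimal sketch of the evaluation idea in the last sentence, assuming (purely for illustration, not as the dissertation's actual rule) that each agent's reported gradient is scored by cosine similarity against the leave-one-out mean of the other agents' gradients and rewarded in proportion to that score.

```python
import numpy as np

def score_agents(gradients: np.ndarray) -> np.ndarray:
    """Score each reported gradient against a reference built from the others.

    gradients: array of shape (num_agents, dim), one reported update per agent.
    Returns cosine similarities in [-1, 1]; higher means more consistent with
    the leave-one-out reference gradient.
    """
    n = gradients.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        reference = np.delete(gradients, i, axis=0).mean(axis=0)  # leave-one-out mean
        g = gradients[i]
        scores[i] = g @ reference / (np.linalg.norm(g) * np.linalg.norm(reference) + 1e-12)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_grad = rng.normal(size=50)
    cooperative = true_grad + 0.1 * rng.normal(size=(4, 50))  # noisy but honest updates
    defective = rng.normal(size=(1, 50))                      # uninformative noise
    reports = np.vstack([cooperative, defective])

    scores = score_agents(reports)
    rewards = np.maximum(scores, 0.0)  # e.g., pay in proportion to the score
    print(np.round(scores, 2), np.round(rewards, 2))
```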
ContributorsAkbay, Abdullah Basar (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Committee member) / Kosut, Oliver (Committee member) / Ewaisha, Ahmed (Committee member) / Arizona State University (Publisher)
Created2023
Description
Quantum computing has the potential to revolutionize the signal-processing field by providing more efficient methods for analyzing signals. This thesis explores the application of quantum computing to signal analysis-synthesis for compression applications. More specifically, the study focuses on two key approaches: the quantum Fourier transform (QFT) and quantum linear prediction (QLP). The research is motivated by the potential advantages offered by quantum computing in massive signal processing tasks, and presents novel quantum circuit designs for the QFT, quantum autocorrelation, and QLP, enabling signal analysis-synthesis using quantum algorithms. The two approaches are explained as follows. The quantum Fourier transform demonstrates the potential for improved speed in quantum computing compared to classical methods. This thesis focuses on quantum encoding of signals and on designing quantum algorithms for signal analysis-synthesis and compression using QFTs. Comparative studies are conducted to evaluate quantum computations for Fourier transform applications, considering signal-to-noise ratio (SNR) results. The effects of qubit precision and quantum noise are also analyzed. The QFT algorithm is also developed in the J-DSP simulation environment, providing hands-on laboratory experiences for signal-processing students. User-friendly J-DSP simulation programs are developed for QFT-based signal analysis-synthesis using peak picking and perceptual selection based on psychoacoustics. Further, this research is extended to analyze the autocorrelation of the signal using QFTs and to develop a quantum linear prediction (QLP) algorithm for speech processing applications. QFTs and inverse QFTs (IQFTs) are used to compute the quantum autocorrelation of the signal, and the HHL algorithm is modified and used to solve the resulting linear equations using quantum computing. The performance of the QLP algorithm is evaluated for system identification, spectral estimation, and speech analysis-synthesis, and comparisons are performed between QLP and classical linear prediction (CLP) results. The results demonstrate the following: effective quantum circuits for accurate QFT-based speech analysis-synthesis, evaluation of performance with quantum noise, design of accurate quantum autocorrelation, and development of a modified HHL algorithm for efficient QLP. Overall, this thesis contributes to research on quantum computing for signal processing applications and provides a foundation for further exploration of quantum algorithms for signal analysis-synthesis.
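A minimal Qiskit sketch of the textbook quantum Fourier transform circuit (Hadamards, a ladder of controlled-phase rotations, and a final qubit-order reversal). This shows the standard construction only, not the specific signal-encoding or autocorrelation circuits designed in the thesis.

```python
from math import pi
from qiskit import QuantumCircuit

def qft_circuit(n: int) -> QuantumCircuit:
    """Textbook n-qubit QFT: H + controlled-phase ladder + final swaps."""
    qc = QuantumCircuit(n, name="QFT")
    for target in range(n - 1, -1, -1):
        qc.h(target)
        # Controlled phase rotation of angle pi / 2^(target - control).
        for control in range(target):
            qc.cp(pi / 2 ** (target - control), control, target)
    # Reverse qubit order to match the usual output convention.
    for q in range(n // 2):
        qc.swap(q, n - 1 - q)
    return qc

if __name__ == "__main__":
    print(qft_circuit(3).draw())
```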
ContributorsSharma, Aradhita (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2023
Description
The past decade witnessed the success of deep learning models in various applications of computer vision and natural language processing. This success can be predominantly attributed to (i) the availability of large amounts of training data; (ii) access to domain-aware knowledge; (iii) the i.i.d. assumption between the training and target distributions; and (iv) the belief that existing metrics are reliable indicators of performance. When any of these assumptions is violated, the models exhibit brittleness, producing adversely varied behavior. This dissertation focuses on methods for accurate model design and characterization that enhance process reliability when certain assumptions are not met. With the need to safely adopt artificial intelligence tools in practice, it is vital to build reliable failure detectors that indicate regimes where the model must not be invoked. To that end, an error predictor trained with a self-calibration objective is developed to estimate loss consistent with the underlying model. The properties of the error predictor are described, and their utility in supporting introspection via feature importances and counterfactual explanations is elucidated. While such an approach can signal data regime changes, it is critical to calibrate models using regimes of inlier (training) and outlier data to prevent under- and over-generalization in models, i.e., incorrectly identifying inliers as outliers and vice versa. By identifying the space for specifying inliers and outliers, an anomaly detector that can effectively flag data of varying semantic complexities in medical imaging is next developed. Uncertainty quantification in deep learning models involves identifying sources of failure and characterizing model confidence to enable actionability. A training strategy is developed that allows the accurate estimation of model uncertainties, and its benefits are demonstrated for active learning and generalization gap prediction. This helps identify insufficiently sampled regimes and representation insufficiency in models. In addition, the task of deep inversion under data-scarce scenarios is considered, which in practice requires a prior to control the optimization. By identifying limitations in existing work, data priors powered by generative models and deep model priors are designed for audio restoration. With relevant empirical studies on a variety of benchmarks, the need for such design strategies is demonstrated.
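A minimal sketch of the failure-detector idea described above, under the assumption (for illustration only) that the error predictor is an auxiliary network regressing the frozen base classifier's per-sample loss; the dissertation's actual self-calibration objective and architecture are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical frozen base classifier and an auxiliary error predictor that
# learns to estimate the base model's per-sample loss from the same inputs.
base = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
error_predictor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

per_sample_ce = nn.CrossEntropyLoss(reduction="none")
opt = torch.optim.Adam(error_predictor.parameters(), lr=1e-3)

x = torch.randn(256, 16)                  # synthetic stand-in data
y = torch.randint(0, 3, (256,))

for _ in range(20):
    with torch.no_grad():                 # base model stays fixed
        target_loss = per_sample_ce(base(x), y)
    pred_loss = error_predictor(x).squeeze(1)
    loss = nn.functional.mse_loss(pred_loss, target_loss)  # regress the loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# At deployment, a high predicted loss flags inputs where the base model
# should not be trusted (a simple failure-detection signal).
print(error_predictor(torch.randn(4, 16)).squeeze(1))
```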
ContributorsNarayanaswamy, Vivek Sivaraman (Author) / Spanias, Andreas (Thesis advisor) / J. Thiagarajan, Jayaraman (Committee member) / Berisha, Visar (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2023