Matching Items (536)
Description
Infants born before 37 weeks of pregnancy are considered preterm. Preterm infants typically have to be strictly monitored, since they are highly susceptible to health problems such as hypoxemia (low blood oxygen level), apnea, respiratory issues, cardiac problems, and neurological problems, as well as an increased chance of long-term health issues such as cerebral palsy, asthma, and sudden infant death syndrome. One of the leading health complications in preterm infants is bradycardia, defined as a slower-than-expected heart rate, generally below 60 beats per minute. Bradycardia is often accompanied by low oxygen levels and can cause additional long-term health problems in the premature infant. The implementation of a non-parametric method to predict the onset of bradycardia is presented. This method assumes no prior knowledge of the data and uses kernel density estimation to predict the future onset of bradycardia events. The data is preprocessed and then analyzed to detect the peaks in the ECG signals, following which different kernels are implemented to estimate the shared underlying distribution of the data. The performance of the algorithm is evaluated using various metrics, and the computational challenges and methods to overcome them are also discussed.
It is observed that the performance of the algorithm with regard to the kernels used is consistent with the theoretical performance of the kernels as presented in a previous work. The theoretical approach has also been automated in this work, and the various implementation challenges have been addressed.
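The prediction step described above can be sketched with a toy kernel density estimate over inter-beat (RR) intervals; the synthetic data, the Gaussian kernel, the bandwidth, and the "RR > 1 s means heart rate below 60 bpm" rule are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def gaussian_kde_pdf(samples, x, bandwidth):
    """Evaluate a Gaussian kernel density estimate of `samples` at points `x`."""
    diffs = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

def bradycardia_risk(rr_intervals, bandwidth=0.05, threshold=1.0):
    """Probability mass of the estimated RR-interval density above `threshold`
    seconds; an RR interval longer than 1 s corresponds to a rate below 60 bpm."""
    grid = np.linspace(0.2, 2.0, 4000)
    pdf = gaussian_kde_pdf(rr_intervals, grid, bandwidth)
    dx = grid[1] - grid[0]
    return np.sum(pdf[grid > threshold]) * dx

rng = np.random.default_rng(0)
rr = rng.normal(0.45, 0.03, 300)          # synthetic intervals, ~133 bpm baseline
print(round(bradycardia_risk(rr), 3))     # → 0.0 (no predicted bradycardia risk)
```

A real detector would recompute this risk over a sliding window of recent peaks and alarm when the mass above the threshold grows.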
Contributors: Mitra, Sinjini (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Moraffah, Bahman (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The quest to find efficient algorithms to numerically solve differential equations is ubiquitous in all branches of computational science. A natural approach to this problem is to try all possible algorithms for solving the differential equation and choose the one that satisfies one's needs. However, the vast variety of algorithms in place makes this an extremely time-consuming task. Additionally, even after choosing the algorithm to be used, the style of programming does not guarantee the most efficient implementation. This thesis addresses the same problem in the field of computational nanoelectronics by using the PETSc linear solver and SLEPc eigenvalue solver packages to efficiently solve the Schrödinger and Poisson equations self-consistently.
In this work, a quasi-1D nanowire fabricated in the GaN material system is considered as a prototypical example. Special attention is placed on the proper description of the heterostructure device, the polarization charges, and the accurate treatment of the free surfaces. Simulation results are presented for the conduction band profiles, the electron density, and the energy eigenvalues/eigenvectors of the occupied sub-bands of this quasi-1D nanowire. The simulation results suggest that the solver is very efficient and can be successfully used for the analysis of any device with two-dimensional confinement. The tool is ported to www.nanoHUB.org and as such is freely available.
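The overall structure of a self-consistent Schrödinger-Poisson loop can be illustrated with a dimensionless 1D NumPy/SciPy toy; the thesis itself uses PETSc and SLEPc on a realistic 2D device cross-section, and every parameter below (domain, confinement, charge, mixing factor) is an invented placeholder:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Dimensionless toy: hbar = m = e = 1, domain [0, L] with hard walls.
n, L = 200, 10.0
x = np.linspace(0, L, n + 2)[1:-1]            # interior grid points
h = x[1] - x[0]
v_ext = 0.5 * ((x - L / 2) / 2.0) ** 2        # fixed external confinement
n_electrons = 0.5                             # placeholder occupancy of the lowest subband

def solve_schrodinger(v):
    """Lowest eigenpair of -(1/2) d2/dx2 + v with Dirichlet walls."""
    diag = 1.0 / h**2 + v
    off = -0.5 / h**2 * np.ones(n - 1)
    e, psi = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
    phi = psi[:, 0] / np.sqrt(np.sum(psi[:, 0] ** 2) * h)   # normalize
    return e[0], phi

def solve_poisson(rho):
    """Solve V_H'' = -rho with V_H = 0 at both walls (finite differences)."""
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, -rho)

v_h = np.zeros(n)
for it in range(500):                         # self-consistent field loop
    e0, phi = solve_schrodinger(v_ext + v_h)
    rho = n_electrons * phi**2                # electron density from the wavefunction
    v_new = solve_poisson(rho)
    if np.max(np.abs(v_new - v_h)) < 1e-6:
        break
    v_h = v_h + 0.1 * (v_new - v_h)           # linear mixing for stability
print(f"converged after {it} iterations, ground-state energy {e0:.4f}")
```

The production version replaces the dense eigen/linear solves with SLEPc and PETSc calls on sparse 2D operators; the outer iterate-Schrödinger-then-Poisson structure is the same.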
Contributors: Baikadi, Pranay Kumar Reddy (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen (Committee member) / Povolotskyi, Mykhailo (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Lattice-based cryptography is an up-and-coming field of cryptography that utilizes the difficulty of lattice problems to design cryptosystems that are resistant to quantum attacks and applicable to Fully Homomorphic Encryption (FHE) schemes. In this thesis, the parallelization offered by the Residue Number System (RNS) and the algorithmic efficiency of the Number Theoretic Transform (NTT) are combined to tackle the most significant bottleneck, polynomial ring multiplication, with the hardware design of an optimized RNS-based NTT polynomial multiplier. The design utilizes negative wrapped convolution, the NTT, RNS Montgomery reduction with the Bajard and Shenoy extensions, and optimized modular 32-bit channel arithmetic for nine RNS channels to accomplish an RNS polynomial multiplication. In addition to a full software implementation of the whole system, a pipelined and optimized RNS-based NTT unit with 4 RNS butterflies is implemented on a Xilinx Artix-7 FPGA (xc7a200tlffg1156-2L) for size and delay estimates. The hardware implementation achieves an operating frequency of 47.043 MHz and utilizes 13239 LUTs, 4010 FFs, and 330 DSP blocks, allowing for multiple simultaneously operating NTT units depending on FPGA size constraints.
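The negative wrapped (negacyclic) convolution at the heart of such a multiplier can be sketched in software. This single-modulus Python toy uses q = 257 and n = 8 for readability, and a naive O(n²) transform in place of the hardware butterfly network; the thesis design runs one such transform per 32-bit RNS channel across nine channels:

```python
def ntt(a, omega, q):
    """Naive O(n^2) number-theoretic transform mod q (hardware uses butterflies)."""
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, q) for j in range(n)) % q for i in range(n)]

def negacyclic_multiply(a, b, q, psi):
    """Multiply polynomials a, b in Z_q[x]/(x^n + 1) via negative wrapped convolution."""
    n = len(a)
    omega = pow(psi, 2, q)
    # twist by powers of psi so a cyclic convolution realizes the negacyclic one
    a_t = [a[i] * pow(psi, i, q) % q for i in range(n)]
    b_t = [b[i] * pow(psi, i, q) % q for i in range(n)]
    c_t = [x * y % q for x, y in zip(ntt(a_t, omega, q), ntt(b_t, omega, q))]
    inv_n = pow(n, q - 2, q)
    inv_omega = pow(omega, q - 2, q)
    c = [v * inv_n % q for v in ntt(c_t, inv_omega, q)]   # inverse NTT
    inv_psi = pow(psi, q - 2, q)
    return [c[i] * pow(inv_psi, i, q) % q for i in range(n)]

q, n, psi = 257, 8, 249    # 2n divides q - 1; psi is a primitive 2n-th root of unity
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 8, 0, 0, 0, 0]
print(negacyclic_multiply(a, b, q, psi))
```

Replacing the naive transform with a pipelined butterfly NTT and repeating the computation in each RNS channel recovers the structure of the hardware design.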
Contributors: Brist, Logan Alan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The primary goal of this thesis is to evaluate the influence of ethyl vinyl acetate (EVA) and polyolefin elastomer (POE) encapsulant types on glass-glass (GG) photovoltaic (PV) module reliability. The influence of these two encapsulant types on the reliability of GG modules was compared with baseline glass-polymer backsheet (GB) modules for benchmarking purposes. Three sets of modules, with four modules in each set, were constructed with two substrate types, i.e., glass-glass (GG) and glass-polymer backsheet (GB), and two encapsulant types, i.e., ethyl vinyl acetate (EVA) and polyolefin elastomer (POE). Each module set was subjected to the following accelerated tests as specified in the International Electrotechnical Commission (IEC) standard and NREL's Qualification Plus protocol: ultraviolet (UV) 250 kWh/m2; thermal cycling (TC) 200 cycles; damp heat (DH) 1250 hours. To identify the failure modes and reliability issues of the stressed modules, several module-level non-destructive characterizations were carried out, including colorimetry, UV-Vis-NIR spectral reflectance, ultraviolet fluorescence (UVF) imaging, electroluminescence (EL) imaging, and infrared (IR) imaging. These characterizations were performed on the front side of the modules both before the stress tests (pre-stress) and after the stress tests (post-stress). The UV-250 extended stress results indicated slight changes in the reflectance of the non-cell area of the EVA modules, probably due to minor adhesion loss at the cell and module edges. The DH-1250 extended stress tests showed significant changes in reflectance and UVF images for modules with both encapsulant types, indicating early stages of delamination. In the case of the TC-200 stress test, practically no changes were observed in any set of modules.
From the above short-term stress tests it appears, although it is not conclusive at this stage of the analysis, that delamination is the only failure mode that could be affecting module performance, as observed from the UV and DH extended stress tests. All these stress tests need to be continued to identify the wear-out failure modes and their impacts on the performance parameters of PV modules.
Contributors: Bhaskaran, Rahul (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Phelan, Patrick (Thesis advisor) / Wang, Liping (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
In the current photovoltaic (PV) industry, the O&M (operations and maintenance) personnel in the field primarily utilize three approaches to identify the underperforming or defective modules in a string: i) EL (electroluminescence) imaging of all the modules in the string; ii) IR (infrared) thermal imaging of all the modules in the string; and iii) current-voltage (I-V) curve tracing of all the modules in the string. In the first and second approaches, the EL images are used to detect modules with broken cells, and the IR images are used to detect modules with hotspot cells, respectively. These two methods identify modules with defective cells only semi-qualitatively, not accurately and quantitatively. The third method, I-V curve tracing, is a quantitative method to identify the underperforming modules in a string, but it is extremely time-consuming, labor-intensive, and highly dependent on ambient conditions. Since the I-V curves of individual modules in a string are obtained by disconnecting them individually at different irradiance levels, module operating temperatures, angles of incidence (AOI), and air masses/spectra, all the measured curves must be translated to a single reporting condition (SRC) of a single irradiance, single temperature, single AOI, and single spectrum. These translations are not only time-consuming but also prone to inaccuracy due to inherent issues in the translation models.
Therefore, the challenges in using traditional I-V tracers are: i) obtaining the I-V curves of all the modules and substrings in a string simultaneously at a single irradiance, operating temperature, irradiance spectrum, and angle of incidence, given the changing weather parameters and sun positions during the measurements; ii) the safety of field personnel when disconnecting and reconnecting cables in high-voltage systems (especially with field-aged connectors); and iii) the enormous time and hardship for test personnel in harsh outdoor climatic conditions. In this thesis work, a non-contact I-V (NCIV) curve tracing tool has been integrated and implemented to address these three challenges of traditional I-V tracers.

This work compares I-V curves obtained using a traditional I-V curve tracer with those obtained using an NCIV curve tracer for strings, substrings, and individual modules of crystalline silicon (c-Si) and cadmium telluride (CdTe) technologies. The NCIV curve tracer used in this study was integrated from three commercially available components: non-contact voltmeters (NCVs) with voltage probes to measure the voltages of substrings/modules in a string, a Hall sensor to measure the string current, and a data acquisition system (DAS) for simultaneous collection of the voltage data from the NCVs and the current data from the Hall sensor. This study demonstrates the concept and accuracy of the NCIV curve tracer by comparing the I-V curves obtained using a traditional capacitor-based tracer and the NCIV curve tracer on a three-module string of c-Si modules and of CdTe modules under natural sunlight, both with uniform light conditions on all the modules in the string and with one or more modules partially shaded, to simulate and quantitatively detect the underperforming module(s) in a string.
Contributors: Murali, Sanjay (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study depends on how effectively the training data set represents the target population. For example, in speech analysis, if one is performing emotion classification based on speech, the performance of the classifier depends mainly on the number and quality of the training samples. For small sample sizes and unbalanced data, classifiers developed in this context may focus on the differences in the training data set rather than on emotion (e.g., focusing on gender, age, and dialect).

This thesis evaluates several sampling methods and a non-parametric approach for determining the sample sizes required to minimize the effect of these nuisance variables on classification performance. The work focuses specifically on speech analysis applications, and hence it was carried out with speech features such as Mel-Frequency Cepstral Coefficients (MFCCs) and Filter Bank Cepstral Coefficients (FBCCs). The non-parametric divergence measure (D_p divergence) was used to study the difference between sampling schemes (stratified and multistage sampling) and the changes due to the sentence types in the sampled set.
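The D_p divergence mentioned above is commonly estimated non-parametrically from the Friedman-Rafsky minimal-spanning-tree statistic. The sketch below uses synthetic Gaussian features rather than MFCC/FBCC vectors and assumes the usual estimator, roughly 1 − R(m+n)/(2mn) clipped to [0, 1], where R counts MST edges joining the two samples:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def dp_divergence(x, y):
    """MST (Friedman-Rafsky) estimate of the Dp divergence between samples x and y."""
    m, n = len(x), len(y)
    z = np.vstack([x, y])
    labels = np.array([0] * m + [1] * n)
    # MST over the pooled sample; distances are all positive for continuous data
    mst = minimum_spanning_tree(cdist(z, z)).tocoo()
    # R = number of MST edges connecting a point of x to a point of y
    r = np.sum(labels[mst.row] != labels[mst.col])
    return max(0.0, 1.0 - r * (m + n) / (2.0 * m * n))

rng = np.random.default_rng(1)
same_a = rng.normal(0, 1, (200, 5))
same_b = rng.normal(0, 1, (200, 5))
far = rng.normal(4, 1, (200, 5))
print(dp_divergence(same_a, same_b))   # close to 0: overlapping distributions
print(dp_divergence(same_a, far))      # close to 1: well-separated distributions
```

In the sampling study, the two point sets would be feature vectors drawn under two different sampling schemes, so a small divergence indicates the schemes yield statistically similar training sets.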
Contributors: Mariajohn, Aaquila (Author) / Berisha, Visar (Thesis advisor) / Spanias, Andreas (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The following document describes the hardware implementation and analysis of temporal interference mitigation using High-Level Synthesis. As the problem of spectral congestion becomes more chronic and widespread, radio frequency (RF) based systems are emerging as a viable solution to this problem; among existing RF methods, cooperation-based systems have been a solution to a host of congestion problems. One of the most important elements of an RF receiver is its spatially adaptive part. Temporal mitigation is a vital technique employed at the receiver for signal recovery and further propagation along the radar chain.

The computationally intensive parts of temporal mitigation are identified and hardware-accelerated. The hardware implementation is based on a sequential approach, with optimizations applied to the individual components for better performance.

An extensive analysis using a range of fixed-point data types is performed to find the optimal data type.

Finally, a hybrid combination of data types for the different components of temporal mitigation is proposed based on the results of the above analysis.
Contributors: Siddiqui, Saquib Ahmad (Author) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Ogras, Umit Y. (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This thesis introduces a new robotic leg design with three degrees of freedom that can be adapted for both bipedal and quadrupedal locomotive systems, and serves as a blueprint for designers attempting to create low-cost robot legs capable of balancing and walking. Currently, bipedal leg designs are mostly rigid and have not strongly taken into account the advantages and disadvantages of using an active ankle, as opposed to a passive ankle, for balancing. This design uses low-cost compliant materials, but the materials are thick enough to mimic rigid properties under low stresses, so this work treats the links as rigid. The new three-degree-of-freedom leg can be adapted to contain either a passive ankle using springs or an actively controlled ankle using an additional actuator. This thesis focuses largely on the ankle and foot design of the robot and on the torque and speed requirements of the design for motor selection. The dynamics of the system, including height, foot width, weight, and resistances, are analyzed to determine how to improve design performance. Model-based control techniques are used to control the angle of the leg for balancing. In doing so, it is also shown that it is possible to implement model-based control techniques on robots made of laminate materials.
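The torque-sizing and balancing ideas can be illustrated with a planar inverted-pendulum ankle model; all numbers below (mass, center-of-mass height, recovery angle, PD gains) are invented placeholders, not the thesis's design values:

```python
import math

# Inverted-pendulum ankle model: the worst-case static torque needed to hold
# the body at lean angle theta is tau = m * g * h * sin(theta).
m, h, g = 5.0, 0.4, 9.81            # placeholder mass (kg) and CoM height (m)
theta_max = math.radians(15)        # largest lean the ankle must recover from
tau_req = m * g * h * math.sin(theta_max)
print(f"required ankle torque: {tau_req:.2f} N*m")

# Simple PD balance simulation: theta'' = (g/h) sin(theta) - tau / (m h^2)
kp, kd = 60.0, 8.0                  # hand-tuned PD gains (assumed values)
theta, omega, dt = theta_max, 0.0, 0.001
for _ in range(5000):               # 5 s of Euler integration
    tau = kp * theta + kd * omega   # motor torque opposing the lean
    alpha = (g / h) * math.sin(theta) - tau / (m * h * h)
    omega += alpha * dt
    theta += omega * dt
print(f"lean after 5 s: {math.degrees(theta):.3f} deg")
```

The peak torque commanded by the controller, not just the static holding torque, is what ultimately drives motor selection.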
Contributors: Shafa, Taha A (Author) / Aukes, Daniel M (Thesis advisor) / Rogers, Bradley (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector and displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, like vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for the two-stage detector. In this study, the probability density function of the modified two-stage detector is derived, and using this probability density function, an approach for determining the thresholds of this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been applied to a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
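The two single-stage statistics mentioned above can be sketched on synthetic speckle; the window size, scene size, and the maximum-ratio form of the variance detector are illustrative assumptions, and Berger's alternative estimator is not reproduced here:

```python
import numpy as np
from scipy.signal import convolve2d

def local_mean(a, win=5):
    """Sliding-window average (box filter) over a win x win neighborhood."""
    k = np.ones((win, win)) / win**2
    return convolve2d(a, k, mode='same')

def coherence(f, g, win=5):
    """Classical sample coherence magnitude between co-registered complex images."""
    num = np.abs(local_mean(f * np.conj(g), win))
    den = np.sqrt(local_mean(np.abs(f) ** 2, win) * local_mean(np.abs(g) ** 2, win))
    return num / np.maximum(den, 1e-12)

def variance_ratio(f, g, win=5):
    """Non-coherent sample power (variance) ratio, taken as the larger ratio."""
    p1 = local_mean(np.abs(f) ** 2, win)
    p2 = local_mean(np.abs(g) ** 2, win)
    return np.maximum(p1 / p2, p2 / p1)

rng = np.random.default_rng(2)
shape = (64, 64)
f = (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
g = f.copy()                                    # identical scene: high coherence
g[24:40, 24:40] = (rng.normal(size=(16, 16))    # disturbed patch: low coherence
                   + 1j * rng.normal(size=(16, 16))) / np.sqrt(2)
coh = coherence(f, g)
print(coh[4, 4].round(3), coh[32, 32].round(3))  # unchanged vs changed pixel
```

In the two-stage scheme described above, the variance-ratio statistic would be thresholded first, with the coherence-type stage applied only to pixels that pass that test.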
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Energy is one of the wheels on which the modern world runs. Therefore, standards and limits have been devised to maintain the stability and reliability of the power grid. This research presents a simple methodology for increasing the amount of Inverter-based Renewable Generation (IRG), also known as Inverter-based Resources (IBR), that considers the voltage and frequency limits specified by the Western Electricity Coordinating Council (WECC) Transmission Planning (TPL) criteria, as well as the tie-line power flow limits between the area under study and its neighbors under contingency conditions. A WECC power flow and dynamic file is analyzed and modified in this research to demonstrate the performance of the methodology. GE's Positive Sequence Load Flow (PSLF) software is used to conduct the research, and Python is used to analyze the output data.

The thesis explains in detail how the system with 11% IRG operated before any adjustments (the addition of IRG) were made and what procedures were modified to make the system run correctly. The adjustments made to the dynamic models are also explained in depth to give a clearer picture of how each adjustment affects system performance. A list of proposed IRG units along with their locations was provided by SRP, a power utility in Arizona, to be integrated into the power flow and dynamic files. In the process of finding the maximum IRG penetration threshold, three sensitivities were also considered, namely, momentary cessation due to low voltages, transmission- versus distribution-connected solar generation, and stalling of induction motors. Finally, the thesis discusses how the system reacts to the aforementioned modifications and how the IRG penetration threshold is adjusted with regard to the different sensitivities applied to the system.
Contributors: Albhrani, Hashem A M H S (Author) / Pal, Anamitra (Thesis advisor) / Holbert, Keith E. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2020