Matching Items (134)
171819-Thumbnail Image.png
Description
The space industry is rapidly expanding, and components are getting increasingly smaller, leading to the prominence of cubesats. Cubesats are satellites ranging from about coffee-mug size to cereal-box size. The challenges of shortened timelines and smaller budgets for smaller spacecraft are also their biggest advantages. This benefits educational and industry missions alike but can burden teams that are smaller or have less experience. Thermal analysis of cubesats is no exception to these burdens, which is why this thesis has been written to provide a guide for conducting the thermal analysis of a cubesat, using the Deployable Optical Receiver Aperture (DORA) mission as an example. Background on cubesats and their role in the space industry will be examined. The heat transfer theory necessary for conducting a thermal analysis will be explored. The DORA thermal analysis will then be conducted by constructing a thermal model in Thermal Desktop software from the ground up. Insight into the assumptions that allow model construction to proceed accurately yet quickly will be detailed. Lastly, this fast method will be compared to a standard finite element mesh model to show that quality results can be achieved in significantly less time.
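As a first-order illustration of the kind of energy balance a spacecraft thermal model resolves in far more detail, a single-node radiative equilibrium can be sketched in a few lines. All areas, optical properties, and power levels below are hypothetical stand-ins, not DORA values:

```python
# Single-node radiative equilibrium: absorbed solar plus internal
# dissipation balances emitted infrared.
#   alpha * A_sun * G_s + Q_int = eps * sigma * A_rad * T^4

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp(alpha, a_sun, g_s, q_int, eps, a_rad):
    """Steady-state temperature (K) of an isothermal spacecraft node."""
    absorbed = alpha * a_sun * g_s + q_int
    return (absorbed / (eps * SIGMA * a_rad)) ** 0.25

# Hypothetical 1U-cubesat-like numbers: one 0.01 m^2 face absorbing
# sunlight, all six faces radiating to deep space.
t_eq = equilibrium_temp(alpha=0.6, a_sun=0.01, g_s=1361.0,
                        q_int=1.0, eps=0.85, a_rad=0.06)
```

A full Thermal Desktop model replaces this single node with many coupled nodes, view factors, and orbital heating cases, but each node obeys the same balance.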
ContributorsAdkins, Matthew Thomas (Author) / Phelan, Patrick (Thesis advisor) / Jacobs, Danny (Thesis advisor) / Wang, Liping (Committee member) / Bowman, Judd (Committee member) / Arizona State University (Publisher)
Created2022
171841-Thumbnail Image.png
Description
Radiation heat transfer can surpass the blackbody limit when the distance between the hot emitter and cold receiver is less than the characteristic wavelength of electromagnetic radiation. The enhanced radiation heat transfer achieved is also called near-field radiation heat transfer. Several theoretical and experimental studies have demonstrated enhancement in near-field radiation heat transfer for isotropic materials such as silicon carbide (SiC) and undoped and doped Si. The enhancement achieved, however, is narrow-banded. Significant improvement in radiation heat transfer is necessary to satisfy some of the energy demands, so there is growing interest in hyperbolic materials because of the enhancement they provide through propagating modes. The main objective of the current thesis project is to investigate the control of hyperbolic bands using boron nitride nanotubes (a nanostructure of hexagonal boron nitride) for near-field radiative heat transfer. Optical properties of boron nitride nanotubes are calculated using Maxwell-Garnett effective medium theory, and the corresponding hyperbolic bands are identified. It is observed that the boron nitride nanotubes have only one hyperbolic band, located at higher frequencies. Preliminary comparisons of the near-field radiative heat flux calculations with literature are performed using a more general 4×4 transfer matrix method. Due to its high computational time, anisotropic thin-film optics is instead used to calculate near-field radiative heat transfer. Factors contributing to enhancement are investigated. In the end, the spectral allocation ratio, the ratio of heat flux contributed by higher frequencies to that contributed by lower frequencies, is calculated to assess the contribution of each hyperbolic band to total heat flux.
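For context, the standard Maxwell-Garnett mixing rule for spherical inclusions can be sketched as follows. The thesis itself uses the anisotropic nanotube form with depolarization factors, which this simplified spherical version omits, and the permittivity values below are hypothetical:

```python
def maxwell_garnett(eps_h, eps_i, f):
    """Effective permittivity of spherical inclusions with permittivity
    eps_i and volume fraction f embedded in a host with permittivity
    eps_h (standard Maxwell-Garnett mixing rule)."""
    num = eps_i * (1 + 2 * f) + 2 * eps_h * (1 - f)
    den = eps_i * (1 - f) + eps_h * (2 + f)
    return eps_h * num / den

# Sanity checks: f = 0 recovers the host, f = 1 recovers the inclusion.
eps_host_limit = maxwell_garnett(1.0, 4.0 + 0.1j, 0.0)
eps_incl_limit = maxwell_garnett(1.0, 4.0 + 0.1j, 1.0)
eps_mix = maxwell_garnett(1.0, 4.0 + 0.1j, 0.3)  # lies between the two
```

For an aligned nanotube array, the mixing rule is applied separately along and across the tube axis, and a hyperbolic band appears wherever the two resulting effective permittivities have opposite signs.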
ContributorsRajan, Vishwa Krishna (Author) / Wang, Liping (Thesis advisor) / Phelan, Patrick (Committee member) / Wang, Robert (Committee member) / Arizona State University (Publisher)
Created2022
168404-Thumbnail Image.png
Description
Communicating with computers through thought has been a remarkable achievement in recent years. This was made possible by the use of Electroencephalography (EEG). Brain-computer interfaces (BCIs) rely heavily on EEG signals for communication between humans and computers. With the advent of deep learning, many studies have recently applied these techniques to EEG data to perform various tasks such as emotion recognition, motor imagery classification, sleep analysis, and many more. Despite the rise of interest in EEG signal classification, very few studies have explored the MindBigData dataset, which collects EEG signals recorded at the stimulus of seeing a digit and thinking about it. This dataset takes us closer to realizing the idea of mind-reading or communication via thought. Classifying these signals into the respective digit that the user thinks about is thus a challenging task, which serves as motivation to study this dataset and apply existing deep learning techniques to it. Given the recent success of the transformer architecture in domains such as computer vision and natural language processing, this thesis studies the transformer architecture for EEG signal classification and also explores other deep learning techniques for the same task. As a result, the proposed classification pipeline achieves performance comparable with existing methods.
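At the core of the transformer architecture mentioned above is scaled dot-product attention. A minimal, framework-free sketch applied to a toy feature sequence (the sequence length and feature dimension are hypothetical, not those of the actual EEG pipeline) might look like:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Toy "EEG" sequence: 4 time steps of 8-dimensional features.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(x, x, x)  # self-attention
```

In a real classifier this layer would be stacked with learned projections, feed-forward blocks, and a classification head over the pooled sequence.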
ContributorsMuglikar, Omkar Dushyant (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created2021
168405-Thumbnail Image.png
Description
Polarization detection and control techniques play essential roles in various applications, including optical communication, polarization imaging, chemical analysis, target detection, and biomedical diagnosis. Conventional methods for polarization detection and control require bulky optical systems. Flat optics opens a new way toward ultra-compact, lower-cost devices and systems for polarization detection and control. However, polarization measurement and manipulation devices with high efficiency and accuracy in the mid-infrared (MIR) range remain elusive. This dissertation presents design concepts and experimental demonstrations of full-Stokes parameter detection and polarization generation devices based on chip-integrated plasmonic metasurfaces with high performance and record efficiency. One of the significant challenges for full-Stokes polarization detection is to achieve high-performance circular polarization (CP) filters. The first design presented in this dissertation is based on the direct integration of a plasmonic quarter-wave plate (QWP) onto gold nanowire gratings. It features subwavelength thickness (~500 nm) and an extinction ratio of around 16. The second design is based on anisotropic thin-film interference between two vertically integrated anisotropic plasmonic metasurfaces. It provides record-high efficiency (around 90%) and extinction ratio (>180). These plasmonic CP filters can be used for circular, elliptical, and linear polarization generation at different wavelengths. The maximum degree of circular polarization (DOCP) measured from the sample reaches 0.99998. The proposed CP filters were integrated with nanograting-based linear polarization (LP) filters on the same chip for single-shot polarization detection.
Full-Stokes measurements were experimentally demonstrated with high accuracy at a single wavelength using the direct subtraction method and over a broad wavelength range from 3.5 to 4.5 μm using the Mueller matrix method. This design concept was later extended to a pixelized array of polarization filters. A full-Stokes imaging system was experimentally demonstrated by integrating a metasurface with pixelized polarization filter arrays and an MIR camera.
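The direct subtraction method referenced above recovers the Stokes vector from intensity readings behind the on-chip LP and CP filters. A minimal sketch with idealized, hypothetical intensities:

```python
def stokes_from_intensities(i_h, i_v, i_45, i_135, i_rcp, i_lcp):
    """Stokes parameters via the direct-subtraction method, from six
    intensity measurements behind ideal polarization filters."""
    s0 = i_h + i_v       # total intensity
    s1 = i_h - i_v       # horizontal vs. vertical linear
    s2 = i_45 - i_135    # +45 deg vs. -45 deg linear
    s3 = i_rcp - i_lcp   # right vs. left circular
    return s0, s1, s2, s3

def docp(s0, s3):
    """Degree of circular polarization, |S3| / S0."""
    return abs(s3) / s0

# Ideal right-circularly polarized light of unit intensity: each linear
# filter passes half the power, the RCP filter passes all of it.
s0, s1, s2, s3 = stokes_from_intensities(0.5, 0.5, 0.5, 0.5, 1.0, 0.0)
```

Real filters have finite extinction ratios, which is why the broadband measurements instead invert a calibrated Mueller matrix rather than subtracting raw intensities.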
ContributorsBai, Jing (Author) / Yao, Yu (Thesis advisor) / Balanis, Constantine A. (Committee member) / Wang, Liping (Committee member) / Zhang, Yong-Hang (Committee member) / Arizona State University (Publisher)
Created2021
161945-Thumbnail Image.png
Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are mainly two viewpoints from which to understand shapes. On one hand, the outer surface of the shape can be taken as a two-dimensional embedding in space. On the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging the intrinsic features on the tangent plane. But a two-dimensional model may fail to fully represent the realistic properties of shapes with both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods originating from these SPDEs are developed to address both two-dimensional and three-dimensional shape analyses. The unique physical meanings of these SPDEs inspired the findings of features, shape descriptors, metrics, and kernels in this series of works. Initially, the data generation of high-dimensional shapes, here tetrahedral meshes, is introduced. The cerebral cortex is taken as the study target, and an automatic pipeline for generating the gray matter tetrahedral mesh is introduced. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM). Two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Considering the fact that high-dimensional shape models usually contain massive redundancies, and the demand for effective landmarks in many applications, Gaussian process landmarking on tetrahedral meshes is further studied. A SIWKS-based metric space is used to define a geometry-aware Gaussian process.
The study of the periodic potential diffusion process further inspired the idea of a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods is then introduced to tackle the problems of shape retrieval and classification. Experiments for each component are demonstrated. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, it is clear that classical SPDEs play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
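As an illustration of a descriptor built from the heat equation's solution, the heat kernel signature can be computed from a Laplacian eigendecomposition. Here a tiny hand-built graph Laplacian stands in for the FEM tetrahedral LBO, so this is a toy analogue of the idea rather than the thesis's actual pipeline:

```python
import numpy as np

def heat_kernel_signature(laplacian, t):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, from the
    eigendecomposition of a symmetric Laplacian matrix."""
    lam, phi = np.linalg.eigh(laplacian)
    # phi[:, i] is the i-th eigenvector; broadcast the spectral weights.
    return (np.exp(-lam * t) * phi**2).sum(axis=1)

# Graph Laplacian of a 3-node path, a stand-in for a discretized LBO.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
hks = heat_kernel_signature(L, t=0.5)
```

A useful invariant for checking an implementation: summing HKS over all points recovers the heat trace, the sum of exp(-lambda_i * t) over the spectrum.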
ContributorsFan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2021
168275-Thumbnail Image.png
Description
Graph matching is a fundamental but notoriously difficult problem due to its NP-hard nature, and it serves as a cornerstone for a series of applications in machine learning and computer vision, such as image matching, dynamic routing, and drug design, to name a few. Although there has been massive previous investigation into high-performance graph matching solvers, it remains a challenging task to tackle the matching problem under real-world scenarios with severe graph uncertainty (e.g., noise, outliers, misleading or ambiguous links). In this dissertation, the main focus is to investigate the essence of, and propose solutions to, graph matching with higher reliability under such uncertainty. To this end, the proposed research was conducted from three perspectives related to reliable graph matching: modeling, optimization, and learning. For modeling, graph matching is extended from the typical quadratic assignment problem to a more generic mathematical model by introducing a specific family of separable functions, achieving higher capacity and reliability. In terms of optimization, a novel, highly gradient-efficient determinant-based regularization technique is proposed, showing high robustness against outliers. The learning paradigm for graph matching under intrinsic combinatorial characteristics is then explored. First, a study is conducted on filling the gap between the discrete problem and its continuous approximation under a deep learning framework. The dissertation then investigates the necessity of a more reliable latent topology of graphs for matching and proposes an effective and flexible framework to obtain it. Coherent findings in this dissertation include theoretical studies and several novel algorithms, with rich experiments demonstrating their effectiveness.
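The quadratic-assignment formulation underlying graph matching can be made concrete with a brute-force solver: maximize the edge-agreement score over all node permutations. Enumeration over n! permutations is only feasible for toy graphs, which is precisely the NP-hardness that practical solvers work around; the three-node graphs below are hypothetical examples:

```python
from itertools import permutations

def qap_score(a, b, perm):
    """Edge-agreement objective: sum_ij A[i][j] * B[perm[i]][perm[j]]."""
    n = len(a)
    return sum(a[i][j] * b[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def match_brute_force(a, b):
    """Exact graph matching by enumerating all n! node permutations."""
    return max(permutations(range(len(a))),
               key=lambda p: qap_score(a, b, p))

# Path 0-1-2 versus the same path relabeled so its middle node is 2.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
B = [[0, 0, 1],
     [0, 0, 1],
     [1, 1, 0]]
best = match_brute_force(A, B)  # aligns both edges (score 4, each
                                # undirected edge counted twice)
```

Relaxations, regularizers, and learned solvers of the kind studied in the dissertation exist to approximate this combinatorial optimum at scale.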
ContributorsYu, Tianshu (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Yang, Yezhou (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created2021
168292-Thumbnail Image.png
Description
In this dissertation, two types of passive air freshener products from Henkel, the wick-based air freshener and the gel-based air freshener, are studied for their wicking mechanisms and evaporation performance. The fibrous pad of the wick-based air freshener is a porous medium that absorbs fragrance by capillary force and releases the fragrance into the ambient air. To investigate the wicking process, a two-dimensional multiphase-flow numerical model is built in COMSOL Multiphysics. Saturation and liquid pressure inside the pad are solved. Comparison between the simulation results and experiments shows that evaporation occurs simultaneously with the wicking process. The evaporation performance on the surface of the wicking pad is analyzed based on kinetic theory, from which the mass flow rate of molecules passing through the interface of each pore of the porous medium is obtained. A 3D model coupling the evaporation model with dynamic wicking on the evaporation pad is built to simulate the entire performance of the air freshener in its environment over a long period of time. Diffusion and natural convection effects are included in the simulation. The simulation results match the experiments well for air fresheners placed both inside a chamber and outside a chamber, the latter being subject to indoor airflow. The gel-based air freshener can be treated as a porous medium in which a solid network of particles spans the volume of the fragrance liquid. To predict the evaporation performance of the gel, two approaches are tested on gel samples of hemispherical shape. The first approach is the sessile drop model commonly used for the drying process of a pure liquid droplet. It can be used to estimate the weight-loss rate and time duration of the evaporation. The second approach is to simulate the concentration profile outside the gel and estimate the evaporation rate from the surface of the gel using kinetic theory.
The evaporation area is updated based on the change in pore size. A 3D simulation using the same analysis is further applied to a cylindrical gel sample. The simulation results match the experimental data well.
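The kinetic-theory evaporation estimate mentioned above is commonly written in the Hertz-Knudsen form. A sketch using water properties at room temperature (illustrative values, not the fragrance liquids studied in the dissertation, and with unit accommodation coefficient, which gives a theoretical upper-bound flux):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def hertz_knudsen_mass_flux(p_sat, p_amb, molar_mass, temp, alpha=1.0):
    """Kinetic-theory evaporative mass flux (kg/m^2/s):
    J = alpha * (p_sat - p_amb) * sqrt(m / (2 * pi * k_B * T)),
    where m is the mass of a single molecule."""
    m = molar_mass / N_A
    return alpha * (p_sat - p_amb) * math.sqrt(m / (2 * math.pi * K_B * temp))

# Water evaporating into dry air at 25 C; saturation pressure ~3169 Pa.
j = hertz_knudsen_mass_flux(p_sat=3169.0, p_amb=0.0,
                            molar_mass=18.015e-3, temp=298.15)
```

Observed evaporation rates are far below this bound because the ambient vapor pressure is nonzero, the accommodation coefficient is small, and transport away from the surface is diffusion-limited, which is why the models above also resolve diffusion and natural convection.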
ContributorsYuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Wang, Liping (Committee member) / Jiao, Yang (Committee member) / Arizona State University (Publisher)
Created2021
168749-Thumbnail Image.png
Description
Alzheimer's disease (AD) is a neurodegenerative disease that damages the cognitive abilities of a patient. It is critical to diagnose AD early to begin treatment as soon as possible, which can be done through biomarkers. One such biomarker is the beta-amyloid (Aβ) peptide, which can be quantified using the centiloid (CL) scale. To identify the Aβ biomarker, a deep learning model that can model AD progression by predicting the CL value for brain magnetic resonance images (MRIs) is proposed. Brain MRI images can be obtained through the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets; however, a single model cannot perform well on both datasets at once. Thus, a regularization-based continual learning framework that performs domain adaptation on the previous model is also proposed, capturing the latent information about the relationship between Aβ and AD progression within both datasets.
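The regularization idea behind such a framework can be sketched generically as a penalty that anchors the new domain's weights to those learned on the previous dataset. This is a toy L2-toward-previous-weights version with uniform importance, a scalar linear model, and hypothetical data; the actual framework is more elaborate:

```python
def regularized_loss(w, data, w_prev, lam):
    """Task loss (mean squared error of a linear model y = w * x on
    (x, y) pairs) plus an L2 penalty anchoring w to prior weights."""
    task = sum((w * x - y) ** 2 for x, y in data) / len(data)
    return task + lam * (w - w_prev) ** 2

def fit(data, w_prev, lam, lr=0.05, steps=500):
    """Minimize the regularized loss by central-difference gradient descent."""
    w, eps = w_prev, 1e-6
    for _ in range(steps):
        grad = (regularized_loss(w + eps, data, w_prev, lam)
                - regularized_loss(w - eps, data, w_prev, lam)) / (2 * eps)
        w -= lr * grad
    return w

old_w = 1.0                          # weight learned on the first domain
new_data = [(1.0, 3.0), (2.0, 6.0)]  # second domain prefers w = 3
w_free = fit(new_data, old_w, lam=0.0)   # forgets the old domain
w_reg = fit(new_data, old_w, lam=10.0)   # compromises between domains
```

The penalty trades new-domain accuracy for retention of what the previous model learned, which is the essence of regularization-based continual learning.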
ContributorsTrinh, Matthew Brian (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Su, Yi (Committee member) / Arizona State University (Publisher)
Created2022
165711-Thumbnail Image.png
Description
The Population Receptive Field (pRF) model is widely used to predict the location (retinotopy) and size of receptive fields in visual space. Doing so allows for the creation of a mapping from locations in the visual field to the associated groups of neurons in the cortical region (within the visual cortex of the brain). However, using the pRF model is very time consuming. Past research has focused on the creation of Convolutional Neural Networks (CNNs) to mimic the pRF model in a fraction of the time, and they have worked well under highly controlled conditions. However, these models have not been thoroughly tested on real human data. This thesis focused on adapting one of these CNNs to accurately predict the retinotopy of a real human subject using a dataset from the Human Connectome Project. The results show promise toward creating a fully functioning CNN, but they also expose new challenges that must be overcome before the model can be used to predict the retinotopy of new human subjects.
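A common form of the pRF forward model treats each voxel's receptive field as a 2D Gaussian over visual space and predicts the response as the overlap between that Gaussian and a binary stimulus. A minimal sketch with a hypothetical grid size and parameters:

```python
import math

def prf_response(stimulus, x0, y0, sigma):
    """Predicted response of a voxel whose population receptive field is
    a 2D isotropic Gaussian centered at (x0, y0) with size sigma:
    the sum of the Gaussian over stimulated pixels."""
    resp = 0.0
    for y, row in enumerate(stimulus):
        for x, s in enumerate(row):
            resp += s * math.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                                 / (2 * sigma ** 2))
    return resp

# 9x9 visual field; a one-pixel stimulus either on or off the pRF center.
on_center = [[1 if (x, y) == (4, 4) else 0 for x in range(9)]
             for y in range(9)]
off_center = [[1 if (x, y) == (0, 0) else 0 for x in range(9)]
              for y in range(9)]
r_on = prf_response(on_center, x0=4, y0=4, sigma=1.5)
r_off = prf_response(off_center, x0=4, y0=4, sigma=1.5)
```

Fitting the model means searching over (x0, y0, sigma) per voxel so that predicted responses match the measured fMRI time series, which is the slow step a CNN aims to shortcut.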
ContributorsBurgard, Braeden (Author) / Wang, Yalin (Thesis director) / Ta, Duyan (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2022-05