Matching Items (209)
Description

Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing the key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed that automatically detects and learns non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and decision-level camera fusion using the product rule has been found to be optimal for gesture recognition with multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed.
Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
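As an illustrative aside (not the dissertation's implementation), decision-level fusion with the product rule simply multiplies each camera's class posteriors elementwise and picks the class with the largest product. All numbers below are made up:

```python
import numpy as np

def product_rule_fusion(camera_posteriors):
    """Fuse per-camera class posteriors with the product rule.

    camera_posteriors: array of shape (n_cameras, n_classes), each row a
    posterior distribution over gesture classes given that camera's view.
    Returns the fused posterior and the winning class index.
    """
    # Multiply posteriors elementwise across cameras (log space for stability).
    log_fused = np.sum(np.log(camera_posteriors + 1e-12), axis=0)
    fused = np.exp(log_fused - log_fused.max())
    fused /= fused.sum()               # renormalize to a distribution
    return fused, int(np.argmax(fused))

# Two cameras, three gesture classes: camera 1 weakly favors class 0,
# camera 2 strongly favors class 2, so the product favors class 2.
posteriors = np.array([[0.4, 0.3, 0.3],
                       [0.1, 0.1, 0.8]])
fused, label = product_rule_fusion(posteriors)
```

A confidently wrong camera can thus veto the others, which is exactly why quantifying complementary strength across cameras matters when choosing which cameras to fuse.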
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Concrete columns constitute the fundamental supports of buildings, bridges, and various other infrastructure, and their failure can lead to the collapse of the entire structure. As such, great effort goes into improving the fire resistance of such columns. In a time-sensitive fire situation, delaying the failure of critical load-bearing structures increases the time available for evacuating occupants, recovering property, and accessing the fire. Much work has been done on improving the structural performance of concrete, including reducing column sizes while providing a safer structure, and high-strength (HS) concrete has been developed to meet these needs. HS concrete differs from normal-strength (NS) concrete in that it has higher stiffness, lower permeability, and greater durability. This, unfortunately, results in poor performance under fire: the lower permeability allows water vapor to build up, causing HS concrete to suffer explosive spalling under rapid heating. In addition, the coefficient of thermal expansion (CTE) of HS concrete is lower than that of NS concrete. In this study, the effects of introducing a region of crumb rubber concrete into a steel-reinforced concrete column were analyzed. Including crumb rubber concrete in a column greatly increases the thermal resistivity of the overall column, reducing both the core temperature and the rate at which the column heats. Different cases were analyzed, varying the position of the crumb-rubber region to characterize its effect on the improvement in fire resistance. Finite element analysis was used to calculate the temperature and strain distributions over time across the column's cross-section, with particular interest in the steel-concrete region.
Of the several cases investigated, it was found that the improvement in time before failure ranged from 32 to 45 minutes.
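The heat-transfer idea can be illustrated with a 1-D explicit finite-difference toy model (not the thesis's finite element analysis; all property values are round placeholders, not measured data): lowering the thermal diffusivity of a mid-depth band slows core heating.

```python
import numpy as np

def core_temp(alpha_inner, t_end=14_400.0, n=41, L=0.2, T_fire=800.0, T0=20.0):
    """Core temperature of a 1-D slab after t_end seconds of two-sided fire
    exposure, with a middle band of diffusivity alpha_inner (m^2/s)."""
    alpha = np.full(n, 5e-7)                 # ordinary concrete, illustrative
    alpha[n // 3: 2 * n // 3] = alpha_inner  # middle band: crumb-rubber region
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / alpha.max()         # stable explicit time step
    T = np.full(n, T0)
    for _ in range(int(t_end / dt)):
        T[0] = T[-1] = T_fire                # fire-exposed boundaries
        T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return float(T[n // 2])                  # temperature at the core

plain = core_temp(5e-7)    # uniform ordinary concrete
rubber = core_temp(1e-7)   # crumb-rubber band with lower diffusivity
```

After four simulated hours the core of the uniform slab is far hotter than the core shielded by the low-diffusivity band, which is the qualitative effect the study exploits.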
Contributors: Ziadeh, Bassam Mohammed (Author) / Phelan, Patrick (Thesis advisor) / Kaloush, Kamil (Thesis advisor) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A relatively simple subset of nanotechnology - nanofluids - can be obtained by adding nanoparticles to conventional base fluids. The promise of these fluids stems from the fact that relatively low particle loadings (typically <1% volume fractions) can significantly change the properties of the base fluid. This research explores how low volume fraction nanofluids, composed of common base fluids, interact with light energy. Comparative experimentation and modeling reveal that absorbing light volumetrically (i.e., in the depth of the fluid) is fundamentally different from surface-based absorption. Depending on the particle material, size, shape, and volume fraction, a fluid can be changed from being mostly transparent to sunlight (as with water, alcohols, oils, and glycols) to being a very efficient volumetric absorber of sunlight. This research also visualizes, under high levels of irradiation, how nanofluids undergo localized phase change phenomena. For this, images were taken of bubble formation and boiling in aqueous nanofluids heated by a hot wire and by a laser, and infrared thermography was used to quantify the phenomenon. Overall, this research reveals the possibility of novel solar collectors in which the working fluid directly absorbs light energy and undergoes phase change in a single step. Modeling results indicate that these improvements can increase a solar thermal receiver's efficiency by up to 10%.
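The surface-versus-volumetric contrast can be sketched with the Beer-Lambert law, I(z) = I0 exp(-alpha z): a weakly absorbing base fluid lets most light pass through a receiver channel, while a dilute nanoparticle suspension absorbs nearly all of it in depth. The extinction coefficients below are placeholders, not measured nanofluid properties.

```python
import math

def absorbed_fraction(alpha, depth):
    """Fraction of incident light absorbed within `depth` (m) of a fluid with
    extinction coefficient `alpha` (1/m), per Beer-Lambert attenuation."""
    return 1.0 - math.exp(-alpha * depth)

depth = 0.05  # a 5 cm receiver channel
base_fluid = absorbed_fraction(2.0, depth)    # nearly transparent base fluid
nanofluid = absorbed_fraction(120.0, depth)   # dilute nanoparticle suspension
```

With these illustrative coefficients the base fluid absorbs under 10% of the incident light while the nanofluid absorbs essentially all of it, which is the "mostly transparent to efficient volumetric absorber" transition described above.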
Contributors: Taylor, Robert (Author) / Phelan, Patrick E (Thesis advisor) / Adrian, Ronald (Committee member) / Trimble, Steve (Committee member) / Posner, Jonathan (Committee member) / Maracas, George (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been looked at in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis examines some applications of compressive sensing and sparse representation to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of an image classification system based on unique affine sparse codes is presented, leading to state-of-the-art results. This further leads to an analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation on publicly available datasets shows that the proposed method outperforms other state-of-the-art methods in image classification. The final part of the thesis deals with image denoising, with a novel approach to obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches.
Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
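The low-rank intuition behind this denoising step can be shown with a toy example (a sketch of the general idea, not the thesis's algorithm): stack similar noisy patches as the rows of a matrix and truncate its SVD. Since truly similar clean patches form a (nearly) rank-one stack, keeping only the leading singular component suppresses most of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
clean_patch = rng.random(64)                  # one 8x8 patch, flattened
stack = np.tile(clean_patch, (20, 1))         # 20 "similar" clean patches
noisy = stack + 0.1 * rng.standard_normal(stack.shape)

# Rank-1 approximation of the noisy stack via truncated SVD.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank1 = (U[:, :1] * s[:1]) @ Vt[:1, :]

err_noisy = float(np.linalg.norm(noisy - stack))     # error before truncation
err_denoised = float(np.linalg.norm(rank1 - stack))  # error after truncation
```

The rank-1 reconstruction is far closer to the clean stack than the noisy one, illustrating why a low-rank constraint over groups of correlated patches is an effective denoising prior.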
Contributors: Kulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Locomotion of microorganisms is commonly observed in nature and is usually attributed to the mechanical deformation of solid appendages. In 1956, however, Nobel laureate Peter Mitchell proposed that an asymmetric ion flux on a bacterium's surface could generate electric fields that drive locomotion via self-electrophoresis. Recent advances in nanofabrication have enabled the engineering of synthetic analogues: bimetallic colloidal particles that swim due to the asymmetric ion flux Mitchell originally proposed. Bimetallic colloidal particles swim through aqueous solutions by converting chemical fuel to fluid motion through asymmetric electrochemical reactions. This dissertation presents novel bimetallic motor fabrication strategies, motor functionality, and a study of the motors' collective behavior in chemical concentration gradients. Brownian dynamics simulations and experiments show that the motors exhibit chemokinesis, a motile response to chemical gradients that results in net migration and concentration of particles. Chemokinesis is typically observed in living organisms and is distinct from chemotaxis in that there is no directional sensing by the particles. The synthetic motor chemokinesis observed in this work is due to variation in the motors' velocity and effective diffusivity as a function of fuel and salt concentration. Static concentration fields are generated in microfluidic devices fabricated with porous walls. Nanoscale particles that swim autonomously and collectively in chemical concentration gradients can be leveraged for a wide range of applications such as directed drug delivery, self-healing materials, and environmental remediation.
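A minimal 1-D Brownian dynamics sketch (far simpler than the simulations described above, with placeholder parameters) shows the kinesis mechanism: particles whose effective diffusivity varies with local chemical concentration accumulate where they move slowly, with no directional sensing at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 500, 1e-3, 50_000
x = rng.random(n)                         # motors start uniform on [0, 1]

def diffusivity(pos):
    # Illustrative: high fuel concentration on the left half means faster
    # swimming, hence a higher effective diffusivity there.
    return np.where(pos < 0.5, 1e-2, 1e-3)

for _ in range(steps):
    # Overdamped (Ito) update with position-dependent diffusivity.
    x = x + np.sqrt(2.0 * diffusivity(x) * dt) * rng.standard_normal(n)
    x = np.abs(x)                         # reflect at the left wall
    x = np.where(x > 1.0, 2.0 - x, x)     # reflect at the right wall

fraction_slow = float(np.mean(x >= 0.5))  # motors gather where D is low
```

Despite starting uniformly, most motors end up in the low-diffusivity half, i.e., net migration and concentration arise purely from the spatial variation of motility.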
Contributors: Wheat, Philip Matthew (Author) / Posner, Jonathan D (Thesis advisor) / Phelan, Patrick (Committee member) / Chen, Kangping (Committee member) / Buttry, Daniel (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Genes have widely different pertinences to the etiology and pathology of diseases and can thus be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked by the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms, with essentially no bias in performance when applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which from a computational perspective can impede the operation of association-based gene prioritization algorithms such as the one presented here. As a potential approach to overcoming this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. 
Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are still unknown for many transcription factors, and even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method using synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based prioritization model, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results await empirical validation, but computational validation against known targets is very positive.
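As an illustrative sketch of association-based ranking on a gene network, the snippet below uses random walk with restart, a common network-propagation scheme (not necessarily the dissertation's exact model): known disease genes receive restart mass, and the steady-state visiting probabilities rank all candidates by network proximity to the seeds. The five-gene network and all parameters are toy values.

```python
import numpy as np

# Toy 5-gene network: edges 0-1, 0-2, 1-2, 2-3, 3-4.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)           # column-normalized transition matrix

restart = 0.3                   # probability of jumping back to a seed gene
seeds = np.zeros(5)
seeds[0] = 1.0                  # gene 0 is the known disease gene

p = seeds.copy()
for _ in range(200):            # iterate to (near) steady state
    p = (1 - restart) * (W @ p) + restart * seeds

ranking = np.argsort(-p)        # genes ordered from most to least significant
```

Gene 2, which is densely connected to the seed's neighborhood, outranks the peripheral gene 4; this is the kind of association score that network sparsity can starve, motivating the transcription-factor-derived associations explored above.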
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In this thesis, the performance of a Hybrid AC System (HACS) is modeled and optimized. The HACS utilizes solar photovoltaic (PV) panels to help reduce the demand on the utility during peak hours. The system also includes an ice Thermal Energy Storage (TES) tank to accumulate cooling energy during off-peak hours. The AC runs continuously on grid power during off-peak hours to cool the house and to store thermal energy in the TES; during peak hours, the AC runs on power supplied from the PV and cools the house together with the energy stored in the TES. A higher initial cost is expected due to the additional components of the HACS (PV and TES), but a lower operating cost is expected due to higher energy efficiency, energy storage, and renewable energy utilization. A house cooled by the HACS requires a smaller AC unit (about 48% lower rated capacity) than a conventional AC system. To compare the cost effectiveness of the HACS with a regular AC system, time-of-use (TOU) utility rates are considered, as well as the cost of the system components and annual maintenance. The model shows that the HACS pays back its initial cost of $28k in about 6 years at an 8% APR, and saves about $45k in total cost compared to a regular AC system cooling the same house over the same 6-year period.
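The payback arithmetic can be sketched as a simple amortization loop. The $28k initial cost and 8% APR come from the abstract; the annual savings figure below is a hypothetical placeholder chosen only so the illustration lands near the abstract's roughly 6-year payback.

```python
# Back-of-the-envelope payback calculation for a financed system upgrade.
initial_cost = 28_000.0    # extra up-front cost of the HACS ($), from abstract
apr = 0.08                 # annual rate on the financed balance, from abstract
annual_savings = 6_500.0   # hypothetical yearly utility savings ($)

balance, years = initial_cost, 0
while balance > 0 and years < 50:
    # Accrue a year of interest, then apply that year's utility savings.
    balance = balance * (1 + apr) - annual_savings
    years += 1
```

With these inputs the balance goes negative in year 6, matching the order of magnitude the model reports; the real answer of course depends on the actual TOU rate schedule and maintenance costs.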
Contributors: Jubran, Sadiq (Author) / Phelan, Patrick (Thesis advisor) / Calhoun, Ronald (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

As the demand for power increases in populated areas, so will the demand for water. Current power plant technology relies heavily on the Rankine cycle in coal, nuclear, and solar thermal power systems, which ultimately use condensers to cool the steam in the system. In dry climates, the amount of water needed to cool the condenser can be extremely large, and current wet cooling technologies such as cooling towers lose water to evaporation. One alternative would be a radiative cooling system; more specifically, a system that utilizes the volumetric radiation emission from water to the night sky. This thesis analyzes the validity of a radiative cooling system that uses direct radiant emission to cool water. A brief study of potential infrared-transparent cover materials, such as polyethylene (PE) and polyvinyl chloride (PVC), was performed, and two different experiments to determine the radiative cooling power were developed and run. The results showed a minimum cooling power of 33.7 W/m2 for a vacuum-insulated glass system and 37.57 W/m2 for a tray system, with a maximum of 98.61 W/m2 at a point when the conduction and convection heat fluxes were considered to be zero. The results also showed that PE proved to be the best cover material. The minimum numerical results compared well with other studies performed in the field using similar techniques and materials. The results show that a radiative cooling system for a power plant could be feasible, provided the cover material selection is narrowed down, an ample amount of land is available, and an economic analysis proves it cost-competitive with conventional systems.
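For context, the ideal gray-body flux from a water surface to the night sky follows q = eps * sigma * (T_s^4 - T_sky^4). The emissivity and effective sky temperature below are illustrative assumptions, not values from the thesis; the ideal figure comes out well above the measured 33-99 W/m2 because covers, conduction, and convection gains reduce the net flux in practice.

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_cooling_flux(T_surface, T_sky, emissivity=0.96):
    """Net radiative flux (W/m^2) from a gray surface to an effective sky."""
    return emissivity * SIGMA * (T_surface**4 - T_sky**4)

# 20 C water radiating to an assumed -10 C effective clear-sky temperature.
q = net_cooling_flux(T_surface=293.0, T_sky=263.0)
```

The gap between this ideal bound and the measured cooling powers is essentially the penalty paid for the cover material and parasitic heat gains, which is why the cover study matters.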
Contributors: Overmann, William (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Taylor, Robert (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A low-cost expander-combustor device that takes compressed air, adds thermal energy, and then expands the gas to drive an electrical generator is to be designed by modifying an existing reciprocating spark-ignition engine. The engine used is the 6.5 hp Briggs and Stratton series 122600 engine. Compressed air stored in a tank at a particular pressure is introduced during the compression stage of the engine cycle to reduce pumping work; in the modified design, the intake and exhaust valve timings are altered to achieve this process. The time required to fill the combustion chamber with compressed air to the storage pressure immediately before spark, and the state of the air as a function of crank angle, are modeled numerically using a crank-step energy and mass balance model. The results are used to complete the engine cycle analysis under air-standard assumptions with an air-to-fuel ratio of 15 for gasoline. It is found that at the baseline storage conditions (280 psi, 70 °F) the modified engine does not meet the imposed constraint of staying below the maximum pressure of the unmodified engine, so a new storage pressure of 235 psi is recommended. This provides only a 7.7% increase in thermal efficiency for the same work output, and modifying the engine for this small efficiency gain is not recommended.
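The chamber-filling step rests on a textbook ideal-gas charging balance (a lumped idealization, not the thesis's crank-resolved model): filling a rigid volume from a large reservoir at T_res satisfies m2*cv*T2 = m1*cv*T1 + (m2 - m1)*cp*T_res, because the inflowing air carries enthalpy, not just internal energy. With P2*V = m2*R*T2 this fixes the final mass and temperature. The chamber volume and initial state below are illustrative numbers.

```python
# Air properties, J/(kg K); gamma = CP/CV ~ 1.4.
R, CV, CP = 287.0, 718.0, 1005.0
PSI = 6894.76  # Pa per psi

def fill_state(P1, T1, P2, T_res, V=5e-4):
    """Final mass and temperature after charging a rigid volume V (m^3) from
    an initial state (P1, T1) to pressure P2 using a reservoir at T_res."""
    m1 = P1 * V / (R * T1)               # air initially in the chamber
    # Substitute m2*T2 = P2*V/R into the energy balance and solve for m2:
    m2 = (CV * P2 * V / R - m1 * CV * T1 + m1 * CP * T_res) / (CP * T_res)
    T2 = P2 * V / (R * m2)
    return m2, T2

# Charge from ambient to the recommended 235 psi storage pressure at 70 F.
m2, T2 = fill_state(P1=101_325.0, T1=293.0, P2=235 * PSI, T_res=294.0)
```

The final temperature lands between the reservoir temperature and the gamma-times-reservoir limit (the classic result for filling an evacuated vessel), showing the flow-work heating that a crank-resolved model must track.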
Contributors: Joy, Lijin (Author) / Trimble, Steve (Thesis advisor) / Davidson, Joseph (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2011
Description


Increasing reliable produce farming and clean energy generation in the southwestern United States will be important for increasing the food supply for a growing population and reducing reliance on fossil fuels. Combining greenhouses with photovoltaic (PV) films allows food and electric power to be produced simultaneously. This study tests whether the combination of semi-transparent PV films and a transmission control layer can generate energy and spectrally control the transmission of light into a greenhouse. Testing the layer combinations under a variety of real-world conditions showed that light transmission into a greenhouse can be spectrally controlled: transmission could be controlled by an average of 11.8% across the spectrum of sunlight, with each semi-transparent PV film able to spectrally select the transmission of light in both the visible and near-infrared wavelength ranges. The layer combinations also generated energy at an average efficiency of 8.71% across all panels and testing conditions; the most efficient PV film was the blue-dyed one, at 9.12%. The study also suggests further improvements, including removing the red PV film due to its inefficiencies in spectral selection and running additional tests with new materials to optimize plant growth and energy generation under a variety of light conditions.

Contributors: Gunderson, Evan (Author) / Phelan, Patrick (Thesis director) / Villalobos, Rene (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05