Matching Items (274)
Description
Currently Java is making its way into embedded systems and mobile devices such as Android. Programs written in Java are compiled into machine-independent binary bytecode classes, which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI), which allows Java code running within a JVM to interoperate with applications or libraries written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems because it provides a mechanism to interact with platform-specific libraries. This thesis addresses the overhead incurred in JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access objects directly by reference by pinning their memory locations. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
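As a hedged illustration of the kind of pinned access the abstract describes (not the thesis's actual API), the standard JNI critical-region calls let native code read a Java array in place rather than copying it or going through reflection; the Java class and method names below are hypothetical:

```c
/* Hypothetical sketch: native code summing a Java int[] without a copy by
 * pinning it via the standard JNI critical-region calls. The class and
 * method names are illustrative, not from the thesis. */
#include <jni.h>

JNIEXPORT jlong JNICALL
Java_Demo_sumPinned(JNIEnv *env, jclass cls, jintArray arr)
{
    jlong sum = 0;
    jsize n = (*env)->GetArrayLength(env, arr);
    (void)cls;
    /* Pin the array (the VM may still copy if it cannot pin) and read it in place. */
    jint *elems = (*env)->GetPrimitiveArrayCritical(env, arr, NULL);
    if (elems == NULL)
        return 0; /* allocation failure */
    for (jsize i = 0; i < n; i++)
        sum += elems[i];
    /* JNI_ABORT: release the pin without copying anything back. */
    (*env)->ReleasePrimitiveArrayCritical(env, arr, elems, JNI_ABORT);
    return sum;
}
```

The usual price of pinning applies: between the Get and Release calls, native code must not block or call back into the JVM.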
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the data management systems' functionality. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users must also know the data schema. Keyword search provides an opportunity to conveniently access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences not indicated in the query), as well as efficiency challenges due to the large search space. This dissertation presents an extensive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results from both; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Concrete columns constitute the fundamental supports of buildings, bridges, and various other infrastructure, and their failure could lead to the collapse of the entire structure. As such, great effort goes into improving the fire resistance of such columns. In a time-sensitive fire situation, delaying the failure of critical load-bearing structures can increase the time available for evacuating occupants, recovering property, and accessing the fire. Much work has been done on improving the structural performance of concrete, including reducing column sizes and providing a safer structure. As a result, high-strength (HS) concrete has been developed to fulfill the needs of such improvements. HS concrete differs from normal-strength (NS) concrete in that it has a higher stiffness, lower permeability, and greater durability. This, unfortunately, has resulted in poor performance under fire: the lower permeability allows water vapor to build up, causing HS concrete to suffer explosive spalling under rapid heating. In addition, the coefficient of thermal expansion (CTE) of HS concrete is lower than that of NS concrete. In this study, the effects of introducing a region of crumb rubber concrete into a steel-reinforced concrete column were analyzed. Including crumb rubber concrete in a column greatly increases the thermal resistivity of the overall column, reducing the core temperature as well as the rate at which the column heats. Different cases were analyzed while varying the position of the crumb rubber region to characterize its effect on the improvement in fire resistance. Finite element analysis was used to calculate the temperature and strain distributions over time across the column's cross-sectional area, with specific interest in the steel-concrete interface region. Among the cases investigated, the improvement in time before failure ranged from 32 to 45 minutes.
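A minimal sketch of the kind of transient conduction calculation such a finite element analysis performs, reduced here to a 1-D explicit finite-difference march through a concrete cover layer; the material properties and boundary temperatures are assumed round numbers, not the thesis's inputs:

```c
/* Hedged 1-D explicit finite-difference sketch of transient heating through a
 * 0.1 m concrete cover layer. Material properties and boundary temperatures
 * are assumed round numbers, not the thesis's data. */
#include <stdio.h>

#define N 51 /* grid nodes across the layer */

int main(void)
{
    double T[N], Tn[N];
    double alpha = 5.0e-7;             /* assumed thermal diffusivity, m^2/s */
    double dx = 0.1 / (N - 1);         /* node spacing, m */
    double dt = 0.4 * dx * dx / alpha; /* stable explicit time step */
    int i, steps;

    for (i = 0; i < N; i++) T[i] = 20.0;            /* initial temperature, C */
    for (steps = 0; steps * dt < 3600.0; steps++) { /* one hour of exposure */
        T[0] = 800.0;    /* fire-side surface, C (assumed) */
        T[N - 1] = 20.0; /* interior side held at ambient */
        for (i = 1; i < N - 1; i++)
            Tn[i] = T[i] + alpha * dt / (dx * dx) * (T[i+1] - 2.0 * T[i] + T[i-1]);
        for (i = 1; i < N - 1; i++) T[i] = Tn[i];
    }
    printf("Temperature at mid-depth after 1 h: %.0f C\n", T[N / 2]);
    return 0;
}
```

Lowering `alpha` to mimic a crumb rubber region slows the temperature rise at the reinforcement depth, which is the effect the study exploits.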
Contributors: Ziadeh, Bassam Mohammed (Author) / Phelan, Patrick (Thesis advisor) / Kaloush, Kamil (Thesis advisor) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A relatively simple subset of nanotechnology - nanofluids - can be obtained by adding nanoparticles to conventional base fluids. The promise of these fluids stems from the fact that relatively low particle loadings (typically <1% volume fraction) can significantly change the properties of the base fluid. This research explores how low-volume-fraction nanofluids, composed of common base fluids, interact with light energy. Comparative experimentation and modeling reveal that absorbing light volumetrically (i.e., in the depth of the fluid) is fundamentally different from surface-based absorption. Depending on the particle material, size, shape, and volume fraction, a fluid can be changed from being mostly transparent to sunlight (as is the case for water, alcohols, oils, and glycols) to being a very efficient volumetric absorber of sunlight. This research also visualizes how nanofluids undergo localized phase-change phenomena under high levels of irradiation. For this, images were taken of bubble formation and boiling in aqueous nanofluids heated by a hot wire and by a laser, and infrared thermography was used to quantify the phenomenon. Overall, this research demonstrates the possibility of novel solar collectors in which the working fluid directly absorbs light energy and undergoes phase change in a single step. Modeling results indicate that these improvements can increase a solar thermal receiver's efficiency by up to 10%.
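A hedged sketch of why volumetric absorption differs from surface absorption: under a Beer-Lambert model, the absorbed fraction grows with depth, so the fluid itself becomes the receiver; the extinction coefficient below is a single assumed value, not measured data from this work:

```c
/* Hedged Beer-Lambert sketch: fraction of incident light absorbed within the
 * depth of a nanofluid. The extinction coefficient is assumed, not measured. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double alpha = 50.0;                        /* assumed extinction coefficient, 1/m */
    double depth[] = {0.001, 0.01, 0.05, 0.10}; /* fluid depths, m */
    for (int i = 0; i < 4; i++) {
        double f = 1.0 - exp(-alpha * depth[i]); /* absorbed fraction of I0 */
        printf("depth %.3f m -> %5.1f%% absorbed\n", depth[i], 100.0 * f);
    }
    return 0;
}
```

Raising `alpha` via particle loading shifts the same fluid from mostly transparent to an efficient volumetric absorber, which is the tunability the abstract describes.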
Contributors: Taylor, Robert (Author) / Phelan, Patrick E (Thesis advisor) / Adrian, Ronald (Committee member) / Trimble, Steve (Committee member) / Posner, Jonathan (Committee member) / Maracas, George (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Locomotion of microorganisms is commonly observed in nature. Although microorganism locomotion is commonly attributed to mechanical deformation of solid appendages, in 1956 Nobel Laureate Peter Mitchell proposed that an asymmetric ion flux on a bacterium's surface could generate electric fields that drive locomotion via self-electrophoresis. Recent advances in nanofabrication have enabled the engineering of synthetic analogues: bimetallic colloidal particles that swim due to the asymmetric ion flux Mitchell originally proposed. Bimetallic colloidal particles swim through aqueous solutions by converting chemical fuel into fluid motion through asymmetric electrochemical reactions. This dissertation presents novel bimetallic motor fabrication strategies, motor functionality, and a study of the motors' collective behavior in chemical concentration gradients. Brownian dynamics simulations and experiments show that the motors exhibit chemokinesis, a motile response to chemical gradients that results in net migration and concentration of particles. Chemokinesis is typically observed in living organisms and is distinct from chemotaxis in that the particles have no directional sensing. The synthetic motor chemokinesis observed in this work is due to variation in the motors' velocity and effective diffusivity as a function of the fuel and salt concentrations. Static concentration fields are generated in microfluidic devices fabricated with porous walls. Nanoscale particles that swim autonomously and collectively in chemical concentration gradients can be leveraged for a wide range of applications, such as directed drug delivery, self-healing materials, and environmental remediation.
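A hedged sketch of kinesis without directional sensing, loosely in the spirit of the Brownian dynamics simulations mentioned above: particles take unbiased steps whose magnitude is set by a position-dependent effective diffusivity standing in for the fuel field; all parameters are illustrative:

```c
/* Hedged 1-D Brownian dynamics sketch of kinesis: unbiased steps whose size
 * depends on a position-dependent diffusivity D(x). All values illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double D(double x) { return 1.0 + 4.0 * x; } /* assumed: D grows with x */

static double gauss(void) /* Box-Muller standard normal */
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void)
{
    const int NP = 5000, NS = 2000;
    const double dt = 1e-5;
    int low = 0;

    for (int p = 0; p < NP; p++) {
        double x = 0.5; /* every particle starts mid-domain */
        for (int s = 0; s < NS; s++) {
            x += sqrt(2.0 * D(x) * dt) * gauss(); /* unbiased diffusive step */
            if (x < 0.0) x = -x;        /* reflecting walls at x = 0 and x = 1 */
            if (x > 1.0) x = 2.0 - x;
        }
        if (x < 0.5) low++;
    }
    printf("%.1f%% of particles end in the low-D half\n", 100.0 * low / NP);
    return 0;
}
```

In this Ito-type simulation the population piles up where the effective diffusivity is low, a purely kinetic redistribution with no gradient sensing by any individual particle.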
Contributors: Wheat, Philip Matthew (Author) / Posner, Jonathan D (Thesis advisor) / Phelan, Patrick (Committee member) / Chen, Kangping (Committee member) / Buttry, Daniel (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease-significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the varying levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results await empirical validation, but computational validation using known targets is very positive.
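A hedged sketch of association-based ranking over a network, in a random-walk-with-restart style (the thesis's integrated-network model is more elaborate); the 4-gene toy network and all weights are illustrative:

```c
/* Hedged random-walk-with-restart sketch of association-based gene ranking:
 * seed scores from known disease genes diffuse over a weighted network and
 * candidates are ranked by converged score. The network is a toy. */
#include <stdio.h>

#define NG 4

int main(void)
{
    /* Column-stochastic transition matrix for edges 0-1, 0-2, 1-2, 1-3:
     * W[i][j] = probability of stepping from gene j to gene i. */
    double W[NG][NG] = {
        {0.0, 1.0/3.0, 0.5, 0.0},
        {0.5, 0.0,     0.5, 1.0},
        {0.5, 1.0/3.0, 0.0, 0.0},
        {0.0, 1.0/3.0, 0.0, 0.0},
    };
    double seed[NG] = {1.0, 0.0, 0.0, 0.0}; /* gene 0 is the known disease gene */
    double s[NG]    = {1.0, 0.0, 0.0, 0.0}, sn[NG];
    double restart  = 0.3; /* probability of jumping back to the seed set */

    for (int it = 0; it < 100; it++) { /* iterate to (near) convergence */
        for (int i = 0; i < NG; i++) {
            sn[i] = restart * seed[i];
            for (int j = 0; j < NG; j++)
                sn[i] += (1.0 - restart) * W[i][j] * s[j];
        }
        for (int i = 0; i < NG; i++) s[i] = sn[i];
    }
    for (int i = 0; i < NG; i++)
        printf("gene %d: score %.3f\n", i, s[i]); /* higher = stronger association */
    return 0;
}
```

The sparsity problem the abstract raises is visible even here: a gene with no path to the seed set keeps a zero score, which is what the transcription-factor-derived associations are meant to remedy.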
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this thesis the performance of a Hybrid AC System (HACS) is modeled and optimized. The HACS utilizes solar photovoltaic (PV) panels to help reduce the demand from the utility during peak hours. The system also includes an ice Thermal Energy Storage (TES) tank to accumulate cooling energy during off-peak hours. The AC runs continuously on grid power during off-peak hours to cool the house and to store thermal energy in the TES. During peak hours, the AC runs on power supplied by the PV panels and cools the house together with the energy stored in the TES. A higher initial cost is expected due to the additional HACS components (PV and TES), but the operating cost is lower due to higher energy efficiency, energy storage, and renewable energy utilization. A house cooled by the HACS requires a smaller AC unit (about 48% less rated capacity) than a conventional AC system. To compare the cost effectiveness of the HACS with a regular AC system, time-of-use (TOU) utility rates are considered, as well as the cost of the system components and annual maintenance. The model shows that the HACS pays back its initial cost of $28k in about 6 years at an 8% APR, and saves about $45k in total cost compared to a regular AC system cooling the same house over the same 6-year period.
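A hedged back-of-the-envelope version of the payback calculation: carry the $28k initial cost at 8% APR and retire it with annual utility savings; the $6.5k annual savings figure is hypothetical, chosen only to reproduce the roughly 6-year horizon:

```c
/* Hedged payback sketch for the financed HACS. The annual savings figure is
 * a hypothetical value, not taken from the thesis. */
#include <stdio.h>

int main(void)
{
    double balance = 28000.0; /* initial HACS cost, from the abstract */
    double apr     = 0.08;    /* financing rate, from the abstract */
    double savings = 6500.0;  /* hypothetical annual utility savings */
    int years = 0;

    while (balance > 0.0 && years < 50) {
        balance = balance * (1.0 + apr) - savings; /* accrue interest, apply savings */
        years++;
    }
    printf("Payback in about %d years\n", years);
    return 0;
}
```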
Contributors: Jubran, Sadiq (Author) / Phelan, Patrick (Thesis advisor) / Calhoun, Ronald (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As the demand for power increases in populated areas, so will the demand for water. Current power plant technology relies heavily on the Rankine cycle in coal, nuclear, and solar thermal power systems, which ultimately use condensers to cool the steam in the system. In dry climates, the amount of water needed to cool the condenser can be extremely large, and current wet cooling technologies such as cooling towers lose water through evaporation. One alternative is a radiative cooling system; more specifically, a system that utilizes the volumetric radiation emission from water to the night sky. This thesis analyzes the validity of a radiative cooling system that uses direct radiant emission to cool water. A brief study of potential infrared-transparent cover materials, such as polyethylene (PE) and polyvinyl chloride (PVC), was performed, and two different experiments to determine the radiative cooling power were developed and run. The results showed a minimum cooling power of 33.7 W/m2 for a vacuum-insulated glass system and 37.57 W/m2 for a tray system, with a maximum of 98.61 W/m2 at a point where the conduction and convection heat fluxes were considered to be zero. PE proved to be the best cover material. The minimum numerical results compare well with other studies in the field using similar techniques and materials. The results show that a radiative cooling system for a power plant could be feasible, provided the cover material selection is narrowed down, ample land is available, and an economic analysis proves it cost-competitive with conventional systems.
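A hedged order-of-magnitude check of these cooling powers using the Stefan-Boltzmann law; the emissivity and effective sky temperature are assumed values, not the thesis's measurements:

```c
/* Hedged estimate of nighttime radiative cooling from a water surface via the
 * Stefan-Boltzmann law. Emissivity and sky temperature are assumed. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double sigma = 5.670e-8; /* Stefan-Boltzmann constant, W/(m^2 K^4) */
    double eps   = 0.95;  /* assumed emissivity of water */
    double T_w   = 288.0; /* water temperature, K (15 C) */
    double T_sky = 270.0; /* assumed effective clear-sky temperature, K */
    double q = eps * sigma * (pow(T_w, 4.0) - pow(T_sky, 4.0));
    printf("Net radiative cooling: %.1f W/m2\n", q);
    return 0;
}
```

The result of roughly 84 W/m2 lands within the 33.7-98.61 W/m2 range reported above.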
Contributors: Overmann, William (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Taylor, Robert (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A low-cost expander/combustor device that takes compressed air, adds thermal energy, and then expands the gas to drive an electrical generator is designed by modifying an existing reciprocating spark-ignition engine, the 6.5 hp Briggs and Stratton series 122600. Compressed air stored in a tank at a particular pressure is introduced during the compression stage of the engine cycle to reduce pumping work; in the modified design, the intake and exhaust valve timings are changed to achieve this. The time required to fill the combustion chamber with compressed air to the storage pressure immediately before spark, and the state of the air as a function of crank angle, are modeled numerically using a crank-step energy and mass balance model. The results are used to complete the engine cycle analysis based on air-standard assumptions and an air-to-fuel ratio of 15 for gasoline. It is found that at the baseline storage conditions (280 psi, 70 °F) the modified engine does not stay below the maximum pressure of the unmodified engine, so a new storage pressure of 235 psi is recommended. This provides only a 7.7% increase in thermal efficiency for the same work output, and modifying the engine for such a small efficiency gain is not recommended.
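For reference, a hedged air-standard baseline of the kind the cycle analysis builds on: ideal Otto-cycle thermal efficiency as a function of compression ratio; the ratios shown are illustrative, not the 122600's actual geometry:

```c
/* Hedged air-standard reference: ideal Otto-cycle efficiency eta = 1 - r^(1-gamma).
 * The compression ratios are illustrative values. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double gamma = 1.4;      /* specific-heat ratio of air */
    double r[] = {6.0, 8.0, 10.0}; /* assumed compression ratios */
    for (int i = 0; i < 3; i++) {
        double eta = 1.0 - pow(r[i], 1.0 - gamma);
        printf("r = %4.1f -> eta = %.1f%%\n", r[i], 100.0 * eta);
    }
    return 0;
}
```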
Contributors: Joy, Lijin (Author) / Trimble, Steve (Thesis advisor) / Davidson, Joseph (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
S-Taliro is a fully functional Matlab toolbox that searches for trajectories of minimal robustness in hybrid systems implemented as either m-functions or Simulink/Stateflow models. Trajectories with minimal robustness are found through automatic testing of hybrid systems against user specifications. In this work we use Metric Temporal Logic (MTL) to describe the user specifications for the hybrid systems, and we then try to falsify the MTL specification through global minimization of a robustness metric. Global minimization is carried out using stochastic optimization algorithms such as Monte-Carlo (MC) and Extended Ant Colony Optimization (EACO). Irrespective of the type of model provided as input to S-Taliro, the user needs to specify the MTL specification, the initial conditions, and the bounds on the inputs. S-Taliro uses this information to generate test inputs, which are used to simulate the system. The simulation trace is then provided as input to Taliro, which computes a robustness estimate of the MTL formula. Global minimization of this robustness metric generates new test inputs whose simulation traces are closer to falsifying the MTL formula. A trace with a negative robustness value falsifies the MTL formula; traces with positive robustness values are also of great importance because they indicate how robust the system is against the given specification. S-Taliro has been seamlessly integrated into the Matlab environment, which is extensively used for model-based development of control software. Moreover, the toolbox has been developed in a modular fashion, so adding new optimization algorithms is easy and straightforward. In this work I present the architecture of S-Taliro and its operation on a few benchmark problems.
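A hedged sketch of the falsification loop described above, with plain Monte-Carlo sampling standing in for S-Taliro's optimizers and a toy robustness function standing in for the Simulink run plus Taliro's monitor; everything named here is illustrative:

```c
/* Hedged sketch of falsification by robustness minimization. The function
 * simulate_and_robustness() is a toy stand-in for simulating the model and
 * monitoring the MTL formula; a negative return value falsifies the spec. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define DIM 2 /* number of search variables (initial conditions / inputs) */

/* Toy robustness: "stay outside the disc of radius 0.1 around (0.8, 0.8)". */
static double simulate_and_robustness(const double x[DIM])
{
    double dx = x[0] - 0.8, dy = x[1] - 0.8;
    return sqrt(dx * dx + dy * dy) - 0.1; /* negative inside the unsafe set */
}

int main(void)
{
    double lo[DIM] = {0.0, 0.0}, hi[DIM] = {1.0, 1.0}; /* user-given bounds */
    double best = 1e30;

    for (int iter = 0; iter < 1000; iter++) { /* plain Monte-Carlo sampling */
        double x[DIM];
        for (int d = 0; d < DIM; d++)
            x[d] = lo[d] + (hi[d] - lo[d]) * rand() / (double)RAND_MAX;
        double rob = simulate_and_robustness(x);
        if (rob < best)
            best = rob;
        if (best < 0.0) { /* specification falsified by this input */
            printf("Falsified at iteration %d, robustness %.4f\n", iter, best);
            return 0;
        }
    }
    printf("No falsifying input found; minimum robustness %.4f\n", best);
    return 0;
}
```

Swapping the sampler for a smarter optimizer (as S-Taliro's modular design allows) changes only how candidate inputs are proposed; the robustness oracle and the negative-value stopping test stay the same.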
Contributors: Annapureddy, Yashwanth Singh Rahul (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2011