Matching Items (770)

Description
Java is currently making its way into embedded systems and mobile devices such as Android phones. Programs written in Java are compiled into machine-independent binary class byte codes, which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code running within a JVM to interoperate with applications or libraries written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems, as it provides a mechanism for interacting with platform-specific libraries. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and we observed a 5-10% performance gain in the new Java Native Interface.
ContributorsChandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users must also know the data schema. Keyword search offers a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on both; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; and (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views.
These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
ContributorsLiu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created2011
Description
Process variations have become increasingly important for scaled technologies starting at 45nm. The increased variations are primarily due to random dopant fluctuations, line-edge roughness, and oxide thickness fluctuation. These variations greatly impact all aspects of circuit performance and pose a grand challenge to future robust IC design. To improve robustness, an efficient methodology is required that considers the effect of variations in the design flow. Analyzing the timing variability of complex circuits with HSPICE simulations is very time consuming. This thesis proposes an analytical model that predicts variability in CMOS circuits quickly and accurately. Several analytical models exist for estimating nominal delay performance, but very little work has been done to accurately model delay variability. The proposed model is comprehensive and estimates both nominal delay and variability as a function of transistor width, load capacitance, and transition time. First, models are developed for library gates, and their accuracy is verified with HSPICE simulations for the 45nm and 32nm technology nodes; the difference between predicted and simulated σ/μ for the library gates is less than 1%. Next, the accuracy of the nominal delay model is verified for larger circuits, including the ISCAS'85 benchmark circuits. For 45nm technology, the model's predictions are within 4% of HSPICE simulated results while taking a small fraction of the time. Delay variability is analyzed for various paths, and it is observed that non-critical paths can become critical because of Vth variation. Variability on the shortest paths shows that the rate of hold violations increases enormously with increasing Vth variation.
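The σ/μ variability metric reported above can be illustrated with a small Monte Carlo sketch. The first-order delay model, its coefficients, and the Vth distribution below are invented for illustration only; they are not the thesis's analytical model.

```python
import random
import statistics

def gate_delay(width_um, load_fF, slew_ps, vth_shift=0.0):
    """Toy first-order gate delay: grows with load capacitance and input slew,
    shrinks with drive width; a positive Vth shift weakens the drive.
    All coefficients are illustrative assumptions."""
    drive = width_um * (1.0 - 2.0 * vth_shift)
    return (0.69 * load_fF / drive) + 0.1 * slew_ps

def delay_variability(n=10000, sigma_vth=0.03):
    """Monte Carlo estimate of sigma/mu under random Vth variation."""
    samples = [gate_delay(1.0, 10.0, 20.0, random.gauss(0.0, sigma_vth))
               for _ in range(n)]
    mu = statistics.fmean(samples)
    return statistics.stdev(samples) / mu

random.seed(0)
ratio = delay_variability()  # sigma/mu of the sampled delays
```

An analytical model like the one the thesis proposes replaces this sampling loop with a closed-form estimate, which is what makes it so much faster than repeated HSPICE runs.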
ContributorsGummalla, Samatha (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which can impede, from a computational perspective, the operation of association-based gene prioritization algorithms such as the one presented here. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy.
We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests of the improvement method employing synthetic patterns under various conditions showed that it is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based prioritization model, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
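Association-based ranking of candidate genes against a known seed set is commonly sketched as a random walk with restart on the gene network. The toy network, seed set, and restart probability below are invented for illustration; this is a generic sketch of the idea, not the thesis's actual integrated-network model.

```python
# Toy gene network as an adjacency list (undirected, unweighted).
adjacency = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B", "E"],
    "E": ["D"],
}
seeds = {"A"}  # genes already known to be disease-related

def rwr_scores(adj, seed_set, restart=0.3, iters=100):
    """Random walk with restart: probability mass diffuses from the seeds
    along network edges; the restart term keeps it anchored to the seeds."""
    score = {g: (1.0 / len(seed_set) if g in seed_set else 0.0) for g in adj}
    for _ in range(iters):
        nxt = {g: 0.0 for g in adj}
        for g in adj:
            for nb in adj[g]:
                nxt[nb] += (1.0 - restart) * score[g] / len(adj[g])
        for s in seed_set:
            nxt[s] += restart / len(seed_set)
        score = nxt
    return score

scores = rwr_scores(adjacency, seeds)
# Candidate genes most strongly associated with the seed set rank highest.
ranking = sorted((g for g in scores if g not in seeds),
                 key=scores.get, reverse=True)
```

Here gene "B", directly connected to the seed, outranks the distant "E"; on a sparse network little mass reaches remote genes, which is exactly the limitation the transcription-factor associations are meant to alleviate.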
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011
Description
Current sensing is one of the most desirable features of contemporary current- or voltage-mode controlled DC-DC converters. Current sensing can be used for overload protection, multi-stage converter load balancing, current-mode control, multi-phase converter current sharing, load-independent control, power efficiency improvement, etc. There are a handful of existing approaches to current sensing, such as external resistor sensing, triode-mode current mirroring, observer sensing, Hall-effect sensors, transformers, DC resistance (DCR) sensing, and Gm-C filter sensing. However, each method has one or more issues that prevent it from being successfully applied in DC-DC converters, e.g., low accuracy, discontinuous sensing, high sensitivity to switching noise, high cost, the requirement of known external power filter components, or bulky size. In this dissertation, an offset-independent inductor Built-In Self Test (BIST) architecture is proposed that measures the inductor's inductance and DCR. The measured DCR enables the proposed continuous, lossless, average current sensing scheme. A digital Voltage Mode Control (VMC) DC-DC buck converter with the inductor BIST and current sensing architecture was designed, fabricated, and experimentally tested. The average measurement errors for inductance, DCR, and current sensing are 2.1%, 3.6%, and 1.5%, respectively. Of the 3.5mm by 3.5mm die area, the inductor BIST and current sensing circuits, including related pins, consume only 5.2%. BIST mode draws 40mA for a maximum period of 200us upon start-up, and continuous current sensing consumes about 400uA of quiescent current. This buck converter utilizes an adaptive compensator, which updates itself internally so that the overall system has a proper loop response over a large range of inductance and load current.
Next, a digital Average Current Mode Control (ACMC) DC-DC buck converter with the proposed average current sensing circuits was designed and tested. To reduce chip area and power consumption, a 9-bit hybrid Digital Pulse Width Modulator (DPWM) that uses a mixed-mode DLL (MDLL) is also proposed. The DC-DC converter has a maximum 12V input, a 1-11V output range, and a maximum 3W output power. The maximum error of one least significant bit (LSB) of delay in the proposed DPWM is less than 1%.
ContributorsLiu, Tao (Author) / Bakkaloglu, Bertan (Thesis advisor) / Ozev, Sule (Committee member) / Vermeire, Bert (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created2011
Description
A workload-aware, low-power neuromorphic controller for dynamic power and thermal management in VLSI systems is presented. The neuromorphic controller predicts future workload and temperature values from past values and CPU performance counters and preemptively regulates supply voltage and frequency. System-level measurements from state-of-the-art commercial microprocessors are used to obtain workload, temperature, and CPU performance counter values. The controller is designed and simulated using circuit-design and synthesis tools. At the device level, on-chip planar inductors suffer from low inductance while occupying a large chip area. On-chip inductors with integrated magnetic materials are designed, simulated, and fabricated to explore performance-efficiency trade-offs and potential applications such as resonant clocking and on-chip voltage regulation. A system-level study is conducted to evaluate the effect of an on-chip voltage regulator employing magnetic inductors as the output filter. It is concluded that the neuromorphic power controller is beneficial for fine-grained per-core power management in conjunction with on-chip voltage regulators utilizing scaled magnetic inductors.
ContributorsSinha, Saurabh (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Yu, Hongbin (Committee member) / Christen, Jennifer B. (Committee member) / Arizona State University (Publisher)
Created2011
Description
Before the COVID-19 pandemic, there was a great need for United States restaurants to "go green" because consumers frequently ate out. Unfortunately, COVID-19 has caused this initiative to lose traction: while the number of customers ordering takeout has increased, there is less emphasis on sustainability.

Plastic is known for its harmful effects on the environment and the extreme length of time it takes to decompose. According to the International Union for Conservation of Nature (IUCN), almost 8 million tons of plastic end up in the oceans every year, threatening not only the safety of marine species but also human health. Modern food packaging materials include a blend of synthetic ingredients that trickle into our daily lives and pollute the air, water, and land. Single-use plastic items slowly degrade into microplastics and can take hundreds of years to biodegrade.

Due to COVID-19, restaurants have switched to takeout and delivery to adapt to the new business environment and the guidelines mandated by the Centers for Disease Control and Prevention (CDC). These guidelines include notices encouraging social distancing and mask-wearing, mandated masks for employees, and easy access to sanitary supplies. This cultural shift is motivating restaurants to search for a quick, cheap, and easy way to meet the increased demand for takeout and delivery, which increases their consumption of items such as plastic and paper bags, styrofoam containers, and beverage cups. Plastic is the most popular takeout material because of its price and durability, and because it limits contamination and is easy to dispose of.

Almost all food products come in packaging, and this packaging is more often than not single-use. Food is the largest market in the packaging industry, accounting for roughly two-thirds of packaging material.

The US Environmental Protection Agency reports that almost half of all municipal solid waste is made up of food and food packaging materials. In 2014, over 162 million tons of packaging material waste was generated in the United States. This waste typically contains toxic inks and dyes that leach into groundwater and soil. As they degrade, pieces of plastic absorb toxins like PCBs and pesticides and, in turn, release toxic chemicals like Bisphenol-A. Even before being thrown away, packaging harms the environment: its production consumes resources such as petroleum and chemicals and releases toxic byproducts, including sludge containing contaminants, greenhouse gases, and heavy metal and particulate matter emissions. Unlike many other industries, plastic manufacturing has actually increased production, as demand has risen, especially in the food industry, to keep things sanitary. This increase in production is reflected in an increase in waste.

Although restaurants have implemented their own sustainability initiatives to reduce their carbon footprint, the pandemic has unfortunately forced them to digress. For example, Just Salad, a fast-food restaurant chain, incentivized customers with discounted meals to use reusable bowls, saving over 75,000 pounds of plastic per year; when the pandemic hit, the company halted the program to pivot toward takeout and delivery. This effect is apparent on an international scale. Singapore was in lockdown for eight weeks, and during that time 1,470 tons of takeout and food delivery plastic waste were thrown out. In addition, the Hong Kong environmental group Greeners Action surveyed 2,000 people in April, and the results showed that people are ordering out twice as much as last year, doubling the use of plastic.

However, is this surge of plastic usage necessary in the food industry, or are there methods that can reduce waste production? The COVID-19 pandemic fractured the food system's supply chain, involving food, factory, and farm. This thesis tackles these questions by analyzing the supply chains of the food industry and identifying areas for sustainable opportunity; the resulting recommendations highlight areas for green improvement.

ContributorsDeng, Aretha (Co-author) / Tao, Adlar (Co-author) / Vargas, Cassandra (Co-author) / Printezis, Antonios (Thesis director) / Konopka, John (Committee member) / Department of Supply Chain Management (Contributor) / School of International Letters and Cultures (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
In recent years, advanced metrics have come to dominate Major League Baseball. One such metric, the Pythagorean Win-Loss Formula, is commonly used by fans, reporters, analysts, and teams alike to estimate a team's expected winning percentage from its runs scored and runs allowed. However, this method is not perfect and shows notable room for improvement. In particular, it can be affected drastically by a single blowout game, a game in which one team significantly outscores its opponent.

We hypothesize that meaningless runs scored in blowouts harm the predictive power of Pythagorean Win-Loss and similar win expectancy statistics such as the Linear Formula for Baseball and BaseRuns. We developed a win probability-based cutoff approach that tallies the score of each game once a certain win probability threshold is passed, effectively removing those meaningless runs from a team's season-long runs scored and runs allowed totals. These truncated totals were then inserted into the Pythagorean Win-Loss and Linear Formulas and tested against the base models.

The preliminary results show that, while certain runs are more meaningful than others depending on the situation in which they are scored, the base models predicted future record more accurately than our truncated versions. For now, there is not enough evidence to either confirm or reject our hypothesis, and this paper suggests several potential strategies for improving the results.

Finally, we address how these results speak to the importance of responsibility and restraint when using advanced statistics in reporting.
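The Pythagorean formula and the truncation idea can be sketched in a few lines of Python. The margin-based cap below is an illustrative stand-in for the thesis's win-probability threshold, and the sample scores are invented.

```python
def pythagorean_pct(runs_scored, runs_allowed, exponent=2.0):
    """Classic Pythagorean expected winning percentage:
    RS^k / (RS^k + RA^k), with k = 2 in the original formula."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

def truncated_totals(games, cap=6):
    """Illustrative stand-in for the win-probability cutoff: once a game's
    margin exceeds `cap`, further runs by the leader are treated as
    meaningless and clipped before season totals are accumulated."""
    rs_total = ra_total = 0
    for scored, allowed in games:
        if abs(scored - allowed) > cap:
            if scored > allowed:
                scored = allowed + cap   # clip a blowout win
            else:
                allowed = scored + cap   # clip a blowout loss
        rs_total += scored
        ra_total += allowed
    return rs_total, ra_total

# Hypothetical per-game (runs scored, runs allowed) results,
# including one 15-2 blowout that inflates the raw totals.
games = [(5, 3), (2, 1), (15, 2), (4, 6), (3, 7)]
raw = pythagorean_pct(sum(s for s, _ in games), sum(a for _, a in games))
trimmed = pythagorean_pct(*truncated_totals(games))
```

Comparing `raw` and `trimmed` over a full season against the team's actual record is the kind of test the study describes, with the win-probability threshold in place of this crude margin cap.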

ContributorsIversen, Joshua Allen (Author) / Satpathy, Asish (Thesis director) / Kurland, Brett (Committee member) / Department of Information Systems (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
Edge computing is a new and growing market in which Company X has an opportunity to expand its presence. In this paper, we compare several external research studies to better quantify the Total Addressable Market of the edge computing space. Furthermore, we highlight which segments within edge computing have the most opportunities for growth and identify a specific market strategy that Company X could pursue to capture market share within the most opportunistic segment.

ContributorsRaimondi, Ronald Frank (Co-author) / Hamkins, Sean (Co-author) / Gandolfi, Michael (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / Department of Information Systems (Contributor) / Department of Management and Entrepreneurship (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
Edge computing is a new and growing market in which Company X has an opportunity to expand its presence. In this paper, we compare several external research studies to better quantify the Total Addressable Market of the edge computing space. Furthermore, we highlight which segments within edge computing have the most opportunities for growth and identify a specific market strategy that Company X could pursue to capture market share within the most opportunistic segment.

ContributorsHamkins, Sean (Co-author) / Raimondi, Ronnie (Co-author) / Gandolfi, Micheal (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Mike (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor, Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05