Matching Items (524)
Description
Java is currently making its way into embedded systems and mobile devices such as those running Android. Programs written in Java are compiled into machine-independent binary class files containing bytecode, which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code running within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems, as it provides a mechanism for interacting with platform-specific libraries. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object directly through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
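The thesis defines its own pinning-based API, which is not reproduced here; as a rough illustration of the underlying idea, the minimal sketch below uses the standard JNI critical-array functions to let native code work on a Java byte array in place rather than paying for a copy (or serialization) across the JNI boundary. The Java class and method names are hypothetical.

```c
#include <jni.h>

/*
 * Minimal sketch, not the thesis's own API: GetPrimitiveArrayCritical asks
 * the VM for a direct pointer to the array's storage, pinning it where
 * possible so the native side can modify the data without an extra copy.
 */
JNIEXPORT void JNICALL
Java_com_example_NativeBuffer_invertBytes(JNIEnv *env, jobject self, jbyteArray data)
{
    /* Query the length before entering the critical region. */
    jsize len = (*env)->GetArrayLength(env, data);

    jboolean isCopy;
    jbyte *buf = (*env)->GetPrimitiveArrayCritical(env, data, &isCopy);
    if (buf == NULL) {
        return;  /* the VM could not provide the buffer; an error is pending */
    }

    for (jsize i = 0; i < len; i++) {
        buf[i] = (jbyte)~buf[i];  /* operate directly on the (possibly pinned) storage */
    }

    /* Mode 0: write back if a copy was handed out, then unpin/release. */
    (*env)->ReleasePrimitiveArrayCritical(env, data, buf, 0);
}
```

On the Java side this would correspond to a hypothetical native method void invertBytes(byte[] data) on com.example.NativeBuffer, loaded via System.loadLibrary; whether the VM actually pinned the array or fell back to copying is reported through the isCopy flag.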
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is the problem of easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides an opportunity to conveniently access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation presents an extensive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on both; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, result summarization/query expansion, etc.; and (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. Together, these works deliver significant technical contributions towards building a full-fledged search engine for structured data.
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S. (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H. V. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genes have widely different pertinence to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis for determining the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance compared to two well-known gene prioritization algorithms. Essentially no bias in performance was seen when the method was applied to diseases of diverse etiology (e.g., monogenic, polygenic, and cancer). The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which from a computational perspective can impede the operation of association-based gene prioritization algorithms such as the one presented here. As a potential approach to overcoming this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method using synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to a nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications through imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed that uses image registration techniques to provide better image classification. The method reduces the classification error rate by registering incoming images against previously obtained images before classification. The motivation is that images obtained in the same region will not differ significantly in their characteristics; registration therefore yields an image that matches the previously obtained image more closely, which in turn leads to better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the classification and registration stages, respectively. The implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP registration step does improve classification with naïve Bayes, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual dataset.
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Consider Steven Cryos' words: "When disaster strikes, the time to prepare has passed." Witnessing domestic water insecurity in events such as Hurricane Katrina, the instability in Flint, Michigan, and most recently the winter storms affecting millions across Texas, we decided to take action. The period between a water supply's disruption and its restoration is filled with anxiety, uncertainty, and distress, particularly since there is no clear indication of when, exactly, restoration will come. It is for this reason that Water Works now exists. As a team of students from diverse backgrounds, what started as an honors project with the Founders Lab at Arizona State University became the seed that will continue to mature into an economically sustainable business model supporting the optimistic visions and tenets of humanitarianism. By having conversations with community members, conducting market research, competing for funding, and fostering progress amid the COVID-19 pandemic, our team's problem-solving traversed the disciplines. The purpose of this paper is to educate readers about a unique solution to emerging issues of water insecurity that are nested across and within systems that could benefit from the introduction of a personal water reclamation system, to showcase our team's entrepreneurial journey, and to propose future directions that will allow this once-pedagogical exercise to continue fulfilling its mission: to heal, to hydrate, and to help bring safe water to everyone.

Contributors: Reitzel, Gage Alexander (Co-author) / Filipek, Marina (Co-author) / Sadiasa, Aira (Co-author) / Byrne, Jared (Thesis director) / Sebold, Brent (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / School of Human Evolution & Social Change (Contributor) / Historical, Philosophical & Religious Studies, Sch (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The Constitution is a document that was written over 200 years ago by a population that could never have imagined the technology or social advances of the 21st century. This creates a natural rift between the governing ideals of then and now, one that needs to be addressed. Rather than holding the values of the nation to a time when people were not considered citizens because of the color of their skin, updates need to be made to the Constitution itself. The need for change and the mechanisms for making it were both established by the Framers while creating and advancing the Constitution. The ideal process for pursuing these changes is split between the formal Article V amendment process and judicial activism. The amendment process has infinite scope for the changes that can be made, but because of the difficulty of passing any amendment through both Congress and the state legislatures, that process should be reserved for fundamental or structural changes only. Judicial activism, by way of Supreme Court decisions, is a method best applied to the protection of people's rights.

Created: 2021-05
Description

This thesis analyzes the myth of the apathetic youth voter and investigates whether voter suppression can explain low youth turnout. It is based on a study conducted with Arizona State University students to assess their level of voter turnout, their levels of political engagement, and whether they have experienced voter suppression. Respondents were also asked about the support ASU provides in helping with the voting process. Results indicate that Arizona State students have high levels of political engagement and that 1 in 5 ASU students have experienced voter suppression. Furthermore, ASU students as a whole are uncertain about the role ASU should play in supporting students with the voting process.

Created: 2021-05
Description

As political campaigning becomes increasingly digital and data-driven, data privacy has become a question of democratic governance. Yet Congress has not passed a comprehensive federal data privacy law, and even the strongest subnational data privacy laws exempt political campaigns from regulation.

This thesis examines how data privacy laws affect data-driven and digital political campaigning. Specifically, it investigates what information is incorporated into the political data ecosystem, how data privacy laws regulate the collection of this data, and how actors in the political data ecosystem respond to these laws. It examines both sector-specific federal law and subnational data protection regulation through a case study of California. This research suggests that although the California Consumer Privacy Act and the California Privacy Rights Act are landmark steps in American data protection, subnational data privacy law remains inhibited by the federal market-based approach.

Contributors: Scovil, Daiva Julija (Author) / Marchant, Gary (Thesis director) / Royal, K (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / School of Politics and Global Studies (Contributor) / Historical, Philosophical & Religious Studies, Sch (Contributor) / Watts College of Public Service & Community Solut (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Disinformation has long been a tactic used by the Russian government to achieve its goals. Today, Vladimir Putin aims to achieve several things: weaken the United States' standing on the world stage, relieve Western sanctions on himself and his inner circle, and reassert dominant influence over Russia's near abroad (the Baltics, Ukraine, etc.). This research analyzed disinformation in English, Spanish, and Russian, noting the dominant narratives and the geopolitical goals Russia hoped to achieve by destabilizing democracy in each country or region.

Created: 2021-05
Description

For my project, I delve into the relationship between Victor and the Monster, as well as the relationships Victor shares with other characters that were underdeveloped in Mary Shelley's original novel, Frankenstein. I examine these relationships in two components. The first is my own interpretation of Victor and the Monster's relationship in a creative writing piece that extends the novel as if Victor had lived rather than died in the Arctic, in order to explore the possibility of a more complex set of relationships between Victor and the Monster than simply creator and creation. My writing focuses on the development of their relationship once all they have left is each other. The second part of my project is an analytical component, in which I analyze and explain the reasoning behind my creative take on Victor and the Monster and their relationship within the novel, as well as Mary Shelley's intentions.

Contributors: Hodge Smith, Elizabeth Ann (Author) / Fette, Don (Thesis director) / Hoyt, Heather (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / Historical, Philosophical & Religious Studies, Sch (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05