Matching Items (341)
Description
Java is currently making its way into embedded systems and mobile devices such as those running Android. Programs written in Java are compiled into machine-independent binary class files (bytecode), which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code running within a JVM to interoperate with applications or libraries written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems because it provides a mechanism for interacting with platform-specific libraries. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access objects through their references by pinning their memory locations. The Android emulator was used to evaluate the performance of these techniques, and we observed a 5-10% performance gain with the new Java Native Interface.
ContributorsChandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the data management systems' functionalities. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users must also know the data schema. Keyword search provides a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results from the relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; and (3) resolving the efficiency challenge of processing keyword search on structured data by utilizing and efficiently maintaining materialized views.
These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
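The core idea of keyword search over structured data can be illustrated with a minimal sketch (an inverted index with AND semantics; this is a generic illustration, not the dissertation's algorithms, and the toy relation below is hypothetical):

```python
from collections import defaultdict

# Toy "structured data": tuples of a relation, keyed by tuple id.
rows = {
    1: "keyword search over structured data",
    2: "materialized view maintenance",
    3: "keyword query result snippets",
}

def build_index(rows):
    """Inverted index: map each keyword to the set of tuple ids containing it."""
    index = defaultdict(set)
    for rid, text in rows.items():
        for word in text.lower().split():
            index[word].add(rid)
    return index

def keyword_search(index, query):
    """Ids of tuples containing every query keyword (AND semantics)."""
    postings = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*postings) if postings else set()

index = build_index(rows)
matches = keyword_search(index, "keyword structured")  # only tuple 1 has both terms
```

A real engine must additionally rank the matches, join tuples across relations, and decide what to return, which is precisely where the ambiguities discussed above arise.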
ContributorsLiu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked by the association they exhibit with the given set of known genes. Experimental and computational data of various kinds have differing reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the varying levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data. Biological networks are often sparse, which from a computational perspective can impede the operation of association-based gene prioritization algorithms such as the one presented here. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets.
Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are still unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method using synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests incorporating knowledge from transcription factor binding sites into our network-based prioritization model gave encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results await empirical validation, but computational validation using known targets is very positive.
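A common way to rank candidates by association with known disease genes over a network is a random walk with restart; the sketch below illustrates that generic idea only (it is not the specific integrated-network method developed in this work, and the toy graph and parameter values are assumptions):

```python
def random_walk_with_restart(adj, seeds, restart=0.3, iters=200):
    """Score every gene by diffusing probability mass from the seed
    (known disease) genes over an undirected association network."""
    p0 = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
    p = dict(p0)
    for _ in range(iters):
        p = {n: restart * p0[n]
                + (1.0 - restart) * sum(p[m] / len(adj[m]) for m in adj[n])
             for n in adj}
    return p

# Hypothetical toy network: "A" is the known disease gene; "B" and "C" are
# direct neighbours, "D" is two hops away and should rank last.
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
scores = random_walk_with_restart(adj, seeds={"A"})
ranked = sorted((g for g in adj if g != "A"), key=scores.get, reverse=True)
```

Candidates closer and more densely connected to the seeds accumulate more mass, which is exactly why sparse networks (few edges to carry the mass) degrade association-based prioritization.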
ContributorsLee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications through imagery analysis techniques. Applications such as autonomous navigation and terrain classification that make use of image classification techniques are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed that uses image registration techniques to improve image classification. The method reduces the classification error rate by registering incoming images against previously obtained images before performing classification. The motivation is that images obtained in the same region will not differ significantly in their characteristics; registration therefore produces an image that matches the previously obtained image more closely, yielding better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. The implementation was tested extensively in simulation using synthetic images and on a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help naïve Bayes classify better, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets.
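For intuition about the registration stage, a minimal translation-only variant of ICP can be sketched as follows; full ICP, as used in the thesis, also estimates a rotation at each step, and the toy point sets below are hypothetical:

```python
def icp_translation(source, target, iters=20):
    """Translation-only ICP: pair each source point with its nearest target
    point, shift the source by the mean residual, and repeat."""
    src = [list(p) for p in source]
    for _ in range(iters):
        # nearest-neighbour correspondences
        pairs = [(p, min(target, key=lambda t: (t[0] - p[0]) ** 2
                                             + (t[1] - p[1]) ** 2))
                 for p in src]
        # best translation for the matched pairs: the mean residual
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        if abs(dx) < 1e-12 and abs(dy) < 1e-12:
            break  # converged
        for p in src:
            p[0] += dx
            p[1] += dy
    return [tuple(p) for p in src]

# Hypothetical example: the same triangle, shifted by (2, 3).
target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
source = [(2.0, 3.0), (3.0, 3.0), (2.0, 4.0)]
aligned = icp_translation(source, target)
```

After a few iterations the source points land on the target shape, which is the aligned image a classifier would then see.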
ContributorsMuralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2011
Description

When examining the average college campus, it becomes obvious that students feel rushed from one place to another as they try to participate in classes, clubs, and extracurricular activities. One way to help students feel more comfortable and relaxed around campus is to introduce gaming. Studies show that "Moderate videogame play has been found to contribute to emotional stability" (Jones, 2014), suggesting that the stress of college can be mitigated by giving students the ability to interact with video games. The same concept has been applied in the workplace, where studies have shown that "Gaming principles such as challenges, competition, rewards and personalization keep employees engaged and learning" (Clark, 2020). If we gamify the college experience, then, students will be more engaged, which will increase and stabilize the retention rates of colleges that adopt this type of experience. Gaming allows students to connect with their peers in a casual environment while also helping them find resources around campus and discover new places to eat and relax. We plan to gamify the college experience by introducing augmented reality in the form of an app. Augmented reality is "a technology that combines virtual information with the real world" (Chen, 2019). College students will be able to utilize the resources and amenities available to them on campus while completing quests within the application. This demonstrates the ability of video games to engage students using artificial tasks but real actions and experiences that help them feel more connected to campus. Our Founders Lab team has developed and tested an AR application that connects students with their campus and the resources available to them.

ContributorsKlein, Jonathan (Co-author) / Rangarajan, Padmapriya (Co-author) / Li, Shimei (Co-author) / Byrne, Jared (Thesis director) / Pierce, John (Committee member) / School of International Letters and Cultures (Contributor) / Department of Management and Entrepreneurship (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

The Constitution is a document made over 200 years ago by a population that could never have imagined the technology or social advances of the 21st century. This creates a natural rift in governing ideals between then and now that needs to be addressed. Rather than holding the values of the nation to a time when some people were not considered citizens because of the color of their skin, updates need to be made to the Constitution itself. Both the need for change and the mechanisms for making it were established by the Framers while creating and advancing the Constitution. The ideal process for these changes is split between the formal Article V amendment process and judicial activism. The amendment process has unlimited scope for the changes it can make, but because of the challenge of passing any amendment through both Congress and the state legislatures, it should be reserved for fundamental or structural changes. Judicial activism, by way of Supreme Court decisions, is a method best applied to the protection of people's rights.

Created2021-05
Description

In this study, I sought to determine which NFL Combine metrics are predictive of future NFL success among the quarterback, running back, and wide receiver positions, with the hope of providing meaningful information that NFL executives can utilize when making draft selections. I gathered samples spanning the years 2010-2015 for all three of the aforementioned position groups. Within each position group, I used certain criteria to split players into two groups: those who had successful careers and those who did not. I then performed t-tests and ANOVA between the successful and unsuccessful groups to identify which Combine metrics are predictive of future NFL success and which are not. For quarterbacks, the 40-yard dash, broad jump, three-cone drill, and 10-yard shuttle all appear to be predictive of success. Notably, quarterback height does not appear to be predictive, despite the popular belief that a quarterback must be tall to succeed. For running backs, player weight, the 40-yard dash, and the three-cone drill all appear to be predictive of success, with the broad jump and 10-yard shuttle also predicting success, albeit less strongly. For wide receivers, no metrics appear to be predictive of success, with the exception of the 40-yard dash, which appears only slightly predictive. While many factors beyond the tests administered at the NFL Combine likely contribute to a player's success, NFL general managers can look to these results when making draft selections.
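The group comparison described above can be sketched with Welch's t statistic, the standard two-sample test when the groups may have unequal variances (the dash times below are made-up illustrations, not the study's data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    return (mean(a) - mean(b)) / (variance(a) / len(a)
                                  + variance(b) / len(b)) ** 0.5

# Made-up 40-yard dash times (seconds); lower is faster.
successful   = [4.40, 4.45, 4.48, 4.50, 4.42]
unsuccessful = [4.55, 4.60, 4.58, 4.65, 4.62]
t = welch_t(successful, unsuccessful)  # strongly negative: successful group is faster
```

A large-magnitude t (compared against the t distribution with Welch's degrees of freedom) is what would mark a Combine metric as predictive of career success.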

ContributorsFox, Dallas Alexander (Author) / Cox, Richard (Thesis director) / Lin, Elva (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

In 2020, the world was swept by a global pandemic. It disrupted the lives of millions: many lost their jobs, students were forced to leave schools, and children were left with little to do while quarantined at home. Although media outlets covered very little of how children were being affected by COVID-19, it was obvious that their group was not immune to the issues the world was facing. Being stuck at home with little to do took a mental and physical toll on many kids. That is when EVOLVE Academy became an idea: our team wanted to create a fully online platform to help children practice and evolve their athletic skills, or simply spend part of their day in physical and health activity. Our team designed a solution that would benefit children, as well as parents who were struggling to find engaging activities for their kids while out of school. We quickly encountered issues that made it difficult to reach our target audience and earn their trust in our platform. However, we persisted and tried to solve the questions and problems that came along the way. Sadly, the same pandemic that opened the window for EVOLVE Academy to exist is now the reason people are walking away from it. Children want real interaction; they want to connect with other kids through more than just a screen. Although parents' priority remains the safety and security of their kids, they are also searching and opting for more "human" interactions, leaving EVOLVE Academy with little room to grow and succeed.

ContributorsParmenter, Taylor (Co-author) / Hernandez, Melany (Co-author) / Whitelocke, Kailas (Co-author) / Byrne, Jared (Thesis director) / Lee, Christopher (Committee member) / Kunowski, Jeff (Committee member) / Dean, W.P. Carey School of Business (Contributor, Contributor, Contributor) / Sandra Day O'Connor College of Law (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

In the early years of the National Football League, scouting and roster development resembled the Wild West. Drafts were held in hotel ballrooms the day after the last game of the regular college football season. There was no combine, limited scouting, and no salary cap. Over time, these aspects changed dramatically, in part due to key figures from Pete Rozelle to Gil Brandt to Bill Belichick. The developments and lessons of this period laid the foundational infrastructure on which modern roster construction is based. Today, managing a team and putting together a roster involves numerous people, intense scouting, layers of technology, and, critically, management of the salary cap. Since the cap was first put in place in 1994, managing it has become an essential element of building and sustaining a successful team. The New England Patriots' mastery of the cap is a large part of what enabled their dynastic run over the past twenty years. While their model has undoubtedly proven successful, an opposing model has become increasingly popular and yielded results of its own. Both models center on different distributions of the salary cap, starting with the portion paid to the starting quarterback. The Patriots dynasty was made possible in part by their use of both models over the course of their dominance. Drafting, organizational culture, and coaching are all among the numerous critical factors in a team's success, and it is difficult to pinpoint the true source of success for any given team. Ultimately, however, effective management of the cap proves to be a force multiplier: it does not guarantee that a team will be successful, but it helps teams that handle the other variables well sustain their success.

ContributorsBolger, William (Author) / Eaton, John (Thesis director) / Mokwa, Michael (Committee member) / Department of Marketing (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

The field of behavioral economics explores the ways in which individuals make choices under uncertainty, in part by examining the role that risk attitudes play in a person's efforts to maximize their own utility. This thesis aims to contribute to the economic literature on risk attitudes, first, by evaluating the traditional economic method for discerning risk coefficients, examining whether students provide reasonable answers to lottery questions. Second, the answers of reasonable respondents are fed into our economic model, based on the CRRA utility function, in which Python code predicts respondents' risk coefficients via a two-step regression procedure. Lastly, we assess how well the economic model fits the lottery answers given by reasonable respondents. The most notable findings of the study are as follows. College students had extreme difficulty understanding lottery questions of this sort, with medical and life science majors struggling significantly more than both business and engineering majors. Additionally, gender was correlated with estimated risk coefficients, with females being more risk-loving relative to males. Lastly, in regard to the model's goodness of fit when evaluating potential losses, the expected utility model of choice under uncertainty was consistent with the behavior of progressives and moderates but inconsistent with the behavior of conservatives.
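The CRRA setup behind such lottery questions can be illustrated with a small sketch (a deliberate simplification, not the thesis's two-step regression): given the sure amount at which a respondent is indifferent to a lottery, the implied risk coefficient can be recovered by bisection on the expected-utility gap, since the lottery's appeal falls monotonically as risk aversion rises.

```python
import math

def crra(c, r):
    """CRRA utility with coefficient of relative risk aversion r."""
    return math.log(c) if abs(r - 1.0) < 1e-9 else c ** (1.0 - r) / (1.0 - r)

def implied_r(certain, lottery, lo=-2.0, hi=5.0, tol=1e-6):
    """Risk coefficient at which a respondent is indifferent between a sure
    payment `certain` and a lottery given as [(probability, payoff), ...]."""
    def gap(r):
        return sum(p * crra(x, r) for p, x in lottery) - crra(certain, r)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:  # lottery still preferred: indifference lies at higher r
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Indifference between a sure $50 and a 50/50 lottery over $100 or $25:
# $50 is the geometric mean of the payoffs, so log utility (r = 1) fits.
r = implied_r(50.0, [(0.5, 100.0), (0.5, 25.0)])
```

A respondent indifferent at the lottery's expected value of $62.5 would instead come out risk-neutral (r near 0); a pattern of such implied coefficients across many questions is the raw material for estimating a respondent's risk attitude.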

ContributorsSansone, Morgan Marie (Author) / Leiva Bertran, Fernando (Thesis director) / Vereshchagina, Galina (Committee member) / Economics Program in CLAS (Contributor) / School of Politics and Global Studies (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05