Matching Items (14)

UTILIZATION OF DEDUCTIVE LOGIC AND LEADERSHIP CONCEPTS: A BEST VALUE (BV) APPROACH TO EDUCATION

Description

A new honors class created at Arizona State University utilizes a new "thinking" paradigm: a problem-solving approach that uses deductive logic and natural laws to replace the traditional acquisition and use of detailed knowledge. With deductive logic, students require less time to learn and are able to resolve unique issues with minimal amounts of information, using their logic and processing skills in place of the traditional collection of large amounts of detailed information. The concepts taught in the class come from the industry success of the Best Value (BV) approach developed by a leading research group at Arizona State University over the last 17 years. The research group identified the source of the industry's problems as the traditional business approach of management, direction, and control (MDC). With over 1,500 tests conducted, delivering $5.7B of services, and results showing a 30% decrease in cost, a 30% increase in value, and customer satisfaction improvements of up to 140%, the Best Value (BV) approach has been shown to be more efficient and to deliver better-quality services than the traditional MDC approach. Through the research group's implementation of the new paradigm in higher education, the author identified a windfall effect that gave students understanding and an increased ability to cope with stressful situations, disease, and extraordinary complications. It also made students aware of potentially harmful practices in their lives and helped them to change. The study, tested in K-12, demonstrated the potential value of exposing K-12 students to the paradigm and the impact it may have on future professionals. The author's results include a satisfaction rating of 9.5 (out of 10), increased career alignment by up to 113%, increased understanding of self by up to 70%, and a reduction in stress by up to 71%. The author's K-12 case studies aligned with the successful results shown in the industry and college classes run by the leading research group. The pattern of the new paradigm shows that as resistance to it decreases, productivity, efficiency, processing speed, understanding, and effectiveness all increase.

Date Created
  • 2013-12

Design and Implementation of an Electronic Preventative Maintenance System for Autonomous Vehicles

Description

Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or disrupt processes and daily operations. One of the most important parts is being able to predict failures in the system, so that they are fixed before they turn into large issues. One specific area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Instead of a human being able to look at a car and diagnose any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronic Preventive Maintenance System (EPMS) is an internal system designed to meet all of these criteria and more. The EPMS comprises a central computer which monitors all major electronic components in an autonomous vehicle through the use of standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions. These operating conditions are set to encompass all normal ranges of output. If the sensor data is outside the margins, the warning and deviation are recorded and a severity level is calculated. In addition to the individual component focus, there is also a vehicle-wide model, which predicts how necessary maintenance is for the vehicle. All of these results are analyzed by a simple heuristic algorithm, and a decision is made about the vehicle's health status, which is sent out to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle as well as predictive modeling that allows the system to determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters to decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle. The models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of different vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
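
The following is a minimal sketch of the general idea of checking a sensor reading against a pre-set operating margin and deriving a severity level. The component names, margins, and severity formula are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch (not the thesis implementation): compare a sensor
# reading against a pre-set operating margin and compute a severity level.
from dataclasses import dataclass

@dataclass
class OperatingRange:
    low: float
    high: float

# Hypothetical per-component margins covering normal output ranges.
MARGINS = {
    "battery_temp_c": OperatingRange(10.0, 45.0),
    "motor_current_a": OperatingRange(0.0, 30.0),
}

def check_reading(component: str, value: float) -> dict:
    """Return a warning record if the reading falls outside its margin."""
    rng = MARGINS[component]
    if rng.low <= value <= rng.high:
        return {"component": component, "ok": True, "severity": 0.0}
    # Deviation measured relative to the width of the allowed range.
    width = rng.high - rng.low
    deviation = (rng.low - value) if value < rng.low else (value - rng.high)
    severity = min(deviation / width, 1.0)  # clamp to [0, 1]
    return {"component": component, "ok": False,
            "deviation": deviation, "severity": severity}

print(check_reading("battery_temp_c", 52.0))
# {'component': 'battery_temp_c', 'ok': False, 'deviation': 7.0, 'severity': 0.2}
```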

Date Created
  • 2016-05

Evaluation of Multiplayer Modes in Mobile Apps

Description

Smartphones have become increasingly common over the past few years, and mobile games continue to be the most common type of application (Apple, Inc., 2013). For many people, the social aspect of gaming is very important, and thus most mobile games include support for playing with multiple players. However, there is a lack of common knowledge about which implementation of this functionality is most favorable from a development standpoint. In this study, we evaluate three different types of multiplayer gameplay (pass-and-play, Bluetooth, and GameCenter) via development cost and user interviews. We find that pass-and-play, the most easily implemented mode, is not favored by players due to its inconvenience. We also find that GameCenter is not as well favored as expected due to the latency of GameCenter's servers, and that Bluetooth multiplayer is the most favored for social play due to its similarity to real-life play. Although there is a large overhead in developing and testing Bluetooth and GameCenter multiplayer due to Apple's development process, this is largely irrelevant since professional developers must enroll in that process anyway. Therefore, the most effective multiplayer mode to develop is determined mostly by whether Internet play is desirable: Bluetooth if not, GameCenter if so. Future studies involving more complete development work and more types of multiplayer modes could yield more promising results.

Date Created
  • 2013-12

LOYALS: WEB ACHIEVEMENTS FOR EVALUATING CUSTOMER TRENDS AND LOYALTY

Description

Gamification is the process of adding game mechanics to non-game activities, thus creating a more engaging environment. Loyals provides a gamification API which can be consumed to add Loyals (achievements) to any website, web application, or mobile app. Loyals are used in two major ways: (1) to create an interactive environment where users are rewarded for completing tasks, and (2) as contextual information useful for analyzing user interaction with the application. The interactive environment inspires users to continue using an application, while the contextual information can be used to improve the application to draw in new loyal visitors, for ad targeting, for creating user profiles, and much more.
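
As a hedged illustration of what consuming such a gamification API could look like, the sketch below awards an achievement when a user completes a task. The endpoint URL, payload fields, and authentication scheme are placeholders, not the actual Loyals interface.

```python
# Hypothetical example of consuming a gamification API to award an achievement.
# The URL, payload fields, and API key below are illustrative assumptions.
import json
import urllib.request

API_URL = "https://example.com/api/loyals"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                      # placeholder credential

def award_loyal(user_id: str, loyal_name: str) -> dict:
    """POST an achievement event for a user and return the server's response."""
    payload = json.dumps({"user": user_id, "loyal": loyal_name}).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (assumed achievement name):
# award_loyal("user-42", "first-purchase")
```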

Date Created
  • 2013-05

A distributed component-based software framework for laboratory automation systems

Description

Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also proposes a framework that has been developed to implement the ideas proposed by the architecture; the framework is used to develop software that is scalable, distributed, responsive, and thread-safe. The framework currently provides components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions such as imaging.
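
A minimal sketch of the architectural idea is shown below: autonomous components that each run on their own thread and communicate asynchronously by passing messages over queues. This is only an illustration of the message-passing pattern, not the thesis framework; the device names and message formats are hypothetical.

```python
# Illustrative sketch: components communicating by asynchronous message passing.
import queue
import threading

class Component(threading.Thread):
    """A component with its own inbox, processing messages on its own thread."""
    def __init__(self, name: str):
        super().__init__(daemon=True)
        self.name = name
        self.inbox: queue.Queue = queue.Queue()

    def send(self, message: dict) -> None:
        self.inbox.put(message)          # non-blocking for the sender

    def run(self) -> None:
        while True:
            msg = self.inbox.get()
            if msg.get("command") == "stop":
                break
            self.handle(msg)

    def handle(self, msg: dict) -> None:
        print(f"{self.name} handling {msg}")

# Hypothetical devices wired together as message-passing components.
stage = Component("mechanical_stage")
camera = Component("camera")
stage.start(); camera.start()

stage.send({"command": "move", "x": 10.0, "y": 2.5})
camera.send({"command": "acquire", "exposure_ms": 50})
stage.send({"command": "stop"}); camera.send({"command": "stop"})
stage.join(); camera.join()
```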

Date Created
  • 2012

Affect-driven self-adaptation: a manufacturing vision with a software product line paradigm

Description

Affect signals what humans care about and is involved in rational decision-making and action selection. Many technologies may be improved by the capability to recognize human affect and to respond adaptively by appropriately modifying their operation. This capability, named affect-driven self-adaptation, benefits systems as diverse as learning environments, healthcare applications, and video games, and indeed has the potential to improve systems that interact intimately with users across all sectors of society. The main challenge is that existing approaches to advancing affect-driven self-adaptive systems typically limit their applicability by supporting the creation of one-of-a-kind systems with hard-wired affect recognition and self-adaptation capabilities, which are brittle, costly to change, and difficult to reuse. A solution to this limitation is to leverage the development of affect-driven self-adaptive systems with a manufacturing vision.

This dissertation demonstrates how using a software product line paradigm can jumpstart the development of affect-driven self-adaptive systems with that manufacturing vision. Applying a software product line approach to the affect-driven self-adaptive domain provides a comprehensive, flexible and reusable infrastructure of components with mechanisms to monitor a user’s affect and his/her contextual interaction with a system, to detect opportunities for improvements, to select a course of action, and to effect changes. It also provides a domain-specific architecture and well-documented process guidelines, which facilitate an understanding of the organization of affect-driven self-adaptive systems and their implementation by systematically customizing the infrastructure to effectively address the particular requirements of specific systems.
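
To make the monitor / detect / select / effect cycle described above concrete, here is a minimal, hedged sketch of such an adaptation loop with stubbed-in logic. The function names, the frustration heuristic, and the adaptation actions are illustrative assumptions, not the dissertation's infrastructure.

```python
# Minimal sketch of a monitor -> detect -> select -> effect adaptation cycle.
import random

def monitor() -> dict:
    """Monitor the user's affect and contextual interaction (stubbed with random data)."""
    return {"frustration": random.random(), "errors_last_minute": random.randint(0, 5)}

def detect(state: dict) -> bool:
    """Detect an opportunity for improvement, e.g. rising frustration or many errors."""
    return state["frustration"] > 0.7 or state["errors_last_minute"] >= 3

def select_action(state: dict) -> str:
    """Select a course of action appropriate to the observed state."""
    return "offer_hint" if state["errors_last_minute"] >= 3 else "lower_difficulty"

def effect(action: str) -> None:
    """Effect the change in the running system (stubbed)."""
    print(f"adapting: {action}")

for _ in range(3):                 # one iteration per monitoring interval
    state = monitor()
    if detect(state):
        effect(select_action(state))
```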

The software product line approach is evaluated by applying it in the development of learning environments and video games that demonstrate the significant potential of the solution, across diverse development scenarios and applications.

The key contributions of this work include extending self-adaptive system modeling, implementing a reusable infrastructure, and leveraging the use of patterns to exploit the commonalities between systems in the affect-driven self-adaptation domain.

Date Created
  • 2016

Characterization of cost excess in cloud applications

Description

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst-Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
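
The sketch below illustrates the general idea behind a WCET-style monetary cost analysis: take the worst-case cost over an acyclic control flow graph and check it against a budget. The graph, per-block costs, and threshold are hypothetical; this is not the thesis's algorithm.

```python
# Illustrative sketch: worst-case monetary cost over an acyclic control flow graph.
from functools import lru_cache

# Hypothetical CFG: block -> successors, and per-block monetary cost in cents
# (e.g. priced API calls, storage writes, outbound bandwidth).
CFG = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}
COST = {"entry": 1, "a": 12, "b": 3, "exit": 1}

@lru_cache(maxsize=None)
def worst_case_cost(block: str) -> int:
    """Worst-case cost of any path from `block` to the exit (CFG must be acyclic)."""
    succs = CFG[block]
    if not succs:
        return COST[block]
    return COST[block] + max(worst_case_cost(s) for s in succs)

THRESHOLD_CENTS = 10
wc = worst_case_cost("entry")
print(f"worst-case path cost: {wc} cents, exceeds budget: {wc > THRESHOLD_CENTS}")
# worst-case path cost: 14 cents, exceeds budget: True
```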

Date Created
  • 2012

In pursuit of optimal workflow within the Apache Software Foundation

Description

The following is a case study composed of three workflow investigations at the Apache Software Foundation (Apache), an organization based on open source software development (OSSD). I start with an examination of workload inequality within Apache, particularly with regard to requirements writing. I established that the stronger a participant's experience indicators are, the more likely they are to propose a requirement that is not a defect and the more likely the requirement is to eventually be implemented. Requirements at Apache are divided into work tickets (tickets). In my second investigation, I reported many insights into the distribution patterns of these tickets. The participants who create the tickets often had the best track records for determining who should participate in that ticket. Tickets that were at one point volunteered for (self-assigned) had a lower incidence of neglect but in some cases were also associated with severe delay: when a participant claims a ticket but postpones the work involved, these tickets exist without a solution for five to ten times as long, depending on the circumstances. I make recommendations that may reduce the incidence of tickets that are claimed but not implemented in a timely manner. After giving an in-depth explanation of how I obtained this data set through web crawlers, I describe the pattern mining platform I developed to make my data mining efforts highly scalable and repeatable. Lastly, I used process mining techniques to show that workflow patterns vary greatly within teams at Apache. I investigated a variety of process choices and how they might be influencing the outcomes of OSSD projects. I report a moderately negative association between how often a team updates the specifics of a requirement and how often requirements are completed. I also verified that the prevalence of volunteerism indicators is positively associated with work completion; surprisingly, this correlation is stronger if I exclude the very largest projects. I suggest that the largest projects at Apache may benefit from some level of traditional delegation in addition to the volunteerism that OSSD is normally associated with.

Date Created
  • 2017

Construction of GCCFG for inter-procedural optimizations in Software Managed Manycore (SMM)

Description

Software Managed Manycore (SMM) architectures, in which each core has only a scratchpad memory (instead of caches), are a promising solution for scaling the memory hierarchy to hundreds of cores. However, in these architectures, the code and data of the tasks mapped to the cores must be explicitly managed in software by the compiler. State-of-the-art compiler techniques for SMM architectures require inter-procedural information and analysis. A call graph of the program does not have enough information, while a Global CFG, i.e., the combination of all the control flow graphs of the program, has too much information and becomes too big. As a result, most new techniques have informally defined and used the GCCFG (Global Call Control Flow Graph) - a whole-program representation which captures the control flow as well as function call information in a succinct way - to perform inter-procedural analysis. However, how to construct it has not yet been shown. We find that for several simple call and control flow graphs, constructing the GCCFG is relatively straightforward, but there are several cases in common applications where unique graph transformations are needed in order to formally and correctly construct the GCCFG. This paper fills this gap and develops graph transformations to allow the construction of the GCCFG in (almost) all cases. Our experiments show that by using the succinct representation (GCCFG) rather than the elaborate representation (GlobalCFG), the compilation time of a state-of-the-art code management technique [4] can be improved by an average of 5X, and that of stack management [20] can be improved by an average of 4X.
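
As a hedged sketch of the concept only, the structure below shows what a combined call/control-flow representation could look like: nodes for functions and loops, with ordered child edges recording calls and loop nesting. The node kinds and the example program are assumptions for illustration; the paper's actual GCCFG construction rules and graph transformations are not shown here.

```python
# Conceptual sketch of a succinct whole-program call/control-flow structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                 # "function" or "loop"
    name: str
    children: List["Node"] = field(default_factory=list)  # calls / nested loops, in order

def show(node: Node, depth: int = 0) -> None:
    """Print the structure with indentation reflecting nesting."""
    print("  " * depth + f"{node.kind}: {node.name}")
    for child in node.children:
        show(child, depth + 1)

# Hypothetical program: main() calls init(), then a loop that calls process().
process_fn = Node("function", "process")
loop = Node("loop", "L1", [process_fn])
main_fn = Node("function", "main", [Node("function", "init"), loop])
show(main_fn)
```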

Date Created
  • 2014

Automated testing for RBAC policies

Description

Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and security engineers identify vulnerabilities early and avoid the unexpected, expensive costs of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. However, in practice, RBAC is complex, involving role management, role hierarchies with hundreds of roles, and their associated privileges and users; systematically testing RBAC systems is therefore crucial to ensure security in various domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems that considers the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including the RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework which takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
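
The sketch below illustrates the general idea of deriving positive and negative access-control test cases from a role hierarchy: flatten inherited permissions per role, then treat granted permissions as expected PERMIT cases and everything else as expected DENY cases. The roles, permissions, and flattening rule are hypothetical, and the thesis itself uses an MTBDD-based policy representation rather than this direct set computation.

```python
# Illustrative sketch: positive/negative test cases from a role hierarchy.

# Senior role -> roles it inherits from (hypothetical hierarchy).
HIERARCHY = {"admin": ["clerk"], "clerk": []}
# Direct permission assignments per role (hypothetical).
DIRECT = {"admin": {"approve_loan"}, "clerk": {"view_account"}}

def effective_permissions(role: str) -> set:
    """Permissions a role holds directly or through inheritance."""
    perms = set(DIRECT.get(role, set()))
    for junior in HIERARCHY.get(role, []):
        perms |= effective_permissions(junior)
    return perms

ALL_PERMS = set().union(*DIRECT.values())

for role in HIERARCHY:
    granted = effective_permissions(role)
    positives = [(role, p, "PERMIT") for p in sorted(granted)]       # expected allows
    negatives = [(role, p, "DENY") for p in sorted(ALL_PERMS - granted)]  # expected denials
    print(role, positives + negatives)
# clerk should be permitted view_account and denied approve_loan;
# admin inherits view_account and also holds approve_loan.
```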

Date Created
  • 2014