Description
Switch-mode DC/DC converters are well suited to battery-powered applications because their high efficiency helps conserve battery lifetime. Fixed-frequency PWM-based converters, which are generally used for these applications, offer good voltage regulation, low ripple, and excellent efficiency at high load currents. At light load currents, however, fixed-frequency PWM converters suffer from poor efficiency. PFM control offers higher efficiency at light loads at the cost of higher ripple; PWM has poor efficiency at light loads but good voltage-ripple characteristics, owing to its high switching frequency. To get the best of both control modes, the two loops are used together, with control handed from one loop to the other based on the load current. Such architectures are referred to as hybrid converters. While the transition from PWM to PFM can be made by estimating the average load current, the transition from PFM to PWM requires voltage or peak-current sensing. This thesis implements a hysteretic PFM solution for a synchronous buck converter with external MOSFETs, achieving efficiencies of about 80% at light loads. Because the PFM loop operates independently of the PWM loop, a transition circuit that automatically transitions from PFM to PWM is implemented. The transition circuit is implemented digitally, without any external voltage- or current-sensing circuit.
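To make the hybrid-control idea concrete, the following is a minimal behavioral sketch (not the thesis's circuit) of a digital mode-transition rule: PFM pulse density serves as a light-load proxy for the PFM-to-PWM decision, and the estimated average load current drives the PWM-to-PFM decision. The class name and both thresholds are hypothetical.

```python
# A minimal behavioral sketch of hybrid PFM/PWM mode selection for a buck
# converter, assuming a digital transition rule based on PFM pulse density
# (a proxy for average load current). All thresholds are hypothetical.

PFM_PULSE_DENSITY_HIGH = 0.6   # fraction of cycles pulsing -> load too heavy for PFM
PWM_LOAD_CURRENT_LOW = 0.05    # amps; estimated average load below this -> enter PFM

class HybridModeController:
    def __init__(self):
        self.mode = "PWM"

    def update(self, pfm_pulse_density, est_load_current):
        """Decide which control loop runs for the next evaluation window."""
        if self.mode == "PFM" and pfm_pulse_density > PFM_PULSE_DENSITY_HIGH:
            # Pulses arriving nearly back-to-back: regulation degrades,
            # so hand control to the fixed-frequency PWM loop.
            self.mode = "PWM"
        elif self.mode == "PWM" and est_load_current < PWM_LOAD_CURRENT_LOW:
            # Light load: PWM switching losses dominate, so switch to PFM.
            self.mode = "PFM"
        return self.mode
```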
ContributorsVivek, Parasuram (Author) / Bakkaloglu, Bertan (Thesis advisor) / Ogras, Umit Y. (Committee member) / Song, Hongjiang (Committee member) / Arizona State University (Publisher)
Created2014
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues considered in this work are: (1) information about design ideas is incomplete, informal, and sketchy; (2) designers often work at multiple levels, so different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis examines the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: (1) the typical types of information available after an ideation session; (2) the typical types of technical evaluations done in early stages; (3) how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart, with each entry expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects can be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones without assuming any solution sequence. A library of 16 machine elements was also created, and users were given instructions for incorporating them. To support quick analysis, differential equations are transformed into algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies show how the tool works.
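As a toy illustration of networking physical effects through a shared-variable ontology and evaluating them with interval arithmetic, the sketch below chains two textbook effects (Ohm's law and Joule heating); the Interval class and both effects are illustrative stand-ins, not the thesis's actual library of ~110 effects.

```python
# A minimal sketch of networking physical effects through shared variables
# and evaluating them with interval arithmetic. The Interval class and the
# two example effects are illustrative only.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Shared-variable ontology: both effects refer to 'V' and 'I' by name,
# so they network together automatically when composed.
variables = {
    "I": Interval(0.1, 0.2),   # current (A), known only as a range
    "R": Interval(10, 12),     # resistance (ohm)
}
variables["V"] = variables["I"] * variables["R"]   # effect 1: V = I*R
variables["P"] = variables["V"] * variables["I"]   # effect 2: P = V*I

print(variables["V"], variables["P"])  # interval bounds on feasible behavior
```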
ContributorsKhorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created2014
Description
We are expecting hundreds of cores per chip in the near future. However, scaling the memory architecture of manycore processors is a major challenge. Cache coherence provides a single image of memory at any time in execution to all the cores, yet coherent cache architectures are not believed to scale to hundreds and thousands of cores. In addition, caches and coherence logic already account for 20-50% of the processor's total power consumption and 30-60% of its die area. Therefore, a more scalable architecture is needed for manycore processors. Software Managed Manycore (SMM) architectures have emerged as a solution. They have a scalable memory design in which each core has direct access only to its local scratchpad memory, and any data transfers to or from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. The lack of automatic memory management in hardware makes such architectures extremely power-efficient, but it also makes them difficult to program. If the code and data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after use. Doing this, however, adds considerable complexity to the programmer's job: programmers must now worry about data management on top of the already complex task of ensuring the functional correctness of the program. This dissertation presents a comprehensive compiler and runtime integration that automatically manages the code and data of each task in the core's limited local memory. We first developed a Complete Circular Stack Management, which manages stack frames between the local memory and the main memory and addresses the stack-pointer problem as well. Although it works, we found we could further optimize the management for most cases, so a Smart Stack Data Management (SSDM) is provided. In this work, we formulate the stack data management problem and propose a greedy algorithm for it. We then propose a general cost-estimation algorithm, on which the CMSM heuristic for the code-mapping problem is based. Finally, heap data is dynamic in nature and therefore hard to manage; we provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory-partitioning methodology.
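The sketch below illustrates, in highly simplified form, the kind of policy a circular stack manager applies: when a new frame will not fit in the scratchpad's stack region, the oldest resident frames are spilled to main memory over DMA and restored on return. The function names, frame representation, and size budget are hypothetical stand-ins for the compiler-inserted management calls.

```python
# A toy sketch of circular stack management between a fixed-size scratchpad
# and main memory. dma_put/dma_get stand in for the platform's DMA commands;
# frame sizes and the scratchpad budget are hypothetical.

SPM_STACK_BYTES = 4096          # scratchpad region reserved for stack frames

spm_frames = []                  # frames currently resident (oldest first)
evicted_frames = []              # frames spilled to main memory

def dma_put(frame):              # placeholder for a real DMA transfer out
    evicted_frames.append(frame)

def dma_get():                   # placeholder for a real DMA transfer in
    return evicted_frames.pop()

def spm_used():
    return sum(size for _, size in spm_frames)

def function_entry(name, frame_size):
    # Evict oldest frames until the new frame fits in the scratchpad.
    while spm_used() + frame_size > SPM_STACK_BYTES and spm_frames:
        dma_put(spm_frames.pop(0))
    spm_frames.append((name, frame_size))

def function_exit():
    spm_frames.pop()
    # If the caller's frame was evicted, bring it back before returning.
    if not spm_frames and evicted_frames:
        spm_frames.append(dma_get())
```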
ContributorsBai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created2014
Description
Droughts are a common phenomenon of the arid Southwest USA climate. Despite water limitations, the region has been substantially transformed by agriculture and urbanization. The water requirements of these human activities, along with the projected increase in drought intensity and frequency, challenge long-term sustainability and water security; the need to spatially and temporally characterize land use/land cover response to drought and to quantify water consumption is therefore crucial. This dissertation evaluates changes in 'undisturbed' desert vegetation in response to water availability in order to characterize climate-driven variability. A new model coupling phenology and spectral unmixing was applied to a Landsat time series (1987-2010) to derive fractional cover (FC) maps of annual, perennial, and evergreen vegetation. Results show that annuals' FC is controlled by short-term water availability and antecedent soil moisture. Perennials' FC follows wet-dry multi-year regime shifts, while evergreen FC is completely decoupled from short-term changes in water availability. Trend analysis suggests that different processes operate at the local scale. Regionally, evergreen cover increased while perennial and annual cover decreased. Subsequently, urban land cover was compared with the surrounding desert. A distinct signal of rain-use efficiency and aridity index was documented from remote sensing and a soil-water-balance model; it was estimated that a total of 295 mm of water input is needed to sustain current greenness. Finally, an energy-balance model was developed to spatio-temporally estimate evapotranspiration (ET) as a proxy for water consumption and to evaluate land use/land cover types in response to drought. Agricultural fields show an average ET of 9.3 mm/day with no significant difference between drought and wet conditions, implying a similar level of water usage regardless of climatic conditions. Xeric neighborhoods show significant variability between dry and wet conditions, while mesic neighborhoods retain a high ET of 400-500 mm during drought due to irrigation. Considering the potentially limited water availability, land use/land cover changes due to population increases, and the threat of a warming and drying climate, maintaining large water-consuming, irrigated landscapes challenges sustainable water-conservation practices and the need to provide the amenities of this desert area that enhance quality of life.
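The estimate of supplemental water needed to sustain greenness rests on soil-water-balance bookkeeping; the toy monthly bucket model below sketches that logic under made-up rainfall and demand numbers (the 295 mm figure above comes from the dissertation's actual model, not this sketch).

```python
# A toy monthly soil-water-balance sketch: whenever plant water demand
# exceeds rainfall plus stored soil moisture, the deficit must be met by
# irrigation. All monthly values below are illustrative, not measured data.

SOIL_CAPACITY_MM = 50.0                                      # plant-available storage

rain_mm = [20, 22, 25, 10, 3, 1, 25, 30, 15, 12, 15, 25]     # monthly rainfall
demand_mm = [25, 30, 45, 60, 80, 95, 90, 85, 70, 50, 30, 25] # vegetation demand

soil = SOIL_CAPACITY_MM
irrigation_total = 0.0
for p, d in zip(rain_mm, demand_mm):
    soil = min(soil + p, SOIL_CAPACITY_MM)    # recharge, capped at capacity
    if soil >= d:
        soil -= d                             # demand met from rain + storage
    else:
        irrigation_total += d - soil          # deficit met by irrigation
        soil = 0.0

print(f"annual supplemental water: {irrigation_total:.0f} mm")
```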
ContributorsKaplan, Shai (Author) / Myint, Soe Win (Thesis advisor) / Brazel, Anthony J. (Committee member) / Georgescu, Matei (Committee member) / Arizona State University (Publisher)
Created2014
Description
Medical students acquire and enhance their clinical skills using various available techniques and resources. As the health care profession has moved toward team-based practice, students and trainees need to practice team-based procedures that involve timely management of clinical tasks and adequate communication with other members of the team. Such team-based procedures include surgical and clinical procedures, some of which are protocol-driven. The cost and time required for individual team-based training sessions, along with other factors, make the training complex and challenging. A great deal of research has been done on medically focused collaborative virtual reality (VR)-based training for protocol-driven procedures as a cost-effective and time-efficient solution. Most VR-based simulators focus on training individual personnel. Those that do provide team training offer an interactive simulation for only a few scenarios in a collaborative virtual environment (CVE) and are suited to didactic training for cognitive-skills development. Training sessions in these simulators require the presence of mentors: the mentor must be at the training location (either physically or virtually) to evaluate the performance of the team or of an individual. Another issue is that no efficient methodology exists for providing feedback to trainees during the training session itself (formative feedback). Furthermore, such simulators cannot train the acquisition or improvement of psychomotor skills for tasks that require force or touch feedback, such as cardiopulmonary resuscitation (CPR). To address some of these concerns, a novel training system was designed and developed that integrates sensors into a CVE for time-critical medical procedures. The system allows participants to access the CVE simultaneously and receive training from geographically diverse locations. It can also provide real-time feedback and store important data during each training/testing session. Finally, this study presents a generalizable collaborative team-training system that can be used across various team-based procedures in medical as well as non-medical domains.
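As a rough illustration of the sensor-driven formative feedback the system aims at, the sketch below maps streamed CPR compression measurements to coaching messages; the thresholds follow the commonly published ~5-6 cm depth and 100-120 compressions/min guidance, and the function and event format are hypothetical.

```python
# A minimal sketch of rule-based formative feedback that a compression
# sensor integrated into the CVE could generate during CPR training.
# Thresholds follow widely published guidance (~5-6 cm depth,
# 100-120 compressions/min); the interface is hypothetical.

def cpr_feedback(depth_cm, rate_per_min):
    messages = []
    if depth_cm < 5.0:
        messages.append("Push harder: compressions too shallow.")
    elif depth_cm > 6.0:
        messages.append("Ease off: compressions too deep.")
    if rate_per_min < 100:
        messages.append("Speed up: compression rate too slow.")
    elif rate_per_min > 120:
        messages.append("Slow down: compression rate too fast.")
    return messages or ["Good compressions - keep going."]

# e.g. one sensor sample streamed into the CVE every few compressions
print(cpr_feedback(depth_cm=4.2, rate_per_min=95))
```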
ContributorsKhanal, Prabal (Author) / Greenes, Robert (Thesis advisor) / Patel, Vimla (Thesis advisor) / Smith, Marshall (Committee member) / Gupta, Ashish (Committee member) / Kaufman, David (Committee member) / Arizona State University (Publisher)
Created2014
Description
Species distribution modeling is used to study changes in biodiversity and species range shifts, two well-known manifestations of climate change. The focus of this study is to explore how distributions of suitable habitat might shift under climate change for shrub communities within the Santa Monica Mountains National Recreation Area (SMMNRA), through a comparison of community-level and individual species-level distribution modeling. Species-level modeling is more commonly used, in part because community-level modeling requires detailed community-composition data that are not always available; however, community-level modeling may better detect patterns in biodiversity. To examine the projected impact on suitable habitat in the study area, I used the MaxEnt modeling algorithm to create and evaluate species distribution models with presence-only data for two future climate models at the community and individual-species levels, and I contrasted the outcomes as a way to describe uncertainty in the projected models. To derive a range of sensitivity outcomes, I extracted probability frequency distributions for suitable habitat from raster grids for communities modeled directly as species groups and contrasted them with communities assembled by intersecting individual species models. The intersected species models were more sensitive to climate change than the grouped community models. Suitable habitat within SMMNRA's bounds was projected to decline by about 30-90% for the intersected models and by about 20-80% for the grouped models relative to its current state. The models generally captured floristic distinctions between community types as differences in drought tolerance. Overall, the projected impact on drought-tolerant communities growing in hotter, drier habitat, such as Coastal Sage Scrub, was less than on communities growing in cooler, moister, more interior habitat, such as some chaparral types. Of the two future climate change models, the wetter model projected less impact for most communities. These results help define risk exposure for communities and species in this conservation area and could be used by managers to focus vegetation-monitoring tasks on detecting early responses to climate change. Increasingly hot and dry conditions could motivate opportunistic restoration projects for Coastal Sage Scrub, a threatened vegetation type in Southern California.
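The contrast between grouped-community and intersected-species habitat maps can be sketched with synthetic rasters, as below; the thresholding rule and the rasters are stand-ins for MaxEnt outputs, and the sketch also shows why intersection can only shrink the mapped suitable area.

```python
# A small numpy sketch contrasting the two ways of mapping community
# habitat used here: thresholding one grouped-community suitability raster
# vs. intersecting thresholded rasters of the member species. The rasters
# and the 0.5 threshold are synthetic stand-ins for MaxEnt outputs.

import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)
species_rasters = [rng.random(shape) for _ in range(3)]  # per-species suitability
community_raster = np.mean(species_rasters, axis=0)      # stand-in grouped model

THRESHOLD = 0.5
grouped_suitable = community_raster > THRESHOLD
intersected_suitable = np.logical_and.reduce(
    [r > THRESHOLD for r in species_rasters])

# The intersection can only shrink the suitable area, one reason the
# intersected models responded more sensitively to climate change.
print("grouped fraction:    ", grouped_suitable.mean())
print("intersected fraction:", intersected_suitable.mean())
```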
ContributorsJames, Jennifer (Author) / Franklin, Janet (Thesis advisor) / Rey, Sergio (Committee member) / Wentz, Elizabeth (Committee member) / Arizona State University (Publisher)
Created2014
Description
Science, Technology, Engineering & Mathematics (STEM) careers have been touted as critical to the success of our nation and also provide important opportunities for access and equity for underrepresented minorities (URMs). Because community colleges serve a diverse population and a large number of the undergraduates currently enrolled in college, they are well situated to help address the increasing STEM workforce demands. Geoscience is a discipline that draws great interest but has very low representation of URMs among its majors. What factors influence a student's decision to major in the geosciences, and do community college students differ from research university students in the factors that influence these decisions? Using a survey design combined with classroom observations, structural equation modeling was employed to predict a student's intent to persist in introductory geology based on the student's expectancy for success in the geology class, math self-concept, and interest in the content. A measure of classroom pedagogy was also used to determine whether the instructor played a role in predicting intent to persist. The target population was introductory geology students participating in the Geoscience Affective Research NETwork (GARNET) project, a national sample of students enrolled in introductory geology courses. Results from the SEM analysis indicated that interest was the primary predictor of a student's intent to persist in the geosciences for both community college and research university students. In addition, self-efficacy appeared to be mediated by interest within these models. Classroom pedagogy affected how much interest was needed to predict intent to persist: as classrooms became more student-centered, less interest was required. Lastly, math self-concept did not predict intent to persist in the geosciences; however, it shared variance with self-efficacy and control of learning beliefs, indicating it may have a moderating effect on student interest and self-efficacy. The implication of this work is that while community college students and research university students differ in demographics and content preparation, student-centered instruction remains the best way to support students' interest in the sciences. Future work includes examining how math self-concept may play a role in longitudinal persistence in the geosciences.
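The mediation pattern reported here (self-efficacy acting on intent to persist largely through interest) can be illustrated with a simulated two-regression check, as sketched below; this is not the study's SEM, and all coefficients are invented for illustration.

```python
# An illustrative sketch (not the study's actual SEM) of mediation: the
# effect of self-efficacy on intent to persist shrinks once interest is
# controlled for. Data are simulated; a real analysis would use an SEM
# package rather than two regressions.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
self_efficacy = rng.normal(size=n)
interest = 0.6 * self_efficacy + rng.normal(scale=0.8, size=n)   # a-path
intent = 0.7 * interest + 0.05 * self_efficacy + rng.normal(scale=0.8, size=n)

# Total effect of self-efficacy on intent (ignoring interest)
total = sm.OLS(intent, sm.add_constant(self_efficacy)).fit()
# Direct effect once interest enters the model
X = sm.add_constant(np.column_stack([self_efficacy, interest]))
direct = sm.OLS(intent, X).fit()

print("total effect: ", total.params[1])
print("direct effect:", direct.params[1])   # shrinks toward zero -> mediation
```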
ContributorsKraft, Katrien J. van der Hoeven (Author) / Husman, Jenefer (Thesis advisor) / Semken, Steven (Thesis advisor) / Baker, Dale R. (Committee member) / McConnell, David (Committee member) / Arizona State University (Publisher)
Created2014
Description
This dissertation considers an integrated approach to system design and controller design based on analyzing limits of system performance. Historically, plant design methodologies have not incorporated control-relevant considerations. Such an approach can result in a system that does not meet its specifications (or one that requires a complex control architecture to do so), and system and controller designers often go through several iterations to converge on an acceptable plant and controller design. The focus of this dissertation is the design and control of an air-breathing hypersonic vehicle using such an integrated system-control design framework. The goal is to reduce the number of system-control design iterations (by explicitly incorporating control considerations in the system design process) as well as to influence the guidance/trajectory specifications for the system. Because of the high computational cost of obtaining a dynamic model for each plant configuration considered, approximations to the system dynamics are used in the control design process. By formulating the control design problem using bilinear and polynomial matrix inequalities, several common control and system design constraints can be incorporated simultaneously into a vehicle design optimization. Several design problems are examined to illustrate the effectiveness of this approach (and to compare the computational burden of this methodology against more traditional approaches).
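The matrix-inequality formulation can be illustrated in its linear (LMI) special case: for a fixed plant, stability certification reduces to a Lyapunov LMI feasibility problem, sketched below with cvxpy. The actual design problems here couple plant and controller variables, which makes the inequalities bilinear or polynomial rather than linear; the plant matrix below is an arbitrary example.

```python
# A minimal cvxpy sketch of matrix-inequality feasibility, shown for the
# linear (LMI) special case: certifying stability of a fixed plant A by
# finding a Lyapunov certificate P > 0 with A'P + PA < 0.

import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable plant matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal' -> a Lyapunov certificate P exists
```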
ContributorsSridharan, Srikanth (Author) / Rodriguez, Armando A (Thesis advisor) / Mittelmann, Hans D (Committee member) / Si, Jennie (Committee member) / Tsakalis, Konstantinos S (Committee member) / Arizona State University (Publisher)
Created2014
Description
The D flip-flop acts as a sequencing element in the design of any pipelined system. Radiation Hardening by Design (RHBD) allows hardened circuits to be fabricated on commercially available CMOS manufacturing processes. Recently, single event transients (SETs) have become as important as single event upsets (SEUs) in radiation-hardened high-speed digital designs. A novel temporal-pulse-based RHBD flip-flop design is presented. Temporally delayed pulses produced by a radiation-hardened pulse generator sample the data into three redundant pulse latches. The proposed RHBD flip-flop has been statistically designed and fabricated on the 90 nm TSMC LP process. Detailed simulations of the flip-flop's operation in both normal and radiation environments are presented. Spatial separation of critical nodes in the physical design of the flip-flop mitigates multi-node charge-collection upsets. The proposed flip-flop also works in commercial CAD flows for high-performance chip designs: it is used in the design and auto-place-route (APR) of an advanced encryption system, and the resulting metrics are analyzed.
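A behavioral sketch of the temporal-redundancy principle is given below: the input is sampled on three pulses staggered in time, and a majority vote masks an SET that corrupts any single sample. The pulse spacing and the vote are illustrative abstractions of the transistor-level design.

```python
# A behavioral sketch of temporal redundancy: the data input is sampled by
# three latches on time-staggered pulses, and a majority vote masks an SET
# that corrupts any single sample. Delays are illustrative only.

def majority(a, b, c):
    return (a and b) or (b and c) or (a and c)

def sample_with_set(data_wave, pulse_times, set_window=None):
    """Sample data at staggered pulse times; an SET flips any sample taken
    inside set_window = (start, end)."""
    samples = []
    for t in pulse_times:
        bit = data_wave(t)
        if set_window and set_window[0] <= t <= set_window[1]:
            bit = not bit            # transient corrupts this one sample
        samples.append(bit)
    return majority(*samples)

data = lambda t: True                # stable '1' at the flip-flop input
pulses = [0.0, 0.3, 0.6]             # ns; staggered beyond the SET width

print(sample_with_set(data, pulses))                          # True
print(sample_with_set(data, pulses, set_window=(0.25, 0.4)))  # still True
```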
ContributorsKumar, Sushil (Author) / Clark, Lawrence (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created2014
Description
With the growth of IT products and sophisticated software in various operating systems, security risks in systems are constantly skyrocketing. Consequently, security assessment is now considered one of the primary security mechanisms for measuring the assurance of systems, since systems that are not compliant with security requirements may allow adversaries to access critical information by circumventing security practices. To ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical for understanding vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security, so it is not straightforward to protect systems in a timely manner from changes made by security-unaware users. Detecting the installation of harmful applications is not sufficient, since attackers may exploit commonly used software as well as risky software. In addition, checking the assurance of security configurations only periodically is disadvantageous in terms of time and cost, because of zero-day attacks and timing attacks that can leverage the window between security checks. Therefore, an event-driven monitoring approach is critical to continuously assess the security of a target system without ignoring any window between security checks, and to lessen the burden of the exhaustive task of inspecting the entire configuration of the system. Furthermore, the system should be able to generate a vulnerability report for any change initiated by a user if the change relates to requirements in the standards and turns out to be vulnerable. Assessing various systems in distributed environments also requires consistently applying standards to each environment. Such uniform, consistent assessment is important because the approach for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework that overcomes and accommodates the aforementioned issues. I also discuss the implementation details, which are based on commercial off-the-shelf technologies, and the testbed established to evaluate the approach. Finally, I describe evaluation results that demonstrate the effectiveness and practicality of the approach.
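As a minimal illustration of event-driven (rather than periodic) assessment, the sketch below uses the Python watchdog library to re-assess a configuration file against a toy baseline rule the moment it changes; the monitored path and the rule are hypothetical, not the thesis's framework.

```python
# A minimal sketch of event-driven configuration assessment with the
# watchdog library: instead of periodic scans, every change to a monitored
# config file immediately triggers a check against a toy rule derived from
# a security baseline. The path and rule are hypothetical examples.

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def assess(path):
    """Toy benchmark rule: flag password authentication in sshd_config."""
    try:
        text = open(path).read()
    except OSError:
        return
    if "PasswordAuthentication yes" in text:
        print(f"VULNERABLE: {path} permits password authentication")

class ConfigChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            assess(event.src_path)   # assess only what changed, immediately

observer = Observer()
observer.schedule(ConfigChangeHandler(), "/etc/ssh", recursive=False)
observer.start()
try:
    time.sleep(60)                   # keep monitoring for a minute
finally:
    observer.stop()
    observer.join()
```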
ContributorsSeo, Jeong-Jin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created2014