Matching Items (689)
Description
The slider-crank mechanism is popularly used in internal combustion engines to convert the reciprocating motion of the piston into rotary motion. This research discusses an alternative mechanism proposed by Wiseman Technology Inc., which replaces the crankshaft with a hypocycloid gear assembly. The unique hypocycloid gear arrangement allows the piston and the connecting rod to move in a straight line, creating a perfect sinusoidal motion. To analyze the performance advantages of the Wiseman mechanism, engine simulation software was used. The Wiseman engine with the hypocycloid piston motion was modeled in the software, and its simulated output was compared to that of a conventional engine of the same size. The software was also used to analyze the multi-fuel capabilities of the Wiseman engine using a contra piston; the engine's performance was studied while operating on diesel, ethanol, and gasoline. Further, a scaling analysis of future Wiseman engine prototypes was carried out to understand how performance is affected by increasing the output power and cylinder displacement. It was found that the existing Wiseman engine produced about 7% less power at peak speeds than a slider-crank engine of the same size. It also produced lower torque and was about 6% less fuel efficient than the slider-crank engine. These results were consistent with dynamometer tests performed in the past. The four-stroke diesel variant of the same Wiseman engine performed better than the two-stroke gasoline version, as well as the slider-crank engine, in all aspects. The Wiseman engine with the contra piston showed poor fuel efficiency while operating on E85 fuel, but it produced higher torque and about 1.4% more power than when running on gasoline. In analyzing the effects of engine size on the Wiseman prototypes, it was found that the engines performed better in terms of power, torque, fuel efficiency, and cylinder BMEP as their displacements increased. The 30 horsepower (HP) prototype, while operating on E85, produced the best results in all aspects, and the diesel variant of the same engine proved to be the most fuel efficient.
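A quick way to see the kinematic difference described above is to compare the ideal piston displacement of the two mechanisms. The sketch below is illustrative only; the crank radius and rod length are assumed values, not dimensions of the Wiseman prototype. With a hypocycloid (Cardan) drive the wrist pin travels a straight line and the displacement is a pure sinusoid, whereas the slider-crank adds a rod-angularity term.

```python
import numpy as np

r, l = 0.025, 0.10                        # crank radius and rod length in meters (assumed values)
theta = np.linspace(0.0, 2 * np.pi, 361)  # crank angle over one revolution

# Slider-crank piston position measured from the crank axis
x_slider_crank = r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

# Hypocycloid (Cardan) drive: the wrist pin travels a straight line, so the
# motion reduces to a pure sinusoid with no rod-angularity term
x_hypocycloid = r * np.cos(theta) + l

deviation = x_slider_crank - x_hypocycloid
print(f"peak departure from sinusoidal motion: {np.abs(deviation).max() * 1000:.2f} mm")
```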
ContributorsRay, Priyesh (Author) / Redkar, Sangram (Thesis advisor) / Mayyas, Abdel Ra'Ouf (Committee member) / Meitz, Robert (Committee member) / Arizona State University (Publisher)
Created2014
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues are considered in this work: 1. Information about design ideas is incomplete, informal, and sketchy. 2. Designers often work at multiple levels; different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: 1. The typical types of information available after an ideation session. 2. The typical types of technical evaluations done in early stages. 3. How to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and that each entry is expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created, and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed into algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies are presented to show how it works.
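As a minimal sketch of the kind of low-fidelity evaluation described above, the following snippet propagates rough parameter ranges through a single algebraic physical effect using interval arithmetic; the effect, variable names, and ranges are illustrative assumptions, not taken from the tool itself.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __mul__(self, other):
        # Interval multiplication: the result is bounded by the extreme endpoint products
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Physical effect: P = tau * omega (shaft power from torque and angular speed).
# At the concept stage the designer only knows rough ranges for these variables.
tau   = Interval(2.0, 5.0)        # N*m, assumed range from the concept sketch
omega = Interval(100.0, 150.0)    # rad/s, assumed range

power = tau * omega
print(f"feasible shaft power: [{power.lo}, {power.hi}] W")   # [200.0, 750.0] W

# Feasibility question: can this concept deliver at least 300 W?
print("certainly feasible" if power.lo >= 300 else
      "possibly feasible" if power.hi >= 300 else "infeasible")
```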
ContributorsKhorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created2014
Description
This thesis applies the ASU mathematical model (Tolerance Maps, or T-Maps) to the construction of T-Maps for patterns of line profiles. Previously, Tolerance Maps were developed for patterns of features such as holes, pins, slots, and tabs to control their position. The T-Maps developed in this thesis are fully compatible with the ASME Y14.5 Standard. A pattern of square profiles, both linear and 2D, is used throughout this thesis to illustrate the idea of constructing T-Maps for line profiles. The Standard defines two ways of tolerancing a pattern of profiles: composite tolerancing and multiple single-segment tolerancing. Further, in the composite tolerancing scheme, there are two different ways to control the entire pattern: repeating a single datum or using two datums in the secondary datum reference frame. T-Maps are constructed for all of these specifications. The Standard also describes a way to control the coplanarity of discontinuous surfaces using a profile tolerance, and T-Maps have been developed for this case as well. Since verification of manufactured parts relative to the tolerance specifications is crucial, a least-squares fit approach, which was developed earlier for line profiles, has been extended to patterns of line profiles. For a pattern, two tolerances are specified, and the manufactured profile needs to lie within the tolerance zones established by both of these tolerances. An i-Map representation of the manufactured variation, located within the T-Map, is also presented in this thesis.
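The least-squares verification step can be illustrated with a small, hedged example; the nominal points, measured points, and tolerance value below are assumed for illustration, and the thesis works with full line profiles checked against the zones established by both pattern tolerances.

```python
import numpy as np

nominal  = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])      # nominal profile points (toy data)
measured = np.array([[0.03, -0.02], [10.04, 0.01], [10.02, 10.05], [-0.01, 10.03]])

# Least-squares alignment by a best-fit translation (the mean point-wise offset)
translation = (measured - nominal).mean(axis=0)
residuals = measured - nominal - translation        # deviations remaining after alignment
deviation = np.linalg.norm(residuals, axis=1)       # simplified: radial deviation per point

t = 0.2   # assumed total width of the profile tolerance zone
print("within tolerance" if deviation.max() <= t / 2 else "out of tolerance")
```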
ContributorsRao, Shyam Subramanya (Author) / Davidson, Joseph K. (Thesis advisor) / Arizona State University (Publisher)
Created2014
Description
Conformance of a manufactured feature to the applied geometric tolerances is verified by analyzing the point cloud that is measured on the feature. To that end, a geometric feature is fitted to the point cloud and the results are assessed to see whether the fitted feature lies within the specified tolerance limits or not. Coordinate Measuring Machines (CMMs) use feature-fitting algorithms that incorporate least-squares estimates as a basis for obtaining minimum, maximum, and zone fits. However, a comprehensive set of algorithms addressing the fitting procedure (all datums, targets) for every tolerance class is not available. Therefore, a library of algorithms is developed to aid the process of feature fitting and tolerance verification. This work addresses linear, planar, circular, and cylindrical features only. The set of algorithms described conforms to the international standards for GD&T. In order to reduce the number of points to be analyzed, and to identify the possible candidate points for linear, circular, and planar features, 2D and 3D convex hulls are used. For minimum, maximum, and Chebyshev cylinders, geometric search algorithms are used. Algorithms are divided into three major categories: least-squares, unconstrained, and constrained fits. Primary datums require one-sided unconstrained fits for their verification. Secondary datums require one-sided constrained fits for their verification. For size and other tolerance verifications, both unconstrained and constrained fits are required.
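As a sketch of two of the fit categories named above (an assumed formulation, not the library's code), the snippet below computes a least-squares plane through a toy point cloud and then performs a one-sided unconstrained shift of that plane, as would be needed when establishing a primary datum plane.

```python
import numpy as np

points = np.array([[0.0, 0.0, 0.01], [1.0, 0.0, -0.02], [0.0, 1.0, 0.03],
                   [1.0, 1.0, 0.00], [0.5, 0.5, 0.02]])   # measured point cloud (toy data)

centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid)
normal = vt[-1]                          # least-squares plane normal (direction of least variance)

d = (points - centroid) @ normal         # signed deviations from the least-squares plane

# One-sided (unconstrained) fit for a primary datum: translate the plane along its
# normal until every point lies on or above it
datum_offset = d.min()

print("least-squares plane normal:", np.round(normal, 4))
print("flatness estimate (peak-to-valley):", round(float(d.max() - d.min()), 4))
print("datum plane shift from LS plane:", round(float(datum_offset), 4))
```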
ContributorsMohan, Prashant (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created2014
Description
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically driven vapor-compression equipment. This work proposes an alternative data center cooling architecture that is heat-driven; the source is the heat produced by the computer equipment. This dissertation details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a computer server blade from a data center. The experiments involve four liquid-cooling setups and associated heat-extraction methods, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes a useful relationship among CPU temperature, power, and utilization level. In response to the system data, this project explores the heat, temperature, and power effects of adding insulation, varying water flow, varying CPU loading, and varying the cold plate-to-CPU clamping pressure. The idea is to provide an optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power, and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal energy of up to 67 Wth show a sufficiently high temperature and thermal energy to serve as the input temperature and heat input to an absorption chiller. This dissertation performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work. Exergy as a source of realizable work is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is the usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational, and efficiency values are calculated and plotted for the CPU studied, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
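A back-of-the-envelope check of the thermal exergy implied by the reported conditions can be made with the standard Carnot factor, X = Q(1 − T0/T); the ambient (dead-state) temperature below is an assumed value, and the result is only indicative.

```python
# Thermal exergy from the reported CPU conditions, using X = Q * (1 - T0 / T)
T_cpu = 93 + 273.15     # reported peak CPU temperature, K
T0    = 25 + 273.15     # assumed ambient (dead-state) temperature, K
Q     = 67.0            # reported captured thermal power, W

exergy = Q * (1 - T0 / T_cpu)
print(f"thermal exergy available for work: {exergy:.1f} W")   # roughly 12 W for these values
```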
ContributorsHaywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency for judgment of their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but the use of sensors can interfere with skill execution and thus limit the potential for evaluating real-life surgery. However, having a method to judge skills automatically in real-life conditions should be the ultimate goal, since only with such features would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical-tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because the system does not require surgeons to wear special sensors, it has a distinct advantage over alternatives in offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a series of computer vision algorithms and provides on-screen, real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.
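A hypothetical sketch of the kind of skill-measuring features such a system might compute from tracked tool positions is shown below; the specific features, trajectory data, and scoring weights are assumptions for illustration, not the system's actual feature set or model.

```python
import numpy as np

def motion_features(positions, fps=30.0):
    """positions: (N, 2) array of tool-tip pixel coordinates, one row per video frame."""
    steps = np.diff(positions, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()          # total distance travelled
    velocity = np.linalg.norm(steps, axis=1) * fps
    jerkiness = np.abs(np.diff(velocity, 2)).mean()            # rough smoothness proxy
    return path_length, jerkiness

# Toy trajectory standing in for the output of the vision-based tool tracker
track = np.cumsum(np.random.default_rng(0).normal(0, 2, size=(300, 2)), axis=0)
length, jerk = motion_features(track)

# A composite score could then be a learned combination of such features
score = -0.01 * length - 0.5 * jerk   # illustrative weights only
print(f"path length: {length:.1f} px, jerkiness: {jerk:.3f}, composite score: {score:.2f}")
```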
ContributorsIslam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Climate change has been one of the major issues of global economic and social concern in the past decade. To quantitatively predict global climate change, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has organized a multi-national effort to use global atmosphere-ocean models to project anthropogenically induced climate changes in the 21st century. The computer simulations performed with those models and archived by the Coupled Model Intercomparison Project - Phase 5 (CMIP5) form the most comprehensive quantitative basis for the prediction of global environmental changes on decadal-to-centennial time scales. While the CMIP5 archives have been widely used for policy making, the inherent biases in the models have not been systematically examined. The main objective of this study is to validate the CMIP5 simulations of the 20th century climate against observations to quantify the biases and uncertainties in state-of-the-art climate models. Specifically, this work focuses on three major features of the atmosphere: the jet streams over the North Pacific and Atlantic Oceans, the low-level jet (LLJ) stream over central North America, which affects weather in the United States, and the near-surface wind field over North America, which is relevant to energy applications. The errors in the model simulations of those features are systematically quantified, and the uncertainties in future predictions are assessed for stakeholders to use in climate applications. Additional atmospheric model simulations are performed to determine the sources of the errors in climate models. The results reject the popular idea that errors in the sea surface temperature, due to an inaccurate ocean circulation, contribute to the errors in the major atmospheric jet streams.
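The basic bias metrics used in this kind of model validation can be sketched as follows; the fields here are synthetic stand-ins, whereas real use would compare CMIP5 climatologies with reanalysis or observational fields regridded to a common grid (with area weighting by the cosine of latitude).

```python
import numpy as np

rng = np.random.default_rng(1)
obs_climatology   = 20.0 + rng.normal(0.0, 5.0, size=(73, 144))                    # stand-in observed field
model_climatology = obs_climatology + 2.0 + rng.normal(0.0, 1.0, size=(73, 144))   # stand-in biased model field

bias = model_climatology - obs_climatology
print(f"mean bias: {bias.mean():.2f}")
print(f"RMSE:      {np.sqrt((bias**2).mean()):.2f}")
```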
ContributorsKulkarni, Sujay (Author) / Huang, Huei-Ping (Thesis advisor) / Calhoun, Ronald (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created2014
Description
Aluminum alloys and their composites are attractive materials for applications requiring high strength-to-weight ratios and reasonable cost. Many of these applications, such as those in the aerospace industry, undergo fatigue loading. An understanding of the microstructural damage that occurs in these materials is critical in assessing their fatigue resistance. Two distinct experimental studies were performed to further the understanding of fatigue damage mechanisms in aluminum alloys and their composites, specifically fracture and plasticity. Fatigue resistance of metal matrix composites (MMCs) depends on many aspects of composite microstructure. Fatigue crack growth behavior is particularly dependent on the reinforcement characteristics and matrix microstructure. The goal of this work was to obtain a fundamental understanding of fatigue crack growth behavior in SiC particle-reinforced 2080 Al alloy composites. In situ X-ray synchrotron tomography was performed on two samples at low (R=0.1) and at high (R=0.6) R-ratios. The resulting reconstructed images were used to obtain three-dimensional (3D) rendering of the particles and fatigue crack. Behaviors of the particles and crack, as well as their interaction, were analyzed and quantified. Four-dimensional (4D) visual representations were constructed to aid in the overall understanding of damage evolution. During fatigue crack growth in ductile materials, a plastic zone is created in the region surrounding the crack tip. Knowledge of the plastic zone is important for the understanding of fatigue crack formation as well as subsequent growth behavior. The goal of this work was to quantify the 3D size and shape of the plastic zone in 7075 Al alloys. X-ray synchrotron tomography and Laue microdiffraction were used to non-destructively characterize the volume surrounding a fatigue crack tip. The precise 3D crack profile was segmented from the reconstructed tomography data. Depth-resolved Laue patterns were obtained using differential-aperture X-ray structural microscopy (DAXM), from which peak-broadening characteristics were quantified. Plasticity, as determined by the broadening of diffracted peaks, was mapped in 3D. Two-dimensional (2D) maps of plasticity were directly compared to the corresponding tomography slices. A 3D representation of the plastic zone surrounding the fatigue crack was generated by superimposing the mapped plasticity on the 3D crack profile.
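One common, simplified way to quantify peak broadening is the full width at half maximum (FWHM) of a diffraction peak profile; the snippet below illustrates the idea on a synthetic Gaussian peak and is not the DAXM analysis code used in the work.

```python
import numpy as np

two_theta = np.linspace(42.0, 44.0, 400)                    # scattering angle, degrees (assumed range)
peak = np.exp(-0.5 * ((two_theta - 43.0) / 0.08) ** 2)      # synthetic Gaussian peak profile

half_max = peak.max() / 2.0
above = two_theta[peak >= half_max]                         # angles where intensity exceeds half maximum
fwhm = above.max() - above.min()
print(f"FWHM: {fwhm:.3f} degrees")   # ~0.188 deg here; broader peaks imply more stored plastic strain
```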
ContributorsHruby, Peter (Author) / Chawla, Nikhilesh (Thesis advisor) / Solanki, Kiran (Committee member) / Liu, Yongming (Committee member) / Arizona State University (Publisher)
Created2014
Description
As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms which are capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools to address the shortcomings of the previous generation. One important example of this is the improved performance of the newer tools on the large class of machine learning algorithms that are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches which I attempted but which did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach which uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes, with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance. In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and improves performance to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
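A condensed, hedged sketch of greedy rank-one matrix pursuit for completion is shown below; it omits OR1MP's re-optimization of all basis weights at each step and the DFC column-splitting across Spark workers, and the matrix sizes and data are toy values.

```python
import numpy as np

def rank_one_pursuit(M, mask, iters=20):
    """M: matrix with zeros at unobserved entries; mask: 1.0 where an entry is observed."""
    X = np.zeros_like(M)
    for _ in range(iters):
        residual = (M - X) * mask                      # residual on observed entries only
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        basis = np.outer(u[:, 0], vt[0])               # top rank-one basis of the residual
        # Step size from a one-dimensional least-squares fit on the observed entries
        alpha = (residual * basis).sum() / ((basis * mask) ** 2).sum()
        X = X + alpha * basis
    return X

rng = np.random.default_rng(0)
true_M = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 15))   # low-rank ground truth
mask = (rng.random(true_M.shape) < 0.5).astype(float)          # roughly half the entries observed
completed = rank_one_pursuit(true_M * mask, mask)
rmse = np.sqrt((((completed - true_M) * (1 - mask)) ** 2).sum() / (1 - mask).sum())
print(f"RMSE on held-out entries: {rmse:.3f}")
```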
ContributorsKrouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2014
Description
In engineering, buckling is a mechanical instability of walls or columns under compression and is usually a problem that engineers try to prevent. In everyday life, buckles (wrinkles) on different substrates are ubiquitous -- from human skin to a rotten apple, they are a commonly observed phenomenon. Buckles with macroscopic wavelengths seem to have little technological use; over the past decade or so, however, thanks to the widespread availability of soft polymers and silicone materials, micro-buckles with wavelengths on the submicron-to-micron scale have received increasing attention because they are useful for spontaneously generating well-ordered periodic microstructures without conventional lithographic techniques. This thesis investigates the buckling behavior of thin stiff films on soft polymeric substrates and explores a variety of applications, ranging from optical gratings, optical masks, and energy harvesting to energy storage. A laser scanning technique is proposed to detect micro-strain induced by thermomechanical loads, and a periodic buckling microstructure, spontaneously generated from a metallic thin film on a polymer substrate, is employed as a diffraction grating with broad wavelength tunability. A mechanical strategy is also presented for quantitatively controlled buckling of nanoribbons of piezoelectric material on polymer substrates, involving the combined use of lithographically patterned surface adhesion sites and a transfer printing technique. The precisely engineered buckling configurations provide a route to energy harvesters with extremely high levels of stretchability. This stiff-thin-film/polymer hybrid structure is further employed in the electrochemical field to circumvent the electrochemically driven stress issue in silicon-anode-based lithium-ion batteries. It is shown that the initially flat silicon nanoribbon anode on a polymer substrate tends to buckle to mitigate the lithiation-induced stress and thus avoid pulverization of the silicon anode. Spontaneously generated submicron buckles of the film/polymer system are also used as an optical mask to produce submicron periodic patterns with a large filling ratio, in contrast to the ~100 nm edge patterns produced by conventional near-field soft contact photolithography. This thesis aims to deepen understanding of the buckling behavior of thin films on compliant substrates and, in turn, to harness the fundamental properties of such instability for diverse applications.
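The submicron-to-micron wavelengths mentioned above follow from the standard wrinkling relation λ = 2πh_f(Ē_f/3Ē_s)^(1/3), with Ē = E/(1 − ν²); the film thickness and material constants below are assumed, generic values rather than those of the samples studied.

```python
import math

h_f  = 10e-9                     # film thickness, m (assumed 10 nm metal film)
E_f, nu_f = 80e9, 0.35           # film modulus (Pa) and Poisson ratio (assumed)
E_s, nu_s = 2e6, 0.48            # PDMS-like substrate values (assumed)

E_f_bar = E_f / (1 - nu_f**2)    # plane-strain moduli
E_s_bar = E_s / (1 - nu_s**2)
wavelength = 2 * math.pi * h_f * (E_f_bar / (3 * E_s_bar)) ** (1 / 3)
print(f"predicted buckle wavelength: {wavelength * 1e6:.2f} micrometers")   # micron scale for these values
```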
ContributorsMa, Teng (Author) / Jiang, Hanqing (Thesis advisor) / Yu, Hongyu (Committee member) / Yu, Hongbin (Committee member) / Poon, Poh Chieh Benny (Committee member) / Rajagopalan, Jagannathan (Committee member) / Arizona State University (Publisher)
Created2014