Matching Items (555)
Description
The ability to design high performance buildings has acquired great importance in recent years due to numerous federal, societal and environmental initiatives. However, this endeavor is much more demanding in terms of designer expertise and time. It requires a whole new level of synergy between automated performance prediction and the human capability to perceive, evaluate and ultimately select a suitable solution. While performance prediction can be highly automated through the use of computers, performance evaluation cannot, unless it is with respect to a single criterion. The need to address multi-criteria requirements makes it more valuable for a designer to know the "latitude" or "degrees of freedom" available in changing certain design variables while achieving preset criteria such as energy performance, life-cycle cost, and environmental impact. This requirement can be met by a decision support framework based on near-optimal "satisficing" as opposed to purely optimal decision-making techniques. Currently, such a comprehensive design framework is lacking, which is the basis for undertaking this research. The primary objective of this research is to facilitate a complementary relationship between designers and computers for Multi-Criterion Decision Making (MCDM) during high performance building design. It is based on the application of Monte Carlo approaches to create a database of solutions using deterministic whole-building energy simulations, along with data mining methods to rank variable importance and reduce the multi-dimensionality of the problem. A novel interactive visualization approach is then proposed which uses regression-based models to create dynamic interplays of how varying these important variables affects the multiple criteria, while providing a visual range or band of variation of the different design parameters.
The MCDM process has been incorporated into an alternative methodology for high performance building design referred to as Visual Analytics based Decision Support Methodology [VADSM]. VADSM is envisioned to be most useful during the conceptual and early design performance modeling stages by providing a set of potential solutions that can be analyzed further for final design selection. The proposed methodology can be used for new building design synthesis as well as evaluation of retrofits and operational deficiencies in existing buildings.
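The sample-then-filter idea behind the solution database can be sketched in a few lines of Python. Everything below is illustrative only: the design variables, the stand-in energy model, and the 5% satisficing band are hypothetical assumptions, not taken from the thesis.

```python
import random

def energy_model(wwr, insulation_r, cop):
    # Hypothetical stand-in for a deterministic whole-building energy
    # simulation; the actual workflow would call a simulation engine.
    return 500.0 - 30.0 * insulation_r - 40.0 * cop + 200.0 * wwr

random.seed(1)

# Monte Carlo sampling of design variables builds the solution database.
database = []
for _ in range(2000):
    wwr = random.uniform(0.2, 0.6)   # window-to-wall ratio (hypothetical)
    r = random.uniform(2.0, 6.0)     # insulation R-value (hypothetical)
    cop = random.uniform(3.0, 5.0)   # HVAC coefficient of performance
    database.append((wwr, r, cop, energy_model(wwr, r, cop)))

# "Satisficing" filter: keep every near-optimal design within a 5% band
# of the best energy use, rather than the single optimum.
best = min(d[3] for d in database)
band = [d for d in database if d[3] <= best * 1.05]

print(f"{len(band)} of {len(database)} designs lie within 5% of the optimum")
print(f"window-to-wall ratio latitude in that band: "
      f"{min(d[0] for d in band):.2f} to {max(d[0] for d in band):.2f}")
```

The spread of each variable inside the band is the "latitude" a designer retains while still meeting the preset criterion.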
Contributors: Dutta, Ranojoy (Author) / Reddy, T Agami (Thesis advisor) / Runger, George C. (Committee member) / Addison, Marlin S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The microelectronics industry is continuously moving toward smaller devices and reduced form factors, resulting in new challenges. The reduction in device and interconnect solder bump sizes has led to increased current density in these small solder joints. The higher level of electromigration that occurs at increased current density is of great concern, as it affects the reliability of entire microelectronic systems. This paper reviews electromigration in Pb-free solders, focusing specifically on Sn-0.7 wt.% Cu solder joints. The effects of texture, grain orientation, and grain-boundary misorientation angle on electromigration and intermetallic compound (IMC) formation are studied through EBSD analysis performed on actual C4 bumps.
Contributors: Lara, Leticia (Author) / Tasooji, Amaneh (Thesis advisor) / Lee, Kyuoh (Committee member) / Krause, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic Resonance Imaging using spiral trajectories has many advantages in speed, data-acquisition efficiency, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires only a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
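As a toy illustration of the delay problem (not the thesis's estimation method), the effect of a system delay on a sampled gradient waveform can be modeled as a fractional-sample shift applied by linear interpolation. The waveform values and the half-sample delay below are made up for the example.

```python
def resample_with_delay(samples, delay):
    """Shift a uniformly sampled waveform by a (possibly fractional)
    number of samples using linear interpolation; edge values are held."""
    out = []
    n = len(samples)
    for i in range(n):
        t = i - delay
        if t <= 0:
            out.append(samples[0])
        elif t >= n - 1:
            out.append(samples[-1])
        else:
            lo = int(t)
            frac = t - lo
            out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
    return out

# Hypothetical gradient waveform on a uniform raster (arbitrary units).
ideal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
actual = resample_with_delay(ideal, 0.5)   # half-sample system delay
print(actual)  # -> [0.0, 0.5, 1.5, 2.5, 3.5, 4.5]
```

Estimating `delay` per gradient channel and resampling the assumed trajectory accordingly is the general kind of correction the abstract describes.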
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Vehicle type choice is a significant determinant of fuel consumption and energy sustainability; larger, heavier vehicles consume more fuel and expel twice as many pollutants as their smaller, lighter counterparts. Over the course of the past few decades, vehicle type choice has seen a vast shift, with many households making more trips in larger vehicles with lower fuel economy. During the 1990s, SUVs were the fastest growing segment of the automotive industry, comprising 7% of the total light vehicle market in 1990 and 25% in 2005. More recently, due to rising oil prices, greater environmental awareness, the desire to reduce dependence on foreign oil, and the availability of new vehicle technologies, many households are considering newer vehicles with better fuel economy, such as hybrids and electric vehicles, over the SUVs or low-fuel-economy vehicles they may already own. The goal of this research is to examine how vehicle miles traveled, fuel consumption, and emissions may be reduced through shifts in vehicle type choice behavior. Using the 2009 National Household Travel Survey data, it is possible to develop a model to estimate household travel demand and total fuel consumption. Given a vehicle choice shift scenario, the model can then be used to calculate the potential fuel consumption savings that would result from such a shift. In this way, it is possible to estimate fuel consumption reductions under a wide variety of scenarios.
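The scenario calculation described in the last sentences reduces to simple arithmetic once household travel demand is known. A minimal sketch follows; the household fleet, mileages, and fuel economies are hypothetical, not NHTS figures.

```python
def annual_fuel_gal(vmt, mpg):
    # Annual fuel use (gallons) = vehicle miles traveled / fuel economy.
    return vmt / mpg

# Hypothetical two-vehicle household (annual VMT, mpg).
fleet_before = [(12000.0, 18.0),   # SUV
                (8000.0, 28.0)]    # compact sedan

# Shift scenario: the SUV's miles move to a 45-mpg hybrid.
fleet_after = [(12000.0, 45.0),
               (8000.0, 28.0)]

before = sum(annual_fuel_gal(v, m) for v, m in fleet_before)
after = sum(annual_fuel_gal(v, m) for v, m in fleet_after)
saved = before - after
print(f"before: {before:.0f} gal/yr, after: {after:.0f} gal/yr, "
      f"saved: {saved:.0f} gal/yr ({100.0 * saved / before:.0f}%)")
```

Summing such household-level differences over a survey-weighted population would give the fleet-wide savings estimate the abstract refers to.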
Contributors: Christian, Keith (Author) / Pendyala, Ram M. (Thesis advisor) / Chester, Mikhail (Committee member) / Kaloush, Kamil (Committee member) / Ahn, Soyoung (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The construction industry in India suffers from major time and cost overruns. Data from government and industry reports suggest that projects suffer from 20 to 25 percent time and cost overruns. Waste of resources has been identified as a major source of inefficiency. Despite a substantial increase in the past few years, demand for professionals and contractors still exceeds supply by a large margin. The traditional methods adopted in the Indian construction industry may not meet the needs of this dynamic environment, as they have produced large inefficiencies. Innovative approaches to procurement and project management can satisfy these aspirations as well as bring added value. The problems faced by the Indian construction industry are very similar to those faced by other developing countries. The objective of this paper is to discuss and analyze the economic concerns and inefficiencies, and to investigate a model that both explains the structure of the Indian construction industry and provides a framework to improve efficiency. The Best Value (BV) model is examined as an approach to be adopted in lieu of the traditional approach. This could result in efficient construction projects, which until now have been a rarity, by minimizing cost overruns and delays.
Contributors: Nihas, Syed (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Kashiwagi, Jacob (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The objective of this thesis was to compare various approaches for classifying `good' and `bad' parts via non-destructive resonance testing methods by collecting and analyzing experimental data in the frequency and time domains. A Laser Scanning Vibrometer was employed to measure the vibrations of samples in order to determine spectral characteristics such as natural frequencies and amplitudes. Statistical pattern recognition tools such as the Hilbert-Huang transform, Fisher's discriminant, and neural networks were used to identify and classify unknown samples as defective or not. In this work, Finite Element Analysis software packages (ANSYS 13.0 and NASTRAN NX8.0) were used to obtain estimates of resonance frequencies in `good' and `bad' samples. Furthermore, a system identification approach was used to generate Auto-Regressive Moving-Average with exogenous component, Box-Jenkins, and Output-Error models from experimental data for use in classification.
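As a hedged sketch of one of the tools named above, a one-dimensional Fisher-style discriminant can separate parts on a single spectral feature such as the first natural frequency. The frequencies, the variance-weighted threshold rule, and the assumption that defects lower the natural frequency are all illustrative, not taken from the thesis.

```python
def fisher_1d(good, bad):
    """One-dimensional Fisher-style discriminant: in 1-D the projection
    is trivial, so classification reduces to a variance-weighted split
    point between the class means (the midpoint when variances are equal)."""
    m_g = sum(good) / len(good)
    m_b = sum(bad) / len(bad)
    v_g = sum((x - m_g) ** 2 for x in good) / len(good)
    v_b = sum((x - m_b) ** 2 for x in bad) / len(bad)
    thr = (m_g * v_b + m_b * v_g) / (v_g + v_b)
    side = 1 if m_g > thr else -1
    return lambda x: "good" if side * (x - thr) > 0 else "bad"

# Hypothetical first-mode resonance frequencies (Hz); the example assumes
# a defect such as a crack lowers the natural frequency.
good_parts = [1502.0, 1498.5, 1505.1, 1500.3]
bad_parts = [1450.2, 1447.8, 1455.0, 1444.1]

classify = fisher_1d(good_parts, bad_parts)
print(classify(1499.0), classify(1452.0))  # -> good bad
```

A real classifier would use many spectral features and the full multivariate form, but the decision principle, maximizing between-class separation relative to within-class scatter, is the same.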
Contributors: Jameel, Osama (Author) / Redkar, Sangram (Thesis advisor) / Arizona State University (Publisher)
Created: 2013
Description
Over the past couple of decades, quality has been an area of increased focus, and multiple models and approaches have been proposed to measure quality in the construction industry. This paper focuses on determining the quality of one type of roofing system used in the construction industry, the Sprayed Polyurethane Foam roof (SPF roof). Thirty-seven urethane-coated SPF roofs installed in 2005/2006 were visually inspected three times, at the 4-, 6-, and 7-year marks, to measure the percentage of blisters and repairs. A repair criterion was established after the 6-year mark based on the collected data, and vulnerable roofs were reported to contractors. Furthermore, the relation between roof quality and four possible installation-time factors, i.e., contractor, demographics, season, and difficulty (number of penetrations and roof size in square feet), was determined. Demographics and difficulty did not affect the quality of the roofs, whereas the contractor and the season in which the roof was installed did.
Contributors: Gajjar, Dhaval (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Reductive dechlorination by members of the bacterial genus Dehalococcoides is a common and cost-effective avenue for in situ bioremediation of sites contaminated with the chlorinated solvents trichloroethene (TCE) and perchloroethene (PCE). The overarching goal of my research was to address some of the challenges associated with bioremediation timeframes by improving the rates of reductive dechlorination and the growth of Dehalococcoides in mixed communities. Biostimulation of contaminated sites or microcosms with electron donor fails to consistently promote dechlorination of PCE/TCE beyond cis-dichloroethene (cis-DCE), even when the presence of Dehalococcoides is confirmed. Supported by data from microcosm experiments, I showed that the stalling at cis-DCE is due to H2 competition, in which components of the soil or sediment serve as electron acceptors for competing microorganisms. However, once competition was minimized through selective enrichment techniques, I illustrated how to obtain both fast rates and high-density Dehalococcoides using three distinct enrichment cultures. Having achieved a heightened awareness of the fierce competition for electron donor, I then identified bicarbonate (HCO3-) as a potential H2 sink during reductive dechlorination. HCO3- is the natural buffer in groundwater but also the electron acceptor for hydrogenotrophic methanogens and homoacetogens, two microbial groups commonly encountered alongside Dehalococcoides. By testing a range of concentrations in batch experiments, I showed that methanogens are favored at low HCO3- and homoacetogens at high HCO3-. The high HCO3- concentrations increased the H2 demand, which negatively affected the rates and extent of dechlorination. By applying the gained knowledge of microbial community management, I ran the first successful continuous stirred-tank reactor (CSTR) at a 3-d hydraulic retention time for the cultivation of dechlorinating cultures.
I demonstrated that, using carefully selected conditions in a CSTR, cultivation of Dehalococcoides at short retention times is feasible, resulting in robust cultures capable of fast dechlorination. Lastly, I provide systematic insight into the effect of high ammonia on communities involved in the dechlorination of chloroethenes. This work documents the potential use of landfill leachate as a substrate for dechlorination and an increased tolerance of Dehalococcoides to high ammonia concentrations (2 g L-1 NH4+-N) without loss of the ability to dechlorinate TCE to ethene.
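The 3-d hydraulic retention time mentioned above implies a washout constraint from standard chemostat theory: at steady state, a CSTR's dilution rate D = 1/HRT must not exceed the organisms' specific growth rate. A minimal sketch of that arithmetic (textbook chemostat math, not a calculation from the dissertation):

```python
import math

def min_doubling_time_days(hrt_days):
    # At steady state, dilution rate D = 1/HRT must not exceed the
    # specific growth rate mu; with mu >= 1/HRT, the doubling time
    # t_d = ln(2)/mu can be at most ln(2) * HRT.
    return math.log(2.0) * hrt_days

t_d = min_doubling_time_days(3.0)
print(f"a 3-day HRT retains organisms that double at least every {t_d:.2f} days")
```

This is why running dechlorinators at short retention times is demanding: the culture must sustain a doubling time of roughly two days or faster to avoid washout.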
Contributors: Delgado, Anca Georgiana (Author) / Krajmalnik-Brown, Rosa (Thesis advisor) / Cadillo-Quiroz, Hinsby (Committee member) / Halden, Rolf U. (Committee member) / Rittmann, Bruce E. (Committee member) / Stout, Valerie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Human breath is a concoction of thousands of compounds, carrying in it a breath-print of physiological processes in the body. Though breath provides a non-invasive and easy-to-handle biological fluid, its analysis for clinical diagnosis is not very common; part of the reason is the unavailability of cost-effective and convenient tools for such analysis. The scientific literature is full of novel sensor ideas, but working devices are few, because developing one poses several challenges: trace-level detection, the presence of hundreds of interfering compounds, excessive humidity, differing sampling regulations, and personal variability. To meet these challenges and deliver a low-cost solution, optical sensors based on specific colorimetric chemical reactions on mesoporous membranes have been developed. Sensor hardware utilizing a cost-effective and ubiquitously available light source (LED) and detector (webcam/photodiodes) has been developed and optimized for sensitive detection. A sample-conditioning mouthpiece suitable for portable sensors was developed and integrated. The sensors are capable of communicating with mobile phones, realizing the idea of m-health for easy personal health monitoring in free-living conditions. Nitric oxide and acetone were chosen as the analytes of interest. Nitric oxide levels in breath correlate with lung inflammation, which makes them useful for asthma management. Acetone levels increase during ketosis resulting from fat metabolism in the body. Monitoring breath acetone thus provides useful information to people with type 1 diabetes, epileptic children on ketogenic diets, and people following fitness plans for weight loss.
Contributors: Prabhakar, Amlendu (Author) / Tao, Nongjian (Thesis advisor) / Forzani, Erica (Committee member) / Lindsay, Stuart (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Transmission expansion planning (TEP) is a complex decision-making process that requires comprehensive analysis to determine the time, location, and number of electric power transmission facilities that are needed in the future power grid. This dissertation investigates the topic of solving TEP problems for large power systems and can be divided into two parts. The first part focuses on developing a more accurate network model for TEP studies. First, a mixed-integer linear programming (MILP) based TEP model is proposed for solving multi-stage TEP problems. Compared with previous work, the proposed approach reduces the number of variables and constraints needed and improves computational efficiency significantly. Second, the AC power flow model is applied to TEP models; relaxations and reformulations are proposed to make the AC-model-based TEP problem solvable. Third, a convexified AC network model is proposed for TEP studies, with reactive power and off-nominal bus voltage magnitudes included in the model. A MILP-based loss model and its relaxations are also investigated. The second part of the dissertation investigates uncertainty modeling in the TEP problem. A two-stage stochastic TEP model is proposed, and decomposition algorithms based on the L-shaped method and progressive hedging (PH) are developed to solve the stochastic model. Results indicate that the stochastic TEP model gives a more accurate estimate of the annual operating cost than the deterministic TEP model, which considers only the peak load.
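The closing comparison, expected operating cost over load scenarios versus cost at peak load only, can be illustrated with toy numbers. The scenario loads, probabilities, and linear cost proxy below are hypothetical and vastly simpler than a real TEP model with network constraints.

```python
# Hypothetical load scenarios (MW, probability); probabilities sum to 1.
scenarios = [(600.0, 0.2),   # peak
             (450.0, 0.5),   # shoulder
             (300.0, 0.3)]   # off-peak
cost_per_mw = 40.0  # $/MW-h linear dispatch-cost proxy (illustrative)

# Deterministic planning evaluates operating cost at the peak load only...
deterministic = 600.0 * cost_per_mw

# ...while the two-stage stochastic model weights each scenario by its
# probability, giving the expected operating cost.
expected = sum(load * p for load, p in scenarios) * cost_per_mw

print(f"peak-load estimate: ${deterministic:,.0f}/h, "
      f"probability-weighted: ${expected:,.0f}/h")
```

The gap between the two figures is the overestimate the abstract attributes to deterministic, peak-load-only planning; decomposition methods such as L-shaped and PH exist to make the scenario-weighted problem tractable at scale.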
Contributors: Zhang, Hui (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald T (Thesis advisor) / Mittelmann, Hans D (Committee member) / Hedman, Kory W (Committee member) / Arizona State University (Publisher)
Created: 2013