Matching Items (222)
Description

Concrete columns constitute the fundamental supports of buildings, bridges, and various other infrastructure, and their failure could lead to the collapse of the entire structure. As such, great effort goes into improving the fire resistance of such columns. In a time-sensitive fire situation, delaying the failure of critical load-bearing structures can allow more time for the evacuation of occupants, recovery of property, and access to the fire. Much work has been done on improving the structural performance of concrete, including reducing column sizes and providing a safer structure. As a result, high-strength (HS) concrete has been developed to fulfill the needs of such improvements. HS concrete differs from normal-strength (NS) concrete in that it has a higher stiffness, lower permeability, and greater durability. This, unfortunately, has resulted in poor performance under fire. The lower permeability allows water vapor to build up, causing HS concrete to suffer explosive spalling under rapid heating. In addition, the coefficient of thermal expansion (CTE) of HS concrete is lower than that of NS concrete. In this study, the effects of introducing a region of crumb rubber concrete into a steel-reinforced concrete column were analyzed. The inclusion of crumb rubber concrete greatly increases the thermal resistivity of the overall column, reducing both the core temperature and the rate at which the column is heated. Different cases were analyzed while varying the position of the crumb-rubber region to characterize the effect of position on the improvement of fire resistance. Computer-simulated finite element analysis was used to calculate the temperature and strain distributions over time across the column's cross-sectional area, with specific interest in the steel-concrete region. Across the several cases investigated, the improvement in time to failure ranged from 32 to 45 minutes.
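
The thermal half of such an analysis can be illustrated with a small transient heat-conduction model. The following sketch is illustrative only: the grid, material properties, ring geometry, and the ISO 834-style fire curve are assumptions, not values from the study. It shows how a low-diffusivity crumb-rubber region slows heating of the core of a square cross-section.

```python
# Minimal 2D explicit finite-difference sketch of transient heat conduction
# across a composite column cross-section. All material properties and the
# fire boundary condition are illustrative assumptions.
import numpy as np

n, width = 61, 0.4                   # grid points per side, column width (m)
dx = width / (n - 1)
alpha = np.full((n, n), 7e-7)        # assumed diffusivity of plain concrete (m^2/s)
# assumed crumb-rubber band: lower diffusivity, placed as a mid-depth ring
x = np.linspace(-width / 2, width / 2, n)
r = np.hypot(*np.meshgrid(x, x))
alpha[(r > 0.10) & (r < 0.14)] = 2e-7

T = np.full((n, n), 20.0)            # initial temperature (deg C)
dt = 0.2 * dx**2 / alpha.max()       # stable explicit time step

def fire_temp(t):
    """ISO 834 standard fire curve (t in seconds), a common testing assumption."""
    return 20.0 + 345.0 * np.log10(8.0 * t / 60.0 + 1.0)

t = 0.0
while t < 3600.0:                    # one hour of fire exposure
    t += dt
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = fire_temp(t)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T[1:-1, 1:-1] += dt * (alpha * lap)[1:-1, 1:-1]

print(f"core temperature after 1 h: {T[n // 2, n // 2]:.0f} deg C")
```

Removing the line that lowers `alpha` in the ring and rerunning gives a feel for how much an insulating region reduces the core temperature and heating rate.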
ContributorsZiadeh, Bassam Mohammed (Author) / Phelan, Patrick (Thesis advisor) / Kaloush, Kamil (Thesis advisor) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created2011
Description

This study presents the results of one of the first attempts to characterize the pore water pressure response of soils subjected to traffic loading under saturated and unsaturated conditions. It is widely known that pore water pressure develops within the soil pores in response to external stimuli. It has also been recognized that the development of pore water pressure contributes to the degradation of the resilient modulus of unbound materials. In recent decades, several efforts have been directed at modeling the effect of air and water pore pressures upon resilient modulus. However, none of them consider dynamic variations in pressures; rather, they are based on equilibrium values corresponding to initial conditions. Measuring this response is challenging, especially in soils under unsaturated conditions. Models are needed not only to overcome testing limitations but also to understand the dynamic behavior of internal pore pressures that, under critical conditions, may even lead to failure. A testing program was conducted to characterize the pore water pressure response of a low-plasticity fine clayey sand subjected to dynamic loading. The bulk stress, initial matric suction, and dwelling time parameters were controlled and their effects analyzed. The results were used to develop models capable of predicting the accumulated excess pore pressure at any given time during the traffic loading and unloading phases. Important findings regarding the influence of the controlled variables challenge common beliefs. The accumulated excess pore water pressure was found to be higher for unsaturated soil specimens than for saturated soil specimens. The maximum pore water pressure always increased when the high bulk stress level was applied. Higher dwelling time was found to decelerate the accumulation of pore water pressure; in addition, the higher the dwelling time, the lower the maximum pore water pressure. It was concluded that, upon further research, the proposed models may become a powerful tool not only to overcome testing limitations but also to enhance current design practices and to prevent soil failure due to excessive development of pore water pressure.
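
The qualitative trends above can be mimicked with a simple cycle-by-cycle accumulation model. The sketch below is purely hypothetical: the functional form and the parameters a, b, and c are assumptions for demonstration, not the models fitted in this study.

```python
# Hypothetical accumulation model for excess pore water pressure under
# repeated traffic pulses; form and parameters are illustrative only.
import math

def excess_pwp(n_cycles, bulk_stress, dwell_s, a=0.01, b=0.6, c=0.02):
    """Accumulated excess pore water pressure (kPa) after n_cycles pulses.

    bulk_stress : applied bulk stress (kPa); higher stress -> faster buildup
    dwell_s     : rest (dwelling) time between pulses (s); longer rests let
                  partial dissipation decelerate the accumulation
    """
    u = 0.0
    for n in range(1, n_cycles + 1):
        du = a * bulk_stress / n**b            # diminishing increment per cycle
        u = (u + du) * math.exp(-c * dwell_s)  # partial dissipation during rest
    return u

# longer dwelling time -> lower accumulated pore pressure, as observed above
print(excess_pwp(1000, bulk_stress=200.0, dwell_s=0.1))
print(excess_pwp(1000, bulk_stress=200.0, dwell_s=1.0))
```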

ContributorsCary, Carlos (Author) / Zapata, Claudia E (Thesis advisor) / Witczak, Matthew W. (Thesis advisor) / Kaloush, Kamil (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created2011
Description

A recent joint study by Arizona State University and the Arizona Department of Transportation (ADOT) was conducted to evaluate certain Warm Mix Asphalt (WMA) properties in the laboratory. WMA material was taken from an actual ADOT project that involved two WMA sections. The first section used a foaming-based WMA admixture, and the second section used a chemical-based WMA admixture. The rest of the project used a control hot mix asphalt (HMA) mixture. The evaluation included testing of field-core specimens and laboratory-compacted specimens. The laboratory specimens were compacted at two different temperatures: 270 °F (132 °C) and 310 °F (154 °C). The experimental plan included four laboratory tests: the dynamic modulus (E*), indirect tensile strength (IDT), moisture damage evaluation using the AASHTO T-283 test, and the Hamburg Wheel-Track Test. The dynamic modulus E* results of the field cores at 70 °F showed similar E* values for the control HMA and foaming-based WMA mixtures; the E* values of the chemical-based WMA mixture were relatively higher. IDT test results of the field cores showed findings comparable to the E* results. For the laboratory-compacted specimens, both E* and IDT results indicated that decreasing the compaction temperature from 310 °F to 270 °F did not have any negative effect on material strength for either WMA mixture, while the control HMA strength was affected to some extent. It was noticed that the E* and IDT results of the chemical-based WMA field cores were high; however, the laboratory-compacted specimens did not show the same tendency. The moisture sensitivity findings from the TSR test disagreed with those of the Hamburg test: while TSR results indicated relatively low values of about 60% for all three mixtures, Hamburg test results were excellent. In general, the results of this study indicated that both WMA mixes are best evaluated through field-compacted mixes/cores; the results of the laboratory-compacted specimens were helpful to a certain extent. The dynamic moduli for the field-core specimens were higher than for those compacted in the laboratory. The moisture damage findings indicated that more investigations are needed to evaluate moisture damage susceptibility in the field.
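
For reference, the tensile strength ratio (TSR) reported by AASHTO T-283 is simply the ratio of average conditioned to average unconditioned indirect tensile strength. A minimal sketch, with made-up strength values chosen to land near the roughly 60% reported above:

```python
# TSR per AASHTO T-283: average IDT strength of moisture-conditioned
# specimens over that of unconditioned specimens, as a percentage.
# The strength values below are made-up examples.

def tsr(conditioned_psi, unconditioned_psi):
    avg = lambda xs: sum(xs) / len(xs)
    return 100.0 * avg(conditioned_psi) / avg(unconditioned_psi)

# values below ~80% are commonly taken to flag moisture susceptibility
print(f"TSR = {tsr([55.0, 58.0, 61.0], [95.0, 98.0, 97.0]):.0f}%")
```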

ContributorsAlossta, Abdulaziz (Author) / Kaloush, Kamil (Thesis advisor) / Witczak, Matthew W. (Committee member) / Mamlouk, Michael S. (Committee member) / Arizona State University (Publisher)
Created2011
Description

In many classification problems, data samples cannot be collected easily, for example in drug trials, biological experiments, and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are probably more important than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically higher in the presence of outlier data points. The aim of this thesis is to develop a maximum-margin classifier suited to addressing the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic data and experimental data from biomedical studies, and is competitive with a classifier derived along similar lines when real-life data examples are considered.
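
One common way to build such a robust maximum-margin classifier is to cap (truncate) the hinge loss so that points far on the wrong side of the boundary, the likely outliers, stop exerting influence. The sketch below is a generic illustration of that idea on an assumed toy dataset, not the thesis's RSVM formulation.

```python
# Linear max-margin classifier trained by subgradient descent on a capped
# (truncated) hinge loss; the cap keeps gross outliers from dragging the
# boundary. Generic illustration only.
import numpy as np

def robust_svm(X, y, lam=0.01, cap=1.5, lr=0.05, epochs=300):
    """y in {-1, +1}. Loss per point: min(max(0, 1 - y*f(x)), cap)."""
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            loss = 1.0 - y[i] * (X[i] @ w + b)
            if 0.0 < loss < cap:            # active, uncapped region
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:                           # satisfied or capped: only regularize
                w -= lr * lam * w
    return w, b

X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0], [2.0, 2.0],
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, -2.0], [-2.0, -2.0],
              [3.0, 3.0]])                  # last point: mislabeled outlier
y = np.array([1, 1, 1, 1, -1, -1, -1, -1, -1])
w, b = robust_svm(X, y)
print(np.sign(X @ w + b))                   # outlier no longer flips the boundary
```

With an ordinary (uncapped) hinge, the mislabeled point at (3, 3) keeps pulling the separating plane toward itself; once its loss exceeds the cap, its subgradient vanishes and the clean points determine the boundary.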
ContributorsGupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description

The theme of this work is the development of fast numerical algorithms for sparse optimization, as well as their applications in medical imaging and source localization using sensor array processing. Owing to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity. Traditional interior-point methods encounter computational difficulties in solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
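
The closed-form subproblem solutions that make such splitting methods fast are typified by the $\ell_1$ proximal step, which reduces to elementwise soft-thresholding. The sketch below embeds that step in a plain ISTA iteration for $\min_x \frac{1}{2}\|Ax - b\|_2^2 + \mu\|x\|_1$; it illustrates the building block only, not the thesis's BTTB-preconditioned TV-$\ell_1$ solver, and the problem data are synthetic assumptions.

```python
# Elementwise soft-thresholding (the closed-form l1 prox) inside a basic
# ISTA loop for a synthetic compressive-sensing recovery problem.
import numpy as np

def soft_threshold(v, tau):
    """Closed-form prox of tau * ||.||_1: shrink each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, mu, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * mu)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))            # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]        # sparse signal
x_hat = ista(A, A @ x_true, mu=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))    # indices of the recovered support
```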
ContributorsShen, Wei (Author) / Mittelmann, Hans D. (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created2011
Description

The structural design of pavements for both highways and airfields becomes complex when one considers environmental effects and groundwater table variation. Environmental effects have been incorporated in the new Mechanistic-Empirical Pavement Design Guide (MEPDG), but little has been done to incorporate them into airfield design. This work presents ZAPRAM, a code developed from this research study: a mechanistically based pavement model built upon limiting strain criteria for airfield HMA pavement design procedures. ZAPRAM is capable of pavement and airfield design analyses considering environmental effects. The program has been coded in Visual Basic and implemented in an event-driven, user-friendly educational computer program that runs in the Excel environment. Several studies were conducted in order to ensure the validity of the analysis as well as the efficiency of the software. The first study yielded the minimum threshold number of computational points the user should use at a specific depth within the pavement system. The second study verified the correction factor for Odemark's transformed thickness equation; default correction factors were included in the code based on a large comparative study between Odemark's method and multilayer elastic theory (MLET). A third study compared flexible airfield pavement design thicknesses derived from three widely accepted design procedures used in practice today: the Asphalt Institute, Shell Oil, and the revised Corps of Engineers rutting failure criteria, calculating the thickness requirements for a range of design input variables. The results of the comparative study showed a significant difference between the pavement thicknesses obtained from the three design procedures, with the greatest deviation found between the Shell Oil approach and the other two criteria. Finally, a comprehensive sensitivity study of environmental site factors and groundwater table depth upon flexible airfield pavement design and performance was completed. The study used the newly revised USACE failure criteria for subgrade shear deformation and adopted the same analytical methodology for capturing real-time environmental effects upon unbound layer modulus as that used in the new AASHTO MEPDG. The results of this effort showed, for the first time, the quantitative impact of the climatic conditions at the design site, coupled with the importance of the depth of the groundwater table, on the predicted design thicknesses. Significant cost savings appear achievable by incorporating principles of unsaturated soil mechanics into the new airfield pavement design procedure found in program ZAPRAM.
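
Odemark's transformation replaces a stiff upper layer with an equivalent thickness of the underlying material so that a single-layer solution can be applied. A minimal sketch follows; the correction factor f and the input values are textbook-style assumptions, not ZAPRAM's calibrated defaults.

```python
# Odemark's method of equivalent thickness: a layer of thickness h and
# modulus E1 resting on material of modulus E2 is replaced by an
# equivalent thickness of the lower material. Correction factor assumed.

def odemark_equivalent_thickness(h, e1, e2, f=0.9):
    """h_e = f * h * (E1 / E2)**(1/3), equal Poisson's ratios assumed."""
    return f * h * (e1 / e2) ** (1.0 / 3.0)

# e.g., 6 in of asphalt (500 ksi) transformed onto a 15 ksi subgrade
print(f"{odemark_equivalent_thickness(6.0, 500_000.0, 15_000.0):.1f} in")
```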
ContributorsSalim, Ramadan A (Author) / Zapata, Claudia (Thesis advisor) / Witczak, Matthew (Thesis advisor) / Kaloush, Kamil (Committee member) / Arizona State University (Publisher)
Created2011
Description

Reverse engineering gene regulatory networks (GRNs) is an important problem in the domain of systems biology. Learning GRNs is challenging due to the inherent complexity of real regulatory networks and the heterogeneity of samples in available biomedical data. Real-world biological data are commonly collected from broad surveys (profiling studies) and aggregate highly heterogeneous biological samples. Popular methods to learn GRNs simplistically assume a single universal regulatory network corresponding to the available data, neglecting regulatory network adaptation due to changes in underlying conditions, cellular phenotype, or both. This dissertation presents a novel computational framework to learn the common regulatory interactions and networks underlying different sets of relatively homogeneous samples from real-world biological data. The characteristic set of samples/conditions and the corresponding regulatory interactions define the cellular context. Context, in this dissertation, represents the deterministic transcriptional activity within a specific cellular regulatory mechanism. The major contributions of this framework include: modeling and learning context-specific GRNs; associating enriched samples with contexts to interpret contextual interactions using biological knowledge; pruning extraneous edges from the context-specific GRN to improve the precision of the final GRNs; integrating multi-source data to learn inter- and intra-domain interactions and increase confidence in the obtained GRNs; and, finally, learning combinatorial conditioning factors from the data to identify regulatory cofactors. The framework, Expattern, was applied to both real-world and synthetic data. Interesting insights into the mechanism of action of drugs were obtained from analysis of NCI60 drug activity and gene expression data. Application to refractory cancer data and glioblastoma multiforme yielded GRNs that were readily annotated with context-specific phenotypic information. Refractory cancer GRNs also displayed associations between distinct cancers that were not observed through clustering alone. Performance comparisons on multi-context synthetic data show that Expattern performs better than other comparable methods.
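
The core idea, learning one network per relatively homogeneous group of samples rather than a single universal network, can be sketched in a few lines. Below, a naive two-means clustering over samples stands in for context discovery, and correlation thresholding stands in for Expattern's actual interaction learning; the synthetic data and threshold are assumptions.

```python
# Toy context-specific network learning: split samples into two contexts,
# then keep strong gene-gene correlations within each context.
import numpy as np

def context_grns(expr, thresh=0.8, iters=20):
    """expr: genes x samples matrix. Returns one edge set per context."""
    centers = expr[:, [0, -1]].astype(float)   # naive init: first/last sample
    for _ in range(iters):                     # plain two-means over samples
        d = np.linalg.norm(expr[:, :, None] - centers[:, None, :], axis=0)
        labels = d.argmin(axis=1)
        centers = np.stack([expr[:, labels == k].mean(axis=1) for k in (0, 1)],
                           axis=1)
    grns = []
    for k in (0, 1):
        c = np.corrcoef(expr[:, labels == k])  # gene-gene correlation
        g = len(c)
        grns.append({(i, j) for i in range(g) for j in range(i + 1, g)
                     if abs(c[i, j]) > thresh})
    return grns

# two synthetic contexts: gene 0 drives gene 1 only in the second context
rng = np.random.default_rng(1)
ctx_a = rng.standard_normal((3, 30))
ctx_b = rng.standard_normal((3, 30)) + 10.0    # shifted so contexts separate
ctx_b[1] = ctx_b[0] + 0.1 * rng.standard_normal(30)
print(context_grns(np.hstack([ctx_a, ctx_b])))  # edge (0, 1) only in context 2
```

A single network learned over all 60 samples would either miss this interaction or report it indiscriminately; per-context learning recovers where it actually holds.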
ContributorsSen, Ina (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Bittner, Michael (Committee member) / Konjevod, Goran (Committee member) / Arizona State University (Publisher)
Created2011
Description

Given the process of tumorigenesis, biological signaling pathways have become of interest in the field of oncology. Many of the regulatory mechanisms that are altered in cancer are directly related to signal transduction and cellular communication. Thus, identifying signaling pathways that have become deregulated may provide useful information for better understanding the altered regulatory mechanisms within cancer. Many methods created to measure the distinct activity of signaling pathways have relied strictly upon transcription profiles. With advancements in comparative genomic hybridization techniques, copy number data has become extremely useful in characterizing the genomic landscape of cancer. The purpose of this thesis is to develop a methodology that incorporates both gene expression and copy number data to identify signaling pathways that have become deregulated in cancer. The central idea is that copy number data may significantly assist in identifying signaling pathway deregulation by justifying the aberrant activity being measured in gene expression profiles. This method was then applied to four different subtypes of breast cancer, resulting in the identification of signaling pathways associated with distinct functionalities for each of the breast cancer subtypes.
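
A toy version of that central idea: up-weight a pathway's expression-based deregulation score when the copy number data shows concordant gains or losses for the same genes. The scoring scheme, weight, and example values below are illustrative assumptions, not the method developed in the thesis.

```python
# Pathway deregulation score "justified" by copy number support: a gene's
# |expression z-score| counts extra when its copy-number change points the
# same way. Scheme and numbers are illustrative only.
import numpy as np

def pathway_score(expr_z, cn_log2, pathway_genes, w=0.5):
    """expr_z: gene -> expression z-score (tumor vs. normal);
    cn_log2: gene -> copy-number log2 ratio."""
    scores = []
    for g in pathway_genes:
        e, c = expr_z[g], cn_log2[g]
        weight = 1.0 + w if e * c > 0 else 1.0   # concordant CN change
        scores.append(abs(e) * weight)
    return float(np.mean(scores))

expr_z = {"ERBB2": 3.1, "GRB7": 2.4, "PIK3CA": 0.4}
cn_log2 = {"ERBB2": 1.8, "GRB7": 1.5, "PIK3CA": 0.0}  # amplified locus
print(pathway_score(expr_z, cn_log2, ["ERBB2", "GRB7", "PIK3CA"]))
```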
ContributorsTrevino, Robert (Author) / Kim, Seungchan (Thesis advisor) / Ringner, Markus (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2011
Description

Action language C+ is a formalism for describing properties of actions, which is based on nonmonotonic causal logic. The definite fragment of C+ is implemented in the Causal Calculator (CCalc), which is based on the reduction of nonmonotonic causal logic to propositional logic. This thesis describes the language of CCalc in terms of answer set programming (ASP), based on the translation of nonmonotonic causal logic to formulas under the stable model semantics. I designed a standard library that describes the constructs of the input language of CCalc in terms of ASP, allowing a simple, modular method of representing CCalc input programs in the language of ASP. Using the combination of the system F2LP and answer set solvers, this method achieves functionality close to that of CCalc while taking advantage of answer set solvers to yield efficient computation that is orders of magnitude faster than CCalc for many benchmark examples. In support of this, I created an automated translation system, Cplus2ASP, that implements the translation and encoding method and automatically invokes the necessary software to solve the translated input programs.
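
The pipeline can be pictured as two stages: a translator emits an ASP encoding, and an answer set solver computes its stable models. The sketch below chains the stages with subprocess calls; the command-line invocations and the input file name are assumptions about typical usage, not documentation of Cplus2ASP, F2LP, or any particular solver.

```python
# Hedged sketch of a translate-then-solve pipeline: run F2LP on a file of
# first-order formulas (assumed to print an ASP program on stdout), then
# feed that program to clingo on stdin. Invocations are assumptions.
import subprocess

def solve(formula_file: str) -> str:
    asp_program = subprocess.run(
        ["f2lp", formula_file],          # assumed CLI: f2lp writes ASP to stdout
        capture_output=True, text=True, check=True,
    ).stdout
    result = subprocess.run(
        ["clingo", "-"],                 # "-" reads the program from stdin
        input=asp_program, capture_output=True, text=True,
    )
    return result.stdout                 # the solver's raw answer-set output

print(solve("monkey_bananas.fof"))       # hypothetical input file
```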
ContributorsCasolary, Michael (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Arizona State University (Publisher)
Created2011
Description

In this dissertation I develop a deep theory of temporal planning well suited to analyzing, understanding, and improving state-of-the-art implementations (as of 2012). At face value the work is strictly theoretical; nonetheless, its impact is entirely real and practical. The easiest portion of that impact to highlight concerns the notable improvements to the format of the temporal fragment of the International Planning Competitions (IPCs). In particular, the theory I expound upon here is the primary cause of, and justification for, the altered (i) selection of benchmark problems and (ii) notion of a "winning temporal planner." For higher-level motivation: robotics, web service composition, industrial manufacturing, business process management, cybersecurity, space exploration, deep ocean exploration, and logistics all benefit from applying domain-independent automated planning techniques. Naturally, actually carrying out such case studies has much to offer. For example, we may extract the lesson that reasoning carefully about deadlines is crucial to planning in practice. More generally, effectively automating specifically temporal planning is well motivated by applications. Entirely abstractly, the aim is to improve the theory of automated temporal planning by distilling from its practice. My thesis is that the key feature of computational interest is concurrency. In support, I demonstrate, by way of compilation methods, worst-case counting arguments, and analysis of algorithmic properties such as completeness, that the more immediately pressing computational obstacles (facing would-be temporal generalizations of classical planning systems) can be dealt with in a theoretically efficient manner. So, more accurately, the technical contribution here is to demonstrate that the one computationally significant obstacle remaining for automated temporal planning is concurrency itself.
ContributorsCushing, William Albemarle (Author) / Kambhampati, Subbarao (Thesis advisor) / Weld, Daniel S. (Committee member) / Smith, David E. (Committee member) / Baral, Chitta (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2012