Matching Items (255)
Description
The alkali activation of aluminosilicate materials derived from industrial byproducts as binder systems has been extensively studied for the enhanced material properties it offers, while increasing sustainability through the reuse of industrial waste and byproducts and reducing the adverse impacts of ordinary portland cement (OPC) production. Fly ash and ground granulated blast furnace slag are commonly used for their content of soluble silica and aluminate species, which can undergo dissolution, polymerization with the alkali, condensation on particle surfaces, and solidification. This thesis focuses on the following topics: (i) the use of microwave-assisted thermal processing, in addition to heat-curing, as a means of alkali activation, and (ii) the relative effects of the alkali cation (K or Na) in powder activators on the mechanical properties and chemical structure of these systems. Unsuitable curing conditions instigate carbonation, which lowers the pH of the system, causing significant reductions in the rate of fly ash activation and in mechanical strength development. This study explores the effects of sealing the samples during curing, which traps the free water in the system and allows for increased aluminosilicate activation. The use of microwave-curing in lieu of thermal-curing is also studied, both to reduce energy consumption and for its ability to provide fast volumetric heating. Potassium-based powder activators dry-blended into the slag binder system are shown to be effective in obtaining very high compressive strengths under moist curing conditions (greater than 70 MPa), whereas sodium-based powder activation yields much lower strengths (around 25 MPa). Compressive strength decreases when fly ash is introduced into the system. Isothermal calorimetry is used to evaluate the early hydration process and to understand the reaction kinetics of the alkali powder activated systems.
Qualitative evidence of the alkali-hydroxide concentration of the paste pore solution, obtained through electrical conductivity measurements, is also presented; the results indicate that alkali ion concentrations are higher in the pore solution of potassium-based systems. The use of advanced spectroscopic and thermal analysis techniques to distinguish the influence of the studied parameters is also discussed.
Contributors: Chowdhury, Ussala (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The rapid advancement of wireless technology has instigated the broad deployment of wireless networks. Different types of networks have been developed, including wireless sensor networks, mobile ad hoc networks, wireless local area networks, and cellular networks. These networks have different structures and applications and require different control algorithms. The focus of this thesis is to design scheduling and power control algorithms for wireless networks and to analyze their performance. We first study the multicast capacity of wireless ad hoc networks. Gupta and Kumar studied the scaling law of the unicast capacity of wireless ad hoc networks, deriving the order of the unicast throughput as the number of nodes in the network goes to infinity. In our work, we characterize the scaling of the multicast capacity of large-scale MANETs under a delay constraint D. We first derive an upper bound on the multicast throughput, and then establish a lower bound on the multicast capacity by proposing a joint coding-scheduling algorithm that achieves a throughput within a logarithmic factor of the upper bound. We then study the power control problem in ad hoc wireless networks, proposing a distributed power control algorithm based on the Gibbs sampler and proving that the algorithm is throughput optimal. Finally, we consider scheduling in collocated wireless networks with flow-level dynamics. Specifically, we study the delay performance of a workload-based scheduling algorithm with shortest remaining processing time (SRPT) as the tie-breaking rule, and demonstrate its superior flow-level delay performance using simulations.
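The tie-breaking rule studied for the collocated setting can be illustrated with a minimal sketch (illustrative Python with hypothetical flow tuples, not the thesis's implementation): the scheduler's primary key is the workload, and ties are broken by shortest remaining processing time (SRPT).

```python
def pick_next_flow(flows):
    """Workload-based scheduling with an SRPT tie-break.

    `flows` is a list of (workload, remaining_size) tuples. The scheduler
    serves the largest-workload class first; among tied flows it picks the
    one with the shortest remaining processing time.
    """
    max_workload = max(w for w, _ in flows)
    # Among flows in the most-backlogged class, minimize remaining size.
    remaining, workload = min((r, w) for w, r in flows if w == max_workload)
    return (workload, remaining)
```

For example, with flows `[(5, 3), (5, 1), (2, 4)]` the two workload-5 flows tie and the flow with remaining size 1 is served first.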
Contributors: Zhou, Shan (Author) / Ying, Lei (Thesis advisor) / Zhang, Yanchao (Committee member) / Zhang, Junshan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis addresses the ever-increasing threat of botnets in the smartphone domain, focusing on the Android platform and on botnets that use Online Social Networks (OSNs) as their Command and Control (C&C) medium. In any botnet, C&C is a component on which the botnet's survival depends; individual bots use the C&C channel to receive commands and send data. This thesis develops an active, host-based approach that identifies the presence of a bot from anomalies in the user's usage patterns before and after the bot is installed on the smartphone, and alerts the user. A profile is constructed for each user from regular web usage patterns (captured by intercepting http(s) traffic), and machine learning techniques continuously learn the user's behavior and changes in that behavior, all the while looking for any anomaly above a threshold, which causes the user to be notified of the anomalous traffic. A prototype bot that uses OSNs as its C&C channel is constructed and used for testing. Users are given smartphones (Nexus 4 and Galaxy Nexus) running an application proxy that intercepts http(s) traffic and relays it to a server, which uses the traffic to construct a model for that user and to look for signs of anomalies. This approach lays the groundwork for future host-based countermeasures against smartphone botnets using OSNs as a C&C channel.
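The anomaly-flagging idea can be sketched as follows (illustrative Python; the domain names, the per-domain count profile, and the fixed threshold are assumptions for illustration, not the thesis's actual features or model):

```python
def build_profile(history):
    """Baseline profile: mean request count per domain over past periods.

    `history` is a list of {domain: request_count} dicts, one per period.
    """
    counts = {}
    for period in history:
        for domain, count in period.items():
            counts.setdefault(domain, []).append(count)
    return {d: sum(c) / len(c) for d, c in counts.items()}

def flag_anomalies(profile, observed, threshold=3.0):
    """Flag domains whose observed traffic exceeds `threshold` times the
    baseline mean, or that never appeared in the profile at all
    (e.g. a bot polling a new OSN endpoint for commands)."""
    alerts = []
    for domain, count in observed.items():
        baseline = profile.get(domain, 0.0)
        if baseline == 0.0 or count > threshold * baseline:
            alerts.append(domain)
    return sorted(alerts)
```

A previously unseen domain is always flagged, capturing the intuition that a freshly installed bot changes the traffic mix even when total volume stays modest.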
Contributors: Kilari, Vishnu Teja (Author) / Xue, Guoliang (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
We are expecting hundreds of cores per chip in the near future; however, scaling the memory architecture in manycore architectures becomes a major challenge. Cache coherence provides a single image of memory at any point in execution to all the cores, yet coherent cache architectures are not expected to scale to hundreds and thousands of cores. In addition, caches and coherence logic already take 20-50% of the total power consumption of the processor and 30-60% of the die area. Therefore, a more scalable architecture is needed for manycore designs. Software Managed Manycore (SMM) architectures emerge as a solution. They have a scalable memory design in which each core has direct access only to its local scratchpad memory, and any data transfers to/from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. The lack of automatic memory management in hardware makes such architectures extremely power-efficient, but it also makes them difficult to program. If the code/data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after use. However, this adds considerable complexity to the programmer's job: programmers must now worry about data management on top of the functional correctness of the program, which is already quite complex. This dissertation presents a comprehensive compiler and runtime integration that automatically manages the code and data of each task in the limited local memory of the core. We first developed Complete Circular Stack Management, which manages stack frames between the local memory and the main memory and also addresses the stack pointer problem. Although it works, we found the management could be further optimized for most cases, so Smart Stack Data Management (SSDM) is provided.
In this work, we formulate the stack data management problem and propose a greedy algorithm for it. We then propose a general cost estimation algorithm, based on which the CMSM heuristic for the code mapping problem is developed. Finally, heap data is dynamic in nature and therefore hard to manage; we provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory partition methodology.
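The circular stack management idea can be modeled with a toy sketch, assuming a simple oldest-first eviction policy (the actual schemes in the dissertation are more sophisticated): pushes that overflow the scratchpad evict the oldest frames to main memory, and pops fetch them back, with each transfer standing in for a DMA operation.

```python
class CircularStackManager:
    """Toy model of keeping call-stack frames in a small scratchpad."""

    def __init__(self, local_capacity):
        self.local_capacity = local_capacity
        self.local = []   # frames resident in scratchpad (oldest first)
        self.main = []    # frames evicted to main memory
        self.dma_ops = 0  # count of simulated DMA transfers

    def push(self, frame):
        if len(self.local) == self.local_capacity:
            # Scratchpad full: evict the oldest local frame (simulated DMA).
            self.main.append(self.local.pop(0))
            self.dma_ops += 1
        self.local.append(frame)

    def pop(self):
        if not self.local:
            # The caller's frame was evicted earlier: fetch it back.
            self.local.append(self.main.pop())
            self.dma_ops += 1
        return self.local.pop()
```

With capacity 2, pushing frames a, b, c evicts a; popping back down past c and b triggers one fetch to restore a, for two DMA transfers in total.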
Contributors: Bai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The rapid growth of high-throughput technologies over the last few decades has made manual processing of the generated data impracticable; even machine learning and data mining techniques can struggle against these massive datasets. High-dimensionality is one of the most common challenges for machine learning and data mining tasks. Feature selection aims to reduce dimensionality by selecting a small subset of the features that performs at least as well as the full feature set. Generally, learning performance (e.g., classification accuracy) and algorithm complexity are used to measure the quality of a feature selection algorithm. Recently, the stability of feature selection algorithms has gained increasing attention as a new indicator, driven by the need to select similar subsets of features each time the algorithm is run on the same dataset, even in the presence of a small amount of perturbation. To address the selection stability issue, the cause of instability must first be understood. In this dissertation, we investigate the causes of instability in high-dimensional datasets using well-known feature selection algorithms, and find that stability is mostly data-dependent. Based on these findings, we propose a framework that improves selection stability by addressing its main causes. In particular, we found that data noise greatly impacts both stability and learning performance, so we propose reducing it in order to improve both. However, current noise reduction approaches cannot distinguish between data noise and variation among samples from different classes. We overcome this limitation with Supervised noise reduction via Low Rank Matrix Approximation (SLRMA). The proposed framework has proved successful on different types of high-dimensional datasets, such as microarray and image datasets.
However, this framework cannot handle unlabeled data; hence, we propose Local SVD to overcome this limitation.
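Selection stability of the kind discussed above is often quantified by comparing the subsets chosen across repeated runs. One common choice, shown here as an illustrative sketch rather than the dissertation's own measure, is the average pairwise Jaccard similarity of the selected subsets:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two feature subsets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def selection_stability(subsets):
    """Average pairwise Jaccard similarity of the feature subsets selected
    across repeated runs; 1.0 means identical selections every run."""
    pairs = list(combinations(subsets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A stable algorithm run three times might select {1, 2, 3}, {1, 2, 3}, {1, 2, 4}, giving a stability of 2/3, whereas perturbation-sensitive algorithms on noisy high-dimensional data score much lower.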
Contributors: Alelyani, Salem (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Ye, Jieping (Committee member) / Zhao, Zheng (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on bringing probabilistic analysis into engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled from this analysis and implemented, along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT), in fabric-based engine containment simulations in LS-DYNA. MCS runs of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions; however, a deterministic optimum, though cost-effective, can be highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints.
This part of the research begins with an introduction to reliability analysis, covering first-order and second-order reliability analysis, followed by simulation techniques used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation incorporating sensitivity analysis, which removes the highly reliable constraints from the RBDO, thereby reducing computational time and function evaluations. Finally, the reliability analysis concepts and RBDO are applied to finite element 2D truss problems and a planar beam problem, and the results are presented and discussed.
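The simulation-based estimate of the probability of failure can be sketched as follows (illustrative Python with an assumed normal strength distribution and made-up parameter values, not Kevlar® 49 test data): draw many samples of the random strength and count how often it falls below the applied load.

```python
import random

def mcs_failure_probability(mean_strength, std_strength, load,
                            trials=100_000, seed=1):
    """Monte Carlo estimate of P(failure) = P(strength < load), assuming
    the material strength is normally distributed. The distribution and
    all numbers here are illustrative, not experimental data."""
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    failures = sum(rng.gauss(mean_strength, std_strength) < load
                   for _ in range(trials))
    return failures / trials
```

When the load equals the mean strength the estimate approaches 0.5, and it drops sharply as the safety margin (in standard deviations) grows, which is the quantity reliability indices such as those in FORM summarize.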
Contributors: Deivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Urban water systems face sustainability challenges including water quality, leaks, overuse, energy consumption, and long-term supply concerns. Resiliency challenges include the capacity to respond to drought, managing pipe deterioration, responding to natural disasters, and preventing terrorism. One strategy for enhancing sustainability and resiliency is the development and adoption of smart water grids. A smart water grid incorporates networked monitoring and control devices into its structure, providing diverse, real-time information about the system as well as enhanced control. These data feed modeling and analysis, which inform control decisions, allowing for improvements in sustainability and resiliency. While smart water grids hold much potential, there are also potential tradeoffs and adoption challenges. More publicly available cost-benefit analyses are needed, as well as system-level research and application, rather than the current focus on individual technologies. This thesis seeks to fill one of these gaps by analyzing the cost and environmental benefits of smart irrigation controllers, which can save water by adapting watering schedules to climate and soil conditions. The potential benefit of smart irrigation controllers is particularly high in the southwestern U.S., where the arid climate makes water scarcer and increases the watering needs of landscapes. To inform the technology development process, a design for environment (DfE) method was developed that overlays economic and environmental performance parameters under different operating conditions. This method is applied to characterize the design goals for controller price and water savings that smart irrigation controllers must meet to yield life cycle carbon dioxide reductions and economic savings in southwestern U.S. states, accounting for regional variability in electricity and water prices and carbon overhead.
Results from applying the model to smart irrigation controllers in the Southwest suggest that some areas are significantly easier to design for than others.
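The economic side of such a design-for-environment comparison reduces, in its simplest form, to a payback calculation. The sketch below uses illustrative prices and savings, not figures from the study:

```python
def simple_payback_years(controller_price, water_saved_m3_per_year,
                         water_price_per_m3):
    """Simple (undiscounted) payback period for a smart irrigation
    controller: upfront price divided by the annual water-bill savings.
    All inputs are illustrative assumptions, not the study's data."""
    annual_savings = water_saved_m3_per_year * water_price_per_m3
    return controller_price / annual_savings
```

Regional variability enters through the water price and achievable savings: the same $150 controller pays back twice as fast where water costs twice as much, which is why design targets differ across the Southwest.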
Contributors: Mutchek, Michele (Author) / Allenby, Braden (Thesis advisor) / Williams, Eric (Committee member) / Westerhoff, Paul (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cognitive Radios (CRs) are designed to dynamically reconfigure their transmission and/or reception parameters to utilize the bandwidth efficiently. With a rapidly fluctuating radio environment, spectrum management becomes crucial for cognitive radios. In a Cognitive Radio Ad Hoc Network (CRAHN) setting, the sensing and transmission times of the cognitive radio play an even more important role because of the decentralized nature of the network: they have a direct impact on the throughput. Due to the tradeoff between throughput and sensing time, finding optimal values for the sensing and transmission times is difficult. In this thesis, a method is proposed to improve the throughput of a CRAHN by dynamically changing the sensing and transmission times. To simulate the CRAHN setting, ns-2, the network simulator with an extension for CRAHNs, is used. The CRAHN extension module implements the required Primary User (PU), Secondary User (SU), and other CR functionalities to simulate a realistic CRAHN scenario. First, this work presents a detailed analysis of various CR parameters, their interactions, and their individual contributions to the throughput, to understand how they affect transmissions in the network. Based on the results of this analysis, changes to the system model in the CRAHN extension are proposed. Instantaneous throughput of the network is introduced into the new model, which helps determine how the parameters should adapt based on the current throughput. Along with instantaneous throughput, checks for interference with the PUs and for their transmission power are performed before modifying these CR parameters. Simulation results demonstrate that the throughput of the CRAHN with adaptive sensing and transmission times is significantly higher than that with non-adaptive parameters.
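The sensing-throughput tradeoff can be illustrated with a toy model (Python; the exponential false-alarm model and all parameter values are assumptions for illustration, not the thesis's system model): longer sensing leaves less time to transmit, but it makes detection more reliable, so an interior optimum exists.

```python
import math

def throughput(tau, frame=100.0, rate=1.0, p_idle=0.8, k=0.1):
    """Average secondary-user throughput when `tau` ms of a `frame`-ms
    frame is spent sensing. Less sensing leaves more time to transmit,
    but the false-alarm probability (modeled here as exp(-k * tau))
    shrinks as sensing grows, creating an interior optimum."""
    p_false_alarm = math.exp(-k * tau)
    return (frame - tau) / frame * p_idle * (1.0 - p_false_alarm) * rate

def best_sensing_time(step=0.5):
    """Sweep candidate sensing times in [0, 100] ms, return the best one."""
    taus = [i * step for i in range(int(100.0 / step) + 1)]
    return max(taus, key=throughput)
```

An adaptive scheme, as proposed in the thesis, effectively performs such a search online as the instantaneous throughput and PU activity change, rather than fixing a single sensing time offline.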
Contributors: Bapat, Namrata Arun (Author) / Syrotiuk, Violet R. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The presence of compounds such as pharmaceuticals and personal care products (PPCPs) in the environment is a cause for concern, as they exhibit secondary effects on non-target organisms and are indicative of incomplete removal by wastewater treatment plants (WWTPs) during water reclamation. Analytical methods and predictive models can help inform the rates at which these contaminants enter the environment via biosolids use or wastewater effluent release, in order to estimate the risk of adverse effects. The goals of this research project were to integrate the results obtained from two different methods of risk assessment: (a) in silico modeling and (b) experimental analysis. Using a previously published empirical model, influent and effluent concentration ranges were predicted for 10 sterols and validated against peer-reviewed literature. The in silico risk assessment of sterols and hormones in biosolids concluded that hormones possess high leaching potentials and that 17-α-ethinyl estradiol (EE2) in particular can pose a significant threat to fathead minnows (P. promelas) via leaching from terrestrial depositions of biosolids. Six mega-composite biosolids samples representative of 94 WWTPs were analyzed for a suite of 120 PPCPs using the extended U.S. EPA Method 1694 protocol. Results indicated the presence of 26 previously unmonitored PPCPs in the samples, with estimated annual release rates of 5-15 tons per year via land application of biosolids. A mesocosm sampling analysis included in the study concluded that four compounds, amitriptyline, paroxetine, propranolol, and sertraline, warrant further monitoring due to their high release rates from land-applied biosolids and their calculated extended half-lives in soils.
There is growing interest in the scientific community in developing new analytical protocols for analyzing solid matrices such as biosolids for the presence of PPCPs and other established and emerging contaminants of concern. The two studies presented here are timely and an important addition to the growing body of scientific literature on the environmental release of PPCPs and the exposure risks associated with biosolids land application. This research emphasizes the need to couple experimental results with predictive modeling output in order to more fully assess the risks posed by compounds detected in biosolids.
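The scale of release estimates like the 5-15 tons per year figure can be reproduced with back-of-envelope arithmetic: mean concentration in biosolids times the mass of biosolids land-applied annually. The input values below are illustrative assumptions, not the study's measurements:

```python
def annual_release_tons(mean_conc_mg_per_kg, biosolids_dry_tons_per_year):
    """Back-of-envelope annual release of a compound via land-applied
    biosolids. Since 1 mg/kg equals 1 g per metric ton of dry solids,
    concentration times applied mass gives grams per year, converted
    here to metric tons. Inputs are illustrative, not measured values."""
    grams_per_year = mean_conc_mg_per_kg * biosolids_dry_tons_per_year
    return grams_per_year / 1_000_000  # grams -> metric tons
```

For instance, a compound averaging 2 mg/kg in biosolids, applied at an assumed 5 million dry tons per year, would release about 10 tons annually, squarely in the reported range.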
Contributors: Prakash Chari, Bipin (Author) / Halden, Rolf U. (Thesis advisor) / Westerhoff, Paul (Committee member) / Fox, Peter (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
As engineered nanomaterials (NMs) become more widely used in industry and commerce, their loading to sewage will increase. However, the fate of widely used NMs in wastewater treatment plants (WWTPs) remains poorly understood. In this research, sequencing batch reactors (SBRs) were operated for several weeks with hydraulic (HRT) and sludge (SRT) retention times representative of full-scale biological WWTPs. NM loadings at the higher range of expected environmental concentrations were selected. To achieve a pseudo-equilibrium concentration of NMs in biomass, SBR experiments needed to run for more than three times the SRT value, approximately 18 days. Under the conditions tested, NMs had negligible effects on the ability of the wastewater bacteria to biodegrade organic material, as measured by chemical oxygen demand (COD). NM mass balance closure was achieved by measuring NMs in the liquid effluent and in waste biosolids. All NMs were well removed at typical biomass concentrations (1-2 gSS/L); however, carboxy-terminated polymer-coated silver nanoparticles (fn-Ag) were removed less effectively (88% removal) than hydroxylated fullerenes (fullerols; >90% removal), nano-TiO2 (>95% removal), or aqueous fullerenes (nC60; >95% removal). Although most NMs did not settle out of the feed solution without bacteria present, approximately 65% of the titanium dioxide was removed even in the absence of biomass, simply due to self-aggregation and settling. Experiments conducted over 4 months with daily loadings of nC60 showed that nC60 removal from solution depends on the biomass concentration. Under conditions representative of most suspended-growth biological WWTPs (e.g., activated sludge), most NMs will accumulate in biosolids rather than in liquid effluent discharged to surface waters.
Significant fractions of fn-Ag were associated with colloidal material, which suggests that efficient particle separation processes (sedimentation or filtration) could further improve NM removal from effluent. As most NMs appear to accumulate in biosolids, future research should examine the fate of NMs during the disposal of WWTP biosolids, which may occur through composting, anaerobic digestion and/or land application, incineration, or landfill disposal.
Contributors: Wang, Yifei (Author) / Westerhoff, Paul (Thesis advisor) / Krajmalnik-Brown, Rosa (Committee member) / Rittmann, Bruce (Committee member) / Hristovski, Kiril (Committee member) / Arizona State University (Publisher)
Created: 2012