Description
Intimate coupling of TiO2 photocatalysis and biodegradation (ICPB) offers potential for degrading biorecalcitrant and toxic organic compounds much better than is possible with conventional wastewater treatments. This study reports on a novel sponge-type, TiO2-coated biofilm carrier that shows significant adherence of TiO2 to its exterior and the ability to accumulate biomass in its interior (protected from UV light and free radicals). First, this carrier was tested for ICPB in a continuous-flow photocatalytic circulating-bed biofilm reactor (PCBBR) to mineralize a biorecalcitrant organic compound: 2,4,5-trichlorophenol (TCP). Four mechanisms possibly acting in ICPB were tested separately: TCP adsorption, UV photolysis, photocatalysis, and biodegradation. The carrier exhibited strong TCP adsorption, while photolysis was negligible. Photocatalysis produced TCP-degradation products that could be mineralized, and the strong adsorption of TCP to the carrier enhanced biodegradation by relieving toxicity. Validating the ICPB concept, biofilm inside the carriers was protected from UV light and free radicals. ICPB significantly lowered the diversity of the bacterial community, but five genera known to biodegrade chlorinated phenols were markedly enriched. Second, decolorization and mineralization of reactive dyes by ICPB were investigated with a refined TiO2-coated biofilm carrier in a PCBBR. Two typical reactive dyes, Reactive Black 5 (RB5) and Reactive Yellow 86 (RY86), showed similar first-order kinetics when photocatalytically decolorized at low pH (~4-5); decolorization was inhibited at neutral pH in the presence of phosphate or carbonate buffer, presumably due to electrostatic repulsion from negatively charged surface sites on TiO2, radical scavenging by phosphate or carbonate, or both. In the PCBBR, photocatalysis alone with TiO2-coated carriers removed RB5 and COD by 97% and 47%, respectively. Adding biofilm inside the macroporous carriers maintained a similar RB5 removal efficiency, but COD removal increased to 65%, which is evidence of ICPB despite the low pH. A proposed ICPB pathway for RB5 suggests that a major intermediate, a naphthol derivative, was responsible for most of the residual COD. Finally, three low-temperature sintering methods, called O, D, and DN, were compared based on photocatalytic efficiency and TiO2 adherence. The DN method had the best TiO2-coating properties and produced a successful carrier for ICPB of RB5 in a PCBBR.
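As a worked illustration of the first-order decolorization kinetics reported above, the following sketch (Python) fits a rate constant k to a dye-concentration time series; the concentrations and times are invented for illustration, not results from the study.

    import numpy as np

    def first_order_k(times_min, conc_mg_L):
        """Estimate a first-order rate constant k (1/min).

        Assumes C(t) = C0 * exp(-k*t), so ln(C/C0) = -k*t and k is the
        negative slope of a least-squares line through (t, ln(C/C0)).
        """
        t = np.asarray(times_min, dtype=float)
        y = np.log(np.asarray(conc_mg_L, dtype=float) / conc_mg_L[0])
        return -np.polyfit(t, y, 1)[0]

    # Hypothetical RB5 decolorization series (mg/L), not study data:
    print(round(first_order_k([0, 10, 20, 30, 60],
                              [50.0, 38.0, 29.5, 22.4, 10.1]), 3))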
Contributors: Li, Guozheng (Author) / Rittmann, Bruce E. (Thesis advisor) / Halden, Rolf (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Internet sites that support user-generated content, so-called Web 2.0, have become part of the fabric of everyday life in technologically advanced nations. Users collectively spend billions of hours consuming and creating content on social networking sites, weblogs (blogs), and various other types of sites in the United States and around the world. Given the fundamentally emotional nature of humans and the amount of emotional content that appears in Web 2.0 content, it is important to understand how such websites can affect the emotions of users. This work attempts to determine whether emotion spreads through an online social network (OSN). To this end, a method is devised that employs a model based on a general threshold diffusion model as a classifier to predict the propagation of emotion between users and their friends in an OSN by way of mood-labeled blog entries. The model generalizes existing information diffusion models in that the state-machine representation of a node is generalized from binary to n states, in order to support the n class labels necessary to model emotional contagion. In the absence of ground truth, the prediction accuracy of the model is benchmarked against a baseline method that predicts the majority label of a user's emotion-label distribution. The model significantly outperforms the baseline method in terms of prediction accuracy. The experimental results make a strong case for the existence of emotional contagion in OSNs in spite of possible alternative explanations such as confounding influence and homophily, since these alternatives are likely to have negligible effect in a large dataset or simply do not apply to the domain of human emotions. A hybrid manual/automated method to map mood-labeled blog entries to a set of emotion labels is also presented, which enables the application of the model to a large set (approximately 900K) of blog entries from LiveJournal.
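A minimal sketch of the n-state threshold idea is given below (Python). The adoption threshold, label set, and majority-label fallback are illustrative assumptions; the dissertation's actual model is richer than this.

    from collections import Counter

    def predict_emotion(friend_labels, own_history, threshold=0.4):
        """n-state analogue of a binary threshold diffusion model.

        A node adopts the dominant mood label among its friends when
        that label's share exceeds the threshold; otherwise it falls
        back to the baseline: the majority label of its own history.
        """
        if friend_labels:
            label, count = Counter(friend_labels).most_common(1)[0]
            if count / len(friend_labels) >= threshold:
                return label
        return Counter(own_history).most_common(1)[0][0]

    # Two of three friends post 'sad' entries (share 0.67 > 0.4):
    print(predict_emotion(["sad", "sad", "happy"],
                          ["happy", "calm", "happy"]))  # -> sad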
Contributors: Cole, William David, M.S. (Author) / Liu, Huan (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Candan, Kasim S. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
To address sustainability issues in wastewater treatment (WWT), Siemens Water Technologies (SWT) has designed a "hybrid" process that couples common activated sludge (AS) and anaerobic digestion (AD) technologies with the novel concepts of AD sludge recycle and biosorption. At least 85% of the hybrid's AD sludge is recycled to the AS process, providing additional sorbent for influent particulate chemical oxygen demand (PCOD) biosorption in contact tanks. Biosorbed PCOD is transported to the AD, where it is converted to methane. The aim of this study is to provide mass balance and microbial community analysis (MCA) of SWT's two hybrid and one conventional pilot-plant trains, plus mathematical modeling of the hybrid process, including a novel model of biosorption. A detailed mass balance was performed on each tank and the overall system. The mass balance data support the hybrid process being more sustainable: it produces 1.5 to 5.5 times more methane and 50 to 83% less sludge than the conventional train. The hybrid's superior performance is driven by solids retention times (SRTs) 4 to 8 times longer than those of the conventional train. However, the conversion of influent COD to methane was low, at 15 to 22%, and neither train exhibited significant nitrification or denitrification. Data were inconclusive as to the role of biosorption in the processes. MCA indicated the presence of Archaea and nitrifiers throughout both systems; however, it is inconclusive how active the Archaea and nitrifiers are under anoxic, aerobic, and anaerobic conditions. Mathematical modeling confirms the hybrid process produces 4 to 20 times more methane and 20 to 83% less sludge than the conventional train under various operating conditions. Neither process removes more than 25% of the influent nitrogen or converts more than 13% to nitrogen gas, due to biomass washout in the contact tank and short SRTs in the stabilization tank. In addition, a mathematical relationship was developed to describe PCOD biosorption through adsorption to biomass and floc entrapment. Ultimately, process performance is more heavily influenced by the higher AD SRTs attained when sludge is recycled through the system and less influenced by the inclusion of biosorption kinetics.
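The arithmetic behind SRT and methane figures of this kind can be sketched as follows (Python). All quantities are hypothetical, and the standard stoichiometric factor of 350 L CH4 (STP) per kg COD converted is assumed; none of these values come from the pilot study.

    def srt_days(solids_in_system_kg, solids_wasted_kg_per_d):
        """SRT = solids inventory held in the system / solids wasted per day."""
        return solids_in_system_kg / solids_wasted_kg_per_d

    def methane_L_per_d(cod_to_methane_kg_per_d):
        """Methane volume at 350 L CH4 (STP) per kg COD converted."""
        return 350.0 * cod_to_methane_kg_per_d

    # Hypothetical pilot-scale numbers, not measured values:
    print(srt_days(400.0, 25.0))          # -> 16 d
    print(methane_L_per_d(100.0 * 0.20))  # 20% of 100 kg COD/d -> 7000 L/d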
Contributors: Young, Michelle Nichole (Author) / Rittmann, Bruce E. (Thesis advisor) / Fox, Peter (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Free/Libre Open Source Software (FLOSS) is the product of volunteers collaborating to build software in an open, public manner. The large number of FLOSS projects, combined with the data that is inherently archived with this online process, makes studying this phenomenon attractive. Some FLOSS projects are very functional, well-known, and successful, such as Linux, the Apache Web Server, and Firefox. However, for every successful FLOSS project there are hundreds of projects that are unsuccessful. These projects fail to attract sufficient interest from developers and users and become inactive or abandoned before useful functionality is achieved. The goal of this research is to better understand the open source development process and gain insight into why some FLOSS projects succeed while others fail. This dissertation presents an agent-based model of the FLOSS development process. The model is built around the concept that projects must manage to attract contributions from a limited pool of participants in order to progress. In the model, developer and user agents select from a landscape of competing FLOSS projects based on perceived utility. Via the selections that are made and the subsequent contributions, some projects are propelled to success while others remain stagnant and inactive. Findings from a diverse set of empirical studies of FLOSS projects are used to formulate the model, which is then calibrated on empirical data from multiple sources of public FLOSS data. The model is able to reproduce key characteristics observed in the FLOSS domain and is capable of making accurate predictions. The model is used to gain a better understanding of the FLOSS development process, including what it means for FLOSS projects to be successful and what conditions increase the probability of project success. It is shown that FLOSS is a producer-driven process, and project factors that are important to developers selecting projects are identified. In addition, it is shown that projects are sensitive to when core developers make contributions, and the exhibited bandwagon effects mean that some projects will be successful regardless of competing projects. Recommendations for improving software engineering in general, based on the positive characteristics of FLOSS, are also presented.
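The selection step at the heart of such a model can be sketched as below (Python). The attribute names, weights, and utility-proportional choice rule are illustrative stand-ins for the dissertation's actual utility formulation.

    import random

    def choose_project(projects, prefs, rng=random):
        """One developer agent picking a FLOSS project to contribute to.

        Perceived utility is a weighted sum of project attributes; the
        choice is made with probability proportional to utility, so
        high-utility projects attract contributions more often.
        """
        utilities = [sum(w * p[attr] for attr, w in prefs.items())
                     for p in projects]
        return rng.choices(projects, weights=utilities, k=1)[0]

    # Hypothetical project landscape and agent preferences:
    projects = [{"name": "A", "activity": 0.9, "maturity": 0.3},
                {"name": "B", "activity": 0.2, "maturity": 0.8}]
    print(choose_project(projects, {"activity": 0.7, "maturity": 0.3})["name"])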
Contributors: Radtke, Nicholas Patrick (Author) / Collofello, James S. (Thesis advisor) / Janssen, Marco A. (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Sundaram, Hari (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A semiconductor supply chain modeling and simulation platform using Linear Program (LP) optimization and parallel Discrete Event System Specification (DEVS) process models has been developed in a joint effort by ASU and Intel Corporation. A Knowledge Interchange Broker (KIBDEVS/LP) was developed to broker information synchronously between the DEVS and LP models. Recently, a single-echelon heuristic Inventory Strategy Module (ISM) was added to correct for forecast bias in customer demand data using different smoothing techniques. The optimization model could then use information provided by the forecast model to make better decisions for the process model. The composition of the ISM with the LP and DEVS models resulted in the first realization of what is now called the Optimization Simulation Forecast (OSF) platform. It could handle a single-echelon supply chain system consisting of single hubs and single products. In this thesis, this single-echelon simulation platform is extended to handle multiple echelons, with multiple inventory elements handling multiple products. The main task for the multi-echelon OSF platform was to extend the KIBDEVS/LP so that ISM interactions with the LP and DEVS models could also be supported. To achieve this, a new, scalable XML schema for the KIB was developed. The XML schema has also strengthened the design of the KIB execution engine. A sequential scheme controls the executions of the DEVS-Suite simulator, the CPLEX optimizer, and the ISM engine. To use the ISM for multiple echelons, it is extended to compute forecast customer demands and safety stocks over multiple hubs and products. Basic examples for semiconductor manufacturing spanning single- and two-echelon supply chain systems have been developed and analyzed. Experiments using perfect data were conducted to show the correctness of the OSF platform design and implementation. Simple but realistic experiments have also been conducted. They highlight the kinds of supply chain dynamics that can be evaluated using discrete event process simulation, linear programming optimization, and heuristic forecasting models.
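As a sketch of the kind of smoothing and safety-stock arithmetic an inventory strategy module performs, consider the following (Python). The smoothing constant and service-level factor are illustrative assumptions, not parameters of the OSF platform.

    def smooth_demand(observations, alpha=0.3):
        """Simple exponential smoothing of a noisy demand series --
        one of several techniques that could correct forecast bias."""
        level = observations[0]
        for x in observations[1:]:
            level = alpha * x + (1 - alpha) * level
        return level

    def safety_stock(demand_sigma, z=1.65):
        """Textbook buffer: z-score for the target service level
        times the standard deviation of forecast error."""
        return z * demand_sigma

    print(smooth_demand([100, 120, 90, 110, 130]))  # smoothed demand level
    print(safety_stock(15.0))                       # units of product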
Contributors: Smith, James Melkon (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Davulcu, Hasan (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The complexity of the systems that software engineers build has continuously grown since the inception of the field. What has not changed is engineers' mental capacity to operate on about seven distinct pieces of information at a time. The widespread use of UML has led to more abstract software design activities; however, the same cannot be said for reverse engineering activities. Introducing abstraction to reverse engineering will allow the engineer to move farther away from the details of the system, increasing their ability to see the role that domain-level concepts play in the system. In this thesis, we present a technique that facilitates filtering of classes from existing systems at the source level, based on their relationship to concepts in the domain, via a classification method using machine learning. We showed that concepts can be identified using a machine learning classifier based on source-level metrics. We developed an Eclipse plugin to assist with the process of manually classifying Java source code, and collecting metrics and classifications into a standard file format. We developed an Eclipse plugin to act as a concept identifier that visually indicates whether a class is a domain concept or not. We minimized the size of training sets to ensure a useful approach in practice. This allowed us to determine that a training set of 7.5 to 10% is nearly as effective as a training set representing 50% of the system. We showed that random selection is the most consistent and effective means of selecting a training set. We found that KNN is the most consistent performer among the learning algorithms tested. We determined the optimal feature set for this classification problem. We discussed two possible structures besides a one-to-one mapping of domain knowledge to implementation. We showed that classes representing more than one concept are simply concepts at differing levels of abstraction. We also discussed composite concepts representing a domain concept implemented by more than one class. We showed that these composite concepts are difficult to detect because the problem is NP-complete.
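A minimal sketch of the classification step appears below (Python). The metric names, training data, and use of scikit-learn are illustrative assumptions; the dissertation's feature set and Eclipse tooling are not reproduced here.

    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical per-class source metrics: [methods, fields, fan-in, fan-out]
    X_train = [[12, 5, 9, 3], [3, 1, 1, 7], [20, 8, 14, 2], [2, 0, 0, 5]]
    y_train = [1, 0, 1, 0]  # 1 = domain concept, 0 = implementation detail

    # KNN over class-level metrics, mirroring the thesis's finding that
    # KNN was the most consistent performer:
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print(clf.predict([[15, 6, 10, 2]]))  # -> [1], flagged as a domain concept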
Contributors: Carey, Maurice (Author) / Colbourn, Charles (Thesis advisor) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work focuses on a generalized assessment of source zone natural attenuation (SZNA) at chlorinated aliphatic hydrocarbon (CAH)-impacted sites. Given the number of such sites and the technical challenges of cleanup, there is a need for an SZNA method at CAH-impacted sites. The method anticipates that decision makers will be interested in the following questions: (1) Is SZNA occurring, and what processes contribute? (2) What are the current SZNA rates? (3) What are the longer-term implications? The approach is macroscopic and uses multiple lines of evidence. An in-depth application of the generalized, non-site-specific method over multiple site events, with sampling refinement approaches applied to improve SZNA estimates, is presented for three CAH-impacted sites, with a focus on discharge rates for four events over approximately three years (Site 1: 2.9, 8.4, 4.9, and 2.8 kg/yr as PCE; Site 2: 1.6, 2.2, 1.7, and 1.1 kg/yr as PCE; Site 3: 570, 590, 250, and 240 kg/yr as TCE). When applying the generalized CAH-SZNA method, different practitioners will likely not sample a site similarly, especially regarding sampling density on a groundwater transect. Calculation of SZNA rates is affected by contaminant spatial variability relative to transect sampling intervals and density; variations in either result in different mass discharge estimates. The effects of varied sampling densities and spacings on discharge estimates were examined to develop heuristic sampling guidelines with practical site sampling densities. The guidelines aim to reduce the variability in discharge estimates due to different sampling approaches and to improve confidence in SZNA rates, allowing decision makers to place the rates in perspective and determine a course of action based on remedial goals. Finally, bench-scale testing was used to address longer-term questions, specifically the nature and extent of source architecture. A rapid in-situ disturbance method was developed using a bench-scale apparatus. The approach allows for rapid identification of the presence of DNAPL using several common pilot-scale technologies (ISCO, air sparging, water injection) and can identify relevant source architectural features (ganglia, pools, dissolved source). Understanding of source architecture and identification of DNAPL-containing regions greatly enhances conceptual site models, improving estimated time frames for SZNA and possibly improving the design of remedial systems.
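The transect mass-discharge estimate underlying rates of this kind sums contaminant flux over transect cells, Md = sum(Ci * qi * Ai). The sketch below (Python) shows the unit bookkeeping with invented values, not data from the three study sites.

    def mass_discharge_kg_per_yr(conc_mg_L, darcy_m_per_d, cell_areas_m2):
        """Sum contaminant flux over transect cells: Md = sum(Ci*qi*Ai).

        Ci in mg/L, qi (Darcy flux) in m/d, Ai in m2; the factor of
        1000 converts m3 to L, and 365/1e6 converts mg/d to kg/yr.
        """
        mg_per_d = sum(c * q * a * 1000.0 for c, q, a in
                       zip(conc_mg_L, darcy_m_per_d, cell_areas_m2))
        return mg_per_d * 365.0 / 1e6

    # Hypothetical three-cell transect:
    print(mass_discharge_kg_per_yr([0.5, 1.2, 0.1],
                                   [0.05, 0.08, 0.05],
                                   [4.0, 4.0, 4.0]))  # ~0.18 kg/yr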
Contributors: Ekre, Ryan (Author) / Johnson, Paul Carr (Thesis advisor) / Rittmann, Bruce (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Microbial electrochemical cells (MXCs) are promising platforms for bioenergy production from renewable resources. In these systems, specialized anode-respiring bacteria (ARB) deliver electrons from the oxidation of organic substrates to the anode of an MXC. While much progress has been made in understanding the microbiology, physiology, and electrochemistry of well-studied model ARB such as Geobacter and Shewanella, tremendous potential exists for MXCs as microbiological platforms for exploring novel ARB. This dissertation introduces approaches for the selective enrichment and characterization of phototrophic, halophilic, and alkaliphilic ARB. An enrichment scheme based on manipulation of poised anode potential, light, and nutrient availability led to current generation that responded negatively to light. Analysis of the phototrophically enriched communities suggested essential roles for green sulfur bacteria and halophilic ARB in electricity generation. Light-responsive current generation was successfully reconstructed using cocultures of anode-respiring Geobacter and phototrophic Chlorobium isolated from the MXC enrichments. Experiments lacking exogenously supplied organic electron donors indicated that Geobacter could produce a measurable current from stored photosynthate in the dark. Community analysis of the phototrophic enrichments also identified members of the novel genus Geoalkalibacter as potential ARB. Electrochemical characterization of two haloalkaliphilic, non-phototrophic Geoalkalibacter spp. showed that these bacteria were in fact capable of producing high current densities (4-8 A/m2) and of using higher organic substrates under saline or alkaline conditions. The success of these selective enrichment approaches and community analyses in identifying and understanding novel ARB capabilities invites further use of MXCs as robust platforms for fundamental microbiological investigations.
Contributors: Badalamenti, Jonathan P. (Author) / Krajmalnik-Brown, Rosa (Thesis advisor) / Garcia-Pichel, Ferran (Committee member) / Rittmann, Bruce E. (Committee member) / Torres, César I. (Committee member) / Vermaas, Willem (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This dissertation explores the use of bench-scale batch microcosms in the remedial design of contaminated aquifers, presents an alternative methodology for conducting such treatability studies, and - from technical, economic, and social perspectives - examines real-world application of this new technology. In situ bioremediation (ISB) is an effective remedial approach for many contaminated groundwater sites. However, site-specific variability necessitates the performance of small-scale treatability studies prior to full-scale implementation. The most common methodology is the batch microcosm, whose potential limitations and suitable technical alternatives are explored in this thesis. In a critical literature review, I discuss how continuous-flow conditions stimulate microbial attachment and biofilm formation, and identify unique microbiological phenomena that are largely absent in batch bottles yet potentially relevant to contaminant fate. Following up on this theoretical evaluation, I experimentally produce pyrosequencing data and perform beta diversity analysis to demonstrate that batch and continuous-flow (column) microcosms foster distinctly different microbial communities. Next, I introduce the In Situ Microcosm Array (ISMA), which took approximately two years to design, develop, build, and iteratively improve. The ISMA can be deployed down-hole in groundwater monitoring wells of contaminated aquifers to autonomously conduct multiple parallel continuous-flow treatability experiments. The ISMA stores all samples generated in the course of each experiment, thereby preventing the release of chemicals into the environment. Detailed results are presented from an ISMA demonstration evaluating ISB for the treatment of hexavalent chromium and trichloroethene. In a technical and economic comparison to batch microcosms, I demonstrate that the ISMA is both effective in informing remedial design decisions and cost-competitive. Finally, I report on a participatory technology assessment (pTA) workshop at which diverse stakeholders of the Phoenix 52nd Street Superfund Site evaluated the ISMA's ability to address a real-world problem. In addition to receiving valuable feedback on perceived ISMA limitations, I conclude from the workshop that pTA can facilitate mutual learning even among entrenched stakeholders. In summary, my doctoral research (i) pinpointed limitations of current remedial design approaches, (ii) produced a novel alternative approach, and (iii) demonstrated the technical, economic, and social value of this novel remedial design tool, i.e., the In Situ Microcosm Array technology.
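The beta diversity analysis mentioned above rests on pairwise community dissimilarities such as Bray-Curtis; the sketch below (Python) computes that metric for two invented OTU abundance vectors, standing in for the batch and column communities.

    def bray_curtis(u, v):
        """Bray-Curtis dissimilarity between two OTU abundance vectors:
        sum(|ui - vi|) / sum(ui + vi); 0 = identical, 1 = disjoint."""
        return (sum(abs(a - b) for a, b in zip(u, v))
                / sum(a + b for a, b in zip(u, v)))

    batch  = [120, 30, 0, 5]   # hypothetical OTU counts, batch bottle
    column = [10, 80, 40, 15]  # hypothetical OTU counts, flow column
    print(round(bray_curtis(batch, column), 2))  # high value -> distinct communities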
Contributors: Kalinowski, Tomasz (Author) / Halden, Rolf U. (Thesis advisor) / Johnson, Paul C. (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Bennett, Ira (Committee member) / Arizona State University (Publisher)
Created: 2013