This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 33

Description

A relatively unexplored issue in cybersecurity science and engineering is whether there exist intrinsic patterns of cyberattacks. Conventional wisdom favors the absence of such patterns due to the overwhelming complexity of the modern cyberspace. Surprisingly, through a detailed analysis of an extensive data set that records the time-dependent frequencies of attacks over a relatively wide range of consecutive IP addresses, we successfully uncover intrinsic spatiotemporal patterns underlying cyberattacks, where the term “spatio” refers to the IP address space. In particular, we focus on analyzing macroscopic properties of the attack traffic flows and identify two main patterns with distinct spatiotemporal characteristics: deterministic and stochastic. Strikingly, almost all attacks are committed by very few sets of major attackers, whose attack “fingerprints” and target-selection schemes can be unequivocally identified from the very limited number of unique spatiotemporal characteristics, each of which exists only on a consecutive IP region and differs significantly from the others. We utilize a number of quantitative measures, including the flux-fluctuation law, the Markov state transition probability matrix, and predictability measures, to characterize the attack patterns in a comprehensive manner. A general finding is that the attack patterns possess high degrees of predictability, potentially paving the way to anticipating and, consequently, mitigating or even preventing large-scale cyberattacks using macroscopic approaches.
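The listing contains no code, but as a loose illustration of one measure named in this abstract, the sketch below estimates a Markov state transition probability matrix from a time series of per-window attack counts. The three-state binning, the thresholds, and the synthetic Poisson data are assumptions made here for illustration only; they are not the authors' procedure.

```python
# Hypothetical sketch: estimate a Markov state transition probability matrix
# from attack counts per time window. The quiet/moderate/heavy states and
# their thresholds are illustrative assumptions, not the paper's settings.
import numpy as np

def transition_matrix(counts, thresholds=(10, 100)):
    """P[i, j] = estimated probability of moving from state i to state j."""
    states = np.digitize(counts, thresholds)   # 0 = quiet, 1 = moderate, 2 = heavy
    n = len(thresholds) + 1
    P = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):  # tally observed one-step transitions
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

# Synthetic attack counts, purely for demonstration
counts = np.random.default_rng(0).poisson(lam=20, size=1000)
print(transition_matrix(counts))
```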

Contributors: Chen, Yu-Zhong (Author) / Huang, Zi-Gang (Author) / Xu, Shouhuai (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-05-20
Description

Supply-demand processes take place on a large variety of real-world networked systems, ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in the load requirement for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, in which resources are transported from supplier sites to users through various links. Here, by optimization we mean minimization of the maximum load on links, and system resilience is characterized by the size of the cascading failure, i.e., the number of users who fail to connect with suppliers.

We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute the traffic but may undesirably transmit failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of the cascading failure, but has little effect on the final cascading size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
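As a rough companion to findings (1) and (2), the sketch below routes each user's demand to its nearest supplier along shortest paths on a heterogeneous network and reports the maximum link load and the fraction of load-free links. The graph model, supplier placement, and unit demands are assumptions for illustration; this is not the paper's optimization procedure.

```python
# Hypothetical sketch: shortest-path routing of demand to suppliers and the
# resulting link loads. Graph, supplier sites, and demands are all assumed.
import networkx as nx

def link_loads(G, suppliers, demands):
    """Return {edge: load} after sending each user's demand along the
    shortest path to its closest supplier."""
    loads = {tuple(sorted(e)): 0.0 for e in G.edges()}
    for user, demand in demands.items():
        nearest = min(suppliers, key=lambda s: nx.shortest_path_length(G, user, s))
        path = nx.shortest_path(G, user, nearest)
        for u, v in zip(path[:-1], path[1:]):
            loads[tuple(sorted((u, v)))] += demand
    return loads

G = nx.barabasi_albert_graph(50, 2, seed=1)          # heterogeneous network
suppliers = [0, 1]                                    # assumed supplier sites
demands = {n: 1.0 for n in G.nodes() if n not in suppliers}
loads = link_loads(G, suppliers, demands)
print("maximum link load:", max(loads.values()))
print("fraction of load-free links:", sum(v == 0 for v in loads.values()) / len(loads))
```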

Contributors: Zhang, Si-Ping (Author) / Huang, Zi-Gang (Author) / Dong, Jia-Qi (Author) / Eisenberg, Daniel (Author) / Seager, Thomas (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-23
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant produces for. Regression analysis, artificial neural networks and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; the causes of energy surges are not easily determined; and the potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses samples from large, complex energy data sets and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed that relate directly to the cause and effect variables. The research findings of this paper include testing different sets of reverse engineering algorithms, understanding their output patterns and modifying the algorithms to improve the accuracy of the outputs.
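For context only, the sketch below shows the kind of regression-on-historical-use baseline that the abstract identifies as the traditional approach; the lag features and the synthetic hourly series are assumptions, and the paper's own reverse engineering algorithms are not reproduced here.

```python
# Hypothetical baseline: predict the next hour's energy use from the previous
# 24 hourly readings with ordinary least squares. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

def lagged_features(series, n_lags=24):
    """Design matrix of the previous n_lags readings plus the next-hour target."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
hourly_use = 100 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 2, 2000)

X, y = lagged_features(hourly_use)
model = LinearRegression().fit(X[:-168], y[:-168])   # hold out the final week
print("R^2 on held-out week:", model.score(X[-168:], y[-168:]))
```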

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for an energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings’ consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behaviors, and such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. The paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objectives are to develop an automated data pre-processing system that can detect anomalies and provide solutions with a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings of this paper also include comparisons of Empirical Mode Decomposition and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
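As a simplified illustration of the instantaneous-frequency idea described above, the sketch below applies a Hilbert transform to a detrended consumption signal and flags hours whose instantaneous frequency deviates sharply from the mean. A full EEMD decomposition is omitted, and the synthetic signal, sampling rate, and 3-sigma threshold are assumptions, not the paper's configuration.

```python
# Hypothetical sketch: instantaneous-frequency anomaly flagging via the
# Hilbert transform (EEMD step omitted). Signal and threshold are assumed.
import numpy as np
from scipy.signal import hilbert, detrend

def instantaneous_frequency(signal, fs=1.0):
    """Instantaneous frequency of the analytic signal (in units of fs)."""
    analytic = hilbert(detrend(signal))
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)

rng = np.random.default_rng(0)
t = np.arange(24 * 14)                                # two weeks of hourly data
load = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
load[150:155] += 40                                   # injected anomaly

freq = instantaneous_frequency(load)
z = (freq - freq.mean()) / freq.std()
print("flagged hours:", np.where(np.abs(z) > 3)[0])
```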

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

There are many data mining and machine learning techniques to manage large sets of complex energy supply and demand data for buildings, organizations and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse different loss factors for a building cluster. This is done with deep machine learning, training machines to semi-supervise the learning and understanding of the energy loss process. The SSEM aims at understanding the demand-supply characteristics of a building cluster and utilizes confident unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses and present the output, which provides an estimate of the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The conclusion of this paper provides a proposed energy model that can produce accurate numbers on energy demand, which in turn helps suppliers adopt such a model to optimize their supply strategies.
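The SSEM itself is not specified in the abstract; purely as an illustration of the generic semi-supervised idea it builds on, the sketch below self-trains a classifier from a small labelled subset of intervals and propagates confident labels to the unlabelled remainder using scikit-learn. The features, the synthetic labels, and the confidence threshold are assumptions.

```python
# Hypothetical sketch of semi-supervised self-training (not the SSEM):
# a few labelled intervals plus many unlabelled ones (marked with -1).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # e.g. supply, demand, temperature, hour
y_true = (X[:, 0] - X[:, 1] > 0.5).astype(int)   # 1 = high-loss interval (synthetic)

y = np.full(500, -1)                             # -1 marks unlabelled samples
labelled = rng.choice(500, size=50, replace=False)
y[labelled] = y_true[labelled]                   # only 10% of intervals carry labels

model = SelfTrainingClassifier(RandomForestClassifier(random_state=0), threshold=0.9)
model.fit(X, y)
print("accuracy over all intervals:", (model.predict(X) == y_true).mean())
```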

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

The low pH of the stomach serves as a barrier to ingested microbes and must be overcome or bypassed when delivering live bacteria for vaccine or probiotic applications. Typically, the impact of stomach acidity on bacterial survival is evaluated in vitro, as there are no small animal models to evaluate these effects in vivo. To better understand the effect of this low pH barrier on live attenuated Salmonella vaccines, which are often very sensitive to low pH, we investigated the value of the histamine mouse model for this application. A low pH gastric compartment was transiently induced in mice by the injection of histamine. This resulted in a gastric compartment of approximately pH 1.5 that was capable of distinguishing between acid-sensitive and acid-resistant microbes. Survival of enteric microbes during gastric transit in this model directly correlated with their in vitro acid resistance. Because many Salmonella enterica serotype Typhi vaccine strains are sensitive to acid, we have been investigating systems to enhance the acid resistance of these bacteria. Using the histamine mouse model, we demonstrate that the in vivo survival of S. Typhi vaccine strains increased approximately 10-fold when they carried a sugar-inducible arginine decarboxylase system. We conclude that this model will be useful for evaluating live bacterial preparations prior to clinical trials.

Created: 2014-01-29
Description

Leucine-responsive regulatory protein (Lrp) is known to be an indirect activator of type 1 fimbriae synthesis in Salmonella enterica serovar Typhimurium via direct regulation of FimZ, a direct positive regulator for type 1 fimbriae production. Using RT-PCR, we have shown previously that fimA transcription is dramatically impaired in both lrp-deletion (Δlrp) and constitutive-lrp expression (lrpC) mutant strains. In this work, we used chromosomal PfimA-lacZ fusions and yeast agglutination assays to confirm and extend our previous results. Direct binding of Lrp to PfimA was shown by an electrophoretic mobility shift assay (EMSA) and DNA footprinting assay. Site-directed mutagenesis revealed that the Lrp-binding motifs in PfimA play a role in both activation and repression of type 1 fimbriae production. Overproduction of Lrp also abrogates fimZ expression. EMSA data showed that Lrp and FimZ proteins independently bind to PfimA without competitive exclusion. In addition, both Lrp and FimZ binding to PfimA caused a hyper retardation (supershift) of the DNA-protein complex compared to the shift when each protein was present alone. Nutrition-dependent cellular Lrp levels closely correlated with the amount of type 1 fimbriae production. These observations suggest that Lrp plays important roles in type 1 fimbriation by acting as both a positive and negative regulator and its effect depends, at least in part, on the cellular concentration of Lrp in response to the nutritional environment.

Contributors: Baek, Chang-Ho (Author) / Kang, Ho-Young (Author) / Roland, Kenneth (Author) / Curtiss, Roy (Author) / ASU Biodesign Center Immunotherapy, Vaccines and Virotherapy (Contributor) / Biodesign Institute (Contributor)
Created: 2011-10-28
Description

Researchers have iterated that the future of synthetic biology and biotechnology lies in novel consumer applications of crossing biology with engineering. However, if the new biology's future is to be sustainable, early and serious efforts must be made towards social sustainability. Therefore, the crux of new applications of synthetic biology and biotechnology is public understanding and acceptance. The RASVaccine is a novel recombinant design not found in nature that re-engineers a common bacterium (Salmonella) to produce a strong immune response in humans. Synthesis of the RASVaccine has the potential to improve public health as an inexpensive, non-injectable product. But how can scientists move forward to create a dialogue that builds a 'common sense' of this new technology and thereby promotes social sustainability? This paper delves into public issues raised around these novel technologies and uses the RASVaccine as an example of meeting the public with a common sense of its possibilities and limitations.

Contributors: Dankel, Dorothy J. (Author) / Roland, Kenneth (Author) / Fisher, Michael (Author) / Brenneman, Karen (Author) / Delgado, Ana (Author) / Santander, Javier (Author) / Baek, Chang-Ho (Author) / Clark-Curtiss, Josephine (Author) / Strand, Roger (Author) / Curtiss, Roy (Author) / ASU Biodesign Center Immunotherapy, Vaccines and Virotherapy (Contributor) / Biodesign Institute (Contributor)
Created: 2014-08-01
Description

Background: Salmonella has been employed to deliver therapeutic molecules against cancer and infectious diseases. As the carrier for the target gene(s), the cargo plasmid should be stable in the bacterial vector. Plasmid recombination has been reduced in E. coli by mutating several genes, including recA, recE, recF and recJ. However, to our knowledge, there have been no published studies of the effect of these or any other genes that play a role in plasmid recombination in Salmonella enterica.

Results: The effect of recA, recF and recJ deletions on DNA recombination was examined in three serotypes of Salmonella enterica. We found that (1) intraplasmid recombination between direct duplications was RecF-independent in Typhimurium and Paratyphi A, but could be significantly reduced in Typhi by a ΔrecA or ΔrecF mutation; (2) in all three Salmonella serotypes, both ΔrecA and ΔrecF mutations reduced intraplasmid recombination when a 1041 bp intervening sequence was present between the duplications; (3) ΔrecA and ΔrecF mutations resulted in lower frequencies of interplasmid recombination in Typhimurium and Paratyphi A, but not in Typhi; (4) in some cases, a ΔrecJ mutation could reduce plasmid recombination but was less effective than ΔrecA and ΔrecF mutations. We also examined chromosome-related recombination. The frequencies of intrachromosomal recombination and plasmid integration into the chromosome were 2 and 3 logs lower than plasmid recombination frequencies in Rec+ strains. A ΔrecA mutation reduced both intrachromosomal recombination and plasmid integration frequencies.

Conclusions: The ΔrecA and ΔrecF mutations can reduce plasmid recombination frequencies in Salmonella enterica, but the effect can vary between serovars. This information will be useful for developing Salmonella delivery vectors able to stably maintain plasmid cargoes for vaccine development and gene therapy.

Contributors: Zhang, Xiangmin (Author) / Wanda, Soo-Young (Author) / Brenneman, Karen (Author) / Kong, Wei (Author) / Zhang, Xin (Author) / Roland, Kenneth (Author) / Curtiss, Roy (Author) / ASU Biodesign Center Immunotherapy, Vaccines and Virotherapy (Contributor) / Biodesign Institute (Contributor)
Created: 2011-02-08