Matching Items (45)

Description

Graph pebbling is a network optimization model for transporting discrete resources that are consumed in transit: moving two pebbles across an edge consumes one of them. The pebbling number of a graph is the smallest number t such that, from any initial configuration of t pebbles on its vertices, one can place a pebble on any given target vertex via such pebbling steps. It is known that deciding whether a given configuration on a particular graph can reach a specified target is NP-complete, even for graphs of diameter 2, and that deciding whether the pebbling number has a prescribed upper bound is Π_2^P-complete. On the other hand, for many families of graphs there are formulas or polynomial algorithms for computing pebbling numbers; examples include complete graphs, products of paths (including cubes), trees, cycles, and diameter-2 graphs. Moreover, graphs having minimum pebbling number are called Class 0, and many authors have studied which graphs are Class 0 and what graph properties guarantee it, with no characterization in sight. In this paper we investigate an important family of diameter-3 chordal graphs called split graphs: graphs whose vertex set can be partitioned into a clique and an independent set. We provide a formula for the pebbling number of a split graph, along with an algorithm for calculating it that runs in O(n^β) time, where β = 2ω/(ω + 1) ≈ 1.41 and ω ≈ 2.376 is the exponent of matrix multiplication. Furthermore, we determine that all split graphs with minimum degree at least 3 are Class 0.
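As an illustration of the split-graph structure central to this result (not the authors' pebbling algorithm), the Hammer–Simeone degree-sequence criterion recognizes split graphs directly; the sketch below assumes only that standard characterization:

```python
# Hedged sketch: recognize a split graph from its degree sequence via the
# Hammer-Simeone criterion. With degrees sorted d1 >= ... >= dn and
# m = max{i : d_i >= i - 1}, the graph is split iff
#     d_1 + ... + d_m == m(m - 1) + d_{m+1} + ... + d_n.
def is_split(degrees):
    """Return True iff a graph with this degree sequence is a split graph."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    m = max((i for i in range(1, n + 1) if d[i - 1] >= i - 1), default=0)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

print(is_split([3, 2, 2, 1]))     # K3 plus a pendant vertex -> True
print(is_split([2, 2, 2, 2, 2]))  # 5-cycle, the classic non-example -> False
```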

Contributors: Alcon, Liliana (Author) / Gutierrez, Marisa (Author) / Hurlbert, Glenn (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

Complex human control is a topic of much interest in the fields of robotics, manufacturing, space exploration, and many others. Even simple tasks that humans perform with ease can be extremely complicated when observed from a controls and complex-systems perspective. One such simple task is that of a human carrying and moving a coffee cup. Though this may be a mundane task for humans, when it is modeled and analyzed, the system may be quite chaotic in nature. Understanding such systems is key to the development of robots and autonomous systems that can perform these tasks themselves.

The coffee cup system can be simplified and modeled as a cart-and-pendulum system. Bazzi et al. and Maurice et al. present two different cart-and-pendulum systems to represent the coffee cup system [1],[2]. The purpose of this project was to build upon these systems, gain a better understanding of the coffee cup system, and determine where chaos exists within it. The honors thesis team first worked with their senior design group to develop a mathematical model of the cart-and-pendulum system based on the Bazzi and Maurice papers [1],[2]. The team then analyzed this system and extended it to a cart-and-two-pendulum model that represents the coffee cup system more accurately.

Analysis of the single-pendulum model showed that there exists a low-frequency region where the pendulum and the cart remain in phase with each other, and a high-frequency region where the cart and pendulum have a π phase difference between them. The transition point between the low- and high-frequency regions is determined by the resonant frequency of the pendulum. Analysis of the two-pendulum system confirmed this result and revealed that a difference in length between the pendulums causes them to transition to the high-frequency region at separate frequencies: each pendulum has its own resonant frequency and transitions into the high-frequency region accordingly. This creates a range of frequencies over which the pendulums are out of phase with each other. After both pendulums have transitioned, they remain in phase with each other and out of phase with the cart.
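The single-pendulum picture is easy to reproduce numerically: drive a lightly damped pendulum with a cart following prescribed motion x_c(t) = A cos(ωt) and check the sign of the correlation between pendulum angle and cart position below and above resonance. All parameter values here are illustrative, not taken from the thesis model:

```python
# Hedged sketch: pendulum hanging from a cart with prescribed motion
# x_c(t) = A*cos(w*t); light damping gives a steady state. Equation:
#   th'' = -(g/L)*sin(th) - (xc''/L)*cos(th) - c*th'
# Parameter values are illustrative, not the thesis values.
import numpy as np
from scipy.integrate import solve_ivp

g, L, A, c = 9.81, 0.05, 0.005, 0.5      # gravity, length, drive amplitude, damping

def rhs(t, y, w):
    th, thdot = y
    xc_dd = -A * w**2 * np.cos(w * t)    # cart acceleration
    return [thdot, -(g / L) * np.sin(th) - (xc_dd / L) * np.cos(th) - c * thdot]

w0 = np.sqrt(g / L)                      # pendulum resonant frequency
for w in [0.5 * w0, 2.0 * w0]:           # below and above resonance
    t_eval = np.linspace(0, 60, 6001)
    sol = solve_ivp(rhs, (0, 60), [0.0, 0.0], args=(w,), t_eval=t_eval, rtol=1e-8)
    tail = sol.t > 50                    # steady-state portion of the run
    corr = np.dot(np.cos(w * sol.t[tail]), sol.y[0][tail])
    print(f"w/w0 = {w / w0:.1f}:", "in phase with cart" if corr > 0 else "~pi out of phase")
```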

However, if the length of a pendulum is decreased too much, the system starts to exhibit chaotic behavior. The short pendulum starts to act in a chaotic manner, and the phase relationship between the pendulums and the cart is no longer maintained. Since the pendulum length represents the distance between a particle of coffee and the top of the cup, this implies that coffee near the top of the cup would cause the system to act chaotically. Further analysis would be needed to determine why the length affects the system in this way.
Contributors: Zindani, Abdul Rahman (Co-author) / Crane, Kari (Co-author) / Lai, Ying-Cheng (Thesis director) / Jiang, Junjie (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description

It is known that in classical fluids turbulence typically occurs at high Reynolds numbers. But can turbulence occur at low Reynolds numbers? Here we investigate the transition to turbulence in the classic Taylor-Couette system in which the rotating fluids are manufactured ferrofluids with magnetized nanoparticles embedded in liquid carriers. We find that, in the presence of a magnetic field transverse to the symmetry axis of the system, turbulence can occur at Reynolds numbers that are at least one order of magnitude smaller than those in conventional fluids. This is established by extensive computational ferrohydrodynamics through a detailed investigation of transitions in the flow structure, and characterization of the behaviors of physical quantities such as the energy, the wave number, and the angular momentum through the bifurcations. A finding is that, as the magnetic field is increased, the onset of turbulence can be determined accurately and reliably. Our results imply that experimental investigation of turbulence may be feasible by using ferrofluids. Our study of the transition to and evolution of turbulence in the Taylor-Couette ferrofluidic flow system provides insights into the challenging problem of turbulence control.
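For scale, the Reynolds number in question is the standard inner-cylinder Reynolds number of Taylor-Couette flow, Re = Ω r_i d / ν with gap width d = r_o - r_i; a minimal worked example with illustrative (not the paper's) parameters:

```python
# Hedged sketch: inner-cylinder Reynolds number of a Taylor-Couette system,
# Re = Omega * r_i * (r_o - r_i) / nu. Values below are illustrative only.
def reynolds(omega, r_i, r_o, nu):
    """omega: rotation rate (rad/s); radii in m; nu: kinematic viscosity (m^2/s)."""
    return omega * r_i * (r_o - r_i) / nu

# water-like carrier fluid, 1 cm inner radius, 5 mm gap, 10 rad/s
print(reynolds(omega=10.0, r_i=0.01, r_o=0.015, nu=1e-6))  # -> 500.0
```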

Contributors: Altmeyer, Sebastian (Author) / Do, Younghae (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-12
Description

A relatively unexplored issue in cybersecurity science and engineering is whether there exist intrinsic patterns of cyberattacks. Conventional wisdom favors the absence of such patterns due to the overwhelming complexity of the modern cyberspace. Surprisingly, through a detailed analysis of an extensive data set that records the time-dependent frequencies of attacks over a relatively wide range of consecutive IP addresses, we successfully uncover intrinsic spatiotemporal patterns underlying cyberattacks, where the term “spatio” refers to the IP address space. In particular, we focus on analyzing macroscopic properties of the attack traffic flows and identify two main patterns with distinct spatiotemporal characteristics: deterministic and stochastic. Strikingly, a very small number of attacker sets commit almost all the attacks; their attack “fingerprints” and target-selection schemes can be unequivocally identified from the very limited number of unique spatiotemporal characteristics, each of which exists only on a consecutive IP region and differs significantly from the others. We utilize a number of quantitative measures, including the flux-fluctuation law, the Markov state-transition probability matrix, and predictability measures, to characterize the attack patterns in a comprehensive manner. A general finding is that the attack patterns possess high degrees of predictability, potentially paving the way to anticipating and, consequently, mitigating or even preventing large-scale cyberattacks using macroscopic approaches.
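Of the measures listed, the Markov state-transition matrix is the simplest to reproduce: discretize the attack-rate series into a few states and count transitions. A minimal sketch with a made-up state sequence:

```python
# Hedged sketch: estimate a Markov state-transition probability matrix from
# a sequence of discretized attack-rate states (0 = low, 1 = medium,
# 2 = high). The sequence below is made up for illustration.
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic matrix P[i, j] = Pr(next state j | current state i)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

seq = [0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0]
print(transition_matrix(seq, 3))
```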

Contributors: Chen, Yu-Zhong (Author) / Huang, Zi-Gang (Author) / Xu, Shouhuai (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-05-20
Description

Supply-demand processes take place on a large variety of real-world networked systems, ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in load requirement for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from the supplier sites to users through various links. Here, by optimization we mean minimization of the maximum load on links, and system resilience is characterized by the size of the cascading failure: the number of users who fail to connect with suppliers.

We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed along shortest paths; (3) redundant links can help reroute traffic but may undesirably propagate failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of the cascading failure but has little effect on the final cascade size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expansion period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
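The load bookkeeping behind findings (1) and (2) can be sketched in a few lines: route one unit of demand from each user to its nearest supplier along shortest paths and accumulate per-link loads; the maximum link load is the optimization objective, and the load-free links illustrate finding (2). The network, supplier placement, and demands below are made up for illustration:

```python
# Hedged sketch of the load bookkeeping described above: route one unit of
# demand from every user to its nearest supplier along a shortest path and
# record per-link loads. Network and roles are illustrative.
import networkx as nx

G = nx.random_geometric_graph(60, 0.25, seed=1)
suppliers = {0, 1, 2}
users = set(G.nodes) - suppliers

load = {tuple(sorted(e)): 0.0 for e in G.edges}
for u in users:
    # nearest supplier by hop count (skip unreachable suppliers)
    path = min((nx.shortest_path(G, u, s) for s in suppliers
                if nx.has_path(G, u, s)), key=len, default=None)
    if path:
        for a, b in zip(path, path[1:]):
            load[tuple(sorted((a, b)))] += 1.0

print("max link load:", max(load.values()))
print("fraction of load-free links:",
      sum(v == 0 for v in load.values()) / len(load))
```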

Contributors: Zhang, Si-Ping (Author) / Huang, Zi-Gang (Author) / Dong, Jia-Qi (Author) / Eisenberg, Daniel (Author) / Seager, Thomas (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-23
Description

In this research, I surveyed existing methods of characterizing epilepsy from electroencephalogram (EEG) data, including the Random Forest algorithm, which many researchers claim to be the most effective at detecting epileptic seizures [7]. I observed that although many papers claimed a detection rate of >99% using Random Forest, they did not specify when the detection was declared within the 23.6-second interval of the seizure event. In this research, I created a time-series procedure to detect the seizure as early as possible within the 23.6-second epileptic seizure window and found that the detection is effective (>92%) as early as the first few seconds of the epileptic episode. I intend to use this research as a stepping stone towards my upcoming Master's thesis research, where I plan to expand the time-series detection mechanism to the pre-ictal stage, which will require a different dataset.
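Such a time-series procedure can be sketched as a sliding-window classifier that declares detection at the first positive window; the features and parameters below are placeholders for illustration, not the exact pipeline of this thesis:

```python
# Hedged sketch of the early-detection idea: slide a short window over the
# EEG, classify each window with a trained Random Forest, and declare the
# seizure at the first positive window. Features and the trained model are
# placeholders, not the exact pipeline of this thesis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(x):
    """Toy features per window: variance, line length, peak amplitude."""
    return [np.var(x), np.abs(np.diff(x)).sum(), np.abs(x).max()]

def first_detection(eeg, clf, fs=256, win_s=1.0, step_s=0.25):
    """Return time (s) of first window classified as seizure, or None."""
    win, step = int(fs * win_s), int(fs * step_s)
    for start in range(0, len(eeg) - win, step):
        feats = [window_features(eeg[start:start + win])]
        if clf.predict(feats)[0] == 1:
            return start / fs
    return None

# usage (with a classifier already fit on labeled windows):
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# print(first_detection(record, clf))
```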

Contributors: Bou-Ghazale, Carine (Author) / Lai, Ying-Cheng (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2022-05
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
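The final localization step is classical: once delays are converted to distances, positions follow from locating each node against known reference points. A textbook least-squares sketch with synthetic anchors and distances (an illustration, not the paper's exact triangularization routine):

```python
# Hedged sketch of the localization step: recover a point in the plane from
# its distances to known anchor points by linearizing d_i^2 = |p - a_i|^2
# against the first anchor and solving least squares. Data are synthetic.
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 anchor positions and distances."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
p_true = np.array([1.0, 2.0])
dists = np.linalg.norm(anchors - p_true, axis=1)
print(trilaterate(anchors, dists))   # -> approximately [1.0, 2.0]
```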

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on–off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
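The Hilbert-transform step of the framework is compact enough to sketch: for a single decomposed component, the analytic signal yields the instantaneous amplitude whose bursts the on-off intermittency analysis quantifies. The chirp below stands in for a real empirical-mode component:

```python
# Hedged sketch of the Hilbert step: for one decomposed component, the
# analytic signal gives instantaneous amplitude and frequency. A synthetic
# amplitude-modulated chirp stands in for a real EMD component.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
component = np.sin(2 * np.pi * (80 + 40 * t) * t) * np.exp(-((t - 1) / 0.3) ** 2)

analytic = hilbert(component)
amplitude = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)

# on-off intermittency appears as bursts of `amplitude` above a threshold
bursts = amplitude > 0.5 * amplitude.max()
print("burst fraction:", bursts.mean())
```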

Contributors: Huang, Liang (Author) / Ni, Xuan (Author) / Ditto, William L. (Author) / Spano, Mark (Author) / Carney, Paul R. (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-18
Description

Recent works revealed that the energy required to control a complex network depends on the number of driving signals and the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by the structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy (e.g. in a large number of real-world networks). Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
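The chain effect can be seen in a toy computation: for a directed chain driven only at its head, the controllability Gramian W becomes ill-conditioned, and the energy x_f^T W^{-1} x_f needed to reach the far end grows rapidly with chain length. The stabilizing self-loops and the target state below are illustrative choices, not the paper's model:

```python
# Hedged toy computation of control energy on a directed chain driven only
# at its head node. Self-loops of -1 make A stable so the infinite-horizon
# controllability Gramian exists; all values are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

for n in [2, 4, 6, 8]:
    A = -np.eye(n) + np.diag(np.ones(n - 1), k=-1)   # chain 1 -> 2 -> ... -> n
    B = np.zeros((n, 1)); B[0, 0] = 1.0              # single driver at node 1
    W = solve_continuous_lyapunov(A, -B @ B.T)       # A W + W A^T + B B^T = 0
    x_f = np.zeros(n); x_f[-1] = 1.0                 # target: unit state at node n
    energy = x_f @ np.linalg.solve(W, x_f)           # x_f^T W^{-1} x_f
    print(f"n = {n}: control energy ~ {energy:.3e}")
```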

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

A challenging problem in network science is to control complex networks. In existing frameworks of structural or exact controllability, the ability to steer a complex network toward any desired state is measured by the minimum number of required driver nodes. However, if we implement actual control by imposing input signals on the minimum set of driver nodes, an unexpected phenomenon arises: due to computational or experimental error there is a great probability that convergence to the final state cannot be achieved. In fact, the associated control cost can become unbearably large, effectively preventing actual control from being realized physically. The difficulty is particularly severe when the network is deemed controllable with a small number of drivers. Here we develop a physical controllability framework based on the probability of achieving actual control. Using a recently identified fundamental chain structure underlying the control energy, we offer strategies to turn physically uncontrollable networks into physically controllable ones by imposing a slightly augmented set of input signals on properly chosen nodes. Our findings indicate that, although full control can be theoretically guaranteed by the prevailing structural controllability theory, it is necessary to balance the number of driver nodes and control cost to achieve physical control.
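The structural-controllability baseline referred to here is the maximum-matching driver count of Liu et al. (N_D = max(N - |M*|, 1)); a minimal networkx sketch on a made-up directed network:

```python
# Hedged sketch of the structural-controllability baseline: the minimum
# driver count equals N - |maximum matching| (at least 1), with the matching
# taken in the bipartite graph of out-copies vs in-copies of the nodes.
# The example network is made up for illustration.
import networkx as nx

def min_driver_nodes(G):
    """Minimum driver count of a directed network (Liu et al. 2011)."""
    B = nx.Graph()
    outs = {u: ("out", u) for u in G}
    ins = {v: ("in", v) for v in G}
    B.add_nodes_from(outs.values(), bipartite=0)
    B.add_nodes_from(ins.values(), bipartite=1)
    B.add_edges_from((outs[u], ins[v]) for u, v in G.edges)
    match = nx.bipartite.hopcroft_karp_matching(B, top_nodes=set(outs.values()))
    return max(len(G) - len(match) // 2, 1)

G = nx.DiGraph([(1, 2), (2, 3), (2, 4), (4, 5), (5, 2)])
print(min_driver_nodes(G))   # -> 2 drivers for this 5-node example
```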

Contributors: Wang, Le-Zhi (Author) / Chen, Yu-Zhong (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-11