Matching Items (79)
Description

Recent work has revealed that the energy required to control a complex network depends on the number of driving signals, and that the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain this scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy, demonstrated in a large number of real-world networks. Owing to their structural nature, the LCCs may shed light on energy issues associated with the control of nonlinear dynamical networks.
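The role of long control chains can be illustrated with a toy calculation (not the paper's actual model): for a discrete-time linear network driven only at the head of a directed chain, the minimum control energy is governed by the smallest eigenvalue of the controllability Gramian, which shrinks rapidly as the chain lengthens. The dynamics, coupling values, and driver placement below are illustrative assumptions.

```python
import numpy as np

def chain_gramian_min_eig(n, steps=50, coupling=0.5):
    """Smallest eigenvalue of the controllability Gramian for a
    directed chain of n nodes driven only at the head node."""
    # Stable linear dynamics: self-loop 0.3, downstream coupling 0.5.
    A = np.diag(np.full(n, 0.3)) + np.diag(np.full(n - 1, coupling), k=-1)
    B = np.zeros((n, 1))
    B[0, 0] = 1.0                       # single driver at the chain head
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(steps):              # W = sum_k A^k B B^T (A^k)^T
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return np.linalg.eigvalsh(W)[0]     # eigvalsh sorts ascending

# The worst-case control energy scales like 1 / eigmin(W), so a longer
# chain (a longer LCC) makes the required energy blow up.
```

Comparing chain lengths shows the effect directly: the 8-node chain's Gramian is far closer to singular than the 3-node chain's.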

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the collected signals are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach that enables reconstruction of the full topology of the underlying geospatial network and, more importantly, accurate estimation of the time delays. A standard triangulation algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. Because a node in a geospatial network tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
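The final localization step can be sketched as least-squares multilateration: once the time delays from an unknown node to several already-located nodes are estimated, the delays convert to distances and the node's position follows from a linearized system. The function name and the unit propagation speed below are assumptions for illustration, not from the paper.

```python
import numpy as np

def locate_from_delays(anchors, delays, speed=1.0):
    """Estimate a node's 2-D position from propagation delays to
    known anchor positions (linearized multilateration)."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(delays, float) * speed          # delay -> distance
    a0, d0 = anchors[0], d[0]
    # Subtracting the first range equation |p - a_0|^2 = d_0^2 from
    # |p - a_i|^2 = d_i^2 removes the |p|^2 term and linearizes:
    #   2 (a_i - a_0) . p = |a_i|^2 - |a_0|^2 + d_0^2 - d_i^2
    Amat = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         + d0 ** 2 - d[1:] ** 2)
    p, *_ = np.linalg.lstsq(Amat, b, rcond=None)
    return p
```

With three non-collinear anchors the linearized system is exactly determined, so a noise-free node position is recovered exactly.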

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

Locating sources of diffusion and spreading from minimum data is a significant problem in network science, with great applied value to society. However, a general theoretical framework dealing with optimal source localization is lacking. Combining the controllability theory for complex networks and compressive sensing, we develop a framework with high efficiency and robustness for optimal source localization in arbitrary weighted networks with arbitrary distributions of sources. We offer a minimum output analysis to quantify source locatability through a minimal number of messenger nodes that produce sufficient measurements for fully locating the sources. Once the minimum set of messenger nodes is discerned, the problem of optimal source localization becomes one of sparse signal reconstruction, which can be solved using compressive sensing. Application of our framework to model and empirical networks demonstrates that sources in homogeneous and denser networks are more readily located. A surprising finding is that, for a connected undirected network with random link weights and weak noise, a single messenger node is sufficient for locating any number of sources. The framework deepens our understanding of the network source localization problem and offers efficient tools with broad applications.
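The sparse-reconstruction step can be illustrated with a toy solver. Here orthogonal matching pursuit stands in for the paper's compressive-sensing machinery: y collects measurements at messenger nodes, G is an assumed (synthetic) measurement matrix, and s is the sparse source vector to recover.

```python
import numpy as np

def omp(G, y, k):
    """Orthogonal matching pursuit: recover a k-sparse source
    vector s from messenger measurements y = G @ s."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(G.T @ residual)))
        support.append(j)
        # Re-fit on the selected columns and update the residual.
        coef, *_ = np.linalg.lstsq(G[:, support], y, rcond=None)
        residual = y - G[:, support] @ coef
    s = np.zeros(G.shape[1])
    s[support] = coef
    return s
```

With enough measurements relative to the number of active sources, the sparse source vector is recovered exactly from far fewer measurements than unknowns.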

Contributors: Hu, Zhao-Long (Author) / Han, Xiao (Author) / Lai, Ying-Cheng (Author) / Wang, Wen-Xu (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-12
Description
Background
Syngas fermentation, the bioconversion of CO, CO₂, and H₂ to biofuels and chemicals, has undergone considerable optimization for industrial applications. Moreover, full-scale plants for ethanol production from syngas fermentation by pure cultures are being built worldwide. The composition of syngas depends on the feedstock gasified and the gasification conditions. However, it remains unclear how different syngas mixtures affect the metabolism of carboxidotrophs, including the ethanol/acetate ratios. In addition, the potential application of mixed cultures in syngas fermentation and their advantages over pure cultures have not been deeply explored. In this work, the effects of CO₂ and H₂ on the CO metabolism of pure and mixed cultures were studied and compared. For this, a CO-enriched mixed culture and two isolated carboxidotrophs were grown with different combinations of syngas components (CO, CO:H₂, CO:CO₂, or CO:CO₂:H₂).
Results
The CO metabolism of the mixed culture was somewhat affected by the addition of CO₂ and/or H₂, but the pure cultures were more sensitive to changes in gas composition than the mixed culture. CO₂ inhibited CO oxidation by the Pleomorphomonas-like isolate and decreased the ethanol/acetate ratio of the Acetobacterium-like isolate. H₂ did not inhibit ethanol or H₂ production by the Acetobacterium and Pleomorphomonas isolates, respectively, but decreased their CO consumption rates. As part of the mixed culture, these isolates, together with other microorganisms, consumed H₂ and CO₂ (along with CO) under all conditions tested and at similar CO consumption rates (2.6 ± 0.6 mmol CO L⁻¹ day⁻¹), while maintaining overall function (acetate production). Providing a continuous supply of CO by membrane diffusion caused the mixed culture to switch from acetate to ethanol production, presumably due to the increased supply of electron donor. In parallel with this change in metabolic function, the structure of the microbial community became dominated by Geosporobacter phylotypes instead of Acetobacterium and Pleomorphomonas phylotypes.
Conclusions
These results provide evidence for the potential of mixed-culture syngas fermentation, since the CO-enriched mixed culture showed high functional redundancy, was resilient to changes in syngas composition, and was capable of producing acetate or ethanol as main products of CO metabolism.
Created: 2017-09-16
Description

To date, little research has been performed regarding the planning and management of “small” projects – those projects typically differentiated from “large” projects by their lower costs. In 2013, the Construction Industry Institute (CII) set out to develop a front end planning tool that provides practitioners with a standardized process for planning small projects in the industrial sector. The research team determined that data should be sought from industry regarding small industrial projects to ensure the applicability, effectiveness, and validity of the new tool. The team developed and administered a survey to determine (1) the prevalence of small projects, (2) the planning processes currently in use for small projects, and (3) the metrics currently used by industry to differentiate between small and large projects. The survey data showed that small projects make up a majority of projects completed in the industrial sector, that planning of these projects varies greatly across the industry, and that the metrics posed in the survey were mostly not appropriate for differentiating between small and large projects. This study contributes to knowledge by adding to the limited research surrounding small projects and by suggesting future research into using measures of project complexity to differentiate between small and large projects.

Contributors: Collins, Wesley (Author) / Parrish, Kristen (Author) / Gibson, G (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-08-24
Description

For the past three decades, the Saudi construction industry (SCI) has exhibited poor performance. Many research efforts have tried to identify the problem and its potential causes, but there have been few publications identifying ways to mitigate the problem and describing testing to validate the proposed solution. This paper examines the research and development (R&D) approach in the SCI. A literature review was performed to identify the impact that R&D has had on the SCI. A questionnaire was also created to survey industry professionals and researchers. The results show evidence that SCI practice and academic research exist in separate silos. This study recommends a change of mindset in both the public and private sectors regarding their views on R&D, since cooperation is required to create collaboration between the two sectors and improve the competitiveness of the country's economy.

Contributors: Alhammadi, Yasir (Author) / Algahtany, Mohammed (Author) / Kashiwagi, Dean (Author) / Sullivan, Kenneth (Author) / Kashiwagi, Jacob (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

The principles of a new project management model have been tested for the past 20 years. This model utilizes expertise instead of the traditional management, direction, and control (MDC); it is a leadership-based model rather than a management model. Practicing the new model requires a change in paradigm and in project management structure. The practices of this new paradigm include minimizing the flow of information and communications to and from the project manager (including meetings, emails, and documents), eliminating technical communications, reducing client management, direction, and control of the vendor, and hiring vendors or personnel to do specific tasks. A vendor is hired only after they have clearly shown that they know what they are doing through past performance on similar projects, that they clearly understand how to create transparency to minimize risk that they do not control, and that they can clearly outline their project plan using a detailed milestone schedule including time, cost, and tasks, all communicated in the language of metrics.

Contributors: Rivera, Alfredo (Author) / Kashiwagi, Dean (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Load-associated fatigue cracking is one of the major distress types occurring in flexible pavements. The flexural bending beam fatigue laboratory test has been used for several decades and is considered an integral part of the Superpave advanced characterization procedure. One of the most significant ways to extend the fatigue life of an asphalt mixture is to add sustainable materials, such as rubber or polymers, to the mixture. A laboratory testing program was performed on three gap-graded mixtures: unmodified, Asphalt Rubber (AR), and polymer-modified. Strain-controlled fatigue tests were conducted according to the AASHTO T321 procedure. The results from the beam fatigue tests indicated that the AR and polymer-modified gap-graded mixtures would have much longer fatigue lives than the reference (unmodified) mixture. In addition, a mechanistic analysis using 3D-Move software, coupled with a cost-effectiveness analysis based on the fatigue performance of the three mixtures, was performed. Although AR and polymer modification increase the cost of the material, the analysis showed that both mixtures are significantly more cost-effective than the unmodified HMA mixture.
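The cost-effectiveness comparison can be illustrated with a minimal sketch, taking effectiveness as fatigue life (cycles to failure) per unit material cost. All numbers below are invented for illustration and are not from the study.

```python
# Hypothetical illustration of fatigue cost-effectiveness:
# cycles to failure per dollar of mix cost (numbers invented).
mixtures = {
    "unmodified":       {"cycles": 1.0e5, "cost_per_ton": 60.0},
    "asphalt_rubber":   {"cycles": 9.0e5, "cost_per_ton": 75.0},
    "polymer_modified": {"cycles": 7.0e5, "cost_per_ton": 80.0},
}

def cost_effectiveness(mix):
    """Fatigue life delivered per unit of material cost."""
    return mix["cycles"] / mix["cost_per_ton"]

ranking = sorted(mixtures,
                 key=lambda m: cost_effectiveness(mixtures[m]),
                 reverse=True)
# A pricier mix still wins when its fatigue life grows faster than its cost.
```

This mirrors the paper's qualitative conclusion: the modified mixtures cost more per ton, yet their far longer fatigue lives make them the more cost-effective choice.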

Contributors: Souliman, Mena I. (Author) / Mamlouk, Michael (Author) / Eifert, Annie (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

As gesture interfaces become more mainstream, it is increasingly important to investigate the behavioral characteristics of these interactions, particularly in three-dimensional (3D) space. In this study, Fitts’ method was extended to such input technologies, and the applicability of Fitts’ law to gesture-based interactions was examined. The experiment included three gesture-based input devices that utilize different techniques to capture user movement, and compared them to conventional input technologies such as the touchscreen and mouse. Participants completed a target-acquisition test in which they were instructed to move a cursor from a home location to a spherical target as quickly and accurately as possible. Three distances and three target sizes were tested six times in a randomized order for all input devices. A total of 81 participants completed all tasks. Movement time, error rate, and throughput were calculated for each input technology. Results showed that mean movement time was highly correlated with the target's index of difficulty for all devices, providing evidence that Fitts’ law can be extended and applied to gesture-based devices. Throughput was significantly lower for the gesture-based devices than for the mouse and touchscreen, and as the index of difficulty increased, movement time increased significantly more for the gesture technologies. Error counts were statistically higher for all gesture-based input technologies than for the mouse. In addition, error counts for all inputs were highly correlated with target width, while movement distance had little impact. Overall, the findings suggest that gesture-based devices can be characterized by Fitts’ law in a similar fashion to conventional 1D or 2D devices.
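The analysis described above follows the standard Fitts formulation: index of difficulty ID = log2(D/W + 1), a linear fit MT = a + b·ID, and throughput ID/MT. A minimal sketch on synthetic data (the function name and the numbers in the usage example are illustrative, not the study's data):

```python
import numpy as np

def fitts_fit(distances, widths, movement_times):
    """Fit Fitts' law MT = a + b * ID, with ID = log2(D/W + 1),
    and report mean throughput ID/MT in bits per second."""
    ID = np.log2(np.asarray(distances) / np.asarray(widths) + 1.0)
    X = np.column_stack([np.ones_like(ID), ID])          # [1, ID] design matrix
    (a, b), *_ = np.linalg.lstsq(X, np.asarray(movement_times), rcond=None)
    throughput = float(np.mean(ID / np.asarray(movement_times)))
    return a, b, throughput
```

A high correlation between MT and ID (i.e. a good linear fit) is exactly the evidence the study uses to conclude that Fitts’ law extends to gesture devices; lower throughput corresponds to a less efficient input technology.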

Contributors: Burno, Rachael A. (Author) / Wu, Bing (Author) / Doherty, Rina (Author) / Colett, Hannah (Author) / Elnaggar, Rania (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-10-23
Description

The estimation of energy demand by power plants has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; the causes of energy surges are not easily determined; and potential energy use reductions from energy efficiency solutions are usually not translated into actual reductions. This paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses samples from large, complex energy data sets, and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are developed that have a direct relation to the cause-and-effect variables. The research findings of this paper include testing different sets of reverse engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
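The traditional regression baseline that the paper critiques can be sketched in a few lines: ordinary least squares fit of historical load against a weather driver, then extrapolation to forecast demand. The data below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic "historical" data: daily energy use driven by temperature
# plus noise (illustrative only; not from the study).
rng = np.random.default_rng(42)
temperature = rng.uniform(10, 40, size=200)                  # deg C
energy = 500.0 + 12.0 * temperature + rng.normal(0, 20, 200)  # kWh

# Ordinary least squares: the traditional demand-estimation baseline.
X = np.column_stack([np.ones_like(temperature), temperature])
(intercept, slope), *_ = np.linalg.lstsq(X, energy, rcond=None)

# Forecast demand for a 35 deg C day by extrapolating the fit.
predicted = intercept + slope * 35.0
```

The paper's point is that such a single-driver historical fit cannot separate the equipment-, system-, and building-level causes of demand, which is what motivates its combined modeling and data-mining framework.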

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-12-09