This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 28
Description

We develop a general framework to analyze the controllability of multiplex networks using multiple-relation networks and multiple-layer networks with interlayer couplings as two classes of prototypical systems. In the former, networks associated with different physical variables share the same set of nodes, and in the latter, diffusion processes take place. We find that, for a multiple-relation network, a layer exists that dominantly determines the controllability of the whole network and, for a multiple-layer network, a small fraction of the interconnections can enhance the controllability remarkably. Our theory is generally applicable to other types of multiplex networks as well, leading to significant insights into the control of complex network systems with diverse structures and interacting patterns.
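As a hedged illustration of the kind of computation such frameworks build on (not the article's multiplex method itself), the classic structural-controllability result for a single directed network gives the minimum number of driver nodes as N minus the size of a maximum matching over the links; the example networks below are hypothetical:

```python
# Minimum driver nodes for one directed layer via structural controllability:
# N_D = max(1, N - size of a maximum matching in the bipartite link graph).

def max_matching(n, edges):
    """Maximum bipartite matching over directed edges (u -> v), Kuhn's algorithm."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # matched right node v -> left node u

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    size = 0
    for u in range(n):
        if augment(u, set()):
            size += 1
    return size

def n_drivers(n, edges):
    return max(1, n - max_matching(n, edges))

# Directed chain 0 -> 1 -> 2 -> 3 is fully matched: one driver suffices.
# A star 0 -> {1, 2, 3} can match only one leaf: three drivers are needed.
chain = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
```

The contrast between the chain and the star shows why network structure, not just size, sets the control cost.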

Contributors: Yuan, Zhengzhong (Author) / Zhao, Chen (Author) / Wang, Wen-Xu (Author) / Di, Zengru (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-10-24
Description

Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of “sameness” among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16–17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include the multidimensional scaling solutions for each category (up to five dimensions), stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of “sameness.”
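Full MDS implementations exist in scipy and scikit-learn; as a minimal self-contained sketch (classical Torgerson MDS rather than the nonmetric procedure typically applied to spatial-arrangement data), coordinates can be recovered from a pairwise-distance matrix by double centering:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions
    from an n x n matrix of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]      # top-k eigenpairs
    scale = np.sqrt(np.clip(vals[order], 0.0, None))
    return vecs[:, order] * scale           # n x k coordinate matrix

# Three hypothetical stimuli on a line at 0, 3, 7: a 1-D embedding
# reproduces the original pairwise distances.
pts = np.array([0.0, 3.0, 7.0])
D = np.abs(pts[:, None] - pts[None, :])
X = classical_mds(D, k=1)
D_hat = np.abs(X[:, 0][:, None] - X[:, 0][None, :])
```

The recovered coordinates are unique only up to rotation and reflection, which is why databases like this one report stress and fit measures alongside the solutions.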

Contributors: Hout, Michael C. (Author) / Goldinger, Stephen (Author) / Brady, Kyle (Author) / Department of Psychology (Contributor)
Created: 2014-11-12
Description

While expert groups often make recommendations on a range of non-controversial as well as controversial issues, little is known about how the level of expert consensus (the level of expert agreement) influences perceptions of the recommendations. This research illustrates that, for non-controversial issues, expert groups that exhibit high levels of agreement are more persuasive than expert groups that exhibit low levels of agreement. This effect is mediated by the perceived entitativity (the perceived cohesiveness or unification) of the expert group. For controversial issues, however, this effect is moderated by perceivers' implicit assumptions about the group's composition. When perceivers are given no information about a group supporting the Affordable Care Act (a highly controversial piece of U.S. legislation on which the country is divided along party lines), higher levels of agreement are less persuasive than lower levels of agreement, because participants assume there were more Democrats and fewer Republicans in the group. But when explicitly told that the group was half Republicans and half Democrats, higher levels of agreement are more persuasive.

Contributors: Votruba, Ashley (Author) / Kwan, Sau (Author) / Department of Psychology (Contributor)
Created: 2015-03-26
Description

Background: Most excess deaths that occur during extreme hot weather events do not have natural heat recorded as an underlying or contributing cause. This study aims to identify the specific individuals who died because of hot weather using only secondary data. A novel approach was developed in which the expected number of deaths was repeatedly sampled from all deaths that occurred during a hot weather event, and compared with deaths during a control period. The deaths were compared with respect to five factors known to be associated with hot weather mortality. Individuals were ranked by their presence in significant models over 100 trials of 10,000 repetitions. Those with the highest rankings were identified as probable excess deaths. Sensitivity analyses were performed on a range of model combinations. These methods were applied to a 2009 hot weather event in greater Vancouver, Canada.

Results: The excess deaths identified were sensitive to differences in model combinations, particularly between univariate and multivariate approaches. One multivariate and one univariate combination were chosen as the best models for further analyses. The individuals identified by multiple combinations suggest that marginalized populations in greater Vancouver are at higher risk of death during hot weather.

Conclusions: This study proposes novel methods for classifying specific deaths as expected or excess during a hot weather event. Further work is needed to evaluate performance of the methods in simulation studies and against clinically identified cases. If confirmed, these methods could be applied to a wide range of populations and events of interest.
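A toy sketch of the resampling idea, with entirely hypothetical individuals and a single weight standing in for the five mortality-associated factors: repeatedly draw the expected number of deaths from all event-period deaths, and rank individuals by how often they are left out of the "expected" draw:

```python
import random

def weighted_sample(items, weights, k, rng):
    """Weighted sampling without replacement (Efraimidis-Spirakis keys)."""
    keyed = sorted(items, key=lambda i: rng.random() ** (1.0 / weights[i]),
                   reverse=True)
    return set(keyed[:k])

def rank_excess(deaths, expected_weight, n_expected, trials=500, seed=1):
    """Individuals rarely drawn as 'expected' rank highest as probable excess."""
    rng = random.Random(seed)
    left_out = {d: 0 for d in deaths}
    for _ in range(trials):
        sampled = weighted_sample(deaths, expected_weight, n_expected, rng)
        for d in deaths:
            if d not in sampled:
                left_out[d] += 1
    return sorted(deaths, key=lambda d: left_out[d], reverse=True)

# Hypothetical event: 10 deaths; individuals 0-2 poorly match the
# control-period profile (low weight), so they surface as probable excess.
deaths = list(range(10))
w = {d: (0.1 if d < 3 else 1.0) for d in deaths}
top = rank_excess(deaths, w, n_expected=7)[:3]
```

The actual study compares sampled and control deaths across significant models over 100 trials of 10,000 repetitions; this sketch only conveys the ranking-by-exclusion logic.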

Created: 2016-11-15
Description

Background: Medical and public health scientists are using evolution to devise new strategies to solve major health problems. But based on a 2003 survey, medical curricula may not adequately prepare physicians to evaluate and extend these advances. This study assessed the change in coverage of evolution in North American medical schools since 2003 and identified opportunities for enriching medical education.

Methods: In 2013, curriculum deans for all North American medical schools were invited to rate curricular coverage and perceived importance of 12 core principles, the extent of anticipated controversy from adding evolution, and the usefulness of 13 teaching resources. Differences between schools were assessed by Pearson’s chi-square test, Student’s t-test, and Spearman’s correlation. Open-ended questions sought insight into perceived barriers and benefits.

Results: Despite repeated follow-up, 60 schools (39%) responded to the survey. There was no evidence of sample bias. The three evolutionary principles rated most important were antibiotic resistance, environmental mismatch, and somatic selection in cancer. While importance and coverage of principles were correlated (r = 0.76, P < 0.01), coverage (at least moderate) lagged behind importance (at least moderate) by an average of 21% (SD = 6%). Compared to 2003, a range of evolutionary principles were covered by 4 to 74% more schools. Nearly half (48%) of respondents anticipated igniting controversy at their medical school if they added evolution to their curriculum. The teaching resources ranked most useful were model test questions and answers, case studies, and model curricula for existing courses/rotations. Limited resources (faculty expertise) were cited as the major barrier to adding more evolution, but benefits included a deeper understanding and improved patient care.

Conclusion: North American medical schools have increased the evolution content in their curricula over the past decade. However, coverage is not commensurate with importance. At a few medical schools, anticipated controversy impedes teaching more evolution. Efforts to improve evolution education in medical schools should be directed toward boosting faculty expertise and crafting resources that can be easily integrated into existing curricula.
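Of the tests named in the Methods, Spearman's correlation drives the importance-coverage result (r = 0.76). scipy.stats.spearmanr is the usual tool; with no tied ratings it reduces to the Pearson correlation of the rank vectors, as in this minimal sketch (the rating values below are hypothetical):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation, assuming no tied values:
    the Pearson correlation of the two rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Perfectly monotone importance/coverage ratings correlate at exactly 1.0.
importance = [3.0, 1.0, 4.0, 2.0, 5.0]
coverage = [2.5, 0.5, 3.5, 1.5, 4.5]
rho = spearman(importance, coverage)
```

With ties, average ranks are needed, which is one reason to prefer the library routine in practice.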

Contributors: Hidaka, Brandon H. (Author) / Asghar, Anila (Author) / Aktipis, C. Athena (Author) / Nesse, Randolph (Author) / Wolpaw, Terry M. (Author) / Skursky, Nicole K. (Author) / Bennett, Katelyn J. (Author) / Beyrouty, Matthew W. (Author) / Schwartz, Mark D. (Author) / Department of Psychology (Contributor)
Created: 2015-03-08
Description

The Arctic, even more so than other parts of the world, has warmed substantially over the past few decades. Temperature and humidity influence the rate of development, survival and reproduction of pathogens and thus the incidence and prevalence of many infectious diseases. Higher temperatures may also allow infected host species to survive winters in larger numbers, increase the population size and expand their habitat range. The impact of these changes on human disease in the Arctic has not been fully evaluated. There is concern that climate change may shift the geographic and temporal distribution of a range of infectious diseases. Many infectious diseases are climate sensitive, where their emergence in a region is dependent on climate-related ecological changes. Most are zoonotic diseases, which can be spread between animals and humans by arthropod vectors, water, soil, or wild and domestic animals. Potentially climate-sensitive zoonotic pathogens of circumpolar concern include Brucella spp., Toxoplasma gondii, Trichinella spp., Clostridium botulinum, Francisella tularensis, Borrelia burgdorferi, Bacillus anthracis, Echinococcus spp., Leptospira spp., Giardia spp., Cryptosporidium spp., Coxiella burnetii, rabies virus, West Nile virus, hantaviruses, and tick-borne encephalitis viruses.

Contributors: Parkinson, Alan J. (Author) / Evengard, Birgitta (Author) / Semenza, Jan C. (Author) / Ogden, Nicholas (Author) / Borresen, Malene L. (Author) / Berner, Jim (Author) / Brubaker, Michael (Author) / Sjostedt, Anders (Author) / Evander, Magnus (Author) / Hondula, David M. (Author) / Menne, Bettina (Author) / Pshenichnaya, Natalia (Author) / Gounder, Prabhu (Author) / Larose, Tricia (Author) / Revich, Boris (Author) / Hueffer, Karsten (Author) / Albihn, Ann (Author) / College of Public Service and Community Solutions (Contributor)
Created: 2014-09-30
Description

Dynamical processes occurring on the edges in complex networks are relevant to a variety of real-world situations. Despite recent advances, a framework for edge controllability is still required for complex networks of arbitrary structure and interaction strength. Generalizing a previously introduced class of processes for edge dynamics, the switchboard dynamics, and exploiting the exact controllability theory, we develop a universal framework in which the controllability of any node is exclusively determined by its local weighted structure. This framework enables us to identify a unique set of critical nodes for control, to derive analytic formulas and articulate efficient algorithms to determine the exact upper and lower controllability bounds, and to evaluate the strong structural controllability of any given network. Applying our framework to a large number of model and real-world networks, we find that the interaction strength plays a more significant role in edge controllability than the network structure does, due to a vast range between the bounds determined mainly by the interaction strength. Moreover, transcriptional regulatory networks and electronic circuits are much more strongly structurally controllable (SSC) than other types of real-world networks, directed networks are more SSC than undirected networks, and sparse networks are typically more SSC than dense networks.
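The exact controllability theory invoked here computes the minimum number of drivers as the maximum geometric multiplicity over the eigenvalues of the coupling matrix (Yuan et al.); a minimal numerical sketch for a small undirected network, with hypothetical example graphs:

```python
import numpy as np

def exact_driver_count(A, tol=1e-8):
    """Minimum driver-node count from exact controllability theory:
    max over eigenvalues lam of A of N - rank(lam*I - A)."""
    N = A.shape[0]
    best = 1
    for lam in np.linalg.eigvals(A):
        mult = N - np.linalg.matrix_rank(lam * np.eye(N) - A, tol=tol)
        best = max(best, mult)
    return best

# Undirected star K_{1,3}: eigenvalue 0 has multiplicity 2, so 2 drivers.
A_star = np.zeros((4, 4))
A_star[0, 1:] = A_star[1:, 0] = 1.0

# Path on 3 nodes: all eigenvalues simple, so a single driver suffices.
A_path = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0]])
```

Because the count depends on eigenvalue degeneracy, link weights (interaction strengths) can change it even when the topology is fixed, consistent with the abstract's finding.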

Contributors: Pang, Shao-Peng (Author) / Wang, Wen-Xu (Author) / Hao, Fei (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-06-26
Description

Recent works revealed that the energy required to control a complex network depends on the number of driving signals and the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by the structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy (e.g. in a large number of real-world networks). Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
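The longest control chains are defined on the network's control structure; as a simplified illustration of the underlying graph computation (assuming an acyclic network, which the article does not require), the longest directed path can be found by dynamic programming over a topological order:

```python
from collections import defaultdict

def longest_path_length(n, edges):
    """Length in edges of the longest directed path in a DAG,
    via Kahn's topological sort plus dynamic programming."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = [u for u in range(n) if indeg[u] == 0]
    order = []
    while queue:
        u = queue.pop()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = [0] * n            # longest path ending at each node
    for u in order:
        for v in adj[u]:
            dist[v] = max(dist[v], dist[u] + 1)
    return max(dist)

# Hypothetical network: the chain 0-1-2-3 dominates the shortcut 0-3.
longest = longest_path_length(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
```

In the article's terms, long chains like this dominate the control energy, so placing additional drivers along them is what drastically reduces the energy.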

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
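The compressive-sensing step amounts to recovering a sparse vector from far fewer measurements than unknowns; a minimal sketch using iterative soft thresholding (ISTA) for the LASSO, with a hypothetical random sensing matrix standing in for real time-series data:

```python
import numpy as np

def ista(A, b, lam=0.005, iters=10000):
    """Iterative soft-thresholding for the LASSO:
    min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((25, 40))              # 25 measurements, 40 unknowns
x_true = np.zeros(40)
x_true[[4, 21]] = [2.0, -1.5]                  # sparse ground truth
x_hat = ista(A, A @ x_true)
```

The same principle lets the article's framework reconstruct a sparse connection pattern from limited delayed signals before triangulating node positions.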

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

Locating sources of diffusion and spreading from minimum data is a significant problem in network science with great applied value to society. However, a general theoretical framework dealing with optimal source localization is lacking. Combining the controllability theory for complex networks and compressive sensing, we develop a framework with high efficiency and robustness for optimal source localization in arbitrary weighted networks with arbitrary distribution of sources. We offer a minimum output analysis to quantify the source locatability through a minimal number of messenger nodes that produce sufficient measurements to fully locate the sources. When the minimum messenger nodes are discerned, the problem of optimal source localization becomes one of sparse signal reconstruction, which can be solved using compressive sensing. Application of our framework to model and empirical networks demonstrates that sources in homogeneous and denser networks are more readily located. A surprising finding is that, for a connected undirected network with random link weights and weak noise, a single messenger node is sufficient for locating any number of sources. The framework deepens our understanding of the network source localization problem and offers efficient tools with broad applications.
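Once the messenger nodes are fixed, the sparse-reconstruction step can be posed as basis pursuit, which is solvable as a linear program; a minimal sketch with a hypothetical random measurement matrix (not the article's locatability analysis):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to Ax = b, as a linear program in (x, t):
    minimize sum(t) with the constraints -t <= x <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),      #  x - t <= 0
                      np.hstack([-I, -I])])    # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(2)
A = rng.standard_normal((14, 24))              # 14 measurements, 24 unknowns
x_true = np.zeros(24)
x_true[[5, 18]] = [1.0, -2.0]                  # two hypothetical sources
x_hat = basis_pursuit(A, A @ x_true)
```

For noiseless measurements and a sufficiently sparse source vector, the L1 minimizer coincides with the true sources, which is what makes a small set of messenger nodes sufficient.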

Contributors: Hu, Zhao-Long (Author) / Han, Xiao (Author) / Lai, Ying-Cheng (Author) / Wang, Wen-Xu (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-12