This collection brings together faculty and staff collections, organized alphabetically by surname.

Description

Two classes of scaling behaviours, namely the super-linear scaling of links or activities and the sub-linear scaling of area, diversity, or time elapsed with respect to size, have been found to prevail in the growth of complex networked systems. Despite some pioneering modelling approaches proposed for specific systems, whether there exist general mechanisms that account for the origins of such scaling behaviours in different contexts, especially in socioeconomic systems, remains an open question. We address this problem by introducing a geometric network model without free parameters, finding that both super-linear and sub-linear scaling behaviours can be reproduced simultaneously and that the scaling exponents are determined exclusively by the dimension of the Euclidean space in which the network is embedded. We implement some realistic extensions to the basic model to offer more accurate predictions of the various scaling behaviours of cities and of the Zipf distribution reported in the literature and observed in our empirical studies. All of the empirical results can be precisely recovered by our model, with analytical predictions of all major properties. By virtue of these general findings concerning scaling behaviour, our models, built on simple mechanisms, offer new insights into the evolution and development of complex networked systems.
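
The abstract does not reproduce the model's equations, so the sketch below only illustrates the two scaling classes it describes: it generates synthetic size-versus-quantity data following assumed power laws (a super-linear exponent above 1 for links, a sub-linear exponent below 1 for area) and recovers the exponents with a log-log least-squares fit. All names and exponent values are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def fit_scaling_exponent(sizes, quantity):
    """Estimate beta in quantity ~ sizes**beta via a log-log least-squares fit."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(quantity), 1)
    return slope

rng = np.random.default_rng(0)
sizes = np.logspace(2, 5, 40)   # system sizes (e.g. population), illustrative only

# Synthetic data: super-linear links (beta = 1.2) and sub-linear area (beta = 5/6),
# each with multiplicative noise; these exponents are assumptions for illustration.
links = sizes**1.2 * rng.lognormal(sigma=0.05, size=sizes.size)
area = sizes**(5 / 6) * rng.lognormal(sigma=0.05, size=sizes.size)

print("super-linear exponent ~", round(fit_scaling_exponent(sizes, links), 3))
print("sub-linear exponent  ~", round(fit_scaling_exponent(sizes, area), 3))
```

In the paper's model the exponents are tied to the embedding dimension; a log-log fit of this kind is how such exponents are typically estimated from empirical or simulated data.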

Contributors: Zhang, Jiang (Author) / Li, Xintong (Author) / Wang, Xinran (Author) / Wang, Wen-Xu (Author) / Wu, Lingfei (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-04-29
Description

This paper presents a Bayesian framework for evaluative classification. Current education policy debates center on arguments about whether and how to use student test score data in school and personnel evaluation. Proponents of such use argue that refusing to use data violates both the public’s need to hold schools accountable when they use taxpayer dollars and students’ right to educational opportunities. Opponents of formulaic use of test-score data argue that most standardized test data is susceptible to fatal technical flaws, provides only a partial picture of student achievement, and leads to behavior that corrupts the measures.

A Bayesian perspective on summative ordinal classification is a possible framework for combining quantitative outcome data for students with the qualitative types of evaluation that critics of high-stakes testing advocate. This paper describes the key characteristics of a Bayesian perspective on classification, describes a method to translate a naïve Bayesian classifier into a point-based system for evaluation, and draws conclusions from the comparison about the construction of algorithmic (including point-based) systems that could capture the political and practical benefits of a Bayesian approach. The most important practical conclusion is that point-based systems with fixed components and weights cannot capture the dynamic and political benefits of a reciprocal relationship between professional judgment and quantitative student outcome data.
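
As one way to make the translation concrete, the sketch below shows the standard re-expression of a two-class naïve Bayes classifier as a point system: each binary indicator contributes fixed points equal to its scaled, rounded log-likelihood ratio (one value when present, another when absent), and the point total is compared with a threshold set by the prior odds. The indicator names, probabilities, and scaling factor are hypothetical and are not taken from the paper.

```python
import math

# Hypothetical binary indicators with assumed conditional probabilities
# P(indicator present | "effective") and P(indicator present | "not effective").
indicators = {
    "met_growth_target":   (0.80, 0.35),
    "positive_site_visit": (0.70, 0.40),
    "low_attrition":       (0.60, 0.45),
}

SCALE = 10  # points per unit of log-likelihood ratio (an arbitrary rounding choice)

def points_table(indicators, scale=SCALE):
    """Fixed points awarded when each indicator is present or absent,
    scaled and rounded from the naive-Bayes log-likelihood ratios."""
    return {
        name: (round(scale * math.log(p1 / p0)),
               round(scale * math.log((1 - p1) / (1 - p0))))
        for name, (p1, p0) in indicators.items()
    }

def classify(observed, table, prior_odds=1.0, scale=SCALE):
    """Sum the points implied by each indicator's presence or absence and
    compare the total with the threshold implied by the prior odds."""
    total = sum(present if name in observed else absent
                for name, (present, absent) in table.items())
    threshold = round(-scale * math.log(prior_odds))
    return ("effective" if total >= threshold else "not effective"), total

table = points_table(indicators)
print(table)
print(classify({"met_growth_target", "low_attrition"}, table))
```

The rounding and fixed weights are what freeze such a system in place: once the points are set, it cannot respond to new professional judgment or changing priors without being rebuilt, which echoes the paper's conclusion about fixed point-based systems.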

Contributors: Dorn, Sherman (Author) / Mary Lou Fulton Teachers College (Contributor)
Created: 2009
Description

This is a brief text intended for use in undergraduate school-and-society classes. Your class may also be titled “Social foundations of education.” “Social foundations of education” is an interdisciplinary field that includes both humanities and social-science perspectives on schooling. It thus includes study of the philosophy and history of education as well as sociological, economic, anthropological, and political perspectives on schooling.

The core of most social foundations classes lies in the relationship between formal schooling and broader society. This emphasis means that while some parts of psychology may be related to the core issues of social foundations classes—primarily social psychology—the questions asked within a social-foundations class are different from the questions raised in child development, educational psychology, and most teaching-methods classes. For example, after finishing the first chapter of this text, you should be able to answer the question, “Why does the federal government pay public schools to feed poor students at breakfast and lunch?” Though there is some psychology research tying nutrition to behavior and learning, the policy is based on much broader expectations of schools. In this case, “Children learn better if they are well-fed” is both based on research and an incomplete answer.

Contributors: Dorn, Sherman (Author) / Mary Lou Fulton Teachers College (Contributor)
Created: 2013
Description

The current debate over graduation rate calculations and results has glossed over the relationship between student migration and the accuracy of the various graduation rates proposed over the past five years. Three general grade-based graduation rates have been proposed recently, and each has a parallel version that includes an adjustment for migration, whether international, internal to the U.S., or between different school sectors. All of the adjustment factors have a similar form, allowing simulation of estimates from real data under different assumed unmeasured net migration rates. In addition, a new age-based graduation rate, based on mathematical demography, allows estimates to be simulated on a parallel basis using data from Virginia's public schools.
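
The specific rates and adjustment factors compared in the paper are not reproduced in this abstract; the sketch below uses a generic cohort-style rate with a single multiplicative net-migration adjustment to show how an estimate can be simulated under different assumed unmeasured migration rates. The numbers and the functional form are assumptions for illustration only.

```python
def adjusted_grad_rate(graduates, entering_cohort, net_migration_rate):
    """Generic cohort graduation rate with a multiplicative adjustment for
    unmeasured net migration (positive values mean net in-migration)."""
    adjusted_cohort = entering_cohort * (1.0 + net_migration_rate)
    return graduates / adjusted_cohort

# Illustrative numbers only: 10,000 entering ninth graders, 7,400 on-time graduates.
graduates, cohort = 7_400, 10_000

# Simulate how the estimate moves under different assumed net migration rates.
for m in (-0.05, -0.02, 0.0, 0.02, 0.05):
    rate = adjusted_grad_rate(graduates, cohort, m)
    print(f"assumed net migration {m:+.0%}: graduation rate {rate:.1%}")
```

Under these illustrative numbers, even modest unmeasured migration shifts the estimate by a few percentage points, which is the kind of sensitivity the paper's simulations examine.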

Both the direct analysis and simulation demonstrate that graduation rates can only be useful with accurate information about student migration. A discussion of Florida's experiences with longitudinal cohort graduation rates highlights some of the difficulties with the current status of the oldest state databases and the need for both technical confidence and definitional clarity. Meeting the No Child Left Behind mandates for school-level graduation rates requires confirmation of transfers and an audit of any state system for accuracy, and basing graduation rates on age would be a significant improvement over rates calculated using grade-based data.

Contributors: Dorn, Sherman (Author) / Mary Lou Fulton Teachers College (Contributor)
Created: 2009
Description

Analysis of newly released data from the Florida Department of Education suggests that commonly used proxies for high school graduation are generally weak predictors of the new federal rate.

Contributors: Dorn, Sherman (Author) / Mary Lou Fulton Teachers College (Contributor)
Created: 2012
Description

The spread of academic testing for accountability purposes in multiple countries has obscured at least two historical purposes of academic testing: community ritual and management of the social structure. Testing for accountability is very different from the purpose of the academic challenges one can identify in community “examinations” in 19th-century North America, or of the exams controlling access to the civil service in Imperial China. Rather than testing for ritual or access to mobility, the modern uses of testing are much closer to the state-building project of a tax census, such as the Domesday Book of medieval Britain after the Norman Invasion, the social engineering projects described in James Scott's Seeing Like a State (1998), or the “mapping the world” project that David Nye described in America as Second Creation (2004). This paper will explore both the instrumental and cultural differences among testing as ritual, testing as mobility control, and testing as state-building.

Contributors: Dorn, Sherman (Author) / Mary Lou Fulton Teachers College (Contributor)
Created: 2014-12-08
Description

One way to view ‘equitable pedagogy’ is through an opportunity to learn (OTL) lens, meaning that regardless of race, class, or culture, a student has access to rigorous and meaningful content, as well as the appropriate resources and instruction necessary to learn and demonstrate understanding of that content. Assessment holds a unique position in the classroom in that it can both uncover whether inequitable conditions exist (i.e., performance gaps, denied OTL) and provide an OTL by mediating communication between teacher and students about learning progress and what is important to learn. Nevertheless, individuals entering teacher education programs often hold deficit views toward marginalized students, such as Language Minorities (LMs), believe that assessment serves strictly to evaluate learning, and do not consider how language and culture influence student thinking; such views supplant assessment’s role in supporting an equitable pedagogy for LMs. Through surveys, interviews, program artifacts, and classroom observation, I report on a case study of one pre-service physics teacher, Dean, to depict how his expertise at assessing science evolved throughout his yearlong teacher education program in terms of (a) becoming more knowledgeable about the role of language and (b) developing a belief in incorporating ‘discourse’ while assessing science. Within the case study, I analyze one particular episode from Dean’s teaching practicum to highlight the remaining challenges pre-service teachers face in integrating science and language in classroom assessment, namely interpreting students’ use of language along with their understanding of core science ideas. The findings underscore the need for connecting language and equity issues to content-area assessment in teacher preparation.

Contributors: Lyon, Edward (Author) / Mary Lou Fulton Teachers College (Contributor)
Created: 2013-07-19
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, owing to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach that enables reconstruction of the full topology of the underlying geospatial network and, more importantly, accurate estimates of the time delays. A standard triangulation algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. Because a geospatial network has the feature that a node tends to connect with geographically nearby nodes, the localized region that contains the hidden node can be identified.
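
The compressive-sensing reconstruction itself is not spelled out in the abstract, so the sketch below illustrates only the final localization step it mentions: converting estimated delays to distances (given an assumed propagation speed) and triangulating a node's position from reference nodes with known coordinates via a linearized least-squares solve. The coordinates, speed, and noise level are assumptions for illustration.

```python
import numpy as np

def triangulate(anchors, distances):
    """Least-squares multilateration: estimate a 2-D position from distances to
    reference nodes with known coordinates, linearized about the first anchor."""
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - distances[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Illustrative setup: four reference nodes, a node to locate at (3, 4), and delays
# converted to distances with an assumed propagation speed of one distance unit per
# delay unit, plus a little estimation noise.
rng = np.random.default_rng(1)
refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
speed = 1.0
delays = np.linalg.norm(refs - true_pos, axis=1) / speed
dists = speed * (delays + rng.normal(scale=0.01, size=delays.size))

print("estimated position:", triangulate(refs, dists))
```

In the paper the delays come from the compressive-sensing stage; here they are synthesized directly from the geometry so the localization step can be run on its own.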

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

Recent work has revealed that the energy required to control a complex network depends on the number of driving signals and that the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain this scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy (e.g. in a large number of real-world networks). Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
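
The LCC theory is not reproduced in the abstract; as a minimal illustration of why long control chains are expensive, the sketch below computes the minimum control energy for a discrete-time directed chain driven at one end, using the finite-horizon controllability Gramian. The system form, chain weight, and horizon are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def min_control_energy(A, B, x_target, horizon):
    """Minimum energy x_f^T W^{-1} x_f to steer x(t+1) = A x(t) + B u(t)
    from the origin to x_target in `horizon` steps, where W is the
    finite-horizon controllability Gramian."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return float(x_target @ np.linalg.solve(W, x_target))

def chain_system(n, weight=0.8):
    """Directed chain 0 -> 1 -> ... -> n-1 with a single driver at node 0."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i + 1, i] = weight
    B = np.zeros((n, 1))
    B[0, 0] = 1.0
    return A, B

# The energy needed to move the state of the *last* node grows rapidly with the
# length of the chain between it and the driver, illustrating why the longest
# control chains dominate the control energy.
for n in (3, 5, 8, 12):
    A, B = chain_system(n)
    x_f = np.zeros(n)
    x_f[-1] = 1.0
    print(f"chain length {n:2d}: energy ~ {min_control_energy(A, B, x_f, horizon=n):.3g}")
```

In this toy example, shortening the chain between driver and target (for instance, by adding another driver partway along) cuts the required energy sharply; the paper's actual strategy rests on its LCC analysis of real networks.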

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20