This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 29
Description

Cities in the Global South face rapid urbanization challenges and often suffer an acute lack of infrastructure and governance capacities. The Smart Cities Mission, launched in India in 2015, aims to offer a novel approach for urban renewal of 100 cities following an area-based development approach, with particular emphasis on the use of ICT and digital technologies. This article presents a critical review of the design and implementation framework of this new urban renewal program across selected case-study cities. The article examines the claims of the so-called "smart cities" against actual urban transformation on the ground and evaluates how "inclusive" and "sustainable" these developments are. We quantify the scale and coverage of the smart city urban renewal projects in the cities to highlight who the program includes and excludes. The article also presents a statistical analysis of the sectoral focus and budgetary allocations of the projects under the Smart Cities Mission, revealing an inherent bias in these smart city initiatives in terms of which types of development they promote and which they ignore. The findings indicate that a predominant emphasis on digital urban renewal of selected precincts and enclaves, branded as "smart cities," deepens social polarization and gentrification. The article offers crucial urban planning lessons for designing ICT-driven urban renewal projects while addressing critical questions around inclusion and sustainability in smart city ventures.

Contributors: Praharaj, Sarbeswar (Author)
Created: 2021-05-07
Description

Attitudes and habits are extremely resistant to change, but a disruption of the magnitude of the COVID-19 pandemic has the potential to bring long-term, massive societal changes. During the pandemic, people are being compelled to experience new ways of interacting, working, learning, shopping, traveling, and eating meals. Going forward, a critical question is whether these experiences will result in changed behaviors and preferences in the long term. This paper presents initial findings on the likelihood of long-term changes in telework, daily travel, restaurant patronage, and air travel based on survey data collected from adults in the United States in Spring 2020. These data suggest that a sizable fraction of the increase in telework and decreases in both business air travel and restaurant patronage are likely here to stay. As for daily travel modes, public transit may not fully recover its pre-pandemic ridership levels, but many of our respondents are planning to bike and walk more than they used to. These data reflect the responses of a sample that is higher income and more highly educated than the US population. The response of these particular groups to the COVID-19 pandemic is perhaps especially important to understand, however, because their consumption patterns give them a large influence on many sectors of the economy.

Created: 2020-09-03
Description

The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone (O3) concentrations due to urbanization within cities in arid/semi-arid environments. First, a method based on shape-preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate that the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted with the recommended MCI approach to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments), and (2) the urban land cover is replaced with the region's native shrubland. Impacts due to the presence of the built environment on O3 are highly heterogeneous across the metropolitan area. Increased near-surface O3 due to urbanization of 10-20 ppb is predominantly a nighttime phenomenon, while simulated impacts during daytime are negligible. Urbanization narrows the daily O3 range (by virtue of increasing nighttime minima), an impact largely due to the region's urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas.
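As a sketch of how shape-preserving monotonic cubic interpolation can downscale a coarse emissions field without introducing spurious overshoots, here is a minimal Fritsch-Carlson (PCHIP-style) interpolant along one transect; the emission values and the simplified one-sided end conditions are illustrative assumptions, not NEI05 data or the paper's actual 2-D implementation.

```python
import numpy as np

def pchip_slopes(x, y):
    """Fritsch-Carlson slopes for a shape-preserving cubic interpolant
    (simplified one-sided end conditions)."""
    h = np.diff(x)
    d = np.diff(y) / h                          # secant slopes
    m = np.zeros_like(y)
    for i in range(1, len(y) - 1):
        if d[i - 1] * d[i] > 0:                 # zero slope at local extrema
            w1 = 2 * h[i] + h[i - 1]
            w2 = h[i] + 2 * h[i - 1]
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
    m[0], m[-1] = d[0], d[-1]
    return m

def pchip_eval(x, y, xq):
    """Evaluate the monotone cubic Hermite interpolant at query points xq."""
    m = pchip_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    return ((1 + 2 * t) * (1 - t) ** 2 * y[i]
            + t * (1 - t) ** 2 * h * m[i]
            + t ** 2 * (3 - 2 * t) * y[i + 1]
            + t ** 2 * (t - 1) * h * m[i + 1])

# Illustrative 4 km transect of emission rates (toy values, not NEI05)
x4 = np.arange(0.0, 20.0, 4.0)                  # coarse 4 km grid
e4 = np.array([1.0, 1.2, 3.5, 3.6, 3.7])
x1 = np.arange(0.0, 16.01, 1.0)                 # target 1 km grid
e1 = pchip_eval(x4, e4, x1)
```

Because the interior slopes are a weighted harmonic mean of the adjacent secants, the downscaled values remain monotone wherever the coarse data are, which is the "shape-preserving" property that motivates MCI over default linear or non-monotone spline interpolation.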

Contributors: Li, Jialun (Author) / Georgescu, Matei (Author) / Hyde, Peter (Author) / Mahalov, Alex (Author) / Moustaoui, Mohamed (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2014-11-01
Description

Forecasts of noise pollution from a highway line-segment noise source are obtained from a sound propagation model utilizing effective sound speed profiles derived from a Numerical Weather Prediction (NWP) limited-area forecast with 1 km horizontal resolution and near-ground vertical resolution finer than 20 m. Methods for temporal nesting, along with horizontal and vertical spatial nesting, are demonstrated within the NWP model for maintaining forecast feasibility. It is shown that vertical nesting can improve the prediction of finer structures in near-ground temperature and velocity profiles, such as morning temperature inversions and low-level jet-like features. Accurate representation of these features is shown to be important for modeling sound refraction phenomena and for enabling accurate noise assessment. Comparisons are made using the parabolic equation model for predictions with profiles derived from NWP simulations and from field experiment observations during mornings on November 7 and 8, 2006 in Phoenix, Arizona. The challenges faced in simulating accurate meteorological profiles at high resolution for sound propagation applications are highlighted and areas for possible improvement are discussed.
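The effective sound speed profiles mentioned above combine the thermodynamic speed of sound with the wind component along the propagation path. A minimal sketch of that standard approximation follows; the function name, the vector-heading convention for directions, and the dry-air constants are my own illustrative choices, not taken from the paper.

```python
import math

def effective_sound_speed(temp_kelvin, wind_speed, wind_dir_deg, prop_dir_deg):
    """Effective sound speed (m/s): adiabatic sound speed in dry air plus
    the wind component along the propagation direction. Directions are
    vector headings in degrees (an assumed convention for this sketch)."""
    c = math.sqrt(1.4 * 287.05 * temp_kelvin)   # sqrt(gamma * R_dry * T)
    along_path = wind_speed * math.cos(math.radians(wind_dir_deg - prop_dir_deg))
    return c + along_path
```

Evaluating this at each NWP model level for temperature and wind yields a vertical profile of effective sound speed; downward-refracting conditions (speed increasing with height, as in a morning temperature inversion or with a low-level jet aligned with propagation) are what make near-ground profile detail matter for noise prediction.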

Contributors: Shaffer, Stephen (Author) / Fernando, H. J. S. (Author) / Ovenden, N. C. (Author) / Moustaoui, Mohamed (Author) / Mahalov, Alex (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-05-01
Description

Physical mechanisms of incongruency between observations and Weather Research and Forecasting (WRF) Model predictions are examined. The evaluation is constrained by (i) parameterizations of model physics, (ii) parameterizations of input data, (iii) model resolution, and (iv) flux observation resolution. Observations from a new 22.1-m flux tower situated within a residential neighborhood in Phoenix, Arizona, are utilized to evaluate the ability of the urbanized WRF to resolve finescale surface energy balance (SEB) when using the urban classes derived from the 30-m-resolution National Land Cover Database. Modeled SEB response to a large seasonal variation of net radiation forcing was tested during synoptically quiescent periods of high pressure in winter 2011 and premonsoon summer 2012. Results are presented from simulations employing five nested domains down to 333-m horizontal resolution. A comparative analysis of model cases testing parameterization of physical processes was done using four configurations of urban parameterization for the bulk urban scheme versus three representations with the Urban Canopy Model (UCM) scheme, and also for two types of planetary boundary layer parameterization: the local Mellor–Yamada–Janjić scheme and the nonlocal Yonsei University scheme. Diurnal variation in SEB constituent fluxes is examined in relation to surface-layer stability and modeled diagnostic variables. Improvement is found when adapting UCM for Phoenix with reduced errors in the SEB components. Finer model resolution is seen to have insignificant (<1 standard deviation) influence on mean absolute percent difference of 30-min diurnal mean SEB terms.
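The mean absolute percent difference metric used to compare modeled and observed SEB terms can be sketched as below; the exact averaging windows and term definitions in the paper may differ, so treat this as a generic illustration rather than the authors' formula.

```python
def mapd(model, obs):
    """Mean absolute percent difference between modeled and observed
    values (e.g., 30-min diurnal mean SEB fluxes in W m^-2)."""
    if len(model) != len(obs) or not obs:
        raise ValueError("model and obs must be equal-length, non-empty")
    return 100.0 * sum(abs(m - o) / abs(o) for m, o in zip(model, obs)) / len(obs)

# Toy comparison: modeled vs observed sensible heat flux at two times
example = mapd([110.0, 90.0], [100.0, 100.0])   # each time is 10% off
```

A percent-based metric like this lets fluxes of very different magnitudes (e.g., daytime sensible heat versus nighttime storage) be compared on one scale, which is presumably why resolution effects could be summarized as "<1 standard deviation" differences.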

Contributors: Shaffer, Stephen (Author) / Chow, Winston, 1951- (Author) / Georgescu, Matei (Author) / Hyde, Peter (Author) / Jenerette, G. D. (Author) / Mahalov, Alex (Author) / Moustaoui, Mohamed (Author) / Ruddell, Benjamin (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-06-11
Description

Over the past couple of decades, quality has been an area of increased focus. Multiple models and approaches have been proposed to measure quality in the construction industry. This paper focuses on determining the quality of one type of roofing system used in the construction industry: sprayed polyurethane foam (SPF) roofs. Thirty-seven urethane-coated SPF roofs installed in 2005/2006 were visually inspected three times, at four, six, and seven years after installation, to measure the percentage of blisters and repairs. A repair criterion was established at the six-year mark, and based on the data, vulnerable roofs were reported to contractors. Furthermore, the study examined whether four possible time-of-installation factors (contractor, demographics, season, and difficulty, measured by the number of penetrations and roof size in square feet) affected roof quality. Demographics and difficulty did not affect the quality of the roofs, whereas the contractor and the season in which the roof was installed did.

Contributors: Gajjar, Dhaval (Author) / Kashiwagi, Dean (Author) / Sullivan, Kenneth (Author) / Kashiwagi, Jacob (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-04-01
Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and address challenges inherent in analyzing immunosignatures. Some of these methods include exploratory and confirmatory factor analyses, classical significance testing, structural equation and mixture modeling.

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for screening and presymptomatic screening of disease. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for screening and presymptomatic screening of disease as well as antibody discovery.

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on that and all subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 - 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustment for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analyses that rely on these data. Hence, it is crucial to effectively correct for signal saturation.

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the lost information is not tremendous. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.

Conclusions: The presented method adjusts for signal saturation at the segmentation stage that identifies a pixel as part of the foreground, background or other. The cluster membership of a pixel can be altered versus treating saturated values as truly observed. Thus, the resulting spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already segmented data. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.
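To illustrate the censoring idea in a simplified one-component setting (not the authors' full mixture-based segmentation model), the sketch below fits a right-censored normal by EM: pixel values recorded at the scanner ceiling are treated as censored rather than as truly observed, which corrects the downward bias of the naive average. All numbers here are toy values.

```python
import math, random

def censored_normal_fit(values, c, iters=200):
    """EM estimate of (mu, sigma) for right-censored normal data; any
    observation recorded at the threshold c is treated as censored."""
    n = len(values)
    obs = [v for v in values if v < c]
    n_cens = n - len(obs)
    mu = sum(values) / n                                       # naive start
    sigma = max((sum((v - mu) ** 2 for v in values) / n) ** 0.5, 1e-6)
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    s1, s2 = sum(obs), sum(v * v for v in obs)
    for _ in range(iters):
        z = (c - mu) / sigma
        lam = phi(z) / max(1 - Phi(z), 1e-12)                  # inverse Mills ratio
        e1 = mu + sigma * lam                                  # E[X | X >= c]
        e2 = mu * mu + sigma * sigma + sigma * lam * (c + mu)  # E[X^2 | X >= c]
        mu = (s1 + n_cens * e1) / n                            # M-step
        sigma = max((s2 + n_cens * e2) / n - mu * mu, 1e-12) ** 0.5
    return mu, sigma

# Simulated pixel intensities saturating at a scanner ceiling (toy values)
random.seed(1)
true_mu, true_sigma, ceiling = 100.0, 15.0, 110.0
pixels = [min(random.gauss(true_mu, true_sigma), ceiling) for _ in range(5000)]
naive_mean = sum(pixels) / len(pixels)
mu_hat, sigma_hat = censored_normal_fit(pixels, ceiling)
```

Because roughly a quarter of these toy pixels hit the ceiling, the naive mean is biased low; the censored fit recovers an estimate near the true mean, which is the same intuition behind adding a censored component to the mixture model at the segmentation stage.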

Contributors: Yang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-11-30
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifying test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.

Conclusions: ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.
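As an illustration of the kind of classifier the study found most useful, here is a minimal from-scratch Gaussian Naïve Bayes on synthetic two-class "intensity" data; the synthetic data, class separation, and variance floor are illustrative assumptions, not the authors' datasets or parameter choices.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: each feature is modeled as an
    independent per-class normal distribution (the 'naive' assumption)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum over features of log N(x_f; mu_cf, var_cf)
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

# Synthetic "array intensities": two classes with shifted feature means
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 20)),
               rng.normal(1.0, 1.0, size=(50, 20))])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianNB().fit(X, y)
acc = np.mean(clf.predict(X) == y)
```

The per-feature independence assumption that hurts Naïve Bayes on co-regulated gene expression data is less damaging for many-to-many peptide binding patterns, which is consistent with the robustness the authors report.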

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21
Description

Introduction: The ketogenic diet (KD) is a high-fat, low-carbohydrate diet that alters metabolism by increasing the level of ketone bodies in the blood. KetoCal® (KC) is a nutritionally complete, commercially available 4:1 (fat : carbohydrate + protein) ketogenic formula that is an effective non-pharmacologic treatment for the management of refractory pediatric epilepsy. Diet-induced ketosis causes changes to brain homeostasis that have potential for the treatment of other neurological diseases such as malignant gliomas.

Methods: We used an intracranial bioluminescent mouse model of malignant glioma. Following implantation, animals were maintained on a standard diet (SD) or KC. The mice received 2×4 Gy of whole-brain radiation, and tumor growth was followed by in vivo imaging.

Results: Animals fed KC had elevated levels of β-hydroxybutyrate (p = 0.0173) and an increased median survival of approximately 5 days relative to animals maintained on SD. The effects of KC plus radiation treatment were more than additive, and in 9 of 11 irradiated animals maintained on KC the bioluminescent signal from the tumor cells diminished below the level of detection (p<0.0001). Animals were switched to SD 101 days after implantation and no signs of tumor recurrence were seen for over 200 days.

Conclusions: KC significantly enhances the anti-tumor effect of radiation. This suggests that cellular metabolic alterations induced through KC may be useful as an adjuvant to the current standard of care for the treatment of human malignant gliomas.

Contributors: Abdelwahab, Mohammed G. (Author) / Fenton, Kathryn E. (Author) / Preul, Mark C. (Author) / Rho, Jong M. (Author) / Lynch, Andrew (Author) / Stafford, Phillip (Author) / Scheck, Adrienne C. (Author) / Biodesign Institute (Contributor)
Created: 2012-05-01