Matching Items (581)
Description
Recent studies indicate that top-performing companies have higher-performing work environments than average companies. They receive higher scores for worker satisfaction with their overall physical work environment as well as higher effectiveness ratings for their workspaces (Gensler, 2008; Harter et al., 2003). While these studies indicate a relationship between effective office design and satisfaction, they have not explored which specific space types may contribute to workers' overall satisfaction with their physical work environment. Therefore, the purpose of this study is to explore the relationship between workers' overall satisfaction with their physical work environments and their perception of the effectiveness of spaces designed for Conceptual Age work, including learning, focusing, collaborating, and socializing tasks. This research is designed to identify which workspace types are related to workers' satisfaction with their overall work environment and which are perceived to be most and least effective. To accomplish this, two primary and four secondary research questions were developed for this study. The first primary question considers workers' overall satisfaction with their physical work environments (offices, workstations, hallways, common areas, reception, waiting areas, etc.) as related to the effective use of work mode workspaces (learning, focusing, collaborating, socializing). The second primary research question was developed to identify which of the four work mode space types had the greatest and least relationship to workers' satisfaction with the overall physical work environment. Secondary research questions were developed to address workers' perceptions of the effectiveness of each space type. This research project used data from a previous study collected from 2007 to 2012. Responses were from all staff levels of US office-based workers and resulted in a blind sample of approximately 48,000 respondents.
The data for this study were developed from SPSS data reports that included descriptive data and Pearson correlations. Findings were developed from those statistics using the coefficient of determination.
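The correlation analysis described above can be sketched in a few lines of Python: compute a Pearson r between two rating series and square it to obtain the coefficient of determination. The ratings below are hypothetical placeholders, not values from the study.

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical overall-satisfaction vs. collaboration-space effectiveness ratings
overall = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5]
collab = [3.0, 4.3, 2.6, 4.4, 4.0, 2.7]

r = pearson_r(overall, collab)
r_squared = r ** 2  # coefficient of determination: share of variance in common
```

Squaring r converts the correlation into the proportion of variance in overall satisfaction associated with the space-type rating, which is the interpretation used in the findings.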
Contributors: Harmon-Vaughan, Elizabeth (Author) / Kroelinger, Michael D. (Thesis advisor) / Bernardi, Jose (Committee member) / Ozel, Filiz (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Extreme hot-weather events have become life-threatening natural phenomena in many cities around the world, and the health impacts of excessive heat are expected to increase with climate change (Huang et al. 2011; Knowlton et al. 2007; Meehl and Tebaldi 2004; Patz 2005). Heat waves will likely have the worst health impacts in urban areas, where large numbers of vulnerable people reside and where local-scale urban heat island (UHI) effects retard and reduce nighttime cooling. This dissertation presents three empirical case studies that were conducted to advance our understanding of human vulnerability to heat in coupled human-natural systems. Using vulnerability theory as a framework, I analyzed how various social and environmental components of a system interact to exacerbate or mitigate heat impacts on human health, with the goal of contributing to the conceptualization of human vulnerability to heat. The studies: 1) compared the relationship between temperature and health outcomes in Chicago and Phoenix; 2) compared a map derived from a theoretical generic index of vulnerability to heat with a map derived from actual heat-related hospitalizations in Phoenix; and 3) used geospatial information on health data at two areal units to identify the hot spots for two heat health outcomes in Phoenix. The results show a 10-degree Celsius difference in the threshold temperatures at which heat-stress calls in Phoenix and Chicago are likely to increase drastically, and that Chicago is likely to be more sensitive to climate change than Phoenix. I also found that heat-vulnerability indices are sensitive to scale, measurement, and context, and that cities will need to incorporate place-based factors to increase the usefulness of vulnerability indices and mapping to decision making. Finally, I found that identification of geographical hot spots of heat-related illness depends on the type of data used, scale of measurement, and normalization procedures.
I recommend using multiple datasets and different approaches to spatial analysis to overcome this limitation and help decision makers develop effective intervention strategies.
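The threshold temperature at which heat-stress calls increase drastically can be illustrated with a minimal change-point sketch, assuming a simple two-segment (flat below, flat above) mean model. The temperatures and call counts below are invented for illustration, not the study's data.

```python
def find_threshold(temps, calls, candidates):
    """Scan candidate thresholds; keep the one whose two-segment mean model
    (one mean below the threshold, another above) yields the lowest SSE."""
    best_t, best_sse = None, float("inf")
    for t in candidates:
        below = [c for x, c in zip(temps, calls) if x < t]
        above = [c for x, c in zip(temps, calls) if x >= t]
        if not below or not above:
            continue  # threshold must split the data into two segments
        mb = sum(below) / len(below)
        ma = sum(above) / len(above)
        sse = (sum((c - mb) ** 2 for c in below)
               + sum((c - ma) ** 2 for c in above))
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

# Hypothetical daily maximum temperatures (deg C) and heat-stress call counts
temps = [28, 30, 32, 34, 36, 38, 40, 42, 44]
calls = [2, 3, 2, 3, 2, 15, 18, 22, 25]
threshold = find_threshold(temps, calls, range(29, 45))
```

Fitting the same model to two cities' call records would reveal the kind of between-city threshold gap reported above.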
Contributors: Chuang, Wen-Ching (Author) / Gober, Patricia (Thesis advisor) / Boone, Christopher (Committee member) / Guhathakurta, Subhrajit (Committee member) / Ruddell, Darren (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Water is the defining issue in determining the development and growth of human populations in the Southwest. The cities of Las Vegas, Phoenix, Tucson, Albuquerque, and El Paso have experienced rapid and exponential growth over the past 50 years. The outlook for having access to sustainable sources of water to support this growth is not promising due to water demand and supply deficits. Regional water projects have harnessed the Colorado and Rio Grande rivers to maximize the utility of the water for human consumption, and environmental laws have been adopted to regulate the beneficial use of this water, but it still is not enough to create a sustainable future for rapidly growing Southwest cities. Future growth in these cities will depend on finding new sources of water and creative measures to maximize the utility of existing water resources. The challenge for Southwest cities is to establish policies, procedures, and projects that maximize the use of water and promote conservation among all municipal users. All cities are faced with the same challenges but have different options for how they prioritize their water resources. The principal means of sustainable water management include recovery, recharge, reuse, and increasing the efficiency of water delivery. Other strategies that have been adopted include harvesting of rainwater, building codes that promote efficient water use, tiered water rates, turf removal programs, residential water auditing, and native plant promotion. Creating a sustainable future for the Southwest will best be achieved by cities that adopt an integrated approach to managing their water resources, including discouraging discretionary uses of water and adopting building and construction codes for master plans, industrial plants, and residential construction. Additionally, a robust plan for public education is essential to create a culture of conservation from a very young age.
Contributors: Malloy, Richard (Richard A.) (Author) / Brock, John (Thesis advisor) / Martin, Chris (Thesis advisor) / Thor, Eric (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

In recent years, an increase in environmental temperature in urban areas has raised many concerns. These areas are subjected to higher temperatures than the surrounding rural areas. Modification of the land surface and the use of materials such as concrete and/or asphalt are the main factors influencing the surface energy balance, and therefore the environmental temperature, in urban areas. Engineered materials have relatively higher solar energy absorption and tend to trap a relatively higher share of incoming solar radiation. They also possess a higher heat storage capacity that allows them to retain heat during the day and then slowly release it back into the atmosphere as the sun goes down. This phenomenon is known as the Urban Heat Island (UHI) effect and causes an increase in urban air temperature. Many researchers believe that albedo is the key pavement property affecting the urban heat island. However, this research has shown that the problem is more complex and that solar reflectivity may not be the only important factor in evaluating the ability of a pavement to mitigate UHI. The main objective of this study was to analyze and research the influence of pavement materials on the near-surface air temperature. To accomplish this, test sections consisting of Hot Mix Asphalt (HMA), Porous Hot Mix Asphalt (PHMA), Portland Cement Concrete (PCC), Pervious Portland Cement Concrete (PPCC), artificial turf, and landscape gravels were constructed in the Phoenix, Arizona area. Air temperature, albedo, wind speed, solar radiation, and wind direction were recorded, analyzed, and compared above each pavement material type. The results showed that there was no significant difference in air temperature at 3 feet and above, regardless of the type of pavement. Near-surface pavement temperatures were also measured and modeled.
The results indicated that for UHI analysis, it is important to consider the interaction between pavement structure, material properties, and environmental factors. Overall, this study demonstrated the complexity of evaluating pavement structures for UHI mitigation; it provided great insight into the effects of material types and properties on surface temperatures and near-surface air temperature.
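The role of albedo in the surface energy balance can be illustrated with a back-of-the-envelope sketch: absorbed shortwave radiation is what is not reflected. The irradiance and albedo values below are typical textbook figures assumed for illustration, not measurements from the test sections.

```python
def absorbed_solar(incoming_w_m2, albedo):
    """Shortwave radiation absorbed by a surface (W/m^2):
    the fraction not reflected (1 - albedo) is absorbed."""
    return (1.0 - albedo) * incoming_w_m2

solar = 1000.0  # W/m^2, an assumed clear-sky midday irradiance
asphalt = absorbed_solar(solar, 0.10)   # assumed albedo for aged asphalt
concrete = absorbed_solar(solar, 0.35)  # assumed albedo for light concrete

# Asphalt absorbs far more energy than concrete at midday; how much of that
# heat is released after sunset, however, depends on heat storage capacity,
# which is why albedo alone does not predict UHI mitigation performance.
```

The final comment captures the study's central point: reflectivity is only one term in the balance, and thermal storage governs the nighttime release.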

Contributors: Pourshams-Manzouri, Tina (Author) / Kaloush, Kamil (Thesis advisor) / Wang, Zhihua (Thesis advisor) / Zapata, Claudia E. (Committee member) / Mamlouk, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm-based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo so that they are divided into early and late periods of development within standard stage demarcations. Stage scores help us to illuminate global gene activities and changes much better, and more refined stage annotations improve our ability to better interpret results when expression pattern matches are discovered between genes.
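The group-structured penalty at the heart of such models can be sketched directly: a sum of Euclidean norms over predefined feature groups, where groups are allowed to share indices (overlap). The weight vector and groups below are illustrative, not from the thesis.

```python
from math import sqrt

def group_lasso_penalty(w, groups, weights=None):
    """Sum of (optionally weighted) Euclidean norms over feature groups.
    Groups are index lists and may overlap; a zero group norm means the
    whole group is switched off, which is how group sparsity arises."""
    if weights is None:
        weights = [1.0] * len(groups)
    return sum(wg * sqrt(sum(w[i] ** 2 for i in g))
               for wg, g in zip(weights, groups))

w = [3.0, 4.0, 0.0, 0.0, 5.0]
groups = [[0, 1], [1, 2, 3], [3, 4]]  # indices 1 and 3 each sit in two groups
penalty = group_lasso_penalty(w, groups)  # 5 + 4 + 5 = 14
```

Minimizing a loss plus this penalty drives entire group norms to zero, which is what imposes the predefined structure on the learned model.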
Contributors: Yuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The complexity of the systems that software engineers build has continuously grown since the inception of the field. What has not changed is the engineers' mental capacity to operate on about seven distinct pieces of information at a time. The widespread use of UML has led to more abstract software design activities; however, the same cannot be said for reverse engineering activities. The introduction of abstraction to reverse engineering will allow engineers to move farther away from the details of the system, increasing their ability to see the role that domain-level concepts play in the system. In this thesis, we present a technique that facilitates filtering of classes from existing systems at the source level, based on their relationship to concepts in the domain, via a classification method using machine learning. We showed that concepts can be identified using a machine learning classifier based on source-level metrics. We developed an Eclipse plugin to assist with the process of manually classifying Java source code and collecting metrics and classifications into a standard file format. We developed an Eclipse plugin to act as a concept identifier that visually indicates whether a class is a domain concept or not. We minimized the size of training sets to ensure a useful approach in practice. This allowed us to determine that a training set of 7.5 to 10% is nearly as effective as a training set representing 50% of the system. We showed that random selection is the most consistent and effective means of selecting a training set. We found that KNN is the most consistent performer among the learning algorithms tested. We determined the optimal feature set for this classification problem. We discussed two possible structures besides a one-to-one mapping of domain knowledge to implementation. We showed that classes representing more than one concept are simply concepts at differing levels of abstraction.
We also discussed composite concepts representing a domain concept implemented by more than one class. We showed that these composite concepts are difficult to detect because the problem is NP-complete.
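The KNN-based concept identification can be sketched as follows, assuming hypothetical source-level metrics per class (lines of code, fan-in, fan-out); the metric choice and values are illustrative, not the thesis's exact feature set.

```python
from math import dist
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify a metrics vector by majority vote of its k nearest
    labeled neighbors under Euclidean distance."""
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical training set: metrics for classes already labeled by hand,
# mimicking the small (7.5-10%) manually classified training sets described.
train = [(320, 12, 4), (280, 9, 5), (45, 1, 8), (60, 2, 9), (400, 15, 3)]
labels = ["concept", "concept", "helper", "helper", "concept"]

label = knn_predict(train, labels, query=(300, 10, 4))
```

A concept-identifier plugin would run this prediction over every unlabeled class and flag the ones classified as domain concepts.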
Contributors: Carey, Maurice (Author) / Colbourn, Charles (Thesis advisor) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Statistical process control (SPC) and predictive analytics have been used in industrial manufacturing and design, but until now have not been applied to threshold data of vital sign monitoring in remote care settings. In this study of 20 elders with COPD and/or CHF, extended months of peak flow monitoring (FEV1) using telemedicine were examined to determine when an earlier or later clinical intervention may have been advised. This study demonstrated that SPC may bring less than a 2.0% increase in clinician workload while providing more robust statistically derived thresholds than clinician-derived thresholds. Using a random K-fold model, predicted FEV1 output was validated to a generalized R-square of 0.80, demonstrating adequate learning of a threshold classifier. Disease severity also impacted the model. Forecasting future FEV1 data points is possible with a complex ARIMA(45, 0, 49) model, but variation and sources of error require tight control. Validation was above average and encouraging for clinician acceptance. These statistical algorithms allow the patient's own data to drive reductions in variability and may potentially increase clinician efficiency, improve patient outcomes, and reduce the cost burden to the health care ecosystem.
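A minimal sketch of Shewhart-style SPC thresholds applied to home FEV1 readings, with made-up baseline values rather than patient data from the study: limits are derived from the patient's own in-control baseline, which is the sense in which the thresholds are "statistically derived" rather than clinician-chosen.

```python
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Shewhart-style control limits: mean +/- k sample standard
    deviations of an in-control baseline period."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m + k * s

# Hypothetical daily FEV1 readings (liters) over a stable baseline period
baseline = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.0, 2.1, 2.1]
lcl, ucl = control_limits(baseline)

# New readings outside the limits would prompt a clinician review
flagged = [x for x in [2.1, 2.0, 1.6, 2.1] if not (lcl <= x <= ucl)]
```

Only out-of-control points reach the clinician, which is consistent with the small (under 2.0%) workload increase reported above.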
Contributors: Fralick, Celeste (Author) / Muthuswamy, Jitendran (Thesis advisor) / O'Shea, Terrance (Thesis advisor) / LaBelle, Jeffrey (Committee member) / Pizziconi, Vincent (Committee member) / Shea, Kimberly (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Teamwork and project management (TPM) tools are important components of sustainability science curricula designed using problem- and project-based learning (PPBL). Tools are additional materials, beyond lectures, readings, and assignments, that structure and facilitate students' learning; they can enhance student teams' ability to complete projects and achieve learning outcomes and, if instructors can find appropriate existing tools, can reduce time needed for class design and preparation. This research uses a case study approach to evaluate the effectiveness of five TPM tools in two Arizona State University (ASU) sustainability classes: an introductory (100-level) and a capstone (400-level) class. Data were collected from student evaluations and instructor observations in both classes during Spring 2013 and qualitatively analyzed to identify patterns in tool use and effectiveness. Results suggest how instructors might improve tool effectiveness in other sustainability classes. Work plans and meeting agendas were the most effective TPM tools in the 100-level class, while work plans and codes of collaboration were most effective at the 400 level. Common factors in tool effectiveness include active use and integration of tools into class activities. Suggestions for improving tool effectiveness at both levels include introducing tools earlier in the course, incorporating tools into activities, and helping students link a tool's value to sustainability problem-solving competence. Polling students on prior use and incorporating tool use into project assignments may increase tool effectiveness at the 100 level; at the 400 level, improvements may be achieved by introducing tools earlier and coaching students to select, find, and develop relevant tools.
Contributors: Trippel, Dorothy (Author) / Redman, Charles L. (Thesis advisor) / Pijawka, K. David (Committee member) / Walters, Molina (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The United Nations Framework Convention on Climate Change (UNFCCC) recognizes development as a priority for carbon dioxide (CO2) allocation, under its principle of "common but differentiated responsibilities". This was codified in the Kyoto Protocol, which exempted developing nations from binding emission reduction targets. Additionally, they could be the recipients of financed sustainable development projects in exchange for emission reduction credits that the developed nations could use to comply with emission targets. Due to ineffective results, post-Kyoto policy discussions indicate a transition towards mitigation commitments from major developed and developing emitters, likely supplemented by market-based mechanisms to reduce mitigation costs. Although the likelihood of achieving substantial emission reductions is increased by the new plan, little consideration has been given to how an ethic of development might be advanced. Therefore, this research empirically investigates the role that CO2 plays in advancing human development (in terms of the Human Development Index, or HDI) over the 1990 to 2010 time period. Based on empirical evidence, a theoretical CO2-development framework is established, which provides a basis for designing a novel policy proposal that integrates mitigation efforts with human development objectives. Empirical evidence confirms that CO2 and HDI are highly correlated, but that there are diminishing returns to HDI as per capita CO2 emissions increase. An examination of development pathways reveals that as nations develop, their trajectories generally become less coupled with CO2. Moreover, the developing countries with the greatest gains in HDI are also nations that have, or are in the process of moving toward, outward-oriented trade policies that involve increased domestic capabilities for product manufacture and export.
With these findings in mind, future emission targets should reduce current emissions in developed nations and allow room for HDI growth in developing countries as well as in the least developed nations of the world. Emission trading should also be limited to nations with similar HDI levels to protect less-developed nations from unfair competition for capacity-building resources. Lastly, developed countries should be incentivized to invest in joint production ventures within the LDCs to build capacity for self-reliant and sustainable development over the long term.
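The diminishing-returns relationship can be illustrated with a toy logarithmic model of HDI as a function of per capita emissions; the coefficients are invented for illustration and are not fitted values from this research.

```python
from math import log

def hdi_estimate(co2_per_capita, a=0.55, b=0.08):
    """Toy logarithmic CO2-HDI curve: HDI rises with the log of per capita
    emissions, so each additional tonne buys less HDI than the last.
    Coefficients a and b are illustrative, not fitted values."""
    return a + b * log(co2_per_capita, 2)

# Marginal HDI gain from one extra tonne per capita, at low vs. high emissions
gain_at_1 = hdi_estimate(2) - hdi_estimate(1)  # going from 1 to 2 t/capita
gain_at_8 = hdi_estimate(9) - hdi_estimate(8)  # going from 8 to 9 t/capita
```

Under any concave curve of this shape, the same tonne of emissions yields a much larger HDI gain in a low-emitting nation than in a high-emitting one, which is the logic behind reserving emissions room for developing countries.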
Contributors: Clark, Susan Spierre (Author) / Seager, Thomas P. (Thesis advisor) / Allenby, Braden (Committee member) / Klinsky, Sonja (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Text classification, in the artificial intelligence domain, is an activity in which text documents are automatically classified into predefined categories using machine learning techniques. An example of this is classifying uncategorized news articles into predefined categories such as "Business", "Politics", "Education", "Technology", etc. In this thesis, a supervised machine learning approach is followed, in which a model is first trained with pre-classified training data and then the class of test data is predicted. Good feature extraction is an important step in the machine learning approach, and hence the main component of this text classifier is semantic triplet-based features, in addition to traditional features like standard keyword-based features and statistical features based on shallow parsing (such as density of POS tags and named entities). A triplet {Subject, Verb, Object} in a sentence is defined as a relation between the subject and the object, the relation being the predicate (verb). Triplet extraction is a five-step process that takes as input a corpus of web text documents, each consisting of one or more paragraphs, drawn from sources ranging from RSS feeds to lists of extremist websites. The input corpus feeds into the "Pronoun Resolution" step, which uses a heuristic approach to identify the noun phrases referenced by pronouns. The next step, "SRL Parser", is a shallow semantic parser that converts the incoming pronoun-resolved paragraphs into annotated predicate-argument format. The output of the SRL parser is processed by the "Triplet Extractor" algorithm, which forms triplets of the form {Subject, Verb, Object}. Generalization and reduction of triplet features is the next step. The reduced feature representation reduces computing time, yields better discriminatory behavior, and mitigates the curse of dimensionality. For training and testing, a ten-fold cross-validation approach is followed.
In each round, an SVM classifier is trained with 90% of the labeled (training) data, and in the testing phase, the classes of the remaining 10% of unlabeled (testing) data are predicted. In conclusion, this thesis proposes a model with semantic triplet-based features for story classification. The effectiveness of the model is demonstrated against other traditional features used in the literature for text classification tasks.
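The ten-fold cross-validation scheme described above can be sketched as a plain index splitter: each of the ten folds serves once as the held-out 10% test set while the remaining 90% trains the classifier.

```python
def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation:
    each fold is held out exactly once as the test set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(100, k=10))
# 10 rounds; each trains on 90 documents and tests on the remaining 10
```

In the thesis's setting, each round would fit the SVM on the `train` indices of the feature matrix and score predictions on the `test` indices, averaging accuracy over the ten rounds.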
Contributors: Karad, Ravi Chandravadan (Author) / Davulcu, Hasan (Thesis advisor) / Corman, Steven (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2013