Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square-root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation with lower computational complexity. Thus, bilinear interpolation is chosen for our system.
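The abstract does not spell out the CVE formulation; as a point of reference, the widely used Kasai lag-one autocorrelation estimator is the standard form of conventional velocity estimation. A minimal sketch (the pulse repetition frequency, center frequency, and velocity below are illustrative values, not the thesis's simulation settings):

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Conventional (Kasai) velocity estimate from slow-time IQ samples
    at one depth, using the phase of the lag-one autocorrelation."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))      # lag-one autocorrelation
    phase = np.angle(r1)                          # mean Doppler phase shift per emission
    return c * prf * phase / (4 * np.pi * f0)     # axial velocity [m/s]

# Synthetic slow-time signal for a scatterer moving axially at 0.2 m/s.
prf, f0, c, v = 5e3, 5e6, 1540.0, 0.2
n = np.arange(32)
iq = np.exp(1j * 4 * np.pi * f0 * v * n / (c * prf))
print(round(kasai_velocity(iq, prf, f0), 3))  # → 0.2
```

Note the division and the arctangent hidden in `np.angle`; the thesis's approximation techniques target exactly this kind of hardware-unfriendly operation.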
ContributorsWei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2013
Description
Simultaneously culture heroes and stumbling buffoons, Tricksters bring cultural tools to the people and make the world more habitable. There are common themes in these figures that remain fruitful for the advancement of culture, theory, and critical praxis. This dissertation develops a method for opening a dialogue with Trickster figures. It draws from established literature to present a newly conceived and more flexible Trickster archetype. This archetype is more than a collection of traits; it builds on itself processually to form a method for analysis. The critical Trickster archetype includes the fundamental act of crossing borders; the twin ontologies of ambiguity and liminality; the particular tactics of humor, duplicity, and shape shifting; and the overarching cultural roles of culture hero and stumbling buffoon. Running parallel to each archetypal element, though, are Trickster's overarching critical spirit of Quixotic utopianism and underlying telos of manipulating human relationships. The character 'Q' from Star Trek: The Next Generation is used to demonstrate the critical Trickster archetype. To be more useful for critical cultural studies, Trickster figures must also be connected to their socio-cultural and historical contexts. Thus, this dissertation offers a second set of analytics, a dialogical method that connects Tricksters to the worlds they make more habitable. This dialogical method, developed from the work of M. M. Bakhtin and others, consists of three analytical tools: utterance, intertextuality, and chronotope. Utterance bounds the text for analysis. Intertextuality connects the utterance, the text, to its context. Chronotope suggests particular spatio-temporal relationships that help reveal the cultural significance of a dialogical performance. Performance artists Andre Stitt, Ann Liv Young, and Steven Leyba are used to demonstrate the method of Trickster dialogics. 
A concluding discussion of Trickster's unique chronotope reveals its contributions to conceptions of utopia and futurity. This dissertation offers theoretical advancements about the significance and tactics of subversive communication practices. It offers a new and unique method for cultural and performative analyses that can be expanded into different kinds of dialogics. Trickster dialogics can also be used generatively to direct and guide the further development of performative praxis.
ContributorsSalinas, Chema (Author) / de la Garza, Amira (Thesis advisor) / Carlson, Cheree (Committee member) / Olson, Clark (Committee member) / Ellsworth, Angela (Committee member) / Arizona State University (Publisher)
Created2013
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts towards a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real world multimedia pattern recognition applications.
Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition, and active learning for regression). These contributions are validated on face recognition and facial expression recognition problems (commonly encountered in real-world applications like robotics, security, and assistive technology for the blind and visually impaired) and on collaborative filtering applications like movie recommendation.
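To make the batch-selection idea concrete, here is a hedged illustration of what batch mode active learning involves: a generic greedy heuristic that trades off predictive uncertainty (entropy) against diversity within the batch. This is a common baseline, not any of the dissertation's four formulations:

```python
import numpy as np

def select_batch(probs, X, k):
    """Greedily pick a batch of k unlabeled instances to query,
    balancing uncertainty (entropy) and diversity (distance to picks).
    probs: (n, n_classes) predicted class probabilities for the pool
    X:     (n, d) feature vectors
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    chosen = [int(np.argmax(entropy))]          # seed with the most uncertain point
    for _ in range(k - 1):
        # distance of every pool point to its nearest already-chosen point
        d = np.min(np.linalg.norm(X[:, None, :] - X[None, chosen, :], axis=2), axis=1)
        score = entropy + d                     # trade off uncertainty and diversity
        score[chosen] = -np.inf                 # never re-pick
        chosen.append(int(np.argmax(score)))
    return chosen

probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.6, 0.4]])
X = np.array([[0.0, 0.0], [10.0, 0.0], [0.1, 0.0]])
print(select_batch(probs, X, 2))  # picks the most uncertain point, then a distant one
```

Unlike this fixed-k heuristic, contribution (i) above selects the batch size itself adaptively.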
ContributorsChakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limits on tweet length, it is hard to extract ranking measures from a tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation against the current state-of-the-art method.
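The abstract does not give RAProp's actual update rule, but the general shape of "propagating a seed reputation score over an agreement graph" can be sketched with a PageRank-style iteration. Everything below (seed scores, similarity weights, damping factor) is an assumed toy setup, not the thesis's formulation:

```python
import numpy as np

def propagate(seed, agreement, alpha=0.85, iters=50):
    """Propagate initial reputation scores over a content-agreement graph.
    seed:      (n,) initial per-tweet reputation scores
    agreement: (n, n) symmetric nonnegative content-similarity weights
    """
    # Row-normalize so each tweet distributes its score among agreeing tweets.
    rowsum = agreement.sum(axis=1, keepdims=True)
    P = np.divide(agreement, rowsum, out=np.zeros_like(agreement), where=rowsum > 0)
    r = seed.copy()
    for _ in range(iters):
        r = (1 - alpha) * seed + alpha * (P.T @ r)   # damped propagation step
    return r

# Two mutually agreeing tweets reinforce each other; an isolated
# low-seed tweet (e.g. likely spam) stays low.
seed = np.array([1.0, 1.0, 0.1])
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(propagate(seed, A).round(3))
```

This also hints at the spam-countering property mentioned above: a spam tweet with no agreement edges cannot inherit reputation from the rest of the graph.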
ContributorsRavikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2013
Description
The ability to design high performance buildings has acquired great importance in recent years due to numerous federal, societal and environmental initiatives. However, this endeavor is much more demanding in terms of designer expertise and time. It requires a whole new level of synergy between automated performance prediction and the human capability to perceive, evaluate and ultimately select a suitable solution. While performance prediction can be highly automated through the use of computers, performance evaluation cannot, unless it is with respect to a single criterion. The need to address multi-criteria requirements makes it more valuable for a designer to know the "latitude" or "degrees of freedom" he has in changing certain design variables while achieving preset criteria such as energy performance, life-cycle cost, and environmental impact. This requirement can be met by a decision support framework based on near-optimal "satisficing" as opposed to purely optimal decision making techniques. Currently, such a comprehensive design framework is lacking, which is the basis for undertaking this research. The primary objective of this research is to facilitate a complementary relationship between designers and computers for Multi-Criterion Decision Making (MCDM) during high performance building design. It is based on the application of Monte Carlo approaches to create a database of solutions using deterministic whole building energy simulations, along with data mining methods to rank variable importance and reduce the multi-dimensionality of the problem. A novel interactive visualization approach is then proposed which uses regression-based models to create dynamic interplays of how varying these important variables affects the multiple criteria, while providing a visual range or band of variation of the different design parameters.
The MCDM process has been incorporated into an alternative methodology for high performance building design referred to as the Visual Analytics based Decision Support Methodology (VADSM). VADSM is envisioned to be most useful during the conceptual and early design performance modeling stages by providing a set of potential solutions that can be analyzed further for final design selection. The proposed methodology can be used for new building design synthesis as well as for evaluating retrofits and operational deficiencies in existing buildings.
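The Monte Carlo sampling and satisficing-filter steps described above can be sketched in miniature. The two design variables, their ranges, the linear stand-in performance model, and the criteria thresholds are all invented for illustration; the actual study drives this loop with deterministic whole building energy simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Monte Carlo sample of two hypothetical design variables.
wwr  = rng.uniform(0.2, 0.8, N)    # window-to-wall ratio
rval = rng.uniform(10, 40, N)      # wall insulation R-value

# Stand-in performance models (a real study would run energy simulations
# and fit regression surrogates to the resulting database of solutions).
eui  = 120 + 60 * wwr - 1.5 * rval + rng.normal(0, 2, N)   # energy use intensity
cost = 200 + 50 * rval + rng.normal(0, 5, N)               # construction cost

# Satisficing filter: keep every near-optimal design meeting both criteria,
# rather than searching for the single "optimal" point.
ok = (eui < 110) & (cost < 1500)
print(f"{ok.sum()} of {N} sampled designs satisfy both criteria")
print("feasible WWR band:", wwr[ok].min().round(2), "-", wwr[ok].max().round(2))
```

The final line illustrates the "band of variation" idea: the feasible range of each design variable, not a single prescribed value.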
ContributorsDutta, Ranojoy (Author) / Reddy, T Agami (Thesis advisor) / Runger, George C. (Committee member) / Addison, Marlin S. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Buddhism is thriving in US-America, attracting many converts with college and post-graduate degrees as well as selling in all forms of popular culture. Yet little is known about the communication dynamics behind the diffusion of Buddhist religious/spiritual traditions into the United States. Religion is an underexplored area of intercultural communication studies (Nakayama & Halualani, 2010), and this study addresses that lacuna in critical intercultural communication scholarship by investigating the communication practices of US-Americans adopting Asian Buddhist religious/spiritual traditions. Ethnographic observations were conducted at events where US-Americans gathered to learn about and practice Buddhist religious/spiritual traditions. In addition, interviews were conducted with US-Americans who were both learning and teaching Buddhism. The grounded theory method was used for data analysis. The findings of this study describe an emerging theory of the paracultural imaginary -- the space of imagining that one could be better than who one is today by taking on the cultural vestments of (an)Other. The embodied communication dynamics of intercultural exchange that take place when individuals adopt the rituals and philosophies of a foreign culture are described. In addition, a self-reflexive narrative of my struggle with the silence of witnessing the paracultural imaginary is woven into the analysis. The findings from this study extend critical theorizing on cultural identity, performativity, and cultural appropriation in the diffusion of traditions between cultural groups. In addition, the study addresses the complexity of speaking out against the subtle prejudices encountered in intercultural communication.
ContributorsWong, Terrie Siang-Ting (Author) / de la Garza, Sarah Amira (Thesis advisor) / Margolis, Eric (Committee member) / Budruk, Megha (Committee member) / Chen, Vivian Hsueh-Hua (Committee member) / Arizona State University (Publisher)
Created2013
Description
According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Significant correlations between EUIs and CBECS variables were then identified. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression, which can only handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than the Portfolio Manager.
The broader impact of the proposed benchmarking methodology is that it allows for identifying important categorical variables, and then incorporating them in a local, as opposed to a global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in practical implementation of benchmarking tools, which rely on query-based building and HVAC variable filters specified by the user.
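The coefficient of variation used above to evaluate model effectiveness is straightforward to compute. A minimal sketch, where the EUI values are hypothetical and the metric shown is the common CV(RMSE) form (the thesis may define its coefficient of variation differently):

```python
import numpy as np

def cv_rmse(actual, predicted):
    """Coefficient of variation of the RMSE: RMSE of the model's
    predictions normalized by the mean observed value (lower is better)."""
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / np.mean(actual)

# Hypothetical EUIs (kBtu/sq ft/yr) for five buildings vs. a benchmark model.
eui_actual = np.array([55.0, 72.0, 60.0, 90.0, 48.0])
eui_model  = np.array([58.0, 70.0, 63.0, 85.0, 50.0])
print(round(cv_rmse(eui_actual, eui_model), 3))  # → 0.049
```

Comparing this statistic across the decision-tree model and a linear-regression baseline is one way to quantify "performs slightly better."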
ContributorsKaskhedikar, Apoorva Prakash (Author) / Reddy, T. Agami (Thesis advisor) / Bryan, Harvey (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created2013
Description
The academic literature on science communication widely acknowledges a problem: science communication between experts and lay audiences is important, but it is not done well. General audience popular science books, however, carry a reputation for clear science communication and are understudied in the academic literature. For this doctoral dissertation, I utilize Sam Harris's The Moral Landscape, a general audience science book on the particularly thorny topic of neuroscientific approaches to morality, as a case study to explore the possibility of using general audience science books as models for science communication more broadly. I conduct a literary analysis of the text that delimits the scope of its project, its intended audience, and the domains of science to be communicated. I also identify seven literary aspects of the text: three positive aspects that facilitate clarity and four negative aspects that interfere with lay public engagement. I conclude that The Moral Landscape relies on an assumed knowledge base and intuitions of its audience that cannot reasonably be expected of lay audiences; therefore, it cannot properly be construed as popular science communication. It nevertheless contains normative lessons for the broader science communication project, both in literary aspects to be salvaged and in literary aspects and concepts to be consciously avoided and combated. I note that The Moral Landscape's failings can also be taken as an indication that typical descriptions of science communication offer under-detailed taxonomies of both audiences for science communication and the varieties of science communication aimed at those audiences. Future directions of study include rethinking appropriate target audiences for science literacy projects and developing a more discriminating taxonomy of both science communication and lay publics.
ContributorsJohnson, Nathan W (Author) / Robert, Jason S (Thesis advisor) / Creath, Richard (Committee member) / Martinez, Jacqueline (Committee member) / Sylvester, Edward (Committee member) / Lynch, John (Committee member) / Arizona State University (Publisher)
Created2013
Description
Natural resource depletion and environmental degradation are the stark realities of the times we live in. As awareness about these issues increases globally, industries and businesses are becoming interested in understanding and minimizing the ecological footprints of their activities. Evaluating the environmental impacts of products and processes has become a key issue, and the first step towards addressing and eventually curbing climate change. Additionally, companies are finding it beneficial, and are interested, to go beyond compliance, using pollution prevention strategies and environmental management systems to improve their environmental performance. Life-cycle assessment (LCA) is an evaluative method to assess the environmental impacts associated with a product's life cycle from cradle to grave (i.e. from raw material extraction through material processing, manufacturing, distribution, use, repair and maintenance, and finally, disposal or recycling). This study focuses on evaluating building envelopes on the basis of their life-cycle analysis. In order to facilitate this analysis, a small-scale office building, the University Services Building (USB), with a built-up area of 148,101 sq ft, situated on the ASU campus in Tempe, Arizona, was studied. The building's exterior envelope is the highlight of this study. The current exterior envelope is of tilt-up concrete construction, a type of construction in which the concrete elements are constructed horizontally and tilted up, after they are cured, using cranes, and are braced until other structural elements are secured. This building envelope is compared to five other building envelope systems (i.e. concrete block, insulated concrete form, cast-in-place concrete, steel stud and curtain wall constructions), evaluating them on the basis of least environmental impact.
The research methodology involved developing energy models, simulating them, and generating the changes in energy consumption due to the above-mentioned envelope types. Energy consumption data, along with various other details, such as building floor area, areas of walls, columns, beams, etc. and their material types, were imported into the life-cycle assessment software ATHENA Impact Estimator for Buildings. Using this four-step LCA methodology, the results showed that the steel stud envelope performed best, with the least environmental impact among the envelope types compared. This research methodology can be applied to other building typologies.
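The comparison step described above amounts to aggregating stage-wise impact totals per envelope type and ranking them. A minimal sketch in the spirit of an ATHENA-style breakdown, where the stage names are generic LCA stages and every number is invented for illustration (only two of the six envelope types are shown):

```python
# Hypothetical cradle-to-grave global warming potential per envelope type,
# broken out by life-cycle stage (tonnes CO2e; illustrative numbers only).
stages = ["manufacturing", "construction", "use", "end_of_life"]
gwp = {
    "tilt_up":    [310, 45, 120, 30],
    "steel_stud": [220, 40, 115, 25],
}

# Sum each envelope's stages, then pick the envelope with the least impact.
totals = {env: sum(vals) for env, vals in gwp.items()}
best = min(totals, key=totals.get)
print(best, totals[best])
```

A full comparison would repeat this over all six envelope types and all impact categories reported by the estimator, not just one.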
ContributorsRamachandran, Sriranjani (Author) / Bryan, Harvey (Thesis advisor) / Reddy T, Agami (Committee member) / White, Philip (Committee member) / Arizona State University (Publisher)
Created2013
Description
The rapid advancement of wireless technology has instigated the broad deployment of wireless networks. Different types of networks have been developed, including wireless sensor networks, mobile ad hoc networks, wireless local area networks, and cellular networks. These networks have different structures and applications, and require different control algorithms. The focus of this thesis is to design scheduling and power control algorithms in wireless networks, and to analyze their performance. In this thesis, we first study the multicast capacity of wireless ad hoc networks. Gupta and Kumar studied the scaling law of the unicast capacity of wireless ad hoc networks, deriving the order of the unicast throughput as the number of nodes in the network goes to infinity. In our work, we characterize the scaling of the multicast capacity of large-scale mobile ad hoc networks (MANETs) under a delay constraint D. We first derive an upper bound on the multicast throughput, and then establish a lower bound on the multicast capacity by proposing a joint coding-scheduling algorithm that achieves a throughput within a logarithmic factor of the upper bound. We then study the power control problem in ad hoc wireless networks. We propose a distributed power control algorithm based on the Gibbs sampler, and prove that the algorithm is throughput optimal. Finally, we consider scheduling in collocated wireless networks with flow-level dynamics. Specifically, we study the delay performance of a workload-based scheduling algorithm with shortest remaining processing time (SRPT) as a tie-breaking rule. We demonstrate the superior flow-level delay performance of the proposed algorithm using simulations.
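The SRPT idea behind the tie-breaking rule can be sketched in a simplified collocated setting: one flow served per slot, always the one with the fewest remaining packets. This is a toy illustration of the SRPT discipline itself, not the thesis's workload-based algorithm or its flow-arrival dynamics:

```python
import heapq

def srpt_schedule(flows):
    """Serve flows one packet per slot, always picking the flow with the
    shortest remaining processing time (fewest remaining packets).
    flows: dict mapping flow id -> flow size in packets
    Returns dict mapping flow id -> completion slot."""
    heap = [(size, fid) for fid, size in flows.items()]
    heapq.heapify(heap)                 # min-heap keyed on remaining packets
    completion = {}
    t = 0
    while heap:
        rem, fid = heapq.heappop(heap)  # SRPT: smallest remaining work first
        t += 1                          # one slot serves one packet
        if rem > 1:
            heapq.heappush(heap, (rem - 1, fid))
        else:
            completion[fid] = t         # flow finishes in this slot
    return completion

print(srpt_schedule({"a": 3, "b": 1, "c": 2}))
```

Serving shortest-first here yields completion times 1, 3 and 6 (total delay 10); any other order gives a larger total, which is the intuition for why SRPT helps flow-level delay.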
ContributorsZhou, Shan (Author) / Ying, Lei (Thesis advisor) / Zhang, Yanchao (Committee member) / Zhang, Junshan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created2013