Description
This thesis focuses on generating and exploring design variations for architectural and urban layouts. I propose to study this general problem in three selected contexts.

First, I introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a given size.

Second, I propose a method for a user to understand and systematically explore good building layouts. Starting from a discrete set of good layouts, I analytically characterize the local shape space of good layouts around each initial layout, compactly encode these spaces, and link them to support transitions across the different local spaces. I represent such transitions in the form of a portal graph. The user can then use the portal graph, along with the family of local shape spaces, to globally and locally explore the space of good building layouts.
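To make the portal-graph idea concrete, here is a minimal sketch under assumed data structures: each node compactly encodes one local shape space (illustrated here, purely as an assumption, as a box of layout parameters around a good initial layout), and each portal stores a concrete layout valid in two neighboring spaces so exploration can jump between them. All class and field names are hypothetical, not the thesis's actual encoding.

```python
# Hypothetical sketch of a portal graph over local shape spaces.
from dataclasses import dataclass, field

@dataclass
class LocalShapeSpace:
    center: list   # parameters of the initial good layout
    lo: list       # per-parameter lower bounds of the local space
    hi: list       # per-parameter upper bounds of the local space

    def contains(self, x):
        return all(l <= v <= h for v, l, h in zip(x, self.lo, self.hi))

@dataclass
class PortalGraph:
    spaces: list                                 # the family of local shape spaces
    portals: dict = field(default_factory=dict)  # (i, j) -> portal layout

    def add_portal(self, i, j, layout):
        # A portal is a concrete layout valid in both local spaces, letting
        # the user transition from exploring space i to exploring space j.
        assert self.spaces[i].contains(layout) and self.spaces[j].contains(layout)
        self.portals[(i, j)] = layout

    def neighbors(self, i):
        return [j for (a, j) in self.portals if a == i]
```

Under this sketch, local exploration stays inside a single box, while global exploration amounts to walking the graph through portals.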

Finally, I propose an algorithm to computationally design street networks that balance competing requirements, such as quick travel times and reduced through traffic in residential neighborhoods. The user simply provides high-level functional specifications for a target neighborhood, and my algorithm satisfies the specification as closely as possible by solving for both the connectivity and the geometric layout of the network.
Contributors: Bao, Fan (Author) / Wonka, Peter (Thesis advisor) / Maciejewski, Ross (Committee member) / Razdan, Anshuman (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Quad-dominant (QD) meshes, i.e., three-dimensional, 2-manifold polygonal meshes comprising mostly four-sided faces (i.e., quads), are a popular choice for many applications such as polygonal shape modeling, computer animation, base meshes for spline and subdivision surfaces, simulation, and architectural design. This thesis investigates the topic of connectivity control, i.e., exploring different choices of mesh connectivity to represent the same 3D shape or surface. One key concept of QD mesh connectivity is the distinction between regular and irregular elements: a vertex with valence 4 is regular; otherwise, it is irregular. In a similar sense, a face with four sides is regular; otherwise, it is irregular. For QD meshes, the placement of irregular elements is especially important, since it largely determines the achievable geometric quality of the final mesh.

Traditionally, research on QD meshes has focused on the automatic generation of pure quadrilateral or QD meshes from a given surface, where explicit control of the placement of irregular elements can only be achieved indirectly. To fill this gap, in this thesis, we make the following contributions. First, we formulate the theoretical background on the fundamental combinatorial properties of irregular elements in QD meshes. Second, we develop algorithms for the explicit control of irregular elements and the exhaustive enumeration of QD mesh connectivities. Finally, we demonstrate the importance of connectivity control for QD meshes in a wide range of applications.
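As a small illustration of the regular/irregular distinction described above, the sketch below derives vertex valences from a face list and flags irregular elements. The face-list representation is an assumption made here for illustration, and boundary vertices (whose regularity criterion differs) would need separate handling.

```python
# Flag irregular vertices (valence != 4) and faces (not four-sided)
# in a quad-dominant mesh given as a list of faces.
from collections import Counter

def irregular_elements(faces):
    """faces: list of tuples of vertex indices (mostly quads)."""
    # Collect the undirected edge set from consecutive face corners.
    edges = set()
    for face in faces:
        n = len(face)
        for k in range(n):
            a, b = face[k], face[(k + 1) % n]
            edges.add((min(a, b), max(a, b)))
    # Vertex valence = number of incident edges.
    valence = Counter()
    for a, b in edges:
        valence[a] += 1
        valence[b] += 1
    irregular_vertices = [v for v, d in valence.items() if d != 4]
    irregular_faces = [i for i, f in enumerate(faces) if len(f) != 4]
    return irregular_vertices, irregular_faces
```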
Contributors: Peng, Chi-Han (Author) / Wonka, Peter (Thesis advisor) / Maciejewski, Ross (Committee member) / Farin, Gerald (Committee member) / Razdan, Anshuman (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Sparse learning is a powerful tool to generate models of high-dimensional data with high interpretability, and it has many important applications in areas such as bioinformatics, medical image processing, and computer vision. Recently, a priori structural information has been shown to be powerful for improving the performance of sparse learning models. A graph is a fundamental way to represent structural information of features. This dissertation focuses on graph-based sparse learning. The first part of this dissertation aims to integrate a graph into sparse learning to improve performance. Specifically, the problem of feature grouping and selection over a given undirected graph is considered. Three models are proposed along with efficient solvers to achieve simultaneous feature grouping and selection, enhancing estimation accuracy. One major challenge is that solving large-scale graph-based sparse learning problems remains computationally demanding. An efficient, scalable, and parallel algorithm is therefore proposed for one widely used graph-based sparse learning approach, anisotropic total variation regularization, by explicitly exploiting the structure of the graph. The second part of this dissertation focuses on uncovering the graph structure from the data. Two issues in graphical modeling are considered: the joint estimation of multiple graphical models using a fused lasso penalty, and the estimation of hierarchical graphical models. The key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented that reduces the size of the optimization problem and thus dramatically reduces the computational cost.
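For concreteness, one standard objective in this family, assuming a least-squares loss, combines a lasso penalty for selection with an edge-wise fusion penalty over the given graph (the anisotropic total variation) for grouping. This is a representative formulation, not necessarily the exact models proposed in the dissertation:

```latex
\min_{\beta \in \mathbb{R}^p} \;
  \frac{1}{2}\,\lVert y - X\beta \rVert_2^2
  \;+\; \lambda_1 \lVert \beta \rVert_1
  \;+\; \lambda_2 \sum_{(i,j) \in E} \lvert \beta_i - \beta_j \rvert
```

The fusion term drives features connected by an edge of the graph toward equal weights, which yields grouping on top of the selection induced by the first penalty.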
Contributors: Yang, Sen (Author) / Ye, Jieping (Thesis advisor) / Wonka, Peter (Thesis advisor) / Wang, Yalin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The video game graphics pipeline has traditionally rendered the scene using a polygonal approach. Advances in modern graphics hardware now allow the rendering of parametric methods. This thesis explores various smooth-surface rendering methods that can be integrated into the video game graphics engine. Moving from the polygonal domain to parametric or smooth surfaces has its share of issues, and there is an inherent need to address various rendering bottlenecks that could hamper such a move. The game engine needs to choose an appropriate method based on the in-game characteristics of the objects; character and animated objects need more sophisticated methods, whereas static objects can use simpler techniques. Scaling the polygon count across various hardware platforms becomes an important factor. Much control is needed over the tessellation levels, whether imposed by hardware limitations or by the application, to adaptively render the mesh without significant loss in performance. This thesis explores several methods that would help game engine developers make correct design choices by optimally balancing the trade-offs while rendering the scene using smooth surfaces. It proposes a novel technique for adaptive tessellation of triangular meshes that vastly improves speed and tessellation count. It develops an approximate method for rendering Loop subdivision surfaces on tessellation-enabled hardware. A taxonomy and evaluation of the methods are provided, and a unified rendering system that provides automatic level of detail by switching between the methods is proposed.
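As one concrete example of the kind of tessellation control discussed above, a common heuristic, sketched here under assumed parameters and not as the thesis's specific technique, derives a per-edge tessellation factor from the edge's projected screen-space length, so nearby geometry is subdivided more finely:

```python
# Per-edge adaptive tessellation factor from screen-space edge length.
# The pixel budget and clamp values below are illustrative assumptions.
import math

def tess_factor(e0, e1, px_per_segment=8.0, max_factor=64):
    """e0, e1: screen-space (pixel) endpoints of a triangle edge."""
    length_px = math.hypot(e1[0] - e0[0], e1[1] - e0[1])
    # One tessellation segment per `px_per_segment` pixels, clamped to
    # the range the hardware (or the application) allows.
    return max(1, min(max_factor, round(length_px / px_per_segment)))
```

An edge spanning 200 pixels would then get a factor of 25, while a distant edge spanning 4 pixels collapses to a single segment.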
Contributors: Amresh, Ashish (Author) / Farin, Gerald (Thesis advisor) / Razdan, Anshuman (Thesis advisor) / Wonka, Peter (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Night vision goggles (NVGs) are widely used by helicopter pilots for flight missions at night, but the equipment can present visually confusing images, especially in urban areas. A simulation tool with realistic nighttime urban images would help pilots practice and train for flight with NVGs. However, there is a lack of tools for visualizing urban areas at night. This is mainly due to the difficulties of gathering light system data, placing light systems at suitable locations, and rendering millions of lights with complex light intensity distributions (LID). Unlike daytime images, a city can have millions of light sources at night, including street lights, illuminated signs, and light shed from building interiors through windows. In this paper, a Procedural Lighting tool (PL), which predicts the positions and properties of street lights, is presented. The PL tool is used to accomplish three aims: (1) to generate vector data layers for geographic information systems (GIS) with statistically estimated information on lighting designs for streets, as well as the locations, orientations, and models for millions of streetlights; (2) to generate geo-referenced raster data suitable for use as light maps that cover a large-scale urban area, so that the effect of millions of street lights can be accurately rendered in real time; and (3) to extend existing 3D models by generating detailed light maps that can be used as UV-mapped textures to render the model. An interactive graphical user interface (GUI) for configuring and previewing lights from a Light System Database (LDB) is also presented. The GUI includes physically accurate information about LID as well as the lights' spectral power distributions (SPDs), so that a light map can be generated for use with any sensor whose luminosity function is known. Finally, for areas where more detail is required, a tool has been developed for editing and visualizing light effects over a 3D building from many light sources, including area lights and windows. The above components are integrated in the PL tool to produce a nighttime urban view not only of a large-scale area but also of individual city buildings in detail.
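The following sketch illustrates just one small step of what such procedural placement might look like: dropping lights along a street centerline at a design spacing, alternating curb sides. The spacing rule, the alternating-side convention, and all names are illustrative assumptions, not the PL tool's actual algorithm.

```python
# Place street lights along a polyline centerline at a fixed design
# spacing, offset to alternating sides of the street (an assumption).
import math

def place_streetlights(centerline, spacing=30.0, offset=4.0):
    """centerline: list of (x, y) points; returns light positions."""
    lights, carry, side = [], 0.0, 1
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0.0:
            continue
        ux, uy = (x1 - x0) / seg, (y1 - y0) / seg  # along-street direction
        nx, ny = -uy, ux                           # perpendicular, toward the curb
        d = spacing - carry                        # distance to the next light
        while d <= seg:
            lights.append((x0 + ux * d + nx * offset * side,
                           y0 + uy * d + ny * offset * side))
            side = -side                           # alternate street sides
            d += spacing
        carry = seg - (d - spacing)                # distance walked past the last light
    return lights
```

A full pipeline would then attach a light model, orientation, and LID/SPD record from the LDB to each generated position before baking the light maps.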
Contributors: Chuang, Chia-Yuan (Author) / Femiani, John (Thesis advisor) / Razdan, Anshuman (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For this master's thesis, an open learner model is integrated with Quinn, a teachable robotic agent developed at Arizona State University. The combined system is presented as a feedback system that aims to improve a student's understanding of a subject; it also helps in understanding the effect of the learner model when that model is represented by the performance of the teachable agent. The feedback system reports the performance of the teachable agent, not of the student, so its data are updated according to the student's understanding of the subject. This gives students an opportunity to enhance their understanding of a subject by analyzing the agent's performance. To test the effectiveness of the feedback system, student understanding is analyzed under two conditions: in the first, no feedback report is provided to the students; in the second, the feedback report is provided in the form of the agent's performance.
Contributors: Upadhyay, Abha (Author) / Walker, Erin (Thesis advisor) / Nelson, Brian (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The American Heart Association recommended in 1997 the data elements that should be collected from resuscitations in hospitals (15). Currently, data documentation of resuscitation events in hospitals, termed ‘code blue’ events, uses a paper form that is institution-specific. Problems with data capture and transcription exist, owing to the challenges of dynamically documenting patient, event, and outcome variables as the code blue event unfolds.

This thesis is based on the hypothesis that an electronic version of code blue real-time data capture would lead to improved resuscitation data transcription and enable clinicians to address deficiencies in the quality of care. The primary goal of this thesis is to create an iOS-based application, designed primarily for iPads, for code blue events at the Mayo Clinic Hospital. The secondary goal is to build an open-source software development framework for converting paper-based hospital protocols into digital format.

The tool created in this study enabled data documentation of resuscitation outcomes to be completed electronically rather than on paper. The tool was evaluated for usability with twenty nurses, the end users, at the Mayo Clinic in Phoenix, Arizona. The results showed that users preferred the iPad application. Furthermore, a qualitative survey showed that the clinicians perceived the electronic version to be more accurate and efficient than paper-based documentation, both of which are essential for an emergency code blue resuscitation procedure.
Contributors: Bokhari, Wasif (Author) / Patel, Vimla L. (Thesis advisor) / Amresh, Ashish (Thesis advisor) / Nelson, Brian (Committee member) / Sen, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Lecture videos are a widely used resource for learning. A simple way to create videos is to record live lectures, but these videos end up being lengthy and include long pauses and repetitive words, making the viewing experience time-consuming. While pauses are useful in live learning environments where students take notes, I question the value of pauses in video lectures. Techniques and algorithms that can shorten such videos can have a huge impact in saving students' time and reducing storage space. I study this problem of shortening videos by removing long pauses and adaptively modifying the playback rate to emphasize the most important sections of the video, and I study the effect of this approach on the student community. The playback rate is designed to play uneventful sections faster and significant sections slower. Important and unimportant sections of a video are identified using textual analysis. I use an existing speech-to-text algorithm to extract the transcript and apply latent semantic analysis and standard information retrieval techniques to identify the relevant segments of the video. I compute relevance scores for the different segments and propose a variable playback rate for each of them. The aim is to reduce the amount of time students spend on passive learning while watching videos, without harming their ability to follow the lecture. I validate the approach by conducting a user study among computer science students and measuring their engagement. The results indicate no significant difference in engagement when this method is compared to the original unedited video.
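A minimal sketch of the rate-mapping idea described in the abstract, assuming per-segment relevance scores are already computed and using a simple linear interpolation between a slow and a fast rate (both bounds are illustrative assumptions, not the study's tuned values):

```python
# Map per-segment relevance scores to playback rates: low-relevance
# segments play fast, high-relevance segments play slow.
def playback_rates(scores, slow=0.9, fast=2.0):
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # guard against all-equal scores
    # Normalize each score to [0, 1], then interpolate:
    # relevance 1.0 -> `slow`, relevance 0.0 -> `fast`.
    return [fast + (slow - fast) * ((s - lo) / span) for s in scores]

# e.g. per-segment LSA relevance scores -> per-segment speeds
rates = playback_rates([0.12, 0.80, 0.33, 0.95])
```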
Contributors: Purushothama Shenoy, Sreenivas (Author) / Amresh, Ashish (Thesis advisor) / Femiani, John (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Large datasets of sub-meter aerial imagery represented as orthophoto mosaics are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to locate several types of features, for example forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies based on many factors, including the time at which the image is captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the region captured by the image. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible-band multispectral imagery. Recent technological and commercial applications have driven the collection of a massive amount of VHR images in the visible red, green, and blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use/land cover (LULC) classification. The benefits of automatic visible-band VHR LULC classification may include applications such as automatic change detection or mapping. Recent work has shown the potential of deep learning approaches for land use classification; however, this thesis improves on the state of the art by applying additional dataset augmentation approaches that are well suited to geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and the accuracy levels of the classifier are presented to show that the results actually generalize beyond the small benchmarks used in training. Deep networks have many parameters, and therefore they are often built with very large sets of labeled data. Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition on one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers without requiring massive training data sets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a CNN (convolutional neural network) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set.
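A sketch of the transfer-learning recipe the abstract outlines: take an ImageNet-pretrained CNN, replace its classifier head for the 21 UC Merced land-use classes, and apply augmentation suited to aerial imagery, which has no canonical "up", so flips and rotations preserve labels. PyTorch/torchvision and the ResNet-18 backbone are assumptions here; the thesis does not name its framework or architecture in this abstract.

```python
# Fine-tune an ImageNet-pretrained CNN for UC Merced land-use classes.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Start from ImageNet weights and swap in a 21-class head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 21)  # UC Merced has 21 classes

# Geospatial-friendly augmentation: aerial tiles are rotation- and
# flip-invariant with respect to their land-use label.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])

# Retrain all layers at a small learning rate (one common choice).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```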
Contributors: Uba, Nagesh Kumar (Author) / Femiani, John (Thesis advisor) / Razdan, Anshuman (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In the last decade, the number of people who own a mobile phone or portable electronic communication device has grown exponentially. Recent advances in smartphone technology have enabled mobile devices to provide applications (“mHealth apps”) that support delivering interventions, tracking health treatments, or involving a healthcare team in the treatment process and symptom monitoring. Although the popularity of mHealth apps is increasing, few lessons have been shared regarding user experience design and evaluation for such innovations as they relate to clinical outcomes. Studies assessing usability for mobile apps rely primarily on survey instruments. Though surveys are effective in determining user perceptions of usability and positive attitudes towards an app, they do not directly assess app feature usage, or whether feature usage and related aspects of app design indicate whether users complete the intended tasks. This is significant for mHealth apps, as proper utilization of the app determines compliance with a clinical study protocol. It is therefore important to understand how design directly impacts compliance, specifically which design factors are prevalent among non-compliant users. This research studies the impact of usability features on clinical protocol compliance by applying a mixed-methods approach to usability assessment, combining traditional surveys, log analysis, and clickstream analysis to determine the connection of design to outcomes. The research is novel in its construction of the mixed-methods approach and in its attempt to tie usability results to impacts on clinical protocol compliance. The validation is a case-study approach, applying the methods to an mHealth app developed for the early prevention of anxiety in middle school students. The results of three empirical studies that support the construction of the mixed-methods approach are shared.
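As a hypothetical illustration of the log and clickstream side of this mixed-methods approach, the sketch below aggregates per-user feature usage from event records so it can be compared across compliant and non-compliant user groups; the record format and function names are assumptions, not the study's actual schema.

```python
# Aggregate app-log events into per-user feature-usage counts, so that
# usage patterns can be contrasted between user groups.
from collections import defaultdict

def feature_usage(events):
    """events: iterable of (user_id, feature_name) pairs from app logs."""
    usage = defaultdict(lambda: defaultdict(int))
    for user, feature in events:
        usage[user][feature] += 1
    return usage

def mean_usage(usage, users, feature):
    """Average use of one feature over a user group (e.g., non-compliant users)."""
    counts = [usage[u][feature] for u in users]
    return sum(counts) / len(counts) if counts else 0.0
```

Comparing `mean_usage` across compliant and non-compliant groups, feature by feature, is one simple way to surface design factors prevalent among non-compliant users.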
Contributors: Patwardhan, Mandar (Author) / Gary, Kevin A (Thesis advisor) / Pina, Armando (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2016