Matching Items (143)
Description
Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. This thesis explores using concepts from computational conformal geometry to create a custom software framework for examining and generating quantitative mathematical models that characterize the geometry of early visual areas in the human brain. The software framework includes a graphical user interface built on top of a selected core conformal flattening algorithm and various software tools built specifically for processing and examining retinotopic data. Three conformal flattening algorithms were implemented and evaluated for speed and for how well they preserve the conformal metric. All three algorithms preserved the conformal metric well, but their speed and stability varied. The software framework performed correctly on actual retinotopic data collected using the standard travelling-wave experiment. Preliminary analysis of the Beltrami coefficient for the early data set shows that selected regions of V1 that contain reasonably smooth eccentricity and polar angle gradients do show significant local conformality, warranting further investigation of this approach for analysis of early and higher visual cortex.
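The Beltrami coefficient mentioned in this abstract measures how far a planar map deviates from conformality (it vanishes identically for a conformal map). As a minimal illustrative sketch, not the thesis's software framework, it can be estimated on a regular grid with finite differences; the function name and grid setup below are hypothetical:

```python
import numpy as np

def beltrami_coefficient(u, v, h):
    """Estimate the Beltrami coefficient mu = f_zbar / f_z of the planar
    map f = u + i*v sampled on a regular grid with spacing h, using
    finite differences.  |mu| = 0 indicates a locally conformal map."""
    f = u + 1j * v
    fy, fx = np.gradient(f, h, edge_order=2)  # d/drow (y), d/dcol (x)
    f_z = 0.5 * (fx - 1j * fy)                # Wirtinger derivative d/dz
    f_zbar = 0.5 * (fx + 1j * fy)             # Wirtinger derivative d/dzbar
    return f_zbar / f_z

# Sanity check on f(z) = z^2, which is conformal away from z = 0
x, y = np.meshgrid(np.linspace(1, 2, 50), np.linspace(1, 2, 50))
w = (x + 1j * y) ** 2
mu = beltrami_coefficient(w.real, w.imag, h=x[0, 1] - x[0, 0])
print(np.abs(mu).max())  # effectively zero for a conformal map
```

On real retinotopic data, u and v would be the measured eccentricity/polar-angle coordinates rather than an analytic test map.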
ContributorsTa, Duyan (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Wonka, Peter (Committee member) / Arizona State University (Publisher)
Created2013
Description
In blindness research, the corpus callosum (CC) is the most frequently studied sub-cortical structure, due to its important involvement in visual processing. While most callosal analyses from brain structural magnetic resonance images (MRI) are limited to the 2D mid-sagittal slice, we propose a novel framework to capture a complete set of 3D morphological differences in the corpus callosum between two groups of subjects. The CCs are segmented from whole brain T1-weighted MRI and modeled as 3D tetrahedral meshes. The callosal surface is divided into superior and inferior patches, on which we compute a volumetric harmonic field by solving Laplace's equation with Dirichlet boundary conditions. We adopt a refined tetrahedral mesh to compute the Laplacian operator, so our computation can achieve sub-voxel accuracy. Thickness is estimated by tracing the streamlines in the harmonic field. We combine areal changes found using surface tensor-based morphometry and thickness information into a vector at each vertex to be used as a metric for the statistical analysis. Group differences are assessed on this combined measure through Hotelling's T² test. The method is applied to statistically compare three groups: congenitally blind (CB), late blind (LB; onset > 8 years old), and sighted (SC) subjects. Our results reveal significant differences in several regions of the CC between both blind groups and the sighted group, and to a lesser extent between the LB and CB groups. These results demonstrate the crucial role of visual deprivation during the developmental period in reshaping the structural architecture of the CC.
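The thesis computes the harmonic field on 3D tetrahedral meshes; as a much simpler stand-in, the same idea of solving Laplace's equation with Dirichlet boundary conditions can be sketched on a 2D grid with Jacobi iteration (the function name, grid shape, and boundary setup are illustrative, not the actual pipeline):

```python
import numpy as np

def harmonic_field(top, bottom, shape=(40, 40), iters=5000):
    """Solve Laplace's equation on a rectangular grid by Jacobi iteration,
    with Dirichlet values `top` and `bottom` on opposite edges (a 2-D
    stand-in for the superior/inferior callosal patches).  The lateral
    edges are held at 0 for simplicity."""
    u = np.zeros(shape)
    u[0, :] = top      # "superior" boundary condition
    u[-1, :] = bottom  # "inferior" boundary condition
    for _ in range(iters):
        # Each interior value becomes the average of its four neighbors;
        # the boundary rows/columns are never overwritten.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

field = harmonic_field(top=1.0, bottom=0.0)
# Streamlines of this field run from one boundary patch to the other;
# their lengths give the thickness estimate described above.
print(field[1, 20] > field[-2, 20])  # True: values decay from top to bottom
```

The real method replaces the grid Laplacian with a Laplacian operator on a refined tetrahedral mesh to reach sub-voxel accuracy.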
ContributorsXu, Liang (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, medical image processing, etc. Via l1-norm-based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo, dividing them into early and late periods of development within standard stage demarcations. Stage scores help us to better illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
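A core building block of group-structured sparse learning is the group-wise shrinkage (proximal) operator, which zeroes out whole groups of coefficients at once. A minimal sketch for the simpler non-overlapping case is below; the overlapping case studied in the thesis requires additional machinery, and the names here are illustrative:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the (non-overlapping) group-lasso penalty
    lam * sum_g ||w_g||_2: shrink each group's norm toward zero,
    discarding groups whose norm falls below lam."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * w[g]  # shrink the whole group
        # else the whole group is set exactly to zero
    return out

w = np.array([3.0, 4.0, 0.1, 0.1])
groups = [[0, 1], [2, 3]]
shrunk = group_soft_threshold(w, groups, lam=1.0)
print(shrunk)  # group [0, 1] is shrunk; group [2, 3] is zeroed entirely
```

In a block-wise missing-data setting such as the ADNI multi-modality problem, groups would correspond to features from the same data source.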
ContributorsYuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
Description
Over 2 billion people are using online social network services, such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post their photos, share their information, and chat with others on these social network sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating missing information on Facebook. An application named "Visual Friends Income Map" was created on Facebook to collect social network data and explore geodemographic properties that link to publicly available data, such as US census data. Multiple predictors are implemented to link data sets and extrapolate missing information from Facebook with accurate predictions. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age- and relationship-based predictors are created to improve the accuracy of the proposed location-based predictor by utilizing social network link information. In the case where a user does not share any location information on their Facebook profile, a kernel density estimation location predictor is created. This predictor utilizes publicly available telephone record information for all people in the US with the same surname as the user to create a likelihood distribution of the user's location. This is combined with the user's IP-level information in order to narrow the probability estimation down to a local regional constraint.
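The kernel density estimation predictor described above can be sketched with a hand-rolled Gaussian KDE over 2-D coordinates. The surname records below are synthetic stand-ins, not the actual telephone data, and the bandwidth is an arbitrary illustrative choice:

```python
import numpy as np

def kde_likelihood(points, query, bandwidth=0.5):
    """Gaussian kernel density estimate over 2-D locations.  `points` is
    an (n, 2) array of same-surname record coordinates; returns the
    relative likelihood of each row of the (m, 2) `query` array."""
    diff = query[:, None, :] - points[None, :, :]            # (m, n, 2)
    sq = np.sum(diff ** 2, axis=-1) / (2.0 * bandwidth ** 2)
    return np.exp(-sq).sum(axis=1) / (len(points) * 2.0 * np.pi * bandwidth ** 2)

rng = np.random.default_rng(0)
# Synthetic (lon, lat) records clustered around Phoenix and Boston
records = np.vstack([rng.normal([-112.07, 33.45], 0.3, (200, 2)),
                     rng.normal([-71.06, 42.36], 0.2, (50, 2))])
queries = np.array([[-112.07, 33.45],   # Phoenix
                    [-74.00, 40.70]])   # New York City
lik = kde_likelihood(records, queries)
print(lik[0] > lik[1])  # True: this synthetic surname concentrates near Phoenix
```

The IP-level step would then restrict this likelihood surface to a regional window before picking the most probable location.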
ContributorsMao, Jingxian (Author) / Maciejewski, Ross (Thesis advisor) / Farin, Gerald (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2012
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics (SPH) algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the widely studied and researched problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; however, the solution proposed in this document uses Microsoft's general-purpose computing on graphics processing units API. The implementation allows for the simulation of a large number of particles in a real-time scenario. The solution presented here uses the Smoothed Particle Hydrodynamics algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle using the multithreading and multi-core capabilities of the GPU, increasing the overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader Stage to generate the triangle mesh defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles in real time, at roughly 900 frames per second without the surface reconstruction steps and 400 frames per second with the Marching Cubes steps included.
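The first step of an SPH solver is estimating each particle's density from its neighbors with a smoothing kernel. A CPU sketch using the common poly6 kernel is shown below; this is not the thesis's DirectCompute shader, and the particle count, mass, and support radius are arbitrary illustrative values:

```python
import numpy as np

def sph_density(positions, mass, h):
    """Per-particle density using the poly6 smoothing kernel -- the first
    step of an SPH solver, here on the CPU (each DirectCompute thread
    would do the equivalent work for one particle on the GPU)."""
    poly6 = 315.0 / (64.0 * np.pi * h ** 9)
    diff = positions[:, None, :] - positions[None, :, :]  # pairwise offsets
    r2 = np.sum(diff ** 2, axis=-1)
    # The kernel has compact support: only neighbors within radius h count
    w = np.where(r2 < h ** 2, poly6 * (h ** 2 - r2) ** 3, 0.0)
    return mass * w.sum(axis=1)

pts = np.random.default_rng(1).uniform(0.0, 1.0, size=(500, 3))
rho = sph_density(pts, mass=0.02, h=0.1)
print(rho.shape)  # one density estimate per particle
```

Pressure and viscosity forces are then derived from these densities, and a real implementation would use a spatial hash rather than the all-pairs comparison above.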
ContributorsFigueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2012
Description
Alzheimer's Disease (AD) is the most common form of dementia observed in elderly patients and has significant socioeconomic impact. There are many initiatives which aim to capture the leading causes of AD. Several genetic, imaging, and biochemical markers are being explored to monitor progression of AD and explore treatment and detection options. The primary focus of this thesis is to identify key biomarkers to understand the pathogenesis and prognosis of Alzheimer's Disease. Feature selection is the process of finding a subset of relevant features with which to develop efficient and robust learning models. It is an active research topic in diverse areas such as computer vision, bioinformatics, information retrieval, chemical informatics, and computational finance. In this work, state-of-the-art feature selection algorithms, such as Student's t-test, Relief-F, Information Gain, Gini Index, Chi-Square, Fisher Kernel Score, Kruskal-Wallis, Minimum Redundancy Maximum Relevance, and Sparse Logistic Regression with Stability Selection, have been extensively exploited to identify informative features for AD using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). An integrative approach which uses blood plasma protein, Magnetic Resonance Imaging, and psychometric assessment score biomarkers has been explored. This work also analyzes techniques to handle unbalanced data and evaluates the efficacy of sampling techniques. Performance of the feature selection algorithms is evaluated using the relevance of derived features and the predictive power of the algorithms with Random Forest and Support Vector Machine classifiers. Performance metrics such as Accuracy, Sensitivity, Specificity, and area under the Receiver Operating Characteristic curve (AUC) have been used for evaluation. The feature selection algorithms best suited to analyze AD proteomics data have been proposed. The key biomarkers distinguishing healthy and AD patients, Mild Cognitive Impairment (MCI) converters and non-converters, and healthy and MCI patients have been identified.
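As one example of the filter-style selectors listed in this abstract, a two-sample t-test ranking can be sketched in a few lines. The data below is synthetic and the function name is illustrative:

```python
import numpy as np

def t_test_ranking(X, y):
    """Rank features by the absolute two-sample t statistic between the
    classes in y (0/1) -- the simplest of the filter-style selectors."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                 b.var(axis=0, ddof=1) / len(b))
    t = (a.mean(axis=0) - b.mean(axis=0)) / se
    return np.argsort(-np.abs(t))  # most discriminative feature first

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.array([0] * 50 + [1] * 50)
X[y == 1, 2] += 3.0               # plant a strong group difference in feature 2
print(t_test_ranking(X, y)[0])    # 2
```

In the AD setting, the two classes would be diagnostic groups (e.g. healthy vs. AD) and the columns would be protein, imaging, or psychometric features; the top-ranked features would then feed a Random Forest or SVM classifier.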
ContributorsDubey, Rashmi (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created2012
Description
With the many technologies introduced over the past few years, gesture-based human-computer interaction is becoming the new phase in encompassing the creativity and abilities of users to communicate and interact with devices. Because the way free-space gestures are defined influences user preference and the long-term usability of gesture-driven devices, it is necessary to define low-stress and intuitive gestures for users to interact with gesture recognition systems. To measure stress, a Galvanic Skin Response instrument was used as a primary indicator, which provided evidence of the relationship between stress and intuitive gestures, as well as user preferences towards certain tasks and gestures during performance. Fifteen participants engaged in creating and performing their own gestures for specified tasks that would be required during the use of free-space gesture-driven devices. The tasks included "activation of the display," scroll, page, selection, undo, and "return to main menu." Participants were also asked to repeat each gesture for around ten seconds, which gave them time and further insight into whether their gestures were appropriate for them and for the given task. Surveys were given to the users at two different times: one after they had defined their gestures and another after they had repeated their gestures. In the surveys, they ranked their gestures based on comfort, intuition, and ease of communication. From these rankings, the highest-ranked gestures, judged on comfort and intuition, were selected as the most health-efficient gestures.
ContributorsLam, Christine (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Barrett, The Honors College (Contributor) / Ira A. Fulton School of Engineering (Contributor) / School of Arts, Media and Engineering (Contributor) / Department of English (Contributor) / Computing and Informatics Program (Contributor)
Created2015-05
Description
Lean and Green construction methodologies are prevalent in today's construction industry. Green construction implementation in buildings has progressed quickly due to the popularity and development of building rating systems, such as LEED, Green Globes, and the Living Building Challenge. Similarly, lean construction has become more popular as this philosophy often leads to efficient construction and improved owner satisfaction. Green construction is defined as using sustainable materials in the construction process to eliminate environmental degradation and ensure that material and equipment use aligns with the design intent and promotes efficient building performance. Lean construction is defined as a set of operational/systematic processes that reduce waste and eliminate defects in the project process throughout its lifecycle. This paper describes the implementation of Lean and Green construction processes to determine the trends that each methodology contributes to a project as well as how these methodologies synergize. The authors identified common elements of each methodology through semi-structured interviews with several construction industry professionals who had extensive experience with lean and green construction. Interviewees report that lean and green construction philosophies are different "flavors" of the industry; however, they also state that, if implemented together, these processes often result in a high-performance building.
ContributorsMaris, Kelsey Lynn (Co-author) / Parrish, Kristen (Co-author, Thesis director) / Olson, Patricia (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / Del E. Webb Construction (Contributor)
Created2015-05
Description
As the demand for natural resources increases with population growth, importance has been placed on environmental issues due to increasing pressure on land, water, air, and raw materials. In order to sustain the environment and natural resources, sustainable engineering and earth systems engineering and management (ESEM) are vital for future populations. The Aral Sea and the Florida Everglades are both regions that are heavily impacted by human design decisions. Comparing and analyzing the implications and outcomes of these human design decisions allows conclusions to be made regarding how earth systems engineering and management can best be accomplished. The Aral Sea, located in central Asia between Kazakhstan and Uzbekistan, is a case study of an ecosystem that has collapsed under the pressure of agricultural expansion. This has caused extensive economic, health, agricultural, and environmental impacts. The Everglades in southern Florida is a case study where the ecosystem has evolved from its original state, rather than collapsed, due to human settlement and water resource demand. In order to determine effective sustainable engineering approaches, the case studies are evaluated using ESEM principles, which serve as guidance for better sustainable engineering practice. When comparing the two case studies, it appears that the Everglades is an adequate representation of effective ESEM approaches, while the Aral Sea is not reflective of effective approaches. When practicing ESEM, it is critical that the principles be applied as a whole rather than individually. While the ESEM principles do not guarantee success, they offer an effective guide to dealing with the complexity and uncertainty in many of today's systems.
ContributorsRidley, Brooke Nicole (Author) / Allenby, Brad (Thesis director) / Parrish, Kristen (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12
Description
The objective of Under the Camper Shell was to build a prototype of a full living environment within the confines of a pickup truck bed and camper shell. The total volume available to work with is approximately 85 ft³. This full living environment entails functioning systems for essential modern living, providing shelter and spaces for cooking, sleeping, eating, and sanitation. The project proved to be very challenging from the start. First, the livable space is extremely small, being only tall enough for one to sit up straight. The truck and camper shell were both borrowed items, so no modifications were allowed for either, e.g. drilling holes for mounting. The idea was to create a system that could be easily removed, transforming the vehicle from a camper to a utility truck. The systems developed for the living environment would be modular and transformative so as to accommodate different necessities when packing. The goal was to create a low-water system with sustainability in mind. Insulating the space was the largest challenge and the most rewarding, using body heat to warm the space and insulate from the elements. Comfort systems were made of high-density foam cushions in sections to allow folding and stacking for different functions (sleeping, lounging, and sitting). Sanitation is necessary for healthy living and regular human function. A composting toilet was used for the design, lending to low water usage, and is sustainable over time. Sawdust would be necessary for its function, but upon composting, the unit will generate sufficient heat to act as a space heater. Showering serves the functions of exfoliation and ridding the body of bacteria, both of which bath wipes can accomplish, limiting massive volumes of water storage and waste. Storage systems were also designed for modularity. Hooks were installed along the length of the bed for hanging or securing items as necessary; some hold hanging bags. A cabinetry rail also runs the length of the bed to allow movement of hard storage to accommodate different scenarios. The cooking method is "sous-vide," a method of cooking food in air-tight bags submerged in hot water. The water is reusable for cooking and no dishes are necessary for serving. Overall, the prototype fulfilled its function as a full living environment, with few improvements necessary for future use.
ContributorsLimsirichai, Pimwadee (Author) / Foy, Joseph (Thesis director) / Parrish, Kristen (Committee member) / Barrett, The Honors College (Contributor) / Materials Science and Engineering Program (Contributor) / School of Sustainability (Contributor)
Created2014-12