Matching Items (748)

Description

Determining the provenance of a statement that appears in social media is a significant challenge. Provenance describes the origin, custody, and ownership of something. Most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale to online social media, and consequently there is a gap in provenance methodologies and technologies that presents exciting research opportunities. The guiding vision is to use social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information that is not readily available to the average social media user. This research introduces an approach and builds a foundation aimed at realizing a provenance data capability for social media users that is not accessible today.
Contributors: Barbier, Geoffrey P (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multiphase flows are an important part of many natural and technological phenomena such as ocean-air coupling (which is important for climate modeling) and the atomization of liquid fuel jets in combustion engines. The unique challenges of multiphase flow often make analytical solutions to the governing equations impossible and experimental investigations very difficult. Thus, high-fidelity numerical simulations can play a pivotal role in understanding these systems. This dissertation describes numerical methods developed for complex multiphase flows and the simulations performed using these methods. First, the issue of multiphase code verification is addressed. Code verification answers the question "Is this code solving the equations correctly?" The method of manufactured solutions (MMS) is a procedure for generating exact benchmark solutions which can test the most general capabilities of a code. The chief obstacle to applying MMS to multiphase flow lies in the discontinuous nature of the material properties at the interface. An extension of the MMS procedure to multiphase flow is presented, using an adaptive marching tetrahedron style algorithm to compute the source terms near the interface. Guidelines for the use of the MMS to help locate coding mistakes are also detailed. Three multiphase systems are then investigated: (1) the thermocapillary motion of three-dimensional and axisymmetric drops in a confined apparatus, (2) the flow of two immiscible fluids completely filling an enclosed cylinder and driven by the rotation of the bottom endwall, and (3) the atomization of a single drop subjected to a high shear turbulent flow. The systems are simulated numerically by solving the full multiphase Navier-Stokes equations coupled to the various equations of state and a level set interface tracking scheme based on the refined level set grid method. The codes have been parallelized using MPI in order to take advantage of today's very large parallel computational architectures. In the first system, the code's ability to handle surface tension and large temperature gradients is established. In the second system, the code's ability to simulate simple interface geometries with strong shear is demonstrated. In the third system, the ability to handle extremely complex geometries and topology changes with strong shear is shown.
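
The MMS workflow described above is easiest to see in a single-phase, one-dimensional setting. Below is a minimal sketch (an illustration of the general MMS idea only, not the dissertation's multiphase marching-tetrahedron implementation): a solution is manufactured for a Poisson problem, the matching source term is derived analytically, and the observed order of accuracy of a second-order finite-difference solver is checked against the expected value.

```python
import numpy as np

def mms_poisson_order_check():
    """Method of Manufactured Solutions for -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.

    Manufactured solution: u(x) = sin(pi x)  =>  source term f(x) = pi^2 sin(pi x).
    A second-order central-difference solver should show errors falling ~4x per grid doubling.
    """
    u_exact = lambda x: np.sin(np.pi * x)
    source  = lambda x: np.pi**2 * np.sin(np.pi * x)

    errors = []
    for n in (16, 32, 64, 128):                  # number of interior points
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        # Tridiagonal operator approximating -d^2/dx^2 with central differences
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        u_num = np.linalg.solve(A, source(x))
        errors.append(np.max(np.abs(u_num - u_exact(x))))

    # Observed order of accuracy between successive grids
    orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
    print("max errors:", errors)
    print("observed orders (expected ~2):", orders)

mms_poisson_order_check()
```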
Contributors: Brady, Peter, Ph.D. (Author) / Herrmann, Marcus (Thesis advisor) / Lopez, Juan (Thesis advisor) / Adrian, Ronald (Committee member) / Calhoun, Ronald (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A new method of adaptive mesh generation for the computation of fluid flows is investigated. The method utilizes gradients of the flow solution to adapt the size and stretching of elements or volumes in the computational mesh, as is commonly done in the conventional Hessian approach. However, in the new method, higher-order gradients are used in place of the Hessian. The method is applied to the finite element solution of the incompressible Navier-Stokes equations on model problems. Results indicate that a significant efficiency benefit is realized.
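
As a rough illustration of the idea of driving element size with solution derivatives beyond the Hessian (a one-dimensional sketch only; the dissertation works with finite-element meshes for the incompressible Navier-Stokes equations), the snippet below grades a 1-D mesh by equidistributing a monitor function built from a higher-order derivative of a sample solution.

```python
import numpy as np

def adapt_mesh_1d(u_func, n_nodes=41, n_sample=2001, order=3, eps=1e-3):
    """Grade a 1-D mesh on [0, 1] using a higher-order derivative as the monitor.

    Elements shrink where |d^order u / dx^order| is large (equidistribution of the
    monitor's cumulative integral). A toy illustration of using derivatives beyond
    the Hessian to drive element size, not the dissertation's algorithm.
    """
    xs = np.linspace(0.0, 1.0, n_sample)
    d = u_func(xs)
    for _ in range(order):                        # repeated numerical differentiation
        d = np.gradient(d, xs)
    monitor = (np.abs(d) + eps) ** (1.0 / order)  # soften the derivative's dynamic range
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(xs))))
    cdf /= cdf[-1]
    # Invert the cumulative monitor: equal increments of cdf -> new node locations
    return np.interp(np.linspace(0.0, 1.0, n_nodes), cdf, xs)

# Example: a solution with a sharp internal layer around x = 0.5
nodes = adapt_mesh_1d(lambda x: np.tanh(50.0 * (x - 0.5)))
print("smallest element:", np.min(np.diff(nodes)))
print("largest element: ", np.max(np.diff(nodes)))
```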
Contributors: Shortridge, Randall (Author) / Chen, Kang Ping (Thesis advisor) / Herrmann, Marcus (Thesis advisor) / Wells, Valana (Committee member) / Huang, Huei-Ping (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Early-age cracks in fresh concrete occur mainly due to a high rate of surface evaporation and the restraint offered by the contracting solid phase. Available test methods that simulate severe drying conditions, however, were not originally designed to focus on evaporation and transport characteristics of the liquid-gas phases in a hydrating cementitious microstructure. Therefore, these tests lack accurate measurement of the drying rate, and data interpretation based on the principles of transport properties is limited. A vacuum-based test method capable of simulating early-age cracks in 2-D cement paste is developed that continuously monitors weight loss and changes to the surface characteristics. 2-D crack evolution is documented using time-lapse photography. Effects of sample size, w/c ratio, initial curing, and fiber content are studied. In the subsequent analysis, the cement paste phase is considered a porous medium and moisture transport is described based on surface mass transfer and internal moisture transport characteristics. Results indicate that drying occurs in two stages: a constant drying rate period (stage I), followed by a falling drying rate period (stage II). Vapor diffusion in stage I and unsaturated flow within the porous medium in stage II determine the overall rate of evaporation. The mass loss results are analyzed using diffusion-based models. Results show that moisture diffusivity in stage I is higher than its value in stage II by more than one order of magnitude. The drying model is used in conjunction with a shrinkage model to predict the development of capillary pressures. A similar approach is implemented for drying restrained ring specimens to predict 1-D crack width development. An analytical approach relates diffusion, shrinkage, creep, tensile, and fracture properties to interpret the experimental data. An evaporation potential is introduced based on the boundary layer concept, mass transfer, and a driving force consisting of the concentration gradient. The effect of wind velocity is reflected in the Reynolds number, which affects the boundary layer on the sample surface. This parameter, along with the Schmidt and Sherwood numbers, is used to predict the mass transfer coefficient. The concentration gradient is shown to be a strong function of temperature and relative humidity and is used to predict the evaporation potential. Results of the modeling efforts are compared with a variety of test results reported in the literature. Diffusivity data and results of 1-D and 2-D image analyses indicate significant effects of fibers on controlling early-age cracks. The presented models are capable of predicting evaporation rates and moisture flow through hydrating cement-based materials during early-age drying and shrinkage conditions.
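
The boundary-layer route to an evaporation potential can be illustrated with textbook correlations. The sketch below is only an assumption-laden illustration (laminar flat-plate correlation Sh = 0.664 Re^(1/2) Sc^(1/3), fixed air properties, surface assumed saturated and at air temperature), not the calibrated model developed in the dissertation.

```python
import math

def evaporation_flux(wind_speed, air_temp_c, rel_humidity, plate_length=0.3):
    """Estimate the evaporation flux (kg/m^2/s) from a saturated surface.

    Illustrative only: laminar flat-plate correlation Sh = 0.664 Re^0.5 Sc^(1/3),
    with the driving force taken as the vapor-density difference between the
    (assumed saturated) surface and the ambient air.
    """
    nu = 1.55e-5        # kinematic viscosity of air, m^2/s (near 25 C)
    d_va = 2.5e-5       # diffusivity of water vapor in air, m^2/s
    r_v = 461.5         # gas constant of water vapor, J/(kg K)

    re = wind_speed * plate_length / nu
    sc = nu / d_va
    sh = 0.664 * math.sqrt(re) * sc ** (1.0 / 3.0)
    h_m = sh * d_va / plate_length                  # mass-transfer coefficient, m/s

    # Magnus formula for saturation vapor pressure (Pa), then ideal-gas vapor density
    p_sat = 610.94 * math.exp(17.625 * air_temp_c / (air_temp_c + 243.04))
    t_abs = air_temp_c + 273.15
    rho_surface = p_sat / (r_v * t_abs)             # saturated at the drying surface
    rho_ambient = rel_humidity * p_sat / (r_v * t_abs)

    return h_m * (rho_surface - rho_ambient)

# Example: 2 m/s wind, 25 C air, 40% relative humidity
flux = evaporation_flux(2.0, 25.0, 0.40)
print(f"evaporation flux ~ {flux * 3600:.3f} kg/m^2/h")
```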
Contributors: Bakhshi, Mehdi (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Zapata, Claudia E. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing irrelevant, redundant, and noisy information while considering the correlation among different labels in multi-label learning. Specifically, I propose Hypergraph Spectral Learning (HSL) to perform dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The regularization effect on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is elucidated in this thesis. The relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is also investigated. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including canonical correlation analysis, orthonormalized partial least squares, linear discriminant analysis, and hypergraph spectral learning. The first is a direct least squares approach which allows the use of different regularization penalties but is applicable only under a certain assumption; the second is a two-stage approach which can be applied in the regularization setting without any assumption. Furthermore, an online implementation for the same class of dimensionality reduction algorithms is proposed for the case when the data arrive sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully to Drosophila gene expression pattern image annotation. The experimental results on several benchmark data sets in multi-label learning also demonstrate the effectiveness and efficiency of the proposed algorithms.
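
As a point of reference for the dimensionality reduction algorithms listed above, here is a minimal regularized CCA sketch. It forms covariance matrices explicitly, so it illustrates only the definition, not the scalable least-squares or two-stage implementations (nor the released Matlab toolbox); the toy data are invented for the example.

```python
import numpy as np

def regularized_cca(X, Y, n_components=2, reg=1e-3):
    """Minimal regularized CCA: project features X (n x d) and label matrix Y (n x k)
    onto directions of maximal correlation.

    Illustration only; explicit covariance matrices do not scale the way the
    least-squares / two-stage formulations discussed above do.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    cxy = X.T @ Y / n

    def inv_sqrt(c):                      # symmetric inverse square root
        w, v = np.linalg.eigh(c)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

    cxx_is, cyy_is = inv_sqrt(cxx), inv_sqrt(cyy)
    u, s, vt = np.linalg.svd(cxx_is @ cxy @ cyy_is)

    wx = cxx_is @ u[:, :n_components]     # projection for the features
    wy = cyy_is @ vt.T[:, :n_components]  # projection for the labels
    return wx, wy, s[:n_components]       # singular values = canonical correlations

# Toy multi-label example: 200 samples, 20 features, 5 binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
Y = (X[:, :5] + 0.5 * rng.normal(size=(200, 5)) > 0).astype(float)
wx, wy, corr = regularized_cca(X, Y, n_components=3)
print("leading canonical correlations:", np.round(corr, 3))
```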
Contributors: Sun, Liang (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Liu, Huan (Committee member) / Mittelmann, Hans D. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to learning the most basic and essential computational concepts in computer science (CS). However, two of the most often heard complaints about such courses are that (1) they are divorced from the reality of application and (2) they make the learning of the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible. In general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed-methods study conducted during the fall 2009 semester of CS1 courses at Arizona State University. This study explored and assessed students' comprehension of three core computational concepts - abstraction, arrays of objects, and inheritance - in both algorithm design and problem solving. Through this investigation, student profiles were categorized based on their scores, and their mistakes were classified into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.
Contributors: Billionniere, Elodie V (Author) / Collofello, James (Thesis advisor) / Ganesh, Tirupalavanam G. (Thesis advisor) / VanLehn, Kurt (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The world is grappling with two serious issues related to energy and climate change. The use of solar energy is receiving much attention due to its potential as one of the solutions. Air conditioning is particularly attractive as a solar energy application because of the near coincidence of peak cooling loads with the available solar power. Recently, researchers have started serious discussions of using adsorptive processes for refrigeration and heat pumps. There has been some success with >100 ton adsorption systems, but none exists in the <10 ton size range required for residential air conditioning. There are myriad reasons for the lack of small-scale systems, such as low coefficient of performance (COP), high capital cost, scalability, and limited performance data. A numerical model to simulate an adsorption system was developed and its performance was compared with similar thermal-powered systems. Results showed that both the adsorption and absorption systems provide equal cooling capacity for a driving temperature range of 70-120 °C, but the adsorption system is the only one to deliver cooling at temperatures below 65 °C. Additionally, the absorption and desiccant systems provide better COP at low temperatures, but the COPs of the three systems converge at higher regeneration temperatures. To further investigate the viability of solar-powered heat pump systems, an hourly building load simulation was developed for a single-family house in the Phoenix metropolitan area. A thermal as well as economic performance comparison was conducted for adsorption, absorption, and solar photovoltaic (PV) powered vapor compression systems over a range of solar collector areas and storage capacities. The results showed that for a small collector area solar PV is more cost-effective, whereas adsorption is better than absorption for larger collector areas. The optimum solar collector area and storage size were determined for each type of solar system. As part of this dissertation work, a small-scale proof-of-concept prototype of the adsorption system was assembled using some novel heat transfer enhancement strategies. Activated carbon and butane were chosen as the adsorbent-refrigerant pair. It was found that a COP of 0.12 and a cooling capacity of 89.6 W can be achieved.
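
A toy version of the hourly balance between solar heat, storage, and cooling load might look like the sketch below. The irradiance and load profiles, collector efficiency, and COP are invented placeholders, not the dissertation's Phoenix weather data or calibrated system models; the point is only how collector area and storage size trade off against the fraction of the load met.

```python
import math

def solar_fraction(collector_area_m2, storage_kwh, cop=0.5, collector_eff=0.6):
    """Toy hourly balance for a thermally driven (e.g., adsorption) chiller.

    Synthetic irradiance and cooling-load profiles stand in for real weather and
    building data; COP and collector efficiency are assumed constants. Returns
    the fraction of the daily cooling load met by solar heat.
    """
    met = demand = stored = 0.0                  # 'stored' is kWh of heat in the tank
    for hour in range(24):
        # Irradiance: up to ~1 kW/m^2 around solar noon, zero at night
        sun = max(0.0, math.sin(math.pi * (hour - 6) / 12)) if 6 <= hour <= 18 else 0.0
        heat_in = collector_area_m2 * sun * collector_eff   # kWh collected this hour
        # Cooling load peaks in the afternoon (kWh of cooling per hour)
        load = 3.0 + 4.0 * max(0.0, math.sin(math.pi * (hour - 10) / 12))
        stored = min(storage_kwh, stored + heat_in)
        used = min(stored, load / cop)                      # heat drawn to drive the chiller
        stored -= used
        met += used * cop
        demand += load
    return met / demand

for area in (10, 20, 40):
    print(f"{area:>3} m^2 collector, 20 kWh storage -> "
          f"solar fraction {solar_fraction(area, 20.0):.2f}")
```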
Contributors: Gupta, Yeshpal (Author) / Phelan, Patrick E (Thesis advisor) / Bryan, Harvey J. (Committee member) / Mikellidas, Pavlos G (Committee member) / Pacheco, Jose R (Committee member) / Trimble, Steven W (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The flow around a golf ball is studied using direct numerical simulation (DNS). An immersed boundary approach is adopted in which the incompressible Navier-Stokes equations are solved using a fractional step method on a structured, staggered grid in cylindrical coordinates. The boundary conditions on the surface are imposed using momentum forcing in the vicinity of the boundary. The flow solver is parallelized using a domain decomposition strategy and message passing interface (MPI), and exhibits linear scaling on as many as 500 processors. A laminar flow case is presented to verify the formal accuracy of the method. The immersed boundary approach is validated by comparison with computations of the flow over a smooth sphere. Simulations are performed at Reynolds numbers of 2.5 × 10⁴ and 1.1 × 10⁵ based on the diameter of the ball and the freestream speed and using grids comprised of more than 1.14 × 10⁹ points. Flow visualizations reveal the location of separation, as well as the delay of complete detachment. Predictions of the aerodynamic forces at both Reynolds numbers are in reasonable agreement with measurements. Energy spectra of the velocity quantify the dominant frequencies of the flow near separation and in the wake. Time-averaged statistics reveal characteristic physical patterns in the flow as well as local trends within dimples. A mechanism of drag reduction due to the dimples is confirmed, and metrics for dimple optimization are proposed.
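
Extracting dominant frequencies from a velocity probe signal, as mentioned above, amounts to computing a power spectrum. The sketch below uses a synthetic signal with an invented "shedding" tone (none of the numbers come from the dissertation's DNS) to show the basic procedure.

```python
import numpy as np

def power_spectrum(signal, dt):
    """One-sided power spectrum of a uniformly sampled velocity time series."""
    signal = signal - np.mean(signal)
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.fft.rfftfreq(len(signal), d=dt), spec

# Synthetic wake probe: a dominant tone buried in broadband noise
rng = np.random.default_rng(1)
dt, n = 1e-3, 2**14
t = np.arange(n) * dt
u = 1.0 + 0.2 * np.sin(2 * np.pi * 4.7 * t) + 0.05 * rng.standard_normal(n)

freqs, spec = power_spectrum(u, dt)
f_peak = freqs[1:][np.argmax(spec[1:])]          # skip the zero-frequency bin
diameter, u_inf = 0.0427, 1.0                    # illustrative length and velocity scales
print(f"dominant frequency ~ {f_peak:.2f} Hz, "
      f"Strouhal number f*D/U ~ {f_peak * diameter / u_inf:.3f}")
```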
Contributors: Smith, Clinton E (Author) / Squires, Kyle D (Thesis advisor) / Balaras, Elias (Committee member) / Herrmann, Marcus (Committee member) / Adrian, Ronald (Committee member) / Stanzione, Daniel C (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Free/Libre Open Source Software (FLOSS) is the product of volunteers collaborating to build software in an open, public manner. The large number of FLOSS projects, combined with the data that is inherently archived with this online process, makes studying this phenomenon attractive. Some FLOSS projects are very functional, well-known, and successful, such as Linux, the Apache Web Server, and Firefox. However, for every successful FLOSS project there are hundreds of projects that are unsuccessful. These projects fail to attract sufficient interest from developers and users and become inactive or abandoned before useful functionality is achieved. The goal of this research is to better understand the open source development process and gain insight into why some FLOSS projects succeed while others fail. This dissertation presents an agent-based model of the FLOSS development process. The model is built around the concept that projects must manage to attract contributions from a limited pool of participants in order to progress. In the model, developer and user agents select from a landscape of competing FLOSS projects based on perceived utility. Via the selections that are made and the subsequent contributions, some projects are propelled to success while others remain stagnant and inactive. Findings from a diverse set of empirical studies of FLOSS projects are used to formulate the model, which is then calibrated on empirical data from multiple sources of public FLOSS data. The model is able to reproduce key characteristics observed in the FLOSS domain and is capable of making accurate predictions. The model is used to gain a better understanding of the FLOSS development process, including what it means for FLOSS projects to be successful and what conditions increase the probability of project success. It is shown that FLOSS is a producer-driven process, and project factors that are important for developers selecting projects are identified. In addition, it is shown that projects are sensitive to when core developers make contributions, and the exhibited bandwagon effects mean that some projects will be successful regardless of competing projects. Recommendations for improving software engineering in general, based on the positive characteristics of FLOSS, are also presented.
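
The selection dynamic at the heart of the model can be caricatured in a few lines. In the sketch below (a toy rich-get-richer process with invented parameters, not the calibrated model from the dissertation), developer agents pick projects with probability proportional to a crude utility, and contributions concentrate in a few "successful" projects while most stay inactive.

```python
import random

def simulate_floss(n_projects=50, n_developers=200, steps=100,
                   contribute_prob=0.05, seed=7):
    """Toy agent-based sketch of FLOSS project selection.

    Each step, every developer picks a project with probability proportional to
    its perceived utility (here simply 1 + accumulated contributions, a crude
    proxy for activity) and sometimes contributes. The bandwagon effect
    concentrates effort in a handful of projects while most remain inactive.
    """
    random.seed(seed)
    contributions = [0] * n_projects
    for _ in range(steps):
        utilities = [1.0 + c for c in contributions]
        for _dev in range(n_developers):
            choice = random.choices(range(n_projects), weights=utilities, k=1)[0]
            if random.random() < contribute_prob:
                contributions[choice] += 1
    return sorted(contributions, reverse=True)

ranked = simulate_floss()
print("top 5 projects by contributions:", ranked[:5])
print("projects that never attracted a contribution:", sum(1 for c in ranked if c == 0))
```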
Contributors: Radtke, Nicholas Patrick (Author) / Collofello, James S. (Thesis advisor) / Janssen, Marco A (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Sundaram, Hari (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Finding the optimal solution to a problem with an enormous search space can be challenging. Unless a combinatorial construction technique is found that also guarantees the optimality of the resulting solution, this could be an infeasible task. If such a technique is unavailable, different heuristic methods are generally used to improve the upper bound on the size of the optimal solution. This dissertation presents an alternative method which can be used to improve a solution to a problem rather than construct a solution from scratch. Necessity analysis, which is the key to this approach, is the process of analyzing whether each element of a solution is actually necessary. The post-optimization algorithm presented here utilizes the result of the necessity analysis to improve the quality of the solution by eliminating unnecessary objects from the solution. While this technique could potentially be applied to different domains, this dissertation focuses on k-restriction problems, where a solution to the problem can be represented as an array. A scalable post-optimization algorithm for covering arrays is described, which starts from a valid solution and performs necessity analysis to iteratively improve the quality of the solution. It is shown that not only can this technique improve upon the previously best known results, it can also be added as a refinement step to any construction technique, and in most cases further improvements are expected. The post-optimization algorithm is then modified to accommodate every k-restriction problem, and this generic algorithm can be used as a starting point to create a reasonably sized solution for any such problem. This generic algorithm is then further refined for hash family problems by adding a conflict graph analysis to the necessity analysis phase. By recoloring the conflict graphs, a new degree of flexibility is explored that can further improve the quality of the solution.
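
A bare-bones version of necessity analysis for strength-2 covering arrays can be written in a few lines. The sketch below (a greedy row-removal illustration only; the dissertation's post-optimization is far more sophisticated and also handles general k-restriction and hash family problems) marks a row unnecessary when every pairwise interaction it covers is also covered by some other row, and deletes such rows until none remain.

```python
from collections import Counter
from itertools import combinations, product

def is_covering_array(rows, k, v, t=2):
    """True if every t-way interaction over v symbols appears in some row."""
    needed = {(cols, vals) for cols in combinations(range(k), t)
              for vals in product(range(v), repeat=t)}
    covered = {(cols, tuple(row[c] for c in cols))
               for row in rows for cols in combinations(range(k), t)}
    return needed <= covered

def post_optimize(rows, k, t=2):
    """Greedy necessity analysis: drop any row whose interactions are all covered elsewhere."""
    count = Counter((cols, tuple(row[c] for c in cols))
                    for row in rows for cols in combinations(range(k), t))
    rows = [list(r) for r in rows]
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(rows):
            pairs = [(cols, tuple(row[c] for c in cols)) for cols in combinations(range(k), t)]
            if all(count[p] >= 2 for p in pairs):    # this row is not necessary
                for p in pairs:
                    count[p] -= 1
                del rows[i]
                changed = True
                break
    return rows

# Start from a deliberately redundant solution: the full 2^4 factorial covers all
# pairwise interactions with 16 rows, although far fewer rows suffice.
rows = [list(bits) for bits in product(range(2), repeat=4)]
optimized = post_optimize(rows, k=4)
print(len(rows), "->", len(optimized), "rows; still a covering array:",
      is_covering_array(optimized, k=4, v=2))
```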
Contributors: Nayeri, Peyman (Author) / Colbourn, Charles (Thesis advisor) / Konjevod, Goran (Thesis advisor) / Sen, Arunabha (Committee member) / Stanzione Jr, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011