Matching Items (1,463)
Description
A low-cost expander/combustor device that takes compressed air, adds thermal energy, and then expands the gas to drive an electrical generator is to be designed by modifying an existing reciprocating spark-ignition engine. The engine used is the 6.5 hp Briggs and Stratton series 122600 engine. Compressed air stored in a tank at a particular pressure will be introduced during the compression stage of the engine cycle to reduce pump work. In the modified design the intake and exhaust valve timings are altered to achieve this process. The time required to fill the combustion chamber with compressed air to the storage pressure immediately before spark, and the state of the air with respect to crank angle, are modeled numerically using a crank-step energy and mass balance model. The results are used to complete the engine cycle analysis based on air-standard assumptions and an air-fuel ratio of 15 for gasoline. It is found that at the baseline storage conditions (280 psi, 70 °F) the modified engine does not meet the imposed constraint of staying below the maximum pressure of the unmodified engine. A new storage pressure of 235 psi is recommended. This provides only a 7.7% increase in thermal efficiency for the same work output. Modifying this engine for such a small efficiency gain is not recommended.
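The peak-pressure constraint can be illustrated with a minimal air-standard, constant-volume heat-addition model. This is a sketch only: the specific heat, heating value, and fill temperature below are typical textbook figures, not the thesis's values; only the air-fuel ratio of 15 and the 280/235 psi storage pressures come from the abstract.

```python
CV = 718.0      # J/(kg*K), specific heat of air at constant volume (typical)
LHV = 44.0e6    # J/kg, lower heating value of gasoline (typical figure)
AFR = 15.0      # air-fuel ratio from the abstract

def peak_pressure_psi(p_fill_psi, t_fill_k):
    """Pressure after constant-volume heat addition in a chamber pre-filled
    with compressed air at the storage state (ideal gas, fixed volume)."""
    dT = LHV / (AFR * CV)                       # temperature rise of the charge
    return p_fill_psi * (t_fill_k + dT) / t_fill_k

# Lowering the fill pressure lowers the peak pressure proportionally,
# which is why a reduced storage pressure can satisfy the pressure limit.
p_280 = peak_pressure_psi(280.0, 294.0)
p_235 = peak_pressure_psi(235.0, 294.0)
```

In this idealized model the peak pressure scales linearly with the fill pressure, mirroring the abstract's conclusion that dropping the storage pressure from 280 to 235 psi keeps the modified engine under the unmodified engine's maximum pressure.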
ContributorsJoy, Lijin (Author) / Trimble, Steve (Thesis advisor) / Davidson, Joseph (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created2011
Description
A numerical study of incremental spin-up and spin-up from rest of a thermally stratified fluid enclosed within a right circular cylinder with rigid bottom and side walls and a stress-free upper surface is presented. Thermally stratified spin-up is a typical example of baroclinity: it is initiated by a sudden increase in rotation rate, and the resulting tilting of isotherms gives rise to a baroclinic source of vorticity. Research by Smirnov et al. [2010a] showed the differences in the evolution of instabilities when Dirichlet and Neumann thermal boundary conditions were applied at the top and bottom walls. The parametric variations carried out in this dissertation confirm the instability patterns they observed for the given aspect ratio and for Rossby numbers greater than 0.5. The results also reveal that the flow remains axisymmetric and stable in short-aspect-ratio containers, independent of the amount of rotational increment imparted. An investigation of the vorticity components provides a framework for the baroclinic vorticity feedback mechanism, which plays an important role in the delayed rise of instabilities when Dirichlet thermal boundary conditions are applied.
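For orientation, the characteristic time over which spin-up proceeds is often estimated from the classical Ekman spin-up scaling, tau ~ E^(-1/2)/Omega with Ekman number E = nu/(Omega H^2). The sketch below evaluates that textbook estimate; the parameter values are illustrative, not taken from the dissertation.

```python
import math

def spinup_time(nu, omega, height):
    """Classical Ekman spin-up timescale tau = E^(-1/2) / Omega,
    with Ekman number E = nu / (Omega * H^2); equals H / sqrt(nu * Omega)."""
    ekman = nu / (omega * height ** 2)
    return 1.0 / (math.sqrt(ekman) * omega)

# Illustrative case: water-like viscosity, 1 rad/s rotation, 10 cm deep layer.
tau = spinup_time(nu=1e-6, omega=1.0, height=0.1)
```

The estimate shows why spin-up is much faster than pure viscous diffusion: the Ekman layers recirculate the interior fluid on a timescale shorter than H^2/nu by a factor of E^(1/2).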
ContributorsKher, Aditya Deepak (Author) / Chen, Kangping (Thesis advisor) / Huang, Huei-Ping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created2011
Description
S-Taliro is a fully functional Matlab toolbox that searches for trajectories of minimal robustness in hybrid systems implemented as either m-functions or Simulink/Stateflow models. Trajectories with minimal robustness are found by automatically testing hybrid systems against user specifications. In this work we use Metric Temporal Logic (MTL) to describe the user specifications for the hybrid systems. We then try to falsify the MTL specification through global minimization of a robustness metric. Global minimization is carried out using stochastic optimization algorithms such as Monte Carlo (MC) and Extended Ant Colony Optimization (EACO). Irrespective of the type of model provided as input to S-Taliro, the user needs to specify the MTL specification, the initial conditions, and the bounds on the inputs. S-Taliro then uses this information to generate test inputs which are used to simulate the system. The simulation trace is provided as input to Taliro, which computes the robustness estimate of the MTL formula. Global minimization of this robustness metric is performed to generate new test inputs, which in turn produce simulation traces closer to falsifying the MTL formula. Traces with negative robustness values indicate that the simulation trace falsified the MTL formula. Traces with positive robustness values are also of great importance because they indicate how robust the system is against the given specification. S-Taliro has been seamlessly integrated into the Matlab environment, which is extensively used for model-based development of control software. Moreover, the toolbox has been developed in a modular fashion, so adding new optimization algorithms is easy and straightforward. In this work I present the architecture of S-Taliro and demonstrate it on a few benchmark problems.
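The sample-simulate-minimize loop described above can be sketched as follows. Everything here is a toy stand-in, not S-Taliro's API: the "system" is a decaying scalar trajectory, the specification is "always x < 0.5", and its robustness is the worst-case slack over the trace; a negative minimum means the specification was falsified.

```python
import random

def simulate(x0):
    """Toy stand-in for a model simulation: x0 decaying over 10 steps."""
    return [x0 * (0.8 ** k) for k in range(10)]

def robustness(trace, threshold=0.5):
    """Robustness of 'always x < threshold': min slack over the trace.
    Negative means the trace violates (falsifies) the specification."""
    return min(threshold - x for x in trace)

def falsify(bounds, samples=1000, seed=0):
    """Monte Carlo search for the input of minimal robustness."""
    rng = random.Random(seed)
    best_x, best_rob = None, float("inf")
    for _ in range(samples):
        x0 = rng.uniform(*bounds)
        rob = robustness(simulate(x0))
        if rob < best_rob:
            best_x, best_rob = x0, rob
    return best_x, best_rob

x_min, rob_min = falsify(bounds=(0.0, 1.0))
```

S-Taliro's stochastic optimizers (MC, EACO) play the role of the uniform sampler here: they bias new test inputs toward regions of low robustness instead of sampling blindly.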
ContributorsAnnapureddy, Yashwanth Singh Rahul (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created2011
Description
Pb-free solders are used as interconnects at various levels of microelectronic packaging. The reliability of these interconnects is critical for the performance of the package. One of the main factors affecting the reliability of solder joints is the presence of porosity, which is introduced during processing of the joints. In this thesis, the effect of such porosity on the deformation behavior and eventual failure of the joints is studied using finite element (FE) modeling. A 3D model obtained by reconstruction of x-ray tomographic image data is used as input for FE analysis to simulate shear deformation and eventual failure of the joint using a ductile damage model. The modeling was done in ABAQUS (v6.10). The FE model predictions are validated against experimental results by comparing the deformation of the pores and the crack path predicted by the model with the experimentally observed deformation and failure pattern. To understand the influence of the size, shape, and distribution of pores on the mechanical behavior of the joint, four different solder joints with varying degrees of porosity are modeled using the validated FE model. The validation technique mentioned above enables comparison of the simulated and actual deformation only. A more robust way of validating the FE model would be to compare the strain distribution in the joint as predicted by the model with that observed experimentally. In this study, to enable visualization of the experimental strain for the 3D microstructure obtained from tomography, a three-dimensional digital image correlation (3D DIC) code has been implemented in MATLAB (MathWorks, Inc.). This 3D DIC code can be used as another tool to verify the numerical model predictions. The capability of the developed code in measuring local displacement and strain is demonstrated on a test case.
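The core of digital image correlation is subset matching: a reference subset is located in the deformed image by minimizing a similarity cost, yielding a displacement field. The sketch below shows the integer-pixel version of that idea on tiny 2D arrays; real DIC adds sub-pixel refinement and, in the 3D case, operates on tomographic volumes rather than images.

```python
def dic_displacement(ref, deformed, top, left, size):
    """Locate a size x size subset of `ref` (anchored at top, left) inside
    `deformed` by minimizing the sum of squared differences; return the
    (row, col) integer-pixel displacement of the subset."""
    subset = [row[left:left + size] for row in ref[top:top + size]]
    best, best_ssd = (0, 0), float("inf")
    for dy in range(len(deformed) - size + 1):
        for dx in range(len(deformed[0]) - size + 1):
            ssd = sum((deformed[dy + i][dx + j] - subset[i][j]) ** 2
                      for i in range(size) for j in range(size))
            if ssd < best_ssd:
                best, best_ssd = (dy - top, dx - left), ssd
    return best

# Illustrative data: a bright feature at (1, 1) moves down 2 rows, right 1 col.
ref = [[0] * 6 for _ in range(6)]
ref[1][1] = ref[1][2] = ref[2][1] = 9
img = [[0] * 6 for _ in range(6)]
img[3][2] = img[3][3] = img[4][2] = 9
```

Repeating this for a grid of subsets gives the displacement field from which local strains are differentiated, which is how a DIC result can be compared against an FE strain prediction.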
ContributorsJakkali, Vaidehi (Author) / Chawla, Nikhilesh K (Thesis advisor) / Jiang, Hanqing (Committee member) / Solanki, Kiran (Committee member) / Arizona State University (Publisher)
Created2011
Description
In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments, and studies on cancer patients. In many situations the data set is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are arguably more serious than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically more frequent in the presence of outlier data points. The aim of this thesis is to develop a maximum-margin classifier that addresses the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic data and experimental data from biomedical studies, and is competitive with a classifier derived along similar lines when real-life data are considered.
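The mechanism behind the SVM's outlier sensitivity can be seen directly in its loss: the hinge grows without bound as a point's margin becomes more negative, so one badly mislabelled outlier can dominate training. The thesis's exact loss is not reproduced here; a truncated ("capped") hinge is one standard way to bound that influence, shown side by side below.

```python
def hinge(margin):
    """Standard SVM hinge loss: unbounded as the margin goes negative."""
    return max(0.0, 1.0 - margin)

def capped_hinge(margin, cap=2.0):
    """Truncated hinge: an outlier contributes at most `cap` to the loss.
    (An illustrative robust loss, not the thesis's RSVM formulation.)"""
    return min(cap, hinge(margin))

# A badly mislabelled outlier (margin -10) dominates the hinge loss,
# but contributes only `cap` to the robust loss.
outlier_losses = (hinge(-10.0), capped_hinge(-10.0))
```

Because the capped loss is flat far from the boundary, gross outliers exert no gradient pull on the separating hyperplane, which is the robustness property the abstract describes.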
ContributorsGupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description
With the rapid growth of mobile computing and sensor technology, it is now possible to access data from a variety of sources. A big challenge lies in linking sensor-based data with social and cognitive variables in humans in a real-world context. This dissertation explores the relationship between creativity in teamwork and team members' movement and face-to-face interaction strength in the wild. Using sociometric badges (wearable sensors), electronic Experience Sampling Methods (ESM), the KEYS team creativity assessment instrument, and qualitative methods, three research studies were conducted in academic and industry R&D labs. Sociometric badges captured movement of team members and face-to-face interaction between team members. The KEYS scale was implemented using ESM for self-rated creativity and expert-coded creativity assessment. Activities (movement and face-to-face interaction) and creativity of one five-member and two seven-member teams were tracked for twenty-five days, eleven days, and fifteen days respectively. Day-wise values of movement and face-to-face interaction for participants were mean-split and categorized as creative and non-creative using the self-rated creativity measure and the expert-coded creativity measure. Paired-samples t-tests [t(36) = 3.132, p < 0.005; t(23) = 6.49, p < 0.001] confirmed that average daily movement energy during creative days (M = 1.31, SD = 0.04; M = 1.37, SD = 0.07) was significantly greater than the average daily movement of non-creative days (M = 1.29, SD = 0.03; M = 1.24, SD = 0.09). The eta-squared statistic (0.21; 0.36) indicated a large effect size. A paired-samples t-test also confirmed that face-to-face interaction tie strength of team members during creative days (M = 2.69, SD = 4.01) was significantly greater [t(41) = 2.36, p < 0.01] than the average face-to-face interaction tie strength of team members for non-creative days (M = 0.9, SD = 2.1). The eta-squared statistic (0.11) indicated a large effect size.
The combined approach of principal component analysis (PCA) and linear discriminant analysis (LDA) conducted on movement and face-to-face interaction data predicted creativity with 87.5% and 91% accuracy respectively. This work advances creativity research and provides a foundation for sensor based real-time creativity support tools for teams.
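The mean-split labelling step used above is simple enough to sketch directly: each day is marked creative or non-creative by comparing that day's creativity score to the participant's own mean. The scores below are illustrative, not the study's data.

```python
def mean_split(scores):
    """Label each day creative/non-creative relative to the mean score."""
    mu = sum(scores) / len(scores)
    return ["creative" if s > mu else "non-creative" for s in scores]

# Illustrative daily creativity scores for one participant (mean = 4.0).
labels = mean_split([3, 5, 2, 6])
```

These binary labels are what the paired-samples t-tests compare movement and interaction values across, and what the PCA+LDA classifier is trained to predict.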
ContributorsTripathi, Priyamvada (Author) / Burleson, Winslow (Thesis advisor) / Liu, Huan (Committee member) / VanLehn, Kurt (Committee member) / Pentland, Alex (Committee member) / Arizona State University (Publisher)
Created2011
Description
Bridging the semantic gap is one of the fundamental problems in multimedia computing and pattern recognition. The challenge of associating low-level signals with their high-level semantic interpretations is mainly due to the fact that semantics are often conveyed implicitly in a context, relying on interactions among multiple levels of concepts or low-level data entities. Also, additional domain knowledge may often be indispensable for uncovering the underlying semantics, but in most cases such domain knowledge is not readily available from the acquired media streams. Thus, making use of various types of contextual information and leveraging the corresponding domain knowledge are vital for associating high-level semantics with low-level signals at higher accuracies in multimedia computing problems. In this work, novel computational methods are explored and developed for incorporating contextual information and domain knowledge in different forms for multimedia computing and pattern recognition problems. Specifically, a novel Bayesian approach with statistical-sampling-based inference is proposed for incorporating a special type of domain knowledge, a spatial prior on the underlying shapes; cross-modality correlation via Kernel Canonical Correlation Analysis is explored, and the learnt space is then used for associating multimedia content in different forms; and contextual information modeled as a graph is leveraged for regulating interactions between high-level semantic concepts (e.g., category labels) and the low-level input signal (e.g., spatial/temporal structure). Four real-world applications, including visual-to-tactile face conversion, photo tag recommendation, wild web video classification, and unconstrained consumer video summarization, are selected to demonstrate the effectiveness of the approaches. These applications range from classic research challenges to emerging tasks in multimedia computing. 
Results from experiments on large-scale real-world data with comparisons to other state-of-the-art methods and subjective evaluations with end users confirmed that the developed approaches exhibit salient advantages, suggesting that they are promising for leveraging contextual information/domain knowledge for a wide range of multimedia computing and pattern recognition problems.
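One simple way a graph over concepts can regulate label scores is iterative score propagation: each node's score is repeatedly blended with the average of its neighbours'. The sketch below is a generic stand-in for illustrating that idea, not the thesis's actual graph formulation.

```python
def propagate(scores, edges, alpha=0.5, iters=10):
    """Blend each node's score with its neighbours' mean, `iters` times.
    `scores` maps node -> float; `edges` is a list of undirected pairs."""
    nodes = list(scores)
    neigh = {n: [b for a, b in edges if a == n] +
                [a for a, b in edges if b == n] for n in nodes}
    for _ in range(iters):
        scores = {n: (1 - alpha) * scores[n] +
                     alpha * (sum(scores[m] for m in neigh[n]) / len(neigh[n])
                              if neigh[n] else scores[n])
                  for n in nodes}
    return scores

# Two connected concepts with conflicting initial label scores converge.
smoothed = propagate({"a": 1.0, "b": 0.0}, [("a", "b")])
```

The effect is that a confident label on one concept pulls the scores of related concepts toward it, which is the kind of interaction a contextual graph is meant to enforce.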
ContributorsWang, Zhesheng (Author) / Li, Baoxin (Thesis advisor) / Sundaram, Hari (Committee member) / Qian, Gang (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2011
Description
Image processing of canals, rivers, and other bodies of water has been a very important concern. This research used image processing to obtain photographic evidence of site data, which helps in monitoring the conditions of the water body and its surroundings. Images are captured using a digital camera and stored on a datalogger; the images are then retrieved using a cellular/satellite modem. A MATLAB program was designed to obtain the water level given just the file name, and a curve-fit model was created to determine the contrast parameters. The contrast parameters were obtained from the gray-scale image data, mainly the mean and variance of the intensity values. The enhanced images are used to determine the water level by taking pixel-intensity plots along the region of interest. The water level obtained is accurate to within 2% of the actual level observed in the image. High-speed imaging in microchannels has various applications in the industrial and medical fields; in the medical field it is tested using blood samples. The proposed experimental procedure determines the flow duration and the defects observed in these channels using fluids introduced into the microchannel, the fluids being a water-based dye and whole milk. The viscosity of the fluid gives rise to different flow patterns and defects in the microchannel. The defects observed range from a small effect on the flow pattern to an extreme defect in the channel, such as obstruction of the flow or deformation of the channel. The samples need to be further analyzed by SEM to gain better insight into the defects.
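The level-detection step, reading the waterline off a pixel-intensity plot, can be sketched as a scan down one image column for the first sharp intensity drop. The threshold and intensity values below are illustrative assumptions, not the thesis's calibrated parameters.

```python
def waterline_row(column, drop=50):
    """Scan a column of gray-scale intensities from the top and return the
    first row index where intensity falls by at least `drop`, taken as the
    bank-to-water transition; None if no such drop exists."""
    for row in range(1, len(column)):
        if column[row - 1] - column[row] >= drop:
            return row
    return None

# Illustrative column: bright bank pixels above darker water pixels.
column = [200, 198, 199, 90, 88, 87]
row = waterline_row(column)
```

Converting the detected row index to a physical water level then only requires the camera's pixels-per-unit-length calibration for the region of interest.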
ContributorsShasedhara, Abhijeet Bangalore (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created2011
Description
Query expansion is a search-engine functionality that suggests a set of related queries for a user-issued keyword query. In the case of exploratory or ambiguous keyword queries, the main goal of the user is to identify and select a specific category of query results among the different categorical options, in order to narrow down the search and reach the desired result. Typical corpus-driven keyword query expansion approaches return popular words in the results as expanded queries. These empirical methods fail to cover all semantics of the categories present in the query results. More importantly, these methods do not consider the semantic relationship between the keywords featured in an expanded query. Contrary to a normal keyword search setting, these factors are non-trivial in an exploratory and ambiguous query setting, where the user's precise discernment of the different categories present in the query results is more important for making subsequent search decisions. In this thesis, I propose a new framework for keyword query expansion: generating a set of queries that correspond to a categorization of the original query results, referred to as categorizing query expansion. Two classes of algorithms are proposed: one performs clustering as a pre-processing step and then generates categorizing expanded queries based on the clusters; the other handles the case of generating quality expanded queries in the presence of imperfect clusters.
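The cluster-then-expand idea can be sketched as follows: assuming the query results have already been clustered, each cluster contributes one expanded query built from its most frequent term not already in the original query. The clustering itself and the term-scoring here are deliberately naive stand-ins for the thesis's algorithms.

```python
from collections import Counter

def categorizing_expansion(query, clusters):
    """One expanded query per result cluster: the original query plus the
    cluster's most frequent term that is not already a query keyword."""
    query_words = set(query.split())
    expanded = []
    for docs in clusters:
        counts = Counter(w for d in docs for w in d.split()
                         if w not in query_words)
        term, _ = counts.most_common(1)[0]
        expanded.append(f"{query} {term}")
    return expanded

# Illustrative: two result clusters for the ambiguous query "jaguar".
clusters = [["jaguar speed engine", "jaguar engine parts"],
            ["jaguar habitat jungle", "jaguar jungle prey"]]
suggestions = categorizing_expansion("jaguar", clusters)
```

Unlike popularity-based expansion over the whole result set, each suggestion here names one category of results, which is what lets the user narrow an ambiguous query in a single step.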
ContributorsNatarajan, Sivaramakrishnan (Author) / Chen, Yi (Thesis advisor) / Candan, Selcuk (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description
Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, however, such as stock market analysis and monitoring of user credit-card purchase patterns, matches to the user queries are in fact plentiful, and the system has to efficiently sift through them to locate only the few most preferable matches. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify the top-k matching results satisfying both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that are able to adapt to changing conditions (e.g., sorted and random access costs, join rates) without having to assume the a priori presence of data statistics. Experiments show significant improvements over existing approaches.
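The maintain-rather-than-recompute idea can be sketched with a minimal sliding-window top-k structure: scored matches arrive with timestamps, expired matches are pruned, and the current top-k is read off the retained set. This is an illustrative stand-in, far simpler than the CPR framework's ranked join strategies.

```python
import heapq

class TopKWindow:
    """Minimal sliding-window top-k: keep (timestamp, score) matches,
    prune expired ones lazily, report the k highest live scores."""

    def __init__(self, k, window):
        self.k, self.window, self.matches = k, window, []

    def add(self, ts, score):
        self.matches.append((ts, score))

    def topk(self, now):
        # Prune matches that have fallen out of the window, then rank.
        self.matches = [(t, s) for t, s in self.matches
                        if now - t < self.window]
        return heapq.nlargest(self.k, (s for _, s in self.matches))

w = TopKWindow(k=2, window=10)
for ts, score in [(0, 5), (1, 9), (2, 7), (11, 4)]:
    w.add(ts, score)
current = w.topk(11)   # the score-9 match at ts=1 has expired by now=11
```

A production version would avoid the full rescan per query, e.g. with an expiry-ordered queue alongside a score-ordered index, but the invariant is the same: arrivals and expirations update the answer incrementally instead of re-ranking the whole window.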
ContributorsWang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2011