Matching Items (1,820)

Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated the project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and utilize Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision data models. In the future, there is an opportunity to continue developing this product and expand the team’s work scope by applying more engineering skills to the collected data to drive factory improvements.

Contributors: Chmelnik, Nathan (Co-author) / de Guzman, Lorenzo (Co-author) / Johnson, Katelyn (Co-author) / Martz, Emma (Co-author) / Ju, Feng (Thesis director) / Courter, Brandon (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In many classification problems, data samples cannot be collected easily, for example in drug trials, biological experiments, and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are arguably more serious than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically more frequent in the presence of outlier data points. The aim of this thesis is to develop a maximum margin classifier that addresses the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic and experimental data from biomedical studies and is competitive with a classifier derived along similar lines when real-life data examples are considered.
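The abstract does not specify the robust loss RSVM uses; one standard way to blunt the influence of outliers on a maximum margin classifier is to truncate the hinge loss (a "ramp" loss), so that points far on the wrong side of the margin stop contributing to the gradient. The following is a minimal illustrative sketch of that idea in Python (the function name and training details are hypothetical, not the thesis's actual RSVM):

```python
import numpy as np

def ramp_loss_svm(X, y, lam=0.1, s=-1.0, lr=0.01, epochs=200):
    """Linear max-margin classifier trained with a truncated (ramp)
    hinge loss: min(max(0, 1 - y*f(x)), 1 - s). Points with margin
    below s are treated as outliers and get zero gradient."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        # Only points in the band s < margin < 1 drive the update.
        active = (margins < 1) & (margins > s)
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: two Gaussian blobs with one mislabeled outlier injected.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
y[0] = 1  # outlier: a point from the -1 blob labeled +1
w, b = ramp_loss_svm(X, y)
print("train accuracy:", np.mean(np.sign(X @ w + b) == y))
```

With the ordinary hinge loss, a mislabeled point arbitrarily far from the boundary keeps pulling the hyperplane toward itself; capping the loss removes that leverage, which is the intuition behind outlier-robust margin classifiers.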
Contributors: Gupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem is to obtain an estimate of the auditory representation that minimizes a perceptual objective function and to transform that auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages; it ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
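As a concrete illustration of frequency pruning, spectral components that fall below the hearing threshold in quiet can be discarded before the expensive excitation and loudness stages ever see them. The sketch below uses Terhardt's well-known approximation of the threshold in quiet; the dissertation's actual pruning criteria are not given in this abstract, so treat this as an assumption-laden toy:

```python
import numpy as np

def threshold_in_quiet_db(f_hz):
    """Terhardt's approximation of the hearing threshold in quiet,
    in dB SPL, for frequency f_hz in Hz."""
    f = f_hz / 1000.0  # kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

def prune_components(freqs_hz, levels_db, margin_db=0.0):
    """Frequency pruning: drop components already below audibility so
    later (expensive) auditory model stages never process them.
    The margin is an illustrative tuning knob, not from the thesis."""
    audible = levels_db > threshold_in_quiet_db(freqs_hz) + margin_db
    return freqs_hz[audible], levels_db[audible]

freqs = np.array([100.0, 500.0, 1000.0, 4000.0, 12000.0])
levels = np.array([10.0, 5.0, 40.0, 20.0, 0.0])  # dB SPL
kept_f, kept_l = prune_components(freqs, levels)
print(kept_f)  # only components that can influence loudness remain
```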
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Single cell phenotypic heterogeneity studies reveal more information about the pathogenesis process than conventional bulk methods. Furthermore, investigation of the individual cellular response mechanism during rapid environmental changes can only be achieved at the single cell level. By enabling the study of cellular morphology, a single cell three-dimensional (3D) imaging system can be used to diagnose fatal diseases, such as cancer, at an early stage. One proven method, CellCT, accomplishes 3D imaging by rotating a single cell around a fixed axis. However, some existing cell rotation mechanisms require intricate microfabrication, and some fail to provide a suitable environment for living cells. This thesis develops a microvortex chamber that allows living cells to be rotated by hydrodynamic flow alone while facilitating imaging access. In this thesis work: 1) The new chamber design was developed through numerical simulation. Simulations revealed that in order to form a microvortex in the side chamber, the ratio of the chamber opening to the channel width must be smaller than one. After comparing different chamber designs, the trapezoidal side chamber was selected because it demonstrated controllable circulation and met the imaging requirements. Microvortex properties were not sensitive to interface angles ranging from 0.32 to 0.64. A similar trend was observed when chamber heights were larger than the chamber opening. 2) Micro-particle image velocimetry was used to characterize the microvortices and validate the simulation results. Agreement between experiment and simulation confirmed that numerical simulation was an effective method for chamber design. 3) Finally, cell rotation experiments were performed in the trapezoidal side chamber. The experimental results demonstrated cell rotation rates ranging from 12 to 29 rpm for regular cells. With a volumetric flow rate of 0.5 µL/s, an irregular cell rotated at a mean rate of 97 ± 3 rpm. Rotation rates can be changed by altering inlet flow rates.
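Two of the reported findings lend themselves to a back-of-the-envelope check: the vortex-formation design rule (chamber opening to channel width ratio below one) and the dependence of rotation rate on inlet flow rate. The sketch below encodes both; the linear scaling is an assumption motivated by the linearity of Stokes flow and anchored to the single reported operating point, not a result from the thesis, and both helper functions are hypothetical:

```python
def vortex_expected(opening_um: float, channel_width_um: float) -> bool:
    """Design rule from the simulations: a microvortex forms in the
    side chamber only when opening/width < 1."""
    return opening_um / channel_width_um < 1.0

def rotation_rate_rpm(flow_rate_ul_s: float,
                      ref_flow_ul_s: float = 0.5,
                      ref_rpm: float = 97.0) -> float:
    """Assumed linear scaling of rotation rate with inlet flow rate
    (Stokes flow is linear in its driving conditions), anchored to the
    reported 97 rpm at 0.5 uL/s. Illustrative extrapolation only."""
    return ref_rpm * flow_rate_ul_s / ref_flow_ul_s

print(vortex_expected(opening_um=80, channel_width_um=100))  # True
print(rotation_rate_rpm(0.25))  # ~48.5 rpm at half the flow rate
```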
Contributors: Zhang, Wenjie (Author) / Frakes, David (Thesis advisor) / Meldrum, Deirdre (Thesis advisor) / Chao, Shih-hui (Committee member) / Wang, Xiao (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the rapid growth of mobile computing and sensor technology, it is now possible to access data from a variety of sources. A big challenge lies in linking sensor-based data with social and cognitive variables in humans in a real-world context. This dissertation explores the relationship between creativity in teamwork and team members' movement and face-to-face interaction strength in the wild. Using sociometric badges (wearable sensors), electronic Experience Sampling Methods (ESM), the KEYS team creativity assessment instrument, and qualitative methods, three research studies were conducted in academic and industry R&D labs. Sociometric badges captured movement of team members and face-to-face interaction between team members. The KEYS scale was implemented using ESM for self-rated creativity and expert-coded creativity assessment. Activities (movement and face-to-face interaction) and creativity of one five-member and two seven-member teams were tracked for twenty-five days, eleven days, and fifteen days respectively. Day-wise values of movement and face-to-face interaction for participants were mean-split categorized as creative and non-creative using the self-rated creativity measure and the expert-coded creativity measure. Paired-samples t-tests [t(36) = 3.132, p < 0.005; t(23) = 6.49, p < 0.001] confirmed that average daily movement energy during creative days (M = 1.31, SD = 0.04; M = 1.37, SD = 0.07) was significantly greater than the average daily movement of non-creative days (M = 1.29, SD = 0.03; M = 1.24, SD = 0.09). The eta squared statistic (0.21; 0.36) indicated a large effect size. A paired-samples t-test also confirmed that face-to-face interaction tie strength of team members during creative days (M = 2.69, SD = 4.01) was significantly greater [t(41) = 2.36, p < 0.01] than the average face-to-face interaction tie strength of team members for non-creative days (M = 0.9, SD = 2.1). The eta squared statistic (0.11) indicated a large effect size. The combined approach of principal component analysis (PCA) and linear discriminant analysis (LDA) conducted on movement and face-to-face interaction data predicted creativity with 87.5% and 91% accuracy respectively. This work advances creativity research and provides a foundation for sensor-based real-time creativity support tools for teams.
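The statistical pipeline described above (paired-samples t-tests on day-wise activity, then PCA followed by LDA for prediction) is straightforward to reproduce on stand-in data. The sketch below uses scipy and scikit-learn with made-up samples drawn to match the reported means and SDs; it illustrates the analysis shape, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Paired t-test on hypothetical day-wise movement energy (values drawn
# to match the reported means/SDs, purely for illustration).
creative = rng.normal(1.31, 0.04, 37)
non_creative = rng.normal(1.29, 0.03, 37)
t, p = ttest_rel(creative, non_creative)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

# PCA for decorrelation/dimension reduction, then LDA as the predictor,
# mirroring the combined PCA+LDA approach (synthetic features/labels).
X = rng.normal(size=(60, 10))
y = rng.integers(0, 2, 60)
clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```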
Contributors: Tripathi, Priyamvada (Author) / Burleson, Winslow (Thesis advisor) / Liu, Huan (Committee member) / VanLehn, Kurt (Committee member) / Pentland, Alex (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Concern regarding the quality of traffic data exists among engineers and planners tasked with obtaining and using the data for various transportation applications. While data quality issues are often understood by analysts doing the hands-on work, rarely are the quality characteristics of the data effectively communicated beyond the analyst. This research is an exercise in measuring and reporting data quality. The assessment was conducted to support the performance measurement program at the Maricopa Association of Governments in Phoenix, Arizona, and investigates the traffic data from 228 continuous monitoring freeway sensors in the metropolitan region. Results of the assessment provide an example of describing the quality of the traffic data with each of six data quality measures suggested in the literature: accuracy, completeness, validity, timeliness, coverage, and accessibility. An important contribution is made in the use of data quality visualization tools. These visualization tools are used in evaluating the validity of the traffic data beyond the pass/fail criteria commonly used. More significantly, they serve to build an intuitive understanding of the underlying characteristics of the data considered valid. Recommendations from the experience gained in this assessment include that data quality visualization tools be developed and used in the processing and quality control of traffic data, and that these visualization tools, along with other information on the quality control effort, be stored as metadata with the processed data.
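Two of the six measures, completeness and validity, reduce to simple ratios once expected record counts and range checks are defined. A minimal pandas sketch with hypothetical loop-detector records and illustrative thresholds (not the criteria used in this assessment) is shown below:

```python
import pandas as pd

# Hypothetical 20-second records from one freeway sensor (None = gap).
df = pd.DataFrame({
    "volume":        [12, 15, None, 14, 300, 13],   # vehicles/interval
    "speed_mph":     [62, 58, 60, None, 61, 59],
    "occupancy_pct": [8, 9, 7, 8, 95, 9],
})
expected_records = 8  # intervals the sensor should have reported

received = df.dropna()

# Completeness: share of expected records actually received intact.
completeness = len(received) / expected_records

# Validity: share of received records passing range checks
# (thresholds here are illustrative, not the study's criteria).
valid = received.query(
    "volume >= 0 and volume <= 50 and "
    "speed_mph > 0 and speed_mph <= 100 and "
    "occupancy_pct >= 0 and occupancy_pct <= 60"
)
validity = len(valid) / len(received)

print(f"completeness = {completeness:.2f}, validity = {validity:.2f}")
```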
Contributors: Samuelson, Jothan P. (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Bridging the semantic gap is one of the fundamental problems in multimedia computing and pattern recognition. The challenge of associating low-level signals with their high-level semantic interpretations is mainly due to the fact that semantics are often conveyed implicitly in a context, relying on interactions among multiple levels of concepts or low-level data entities. Also, additional domain knowledge may often be indispensable for uncovering the underlying semantics, but in most cases such domain knowledge is not readily available from the acquired media streams. Thus, making use of various types of contextual information and leveraging corresponding domain knowledge are vital for effectively associating high-level semantics with low-level signals with higher accuracies in multimedia computing problems. In this work, novel computational methods are explored and developed for incorporating contextual information/domain knowledge in different forms for multimedia computing and pattern recognition problems. Specifically, a novel Bayesian approach with statistical-sampling-based inference is proposed for incorporating a special type of domain knowledge, a spatial prior for the underlying shapes; cross-modality correlations are explored via Kernel Canonical Correlation Analysis, and the learnt space is then used for associating multimedia contents in different forms; and contextual information is modeled as a graph for regulating interactions between high-level semantic concepts (e.g., category labels) and the low-level input signal (e.g., spatial/temporal structure). Four real-world applications, including visual-to-tactile face conversion, photo tag recommendation, wild web video classification, and unconstrained consumer video summarization, are selected to demonstrate the effectiveness of the approaches. These applications range from classic research challenges to emerging tasks in multimedia computing. Results from experiments on large-scale real-world data, with comparisons to other state-of-the-art methods and subjective evaluations with end users, confirmed that the developed approaches exhibit salient advantages, suggesting that they are promising for leveraging contextual information/domain knowledge for a wide range of multimedia computing and pattern recognition problems.
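For the cross-modality component, linear CCA (a stand-in here for the kernelized variant used in the dissertation) already shows the mechanics: two feature spaces are projected into a shared space where correlated directions align, and content from one modality can then be matched to the other by proximity in that space. A sketch on synthetic two-modality data:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)

# Two hypothetical modalities (say, visual and tactile features) that
# share a 2-dimensional latent factor plus noise.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 20)) + 0.1 * rng.normal(size=(200, 20))
Y = latent @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(200, 15))

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)  # projections into the shared space

# Correlation per canonical component; cross-modal association then
# reduces to nearest-neighbor matching in this learnt space.
for i in range(2):
    r = np.corrcoef(Xc[:, i], Yc[:, i])[0, 1]
    print(f"component {i}: canonical correlation = {r:.3f}")
```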
Contributors: Wang, Zhesheng (Author) / Li, Baoxin (Thesis advisor) / Sundaram, Hari (Committee member) / Qian, Gang (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the search for chemical biosensors designed for patient-based physiological applications, non-invasive diagnostic approaches continue to have value. The work described in this thesis builds upon previous breath analysis studies. In particular, it seeks to assess the adsorptive mechanisms active in both acetone and ethanol biosensors designed for breath analysis. The thermoelectric biosensors under investigation were constructed using a thermopile for transduction and four different materials for biorecognition. The analytes, acetone and ethanol, were evaluated under dry-air and humidified-air conditions. The biosensor response to acetone concentration was found to be both repeatable and linear, while the sensor response to ethanol presence was also found to be repeatable. The different biorecognition materials produced discernible thermoelectric responses that were characteristic for each analyte. The sensor output data is presented in this report. Additionally, the results were evaluated against a mathematical model for further analysis. Ultimately, a thermoelectric biosensor based upon adsorption chemistry was developed and characterized. Additional work is needed to characterize the physicochemical action mechanism.
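Since the abstract reports a linear, repeatable acetone response, a first-order calibration is the natural reduced model. The sketch below fits and inverts such a calibration line on made-up voltage/concentration pairs; the actual sensor data and the physicochemical adsorption model are not given here, so every number is hypothetical:

```python
import numpy as np

# Made-up thermopile readings vs. acetone concentration; a linear,
# repeatable response justifies a first-order calibration model.
conc_ppm  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal_uV = np.array([2.1, 4.3, 8.2, 16.5, 33.0])

slope, intercept = np.polyfit(conc_ppm, signal_uV, 1)
print(f"sensitivity = {slope:.2f} uV/ppm, offset = {intercept:.2f} uV")

# Invert the calibration to estimate an unknown breath sample.
unknown_uV = 12.0
print(f"estimated concentration = {(unknown_uV - intercept) / slope:.2f} ppm")
```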
Contributors: Wilson, Kimberly (Author) / Guilbeau, Eric (Thesis advisor) / Pizziconi, Vincent (Thesis advisor) / LaBelle, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Query expansion is a functionality of search engines that suggests a set of related queries for a user-issued keyword query. In the case of exploratory or ambiguous keyword queries, the main goal of the user is to identify and select a specific category of query results among the different categorical options, in order to narrow down the search and reach the desired result. Typical corpus-driven keyword query expansion approaches return popular words in the results as expanded queries. These empirical methods fail to cover all the semantics of categories present in the query results. More importantly, these methods do not consider the semantic relationship between the keywords featured in an expanded query. Contrary to a normal keyword search setting, these factors are non-trivial in an exploratory and ambiguous query setting, where the user's precise discernment of the different categories present in the query results is more important for making subsequent search decisions. In this thesis, I propose a new framework for keyword query expansion: generating a set of queries that correspond to a categorization of the original query results, referred to as categorizing query expansion. Two classes of algorithms are proposed: one performs clustering as a pre-processing step and then generates categorizing expanded queries based on the clusters; the other handles the case of generating quality expanded queries in the presence of imperfect clusters.
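The clustering-first class of algorithms can be illustrated in a few lines: vectorize the result documents, cluster them, and promote each cluster's dominant terms into one categorizing expanded query. The sketch below uses TF-IDF and k-means on a hypothetical ambiguous query; it illustrates the idea, not the thesis's actual algorithms:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical result snippets for the ambiguous query "jaguar".
results = [
    "jaguar is a large cat native to the americas",
    "the jaguar hunts along rivers in the amazon rainforest",
    "jaguar released a new luxury sedan this year",
    "the british car maker jaguar unveiled an electric model",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(results)

# Step 1: cluster the query results (the pre-processing variant).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 2: promote each cluster's top TF-IDF terms into one
# categorizing expanded query.
terms = np.array(vec.get_feature_names_out())
for c, centroid in enumerate(km.cluster_centers_):
    top = terms[np.argsort(centroid)[::-1][:3]]
    print(f"expanded query {c}: jaguar {' '.join(top)}")
```

Each printed query names one category of the results (the animal sense versus the car maker sense), which is exactly the discernment an exploratory searcher needs.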
Contributors: Natarajan, Sivaramakrishnan (Author) / Chen, Yi (Thesis advisor) / Candan, Selcuk (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, such as stock market analysis and user credit card purchase pattern monitoring, however, the matches to the user queries are in fact plentiful, and the system has to efficiently sift through these many matches to locate only the few most preferable ones. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify top-k matching results satisfying both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that are able to adapt to changing situations (e.g., sorted and random access costs, join rates) without having to assume the a priori presence of data statistics. Experiments show significant improvements over existing approaches.
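The core incremental-maintenance idea, keeping top-k results current as events arrive and expire rather than recomputing per window, can be sketched with a sliding window of live matches. This simplified Python sketch (the `TopKWindow` class is hypothetical, not the CPR algorithms) rescans the live window on demand; the dissertation's algorithms maintain the ranking without that rescan:

```python
import heapq
from collections import deque

class TopKWindow:
    """Keep the k best-scoring matches over a sliding time window.
    Hypothetical helper illustrating incremental maintenance."""

    def __init__(self, k, window):
        self.k, self.window = k, window
        self.live = deque()  # (timestamp, score) in arrival order

    def insert(self, ts, score):
        self.live.append((ts, score))
        # Expire matches that slid out of the window.
        while self.live and self.live[0][0] <= ts - self.window:
            self.live.popleft()

    def top_k(self):
        return heapq.nlargest(self.k, self.live, key=lambda m: m[1])

w = TopKWindow(k=2, window=10)
for ts, score in [(1, 0.4), (3, 0.9), (5, 0.7), (14, 0.6)]:
    w.insert(ts, score)
    print(f"t={ts}: top-{w.k} = {w.top_k()}")
```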
Contributors: Wang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011