Matching Items (130)
Description
With the introduction of compressed sensing and sparse representation, many image processing and computer vision problems have been approached in a new way. Recent trends indicate that many challenging computer vision and image processing problems are being solved using compressive sensing and sparse representation algorithms. This thesis examines some applications of compressive sensing and sparse representation with regard to image enhancement, restoration, and classification. The first application deals with image super-resolution through compressive sensing based sparse representation. A novel framework is developed for understanding and analyzing some of the implications of compressive sensing in the reconstruction and recovery of an image through raw-sampled and trained dictionaries. Properties of the projection operator and the dictionary are examined and the corresponding results are presented. In the second application, a novel technique for representing image classes uniquely in a high-dimensional space for image classification is presented. In this method, the design and implementation strategy of the image classification system through unique affine sparse codes is presented, which leads to state-of-the-art results. This further leads to analysis of some of the properties attributed to these unique sparse codes. In addition to obtaining these codes, a strong classifier is designed and implemented to boost the results obtained. Evaluation with publicly available datasets shows that the proposed method outperforms other state-of-the-art methods in image classification. The final part of the thesis deals with image denoising, with a novel approach towards obtaining high-quality denoised image patches using only a single image. A new technique is proposed to obtain highly correlated image patches through sparse representation, which are then subjected to matrix completion to obtain high-quality image patches. Experiments suggest that there may exist a structure within a noisy image which can be exploited for denoising through a low-rank constraint.
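The sparse representation underlying all three applications solves a problem of the form min_x ½‖y − Dx‖² + λ‖x‖₁ for an overcomplete dictionary D. Below is a minimal sketch of one standard solver for this problem, iterative soft-thresholding (ISTA); the dictionary, signal, and parameter values are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant (squared spectral norm)
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the smooth data term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))          # illustrative random dictionary
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(256)
x_true[[3, 40, 200]] = [1.0, -0.5, 0.8]     # a sparse ground-truth code
y = D @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(D, y)
print("nonzero supports recovered:", np.flatnonzero(np.abs(x_hat) > 0.05))
```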
Contributors: Kulkarni, Naveen (Author) / Li, Baoxin (Thesis advisor) / Ye, Jieping (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast for a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan; the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of the different solution methods.
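For a flavor of the kind of MILP described, here is a minimal sketch assuming the PuLP library: lots are assigned to qualified unrelated parallel machines to minimize makespan. The processing times, qualification data, and names are illustrative assumptions, not the thesis's formulation (which also models setups, stages, and batching).

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

lots, machines = ["A", "B", "C", "D"], ["m1", "m2"]
p = {("A", "m1"): 3, ("A", "m2"): 4, ("B", "m1"): 2, ("B", "m2"): 5,
     ("C", "m1"): 6, ("C", "m2"): 3, ("D", "m1"): 4, ("D", "m2"): 4}
qualified = {("C", "m1"): False}             # hypothetical product-machine qualification

prob = LpProblem("backend_scheduling", LpMinimize)
x = {(l, m): LpVariable(f"x_{l}_{m}", cat=LpBinary) for l in lots for m in machines}
cmax = LpVariable("makespan", lowBound=0)
prob += cmax                                 # objective: minimize the makespan
for l in lots:                               # every lot assigned to exactly one machine
    prob += lpSum(x[l, m] for m in machines) == 1
for m in machines:                           # each machine's load bounds the makespan
    prob += lpSum(p[l, m] * x[l, m] for l in lots) <= cmax
for (l, m), ok in qualified.items():         # forbid unqualified assignments
    if not ok:
        prob += x[l, m] == 0
prob.solve(PULP_CBC_CMD(msg=False))
print("makespan:", cmax.value())
```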
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W. (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The infant mortality rate of field-deployed photovoltaic (PV) modules may be expected to be higher than that estimated by standard qualification tests. The increased failure rates may be attributed to high system voltages. High voltages (HV) in grid-connected modules induce additional stress factors that cause new degradation mechanisms, and these mechanisms are not recognized by qualification stress tests. To study and model the effect of high system voltages, the potential induced degradation (PID) test method has recently been introduced. PID studies have reported that high-voltage failure rates are essentially due to increased leakage currents from the active semiconducting layer to the grounded module frame, through the encapsulant and/or glass. This project involved the design and commissioning of a new PID test bed at the Photovoltaic Reliability Laboratory (PRL) of Arizona State University (ASU) to study the mechanisms of HV-induced degradation. In this study, PID stress tests were performed on accelerated-stress-tested modules in addition to fresh modules of crystalline silicon technology. Accelerated stressing included thermal cycling (TC200, 200 cycles) and damp heat (DH1000, 1000 hours) tests as per IEC 61215. Failure rates in field-deployed modules exposed to long-term weather conditions are better simulated by conducting HV tests on modules that have first undergone accelerated stress testing. The PID testing was performed in three phases on a set of five monocrystalline silicon modules. In Phase I, a positive bias of +600 V was applied between the shorted leads and the frame of each of three modules with a conducting carbon coating on the glass superstrate. The three-module set comprised one fresh control module, one TC200 module, and one DH1000 module. The PID test was conducted in an environmental chamber by stressing the modules at 85°C for 35 hours, with intermittent evaluation for Arrhenius effects. In Phase II, a negative bias of -600 V was applied to a set of three modules in the chamber as defined above; this set comprised the control module from Phase I, a TC200 module, and a DH1000 module. In Phase III, the same three modules used in Phase II were again subjected to a +600 V bias to observe the recovery of the power lost during Phase II. Electrical performance, infrared (IR), and electroluminescence (EL) measurements were taken before and after PID testing. It was observed that the high-voltage positive bias in the first phase resulted in little to no power loss, the high-voltage negative bias in the second phase caused significant power loss, and the high-voltage positive bias in the third phase resulted in major recovery of the lost power.
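As a minimal sketch (with made-up numbers, not the thesis's measured data) of how such phase-by-phase loss and recovery might be quantified from pre- and post-test maximum-power (Pmax) measurements:

```python
# All Pmax values below are illustrative assumptions.
baseline_pmax = {"control": 190.0, "TC200": 185.0, "DH1000": 182.0}       # watts
post_negative_bias = {"control": 168.0, "TC200": 140.0, "DH1000": 135.0}  # after -600 V
post_positive_recovery = {"control": 185.0, "TC200": 176.0, "DH1000": 170.0}  # after +600 V

for module in baseline_pmax:
    p0 = baseline_pmax[module]
    loss = 100.0 * (p0 - post_negative_bias[module]) / p0
    recovered = 100.0 * post_positive_recovery[module] / p0
    print(f"{module}: {loss:.1f}% loss after -600 V, "
          f"back to {recovered:.1f}% of baseline after +600 V")
```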
Contributors: Goranti, Sandhya (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Rogers, Bradley (Committee member) / Macia, Narciso (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Demands in file size and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high-definition video content. Factor in the consumer desire for convenience, and wireless service becomes the most desired approach for interconnectivity. Consumers expect wireless service to emulate wired service with little to virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high-definition video applications. I then proceed to look at proposed solutions at the physical (PHY) and media access control (MAC) layers, as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap exists in the published literature pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented, and no work has been found that reports results from such a model. This research proposes, develops, and implements in MATLAB code an alternate cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high-definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation has been conclusively shown to be superior to non-adaptive techniques and sufficiently superior to even quasi-adaptive algorithms.
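The average PSNR metric reported above compares decoded frames against the original sequence. A minimal sketch of that computation, with randomly generated frames standing in for the actual video sequences:

```python
import numpy as np

def psnr(reference, received, max_val=255.0):
    """Peak signal-to-noise ratio, in dB, between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - received.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)     # stand-in frame
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
frames = [(ref, noisy)] * 30                                    # stand-in sequence
avg_psnr = np.mean([psnr(a, b) for a, b in frames])             # metric used above
print(f"average PSNR: {avg_psnr:.1f} dB")
```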
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In nearly all commercially successful internal combustion engine applications, the slider-crank mechanism is used to convert the reciprocating motion of the piston into rotary motion. The hypocycloid mechanism, wherein the crankshaft is replaced with a novel gearing arrangement, is a viable alternative to the slider-crank mechanism. The geared hypocycloid mechanism allows for linear motion of the connecting rod and provides a method for perfect balance with any number of cylinders, including single-cylinder applications. A variety of hypocycloid engine designs and research efforts have been undertaken and have produced successful running prototypes. Wiseman Technologies, Inc. provided one of these prototypes to this research effort: a two-cycle, 30 cc, half-crank hypocycloid engine that has shown promise in several performance categories, including balance and efficiency. To further investigate its potential, a more thorough and scientific analysis was necessary and was completed in this research effort. The major objective was to critically evaluate and optimize the Wiseman prototype for maximum performance in balance, efficiency, and power output. A nearly identical slider-crank engine was used extensively to establish baseline performance data and make comparisons. Specialized equipment and methods were designed and built to collect experimental data on both engines. Simulation and mathematical models, validated by the experimental data, were used to better quantify performance improvements. Modifications to the Wiseman prototype engine improved balance by 20 to 50% (depending on direction) and increased peak power output by 24%.
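The balance advantage stems from the hypocycloid's straight-line (Cardan) piston motion: with the planet gear's pitch radius at half the ring gear's, piston displacement is a pure sinusoid, with none of the higher-order harmonics a slider-crank produces. A minimal sketch comparing the two motions, with illustrative dimensions (not the Wiseman engine's):

```python
import numpy as np

# r plays the role of the crank radius (slider-crank) and of the planet-gear
# pitch radius (hypocycloid); l is the connecting-rod length. Values assumed.
r, l = 0.025, 0.085
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)

# Slider-crank: exact piston position, which contains 2nd and higher harmonics.
x_slider = r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta)) ** 2)

# Hypocycloid (Cardan circle): purely sinusoidal piston position.
x_hypo = 2 * r * np.cos(theta)

def harmonic_content(x):
    """Total FFT magnitude above the fundamental, relative to the fundamental."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    return spectrum[2:].sum() / spectrum[1]

print(f"slider-crank higher harmonics: {harmonic_content(x_slider):.4f}")
print(f"hypocycloid higher harmonics:  {harmonic_content(x_hypo):.6f}")  # ~0
```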
Contributors: Conner, Thomas (Author) / Redkar, Sangram (Thesis advisor) / Rogers, Bradley (Committee member) / Georgeou, Trian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Building Applied Photovoltaics (BAPV) form an essential part of today's solar economy. This thesis is an effort to compare and understand the effect of fan cooling on the temperature of rooftop photovoltaic (PV) modules by comparing two side-by-side arrays (a test array and a control array) under identical ambient conditions of irradiance, air temperature, wind speed, and wind direction. The lower operating temperature of PV modules due to fan operation mitigates array non-uniformity and improves performance. A crystalline silicon (c-Si) PV module has a light-to-electrical conversion efficiency of 14-20%, so on a cool sunny day with incident solar irradiance of 1000 W/m², a PV module with 15% efficiency will produce only about 150 watts; the rest of the energy is lost primarily as heat. Heat extraction methods for BAPV systems may see increasing demand, as the hot stagnant air underneath the array can be extracted to improve array efficiency, and the extracted low-temperature heat can also be used for residential space heating and water heating. Poly c-Si modules have a negative temperature coefficient of power of about -0.5%/°C. A typical poly c-Si module would therefore experience a power loss due to elevated temperature in the range of 25 to 30% under desert conditions such as those of Mesa, Arizona. This thesis investigates the effect of fan cooling on the thermal models previously developed at Arizona State University and on the performance of PV modules/arrays. Ambient conditions are continuously monitored and collected to calculate module temperature using the thermal model and to compare it with the actually measured temperatures of individual modules. Beyond the baseline analysis, the thesis also examines the effect of the fan on the test array in three stages of 14 continuous days each. Multiple thermal models are developed in order to identify the effect of fan cooling on performance and temperature uniformity. Although the fan alone did not prove to have a significant cooling effect on the system, when combined with wind blocks it helped improve the thermal mismatch under both low and high wind speed conditions.
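To make the numbers above concrete, here is a minimal sketch of estimating module temperature from ambient conditions and derating power by the temperature coefficient. The exponential form is the Sandia module-temperature model with its standard open-rack coefficients; this is an illustrative assumption, not necessarily the ASU thermal model the thesis refers to.

```python
import math

def module_temp(irradiance, t_ambient, wind_speed, a=-3.56, b=-0.075):
    """Empirical module temperature (deg C): Tm = E * exp(a + b*WS) + Ta."""
    return irradiance * math.exp(a + b * wind_speed) + t_ambient

def power_at_temp(p_rated, t_module, temp_coeff=-0.005, t_ref=25.0):
    """Derate rated power by the temperature coefficient (-0.5%/deg C here)."""
    return p_rated * (1.0 + temp_coeff * (t_module - t_ref))

# Illustrative hot-climate conditions, loosely in the spirit of Mesa, Arizona.
tm = module_temp(irradiance=1000.0, t_ambient=40.0, wind_speed=1.0)
p = power_at_temp(p_rated=150.0, t_module=tm)
print(f"module temp ~ {tm:.1f} C, output ~ {p:.0f} W "
      f"({100 * (1 - p / 150):.0f}% below rating)")
```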
Contributors: Chatterjee, Saurabh (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Rogers, Bradley (Committee member) / Macia, Narciso (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Photovoltaic (PV) modules undergo performance degradation depending on climatic conditions, application, and system configuration. Performance degradation prediction for PV modules is primarily based on Accelerated Life Testing (ALT) procedures. In order to further strengthen the ALT process, additional investigation of the power degradation of field-aged PV modules in various configurations is required. A detailed investigation of 1,900 field-aged (12-18 years) PV modules deployed in a power plant application was conducted for this study. The analysis was based on current-voltage (I-V) measurements of all 1,900 modules individually; the I-V curve data of the individual modules formed the basis for calculating their performance degradation. The percentage performance degradation and degradation rates were compared to an earlier study done at the same plant. The current research focused primarily on identifying the extent of potential induced degradation (PID) of individual modules with reference to the negative ground potential. To investigate this, the arrangement and connection of the individual modules/strings were examined in detail. The study also examined the extent of underperformance of each series string due to performance mismatch of the individual modules in that string. The power loss due to individual module degradation and module mismatch at the string level was then compared to the rated value.
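A minimal sketch, with made-up numbers, of two calculations the study describes: annualized power degradation from I-V measurements, and string underperformance due to module mismatch (a series string's operating current is limited by its weakest module).

```python
rated_pmax, years = 100.0, 15.0
measured_pmax = [91.0, 88.5, 93.2, 79.0]          # illustrative I-V results (W)

# Annualized degradation rate, assuming linear decline from the rated value.
for p in measured_pmax:
    rate = (rated_pmax - p) / rated_pmax / years * 100.0
    print(f"{p:5.1f} W -> {rate:.2f} %/year degradation")

# String mismatch: the series current is pinned near the lowest module Imp.
imp = [4.9, 4.8, 4.2, 4.85]                       # module currents at Pmax (A)
vmp = [17.0, 16.8, 16.5, 17.1]                    # module voltages at Pmax (V)
string_power = min(imp) * sum(vmp)                # series-string operating point
sum_module_power = sum(i * v for i, v in zip(imp, vmp))
print(f"mismatch loss ~ {100 * (1 - string_power / sum_module_power):.1f}%")
```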
Contributors: Singh, Jaspreet (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Query expansion is a search engine functionality that suggests a set of related queries for a user-issued keyword query. In the case of exploratory or ambiguous keyword queries, the user's main goal is to identify and select a specific category of query results among the different categorical options, in order to narrow down the search and reach the desired result. Typical corpus-driven keyword query expansion approaches return popular words in the results as expanded queries. These empirical methods fail to cover all semantics of the categories present in the query results. More importantly, these methods do not consider the semantic relationship between the keywords featured in an expanded query. Contrary to a normal keyword search setting, these factors are non-trivial in an exploratory and ambiguous query setting, where the user's precise discernment of the different categories present in the query results is more important for making subsequent search decisions. In this thesis, I propose a new framework for keyword query expansion: generating a set of queries that correspond to a categorization of the original query results, referred to as categorizing query expansion. Two classes of algorithms are proposed: one performs clustering as a pre-processing step and then generates categorizing expanded queries based on the clusters; the other handles the case of generating quality expanded queries in the presence of imperfect clusters.
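A minimal sketch of the cluster-then-expand idea, assuming scikit-learn: cluster the result snippets of an ambiguous query ("jaguar") and surface each cluster's top terms as a categorizing expanded query. The snippets are illustrative assumptions, not data from the thesis.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [                                    # hypothetical results for "jaguar"
    "jaguar speed big cat habitat rainforest predator",
    "jaguar wild cat south america spotted fur",
    "jaguar car dealership luxury sedan engine",
    "jaguar xf review luxury car price engine",
]
vec = TfidfVectorizer()
X = vec.fit_transform(snippets)                 # TF-IDF vectors of the snippets
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vec.get_feature_names_out())
for c, center in enumerate(km.cluster_centers_):
    top = terms[np.argsort(center)[::-1][:3]]   # highest-weight terms per cluster
    print(f"expanded query {c}: jaguar " + " ".join(top))
```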
Contributors: Natarajan, Sivaramakrishnan (Author) / Chen, Yi (Thesis advisor) / Candan, Selcuk (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A statement appearing in social media poses a significant challenge for determining its provenance. Provenance describes the origin, custody, and ownership of something. Most statements appearing in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amounts of data available, large numbers of users, and a highly dynamic environment, provide unique and untapped opportunities for solving the provenance problem for social media. Current approaches for tracking provenance data do not scale to online social media, and consequently there is a gap in provenance methodologies and technologies that presents exciting research opportunities. The guiding vision is the use of social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information that is not readily made available to the average social media user. This research introduces an approach, and builds a foundation, aimed at realizing a provenance data capability for social media users that is not accessible today.
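As one illustration of mining the social media environment itself for provenance, consider walking a repost graph backward from where a statement was observed toward candidate originators. This minimal sketch assumes the networkx library and a hypothetical repost graph; it illustrates the general idea, not the thesis's actual method.

```python
import networkx as nx

g = nx.DiGraph()                   # edge u -> v means v reposted the statement from u
g.add_edges_from([("alice", "bob"), ("bob", "carol"),
                  ("dana", "carol"), ("alice", "dana")])

observed_at = "carol"              # user at whom the statement was observed
origins = [n for n in g.nodes if g.in_degree(n) == 0]   # candidate originators
for origin in origins:
    for path in nx.all_simple_paths(g, origin, observed_at):
        print(" -> ".join(path))   # candidate provenance paths
```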
Contributors: Barbier, Geoffrey P. (Author) / Liu, Huan (Thesis advisor) / Bell, Herbert (Committee member) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction that finds a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating the relevant information from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features may also carry special structural requirements, which introduces additional complexity into feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also handled by the package: features may be grouped together, there may exist hierarchies and overlapping groups among them, and there may be requirements for selecting the most relevant groups. Sparse solutions, however, are not guaranteed to be robust. For the selection to be robust, there are techniques that provide theoretical justification for why certain features are selected. Stability selection is one such method: it allows the use of existing sparse learning methods to select the stable set of features for a given training sample. This is done by assigning a selection probability to each feature: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this is repeated a large number of times, and the probability is computed as the fraction of runs in which a feature is selected. Cross-validation is used to determine the best parameter value over a range of values, by selecting the value that gives the maximum accuracy score. With such a combination of algorithms, good convergence guarantees, stable feature selection properties, and the inclusion of various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable functionalities of the SLEP package, and these features can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used here for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
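A minimal sketch of the stability selection procedure described above, assuming scikit-learn with Lasso as the sparse learner; the synthetic data stands in for the AD biomarker dataset.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
w = np.zeros(d)
w[[2, 7, 19]] = [1.5, -2.0, 1.0]                     # only 3 truly relevant features
y = X @ w + 0.5 * rng.standard_normal(n)

n_rounds, counts = 100, np.zeros(d)
for _ in range(n_rounds):
    idx = rng.choice(n, size=n // 2, replace=False)  # sub-sample half the data
    model = Lasso(alpha=0.1).fit(X[idx], y[idx])     # fit the sparse learner
    counts += (model.coef_ != 0)                     # tally selected features

stable = np.flatnonzero(counts / n_rounds >= 0.8)    # selection-probability cutoff
print("stable features:", stable)                    # expect roughly [2, 7, 19]
```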
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011