Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan; the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system.
Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a long-term, significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under demand uncertainty. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results compare the performance of the different solution methods.
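The two-phase DSS idea, an optimizer producing a tentative plan and a scheduler sequencing lots per machine, can be sketched in miniature. This is an illustrative toy, not the dissertation's algorithm: the greedy load-balancing rule, the family-grouping heuristic, and all data and names are assumptions.

```python
def assign_lots(lots, qualification):
    """Toy "optimizer": send each lot to the least-loaded qualified machine."""
    load = {}
    plan = {}
    for lot, family in lots:
        machines = qualification[family]            # machines qualified for this family
        m = min(machines, key=lambda x: load.get(x, 0))
        load[m] = load.get(m, 0) + 1
        plan.setdefault(m, []).append((lot, family))
    return plan

def sequence(plan):
    """Toy "scheduler": group each machine's lots by product family so that
    sequence-independent family setups are incurred as rarely as possible."""
    return {m: sorted(ls, key=lambda lf: lf[1]) for m, ls in plan.items()}

# Illustrative data: lots tagged with product families, and a qualification map.
lots = [("L1", "A"), ("L2", "B"), ("L3", "A"), ("L4", "B"), ("L5", "A")]
qual = {"A": ["M1", "M2"], "B": ["M2"]}
schedule = sequence(assign_lots(lots, qual))
```

Real instances add setup crews, preventive maintenance windows, and carrier limits, which is why the dissertation layers factory rules on top of the tentative plan rather than solving the full MILP daily.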
ContributorsFu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linearity or Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods on the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time-ordering of the data to extract features and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values.
Two methods are proposed to solve the bias problem. One uses an out-of-bag sampling method, called OOBForest; the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show that the existing methods are not always reliable for multi-valued predictors, while the proposed methods compare favorably.
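The interval features behind a time series forest can be illustrated with a small sketch: an interval of the series is summarized by its mean, standard deviation, and least-squares slope, each computable in time linear in the interval length. The function name and the fixed interval below are assumptions for illustration; an actual TSF samples many random intervals per tree.

```python
import math

def interval_features(series, start, end):
    """Mean, standard deviation, and least-squares slope over series[start:end]."""
    window = series[start:end]
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    x_mean = (n - 1) / 2                         # index positions 0..n-1
    denom = sum((i - x_mean) ** 2 for i in range(n))
    slope = sum((i - x_mean) * (y - mean) for i, y in enumerate(window)) / denom
    return mean, math.sqrt(var), slope

# A linearly rising toy series: slope should come out as 1 per step.
ts = [1.0, 2.0, 3.0, 4.0, 5.0]
mean, std, slope = interval_features(ts, 0, 5)
```

Because each feature has a direct reading (level, variability, trend of a time window), the resulting trees stay interpretable in a way that warping-distance classifiers are not.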
ContributorsDeng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created2011
Description
Hydropower generation is a clean, renewable energy source that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitating their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years. Consequently, the forced outage rate increases exponentially due to these longer outages. Although these long forced outages are not frequent, their impact is immense. If the right timing for rehabilitation is missed, an abrupt long-term outage could occur, followed by additional unnecessary repairs and inefficiencies. Conversely, replacing equipment too early wastes revenue. The hydropower plants of Korea Water Resources Corporation (hereafter K-water) are utilized for this study. Twenty-four K-water generators comprise the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system, because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, of which only forced-outage faults are used as raw data for the reliability analyses. The mean cumulative repair function (MCF) of each facility is determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, is also fitted to characterize equipment and system availability. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant.
Two case studies are presented to enhance the understanding of each power plant's availability and to present economic evaluations for modernization. Equipment in a hydropower plant is also categorized into two groups based on reliability to determine modernization timing, and suitable replacement periods are obtained via simulation.
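For time-truncated data (a system observed to a fixed time T), the power law model mentioned above has a closed-form maximum likelihood fit: beta_hat = n / sum(ln(T/t_i)) and lam_hat = n / T**beta_hat, with expected cumulative failures M(t) = lam * t**beta. The sketch below uses these standard estimators; the failure times are illustrative, not K-water data.

```python
import math

def power_law_mle(failure_times, T):
    """Time-truncated MLE for the power law (Crow-AMSAA) process, whose
    failure intensity is u(t) = lam * beta * t**(beta - 1)."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

def expected_failures(lam, beta, t):
    """Expected cumulative number of failures by time t: M(t) = lam * t**beta."""
    return lam * t ** beta

# Illustrative forced-outage times (hours) for one generator observed to T = 400 h.
lam, beta = power_law_mle([100.0, 200.0, 300.0], 400.0)
```

A fitted beta greater than 1 indicates an increasing failure intensity, i.e., aging equipment, which is the signal that motivates the rehabilitation-timing question.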
ContributorsKwon, Ogeuk (Author) / Holbert, Keith E. (Thesis advisor) / Heydt, Gerald T (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2011
Description
The health benefits of physical activity are widely accepted. Emerging research also indicates that sedentary behaviors can carry negative health consequences regardless of physical activity level. This dissertation comprised four projects that examined measurement properties of physical activity and sedentary behavior monitors. Project one identified the oxygen costs of four other care activities in seventeen adults. Pushing a wheelchair and pushing a stroller were identified as moderate-intensity activities. Minutes spent engaged in these activities contribute towards meeting the 2008 Physical Activity Guidelines. Project two identified the oxygen costs of common cleaning activities in sixteen adults. Mopping a floor was identified as moderate-intensity physical activity, while cleaning a kitchen and cleaning a bathtub were identified as light-intensity physical activity. Minutes spent mopping a floor contribute towards meeting the 2008 Physical Activity Guidelines. Project three evaluated differences in the number of minutes spent in activity levels when utilizing different epoch lengths in accelerometry. Shorter epoch lengths (1 second, 5 seconds) accumulated significantly more minutes of sedentary behaviors than a longer epoch length (60 seconds). The longer epoch length also identified significantly more time engaged in light-intensity activities than the shorter epoch lengths. Future research needs to account for epoch length selection when conducting physical activity and sedentary behavior assessment. Project four investigated the accuracy of four activity monitors in assessing activities that were either sedentary behaviors or light-intensity physical activities. The ActiGraph GT3X+ assessed the activities least accurately, while the SenseWear Armband and activPAL assessed activities equally accurately. The monitor used to assess physical activity and sedentary behaviors may influence the accuracy of the measurement of a construct.
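The epoch-length effect in project three can be made concrete with a toy aggregation: the same per-second counts, classified against a sedentary cutoff expressed in counts per minute, yield more sedentary minutes under short epochs than under 60-second epochs, because brief pauses in movement are averaged away inside a long epoch. The 100 counts-per-minute cutoff and the data here are illustrative assumptions, not values from the dissertation.

```python
def sedentary_minutes(counts_per_sec, epoch_len, cpm_cutoff=100):
    """Minutes classified as sedentary when per-second counts are summed into
    epochs of epoch_len seconds and compared to a counts-per-minute cutoff."""
    minutes = 0.0
    for i in range(0, len(counts_per_sec) - epoch_len + 1, epoch_len):
        epoch = counts_per_sec[i:i + epoch_len]
        cpm = sum(epoch) * 60 / epoch_len        # scale epoch counts to per-minute
        if cpm < cpm_cutoff:
            minutes += epoch_len / 60
    return minutes

# One minute of data: 30 s of no movement followed by 30 s of activity.
# 1-second epochs register the quiet half-minute as sedentary; a single
# 60-second epoch blends it with the active half-minute and registers none.
data = [0] * 30 + [5] * 30
```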
ContributorsMeckes, Nathanael (Author) / Ainsworth, Barbara E (Thesis advisor) / Belyea, Michael (Committee member) / Buman, Matthew (Committee member) / Gaesser, Glenn (Committee member) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Arizona State University (Publisher)
Created2012
Description
Cardiovascular disease (CVD) is the number one cause of death in the United States, and type 2 diabetes (T2D) and obesity lead to CVD. Obese adults are more susceptible to CVD than their non-obese counterparts. Exercise training leads to large reductions in the risk of CVD and T2D. Recent evidence suggests high-intensity interval training (HIT) may yield similar or superior benefits in a shorter amount of time compared to traditional continuous exercise training. The purpose of this study was to compare the effects of HIT with continuous (CONT) exercise training on endothelial function, glucose control, and visceral adipose tissue. Seventeen obese men (N=9) and women (N=8) were randomized to eight weeks of either HIT (N=9, age=34 years, BMI=37.6 kg/m2) or CONT (N=8, age=34 years, BMI=34.6 kg/m2) exercise, 3 days/week. Endothelial function was assessed via flow-mediated dilation (FMD), glucose control was assessed via continuous glucose monitoring (CGM), and visceral adipose tissue and body composition were measured with an iDXA. Incremental exercise testing was performed at baseline, 4 weeks, and 8 weeks. There were no changes in weight, fat mass, or visceral adipose tissue measured by the iDXA, but there was a significant reduction in body fat that did not differ by group (46±6.3 to 45.4±6.6%, P=0.025). HIT led to a significantly greater improvement in FMD than CONT exercise (HIT: 5.1 to 9.0%; CONT: 5.0 to 2.6%, P=0.006). Average 24-hour glucose was not improved over the whole group, and there were no group x time interactions for CGM data (HIT: 103.9 to 98.2 mg/dl; CONT: 99.9 to 100.2 mg/dl, P>0.05). When the statistical analysis included only the subjects whose average baseline glucose was > 100 mg/dl, there was a significant improvement in glucose control overall, but no group x time interaction (107.8 to 94.2 mg/dl, P=0.027).
Eight weeks of HIT led to superior improvements in endothelial function and similar improvements in glucose control in obese subjects at risk for T2D and CVD. HIT was shown to have comparable or superior health benefits in this obese sample with a 36% lower total exercise time commitment.
ContributorsSawyer, Brandon J (Author) / Gaesser, Glenn A (Thesis advisor) / Shaibi, Gabriel (Committee member) / Lee, Chong (Committee member) / Swan, Pamela (Committee member) / Buman, Matthew (Committee member) / Arizona State University (Publisher)
Created2013
Description
Purpose: The purpose of this study was to examine the acute effects of two novel intermittent exercise prescriptions on glucose regulation and ambulatory blood pressure. Methods: Ten subjects (5 men and 5 women; age 31.5 ± 5.42 yr, height 170.38 ± 9.69 cm, weight 88.59 ± 18.91 kg) participated in this four-treatment crossover trial. All subjects participated in four trials, each taking place over three days. On the evening of the first day, subjects were fitted with a continuous glucose monitor (CGM). On the second day, subjects were fitted with an ambulatory blood pressure (ABP) monitor and underwent one of the following four conditions in randomized order: 1) 30-min: 30 minutes of continuous exercise at 60-70% VO2peak; 2) Mod 2-min: twenty-one 2-min bouts of walking at 3 mph performed once every 20 minutes; 3) HI 2-min: eight 2-min bouts of walking at maximal incline performed once every hour; 4) Control: a no-exercise control condition. On the morning of the third day, the CGM and ABP devices were removed. All meals were standardized during the study visits. Linear mixed models were used to compare mean differences in glucose and blood pressure regulation between the four trials. Results: Glucose concentrations were significantly lower following the 30-min (91.1 ± 14.9 mg/dl), Mod 2-min (93.7 ± 19.8 mg/dl), and HI 2-min (96.1 ± 16.4 mg/dl) trials than following the Control (101.1 ± 20 mg/dl) (P < 0.001 for all three comparisons). The 30-min trial was superior to the Mod 2-min trial, which was superior to the HI 2-min trial, in lowering blood glucose levels (P < 0.001 and P = 0.003, respectively). Only the 30-min trial was effective in lowering systolic ABP (124 ± 12 mmHg) compared to the Control trial (127 ± 14 mmHg; P < 0.001) for up to 11 hours post-exercise. Conclusion: Performing frequent short (i.e., 2-minute) bouts of moderate- or high-intensity exercise may be a viable alternative to traditional continuous exercise for improving glucose regulation.
However, 2-min bouts of exercise are not effective in reducing ambulatory blood pressure in healthy adults.
ContributorsBhammar, Dharini Mukeshkumar (Author) / Gaesser, Glenn A (Thesis advisor) / Shaibi, Gabriel (Committee member) / Buman, Matthew (Committee member) / Swan, Pamela (Committee member) / Lee, Chong (Committee member) / Arizona State University (Publisher)
Created2013
Description
Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over a long period. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal designs for ALT with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study shows how the parameters affect the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. Several graphical tools are also developed for evaluating candidate designs. Finally, model checking designs are discussed for the case when more than one model is available.
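The PH-plus-GLM connection the first study relies on can be sketched for interval-censored ALT data: under a proportional hazards model, the probability of failing within an inspection interval, given survival to its start, is 1 - exp(-dH), where dH is the increase in cumulative hazard over the interval; this form is what allows the likelihood to be handled as a binomial GLM with a complementary log-log link. The Weibull baseline, log-linear stress effect, and all parameter values below are illustrative assumptions.

```python
import math

def interval_failure_prob(t0, t1, stress, beta0, beta1, shape):
    """P(fail in (t0, t1] | survived to t0) under a Weibull PH model whose
    cumulative hazard is H(t) = exp(beta0 + beta1*stress) * t**shape."""
    scale = math.exp(beta0 + beta1 * stress)
    delta_H = scale * (t1 ** shape - t0 ** shape)
    return 1.0 - math.exp(-delta_H)

# Higher accelerating stress should raise the failure probability in any interval.
p_low = interval_failure_prob(0.0, 1.0, stress=0.0, beta0=0.0, beta1=1.0, shape=1.0)
p_high = interval_failure_prob(0.0, 1.0, stress=1.0, beta0=0.0, beta1=1.0, shape=1.0)
```

Taking log(-log(1 - p)) of this probability is linear in the stress covariate, which is the GLM simplification the methodology exploits.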
ContributorsYang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created2013
Description
With the increase in computing power and the availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within it. Thus, knowledge discovery via machine learning techniques is necessary to better understand the information in data. This dissertation explores the topics of asymmetric loss and asymmetric data in machine learning and proposes new algorithms as solutions to some of the problems in these topics. Variable selection for matched data sets is also studied, and a solution is proposed for matched data with non-linearity. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy. The aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are predictive for only a subset of the classes. The Asymmetric Random Forest (ARF) is proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. The Matched Random Forest (MRF) is proposed to find variables that can distinguish case from control without the restrictions that exist in linear models. MRF detects variables that can distinguish case from control even in the presence of interactions and qualitative variables.
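The asymmetric-loss idea behind the aSVM can be illustrated with a class-weighted hinge loss: penalizing false positives more heavily than false negatives pushes a classifier toward high precision on the positive class. The weighting scheme, cost values, and names here are illustrative assumptions, not the dissertation's formulation.

```python
def asymmetric_hinge_loss(scores, labels, c_pos=1.0, c_neg=5.0):
    """Hinge loss with a larger penalty (c_neg) for margin violations on
    negative examples than on positive ones (c_pos), so that predicting
    the positive class carelessly is costly -- a precision-oriented loss."""
    total = 0.0
    for s, y in zip(scores, labels):      # y in {+1, -1}, s is the decision score
        cost = c_pos if y == 1 else c_neg
        total += cost * max(0.0, 1.0 - y * s)
    return total
```

Correct confident predictions incur zero loss either way; the same-sized mistake on a negative example costs five times as much as on a positive one under these illustrative weights.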
ContributorsKoh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created2013
Description
With the rapid development of mobile sensing technologies such as GPS, RFID, and sensors in smartphones, capturing position data in the form of trajectories has become easy. Moving object trajectory analysis is a growing area of interest owing to its applications in various domains, such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information on city traffic, a change in their trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for parameter estimation in HMMs, called m-BaumWelch, is used to develop likelihood estimates under assumed changes and to detect changes in trajectory data over time. Data from vehicles are used to test the change detection method. Secondly, sequential pattern mining is used to develop a model that detects changes in the frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are, has the time interval distribution in the pattern changed? Two approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. A challenge with clustering semantic trajectories is that both numeric and categorical attributes are present. Another problem to be addressed while clustering is that trajectories can be of different lengths and can have missing values.
A tree-based ensemble is used to address these problems. The approach is extended to outlier detection in semantic trajectories.
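The likelihood-based change detection in the first contribution rests on scoring new observation sequences under a trained HMM: a sustained drop in likelihood suggests the underlying process (e.g., the road network) has changed. The dissertation's m-BaumWelch algorithm handles the training side; the sketch below shows only the standard scaled forward recursion for computing an HMM log-likelihood, with illustrative discretized-trajectory parameters.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (initial probs pi, transition matrix A, emission matrix B),
    computed with the scaled forward recursion."""
    n = len(pi)
    loglik = 0.0
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for t in range(len(obs)):
        if t > 0:  # propagate through the transition matrix, then emit
            alpha = [B[s][obs[t]] * sum(alpha[r] * A[r][s] for r in range(n))
                     for s in range(n)]
        scale = sum(alpha)            # per-step scaling avoids underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

# Two hidden "road segments" and two discretized GPS symbols (all illustrative).
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.8, 0.2], [0.2, 0.8]]
usual = forward_loglik([0, 0, 0, 0], pi, A, B)
changed = forward_loglik([0, 1, 0, 1], pi, A, B)  # pattern the model finds unlikely
```

Comparing such scores for recent trajectories against those seen during training is one simple way to flag the kind of change the model is designed to detect.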
ContributorsKondaveeti, Anirudh (Author) / Runger, George C. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created2012
Description
The purpose of this pilot randomized controlled trial was to test the initial efficacy of a 10-week social cognitive theory (SCT)-based intervention to reduce workplace sitting time (ST). Participants were currently employed adults with predominantly sedentary occupations (n=24) working in the Greater Phoenix area in 2012-2013. Participants wore an activPAL (AP) inclinometer to assess postural allocation (i.e., sitting vs. standing) and an ActiGraph accelerometer (AG) to assess sedentary time for one week prior to beginning, and immediately following the completion of, the 10-week intervention. Self-reported measures of sedentary time were obtained via two validated questionnaires, for overall (International Physical Activity Questionnaire [IPAQ]) and domain-specific (Sedentary Behavior Questionnaire [SBQ]) sedentary behaviors. SCT constructs were also measured pre- and post-intervention via adapted physical activity questionnaires. Participants were randomly assigned to receive either (a) 10 weekly social cognitive-based e-newsletters focused on reducing workplace ST, or (b) similarly formatted 10 weekly e-newsletters focused on health education. Baseline-adjusted analysis of covariance was used to examine differences between groups in time spent sitting (AP) and sedentary (AG) during self-reported work hours from pre- to post-intervention. Both groups decreased ST and AG sedentary time; however, no significant differences between groups were observed. SCT constructs also did not change significantly between pretest and posttest in either group. These results indicate that individualized educational approaches to decreasing workplace sitting time may not be sufficient for producing long-term behavior change. Future research should utilize a larger sample, measure main outcomes more frequently, and incorporate more environmental factors throughout the intervention.
ContributorsGordon, Amanda (Author) / Buman, Matthew (Thesis advisor) / Der Ananian, Cheryl (Committee member) / Swan, Pamela (Committee member) / Arizona State University (Publisher)
Created2013