Matching Items (611)
Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more amenable to scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, product-machine qualification constraints, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan. The scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints, such as the preventive maintenance schedule, setup crew availability, and carrier limitations, are included in the DSS. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system.
Experimental design is then applied to understand the behavior of the DSS and to identify its best configuration under different demand scenarios. Product-machine qualification decisions have a significant long-term impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under demand uncertainty. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results compare the performance of the different solution methods.
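The optimizer/scheduler decomposition described in the abstract can be illustrated with a toy sketch (not the dissertation's actual DSS): a greedy allocation of lots to qualified machines, followed by per-machine sequencing that batches product families so each family incurs at most one setup. All lot, family, and machine data below are hypothetical.

```python
# Illustrative two-phase scheduling sketch: an "optimizer" allocates lots to
# qualified machines, then a "scheduler" sequences each machine's lots,
# grouping by product family to limit sequence-independent setups.

def allocate(lots, qualification, capacity):
    """Greedy allocation: each lot goes to the qualified machine
    with the most remaining capacity."""
    load = {m: 0 for m in capacity}
    plan = {}
    for lot, (family, hours) in lots.items():
        eligible = [m for m in qualification[family] if load[m] + hours <= capacity[m]]
        if not eligible:
            raise ValueError(f"no qualified machine can fit lot {lot}")
        m = min(eligible, key=lambda mach: load[mach])
        load[m] += hours
        plan[lot] = m
    return plan

def sequence(plan, lots):
    """Per-machine sequencing rule: batch lots of the same product
    family together so each family incurs at most one setup."""
    seq = {}
    for lot, m in plan.items():
        seq.setdefault(m, []).append(lot)
    for m in seq:
        seq[m].sort(key=lambda lot: lots[lot][0])  # group by family
    return seq

lots = {"L1": ("A", 3), "L2": ("B", 2), "L3": ("A", 1), "L4": ("B", 4)}
qualification = {"A": ["M1", "M2"], "B": ["M2"]}
capacity = {"M1": 8, "M2": 8}
plan = allocate(lots, qualification, capacity)
print(sequence(plan, lots))
```

A real DSS layers factory rules (maintenance windows, setup crews, carriers) on top of such a sequencing step; the point here is only the plan-then-sequence decomposition.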
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A novel concept for the integration of flame-assisted fuel cells (FFC) with a gas turbine is analyzed in this paper. Six different fuels (CH4, C3H8, JP-4, JP-5, JP-10(L), and H2) are investigated for the analytical model of the FFC-integrated gas turbine hybrid system. As the equivalence ratio increases, the efficiency of the hybrid system increases initially and then decreases, because the decreasing flow rate of air begins to outweigh the increasing hydrogen concentration. This occurs at an equivalence ratio of 2 for CH4. The thermodynamic cycle is analyzed using a temperature-entropy diagram and a pressure-volume diagram. These diagrams show that, as the equivalence ratio increases, the power generated by the turbine in the hybrid setup decreases. Thermodynamic analysis verified that energy is conserved: the total chemical energy entering the system equals the heat rejected by the system plus the power generated by the system. Of the six fuels, the hybrid system performs best with H2 as the fuel. The electrical efficiency with H2 is predicted to be 27%; with CH4, 24%; with C3H8, 22%; with JP-4, 21%; with JP-5, 20%; and with JP-10(L), 20%. When H2 fuel is used, the overall integrated system is predicted to be 24.5% more efficient than the standard gas turbine system. The integrated system is predicted to be 23.0% more efficient with CH4, 21.9% more efficient with C3H8, 22.7% more efficient with JP-4, 21.3% more efficient with JP-5, and 20.8% more efficient with JP-10(L). The sensitivity of the model is investigated using various fuel utilizations. When CH4 fuel is used, the integrated system is predicted to be 22.7% more efficient with a fuel utilization efficiency of 90% compared to that of 30%.
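For readers unfamiliar with the fuel-rich operating point discussed above, the equivalence-ratio bookkeeping can be sketched generically (this is the textbook definition, not the thesis model; the stoichiometry shown is for CH4):

```python
# phi = (fuel/air)_actual / (fuel/air)_stoichiometric, on a molar basis.
# phi > 1 is fuel-rich: less air per mole of fuel than complete combustion needs.

def equivalence_ratio(fuel_mol, air_mol, stoich_air_per_fuel):
    return (fuel_mol / air_mol) * stoich_air_per_fuel

# Stoichiometric CH4 combustion, CH4 + 2(O2 + 3.76 N2): 9.52 mol air per mol CH4.
STOICH_CH4 = 2 * (1 + 3.76)
phi = equivalence_ratio(fuel_mol=1.0, air_mol=STOICH_CH4 / 2,
                        stoich_air_per_fuel=STOICH_CH4)
print(round(phi, 2))  # supplying half the stoichiometric air gives phi = 2
```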

Contributors: Rupiper, Lauren Nicole (Author) / Milcarek, Ryan (Thesis director) / Wang, Liping (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School for Engineering of Matter, Transport & Energy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection. The subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear or Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods on the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time ordering of the data to extract features and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values.
Two methods are proposed to solve the bias problem. One uses an out-of-bag sampling method called OOBForest, and the other, based on the new concept of a partial permutation test, is called a pForest. Experimental results show the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
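The interval-feature idea behind a time series forest can be illustrated with a minimal sketch (hypothetical parameters, not the dissertation's implementation): each random interval of a series is summarized by its mean, standard deviation, and slope, and the resulting features feed an ordinary tree ensemble.

```python
# Interval-feature extraction in the spirit of a time series forest:
# summarize random intervals of a series by mean, std deviation, and slope.
import random
import statistics

def slope(values):
    """Least-squares slope of the values against their index."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(values)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(values))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

def interval_features(series, n_intervals, rng):
    """Three summary features per randomly chosen interval (>= 3 points)."""
    n = len(series)
    feats = []
    for _ in range(n_intervals):
        start = rng.randrange(0, n - 2)
        end = rng.randrange(start + 2, n)
        window = series[start:end + 1]
        feats += [statistics.fmean(window), statistics.pstdev(window), slope(window)]
    return feats

rng = random.Random(0)
series = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
feats = interval_features(series, n_intervals=2, rng=rng)
print(len(feats))  # 3 features per interval
```

On this perfectly linear toy series, every interval's slope feature comes out as 1.0, which is what makes the extracted features directly interpretable.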
Contributors: Deng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Hydropower generation is a clean, renewable energy source that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitation on their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years, and the forced outage rate consequently increases exponentially. Although these long forced outages are not frequent, their impact is immense. If the reasonable timing of rehabilitation is missed, an abrupt long-term outage could occur, followed by additional unnecessary repairs and inefficiencies. Conversely, replacing equipment too early wastes revenue. The hydropower plants of Korea Water Resources Corporation (hereafter K-water) are utilized for this study. Twenty-four K-water generators comprise the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, within which only forced outage faults are considered as raw data for the reliability analyses. The mean cumulative repair function (MCF) of each facility is determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, is also fitted to characterize representative equipment and system availability. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant.
Two case studies are presented to enhance the understanding of the availability of each power plant and to provide economic evaluations for modernization. Equipment in a hydropower plant is also categorized into two groups based on reliability to determine modernization timing, and suitable replacement periods are obtained via simulation.
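The nonparametric MCF estimate underlying Nelson's graph method can be sketched as follows (fleet data below are hypothetical): at each failure time, the MCF rises by the number of failures at that time divided by the number of units still under observation.

```python
# Minimal mean cumulative function (MCF) estimate for a fleet of
# repairable units, in the style of Nelson's graph method.

def mcf(histories):
    """histories: list of (censoring_time, [failure_times]) per unit.
    Returns the step points of the estimated MCF."""
    events = sorted({t for _, fails in histories for t in fails})
    curve, total = [], 0.0
    for t in events:
        at_risk = sum(1 for cens, _ in histories if cens >= t)  # units still observed
        d = sum(fails.count(t) for _, fails in histories)       # failures at time t
        total += d / at_risk
        curve.append((t, round(total, 4)))
    return curve

fleet = [
    (10.0, [2.0, 7.0]),  # unit observed to t=10, repaired after failures at 2 and 7
    (8.0,  [4.0]),
    (6.0,  []),          # no failures before censoring at t=6
]
print(mcf(fleet))
```

The resulting step curve is what gets compared across generators, and a power law model can then be fitted to its shape.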
Contributors: Kwon, Ogeuk (Author) / Holbert, Keith E. (Thesis advisor) / Heydt, Gerald T (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The health benefits of physical activity are widely accepted. Emerging research also indicates that sedentary behaviors can carry negative health consequences regardless of physical activity level. This dissertation comprises four projects that examined measurement properties of physical activity and sedentary behavior monitors. Project one identified the oxygen costs of four other-care activities in seventeen adults. Pushing a wheelchair and pushing a stroller were identified as moderate-intensity activities. Minutes spent engaged in these activities contribute towards meeting the 2008 Physical Activity Guidelines. Project two identified the oxygen costs of common cleaning activities in sixteen adults. Mopping a floor was identified as moderate-intensity physical activity, while cleaning a kitchen and cleaning a bathtub were identified as light-intensity physical activity. Minutes spent mopping a floor contribute towards meeting the 2008 Physical Activity Guidelines. Project three evaluated the differences in the number of minutes spent in activity levels when utilizing different epoch lengths in accelerometry. Shorter epoch lengths (1 second, 5 seconds) accumulated significantly more minutes of sedentary behavior than a longer epoch length (60 seconds). The longer epoch length also identified significantly more time engaged in light-intensity activities than the shorter epoch lengths. Future research needs to account for epoch length selection when conducting physical activity and sedentary behavior assessment. Project four investigated the accuracy of four activity monitors in assessing activities that were either sedentary behaviors or light-intensity physical activities. The ActiGraph GT3X+ assessed the activities least accurately, while the SenseWear Armband and ActivPAL assessed activities equally accurately. The monitor used to assess physical activity and sedentary behaviors may influence the accuracy of measurement.
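The epoch-length effect described in project three can be illustrated with a toy re-binning sketch (the cut-point and counts are hypothetical, not validated accelerometer thresholds): the same stream of 1-second activity counts is binned into epochs of different lengths before classification.

```python
# Why epoch length changes classified minutes: short epochs resolve brief
# rest bouts, while a long epoch averages them into the surrounding movement.

def sedentary_minutes(counts_per_sec, epoch_len, cutpoint):
    """Minutes classified sedentary when counts are binned into epochs of
    epoch_len seconds and compared against cutpoint counts per second."""
    sedentary_sec = 0
    for i in range(0, len(counts_per_sec), epoch_len):
        epoch = counts_per_sec[i:i + epoch_len]
        if sum(epoch) < cutpoint * len(epoch):
            sedentary_sec += len(epoch)
    return sedentary_sec / 60

# One minute of alternating rest (0 counts/s) and movement (50 counts/s).
stream = ([0] * 5 + [50] * 5) * 6
short = sedentary_minutes(stream, epoch_len=1, cutpoint=20)
long_ = sedentary_minutes(stream, epoch_len=60, cutpoint=20)
print(short, long_)  # the 60 s epoch averages the rest bouts away
```

With 1-second epochs the half-minute of rest is recovered; with a single 60-second epoch it disappears, mirroring the direction of the effect reported above.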
Contributors: Meckes, Nathanael (Author) / Ainsworth, Barbara E (Thesis advisor) / Belyea, Michael (Committee member) / Buman, Matthew (Committee member) / Gaesser, Glenn (Committee member) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cardiovascular disease (CVD) is the number one cause of death in the United States, and type 2 diabetes (T2D) and obesity lead to cardiovascular disease. Obese adults are more susceptible to CVD than their non-obese counterparts. Exercise training leads to large reductions in the risk of CVD and T2D. Recent evidence suggests high-intensity interval training (HIT) may yield similar or superior benefits in a shorter amount of time compared to traditional continuous exercise training. The purpose of this study was to compare the effects of HIT to continuous (CONT) exercise training for the improvement of endothelial function, glucose control, and visceral adipose tissue. Seventeen obese men (N=9) and women (N=8) were randomized to either HIT (N=9, age=34 years, BMI=37.6 kg/m2) or CONT (N=8, age=34 years, BMI=34.6 kg/m2) exercise 3 days/week for 8 weeks. Endothelial function was assessed via flow-mediated dilation (FMD), glucose control was assessed via continuous glucose monitoring (CGM), and visceral adipose tissue and body composition were measured with an iDXA. Incremental exercise testing was performed at baseline, 4 weeks, and 8 weeks. There were no changes in weight, fat mass, or visceral adipose tissue measured by the iDXA, but there was a significant reduction in body fat that did not differ by group (46±6.3 to 45.4±6.6%, P=0.025). HIT led to a significantly greater improvement in FMD than CONT exercise (HIT: 5.1 to 9.0%; CONT: 5.0 to 2.6%, P=0.006). Average 24-hour glucose was not improved over the whole group, and there were no group x time interactions for CGM data (HIT: 103.9 to 98.2 mg/dl; CONT: 99.9 to 100.2 mg/dl, P>0.05). When the statistical analysis included only the subjects who started with an average baseline glucose > 100 mg/dl, there was a significant improvement in glucose control overall, but no group x time interaction (107.8 to 94.2 mg/dl, P=0.027).
Eight weeks of HIT led to superior improvements in endothelial function and similar improvements in glucose control in obese subjects at risk for T2D and CVD. HIT was shown to have comparable or superior health benefits in this obese sample with a 36% lower total exercise time commitment.
Contributors: Sawyer, Brandon J (Author) / Gaesser, Glenn A (Thesis advisor) / Shaibi, Gabriel (Committee member) / Lee, Chong (Committee member) / Swan, Pamela (Committee member) / Buman, Matthew (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Purpose: The purpose of this study was to examine the acute effects of two novel intermittent exercise prescriptions on glucose regulation and ambulatory blood pressure. Methods: Ten subjects (5 men and 5 women, ages 31.5 ± 5.42 yr, height 170.38 ± 9.69 cm and weight 88.59 ± 18.91 kg) participated in this four-treatment crossover trial. All subjects participated in four trials, each taking place over three days. On the evening of the first day, subjects were fitted with a continuous glucose monitor (CGM). On the second day, subjects were fitted with an ambulatory blood pressure monitor (ABP) and underwent one of the following four conditions in a randomized order: 1) 30-min: 30 minutes of continuous exercise at 60 - 70% VO2peak; 2) Mod 2-min: twenty-one 2-min bouts of walking at 3 mph performed once every 20 minutes; 3) HI 2-min: eight 2-min bouts of walking at maximal incline performed once every hour; 4) Control: a no exercise control condition. On the morning of the third day, the CGM and ABP devices were removed. All meals were standardized during the study visits. Linear mixed models were used to compare mean differences in glucose and blood pressure regulation between the four trials. Results: Glucose concentrations were significantly lower following the 30-min (91.1 ± 14.9 mg/dl), Mod 2-min (93.7 ± 19.8 mg/dl) and HI 2-min (96.1 ± 16.4 mg/dl) trials as compared to the Control (101.1 ± 20 mg/dl) (P < 0.001 for all three comparisons). The 30-min trial was superior to the Mod 2-min, which was superior to the HI 2-min trial in lowering blood glucose levels (P < 0.001 and P = 0.003 respectively). Only the 30-min trial was effective in lowering systolic ABP (124 ± 12 mmHg) as compared to the Control trial (127 ± 14 mmHg; P < 0.001) for up to 11 hours post exercise. Conclusion: Performing frequent short (i.e., 2 minutes) bouts of moderate or high intensity exercise may be a viable alternative to traditional continuous exercise in improving glucose regulation. 
However, 2-min bouts of exercise are not effective in reducing ambulatory blood pressure in healthy adults.
Contributors: Bhammar, Dharini Mukeshkumar (Author) / Gaesser, Glenn A (Thesis advisor) / Shaibi, Gabriel (Committee member) / Buman, Matthew (Committee member) / Swan, Pamela (Committee member) / Lee, Chong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Production from a high-pressure gas well at a high production rate encounters the risk of operating near the choking condition for compressible flow in porous media. The unbounded gas pressure gradient near the point of choking, which is located near the wellbore, generates an effective tensile stress on the porous rock frame. This tensile stress almost always exceeds the tensile strength of the rock, causing a tensile failure of the rock and leading to wellbore instability. In a porous rock, not all pores are choked at the same flow rate; when even one pore is choked, the flow through the entire porous medium should be considered choked, as the gas pressure gradient at the point of choking becomes singular. This thesis investigates the choking condition for compressible gas flow in a single microscopic pore. Quasi-one-dimensional analysis and axisymmetric numerical simulations of compressible gas flow in a pore-scale varicose tube with a number of bumps are carried out, and the local Mach number and pressure along the tube are computed for flow near the choking condition. The effects of tube length, inlet-to-outlet pressure ratio, the number of bumps, and the amplitude of the bumps on the choking condition are obtained. These critical values provide guidance for avoiding the choking condition in practice.
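As background for the choking condition, the classical quasi-one-dimensional, isentropic area-Mach relation can be sketched as follows (illustrative only; the thesis's varicose-tube simulations are more general than this ideal relation):

```python
# Isentropic area-Mach relation for quasi-1D compressible flow:
# the flow chokes when the narrowest throat (A = A*) reaches M = 1.
import math

GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(M, g=GAMMA):
    """A/A* as a function of local Mach number."""
    return (1.0 / M) * ((2.0 / (g + 1)) * (1 + (g - 1) / 2 * M * M)) ** (
        (g + 1) / (2 * (g - 1)))

def subsonic_mach(a_ratio, g=GAMMA):
    """Invert A/A* on the subsonic branch by bisection (A/A* falls as M rises)."""
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) > a_ratio:
            lo = mid  # area ratio still too large: Mach must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(area_ratio(1.0), 3))     # exactly 1 at the throat: the choking point
print(round(subsonic_mach(2.0), 3))  # local Mach at a station with twice the throat area
```

In a varicose tube, each bump narrows the passage, so the bump with the smallest area sets where M first reaches 1 and the whole pore chokes.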
Contributors: Yuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Wang, Liping (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal designs for ALT with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show the effects of the parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and demonstrated with several examples. Several graphical tools are also developed to evaluate candidate designs. Finally, model checking designs are discussed for the case where more than one model is available.
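The basic ALT arithmetic can be illustrated with an Arrhenius acceleration factor (a common life-stress model used for temperature stress; the activation energy below is hypothetical, and this is not the dissertation's PH/GLM formulation):

```python
# Arrhenius acceleration factor: raising the test temperature compresses
# lifetimes by AF = exp[(Ea/k) * (1/T_use - 1/T_test)], temperatures in kelvin.
import math

K_BOLTZ = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_k, t_test_k):
    return math.exp((ea_ev / K_BOLTZ) * (1.0 / t_use_k - 1.0 / t_test_k))

# Hypothetical activation energy of 0.7 eV; use at 25 C, test at 85 C.
af = acceleration_factor(ea_ev=0.7, t_use_k=298.15, t_test_k=358.15)
print(round(af, 1))  # each test hour stands in for roughly AF use hours
```

Optimal ALT design then chooses stress levels and censoring schemes so that extrapolation back to the use condition is as precise as possible.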
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the increase in computing power and the availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within the data. Thus, knowledge discovery by machine learning techniques is necessary if we want to better understand information from data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these topics. We also study variable selection for matched data sets and propose a solution when there is non-linearity in the matched data. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy. The aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are predictive for only a subset of the predictor classes. An Asymmetric Random Forest (ARF) is proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. A Matched Random Forest (MRF) is proposed to find variables that are able to distinguish case and control without the restrictions that exist in linear models. MRF detects variables that can distinguish case and control even in the presence of interaction and qualitative variables.
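The asymmetric-loss idea can be illustrated with cost-sensitive thresholding (a generic device, not the proposed aSVM): with false-positive cost c_fp and false-negative cost c_fn, the cost-minimizing rule predicts positive only when p(y=1|x) exceeds c_fp / (c_fp + c_fn), so raising c_fp trades recall for precision.

```python
# Cost-sensitive thresholding: asymmetric misclassification costs shift the
# decision threshold away from 0.5, yielding fewer but surer positives.

def predict(probs, c_fp, c_fn):
    """Predict 1 only when the positive-class probability beats the
    cost-ratio threshold c_fp / (c_fp + c_fn)."""
    threshold = c_fp / (c_fp + c_fn)
    return [int(p > threshold) for p in probs]

probs = [0.35, 0.55, 0.75, 0.95]  # hypothetical classifier outputs
print(predict(probs, c_fp=1, c_fn=1))  # symmetric loss: plain 0.5 threshold
print(predict(probs, c_fp=3, c_fn=1))  # penalize false positives: threshold 0.75
```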
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013