Matching Items (79)
Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more suitable for scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan; the scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints, such as the preventive maintenance schedule, setup crew availability, and carrier limitations, are included in the DSS. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify its best configuration under different demand scenarios. Product-machine qualification decisions have a significant long-term impact on production scheduling, and a robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed-integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of different solution methods.
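As a toy illustration only (not the dissertation's MILP), the sketch below shows the kind of qualification-constrained assignment such a formulation builds on: lots may run only on qualified unrelated parallel machines, and the bottleneck makespan is minimized. It uses the PuLP modeling library for convenience, and all lots, machines, and processing times are invented.
```python
# Minimal sketch: assign lots to qualified unrelated parallel machines,
# minimizing the bottleneck makespan. Data are illustrative placeholders.
import pulp

lots = ["L1", "L2", "L3", "L4"]
machines = ["M1", "M2"]
qual = {("L1", "M1"), ("L2", "M1"), ("L2", "M2"),
        ("L3", "M2"), ("L4", "M1"), ("L4", "M2")}           # product-machine qualification
proc = {("L1", "M1"): 3, ("L2", "M1"): 2, ("L2", "M2"): 4,
        ("L3", "M2"): 5, ("L4", "M1"): 4, ("L4", "M2"): 3}  # unrelated processing times

prob = pulp.LpProblem("qualified_assignment", pulp.LpMinimize)
x = {(l, m): pulp.LpVariable(f"x_{l}_{m}", cat="Binary") for (l, m) in qual}
cmax = pulp.LpVariable("makespan", lowBound=0)

prob += cmax                                                # minimize bottleneck makespan
for l in lots:                                              # each lot on one qualified machine
    prob += pulp.lpSum(x[l, m] for m in machines if (l, m) in qual) == 1
for m in machines:                                          # machine workload bounds the makespan
    prob += pulp.lpSum(proc[l, m] * x[l, m] for l in lots if (l, m) in qual) <= cmax

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({lm: int(v.value()) for lm, v in x.items()}, "Cmax =", cmax.value())
```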
ContributorsFu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description
In this work, a novel method is developed for making nano- and micro-fibrous hydrogels capable of preventing the rejection of implanted materials. This is achieved by either (1) mimicking the native cellular environment, to exert fine control over the cellular response, or (2) acting as a protective barrier, to camouflage the foreign nature of a material and evade recognition by the immune system. Comprehensive characterization and in vitro studies described here provide a foundation for developing substrates for use in clinical applications. Hydrogel dextran and poly(acrylic acid) (PAA) fibers are formed via electrospinning, in sizes ranging from nanometers to microns in diameter. While "as-electrospun" fibers are continuous in length, sonication is used to fragment fibers into short fiber "bristles" and generate nano- and micro-fibrous surface coatings over a wide range of topographies. Dex-PAA fibrous surfaces are chemically modified, then optimized and characterized for non-fouling and ECM-mimetic properties. The non-fouling nature of the fibers is verified, and cell culture studies show differential responses dependent upon chemical, topographical, and mechanical properties. Dex-PAA fibers are advantageously unique in that (1) a fine degree of control is possible over three significant parameters critical for modifying cellular response (topography, chemistry, and mechanical properties) over a range emulating that of native cellular environments, (2) the innate nature of the material is non-fouling, providing an inert background for adding back specific bioactive functionality, and (3) the fibers can be applied as a surface coating or constitute the scaffold itself. This is the first reported work of dex-PAA hydrogel fibers formed via electrospinning and thermal cross-linking, and, unique to this method, no toxic solvents or cross-linking agents are needed to create hydrogels or for surface attachment. This is also the first reported work of using sonication to fragment electrospun hydrogel fibers, and in which surface coatings were made via simple electrostatic interaction and dehydration. These versatile features enable fibrous surface coatings to be applied to virtually any material. Results of this research broadly impact the design of biomaterials that contact cells in the body by directing the consequent cell-material interaction.
ContributorsLouie, Katherine BoYook (Author) / Massia, Stephen P (Thesis advisor) / Bennett, Kevin (Committee member) / Garcia, Antonio (Committee member) / Pauken, Christine (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created2011
Description

This analysis explores the time needed for a PLGA bead to harden and to degrade, as well as whether the size of the needle injecting the bead and the addition of a drug (Vismodegib) affect these variables. Polymer degradation and hardening are critical to understand for the polymer's use in clinical settings, as these factors help determine how patients and healthcare providers use the drug and the estimated treatment time. Based on the literature, the natural logarithm of the polymer mass is expected to decrease linearly with time. Polymer hardening was tested by taking video recordings of gelatin plates as they were injected with microneedles and performing RGB analysis on the polymer "beads" created. Our results showed that the polymer hardened for all solutions and trials within approximately one minute, so the patient would have to keep the affected area motionless for only a short time. Both polymer bead size and drug concentration may have had a modest impact on hardening time, while bead size may affect the time required for the polymer to degrade. Based on the results, polymer degradation is expected to last multiple weeks, which may allow the polymer to be used as a long-term drug delivery system in the treatment of basal cell carcinoma.
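The expected log-linear degradation can be sketched as a straight-line fit of ln(mass) against time; the script below, using purely illustrative mass/time values rather than the thesis data, recovers a first-order rate constant and a projected degradation time under that assumption.
```python
# Sketch of the log-linear degradation model: ln(m) = ln(m0) - k*t.
# Mass/time values below are illustrative, not experimental data.
import numpy as np

t = np.array([0, 7, 14, 21, 28])                      # days
mass = np.array([100.0, 81.0, 66.0, 53.0, 43.0])      # remaining polymer mass (mg)

slope, intercept = np.polyfit(t, np.log(mass), 1)     # straight-line fit of ln(m) vs t
k = -slope                                            # first-order degradation constant
print(f"k = {k:.3f} 1/day, fitted m0 = {np.exp(intercept):.1f} mg")
print(f"time to reach 10% of initial mass = {np.log(10) / k:.0f} days")
```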

ContributorsEltze, Maren Caterina (Author) / Vernon, Brent (Thesis director) / Buneo, Christopher (Committee member) / Harrington Bioengineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

The goal of this research project is to create a Mathcad template file capable of statistically modeling the effects of mean and standard deviation on a microparticle batch characterized by the log-normal distribution model. Such a file can be applied during manufacturing to explore tolerances and increase cost and time effectiveness. Theoretical data for the time to 60% drug release and for the slope and intercept of the log-log plot were collected and subjected to statistical analysis in JMP. Since this project focuses on microparticle surface-degradation drug release with no drug diffusion, the characteristic variables relating to the slope (n, the diffusional release exponent) and the intercept (k, the kinetic constant) do not directly apply to the distribution model within the scope of the research. However, these variables are useful for analysis when the Mathcad template is applied to other types of drug release models.
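A minimal sketch of the same statistical idea, in Python rather than Mathcad: sample a log-normally distributed batch of particle radii, simulate surface-erosion release, and report the time to 60% release together with the slope and intercept of the log-log release curve. Every parameter (median radius, spread, erosion rate) is an assumed placeholder, not a value from the thesis.
```python
# Log-normal batch of spheres releasing drug by surface erosion (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
radii = rng.lognormal(mean=np.log(25.0), sigma=0.35, size=2_000)   # radius (um), assumed
erosion = 0.5                                                      # um/day, assumed

t = np.linspace(0.1, 150, 600)                                     # days
# fraction released by a sphere of radius r after eroding a depth v*t: 1 - (1 - v*t/r)^3
frac = 1.0 - np.clip(1.0 - erosion * t[:, None] / radii, 0.0, 1.0) ** 3
release = (frac * radii**3).sum(axis=1) / (radii**3).sum()         # mass-weighted batch release

t60 = t[np.searchsorted(release, 0.60)]                            # time to 60% release
early = release < 0.60                                             # fit the early-release phase
n, logk = np.polyfit(np.log(t[early]), np.log(release[early]), 1)
print(f"t60 = {t60:.1f} days, log-log slope n = {n:.2f}, intercept log k = {logk:.2f}")
```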

ContributorsHan, Priscilla (Author) / Vernon, Brent (Thesis director) / Nickle, Jacob (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
Computed tomography (CT) is one of the essential imaging modalities for medical diagnosis. Since its introduction in 1972, CT technology has improved dramatically, especially in terms of acquisition speed. However, the main principle of CT, acquiring only density information, did not change until recently. Different materials may have the same CT number, which may lead to uncertainty or misdiagnosis. Dual-energy CT (DECT) was recently reintroduced to solve this problem by using the additional spectral information of X-ray attenuation, and it aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so accurate spectral information is difficult to acquire because pixel noise is amplified in the resulting difference image. In this work, a new model and an image enhancement technique for DECT are proposed, based on the fact that the attenuation of a high-density material decreases more rapidly as X-ray energy increases. This fact has been ignored in most previous DECT image enhancement techniques. The proposed technique consists of offset correction, spectral error correction, and adaptive noise suppression. It reduced noise, effectively improved contrast, and showed better material differentiation in real patient images as well as in phantom studies.
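A small numeric illustration, not the proposed enhancement technique itself, of the noise-amplification point above: independent pixel noise from the low- and high-energy images adds in quadrature in the difference image, and averaging over a neighborhood of similar material recovers precision. The HU values and noise level are invented.
```python
# Noise in a DECT-style difference image (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(1)
sigma = 20.0                                          # per-image pixel noise (HU), assumed
low = 1000.0 + rng.normal(0, sigma, 90_000)           # low-energy measurement of one tissue
high = 900.0 + rng.normal(0, sigma, 90_000)           # high-energy measurement

diff = low - high                                     # carries the spectral information
print(f"noise per image       ~ {low.std():.1f} HU")
print(f"noise in difference   ~ {diff.std():.1f} HU (quadrature sum ≈ {sigma * 2**0.5:.1f})")
print(f"after 9-pixel average ~ {diff.reshape(-1, 9).mean(axis=1).std():.1f} HU")
```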
ContributorsPark, Kyung Kook (Author) / Akay, Metin (Thesis advisor) / Pavlicek, William (Committee member) / Akay, Yasemin (Committee member) / Towe, Bruce (Committee member) / Muthuswamy, Jitendran (Committee member) / Arizona State University (Publisher)
Created2011
Description
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection; the subset can then be summarized into rule-based classifiers. Experiments show that classifiers after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear or Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods on the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time ordering of the data to extract features and generates an effective and efficient classifier, referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve this bias problem: one uses an out-of-bag sampling method, called OOBForest, and the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show that the existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages.
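A simplified sketch of the interval-feature idea behind a time series forest, not the dissertation's exact algorithm: summarize random intervals of each series by mean, standard deviation, and slope, then feed those interpretable features to an off-the-shelf tree ensemble (scikit-learn here). The toy data and interval count are arbitrary assumptions.
```python
# Interval features (mean, std, slope) for time series classification (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_intervals(length, n_intervals, rng):
    starts = rng.integers(0, length - 3, size=n_intervals)
    ends = np.array([rng.integers(s + 3, length + 1) for s in starts])
    return list(zip(starts, ends))

def interval_features(X, intervals):
    feats = []
    for s, e in intervals:
        seg, t = X[:, s:e], np.arange(e - s)
        slope = np.polyfit(t, seg.T, 1)[0]            # per-series linear trend on the interval
        feats += [seg.mean(axis=1), seg.std(axis=1), slope]
    return np.column_stack(feats)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
y = np.repeat([0, 1], 100)
X[y == 1] += np.linspace(0, 2, 100)                   # class 1 carries a drifting trend

intervals = make_intervals(X.shape[1], 8, rng)        # same intervals for train and test
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(interval_features(X[::2], intervals), y[::2])
print("holdout accuracy:", clf.score(interval_features(X[1::2], intervals), y[1::2]))
```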
ContributorsDeng, Houtao (Author) / Runger, George C. (Thesis advisor) / Lohr, Sharon L (Committee member) / Pan, Rong (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created2011
Description
Gene manipulation techniques, such as RNA interference (RNAi), offer a powerful method for elucidating gene function and discovering novel therapeutic targets in a high-throughput fashion. In addition, RNAi is rapidly being adopted for the treatment of neurological disorders, such as Alzheimer's disease (AD) and Parkinson's disease. However, a major challenge in both of the aforementioned applications is the efficient delivery of siRNA molecules, plasmids, or transcription factors to primary cells such as neurons. A majority of current non-viral techniques, including chemical transfection, bulk electroporation, and sonoporation, fail to deliver with adequate efficiency and the required spatial and temporal control. In this study, a novel optically transparent biochip is presented that can (a) transfect populations of primary and secondary cells in 2D culture, (b) readily scale to realize high-throughput transfections using microscale electroporation, and (c) transfect targeted cells in culture with spatial and temporal control. Delivery of genetic payloads of different sizes and molecular characteristics, such as GFP plasmids and siRNA molecules, to precisely targeted locations in primary hippocampal and HeLa cell cultures is demonstrated. In addition to spatio-temporally controlled transfection, the biochip also allowed simultaneous assessment of (a) the electrical activity of neurons, (b) specific proteins using fluorescent immunohistochemistry, and (c) sub-cellular structures. Functional silencing of GAPDH in HeLa cells using siRNA demonstrated a 52% reduction in GAPDH levels. In situ assessment of actin filaments after electroporation indicated a sustained disruption of actin filaments in electroporated cells for up to two hours. Assessment of neural spike activity pre- and post-electroporation indicated a varying response to electroporation. The microarray-based nature of the biochip enables multiple independent experiments on the same culture, thereby decreasing culture-to-culture variability, increasing experimental throughput, and allowing cell-cell interaction studies. Further development of this technology will provide a cost-effective platform for performing high-throughput genetic screens.
ContributorsPatel, Chetan (Author) / Muthuswamy, Jitendran (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Jain, Tilak (Committee member) / Caplan, Michael (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created2012
Description
Ionizing radiation used in patient diagnosis or therapy has short- and long-term negative effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information about the organ dose received is available to the patient or to quality assurance. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we developed a mathematical model that determines a set of geometry settings for the equipment and an energy level during a patient exam. The goal is to minimize the absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed-integer program. We performed polyhedral analysis and derived several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize angle and table location while minimizing dose in the critical organs subject to image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motion within a radiology procedure. We minimize the absorbed dose for the critical organs across all input data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
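A toy sketch, not the dissertation's large-scale model, of the core trade-off: choose one (gantry angle, tube energy) setting that minimizes dose to a critical organ subject to a minimum image-quality score. It is written with PuLP, and every number, including the quality threshold, is an invented placeholder.
```python
# Select one acquisition setting minimizing organ dose under an image-quality floor.
import pulp

settings = {            # (angle_deg, kVp): (organ_dose_mGy, image_quality_score) -- made up
    (0, 70):  (2.0, 0.95), (0, 90):  (3.1, 0.99),
    (30, 70): (1.2, 0.88), (30, 90): (1.9, 0.94),
    (60, 70): (0.8, 0.78), (60, 90): (1.4, 0.92),
}
min_quality = 0.90

prob = pulp.LpProblem("dose_minimization", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"x_{s[0]}_{s[1]}", cat="Binary") for s in settings}

prob += pulp.lpSum(settings[s][0] * x[s] for s in settings)        # total organ dose
prob += pulp.lpSum(x.values()) == 1                                # exactly one setting chosen
prob += pulp.lpSum(settings[s][1] * x[s] for s in settings) >= min_quality

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("chosen (angle, kVp):", [s for s in settings if x[s].value() == 1])
```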
ContributorsKhodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created2013
Description
In this dissertation, an innovative framework for designing a multi-product integrated supply chain network is proposed. Multiple products are shipped from production facilities to retailers through a network of distribution centers (DCs). Each retailer has an independent, random demand for multiple products. The particular problem considered in this study also involves mixed-product transshipments between DCs, with multiple truck-size selection and routed delivery to retailers. Optimally solving such an integrated problem is in general not easy due to its combinatorial nature, especially when transshipments and routing are involved. To find a good solution efficiently, a two-phase solution methodology is derived: Phase I solves an integer programming model that includes all the constraints of the original model except that the routings are simplified to direct shipments using estimated routing cost parameters; the Phase II model then solves the lower-level inventory routing problem for each opened DC and its assigned retailers. The accuracy of the estimated routing cost and the effectiveness of the two-phase solution methodology are evaluated, and the computational performance is found to be promising. The problem can be solved heuristically within a reasonable time frame for a broad range of problem sizes (one hour for an instance with 200 retailers). In addition, a model is generated for a similar network design problem that considers direct shipment and consolidation opportunities within the same product set. A genetic algorithm and a problem-specific heuristic are designed, tested, and compared on several realistic scenarios.
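For illustration only, the sketch below captures the Phase I idea in miniature: open DCs and assign retailers using estimated direct-shipment costs, leaving routing refinement to Phase II. It uses PuLP, and all costs and the network itself are hypothetical.
```python
# Phase-I-style network design: open DCs and assign retailers (illustrative data).
import pulp

dcs = ["DC1", "DC2", "DC3"]
retailers = ["R1", "R2", "R3", "R4", "R5"]
open_cost = {"DC1": 100, "DC2": 120, "DC3": 90}
ship_cost = {("DC1", r): c for r, c in zip(retailers, [4, 6, 9, 7, 3])}   # estimated routing
ship_cost.update({("DC2", r): c for r, c in zip(retailers, [7, 3, 4, 6, 8])})
ship_cost.update({("DC3", r): c for r, c in zip(retailers, [9, 8, 3, 2, 6])})

prob = pulp.LpProblem("phase1_network_design", pulp.LpMinimize)
y = {d: pulp.LpVariable(f"open_{d}", cat="Binary") for d in dcs}
x = {(d, r): pulp.LpVariable(f"assign_{d}_{r}", cat="Binary") for d in dcs for r in retailers}

prob += (pulp.lpSum(open_cost[d] * y[d] for d in dcs)
         + pulp.lpSum(ship_cost[d, r] * x[d, r] for d in dcs for r in retailers))
for r in retailers:                         # each retailer served by exactly one DC
    prob += pulp.lpSum(x[d, r] for d in dcs) == 1
for d in dcs:
    for r in retailers:                     # only open DCs may serve retailers
        prob += x[d, r] <= y[d]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("open DCs:", [d for d in dcs if y[d].value() == 1])
print("assignments:", {r: d for (d, r) in x if x[d, r].value() == 1})
```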
ContributorsXia, Mingjun (Author) / Askin, Ronald (Thesis advisor) / Mirchandani, Pitu (Committee member) / Zhang, Muhong (Committee member) / Kierstead, Henry (Committee member) / Arizona State University (Publisher)
Created2013
Description
Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis, the buildup of plaque on the inside of the coronary artery wall, is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detecting plaque vulnerability. Due to the prognostic importance of early-stage identification, non-invasive plaque characterization is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual-energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies to determine the energy dependence of tissue attenuation values for each voxel. They generate virtual monochromatic energy images as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic, since the overlap between the CT attenuation measured in plaque components and in contrast material shows that a single mean density might not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection, and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity in separating calcium from contrast material was resolved by using virtual non-contrast images. The additional attenuation data provided by DECT offer valuable information for separating lipid from fibrous plaque, since their attenuation changes differently as the energy level changes. My second study proposed these data as the input to supervised learners for a more precise classification of lipid and fibrous plaques. My last study aimed at automatic segmentation of the coronary arteries, characterizing plaque components and lumen on contrast-enhanced monochromatic X-ray images. This required extracting features from regions of interest; the study proposed feature extraction strategies and the selection of important features. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT.
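As a rough illustration (not the dissertation's pipeline), the sketch below trains a supervised learner on dual-energy attenuation features: the attenuation at two energy levels and their difference. The synthetic HU values only mimic the qualitative point that the two tissues respond differently to energy, and scikit-learn is used for convenience.
```python
# Supervised lipid vs. fibrous plaque classification on synthetic dual-energy features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# hypothetical attenuation (HU) at low/high energy; values are illustrative only
lipid_low, lipid_high = rng.normal(30, 15, n), rng.normal(45, 15, n)
fib_low, fib_high = rng.normal(90, 15, n), rng.normal(75, 15, n)

X = np.vstack([np.column_stack([lipid_low, lipid_high, lipid_low - lipid_high]),
               np.column_stack([fib_low, fib_high, fib_low - fib_high])])
y = np.repeat([0, 1], n)                      # 0 = lipid, 1 = fibrous

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```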
ContributorsYamak, Didem (Author) / Akay, Metin (Thesis advisor) / Muthuswamy, Jit (Committee member) / Akay, Yasemin (Committee member) / Pavlicek, William (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created2013