Matching Items (12)

Optimizing Recombinant Protein Production for Domain Antibodies: Proof-of-Concept

Description

Recent studies in traumatic brain injury (TBI) have identified a temporal window during which nanometer-scale therapeutics can cross the blood-brain barrier and enter the parenchyma. Developing protein-based therapeutics is attractive for a number of reasons, yet the production pipeline for high-yield, consistently bioactive recombinant proteins remains a major obstacle. Previous work on recombinant protein production has relied on gram-negative hosts such as Escherichia coli (E. coli) because of their well-established genetics and fast growth. However, gram-negative hosts require cell lysis, which calls for additional optimization and introduces endotoxins and proteases that contribute to protein degradation. This project directly addressed this issue by evaluating a gram-positive host, Brevibacillus choshinensis (Brevi), which does not require lysis because proteins are expressed directly into the supernatant. This host was used to produce variants of the Stock 11 (S11) protein as a proof of concept for the methodology. S11 variants were synthesized using different restriction enzymes, altering the location of protein tags that may affect production or purification. Factors such as incubation time, incubation temperature, and media were optimized for each S11 variant using a robust design of experiments, and all variants were grown under the optimized parameters prior to purification via affinity chromatography. The results demonstrated the feasibility of Brevi as a host for domain antibody production in the Stabenfeldt lab. Future work will focus on troubleshooting the purification process to further optimize the protein production pipeline.
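
As a small illustration of the design-of-experiments step described above, the sketch below lays out a 2^3 full factorial over incubation time, incubation temperature, and media. The factor levels and media labels are assumed for illustration only and are not the settings used in the project.

```python
# Illustrative 2^3 full-factorial layout for expression optimization
# (incubation time, temperature, media). All factor levels and labels are
# hypothetical examples, not the settings used in the thesis.
import itertools
import pandas as pd

factors = {
    "time_h": (24, 48),        # incubation time, hours (assumed levels)
    "temp_C": (30, 37),        # incubation temperature, deg C (assumed levels)
    "media":  ("TM", "2SY"),   # two candidate media (assumed labels)
}

design = pd.DataFrame(
    list(itertools.product(*factors.values())), columns=list(factors)
)
print(design)  # 8 runs; measured yields for each run would then be fit with OLS
```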

Date Created
  • 2019-05

Statistical Analysis of Power Differences between Experimental Design Software Packages

Description

Based on findings of previous studies, there was speculation that two well-known experimental design software packages, JMP and Design Expert, produced different power outputs given the same design and user inputs. For context and scope, another popular experimental design software package, Minitab® Statistical Software version 17, was added to the comparison. The study compared six test cases run on the three software packages, focusing on 2^k and 3^k factorial designs and varying the standard-deviation effect size, the number of categorical factors, the number of levels, the number of factors, and the number of replicates. All six cases were run on all three programs, and runs were attempted with one, two, and three replicates each. There was an issue at the one-replicate stage, however: Minitab does not allow full factorial designs with only one replicate, and Design Expert will not provide power outputs for a single replicate unless there are three or more factors. From the analysis of these results, it was concluded that the differences between JMP 13 and Design Expert 10 were well within the margin of error and likely caused by rounding. The differences between JMP 13, Design Expert 10, and Minitab 17, on the other hand, indicated a fundamental difference in the way Minitab performs the power calculation compared to the latest versions of JMP and Design Expert. This difference is most likely due to Minitab defaulting to dummy-variable coding rather than the orthogonal coding used by default in the other two packages. Although dummy-variable and orthogonal coding of factorial designs do not produce different fitted results, the coding method does affect the power calculations. All three programs can be adjusted to use either coding method, but the exact instructions are difficult to find; a follow-up guide on changing the coding for factorial variables would address this issue.
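
For reference, the sketch below shows a generic noncentral-t power calculation for a single main-effect contrast in a replicated 2^k full factorial, with the effect expressed as a multiple of the error standard deviation. It is a textbook-style approximation under the orthogonal-coding convention, not a reproduction of the internal algorithms of JMP, Design Expert, or Minitab.

```python
# Generic power calculation for one main effect in a replicated 2^k full
# factorial (main-effects model). Illustrative only; not the internals of
# JMP, Design Expert, or Minitab.
from scipy import stats

def factorial_power(k, replicates, effect_sd_units=2.0, alpha=0.05):
    """Power to detect a two-level main effect whose high/low mean
    difference equals effect_sd_units * sigma."""
    n_runs = replicates * 2 ** k
    model_terms = 1 + k                 # intercept + k main effects
    df_error = n_runs - model_terms
    if df_error <= 0:
        return float("nan")             # saturated model: no power reported
    ncp = effect_sd_units * n_runs ** 0.5 / 2   # delta / SE of the contrast
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    return (1 - stats.nct.cdf(t_crit, df_error, ncp)
            + stats.nct.cdf(-t_crit, df_error, ncp))

for r in (1, 2, 3):
    print(f"k=3, replicates={r}: power = {factorial_power(k=3, replicates=r):.3f}")
```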

Date Created
  • 2017-05

Modelling Megacities: An Approach to Modelling Dense Urban Area

Description

In 2010, for the first time in human history, more than half of the world's total population lived in cities; this share is expected to increase to 60% or more by 2050. The goal of this research effort is to create a comprehensive model and modelling framework for megacities, middleweight cities, and urban agglomerations, collectively referred to as dense urban areas. The motivation for this project comes from the United States Army's desire for readiness in all operating environments, including dense urban areas. Although there is valuable insight in research supporting Army operational behaviors, megacities are of interest to nearly every societal sector imaginable. Design of Experiments is a novel application for determining both main effects and interaction effects between factors within a dense urban area, providing insight into factor causation. Regression modelling can also be employed in the analysis of dense urban areas, providing wide-ranging insight into correlations between factors and their interactions. Past studies of megacities concern themselves with general trends in how cities operate. This study is unique in its effort to model a single megacity in order to enable decision support for military operational planning, as well as potential decision support to city planners seeking to increase the sustainability of dense urban areas and megacities.
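
To make the regression-modelling idea concrete, here is a minimal sketch of fitting main effects plus a two-factor interaction with statsmodels. The factor names and every data value are hypothetical placeholders rather than quantities from the study.

```python
# Minimal sketch of regression with an interaction term, the kind of model
# described for relating dense-urban-area factors. All data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "population_density": [5, 5, 20, 20, 5, 5, 20, 20],   # placeholder factor
    "infrastructure":     [1, 3, 1, 3, 1, 3, 1, 3],       # placeholder factor
    "congestion":         [2.1, 1.4, 6.8, 3.9, 2.0, 1.5, 7.1, 4.2],  # placeholder response
})

# '*' expands to both main effects and the two-factor interaction
model = smf.ols("congestion ~ population_density * infrastructure", data=df).fit()
print(model.params)
```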

Date Created
  • 2016-05

Applying Industrial Engineering to Optimize Swim Stroke Economy

Description

The U.S. Navy and other amphibious military organizations utilize a derivation of the traditional side stroke called the Combat Side Stroke, or CSS, and tout it as the most efficient technique available. Citing its low aerobic requirements and slow yet powerful movements as superior to the traditionally favored front crawl (freestyle), the CSS is the go-to stroke for any operation in the water. The purpose of this thesis is to apply principles of Industrial Engineering to a real-world situation not typically approached from an optimization perspective. I will analyze pre-existing data on various swim strokes in order to compare their efficiency across several variables: calories burned, speed, and strokes per unit distance, as well as their interactions. Calories will be measured with heart rate monitors, converting beats per minute to calories burned; speed will be measured by stopwatch and observer; strokes per unit distance will be measured by observer. The strokes to be analyzed are the breast stroke, crawl stroke, butterfly, and combat side stroke. The goal is to informally test the U.S. Navy's claim that the combat side stroke is the optimal stroke for conserving energy while covering distance. Because of limitations in the scope of the project, the analysis uses data collected from literature sources rather than from experimentation. The main method of analysis will be linear programming, followed by hypothesis testing, culminating in a design of experiments to test these findings in a future practical study.
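
A minimal sketch of the linear-programming framing is shown below: choose minutes spent on each stroke to maximize distance covered under calorie and time budgets. All coefficients are made-up placeholders, not measurements from the literature sources used in the thesis.

```python
# Hypothetical linear-programming sketch of the stroke-comparison idea.
# All speeds, energy costs, and budgets below are placeholder values.
from scipy.optimize import linprog

strokes = ["breast", "crawl", "butterfly", "combat_side"]
meters_per_min = [35.0, 55.0, 45.0, 40.0]      # assumed speeds
calories_per_min = [9.0, 11.0, 13.0, 7.0]      # assumed energy costs

c = [-m for m in meters_per_min]               # maximize distance -> minimize its negative
A_ub = [calories_per_min, [1, 1, 1, 1]]        # calorie budget, total-time budget
b_ub = [300, 30]                               # <= 300 kcal, <= 30 minutes

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print(dict(zip(strokes, res.x.round(1))), "meters covered:", round(-res.fun, 1))
```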

Date Created
  • 2014-12

Analysis Methods for No-Confounding Screening Designs

Description

Nonregular designs are a preferable alternative to regular resolution IV designs because they avoid completely confounding two-factor interactions with one another. As a result, nonregular designs can estimate and identify a few active two-factor interactions. However, due to the sometimes complex alias structure of nonregular designs, standard screening strategies can fail to identify all active effects. In this research, two-level nonregular screening designs with orthogonal main effects will be discussed. By utilizing knowledge of the alias structure, a design-based model selection process for analyzing nonregular designs is proposed.

The Aliased Informed Model Selection (AIMS) strategy is a design-specific approach that is compared to three generic model selection methods: stepwise regression, the least absolute shrinkage and selection operator (LASSO), and the Dantzig selector. The AIMS approach substantially increases the power to detect active main effects and two-factor interactions relative to these generic methodologies. This research identifies design-specific model spaces: sets of models that have strong heredity, are all estimable, and exhibit no model confounding. These spaces are then used in the AIMS method, along with design-specific aliasing rules, to make model selection decisions. Model spaces and alias rules are identified for three designs: the 16-run no-confounding 6-, 7-, and 8-factor designs. The designs are demonstrated with several examples as well as simulations that show the superiority of AIMS in model selection.
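
As context for the generic comparators, the sketch below applies one of them (the LASSO) to a two-level screening design expanded to main effects plus two-factor interactions. The design matrix and active effects are simulated placeholders; this is neither one of the 16-run no-confounding designs nor the AIMS procedure itself.

```python
# Sketch of one generic comparator (the LASSO) on a two-level screening
# design expanded to main effects plus two-factor interactions. The design
# and the active effects are simulated placeholders.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_runs, k = 16, 6
X_me = rng.choice([-1.0, 1.0], size=(n_runs, k))           # placeholder design
X_2fi = np.column_stack([X_me[:, i] * X_me[:, j]
                         for i, j in combinations(range(k), 2)])
X = np.hstack([X_me, X_2fi])                                # 6 MEs + 15 2FIs

beta = np.zeros(X.shape[1])
beta[[0, 2, k]] = [3.0, 2.0, 1.5]                           # a few active effects (assumed)
y = X @ beta + rng.normal(scale=1.0, size=n_runs)

fit = LassoCV(cv=4).fit(X, y)
print("columns flagged active:", np.flatnonzero(np.abs(fit.coef_) > 0.5))
```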

A final piece of the research provides a method for augmenting no-confounding designs based on the model spaces and maximum average D-efficiency. Several augmented designs are provided for different situations. A final simulation with the augmented designs shows strong results for adding four runs when time and resources permit.
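
The sketch below illustrates the general idea of augmentation guided by D-efficiency in a much simpler form: greedily adding four runs that maximize det(X'X) for a single assumed main-effects model. The actual method augments over model spaces and maximum average D-efficiency, and the starting design here is a random placeholder.

```python
# Greedy augmentation sketch: add runs that maximize det(X'X) for a
# main-effects model in coded (+/-1) factors. Simplified stand-in for the
# model-space-based augmentation described above; the starting design is
# a random placeholder, not a no-confounding design.
import numpy as np
from itertools import product

def d_efficiency(X):
    n, p = X.shape
    return np.linalg.det(X.T @ X) ** (1 / p) / n

def augment(X, n_new):
    candidates = np.array(list(product([-1.0, 1.0], repeat=X.shape[1])))
    for _ in range(n_new):
        dets = [np.linalg.det(np.vstack([X, row]).T @ np.vstack([X, row]))
                for row in candidates]
        X = np.vstack([X, candidates[int(np.argmax(dets))]])
    return X

rng = np.random.default_rng(0)
X0 = rng.choice([-1.0, 1.0], size=(16, 6))     # placeholder 16-run, 6-factor design
X4 = augment(X0, 4)
print("D-efficiency before/after:", d_efficiency(X0), d_efficiency(X4))
```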

Date Created
  • 2020

No-confounding designs of 20 and 24 runs for screening experiments and a design selection methodology

Description

Nonregular screening designs can be an economical alternative to traditional resolution IV 2^(k-p) fractional factorials. Recently, 16-run nonregular designs, referred to as no-confounding designs, were introduced in the literature. These designs have the property that no pair of main effect (ME) and two-factor interaction (2FI) estimates is completely confounded. In this dissertation, orthogonal arrays were evaluated with many popular design-ranking criteria in order to identify optimal 20-run and 24-run no-confounding designs. Monte Carlo simulation was used to empirically assess the model-fitting effectiveness of the recommended no-confounding designs. The simulation results demonstrated that these new designs, particularly the 24-run designs, detect active effects over 95% of the time given sufficient model effect sparsity. The final chapter presents a screening design selection methodology, based on decision trees, to aid in choosing a screening design from a list of published options. The methodology determines which of a candidate set of screening designs has the lowest expected experimental cost.
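
A stripped-down version of the Monte Carlo assessment might look like the sketch below: simulate sparse active effects on a two-level design, fit by least squares, and record how often the truly active columns are flagged. The design matrix is a random placeholder rather than one of the recommended orthogonal arrays, and the detection rule is deliberately crude.

```python
# Monte Carlo sketch of assessing model-fitting effectiveness: count how
# often the truly active columns have the largest absolute least-squares
# estimates. Placeholder design and assumed sparse truth, not the study's
# actual orthogonal arrays or analysis method.
import numpy as np

rng = np.random.default_rng(42)
n_runs, k, n_sims = 24, 7, 1000
active, effect = [0, 3], 2.0                        # assumed sparse truth

hits = 0
for _ in range(n_sims):
    X = rng.choice([-1.0, 1.0], size=(n_runs, k))   # placeholder design
    y = X[:, active] @ np.full(len(active), effect) + rng.normal(size=n_runs)
    est, *_ = np.linalg.lstsq(X, y, rcond=None)
    flagged = set(np.argsort(-np.abs(est))[: len(active)])
    hits += flagged == set(active)

print("detection rate:", hits / n_sims)
```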

Date Created
  • 2013

An Investigative Study on Effects of Geometry, Relative Humidity, and Temperature on Fluid Flow Rate in Porous Media

Description

Developing countries suffer from various health challenges due to inaccessible medical diagnostic laboratories and a lack of resources to establish new laboratories. One way to address these issues is to develop diagnostic systems suitable for low-resource settings. In addition, applications requiring rapid analyses further motivate the development of portable, easy-to-use, and accurate point-of-care (POC) diagnostics. Lateral flow immunoassays (LFIAs) are among the most successful POC tests, as they satisfy most of the ASSURED criteria. However, factors such as reagent stability and reaction rates limit the performance and robustness of LFIAs. The fluid flow rate in an LFIA significantly affects these factors, and it is therefore desirable to maintain an optimal fluid velocity in the porous media.

The main objective of this study is to build a statistical model that enables determination of the optimal design parameters and ambient conditions for achieving a desired fluid velocity in porous media. The study focuses mainly on the effects of relative humidity and temperature on evaporation in porous media and on the impact of geometry on fluid velocity in LFIAs. A set of finite element analyses was performed, and the simulation results were then verified experimentally using Whatman filter paper with different geometries under varying ambient conditions. A design of experiments was conducted to estimate the significant factors affecting the fluid flow rate.
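
A small sketch of the factorial effect estimation involved is shown below, using contrasts on a coded 2^3 design in relative humidity, temperature, and strip geometry. The coded runs and flow-rate values are hypothetical placeholders, not the thesis measurements.

```python
# Estimating factorial main effects by contrasts for three factors
# (relative humidity, temperature, strip width). The coded design and the
# flow-rate responses are hypothetical placeholders.
import numpy as np

# columns: RH, temperature, strip width (coded -1 / +1); 2^3 full factorial
X = np.array([[-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
              [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1]], float)
flow_rate = np.array([2.1, 2.9, 1.8, 2.5, 3.0, 3.8, 2.6, 3.4])  # mm/s, made up

# main effect = mean response at the high level minus mean at the low level
for name, col in zip(["RH", "temperature", "width"], X.T):
    effect = flow_rate[col > 0].mean() - flow_rate[col < 0].mean()
    print(f"{name}: {effect:+.2f} mm/s")
```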

The literature suggests that liquid evaporation is one of the major factors inhibiting fluid penetration and capillary flow in lateral flow immunoassays. The results obtained align closely with the existing literature and show that a desired fluid flow rate can be achieved by tuning the geometry of the porous media. The derived statistical model suggests that a dry, warm atmosphere is expected to inhibit the fluid flow rate the most, and a humid, cool atmosphere the least.

Date Created
  • 2019

Analyzing Controllable Factors Influencing Cycle Time Distribution in Semiconductor Industries

Description

Semiconductor manufacturing is one of the most complex manufacturing systems in existence today. Because the semiconductor industry is extremely consumer driven, market demands change rapidly, so it is crucial for these industries to predict cycle time accurately in order to quote reliable delivery dates. Discrete Event Simulation (DES) models are often used to model these complex manufacturing systems and generate estimates of the cycle time distribution. However, building and executing such models consumes considerable time and resources. The objective of this research is to determine the influence of input parameters on the cycle time distribution of a semiconductor or high-volume electronics manufacturing system. This will help decision makers implement system changes that improve the predictability of their cycle time distribution without having to run simulation models. To understand how input parameters impact cycle time, a Design of Experiments (DOE) is performed. The response variables considered are attributes of the cycle time distribution, namely its four moments and selected percentiles. The input to this DOE is the output from the simulation runs. Main effects and two-way and three-way interactions of the input variables are analyzed. The implications of these results for real-world scenarios are explained, helping manufacturers understand the effects of interactions between input factors on estimates of the cycle time distribution. The shape of the cycle time distribution differs across system types, and DES requires substantial resources and time to run; therefore, in an effort to generalize the results obtained in the semiconductor manufacturing analysis, a non-complex system is also considered.
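
The response variables described above can be computed directly from a cycle-time sample, as in the sketch below; the lognormal sample stands in for DES output and its parameters are arbitrary placeholders.

```python
# Sketch of the DOE response variables described above: the four moments
# and selected percentiles of a cycle-time sample. A lognormal draw stands
# in for DES output; its parameters are arbitrary placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
cycle_times = rng.lognormal(mean=3.0, sigma=0.4, size=5000)  # hypothetical hours

responses = {
    "mean": np.mean(cycle_times),
    "variance": np.var(cycle_times, ddof=1),
    "skewness": stats.skew(cycle_times),
    "excess_kurtosis": stats.kurtosis(cycle_times),
    "p50": np.percentile(cycle_times, 50),
    "p95": np.percentile(cycle_times, 95),
}
print({k: round(v, 2) for k, v in responses.items()})
```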

Date Created
  • 2017

Optimal design of experiments for functional responses

Description

Functional or dynamic responses are prevalent in experiments in the fields of engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses yield multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs.

First, we demonstrated how traditional exchange methods could be improved to produce a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of candidate points for the point exchange algorithm (PEA) and then improves G-efficiency using a coordinate exchange algorithm (CEA).

The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix is improved by exploiting the sparseness of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm with respect to both computational efficiency and D-efficiency.
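
The generic identity behind this kind of fast update is the matrix determinant lemma: adding a single run x to a design changes the determinant of the information matrix by det(M + x x^T) = det(M) * (1 + x^T M^{-1} x), so the determinant does not need to be recomputed from scratch. The sketch below verifies the identity numerically; it shows the basic idea only, not the sparse fMRI-specific formula developed in the dissertation.

```python
# Rank-one determinant update (matrix determinant lemma), the generic
# identity behind fast exchange-algorithm updates. Placeholder data.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
M = X.T @ X                      # information matrix for a linear model
x = rng.normal(size=5)           # model expansion of one candidate run

direct = np.linalg.det(M + np.outer(x, x))
updated = np.linalg.det(M) * (1 + x @ np.linalg.inv(M) @ x)
print(np.isclose(direct, updated))   # True: the update matches the direct determinant
```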

The third contribution is a study of optimal experimental designs for more general functional response models. First, the B-spline system is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of a spectrum variable. Second, a two-step algorithm is proposed for finding the optimal design for both sampling points and experimental settings. In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix of a mixed-effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed with this approach yield superior performance over other designs found in the literature.

Date Created
  • 2015

Analysis of no-confounding designs using the Dantzig selector

Description

No-confounding (NC) designs in 16 runs for 6, 7, and 8 factors are non-regular fractional factorial designs that have been suggested as attractive alternatives to the regular minimum aberration resolution IV designs because they do not completely confound any two-factor interactions with each other. These designs allow potential estimation of main effects and a few two-factor interactions without the need for follow-up experimentation. Analysis methods for non-regular designs are an area of ongoing research, because standard variable selection techniques such as stepwise regression may not always be the best approach. The current work investigates the use of the Dantzig selector for analyzing no-confounding designs. Through a series of examples, it shows that this technique is very effective for identifying the set of active factors in no-confounding designs when there are three or four active main effects and up to two active two-factor interactions.
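
For reference, the Dantzig selector can be posed as a linear program: minimize the L1 norm of the coefficients subject to a bound delta on the maximum absolute correlation between the residuals and the columns of the model matrix. The sketch below is a generic formulation with scipy; the solver, the design matrix, and the value of delta are illustrative assumptions, not those used in the thesis.

```python
# Dantzig selector as a linear program (Candes & Tao formulation):
#   minimize ||beta||_1  subject to  ||X^T (y - X beta)||_inf <= delta.
# Generic sketch; the data, delta, and solver are placeholders.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, delta):
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    c = np.r_[np.zeros(p), np.ones(p)]           # minimize sum of u, where u >= |beta|
    I = np.eye(p)
    A_ub = np.block([[ I, -I],                   #  beta - u <= 0
                     [-I, -I],                   # -beta - u <= 0
                     [ XtX, np.zeros((p, p))],   #  X'X beta <= X'y + delta
                     [-XtX, np.zeros((p, p))]])  # -X'X beta <= delta - X'y
    b_ub = np.r_[np.zeros(2 * p), Xty + delta, delta - Xty]
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# tiny usage example with simulated screening data (placeholder values)
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(16, 6))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(size=16)
print(np.round(dantzig_selector(X, y, delta=8.0), 2))
```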

To evaluate the performance of the Dantzig selector, a simulation study was conducted, and the results were analyzed in terms of the percentage of type II errors. In addition, another alternative six-factor NC design, called the Alternate No-confounding design in six factors, is introduced in this study. The performance of this Alternate NC design is then evaluated using the Dantzig selector as the analysis method. Finally, a section is dedicated to comparing the performance of the NC-6 and Alternate NC-6 designs.

Date Created
  • 2014