Matching Items (449)

Description
Many longitudinal studies, especially clinical trials, suffer from missing data. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible in many cases. For example, suppose an investigator is examining the effect of a treatment on depression. Subjects are scheduled to see doctors on a regular basis and are asked questions about recent emotional states. Patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that visit. Data that are not missing at random may produce biased results if the missingness mechanism is not taken into account; in other words, the missingness mechanism is related to the unobserved responses. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used in longitudinal data analysis because they do not require explicit specification of the missingness mechanism: the data are stratified according to the observed missing-data patterns and a model is specified for each stratum. However, this usually leads to under-identifiability, because many stratum-specific parameters must be estimated even though interest usually centers on the marginal parameters. Pattern-mixture models also have the drawback of usually requiring a large sample. In this thesis, two studies are presented. The first study is motivated by an open problem arising from pattern-mixture models. Simulation studies in this part show that the information in the missing-data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing-data patterns may be accounted for by a simple latent factor. The simulation findings from the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of the missing values and the longitudinal outcomes. The proposed CLFM is feasible even for small-sample applications. Detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance is studied and evaluated through designed simulations and three applications. The simulation and application settings range from a correctly specified missing-data mechanism to a misspecified one and include sample sizes typical of longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing-data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has complete data and is used for a robustness analysis. The CLFM is shown to provide more precise estimates, particularly of intercept- and slope-related parameters, than Roy's latent class model and the classical linear mixed model. This advantage is most pronounced for small samples, where Roy's model experiences convergence difficulties in estimation. The proposed CLFM is also robust when missing data are ignorable, as demonstrated in the Growth of Language and Early Literacy Skills in Preschoolers study.
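To make the non-ignorable mechanism described above concrete, the following is a minimal simulation sketch in the spirit of a continuous latent factor model: a single subject-level factor enters both the outcome trajectory and the probability of a missed visit, so the missingness depends on unobserved structure. The parameter values, the logistic link, and the normal errors are illustrative assumptions, not the specification estimated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clfm_style(n_subjects=50, times=np.arange(5)):
    """Simulate longitudinal data where one continuous latent factor drives
    both the outcomes and the missingness indicators (hypothetical values;
    not the thesis's exact specification)."""
    beta0, beta1 = 10.0, -0.8          # fixed intercept and slope
    lam_y, lam_r = 1.5, 1.2            # latent-factor loadings (outcome, missingness)
    gamma0 = -1.0                      # baseline log-odds of a missed visit
    u = rng.normal(size=n_subjects)    # continuous latent factor, one per subject

    y = beta0 + beta1 * times[None, :] + lam_y * u[:, None] \
        + rng.normal(scale=1.0, size=(n_subjects, times.size))

    # The probability of a missed visit increases with the same latent factor,
    # so the missingness is non-ignorable (it depends on unobserved structure).
    p_miss = 1.0 / (1.0 + np.exp(-(gamma0 + lam_r * u[:, None])))
    r = rng.uniform(size=y.shape) < p_miss
    y_obs = np.where(r, np.nan, y)     # NaN marks a missed visit
    return y_obs, r, u

y_obs, r, u = simulate_clfm_style()
print(f"overall missingness rate: {r.mean():.2f}")
```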
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
It is common in data analysis to provide a goodness-of-fit test to assess the performance of a model. In the analysis of contingency tables, goodness-of-fit statistics are frequently employed when modeling social science, educational, or psychological data, where interest is often directed at investigating the association among multi-categorical variables. Pearson's chi-squared statistic is well known in goodness-of-fit testing, but it is sometimes considered an omnibus test because it gives little guidance about the source of poor fit once the null hypothesis is rejected. Its components, however, can provide powerful directional tests. In this dissertation, orthogonal components are used to develop goodness-of-fit tests for models fit to the counts obtained from the cross-classification of multi-category dependent variables. Ordinal categories are assumed. Orthogonal components defined on the marginals are obtained for multi-dimensional contingency tables through the QR decomposition. A subset of these orthogonal components can be used to construct limited-information tests that identify the source of lack of fit and provide an increase in power compared with Pearson's test. These tests also address the adverse effects that arise when data are sparse. The tests rely on the set of first- and second-order marginals jointly, the set of second-order marginals only, and the random forest method, a popular algorithm for modeling large, complex data sets. The performance of these tests is compared with the likelihood ratio test as well as with tests based on orthogonal polynomial components. The derived goodness-of-fit tests are evaluated in studies for detecting two- and three-way associations that are not accounted for by a categorical-variable factor model with a single latent variable. In addition, the tests are used to investigate the case in which the model misspecification involves parameter constraints for large and sparse contingency tables. The methodology proposed here is applied to data from the 38th round of the State Survey conducted by the Institute for Public Policy and Social Research, Michigan State University (2005). The results illustrate the use of the proposed techniques in the context of a sparse data set.
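The following is a minimal numerical sketch of how orthogonal components of Pearson's statistic can be obtained from a QR decomposition for a small table of three binary variables, with the components grouped by marginal order so that a limited-information statistic is simply a partial sum of squares. The toy cell probabilities, the 0/1 variable coding, and the "fitted" model are assumptions for illustration; the dissertation's treatment of estimated parameters and reference distributions is not reproduced here.

```python
import numpy as np
from itertools import combinations, product

def pearson_components(counts, pi):
    """Split Pearson's X^2 for a 2**q table into orthogonal components
    ordered by marginal order, via a QR decomposition (illustrative sketch)."""
    q = int(np.log2(counts.size))
    cells = np.array(list(product([0, 1], repeat=q)))      # C x q matrix of cell indices
    # Saturated design matrix: constant, main effects, then interactions,
    # so the columns are grouped by marginal order.
    cols, orders = [np.ones(counts.size)], [0]
    for k in range(1, q + 1):
        for idx in combinations(range(q), k):
            cols.append(cells[:, list(idx)].prod(axis=1))
            orders.append(k)
    M = np.column_stack(cols).astype(float)

    N = counts.sum()
    f = (counts - N * pi) / np.sqrt(N * pi)                 # standardized cell residuals
    Q, _ = np.linalg.qr(np.sqrt(pi)[:, None] * M)           # weighted QR; column 0 ~ sqrt(pi)
    z = Q[:, 1:].T @ f                                      # orthogonal components
    return z, np.array(orders[1:]), float(f @ f)            # components, their order, full X^2

# toy example: three binary variables with hypothetical model probabilities
rng = np.random.default_rng(1)
pi = np.full(8, 1 / 8)                                      # assumed fitted cell probabilities
counts = rng.multinomial(200, np.array([.14, .12, .13, .11, .12, .13, .12, .13]))
z, orders, X2 = pearson_components(counts, pi)
print("full X^2:", round(X2, 3))
print("limited-information statistic (orders 1-2):", round((z[orders <= 2] ** 2).sum(), 3))
```

Because the components are orthogonal and orthogonal to the constant direction, their squares sum to the full Pearson statistic, and keeping only the first- and second-order components yields a limited-information test statistic.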
Contributors: Milovanovic, Jelena (Author) / Young, Dennis (Thesis advisor) / Reiser, Mark R. (Thesis advisor) / Wilson, Jeffrey (Committee member) / Eubank, Randall (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Designing a hazard intelligence platform enables public agencies to organize diversity and manage complexity in collaborative partnerships. To maintain the integrity of the platform while preserving its prosocial ethos, understanding the dynamics of “non-regulatory supplements” to central governance is crucial. Conceptually, social responsiveness is shaped by communicative actions in which coordination is attained through negotiated agreements by way of the evaluation of validity claims. The dynamic processes involve information processing and knowledge sharing. Access to and use of collaborative intelligence can be examined through the notions of traceability and intelligence cohort. Empirical evidence indicates that social traceability is statistically significant and positively associated with improvement in collaborative performance. Moreover, social traceability contributes positively to the efficacy of technical traceability, but not vice versa. Furthermore, technical traceability contributes significantly to both moderate and high performance improvement, while social traceability is significant only for moderate performance improvement; the social effect is therefore limited and contingent. The results further suggest several strategic considerations. Social significance: social traceability is the fundamental consideration for high cohort performance. Cocktail therapy: high cohort performance involves an integrative strategy combining high social traceability with high technical traceability. Servant leadership: public agencies should exercise limited authority and play a supporting role in providing appropriate technical traceability, while actively promoting social traceability in the system.
Contributors: Wang, Chao-shih (Author) / Van Fleet, David (Thesis advisor) / Grebitus, Carola (Committee member) / Wilson, Jeffrey (Committee member) / Shultz, Clifford (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The Pearson and likelihood ratio statistics are well known in goodness-of-fit testing and are commonly used for models applied to multinomial count data. When data come from a table formed by the cross-classification of a large number of variables, these goodness-of-fit statistics may have lower power and an inaccurate Type I error rate due to sparseness. Pearson's statistic can be decomposed into orthogonal components associated with the marginal distributions of the observed variables, and an omnibus fit statistic can be obtained as a sum of these components. When the statistic is a sum of components for lower-order marginals, it performs well in terms of Type I error rate and statistical power even when applied to a sparse table. In this dissertation, goodness-of-fit statistics using orthogonal components based on second-, third-, and fourth-order marginals were examined. If lack of fit is present in the higher-order marginals, then a test that incorporates the higher-order marginals may have higher power than a test that incorporates only first- and/or second-order marginals. To this end, two new statistics based on the orthogonal components of Pearson's chi-square that incorporate third- and fourth-order marginals were developed, and their Type I error, empirical power, and asymptotic power under different sparseness conditions were investigated. Individual orthogonal components as test statistics for identifying the source of lack of fit were also studied, and their performance was compared with that of other popular lack-of-fit statistics. When the number of manifest variables becomes larger than 20, most statistics based on marginal distributions have limitations in terms of computer resources and CPU time. To address this problem, for 20 or more manifest variables, the performance of a bootstrap-based method for obtaining p-values for the Pearson-Fisher statistic under a confirmatory dichotomous-variable factor analysis model, and the performance of the Tollenaar and Mooijaart (2003) statistic, were investigated.
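As a companion to the bootstrap idea mentioned at the end of the abstract, here is a minimal parametric-bootstrap sketch for a Pearson-type statistic on a sparse multinomial table: replicate tables are drawn from fixed fitted cell probabilities and the observed statistic is compared with the bootstrap distribution. The dissertation's Pearson-Fisher procedure refits the factor model to each replicate, which this toy version omits, and the 16-cell table with uniform probabilities is purely hypothetical.

```python
import numpy as np

def bootstrap_pvalue(counts, pi_hat, n_boot=2000, rng=None):
    """Parametric-bootstrap p-value for Pearson's X^2 under fitted cell
    probabilities pi_hat (sketch only; refitting the model to each
    bootstrap sample, as in the Pearson-Fisher approach, is omitted)."""
    rng = np.random.default_rng(0) if rng is None else rng
    N = counts.sum()

    def x2(n, p):
        e = N * p                                  # expected counts under the model
        return float(((n - e) ** 2 / e).sum())

    observed = x2(counts, pi_hat)
    # Draw tables from the fitted model and recompute the statistic each time.
    boot = np.array([x2(rng.multinomial(N, pi_hat), pi_hat) for _ in range(n_boot)])
    return observed, (boot >= observed).mean()

# toy usage with hypothetical fitted probabilities for a sparse 16-cell table
rng = np.random.default_rng(2)
pi_hat = np.full(16, 1 / 16)
counts = rng.multinomial(40, pi_hat)               # sparse: many expected counts are small
stat, p = bootstrap_pvalue(counts, pi_hat, rng=rng)
print(f"X^2 = {stat:.2f}, bootstrap p = {p:.3f}")
```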
Contributors: Dassanayake, Mudiyanselage Maduranga Kasun (Author) / Reiser, Mark R. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St. Louis, Robert (Committee member) / Kamarianakis, Ioannis (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The purpose of this study was to observe the effectiveness of the inhibitor phenylalanyl arginine β-naphthylamide dihydrochloride and of Tween 20 when combined with an antibiotic against Escherichia coli. As antibiotic resistance becomes increasingly prevalent, it is necessary to think beyond simply increasing the dosage of currently prescribed antibiotics. This study attempted to combat two forms of antibiotic resistance. The first is the AcrAB efflux pump, which is able to pump antibiotics out of the cell; the second is the biofilms that E. coli can form. With the inhibitor present, the pump should be unable to expel an antibiotic, while Tween can disrupt biofilm formation or dissolve an existing biofilm. The hypothesis was that combining these two chemicals with an antibiotic that the efflux pump is known to expel would allow low concentrations of each chemical to produce an effect on the bacteria equivalent to or greater than that of any one chemical at higher concentrations. To test this hypothesis, a 96-well plate BEC screen test was performed. A range of antibiotics was first used at various concentrations, with varying concentrations of both Tween and the inhibitor, to find a starting point. Erythromycin and ciprofloxacin were then chosen as the best candidates, and the optimum ranges of antibiotic, Tween, and inhibitor were established. Finally, all three chemicals were combined to observe their joint effects as opposed to their individual or paired effects. Several conclusions were drawn from this experiment. First, the inhibitor did in fact increase the effectiveness of the antibiotic, as less antibiotic was needed when the inhibitor was present. Second, Tween showed an ability to prevent recovery in the MBEC reading, indicating that it can disrupt or dissolve biofilms. However, Tween also noticeably decreased the effectiveness of the overall treatment. This negative interaction could not be compensated for by the inhibitor, and so the hypothesis was not supported: combining the three chemicals led to a less effective treatment method.
Contributors: Petrovich Flynn, Chandler James (Author) / Misra, Rajeev (Thesis director) / Bean, Heather (Committee member) / Perkins, Kim (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
An in-depth analysis of the effects of vortex generators on the boundary layer separation that occurs when an internal flow passes through a diffuser is presented. Understanding how vortex generators affect the boundary layer allows them to be used to improve the performance and efficiency of diffusers and other internal flow applications. An experiment was constructed to acquire physical data for assessing the change in diffuser performance once vortex generators were applied. The experiment consisted of pushing air through rectangular diffusers with half-angles of 10, 20, and 30 degrees. A velocity distribution model was created for each diffuser without vortex generators and then again with vortex generators applied, allowing the two results to be compared directly and the improvements to be quantified. This was done by using the velocity distribution model to find the partial mass flow rate through the outer portion of the diffuser's cross-sectional area. The analysis concluded that the vortex generators noticeably increased the performance of the diffusers, which was most evident for the 30-degree diffuser. Initially, that diffuser experienced airflow velocities near zero toward the edges, so only 0.18% of the mass flow occurred in the outer one-fourth of the cross-sectional area. With the application of vortex generators, this percentage increased to 5.7%. The 20-degree diffuser improved from 2.5% to 7.9% of the total mass flow in the outer portion, and the 10-degree diffuser improved from 11.9% to 19.2%. These results demonstrate a performance increase from the addition of vortex generators and leave room for further investigation into improving their design and configuration.
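To illustrate the outer-portion mass flow comparison described above, the sketch below integrates two assumed exit velocity profiles over a rectangular cross-section and reports the fraction of the flow carried by the outermost quarter of the area. Both profiles (a peaky, near-separated one standing in for the no-vortex-generator case and a fuller one for the vortex-generator case) and the uniform-density assumption are invented for illustration; they are not the velocity distribution models developed in the thesis.

```python
import numpy as np

def outer_flow_fraction(profile, h=1.0, outer_area_frac=0.25, n=4000):
    """Fraction of the mass flow carried by the outer portion of a rectangular
    exit plane, assuming the velocity varies only across the half-width h and
    the density is uniform (illustrative assumptions)."""
    y = np.linspace(-h, h, n)
    u = profile(y)
    dy = y[1] - y[0]
    outer = np.abs(y) > (1.0 - outer_area_frac) * h   # outermost strips = 25% of the area
    return (u[outer].sum() * dy) / (u.sum() * dy)

# hypothetical exit profiles: a peaky, near-separated profile (no vortex generators)
# and a fuller profile (with vortex generators); the exponents are made up
no_vg   = lambda y: np.clip(1 - np.abs(y) ** 0.5, 0, None) ** 6
with_vg = lambda y: 1 - np.abs(y) ** 4

print(f"outer-quarter mass flow, no VGs:   {outer_flow_fraction(no_vg):.1%}")
print(f"outer-quarter mass flow, with VGs: {outer_flow_fraction(with_vg):.1%}")
```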
Contributors: Sanchez, Zachary Daniel (Author) / Takahashi, Timothy (Thesis director) / Herrmann, Marcus (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The objective of this project was to design an electrically driven centrifugal pump for the Daedalus Astronautics @ASU hybrid rocket engine (HRE). The pump design was purposefully simplified because of time, fabrication, calculation, and capability constraints, resulting in a lower-fidelity design with the option to be improved later. The impeller, shroud, volute, shaft, motor, and electronic speed controller (ESC) were the main focuses of the pump assembly, while the seals, bearings, lubrication methods, and flow-path connections were considered elements that would require future attention. The resulting pump design is intended to be used on the Daedalus Astronautics HRE test cart for design verification. In the future, trade studies and more detailed analyses should and will be performed before this pump is integrated into the Daedalus Astronautics flight-ready HRE.
Contributors: Shillingburg, Ryan Carl (Author) / White, Daniel (Thesis director) / Brunacini, Lauren (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Human habitation of other planets requires both cost-effective transportation and low time of flight for human passengers and critical supplies. Current methods for interplanetary orbital transfers, such as the Hohmann transfer, require either expensive, fuel-intensive maneuvers or extended travel times. However, by utilizing the high velocities available from a super-geosynchronous space elevator, spacecraft released from an apex anchor could achieve interplanetary transfers with minimal delta-V and time-of-flight requirements. By using Lambert's problem and free-release propagation to determine the minimal-fuel transfer from a terrestrial space elevator to Mars under a variety of initial conditions and time-of-flight constraints, this paper demonstrates that a space elevator release can address both needs by dramatically reducing the time of flight and the fuel budget.
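For a sense of the mechanics behind this claim, the sketch below performs a back-of-the-envelope free-release calculation: a payload let go from a co-rotating apex anchor already has a large inertial velocity, and whatever exceeds local escape speed becomes hyperbolic excess speed, which can be compared with the excess speed a Hohmann transfer to Mars requires. This is a coplanar, two-body approximation with an assumed apex radius of 100,000 km; it is not the thesis's Lambert-targeting analysis over many departure geometries.

```python
import numpy as np

# Physical constants (standard values)
MU_EARTH = 3.986004418e14        # m^3/s^2
MU_SUN   = 1.32712440018e20      # m^3/s^2
OMEGA_E  = 7.2921159e-5          # Earth's sidereal rotation rate, rad/s
R_EARTH_ORBIT = 1.496e11         # m, ~1 AU
R_MARS_ORBIT  = 2.279e11         # m, ~1.52 AU

def apex_release_v_infinity(apex_radius_m):
    """Hyperbolic excess speed of a payload released from a co-rotating
    apex anchor at the given geocentric radius (two-body, coplanar sketch)."""
    v_release = OMEGA_E * apex_radius_m                 # inertial speed of the anchor tip
    v_escape  = np.sqrt(2 * MU_EARTH / apex_radius_m)   # local escape speed
    if v_release <= v_escape:
        return 0.0
    return np.sqrt(v_release**2 - v_escape**2)

def hohmann_v_infinity(r1=R_EARTH_ORBIT, r2=R_MARS_ORBIT):
    """Hyperbolic excess speed needed at Earth departure for a Hohmann transfer."""
    v_circ = np.sqrt(MU_SUN / r1)
    return v_circ * (np.sqrt(2 * r2 / (r1 + r2)) - 1.0)

apex = 100_000e3                                        # assumed apex-anchor radius, 100,000 km
print(f"v_inf from apex release:          {apex_release_v_infinity(apex)/1e3:.2f} km/s")
print(f"v_inf needed for Hohmann to Mars: {hohmann_v_infinity()/1e3:.2f} km/s")
```

Under these assumptions the release alone supplies more hyperbolic excess speed than a Hohmann departure needs, which is consistent with the abstract's claim that both the fuel budget and the time of flight can be reduced.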
Contributors: Torla, James (Author) / Peet, Matthew (Thesis director) / Swan, Peter (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to show whether a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results comparable to those obtained using a conventional chassis dynamometer. Torque curves were generated with the experimental method under a variety of circumstances and were also obtained professionally by a precision engine-testing company. Metrics were created to measure how consistently the experimental device generated torque curves and how closely those curves matched the professionally obtained ones. The results revealed that although the test device does not provide quite the same level of precision as the professional chassis dynamometer, it does create torque curves that closely resemble the chassis dynamometer curves and that exhibit trial-to-trial consistency comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy the needs of most consumers interested in measuring their vehicle's engine performance, but it probably lacks the accuracy and precision needed to appeal to professionals.
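The core processing step can be sketched as follows: with no wheel slip, the tangential acceleration measured at the wheel edge equals the vehicle's longitudinal acceleration, so tractive force, wheel torque, wheel speed, and power all follow from the acceleration trace, the wheel radius, and the vehicle mass. The synthetic acceleration trace, the 0.31 m wheel radius, and the 1400 kg mass are made-up inputs, and losses such as drag, rolling resistance, and driveline inertia that a real measurement would have to contend with are ignored, so this is only a sketch of the idea rather than the thesis's processing pipeline.

```python
import numpy as np

def wheel_power_torque(t, a_tangential, wheel_radius, vehicle_mass):
    """Estimate wheel torque and power from the tangential acceleration measured
    at the wheel edge during a full-throttle pull (no-slip, flat-road, lossless
    assumptions)."""
    # With no slip, the tangential acceleration at the tread equals the
    # vehicle's longitudinal acceleration; integrate it to get speed.
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a_tangential[1:] + a_tangential[:-1])
                                         * np.diff(t))))       # trapezoidal integration
    force  = vehicle_mass * a_tangential                        # tractive force at the contact patch
    torque = force * wheel_radius                               # N*m at the wheel
    power  = force * v                                          # W at the wheel
    rpm    = v / wheel_radius * 60.0 / (2.0 * np.pi)            # wheel speed in rev/min
    return rpm, torque, power

# synthetic example: an 8-second pull sampled at 100 Hz with a made-up acceleration trace
t = np.linspace(0.0, 8.0, 801)
a = 3.0 + 1.5 * np.sin(t / 8.0 * np.pi)                         # m/s^2, illustrative only
rpm, torque, power = wheel_power_torque(t, a, wheel_radius=0.31, vehicle_mass=1400.0)
print(f"peak wheel torque: {torque.max():.0f} N*m, peak wheel power: {power.max()/745.7:.0f} hp")
```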
Contributors: King, Michael (Author) / Ren, Yi (Thesis director) / Spanias, Andreas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This research project tests the structural properties of a 3D-printed, origami-inspired structure and compares them with those of a standard honeycomb structure. The models have equal face areas, heights, and overall volumes, but different wall thicknesses. Stress-deformation curves were developed from static loading tests, and the area under these curves was used to calculate the toughness of the structures. The curves were analyzed to determine which structure carries more load and which deforms more before fracture, and stress-strain plots were also produced. For parts printed in tough resin on a stereolithography (SLA) printer, the origami-inspired structure withstood a larger load, exhibited greater toughness, and deformed more before failure than the equivalent honeycomb structure.
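As a small worked example of the toughness calculation described above (the area under the stress-strain curve up to fracture), the sketch below applies trapezoidal integration to two made-up sets of readings. The numbers are invented solely to show the computation; they are not the measured data from the 3D-printed specimens.

```python
import numpy as np

def toughness(strain, stress):
    """Approximate toughness (energy absorbed per unit volume) as the area under
    the stress-strain curve up to fracture, via trapezoidal integration."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    return float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))

# made-up readings for two structures; values are illustrative, not measured data
strain_honeycomb = [0.00, 0.02, 0.04, 0.06, 0.08]
stress_honeycomb = [0.0, 12.0, 20.0, 24.0, 10.0]               # MPa, earlier fracture
strain_origami   = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.12]
stress_origami   = [0.0, 10.0, 18.0, 24.0, 27.0, 28.0, 15.0]   # MPa, deforms further

# MPa times dimensionless strain gives MJ/m^3
print(f"honeycomb toughness: {toughness(strain_honeycomb, stress_honeycomb):.2f} MJ/m^3")
print(f"origami toughness:   {toughness(strain_origami, stress_origami):.2f} MJ/m^3")
```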
Contributors: McGregor, Alexander (Author) / Jiang, Hanqing (Thesis director) / Kingsbury, Dallas (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05