Matching Items (325)
Description
Many longitudinal studies, especially clinical trials, suffer from missing data. Most estimation procedures assume that the missing values are ignorable, or missing at random (MAR). However, this assumption is an unrealistic simplification that is implausible in many cases. For example, suppose an investigator is examining the effect of treatment on depression. Subjects are scheduled with doctors on a regular basis and asked questions about recent emotional situations. Patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that visit. Data that are not missing at random may produce biased results if the missing-data mechanism is not taken into account; in such cases the mechanism is related to the unobserved responses. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing-data mechanism: the data are stratified according to missing-data pattern, and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though interest usually centers on the marginal parameters. Pattern-mixture models also have the drawback that a large sample is usually required. In this thesis, two studies are presented. The first study is motivated by an open problem in pattern-mixture models. Simulation studies in this part show that the information in the missing-data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing-data patterns may be accounted for by a simple latent factor.
The simulation findings from the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which models the joint distribution of missing values and longitudinal outcomes. The proposed model is feasible even for small-sample applications. Detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance is evaluated through designed simulations and three applications. The simulation and application settings range from a correctly specified missing-data mechanism to a mis-specified one, and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing-data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has complete data and is used for a robustness analysis. The CLFM is shown to provide more precise estimators, specifically for intercept- and slope-related parameters, than Roy's latent class model and the classic linear mixed model. This advantage is more pronounced for small sample sizes, where Roy's model experiences convergence difficulties. The CLFM is also robust when missing data are ignorable, as demonstrated in the Growth of Language and Early Literacy Skills in Preschoolers study.
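The mechanism this abstract describes, in which a single continuous latent factor drives both the longitudinal outcomes and the chance of a missed visit, can be illustrated with a small simulation. This is a sketch only: the sample size, visit count, and all coefficients below are assumptions for illustration, not the thesis's actual model or settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 200, 5  # subjects and scheduled visits (assumed sizes)

# A single continuous latent factor per subject drives BOTH the outcome
# trajectory and the chance of skipping a visit (non-ignorable missingness).
u = rng.normal(size=n)

time = np.arange(t)
# Longitudinal outcome: linear trend plus a latent-factor random intercept.
y = 2.0 + 0.5 * time + 1.5 * u[:, None] + rng.normal(scale=0.5, size=(n, t))

# Missingness depends on the same latent factor, so the mechanism is
# related to the unobserved responses (MNAR), not only to observed data.
p_miss = 1.0 / (1.0 + np.exp(-(-1.5 + 1.0 * u)))  # per-subject probability
observed = rng.random((n, t)) > p_miss[:, None]   # True = visit attended

# Subjects with high latent scores both score worse and attend less often.
print(observed[u > 1].mean(), observed[u < -1].mean())
```

Comparing attendance rates for subjects in the two tails of the latent factor shows the non-ignorable pattern: high-factor subjects are observed far less often, exactly the situation in which a MAR analysis would be biased.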
ContributorsZhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created2013
Description
It is common in the analysis of data to provide a goodness-of-fit test to assess the performance of a model. In the analysis of contingency tables, goodness-of-fit statistics are frequently employed when modeling social science, educational, or psychological data, where interest is often directed at investigating the association among multi-categorical variables. Pearson's chi-squared statistic is well known in goodness-of-fit testing, but it is sometimes considered an omnibus test, as it gives little guidance about the source of poor fit once the null hypothesis is rejected. Its components, however, can provide powerful directional tests. In this dissertation, orthogonal components are used to develop goodness-of-fit tests for models fit to the counts obtained from the cross-classification of multi-category dependent variables. Ordinal categories are assumed. Orthogonal components defined on the marginals are obtained for multi-dimensional contingency tables through the QR decomposition. A subset of these orthogonal components can be used to construct limited-information tests that identify the source of lack of fit and provide an increase in power compared to Pearson's test. These tests can address the adverse effects that arise when data are sparse. The tests rely on the set of first- and second-order marginals jointly, on the set of second-order marginals only, and on the random forest method, a popular algorithm for modeling large, complex data sets. The performance of these tests is compared to that of the likelihood ratio test as well as to tests based on orthogonal polynomial components. The derived goodness-of-fit tests are evaluated in studies for detecting two- and three-way associations that are not accounted for by a categorical-variable factor model with a single latent variable.
In addition, the tests are used to investigate the case where the model misspecification involves parameter constraints for large and sparse contingency tables. The methodology proposed here is applied to data from the 38th round of the State Survey conducted by the Institute for Public Policy and Social Research at Michigan State University (2005). The results illustrate the use of the proposed techniques in the context of a sparse data set.
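The central identity behind the approach described above, that Pearson's chi-squared statistic decomposes into orthogonal 1-df components obtainable via a QR decomposition, can be illustrated for a single multinomial. This is a minimal sketch: the toy counts and null probabilities are assumptions, and the dissertation's components are defined on the marginals of a multi-way table rather than on one multinomial.

```python
import numpy as np

# Observed counts for one multinomial (assumed toy data) and
# model-based cell probabilities under the null hypothesis.
counts = np.array([18, 25, 30, 17, 10.0])
p = np.array([0.15, 0.25, 0.30, 0.20, 0.10])
N = counts.sum()

# Pearson's chi-squared as the squared length of standardized residuals.
r = (counts - N * p) / np.sqrt(N * p)
X2 = (r ** 2).sum()

# QR decomposition: the first column spans sqrt(p), to which r is always
# orthogonal; the remaining orthonormal columns define the components.
M = np.column_stack([np.sqrt(p), np.eye(len(p))[:, : len(p) - 1]])
Q, _ = np.linalg.qr(M)
components = Q[:, 1:].T @ r  # k-1 directional, 1-df components

# The squared components sum exactly to the omnibus statistic.
print(np.isclose((components ** 2).sum(), X2))  # True
```

Each squared component is a directional test in its own right, which is what lets a subset of components serve as a limited-information statistic pointing at the source of lack of fit.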
ContributorsMilovanovic, Jelena (Author) / Young, Dennis (Thesis advisor) / Reiser, Mark R. (Thesis advisor) / Wilson, Jeffrey (Committee member) / Eubank, Randall (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Designing a hazard intelligence platform enables public agencies to organize diversity and manage complexity in collaborative partnerships. To maintain the integrity of the platform while preserving its prosocial ethos, understanding the dynamics of “non-regulatory supplements” to central governance is crucial. Conceptually, social responsiveness is shaped by communicative actions, in which coordination is attained through negotiated agreements by way of the evaluation of validity claims. The dynamic processes involve information processing and knowledge sharing. Access to and use of collaborative intelligence can be examined through the notions of traceability and intelligence cohort. Empirical evidence indicates that social traceability is statistically significant and positively associated with improvement in collaborative performance. Moreover, social traceability positively contributes to the efficacy of technical traceability, but not vice versa. Furthermore, technical traceability contributes significantly to both moderate and high performance improvement, while social traceability is significant only for moderate performance improvement; the social effect is therefore limited and contingent. The results suggest three strategic considerations. Social significance: social traceability is the fundamental consideration for high cohort performance. Cocktail therapy: high cohort performance requires an integrative strategy combining high social traceability and high technical traceability. Servant leadership: public agencies should exercise limited authority and play a supporting role in providing appropriate technical traceability, while actively promoting social traceability in the system.
ContributorsWang, Chao-shih (Author) / Van Fleet, David (Thesis advisor) / Grebitus, Carola (Committee member) / Wilson, Jeffrey (Committee member) / Shultz, Clifford (Committee member) / Arizona State University (Publisher)
Created2015
Description
The Pearson and likelihood ratio statistics are well known in goodness-of-fit testing and are commonly used for models applied to multinomial count data. When data come from a table formed by the cross-classification of a large number of variables, these goodness-of-fit statistics may have lower power and an inaccurate Type I error rate due to sparseness. Pearson's statistic can be decomposed into orthogonal components associated with the marginal distributions of the observed variables, and an omnibus fit statistic can be obtained as a sum of these components. When the statistic is a sum of components for lower-order marginals, it has good Type I error rate and statistical power even when applied to a sparse table. In this dissertation, goodness-of-fit statistics using orthogonal components based on second-, third-, and fourth-order marginals were examined. If lack of fit is present in higher-order marginals, then a test that incorporates the higher-order marginals may have higher power than a test that incorporates only first- and/or second-order marginals. To this end, two new statistics based on the orthogonal components of Pearson's chi-square that incorporate third- and fourth-order marginals were developed, and their Type I error, empirical power, and asymptotic power under different sparseness conditions were investigated. Individual orthogonal components as test statistics to identify lack of fit were also studied, and their performance was compared to that of other popular lack-of-fit statistics. When the number of manifest variables exceeds 20, most statistics based on marginal distributions become limited by computer resources and CPU time.
To address this problem, for 20 or more manifest variables, two approaches were investigated: a bootstrap-based method for obtaining p-values for the Pearson-Fisher statistic applied to a confirmatory dichotomous-variable factor analysis model, and the statistic of Tollenaar and Mooijaart (2003).
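The bootstrap idea mentioned in the last sentence can be sketched for a simple sparse multinomial. This is an illustration under assumed toy settings (a known null distribution over 20 cells with only 30 observations), not the dissertation's Pearson-Fisher procedure for the factor analysis model.

```python
import numpy as np

rng = np.random.default_rng(1)

def pearson_x2(counts, p):
    """Pearson's chi-squared statistic for multinomial counts."""
    expected = counts.sum() * p
    return ((counts - expected) ** 2 / expected).sum()

# Sparse setting (assumed toy example): many cells, few observations,
# so the asymptotic chi-square reference distribution is unreliable.
p0 = np.full(20, 1 / 20)            # null cell probabilities
observed = rng.multinomial(30, p0)  # only 30 observations over 20 cells
x2_obs = pearson_x2(observed, p0)

# Parametric bootstrap: resample tables under the null and use the
# resampled statistics as the reference distribution for the p-value.
B = 2000
boot = np.array([pearson_x2(rng.multinomial(30, p0), p0) for _ in range(B)])
p_value = (1 + (boot >= x2_obs).sum()) / (B + 1)
print(p_value)
```

The point of the resampling step is that the p-value no longer depends on the chi-square approximation holding, which is exactly what fails in sparse tables.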
ContributorsDassanayake, Mudiyanselage Maduranga Kasun (Author) / Reiser, Mark R. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St. Louis, Robert (Committee member) / Kamarianakis, Ioannis (Committee member) / Arizona State University (Publisher)
Created2018
Description
This thesis examines the recruitment process of educated millennials coming from four-year institutions to their first job out of college. When referring to millennials throughout my research, I focus specifically on current college graduates in order to better relate to my own experiences as a soon-to-be graduate seeking a job. I examine the various recruiting techniques, i.e., the channels used to connect with graduates, and the hiring and interview process as a whole. This thesis also discusses the challenges and differences of recruiting millennials versus other generations, as well as the latest trends in college and early-talent recruiting. To do this, I conducted a number of in-depth interviews with recruiters and hiring managers from companies that recruit heavily from Arizona State University (ASU), in order to determine what these companies have done to be successful with young college graduates. I aimed to identify the specific techniques these companies use to connect with recent college graduates, the skills these firms are looking for, and what the hiring process looks like for new millennial employees. I also conducted an extensive online literature search on recruiting educated millennials in the workforce and used that information as the basis for my interview questions. The interviews were meant to confirm or refute that research, but the interviewees also revealed many new trends and insights. I hope this information will be beneficial not only to college seniors seeking first-time employment, but also to companies who feel they are struggling to capture young talent.
ContributorsCapra, Alexandria Luccia (Author) / Kalika, Dale (Thesis director) / Eaton, Kathryn (Committee member) / W. P. Carey School of Business (Contributor) / Department of Marketing (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
ASU's international student population has grown rapidly in the last few years, and the fastest-growing group has been international students from China. However, many of these students arrive with inaccurate expectations of life at an American university. Furthermore, prospective students in China who want to attend school in the U.S. struggle to find a university that is both affordable and respected. There is a large opportunity for ASU to reach this market and increase its enrollment of international Chinese students. Our project aimed to create advertisements for ASU that target international Chinese students and their parents. The purpose of our project is to provide inspiration that ASU can use to create a professional marketing campaign targeting this population of potential students.
ContributorsKagiyama, Kristen (Co-author) / Le, Alethea (Co-author) / Chien, Hsui Fen (Thesis director) / Chau, Angie (Committee member) / W. P. Carey School of Business (Contributor) / Department of Marketing (Contributor) / Department of Supply Chain Management (Contributor) / School of International Letters and Cultures (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Fringe is a feature-length screenplay and a work of original science fiction. The story takes place in the future, on a planet far from Earth, but it is told from the human perspective and is meant to call into question many issues present in society today: prejudice, hatred, multiculturalism, war, and social division. The screenplay poses an allegorical relationship between the humanity living on the planet and the enemies they face, and the present-day conflict between America and the Middle East or ISIS. The story follows Miles as he is forced to ally with his sworn enemy, the Lue, and learn to fight alongside them to save his world from destruction. Miles begins the film bitter, resentful, and filled with prejudice toward his foes, much like a majority of Americans today. Instead of focusing on that conflict, though, my story unites these two bitter enemies and asks them to put aside their violent and hateful pasts to fight a new, more powerful foe together. As events unfold, my characters learn that their enemies can be just like them and that they have something valuable to offer their world. My screenplay is about finding commonality with the enemy, on both sides of a conflict. By the end of my tale, Miles learns that there is good to be found in the world, even in his sworn enemies, if he looks closely enough. It may seem like an archetypal plot on the surface, but I worked hard to create a world that has not been seen in film before: an original science-fiction universe that can bring these issues into the light while entertaining an audience. I feel that my screenplay does just that, offering entertainment with an edge of social commentary, and stays true to the science-fiction form.
ContributorsTrcic, Colton Walker (Author) / Maday, Gregory (Thesis director) / Bernstein, Gregory (Committee member) / WPC Graduate Programs (Contributor) / W. P. Carey School of Business (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The January 12, 2010 Haiti earthquake, which hit Port-au-Prince in the late afternoon, caused over 220,000 deaths and $8 billion in damages, roughly 120% of national GDP at the time. A Mw 7.5 earthquake struck rural Guatemala in the early morning in 1976 and caused 23,000-25,000 deaths, three times as many injuries, and roughly $1.1 billion in damages, approximately 30% of Guatemala's GDP. The earthquake that hit just outside Christchurch, New Zealand early in the morning on September 4, 2010 had a magnitude of 7.1 and caused just two injuries, no deaths, and roughly $7.2 billion (USD) in damages, about 5% of GDP. These three earthquakes, all with moment magnitudes over 7, caused extremely varied amounts of economic damage in these three countries. This thesis aims to identify a possible explanation for why this was the case and to suggest ways to improve disaster risk management going forward.
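A quick back-of-envelope check, using only the damage figures and GDP shares quoted above, makes the disparity in economy size explicit; the implied GDPs below are derived from those ratios, not separately sourced figures.

```python
# Each entry: (damages in billions of USD, damages as a share of GDP),
# taken directly from the abstract above.
events = {
    "Haiti 2010": (8.0, 1.20),
    "Guatemala 1976": (1.1, 0.30),
    "Christchurch 2010": (7.2, 0.05),
}

for name, (damages_bn, share) in events.items():
    implied_gdp = damages_bn / share  # GDP implied by the quoted ratio
    print(f"{name}: implied GDP of about ${implied_gdp:.1f}B")
```

The same absolute damage can mean catastrophe or inconvenience depending on the size of the economy absorbing it: Haiti's implied GDP is roughly 7 billion dollars against New Zealand's roughly 144 billion dollars.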
ContributorsHeuermann, Jamie Lynne (Author) / Schoellman, Todd (Thesis director) / Mendez, Jose (Committee member) / Department of Supply Chain Management (Contributor) / Department of Economics (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Traditional educational infrastructures and their corresponding architectures have degenerated to the point of working against today's scholastic objectives. Given the necessity of formal education and academic success in modern society, a re-imagination of the ideal educational model and its architectural equivalent is long overdue. Fortunately, the constituents of a successful instructional method exist just outside our windows. This thesis, completed in conjunction with the ADE422 architectural studio, seeks to identify the qualities of a new educational paradigm and its architectural manifestation through an exploration of nature and biophilic design. Architectural Studio IV was challenged to develop a new academic model and a corresponding architectural integration for the Herberger Young Scholars Academy, an educational institution for exceptionally gifted junior high and high school students located on the West Campus of Arizona State University. An initial investigation of established educational methods and practices evaluated compulsory academic values, concepts, theories, and principles. Examination of scientific studies and literature on the functions of nature in a scholastic setting assisted in developing a novel educational paradigm, and a study of game play and its relation to the learning process also proved integral to the new archetype. A hypothesis was developed asserting that a nature-centric educational model is ideal. Architectural case studies were assessed to determine applicable qualities for a new nature-architecture integration. The resulting architectural manifestation was tested within the program of the Herberger Young Scholars Academy and against the ideal functions of nature in an academic context.
ContributorsTate, Caroline Elizabeth (Author) / Underwood, Max (Thesis director) / Hejduk, Renata (Committee member) / De Jarnett, Mitchell (Committee member) / The Design School (Contributor) / W. P. Carey School of Business (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
The rapidly growing field of supercomputing is in dire need of a new measurement system to optimize JMRAM (Josephson junction magnetoresistive random access memory) devices. To effectively measure these devices, an ultra-low-noise, low-cost cryogenic dipping probe with a wide dynamic voltage range is required. This dipping probe has been designed by ASU with <100 nVp-p noise, <10 nV offsets, a 10 pV to 16 mV voltage range, and negligible thermoelectric drift. No other research group or company can currently match both these low noise levels and this wide voltage range. Two different dipping probes can be created to these specifications: one for high-use applications and one for low-use applications. The only difference between the probes is the outer shell: the high-use probe has a shell made of G-10 fiberglass at a higher price, and the low-use probe has a shell made of AISI 310 steel at a lower price. Either probe can be assembled in less than 8 hours for less than $2,500, requiring only soldering expertise. The low cost and short build time make wide profit margins possible. The market for these cryogenic dipping probes is currently untapped, as most research groups and companies that use such probes build their own, which leaves room for rapid business growth. Potential customers can be reached by marketing the probes at superconductivity conferences. After several years of selling >50 probes, mass production could become feasible by hiring several technicians while still maintaining wide profit margins.
ContributorsHudson, Brooke Ashley (Author) / Adams, James (Thesis director) / Anwar, Shahriar (Committee member) / Materials Science and Engineering Program (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05