Matching Items (4)
Description
This paper intends to analyze the Phoenix Suns' shooting patterns in real NBA games and compare them to the "NBA 2k16" Suns' shooting patterns. Data was collected from the Suns' first five games of the 2015-2016 season and the same games played in "NBA 2k16". The findings of this paper indicate that "NBA 2k16" utilizes statistical findings to model its gameplay. It was also determined that "NBA 2k16" modeled the shooting patterns of the Suns in the first five games of the 2015-2016 season very closely. Both the real Suns' games and the "NBA 2k16" Suns' games showed a higher probability of success for shots taken in the first eight seconds of the shot clock than in the last eight seconds of the shot clock. Similarly, both game types showed a trend that the probability of success for a shot increases the longer a player holds onto the ball. This result was not expected for either game type; however, "NBA 2k16" modeled the findings consistently with the real Suns' games. The video game modeled the Suns with significantly more passes per possession than the real Suns' games, and it also showed a trend that more passes per possession have a significant effect on the outcome of the shot. This trend was not present in the real Suns' games, but the literature supports this finding. In addition, "NBA 2k16" did not correctly model the allocation of team shots for each player; however, the differences were found only in bench players. Lastly, "NBA 2k16" did not correctly allocate shots across the seven regions for Eric Bledsoe, but there was no evidence indicating that the game incorrectly modeled the allocation of shots for the other starters, or the probability of success across the regions.
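The abstract does not name the statistical tests used; as one hypothetical illustration of how an allocation comparison like the one above could be run, the sketch below applies a chi-square goodness-of-fit test to a player's simulated shot counts across the seven court regions against proportions taken from real games. All counts and proportions are invented placeholders, not data from the thesis.

```python
# Hypothetical sketch: compare a player's "NBA 2k16" shot allocation across
# seven court regions against the proportions observed in real games.
# The numbers below are illustrative placeholders, not data from the thesis.
import numpy as np
from scipy import stats

# Observed shot counts per region in the video game (hypothetical)
observed_2k = np.array([18, 12, 9, 7, 6, 5, 3])

# Shot proportions per region from the real games (hypothetical)
real_proportions = np.array([0.25, 0.20, 0.15, 0.12, 0.11, 0.10, 0.07])

# Expected counts if the video game allocated shots like the real games
expected = real_proportions * observed_2k.sum()

# Chi-square goodness-of-fit test across the seven regions
chi2, p_value = stats.chisquare(f_obs=observed_2k, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value would suggest the game's allocation differs from the real games.
```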
Contributors: Harrington, John P. (Author) / Armbruster, Dieter (Thesis director) / Kamarianakis, Ioannis (Committee member) / Chemical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Statistical process control (SPC) is an important quality application that is used throughout industry and is built on control charts. Most often, it is applied in the final stages of product manufacturing; however, it would be beneficial to apply SPC throughout all stages of the manufacturing process, including the earliest ones. This report explores the fundamentals of SPC, applicable programs, important aspects of implementation, and specific examples of where SPC was beneficial. Important programs for SPC include general statistical software such as JMP and Minitab, as well as programs made specifically for SPC such as SPACE (Statistical Process and Control Environment). Advanced programs like SPACE are beneficial because they can easily assist with creating control charts and setting up rules, alarms and notifications, and reaction mechanisms. After the charts are set up, it is important to apply rules to the charts to detect when a system is running off target, which indicates the need to troubleshoot and investigate. This makes notification an integral aspect as well, because attention and awareness must be brought to out-of-control situations. The next important aspect is ensuring there is a reaction mechanism, or a plan for what to do in the event of an out-of-control situation and how to get the system running back on target. Setting up an SPC system takes time and practice and requires a lot of collaboration with experts who know more about the system or the quality side. Some of the more difficult parts of implementation are getting everyone on board, creating trainings, and getting the appropriate personnel trained.
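As a minimal, generic sketch of the control-chart mechanics described above (not tied to SPACE, JMP, or Minitab), the example below estimates 3-sigma limits from an in-control baseline and flags any new point outside those limits, the simplest out-of-control rule. The measurements are made-up placeholders.

```python
# Minimal sketch of an individuals control chart: 3-sigma limits estimated
# from an in-control baseline, plus a simple "point beyond the limits" rule.
# The measurements are made-up placeholders, not data from the report.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(loc=10.0, scale=0.2, size=50)   # in-control history
new_points = np.array([10.1, 9.9, 10.2, 10.9, 10.0])  # incoming measurements

center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma     # upper/lower control limits

# Flag any point outside the 3-sigma limits, signaling the need to
# investigate and bring the process back on target.
for i, x in enumerate(new_points):
    if x > ucl or x < lcl:
        print(f"point {i}: {x:.2f} OUT OF CONTROL (limits {lcl:.2f} to {ucl:.2f})")
    else:
        print(f"point {i}: {x:.2f} in control")
```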
Contributors: Sennavongsa, Christy (Author) / Raupp, Gregory (Thesis director) / Dai, Lenore (Committee member) / Materials Science and Engineering Program (Contributor) / Chemical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Polymer-nanoparticle composites (PNCs) show improved chemical and physical properties compared to pure polymers. However, nanoparticles dispersed in a polymer matrix tend to aggregate due to strong interparticle interactions. Electrospun nanofibers impregnated with nanoparticles have shown improved dispersion of nanoparticles. Currently, there are few models for quantifying dispersion in a PNC, and none for electrospun PNC fibers. A simulation model was developed to quantify the effects of nanoparticle volume loading and fiber-to-particle diameter ratio on the dispersion in a nanofiber. The dispersion was characterized using the interparticle distance along the fiber. Distributions of the interparticle distance were fit to Weibull distributions, and a two-parameter empirical equation for the mean and standard deviation was found. A dispersion factor was defined to quantify the dispersion along the polymer fiber. This model serves as a standard for comparison for future experimental studies through its comparability with microscopy techniques, and as a way to quantify and predict dispersion in polymer-nanoparticle electrospinning systems with a single performance metric.
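The following is a rough illustration of the workflow described above, not the thesis's simulation model: it assumes particle centers placed uniformly along the fiber axis, computes the sequential interparticle spacings, and fits a Weibull distribution to them with scipy. The fiber length and particle count are arbitrary placeholders rather than values tied to a volume loading or diameter ratio.

```python
# Illustrative sketch (not the thesis code): place particle centers uniformly
# along a fiber axis, compute interparticle distances, and fit a Weibull
# distribution to the spacings, as the abstract describes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fiber_length = 100.0   # arbitrary units (placeholder)
n_particles = 200      # would follow from loading and diameter ratio in the thesis

# Particle positions along the fiber axis, sorted so spacings are sequential
positions = np.sort(rng.uniform(0.0, fiber_length, n_particles))
spacings = np.diff(positions)

# Fit a two-parameter Weibull (location fixed at zero) to the spacings
shape, loc, scale = stats.weibull_min.fit(spacings, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.3f}")
print(f"mean spacing = {spacings.mean():.3f}, std = {spacings.std(ddof=1):.3f}")
```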
Contributors: Balzer, Christopher James (Author) / Mu, Bin (Thesis director) / Armstrong, Mitchell (Committee member) / Chemical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The process of cooking a turkey is a yearly task that families undertake in order to deliver a delicious centerpiece to a Thanksgiving meal. While other dishes accompany and comprise the traditional Thanksgiving supper, creating a turkey that satisfies the tastes of all guests is difficult, as preferences vary. Over the years, many cooking methods and preparation variations have come to light. This thesis studies these cooking methods and preparation variations, as well as their effects on the crispiness of the skin, the juiciness of the meat, the tenderness of the meat, and the overall taste, to simplify the choices home cooks have in preparing a turkey that best fits their tastes. Testing and evaluation reveal that among deep-frying, grilling, and oven roasting a turkey, a number of preparation variations show statistically significant changes relative to the absence of those variations. For crispiness, fried turkeys are statistically superior, scoring about 1.5 points higher than the other cooking methods on a 5-point scale. For juiciness, the best preparation variation was using an oven bag, with the oven-roasted turkey scoring about 4.5 points on a 5-point scale. For tenderness, multiple methods are excellent; the best three preparation variations, in order, are spatchcocking, brining, and using an oven bag, each scoring just under 4 out of 5. Finally, testing reaffirms that judges tend to have different subjective tastes, with different perceptions and opinions on some criteria while statistically agreeing on others: there was 67% agreement among judges on crispiness and tenderness, while there was only 17% agreement on juiciness. Evaluation of these cooking methods, as well as their respective preparation variations, addresses the question of which methods are worthwhile endeavors for cooks.
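The abstract does not specify which statistical tests were used; as one hypothetical illustration of how ordinal 1-5 scores for the three cooking methods could be compared, the sketch below runs a Kruskal-Wallis test on crispiness ratings. All scores are invented, not the judges' actual ratings.

```python
# Hypothetical sketch of comparing crispiness scores (1-5 scale) across the
# three cooking methods with a Kruskal-Wallis test; the scores below are
# illustrative placeholders, not data from the thesis.
from scipy import stats

fried   = [5, 4, 5, 5, 4, 5]
grilled = [3, 3, 4, 2, 3, 3]
roasted = [3, 2, 3, 3, 4, 2]

h_stat, p_value = stats.kruskal(fried, grilled, roasted)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate at least one method differs in crispiness,
# consistent with frying scoring about 1.5 points higher in the study.
```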
Contributors: Vance, Jarod (Co-author) / Lacsa, Jeremy (Co-author) / Green, Matthew (Thesis director) / Taylor, David (Committee member) / Chemical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05