Matching Items (876)

Description
This project sheds light on trombonist Andy Martin's improvisation and provides tools for further learning. A biographical sketch gives background on Martin, establishing him as a newer jazz master. Through the transcription and analysis of nine improvised solos, Martin's improvisational voice and vocabulary are deciphered and presented as a series of seven thematic hooks. These patterns, rhythms, and gestures are described, analyzed, and presented with examples of how each is used in the solos. The hooks are also set as application exercises for learning jazz style and improvisation. These exercises demonstrate how to use Martin's hooks as a means of furthering one's own improvisation. A full method for successful transcription is also presented, along with the printed transcriptions and their accompanying information sheets.
Contributors: Wilkinson, Michael Scott (Author) / Ericson, John (Thesis advisor) / Kocour, Michael (Committee member) / Solis, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Jazz continues, into its second century, as one of the most important musics taught in public middle and high schools. Even so, research related to how students learn, especially in their earliest interactions with jazz culture, is limited. Weaving together interviews and observations of junior and senior high school jazz players and teachers, private studio instructors, current university students majoring in jazz, and university and college jazz faculty, I developed a composite sketch of a secondary school student learning to play jazz. Using arts-based educational research methods, including narrative inquiry and literary non-fiction, I explore the status of current jazz education and the experiences of novice jazz learners. What emerges is a complex story of students and teachers negotiating the landscape of jazz in and out of early twenty-first century public schools. Suggestions for enhancing jazz experiences for all stakeholders follow, focusing on access and the preparation of future jazz teachers.
Contributors: Kelly, Keith B (Author) / Stauffer, Sandra (Thesis advisor) / Tobias, Evan (Committee member) / Kocour, Michael (Committee member) / Sullivan, Jill (Committee member) / Schmidt, Margaret (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
As the use of photovoltaic (PV) modules in large power plants continues to increase globally, more studies on degradation, reliability, failure modes, and mechanisms of field-aged modules are needed to predict module life expectancy based on accelerated lifetime testing of PV modules. In this work, a 26+ year old PV power plant in Phoenix, Arizona has been evaluated for performance, reliability, and durability. The PV power plant, called Solar One, is owned and operated by John F. Long's homeowners association. It is a 200 kWdc power plant, rated at standard test conditions (STC), composed of 4000 PV modules or frameless laminates in 100 panel groups (rated at 175 kWac). The power plant is made of two center-tapped bipolar arrays, the north array and the south array. Due to the limited time frame for executing this large project, the work was performed by two master's students (Jonathan Belmont and Kolapo Olakonu) and the test results are presented in two master's theses. This thesis presents the results obtained on the south array and the other thesis presents the results obtained on the north array. Each of the two arrays is made of four sub arrays, the east sub arrays (positive and negative polarities) and the west sub arrays (positive and negative polarities), making eight sub arrays in total. The evaluation and analyses of the power plant included in this thesis consist of visual inspection, electrical performance measurements, and infrared thermography. The possible presence of potential induced degradation (PID) due to the potential difference between ground and strings was also investigated. Some installation practices were also studied and found to contribute to the power loss observed in this investigation. The power output measured in 2011 for all eight sub arrays at STC is approximately 76 kWdc, representing a power loss of 62% (from 200 kW to 76 kW) over 26+ years. The 2011 measured power output for the four south sub arrays at STC is 39 kWdc, representing a power loss of 61% (from 100 kW to 39 kW) over 26+ years. Encapsulation browning and non-cell interconnect ribbon breakages were determined to be the primary causes of the power loss.
Contributors: Olakonu, Kolapo (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2012
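
The degradation figures quoted in this record invite a quick sanity check. The sketch below converts the reported decline from 200 kWdc to 76 kWdc into average annual degradation rates, assuming a field age of exactly 26 years (the abstract only says "26+", so these rates are rough approximations); it is a back-of-the-envelope illustration, not part of the thesis.

```python
# Back-of-the-envelope check of the plant-level power loss reported above.
# Field age is assumed to be exactly 26 years ("26+" in the abstract).
rated_kw = 200.0      # STC nameplate rating of the full plant (kWdc)
measured_kw = 76.0    # 2011 measured output of all eight sub arrays (kWdc)
years = 26.0          # assumed field exposure

total_loss = 1 - measured_kw / rated_kw                       # ~0.62
linear_rate = total_loss / years                              # ~2.4 %/yr
compound_rate = 1 - (measured_kw / rated_kw) ** (1 / years)   # ~3.7 %/yr

print(f"total loss    : {total_loss:.1%}")
print(f"linear rate   : {linear_rate:.2%} per year")
print(f"compound rate : {compound_rate:.2%} per year")
```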

Description
Memories play an integral role in today's advanced ICs. Technology scaling has enabled high-density designs, but at the price of increased variability and reliability concerns. It is imperative to have accurate methods to measure and extract the variability in the SRAM cell to produce accurate reliability projections for future technologies. This work presents a novel test measurement and extraction technique which is non-invasive to the actual operation of the SRAM memory array. The salient features of this work include: i) a single-ended SRAM test structure that causes no disturbance to SRAM operations; ii) a convenient test procedure that only requires quasi-static control of external voltages; and iii) a non-iterative method that extracts the VTH variation of each transistor from eight independent switch-point measurements. With present-day technology scaling, in addition to process variability, the impact of aging mechanisms becomes dominant. Aging mechanisms such as Negative Bias Temperature Instability (NBTI), Channel Hot Carrier (CHC) and Time Dependent Dielectric Breakdown (TDDB) are critical in present-day nano-scale technology nodes. In this work, we focus on the impact of NBTI aging in the SRAM cell and use a trapping/de-trapping theory based log(t) model to explain the shift in threshold voltage (VTH). The aging study focuses on the following: i) statistical aging in the PMOS device due to NBTI dominates the temporal shift of the SRAM cell; ii) besides static variations, the shift in VTH demands increased guard-banding margins in the design stage; iii) aging statistics remain constant during the shift, presenting a secondary effect in aging prediction; and iv) whether the aging mechanism can be used as a compensation technique to reduce mismatch due to process variations. Finally, the entire test setup has been simulated in SPICE and validated in silicon, and the results are presented. The method also facilitates the study of design metrics such as static, read, and write noise margins as well as the data retention voltage, and thus helps designers improve the cell stability of SRAM.
Contributors: Ravi, Venkatesa (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Clark, Lawrence (Committee member) / Arizona State University (Publisher)
Created: 2013
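
The abstract above attributes the NBTI-induced threshold shift to a trapping/de-trapping log(t) model. As a purely illustrative sketch (the record does not give the exact expression or coefficients), the snippet below fits a shift of the assumed form dVth(t) = A * log10(1 + t/t0) to hypothetical stress data; the data points, A, and t0 are placeholders.

```python
# Illustrative fit of a log(t)-style NBTI shift model,
#   dVth(t) = A * log10(1 + t / t0),
# to hypothetical measurements. The functional form and all numbers are
# assumptions for illustration, not the thesis's model or data.
import numpy as np
from scipy.optimize import curve_fit

def dvth_model(t, A, t0):
    """Trapping/de-trapping style log(t) threshold-voltage shift (mV)."""
    return A * np.log10(1.0 + t / t0)

t_stress = np.array([1e1, 1e2, 1e3, 1e4, 1e5])       # stress time (s)
dvth_meas = np.array([4.8, 9.5, 14.6, 19.3, 24.1])    # measured shift (mV)

(A_fit, t0_fit), _ = curve_fit(dvth_model, t_stress, dvth_meas, p0=[5.0, 1.0])
print(f"A  = {A_fit:.2f} mV per decade")
print(f"t0 = {t0_fit:.2g} s")
```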

Description
Advances in implantable MEMS technology have made possible adaptive micro-robotic implants that can track and record from single neurons in the brain. The development of autonomous neural interfaces opens up exciting possibilities of micro-robots performing standard electrophysiological techniques that would previously take researchers several hundred hours to train for and achieve the desired skill level. It would result in more reliable and adaptive neural interfaces that could record optimal neural activity 24/7 with high-fidelity signals, high yield, and increased throughput. The main contribution here is validating adaptive strategies to overcome challenges in autonomous navigation of microelectrodes inside the brain. The following issues pose significant challenges, as brain tissue is both functionally and structurally dynamic: a) time-varying mechanical properties of the brain tissue-microelectrode interface due to the hyperelastic, viscoelastic nature of brain tissue; b) non-stationarities in the neural signal caused by mechanical and physiological events in the interface; and c) the lack of visual feedback of microelectrode position in brain tissue. A closed-loop control algorithm is proposed here for autonomous navigation of microelectrodes in brain tissue while optimizing the signal-to-noise ratio of multi-unit neural recordings. The algorithm incorporates a quantitative understanding of the constitutive mechanical properties of soft viscoelastic tissue like the brain and is guided by models that predict stresses developed in brain tissue during movement of the microelectrode. An optimal movement strategy is developed that achieves precise positioning of microelectrodes in the brain by minimizing the stresses developed in the surrounding tissue during navigation and maximizing the speed of movement. Results of testing the closed-loop control paradigm in short-term rodent experiments validated that it was possible to achieve a consistently high SNR throughout the duration of the experiment. At the systems level, a new generation of MEMS actuators for movable microelectrode arrays is characterized, and the MEMS device operation parameters are optimized for improved performance and reliability. Further, recommendations for packaging to minimize the form factor of the implant, and for device mounting and implantation techniques for the MEMS microelectrode array to enhance the longevity of the implant, are also included in a top-down approach to achieving a reliable brain interface.
ContributorsAnand, Sindhu (Author) / Muthuswamy, Jitendran (Thesis advisor) / Tillery, Stephen H (Committee member) / Buneo, Christopher (Committee member) / Abbas, James (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2013
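
The closed-loop navigation idea described above (advance the microelectrode only when recording quality drops, and only as far as the predicted tissue stress allows) can be summarized in a short sketch. Everything below, including function names, thresholds, and step sizes, is a hypothetical placeholder standing in for the thesis's actual models and control law.

```python
# Hypothetical sketch of one iteration of a closed-loop positioning scheme
# of the kind described above. All names, thresholds, and units are
# placeholders, not the algorithm from the thesis.
SNR_TARGET = 8.0        # desired multi-unit SNR (arbitrary units)
STRESS_LIMIT = 1.0      # normalized allowable tissue stress
STEP_UM = 2.0           # nominal advance per iteration (micrometers)
MIN_STEP_UM = 0.25      # smallest useful actuator step (micrometers)

def control_cycle(read_snr, predict_stress, move_electrode):
    """Advance only if SNR is low and the predicted stress stays within limits."""
    if read_snr() >= SNR_TARGET:
        return "hold"                        # signal quality adequate: stay put
    step = STEP_UM
    while step >= MIN_STEP_UM and predict_stress(step) > STRESS_LIMIT:
        step /= 2.0                          # shrink the step until stress is acceptable
    if step >= MIN_STEP_UM:
        move_electrode(step)                 # advance toward a better recording site
        return f"advanced {step:.2f} um"
    return "wait"                            # let the tissue relax before retrying
```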

Description
Concerto for Piano and Chamber Orchestra was conceived in February of 2013, and conceptually it is my attempt to fuse personal expressions of jazz and classical music into one fully realized statement. It is a three-movement work (fast, slow, fast) for 2 fl., 2 ob., 2 cl., bsn., 2 hrn., 2 tpt., tbn., pno., perc., str. (6,4,2,2,1). The work is approximately 27 minutes in duration. The first movement of the Concerto is written in a fluid sonata form. A fugato begins where the second theme would normally appear, and the second theme does not fully appear until near the end of the solo piano section. The result is that the second theme, when finally revealed, is so reminiscent of the history of jazz and classical synthesis that it does not sound completely new; in fact it is a return of something that was heard before, but only hinted at in this piece. The second movement is a kind of deconstructive set of variations, with a specific theme and harmonic pattern implied throughout the movement. However, the full theme is not disclosed until the final variation. The variations are interrupted by moments of pure rhythmic music, containing harmony made up of major chords with an added fourth, defying resolution, and dissolving each time back into a new variation. The third movement is in rondo form, using rhythmic and harmonic influences from jazz. The percussion plays a substantial role in this movement, acting as a counterpoint to the piano part throughout. This movement, and the piece, concludes with an extended coda, inspired indirectly by the simple complexities of an improvisational piano solo, building in complexity as the concerto draws to a close.
Contributors: Sneider, Elliot (Author) / Rogers, Rodney (Thesis advisor) / DeMars, James (Committee member) / Hackbarth, Glenn (Committee member) / Solis, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Non-volatile memories (NVM) are widely used in modern electronic devices due to their non-volatility, low static power consumption and high storage density. While Flash memories are the dominant NVM technology, resistive memories such as phase change random access memory (PRAM) and spin torque transfer magnetic random access memory (STT-MRAM) are gaining ground. All these technologies suffer from reliability degradation due to process variations, structural limits and material property shift. To address the reliability concerns of these NVM technologies, multi-level low cost solutions are proposed for each of them. My approach consists of first building a comprehensive error model. Next, the error characteristics are exploited to develop low cost multi-level strategies to compensate for the errors. For instance, for NAND Flash memory, I first characterize errors due to threshold voltage variations as a function of the number of program/erase cycles. Next, a flexible product code is designed to migrate to a stronger ECC scheme as the number of program/erase cycles increases. An adaptive data refresh scheme is also proposed to improve memory reliability with low energy cost for applications with different data update frequencies. For PRAM, soft error and hard error models are built based on shifts in the resistance distributions. Next, I developed a multi-level error control approach involving bit interleaving and subblock flipping at the architecture level, threshold resistance tuning at the circuit level, and programming current profile tuning at the device level. This approach reduced the error rate significantly, so that a low cost ECC scheme was sufficient to satisfy the memory reliability constraint. I also studied the reliability of a PRAM+DRAM hybrid memory system and analyzed the tradeoffs between memory performance, programming energy and lifetime. For STT-MRAM, I first developed an error model based on process variations. I then developed a multi-level approach to reduce the error rates that consisted of increasing the W/L ratio of the access transistor, increasing the voltage difference across the memory cell, and adjusting the current profile during write operation. This approach enabled the use of a low cost BCH based ECC scheme to achieve very low block failure rates.
Contributors: Yang, Chengen (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Committee member) / Ogras, Umit Y. (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2014
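
One element of the Flash solution above is a product code that migrates to a stronger ECC configuration as program/erase wear accumulates. The fragment below sketches only that selection logic; the cycle thresholds and code strengths are invented placeholders, not the parameters used in the thesis.

```python
# Illustrative wear-aware ECC selection in the spirit of the flexible
# product code described above. Thresholds and correction strengths are
# invented placeholders.
ECC_SCHEDULE = [
    # (max P/E cycles, ECC configuration label, correctable bits per codeword)
    (3_000,  "light product code",  4),
    (10_000, "medium product code", 8),
    (None,   "strong product code", 16),    # beyond the last threshold
]

def select_ecc(pe_cycles: int):
    """Pick an ECC configuration for a block based on its accumulated wear."""
    for limit, label, t_correct in ECC_SCHEDULE:
        if limit is None or pe_cycles <= limit:
            return label, t_correct

print(select_ecc(1_200))     # ('light product code', 4)
print(select_ecc(25_000))    # ('strong product code', 16)
```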

Description
Electromigration in metal interconnects is the most pernicious failure mechanism in semiconductor integrated circuits (ICs). Early electromigration investigations were primarily focused on aluminum interconnects for silicon-based ICs. An alternative metallization compatible with gallium arsenide (GaAs) was required in the development of high-powered radio frequency (RF) compound semiconductor devices operating at higher current densities and elevated temperatures. Gold-based metallization was implemented on GaAs devices because it uniquely forms a very low resistance ohmic contact and gold interconnects have superior electrical and thermal conductivity properties. Gold (Au) was also believed to have improved resistance to electromigration due to its higher melting temperature, yet electromigration reliability data on passivated Au interconnects is scarce and inadequate in the literature. Therefore, the objective of this research was to characterize the electromigration lifetimes of passivated Au interconnects under precisely controlled stress conditions with statistically relevant quantities to obtain accurate model parameters essential for extrapolation to normal operational conditions. This research objective was accomplished through measurement of electromigration lifetimes of large quantities of passivated electroplated Au interconnects utilizing high-resolution in-situ resistance monitoring equipment. Application of moderate accelerated stress conditions with a current density limited to 2 MA/cm² and oven temperatures in the range of 300°C to 375°C avoided electrical overstress and severe Joule-heated temperature gradients. Temperature coefficients of resistance (TCRs) were measured to determine accurate Joule-heated Au interconnect film temperatures. A failure criterion of 50% resistance degradation was selected to prevent the thermal runaway and catastrophic metal ruptures that are problematic in open circuit failure tests. Test structure design was optimized to reduce resistance variation and facilitate failure analysis. Characterization of the Au microstructure yielded a median grain size of 0.91 µm. All Au lifetime distributions followed log-normal distributions and Black's model was found to be applicable. An activation energy of 0.80 ± 0.05 eV was measured from constant current electromigration tests at multiple temperatures. A current density exponent of 1.91 was extracted from multiple current densities at a constant temperature. Electromigration-induced void morphology along with these model parameters indicated grain boundary diffusion is dominant and the void nucleation mechanism controlled the failure time.
Contributors: Kilgore, Stephen (Author) / Adams, James (Thesis advisor) / Schroder, Dieter (Thesis advisor) / Krause, Stephen (Committee member) / Gaw, Craig (Committee member) / Arizona State University (Publisher)
Created: 2013
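
The abstract reports the two Black's-model parameters extracted in the thesis: a current density exponent of about 1.91 and an activation energy of about 0.80 eV. The sketch below shows how those numbers plug into Black's equation, MTTF = A * j^(-n) * exp(Ea / (kB*T)), to extrapolate from an accelerated stress condition to a milder use condition. The example use condition is hypothetical, and the unknown prefactor A cancels in the ratio.

```python
# Black's equation, MTTF = A * j**(-n) * exp(Ea / (kB * T)), using the
# exponent and activation energy reported above. Only the ratio of two
# conditions is computed, so the unknown prefactor A cancels out.
import math

K_B = 8.617e-5     # Boltzmann constant (eV/K)
N_EXP = 1.91       # current density exponent from the thesis
EA_EV = 0.80       # activation energy from the thesis (eV)

def acceleration_factor(j_use, temp_use_c, j_stress, temp_stress_c):
    """MTTF(use) / MTTF(stress) per Black's model (current densities in MA/cm^2)."""
    t_use = temp_use_c + 273.15
    t_stress = temp_stress_c + 273.15
    j_term = (j_stress / j_use) ** N_EXP
    temp_term = math.exp(EA_EV / K_B * (1.0 / t_use - 1.0 / t_stress))
    return j_term * temp_term

# Stress at 2 MA/cm^2 and 300 C vs. a hypothetical 0.5 MA/cm^2, 125 C use condition
print(f"acceleration factor ~ {acceleration_factor(0.5, 125.0, 2.0, 300.0):,.0f}x")
```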

Description
The measurement of competency in nursing is critical to ensure safe and effective care of patients. This study had two purposes. First, the psychometric characteristics of the Nursing Performance Profile (NPP), an instrument used to measure nursing competency, were evaluated using generalizability theory and a sample of 18 nurses in the Measuring Competency with Simulation (MCWS) Phase I dataset. The relative magnitudes of various error sources and their interactions were estimated in a generalizability study involving a fully crossed, three-facet random design with nurse participants as the object of measurement and scenarios, raters, and items as the three facets. A design corresponding to that of the MCWS Phase I data (involving three scenarios, three raters, and 41 items) showed nurse participants contributed the greatest proportion of total variance (50.00%), followed, in decreasing magnitude, by: rater (19.40%), the two-way participant x scenario interaction (12.93%), and the two-way participant x rater interaction (8.62%). The generalizability (G) coefficient was .65 and the dependability coefficient was .50. In decision study designs minimizing the number of scenarios, the desired generalizability coefficients of .70 and .80 were reached at three scenarios with five raters, and five scenarios with nine raters, respectively. In designs minimizing the number of raters, G coefficients of .72 and .80 were reached with three raters and five scenarios, and with four raters and nine scenarios, respectively. A dependability coefficient of .71 was attained with six scenarios and nine raters or seven raters and nine scenarios. Achieving high reliability with designs involving fewer raters may be possible with enhanced rater training to decrease variance components for rater main and interaction effects. The second part of this study involved the design and implementation of a validation process for evidence-based human patient simulation scenarios in assessment of nursing competency. A team of experts validated the new scenario using a modified Delphi technique, involving three rounds of iterative feedback and revisions. In tandem, the psychometric study of the NPP and the development of a validation process for human patient simulation scenarios both advance and encourage best practices for studying the validity of simulation-based assessments.
Contributors: O'Brien, Janet Elaine (Author) / Thompson, Marilyn (Thesis advisor) / Hagler, Debra (Thesis advisor) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2014
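
For readers unfamiliar with generalizability theory, the relative G coefficient reported above is the participant (true-score) variance divided by itself plus the participant-related error variances, each shrunk by the number of conditions of its facet. A simplified form for a person x scenario x rater design is sketched below; the thesis's full design also crosses items and includes further components folded here into the residual term, so this shows the structure rather than the exact formula used.

```latex
% Simplified relative G coefficient for a fully crossed p x s x r design
% (the item facet and higher-order terms of the full study are folded into
% the residual sigma^2_{psr,e}).
\[
  E\rho^{2} \;=\;
  \frac{\sigma^{2}_{p}}
       {\sigma^{2}_{p}
        + \dfrac{\sigma^{2}_{ps}}{n_{s}}
        + \dfrac{\sigma^{2}_{pr}}{n_{r}}
        + \dfrac{\sigma^{2}_{psr,e}}{n_{s}\, n_{r}}}
\]
```

Increasing the number of scenarios or raters shrinks the corresponding error terms, which is why the decision-study projections in the abstract trade raters against scenarios.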

Description
This thesis presents a meta-analysis of lead-free solder reliability. Qualitative analyses of the failure modes of lead-free solder under different stress tests, including drop, bend, thermal and vibration tests, are discussed. The main cause of failure of lead-free solder is fatigue cracking, and the speed of propagation of the initial crack can differ under different test conditions and for different solder materials. A quantitative analysis of the fatigue behavior of SAC lead-free solder under a thermal preconditioning process is conducted. This thesis presents a method for predicting the failure life of solder alloys by building a Weibull regression model. The failure life of solder on a circuit board is assumed to be Weibull distributed. Different materials and test conditions can affect the distribution by changing the shape and scale parameters of the Weibull distribution. The method models the regression of these parameters on test conditions as predictors, based on Bayesian inference concepts. In the process of building the regression models, prior distributions are generated according to previous studies, and Markov Chain Monte Carlo (MCMC) sampling is used in the WinBUGS environment.
Contributors: Xu, Xinyue (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2014
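
The modeling strategy described above, Weibull failure lives whose parameters are regressed on test conditions and fit by MCMC, can be written compactly. The specification below is a generic sketch under assumed choices (a log-linear link on the scale parameter and weakly informative priors standing in for the literature-based priors mentioned in the abstract); it is not the WinBUGS model from the thesis.

```latex
% Generic Bayesian Weibull regression sketch; the link function and priors
% are illustrative assumptions.
\[
\begin{aligned}
  t_{ij} \mid \beta, \eta_j &\sim \operatorname{Weibull}(\beta, \eta_j)
      && \text{failure life of joint $i$ under test condition $j$}\\[2pt]
  \log \eta_j &= \mathbf{x}_j^{\top}\boldsymbol{\theta}
      && \text{scale parameter driven by test-condition covariates}\\[2pt]
  \beta &\sim \operatorname{Gamma}(a_0, b_0)
      && \text{prior on the shape parameter}\\[2pt]
  \boldsymbol{\theta} &\sim \mathcal{N}(\boldsymbol{\mu}_0, \Sigma_0)
      && \text{priors informed by previous studies}
\end{aligned}
\]
```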