
Incorporating auditory models in speech/audio applications

Description

Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes solutions to the high-complexity issues that arise in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem addresses the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, a frequency-pruning algorithm and a detector-pruning algorithm are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem involves obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation can be obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
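
A minimal sketch of the frequency-pruning idea, assuming a toy excitation-to-loudness pipeline in Python/NumPy: channels whose excitation falls far below the peak are skipped because they contribute little to total loudness. The compressive exponent and pruning threshold here are illustrative stand-ins, not the dissertation's model or values.

```python
import numpy as np

def specific_loudness(excitation):
    """Toy compressive nonlinearity mapping excitation to specific loudness
    (a stand-in for the corresponding stage of a full auditory model)."""
    return 0.08 * np.maximum(excitation, 0.0) ** 0.23

def pruned_loudness(excitation, prune_db=40.0):
    """Total loudness with frequency pruning: skip auditory channels whose
    excitation lies more than prune_db below the peak channel, since their
    contribution to the total is small."""
    exc_db = 10.0 * np.log10(excitation + 1e-12)
    keep = exc_db >= exc_db.max() - prune_db       # channels worth evaluating
    partial = np.zeros_like(excitation)
    partial[keep] = specific_loudness(excitation[keep])
    return partial.sum(), keep.mean()              # loudness, fraction evaluated

# Example: a narrowband excitation pattern over 100 auditory channels
exc = np.exp(-0.5 * ((np.arange(100) - 40) / 5.0) ** 2) * 1e6
loud, frac = pruned_loudness(exc)
print(f"loudness ~ {loud:.2f}, evaluated {frac:.0%} of channels")
```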


Date Created
2011

Smooth surfaces for video game development

Description

The video game graphics pipeline has traditionally rendered the scene using a polygonal approach. Advances in modern graphics hardware now allow the rendering of parametric methods. This thesis explores various smooth surface rendering methods that can be integrated into the video game graphics engine. Moving from the polygonal domain to parametric or smooth surfaces has its share of issues, and there is an inherent need to address the various rendering bottlenecks that could hamper such a move. The game engine needs to choose an appropriate method based on the in-game characteristics of the objects; character and animated objects need more sophisticated methods, whereas static objects can use simpler techniques. Scaling the polygon count across various hardware platforms becomes an important factor. Much control is needed over the tessellation levels, whether imposed by hardware limitations or by the application, to adaptively render the mesh without significant loss in performance. This thesis explores several methods that help game engine developers make sound design choices by optimally balancing these trade-offs when rendering the scene using smooth surfaces. It proposes a novel technique for adaptive tessellation of triangular meshes that vastly improves speed while reducing the tessellation count. It develops an approximate method for rendering Loop subdivision surfaces on tessellation-enabled hardware. A taxonomy and evaluation of the methods are provided, and a unified rendering system that provides automatic level of detail by switching between the methods is proposed.
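
A common baseline for exercising such control is to derive per-edge tessellation factors from projected screen-space edge length, so that edges covering more pixels get more subdivisions. The Python/NumPy sketch below shows this standard metric only; the function name and parameter values are illustrative, and it is not the thesis's novel adaptive scheme.

```python
import numpy as np

def edge_tess_level(p0, p1, view_proj, viewport_px,
                    px_per_tri_edge=8.0, max_level=64):
    """Pick a per-edge tessellation factor from projected edge length:
    longer on-screen edges get more subdivisions (screen-space metric)."""
    def to_screen(p):
        clip = view_proj @ np.append(p, 1.0)
        ndc = clip[:2] / clip[3]                   # perspective divide
        return (ndc * 0.5 + 0.5) * viewport_px     # to pixel coordinates
    length_px = np.linalg.norm(to_screen(p1) - to_screen(p0))
    return int(np.clip(np.ceil(length_px / px_per_tri_edge), 1, max_level))

# Example: an edge viewed through an identity projection on a 1080p target
lvl = edge_tess_level(np.array([0.0, 0.0, 0.5]), np.array([0.3, 0.0, 0.5]),
                      np.eye(4), np.array([1920.0, 1080.0]))
print("tessellation level:", lvl)
```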


Date Created
2011

Bridging divides through technology use: transnationalism and digital literacy socialization

Description

In this study, I investigate the digital literacy practices of adult immigrants, and their relationship with transnational processes and practices. Specifically, I focus on their conditions of access to information and communication technologies (ICTs) in their life trajectories, their conditions of learning in a community center, and their appropriation of digital literacy practices for transnational purposes. By studying the culturally situated nature of digital literacies of adult learners with transnational affiliations, I build on recent empirical work in the fields of New Literacy Studies, sociocultural approaches to learning, and transnational studies. In this qualitative study, I utilized ethnographic techniques for data collection, including participant observation, interviewing, and collection of material and electronic artifacts. I drew from case study approaches to analyze and present the experiences of five adult first-generation immigrant participants. I also negotiated multiple positionalities during the two phases of the study: as a participant observer and instructor's aide during the Basic Computer Skills course participants attended, and as a researcher-practitioner in the Web Design course that followed. From these multiple vantage points, my analysis demonstrates that participants' access to ICTs is shaped by structural factors, family dynamics, and individuals' constructions of the value of digital literacies. These factors influence participants' conditions of access to material resources, such as computer equipment, and access to mentoring opportunities with members of their social networks. In addition, my analysis of the instructional practices in the classroom shows that instructors used multiple modalities, multiple languages and specialized discourses to scaffold participants' understandings of digital spaces and interfaces. Lastly, in my analysis of participants' repertoires of digital literacy practices, I found that their engagement in technology use for purposes of communication, learning, political participation and online publishing supported their maintenance of transnational affiliations. Conversely, participants' transnational ties and resources supported their appropriation of digital literacies in everyday practice. This study concludes with a discussion on the relationship among learning, digital literacies and transnationalism, and the contributions of critical and ethnographic perspectives to the study of programs that can bridge digital inequality for minority groups.


Date Created
2011

A P-value based approach for phase II profile monitoring

Description

A P-value based method is proposed for statistical monitoring of various types of profiles in phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope, and error standard deviation of the model. In our proposed approach, P-values are computed at each level within a sample. If at least one of the P-values is less than a pre-specified significance level, the chart signals out of control. The primary advantage of our approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and the error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect that the number of observations within a sample has on the performance of the proposed method is investigated. The proposed method was also compared to the T^2 method discussed in Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that, overall, the proposed P-value method performs satisfactorily for different profile types.
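
As a concrete illustration of the signaling rule described above, the following Python/SciPy sketch computes a two-sided P-value at each level of a linear profile sample and signals if any P-value falls below the significance level. It assumes the in-control intercept, slope, and error standard deviation are known, as phase II monitoring presumes; the parameter values and function name are illustrative, not the dissertation's code.

```python
import numpy as np
from scipy import stats

def pvalue_chart_signal(x, y, a0, a1, sigma, alpha=0.005):
    """Phase II P-value chart for a simple linear profile. For each level
    x_j in the sample, compute the two-sided P-value of the observed y_j
    under the in-control model N(a0 + a1*x_j, sigma^2); signal if any
    P-value falls below alpha."""
    z = (y - (a0 + a1 * x)) / sigma
    pvals = 2.0 * stats.norm.sf(np.abs(z))
    return np.any(pvals < alpha), pvals

# In-control model y = 3 + 2x + N(0,1); inject a mean shift at one level
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = 3 + 2 * x + rng.normal(size=x.size)
y[4] += 4.0                                  # shift at the fifth level
signal, pvals = pvalue_chart_signal(x, y, a0=3.0, a1=2.0, sigma=1.0)
print("signal:", signal, "min P-value:", pvals.min())
```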


Date Created
2013

Small molecule detection by surface plasmon resonance: improvements in sensitivity and kinetic measurement

Description

Surface plasmon resonance (SPR) has emerged as a popular technique for elucidating subtle signals from biological events in a label-free, high-throughput environment. The efficacy of conventional SPR sensors, whose signals are mass-sensitive, diminishes rapidly with the size of the observed target molecules. The following work advances the current SPR sensor paradigm for the purpose of small molecule detection. The detection limits of two orthogonal components of SPR measurement are targeted: speed and sensitivity. In the context of this report, speed refers to the dynamic range of measured kinetic rate constants, while sensitivity refers to the target molecule mass limitation of conventional SPR measurement. A simple device for high-speed microfluidic delivery of liquid samples to a sensor surface is presented to address the temporal limitations of conventional SPR measurement. The time scale of buffer/sample switching is on the order of milliseconds, thereby minimizing the opportunity for sample plug dispersion. The high rates of mass transport to and from the central microfluidic sensing region allow for SPR-based kinetic analysis of binding events with dissociation rate constants (kd) up to 130 s^-1. The required sample volume is only 1 μL, allowing for minimal sample consumption during high-speed kinetic binding measurement. Charge-based detection of small molecules is demonstrated by plasmonic-based electrochemical impedance microscopy (P-EIM). The dependence of SPR on surface charge density is used to detect small molecules (60-120 Da) printed on a dextran-modified sensor surface. The SPR response to an applied ac potential is a function of the surface charge density. This optical signal is composed of a dc and an ac component and is measured with high spatial resolution. The ac component provides the amplitude and phase of the local surface impedance. The phase signal of the small molecules is a function of their charge status, which is manipulated by the pH of the solution. This technique is used to detect and distinguish small molecules based on their charge status, thereby circumventing the mass limitation (~100 Da) of conventional SPR measurement.
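
As a sketch of how the dc level and the amplitude/phase of the ac component can be separated from such a signal, the Python/NumPy snippet below applies lock-in style demodulation at the modulation frequency to a synthetic intensity trace. The function name and signal parameters are assumptions for illustration, not the P-EIM instrument's actual processing chain.

```python
import numpy as np

def demodulate(signal, fs, f_mod):
    """Split an intensity trace into its dc level and the amplitude and
    phase of its component at the applied ac potential frequency
    (lock-in style complex demodulation)."""
    t = np.arange(signal.size) / fs
    ref = np.exp(-2j * np.pi * f_mod * t)
    dc = signal.mean()
    phasor = 2.0 * np.mean((signal - dc) * ref)    # complex ac component
    return dc, np.abs(phasor), np.angle(phasor)

# Synthetic trace: dc offset plus a weak response to a 10 Hz ac potential
rng = np.random.default_rng(0)
fs, f_mod = 1000.0, 10.0
t = np.arange(0, 2.0, 1 / fs)
trace = 1.0 + 0.05 * np.cos(2 * np.pi * f_mod * t + 0.6) \
        + 0.01 * rng.standard_normal(t.size)
dc, amp, phase = demodulate(trace, fs, f_mod)
print(f"dc={dc:.3f}, ac amplitude={amp:.3f}, phase={phase:.2f} rad")
```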


Date Created
2013

Transmission expansion planning for large power systems

Description

Transmission expansion planning (TEP) is a complex decision-making process that requires comprehensive analysis to determine the time, location, and number of electric power transmission facilities needed in the future power grid. This dissertation investigates the topic of solving TEP problems for large power systems and can be divided into two parts. The first part focuses on developing a more accurate network model for TEP study. First, a mixed-integer linear programming (MILP) based TEP model is proposed for solving multi-stage TEP problems. Compared with previous work, the proposed approach reduces the number of variables and constraints needed and improves computational efficiency significantly. Second, the AC power flow model is applied to TEP models; relaxations and reformulations are proposed to make the AC-model-based TEP problem solvable. Third, a convexified AC network model is proposed for TEP studies with reactive power and off-nominal bus voltage magnitudes included in the model. A MILP-based loss model and its relaxations are also investigated. The second part investigates uncertainty modeling in the TEP problem. A two-stage stochastic TEP model is proposed, and decomposition algorithms based on the L-shaped method and progressive hedging (PH) are developed to solve the stochastic model. Results indicate that the stochastic TEP model can give a more accurate estimate of the annual operating cost than the deterministic TEP model, which considers only the peak load.
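
To make the MILP structure concrete, here is a minimal two-bus sketch in Python using PuLP of a DC-model TEP with one candidate line, where big-M disjunctive constraints activate the flow equation only when the line is built. The network, costs, and capacities are toy values, and this is the generic textbook formulation, not the dissertation's reduced-variable model.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, value

# Toy two-bus DC-model TEP: cheap generation at bus 1, load at bus 2,
# one existing line (50 MW) and one candidate line (100 MW).
load, cap_exist, cap_cand = 100, 50, 100
b = 10.0                       # line susceptance (illustrative, per unit)
M = 1e4                        # big-M for the disjunctive flow constraints
build_cost, gen_cost, shed_cost = 1000, 5, 500

prob = LpProblem("toy_TEP", LpMinimize)
z = LpVariable("build", cat=LpBinary)          # 1 if candidate line is built
g = LpVariable("gen", lowBound=0)
shed = LpVariable("shed", lowBound=0, upBound=load)
th2 = LpVariable("theta2")                     # bus 1 is the reference (0 rad)
f_e = LpVariable("flow_exist", -cap_exist, cap_exist)
f_c = LpVariable("flow_cand", -cap_cand, cap_cand)

prob += build_cost * z + gen_cost * g + shed_cost * shed   # total cost
prob += f_e == b * (0 - th2)                   # DC flow law, existing line
prob += f_c - b * (0 - th2) <= M * (1 - z)     # flow law active only if built
prob += f_c - b * (0 - th2) >= -M * (1 - z)
prob += f_c <= cap_cand * z                    # no flow on an unbuilt line
prob += f_c >= -cap_cand * z
prob += g == f_e + f_c                         # power balance at bus 1
prob += f_e + f_c + shed == load               # power balance at bus 2
prob.solve()
print("build:", int(value(z)), "gen:", value(g), "shed:", value(shed))
```

Because the existing line alone cannot serve the load, the solver builds the candidate line rather than pay the load-shedding penalty; a multi-stage or stochastic model repeats this structure over stages or scenarios.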


Date Created
2013

Evaluation of online teacher and student materials for the Framework for K-12 Science Education: science and engineering crosscutting concepts

Description

The National Research Council developed and published the Framework for K-12 Science Education, a new set of concepts that many states were planning to adopt. Part of this new endeavor included a set of science and engineering crosscutting concepts to be incorporated into science materials and activities, a first in the history of science standards. With the recent development of the Framework came the arduous task of evaluating current lessons for alignment with the new crosscutting concepts. This study took on that task in a small yet important area of lessons available on the internet. The lessons, to be used by K-12 educators and students, were produced by different organizations and research efforts. This study focused specifically on Earth science lessons as they related to earthquakes. To determine the extent to which current and available lessons met the new crosscutting concepts, an evaluation rubric was developed and used to examine teacher and student lessons. Lessons were evaluated on evidence of the science, the engineering, and the application of the engineering for each of the seven crosscutting concepts in the Framework. Each lesson was also evaluated for grade-level appropriateness to determine whether it was suitable for the grade level(s) designated by the lesson. The study demonstrated that the majority of lesson items contained science applications of the crosscutting concepts. However, few contained evidence of engineering examples of the crosscutting concepts, and there was a lack of application of engineering concepts as well. To evaluate the application of engineering concepts, the activities were examined for characteristics of the engineering design process. Results indicated that student activities were limited in both the nature of the activity and the quantity of lessons that contained activities. The majority of lessons were found to be grade appropriate. This study demonstrated the need to redesign current lessons to incorporate more engineering-specific examples of the crosscutting concepts. Furthermore, it provided evidence that the current model of material development is outdated and should be revised to include engineering concepts to meet the needs of the new science standards.


Date Created
2013

1-dimensional zinc oxide nanomaterial growth and solar cell applications

Description

Zinc oxide (ZnO) has attracted much interest during the last decades as a functional material. Furthermore, ZnO is a potential transparent conducting oxide material, competing with indium tin oxide (ITO), graphene, and carbon nanotube film. It is known to be conductive when doped with elements such as indium, gallium, and aluminum. The solubility of those dopant elements in ZnO is still debated, but the ever-increasing price of indium makes it necessary to find alternative conducting materials, whether in film or nanostructure form, for display devices. In addition, a new generation of solar cells (nanostructured or hybrid photovoltaics) requires compatible materials that are capable of standing freely on substrates without seed or buffer layers and can provide unobstructed electron or hole pathways toward the electrodes. Nanostructures for solar cells using inorganic materials such as silicon (Si), titanium oxide (TiO2), and ZnO have been an interesting research topic in the solar cell community as a way to overcome the efficiency limitations of organic solar cells. This dissertation is a study of the rational solution-based synthesis of 1-dimensional ZnO nanomaterials and their solar cell applications. These results have implications for cost-effective and uniform nanomanufacturing for next-generation solar cell applications, achieved by controlling growth conditions and by doping with transition metal elements in solution.


Date Created
2012

Multipath mitigating correlation kernels

Description

Autonomous vehicle control systems utilize real-time kinematic Global Navigation Satellite System (GNSS) receivers to provide a position within two centimeters of truth. GNSS receivers utilize the satellite signal time-of-arrival estimates to solve for position, and multipath corrupts the time-of-arrival estimates with a time-varying bias. Time-of-arrival estimates are based upon accurate direct sequence spread spectrum (DSSS) code and carrier phase tracking. Current multipath-mitigating GNSS solutions include fixed radiation pattern antennas and windowed delay-lock loop code phase discriminators. A new multipath-mitigating code tracking algorithm is introduced that utilizes a non-symmetric correlation kernel to reject multipath. Independent parameters provide a means to trade off code tracking discriminant gain against multipath mitigation performance. The algorithm's performance is characterized in terms of multipath phase error bias, phase error estimation variance, tracking range, tracking ambiguity, and implementation complexity. The algorithm is suitable for modernized GNSS signals including Binary Phase Shift Keyed (BPSK) and a variety of Binary Offset Keyed (BOC) signals. The algorithm compensates for unbalanced code sequences to ensure that a code tracking bias does not result from the use of asymmetric correlation kernels. The algorithm does not require explicit knowledge of the propagation channel model. Design recommendations for selecting the algorithm parameters to mitigate precorrelation filter distortion are also provided.
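
The toy Python/NumPy sketch below illustrates the underlying idea only: a discriminant formed from a non-symmetric arrangement of correlator taps can be re-weighted so its clean-signal zero crossing stays at zero, while a delayed multipath echo shifts that crossing less than it shifts a symmetric early-minus-late discriminant. The tap offsets, weights, and channel are illustrative choices, not the algorithm's actual kernel design.

```python
import numpy as np

rng = np.random.default_rng(0)
os = 20                                     # samples per chip
code = np.repeat(rng.choice([-1.0, 1.0], size=511), os)   # PRN-like code

def discriminant(rx, taps):
    """Correlate rx against a kernel formed as a weighted sum of shifted
    code replicas; taps maps sample offset -> weight."""
    return sum(w * np.dot(rx, np.roll(code, d)) / code.size
               for d, w in taps.items())

def zero_crossing(taps, mp_amp=0.0, mp_delay=8, span=4):
    """Locate the code phase error (samples) where the S-curve is nearest
    zero; multipath moves this crossing, i.e. biases the tracking loop."""
    errs = np.arange(-span, span + 1)
    s = [discriminant(np.roll(code, e) + mp_amp * np.roll(code, e + mp_delay),
                      taps) for e in errs]
    return errs[np.argmin(np.abs(s))]

symmetric = {-5: 1.0, 5: -1.0}    # classic early-minus-late kernel
asymmetric = {-5: 1.2, 2: -1.0}   # late tap pulled toward prompt, re-weighted
                                  # so the clean zero crossing stays at 0
for name, taps in [("symmetric", symmetric), ("asymmetric", asymmetric)]:
    print(name, "tracking bias with multipath:",
          zero_crossing(taps, mp_amp=0.5), "samples")
```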


Date Created
2013

Methods and devices for assessment of fiprole pesticides in engineered waterways

Description

This dissertation focused on the development and application of state-of-the-art monitoring tools and analysis methods for tracking the fate of trace-level contaminants in the natural and built water environments, using fipronil as a model; fipronil and its primary degradates (known collectively as fiproles) are among a group of trace-level emerging environmental contaminants that are extremely potent arthropod neurotoxins. The work further aimed to fill data gaps regarding the presence and fate of fipronil in engineered water systems, specifically in a wastewater treatment plant (WWTP) and in an engineered wetland. A review of manual and automated “active” water sampling technologies motivated the development of two new automated samplers capable of in situ biphasic extraction of water samples across the bulk water/sediment interface of surface water systems. Combined with an optimized method for the quantification of fiproles, the newly developed In Situ Sampler for Biphasic water monitoring (IS2B) was deployed along with conventional automated water samplers to study the fate and occurrence of fiproles in engineered water environments. Continuous sampling over two days and subsequent analysis yielded average total fiprole concentrations of 9.9 ± 4.6 to 18.1 ± 4.6 ng/L in wetland surface water and 9.1 ± 3.0 to 12.6 ± 2.1 ng/L in wetland sediment pore water. A mass balance of the WWTP located immediately upstream demonstrated unattenuated breakthrough of total fiproles through the WWTP, with 25 ± 3% conversion of fipronil to degradates, and only limited removal of total fiproles in the wetland (47 ± 13%). Extrapolation of local emissions (5-7 g/d) suggests nationwide annual fiprole loadings from WWTPs to U.S. surface waters on the order of one half to three quarters of a metric tonne. The qualitative and quantitative data collected in this work have regulatory implications, and the sampling tools and analysis strategies described in this dissertation have broad applicability in the assessment of risks posed by trace-level environmental contaminants.
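
A back-of-envelope check of that extrapolation, in Python: scaling the measured per-plant emission by the plant's share of national wastewater flow reproduces the stated order of magnitude. The 5-7 g/d figure is from the text; the 0.4% flow share is a hypothetical illustration, not a figure from the dissertation.

```python
# Per-plant emission (g/d) is from the text; the studied plant's share of
# national wastewater flow is a HYPOTHETICAL value chosen for illustration.
per_plant_g_per_day = (5.0, 7.0)
plant_share_of_national_flow = 0.004        # assumed: 0.4% of U.S. flow

for g_per_day in per_plant_g_per_day:
    national_kg_per_year = g_per_day / plant_share_of_national_flow * 365 / 1000
    print(f"{g_per_day} g/d -> ~{national_kg_per_year:.0f} kg/yr nationwide")
# Prints roughly 456-639 kg/yr, i.e. about one half to three quarters
# of a metric tonne, consistent with the abstract's estimate.
```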


Date Created
2015