Matching Items (1,349)



Interactive laboratory for digital signal processing in iOS devices

Description


The demand for handheld portable computing in education, business and research has resulted in advanced mobile devices with powerful processors and large multi-touch screens. Such devices are capable of handling tasks of moderate computational complexity such as word processing, complex Internet transactions, and even human motion analysis. Apple's iOS devices, including the iPhone, iPod touch and the latest in the family, the iPad, are among the best-known and most widely used mobile devices today. Their advanced multi-touch interface and improved processing power can be exploited for engineering and STEM demonstrations. Moreover, these devices have become a part of everyday student life. Hence, the design of exciting mobile applications and software represents a great opportunity to build student interest and enthusiasm in science and engineering. This thesis presents the design and implementation of portable interactive signal processing simulation software on the iOS platform. The iOS-based object-oriented application is called i-JDSP and is based on the award-winning Java-DSP concept. It is implemented in Objective-C and C as a native Cocoa Touch application that can run on any iOS device. i-JDSP offers basic signal processing simulation functions such as the Fast Fourier Transform (FFT), filtering and spectral analysis on a compact and convenient graphical user interface, and provides a very compelling multi-touch programming experience. Built-in modules also demonstrate concepts such as pole-zero placement. i-JDSP also incorporates sound capture and playback options that can be used in near-real-time analysis of speech and audio signals. All simulations can be visually established by forming interactive block diagrams through multi-touch and drag-and-drop. Computations are performed on the mobile device when necessary, making block diagram execution fast. Furthermore, the extensive support for user interactivity provides scope for improved learning.
The results of an i-JDSP assessment among senior undergraduate and first-year graduate students revealed that the software created a significant positive impact and increased the students' interest and motivation in understanding basic DSP concepts.
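The pole-zero placement concept demonstrated by i-JDSP can be sketched numerically: a filter's magnitude response follows directly from its pole and zero locations in the z-plane. The Python sketch below illustrates only the underlying mathematics (i-JDSP itself is written in Objective-C/C; the example pole and zero values are assumptions, not taken from the thesis):

```python
import numpy as np

def freq_response(zeros, poles, n_points=512):
    """Evaluate H(e^jw) = prod(e^jw - z_k) / prod(e^jw - p_k) on [0, pi)."""
    w = np.linspace(0.0, np.pi, n_points, endpoint=False)
    ejw = np.exp(1j * w)
    num = np.ones(n_points, dtype=complex)
    den = np.ones(n_points, dtype=complex)
    for z in zeros:
        num *= ejw - z
    for p in poles:
        den *= ejw - p
    return w, num / den

# Example: a zero at z = -1 nulls the response near w = pi, and a pole at
# z = 0.9 peaks it near w = 0, producing a lowpass shape.
w, H = freq_response(zeros=[-1.0], poles=[0.9])
mag = np.abs(H)
```

Dragging a pole toward the unit circle in such a module sharpens the corresponding peak, which is exactly the interactive behavior the Pole-Zero Placement demonstration exposes.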


Date Created
2011


A theoretical analysis of microchannel flow boiling enhancement via cross-sectional expansion

Description


Microchannel heat sinks can possess heat transfer characteristics unavailable in conventional heat exchangers; such sinks offer compact solutions to otherwise intractable thermal management problems, notably in small-scale electronics cooling. Flow boiling in microchannels allows a very high heat transfer rate, but is bounded by the critical heat flux (CHF). This thesis presents a theoretical-numerical study of a method to improve the heat rejection capability of a microchannel heat sink via expansion of the channel cross-section along the flow direction. The thermodynamic quality of the refrigerant increases during flow boiling, decreasing the density of the bulk coolant as it flows. This may effect pressure fluctuations in the channels, leading to nonuniform heat transfer and local dryout in regions exceeding CHF. This undesirable phenomenon is counteracted by permitting the cross-section of the microchannel to increase along the direction of flow, allowing more volume for the vapor. Governing equations are derived from a control-volume analysis of a single heated rectangular microchannel; the cross-section is allowed to expand in width and height. The resulting differential equations are solved numerically for a variety of channel expansion profiles and numbers of channels. The refrigerant is R-134a and channel parameters are based on a physical test bed in a related experiment. Significant improvement in CHF is possible with moderate area expansion. Minimal additional manufacturing costs could yield major gains in the utility of microchannel heat sinks. An optimum expansion rate occurred in certain cases, and alterations in the channel width are, in general, more effective at improving CHF than alterations in the channel height. Modest expansion in height enables small width expansions to be very effective.
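The control-volume idea described above can be sketched in one dimension: for a fixed wall heat flux and a cross-section A(x) that expands along the flow, an energy balance gives the enthalpy rise, the thermodynamic quality follows as (h - h_f)/h_fg, and the mass flux G = m_dot/A drops as the channel widens, leaving more room for vapor. All fluid properties and geometry below are illustrative placeholders, not the thesis's R-134a test-bed values:

```python
import numpy as np

# Illustrative constants -- placeholders, NOT the thesis's test-bed values
h_f, h_fg = 250e3, 170e3       # saturated-liquid enthalpy, latent heat [J/kg]
rho_f, rho_g = 1200.0, 30.0    # saturated liquid / vapor densities [kg/m^3]
m_dot = 1e-4                   # mass flow rate per channel [kg/s]
q_pp = 2e5                     # wall heat flux [W/m^2]
L = 0.02                       # channel length [m]
w0, h0 = 200e-6, 200e-6        # inlet width and height [m]
expand = 2.0                   # width grows linearly to expand*w0 at the outlet

n = 200
x = np.linspace(0.0, L, n)
width = w0 * (1.0 + (expand - 1.0) * x / L)
area = width * h0                      # expanding cross-section A(x)
perim = 2.0 * (width + h0)             # heated perimeter (all four walls)

# Energy balance on each control volume: dh = q'' * P(x) dx / m_dot
dh = q_pp * perim[:-1] * np.diff(x) / m_dot
h = h_f + np.concatenate(([0.0], np.cumsum(dh)))
quality = np.clip((h - h_f) / h_fg, 0.0, 1.0)   # thermodynamic quality

# Homogeneous two-phase model: mixture density and mass flux
rho = 1.0 / (quality / rho_g + (1.0 - quality) / rho_f)
G = m_dot / area               # mass flux drops as the channel widens
```

In this toy march the quality grows monotonically downstream while the mass flux falls with the expanding area, which is the qualitative mechanism the thesis exploits to delay CHF.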


Date Created
2011


Incorporating auditory models in speech/audio applications

Description


Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high-complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency pruning and detector pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns.
The second problem involves obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
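The frequency-pruning idea can be illustrated in miniature: skip auditory bands whose excitation falls below the hearing threshold before applying the expensive per-band loudness stage, then compare the pruned and full results. The power-law compression, threshold value and synthetic excitation pattern below are generic psychoacoustic stand-ins, not the dissertation's actual model stages:

```python
import numpy as np

E_THR = 1.0  # hearing-threshold excitation (arbitrary units; an assumption)

def specific_loudness(E):
    # Compressive power law above threshold, zero below (generic stand-in)
    return np.maximum(E ** 0.23 - E_THR ** 0.23, 0.0)

def total_loudness(E, prune=False):
    """Sum per-band specific loudness; optionally prune weak bands first."""
    if prune:
        E = E[E > E_THR]          # frequency pruning: skip sub-threshold bands
    return specific_loudness(E).sum(), E.size

rng = np.random.default_rng(0)
# Synthetic excitation pattern: 8 audible bands and 92 sub-threshold ones
E = np.concatenate([rng.uniform(1e4, 1e6, 8), rng.uniform(0.0, 1.0, 92)])

full, n_full = total_loudness(E)
pruned, n_pruned = total_loudness(E, prune=True)
rel_err = abs(full - pruned) / full
```

In this idealized case the sub-threshold bands contribute nothing, so pruning them is lossless while cutting the per-band work by over 90%; the dissertation's pruning is approximate, with the reported 4-7% relative loudness error.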


Date Created
2011


Separating and detecting Escherichia coli in a microfluidic channel for urinary tract infection (UTI) applications

Description


In this thesis, I present a lab-on-a-chip (LOC) that can separate and detect Escherichia coli (E. coli) in simulated urine samples for urinary tract infection (UTI) diagnosis. The LOC consists of two chambers (concentration and sensing) connected in series and an integrated impedance detector. The two-chamber approach is designed to reduce the non-specific adsorption of proteins, e.g. albumin, that potentially co-exist with E. coli in urine. I directly separate E. coli K-12 from a urine cocktail in a concentration chamber containing micro-sized magnetic beads (5 µm in diameter) conjugated with anti-E. coli antibodies. The immobilized E. coli are transferred to a sensing chamber for the impedance measurement. The measurement at the concentration chamber suffers from non-specific adsorption of albumin on the gold electrode, which may lead to a false positive response. By contrast, the measured impedance at the sensing chamber shows a ~60 kΩ impedance change between 6.4×10⁴ and 6.4×10⁵ CFU/mL, covering the threshold of UTI (10⁵ CFU/mL). The sensitivity of the LOC for detecting E. coli is characterized to be at least 3.4×10⁴ CFU/mL. I also characterized the LOC for different age groups and for white-blood-cell-spiked samples. These preliminary data show promising potential for application in portable LOC devices for UTI detection.
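Turning a measured impedance change into a UTI call can be sketched with a log-linear calibration: fit impedance change against log10 of concentration between two anchor points, invert it, and threshold at 10⁵ CFU/mL. The anchor values below are only loosely based on the reported ~60 kΩ change over the 6.4×10⁴ to 6.4×10⁵ CFU/mL range; the baseline impedance is an assumption, not fitted thesis data:

```python
import math

# Assumed calibration anchors (illustrative): concentration [CFU/mL],
# impedance change [ohms]; z2 - z1 matches the reported ~60 kOhm span.
c1, z1 = 6.4e4, 10e3
c2, z2 = 6.4e5, 70e3

slope = (z2 - z1) / (math.log10(c2) - math.log10(c1))  # ohms per decade

def concentration_from_dz(dz):
    """Invert the log-linear calibration: impedance change -> CFU/mL."""
    return 10 ** (math.log10(c1) + (dz - z1) / slope)

def uti_positive(dz, threshold_cfu=1e5):
    """Flag a sample when the inferred concentration crosses the UTI threshold."""
    return concentration_from_dz(dz) >= threshold_cfu

est = concentration_from_dz(40e3)   # a mid-range reading
```

A reading halfway up the calibrated span maps to roughly 2×10⁵ CFU/mL here, above the 10⁵ CFU/mL clinical threshold, while a reading near the lower anchor does not.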


Date Created
2011


Stereo based visual odometry

Description


The exponential rise in unmanned aerial vehicles has necessitated accurate pose estimation under extreme conditions. Visual odometry (VO) is the estimation of the position and orientation of a vehicle based on analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera-based VO, called stereo-based visual odometry (SVO), in the presence of various deterrent factors such as shadows, extremely bright outdoor conditions and wet conditions. To allow the implementation of VO on any generic vehicle, a discussion of porting the VO algorithm to Android handsets is also presented. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed: the sum-of-absolute-differences technique is used for stereo matching on rectified and pre-filtered stereo frames, with epipolar geometry used to simplify the matching problem. The second step involves feature detection and temporal matching: features are detected by the Harris corner detector and matched between two consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3-D coordinates of the matched features are computed from the disparity map obtained in the first step and are mapped into each other by a translation and a rotation. The rotation and translation are computed using least-squares minimization with the aid of singular value decomposition (SVD), and random sample consensus (RANSAC) is used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, which is the difference between the final position computed by the SVO algorithm and the final ground-truth position obtained from GPS.
The SVO showed an error of around 1% under normal conditions for a path length of 60 m and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% over path lengths of 20 m and 100 m respectively.
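The rotation-and-translation step described above has a standard closed form: given matched 3-D point sets P and Q from consecutive frames, the least-squares rotation comes from the SVD of their cross-covariance (the Kabsch solution), and the translation follows from the centroids. The sketch below demonstrates that step on synthetic data; it is an illustration of the technique, not the thesis's implementation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~= R @ P + t, via SVD (Kabsch).

    P, Q are 3xN arrays of matched 3-D points.
    """
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T              # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate and translate a point cloud, then recover the motion
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 20))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform(P, Q)
```

In the full pipeline this solver would run inside the RANSAC loop, fitting R, t to random minimal subsets and keeping the hypothesis with the most inliers.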


Date Created
2010


Cost-effective integrated wireless monitoring of wafer cleanliness using SOI technology

Description


The thesis focuses on cost-efficient integration of the electro-chemical residue sensor (ECRS), a novel sensor developed for the in situ, real-time measurement of the residual impurities left on the wafer surface and in the fine structures of patterned wafers during typical rinse processes, with wireless transponder circuitry based on RFID technology. The proposed technology uses only NMOS FD-SOI transistors with amorphous silicon as the active material and silicon nitride as the gate dielectric. The proposed transistor was simulated in the SILVACO ATLAS simulation framework. A parametric study was performed to study the impact of different gate lengths (6 μm to 56 μm), electron mobilities (0.1 cm²/V·s to 1 cm²/V·s), gate dielectrics (SiO2 and SiNx) and active materials (a-Si and poly-Si). Level-1 models, accurate enough to give insight into circuit behavior and support preliminary design, were successfully constructed by analyzing drain-current and gate-to-node capacitance characteristics against drain-to-source and gate-to-source voltages. Using the model corresponding to a SiNx gate dielectric and a-Si:H active material with electron mobility equal to 0.4 cm²/V·s, an operational amplifier was designed and tested in a unity-gain configuration at modest load-frequency specifications.
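A Level-1 model of the kind referenced above is the standard square-law MOSFET description, with cutoff, triode and saturation regions. The sketch below uses the 0.4 cm²/V·s mobility from the text, but the threshold voltage, oxide capacitance and W/L ratio are illustrative placeholders, not parameters extracted in the thesis:

```python
def level1_id(vgs, vds, vt=1.0, k_prime=9.2e-9, w_over_l=5.0):
    """Level-1 (square-law) NMOS drain current in amperes.

    k_prime = mu * Cox; the default assumes mu = 0.4 cm^2/(V*s) and an
    illustrative Cox = 2.3e-8 F/cm^2. vt and w_over_l are also assumptions.
    """
    beta = k_prime * w_over_l
    vov = vgs - vt                     # overdrive voltage
    if vov <= 0:
        return 0.0                                  # cutoff
    if vds < vov:
        return beta * (vov * vds - vds ** 2 / 2.0)  # triode region
    return 0.5 * beta * vov ** 2                    # saturation region

i_triode = level1_id(5.0, 1.0)
i_sat = level1_id(5.0, 10.0)
```

With these stand-in parameters the saturation current is a few hundred nanoamperes, the sub-microampere scale typical of low-mobility a-Si devices; sweeping vgs and vds with this function reproduces the drain-current characteristics against which such a model would be fitted.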


Date Created
2010


Portfolio modeling, analysis and management

Description


A systematic top-down approach to minimize risk and maximize the profits of an investment over a given period of time is proposed. Macroeconomic factors such as Gross Domestic Product (GDP), the Consumer Price Index (CPI), outstanding consumer credit, the Industrial Production Index, money supply (MS), the unemployment rate, and the ten-year Treasury rate are used to predict/estimate asset (sector ETF) returns. Fundamental ratios of individual stocks are used to predict the stock returns. An a priori known cash-flow sequence is assumed available for investment. Given the importance of sector performance on stock performance, sector-based exchange-traded funds (ETFs) for the S&P and Dow Jones are considered and wealth is allocated. Mean-variance optimization with risk and return constraints is used to distribute the wealth in individual sectors among the selected stocks. The results presented should be viewed as providing an outer control/decision loop generating sector target allocations that ultimately drive an inner control/decision loop focusing on stock selection. Receding horizon control (RHC) ideas are exploited to pose and solve two relevant constrained optimization problems. First, the classic problem of wealth maximization subject to risk constraints (as measured by a metric on the covariance matrices) is considered. Special consideration is given to an optimization problem that attempts to minimize the peak risk over the prediction horizon while trying to track a wealth objective. It is concluded that this approach may be particularly beneficial during downturns, appreciably limiting the downside while providing most of the upside during upturns. Investment in stocks during upturns and in sector ETFs during downturns is profitable.
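One step of the mean-variance allocation described above can be sketched with the standard closed form: minimize portfolio variance w'Σw subject to a target return and full investment, solved via the two-constraint Lagrangian. The expected returns and covariance below are made up for illustration; unlike a production allocator, this sketch allows shorting (no box constraints):

```python
import numpy as np

def min_variance_weights(mu, sigma, target_return):
    """Closed-form mean-variance weights: min w'Sw s.t. mu'w = r, 1'w = 1."""
    ones = np.ones_like(mu)
    inv = np.linalg.inv(sigma)
    # Scalars of the classic efficient-frontier solution
    A = mu @ inv @ mu
    B = mu @ inv @ ones
    C = ones @ inv @ ones
    # Solve the 2x2 system for the Lagrange multipliers
    lam, gam = np.linalg.solve(np.array([[A, B], [B, C]]),
                               np.array([target_return, 1.0]))
    return inv @ (lam * mu + gam * ones)

# Illustrative 3-asset example (made-up expected returns and covariance)
mu = np.array([0.08, 0.12, 0.05])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.02]])
w = min_variance_weights(mu, sigma, target_return=0.09)
```

In the receding-horizon setting this solve would be repeated each period with updated return forecasts, applying only the first allocation before re-optimizing.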


Date Created
2010


Development of models for optical instrument transformers

Description


Optical instrument transformers (OITs) have been developed as an alternative to traditional instrument transformers (ITs). The question "Can optical instrument transformers substitute for the traditional transformers?" is the main motivation of this study. Finding the answer to this question and developing complete models are the contributions of this work. Dedicated test facilities are developed so that the steady-state and transient performance of the analog outputs of a magnetic current transformer (CT) and a magnetic voltage transformer (VT) can be compared with that of an optical current transformer (OCT) and an optical voltage transformer (OVT), respectively. Frequency response characteristics of the OIT outputs are obtained. Comparison results show that the OITs meet their specified accuracy of 0.3% in all cases. They are linear, and DC offset does not saturate the systems. The OIT output signal has a 40-60 μs time delay, but this is typically less than the equivalent phase difference permitted by the IEEE and IEC standards for protection applications. The analog outputs have significantly higher bandwidths (adjustable from 20 to 40 kHz) than the ITs. The digital output signal bandwidth (2.4 kHz) of an OCT is significantly lower than the analog signal bandwidth (20 kHz) due to the sampling rates involved. The OIT analog outputs may have significant white noise, around 6%, but this noise does not affect accuracy or protection performance. Temperatures up to 50 °C do not adversely affect the performance of the OITs. Three types of models are developed for the analog outputs: analog, digital, and complete models. Well-known mathematical methods, such as network synthesis and the Jones calculus, are applied. The developed models are compared with experimental results and are verified with simulation programs.
Results show differences of less than 1.5% for the OCT and 2% for the OVT, indicating that the developed models can be used for power system simulations and that the method used for their development can be applied to other brands of optical systems. The communication and data transfer between all-digital protection systems is investigated by developing a test facility for such systems. Test results show that different manufacturers' relays and transformers based on the IEC standard can serve the power system successfully.
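The claim that a 40-60 μs delay is acceptable for protection can be checked with a one-line conversion: a pure time delay of Δt at power frequency f corresponds to a phase displacement of 360°·f·Δt. At 60 Hz (the frequency assumed here; the thesis does not state it in this abstract), the worst-case delay is about 1.3 degrees:

```python
def delay_to_phase_deg(delay_s, f_hz=60.0):
    """Phase displacement in degrees caused by a pure time delay at f_hz."""
    return 360.0 * f_hz * delay_s

best = delay_to_phase_deg(40e-6)    # lower end of the measured OIT delay
worst = delay_to_phase_deg(60e-6)   # upper end of the measured OIT delay
```

A displacement on the order of one degree is small relative to the phase-error allowances that protection-class accuracy specifications typically grant, consistent with the conclusion above.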


Date Created
2010


Generalized statistical tolerance analysis and three dimensional model for manufacturing tolerance transfer in manufacturing process planning

Description


Manufacturing tolerance charts are mostly used these days for manufacturing tolerance transfer, but they are limited to one dimension. Some research has been undertaken on three-dimensional geometric tolerances, but it is too theoretical and not yet ready for operator-level usage. In this research, a new three-dimensional model for tolerance transfer in manufacturing process planning is presented that is user friendly in the sense that it is built upon the Coordinate Measuring Machine (CMM) readings that are readily available in any decent manufacturing facility. This model can handle datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based upon ACIS, C++ and MFC is laid out to facilitate implementation and automation of the model. A totally new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Secondly, a new statistical model for statistical tolerance analysis, based upon the joint probability distribution of trivariate normally distributed variables, is presented. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size and color of the marker. Points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerances is highlighted by comparing the resulting pass percentage with the pass percentage for size tolerance only. Delaunay triangulation and ray-tracing algorithms have been used to automate the process of identifying the points inside and outside the part map. Proof-of-concept software has been implemented to demonstrate this model and to determine pass percentages for various cases.
The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. A map generated using Minkowski sum techniques on the individual part maps is superimposed on the probability point cloud resulting from the convolution. Delaunay triangulation and ray-tracing algorithms are then employed to determine the assembleability percentage for the assembly.
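The pass-percentage idea can be sketched with a Monte Carlo stand-in for the Delaunay/ray-tracing machinery: sample the trivariate normal distribution of deviations and count the fraction of samples inside the tolerance zone. A simple rectangular zone is assumed here, whereas the thesis's part map is a more complex region; the covariance and tolerance values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed trivariate normal for three coupled deviations (illustrative values)
mean = np.zeros(3)
cov = np.array([[0.010, 0.002, 0.000],
                [0.002, 0.015, 0.003],
                [0.000, 0.003, 0.020]])   # mm^2

# Symmetric tolerance half-widths for the three deviations [mm] (assumed)
tol = np.array([0.25, 0.30, 0.35])

samples = rng.multivariate_normal(mean, cov, size=100_000)
inside = np.all(np.abs(samples) <= tol, axis=1)   # point-in-zone test
pass_pct = 100.0 * inside.mean()                  # estimated pass percentage
```

Because the deviations are correlated, the joint pass percentage is lower than the product of three independent per-axis estimates would naively suggest, which is exactly why the thesis works with the joint trivariate distribution rather than three marginals.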


Date Created
2011


Performance of single layer H.264 SVC video over error prone networks

Description


With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To be able to satisfy a broad range of customers' requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second is that wireless transmission media typically introduce errors in the bit stream due to noise, congestion and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In the scalable bit stream, some parts are more important than others. In the unequal error protection scheme, parity bytes are assigned to the video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264 SVC single-layer video streams for long video sequences of different genres are considered in this study, which serves as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams, so a framework to obtain an H.264 SVC compatible bit stream is developed in this study.
It is concluded that assigning parity bytes based on the distribution of data across the different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
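The contrast between the two schemes can be sketched as a budgeting problem: split a fixed parity-byte budget across packet classes either in proportion to importance weights (unequal protection) or in proportion to length (equal protection). The packet classes, sizes and importance weights below are illustrative assumptions, not figures from the study:

```python
# Illustrative packet classes for a scalable stream (sizes/weights assumed)
packets = [
    {"name": "parameter sets", "bytes": 200,   "importance": 5.0},
    {"name": "base layer",     "bytes": 8000,  "importance": 3.0},
    {"name": "enhancement",    "bytes": 20000, "importance": 1.0},
]
PARITY_BUDGET = 2000  # total parity bytes available (assumed)

def allocate(packets, budget, key):
    """Split a parity-byte budget proportionally to `key`.

    Floor division leaves a few bytes over; they go to packets[0],
    the most important class in this ordering.
    """
    total = sum(p[key] for p in packets)
    alloc = [int(budget * p[key] / total) for p in packets]
    alloc[0] += budget - sum(alloc)
    return alloc

uep = allocate(packets, PARITY_BUDGET, "importance")  # unequal protection
eep = allocate(packets, PARITY_BUDGET, "bytes")       # equal (length-based)
```

Both schemes spend the same total parity, but the unequal scheme concentrates it on the small, critical packets whose loss would make the rest of the stream undecodable, which is the intuition behind the conclusion above.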


Date Created
2011