Interactive laboratory for digital signal processing in iOS devices

Description

The demand for handheld portable computing in education, business and research has resulted in advanced mobile devices with powerful processors and large multi-touch screens. Such devices are capable of handling tasks of moderate computational complexity such as word processing, complex Internet transactions, and even human motion analysis. Apple's iOS devices, including the iPhone, iPod touch and the latest in the family, the iPad, are among the best-known and most widely used mobile devices today. Their advanced multi-touch interface and improved processing power can be exploited for engineering and STEM demonstrations. Moreover, these devices have become a part of everyday student life. Hence, the design of engaging mobile applications and software represents a great opportunity to build student interest and enthusiasm in science and engineering. This thesis presents the design and implementation of portable interactive signal processing simulation software on the iOS platform. The iOS-based object-oriented application, called i-JDSP, is based on the award-winning Java-DSP concept. It is implemented in Objective-C and C as a native Cocoa Touch application that can run on any iOS device. i-JDSP offers basic signal processing simulation functions such as the Fast Fourier Transform, filtering, and spectral analysis on a compact and convenient graphical user interface, and provides a compelling multi-touch programming experience. Built-in modules also demonstrate concepts such as pole-zero placement. i-JDSP also incorporates sound capture and playback options that can be used for near-real-time analysis of speech and audio signals. All simulations can be built visually by forming interactive block diagrams through multi-touch and drag-and-drop. Computations are performed on the mobile device when necessary, making block diagram execution fast. Furthermore, the extensive support for user interactivity provides scope for improved learning.
Assessment of i-JDSP among senior undergraduate and first-year graduate students revealed that the software created a significant positive impact and increased the students' interest and motivation in understanding basic DSP concepts.
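The pole-zero placement module described above turns pole and zero locations into a filter's magnitude response. The core computation can be sketched as follows; this is a minimal Python/NumPy illustration, not i-JDSP's Objective-C code, and the function name and frequency grid are invented for the sketch:

```python
import numpy as np

def freq_response(zeros, poles, n_points=8):
    """Evaluate H(z) = prod(z - z_k) / prod(z - p_k) on the upper unit circle."""
    w = np.linspace(0.0, np.pi, n_points)   # digital frequencies 0..pi
    z = np.exp(1j * w)
    num = np.ones_like(z)
    for zk in zeros:
        num = num * (z - zk)
    den = np.ones_like(z)
    for pk in poles:
        den = den * (z - pk)
    return w, np.abs(num / den)

# A zero at z = -1 nulls the response at w = pi; the pole at 0.5 boosts low frequencies.
w, mag = freq_response(zeros=[-1.0], poles=[0.5])
```

Dragging a pole or zero in the app corresponds to re-evaluating exactly this kind of expression and redrawing the curve.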

Date Created
2011

Robust margin based classifiers for small sample data

Description

In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer vs. normal patients, the consequences of misclassification are more serious than for most other data types, because each data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically more frequent in the presence of outlier data points. The aim of this thesis is to develop a maximum margin classifier that addresses the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic and experimental data from biomedical studies and is competitive with a classifier derived along similar lines when real-life data examples are considered.
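The thesis's exact robust loss is not reproduced here, but a standard way to bound outlier influence is to truncate the SVM hinge loss into a ramp loss. This sketch (hypothetical names, Python) contrasts the two on a badly misclassified point:

```python
import numpy as np

def hinge_loss(margin):
    """Standard SVM hinge loss: grows without bound, so one outlier can dominate."""
    return np.maximum(0.0, 1.0 - margin)

def ramp_loss(margin, s=-1.0):
    """Truncated (ramp) hinge loss: capped at 1 - s, which limits outlier influence."""
    return np.minimum(hinge_loss(margin), 1.0 - s)

margins = np.array([2.0, 0.5, -10.0])  # last point: badly misclassified outlier
h = hinge_loss(margins)                # outlier contributes 11.0 to the objective
r = ramp_loss(margins)                 # outlier contribution capped at 2.0
```

Because the capped loss is bounded, a single far-off outlier cannot drag the decision boundary arbitrarily, which is the robustness property the abstract describes.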

Date Created
2011

Incorporating auditory models in speech/audio applications

Description

Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns.
The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of the auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages; it ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
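The frequency pruning idea, skipping auditory model stages for spectral components too weak to matter, can be illustrated with a toy rule; the threshold value and function names below are assumptions for illustration, not the dissertation's actual pruning criterion:

```python
import numpy as np

def prune_mask(power_db, floor_db=-40.0):
    """Keep only components within floor_db of the spectral peak, so the costly
    downstream excitation/loudness stages run on far fewer components."""
    return power_db >= power_db.max() + floor_db

spectrum_db = np.array([0.0, -10.0, -55.0, -20.0, -70.0])  # toy spectrum, dB re peak
mask = prune_mask(spectrum_db)
savings = 1.0 - mask.mean()  # fraction of per-component evaluations avoided
```

The trade-off mirrors the one reported above: the discarded components introduce a small loudness error while most of the per-component computation is skipped.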

Date Created
2011

Separating and detecting Escherichia Coli in a microfluidic channel for urinary tract infection (UTI) applications

Description

In this thesis, I present a lab-on-a-chip (LOC) that can separate and detect Escherichia Coli (E. coli) in simulated urine samples for Urinary Tract Infection (UTI) diagnosis. The LOC consists of two (concentration and sensing) chambers connected in series and an integrated impedance detector. The two-chamber approach is designed to reduce the non-specific absorption of proteins, e.g. albumin, that potentially co-exist with E. coli in urine. I directly separate E. coli K-12 from a urine cocktail in a concentration chamber containing micro-sized magnetic beads (5 µm in diameter) conjugated with anti-E. coli antibodies. The immobilized E. coli are transferred to a sensing chamber for the impedance measurement. The measurement at the concentration chamber suffers from non-specific absorption of albumin on the gold electrode, which may lead to a false positive response. By contrast, the measured impedance at the sensing chamber shows a ~60 kΩ impedance change between 6.4×10^4 and 6.4×10^5 CFU/mL, covering the threshold of UTI (10^5 CFU/mL). The sensitivity of the LOC for detecting E. coli is characterized to be at least 3.4×10^4 CFU/mL. I also characterized the LOC for different age groups and white blood cell spiked samples. These preliminary data show promising potential for application in portable LOC devices for UTI detection.
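Mapping a measured impedance change to a concentration estimate and a UTI decision amounts to interpolating on a calibration curve. A sketch with hypothetical calibration numbers, chosen only to bracket the concentrations quoted above, not the thesis's measured data:

```python
import numpy as np

# Two hypothetical calibration points (impedance change in kOhm vs. log10 CFU/mL).
cal_dz_kohm = np.array([10.0, 70.0])
cal_log_cfu = np.array([4.8, 5.8])     # log10 of roughly 6.4e4 and 6.4e5 CFU/mL

def estimate_cfu(delta_z_kohm):
    """Interpolate the impedance change linearly on the log-concentration axis."""
    return 10.0 ** np.interp(delta_z_kohm, cal_dz_kohm, cal_log_cfu)

def uti_positive(delta_z_kohm, threshold_cfu=1e5):
    """Flag a sample whose estimated concentration crosses the UTI threshold."""
    return estimate_cfu(delta_z_kohm) >= threshold_cfu
```

Interpolating on the logarithmic axis reflects that bacterial concentrations span orders of magnitude while the clinically relevant threshold (10^5 CFU/mL) sits between the calibration points.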

Date Created
2011

Detecting sybil nodes in static and dynamic networks

Description

Peer-to-peer systems are known to be vulnerable to the Sybil attack. The lack of a central authority allows a malicious user to create many fake identities (called Sybil nodes) pretending to be independent honest nodes. The goal of the malicious user is to influence the system on his/her behalf. In order to detect the Sybil nodes and prevent the attack, a reputation system is built for the nodes by observing their interactions with their peers. The construction makes every node part of a distributed authority that keeps records on the reputation and behavior of the nodes. Records of interactions between nodes are broadcast by the interacting nodes, and honest reporting proves to be a Nash Equilibrium for correct (non-Sybil) nodes. In this research it is argued that in realistic communication schedule scenarios, simple graph-theoretic queries, such as the computation of Strongly Connected Components and Densest Subgraphs, help in exposing those nodes most likely to be Sybil, which are then proved to be Sybil or not through a direct test executed by some peers.
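One of the graph-theoretic queries named above, Strongly Connected Components, can be computed with Tarjan's algorithm. A compact Python sketch over a toy report graph (the graph and node names are illustrative only):

```python
from collections import defaultdict

def strongly_connected_components(edges):
    """Tarjan's algorithm. Identities that vouch for each other in a cycle,
    as a Sybil cluster tends to, land in one large component."""
    graph = defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        nodes.update((u, v))
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in nodes:
        if v not in index:
            visit(v)
    return sccs

# Three mutually vouching identities form one 3-node component; "d" stands alone.
sccs = strongly_connected_components([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
```

The components flagged this way are only candidates; as the abstract notes, a direct test by peers then confirms or clears each suspect node.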

Date Created
2010

Stereo based visual odometry

Description

The exponential rise in unmanned aerial vehicles has created a need for accurate pose estimation under extreme conditions. Visual Odometry (VO) is the estimation of the position and orientation of a vehicle from analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques like wheel odometry, inertial measurement systems and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO, called stereo-based visual odometry (SVO), in the presence of various deterrent factors like shadows, extremely bright outdoor scenes, and wet conditions. To allow the implementation of VO on any generic vehicle, a discussion of porting the VO algorithm to Android handsets is also presented. The SVO is implemented in three steps. In the first step, a dense disparity map for a scene is computed. To achieve this, the sum of absolute differences technique is used for stereo matching on rectified and pre-filtered stereo frames; epipolar geometry is used to simplify the matching problem. The second step involves feature detection and temporal matching. Feature detection is carried out with the Harris corner detector, and these features are matched between two consecutive frames using the Lucas-Kanade feature tracker. The 3-D coordinates of the matched features are computed from the disparity map obtained in the first step and are mapped onto each other by a translation and a rotation. The rotation and translation are computed by least squares minimization with the aid of Singular Value Decomposition, with Random Sample Consensus (RANSAC) used for outlier detection. This constitutes the third step. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the final ground truth position obtained from GPS.
The SVO showed an error of around 1% under normal conditions over a path length of 60 m and around 3% in bright conditions over a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% over path lengths of 20 m and 100 m respectively.
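The rotation-and-translation step described above, least squares minimization via Singular Value Decomposition, is the classic Kabsch/Procrustes solution. A minimal NumPy sketch on synthetic points (no RANSAC loop, which in the full pipeline would wrap this estimator):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t,
    via SVD of the cross-covariance of the centered point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# 3-D points rotated 90 degrees about z and shifted along x: recover the motion.
src = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([2.0, 0, 0])
R, t = rigid_transform(src, dst)
```

Chaining the per-frame (R, t) estimates over the image sequence yields the vehicle trajectory whose endpoint is compared against GPS ground truth.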

Date Created
2010

Cost-effective integrated wireless monitoring of wafer cleanliness using SOI technology

Description

The thesis focuses on cost-efficient integration of the electro-chemical residue sensor (ECRS), a novel sensor developed for the in situ and real-time measurement of the residual impurities left on the wafer surface and in the fine structures of patterned wafers during typical rinse processes, and wireless transponder circuitry based on RFID technology. The proposed technology uses only NMOS FD-SOI transistors with amorphous silicon as the active material and silicon nitride as the gate dielectric. The proposed transistor was simulated in the SILVACO ATLAS Simulation Framework. A parametric study was performed to examine the impact of different gate lengths (6 μm to 56 μm), electron mobilities (0.1 cm^2/V·s to 1 cm^2/V·s), gate dielectrics (SiO2 and SiNx) and active materials (a-Si and poly-Si). Level-1 models, accurate enough to give insight into circuit behavior and support preliminary design, were successfully constructed by analyzing drain current and gate-to-node capacitance characteristics against drain-to-source and gate-to-source voltages. Using the model corresponding to SiNx as the gate dielectric and a-Si:H as the active material with electron mobility equal to 0.4 cm^2/V·s, an operational amplifier was designed and tested in a unity-gain configuration at modest load-frequency specifications.
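The Level-1 model referred to above is the SPICE square-law MOSFET abstraction. A sketch of its drain-current equations; the parameter values here are placeholders for illustration, not the extracted a-Si:H values:

```python
def level1_drain_current(vgs, vds, vth=1.0, kp=2e-6, w_over_l=10.0):
    """SPICE Level-1 (square-law) NMOS drain current in amperes.
    kp is the transconductance parameter, w_over_l the width-to-length ratio."""
    beta = kp * w_over_l
    if vgs <= vth:
        return 0.0                                   # cutoff
    vov = vgs - vth                                  # overdrive voltage
    if vds < vov:
        return beta * (vov - vds / 2.0) * vds        # triode (linear) region
    return 0.5 * beta * vov ** 2                     # saturation region

i_sat = level1_drain_current(vgs=3.0, vds=5.0)       # deep saturation example
```

Fitting vth and beta to the simulated I-V curves is exactly the kind of Level-1 construction the abstract describes: crude but sufficient for first-pass amplifier design.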

Date Created
2010

Portfolio modeling, analysis and management

Description

A systematic top-down approach to minimize risk and maximize the profit of an investment over a given period of time is proposed. Macroeconomic factors such as Gross Domestic Product (GDP), the Consumer Price Index (CPI), Outstanding Consumer Credit, the Industrial Production Index, Money Supply (MS), the Unemployment Rate, and the Ten-Year Treasury rate are used to predict/estimate asset (sector ETF) returns. Fundamental ratios of individual stocks are used to predict the stock returns. An a priori known cash-flow sequence is assumed available for investment. Given the importance of sector performance on stock performance, sector-based Exchange Traded Funds (ETFs) for the S&P and Dow Jones indices are considered and wealth is allocated. Mean-variance optimization with risk and return constraints is used to distribute the wealth in individual sectors among the selected stocks. The results presented should be viewed as providing an outer control/decision loop generating sector target allocations that ultimately drive an inner control/decision loop focusing on stock selection. Receding horizon control (RHC) ideas are exploited to pose and solve two relevant constrained optimization problems. First, the classic problem of wealth maximization subject to risk constraints (as measured by a metric on the covariance matrices) is considered. Special consideration is given to an optimization problem that attempts to minimize the peak risk over the prediction horizon while trying to track a wealth objective. It is concluded that this approach may be particularly beneficial during downturns, appreciably limiting downside while providing most of the upside during upturns. Investment in stocks during upturns and in sector ETFs during downturns is profitable.
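The mean-variance allocation step can be sketched in its simplest unconstrained form, weights proportional to Σ⁻¹μ and renormalized to sum to one. The thesis adds risk/return constraints and a receding horizon on top of this; the expected returns and covariance below are illustrative numbers only:

```python
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion=1.0):
    """Unconstrained mean-variance weights w ∝ (1/λ) Σ⁻¹ μ, renormalized
    so the allocations sum to one (a sketch without inequality constraints)."""
    raw = np.linalg.solve(cov, mu) / risk_aversion
    return raw / raw.sum()

mu = np.array([0.08, 0.05])                   # expected sector returns
cov = np.array([[0.04, 0.01], [0.01, 0.02]])  # return covariance matrix
w = mean_variance_weights(mu, cov)
```

Note how the lower-variance asset receives the larger weight despite its smaller expected return; the constrained, horizon-based versions in the work trade these terms off explicitly.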

Date Created
2010

Development of models for optical instrument transformers

Description

Optical Instrument Transformers (OITs) have been developed as an alternative to traditional instrument transformers (ITs). The question "Can optical instrument transformers substitute for the traditional transformers?" is the main motivation of this study. Answering this question and developing complete models are the contributions of this work. Dedicated test facilities are developed so that the steady state and transient performance of the analog outputs of a magnetic current transformer (CT) and a magnetic voltage transformer (VT) can be compared with those of an optical current transformer (OCT) and an optical voltage transformer (OVT), respectively. Frequency response characteristics of the OIT outputs are obtained. Comparison results show that the OITs meet a specified accuracy of 0.3% in all cases. They are linear, and DC offset does not saturate the systems. The OIT output signal has a 40-60 μs time delay, but this is typically less than the equivalent phase difference permitted by the IEEE and IEC standards for protection applications. The analog outputs have significantly higher bandwidths (adjustable from 20 to 40 kHz) than the ITs. The digital output signal bandwidth (2.4 kHz) of an OCT is significantly lower than the analog signal bandwidth (20 kHz) due to the sampling rates involved. The OIT analog outputs may have significant white noise, up to 6%, but this noise does not affect accuracy or protection performance. Temperatures up to 50 °C do not adversely affect the performance of the OITs. Three types of models are developed for the analog outputs: analog, digital, and complete models. Well-known mathematical methods, such as network synthesis and Jones calculus, are applied. The developed models are compared with experimental results and are verified with simulation programs.
Results show differences of less than 1.5% for the OCT and 2% for the OVT, indicating that the developed models can be used for power system simulations and that the method used for their development can be applied to other brands of optical systems. The communication and data transfer between all-digital protection systems is investigated by developing a test facility for such systems. Test results show that different manufacturers' relays and transformers based on the IEC standard can serve the power system successfully.
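The claim that a 40-60 μs delay stays within permitted phase-difference limits follows from converting a pure time delay to phase at power frequency. A one-line check (60 Hz system frequency assumed):

```python
def delay_to_phase_degrees(delay_s, freq_hz=60.0):
    """Phase shift equivalent to a pure time delay: 360 * f * t degrees."""
    return 360.0 * freq_hz * delay_s

worst_case_deg = delay_to_phase_degrees(60e-6)  # 60 us delay at 60 Hz
```

The worst-case delay thus corresponds to roughly 1.3 degrees of phase, small enough that typical protection-class phase-error allowances are not exceeded.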

Date Created
2010

Materialized views over heterogeneous structured data sources in a distributed event stream processing environment

Description

Data-driven applications are becoming increasingly complex with support for processing events and data streams in a loosely-coupled distributed environment, providing integrated access to heterogeneous data sources such as relational databases and XML documents. This dissertation explores the use of materialized views over structured heterogeneous data sources to support multiple query optimization in a distributed event stream processing framework that supports such applications involving various query expressions for detecting events, monitoring conditions, handling data streams, and querying data. Materialized views store the results of the computed view so that subsequent access to the view retrieves the materialized results, avoiding the cost of recomputing the entire view from base data sources. Using a service-based metadata repository that provides metadata level access to the various language components in the system, a heuristics-based algorithm detects the common subexpressions from the queries represented in a mixed multigraph model over relational and structured XML data sources. These common subexpressions can be relational, XML or a hybrid join over the heterogeneous data sources. This research examines the challenges in the definition and materialization of views when the heterogeneous data sources are retained in their native format, instead of converting the data to a common model. LINQ serves as the materialized view definition language for creating the view definitions. An algorithm is introduced that uses LINQ to create a data structure for the persistence of these hybrid views. Any changes to base data sources used to materialize views are captured and mapped to a delta structure. The deltas are then streamed within the framework for use in the incremental update of the materialized view. 
Algorithms are presented that use the magic sets query optimization approach to both efficiently materialize the views and to propagate the relevant changes to the views for incremental maintenance. Using representative scenarios over structured heterogeneous data sources, an evaluation of the framework demonstrates an improvement in performance. Thus, defining the LINQ-based materialized views over heterogeneous structured data sources using the detected common subexpressions and incrementally maintaining the views by using magic sets enhances the efficiency of the distributed event stream processing environment.
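Delta-based incremental maintenance of a join view, propagating only changed rows rather than recomputing from base sources, can be sketched for a two-relation join. Every name here is hypothetical, and the framework's LINQ view definitions and magic-sets rewriting are far richer than this insert-only toy:

```python
def join_view(r, s):
    """Full recomputation: join rows of r and s on their first column."""
    return {(a, b, c) for (a, b) in r for (a2, c) in s if a == a2}

def apply_delta(view, r, s, delta_r=frozenset(), delta_s=frozenset()):
    """Propagate only inserted rows: the new view equals the old view plus
    delta_r ⋈ s plus (r ∪ delta_r) ⋈ delta_s, avoiding full recomputation."""
    view |= {(a, b, c) for (a, b) in delta_r for (a2, c) in s if a == a2}
    view |= {(a, b, c) for (a, b) in r | delta_r for (a2, c) in delta_s if a == a2}
    return view

r = {(1, "x"), (2, "y")}
s = {(1, "p")}
v = join_view(r, s)                           # materialize once from base data
v = apply_delta(v, r, s, delta_s={(2, "q")})  # stream a delta into the view
```

The delta expression is derived from distributing the join over the union (r ∪ Δr) ⋈ (s ∪ Δs), which is the same algebraic identity that underlies the incremental maintenance algorithms described above; handling deletions would add symmetric difference terms.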

Date Created
2011