Matching Items (2,237)
Description
The Fiber-Wireless (FiWi) network is a future network configuration that uses optical fiber as the backbone transmission medium and provides wireless access for the end user. Our study focuses on the Dynamic Bandwidth Allocation (DBA) algorithm for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components emerging in the research literature, a comprehensive study of DBA is conducted in this thesis, adding Double Phase Polling coupled with a novel Limited with Share credits Excess distribution method. By conducting a series of simulations of DBAs built from different components, we found that grant sizing has the strongest impact on average packet delay, while grant scheduling also has a significant impact on it. Conversely, grant scheduling has the strongest impact on the stability limit, or maximum achievable channel utilization, whereas grant sizing has only a modest impact there. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
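For illustration, here is a minimal Python sketch of the general "limited with excess distribution" grant-sizing idea that such DBAs build on; the function name, the proportional sharing rule, and all parameters are assumptions for illustration, not the thesis's exact algorithm.

```python
# Hypothetical sketch: each ONU is granted at most b_max bytes per cycle;
# bandwidth left unused by underloaded ONUs is pooled as excess "credits"
# and shared among the overloaded ONUs (here: proportionally to deficit).
def limited_with_excess(requests, b_max):
    """requests: queue sizes (bytes) reported by each ONU in its REPORT."""
    grants = [min(r, b_max) for r in requests]
    excess = sum(b_max - g for g in grants)              # pooled unused credits
    overloaded = [i for i, r in enumerate(requests) if r > b_max]
    deficit = sum(requests[i] - b_max for i in overloaded)
    for i in overloaded:
        share = excess * (requests[i] - b_max) / deficit if deficit else 0.0
        grants[i] += min(share, requests[i] - b_max)     # never grant beyond need
    return grants

print(limited_with_excess([500, 1500, 4000], b_max=2000))  # -> [500, 1500, 4000.0]
```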
Contributors: Zhao, Du (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
S-Taliro is a fully functional Matlab toolbox that searches for trajectories of minimal robustness in hybrid systems implemented as either m-functions or Simulink/Stateflow models. Trajectories of minimal robustness are found by automatically testing hybrid systems against user specifications. In this work we use Metric Temporal Logic (MTL) to describe the user specifications for the hybrid systems. We then try to falsify the MTL specification by globally minimizing a robustness metric. Global minimization is carried out using stochastic optimization algorithms such as Monte-Carlo (MC) sampling and Extended Ant Colony Optimization (EACO). Irrespective of the type of model provided as input to S-Taliro, the user needs to specify the MTL specification, the initial conditions, and the bounds on the inputs. S-Taliro then uses this information to generate test inputs, which are used to simulate the system. The simulation trace is then provided as input to Taliro, which computes the robustness estimate of the MTL formula. Global minimization of this robustness metric is performed to generate new test inputs, which in turn produce simulation traces that are closer to falsifying the MTL formula. Traces with negative robustness values indicate that the simulation trace falsified the MTL formula. Traces with positive robustness values are also of great importance because they indicate how robust the system is against the given specification. S-Taliro has been seamlessly integrated into the Matlab environment, which is extensively used for model-based development of control software. Moreover, the toolbox has been developed in a modular fashion, so adding new optimization algorithms is easy and straightforward. In this work I present the architecture of S-Taliro and its operation on a few benchmark problems.
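As a rough illustration of the falsification loop described above, here is a hedged Python sketch (S-Taliro itself is a Matlab toolbox; the function names and the plain random sampling below are stand-ins for its stochastic optimizers):

```python
import random

# Sketch: search the input space for a trace minimizing the robustness of
# an MTL specification; a negative robustness value means falsification.
def falsify(simulate, robustness, sample_input, budget=1000):
    best_u, best_rho = None, float("inf")
    for _ in range(budget):
        u = sample_input()          # candidate input / initial condition
        trace = simulate(u)         # run the model (m-function or Simulink)
        rho = robustness(trace)     # Taliro-style robustness of the MTL formula
        if rho < best_rho:
            best_u, best_rho = u, rho
        if best_rho < 0:            # counterexample found
            break
    return best_u, best_rho

# Toy demo: falsify "x always stays below 1" for x(t) = u*t on t in [0, 1].
sim = lambda u: [u * t / 10.0 for t in range(11)]
rob = lambda trace: min(1.0 - x for x in trace)   # robustness of G(x < 1)
u_star, rho_star = falsify(sim, rob, lambda: random.uniform(0.0, 2.0))
```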
Contributors: Annapureddy, Yashwanth Singh Rahul (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A workload-aware low-power neuromorphic controller for dynamic power and thermal management in VLSI systems is presented. The neuromorphic controller predicts future workload and temperature values based on past values and CPU performance counters, and preemptively regulates supply voltage and frequency. System-level measurements from state-of-the-art commercial microprocessors are used to obtain workload, temperature, and CPU performance counter values. The controller is designed and simulated using circuit-design and synthesis tools. At the device level, on-chip planar inductors suffer from low inductance while occupying large chip area. On-chip inductors with integrated magnetic materials are designed, simulated, and fabricated to explore performance-efficiency trade-offs and potential applications such as resonant clocking and on-chip voltage regulation. A system-level study is conducted to evaluate the effect of an on-chip voltage regulator employing magnetic inductors as the output filter. It is concluded that the neuromorphic power controller is beneficial for fine-grained per-core power management in conjunction with on-chip voltage regulators utilizing scaled magnetic inductors.
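The following Python fragment sketches the general predict-then-scale DVFS idea such a controller implements; the moving-average predictor and the frequency table are placeholder assumptions, since the thesis uses a trained neuromorphic predictor and real P-states.

```python
# Hedged sketch of workload-prediction-driven DVFS (the thesis's controller
# is neuromorphic; the weighted moving-average predictor is a stand-in).
FREQ_LEVELS_GHZ = [0.8, 1.2, 1.6, 2.0]   # assumed available P-states

def predict_next(history):
    # weighted moving average: more recent samples count more
    weights = range(1, len(history) + 1)
    return sum(w * h for w, h in zip(weights, history)) / sum(weights)

def choose_frequency(history):
    # history holds recent utilization samples in [0, 1], measured at f_max
    demand_ghz = predict_next(history) * max(FREQ_LEVELS_GHZ)
    for f in FREQ_LEVELS_GHZ:            # lowest frequency meeting predicted demand
        if f >= demand_ghz:
            return f
    return FREQ_LEVELS_GHZ[-1]

print(choose_frequency([0.3, 0.4, 0.6, 0.7]))  # rising load -> picks 1.2 GHz
```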
Contributors: Sinha, Saurabh (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Yu, Hongbin (Committee member) / Christen, Jennifer B. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, hardware complexity, etc. Real-time video streaming, gaming, and social networking are a few such examples. Over the years many problems have been addressed towards the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. Motivated by some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even for very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of ones in a codeword. The third contribution considers two-way relay channels where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed: a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean squared error based detector followed by a network coded sequence decoder.
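The two-phase physical-layer network coding idea can be sketched as follows (a hedged toy over BPSK with a simple threshold decision at the relay; the dissertation studies far more elaborate list and joint decoders):

```python
import random

# Toy two-phase PNC over BPSK (bit b maps to symbol 1 - 2*b):
# phase 1: A and B transmit simultaneously; the relay observes the sum
# y = (1-2a) + (1-2b) + noise, which is ±2 when a == b and 0 when a != b,
# so |y| directly reveals a XOR b.  phase 2: the relay broadcasts the XOR.
def relay_xor_decision(y, threshold=1.0):
    return 0 if abs(y) > threshold else 1

a, b = random.randint(0, 1), random.randint(0, 1)
y = (1 - 2 * a) + (1 - 2 * b) + random.gauss(0.0, 0.3)   # multiple-access phase
xor = relay_xor_decision(y)                               # broadcast phase payload
print("A recovers b =", xor ^ a, "| B recovers a =", xor ^ b)
```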
Contributors: Bhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Dual-wavelength laser sources have various existing and potential applications in wavelength division multiplexing, differential techniques in spectroscopy for chemical sensing, multiple-wavelength interferometry, terahertz-wave generation, microelectromechanical systems, and microfluidic lab-on-chip systems. In the drive for ever smaller and increasingly mobile electronic devices, dual-wavelength coherent light output from a single semiconductor laser diode would enable further advances and deployment of these technologies. The output of conventional laser diodes is, however, limited to a single wavelength band with a few lasing modes, depending on the device design. This thesis investigates a novel semiconductor laser device design with a single cavity waveguide capable of dual-wavelength laser output with large spectral separation. The novel dual-wavelength semiconductor laser diode uses two shorter- and longer-wavelength active regions that have separate electron and hole quasi-Fermi energy levels and carrier distributions. The shorter-wavelength active region is based on electrical injection as in conventional laser diodes, and the longer-wavelength active region is then pumped optically by the internal optical field of the shorter-wavelength laser mode, resulting in stable dual-wavelength laser emission at two wavelengths quite far apart. Different designs of the device are studied using a theoretical model developed in this work to describe the internal optical pumping scheme. The carrier transport and separation of the quasi-Fermi distributions are then modeled using a software package that solves Poisson's equation and the continuity equations to simulate semiconductor devices. Three different designs are grown using molecular beam epitaxy, and broad-area-contact laser diodes are processed using conventional methods. The modeling and experimental results of the first-generation design indicate that the optical confinement factor of the longer-wavelength active region is a critical element in realizing dual-wavelength laser output. The modeling predicts lower laser thresholds for the second- and third-generation designs; however, the experimental results for the second- and third-generation devices reveal challenges, related to the epitaxial growth of the structures, that must be overcome to eventually demonstrate dual-wavelength laser output.
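A hedged, normalized rate-equation toy of the internal optical pumping scheme is sketched below; the equations and every coefficient are illustrative assumptions, not the model developed in the thesis.

```python
# Toy model: region 1 is electrically injected and lases at the short
# wavelength (photons S1); region 2 absorbs part of S1 (coefficient
# `absorb`) and, once inverted, lases at the long wavelength (photons S2).
def simulate(I=6.0, loss1=0.4, loss2=0.4, absorb=0.5, dt=1e-3, steps=40000):
    N1 = N2 = S1 = S2 = 0.0
    for _ in range(steps):
        dN1 = I - N1 - (N1 - 1.0) * S1                   # injection, recombination, stimulated emission
        dS1 = ((N1 - 1.0) - loss1 - absorb) * S1 + 1e-4  # gain minus cavity loss minus absorption in region 2
        dN2 = absorb * S1 - N2 - (N2 - 1.0) * S2         # region 2 is optically pumped by S1
        dS2 = ((N2 - 1.0) - loss2) * S2 + 1e-4
        N1 += dt * dN1; S1 += dt * dS1
        N2 += dt * dN2; S2 += dt * dS2
    return S1, S2

print(simulate())  # both photon densities settle above zero: dual-wavelength output
```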
Contributors: Green, Benjamin C. (Author) / Zhang, Yong-Hang (Thesis advisor) / Ning, Cun-Zheng (Committee member) / Tao, Nongjian (Committee member) / Roedel, Ronald J. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The drive towards device scaling and large output power in millimeter and sub-millimeter wave power amplifiers results in a highly non-linear, out-of-equilibrium charge transport regime. Particle-based Full Band Monte Carlo device simulators allow an accurate description of this carrier dynamics at the nanoscale. This work initially compares GaN high electron mobility transistors (HEMTs) based on the established Ga-face technology and the emerging N-face technology, through a modeling approach that allows a fair comparison, indicating that N-face devices exhibit improved performance with respect to Ga-face ones due to the natural back-barrier confinement that mitigates short-channel effects. An investigation is then carried out on the minimum aspect ratio (i.e., the ratio of gate length to gate-to-channel distance) that limits short-channel effects in ultra-scaled GaN and InP HEMTs, indicating that this value is 15 for GaN devices and 7.5 for InP devices. This difference is believed to be related to the different dielectric properties of the two materials and the corresponding different electric field distributions. The dielectric effects of the passivation layer in millimeter-wave, high-power GaN HEMTs are also investigated, finding that the effective gate length is increased by fringing capacitances, enhanced by the dielectrics in regions adjacent to the gate for layers thicker than 5 nm, strongly affecting the frequency performance of deep sub-micron devices. Lastly, efficient Full Band Monte Carlo particle-based device simulations of the large-signal performance of mm-wave transistor power amplifiers with high-Q matching networks are reported for the first time. In particular, a Cellular Monte Carlo (CMC) code is self-consistently coupled with a Harmonic Balance (HB) frequency-domain circuit solver. Due to the iterative nature of the HB algorithm, this simulation approach is possible only because of the computational efficiency of the CMC, which uses pre-computed scattering tables. On the other hand, HB allows the direct simulation of the steady-state behavior of circuits with long transient times. This work provides an accurate and efficient tool for early-stage device design, allowing a computer-based performance evaluation in lieu of the extremely time-consuming and expensive iterations of prototyping and experimental large-signal characterization.
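The CMC-HB coupling can be caricatured with a toy harmonic balance loop in Python; here a closed-form diode-like nonlinearity stands in for the Monte Carlo device solver, and the damped fixed-point update is an illustrative assumption rather than the actual solver used in the dissertation.

```python
import numpy as np

def device_current(v):                        # stand-in for the CMC device solver:
    return 1e-3 * (np.exp(v / 0.2) - 1.0)     # returns i(t) for a given v(t) waveform

def harmonic_balance(V0=1.0, R=50.0, n=64, iters=200, damp=0.2):
    t = np.arange(n)
    vs = V0 * np.cos(2 * np.pi * t / n)       # source waveform over one RF period
    v = vs.copy()
    for _ in range(iters):
        i = device_current(v)                 # "device simulation" in the time domain
        I = np.fft.rfft(i)
        V_new = np.fft.rfft(vs) - R * I       # linear circuit enforced per harmonic
        V = (1 - damp) * np.fft.rfft(v) + damp * V_new   # damped fixed-point update
        v = np.fft.irfft(V, n)
    return v, device_current(v)

v, i = harmonic_balance()                     # steady-state waveforms at the device
```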
Contributors: Guerra, Diego (Author) / Saraniti, Marco (Thesis advisor) / Ferry, David K. (Committee member) / Goodnick, Stephen M. (Committee member) / Ozev, Sule (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Lossy compression is a form of compression that slightly degrades a signal in ways that are ideally not detectable to the human ear. This contrasts with lossless compression, in which the sample is not degraded at all. While lossless compression may seem like the best option, lossy compression, which is used in most audio and video, reduces transmission time and results in much smaller file sizes. However, this compression can affect quality if it goes too far: the more compression is applied to a waveform, the more degradation there is, and once a file is lossy-compressed, the process is not reversible. This project observes the degradation of an audio signal after the application of Singular Value Decomposition (SVD) compression, a lossy method that eliminates singular values from a signal's matrix.
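A minimal NumPy sketch of the SVD compression step is given below; the framing size and the retained rank are assumed parameters, not the project's settings.

```python
import numpy as np

# Reshape a mono signal into a matrix, keep only the k largest singular
# values, and rebuild; smaller k -> more compression, more degradation.
def svd_compress(signal, rows=256, k=32):
    n = (len(signal) // rows) * rows
    X = signal[:n].reshape(rows, -1)            # frame the signal as a matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[k:] = 0.0                                 # eliminate the smallest singular values
    return (U * s) @ Vt                         # rank-k reconstruction

x = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # 1 s, 440 Hz test tone
y = svd_compress(x).ravel()
err = np.linalg.norm(x[:len(y)] - y)
print("residual:", err)  # a pure tone is low-rank, so it survives nearly unchanged
```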

Contributors: Hirte, Amanda (Author) / Kosut, Oliver (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Developed a business product with a team of CS students.
Contributors: Perri, Cole Thomas (Co-author) / Hernandez, Maximilliano (Co-author) / Schneider, Kaitlin (Co-author) / Call, Andy (Thesis director) / Hunt, Neil (Committee member) / School of Accountancy (Contributor) / Watts College of Public Service & Community Solutions (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments, and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer vs. normal patients, the consequences of misclassification are probably more important than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be overexpressed and perhaps a cause of cancer. These misclassifications are typically more frequent in the presence of outlier data points. The aim of this thesis is to develop a maximum-margin classifier that addresses the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic and experimental data from biomedical studies and is competitive with a classifier derived along similar lines when real-life data examples are considered.
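To make the robustness idea concrete, below is a hedged NumPy sketch of a maximum-margin classifier trained with a capped hinge loss, so a single outlier contributes at most a bounded loss; this is an illustrative stand-in, not the RSVM formulation of the thesis.

```python
import numpy as np

# Capped hinge: min(cap, max(0, 1 - y*f(x))); points past the cap give
# zero (sub)gradient, so gross outliers stop pulling the decision boundary.
def train(X, y, lam=0.01, cap=2.0, lr=0.05, epochs=200):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = (margins < 1) & (1 - margins < cap)   # capped points excluded
        gw = lam * w - (y[active, None] * X[active]).sum(0) / len(X)
        gb = -y[active].sum() / len(X)
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
X[0] = [8.0, 8.0]                          # plant one gross outlier in the -1 class
w, b = train(X, y)
print((np.sign(X @ w + b) == y).mean())    # accuracy stays high despite the outlier
```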
Contributors: Gupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes solutions to overcome the high complexity issues that arise in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, a frequency-pruning and a detector-pruning algorithm are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages; it ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
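The frequency-pruning idea can be sketched as follows (the threshold, the loudness exponent, and the function names are assumptions for illustration, not the dissertation's algorithm):

```python
import numpy as np

# Skip auditory channels whose input energy falls so far below the
# strongest channel that they cannot meaningfully affect total loudness.
def pruned_loudness(band_energies_db, prune_margin_db=40.0, exponent=0.23):
    e = np.asarray(band_energies_db, dtype=float)
    keep = e > e.max() - prune_margin_db                # prune negligible channels
    loud = np.sum((10 ** (e[keep] / 10)) ** exponent)   # toy specific-loudness sum
    return loud, keep.sum() / e.size                    # loudness proxy, fraction evaluated

loud, frac = pruned_loudness([70, 65, 20, 10, 60, 5, 72])
print(f"processed {frac:.0%} of channels")              # fewer channels -> less computation
```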
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011