Matching Items (299)
Description

Demands in file size and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high-definition video content. Factoring in the consumer desire for convenience, wireless service is the most desired approach for inter-connectivity. Consumers expect wireless service to emulate wired service with virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high-definition video applications. I then look at proposed solutions at the physical (PHY) and media access control (MAC) layers as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap exists in the published literature pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented, and no work has been found that shows results of such a model. This research proposes, develops, and implements in Matlab code an alternate cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high-definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation has been conclusively shown to be superior to non-adaptive techniques and sufficiently superior even to quasi-adaptive algorithms.
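The average PSNR metric referenced in this abstract is computed per frame and averaged over the sequence; a minimal pure-Python sketch (the 8-bit peak value of 255 and the toy pixel values are illustrative assumptions, not data from this study):

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(peak ** 2 / mse)

# Toy 4-pixel "frames": the decoded frame differs by a few gray levels.
orig = [100, 120, 140, 160]
dec  = [101, 118, 140, 163]
print(round(psnr(orig, dec), 2))  # ≈ 42.69 dB
```

Averaging this value over all frames of a sequence gives the average-PSNR figure the abstract reports improvements against.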
Contributors: Bosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Fiber-Wireless (FiWi) networks are a future network configuration that uses optical fiber as the backbone transmission medium and wireless networks for the end user. Our study focuses on Dynamic Bandwidth Allocation (DBA) algorithms for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components emerging in research, a comprehensive study of DBA is conducted in this thesis, adding Double Phase Polling coupled with a novel Limited with Share credits Excess distribution method. By conducting a series of simulations of DBAs using different components, we found that grant sizing has the strongest impact on average packet delay, and grant scheduling also has a significant impact on average packet delay. Grant scheduling has the strongest impact on the stability limit, or maximum achievable channel utilization, whereas grant sizing has only a modest impact on the stability limit. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
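As a rough illustration of limited grant sizing with excess distribution, the sketch below caps each ONU's grant at a fixed limit and shares the slack of underloaded ONUs among overloaded ones. The function name and the equal-share policy are assumptions for illustration; the thesis's Limited with Share credits Excess distribution method may differ in detail:

```python
def limited_with_excess(requests, limit):
    """Cap each ONU's grant at `limit`, then pool the unused slack of
    underloaded ONUs and share it equally among overloaded ONUs."""
    grants = [min(r, limit) for r in requests]
    pool = sum(limit - r for r in requests if r < limit)  # unused slack
    over = [i for i, r in enumerate(requests) if r > limit]
    if over and pool > 0:
        share = pool // len(over)
        for i in over:
            # never grant more than the ONU actually requested
            grants[i] += min(share, requests[i] - grants[i])
    return grants

# Four ONUs, grant limit 6: slack from ONUs 0 and 2 is shared by 1 and 3.
print(limited_with_excess([2, 10, 4, 12], 6))  # [2, 9, 4, 9]
```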
Contributors: Zhao, Du (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description


Motor learning is the process of improving task execution according to some measure of performance. This can be divided into skill learning, a model-free process, and adaptation, a model-based process. Prior studies have indicated that adaptation results from two complementary learning systems with parallel organization. This report attempted to answer the question of whether a similar interaction leads to savings, a model-free process that is described as faster relearning when experiencing something familiar. This was tested in a two-week reaching task conducted on a robotic arm capable of perturbing movements. The task was designed so that the two sessions differed in their history of errors. By measuring the change in the learning rate, the savings was determined at various points. The results showed that the history of errors successfully modulated savings. Thus, this supports the notion that the two complementary systems interact to develop savings. Additionally, this report was part of a larger study that will explore the organizational structure of the complementary systems as well as the neural basis of this motor learning.

Contributors: Ruta, Michael (Author) / Santello, Marco (Thesis director) / Blais, Chris (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / School of Human Evolution & Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


This thesis worked towards the development of a parameterized 3D model of a cover that could go over any specific prosthesis depending on the parameters that had been entered. It also focused on gathering user inputs that could be used to create an aesthetic design on this cover, which was done with the aid of the Amputee Coalition. The Amputee Coalition helped to recruit participants through its website and social media platforms. Finally, multiple methods of creating a design were developed to increase the amount of customization that a user could have for their cover.

Contributors: Riley, Nicholas (Co-author) / Fusaro, Gerard (Co-author) / Sugar, Thomas (Thesis director) / Redkar, Sangram (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


This thesis worked towards the development of a parameterized 3D model of a cover that could go over any specific prosthesis depending on the parameters that had been entered. It also focused on gathering user inputs that could be used to create an aesthetic design on this cover, which was done with the aid of the Amputee Coalition. The Amputee Coalition helped to recruit participants through its website and social media platforms. Finally, multiple methods of creating a design were developed to increase the amount of customization that a user could have for their cover.

Contributors: Fusaro, Gerard Anthony (Co-author) / Riley, Nicholas (Co-author) / Sugar, Thomas (Thesis director) / Redkar, Sangram (Committee member) / College of Integrative Sciences and Arts (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


The purpose of this creative project was to create a stereo sound system in a unique medium. As a team, we decided to integrate Tesla coils with a Bluetooth audio source. These high-frequency, high-voltage systems can be configured to emit their electrical discharge in a manner that resembles playing tunes. Originally the idea was to split the audio into left and right channels, then to further divide each side's signal into treble, mid, and bass emitters. Due to time, budget, and scope constraints, we decided to complete the project with only two coils.

For this project, the team decided to use a solid-state coil kit purchased from oneTesla, which would help ensure everyone's safety and the project's success. The team developed our own interrupting (driving) circuit by reverse-engineering the interrupter provided by oneTesla and discussing the design with other engineers. The custom interrupter was controlled by a PSoC 5LP and communicated with an audio source through a DFRobot Bluetooth module. Utilizing the left and right audio signals, it drives the two Tesla coils in stereo to play music.

Contributors: Pinkowski, Olivia N (Co-author) / Hutcherson, Cree (Co-author) / Jordan, Shawn (Thesis director) / Sugar, Thomas (Committee member) / Engineering Programs (Contributor) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


The purpose of this creative project was to create a stereo sound system in a unique medium. As a team, we decided to integrate Tesla coils with a Bluetooth audio source. These high-frequency, high-voltage systems can be configured to emit their electrical discharge in a manner that resembles playing tunes. Originally the idea was to split the audio into left and right channels, then to further divide each side's signal into treble, mid, and bass emitters. Due to time, budget, and scope constraints, we decided to complete the project with only two coils.

For this project, the team decided to use a solid-state coil kit purchased from oneTesla, which would help ensure everyone's safety and the project's success. The team developed our own interrupting (driving) circuit by reverse-engineering the interrupter provided by oneTesla and discussing the design with other engineers. The custom interrupter was controlled by a PSoC 5LP and communicated with an audio source through a DFRobot Bluetooth module. Utilizing the left and right audio signals, it drives the two Tesla coils in stereo to play music.

Contributors: Hutcherson, Cree (Co-author) / Pinkowski, Olivia (Co-author) / Jordan, Shawn (Thesis director) / Sugar, Thomas (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Neurostimulation methods currently include deep brain stimulation (DBS), optogenetics, transcranial direct-current stimulation (tDCS), and transcranial magnetic stimulation (TMS). TMS and tDCS are noninvasive techniques, whereas DBS and optogenetics require surgical implantation of electrodes or light-emitting devices. All approaches except optogenetics have been implemented in clinical settings because they have demonstrated therapeutic utility and clinical efficacy for neurological and psychiatric disorders. When applied for therapeutic applications, these techniques suffer from limitations that hinder their intended use to treat compromised brain function. DBS requires an invasive surgical procedure that brings complications from infection, limited longevity of electrical components, and immune responses to foreign materials. Both TMS and tDCS circumvent the problems seen with DBS, as they are noninvasive procedures, but they fail to produce the spatial resolution required to target specific brain structures. Recognizing these restrictions, we set out to use ultrasound as a neurostimulation modality. Ultrasound can achieve greater resolution than TMS and tDCS, as we have demonstrated a ~2 mm lateral resolution, and can be delivered noninvasively. These characteristics place ultrasound ahead of current neurostimulation methods. For these reasons, this dissertation provides a developed protocol to use transcranial pulsed ultrasound (TPU) as a neurostimulation technique. These investigations implement electrophysiological, optophysiological, immunohistological, and behavioral methods to elucidate the effects of ultrasound on the central nervous system and raise questions about the functional consequences. Intriguingly, we showed that TPU was also capable of stimulating intact sub-cortical circuits in the anesthetized mouse.
These data reveal that TPU can evoke synchronous oscillations in the hippocampus in addition to increasing expression of brain-derived neurotrophic factor (BDNF). These observations, together with the ability to noninvasively stimulate neuronal activity at mesoscale resolution, reveal a potential avenue for effectiveness in clinical settings where current brain stimulation techniques have proven beneficial. Thus, the results of this dissertation help make the case for these protocols to gain translational recognition.
Contributors: Tufail, Yusuf Zahid (Author) / Tyler, William J (Thesis advisor) / Duch, Carsten (Committee member) / Muthuswamy, Jitendran (Committee member) / Santello, Marco (Committee member) / Tillery, Stephen H (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The theme of this work is the development of fast numerical algorithms for sparse optimization, as well as their applications in medical imaging and source localization using sensor array processing. Due to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties in solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated into a sparse waveform via an over-complete basis, and the $\ell_p$-norm properties in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm with the aid of the $\ell_p$-norm can resolve closely distributed sources with higher accuracy than other existing methods.
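The "simple functions with closed-form solutions" that arise when an $\ell_1$ term is split off are typified by the soft-thresholding (shrinkage) operator, which exactly minimizes $\lambda|x| + \tfrac{1}{2}(x - v)^2$. A standalone sketch, not code from the dissertation:

```python
def soft_threshold(v, lam):
    """Closed-form minimizer of lam*|x| + 0.5*(x - v)**2:
    shrink v toward zero by lam, clamping small values to exactly zero."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

# Values inside [-lam, lam] are zeroed; this is what promotes sparsity.
print([soft_threshold(v, 1.0) for v in (-3.0, -0.5, 0.0, 2.5)])
```

Applying this operator elementwise at each augmented-Lagrangian iteration is what makes such splitting schemes cheap compared to interior point methods.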
Contributors: Shen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customers' requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second is that wireless transmission media typically introduce errors into the bit stream due to noise, congestion, and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In a scalable bit stream, some parts are more important than others. In the unequal error protection scheme, parity bytes are assigned to video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264/SVC single-layer video streams for long video sequences of different genres are considered in this study, serving as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams; a framework to obtain an H.264/SVC-compatible bit stream is modeled in this study.
It is concluded that assigning parity bytes based on the distribution of data across different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
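A toy sketch of the contrast between the two schemes: equal error protection allocates parity in proportion to packet length alone, while unequal error protection weights the allocation by packet importance. The importance weights and packet sizes below are illustrative assumptions, not figures from the thesis:

```python
def allocate_parity(lengths, weights, parity_budget):
    """Distribute a parity-byte budget across packets in proportion to
    length * importance weight (all weights equal gives equal protection)."""
    scores = [length * w for length, w in zip(lengths, weights)]
    total = sum(scores)
    return [parity_budget * s // total for s in scores]

# Three packets; I-frame data weighted 3x, P-frame 2x, B-frame 1x.
lengths = [400, 300, 300]
equal   = allocate_parity(lengths, [1, 1, 1], 100)  # by length only
unequal = allocate_parity(lengths, [3, 2, 1], 100)  # importance-weighted
print(equal, unequal)  # [40, 30, 30] [57, 28, 14]
```

Under unequal protection, the more important I-frame packet receives a larger share of the parity budget, which is the effect the thesis quantifies with Reed-Solomon codes.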
Contributors: Sundararaman, Hari (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011