Matching Items (43)

Description
The Fiber-Wireless (FiWi) network is a future network configuration that uses optical fiber as the backbone transmission medium and provides wireless access to the end user. Our study focuses on the Dynamic Bandwidth Allocation (DBA) algorithm for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components continuing to emerge in the research literature, this thesis conducts a comprehensive study of DBA, adding Double Phase Polling coupled with a novel Limited with Share credits Excess distribution method. Through a series of simulations of DBAs built from different components, we found that grant sizing has the strongest impact on average packet delay, with grant scheduling also having a significant effect, whereas grant scheduling has the strongest impact on the stability limit, or maximum achievable channel utilization, and grant sizing only a modest one. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
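The abstract does not spell out the grant sizing mechanics. As a rough illustration of the general limited-with-excess-distribution idea (the thesis's specific share-credit bookkeeping may differ), a minimal Python sketch with illustrative values could look like this:

```python
def limited_with_excess(requests, b_max):
    """Limited grant sizing with excess distribution (illustrative sketch).

    Each ONU's grant is capped at b_max bytes; bandwidth left unused by
    underloaded ONUs is pooled and redistributed to overloaded ONUs in
    proportion to their unmet demand.  This is a generic sketch, not the
    thesis's exact share-credit mechanism.
    """
    grants = {onu: min(req, b_max) for onu, req in requests.items()}
    excess_pool = sum(b_max - req for req in requests.values() if req < b_max)
    unmet = {onu: req - b_max for onu, req in requests.items() if req > b_max}
    total_unmet = sum(unmet.values())
    if total_unmet and excess_pool:
        for onu, deficit in unmet.items():
            grants[onu] += min(deficit, excess_pool * deficit / total_unmet)
    return grants

# ONUs 1 and 2 are underloaded; ONU 3 is overloaded and absorbs the excess.
print(limited_with_excess({1: 8_000, 2: 10_000, 3: 25_000}, b_max=15_000))
```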
Contributors: Zhao, Du (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Nowadays ports play a critical role in the supply chains of contemporary companies and in global commerce. Since ports' operational effectiveness is critical to the development of competitive supply chains, their contribution to regional economies is essential. With the globalization of markets, the traffic of containers flowing through different ports has increased significantly in recent decades. In order to attract additional container traffic and improve their comparative advantages over the competition, ports serving the same hinterlands explore ways to improve their operations and become more attractive to shippers. This research explores the hypothesis that by lowering the variability of the service time observed in the handling of containers, a port reduces the total logistics costs of its customers and increases its competitiveness and that of its customers. This thesis proposes a methodology that allows the quantification of the variability existing in the services of a port, derived from factors such as inefficient internal operations, vessel congestion, or external disruption scenarios. It focuses on assessing the impact of this variability on the users' logistics costs. The methodology also allows a port to define competitive strategies that take into account its variability and that of competing ports. These competitive strategies are then translated into specific parameters that can be used to design and adjust internal operations. The methodology includes (1) the definition of a proper economic model to measure the logistics impact of a port's variability, (2) a network analysis approach to the defined problem, and (3) a systematic procedure to determine competitive service time parameters for a port. After the methodology is developed, a case study is presented in which it is applied to the Port of Guaymas. This is done by finding service time parameters for this port that yield lower logistics costs than those observed at competing ports.
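The abstract does not give the exact form of the economic model. One common way port service time variability enters a shipper's logistics cost is through the safety stock needed to cover demand over a stochastic lead time; the sketch below is only an illustration under that assumption, with hypothetical parameter names and made-up values, not the thesis's model.

```python
import math

def annual_logistics_cost(demand, unit_cost, holding_rate, order_cost,
                          lead_time_mean, lead_time_std, demand_std_per_day,
                          service_z=1.65):
    """Illustrative total-logistics-cost model (not the thesis's exact model).

    Port service time variability enters through lead_time_std: a larger
    standard deviation inflates the safety stock a shipper must carry.
    """
    # Economic order quantity for the ordering/holding trade-off.
    eoq = math.sqrt(2 * demand * order_cost / (holding_rate * unit_cost))
    # Demand variability over a stochastic lead time.
    daily_demand = demand / 365
    sigma_lt = math.sqrt(lead_time_mean * demand_std_per_day**2
                         + (daily_demand * lead_time_std)**2)
    safety_stock = service_z * sigma_lt
    holding = holding_rate * unit_cost * (eoq / 2 + safety_stock)
    ordering = order_cost * demand / eoq
    pipeline = holding_rate * unit_cost * daily_demand * lead_time_mean
    return holding + ordering + pipeline

# Doubling service time variability raises cost through safety stock alone.
base = annual_logistics_cost(36_500, 200, 0.25, 400, 12, 2, 15)
high_var = annual_logistics_cost(36_500, 200, 0.25, 400, 12, 4, 15)
print(round(base), round(high_var))
```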
Contributors: Meneses Preciado, Cesar (Author) / Villalobos, Jesus R (Thesis advisor) / Gel, Esma S (Committee member) / Maltz, Arnold B (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Hydropower generation is a clean, renewable energy source that has received great attention in the power industry. Hydropower has been the leading source of renewable energy, providing more than 86% of all electricity generated by renewable sources worldwide. Generally, the life span of a hydropower plant is considered to be 30 to 50 years. Power plants over 30 years old usually conduct a feasibility study of rehabilitation of their entire facilities, including infrastructure. By age 35, the forced outage rate increases by 10 percentage points compared to the previous year. Much longer outages occur in power plants older than 20 years, and consequently the forced outage rate increases exponentially due to these longer outages. Although these long forced outages are not frequent, their impact is immense. If the reasonable timing of rehabilitation is missed, an abrupt long-term outage could occur, followed by additional unnecessary repairs and inefficiencies; conversely, replacing equipment too early wastes revenue. The hydropower plants of Korea Water Resources Corporation (hereafter K-water) are utilized for this study, with twenty-four K-water generators comprising the population for quantifying the reliability of each piece of equipment. A facility in a hydropower plant is a repairable system because most failures can be fixed without replacing the entire facility. The fault data of each power plant are collected, and only forced outage faults are considered as raw data for the reliability analyses. The mean cumulative repair function (MCF) of each facility is determined from the failure data tables using Nelson's graph method. The power law model, a popular model for repairable systems, is also fitted to represent equipment and system availability. The criterion-based analysis of HydroAmp is used to provide a more accurate reliability assessment of each power plant. Two case studies are presented to enhance the understanding of the availability of each power plant and to present economic evaluations for modernization. Finally, equipment in a hydropower plant is categorized into two groups based on reliability in order to determine modernization timing, and suitable replacement periods are obtained using simulation.
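The abstract names Nelson's graph method but does not restate it. The nonparametric mean cumulative function estimate it refers to can be sketched as below, where the repair ages and observation windows are made-up illustrative data rather than the K-water records.

```python
def mean_cumulative_function(histories):
    """Nonparametric MCF estimate (Nelson's graph method), illustrative sketch.

    histories: one (repair_ages, censoring_age) pair per unit, where
    repair_ages lists the ages at which that unit was repaired and
    censoring_age is the age at which observation of the unit ends.
    At each repair age the MCF rises by 1 / (number of units still at risk).
    """
    events = sorted(age for ages, _ in histories for age in ages)
    cumulative, points = 0.0, []
    for age in events:
        at_risk = sum(1 for _, cens in histories if cens >= age)
        cumulative += 1.0 / at_risk
        points.append((age, cumulative))
    return points

# Three generators observed to different ages (years); repair ages are made up.
fleet = [([3, 7, 12], 15), ([5, 9], 10), ([2, 6, 8, 14], 16)]
for age, mcf in mean_cumulative_function(fleet):
    print(f"age {age:>2}: MCF = {mcf:.2f}")
```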
Contributors: Kwon, Ogeuk (Author) / Holbert, Keith E. (Thesis advisor) / Heydt, Gerald T (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis develops a low-investment marketing strategy that allows low-to-mid level farmers to extend their commercialization reach by strategically sending containers of fresh produce items to secondary markets that present temporary arbitrage opportunities. The methodology aims at identifying time windows of opportunity in which the price differential between two markets creates an arbitrage opportunity for a transaction; a transaction involves buying a fresh produce item at a base market, then shipping and selling it at the secondary market's price. A decision-making tool is developed that gauges the individual arbitrage opportunities and determines the specific price differential (or threshold level) that is most beneficial to the farmer under particular market conditions. For this purpose, two approaches are developed: a pragmatic approach that uses historic price information for the products in order to find the optimal price differential that maximizes earnings, and a theoretical one, which optimizes an expected profit model of the shipments to identify this optimal threshold. This thesis also develops risk management strategies that further reduce profit variability during a particular two-market transaction. In this case, financial engineering concepts are used to determine a shipment configuration strategy that minimizes the overall variability of the profits. For this, a Markowitz model is developed to determine the weight assigned to each component of a particular shipment. Based on the results of the analysis, it is deemed possible to formulate a shipment policy that not only increases the farmer's commercialization reach but also produces profitable operations. In general, the observed rates of return under the pragmatic and theoretical approaches hovered between 0.072 and 0.616 within important two-market structures. Secondly, it is demonstrated that the level of return and risk can be manipulated by varying the strictness of the shipping policy to meet the overall objectives of the decision-maker. Finally, it was found that the risk of a particular two-market transaction can be minimized by strategically grouping the product shipments.
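The abstract does not give the Markowitz formulation itself. A standard minimum-variance allocation over the products in a shipment, which is the kind of weighting step it describes, is sketched below with an illustrative covariance matrix (not data from the thesis).

```python
import numpy as np

def min_variance_weights(cov):
    """Minimum-variance Markowitz weights over shipment components (sketch).

    cov is the covariance matrix of per-product profit rates; the weights
    sum to one and minimize overall profit variance (short positions are
    not ruled out in this simple closed form).
    """
    ones = np.ones(cov.shape[0])
    inv = np.linalg.inv(cov)
    w = inv @ ones
    return w / (ones @ w)

# Illustrative covariance of profit rates for three produce items in a container.
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.025]])
w = min_variance_weights(cov)
print("weights:", w.round(3), "portfolio variance:", round(w @ cov @ w, 4))
```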
Contributors: Flores, Hector M (Author) / Villalobos, Rene (Thesis advisor) / Runger, George C. (Committee member) / Maltz, Arnold (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Data mining is increasing in importance in solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as consumption data stored in inconsistent formats and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once the preprocessing is completed, data mining techniques such as clustering are applied to find projects that involve resources with similar skill sets and that have similar complexity and size. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Project characteristics are then identified that generate this diversity in headcounts and skill sets. These characteristics are not currently contained in the database and are elicited from the managers of historical projects, which represents an opportunity to improve the usefulness of the data collection system in the future. The ultimate goal is to match product technical features with the resource requirements of past projects as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross validation of the training data, as past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast the resource demand of some future projects.
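The exact features and pipeline are not specified in the abstract. A minimal sketch of the kind of clustering-then-regression workflow it describes, using synthetic placeholder data and hypothetical feature names, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: one row per past project.
# Columns of X stand in for product technical features (all synthetic).
X = rng.normal(size=(40, 4))
# y: engineer-weeks consumed for one skill set; synthetic linear response plus noise.
y = X @ np.array([12.0, 5.0, 0.0, 8.0]) + 60 + rng.normal(scale=4.0, size=40)

# Step 1: group related projects into "resource utilization templates".
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: forecast resource demand with linear regression, using cross
# validation because the number of past projects is small.
model = LinearRegression()
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cluster sizes:", np.bincount(clusters))
print("cross-validated R^2:", scores.round(2))

# Fit on all history and forecast a hypothetical new project.
model.fit(X, y)
print("forecast engineer-weeks:", model.predict(rng.normal(size=(1, 4))).round(1))
```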
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In manufacturing, each gas turbine engine component begins in a raw state, such as bar stock, and is routed through manufacturing processes that define its final form before it is installed on the engine. What is the follow-up for this part? What happens when it wears over time and usage? Several factors have created a section of the manufacturing industry, known as the aftermarket, that supports customers in their need for restoration and repair of their original product. Once a product has reached a wear factor or cycle limit that cannot be ignored, one of the options is to have it repaired to maintain use of the core. This research investigated the creation and application of a repair development methodology that can be utilized by current and new manufacturing engineers. Those who have been in this field for some time will find the process thought-provoking, while engineering students can develop a foundation of thinking to prepare for the common engineering problems they will be tasked to resolve. The examples, figures, and tables reflect true issues in the industry, though the data have been changed for proprietary reasons. The results of the study reveal that, under most scenarios, a solid process can be followed to select the best repair options based on the initial discrepancy. However, this methodology is not a "catch-all" process but rather guidance that develops the proper thinking in evaluating the repair options and the possible failure modes of each choice. As with any continuous improvement tool, further research is needed to test the applicability of this process in other fields.
Contributors: Moedano, Jesus A (Author) / Lewis, Sharon L (Thesis advisor) / Meitz, Robert (Committee member) / Georgeou, Trian (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In the entire supply chain, demand planning is one of the crucial aspects of the production planning process. If demand is not estimated accurately, revenue is lost. Past research has shown that forecasting can be used to help the demand planning process for production. However, accurate forecasting from historical data is difficult in today's complex, volatile market, and it is not the only factor that influences demand planning: consumers' shifting interests and buying power also influence future demand. Hence, this research focuses on the Just-In-Time (JIT) philosophy, using a pull control strategy implemented with a Kanban control system to control inventory flow. Two different product structures, a serial product structure and an assembly product structure, are considered. Three different methods (the Toyota Production System model, a histogram model, and a cost minimization model) are used to find the number of kanbans in a computer-simulated Just-In-Time Kanban system. The simulation model was built to execute the designed scenarios for both the serial and assembly product structures. A test was performed to check the significance of the effects of various factors on system performance. Results of all three methods were collected and compared to indicate which method provides the most effective way to determine the number of kanbans under various conditions. It was inferred that the histogram model and the cost minimization model are more accurate in calculating the required kanbans for various manufacturing conditions; Method 1 fails to adjust the number of kanbans when the backorder cost increases or when the product structure changes. Among the product structures, serial product structures proved to be effective when Method 2 or Method 3 is used to calculate the kanban numbers for the system. The experimental data also indicated that, for both serial and assembly product structures, a lower container capacity accumulates more backorders in the system, which increases inventory cost, than a higher container capacity.
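The abstract does not restate the formulas behind the three methods. The classic Toyota Production System calculation that the first method refers to is usually written as in the sketch below; the parameter values are illustrative, not the thesis's simulation settings.

```python
import math

def toyota_kanban_count(daily_demand, lead_time_days, container_capacity,
                        safety_factor=0.10):
    """Classic Toyota Production System kanban count (illustrative sketch).

    N = ceil(D * L * (1 + alpha) / C), where D is the demand rate, L the
    replenishment lead time, alpha a safety allowance, and C the container
    capacity.
    """
    return math.ceil(daily_demand * lead_time_days * (1 + safety_factor)
                     / container_capacity)

# A smaller container requires more kanbans in circulation for the same demand.
print(toyota_kanban_count(daily_demand=400, lead_time_days=2, container_capacity=50))
print(toyota_kanban_count(daily_demand=400, lead_time_days=2, container_capacity=20))
```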
Contributors: Sahu, Pranati (Author) / Askin, Ronald G. (Thesis advisor) / Shunk, Dan L. (Thesis advisor) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
For more than twenty years, clinical researchers have been publishing data regarding the incidence and risk of adverse events (AEs) incurred during hospitalizations. Hospitals have standard operating policies and procedures (SOPP) to protect patients from AEs. The AE specifics (rates, SOPP failures, timing, and risk factors) during heart failure (HF) hospitalizations are unknown. There were 1,722 patients discharged with a primary diagnosis of HF from an academic hospital between January 2005 and December 2007. Three hundred eighty-one patients experienced 566 AEs, classified into four categories: medication (43.9%), infection (18.9%), patient care (26.3%), or procedural (10.9%). Three distinct analyses were performed: 1) the patient's perspective of SOPP reliability, including cumulative distribution and hazard functions of the time to AEs; 2) a Cox proportional hazards model to determine independent patient-specific risk factors for AEs; and 3) the hospital administration's perspective of SOPP reliability through the three years of the study, including cumulative distribution and hazard functions of the time between AEs and moving range statistical process control (SPC) charts for the days between failures of each type. This is the first study, to our knowledge, to consider the reliability of SOPP from both the patient's and the hospital administration's perspective. AE rates in hospitalized patients are similar to other recently published reports and did not improve during the study period. Operations research methodologies will be necessary to improve the reliability of care delivered to hospitalized patients.
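The abstract does not show how the moving range charts are constructed. A standard individuals/moving-range chart on days between failures is sketched below; the data are made up and the study's actual chart parameters are not reproduced here.

```python
def moving_range_chart(days_between):
    """Individuals and moving-range (X-mR) control limits, illustrative sketch.

    days_between: days between consecutive failures of one AE type.
    Uses the standard constants 2.66 (individuals) and 3.267 (moving range).
    """
    mr = [abs(a - b) for a, b in zip(days_between[1:], days_between[:-1])]
    x_bar = sum(days_between) / len(days_between)
    mr_bar = sum(mr) / len(mr)
    return {
        "individuals": (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
        "moving_range": (0.0, mr_bar, 3.267 * mr_bar),
    }

# Made-up days between medication-related AEs.
print(moving_range_chart([4, 9, 2, 14, 6, 3, 11, 5, 8]))
```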
Contributors: Huddleston, Jeanne (Author) / Fowler, John (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Gel, Esma (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This research is motivated by a deterministic scheduling problem that is fairly common in manufacturing environments, where certain processes call for a machine working on multiple jobs at the same time. An example of such an environment is wafer fabrication in the semiconductor industry, where some stages can be modeled as batch processes. Significant work has been done in the past on a single stage of parallel machines that process jobs in batches. The primary motivation behind this research is to extend that work to a two-stage flow shop where jobs arrive with unequal ready times and belong to incompatible job families, with the goal of minimizing total weighted tardiness. As a first step toward proposing solutions, a mixed integer mathematical model is developed that tackles the problem at hand. The problem is NP-hard, and thus the mathematical program can only solve problem instances of smaller sizes in a reasonable amount of time. The next step is to build heuristics that can provide feasible solutions in polynomial time for larger problem instances. The basic nature of the proposed heuristics is time window decomposition, where jobs within a moving time frame are considered for batching each time a machine becomes available on either stage. The Apparent Tardiness Cost (ATC) rule is used to build batches and is modified to calculate ATC indices at the batch level as well as the job level. An improvement to the above heuristic is proposed, in which the heuristic is run iteratively, each time assigning the start times of jobs on the second stage as due dates for the jobs on the first stage. The underlying logic behind the iterative approach is to improve the way due dates are estimated for the first stage based on the assigned due dates for jobs on the second stage. An important study carried out as part of this research analyzes the bottleneck stage in terms of its location and how it affects the performance measure. Extensive experimentation is carried out to test how the quality of the solution varies when input parameters are varied between high and low values.
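The abstract does not restate the ATC index. The standard job-level form, which the heuristic extends to the batch level, is sketched below with made-up job data.

```python
import math

def atc_index(weight, proc_time, due_date, now, k, p_bar):
    """Standard job-level Apparent Tardiness Cost (ATC) index (illustrative).

    I_j(t) = (w_j / p_j) * exp(-max(d_j - p_j - t, 0) / (k * p_bar)),
    where k is a look-ahead parameter and p_bar the average processing time.
    The thesis's batch-level extension aggregates over the jobs in a batch.
    """
    slack = max(due_date - proc_time - now, 0.0)
    return (weight / proc_time) * math.exp(-slack / (k * p_bar))

# Rank three waiting jobs of one family at time t = 10 (values are made up).
jobs = {"A": (2.0, 4.0, 18.0), "B": (1.0, 3.0, 12.0), "C": (3.0, 5.0, 30.0)}
p_bar = sum(p for _, p, _ in jobs.values()) / len(jobs)
for name, (w, p, d) in jobs.items():
    print(name, round(atc_index(w, p, d, now=10.0, k=2.0, p_bar=p_bar), 3))
```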
Contributors: Tewari, Anubha Alokkumar (Author) / Fowler, John W (Thesis advisor) / Monch, Lars (Thesis advisor) / Gel, Esma S (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Micromachining has seen application growth in a variety of industries requiring miniaturization of the machining process. Machining at the micro level produces different cutter/workpiece interactions, generating more localized temperature spikes in the part, as suggested by multiple studies. Temper-etch inspection is a non-destructive test used to identify "grind burns," or localized over-heating, in steel components. This research investigated the application of temper-etch inspection to micromachined steel. The tests were performed on AISI 4340 steel samples. Finding indications of localized over-heating was the primary focus of the experiment. In addition, the change between the original hardness and the post-machining hardness at the bottom of the machined slot was investigated. The results revealed that, under the conditions of the experiment, no indications of localized over-heating were present. However, there was a change in hardness at the bottom of the machined slot compared to the rest of the sample. Further research is needed to test the applicability of temper-etch inspection to micromilled steel and to identify the source of the change in hardness.
Contributors: Sayler, William A (Author) / Biekert, Russ (Thesis advisor) / Danielson, Scott (Committee member) / Georgeou, Trian (Committee member) / Arizona State University (Publisher)
Created: 2010