Matching Items (591)
Description
In this dissertation I develop a deep theory of temporal planning well-suited to analyzing, understanding, and improving state-of-the-art implementations (as of 2012). At face value the work is strictly theoretical; nonetheless its impact is entirely real and practical. The easiest portion of that impact to highlight concerns the notable improvements to the format of the temporal fragment of the International Planning Competitions (IPCs). In particular, the theory I expound upon here is the primary cause of, and justification for, the altered (i) selection of benchmark problems and (ii) notion of "winning temporal planner". For higher-level motivation: robotics, web service composition, industrial manufacturing, business process management, cybersecurity, space exploration, deep ocean exploration, and logistics all benefit from applying domain-independent automated planning techniques. Naturally, actually carrying out such case studies has much to offer. For example, we may extract the lesson that reasoning carefully about deadlines is crucial to planning in practice. More generally, effectively automating temporal planning in particular is well-motivated by applications. Entirely abstractly, the aim is to improve the theory of automated temporal planning by distilling from its practice. My thesis is that the key feature of computational interest is concurrency. In support, I demonstrate by way of compilation methods, worst-case counting arguments, and analysis of algorithmic properties such as completeness that the more immediately pressing computational obstacles (facing would-be temporal generalizations of classical planning systems) can be dealt with in a theoretically efficient manner. More accurately, then, the technical contribution here is to demonstrate that the computationally significant obstacle remaining to automated temporal planning is just concurrency.
Contributors: Cushing, William Albemarle (Author) / Kambhampati, Subbarao (Thesis advisor) / Weld, Daniel S. (Committee member) / Smith, David E. (Committee member) / Baral, Chitta (Committee member) / Davalcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient network design. It is shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
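As a rough illustration of the region-based connectivity idea (not the dissertation's exact formulation), the brute-force sketch below models a region as a disk of radius r around each node and finds the smallest set of nodes lying within any single region whose removal disconnects the network. The disk-shaped region model and the networkx representation are illustrative assumptions.

```python
# A minimal brute-force sketch of region-based connectivity, assuming regions
# are closed disks of radius r centered at nodes. Illustrative only.
from itertools import combinations
import networkx as nx

def region_based_connectivity(G, pos, r):
    """Smallest number of nodes, all inside one disk-shaped region,
    whose removal disconnects G (None if no single region can)."""
    best = None
    for center in G.nodes:                       # candidate region centers
        cx, cy = pos[center]
        region = [v for v in G.nodes
                  if (pos[v][0] - cx) ** 2 + (pos[v][1] - cy) ** 2 <= r * r]
        for k in range(1, len(region) + 1):      # try smallest cuts first
            if best is not None and k >= best:
                break
            for cut in combinations(region, k):
                H = G.copy()
                H.remove_nodes_from(cut)
                if H.number_of_nodes() > 0 and not nx.is_connected(H):
                    best = k
                    break
            else:
                continue
            break
    return best

# Example: a 4x4 unit-spaced grid with region radius 1.1.
G = nx.grid_2d_graph(4, 4)
pos = {v: v for v in G.nodes}                    # node labels double as coordinates
print(region_based_connectivity(G, pos, r=1.1))  # 2: one region covers a corner's cut
```

The exhaustive subset search is exponential in region size and suited only to small illustrative networks.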
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the increase in computing power and availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within the data. Thus, knowledge discovery via machine learning techniques is necessary if we want to better understand the information in data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these topics. We also study variable selection for matched data sets and propose a solution when there is non-linearity in the matched data. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy. The aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are predictive for only a subset of the classes. An Asymmetric Random Forest (ARF) was proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. A Matched Random Forest (MRF) was proposed to find variables that are able to distinguish case and control without the restrictions that exist in linear models. MRF detects variables that distinguish case and control even in the presence of interactions and qualitative variables.
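The general idea of asymmetric loss can be approximated with off-the-shelf tools. The hedged sketch below uses scikit-learn's class_weight to penalize errors against the class of interest more heavily than a plain SVM does; it illustrates the principle only and is not the aSVM formulation proposed in the dissertation.

```python
# A minimal sketch of asymmetric misclassification loss with a standard SVM.
# Weights and data are arbitrary illustrative choices, not the dissertation's.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_score

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = SVC().fit(X_tr, y_tr)
# Penalizing class-0 errors 10x more makes the model predict class 1 only when
# confident, trading recall for higher precision on class 1.
asym = SVC(class_weight={0: 10.0, 1: 1.0}).fit(X_tr, y_tr)

for name, model in [("symmetric", plain), ("asymmetric", asym)]:
    print(name, "precision on class 1:",
          precision_score(y_te, model.predict(X_te), zero_division=0))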
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Effective modeling of high-dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble, and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised, and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large-scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better than conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated into sparse models, and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, some applications may require combining multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex algorithm and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived, and recovery performance is demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in a feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
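As a small, self-contained illustration of greedy sparse recovery over a fixed dictionary (one ingredient of the frameworks above, not the dissertation's combined-representation algorithm), the sketch below recovers a synthetic 5-sparse code with Orthogonal Matching Pursuit; the dictionary size and sparsity level are arbitrary choices.

```python
# A minimal sketch of sparse coding via greedy recovery (OMP). Illustrative only.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))        # overcomplete dictionary of 64-dim atoms
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

x_true = np.zeros(256)                    # a 5-sparse ground-truth code
x_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
y = D @ x_true                            # observed signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, y)
print("support recovered:", np.nonzero(omp.coef_)[0])
print("reconstruction error:", np.linalg.norm(D @ omp.coef_ - y))
```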
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work demonstrated a novel microfluidic device based on direct current (DC) insulator-based dielectrophoresis (iDEP) for trapping individual mammalian cells. The device is also applicable to the selective trapping of weakly metastatic mammalian breast cancer cells (MCF-7) from mixtures with mammalian Peripheral Blood Mononuclear Cells (PBMC) and highly metastatic mammalian breast cancer cells (MDA-MB-231). The advantages of this approach are the ease of integrating iDEP structures into microfluidic channels using soft lithography, the use of DC electric fields, the addressability of the single-cell traps for downstream analysis, and straightforward multiplexing for single-cell trapping. These microfluidic devices are targeted at capturing single cells based on their DEP behavior. Numerical simulations identify the regions in which single-cell DEP trapping occurs. This work also reports conductivity values for different cell types, calculated using the single-shell model. Low-conductivity buffers, which help reduce Joule heating, are used for the trapping experiments. Viability of the cells in the buffer system was studied in detail with a population size of approximately 100 cells for each study. The work also covers the development of a parallelized single-cell trap device with optimized traps. This device is also capable of being coupled with detection of target proteins using MALDI-MS.
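For readers unfamiliar with the single-shell model, the hedged sketch below computes a cell's effective complex permittivity from assumed cytoplasm and membrane properties and evaluates the real part of the Clausius-Mossotti (CM) factor, whose sign indicates positive versus negative DEP. All parameter values are illustrative placeholders, not measurements from this work.

```python
# A hedged sketch of the single-shell model and the CM factor. Placeholder values.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_perm(eps_rel, sigma, omega):
    """Complex permittivity eps* = eps - j*sigma/omega."""
    return eps_rel * EPS0 - 1j * sigma / omega

def single_shell(eps_cyt, eps_mem, R, d):
    """Effective complex permittivity of a shelled sphere (radius R, shell d)."""
    gamma = (R / (R - d)) ** 3
    K = (eps_cyt - eps_mem) / (eps_cyt + 2 * eps_mem)
    return eps_mem * (gamma + 2 * K) / (gamma - K)

def cm_factor(eps_particle, eps_medium):
    return (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium)

omega = 2 * np.pi * 1e3                 # low frequency, approaching the DC limit
medium = complex_perm(78, 0.01, omega)  # low-conductivity buffer (~0.01 S/m)
cyt    = complex_perm(60, 0.3, omega)   # cytoplasm (placeholder values)
mem    = complex_perm(6, 1e-7, omega)   # poorly conducting membrane

cell = single_shell(cyt, mem, R=7.5e-6, d=5e-9)
print("Re[f_CM] =", cm_factor(cell, medium).real)  # < 0 here: negative DEP
```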
Contributors: Bhattacharya, Sanchari (Author) / Ros, Alexandra (Committee member) / Ros, Robert (Committee member) / Buttry, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
We are expecting hundreds of cores per chip in the near future. However, scaling the memory architecture in manycore architectures becomes a major challenge. Cache coherence provides a single image of memory at any time in execution to all the cores, yet coherent cache architectures are believed not to scale to hundreds and thousands of cores. In addition, caches and coherence logic already take 20-50% of the total power consumption of the processor and 30-60% of the die area. Therefore, a more scalable architecture is needed for manycore architectures. Software Managed Manycore (SMM) architectures have emerged as a solution. They have a scalable memory design in which each core has direct access only to its local scratchpad memory, and any data transfers to/from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. The lack of automatic memory management in the hardware makes such architectures extremely power-efficient, but it also makes them difficult to program. If the code/data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after its use. However, doing this adds considerable complexity to the programmer's job: programmers must now worry about data management on top of the functional correctness of the program, which is already quite complex. This dissertation presents a comprehensive compiler and runtime integration to automatically manage the code and data of each task in the limited local memory of the core. We first developed a Complete Circular Stack Management scheme, which manages stack frames between the local memory and the main memory and addresses the stack pointer problem as well. Though it works, we found we could further optimize the management for most cases. Thus a Smart Stack Data Management (SSDM) scheme is provided; in this work, we formulate the stack data management problem and propose a greedy algorithm for it. Later, we propose a general cost estimation algorithm, based on which the CMSM heuristic for the code mapping problem is developed. Finally, heap data is dynamic in nature and therefore hard to manage. We provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to these separate schemes for different kinds of data, we also provide a memory partitioning methodology.
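A minimal, hardware-free sketch of the circular stack management idea follows: the newest frames stay resident in a fixed-size scratchpad, and the oldest frames are spilled to main memory (via DMA on real hardware) when a new frame would overflow. The class and method names are illustrative assumptions, not the dissertation's implementation.

```python
# A simulation-only sketch of circular stack management for a scratchpad.
from collections import deque

class ScratchpadStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.local = deque()   # frames resident in the scratchpad (old -> new)
        self.spilled = []      # frames evicted to main memory
        self.used = 0

    def call(self, fn_name, frame_size):
        assert frame_size <= self.capacity, "frame larger than scratchpad"
        while self.used + frame_size > self.capacity:
            old_fn, old_size = self.local.popleft()   # evict oldest frame
            self.spilled.append((old_fn, old_size))   # DMA out on real hardware
            self.used -= old_size
        self.local.append((fn_name, frame_size))
        self.used += frame_size

    def ret(self):
        fn_name, size = self.local.pop()
        self.used -= size
        if self.spilled and not self.local:           # caller was spilled:
            old_fn, old_size = self.spilled.pop()     # DMA it back in
            self.local.append((old_fn, old_size))
            self.used += old_size
        return fn_name

stack = ScratchpadStack(capacity=256)
for fn, size in [("main", 96), ("solve", 128), ("leaf", 64)]:
    stack.call(fn, size)   # "main" is spilled to make room for "leaf"
print([f for f, _ in stack.local], stack.spilled)
```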
Contributors: Bai, Ke (Author) / Shrivastava, Aviral (Thesis advisor) / Chatha, Karamvir (Committee member) / Xue, Guoliang (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A clean and sustainable alternative to fossil fuels is solar energy. For efficient use of solar energy to be realized, artificial systems that can effectively capture and convert sunlight into a usable form of energy have to be developed. In natural photosynthesis, antenna chlorophylls and carotenoids capture sunlight and transfer the resulting excitation energy to the photosynthetic reaction center (PRC). A small reorganization energy, λ, and well-balanced electronic coupling between donors and acceptors in the PRC favor formation of a highly efficient charge-separated (CS) state. By covalently linking electron/energy donors to acceptors, organic molecular dyads and triads that mimic natural photosynthesis were synthesized and studied. Peripherally linked free-base phthalocyanine (Pc)-fullerene (C60) and zinc (Zn) phthalocyanine-C60 dyads were synthesized. Photoexcitation of the Pc moiety resulted in singlet-singlet energy transfer to the attached C60, followed by electron transfer. The lifetime of the CS state was 94 ps. By linking C60 axially to a silicon (Si) Pc, a CS-state lifetime of 4.5 ns was realized. The exceptionally long-lived CS state of the SiPc-C60 dyad qualifies it for applications in solar energy conversion devices. A secondary electron donor was linked to the dyad to obtain a carotenoid (Car)-SiPc-C60 triad and a ferrocene (Fc)-SiPc-C60 triad. Excitation of the SiPc moiety resulted in fast electron transfer from the Car or Fc secondary electron donor to the C60. The lifetime of the CS state was 17 ps in Car-SiPc-C60 and 1.2 ps in Fc-SiPc-C60. In Chapter 3, an efficient synthetic route that yields regioselective oxidative porphyrin dimerization is presented. Using Cu2+ as the oxidant, meso-β doubly connected fused porphyrin dimers were obtained in very high yields. Removal of the copper from the macrocycle affords a free-base porphyrin dimer, which allows for exchange of metals and provides a route to a wider range of metalloporphyrin dimers. In Chapter 4, the development of an efficient and expedient route to bacteriopurpurin synthesis is discussed. Meso-10,20-diformylation of porphyrin was achieved, and a one-pot porphyrin diacrylate synthesis and cyclization to afford bacteriopurpurin was realized. The bacteriopurpurin had a reduction potential of -0.85 V vs. SCE and a λmax of 845 nm.
Contributors: Arero, Jaro (Author) / Gust, Devens (Thesis advisor) / Moore, Ana (Committee member) / Gould, Ian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Solar energy is a promising alternative for addressing the world's current and future energy requirements in a sustainable way. Because solar irradiation is intermittent, it is necessary to store this energy in the form of a fuel so it can be used when required. The light-driven splitting of water into oxygen and hydrogen (a useful chemical fuel) is a fascinating theoretical and experimental challenge that is worth pursuing because of the advance in knowledge it implies and the availability of water and sunlight. Inspired by natural photosynthesis and building on previous work from our laboratory, this dissertation focuses on the development of water-splitting dye-sensitized photoelectrochemical tandem cells (WSDSPETCs). The design, synthesis, and characterization of high-potential porphyrins and metal-free phthalocyanines with phosphonic anchoring groups are reported. Photocurrents measured for WSDSPETCs made with some of these dyes, co-adsorbed with molecular or colloidal catalysts on TiO2 electrodes, are reported as well. To guide the design of new molecules, we have used computational quantum chemistry extensively. Linear correlations between calculated frontier molecular orbital energies and redox potentials were built and tested at multiple levels of theory (from semi-empirical methods to density functional theory). Strong correlations (with r² values > 0.99) with very good predictive abilities (rmsd < 50 mV) were found when using density functional theory (DFT) combined with a continuum solvent model. DFT was also used to aid in the elucidation of the mechanism of the thermal relaxation observed for the charge-separated state of a molecular triad that mimics the photo-induced proton-coupled electron transfer of the tyrosine-histidine redox relay in the reaction center of Photosystem II. It was found that the inclusion of explicit solvent molecules, hydrogen-bonded to specific sites within the molecular triad, was essential to explain the observed thermal relaxation. These results are relevant both for advancing the knowledge of natural photosynthesis and for the future design of new molecules for WSDSPETCs.
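The calibration workflow described above can be sketched in a few lines: fit a line relating computed frontier orbital energies to measured redox potentials, then use it to predict potentials for new dyes. The numbers below are hypothetical placeholders, not data from this work.

```python
# A minimal sketch of a linear orbital-energy/redox-potential calibration.
import numpy as np

homo_eV = np.array([-5.10, -5.32, -5.45, -5.60, -5.78])  # computed HOMO energies
e_ox_V  = np.array([ 0.62,  0.81,  0.95,  1.08,  1.27])  # measured oxidation potentials

slope, intercept = np.polyfit(homo_eV, e_ox_V, 1)        # linear calibration
pred = slope * homo_eV + intercept

ss_res = np.sum((e_ox_V - pred) ** 2)
ss_tot = np.sum((e_ox_V - e_ox_V.mean()) ** 2)
print(f"r^2  = {1 - ss_res / ss_tot:.4f}")
print(f"rmsd = {np.sqrt(np.mean((e_ox_V - pred) ** 2)) * 1000:.1f} mV")

new_homo = -5.50                                          # a new candidate dye
print(f"predicted E_ox = {slope * new_homo + intercept:.2f} V")
```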
Contributors: Méndez-Hernández, Dalvin D (Author) / Moore, Ana L (Thesis advisor) / Mujica, Vladimiro (Thesis advisor) / Gust, Devens J. (Committee member) / Gould, Ian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
[FeFe]-hydrogenases are enzymes that catalyze the reduction of protons to hydrogen. They rely on only the earth-abundant first-row transition metal iron at their active site (H cluster). In recent years, a multitude of diiron mimics of hydrogenases have been synthesized, but none of them catalyzes hydrogen production with the same exquisite combination of high turnover frequency and low activation energy as the enzymes. Generally, model complexes fail to include one or both of two features essential to the natural enzyme: an intricate array of outer-coordination-sphere contacts that constrain the coordination geometry to attain a catalytically optimal conformation, and the redox non-innocence of accessory [FeS] clusters found at or near the hydrogen-activating site. The work presented herein describes the synthesis and electrocatalytic characterization of iron-dithiolate models designed to incorporate these features. First, synthetic strategies are developed for constructing peptides with artificial metal-binding motifs, such as 1,3-dithiolates and phosphines, which are utilized to append diiron-polycarbonyl clusters onto a peptide. The phosphine-functionalized peptides are shown to be better electrocatalysts for proton reduction in water/acetonitrile mixtures than in neat acetonitrile. Second, we report the impact of redox non-innocent ligands on the electrocatalytic properties of two types of [FeFe]-hydrogenase models: dinuclear and mononuclear iron complexes. The bidentate, redox non-innocent α-diimine ligands (N-N), 2,2'-bipyridine and 2,2'-bipyrimidine, are used to create complexes with the general formula (μ-SRS)Fe2(CO)4(N-N), new members of the well-known family of asymmetric diiron carbonyls. While the 2,2'-bipyridine derivatives can act as electrocatalysts for proton reduction, surprisingly, the 2,2'-bipyrimidine analogues are found to be inactive towards catalysis. Electrochemical investigation of two related Fe(II) complexes, (bdt)Fe(CO)P2, where bdt = benzene-1,2-dithiolate and P2 = 1,1'-diphenylphosphinoferrocene or methyl-2-{bis(diphenylphosphinomethyl)amino}acetate, related to the distal iron in [FeFe]-hydrogenase, shows that these complexes catalyze the reduction of protons under mild conditions. However, their reactivities toward the external ligand CO are distinguished by gross geometrical differences.
Contributors: Roy, Souvik (Author) / Jones, Anne K (Thesis advisor) / Moore, Thomas (Committee member) / Trovitch, Ryan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for the evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency for judgment of their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies for objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but the use of sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. Nevertheless, a method to judge skills automatically under real-life conditions should be the ultimate goal, since only with such a capability would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because the system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a series of computer vision algorithms and provides on-screen, real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.
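A hedged sketch of such an observer-independent composite scoring model appears below: kinematic features (path length, a smoothness proxy, duration) are extracted from tracked tool trajectories and regressed onto expert ratings. The features, trajectories, and scores are all hypothetical placeholders rather than the system's actual pipeline.

```python
# A minimal sketch of a composite skill-scoring model from motion features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def motion_features(traj):
    """Simple kinematic features from an (n, 2) array of tool positions."""
    steps = np.diff(traj, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()     # total path length
    jerk = np.abs(np.diff(steps, n=2, axis=0)).mean()  # motion smoothness proxy
    duration = len(traj)                               # frames as a time proxy
    return [path_len, jerk, duration]

rng = np.random.default_rng(0)
trajs = [rng.standard_normal((rng.integers(100, 400), 2)).cumsum(axis=0)
         for _ in range(60)]                           # simulated trajectories
X = np.array([motion_features(t) for t in trajs])
y = 100 - 0.05 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 2, len(X))  # mock ratings

model = RandomForestRegressor(random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
```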
Contributors: Islam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created: 2013