Description

Many real-world planning problems can be modeled as Markov Decision Processes (MDPs), which provide a framework for handling uncertainty in the outcomes of action executions. A solution to such a planning problem is a policy that handles the possible contingencies that could arise during execution. MDP solvers typically construct policies for a problem instance without reusing information from previously solved instances. Research in generalized planning has demonstrated the utility of constructing algorithm-like plans that reuse such information.
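To make the notion of an MDP policy concrete, the sketch below shows a minimal value-iteration solver that computes a policy mapping each state to an action. The toy two-state MDP (a start state with a noisy "go" action toward an absorbing goal) is an illustrative assumption for this page, not an example taken from the thesis itself.

```python
def value_iteration(states, actions, transitions, reward, gamma=0.9, eps=1e-6):
    """Solve a small MDP by value iteration.

    transitions[s][a] -> list of (probability, next_state) pairs.
    Returns the value function V and a greedy policy (state -> action),
    i.e. the contingency plan covering every reachable state.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for p, s2 in transitions[s][a])
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Extract the greedy policy: one action per state.
    policy = {
        s: max(actions(s), key=lambda a: sum(
            p * (reward(s, a, s2) + gamma * V[s2])
            for p, s2 in transitions[s][a]))
        for s in states
    }
    return V, policy


# Toy MDP (hypothetical): from state "A", action "go" reaches the
# absorbing goal "G" with probability 0.9 and slips back with 0.1.
states = ["A", "G"]
actions = lambda s: ["go", "stay"]
transitions = {
    "A": {"go": [(0.9, "G"), (0.1, "A")], "stay": [(1.0, "A")]},
    "G": {"go": [(1.0, "G")], "stay": [(1.0, "G")]},
}
reward = lambda s, a, s2: 1.0 if (s == "A" and s2 == "G") else 0.0
V, policy = value_iteration(states, actions, transitions, reward)
```

Because the "go" action succeeds only with probability 0.9, the computed policy accounts for the slip outcome automatically; this per-instance computation is exactly what the generalized-planning work described above aims to avoid repeating across instances.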

6.04 MB application/pdf
Details

Date Created
  • 2020
Resource Type
  • Text
Collections this item is in
  Note
  • Masters Thesis Computer Engineering 2020