Learning robust controllers that work across many partially observable environments - Robohub

Source: robohub
Published: 11/27/2025
The article discusses the challenge of designing robust controllers for intelligent systems operating in partially observable and uncertain environments, such as autonomous robots navigating with noisy sensors and imperfect models. Traditional approaches model decision-making under uncertainty with partially observable Markov decision processes (POMDPs), which assume a single, known environment model but only partial observability of its state. Real-world scenarios, however, often involve uncertainty not only in the observations but also in the environment model itself, such as unknown obstacle locations or varying dynamics, which a single POMDP cannot fully capture.
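For concreteness, a POMDP is typically formalized as a tuple (S, A, T, R, Ω, O, γ) of states, actions, transition and reward functions, observations, an observation function, and a discount factor; this notation does not appear in the article and is the standard textbook formulation. Because the state is not directly observed, the controller acts on a belief b, a probability distribution over states, which is updated after taking action a and receiving observation o via the Bayesian filter

b'(s') \propto O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s),

where the proportionality constant normalizes b' to sum to one.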
To address this, the authors introduce the hidden-model POMDP (HM-POMDP) framework, which represents a set of possible POMDPs differing in dynamics or rewards but sharing the same structure. Controllers designed for HM-POMDPs must be robust, performing well across all possible models despite the true environment being hidden. Robustness is measured by the worst-case performance over the model set, ensuring reliable operation regardless of which specific environment is encountered. The article highlights their IJCA
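The article does not state the objective formally, but the worst-case robustness criterion it describes is conventionally written as a max-min over the controller (policy) \pi and the set \mathcal{M} of candidate POMDPs:

\max_{\pi} \; \min_{M \in \mathcal{M}} \; \mathbb{E}^{\pi}_{M}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R_M(s_t, a_t) \right].

A controller attaining this max-min value is guaranteed at least that expected return in every model in \mathcal{M}, and hence in the hidden true environment, whichever one it turns out to be.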
Tags
robotics, autonomous-systems, control-systems, partially-observable-environments, POMDP, robust-controllers, machine-learning