Risk-seeking conservative policy iteration with agent-state based policies for Dec-POMDPs with guaranteed convergence

2026-04-10
Multiagent Systems
AI summary

The authors study a way to solve complex group decision problems where each agent can't remember everything perfectly. Instead of trying to find perfect strategies that use all past information, they focus on simpler strategies limited by a fixed memory size. They propose an algorithm that improves these strategies step-by-step and guarantees finding a good, though not always perfect, solution efficiently. Their tests show their method works well compared to others, and having more memory helps improve results. This work helps handle decision-making when agents have limited memory capacity.

Dec-POMDP, decentralized decision-making, agent memory, policy iteration, local optimum, iterated best response, polynomial runtime, risk-seeking, benchmark problems
Authors
Amit Sinha, Matthieu Geist, Aditya Mahajan
Abstract
Optimally solving decentralized decision-making problems modeled as Dec-POMDPs is known to be NEXP-complete. These optimal solutions are policies based on the entire history of observations and actions of an agent. However, some applications may require more compact policies because of limited compute capabilities, which can be modeled by considering a limited number of memory states (or agent states). While such an agent-state based policy class may not contain the optimal solution, it is still of practical interest to find the best agent-state policy within the class. We focus on an iterated best response style algorithm which guarantees monotonic improvement and convergence to a local optimum, with runtime polynomial in the Dec-POMDP model size. To obtain a better local optimum, we use a modified objective which incentivizes risk-seeking, combined with a conservative policy iteration update. Our empirical results show that our approach matches state-of-the-art approaches on several benchmark Dec-POMDPs, achieving near-optimal performance in polynomial runtime despite the limited memory. We also show that using more agent states (a larger memory) leads to better performance. Our approach provides a novel way of incorporating memory constraints on the agents in the Dec-POMDP problem.
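To make the core ideas concrete, here is a minimal toy sketch of iterated best response with a conservative (mixed) policy update over agent-state based policies. It is not the paper's algorithm: the problem is reduced to a one-shot team game with a shared toy agent state, all quantities (payoff table, state distribution, mixing rate `conservative_alpha`) are illustrative placeholders, and the risk-seeking objective is omitted. It only illustrates the monotonic-improvement property of alternating conservative best-response updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-agent team problem (illustrative, not a paper benchmark):
# both agents condition on a shared toy agent state for simplicity.
N_AGENT_STATES, N_ACTIONS = 3, 2
# payoff[s, a1, a2]: team reward for joint action (a1, a2) in agent state s.
payoff = rng.uniform(size=(N_AGENT_STATES, N_ACTIONS, N_ACTIONS))
state_dist = np.full(N_AGENT_STATES, 1.0 / N_AGENT_STATES)

def team_value(pi1, pi2):
    """Expected team reward; pi_i[s, a] = P(agent i plays a in state s)."""
    return float(np.einsum("s,sa,sb,sab->", state_dist, pi1, pi2, payoff))

def conservative_best_response(pi_other, my_payoff, alpha, pi_current):
    """Greedy best response for one agent, mixed with the current policy.

    my_payoff[s, my_action, other_action]; the soft mixing is a
    conservative-policy-iteration-style update, so the team value
    cannot decrease (the objective is linear in each agent's policy).
    """
    q = np.einsum("sb,sab->sa", pi_other, my_payoff)   # per-state action values
    greedy = np.eye(N_ACTIONS)[q.argmax(axis=1)]        # one-hot greedy policy
    return (1.0 - alpha) * pi_current + alpha * greedy

# Iterated best response: agents alternate conservative updates.
pi1 = np.full((N_AGENT_STATES, N_ACTIONS), 0.5)
pi2 = np.full((N_AGENT_STATES, N_ACTIONS), 0.5)
values = [team_value(pi1, pi2)]
for _ in range(50):
    pi1 = conservative_best_response(pi2, payoff, 0.5, pi1)
    pi2 = conservative_best_response(pi1, payoff.transpose(0, 2, 1), 0.5, pi2)
    values.append(team_value(pi1, pi2))

# Monotonic improvement toward a local optimum.
assert all(b >= a - 1e-12 for a, b in zip(values, values[1:]))
print(f"team value improved from {values[0]:.3f} to {values[-1]:.3f}")
```

Because each agent's objective is linear in its own policy when the other policy is fixed, both the greedy step and the convex mixing are guaranteed not to decrease the team value, which is the intuition behind the monotone-improvement and local-optimum guarantee the abstract describes.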