The Performance Impact of Combining Agent Factorization with Different Learning Algorithms for Multiagent Coordination
Abstract
Factorizing a multiagent system refers to partitioning the state-action space among individual agents and defining the interactions between those agents. This so-called agent factorization is of central importance in real-world industrial settings, and it can have significant performance implications. In this work, we explore whether the performance impact of agent factorization differs across learning algorithms in multiagent coordination settings. We evaluated six different agent factorization instances (or agent definitions) in the warehouse traffic management domain, comparing the performance of (mainly) two learning algorithms suitable for learning coordinated multiagent policies: Evolution Strategies (ES), and a cooperative coevolutionary algorithm (CCEA) previously used in this setting. Our results demonstrate that different learning algorithms are affected in different ways by alternative agent definitions. Given this, we can deduce that many important multiagent coordination problems can potentially be solved by an appropriate agent factorization in conjunction with an appropriate choice of learning algorithm. Moreover, our work shows that ES is an effective learning algorithm for the warehouse traffic management domain; whereas, interestingly, celebrated policy gradient methods do not fare well in this complex real-world problem setting.
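To make the ES baseline concrete, the following is a minimal sketch of a natural-gradient-style Evolution Strategies loop (in the spirit of Salimans et al.'s ES), not the exact implementation or hyperparameters used in this paper: perturb the policy parameters with Gaussian noise, evaluate each perturbation's fitness, and update via the fitness-weighted average of the noise. The toy quadratic objective and all parameter values here are illustrative assumptions.

```python
import numpy as np

def evolution_strategies(fitness, theta, sigma=0.2, alpha=0.02,
                         pop_size=50, iterations=200, seed=0):
    """Minimal ES loop (illustrative, not the paper's implementation).

    fitness  -- callable mapping a parameter vector to a scalar reward
    theta    -- initial parameter vector
    sigma    -- stddev of the Gaussian parameter perturbations
    alpha    -- learning rate for the parameter update
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for _ in range(iterations):
        # Sample one Gaussian perturbation per population member.
        noise = rng.standard_normal((pop_size, theta.size))
        # Evaluate fitness of each perturbed parameter vector.
        rewards = np.array([fitness(theta + sigma * n) for n in noise])
        # Standardize rewards so the step size is scale-invariant.
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Gradient estimate: fitness-weighted average of the noise.
        theta = theta + alpha / (pop_size * sigma) * noise.T @ rewards
    return theta

# Toy usage: maximize -||x - 3||^2, whose optimum is x = [3, 3].
best = evolution_strategies(lambda x: -np.sum((x - 3.0) ** 2),
                            theta=np.zeros(2))
```

In a multiagent coordination setting such as warehouse traffic management, `fitness` would roll out the perturbed policies in the environment and return the team reward; the agent factorization determines how `theta` is split across agents.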