reinforcement learning: multiplayer game state vector for variable number opponents

by dektorpan   Last Updated February 07, 2018 05:19 AM

As far as I know, in many of the recent deep reinforcement learning papers, such as DQN, the state vector parameterization always has the same dimension. Sure, they often apply a preprocessing step such as stacking a few frames of the video, converting to grayscale, cropping, etc.

But my question is: if I'm modeling the state space of a multiplayer game that has a variable number of opponents (say the game rules allow for anywhere from 1 to 3 opponents), the way I'm doing it now produces a different-sized state space for different numbers of opponents. I could train a new agent for each number of opponents, but is there a better way to deal with this?
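One common workaround is to fix the state dimension at the maximum number of opponents, zero-padding the unused slots and appending a presence mask so the agent can tell padding from real opponents. Here is a minimal sketch of that idea; the feature counts (`FEATURES_PER_OPPONENT`, the 2-element own-feature vector) are hypothetical placeholders, not from the original game:

```python
import numpy as np

MAX_OPPONENTS = 3          # game rules allow 1 to 3 opponents
FEATURES_PER_OPPONENT = 4  # hypothetical per-opponent feature count

def encode_state(own_features, opponent_features):
    """Pad a variable-length opponent list into a fixed-size state vector.

    own_features: 1-D array of the agent's own features.
    opponent_features: list of 1-D arrays, one per active opponent.
    Returns a vector whose dimension is constant regardless of how
    many opponents are present.
    """
    padded = np.zeros((MAX_OPPONENTS, FEATURES_PER_OPPONENT))
    mask = np.zeros(MAX_OPPONENTS)  # 1.0 marks an active opponent slot
    for i, feats in enumerate(opponent_features):
        padded[i] = feats
        mask[i] = 1.0
    return np.concatenate([np.asarray(own_features), padded.ravel(), mask])

# The state has the same dimension for 1 opponent and for 3:
s1 = encode_state(np.ones(2), [np.ones(4)])
s3 = encode_state(np.ones(2), [np.ones(4)] * 3)
assert s1.shape == s3.shape  # both (2 + 3*4 + 3,) = (17,)
```

This keeps a single agent and a single network input size across all opponent counts; the trade-off is that the network must learn to ignore masked-out slots, and the approach does not scale if the opponent cap is unbounded.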
