Our team is trying to build a backend web server for a game, and we have chosen the standard 3-tier architecture (client - web server - database). We are expecting a decent number of installs and DAU, so we would like to design the system with sharding in mind. It's not that we are going to start with 5 shards or anything (we would begin with only one), but we at least want the implementation to support sharding later. We have looked at a few games' codebases and mostly found two approaches:
Approach-1: There is one static DB and multiple shard DBs. On game launch, the player connects to a game server, which fetches the player's shard from the static DB. All shard-specific data is then handled by that shard DB. If there are 10 shards, every application server needs to connect to all 10 shard DBs.

Approach-2: Here, web servers and DBs are grouped into clusters (like sticky servers). There is a dedicated web server (or servers) connected only to the static DB (call it the static web server). On game launch, players always connect to this static server first, which looks up the player's shard. Then something interesting happens: the static server redirects the client to a shard-specific server (a new, cluster-specific URL), and the client connects to that server, which is connected to only 2 DBs - the static DB and its shard DB. Thus any app server is always connected to only 2 DBs.
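To make the two routing styles concrete, here is a minimal sketch of the lookup flow, with plain dicts standing in for the static DB and the shard DBs. All names (keys, URLs, functions) are hypothetical, not from any real codebase:

```python
# Hypothetical sketch: static DB maps player -> shard; shard DBs hold player data.
STATIC_DB = {"player:42": 1}   # player key -> shard id
SHARD_DBS = {                   # in Approach-1, the app server connects to all of these
    0: {},
    1: {"player:42": {"gold": 100}},
}

def get_shard_id(player_id: int) -> int:
    """Look up the player's shard in the static DB, placing new players on first login."""
    key = f"player:{player_id}"
    if key not in STATIC_DB:
        # Simple modulo placement for illustration; a real system might use
        # consistent hashing or pick the least-loaded shard instead.
        STATIC_DB[key] = player_id % len(SHARD_DBS)
    return STATIC_DB[key]

def load_player(player_id: int) -> dict:
    """Approach-1: the app server talks to the player's shard DB directly."""
    shard = SHARD_DBS[get_shard_id(player_id)]
    return shard.setdefault(f"player:{player_id}", {})

def resolve_server(player_id: int) -> str:
    """Approach-2: the static server instead returns a shard-specific URL
    and redirects the client there (URL scheme is made up)."""
    return f"https://shard{get_shard_id(player_id)}.game.example.com"

print(load_player(42))      # -> {'gold': 100}
print(resolve_server(42))   # -> https://shard1.game.example.com
```

The only structural difference between the two approaches is whether `get_shard_id` is called by every app server (Approach-1) or only by the static server, which then hands the client off (Approach-2).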
We are unsure which of the above approaches to choose. Could anyone tell us which one is better, or whether there is a better alternative? We are also thinking about sharding techniques and would really appreciate any guidance on that (articles or guidelines).
Thank you in advance for your help!