Balancing the WorldForge Client and Server System
by Bryce Harrington

WorldForge uses a client/server model for game distribution. The client is the part of the game that displays the game graphics and handles user input; it runs locally on the player's machine. The client speaks with the server, which is located in a centralized place and does the work of resolving interactions between different players' characters and generally running the game world.

As one can imagine, there are trade-offs involved in optimizing the split between the work the server does and the work the client does. There will be many computers running clients compared with the few (or one) machines running the server, so distributing processing work to the players' machines would lighten the load on the server dramatically and help the performance of the system as a whole. This is called the "thick client" model, and is what games such as StarCraft use; nearly all of the work is done on the players' computers, and a single machine can run hundreds of games.

Yet on the other hand, there are massive cheating possibilities with a thick client. Because the project's code is open source, any player could hack cheat capabilities into a client and unfairly take advantage of the trust and good will of the game community. The easiest way to prevent this is to keep all of the game-processing code on the server. This is the "thin client" model, and is what MUDs often use - MUD clients can be as trivial as a telnet session!

Clearly, there is an optimization point between these two extremes. Netrek, for instance, uses a signed-client model that restricts players to "verified" client applications, secured with cryptographic signatures to stop tampering. We think this is not the best solution for WorldForge, however; it could stymie client development, impose additional and undesirable administrative burdens, and require the existence of a central client-checking organization.
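To make the signed-client idea concrete, here is a minimal sketch of what such a check might look like, using an HMAC over the client binary. This is illustrative only - Netrek's actual scheme is not reproduced here, and the shared key and message contents are invented for the example.

```python
# Illustrative sketch of a signed-client check. The key, the message, and
# the check itself are hypothetical, not Netrek's real mechanism.
import hmac
import hashlib

SHARED_KEY = b"distributed-with-blessed-clients"   # hypothetical secret

def sign_client(binary_bytes):
    """Compute the signature a 'blessed' client would present."""
    return hmac.new(SHARED_KEY, binary_bytes, hashlib.sha256).hexdigest()

def server_accepts(binary_bytes, presented_sig):
    """Server side: accept only clients whose signature verifies."""
    expected = sign_client(binary_bytes)
    return hmac.compare_digest(expected, presented_sig)

official = b"official client v1.0"
sig = sign_client(official)
print(server_accepts(official, sig))          # unmodified client: True
print(server_accepts(b"hacked client", sig))  # tampered binary: False
```

The administrative burden the text mentions is visible even in this toy: someone must hold the key, sign every approved build, and re-sign on every release.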

WorldForge will take a different approach to gaining the benefits of client-server distribution. Instead of distributing server work to the clients, it will strive to distribute work across multiple servers. For example, in the STAGE server design there can be intermediate sub-servers that specialize in churning through the world database and assembling data packages for a subset of the clients, a sub-server to handle NPC AIs, and a sub-server to manage database organization and cleanup, all trusted by a central "main" server.
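The split described above can be sketched as a main server dispatching each kind of work to a specialized, trusted sub-server. The role names mirror the text, but the dispatch mechanism and class names are invented for illustration; the real STAGE design is not shown here.

```python
# Hypothetical sketch of a STAGE-style main server routing work to
# trusted, specialized sub-servers.

class SubServer:
    def __init__(self, role):
        self.role = role

    def handle(self, task):
        # A real sub-server would do the work; here we just report it.
        return f"{self.role} handled {task}"

class MainServer:
    def __init__(self):
        # One trusted sub-server per specialty, keyed by task kind.
        self.routes = {
            "world_query": SubServer("world-db sub-server"),
            "npc_ai":      SubServer("NPC-AI sub-server"),
            "db_cleanup":  SubServer("maintenance sub-server"),
        }

    def dispatch(self, kind, task):
        return self.routes[kind].handle(task)

main = MainServer()
print(main.dispatch("npc_ai", "tick goblin #7"))
# -> NPC-AI sub-server handled tick goblin #7
```

The key property is that every sub-server sits inside the trust boundary, so work can be farmed out without the cheating risks of a thick client.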

As well, there are a number of non-game-critical tasks that the client will already need to do, and others we can hand off to it. For some clients, managing the graphics and displaying them to the screen is a challenging enough requirement. Sophisticated scripting, pathfinding, and player-AI can be added safely at the client end. Pre-verification of commands (e.g., dropping movement commands the client knows the server will reject) can help avoid wasted bandwidth.
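Pre-verification might look like the following sketch: the client checks a movement command against its locally cached map before sending it. The map cache, tile names, and passability rule are all hypothetical stand-ins, not the WorldForge protocol.

```python
# Illustrative client-side pre-verification of movement commands.
# Tile names and the passability rule are invented for the example.

PASSABLE = {"grass", "road", "ford"}

def pre_verify_move(local_map, pos, move):
    """Return True if the move looks legal against the client's cached map.

    A False result lets the client drop the command locally instead of
    wasting bandwidth on a request the server would reject anyway.
    """
    x, y = pos
    dx, dy = move
    tile = local_map.get((x + dx, y + dy))  # None = tile not in the cache
    if tile is None:
        return True                         # unknown terrain: let the server decide
    return tile in PASSABLE

# Example: two cached tiles, one passable and one not.
cached = {(0, 0): "grass", (1, 0): "water"}
print(pre_verify_move(cached, (0, 0), (1, 0)))  # water -> False, never sent
print(pre_verify_move(cached, (0, 0), (0, 1)))  # uncached -> True, server decides
```

Note that this check only saves bandwidth; the server still makes the authoritative ruling, so a client that skips the check gains nothing.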

In other words, we choose the "hacked client" model. We encourage the client to do as much as it can be trusted to do, and take advantage of distribution only on the server side. At every step in the design we assume that someone with a hacked client is accessing the system, and make sure that the design naturally prevents cheating. This frees client developers from worrying about security or other messy regulations, and gives server developers a clear rule by which to stake out their walls.
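In practice the rule means the server re-validates every command against its own authoritative state and ignores anything the client merely claims. The sketch below illustrates the idea with a hypothetical movement handler; the class, fields, and legality rule are invented for the example.

```python
# Illustrative "hacked client" rule: the server trusts only its own state
# and re-checks every command. Names and rules are hypothetical.

class Server:
    def __init__(self):
        self.positions = {}  # authoritative player positions

    def handle_move(self, player, claimed_pos, move):
        # Never trust claimed_pos -- a hacked client can send anything.
        real_pos = self.positions[player]
        dx, dy = move
        if abs(dx) > 1 or abs(dy) > 1:   # server-side legality check
            return real_pos              # reject teleport attempts outright
        new_pos = (real_pos[0] + dx, real_pos[1] + dy)
        self.positions[player] = new_pos
        return new_pos

s = Server()
s.positions["alice"] = (0, 0)
# A hacked client lies about its position and asks for a huge jump:
print(s.handle_move("alice", (99, 99), (50, 0)))  # rejected -> (0, 0)
print(s.handle_move("alice", (0, 0), (1, 0)))     # legal step -> (1, 0)
```

Because the claimed position is simply ignored, a tampered client can do no more than an honest one - which is exactly the wall the design rule stakes out.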

Like any design, this approach has trade-offs. The programming complexity of distributed servers is non-trivial, yet it is an "interesting" problem and should attract plenty of good talent. Also, since the initial test worlds and player numbers will naturally be limited, server distribution is a lower priority and can be dealt with after the many other design problems have been solved. Another problem is the computer network: keeping a large number of computers connected together is no mean feat, and keeping a collection of highly distributed sub-servers properly attached to one another will require some deft programming. Server-server security is another problem area, yet one that has been solved many times over on the Internet, and no doubt something that many of the WorldForge people deal with daily in their jobs.

Looking to the future, this approach should scale well, matching forecast computer capabilities. Unlike the previous few years, when increases in client-side processor speed alone enabled more and more powerful desktop software, today's market is characterized more by the dropping cost of low-end machines; computers continue to get more powerful, but more importantly it is becoming easier and easier for people to own several. Purchasing an extra Celeron specifically to run a WorldForge sub-server is not unthinkable even today. Personal clustering is certainly a practical approach for the future.

Processor speeds look like they will continue to increase in the years ahead, although these days other factors - bus speed, network bandwidth, and memory - can be more immediate limitations. A group of low-end systems on cable modems might perform as well as or better than a single high-end system on a fast network connection, especially if the server system is sophisticated enough to reroute when a sub-server goes down. Such an approach would additionally permit servers that run only part of the day, which opens up other possibilities.
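The rerouting mentioned above can be sketched as the main server keeping a pool of redundant sub-servers for a role and skipping any that have dropped off the network. The health flag and node names below are hypothetical, invented for illustration.

```python
# Illustrative failover sketch: route work to the first healthy sub-server
# in a redundant pool. Names and the health mechanism are hypothetical.

class PooledSubServer:
    def __init__(self, name):
        self.name = name
        self.up = True           # toggled by a (not shown) health check

    def handle(self, task):
        return f"{self.name}: {task} done"

def route(pool, task):
    """Send the task to the first healthy sub-server, skipping dead ones."""
    for node in pool:
        if node.up:
            return node.handle(task)
    raise RuntimeError("no sub-server available for this role")

pool = [PooledSubServer("cable-modem-1"), PooledSubServer("cable-modem-2")]
pool[0].up = False                         # first node drops off the network
print(route(pool, "assemble world view"))  # rerouted to cable-modem-2
```

With failover like this, part-time sub-servers become workable: a node can announce it is going down and the pool simply routes around it.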

As well, the proportion of people with server-class machines and operating systems is very high, especially among the gaming community we will be serving. A knowledgeable computer user can install Linux and properly configure it to safely run a game server in about a day.

By taking advantage of server distribution, we can achieve many of the benefits of thick clients without needing a complex and difficult client-trust mechanism. The remaining trade-offs are solvable through skillful programming.