Server Threading Model
[AESOP Server Library]

The AESOP Server is designed to be multithreaded.

This is important so that the server can handle long-latency operations (reading files, etc.) without impacting the core processing loop.

In general, the main thread of execution (the startup thread) performs the main processing loop. The server object, and other objects, have helper threads to do asynchronous processing. The general pattern is that someone requests work (a TCP message from a client, or the rules engine requesting that a map be loaded), and the request is queued. Worker threads check the queue, perform the work, and either queue the result or update state directly through threadsafe data collections.
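The queue-and-worker pattern above can be sketched with a minimal locked work queue. `WorkQueue` here is a hypothetical stand-in for a Wavepacket threadsafe collection, not the library's actual API; the key property is that its mutex never escapes a method call.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical stand-in for a Wavepacket threadsafe collection:
// a queue whose lock is entirely internal to push() and pop().
class WorkQueue {
public:
    void push(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_jobs.push(std::move(job));
        }
        m_cond.notify_one();
    }

    // Blocks until a job is available.
    std::function<void()> pop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_jobs.empty(); });
        std::function<void()> job = std::move(m_jobs.front());
        m_jobs.pop();
        return job;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<std::function<void()>> m_jobs;
};

// A requester queues work; a helper thread pops and runs it.
int runOneJob() {
    WorkQueue queue;
    int result = 0;
    queue.push([&result] { result = 42; });   // e.g. "load this map"
    std::thread worker([&queue] { queue.pop()(); });
    worker.join();                            // result is visible after join
    return result;
}
```

The requester never blocks on the work itself; it only takes the queue's lock briefly while enqueueing.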

Objects with helper threads should maintain thread pools (or a single helper thread), and NOT spin off threads on demand. This requires queuing requests, but prevents runaway thread creation. If a request queue grows too long, further requests should be denied; clients (remote or local) should be able to handle request failure gracefully.
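The "deny when the queue is too long" rule can be expressed as a bounded queue whose push can fail. `BoundedQueue` is an illustrative sketch under the same no-escaping-lock convention, not the Wavepacket API:

```cpp
#include <cstddef>
#include <mutex>
#include <queue>

// Hypothetical bounded request queue: once it holds maxPending
// items, try_push() denies further work instead of growing.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t maxPending) : m_max(maxPending) {}

    // Returns false (request denied) when the queue is full.
    bool try_push(T item) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_items.size() >= m_max)
            return false;
        m_items.push(std::move(item));
        return true;
    }

    // Returns false when there is nothing to pop.
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_items.empty())
            return false;
        out = std::move(m_items.front());
        m_items.pop();
        return true;
    }

private:
    std::size_t m_max;
    std::mutex m_mutex;
    std::queue<T> m_items;
};
```

A caller that receives `false` from `try_push()` reports failure to the client rather than spinning up an extra thread.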

However, threading carries penalties and dangers as well. The main penalty is that data shared between threads needs to be synchronized. In general, all data on the heap that could be shared is synchronized using threadsafe collections from the Wavepacket Threadsafe Library.

The dangers of threading stem from the same issue: any shared data that isn't protected can cause crashes or other misbehavior. The solution is simple: any data that could be accessed by multiple threads must be protected! More concretely, any data on the heap must be protected by a threadsafe collection.
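One way to honor "heap data must live in a threadsafe collection" is to make the raw container private and reachable only through locking accessors. `SafeMap` below is an illustrative sketch of that shape, assuming nothing about the real Wavepacket collections:

```cpp
#include <map>
#include <mutex>

// Illustrative wrapper: the map can only be touched through
// methods that hold the internal mutex, so no caller can
// forget to synchronize.
template <typename Key, typename Value>
class SafeMap {
public:
    void set(const Key& key, const Value& value) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_data[key] = value;
    }

    // Copies the value out; returns false if the key is absent.
    bool get(const Key& key, Value& out) const {
        std::lock_guard<std::mutex> lock(m_mutex);
        auto it = m_data.find(key);
        if (it == m_data.end())
            return false;
        out = it->second;
        return true;
    }

private:
    mutable std::mutex m_mutex;
    std::map<Key, Value> m_data;
};
```

Because `get()` copies the value out while holding the lock, callers never hold a reference into the protected container.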

The other danger is deadlocks. The wiki article on deadlocks gives a good overview of how deadlocks can occur, and how to avoid them.

The AESOP Server takes a simple approach, which the wiki page describes as eliminating the "hold and wait" condition. In the AESOP Server, the only locks allowed to exist are on the threadsafe collections. Threads should never directly lock or unlock mutexes. Instead, threads should use threadsafe collections for all synchronization and communication. This guarantees that a thread will never take two mutexes at once.
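The rule in practice: a worker thread may touch several threadsafe collections in sequence, but each collection's lock is acquired and released inside its own method, so the thread never holds two mutexes at once. `LockedQueue` is again a hypothetical stand-in for a Wavepacket collection:

```cpp
#include <mutex>
#include <queue>
#include <string>

// Minimal locked queue; the lock never escapes a method call.
template <typename T>
class LockedQueue {
public:
    void push(T item) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_items.push(std::move(item));
    }
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_items.empty())
            return false;
        out = std::move(m_items.front());
        m_items.pop();
        return true;
    }
private:
    std::mutex m_mutex;
    std::queue<T> m_items;
};

// The worker's lock footprint: requests.try_pop() locks and
// unlocks the request queue, THEN results.push() locks and
// unlocks the result queue. At no point are both mutexes held,
// so "hold and wait" cannot occur and no deadlock cycle can form.
void drainRequests(LockedQueue<std::string>& requests,
                   LockedQueue<std::string>& results) {
    std::string job;
    while (requests.try_pop(job))
        results.push("done: " + job);
}
```

All inter-thread communication flows through the two collections; the worker function itself contains no mutex at all.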

For now, all code in the AESOP Server should follow this convention and only allow threadsafe collection objects to maintain mutexes.

If, in the future, data model complexity requires that a thread hold multiple mutexes for some reason, then other solutions could be examined. For the simple data objects used on the server, a hierarchical approach (requiring threads to acquire and release locks in a fixed order) may be fine. But given the realities of how networked games work (dropped UDP packets, network lag leading to clients having to guess at state), the simple approach (never hold multiple locks) should be enough. In edge cases it can lead to temporary inconsistencies, but the impact should be no worse than a dropped UDP packet.
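For reference, the hierarchical alternative mentioned above would look roughly like this: every lock gets a fixed rank, and threads always acquire lower-ranked locks first. This is a sketch of the general technique, not anything currently in the AESOP Server; `Account` and `transfer` are invented names for illustration.

```cpp
#include <mutex>

// Sketch of the hierarchical alternative: each lock has a fixed
// rank in a global hierarchy. If every thread acquires locks in
// ascending rank order, no cycle of waiting threads can form.
struct Account {
    explicit Account(int r) : rank(r) {}
    int rank;           // position in the global lock hierarchy
    std::mutex mutex;
    int balance = 0;
};

// Always lock the lower-ranked object first, regardless of
// which one is the source and which is the destination.
void transfer(Account& from, Account& to, int amount) {
    Account& first  = (from.rank < to.rank) ? from : to;
    Account& second = (from.rank < to.rank) ? to : from;
    std::lock_guard<std::mutex> lockA(first.mutex);
    std::lock_guard<std::mutex> lockB(second.mutex);
    from.balance -= amount;
    to.balance   += amount;
}
```

Note the contrast with the current convention: here a thread deliberately holds two mutexes, which is exactly what the threadsafe-collections-only rule forbids.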