Multi-Channel Servers
Many online games run multiple channels (also called shards or instances) — identical copies of the game server that players choose between. Each channel runs as a separate process with its own TCP/WebSocket port and HTTP port, but all channels share the same database.
Altruist doesn't have a built-in "channel" concept — each channel is simply a standalone Altruist server instance with different port configuration. Docker Compose makes it easy to spin up multiple channels from the same image.
Architecture
All channels connect to the same database. Players pick a channel from a server list, and the client connects to that channel's TCP/WebSocket port.
Note:
Each channel is a fully independent Altruist server. They don't communicate with each other directly. If you need cross-channel features (global chat, cross-channel trading), you'll need an external broker (Redis pub/sub, message queue, etc.).
Config Override with Environment Variables
Altruist loads config from config.yml but allows environment variable overrides using the ALTRUIST__ prefix. Nested keys use double underscores (__):
| Config YAML Path | Environment Variable |
|---|---|
| altruist.server.transport.config.port | ALTRUIST__SERVER__TRANSPORT__CONFIG__PORT |
| altruist.server.http.port | ALTRUIST__SERVER__HTTP__PORT |
| altruist.persistence.database.host | ALTRUIST__PERSISTENCE__DATABASE__HOST |
This means you can keep a single config.yml for shared settings and override only the ports per channel via env vars.
Note:
You don't need separate config files per channel. The base config.yml defines all shared settings (database credentials, game engine config, world definitions). Only the ports differ, and those come from environment variables.
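As a rough illustration of the naming convention (a hypothetical sketch, not Altruist's actual loader), mapping an `ALTRUIST__`-prefixed variable back to its dotted config path is just lowercasing and swapping `__` for `.`:

```python
def env_to_config_path(var: str) -> str:
    """Map an ALTRUIST__-style env var name to its dotted YAML path.

    Illustrative only -- Altruist's config loader does the equivalent
    of this internally when merging environment overrides.
    """
    return var.lower().replace("__", ".")

print(env_to_config_path("ALTRUIST__SERVER__HTTP__PORT"))
# altruist.server.http.port
```

This is why nested keys need the double underscore: a single underscore would be ambiguous with underscores inside a key name.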
Base Config
Create a shared config.yml with default settings:
altruist:
server:
host: "0.0.0.0"
transport:
type: "tcp"
config:
port: 13001
http:
port: 8081
persistence:
database:
provider: "postgres"
host: "localhost"
port: 5432
database: "mygame"
user: "gameserver"
password: "secret"
cache:
provider: "inmemory"
game:
engine:
framerateHz: 30
unit: "hz"
worlds:
partitioner: { width: 50000, height: 50000, depth: 50000 }
items:
- index: 0
id: "main"
size: { x: 3300000, y: 1900000, z: 50000 }
Dockerfile
A standard multi-stage .NET build:
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
WORKDIR /src
COPY src/MyGame/MyGame.csproj src/MyGame/
COPY libs/ libs/
RUN dotnet restore src/MyGame/MyGame.csproj
COPY src/ src/
RUN dotnet publish src/MyGame/MyGame.csproj -c Release -o /app
FROM mcr.microsoft.com/dotnet/aspnet:9.0
WORKDIR /app
COPY --from=build /app .
COPY config/ /app/config/
ENTRYPOINT ["dotnet", "MyGame.dll"]
Docker Compose: Infrastructure
Separate your database from your game servers. This stack is started once and stays up while you redeploy channels independently:
services:
postgres:
image: postgres:16-alpine
environment:
POSTGRES_USER: gameserver
POSTGRES_PASSWORD: secret
POSTGRES_DB: mygame
ports:
- "5432:5432"
volumes:
- pg_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U gameserver"]
interval: 5s
timeout: 5s
retries: 5
volumes:
pg_data:
Docker Compose: Game Channels
Each channel is the same image with different ports:
services:
channel1:
build:
context: ..
dockerfile: infra/Dockerfile
ports:
- "13001:13001"
- "8081:8081"
environment:
MYGAME__CHANNEL__ID: 1
ALTRUIST__SERVER__TRANSPORT__CONFIG__PORT: 13001
ALTRUIST__SERVER__HTTP__PORT: 8081
ALTRUIST__PERSISTENCE__DATABASE__HOST: host.docker.internal
restart: unless-stopped
channel2:
build:
context: ..
dockerfile: infra/Dockerfile
ports:
- "13002:13002"
- "8082:8082"
environment:
MYGAME__CHANNEL__ID: 2
ALTRUIST__SERVER__TRANSPORT__CONFIG__PORT: 13002
ALTRUIST__SERVER__HTTP__PORT: 8082
ALTRUIST__PERSISTENCE__DATABASE__HOST: host.docker.internal
restart: unless-stopped
channel3:
build:
context: ..
dockerfile: infra/Dockerfile
ports:
- "13003:13003"
- "8083:8083"
environment:
MYGAME__CHANNEL__ID: 3
ALTRUIST__SERVER__TRANSPORT__CONFIG__PORT: 13003
ALTRUIST__SERVER__HTTP__PORT: 8083
ALTRUIST__PERSISTENCE__DATABASE__HOST: host.docker.internal
restart: unless-stopped
Note:
host.docker.internal resolves to the host machine from inside Docker containers. This works on Docker Desktop (Windows/macOS). On Linux, use the actual host IP or a Docker network instead.
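On Linux, one option is to put the database and the channels on a shared user-defined Docker network and point the database host at the postgres service name instead. A sketch under that assumption (the network name `gamenet` is an illustrative choice, not something Altruist requires):

```yaml
# docker-compose-infra.yml — attach postgres to a named network
services:
  postgres:
    networks: [gamenet]
networks:
  gamenet:
    name: gamenet
---
# docker-compose-services.yml — channels join the same network,
# so the database is reachable as "postgres" by service name
services:
  channel1:
    networks: [gamenet]
    environment:
      ALTRUIST__PERSISTENCE__DATABASE__HOST: postgres
networks:
  gamenet:
    external: true
```

This also avoids publishing port 5432 on the host at all — only the game ports need to be exposed.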
Running
# Start the database first
docker compose -f infra/docker-compose-infra.yml up -d
# Wait for it to be healthy, then start channels
docker compose -f infra/docker-compose-services.yml up -d --build
Reading the Channel ID
Use MYGAME__CHANNEL__ID (or any custom env var) in your game code to identify which channel the current instance is running as:
[Service]
public class ChannelInfo
{
public int ChannelId { get; }
public ChannelInfo(
[AppConfigValue("mygame:channel:id", "1")] int channelId)
{
ChannelId = channelId;
}
}
Note:
The MYGAME__ prefix works because Altruist's config loader merges all environment variables. The ALTRUIST__ prefix is for framework settings, but you can use any prefix for your own game-specific config.
Performance & Capacity
Altruist is designed for high-throughput game servers. Here's what each component handles and where the boundaries are:
What Scales Well on a Single Channel
| Component | At Scale | Why |
|---|---|---|
| TCP/WebSocket connections | 10,000+ | Async I/O, standard .NET socket handling |
| MessagePack serialization | 10,000+ | Zero-alloc, ~1 microsecond per packet |
| Game tick (25–30 Hz) | 10,000+ | Only ticks entities near players (zone-based hibernation) |
| Visibility system | 10,000+ | Spatial partitioning — O(nearby), not O(all) |
| Zone activation | 10,000+ | Lazy loading, only active zones consume resources |
| Movement validation | 10,000+ | Grid/spatial lookup, O(1) |
| World ticking | 10,000+ | Parallel per-world — each world ticks on its own thread |
Where Bottlenecks Appear
| Component | Issue | Mitigation |
|---|---|---|
| DB writes | Periodic saves from many players = high write rate | Use SaveBatchAsync — one batched upsert for 100 players is 50x faster than 100 individual saves |
| Dense area broadcasts | 200+ players in view range = ~200² visibility packets/tick, since every player receives updates about every other | Reduce altruist:game:visibility:range in config (default: 5000 units — the meaning depends on your game's coordinate scale), or split crowded areas into separate zones so fewer entities overlap |
| Memory | Each player entity + connection + world objects | ~2–4 GB for a few thousand players — fine for a dedicated server |
| Postgres single instance | Connection pool exhaustion under heavy write load | Read replicas for queries, or Redis for hot data |
Note:
Altruist includes SaveBatchAsync on vaults, which sends a single INSERT ... ON CONFLICT UPDATE query for an entire batch of entities. Use this for periodic autosaves instead of saving each player individually.
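The round trips are the whole story: one statement for the batch instead of one per player. A hypothetical sketch of what such a batched upsert builds under the hood (this is not Altruist's SaveBatchAsync, just an illustration of the SQL shape):

```python
def build_batch_upsert(table: str, rows: list[dict]) -> tuple[str, list]:
    """Build one multi-row INSERT ... ON CONFLICT UPDATE statement.

    Hypothetical stand-in for a batched save: one statement (one DB
    round trip) for the whole batch instead of len(rows) statements.
    Assumes every row has the same columns and "id" is the conflict key.
    """
    cols = list(rows[0])
    # ($1, $2), ($3, $4), ... — one tuple of placeholders per row
    placeholders = ", ".join(
        "(" + ", ".join(f"${i * len(cols) + j + 1}" for j in range(len(cols))) + ")"
        for i in range(len(rows))
    )
    updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in cols if c != "id")
    sql = (
        f"INSERT INTO {table} ({', '.join(cols)}) VALUES {placeholders} "
        f"ON CONFLICT (id) DO UPDATE SET {updates}"
    )
    params = [row[c] for row in rows for c in cols]
    return sql, params

sql, params = build_batch_upsert(
    "players", [{"id": 1, "gold": 50}, {"id": 2, "gold": 75}]
)
# one statement, one round trip, for the entire batch
```

The per-statement overhead (parse, plan, network round trip, WAL flush) is paid once per batch rather than once per player, which is where the large speedup for periodic autosaves comes from.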
Recommended Capacity Per Channel
| Players | Architecture | Notes |
|---|---|---|
| 1,000–2,000 | Single channel, no changes | Comfortable on one instance out of the box |
| 2,500–5,000 | Single channel, optimized | Enable batch DB writes and parallel world ticking |
| 5,000–10,000 | Multi-channel (2–4 channels) | Split players across channels, shared database |
| 10,000+ | Multi-channel + infrastructure scaling | 4+ channels, read replicas, Redis cache, consider Kubernetes |
Note:
We recommend adding a second channel once you approach 2,500 concurrent players on a single instance. Each channel is a fully independent Altruist server — all state lives in the shared database, so you can add or remove channels without data migration. Just update the compose file and redeploy.
Example: 10,000 Players
For 10,000 concurrent players, run 4 channels with ~2,500 players each:
services:
channel1:
# ... TCP: 13001, HTTP: 8081
channel2:
# ... TCP: 13002, HTTP: 8082
channel3:
# ... TCP: 13003, HTTP: 8083
channel4:
# ... TCP: 13004, HTTP: 8084
Each channel handles its own game worlds, visibility, and entity ticking independently. The database is the only shared resource — batch writes keep the write load manageable.
Note:
These numbers assume a typical game workload (MMO-style: movement, combat, NPC AI, inventory). CPU-heavy features like real-time physics simulation on every entity or complex pathfinding for thousands of NPCs will lower the per-channel capacity. Profile your specific workload to find the right channel count.
Server List Endpoint
To let clients discover available channels, add an HTTP endpoint to a dedicated "lobby" service or to each channel:
[HttpGet("/api/channels")]
public IActionResult GetChannels()
{
return Ok(new[]
{
new { Id = 1, Name = "Channel 1", Host = "game.example.com", Port = 13001, Players = 142 },
new { Id = 2, Name = "Channel 2", Host = "game.example.com", Port = 13002, Players = 87 },
new { Id = 3, Name = "Channel 3", Host = "game.example.com", Port = 13003, Players = 203 },
});
}
The client fetches this list at startup, the player picks a channel, and the client connects to that channel's TCP/WebSocket port.
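On the client side, channel selection can be as simple as preferring the least-populated channel. A hedged sketch of that logic (field names match the example payload above; the HTTP fetch itself is omitted):

```python
def pick_channel(channels: list[dict]) -> dict:
    """Pick the channel with the fewest current players.

    `channels` is the decoded JSON array from the /api/channels endpoint.
    """
    return min(channels, key=lambda ch: ch["Players"])

channels = [
    {"Id": 1, "Host": "game.example.com", "Port": 13001, "Players": 142},
    {"Id": 2, "Host": "game.example.com", "Port": 13002, "Players": 87},
    {"Id": 3, "Host": "game.example.com", "Port": 13003, "Players": 203},
]
best = pick_channel(channels)
# connect to best["Host"]:best["Port"] — channel 2 in this example
```

In practice you'd usually still show the full list and let the player choose, using the population counts only as a default or a "recommended" badge.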