Compare commits: 3 commits, `b17d199301` ... `6511353b55`

| Author | SHA1 | Date |
|---|---|---|
| | 6511353b55 | |
| | 620429c9b8 | |
| | 88424675b5 | |

`.gitignore` — vendored, new file (37 lines)

@@ -0,0 +1,37 @@
# Python
__pycache__/
*.py[cod]
*.egg-info/
.venv/
venv/

# Environment
.env
.env.local
.env.*.local

# Database
*.db
*.db-shm
*.db-wal

# IDE
.vscode/
.idea/
*.sw?
*.suo

# OS
.DS_Store
Thumbs.db

# Test / coverage artifacts
coverage/
playwright-report/
test-results/

# Claude Code local settings
.claude/settings.local.json

# Build output
dist/

`ARCHITECTURE.md` (1140 lines) — diff suppressed because it is too large

`DATABASE.md` (991 lines) — diff suppressed because it is too large

`FRONTEND.md` (1667 lines) — diff suppressed because it is too large
@@ -1,503 +0,0 @@

# Languard Servers Manager — Implementation Plan

## Prerequisites

Before starting, ensure the following are available:

- Python 3.11+
- A working Arma 3 dedicated server installation (for testing the first adapter)
- Node.js 18+ (for the frontend dev server)
- The reference docs: ARCHITECTURE.md, DATABASE.md, API.md, MODULES.md, THREADING.md

---

## Phase 0 — Adapter Framework (New)

**Goal:** Build the adapter protocol + registry system before any other code. This is the foundation that makes every subsequent phase modular.

### Step 0.1 — Adapter protocols, exceptions, and registry

1. Create `backend/adapters/__init__.py` — auto-imports built-in adapters
2. Create `backend/adapters/protocols.py` — all capability Protocol definitions:
   - `ConfigGenerator` (merged: schema + generation), `ProcessConfig`, `LogParser`
   - `RemoteAdmin`, `RemoteAdminClient`
   - `MissionManager`, `ModManager`, `BanManager`
   - `GameAdapter` (composite protocol with a `has_capability()` method)
   - `ConfigGenerator` includes `get_sections()`, `get_sensitive_fields(section)`, `get_config_version()`
   - `RemoteAdmin` includes `get_player_data_schema() -> type[BaseModel] | None`
   - `MissionManager` includes `get_mission_data_schema() -> type[BaseModel] | None`
   - `ModManager` includes `get_mod_data_schema() -> type[BaseModel] | None`
   - `BanManager` includes `get_ban_data_schema() -> type[BaseModel] | None`
3. Create `backend/adapters/exceptions.py` — typed adapter exceptions:
   - `AdapterError` (base)
   - `ConfigWriteError` — atomic write failed (tmp file cleanup done)
   - `ConfigValidationError` — adapter Pydantic validation failed
   - `LaunchArgsError` — invalid launch arguments
   - `RemoteAdminError` — admin protocol communication failed
   - `ExeNotAllowedError` — executable not in adapter allowlist
4. Create `backend/adapters/registry.py` — `GameAdapterRegistry` singleton
5. Add a `has_capability(name) -> bool` method to the `GameAdapter` protocol — core uses explicit capability probes instead of scattered `None` checks
6. Write unit tests: register adapter, get adapter, list game types, missing adapter raises an error, exceptions are catchable by type, `has_capability` returns correct booleans
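
The protocol-and-registry shape can be sketched as follows — a minimal, illustrative subset (the real protocols carry many more methods, and `DemoAdapter` is a hypothetical stand-in):

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class LogParser(Protocol):
    """Illustrative capability protocol (a tiny subset of the real one)."""
    def parse_line(self, line: str) -> dict[str, Any]: ...


class GameAdapterRegistry:
    """Singleton-style registry mapping game_type -> adapter instance."""
    _adapters: dict[str, Any] = {}

    @classmethod
    def register(cls, game_type: str, adapter: Any) -> None:
        cls._adapters[game_type] = adapter

    @classmethod
    def get(cls, game_type: str) -> Any:
        try:
            return cls._adapters[game_type]
        except KeyError:
            raise LookupError(f"no adapter registered for {game_type!r}") from None

    @classmethod
    def list_game_types(cls) -> list[str]:
        return sorted(cls._adapters)


class DemoAdapter:
    """Toy adapter: capabilities are attributes, None means 'not supported'."""
    log_parser = None            # no LogParser capability
    remote_admin = object()      # pretend this is a RemoteAdmin implementation

    def has_capability(self, name: str) -> bool:
        # Explicit capability probe, instead of scattered None checks in core
        return getattr(self, name, None) is not None


GameAdapterRegistry.register("demo", DemoAdapter())
```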

### Step 0.2 — Arma 3 adapter skeleton

1. Create `backend/adapters/arma3/__init__.py` — exports and registers `ARMA3_ADAPTER`
2. Create `backend/adapters/arma3/adapter.py` — `Arma3Adapter` class (all methods return stubs initially)
3. Create `backend/adapters/arma3/process_config.py` — `Arma3ProcessConfig` (full implementation)
4. Create `backend/adapters/arma3/config_generator.py` — Pydantic models (ServerConfig, BasicConfig, ProfileConfig, LaunchConfig, RConConfig) + `Arma3ConfigGenerator` (schema + generation merged)
5. **Third-party adapter loading**: add a `languard.adapters` entry-point group to `pyproject.toml`:

   ```toml
   [project.entry-points."languard.adapters"]
   arma3 = "adapters.arma3:ARMA3_ADAPTER"
   ```

   Core scans entry points at startup via `importlib.metadata`, in addition to importing the built-in adapters.
6. Write unit tests: adapter registers, protocols satisfied, config schema produces valid JSON Schema

**Test:** Import the adapters module → `GameAdapterRegistry.get("arma3")` returns a valid adapter. `GameAdapterRegistry.list_game_types()` returns `[{"game_type": "arma3", "display_name": "Arma 3", ...}]`.

---

## Phase 1 — Foundation

**Goal:** A running FastAPI server with DB, auth, and basic server CRUD using the adapter framework.

### Step 1.1 — Project scaffold

```
mkdir backend
cd backend
python -m venv venv
venv\Scripts\activate   # Windows (use source venv/bin/activate elsewhere)
pip install fastapi uvicorn[standard] sqlalchemy python-jose[cryptography] passlib[bcrypt] cryptography psutil apscheduler python-multipart slowapi pytest pytest-asyncio httpx
pip freeze > requirements.txt
```

Create:
- `backend/config.py` — Settings class
- `backend/main.py` — FastAPI app factory, startup/shutdown hooks
- `backend/conftest.py` — pytest fixtures (in-memory SQLite, test client)
- `.env.example` — all env vars documented
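
A minimal `Settings` sketch using only the stdlib — the variable names are assumptions, and the real class may well be Pydantic-based:

```python
import os
from dataclasses import dataclass, field


def _env(name: str, default: str) -> str:
    return os.getenv(name, default)


@dataclass(frozen=True)
class Settings:
    """Environment-driven app settings (illustrative subset; names are assumed)."""
    database_path: str = field(default_factory=lambda: _env("LANGUARD_DB_PATH", "languard.db"))
    jwt_secret: str = field(default_factory=lambda: _env("LANGUARD_JWT_SECRET", "change-me"))
    servers_root: str = field(default_factory=lambda: _env("LANGUARD_SERVERS_ROOT", "./servers"))


settings = Settings()
```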

### Step 1.2 — Database + Migrations

1. Create `backend/core/migrations/001_initial_schema.sql` — all core tables:
   - `schema_migrations`, `users`, `servers` (with `game_type`), `game_configs`
   - `mods` (with `game_type`, `game_data`), `server_mods`
   - `missions`, `mission_rotation` (with `game_data`)
   - `players` (with `slot_id` TEXT, `game_data`), `player_history`
   - `bans` (with `game_data`), `logs`, `metrics`, `server_events`
   - Include all CHECK constraints and indexes
   - Set `PRAGMA busy_timeout=5000` in the engine setup
2. Create `backend/core/dal/event_repository.py`
3. Create `backend/database.py`:
   - `get_engine()` with WAL + FK pragmas
   - `run_migrations()`
   - `get_db()` — FastAPI dependency
   - `get_thread_db()` — thread-local session factory
4. Call `run_migrations()` in `main.py:on_startup()`

**Test:** Start the app and confirm `languard.db` is created with all tables. Run `pytest` with in-memory SQLite.
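
The pragma setup can be sketched with the stdlib `sqlite3` module for brevity (the real `get_engine()` applies the same pragmas through SQLAlchemy's connect event):

```python
import sqlite3


def connect(path: str = ":memory:") -> sqlite3.Connection:
    """Open a connection with the pragmas the plan calls for."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")   # concurrent readers + one writer (no-op for :memory:)
    conn.execute("PRAGMA foreign_keys=ON")    # FK enforcement is off by default in SQLite
    conn.execute("PRAGMA busy_timeout=5000")  # wait up to 5s instead of failing on a locked DB
    return conn
```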

### Step 1.3 — Auth module

1. `backend/core/auth/utils.py` — `hash_password`, `verify_password`, `create_access_token`, `decode_access_token`
2. `backend/core/auth/schemas.py` — `LoginRequest`, `TokenResponse`, `UserResponse`
3. `backend/core/auth/service.py` — `AuthService`
4. `backend/core/auth/router.py` — login, me, users CRUD
5. `backend/dependencies.py` — `get_current_user`, `require_admin`, `get_adapter_for_server`
6. `main.py` — seed a default admin user on first startup (random password printed to stdout)
7. Add rate limiting to `POST /auth/login` (5 attempts/minute per IP via slowapi)

**Test:** `POST /api/auth/login` returns a JWT. `GET /api/auth/me` with the token returns the user. Rate limiting returns 429 after 5 failed attempts.
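
The token helpers reduce to standard HS256 JWT handling; here is a dependency-free sketch of the idea (production code would use `python-jose`, installed above, rather than hand-rolling this):

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def create_access_token(sub: str, secret: str, ttl_s: int = 3600) -> str:
    """Build an HS256 JWT: b64url(header).b64url(claims).b64url(hmac)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = _b64url(json.dumps({"sub": sub, "exp": int(time.time()) + ttl_s}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"


def decode_access_token(token: str, secret: str) -> dict:
    """Verify the signature and expiry, then return the claims."""
    header, claims_b64, sig = token.split(".")
    signing_input = f"{header}.{claims_b64}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(_b64url_decode(claims_b64))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```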

### Step 1.4 — Server CRUD (no process management yet)

1. `backend/core/dal/server_repository.py`
2. `backend/core/dal/config_repository.py` — manages the `game_configs` table
3. `backend/core/servers/schemas.py` — `CreateServerRequest` (includes `game_type`)
4. `backend/core/servers/router.py` — GET, POST, PUT, DELETE /servers
5. `backend/core/servers/service.py` — CRUD methods + `create_server` seeds config sections from adapter defaults
6. `backend/core/utils/file_utils.py` — `ensure_server_dirs()` (uses the adapter's `get_server_dir_layout()`)
7. `backend/core/utils/port_checker.py` — `is_port_in_use()`, `check_server_ports_available()`
   - **Full cross-game port checking**: query ALL running servers, resolve each one's adapter, get each adapter's port conventions, and check the full derived port set
   - Example: Arma 3 uses game port + 1 (Steam query) and a BattlEye RCon port; another game may use different conventions — all are checked

**Test:** Create a server via the API with `game_type: "arma3"` → confirm the DB row + `game_configs` rows + directory are created. Create a second server with a port that conflicts with the derived ports of the first → confirm a 409 error.
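
The cross-game check reduces to set intersection over each adapter's derived ports; a sketch with hypothetical offset lists (the real conventions live in each adapter):

```python
def derived_ports(game_port: int, offsets: list[int]) -> set[int]:
    """Expand a base game port into the full set a game binds (offsets come from the adapter)."""
    return {game_port + off for off in offsets}


def check_server_ports_available(
    new_port: int,
    new_offsets: list[int],
    running: list[tuple[int, list[int]]],   # (game_port, adapter offsets) per running server
) -> bool:
    """True if the new server's derived ports collide with no running server's ports."""
    wanted = derived_ports(new_port, new_offsets)
    return all(not (wanted & derived_ports(port, offs)) for port, offs in running)
```

For example, an Arma 3 adapter might declare offsets `[0, 1]` (game + Steam query) and add its RCon port to the set.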

### Step 1.5 — Game type discovery endpoints

1. `backend/core/games/router.py` — `GET /games`, `GET /games/{type}`, `GET /games/{type}/config-schema`, `GET /games/{type}/defaults`

**Test:** `GET /api/games` returns `[{"game_type": "arma3", ...}]`. `GET /api/games/arma3/config-schema` returns JSON Schema for all 5 Arma 3 config sections.

### Step 1.6 — Migration script for existing Arma 3 data

If upgrading from the single-game schema, create a migration script:

1. Create `backend/core/migrations/002_migrate_arma3_config.py`
2. Column type map: `max_players` INT → JSON `maxPlayers`, `hostname` TEXT → JSON `hostname`, etc.
3. `migrate_config_table()`: read old Arma 3 config table rows → build `game_configs` JSON blobs → insert into the new table → delete the old rows
4. `migrate_player_data()`: convert `player_num` INTEGER → `slot_id` TEXT
5. Transaction + rollback: the whole migration runs inside a single DB transaction; on any failure, it rolls back fully
6. Row count verification: after migration, assert row counts match between the old and new tables
7. Idempotent: safe to run multiple times (checks whether the migration is already applied)

**Test:** Create a test DB with the old single-game schema + sample data → run the migration script → verify all data is in the new tables → verify the old tables are dropped.
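
The transaction, verification, and idempotency requirements can be sketched against a toy schema (the table and column names here are illustrative, not the real ones):

```python
import sqlite3


def run_migration_002(conn: sqlite3.Connection) -> None:
    """Idempotent single-transaction migration with row-count verification."""
    conn.isolation_level = None            # manage the transaction explicitly
    conn.execute("BEGIN")
    try:
        if conn.execute(
            "SELECT 1 FROM schema_migrations WHERE version = '002'"
        ).fetchone():
            conn.execute("ROLLBACK")       # already applied: nothing to do
            return
        old_count = conn.execute("SELECT COUNT(*) FROM old_config").fetchone()[0]
        conn.execute(
            "INSERT INTO game_configs (server_id, config_json) "
            "SELECT id, json_object('maxPlayers', max_players) FROM old_config"
        )
        new_count = conn.execute("SELECT COUNT(*) FROM game_configs").fetchone()[0]
        if new_count != old_count:         # row-count verification
            raise RuntimeError("migration row count mismatch")
        conn.execute("DELETE FROM old_config")
        conn.execute("INSERT INTO schema_migrations (version) VALUES ('002')")
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")           # full rollback on any failure
        raise
```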

---

## Phase 2 — Arma 3 Adapter Implementation

**Goal:** Complete the Arma 3 adapter with config generation and process management. This phase proves the adapter architecture works end-to-end with the primary game.

### Step 2.1 — Config Generator (Arma 3 adapter)

1. `backend/adapters/arma3/config_generator.py` — `Arma3ConfigGenerator`
2. **Use a structured builder** (NOT f-strings) — escape double quotes and newlines in all user-supplied string values
3. Write `server.cfg` covering all params from the config schema, including mission rotation as a `class Missions {}` block
4. Write `basic.cfg`
5. Write `server.Arma3Profile` — written to `servers/{id}/server/server.Arma3Profile`
6. Write `beserver.cfg` — creates the `battleye/` directory, writes the RCon config
7. `build_launch_args()` — assembles the full CLI arg list, including `-bepath=./battleye`
8. `preview_config()` — renders all files without writing to disk; returns a `dict[str, str]` of label → content (filenames for file-based configs, variable names for env vars, argument names for CLI)
9. Set file permissions 0600 on config files containing passwords
10. **Atomic write pattern**: all config files are written to `.tmp` files first, then renamed atomically with `os.replace()`. On any write failure, all `.tmp` files are cleaned up and the original files remain untouched. Raises `ConfigWriteError` on failure.

**Test:** `Arma3ConfigGenerator.write_configs(server_id, dir, config)` → inspect all generated files. Test config injection prevention: set the hostname to `X"; passwordAdmin = "pwned"; //` — verify the generated server.cfg does NOT contain the injected directive. Test the atomic write: mock `os.replace()` to raise OSError → confirm `.tmp` files are cleaned up and the original files are untouched.
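
The atomic write pattern from item 10 can be sketched independently of any game (function and exception names follow the plan; the staging details are illustrative):

```python
import os
import tempfile


class ConfigWriteError(Exception):
    """Raised when the atomic multi-file write cannot complete."""


def write_configs_atomically(files: dict[str, str]) -> None:
    """Stage every target as a tmp file, then commit with os.replace().

    An error during the write phase leaves every original file untouched;
    leftover tmp files are always cleaned up.
    """
    staged: list[tuple[str, str]] = []
    try:
        for path, content in files.items():
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".", suffix=".tmp")
            with os.fdopen(fd, "w", encoding="utf-8") as fh:
                fh.write(content)
            staged.append((tmp, path))
        for tmp, path in staged:
            os.replace(tmp, path)          # atomic rename on POSIX and Windows
    except OSError as exc:
        for tmp, _ in staged:
            if os.path.exists(tmp):
                os.unlink(tmp)             # clean up any staged tmp files
        raise ConfigWriteError(str(exc)) from exc
```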

### Step 2.2 — Process Manager (core)

1. `backend/core/servers/process_manager.py` — `ProcessManager` singleton (game-agnostic)
2. `start(server_id, exe_path, args, cwd=servers/{id}/)`
3. `stop(server_id, timeout=30)` — on Windows, `terminate()` is a hard kill
4. `kill()`, `is_running()`, `get_pid()`
5. `recover_on_startup()` — verify the PID is alive AND the process name matches the adapter allowlist (prevents PID reuse)
6. Wire up `ServerService.start()` and `ServerService.stop()` — both delegate to the adapter for exe validation and config generation
7. Add `POST /servers/{id}/start`, `POST /servers/{id}/stop`, `POST /servers/{id}/kill` endpoints
8. **Typed exception handling in the start flow**: catch and map adapter exceptions to HTTP responses:
   - `ConfigWriteError` → 500 (atomic write failed, tmp cleaned)
   - `ConfigValidationError` → 422 (invalid config values)
   - `LaunchArgsError` → 400 (invalid launch arguments)
   - `ExeNotAllowedError` → 403 (executable not in adapter allowlist)

**Test:** Start a server via the API → confirm the process appears in Task Manager. Stop it → confirm the process ends. Test error paths: set an invalid exe path → confirm a 403 `ExeNotAllowedError` response.
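
The mapping itself can be a plain table walked with `isinstance`, so exception subclasses inherit their parent's status; a framework-free sketch:

```python
class AdapterError(Exception): ...
class ConfigWriteError(AdapterError): ...
class ConfigValidationError(AdapterError): ...
class LaunchArgsError(AdapterError): ...
class ExeNotAllowedError(AdapterError): ...


# Specific types listed explicitly; AdapterError itself falls through to 500.
STATUS_BY_EXCEPTION: list[tuple[type, int]] = [
    (ConfigWriteError, 500),
    (ConfigValidationError, 422),
    (LaunchArgsError, 400),
    (ExeNotAllowedError, 403),
]


def http_status_for(exc: AdapterError) -> int:
    """Map a typed adapter exception to the HTTP status the start flow returns."""
    for exc_type, status in STATUS_BY_EXCEPTION:
        if isinstance(exc, exc_type):
            return status
    return 500  # unknown adapter error: treat as a server-side failure
```

In the router this result would feed `fastapi.HTTPException(status_code=..., detail=str(exc))`.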

### Step 2.3 — Config endpoints (core + adapter validation)

1. `GET /servers/{id}/config` — reads all sections from `game_configs`
2. `GET /servers/{id}/config/{section}` — reads a single section; the response includes `_meta` with `config_version` and `schema_version`
3. `PUT /servers/{id}/config/{section}` — validates against the adapter's Pydantic model, encrypts sensitive fields via `adapter.get_sensitive_fields(section)`, stores in `game_configs`
   - **Optimistic locking**: the client must send `config_version` in the request body; if it doesn't match the current row's `config_version`, return 409 Conflict with the `CONFIG_VERSION_CONFLICT` error code
   - On a successful write, increment the row's `config_version`
4. `GET /servers/{id}/config/preview` — delegates to the adapter's `preview_config()`, returns a `dict[str, str]` of label → content
5. `GET /servers/{id}/config/download/{filename}` — filename validated against the adapter allowlist

**Test:** Update the hostname via the API → regenerate and start the server → confirm the new hostname appears in the server browser. Test optimistic locking: two concurrent PUT requests with the same `config_version` → one succeeds (200), one fails (409 Conflict).
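
Optimistic locking is a compare-and-swap in SQL — the version check rides in the `WHERE` clause, so an unmatched row count means a conflict. A sketch against a toy table:

```python
import sqlite3


def update_config_section(
    conn: sqlite3.Connection,
    server_id: int,
    section: str,
    new_json: str,
    expected_version: int,
) -> bool:
    """Return True on success, False on a version conflict (the router maps False to 409)."""
    cur = conn.execute(
        "UPDATE game_configs "
        "SET config_json = ?, config_version = config_version + 1 "
        "WHERE server_id = ? AND section = ? AND config_version = ?",
        (new_json, server_id, section, expected_version),
    )
    conn.commit()
    return cur.rowcount == 1
```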

---

## Phase 3 — Background Threads (Core + Adapter)

**Goal:** Live monitoring — process crash detection, log tailing, metrics.

### Step 3.1 — Thread infrastructure

1. `backend/core/threads/base_thread.py` — `BaseServerThread`
2. `backend/core/threads/thread_registry.py` — `ThreadRegistry` (adapter-aware)
3. Wire `start_server_threads()` / `stop_server_threads()` into `ServerService.start()` / `ServerService.stop()`

### Step 3.2 — Process Monitor Thread (core)

1. `backend/core/threads/process_monitor.py`
2. Crash detection + status update in the DB
3. Auto-restart with exponential backoff (daemon cleanup thread pattern)

**Test:** Start a server → kill the process manually → confirm the DB status changes to 'crashed'.
**Test:** Enable auto_restart → kill → confirm the server restarts automatically.
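
The backoff schedule is easy to pin down with a pure helper (the base, cap, and attempt limit here are assumed values, not settings from the plan):

```python
def restart_delays(base_s: float = 2.0, cap_s: float = 300.0, max_attempts: int = 6) -> list[float]:
    """Exponential backoff delays for auto-restart: base * 2^n, capped at cap_s."""
    return [min(base_s * (2 ** n), cap_s) for n in range(max_attempts)]
```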

### Step 3.3 — Log Parser (Arma 3 adapter) + Log Tail Thread (core)

1. `backend/adapters/arma3/log_parser.py` — `RPTParser` implementing the `LogParser` protocol
2. `backend/core/threads/log_tail.py` — `LogTailThread` (generic; takes the adapter's `LogParser`)
3. `backend/core/dal/log_repository.py`
4. `backend/core/logs/service.py`
5. `backend/core/logs/router.py` — `GET /servers/{id}/logs`

**Test:** Start a server → `GET /api/servers/{id}/logs` returns recent RPT lines.

### Step 3.4 — Metrics Collector Thread (core)

1. `backend/core/metrics/service.py`
2. `backend/core/dal/metrics_repository.py`
3. `backend/core/threads/metrics_collector.py`
4. `backend/core/metrics/router.py` — `GET /servers/{id}/metrics`

**Test:** With a running server, query the metrics endpoint → see CPU/RAM data points.

---

## Phase 4 — Remote Admin (Arma 3: BattlEye RCon)

**Goal:** Real-time player list and in-game admin commands via the adapter's RemoteAdmin protocol.

### Step 4.1 — RCon Client (Arma 3 adapter)

1. `backend/adapters/arma3/rcon_client.py` — `BERConClient`
2. Implement the BE RCon UDP protocol:
   - Packet structure: `'BE'` + CRC32 checksum (little-endian) + `0xFF` + type byte + payload
   - Login: type `0x00`, payload = password
   - Command: type `0x01`, payload = sequence byte + command string
   - Keepalive: an empty command packet (type `0x01` with no command string), sent periodically
3. **Request multiplexer**: track pending requests by sequence byte; route responses to the correct caller via a `threading.Event` per request
4. `parse_players_response()` — parse the `players` command output
5. Handle unsolicited server messages (type `0x02`) — these must be acknowledged back to the server

**Test:** Connect `BERConClient` to a running server with BattlEye → log in successfully → send `players` → receive the response.
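
Packet framing and CRC verification can be sketched directly from the structure above (the framing follows the BattlEye RCon v2 spec; `seq` is the multiplexer's sequence byte, and the function names are illustrative):

```python
import struct
import zlib


def build_be_packet(packet_type: int, payload: bytes) -> bytes:
    """BattlEye RCon framing: b'BE' + CRC32 (little-endian) over the 0xFF-prefixed body."""
    body = b"\xff" + bytes([packet_type]) + payload
    checksum = struct.pack("<I", zlib.crc32(body) & 0xFFFFFFFF)
    return b"BE" + checksum + body


def build_login(password: str) -> bytes:
    return build_be_packet(0x00, password.encode("ascii"))


def build_command(seq: int, command: str) -> bytes:
    return build_be_packet(0x01, bytes([seq]) + command.encode("ascii"))


def verify_be_packet(packet: bytes) -> bytes:
    """Check magic and CRC; return the body after the 0xFF byte (type + payload)."""
    if packet[:2] != b"BE":
        raise ValueError("bad magic")
    (expected,) = struct.unpack("<I", packet[2:6])
    body = packet[6:]
    if zlib.crc32(body) & 0xFFFFFFFF != expected or body[:1] != b"\xff":
        raise ValueError("bad checksum or frame")
    return body[1:]
```

A keepalive is then just `build_command(seq, "")`.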

### Step 4.2 — RCon Service (Arma 3 adapter) + Remote Admin Poller Thread (core)

1. `backend/adapters/arma3/rcon_service.py` — `Arma3RConService` implementing the `RemoteAdmin` protocol
2. `backend/core/threads/remote_admin_poller.py` — `RemoteAdminPollerThread` (generic; takes the adapter's `RemoteAdmin`)
3. `backend/core/dal/player_repository.py`
4. `backend/core/players/service.py`
5. `backend/core/players/router.py` — `GET /servers/{id}/players`

**Test:** Players join the server → `GET /players` returns them with pings.

### Step 4.3 — Admin Actions via Remote Admin

1. `POST /servers/{id}/players/{slot_id}/kick` — delegates to the adapter's `remote_admin.kick_player()`
2. `POST /servers/{id}/players/{slot_id}/ban` — delegates to the adapter's `remote_admin.ban_player()`
3. `POST /servers/{id}/remote-admin/command` — delegates to the adapter's `remote_admin.send_command()`
4. `POST /servers/{id}/remote-admin/say` — delegates to the adapter's `remote_admin.say_all()`
5. `backend/core/dal/ban_repository.py`
6. `GET/POST/DELETE /servers/{id}/bans`

### Step 4.4 — Ban Manager (Arma 3 adapter)

1. `backend/adapters/arma3/ban_manager.py` — `Arma3BanManager` implementing the `BanManager` protocol
2. **ban.txt bidirectional sync**: on ban add/delete via the API, also write to `battleye/ban.txt`; on startup, read `ban.txt` and upsert into the DB

**Test:** Kick a player via the API → confirm the player is disconnected from the server.

---

## Phase 5 — WebSocket Real-Time

**Goal:** Live updates to the React frontend without polling. **Fully game-agnostic.**

### Step 5.1 — Broadcast infrastructure

1. `backend/core/websocket/broadcaster.py` — `BroadcastThread` + `enqueue()`
2. `backend/core/websocket/manager.py` — `ConnectionManager`
3. Store an event loop reference in `main.py:on_startup()`
4. Start `BroadcastThread` in `on_startup()`
5. Wire `BroadcastThread.enqueue()` calls into all background threads

### Step 5.2 — WebSocket endpoint

1. `backend/core/websocket/router.py`
2. JWT validation from a query param
3. Subscribe/unsubscribe message handling
4. Ping/pong keepalive

**Test:** Connect to `ws://localhost:8000/ws/1?token=...` → see live log lines stream in the terminal.
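
The core trick in the broadcaster is hopping from worker threads onto the asyncio loop; a self-contained sketch of that bridge (the real `BroadcastThread` would also handle subscriptions and fan-out to WebSocket clients):

```python
import asyncio
import queue
import threading


class BroadcastThread:
    """Worker threads call enqueue(); a daemon thread forwards events onto the
    asyncio loop captured at startup, via asyncio.run_coroutine_threadsafe()."""

    def __init__(self, loop: asyncio.AbstractEventLoop, send_coro) -> None:
        self._loop = loop
        self._send = send_coro                 # async def send(event) -> None
        self._queue: queue.Queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self) -> None:
        self._thread.start()

    def stop(self) -> None:
        self._queue.put(None)                  # sentinel ends the forwarding loop

    def enqueue(self, event: dict) -> None:    # safe to call from any thread
        self._queue.put(event)

    def _run(self) -> None:
        while (event := self._queue.get()) is not None:
            asyncio.run_coroutine_threadsafe(self._send(event), self._loop)
```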

### Step 5.3 — Integrate all event sources

Wire `BroadcastThread.enqueue()` into:
- `ProcessMonitorThread` → status updates, crash events
- `LogTailThread` → log lines
- `MetricsCollectorThread` → metrics snapshots
- `RemoteAdminPollerThread` → player list updates
- `ServerService.start/stop` → status transitions

**Test:** The React frontend connects to the WS → a server starts → status, logs, and metrics all update in real time.

---

## Phase 6 — Mission & Mod Management (Arma 3 Adapter)

### Step 6.1 — Missions

1. `backend/adapters/arma3/mission_manager.py` — `Arma3MissionManager` implementing the `MissionManager` protocol
2. `backend/core/missions/router.py` — generic endpoints (delegate to the adapter if the capability is supported)
3. Upload file validation (extension from the adapter's `MissionManager.file_extension`)
4. Mission rotation CRUD

**Test:** Upload a `.pbo` → it appears in `GET /missions` → set it in the rotation → start the server → the mission is available.

### Step 6.2 — Mods

1. `backend/adapters/arma3/mod_manager.py` — `Arma3ModManager` implementing the `ModManager` protocol
2. `backend/core/mods/router.py` — generic endpoints (delegate to the adapter if the capability is supported)
3. `build_mod_args()` — assemble `-mod=` and `-serverMod=` args
4. Wire the mod args into `Arma3ConfigGenerator.build_launch_args()`

**Test:** Register `@CBA_A3` → enable it on a server → start → the server loads the mod.

---

## Phase 7 — Polish & Production

### Step 7.1 — APScheduler jobs

```python
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(log_service.cleanup_old_logs, 'cron', hour=3)
scheduler.add_job(metrics_service.cleanup_old_metrics, 'cron', hour=3, minute=30)
scheduler.add_job(player_service.cleanup_old_history, 'cron', hour=4)
scheduler.start()
```

### Step 7.2 — Startup recovery

In `on_startup()` → `ProcessManager.recover_on_startup()`:
- Query the DB for servers with `status='running'`
- Check whether the PID is still alive (`psutil.pid_exists(pid)`)
- Validate the process name against the adapter's `get_allowed_executables()`
- If alive: re-attach threads (skip the process start, just start the monitoring threads)
- If dead: mark as `crashed`, clear players
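
The decision logic is easiest to test with the process probes injected — in production these would be `psutil.pid_exists` and `psutil.Process(pid).name`, and the function name here is illustrative:

```python
from typing import Callable, Optional


def recovery_action(
    pid: Optional[int],
    allowed_exes: set[str],
    pid_exists: Callable[[int], bool],
    process_name: Callable[[int], str],
) -> str:
    """Decide what to do for a server the DB says is 'running'."""
    if pid is None or not pid_exists(pid):
        return "mark_crashed"                 # dead: mark crashed, clear players
    if process_name(pid).lower() not in allowed_exes:
        return "mark_crashed"                 # PID was reused by an unrelated process
    return "reattach_threads"                 # alive and verified: restart monitoring only
```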

### Step 7.3 — Events log

1. `backend/core/dal/event_repository.py`
2. Insert events for: start, stop, crash, kick, ban, config change, mission change
3. `GET /servers/{id}/events` endpoint

### Step 7.4 — Security hardening

1. Encrypt sensitive DB fields in the `game_configs` JSON (passwords, rcon_password)
   - `backend/core/utils/crypto.py` with Fernet
   - `LANGUARD_ENCRYPTION_KEY` must be a Fernet base64 key
   - **The adapter declares its sensitive fields**: `adapter.get_sensitive_fields(section) -> list[str]`
   - `ConfigRepository` handles Fernet encryption transparently: it encrypts declared fields on write and decrypts them on read
2. Content-Security-Policy headers for the frontend
3. Penetration testing and a security audit
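
The repository-level transparency amounts to mapping encrypt/decrypt over only the adapter-declared fields; sketched here with the crypto functions injected (in production they would be `Fernet.encrypt`/`Fernet.decrypt` keyed by `LANGUARD_ENCRYPTION_KEY`; function names are illustrative):

```python
from typing import Callable


def encrypt_declared(section: dict, sensitive: list[str], encrypt: Callable[[str], str]) -> dict:
    """Encrypt only the fields the adapter declared sensitive, before persisting."""
    return {k: encrypt(v) if k in sensitive and v is not None else v for k, v in section.items()}


def decrypt_declared(section: dict, sensitive: list[str], decrypt: Callable[[str], str]) -> dict:
    """Inverse transform applied on read, so callers never see ciphertext."""
    return {k: decrypt(v) if k in sensitive and v is not None else v for k, v in section.items()}
```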

### Step 7.5 — Frontend integration checklist

Verify the React app can:
- [ ] Log in and store the JWT
- [ ] See the list of supported game types
- [ ] Create a server with game type selection
- [ ] List servers with live status (any game type)
- [ ] Start/stop a server and see the status update via WebSocket
- [ ] View streaming log output (parsed by the adapter)
- [ ] See the player list update (via the adapter's remote admin)
- [ ] See CPU/RAM charts update
- [ ] Edit config sections (dynamic form from the adapter's JSON Schema)
- [ ] Upload a mission file (if the adapter supports missions)
- [ ] Manage mods (if the adapter supports mods)
- [ ] Kick/ban a player (if the adapter supports remote admin)
- [ ] Send a message to all players (if the adapter supports remote admin)

---

## Phase 8 — Second Adapter (Validation)

**Goal:** Prove the architecture works by adding a second game adapter. This validates that new games require zero core changes.

### Choose a second game (examples)

- **Minecraft Java Edition** — has RCON (Source protocol), `server.properties` config, JAR executable, `world/` directory, plugins as mods
- **Rust** — has RCON (WebSocket-based), `server.cfg`, `RustDedicated.exe`, Oxide mods
- **Valheim** — has no RCON; `start_server.sh` config, `valheim_server.exe`, mods via BepInEx

### Steps for a new adapter

1. Create a `backend/adapters/<game_type>/` directory (built-in) or a separate Python package (third-party)
2. Implement the required protocols: `ConfigGenerator` (schema + generation), `ProcessConfig`, `LogParser`
3. Implement optional protocols as needed: `RemoteAdmin`, `MissionManager`, `ModManager`, `BanManager`
4. Create an adapter class implementing `GameAdapter`
5. Register the adapter:
   - **Built-in**: add it to `backend/adapters/<game_type>/__init__.py` and auto-import it in `adapters/__init__.py`
   - **Third-party**: add a `languard.adapters` entry point in `pyproject.toml`:

     ```toml
     [project.entry-points."languard.adapters"]
     mygame = "my_package.adapters:MYGAME_ADAPTER"
     ```

     Core discovers these via `importlib.metadata` at startup.
6. **No core code changes needed**
7. **No DB migrations needed**
8. Test: create a server with the new `game_type`, start it, monitor it

---

## Testing Strategy

### Unit tests (pytest)

- `GameAdapterRegistry` — register, get, list, missing adapter
- `Arma3ConfigGenerator` — Pydantic model validation for each section (merged schema + generation)
- `Arma3ConfigGenerator.write_server_cfg()` — compare output against an expected string; test config injection prevention
- `Arma3ConfigGenerator._escape_config_string()` — test double-quote and newline escaping
- `RPTParser.parse_line()` — test all log formats
- `BERConClient.parse_players_response()` — test with sample output
- `AuthService.login()` — correct/wrong password, rate limiting
- Repository methods — use in-memory SQLite (`:memory:`)
- `check_server_ports_available()` — test derived port validation (via adapter conventions)
- `sanitize_filename()` — test path traversal prevention
- Protocol conformance — verify `Arma3Adapter` satisfies all `GameAdapter` protocol methods

### Integration tests

- Full start/stop cycle with a real arma3server.exe (manual — requires a licensed Arma 3)
- WebSocket message delivery (can be automated with the httpx test client)
- RCon command round-trip (manual — requires a running server with BattlEye)
- Adapter resolution: create a server with a `game_type`, verify the correct adapter is used throughout

### Adapter contract tests

- A template test suite that any new adapter should pass
- Tests: `ConfigGenerator` produces valid sections and valid config files, `ProcessConfig` returns allowed executables, `LogParser` parses sample lines
- `ConfigGenerator` migration test: `migrate_config(old_version, config_json)` returns a valid migrated dict; `ConfigMigrationError` on an invalid `old_version`

### Load notes

- SQLite with WAL handles concurrent reads from 4 threads per server well
- For >10 simultaneous servers, consider tuning the connection pool size
- WebSocket broadcast scales to ~100 concurrent connections without issue

---

## Environment Setup (Developer)

```bash
# 1. Clone the repo
git clone <repo>
cd languard-servers-manager

# 2. Backend
cd backend
python -m venv venv
source venv/bin/activate   # or venv\Scripts\activate on Windows
pip install -r requirements.txt

# 3. Environment
cp .env.example .env
# Edit .env: set game-specific paths (LANGUARD_ARMA3_DEFAULT_EXE, etc.)

# 4. Run backend
uvicorn main:app --reload --host 0.0.0.0 --port 8000

# 5. Frontend (separate terminal)
cd ../frontend
npm install
npm run dev
```

The backend auto-creates `languard.db`, seeds an admin user on first run, and registers the Arma 3 adapter automatically.

---

## Phase Summary

| Phase | Deliverable | Key Change from Single-Game |
|-------|-------------|-----------------------------|
| 0 | Adapter framework (protocols + exceptions + registry) | **NEW** — foundation for modularity |
| 1 | Foundation (auth + server CRUD + game discovery + migration) | Core tables, `game_type` field, `game_configs` JSON, migration from old schema |
| 2 | Arma 3 adapter: config gen + process mgmt | Config generation in adapter, atomic writes, typed exceptions, optimistic locking |
| 3 | Background threads (core + adapter injection) | Generic threads + adapter parsers/clients, per-server lock for RemoteAdmin |
| 4 | Remote admin (Arma 3: BattlEye RCon) | RCon in adapter, generic poller in core |
| 5 | WebSocket real-time | No change — fully game-agnostic |
| 6 | Mission + mod management (Arma 3 adapter) | In adapter, generic endpoints in core |
| 7 | Polish, security, recovery | Adapter-declared sensitive fields, Fernet encryption |
| 8 | Second game adapter | **NEW** — validates zero core changes, entry points for third-party |

Implement the phases in order — each phase builds on the previous one and is independently testable. Phase 0 must come first, as it defines the contract that all subsequent code depends on.

`MODULES.md` (1209 lines) — diff suppressed because it is too large

`README.md` — new file (139 lines)

@@ -0,0 +1,139 @@

# Languard Server Manager

A multi-game server management platform with a Python/FastAPI backend and a React/TypeScript frontend. Currently supports Arma 3, with an extensible adapter system for adding more games.

## Tech Stack

### Backend
- **Python 3.12+** / **FastAPI** — async REST API
- **SQLite** with WAL mode — zero-config database
- **SQLAlchemy** — raw SQL via `text()` queries (no ORM)
- **BattlEye RCon** — UDP protocol v2 for remote admin
- **APScheduler** — background cleanup jobs
- **psutil** — process monitoring and resource metrics
- **JWT** (python-jose) + **bcrypt** — authentication
- **Fernet** (cryptography) — sensitive config field encryption

### Frontend
- **React 19** / **TypeScript 6** / **Vite 8**
- **TanStack Query v5** — server state management
- **Zustand 5** — client state (auth, UI)
- **Tailwind CSS** — dark neumorphic design system
- **Playwright** — E2E testing (23 tests)
- **Vitest** + **React Testing Library** — unit tests (69 tests)

## Quick Start

### Backend

```bash
cd backend
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env       # Edit with your settings
uvicorn main:app --reload
```

The first run prints a generated admin password. Change it immediately via `PUT /api/auth/password`.

### Frontend

```bash
cd frontend
npm install
npm run dev
```

Opens at `http://localhost:5173`. The dev server proxies `/api` to the backend on port 8000.
|
||||
|
||||
## Running Tests
|
||||
|
||||
### Frontend Unit Tests
|
||||
|
||||
```bash
|
||||
cd frontend
|
||||
npm test # Watch mode
|
||||
npx vitest run # Single run
|
||||
npx vitest run --coverage # With coverage
|
||||
```
|
||||
|
||||
### Frontend E2E Tests
|
||||
|
||||
```bash
cd frontend
# Start backend + frontend dev server first
npx playwright test                          # All tests (mocked + integration)
npx playwright test tests-e2e/integration/   # Full-stack integration tests only
```

## Project Structure

```
languard-servers-manager/
├── backend/
│   ├── main.py              # FastAPI app factory, lifespan, middleware
│   ├── config.py            # Pydantic Settings (env vars)
│   ├── database.py          # SQLAlchemy engine, migration runner
│   ├── dependencies.py      # FastAPI deps: auth, admin, server, adapter
│   ├── adapters/            # Game adapter system
│   │   ├── protocols.py     # Protocol definitions (7 capabilities)
│   │   ├── registry.py      # GameAdapterRegistry singleton
│   │   ├── exceptions.py    # Typed adapter exceptions
│   │   └── arma3/           # Arma 3 adapter (7/7 capabilities)
│   ├── core/
│   │   ├── auth/            # JWT auth, user CRUD
│   │   ├── servers/         # Server service, routers, process manager
│   │   ├── games/           # Game type discovery
│   │   ├── system/          # Health and status endpoints
│   │   ├── websocket/       # WS manager, broadcast thread
│   │   ├── threads/         # Background thread registry
│   │   ├── dal/             # Data access layer (repositories)
│   │   ├── jobs/            # APScheduler cleanup jobs
│   │   ├── utils/           # Crypto, file utils, port checker
│   │   └── migrations/      # SQL migration scripts
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── App.tsx          # Router + auth guard
│   │   ├── pages/           # LoginPage, DashboardPage
│   │   ├── components/      # Sidebar, ServerCard, StatusLed
│   │   ├── hooks/           # useServers, useWebSocket
│   │   ├── store/           # auth.store, ui.store (Zustand)
│   │   ├── lib/             # api.ts (Axios client)
│   │   └── __tests__/       # Vitest unit tests
│   ├── tests-e2e/           # Playwright E2E tests
│   └── playwright.config.ts
├── API.md                   # REST + WebSocket API reference
├── ARCHITECTURE.md          # System architecture overview
├── DATABASE.md              # Database schema reference
├── FRONTEND.md              # Frontend architecture and components
├── MODULES.md               # Module-by-module reference
└── THREADING.md             # Background threading model
```

## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `LANGUARD_SECRET_KEY` | (required) | JWT signing key |
| `LANGUARD_ENCRYPTION_KEY` | (required) | Fernet key for sensitive config fields |
| `LANGUARD_DB_PATH` | `./languard.db` | SQLite database path |
| `LANGUARD_SERVERS_DIR` | `./servers` | Base directory for server data |
| `LANGUARD_HOST` | `0.0.0.0` | Listen host |
| `LANGUARD_PORT` | `8000` | Listen port |
| `LANGUARD_CORS_ORIGINS` | `["http://localhost:5173"]` | CORS allowed origins |
| `LANGUARD_LOG_RETENTION_DAYS` | `7` | Log cleanup retention |
| `LANGUARD_METRICS_RETENTION_DAYS` | `30` | Metrics cleanup retention |
| `LANGUARD_PLAYER_HISTORY_RETENTION_DAYS` | `90` | Player history retention |
| `LANGUARD_JWT_EXPIRE_HOURS` | `24` | JWT token expiry |
| `LANGUARD_ARMA3_DEFAULT_EXE` | (required for Arma 3) | Default Arma 3 executable path |

## Documentation

- **[ARCHITECTURE.md](ARCHITECTURE.md)** — System design, component diagram, security model
- **[API.md](API.md)** — Complete REST + WebSocket API reference
- **[DATABASE.md](DATABASE.md)** — Schema, tables, indexes, migration system
- **[FRONTEND.md](FRONTEND.md)** — React component tree, state management, design system
- **[MODULES.md](MODULES.md)** — File-by-file module reference
- **[THREADING.md](THREADING.md)** — Background thread model and concurrency
899
THREADING.md
@@ -1,782 +1,173 @@
# Threading & Concurrency Model

## Overview

Languard uses a hybrid concurrency model:

- **FastAPI (asyncio)** handles HTTP requests and WebSocket connections on the main event loop
- **Python `threading.Thread`** handles long-running background work per server
- **`queue.Queue`** bridges the thread world to the asyncio world for WebSocket broadcasting
- **SQLAlchemy sync sessions** with thread-local connections provide thread-safe database access

The key change for multi-game support: **core threads are game-agnostic** and receive game-specific behavior (log parsers, remote admin clients) via dependency injection from the adapter.

---

## Thread Map

For N running servers, the system runs up to 4N+1 background threads:
```
Main Process (FastAPI / asyncio event loop)
│
├── [uvicorn] HTTP/WS event loop (asyncio)
│     ├── REST request handlers (async def / plain def)
│     └── WebSocket handlers (async def)
│
├── BroadcastThread (daemon thread, 1 global)
│     └── Reads from broadcast_queue (thread-safe)
│         Calls asyncio.run_coroutine_threadsafe()
│         → ConnectionManager.broadcast()
│
└── Per-running-server thread group (started when server starts, stopped when server stops):
      ├── ProcessMonitorThread    (1 per server, 1s interval)    — CORE
      ├── LogTailThread           (1 per server, 100ms interval) — CORE + adapter LogParser
      ├── MetricsCollectorThread  (1 per server, 5s interval)    — CORE
      └── RemoteAdminPollerThread (1 per server, 10s interval)   — CORE + adapter RemoteAdmin
```
| Thread Type | Count | Purpose |
|---|---|---|
| `BroadcastThread` | 1 (global) | Bridges `queue.Queue` to asyncio WebSocket broadcasts |
| `LogTailThread` | 1 per server | Tails .rpt log files, parses lines, persists to DB, broadcasts events |
| `ProcessMonitorThread` | 1 per server | Monitors server process, detects crashes, triggers auto-restart |
| `MetricsCollectorThread` | 1 per server | Collects CPU/RAM metrics via psutil every 5 seconds |
| `RemoteAdminPollerThread` | 1 per server | Polls player list via RCon, syncs join/leave events |

For **N running servers**, there are:

- `4*N` background threads + 1 BroadcastThread = `4N+1` background threads total
- If the adapter has no `remote_admin` capability, the RemoteAdminPollerThread is skipped → `3*N+1`

All server-specific threads are managed by `ThreadRegistry`, which creates and destroys thread bundles as servers start and stop.
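The thread arithmetic above can be stated as a tiny helper (illustrative only, not part of the codebase):

```python
# Background-thread count from the rules above:
# 4 per running server (3 if the adapter lacks remote_admin) + 1 global BroadcastThread
def thread_count(running_servers: int, with_remote_admin: bool = True) -> int:
    per_server = 4 if with_remote_admin else 3
    return per_server * running_servers + 1

print(thread_count(3))         # 13  (4*3 + 1)
print(thread_count(3, False))  # 10  (3*3 + 1)
```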
---

## Adapter Injection into Threads

The `ThreadRegistry` resolves the adapter at thread creation time and injects game-specific components into the generic core threads:
```python
class ThreadRegistry:
    @classmethod
    def start_server_threads(cls, server_id: int, db: Connection) -> None:
        server = ServerRepository(db).get_by_id(server_id)
        adapter = GameAdapterRegistry.get(server["game_type"])

        threads: dict[str, BaseServerThread] = {}

        # Core threads — always present
        threads["process_monitor"] = ProcessMonitorThread(server_id)
        threads["metrics_collector"] = MetricsCollectorThread(server_id)

        # Core thread with adapter's log parser injected
        log_parser = adapter.get_log_parser()
        threads["log_tail"] = LogTailThread(
            server_id,
            log_parser=log_parser,
            log_file_resolver=log_parser.get_log_file_resolver(server_id),
        )

        # Core thread with adapter's remote admin injected (if supported)
        remote_admin = adapter.get_remote_admin()
        if remote_admin is not None:
            threads["remote_admin_poller"] = RemoteAdminPollerThread(
                server_id,
                remote_admin_factory=lambda: remote_admin.create_client(
                    host="127.0.0.1",
                    port=server["rcon_port"],
                    password=_get_remote_admin_password(server_id, db),
                ),
            )

        # Adapter-declared custom threads (for game-specific background work)
        for thread_factory in adapter.get_custom_thread_factories():
            thread = thread_factory(server_id, db)
            threads[thread.name_key] = thread

        with cls._lock:
            cls._threads[server_id] = threads

        for thread in threads.values():
            thread.start()
```
---

## Thread Safety Rules

| Resource | Access Pattern | Protection |
|----------|---------------|------------|
| `ProcessManager._processes` | read/write from multiple threads | `threading.Lock` |
| `ThreadRegistry._threads` | read/write from main + shutdown | `threading.Lock` |
| `broadcast_queue` | multi-writer, single reader | `queue.Queue` (thread-safe built-in) |
| `ConnectionManager._connections` | async, single event loop | `asyncio.Lock` |
| SQLite connections | one connection per thread | Thread-local via `threading.local()` |
| Config files on disk | write on start, read-only during run | No lock needed (regenerated before start) |
| Adapter objects | read-only after registration | No lock needed (registered once at startup) |
| `RemoteAdminClient` calls | called from RemoteAdminPollerThread only | **Core wraps with per-server `threading.Lock`** (see below) |
### RemoteAdminClient Thread Safety

Adapters do NOT need to make their `RemoteAdminClient` implementations thread-safe. The core wraps every RemoteAdminClient call with a **per-server `threading.Lock`**, so only one call executes at a time against a given server's admin client.

```python
# In RemoteAdminPollerThread
class RemoteAdminPollerThread(BaseServerThread):
    def __init__(self, server_id: int,
                 remote_admin_factory: Callable[[], "RemoteAdminClient"]):
        super().__init__(server_id, self.interval)
        self._client_factory = remote_admin_factory
        self._client: RemoteAdminClient | None = None
        self._connected = False
        self._call_lock = threading.Lock()  # per-server lock

    def _call(self, method, *args, **kwargs):
        """All RemoteAdminClient calls go through this to serialize access."""
        with self._call_lock:
            return method(*args, **kwargs)

# In tick(), replace direct self._client.get_players() with:
# players = self._call(self._client.get_players)
```

This means:

- Adapter authors write simple, non-thread-safe clients
- The core guarantees no concurrent calls to the same client
- Different servers' clients can call concurrently (different locks)
### SQLite Thread Safety

```python
# Each background thread creates its own SQLAlchemy connection
# from the same engine (WAL mode allows concurrent reads).
# PRAGMA busy_timeout=5000 prevents "database is locked" errors.
#
# If busy_timeout is exhausted (5s), the write fails with
# OperationalError. Background threads retry with exponential
# backoff: 1s, 2s, 4s — then log and skip the tick.
# API request handlers retry up to 2 times with 1s backoff,
# then return 503 "database temporarily unavailable".

class BaseServerThread(threading.Thread):
    _db_retry_delays = [1.0, 2.0, 4.0]  # seconds, exponential backoff

    def run(self):
        engine = get_engine()
        self._db = engine.connect()
        try:
            self.setup()
            while not self._stop_event.is_set():
                try:
                    self.tick()
                except OperationalError as e:
                    if "database is locked" in str(e):
                        retried = self._retry_db_write(self.tick)
                        if not retried:
                            logger.warning(f"{self.name}: DB locked after all retries, skipping tick")
                    else:
                        self.on_error(e)
                except Exception as e:
                    self.on_error(e)
                self._stop_event.wait(self.interval)
        except Exception as e:
            logger.error(f"{self.name} setup error: {e}")
        finally:
            self.teardown()
            self._db.close()

    def _retry_db_write(self, fn, max_retries=3):
        for delay in self._db_retry_delays[:max_retries]:
            self._stop_event.wait(delay)
            if self._stop_event.is_set():
                return False
            try:
                fn()
                return True
            except OperationalError:
                continue
        return False
```
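The WAL and `busy_timeout` behavior the comments above rely on can be tried with the stdlib `sqlite3` module alone (a standalone illustration; the real engine is built with SQLAlchemy in `database.py`, whose exact options are not shown here):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

def open_conn(path: str) -> sqlite3.Connection:
    # One connection per thread, with the PRAGMAs the core relies on
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")   # concurrent readers + a single writer
    conn.execute("PRAGMA busy_timeout=5000")  # wait up to 5s before "database is locked"
    return conn

writer = open_conn(path)
reader = open_conn(path)

writer.execute("CREATE TABLE metrics (cpu REAL)")
writer.execute("INSERT INTO metrics VALUES (12.5)")
writer.commit()

# WAL lets the reader see committed data while the writer connection stays open
rows = reader.execute("SELECT cpu FROM metrics").fetchall()
print(rows)  # [(12.5,)]
```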
---

## BroadcastThread — Asyncio Bridge

This is the critical bridge between background threads and the asyncio WebSocket layer. **Game-agnostic.**
```
Background thread                           Asyncio event loop (uvicorn)
─────────────────                           ────────────────────────────
Any background thread                       loop = asyncio.get_running_loop()
        │                                   (stored at app startup)
        ▼
BroadcastThread.enqueue(
    server_id=1,
    msg_type='log',
    data={...},
)
        │
        ▼
broadcast_queue.put({
    'server_id': 1,
    'type': 'log',
    'data': {...},
})
        │
        ▼
BroadcastThread.run()
    while True:
        msg = queue.get()
        fut = asyncio.run_coroutine_threadsafe(  ───►  ConnectionManager.broadcast(
            broadcast_coro, self._loop                     server_id=1,
        )                                                  message={type, data},
        fut.result(timeout=5)                          )
```
### Implementation Sketch

```python
# core/websocket/broadcaster.py
import asyncio
import queue
import threading

_broadcast_queue: queue.Queue = queue.Queue(maxsize=10000)
_event_loop: asyncio.AbstractEventLoop | None = None

class BroadcastThread(threading.Thread):
    daemon = True

    def __init__(self, loop: asyncio.AbstractEventLoop, manager):
        super().__init__(name="BroadcastThread")
        self._loop = loop
        self._manager = manager
        self._running = True

    def run(self):
        while self._running:
            try:
                msg = _broadcast_queue.get(timeout=1.0)
                server_id = msg['server_id']
                outgoing = {
                    'type': msg['type'],
                    'server_id': server_id,
                    'data': msg['data'],
                }
                future = asyncio.run_coroutine_threadsafe(
                    self._manager.broadcast(str(server_id), outgoing, channel=msg['type']),
                    self._loop,
                )
                try:
                    future.result(timeout=5.0)
                except TimeoutError:
                    logger.warning(f"Broadcast timeout for server {server_id} msg type {msg['type']}")
            except queue.Empty:
                continue
            except Exception as e:
                logger.error(f"BroadcastThread error: {e}")

    def stop(self):
        self._running = False

    @staticmethod
    def enqueue(server_id: int, msg_type: str, data: dict):
        """Thread-safe. Called from any background thread."""
        try:
            _broadcast_queue.put_nowait({
                'server_id': server_id,
                'type': msg_type,
                'data': data,
            })
        except queue.Full:
            logger.warning(f"Broadcast queue full, dropping {msg_type} for server {server_id}")
```
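The queue-to-event-loop handoff in the sketch can be exercised with a minimal toy (all names below are stand-ins, not Languard code):

```python
import asyncio
import queue
import threading

received = []

async def fake_broadcast(message: dict) -> None:
    # Stand-in for ConnectionManager.broadcast(); runs on the event loop
    received.append(message)

def worker(loop: asyncio.AbstractEventLoop, q: queue.Queue) -> None:
    # Background thread: drain the queue, handing each message to the loop
    while True:
        msg = q.get()
        if msg is None:          # sentinel: shut down
            return
        fut = asyncio.run_coroutine_threadsafe(fake_broadcast(msg), loop)
        fut.result(timeout=5.0)  # block this thread until the loop ran it

async def main() -> None:
    loop = asyncio.get_running_loop()
    q: queue.Queue = queue.Queue()
    t = threading.Thread(target=worker, args=(loop, q), daemon=True)
    t.start()
    for i in range(3):
        q.put({'type': 'log', 'data': i})
    q.put(None)
    # join in an executor so the event loop stays free to run fake_broadcast
    await loop.run_in_executor(None, t.join)

asyncio.run(main())
print(received)  # three messages, in order
```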
---

## ProcessMonitorThread — Crash Detection & Auto-Restart

**Game-agnostic.** This thread only checks OS-level process status and updates the core `servers` table.
```python
class ProcessMonitorThread(BaseServerThread):
    interval = 1.0

    def tick(self):
        proc = ProcessManager.get().get_process(self.server_id)
        if proc is None:
            self.stop()
            return

        exit_code = proc.poll()
        if exit_code is not None:
            self._handle_process_exit(exit_code)
            self.stop()

    def _handle_process_exit(self, exit_code: int):
        is_crash = (exit_code != 0)
        status = 'crashed' if is_crash else 'stopped'

        server = ServerRepository(self._db).get_by_id(self.server_id)
        ServerRepository(self._db).update_status(
            self.server_id, status, pid=None,
            stopped_at=datetime.utcnow().isoformat()
        )
        PlayerRepository(self._db).clear(self.server_id)
        ServerEventRepository(self._db).insert(
            self.server_id, status,
            actor='system',
            detail={'exit_code': exit_code}
        )

        BroadcastThread.enqueue(self.server_id, 'status', {'status': status})
        BroadcastThread.enqueue(self.server_id, 'event', {
            'event_type': status,
            'detail': {'exit_code': exit_code}
        })

        # Stop the other threads for this server via a daemon cleanup thread
        # (avoids a thread joining itself)
        import threading as _threading

        def _cleanup_and_maybe_restart():
            try:
                ThreadRegistry.get().stop_server_threads(self.server_id)
                if is_crash and server.get('auto_restart'):
                    self._schedule_auto_restart(server)
            except Exception as e:
                logger.error(f"Cleanup/restart failed for server {self.server_id}: {e}")
                BroadcastThread.enqueue(self.server_id, 'event', {
                    'event_type': 'auto_restart_failed',
                    'detail': {'error': str(e)}
                })

        _threading.Thread(
            target=_cleanup_and_maybe_restart,
            daemon=True,
            name=f"StopCleanup-{self.server_id}"
        ).start()

    def _schedule_auto_restart(self, server: dict):
        # IMPORTANT: Runs in the daemon cleanup thread, NOT ProcessMonitorThread.
        # Must create its own DB connection.
        from database import get_thread_db
        db = get_thread_db()

        restart_count = server['restart_count']
        max_restarts = server['max_restarts']
        window = server['restart_window_seconds']
        last_restart = server.get('last_restart_at')

        if last_restart:
            last_dt = datetime.fromisoformat(last_restart)
            elapsed = (datetime.utcnow() - last_dt).total_seconds()
            if elapsed > window:
                ServerRepository(db).reset_restart_count(self.server_id)
                restart_count = 0

        if restart_count < max_restarts:
            delay = min(10 * (restart_count + 1), 60)  # linear backoff, capped at 60s
            logger.info(f"Auto-restarting server {self.server_id} in {delay}s (attempt {restart_count+1}/{max_restarts})")
            threading.Timer(delay, self._auto_restart).start()
        else:
            logger.warning(f"Server {self.server_id} exceeded max auto-restarts ({max_restarts})")
            BroadcastThread.enqueue(self.server_id, 'event', {
                'event_type': 'max_restarts_exceeded',
                'detail': {'restart_count': restart_count}
            })

    def _auto_restart(self):
        from core.servers.service import ServerService
        try:
            ServerService().start(self.server_id)
        except Exception as e:
            logger.error(f"Auto-restart failed for server {self.server_id}: {e}")
```
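The restart delay formula `min(10 * (restart_count + 1), 60)` ramps linearly and caps at one minute; the resulting schedule is easy to check (illustrative helper, not part of the codebase):

```python
# Restart delay schedule: 10s per prior attempt in the window, capped at 60s
def restart_delay(restart_count: int) -> int:
    return min(10 * (restart_count + 1), 60)

delays = [restart_delay(n) for n in range(8)]
print(delays)  # [10, 20, 30, 40, 50, 60, 60, 60]
```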
---

## LogTailThread — Generic File Tailing with Adapter Parser

**Core thread** that takes an adapter-provided `LogParser` for game-specific log line parsing and file discovery.
```python
class LogTailThread(BaseServerThread):
    interval = 0.1  # 100ms

    def __init__(self, server_id: int, log_parser: "LogParser",
                 log_file_resolver: Callable[[Path], Path | None]):
        super().__init__(server_id, self.interval)
        self._parser = log_parser
        self._log_file_resolver = log_file_resolver
        self._file: TextIO | None = None
        self._current_path: Path | None = None
        self._last_size: int = 0
        self._last_glob_time: float = 0.0

    def setup(self):
        self._open_latest_log()

    def _open_latest_log(self):
        """
        Uses the adapter-provided log_file_resolver to find the current log file.
        Opens it and seeks to the end (tail behavior).

        NOTE: Do NOT use os.stat().st_ino for rotation detection — on Windows/NTFS
        st_ino is always 0. Instead, track the filename and file size.
        """
        server_dir = get_server_dir(self.server_id)
        log_path = self._log_file_resolver(server_dir)
        if log_path is None:
            return  # Server hasn't created a log yet; retry on the next tick

        try:
            self._file = open(log_path, 'r', encoding='utf-8', errors='replace')
            self._file.seek(0, 2)  # seek to end
            self._current_path = log_path
            self._last_size = self._file.tell()
        except OSError:
            self._file = None

    def tick(self):
        if self._file is None:
            self._open_latest_log()
            return

        # Rotation detection: only re-resolve the log path every 5 seconds
        now = time.monotonic()
        if now - self._last_glob_time > 5.0:
            self._last_glob_time = now
            server_dir = get_server_dir(self.server_id)
            log_path = self._log_file_resolver(server_dir)
            if log_path is not None and log_path != self._current_path:
                self._file.close()
                self._open_latest_log()
                return

        try:
            current_size = self._current_path.stat().st_size
        except OSError:
            return

        if current_size < self._last_size:
            # File truncated or replaced in place — reopen
            self._file.close()
            self._open_latest_log()
            return

        # Read new lines and parse using the adapter's parser
        while True:
            line = self._file.readline()
            if not line:
                break
            self._last_size = self._file.tell()
            line = line.rstrip('\n')
            if not line:
                continue

            # Adapter parses the line — game-specific format
            entry = self._parser.parse_line(line)
            if entry:
                LogRepository(self._db).insert(self.server_id, entry)
                BroadcastThread.enqueue(self.server_id, 'log', entry)

    def teardown(self):
        if self._file is not None:
            try:
                self._file.close()
            except OSError:
                pass
            self._file = None
```
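The readline-based tail loop can be demonstrated standalone (toy log lines, no adapter parser; the file name is hypothetical):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "server.rpt")
open(path, "w").close()  # empty log file exists before the tailer opens it

# Open and seek to end, as LogTailThread does in setup()
f = open(path, "r", encoding="utf-8", errors="replace")
f.seek(0, 2)

def drain(f):
    """One tick: read whatever new complete lines have appeared."""
    out = []
    while True:
        line = f.readline()
        if not line:
            break
        line = line.rstrip("\n")
        if line:
            out.append(line)
    return out

# Simulate the game server appending to its log between ticks
with open(path, "a", encoding="utf-8") as w:
    w.write("12:00:01 Player Alice connected\n")
    w.write("12:00:05 Mission read\n")

entries = drain(f)
print(entries)  # ['12:00:01 Player Alice connected', '12:00:05 Mission read']
f.close()
```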
---

## MetricsCollectorThread — Game-Agnostic Resource Monitoring

**Fully game-agnostic.** Uses psutil to monitor any process.
```python
class MetricsCollectorThread(BaseServerThread):
    interval = 5.0

    def tick(self):
        pid = ProcessManager.get().get_pid(self.server_id)
        if pid is None:
            return

        try:
            proc = psutil.Process(pid)
            cpu = proc.cpu_percent(interval=0.5)
            ram = proc.memory_info().rss / (1024 * 1024)  # MB
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            return

        player_count = PlayerRepository(self._db).count(self.server_id)

        MetricsRepository(self._db).insert(self.server_id, cpu, ram, player_count)
        BroadcastThread.enqueue(self.server_id, 'metrics', {
            'cpu_percent': cpu,
            'ram_mb': ram,
            'player_count': player_count,
        })
```
---

## RemoteAdminPollerThread — Generic Polling with Adapter Client

**Core thread** that takes an adapter-provided `RemoteAdmin` factory for game-specific admin protocol communication. Skipped entirely if the adapter has no `remote_admin` capability.
```python
class RemoteAdminPollerThread(BaseServerThread):
    interval = 10.0
    STARTUP_DELAY = 30.0

    def __init__(self, server_id: int,
                 remote_admin_factory: Callable[[], "RemoteAdminClient"]):
        super().__init__(server_id, self.interval)
        self._client_factory = remote_admin_factory
        self._client: RemoteAdminClient | None = None
        self._connected = False
        self._call_lock = threading.Lock()  # serializes client calls (see Thread Safety Rules)

    def _call(self, method, *args, **kwargs):
        """All RemoteAdminClient calls go through this to serialize access."""
        with self._call_lock:
            return method(*args, **kwargs)

    def setup(self):
        # Wait for the server to start up before attempting a connection.
        # Uses _stop_event.wait() instead of time.sleep() for immediate shutdown.
        startup_delay = self._get_startup_delay()
        if self._stop_event.wait(startup_delay):
            return  # stop was requested during the wait
        self._connect()

    def _get_startup_delay(self) -> float:
        # Default delay; adapter may override via RemoteAdmin.get_startup_delay()
        return self.STARTUP_DELAY

    def _connect(self):
        try:
            self._client = self._client_factory()
            self._connected = True
        except Exception as e:
            logger.warning(f"Remote admin connection failed for server {self.server_id}: {e}")
            self._connected = False

    def tick(self):
        if not self._connected:
            self._reconnect_attempts = getattr(self, '_reconnect_attempts', 0) + 1
            delay = min(10 * 2 ** self._reconnect_attempts, 120)  # exponential backoff
            if self._reconnect_attempts > 1:
                logger.info(f"Remote admin reconnect attempt {self._reconnect_attempts} for server {self.server_id}")
            if self._stop_event.wait(delay):
                return
            self._connect()
            if not self._connected:
                return
            self._reconnect_attempts = 0

        try:
            players = self._call(self._client.get_players)
            PlayerService(self._db).update_from_remote_admin(self.server_id, players)
            BroadcastThread.enqueue(self.server_id, 'players', {
                'players': list(players),
                'count': len(players),
            })
        except ConnectionError:
            self._connected = False
            logger.warning(f"Remote admin connection lost for server {self.server_id}")
        except RemoteAdminError as e:
            logger.error(f"Remote admin adapter error for server {self.server_id}: {e}")
            self._connected = False

    def teardown(self):
        if self._client is not None:
            try:
                self._client.disconnect()
            except Exception:
                pass
            self._client = None
```
---

## Thread Lifecycle

### Start Server Flow

```
POST /servers/{id}/start
  │
  ├── ServerService.start()
  │     ├── adapter = GameAdapterRegistry.get(server.game_type)
  │     ├── check_server_ports_available(server_id)
  │     │     └── For ALL running servers, resolve each adapter,
  │     │         get port conventions, check full derived port set
  │     │         (cross-game: Arma 3 game+steam query + other games' ports)
  │     ├── adapter.config_generator.write_configs()
  │     │     └── Atomic write: write to .tmp files first, then os.replace()
  │     │         On failure: .tmp files cleaned up, originals untouched
  │     ├── launch_args = adapter.config_generator.build_launch_args()
  │     ├── ProcessManager.start()                 ← creates subprocess.Popen
  │     └── ThreadRegistry.start_server_threads(id, db)
  │           ├── ProcessMonitorThread(id)               ← core, always
  │           ├── LogTailThread(id, adapter.log_parser)  ← core + adapter
  │           ├── MetricsCollectorThread(id)             ← core, always
  │           └── RemoteAdminPollerThread(id, adapter.remote_admin)
  │                                                      ← core + adapter (if available)
  │
  └── BroadcastThread.enqueue(id, 'status', {status: 'starting'})

Error paths on start:
  ├── ConfigWriteError      → rollback .tmp files, return 500 to client
  ├── ConfigValidationError → return 422 with validation details
  ├── LaunchArgsError       → return 400 with invalid arg info
  ├── ExeNotAllowedError    → return 403 with executable name
  └── PortInUseError        → return 409 with conflicting port info
```
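The atomic config-write step (write to `.tmp`, then `os.replace()`) can be sketched with stdlib calls alone (hypothetical file name and content; the real writer lives in the adapter's config generator):

```python
import os
import tempfile

def write_config_atomically(path: str, content: str) -> None:
    # Write to a sibling .tmp file, then rename over the original.
    # os.replace() is atomic on both POSIX and Windows, so readers never
    # see a half-written config; on failure the original stays untouched.
    tmp_path = path + ".tmp"
    try:
        with open(tmp_path, "w", encoding="utf-8") as f:
            f.write(content)
        os.replace(tmp_path, path)
    except OSError:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)  # clean up the partial .tmp file
        raise

cfg = os.path.join(tempfile.mkdtemp(), "server.cfg")
write_config_atomically(cfg, 'hostname = "demo";\n')
write_config_atomically(cfg, 'hostname = "demo2";\n')  # overwrite in place
result = open(cfg, encoding="utf-8").read()
print(result)  # hostname = "demo2";
```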

### Stop Server Flow

```
POST /servers/{id}/stop
  │
  ├── adapter.remote_admin.shutdown()   ← if adapter has remote_admin
  ├── Wait up to 30s for process exit (ProcessManager.stop(timeout=30))
  ├── If still running: ProcessManager.kill()
  ├── ThreadRegistry.stop_server_threads(id)
  │     ├── ProcessMonitorThread.stop()
  │     ├── LogTailThread.stop()
  │     ├── MetricsCollectorThread.stop()
  │     ├── RemoteAdminPollerThread.stop()   ← if present
  │     └── Thread.join(timeout=5) for each
  │
  └── BroadcastThread.enqueue(id, 'status', {status: 'stopped'})
```

### App Shutdown Flow

```
FastAPI shutdown event
  │
  ├── ThreadRegistry.stop_all()      ← stop all threads for all servers
  ├── BroadcastThread.stop()
  ├── ConnectionManager.close_all()
  └── database engine dispose
```
---

## Stop Event Pattern

All background threads extend `BaseServerThread`, which provides:

- **Stop event**: a `threading.Event` for graceful shutdown — `stop()` sets it, and every sleep is a `wait()` on it
- **Thread-local DB**: a fresh SQLAlchemy connection per thread via `get_thread_db()`
- **Exception backoff**: on unhandled exceptions, sleeps with exponential backoff (5s → 30s max), then retries; if the stop event is set, exits cleanly
- **Abstract `tick()` method**: subclasses implement one unit of work, called repeatedly until the stop event is set

```python
class BaseServerThread(threading.Thread):
    def __init__(self, server_id: int, interval: float):
        super().__init__(name=f"{self.__class__.__name__}-{server_id}", daemon=True)
        self.server_id = server_id
        self.interval = interval
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def is_stopped(self) -> bool:
        return self._stop_event.is_set()

    def setup(self):
        """Override to acquire resources before the loop starts."""
        pass

    def tick(self):
        """Override: one unit of work per loop iteration."""
        raise NotImplementedError

    def teardown(self):
        """Override to release resources (close files, sockets) after the loop ends."""
        pass

    def run(self):
        try:
            self.setup()
        except Exception as e:
            logger.error(f"{self.name} setup error: {e}")
            return  # setup failed completely

        try:
            while not self._stop_event.is_set():
                try:
                    self.tick()
                except Exception as e:
                    self.on_error(e)
                self._stop_event.wait(self.interval)
        finally:
            self.teardown()

    def on_error(self, error: Exception):
        """Default error handler. Adapter exceptions are typed for specific handling."""
        if isinstance(error, RemoteAdminError):
            logger.error(f"{self.name} remote admin error: {error}")
            # RemoteAdminPollerThread overrides to set _connected = False
        elif isinstance(error, ConfigWriteError):
            logger.critical(f"{self.name} config write error (atomic write failed): {error}")
        elif isinstance(error, ConfigValidationError):
            logger.error(f"{self.name} config validation error: {error}")
        else:
            logger.error(f"{self.name} unhandled error: {error}")
```
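The stop-event loop can be exercised in isolation (a toy subclass; none of the Languard repositories or adapters are needed):

```python
import threading
import time

class TickerThread(threading.Thread):
    """Minimal version of the pattern: tick until the stop event is set."""
    def __init__(self, interval: float):
        super().__init__(daemon=True)
        self.interval = interval
        self.ticks = 0
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def run(self):
        while not self._stop_event.is_set():
            self.ticks += 1
            # wait() doubles as the sleep AND the shutdown signal:
            # it returns early the moment stop() is called
            self._stop_event.wait(self.interval)

t = TickerThread(interval=0.01)
t.start()
time.sleep(0.1)   # let it tick a few times
t.stop()
t.join(timeout=1.0)
print(t.is_alive(), t.ticks >= 1)  # False True
```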

---

## WebSocket Connection Manager (asyncio)

**Game-agnostic.** No changes from the single-game design.

```python
# core/websocket/manager.py
class ConnectionManager:
    def __init__(self):
        self._connections: dict[str, set[WebSocket]] = defaultdict(set)
        self._channel_subs: dict[WebSocket, set[str]] = defaultdict(set)
        self._lock = asyncio.Lock()

    async def connect(self, ws: WebSocket, server_id: str):
        await ws.accept()
        async with self._lock:
            self._connections[server_id].add(ws)
            self._channel_subs[ws].add('status')
            if server_id == 'all':
                self._connections['all'].add(ws)

    async def disconnect(self, ws: WebSocket, server_id: str):
        async with self._lock:
            self._connections[server_id].discard(ws)
            self._connections['all'].discard(ws)
            self._channel_subs.pop(ws, None)

    async def subscribe(self, ws: WebSocket, channels: list[str]):
        async with self._lock:
            self._channel_subs[ws].update(channels)

    async def unsubscribe(self, ws: WebSocket, channels: list[str]):
        async with self._lock:
            self._channel_subs[ws].difference_update(channels)

    async def broadcast(self, server_id: str, message: dict, channel: str | None = None):
        targets: set[WebSocket] = set()
        async with self._lock:
            server_clients = self._connections.get(server_id, set())
            all_clients = self._connections.get('all', set())
            candidates = server_clients | all_clients

            if channel:
                targets = {ws for ws in candidates
                           if channel in self._channel_subs.get(ws, set())}
            else:
                targets = candidates

        dead = []
        for ws in targets:
            try:
                await ws.send_json(message)
            except Exception:
                dead.append(ws)

        if dead:
            async with self._lock:
                for ws in dead:
                    for bucket in self._connections.values():
                        bucket.discard(ws)
                    self._channel_subs.pop(ws, None)
```

---

## ThreadRegistry

`ThreadRegistry` manages thread lifecycle per server:

- **`start_server_threads(server_id, db)`** — Creates and starts all 4 thread types for a server
- **`stop_server_threads(server_id)`** — Sets stop events and joins all threads for a server
- **`reattach_server_threads(server_id, db)`** — Recovers threads for a server that survived a process restart
- **`stop_all()`** — Stops all threads for all servers (called on shutdown)

Thread bundles are stored in a dict: `{server_id → ThreadBundle}`, where `ThreadBundle` is a dataclass holding all thread references.

## Memory & Performance Considerations

| Thread | Memory Impact | CPU Impact |
|--------|--------------|-----------|
| ProcessMonitorThread | Minimal (one `os.kill` check) | Negligible |
| LogTailThread | Buffer for unread log lines | Low (file I/O + adapter parsing) |
| MetricsCollectorThread | psutil subprocess scan | Low-Medium |
| RemoteAdminPollerThread | Adapter client socket + buffer | Low (varies by adapter protocol) |
| BroadcastThread | Queue buffer (max 10000 entries) | Low |

### Recommendations

- Set all threads as `daemon=True` — they die automatically if the main process exits
- `broadcast_queue.maxsize=10000` — backpressure; drop on Full (log warning)
- `LogTailThread` buffers at most ~100 lines per tick before writing to the DB in a batch
- `MetricsCollectorThread` uses `psutil.Process.cpu_percent(interval=0.5)` — blocks 500 ms, acceptable at a 5 s interval
- For N=10 servers: 31-41 background threads — well within Python's thread limits
- Games without remote admin skip the RemoteAdminPollerThread entirely

## BroadcastThread

The `BroadcastThread` is the single global thread that bridges synchronous background threads to asynchronous WebSocket clients:

1. Background threads push events into a `queue.Queue(maxsize=10000)`
2. `BroadcastThread` runs a loop reading from the queue
3. For each event, it calls `asyncio.run_coroutine_threadsafe()` to schedule a WebSocket broadcast on the main event loop
4. If the queue is full, events are dropped (non-blocking put)

Events are broadcast to WebSocket clients subscribed to the relevant `server_id` (or `None` for all servers).
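The four steps above can be sketched as follows. This is a hedged sketch: `publish` and `broadcast_loop` are illustrative names, and the real thread also handles shutdown and logging of dropped events.

```python
import asyncio
import queue
import threading

# Step 1: the global, thread-safe event queue (backpressure via maxsize).
event_queue: "queue.Queue[dict]" = queue.Queue(maxsize=10000)


def publish(event: dict) -> bool:
    """Called from background threads. Step 4: non-blocking put, drop on Full."""
    try:
        event_queue.put_nowait(event)
        return True
    except queue.Full:
        return False


def broadcast_loop(loop: asyncio.AbstractEventLoop,
                   stop: threading.Event, manager) -> None:
    """Steps 2-3: drain the queue and hand each event to the asyncio side."""
    while not stop.is_set():
        try:
            event = event_queue.get(timeout=0.5)
        except queue.Empty:
            continue
        # Thread-safe handoff onto the main event loop.
        asyncio.run_coroutine_threadsafe(
            manager.broadcast(event.get("server_id"), event), loop)
```

`asyncio.run_coroutine_threadsafe` is the only safe way to schedule a coroutine on a running loop from another thread, which is why this single bridge thread exists at all.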

## ProcessManager

`ProcessManager` is a singleton that manages server processes via `subprocess.Popen`:

- **`start_process(server_id, cmd, cwd, env)`** — Starts a new subprocess, stores the PID
- **`stop_process(server_id, timeout)`** — Sends terminate signal, waits for exit, force-kills after timeout
- **`kill_process(server_id)`** — Force-kills the process immediately
- **`recover_on_startup(db)`** — On startup, checks all stored PIDs against running processes via `psutil.pid_exists()`. If a process is still alive, marks the server as running. If not, marks it as stopped.
- Thread-safe with per-server `threading.Lock`

## LogTailThread

Tails the Arma 3 `.rpt` log file for each server:

- Resolves the latest log file path using the adapter's `LogParser.get_latest_log_file()`
- Reads new lines from the end of the file, detecting log rotation (Windows/NTFS safe)
- Parses each line using `RPTParser.parse_line()` to extract timestamp, level, and message
- Persists parsed entries to the `logs` table via `LogRepository`
- Broadcasts `log` events via the global queue
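The size-based rotation check might look like this. A sketch under assumptions: the actual thread also tracks which file is the latest `.rpt` and batches lines before the DB write, which this helper omits.

```python
from pathlib import Path


def read_new_lines(path: Path, offset: int) -> tuple[list[str], int]:
    """Return lines appended since `offset`, restarting from 0 when the
    file shrank (a new/rotated log file), which is safe on NTFS where the
    old handle may keep working after rotation."""
    size = path.stat().st_size
    if size < offset:        # file is smaller than where we left off -> rotated
        offset = 0
    with path.open("rb") as f:
        f.seek(offset)
        data = f.read()
        new_offset = f.tell()
    lines = data.decode("utf-8", errors="replace").splitlines()
    return lines, new_offset
```

The caller persists `new_offset` between ticks; binary mode is used so byte offsets stay comparable with `st_size` on every platform.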

## ProcessMonitorThread

Monitors each server process for crashes:

- Checks every 5 seconds whether the process is still alive
- If the process has exited unexpectedly:
  1. Updates server status to `crashed`
  2. Logs the crash event
  3. If `auto_restart` is enabled and restart count hasn't exceeded `max_restarts` within the `restart_window_seconds`:
     - Triggers a restart via `ServerService.start_server()`
     - Increments `restart_count`

## MetricsCollectorThread

Collects CPU and RAM metrics for each running server:

- Uses `psutil.Process(pid)` to get CPU and memory usage
- Collects every 10 seconds
- Stores metrics in the `metrics` table via `MetricsRepository`
- Broadcasts `metrics` events via the global queue
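One collection tick might look like this. A sketch: `proc` is duck-typed so the example runs without a live server process, but the calls mirror the `psutil.Process` API (`oneshot()`, `cpu_percent(interval=...)`, `memory_info().rss`).

```python
def collect_metrics(proc) -> dict:
    """One MetricsCollectorThread tick for a single server process.
    cpu_percent(interval=0.5) blocks ~500 ms while it samples."""
    with proc.oneshot():  # psutil batches syscalls inside oneshot()
        cpu = proc.cpu_percent(interval=0.5)
        rss_bytes = proc.memory_info().rss
    return {"cpu_percent": cpu, "ram_mb": round(rss_bytes / (1024 * 1024), 1)}
```

The returned dict matches the `cpu_percent` / `ram_mb` fields the frontend's enriched `Server` type expects.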

## RemoteAdminPollerThread

Polls the BattlEye RCon interface for player list updates:

- Connects via `Arma3RemoteAdmin` using `BERConClient`
- Polls player list every 10 seconds
- Compares current players with previous state to detect joins/leaves
- On player join: upserts to `players` table, inserts to `player_history`, broadcasts `players` event
- On player leave: removes from `players`, updates `left_at` in `player_history`, broadcasts `players` event
- On RCon connection failure: reconnects with exponential backoff
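The join/leave detection reduces to a snapshot diff keyed by GUID. This helper is illustrative, not the actual poller code:

```python
def diff_players(previous: dict[str, dict],
                 current: dict[str, dict]) -> tuple[list[dict], list[dict]]:
    """Compare two player snapshots keyed by GUID; return (joined, left)."""
    joined = [p for guid, p in current.items() if guid not in previous]
    left = [p for guid, p in previous.items() if guid not in current]
    return joined, left
```

The poller keeps the previous snapshot between ticks, applies the DB upserts/removals for each diff entry, then stores `current` as the new baseline.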

## WebSocketManager

Runs on the main asyncio event loop:

- Clients connect to `/ws?token=JWT&server_id=N`
- JWT is validated on connection; invalid tokens close with code 4001
- Clients subscribe to specific `server_id`s or `None` (all servers)
- `broadcast(server_id, message)` sends JSON-encoded messages to matching subscribers
- `disconnect(websocket)` removes the client from the registry
- Thread-safe via `asyncio.Lock`

## Thread Safety Rules

1. **Database access**: Each thread uses its own connection via `get_thread_db()`. No shared DB connections.
2. **WebSocket broadcasting**: Threads write to `queue.Queue`, which is thread-safe. Only `BroadcastThread` reads from the queue.
3. **Process management**: `ProcessManager` uses per-server locks for thread-safe start/stop operations.
4. **SQLite WAL mode**: Enables concurrent reads from multiple threads while a single writer operates.
5. **Asyncio locks**: `WebSocketManager` uses `asyncio.Lock` for connection registry modifications.
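Rule 1 is typically implemented with `threading.local`. A sketch of what `get_thread_db()` could look like (the real helper may set additional pragmas such as a busy timeout or a row factory):

```python
import sqlite3
import threading

_local = threading.local()


def get_thread_db(db_path: str = "languard.db") -> sqlite3.Connection:
    """Return this thread's private connection, creating it on first use.
    WAL mode (rule 4) lets readers run concurrently with the single writer."""
    conn = getattr(_local, "conn", None)
    if conn is None:
        conn = sqlite3.connect(db_path)
        conn.execute("PRAGMA journal_mode=WAL")
        _local.conn = conn
    return conn
```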

## Scheduled Jobs

APScheduler `BackgroundScheduler` runs 3 cleanup cron jobs:

| Job | Schedule | Cleanup |
|---|---|---|
| Clean up old log entries | Daily at 03:00 | `DELETE FROM logs WHERE created_at < datetime('now', '-7 days')` |
| Clean up old metrics | Every 6 hours | `DELETE FROM metrics WHERE timestamp < datetime('now', '-1 day')` |
| Clean up old events | Weekly (Sunday 04:00) | `DELETE FROM server_events WHERE created_at < datetime('now', '-30 days')` |
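All three jobs share the same shape, so they reduce to one parameterized delete helper. A hedged sketch (the name `cleanup_old_rows` is illustrative; table and column names come from the table above):

```python
import sqlite3


def cleanup_old_rows(conn: sqlite3.Connection, table: str, column: str,
                     keep: str) -> int:
    """Delete rows older than the SQLite datetime offset `keep`
    (e.g. '-7 days'). Returns the number of rows removed."""
    cur = conn.execute(
        f"DELETE FROM {table} WHERE {column} < datetime('now', ?)", (keep,))
    conn.commit()
    return cur.rowcount
```

Each APScheduler cron job would then just call this with its own `(table, column, keep)` triple from the schedule table.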

## Startup Sequence

1. Init DB engine and run pending migrations
2. Register built-in adapters (Arma 3) and scan for third-party plugins
3. Create `WebSocketManager` (asyncio-only)
4. Create global `BroadcastThread` (queue → asyncio bridge)
5. Create `ThreadRegistry` with `ProcessManager` and adapter registry
6. Recover processes that survived a restart (PID validation via psutil)
7. Re-attach monitoring threads for running servers
8. Seed default admin user if no users exist
9. Register and start APScheduler cleanup jobs

## Shutdown Sequence

1. Stop all server threads via `ThreadRegistry.stop_all()`
2. Stop `BroadcastThread` and join with 5s timeout
3. Stop APScheduler

12  backend/.env.example  Normal file
@@ -0,0 +1,12 @@
LANGUARD_SECRET_KEY=changeme-generate-with-openssl-rand-hex-32
LANGUARD_ENCRYPTION_KEY=changeme-generate-with-python-cryptography-fernet
LANGUARD_DB_PATH=./languard.db
LANGUARD_SERVERS_DIR=./servers
LANGUARD_HOST=0.0.0.0
LANGUARD_PORT=8000
LANGUARD_CORS_ORIGINS=["http://localhost:5173","http://localhost:3000"]
LANGUARD_LOG_RETENTION_DAYS=7
LANGUARD_METRICS_RETENTION_DAYS=30
LANGUARD_PLAYER_HISTORY_RETENTION_DAYS=90
LANGUARD_JWT_EXPIRE_HOURS=24
LANGUARD_ARMA3_DEFAULT_EXE=C:/Arma3Server/arma3server_x64.exe

73  backend/CLAUDE.md  Normal file
@@ -0,0 +1,73 @@
# Languard Server Manager

## Quick Start

```bash
# Backend (from backend/)
python -m uvicorn main:app --host 0.0.0.0 --port 8000 --reload

# Frontend (from frontend/)
npx vite --host
```

- Backend API: http://localhost:8000 (docs: http://localhost:8000/docs)
- Frontend: http://localhost:5173
- Default admin: `admin` / (random, printed at first startup; reset via `python -c "from core.auth.utils import hash_password; print(hash_password('admin123'))"` then update SQLite)

## Architecture

FastAPI + SQLite backend, React 19 + TypeScript + Vite frontend. See ARCHITECTURE.md for full details.

### Key Rules
- Frontend types must match API response shapes, NOT database schema columns
- There is no REST endpoint for logs — logs are only pushed via WebSocket events
- WebSocket `onEvent` callback is the mechanism for receiving real-time log entries
- Config updates use optimistic locking (config_version) — 409 on conflict
- Sensitive config fields are encrypted at rest with Fernet

## Current Implementation Status

### Backend: Fully implemented (42+ endpoints)
All routers, services, repositories, game adapter system, WebSocket, background threads, and scheduled cleanup are complete.

### Frontend: Mostly implemented

| Route | Status | Notes |
|-------|--------|-------|
| `/login` | Complete | Zod + react-hook-form validation |
| `/` | Complete | Dashboard with server grid |
| `/servers/:id` | Complete | 7-tab detail page (overview, config, players, bans, missions, mods, logs) |
| `/servers/new` | Partial | 4-step wizard; **known bug: form validation issues cause silent failure on submit** |
| `/settings` | Complete | Password change + admin user management |

### Known Bugs (as of 2026-04-17)

1. **Create Server silent failure**: The 4-step wizard's "Next" buttons don't validate before advancing steps, so users can reach step 3 with invalid data. `handleSubmit` then silently fails because validation errors prevent `onSubmit` from firing. Fix: validate on each "Next" click using `trigger()` from react-hook-form.

### Frontend Type Mapping (API → Frontend)

| API Resource | Frontend Type | Key Fields |
|---|---|---|
| Server (enriched) | `Server` in useServers.ts | `game_port`, `current_players`, `max_players`, `cpu_percent`, `ram_mb` |
| Mission | `Mission` in useServerDetail.ts | `name`, `filename`, `size_bytes` |
| Mod | `Mod` in useServerDetail.ts | `name`, `path`, `size_bytes`, `enabled` |
| Ban | `Ban` in useServerDetail.ts | `id`, `server_id`, `guid`, `name`, `reason`, `banned_by`, `banned_at`, `expires_at`, `is_active`, `game_data` |
| Player | `Player` in useServerDetail.ts | `id`, `slot_id`, `name`, `guid`, `ip`, `ping` |

## Test Commands

```bash
# Frontend unit tests
cd frontend && npx vitest run

# Frontend type check
cd frontend && npx tsc --noEmit

# Backend (no test suite yet)
```

## Future Enhancements (user requested)

- Config sub-tab redesign for user-friendliness (non-technical users)
- "Choose mission" button that auto-selects mission for server config
- Mission rotation management

0  backend/__init__.py  Normal file

39  backend/adapters/__init__.py  Normal file
@@ -0,0 +1,39 @@
"""
Auto-register all built-in adapters.
Also scans importlib entry_points for third-party adapters.
"""
import logging

logger = logging.getLogger(__name__)


def load_builtin_adapters():
    """Import built-in adapter packages — they self-register on import."""
    from adapters.arma3 import ARMA3_ADAPTER  # noqa: F401


def load_third_party_adapters():
    """
    Scan 'languard.adapters' entry_point group for third-party adapters.
    Third-party packages add this to their pyproject.toml:
        [project.entry-points."languard.adapters"]
        mygame = "mygame_adapter:MYGAME_ADAPTER"
    """
    try:
        from importlib.metadata import entry_points
        eps = entry_points(group="languard.adapters")
        for ep in eps:
            try:
                adapter = ep.load()
                from adapters.registry import GameAdapterRegistry
                GameAdapterRegistry.register(adapter)
                logger.info("Loaded third-party adapter via entry_point: %s", ep.name)
            except Exception as e:
                logger.error("Failed to load third-party adapter '%s': %s", ep.name, e)
    except Exception as e:
        logger.warning("Entry point scanning failed: %s", e)


def initialize_adapters():
    load_builtin_adapters()
    load_third_party_adapters()

7  backend/adapters/arma3/__init__.py  Normal file
@@ -0,0 +1,7 @@
"""Auto-register Arma 3 adapter on import."""
from adapters.arma3.adapter import ARMA3_ADAPTER
from adapters.registry import GameAdapterRegistry

GameAdapterRegistry.register(ARMA3_ADAPTER)

__all__ = ["ARMA3_ADAPTER"]

59  backend/adapters/arma3/adapter.py  Normal file
@@ -0,0 +1,59 @@
"""Arma 3 adapter — composes all Arma 3 capability implementations."""
from adapters.arma3.config_generator import Arma3ConfigGenerator
from adapters.arma3.process_config import Arma3ProcessConfig

# Capabilities enabled so far (add more as phases complete)
_CAPABILITIES = {
    "config_generator",
    "process_config",
    "log_parser",
    "remote_admin",
    "ban_manager",
    "mission_manager",
    "mod_manager",
}


class Arma3Adapter:
    game_type = "arma3"
    display_name = "Arma 3"
    version = "1.0.0"

    def get_config_generator(self):
        return Arma3ConfigGenerator()

    def get_process_config(self):
        return Arma3ProcessConfig()

    def get_log_parser(self):
        from adapters.arma3.log_parser import RPTParser
        return RPTParser()

    def get_remote_admin(self):
        """Return the RemoteAdmin factory for Arma3 BattlEye RCon."""
        from adapters.arma3.remote_admin import Arma3RemoteAdminFactory
        return Arma3RemoteAdminFactory()

    def get_mission_manager(self, server_id: int | None = None):
        from adapters.arma3.mission_manager import Arma3MissionManager
        return Arma3MissionManager(server_id=server_id)

    def get_mod_manager(self, server_id: int | None = None):
        from adapters.arma3.mod_manager import Arma3ModManager
        return Arma3ModManager(server_id=server_id)

    def get_ban_manager(self, server_id: int | None = None):
        from adapters.arma3.ban_manager import Arma3BanManager
        return Arma3BanManager(server_id=server_id)

    def has_capability(self, name: str) -> bool:
        return name in _CAPABILITIES

    def get_additional_routers(self) -> list:
        return []

    def get_custom_thread_factories(self) -> list:
        return []


ARMA3_ADAPTER = Arma3Adapter()

200  backend/adapters/arma3/ban_manager.py  Normal file
@@ -0,0 +1,200 @@
"""Arma 3 ban manager — bidirectional sync between DB bans and BattlEye ban file."""
from __future__ import annotations

import logging
import os
from pathlib import Path

from pydantic import BaseModel

from core.utils.file_utils import get_server_dir

logger = logging.getLogger(__name__)

_BANS_FILE = "battleye/bans.txt"


class Arma3BanData(BaseModel):
    """Ban data schema for Arma 3."""
    guid: str = ""
    ip: str = ""


class Arma3BanManager:
    """
    Implements BanManager protocol for Arma3 BattlEye.

    Also provides richer file-based operations for the ban endpoints.
    """

    def __init__(self, server_id: int | None = None) -> None:
        self._server_id = server_id

    def _bans_path(self) -> Path:
        if self._server_id is None:
            raise ValueError("server_id required for file-based ban operations")
        server_dir = get_server_dir(self._server_id)
        return server_dir / _BANS_FILE

    # ── BanManager protocol methods ──

    def get_ban_file_path(self, server_dir: Path) -> Path:
        return server_dir / _BANS_FILE

    def sync_bans_to_file(self, bans: list[dict], ban_file: Path) -> None:
        """Write bans from DB to BattlEye ban file format."""
        lines = []
        for ban in bans:
            identifier = ban.get("player_uid") or ban.get("guid") or ban.get("ip", "")
            ban_type = ban.get("ban_type", "GUID")
            reason = ban.get("reason", "")
            duration = ban.get("duration_minutes", 0)
            reason_clean = reason.replace("\n", " ").replace("\r", "").strip()
            if identifier:
                lines.append(f"{ban_type} {identifier} {duration} {reason_clean}".strip())

        ban_file.parent.mkdir(parents=True, exist_ok=True)
        tmp_path = str(ban_file) + ".tmp"
        try:
            with open(tmp_path, "w", encoding="utf-8") as f:
                f.write("\n".join(lines) + "\n" if lines else "")
            os.replace(tmp_path, str(ban_file))
        except OSError:
            self._safe_delete(tmp_path)
            raise

    def read_bans_from_file(self, ban_file: Path) -> list[dict]:
        """Read bans from BattlEye ban file into standard format."""
        if not ban_file.exists():
            return []

        bans = []
        for line_num, line in enumerate(ban_file.read_text(encoding="utf-8", errors="replace").splitlines(), 1):
            line = line.strip()
            if not line or line.startswith("//") or line.startswith("#"):
                continue

            parsed = self._parse_ban_line(line, line_num)
            if parsed:
                bans.append(parsed)

        return bans

    def get_ban_data_schema(self) -> type[BaseModel] | None:
        return Arma3BanData

    # ── Richer file-based operations (used by ban endpoints) ──

    def get_bans(self) -> list[dict]:
        """Read all bans from bans.txt. Returns list of dicts."""
        bans_path = self._bans_path()
        if not bans_path.exists():
            return []

        bans = []
        try:
            with open(bans_path, "r", encoding="utf-8", errors="replace") as f:
                for line_num, line in enumerate(f, 1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    parsed = self._parse_ban_line(line, line_num)
                    if parsed:
                        bans.append(parsed)
        except OSError as exc:
            logger.error("Cannot read bans.txt: %s", exc)

        return bans

    def add_ban(self, identifier: str, ban_type: str, reason: str, duration_minutes: int) -> None:
        """Append a ban entry to bans.txt."""
        reason_clean = reason.replace("\n", " ").replace("\r", "").strip()
        line = f"{ban_type} {identifier} {duration_minutes} {reason_clean}\n"

        bans_path = self._bans_path()
        bans_path.parent.mkdir(parents=True, exist_ok=True)

        try:
            with open(bans_path, "a", encoding="utf-8") as f:
                f.write(line)
        except OSError as exc:
            logger.error("Cannot write bans.txt: %s", exc)

    def remove_ban(self, identifier: str) -> bool:
        """Remove all ban entries matching the given identifier. Returns True if removed."""
        bans_path = self._bans_path()
        if not bans_path.exists():
            return False

        try:
            with open(bans_path, "r", encoding="utf-8", errors="replace") as f:
                lines = f.readlines()
        except OSError as exc:
            logger.error("Cannot read bans.txt: %s", exc)
            return False

        new_lines = []
        removed = 0
        for line in lines:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                parts = stripped.split(None, 3)
                if len(parts) >= 2 and parts[1] == identifier:
                    removed += 1
                    continue
            new_lines.append(line)

        if removed == 0:
            return False

        tmp_path = str(bans_path) + ".tmp"
        try:
            with open(tmp_path, "w", encoding="utf-8") as f:
                f.writelines(new_lines)
            os.replace(tmp_path, str(bans_path))
        except OSError as exc:
            self._safe_delete(tmp_path)
            logger.error("Cannot update bans.txt: %s", exc)
            return False

        return True

    # ── Internal ──

    def _parse_ban_line(self, line: str, line_num: int) -> dict | None:
        """Parse one ban line: TYPE IDENTIFIER DURATION REASON"""
        parts = line.split(None, 3)
        if len(parts) < 2:
            return None

        ban_type = parts[0].upper()
        if ban_type not in ("GUID", "IP"):
            return None

        identifier = parts[1]
        duration = 0
        reason = ""

        if len(parts) >= 3:
            try:
                duration = int(parts[2])
            except ValueError:
                duration = 0

        if len(parts) >= 4:
            reason = parts[3]

        return {
            "type": ban_type,
            "identifier": identifier,
            "duration_minutes": duration,
            "reason": reason,
            "is_permanent": duration == 0,
        }

    @staticmethod
    def _safe_delete(path: str) -> None:
        try:
            os.unlink(path)
        except OSError as exc:
            logger.debug("Arma3BanManager: could not delete %s: %s", path, exc)

400  backend/adapters/arma3/config_generator.py  Normal file
@@ -0,0 +1,400 @@
"""
Arma 3 config generator.
Merged protocol: Pydantic models (schema) + file generation + launch args.
"""
from __future__ import annotations

import logging
import os
from pathlib import Path
from typing import Any

from pydantic import BaseModel, Field

logger = logging.getLogger(__name__)


# ─── Pydantic Models (config schema) ─────────────────────────────────────────

class ServerConfig(BaseModel):
    hostname: str = "My Arma 3 Server"
    password: str | None = None
    password_admin: str = ""
    server_command_password: str | None = None
    max_players: int = Field(default=40, gt=0, le=1000)
    kick_duplicate: int = Field(default=1, ge=0, le=1)
    persistent: int = Field(default=1, ge=0, le=1)
    vote_threshold: float = Field(default=0.33, ge=0.0, le=1.0)
    vote_mission_players: int = Field(default=1, ge=0)
    vote_timeout: int = Field(default=60, ge=0)
    role_timeout: int = Field(default=90, ge=0)
    briefing_timeout: int = Field(default=60, ge=0)
    debriefing_timeout: int = Field(default=45, ge=0)
    lobby_idle_timeout: int = Field(default=300, ge=0)
    disable_von: int = Field(default=0, ge=0, le=1)
    von_codec: int = Field(default=1, ge=0, le=1)
    von_codec_quality: int = Field(default=20, ge=0, le=30)
    max_ping: int = Field(default=250, gt=0)
    max_packet_loss: int = Field(default=50, ge=0, le=100)
    max_desync: int = Field(default=200, ge=0)
    disconnect_timeout: int = Field(default=15, ge=0)
    kick_on_ping: int = Field(default=1, ge=0, le=1)
    kick_on_packet_loss: int = Field(default=1, ge=0, le=1)
    kick_on_desync: int = Field(default=1, ge=0, le=1)
    kick_on_timeout: int = Field(default=1, ge=0, le=1)
    battleye: int = Field(default=1, ge=0, le=1)
    verify_signatures: int = Field(default=2, ge=0, le=2)
    allowed_file_patching: int = Field(default=0, ge=0, le=2)
    forced_difficulty: str = "Regular"
    timestamp_format: str = "short"
    auto_select_mission: int = Field(default=0, ge=0, le=1)
    random_mission_order: int = Field(default=0, ge=0, le=1)
    log_file: str = "server_console.log"
    skip_lobby: int = Field(default=0, ge=0, le=1)
    drawing_in_map: int = Field(default=1, ge=0, le=1)
    upnp: int = Field(default=0, ge=0, le=1)
    loopback: int = Field(default=0, ge=0, le=1)
    statistics_enabled: int = Field(default=1, ge=0, le=1)
    motd_lines: list[str] = Field(default_factory=lambda: ["Welcome!", "Have fun"])
    motd_interval: float = Field(default=5.0, gt=0)
    headless_clients: list[str] = Field(default_factory=list)
    local_clients: list[str] = Field(default_factory=list)
    admin_uids: list[str] = Field(default_factory=list)


class BasicConfig(BaseModel):
    min_bandwidth: int = Field(default=800000, gt=0)
    max_bandwidth: int = Field(default=25000000, gt=0)
    max_msg_send: int = Field(default=384, gt=0)
    max_size_guaranteed: int = Field(default=512, gt=0)
    max_size_non_guaranteed: int = Field(default=256, gt=0)
    min_error_to_send: float = Field(default=0.003, gt=0)
    max_custom_file_size: int = Field(default=100000, ge=0)


class ProfileConfig(BaseModel):
    reduced_damage: int = Field(default=0, ge=0, le=1)
    group_indicators: int = Field(default=0, ge=0, le=3)
    friendly_tags: int = Field(default=0, ge=0, le=3)
    enemy_tags: int = Field(default=0, ge=0, le=3)
    detected_mines: int = Field(default=0, ge=0, le=3)
    commands: int = Field(default=1, ge=0, le=3)
    waypoints: int = Field(default=1, ge=0, le=3)
    tactical_ping: int = Field(default=0, ge=0, le=1)
    weapon_info: int = Field(default=2, ge=0, le=3)
    stance_indicator: int = Field(default=2, ge=0, le=3)
    stamina_bar: int = Field(default=0, ge=0, le=1)
    weapon_crosshair: int = Field(default=0, ge=0, le=1)
    vision_aid: int = Field(default=0, ge=0, le=1)
    third_person_view: int = Field(default=0, ge=0, le=1)
    camera_shake: int = Field(default=1, ge=0, le=1)
    score_table: int = Field(default=1, ge=0, le=1)
    death_messages: int = Field(default=1, ge=0, le=1)
    von_id: int = Field(default=1, ge=0, le=1)
    map_content_friendly: int = Field(default=0, ge=0, le=3)
    map_content_enemy: int = Field(default=0, ge=0, le=3)
    map_content_mines: int = Field(default=0, ge=0, le=3)
    auto_report: int = Field(default=0, ge=0, le=1)
    multiple_saves: int = Field(default=0, ge=0, le=1)
    ai_level_preset: int = Field(default=3, ge=0, le=4)
    skill_ai: float = Field(default=0.5, ge=0.0, le=1.0)
    precision_ai: float = Field(default=0.5, ge=0.0, le=1.0)


class LaunchConfig(BaseModel):
    world: str = "empty"
    extra_params: str = ""
    limit_fps: int = Field(default=50, gt=0, le=1000)
    auto_init: int = Field(default=0, ge=0, le=1)
    load_mission_to_memory: int = Field(default=0, ge=0, le=1)
    enable_ht: int = Field(default=0, ge=0, le=1)
    huge_pages: int = Field(default=0, ge=0, le=1)
    cpu_count: int | None = None
    ex_threads: int = Field(default=7, ge=0)
    max_mem: int | None = None
    no_logs: int = Field(default=0, ge=0, le=1)
    netlog: int = Field(default=0, ge=0, le=1)


class RConConfig(BaseModel):
    rcon_password: str = ""
    max_ping: int = Field(default=200, gt=0)
    enabled: int = Field(default=1, ge=0, le=1)


# ─── Config Generator ─────────────────────────────────────────────────────────

class Arma3ConfigGenerator:
    game_type = "arma3"

    SECTIONS: dict[str, type[BaseModel]] = {
        "server": ServerConfig,
        "basic": BasicConfig,
        "profile": ProfileConfig,
        "launch": LaunchConfig,
        "rcon": RConConfig,
    }

    SENSITIVE_FIELDS: dict[str, list[str]] = {
        "server": ["password", "password_admin", "server_command_password"],
        "rcon": ["rcon_password"],
    }

    def get_sections(self) -> dict[str, type[BaseModel]]:
        return self.SECTIONS

    def get_defaults(self, section: str) -> dict[str, Any]:
        model_cls = self.SECTIONS.get(section)
        if model_cls is None:
            return {}
        return model_cls().model_dump()

    def get_sensitive_fields(self, section: str) -> list[str]:
        return self.SENSITIVE_FIELDS.get(section, [])

    def get_config_version(self) -> str:
        return "1.0.0"

    def migrate_config(self, old_version: str, config_json: dict) -> dict:
        """
        For version 1.0.0 there is nothing to migrate.
        Future versions: add migration logic here.
        """
        from adapters.exceptions import ConfigMigrationError
        raise ConfigMigrationError(
            old_version, f"No migration path from {old_version} to {self.get_config_version()}"
        )

    # ── Config file writers ───────────────────────────────────────────────────

    @staticmethod
    def _escape(value: str) -> str:
        """
        Escape a string for use inside Arma 3 double-quoted config values.
        Order matters: escape backslashes FIRST.
        """
        value = value.replace("\\", "\\\\")
        value = value.replace('"', '\\"')
        value = value.replace('\n', '\\n')
        return value

    @staticmethod
    def _atomic_write(path: Path, content: str) -> None:
        """Write content to path atomically via tmp file + os.replace()."""
        from adapters.exceptions import ConfigWriteError
        tmp_path = path.with_suffix(path.suffix + ".tmp")
        try:
            path.parent.mkdir(parents=True, exist_ok=True)
            tmp_path.write_text(content, encoding="utf-8")
            os.replace(str(tmp_path), str(path))
        except OSError as e:
            # Clean up tmp file if it exists
            try:
                tmp_path.unlink(missing_ok=True)
            except OSError as exc:
                logger.debug("Could not clean up temp file %s: %s", tmp_path, exc)
            raise ConfigWriteError(str(path), str(e)) from e

    def _render_server_cfg(self, cfg: ServerConfig) -> str:
        """Render server.cfg content string."""
        motd_items = ", ".join(f'"{self._escape(l)}"' for l in cfg.motd_lines)
        headless = ", ".join(f'"{h}"' for h in cfg.headless_clients)
        local = ", ".join(f'"{l}"' for l in cfg.local_clients)
        admin_uids = ", ".join(f'"{u}"' for u in cfg.admin_uids)

        lines = [
            f'hostname = "{self._escape(cfg.hostname)}";',
        ]
        if cfg.password:
            lines.append(f'password = "{self._escape(cfg.password)}";')
        if cfg.password_admin:
            lines.append(f'passwordAdmin = "{self._escape(cfg.password_admin)}";')
        if cfg.server_command_password:
            lines.append(f'serverCommandPassword = "{self._escape(cfg.server_command_password)}";')

        lines += [
            f"maxPlayers = {cfg.max_players};",
            f"kickDuplicate = {cfg.kick_duplicate};",
            f"persistent = {cfg.persistent};",
            f"voteThreshold = {cfg.vote_threshold};",
            f"voteMissionPlayers = {cfg.vote_mission_players};",
            f"voteTimeout = {cfg.vote_timeout};",
            f"roleTimeout = {cfg.role_timeout};",
            f"briefingTimeOut = {cfg.briefing_timeout};",
            f"debriefingTimeOut = {cfg.debriefing_timeout};",
|
||||
f"lobbyIdleTimeout = {cfg.lobby_idle_timeout};",
|
||||
f"disableVoN = {cfg.disable_von};",
|
||||
f"vonCodec = {cfg.von_codec};",
|
||||
f"vonCodecQuality = {cfg.von_codec_quality};",
|
||||
f"maxPing = {cfg.max_ping};",
|
||||
f"maxPacketLoss = {cfg.max_packet_loss};",
|
||||
f"maxDesync = {cfg.max_desync};",
|
||||
f"disconnectTimeout = {cfg.disconnect_timeout};",
|
||||
f"kickOnPing = {cfg.kick_on_ping};",
|
||||
f"kickOnPacketLoss = {cfg.kick_on_packet_loss};",
|
||||
f"kickOnDesync = {cfg.kick_on_desync};",
|
||||
f"kickOnTimeout = {cfg.kick_on_timeout};",
|
||||
f"BattlEye = {cfg.battleye};",
|
||||
f"verifySignatures = {cfg.verify_signatures};",
|
||||
f"allowedFilePatching = {cfg.allowed_file_patching};",
|
||||
f'forcedDifficulty = "{cfg.forced_difficulty}";',
|
||||
f'timeStampFormat = "{cfg.timestamp_format}";',
|
||||
f"autoSelectMission = {cfg.auto_select_mission};",
|
||||
f"randomMissionOrder = {cfg.random_mission_order};",
|
||||
f'logFile = "{cfg.log_file}";',
|
||||
f"skipLobby = {cfg.skip_lobby};",
|
||||
f"drawingInMap = {cfg.drawing_in_map};",
|
||||
f"upnp = {cfg.upnp};",
|
||||
f"loopback = {cfg.loopback};",
|
||||
f"statisticsEnabled = {cfg.statistics_enabled};",
|
||||
f"motd[] = {{{motd_items}}};",
|
||||
f"motdInterval = {cfg.motd_interval};",
|
||||
]
|
||||
if cfg.headless_clients:
|
||||
lines.append(f"headlessClients[] = {{{headless}}};")
|
||||
if cfg.local_clients:
|
||||
lines.append(f"localClient[] = {{{local}}};")
|
||||
if cfg.admin_uids:
|
||||
lines.append(f"admins[] = {{{admin_uids}}};")
|
||||
|
||||
return "\n".join(lines) + "\n"
|
||||
|
||||
def _render_basic_cfg(self, cfg: BasicConfig) -> str:
|
||||
return (
|
||||
f"MinBandwidth = {cfg.min_bandwidth};\n"
|
||||
f"MaxBandwidth = {cfg.max_bandwidth};\n"
|
||||
f"MaxMsgSend = {cfg.max_msg_send};\n"
|
||||
f"MaxSizeGuaranteed = {cfg.max_size_guaranteed};\n"
|
||||
f"MaxSizeNonguaranteed = {cfg.max_size_non_guaranteed};\n"
|
||||
f"MinErrorToSend = {cfg.min_error_to_send};\n"
|
||||
f"MaxCustomFileSize = {cfg.max_custom_file_size};\n"
|
||||
)
|
||||
|
||||
def _render_arma3profile(self, cfg: ProfileConfig) -> str:
|
||||
return (
|
||||
"class DifficultyPresets {\n"
|
||||
" class CustomDifficulty {\n"
|
||||
" class Options {\n"
|
||||
f" reducedDamage = {cfg.reduced_damage};\n"
|
||||
f" groupIndicators = {cfg.group_indicators};\n"
|
||||
f" friendlyTags = {cfg.friendly_tags};\n"
|
||||
f" enemyTags = {cfg.enemy_tags};\n"
|
||||
f" detectedMines = {cfg.detected_mines};\n"
|
||||
f" commands = {cfg.commands};\n"
|
||||
f" waypoints = {cfg.waypoints};\n"
|
||||
f" tacticalPing = {cfg.tactical_ping};\n"
|
||||
f" weaponInfo = {cfg.weapon_info};\n"
|
||||
f" stanceIndicator = {cfg.stance_indicator};\n"
|
||||
f" staminaBar = {cfg.stamina_bar};\n"
|
||||
f" weaponCrosshair = {cfg.weapon_crosshair};\n"
|
||||
f" visionAid = {cfg.vision_aid};\n"
|
||||
f" thirdPersonView = {cfg.third_person_view};\n"
|
||||
f" cameraShake = {cfg.camera_shake};\n"
|
||||
f" scoreTable = {cfg.score_table};\n"
|
||||
f" deathMessages = {cfg.death_messages};\n"
|
||||
f" vonID = {cfg.von_id};\n"
|
||||
f" mapContentFriendly = {cfg.map_content_friendly};\n"
|
||||
f" mapContentEnemy = {cfg.map_content_enemy};\n"
|
||||
f" mapContentMines = {cfg.map_content_mines};\n"
|
||||
f" autoReport = {cfg.auto_report};\n"
|
||||
f" multipleSaves = {cfg.multiple_saves};\n"
|
||||
" };\n"
|
||||
f" aiLevelPreset = {cfg.ai_level_preset};\n"
|
||||
" };\n"
|
||||
" class CustomAILevel {\n"
|
||||
f" skillAI = {cfg.skill_ai};\n"
|
||||
f" precisionAI = {cfg.precision_ai};\n"
|
||||
" };\n"
|
||||
"};\n"
|
||||
)
|
||||
|
||||
def _render_beserver_cfg(self, cfg: RConConfig) -> str:
|
||||
return (
|
||||
f"RConPassword {cfg.rcon_password}\n"
|
||||
f"MaxPing {cfg.max_ping}\n"
|
||||
)
|
||||
|
||||
# ── Public interface ──────────────────────────────────────────────────────
|
||||
|
||||
def write_configs(
|
||||
self,
|
||||
server_id: int,
|
||||
server_dir: Path,
|
||||
config_sections: dict[str, dict],
|
||||
) -> list[Path]:
|
||||
server_cfg = ServerConfig(**config_sections.get("server", {}))
|
||||
basic_cfg = BasicConfig(**config_sections.get("basic", {}))
|
||||
profile_cfg = ProfileConfig(**config_sections.get("profile", {}))
|
||||
rcon_cfg = RConConfig(**config_sections.get("rcon", {}))
|
||||
|
||||
written = []
|
||||
pairs = [
|
||||
(server_dir / "server.cfg", self._render_server_cfg(server_cfg)),
|
||||
(server_dir / "basic.cfg", self._render_basic_cfg(basic_cfg)),
|
||||
(server_dir / "server" / "server.Arma3Profile", self._render_arma3profile(profile_cfg)),
|
||||
(server_dir / "battleye" / "beserver.cfg", self._render_beserver_cfg(rcon_cfg)),
|
||||
]
|
||||
for path, content in pairs:
|
||||
self._atomic_write(path, content)
|
||||
written.append(path)
|
||||
|
||||
# Restrict permissions on files containing passwords (Unix only)
|
||||
if os.name != "nt":
|
||||
for path in [server_dir / "server.cfg", server_dir / "battleye" / "beserver.cfg"]:
|
||||
if path.exists():
|
||||
os.chmod(path, 0o600)
|
||||
|
||||
return written
|
||||
|
||||
def build_launch_args(
|
||||
self,
|
||||
config_sections: dict[str, dict],
|
||||
mod_args: list[str] | None = None,
|
||||
) -> list[str]:
|
||||
from adapters.exceptions import LaunchArgsError
|
||||
launch = LaunchConfig(**config_sections.get("launch", {}))
|
||||
server = ServerConfig(**config_sections.get("server", {}))
|
||||
|
||||
args = [
|
||||
f"-port={config_sections.get('_port', 2302)}",
|
||||
"-config=server.cfg",
|
||||
"-cfg=basic.cfg",
|
||||
"-profiles=./server",
|
||||
"-name=server",
|
||||
f"-world={launch.world}",
|
||||
f"-limitFPS={launch.limit_fps}",
|
||||
"-bepath=./battleye",
|
||||
]
|
||||
if launch.auto_init:
|
||||
args.append("-autoInit")
|
||||
if launch.enable_ht:
|
||||
args.append("-enableHT")
|
||||
if launch.huge_pages:
|
||||
args.append("-hugePages")
|
||||
if launch.cpu_count is not None:
|
||||
args.append(f"-cpuCount={launch.cpu_count}")
|
||||
if launch.max_mem is not None:
|
||||
args.append(f"-maxMem={launch.max_mem}")
|
||||
if launch.no_logs:
|
||||
args.append("-noLogs")
|
||||
if launch.netlog:
|
||||
args.append("-netlog")
|
||||
if launch.extra_params:
|
||||
args.extend(launch.extra_params.split())
|
||||
if mod_args:
|
||||
args.extend(mod_args)
|
||||
return args
|
||||
|
||||
def preview_config(
|
||||
self,
|
||||
server_id: int,
|
||||
server_dir: Path,
|
||||
config_sections: dict[str, dict],
|
||||
) -> dict[str, str]:
|
||||
server_cfg = ServerConfig(**config_sections.get("server", {}))
|
||||
basic_cfg = BasicConfig(**config_sections.get("basic", {}))
|
||||
profile_cfg = ProfileConfig(**config_sections.get("profile", {}))
|
||||
rcon_cfg = RConConfig(**config_sections.get("rcon", {}))
|
||||
return {
|
||||
"server.cfg": self._render_server_cfg(server_cfg),
|
||||
"basic.cfg": self._render_basic_cfg(basic_cfg),
|
||||
"server/server.Arma3Profile": self._render_arma3profile(profile_cfg),
|
||||
"battleye/beserver.cfg": self._render_beserver_cfg(rcon_cfg),
|
||||
}
|
||||
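For reference, the ordering constraint documented in `_escape` can be demonstrated standalone. This is a minimal copy of the same three replacements (not the adapter itself): doubling backslashes after escaping quotes would corrupt the escapes added for the quotes.

```python
# Standalone sketch of the escaping rule used by Arma3ConfigGenerator._escape.
def escape(value: str) -> str:
    value = value.replace("\\", "\\\\")  # 1. double backslashes FIRST
    value = value.replace('"', '\\"')    # 2. then escape double quotes
    value = value.replace("\n", "\\n")   # 3. then literal newlines
    return value

print(escape('My "Cool" Server'))  # My \"Cool\" Server
print(escape("C:\\arma"))          # C:\\arma
```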
backend/adapters/arma3/log_parser.py — 81 lines (Normal file)
@@ -0,0 +1,81 @@
"""Arma 3 RPT log parser."""
from __future__ import annotations

import re
from datetime import datetime
from pathlib import Path
from typing import Callable


class RPTParser:
    """Parses Arma 3 .rpt log files."""

    # Pattern: "HH:MM:SS ..." or "[HH:MM:SS] ..." with optional date prefix
    _timestamp_re = re.compile(
        r"^\s*(?:(\d{2}/\d{2}/\d{4})\s+)?"
        r"(?:\[)?(\d{2}:\d{2}:\d{2})(?:\])?\s*"
        r"(?:\[?(\w+)\]?\s*)?(.*)$"
    )

    def parse_line(self, line: str) -> dict | None:
        """Parse one RPT log line."""
        if not line or not line.strip():
            return None

        match = self._timestamp_re.match(line)
        if not match:
            # Non-timestamped line — treat as info
            stripped = line.strip()
            if not stripped:
                return None
            return {
                "timestamp": datetime.utcnow().isoformat(),
                "level": "info",
                "message": stripped,
            }

        date_str, time_str, level_str, message = match.groups()

        # Map Arma 3 log levels
        level = "info"
        if level_str:
            level_lower = level_str.lower()
            if level_lower in ("error", "fault"):
                level = "error"
            elif level_lower in ("warning", "warn"):
                level = "warning"

        # Build ISO timestamp
        try:
            if date_str:
                dt = datetime.strptime(f"{date_str} {time_str}", "%m/%d/%Y %H:%M:%S")
            else:
                dt = datetime.strptime(time_str, "%H:%M:%S")
                dt = dt.replace(year=datetime.utcnow().year, month=datetime.utcnow().month, day=datetime.utcnow().day)
            timestamp = dt.isoformat()
        except ValueError:
            timestamp = datetime.utcnow().isoformat()

        return {
            "timestamp": timestamp,
            "level": level,
            "message": (message or "").strip(),
        }

    def get_log_file_resolver(self, server_id: int) -> Callable[[Path], Path | None]:
        """Return a callable that finds the current RPT log file."""
        def resolver(server_dir: Path) -> Path | None:
            # Arma 3 stores logs in server_dir/server/*.rpt
            profile_dir = server_dir / "server"
            if not profile_dir.exists():
                return None

            rpt_files = sorted(profile_dir.glob("*.rpt"), key=lambda p: p.stat().st_mtime, reverse=True)
            if rpt_files:
                return rpt_files[0]

            # Fallback: check for arma3server_x64_*.rpt pattern
            rpt_files = sorted(profile_dir.glob("arma3server*.rpt"), key=lambda p: p.stat().st_mtime, reverse=True)
            return rpt_files[0] if rpt_files else None

        return resolver
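The timestamp regex above can be exercised standalone (the pattern below is copied verbatim from `RPTParser`). One quirk worth knowing: since the level brackets are optional (`\[?...\]?`), on a line without a bracketed level the first word of the message is captured into the level group; a bracketed line parses cleanly, as shown here.

```python
import re

# Same pattern as RPTParser._timestamp_re, copied for a standalone demo.
timestamp_re = re.compile(
    r"^\s*(?:(\d{2}/\d{2}/\d{4})\s+)?"
    r"(?:\[)?(\d{2}:\d{2}:\d{2})(?:\])?\s*"
    r"(?:\[?(\w+)\]?\s*)?(.*)$"
)

m = timestamp_re.match("12:34:56 [Error] Mission file missing")
date_str, time_str, level_str, message = m.groups()
print(time_str, level_str, message)  # 12:34:56 Error Mission file missing
```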
backend/adapters/arma3/mission_manager.py — 191 lines (Normal file)
@@ -0,0 +1,191 @@
"""Arma 3 mission manager — handles .pbo mission files, upload, delete, rotation."""
from __future__ import annotations

import logging
import os
import re
from pathlib import Path

from pydantic import BaseModel

from adapters.exceptions import AdapterError
from core.utils.file_utils import get_server_dir, sanitize_filename, safe_delete_file

logger = logging.getLogger(__name__)

_MISSIONS_DIR = "mpmissions"
_ALLOWED_EXTENSION = ".pbo"
_MAX_MISSION_SIZE_MB = 500


class Arma3MissionData(BaseModel):
    """Mission data schema for Arma 3."""
    terrain: str = ""
    difficulty: str = "Regular"


class Arma3MissionManager:
    file_extension = ".pbo"

    def __init__(self, server_id: int | None = None) -> None:
        self._server_id = server_id

    def _missions_dir(self) -> Path:
        return get_server_dir(self._server_id) / _MISSIONS_DIR

    # ── File operations ──

    def list_missions(self) -> list[dict]:
        """
        Scan the mpmissions directory and return all .pbo files.

        Returns list of dicts:
            name: str — filename without extension
            filename: str — full filename
            size_bytes: int — file size
        """
        missions_dir = self._missions_dir()
        if not missions_dir.exists():
            return []

        missions = []
        try:
            for entry in missions_dir.iterdir():
                if entry.is_file() and entry.suffix.lower() == _ALLOWED_EXTENSION:
                    missions.append({
                        "name": entry.stem,
                        "filename": entry.name,
                        "size_bytes": entry.stat().st_size,
                    })
        except OSError as exc:
            raise AdapterError(f"Cannot list missions: {exc}") from exc

        missions.sort(key=lambda m: m["filename"].lower())
        return missions

    def upload_mission(self, filename: str, content: bytes) -> dict:
        """
        Save a mission file to the mpmissions directory.

        Args:
            filename: Original filename from the upload (will be sanitized).
            content: Raw file bytes.

        Returns the saved mission dict.
        """
        safe_name = sanitize_filename(filename)
        if not safe_name.lower().endswith(_ALLOWED_EXTENSION):
            raise AdapterError(
                f"Invalid mission file extension. Only {_ALLOWED_EXTENSION} files are allowed."
            )

        size_mb = len(content) / (1024 * 1024)
        if size_mb > _MAX_MISSION_SIZE_MB:
            raise AdapterError(
                f"Mission file too large ({size_mb:.1f} MB). Max is {_MAX_MISSION_SIZE_MB} MB."
            )

        missions_dir = self._missions_dir()
        missions_dir.mkdir(parents=True, exist_ok=True)

        dest_path = missions_dir / safe_name

        # Atomic write: write to .tmp first, then replace
        tmp_path = str(dest_path) + ".tmp"
        try:
            with open(tmp_path, "wb") as f:
                f.write(content)
            os.replace(tmp_path, str(dest_path))
        except OSError as exc:
            safe_delete_file(Path(tmp_path))
            raise AdapterError(f"Cannot save mission file: {exc}") from exc

        logger.info(
            "Mission uploaded for server %d: %s (%d bytes)",
            self._server_id, safe_name, len(content),
        )
        return {
            "name": dest_path.stem,
            "filename": safe_name,
            "size_bytes": len(content),
        }

    def delete_mission(self, filename: str) -> bool:
        """
        Delete a mission file.
        Returns True if deleted, False if not found.
        """
        safe_name = sanitize_filename(filename)
        if not safe_name.lower().endswith(_ALLOWED_EXTENSION):
            raise AdapterError("Invalid mission filename")

        dest_path = self._missions_dir() / safe_name

        # Verify resolved path is inside missions directory (path traversal guard)
        try:
            dest_path.resolve().relative_to(self._missions_dir().resolve())
        except ValueError:
            raise AdapterError("Path traversal detected in filename")

        if not dest_path.exists():
            return False

        try:
            dest_path.unlink()
            logger.info("Mission deleted for server %d: %s", self._server_id, safe_name)
            return True
        except OSError as exc:
            raise AdapterError(f"Cannot delete mission: {exc}") from exc

    # ── Mission rotation config ──

    def parse_mission_filename(self, filename: str) -> dict:
        """
        Parse Arma 3 mission filename.
        Format: MissionName.Terrain.pbo
        """
        name = filename
        if name.endswith(self.file_extension):
            name = name[: -len(self.file_extension)]

        parts = name.rsplit(".", 1)
        if len(parts) == 2:
            return {
                "mission_name": parts[0],
                "terrain": parts[1],
                "filename": filename,
            }
        return {
            "mission_name": name,
            "terrain": "",
            "filename": filename,
        }

    def get_rotation_config(self, rotation_entries: list[dict]) -> str:
        """
        Generate Arma 3 mission rotation config block.
        rotation_entries: list of {mission_name, terrain, difficulty, params_json}
        """
        if not rotation_entries:
            return ""

        lines = ['class Missions {']
        for i, entry in enumerate(rotation_entries):
            mission = entry.get("mission_name", "")
            terrain = entry.get("terrain", "")
            difficulty = entry.get("difficulty", "Regular")
            params = entry.get("params_json", "{}")
            lines.append(f'    class Mission_{i} {{')
            lines.append(f'        template = "{mission}.{terrain}";')
            lines.append(f'        difficulty = "{difficulty}";')
            if params and params != "{}":
                lines.append(f'        params = {params};')
            lines.append('    };')
        lines.append('};')
        return "\n".join(lines)

    def get_missions_dir(self, server_dir: Path) -> Path:
        return server_dir / _MISSIONS_DIR

    def get_mission_data_schema(self) -> type[BaseModel] | None:
        return Arma3MissionData
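The filename convention handled by `parse_mission_filename` is a single right-split. This is a standalone sketch of the same logic with the `.pbo` suffix hard-coded:

```python
# Standalone sketch: Arma 3 missions are named MissionName.Terrain.pbo, so a
# single rsplit from the right separates the terrain from the mission name.
def parse_mission_filename(filename: str) -> dict:
    name = filename
    if name.endswith(".pbo"):
        name = name[: -len(".pbo")]
    parts = name.rsplit(".", 1)
    if len(parts) == 2:
        return {"mission_name": parts[0], "terrain": parts[1], "filename": filename}
    return {"mission_name": name, "terrain": "", "filename": filename}

print(parse_mission_filename("co40_Domination.Altis.pbo"))
# {'mission_name': 'co40_Domination', 'terrain': 'Altis', 'filename': 'co40_Domination.Altis.pbo'}
```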
backend/adapters/arma3/mod_manager.py — 165 lines (Normal file)
@@ -0,0 +1,165 @@
"""Arma 3 mod manager — handles mod folder conventions, CLI args, and enable/disable."""
from __future__ import annotations

import logging
import re
from pathlib import Path

from pydantic import BaseModel

from adapters.exceptions import AdapterError
from core.utils.file_utils import get_server_dir

logger = logging.getLogger(__name__)

_MOD_DIR_PATTERN = re.compile(r"^@.+", re.IGNORECASE)


class Arma3ModData(BaseModel):
    """Mod data schema for Arma 3."""
    workshop_id: str = ""
    is_server_mod: bool = False


class Arma3ModManager:

    def __init__(self, server_id: int | None = None) -> None:
        self._server_id = server_id

    def _server_dir(self) -> Path:
        return get_server_dir(self._server_id)

    # ── File / DB operations ──

    def list_available_mods(self) -> list[dict]:
        """
        Scan the server directory for mod folders (directories starting with '@').

        Returns list of dicts:
            name: str — directory name (e.g. "@CBA_A3")
            path: str — absolute directory path
            size_bytes: int — total directory size (approximate, non-recursive)
        """
        server_dir = self._server_dir()
        if not server_dir.exists():
            return []

        mods = []
        try:
            for entry in server_dir.iterdir():
                if entry.is_dir() and _MOD_DIR_PATTERN.match(entry.name):
                    try:
                        size = sum(
                            f.stat().st_size
                            for f in entry.iterdir()
                            if f.is_file()
                        )
                    except OSError:
                        size = 0
                    mods.append({
                        "name": entry.name,
                        "path": str(entry.resolve()),
                        "size_bytes": size,
                    })
        except OSError as exc:
            raise AdapterError(f"Cannot scan mod directory: {exc}") from exc

        mods.sort(key=lambda m: m["name"].lower())
        return mods

    def get_enabled_mods(self, config_repo) -> list[str]:
        """
        Get the list of enabled mod names from the database config.

        Args:
            config_repo: ConfigRepository instance.

        Returns list of mod directory names (e.g. ["@CBA_A3", "@ace"]).
        """
        mods_section = config_repo.get_section(self._server_id, "mods")
        if mods_section is None:
            return []
        enabled = mods_section.get("enabled_mods", [])
        if isinstance(enabled, str):
            enabled = [m.strip() for m in enabled.split(",") if m.strip()]
        return enabled

    def set_enabled_mods(self, mod_names: list[str], config_repo) -> None:
        """
        Update the enabled mods list in the database config.

        Args:
            mod_names: List of mod directory names to enable.
            config_repo: ConfigRepository instance.

        Raises AdapterError if any mod name doesn't exist on disk.
        """
        available = {m["name"] for m in self.list_available_mods()}
        for name in mod_names:
            if not _MOD_DIR_PATTERN.match(name):
                raise AdapterError(f"Invalid mod name '{name}': must start with '@'")
            if name not in available:
                raise AdapterError(
                    f"Mod '{name}' not found in server directory. "
                    f"Available: {sorted(available)}"
                )

        mods_section = config_repo.get_section(self._server_id, "mods") or {}
        current_version = mods_section.get("config_version", 0)
        config_repo.upsert_section(
            server_id=self._server_id,
            section="mods",
            data={"enabled_mods": mod_names},
            expected_version=current_version,
        )
        logger.info(
            "Updated enabled mods for server %d: %s",
            self._server_id, mod_names,
        )

    # ── CLI argument building ──

    def get_mod_folder_pattern(self) -> str:
        """Arma 3 mods use @ prefix for local, or numeric workshop IDs."""
        return "@*"

    def build_mod_args(self, server_mods: list[dict]) -> list[str]:
        """
        Build Arma 3 mod CLI arguments.
        Returns -mod and -serverMod argument lists.
        """
        client_mods = []
        server_only_mods = []

        for mod in server_mods:
            path = mod.get("folder_path", "")
            game_data = mod.get("game_data", {})
            if isinstance(game_data, str):
                import json
                try:
                    game_data = json.loads(game_data)
                except (json.JSONDecodeError, TypeError):
                    game_data = {}

            is_server = game_data.get("is_server_mod", False) if isinstance(game_data, dict) else False

            if is_server:
                server_only_mods.append(path)
            else:
                client_mods.append(path)

        args = []
        if client_mods:
            args.append('-mod="' + ";".join(client_mods) + '"')
        if server_only_mods:
            args.append('-serverMod="' + ";".join(server_only_mods) + '"')
        return args

    def validate_mod_folder(self, path: Path) -> bool:
        """Validate that a path looks like a valid Arma 3 mod folder."""
        if not path.exists() or not path.is_dir():
            return False
        return (path / "addons").exists() or (path / "$PREFIX$").exists()

    def get_mod_data_schema(self) -> type[BaseModel] | None:
        return Arma3ModData
backend/adapters/arma3/process_config.py — 30 lines (Normal file)
@@ -0,0 +1,30 @@
"""Arma 3 process configuration: executables, ports, directory layout."""


class Arma3ProcessConfig:

    def get_allowed_executables(self) -> list[str]:
        return ["arma3server_x64.exe", "arma3server.exe"]

    def get_port_conventions(self, game_port: int) -> dict[str, int]:
        """
        Arma 3 derives 3 additional ports from the game port.
        All 4 must be free when starting a server.
        rcon_port is separate (user-configurable, not auto-derived here).
        """
        return {
            "game": game_port,
            "steam_query": game_port + 1,
            "von": game_port + 2,
            "steam_auth": game_port + 3,
        }

    def get_default_game_port(self) -> int:
        return 2302

    def get_default_rcon_port(self, game_port: int) -> int | None:
        return game_port + 4  # e.g. 2306 for default game port

    def get_server_dir_layout(self) -> list[str]:
        """Subdirectories to create inside servers/{id}/."""
        return ["server", "battleye", "mpmissions"]
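The port derivation above is simple enough to sketch standalone. With the default game port 2302, the derived ports are 2303–2305, and the suggested RCon port is 2306:

```python
# Standalone sketch of the Arma 3 port convention from Arma3ProcessConfig:
# three extra ports are derived from the game port, and all four must be free.
def port_conventions(game_port: int) -> dict[str, int]:
    return {
        "game": game_port,
        "steam_query": game_port + 1,
        "von": game_port + 2,
        "steam_auth": game_port + 3,
    }

print(port_conventions(2302))
# {'game': 2302, 'steam_query': 2303, 'von': 2304, 'steam_auth': 2305}
```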
278
backend/adapters/arma3/rcon_client.py
Normal file
278
backend/adapters/arma3/rcon_client.py
Normal file
@@ -0,0 +1,278 @@
|
||||
"""
|
||||
BERConClient — BattlEye RCon UDP client for Arma3.
|
||||
|
||||
Implements the BattlEye RCon protocol version 2.
|
||||
Reference: https://www.battleye.com/downloads/BERConProtocol.txt
|
||||
|
||||
Thread safety: This client is NOT thread-safe by itself.
|
||||
The RemoteAdminPollerThread serializes all calls through a single thread.
|
||||
For the send_command() called from HTTP request handlers, use a threading.Lock.
|
||||
"""
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import socket
|
||||
import struct
|
||||
import threading
|
||||
import time
|
||||
import zlib
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_SOCKET_TIMEOUT = 5.0
|
||||
_LOGIN_TIMEOUT = 5.0
|
||||
_RESPONSE_TIMEOUT = 5.0
|
||||
_MAX_RESPONSE_PARTS = 10
|
||||
_KEEPALIVE_INTERVAL = 30.0
|
||||
|
||||
|
||||
class BERConClient:
|
||||
"""
|
||||
BattlEye RCon UDP client.
|
||||
|
||||
Usage:
|
||||
client = BERConClient(host="127.0.0.1", port=2302, password="secret")
|
||||
client.connect() # raises ConnectionError on failure
|
||||
players = client.get_players()
|
||||
client.send_command("say -1 Hello")
|
||||
client.disconnect()
|
||||
"""
|
||||
|
||||
def __init__(self, host: str, port: int, password: str) -> None:
|
||||
self._host = host
|
||||
self._port = port
|
||||
self._password = password
|
||||
self._sock: socket.socket | None = None
|
||||
self._seq = 0
|
||||
self._connected = False
|
||||
self._lock = threading.Lock()
|
||||
self._last_activity = 0.0
|
||||
|
||||
# ── Public API ──
|
||||
|
||||
def connect(self) -> None:
|
||||
"""Open UDP socket and perform BattlEye login handshake."""
|
||||
with self._lock:
|
||||
if self._connected:
|
||||
return
|
||||
self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
|
||||
self._sock.settimeout(_SOCKET_TIMEOUT)
|
||||
self._sock.connect((self._host, self._port))
|
||||
|
||||
login_payload = self._password.encode("ascii", errors="replace")
|
||||
packet = self._build_packet(0x00, login_payload)
|
||||
self._sock.send(packet)
|
||||
self._last_activity = time.monotonic()
|
||||
|
||||
deadline = time.monotonic() + _LOGIN_TIMEOUT
|
||||
while time.monotonic() < deadline:
|
||||
try:
|
||||
data = self._sock.recv(4096)
|
||||
except socket.timeout:
|
||||
break
|
||||
if not self._verify_checksum(data):
|
||||
continue
|
||||
if len(data) >= 9 and data[7] == 0x00:
|
||||
if data[8] == 0x01:
|
||||
self._connected = True
|
||||
self._seq = 0
|
||||
logger.info("BERConClient: logged in to %s:%d", self._host, self._port)
|
||||
return
|
||||
else:
|
||||
self._sock.close()
|
||||
self._sock = None
|
||||
raise ConnectionError(
|
||||
f"BattlEye login rejected at {self._host}:{self._port}"
|
||||
)
|
||||
|
||||
self._sock.close()
|
||||
self._sock = None
|
||||
raise ConnectionError(
|
||||
f"BattlEye login timed out at {self._host}:{self._port}"
|
||||
)
|
||||
|
||||
def disconnect(self) -> None:
|
||||
with self._lock:
|
||||
self._connected = False
|
||||
if self._sock is not None:
|
||||
try:
|
||||
self._sock.close()
|
||||
except OSError as exc:
|
||||
logger.debug("BERConClient: error closing socket during disconnect: %s", exc)
|
||||
self._sock = None
|
||||
|
||||
@property
|
||||
def is_connected(self) -> bool:
|
||||
return self._connected
|
||||
|
||||
def send_command(self, command: str) -> str:
|
||||
"""Send a BattlEye command and return the response string."""
|
||||
with self._lock:
|
||||
if not self._connected or self._sock is None:
|
||||
raise ConnectionError("BERConClient: not connected")
|
||||
return self._send_command_locked(command)
|
||||
|
||||
def get_players(self) -> list[dict]:
|
||||
"""Send 'players' command and parse the response."""
|
||||
response = self.send_command("players")
|
||||
return self._parse_players(response)
|
||||
|
||||
def keepalive(self) -> None:
|
||||
"""Send a keepalive packet if the connection has been idle."""
|
||||
if not self._connected:
|
||||
return
|
||||
elapsed = time.monotonic() - self._last_activity
|
||||
if elapsed >= _KEEPALIVE_INTERVAL:
|
||||
try:
|
||||
self.send_command("")
|
||||
except Exception as exc:
|
||||
logger.debug("BERConClient: keepalive failed: %s", exc)
|
||||
|
||||
# ── Packet building ──
|
||||
|
||||
def _build_packet(self, pkt_type: int, payload: bytes) -> bytes:
|
||||
"""Build a BattlEye packet: 'B' 'E' <crc32 LE> 0xFF <type> <payload>"""
|
||||
body = bytes([0xFF, pkt_type]) + payload
|
||||
crc = zlib.crc32(body) & 0xFFFFFFFF
|
||||
crc_bytes = struct.pack("<I", crc)
|
||||
return b"BE" + crc_bytes + body
|
||||
|
||||
def _build_command_packet(self, seq: int, command: str) -> bytes:
|
||||
payload = bytes([seq]) + command.encode("ascii", errors="replace")
|
||||
return self._build_packet(0x01, payload)
|
||||
|
||||
def _build_ack_packet(self, seq: int) -> bytes:
|
||||
return self._build_packet(0x02, bytes([seq]))
|
||||
|
    def _verify_checksum(self, data: bytes) -> bool:
        """Verify the CRC32 checksum in the received packet."""
        if len(data) < 8:
            return False
        if data[0:2] != b"BE":
            return False
        stored_crc = struct.unpack("<I", data[2:6])[0]
        body = data[6:]
        computed_crc = zlib.crc32(body) & 0xFFFFFFFF
        return stored_crc == computed_crc

    # ── Command send (must be called with self._lock held) ──

    def _send_command_locked(self, command: str) -> str:
        seq = self._seq
        self._seq = (self._seq + 1) % 256

        packet = self._build_command_packet(seq, command)
        self._sock.send(packet)
        self._last_activity = time.monotonic()

        parts: dict[int, str] = {}
        total_parts: int | None = None
        deadline = time.monotonic() + _RESPONSE_TIMEOUT

        while time.monotonic() < deadline:
            try:
                data = self._sock.recv(65535)
            except socket.timeout:
                break

            if not self._verify_checksum(data):
                continue

            if len(data) < 9:
                continue

            pkt_type = data[7]

            # Server message — acknowledge and ignore
            if pkt_type == 0x02:
                srv_seq = data[8]
                ack = self._build_ack_packet(srv_seq)
                try:
                    self._sock.send(ack)
                except OSError as exc:
                    logger.debug("BERConClient: failed to send ack for server message %d: %s", srv_seq, exc)
                continue

            # Command response
            if pkt_type == 0x01:
                resp_seq = data[8]
                if resp_seq != seq:
                    continue

                payload = data[9:]

                # Check if multi-part
                if len(payload) >= 3 and payload[0] == 0x00:
                    total_parts = payload[1]
                    part_index = payload[2]
                    part_text = payload[3:].decode("utf-8", errors="replace")
                    parts[part_index] = part_text
                    if len(parts) == total_parts:
                        break
                else:
                    # Single-part response
                    return payload.decode("utf-8", errors="replace")

        if total_parts is not None and parts:
            return "".join(parts[i] for i in sorted(parts.keys()))

        return ""

    # ── Player parsing ──

    def _parse_players(self, response: str) -> list[dict]:
        """Parse the 'players' command response."""
        players = []
        for line in response.split("\n"):
            line = line.strip()
            if not line:
                continue
            if line.startswith("Players on") or line.startswith("-") or line.startswith("("):
                continue

            parts = line.split(None, 4)
            if len(parts) < 4:
                continue

            try:
                number = int(parts[0])
            except ValueError:
                continue

            ip_port = parts[1]
            ping_str = parts[2]
            guid_part = parts[3]
            name = parts[4].strip() if len(parts) > 4 else ""

            ip = ip_port
            port = 0
            if ":" in ip_port:
                ip, port_str = ip_port.rsplit(":", 1)
                try:
                    port = int(port_str)
                except ValueError:
                    port = 0

            try:
                ping = int(ping_str)
            except ValueError:
                ping = 0

            uid = guid_part.split("(")[0]

            is_admin = "(Admin)" in name
            name = name.replace("(Admin)", "").strip()

            players.append({
                "number": number,
                "uid": uid,
                "name": name,
                "ip": ip,
                "port": port,
                "ping": ping,
                "is_admin": is_admin,
                "slot_id": number,
            })

        return players
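The framing checked above (a "BE" magic prefix followed by a 4-byte little-endian CRC32 of everything from byte 6 onward) can be exercised standalone. `build_packet` below is a hypothetical helper for illustration, not part of the client:

```python
import struct
import zlib

def build_packet(body: bytes) -> bytes:
    # 'BE' magic + little-endian CRC32 of the body, then the body itself
    crc = zlib.crc32(body) & 0xFFFFFFFF
    return b"BE" + struct.pack("<I", crc) + body

def verify_checksum(data: bytes) -> bool:
    # Mirrors the checks in _verify_checksum above
    if len(data) < 8 or data[0:2] != b"BE":
        return False
    stored_crc = struct.unpack("<I", data[2:6])[0]
    return stored_crc == (zlib.crc32(data[6:]) & 0xFFFFFFFF)

pkt = build_packet(b"\xff\x01\x00players")
assert verify_checksum(pkt)
assert not verify_checksum(pkt[:-1] + b"\x00")  # corrupted body fails the check
```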
142 backend/adapters/arma3/rcon_service.py Normal file
@@ -0,0 +1,142 @@
"""Arma 3 RCon service — remote admin via the BattlEye RCon protocol."""
from __future__ import annotations

import logging
import socket

from pydantic import BaseModel

logger = logging.getLogger(__name__)


class Arma3PlayerData(BaseModel):
    """Player data schema for Arma 3."""
    name: str
    ping: int = 0
    guid: str = ""


class Arma3RConClient:
    """BattlEye RCon client for a single connection."""

    def __init__(self, host: str, port: int, password: str):
        self._host = host
        self._port = port
        self._password = password
        self._sock: socket.socket | None = None

    def _connect(self) -> None:
        if self._sock is not None:
            return
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._sock.settimeout(5.0)
        self._sock.connect((self._host, self._port))
        # Login sequence
        self._login()

    def _login(self) -> None:
        if self._sock is None:
            raise ConnectionError("Not connected")
        # BE RCon login: send password with checksum
        password_bytes = self._password.encode("utf-8")
        checksum = self._compute_checksum(password_bytes)
        packet = b"\xff" + bytes([0, len(password_bytes) & 0xff]) + checksum + password_bytes
        self._sock.send(packet)
        response = self._sock.recv(4096)
        if not response or response[0] != 0xff:
            raise ConnectionError("RCon login failed")

    @staticmethod
    def _compute_checksum(data: bytes) -> bytes:
        """Compute BE RCon checksum: sum(data) & 0xFF as a single byte."""
        return bytes([sum(data) & 0xFF])

    def send_command(self, command: str, timeout: float = 5.0) -> str | None:
        try:
            self._connect()
            if self._sock is None:
                return None
            self._sock.settimeout(timeout)
            cmd_bytes = command.encode("utf-8")
            checksum = self._compute_checksum(cmd_bytes)
            packet = b"\xff\x01" + bytes([len(cmd_bytes) & 0xff]) + checksum + cmd_bytes
            self._sock.send(packet)
            response = self._sock.recv(4096)
            if response and len(response) > 2:
                return response[2:].decode("utf-8", errors="replace")
            return None
        except Exception as e:
            logger.error("RCon command error: %s", e)
            return None

    def get_players(self) -> list[dict]:
        result = self.send_command("players")
        if result is None:
            return []
        # Parse player list from the RCon response.
        # Expected columns: number, ip:port, ping, guid, name
        players = []
        for line in result.split("\n"):
            line = line.strip()
            if not line or line.startswith("(") or line.startswith("total"):
                continue
            parts = line.split(maxsplit=4)
            if len(parts) >= 5:
                players.append({
                    "slot_id": parts[0],
                    "name": parts[4],
                    "guid": parts[3].split("(")[0],
                    "ping": int(parts[2]) if parts[2].isdigit() else 0,
                })
        return players

    def kick_player(self, identifier: str, reason: str = "") -> bool:
        cmd = f"kick {identifier}"
        if reason:
            cmd += f" {reason}"
        result = self.send_command(cmd)
        return result is not None

    def ban_player(self, identifier: str, duration_minutes: int, reason: str) -> bool:
        cmd = f"ban {identifier} {duration_minutes} {reason}"
        result = self.send_command(cmd)
        return result is not None

    def say_all(self, message: str) -> bool:
        result = self.send_command(f"say {message}")
        return result is not None

    def shutdown(self) -> bool:
        result = self.send_command("#shutdown")
        return result is not None

    def keepalive(self) -> None:
        try:
            self.send_command("")
        except Exception as exc:
            logger.debug("Arma3RConClient: keepalive failed: %s", exc)

    def disconnect(self) -> None:
        if self._sock:
            try:
                self._sock.close()
            except Exception as exc:
                logger.debug("Arma3RConClient: error closing socket: %s", exc)
            self._sock = None


class Arma3RConService:
    """Factory for Arma 3 RCon clients."""

    def create_client(self, host: str, port: int, password: str) -> Arma3RConClient:
        return Arma3RConClient(host, port, password)

    def get_startup_delay(self) -> float:
        return 30.0

    def get_poll_interval(self) -> float:
        return 10.0

    def get_player_data_schema(self) -> type[BaseModel] | None:
        return Arma3PlayerData
135 backend/adapters/arma3/remote_admin.py Normal file
@@ -0,0 +1,135 @@
"""
Arma3RemoteAdmin — implements the RemoteAdmin protocol using BERConClient.
"""
from __future__ import annotations

import logging

from adapters.arma3.rcon_client import BERConClient
from adapters.exceptions import RemoteAdminError

logger = logging.getLogger(__name__)


class Arma3RemoteAdmin:
    """
    RemoteAdmin protocol implementation for Arma 3 BattlEye RCon.

    Args:
        server_id: Database server ID.
        host: RCon host (usually 127.0.0.1).
        port: RCon port (usually game_port + 3).
        password: RCon password.
    """

    def __init__(
        self,
        server_id: int,
        host: str,
        port: int,
        password: str,
    ) -> None:
        self._server_id = server_id
        self._client = BERConClient(host=host, port=port, password=password)

    # ── RemoteAdmin protocol ──

    def connect(self) -> None:
        """Connect to RCon. Raises RemoteAdminError on failure."""
        try:
            self._client.connect()
        except ConnectionError as exc:
            raise RemoteAdminError(str(exc)) from exc

    def disconnect(self) -> None:
        self._client.disconnect()

    def is_connected(self) -> bool:
        return self._client.is_connected

    def get_players(self) -> list[dict]:
        """Fetch the current player list."""
        try:
            return self._client.get_players()
        except Exception as exc:
            raise RemoteAdminError(f"get_players failed: {exc}") from exc

    def send_command(self, command: str, timeout: float = 5.0) -> str | None:
        """Send an arbitrary RCon command."""
        try:
            return self._client.send_command(command)
        except Exception as exc:
            raise RemoteAdminError(f"send_command failed: {exc}") from exc

    def kick_player(self, player_number: int, reason: str = "") -> bool:
        """Kick a player by their in-game slot number."""
        command = f"kick {player_number}"
        if reason:
            command += f" {reason}"
        try:
            self._client.send_command(command)
            return True
        except Exception as exc:
            logger.warning("[%s] kick_player failed for player %d: %s", self._server_id, player_number, exc)
            return False

    def ban_player(self, player_uid: str, duration_minutes: int = 0, reason: str = "") -> bool:
        """Add a GUID ban. duration_minutes=0 means permanent."""
        command = f"addBan {player_uid} {duration_minutes} {reason}"
        try:
            self._client.send_command(command)
            return True
        except Exception as exc:
            logger.warning("[%s] ban_player failed: %s", self._server_id, exc)
            return False

    def say_all(self, message: str) -> bool:
        """Broadcast a message to all players."""
        try:
            self._client.send_command(f"say -1 {message}")
            return True
        except Exception as exc:
            logger.warning("[%s] say_all failed: %s", self._server_id, exc)
            return False

    def shutdown(self) -> bool:
        """Shut down the game server via RCon."""
        try:
            self._client.send_command("#shutdown")
            return True
        except Exception as exc:
            logger.warning("[%s] shutdown failed: %s", self._server_id, exc)
            return False

    def keepalive(self) -> None:
        """Send a keepalive if idle."""
        self._client.keepalive()


class Arma3RemoteAdminFactory:
    """
    RemoteAdmin factory for Arma 3.
    Implements the RemoteAdmin protocol (create_client, get_startup_delay, etc.).
    """

    def create_client(self, host: str, port: int, password: str) -> Arma3RemoteAdmin:
        """Create a new Arma3RemoteAdmin client instance."""
        return Arma3RemoteAdmin(
            server_id=0,  # Will be set by the caller
            host=host,
            port=port,
            password=password,
        )

    def get_startup_delay(self) -> float:
        """Seconds to wait after server start before connecting."""
        return 30.0

    def get_poll_interval(self) -> float:
        """Seconds between player list polls."""
        return 10.0

    def get_player_data_schema(self):
        """Pydantic model for players.game_data JSON."""
        return None
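The wrap-and-re-raise pattern used throughout `Arma3RemoteAdmin` (low-level `ConnectionError`s surfaced as typed `RemoteAdminError`s, with the original kept as `__cause__`) looks like this in isolation; the stand-in class and `failing_login` are illustrative only:

```python
class RemoteAdminError(Exception):
    """Stand-in for adapters.exceptions.RemoteAdminError."""

def connect(raw_connect) -> None:
    # Translate transport-level failures into the adapter's typed error
    try:
        raw_connect()
    except ConnectionError as exc:
        raise RemoteAdminError(str(exc)) from exc

def failing_login() -> None:
    raise ConnectionError("login refused")

try:
    connect(failing_login)
except RemoteAdminError as err:
    assert "login refused" in str(err)
    assert isinstance(err.__cause__, ConnectionError)  # original error preserved
```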
53 backend/adapters/exceptions.py Normal file
@@ -0,0 +1,53 @@
"""Typed adapter exceptions. Core catches these specifically."""


class AdapterError(Exception):
    """Base for all adapter errors."""


class ConfigWriteError(AdapterError):
    """Atomic file write failed. Temp files are already cleaned up."""
    def __init__(self, path: str, detail: str):
        self.path = path
        self.detail = detail
        super().__init__(f"Config write failed at {path}: {detail}")


class ConfigValidationError(AdapterError):
    """Adapter Pydantic model rejected the config values."""
    def __init__(self, section: str, errors: list[dict]):
        self.section = section
        self.errors = errors
        super().__init__(f"Config validation failed for section '{section}': {errors}")


class ConfigMigrationError(AdapterError):
    """migrate_config() could not transform old schema. Core keeps original."""
    def __init__(self, from_version: str, detail: str):
        self.from_version = from_version
        self.detail = detail
        super().__init__(f"Config migration from {from_version} failed: {detail}")


class LaunchArgsError(AdapterError):
    """build_launch_args() failed (missing mod path, bad config value)."""
    def __init__(self, detail: str):
        self.detail = detail
        super().__init__(f"Launch args error: {detail}")


class RemoteAdminError(AdapterError):
    """Remote admin connection or command failed."""
    def __init__(self, detail: str, recoverable: bool = True):
        self.detail = detail
        self.recoverable = recoverable
        super().__init__(f"Remote admin error: {detail}")


class ExeNotAllowedError(AdapterError):
    """Executable not in adapter allowlist."""
    def __init__(self, exe: str, allowed: list[str]):
        self.exe = exe
        self.allowed = allowed
        super().__init__(f"Executable '{exe}' not allowed. Allowed: {allowed}")
238 backend/adapters/protocols.py Normal file
@@ -0,0 +1,238 @@
"""
All adapter capability Protocol definitions.
Core code only imports from here — never from adapter internals.
"""
from __future__ import annotations

from pathlib import Path
from typing import Any, Callable, Protocol, runtime_checkable

from pydantic import BaseModel


@runtime_checkable
class ConfigGenerator(Protocol):
    """
    Merged protocol: config schema definition + file generation + launch args.
    Always implement all methods. Return empty dict/list where not applicable.
    """
    game_type: str

    def get_sections(self) -> dict[str, type[BaseModel]]:
        """Return {section_name: PydanticModelClass} for all config sections."""
        ...

    def get_defaults(self, section: str) -> dict[str, Any]:
        """Return the default values dict for the given section."""
        ...

    def get_sensitive_fields(self, section: str) -> list[str]:
        """
        Return JSON keys in this section that need Fernet encryption.
        Core's ConfigRepository encrypts/decrypts these transparently.
        Example: ["password", "password_admin"] for section "server".
        """
        ...

    def get_config_version(self) -> str:
        """
        Current adapter schema version string (e.g. "1.0.0").
        Stored in game_configs.schema_version.
        When this changes, core calls migrate_config() automatically.
        """
        ...

    def migrate_config(
        self, old_version: str, config_json: dict[str, dict]
    ) -> dict[str, dict]:
        """
        Transform config JSON from old_version to the current version.
        Called by ConfigRepository when the stored schema_version differs.
        Returns the migrated config dict.
        Raises ConfigMigrationError on failure — core keeps the original.
        """
        ...

    def write_configs(
        self,
        server_id: int,
        server_dir: Path,
        config_sections: dict[str, dict],
    ) -> list[Path]:
        """
        Write all config files to disk using the atomic write pattern:
        1. Write to .tmp files
        2. os.replace() each .tmp to its final path
        3. On any failure: clean up .tmp files, raise ConfigWriteError
        Returns the list of written file paths.
        """
        ...

    def build_launch_args(
        self,
        config_sections: dict[str, dict],
        mod_args: list[str] | None = None,
    ) -> list[str]:
        """
        Return the full CLI argument list for the game executable.
        Raises LaunchArgsError if required values are missing/invalid.
        """
        ...

    def preview_config(
        self,
        server_id: int,
        server_dir: Path,
        config_sections: dict[str, dict],
    ) -> dict[str, str]:
        """
        Render config files as strings WITHOUT writing to disk.
        Returns {label: content}.
        Label = filename for file-based games, var name for env-var games.
        """
        ...


@runtime_checkable
class RemoteAdminClient(Protocol):
    """A connected client instance. Not required to be thread-safe — core wraps calls."""

    def send_command(self, command: str, timeout: float = 5.0) -> str | None: ...
    def get_players(self) -> list[dict]: ...
    def kick_player(self, identifier: str, reason: str = "") -> bool: ...
    def ban_player(self, identifier: str, duration_minutes: int, reason: str) -> bool: ...
    def say_all(self, message: str) -> bool: ...
    def shutdown(self) -> bool: ...
    def keepalive(self) -> None: ...
    def disconnect(self) -> None: ...


@runtime_checkable
class RemoteAdmin(Protocol):
    """Factory for remote admin clients. One per adapter, creates clients on demand."""

    def create_client(self, host: str, port: int, password: str) -> RemoteAdminClient: ...

    def get_startup_delay(self) -> float:
        """Seconds to wait after server start before connecting. Default: 30."""
        ...

    def get_poll_interval(self) -> float:
        """Seconds between player list polls. Default: 10."""
        ...

    def get_player_data_schema(self) -> type[BaseModel] | None:
        """Pydantic model for players.game_data JSON. None = no validation."""
        ...


@runtime_checkable
class LogParser(Protocol):
    """Parses game-specific log lines into a standard format."""

    def parse_line(self, line: str) -> dict | None:
        """
        Parse one log line.
        Returns: {"timestamp": ISO str, "level": "info"|"warning"|"error", "message": str}
        Returns None to skip the line (e.g. blank lines, binary garbage).
        """
        ...

    def get_log_file_resolver(self, server_id: int) -> Callable[[Path], Path | None]:
        """
        Return a callable(server_dir: Path) -> Path | None.
        Called by LogTailThread to find the current log file.
        Returns None if the log file has not been created yet.
        """
        ...


@runtime_checkable
class MissionManager(Protocol):
    """Handles the mission/scenario file format and rotation."""
    file_extension: str  # e.g. ".pbo"

    def parse_mission_filename(self, filename: str) -> dict: ...
    def get_rotation_config(self, rotation_entries: list[dict]) -> str: ...
    def get_missions_dir(self, server_dir: Path) -> Path: ...

    def get_mission_data_schema(self) -> type[BaseModel] | None:
        """Pydantic model for missions.game_data. None = no validation."""
        ...


@runtime_checkable
class ModManager(Protocol):
    """Handles mod folder conventions and CLI argument building."""

    def get_mod_folder_pattern(self) -> str: ...
    def build_mod_args(self, server_mods: list[dict]) -> list[str]: ...
    def validate_mod_folder(self, path: Path) -> bool: ...

    def get_mod_data_schema(self) -> type[BaseModel] | None:
        """Pydantic model for mods.game_data. None = no validation."""
        ...


@runtime_checkable
class ProcessConfig(Protocol):
    """Game-specific process and directory conventions."""

    def get_allowed_executables(self) -> list[str]: ...
    def get_port_conventions(self, game_port: int) -> dict[str, int]: ...
    def get_default_game_port(self) -> int: ...
    def get_default_rcon_port(self, game_port: int) -> int | None: ...
    def get_server_dir_layout(self) -> list[str]: ...


@runtime_checkable
class BanManager(Protocol):
    """Bidirectional sync between DB bans and the game's ban file."""

    def get_ban_file_path(self, server_dir: Path) -> Path: ...
    def sync_bans_to_file(self, bans: list[dict], ban_file: Path) -> None: ...
    def read_bans_from_file(self, ban_file: Path) -> list[dict]: ...

    def get_ban_data_schema(self) -> type[BaseModel] | None:
        """Pydantic model for bans.game_data. None = no validation."""
        ...


@runtime_checkable
class GameAdapter(Protocol):
    """
    Composite adapter. Every game must implement this.
    Optional capabilities return None — core degrades gracefully.
    Use has_capability(name) instead of None checks throughout.
    """
    game_type: str      # e.g. "arma3"
    display_name: str   # e.g. "Arma 3"
    version: str        # e.g. "1.0.0"

    def get_config_generator(self) -> ConfigGenerator: ...
    def get_process_config(self) -> ProcessConfig: ...
    def get_log_parser(self) -> LogParser: ...
    def get_remote_admin(self) -> RemoteAdmin | None: ...
    def get_mission_manager(self) -> MissionManager | None: ...
    def get_mod_manager(self) -> ModManager | None: ...
    def get_ban_manager(self) -> BanManager | None: ...

    def has_capability(self, name: str) -> bool:
        """
        Explicit capability probe. Instead of:
            if adapter.get_remote_admin() is not None:
        use:
            if adapter.has_capability("remote_admin"):

        Valid names: "config_generator", "process_config", "log_parser",
        "remote_admin", "mission_manager", "mod_manager", "ban_manager"
        """
        ...

    def get_additional_routers(self) -> list:
        """List of FastAPI APIRouter instances for game-specific routes."""
        ...

    def get_custom_thread_factories(self) -> list[Callable]:
        """List of callables(server_id, db) -> BaseServerThread for extra threads."""
        ...
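The atomic write contract described in `write_configs` (write to a `.tmp` sibling, `os.replace()` into place, clean up on failure) can be sketched generically; this helper is illustrative, not the core implementation:

```python
import os
import tempfile
from pathlib import Path

def atomic_write(path: Path, content: str) -> None:
    tmp = path.with_suffix(path.suffix + ".tmp")
    try:
        tmp.write_text(content, encoding="utf-8")
        os.replace(tmp, path)  # atomic rename on the same filesystem
    except OSError:
        tmp.unlink(missing_ok=True)  # never leave stale .tmp files behind
        raise

with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "server.cfg"
    atomic_write(cfg, 'hostname = "Test Server";\n')
    assert cfg.read_text(encoding="utf-8").startswith("hostname")
    assert not (Path(d) / "server.cfg.tmp").exists()  # temp file was consumed
```

A reader on another thread or process either sees the old file or the new one, never a half-written config — which matters when the game process may reload its config at any time.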
66 backend/adapters/registry.py Normal file
@@ -0,0 +1,66 @@
"""
GameAdapterRegistry — singleton that holds all registered game adapters.
Adapters register themselves at import time.
"""
from __future__ import annotations

import logging

logger = logging.getLogger(__name__)


class GameAdapterRegistry:
    _adapters: dict[str, object] = {}  # game_type -> GameAdapter

    @classmethod
    def register(cls, adapter) -> None:
        """Register a game adapter. Called at import time by each adapter package."""
        if adapter.game_type in cls._adapters:
            logger.warning(
                "Adapter for '%s' already registered. Overwriting.", adapter.game_type
            )
        cls._adapters[adapter.game_type] = adapter
        logger.info("Registered game adapter: %s (%s)", adapter.game_type, adapter.display_name)

    @classmethod
    def get(cls, game_type: str):
        """
        Get an adapter by game_type. Raises KeyError if not registered.
        Core code calls this whenever game-specific behavior is needed.
        """
        adapter = cls._adapters.get(game_type)
        if adapter is None:
            raise KeyError(
                f"No adapter registered for game type '{game_type}'. "
                f"Available: {list(cls._adapters.keys())}"
            )
        return adapter

    @classmethod
    def all(cls) -> list:
        """Return all registered adapters."""
        return list(cls._adapters.values())

    @classmethod
    def list_game_types(cls) -> list[dict]:
        """Return the metadata list for the API /games endpoint."""
        result = []
        for adapter in cls._adapters.values():
            caps = [
                cap
                for cap in (
                    "config_generator", "process_config", "log_parser",
                    "remote_admin", "mission_manager", "mod_manager", "ban_manager",
                )
                if adapter.has_capability(cap)
            ]
            result.append({
                "game_type": adapter.game_type,
                "display_name": adapter.display_name,
                "version": adapter.version,
                "capabilities": caps,
            })
        return result

    @classmethod
    def is_registered(cls, game_type: str) -> bool:
        return game_type in cls._adapters
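One way a concrete adapter might satisfy the `has_capability` probe the registry relies on — `StubAdapter` and its getter-based probe are purely hypothetical, shown only to make the contract concrete:

```python
class StubAdapter:
    game_type = "stub"
    display_name = "Stub Game"
    version = "0.0.1"

    def get_log_parser(self):
        return object()  # pretend this capability exists

    def get_remote_admin(self):
        return None  # capability deliberately absent

    def has_capability(self, name: str) -> bool:
        # A getter that is missing or returns None means "not supported"
        getter = getattr(self, f"get_{name}", None)
        return getter is not None and getter() is not None

adapter = StubAdapter()
assert adapter.has_capability("log_parser")
assert not adapter.has_capability("remote_admin")     # getter returns None
assert not adapter.has_capability("mission_manager")  # no getter at all
```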
35 backend/config.py Normal file
@@ -0,0 +1,35 @@
"""Load and validate all environment variables at startup."""
from __future__ import annotations

from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_prefix="LANGUARD_",
        env_file=".env",
        env_file_encoding="utf-8",
        case_sensitive=False,
        # Complex types (e.g. list[str]) are parsed from env vars as JSON
        # by pydantic-settings out of the box; no extra option is needed.
    )

    secret_key: str
    encryption_key: str  # Fernet base64 key
    db_path: str = "./languard.db"
    servers_dir: str = "./servers"
    host: str = "0.0.0.0"
    port: int = 8000
    cors_origins: list[str] = ["http://localhost:5173"]
    log_retention_days: int = 7
    metrics_retention_days: int = 30
    player_history_retention_days: int = 90
    jwt_expire_hours: int = 24
    login_rate_limit: str = "5/minute"
    log_level: str = "INFO"

    # Game-specific defaults (used by adapters, not core)
    arma3_default_exe: str = "C:/Arma3Server/arma3server_x64.exe"


settings = Settings()
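With the `LANGUARD_` prefix and JSON parsing for complex types, a matching `.env` might look like the following (all values illustrative — generate real keys for `secret_key` and `encryption_key`):

```ini
LANGUARD_SECRET_KEY=replace-with-a-long-random-string
LANGUARD_ENCRYPTION_KEY=replace-with-a-base64-fernet-key
LANGUARD_DB_PATH=./languard.db
LANGUARD_PORT=8000
LANGUARD_CORS_ORIGINS=["http://localhost:5173","http://192.168.1.50:5173"]
LANGUARD_LOG_LEVEL=DEBUG
```

Note that list-valued settings such as `cors_origins` must be given as a JSON array, not a comma-separated string.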
0 backend/core/__init__.py Normal file
0 backend/core/auth/__init__.py Normal file
77 backend/core/auth/router.py Normal file
@@ -0,0 +1,77 @@
from typing import Annotated

from fastapi import APIRouter, Depends, Request
from sqlalchemy.engine import Connection

from core.auth.schemas import (
    ChangePasswordRequest, CreateUserRequest, LoginRequest,
)
from core.auth.service import AuthService
from database import get_db
from dependencies import get_current_user, require_admin

router = APIRouter(prefix="/auth", tags=["auth"])

# Rate limiter will be attached after main.py is imported
_limiter = None


def _ok(data):
    return {"success": True, "data": data, "error": None}


@router.post("/login")
def login(
    request: Request,
    body: LoginRequest,
    db: Annotated[Connection, Depends(get_db)],
):
    return _ok(AuthService(db).login(body.username, body.password))


@router.post("/logout")
def logout(user: Annotated[dict, Depends(get_current_user)]):
    # Client-side token deletion. No server-side blacklist.
    return _ok({"message": "Logged out"})


@router.get("/me")
def me(user: Annotated[dict, Depends(get_current_user)]):
    return _ok({"id": user["id"], "username": user["username"], "role": user["role"]})


@router.put("/password")
def change_password(
    body: ChangePasswordRequest,
    user: Annotated[dict, Depends(get_current_user)],
    db: Annotated[Connection, Depends(get_db)],
):
    AuthService(db).change_password(user["id"], body.current_password, body.new_password)
    return _ok({"message": "Password changed"})


@router.get("/users")
def list_users(
    _admin: Annotated[dict, Depends(require_admin)],
    db: Annotated[Connection, Depends(get_db)],
):
    return _ok(AuthService(db).list_users())


@router.post("/users", status_code=201)
def create_user(
    body: CreateUserRequest,
    _admin: Annotated[dict, Depends(require_admin)],
    db: Annotated[Connection, Depends(get_db)],
):
    user = AuthService(db).create_user(body.username, body.password, body.role)
    return _ok(user)


@router.delete("/users/{user_id}", status_code=204)
def delete_user(
    user_id: int,
    admin: Annotated[dict, Depends(require_admin)],
    db: Annotated[Connection, Depends(get_db)],
):
    AuthService(db).delete_user(user_id, admin["id"])
31 backend/core/auth/schemas.py Normal file
@@ -0,0 +1,31 @@
from pydantic import BaseModel


class LoginRequest(BaseModel):
    username: str
    password: str


class TokenResponse(BaseModel):
    access_token: str
    token_type: str = "bearer"
    expires_in: int
    user: dict


class UserResponse(BaseModel):
    id: int
    username: str
    role: str
    created_at: str


class CreateUserRequest(BaseModel):
    username: str
    password: str
    role: str = "viewer"


class ChangePasswordRequest(BaseModel):
    current_password: str
    new_password: str
105 backend/core/auth/service.py Normal file
@@ -0,0 +1,105 @@
from __future__ import annotations

import secrets

from fastapi import HTTPException, status
from sqlalchemy import text
from sqlalchemy.engine import Connection

from core.auth.utils import create_access_token, hash_password, verify_password
from config import settings


class AuthService:

    def __init__(self, db: Connection):
        self._db = db

    def login(self, username: str, password: str) -> dict:
        row = self._db.execute(
            text("SELECT * FROM users WHERE username = :u"), {"u": username}
        ).fetchone()

        if row is None or not verify_password(password, row.password_hash):
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail={"code": "UNAUTHORIZED", "message": "Invalid credentials"},
            )

        user = dict(row._mapping)
        self._db.execute(
            text("UPDATE users SET last_login = datetime('now') WHERE id = :id"),
            {"id": user["id"]},
        )

        token = create_access_token(user["id"], user["username"], user["role"])
        return {
            "access_token": token,
            "token_type": "bearer",
            "expires_in": settings.jwt_expire_hours * 3600,
            "user": {"id": user["id"], "username": user["username"], "role": user["role"]},
        }

    def create_user(self, username: str, password: str, role: str = "viewer") -> dict:
        existing = self._db.execute(
            text("SELECT id FROM users WHERE username = :u"), {"u": username}
        ).fetchone()
        if existing:
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={"code": "CONFLICT", "message": f"Username '{username}' already taken"},
            )
        self._db.execute(
            text(
                "INSERT INTO users (username, password_hash, role) VALUES (:u, :ph, :r)"
            ),
            {"u": username, "ph": hash_password(password), "r": role},
        )
        row = self._db.execute(
            text("SELECT id, username, role, created_at FROM users WHERE username = :u"),
            {"u": username},
        ).fetchone()
        return dict(row._mapping)

    def change_password(self, user_id: int, current_password: str, new_password: str) -> None:
        row = self._db.execute(
            text("SELECT password_hash FROM users WHERE id = :id"),
            {"id": user_id},
        ).fetchone()
        if row is None or not verify_password(current_password, row.password_hash):
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail={"code": "UNAUTHORIZED", "message": "Current password is incorrect"},
            )
        self._db.execute(
            text("UPDATE users SET password_hash = :ph WHERE id = :id"),
            {"ph": hash_password(new_password), "id": user_id},
        )

    def list_users(self) -> list[dict]:
        rows = self._db.execute(
            text("SELECT id, username, role, created_at, last_login FROM users ORDER BY id")
        ).fetchall()
        return [dict(r._mapping) for r in rows]

    def delete_user(self, user_id: int, requesting_user_id: int) -> None:
        if user_id == requesting_user_id:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail={"code": "VALIDATION_ERROR", "message": "Cannot delete yourself"},
            )
        self._db.execute(
            text("DELETE FROM users WHERE id = :id"),
            {"id": user_id},
        )

    def seed_admin_if_empty(self) -> str | None:
        """
        Create a default admin user if no users exist.
        Returns the generated password (printed to stdout on startup).
        """
        count = self._db.execute(text("SELECT COUNT(*) FROM users")).fetchone()[0]
        if count > 0:
            return None
        password = secrets.token_urlsafe(16)
        self.create_user("admin", password, "admin")
        return password
48  backend/core/auth/utils.py  Normal file
@@ -0,0 +1,48 @@
"""JWT creation/validation and password hashing."""
from __future__ import annotations

import logging
from datetime import datetime, timedelta, timezone

import bcrypt
from jose import JWTError, jwt

logger = logging.getLogger(__name__)


def hash_password(password: str) -> str:
    """Hash a password using bcrypt. Returns UTF-8 encoded hash string."""
    password_bytes = password.encode("utf-8")
    salt = bcrypt.gensalt()
    hashed = bcrypt.hashpw(password_bytes, salt)
    return hashed.decode("utf-8")


def verify_password(plain: str, hashed: str) -> bool:
    """Verify a plain password against a bcrypt hash."""
    try:
        return bcrypt.checkpw(plain.encode("utf-8"), hashed.encode("utf-8"))
    except Exception as exc:
        logger.warning("Password verification failed: %s", exc)
        return False


def create_access_token(user_id: int, username: str, role: str) -> str:
    from config import settings
    expire = datetime.now(timezone.utc) + timedelta(hours=settings.jwt_expire_hours)
    payload = {
        "sub": str(user_id),
        "username": username,
        "role": role,
        "exp": expire,
    }
    return jwt.encode(payload, settings.secret_key, algorithm="HS256")


def decode_access_token(token: str) -> dict:
    """
    Decode and validate JWT. Returns payload dict.
    Raises JWTError on invalid/expired token.
    """
    from config import settings
    return jwt.decode(token, settings.secret_key, algorithms=["HS256"])
0  backend/core/bans/__init__.py  Normal file

1  backend/core/dal/__init__.py  Normal file
@@ -0,0 +1 @@
"""Data Access Layer repositories."""
52  backend/core/dal/ban_repository.py  Normal file
@@ -0,0 +1,52 @@
import json
from datetime import datetime, timezone
from core.dal.base_repository import BaseRepository


class BanRepository(BaseRepository):

    def get_all(self, server_id: int, active_only: bool = True) -> list[dict]:
        if active_only:
            return self._fetchall(
                "SELECT * FROM bans WHERE server_id = :sid AND is_active = 1 ORDER BY banned_at DESC",
                {"sid": server_id},
            )
        return self._fetchall(
            "SELECT * FROM bans WHERE server_id = :sid ORDER BY banned_at DESC",
            {"sid": server_id},
        )

    def create(
        self,
        server_id: int,
        guid: str | None,
        name: str | None,
        reason: str | None,
        banned_by: str,
        expires_at: str | None = None,
        game_data: dict | None = None,
    ) -> int:
        return self._lastrowid(
            """
            INSERT INTO bans (server_id, guid, name, reason, banned_by, expires_at, game_data)
            VALUES (:sid, :guid, :name, :reason, :by, :exp, :gd)
            """,
            {
                "sid": server_id,
                "guid": guid,
                "name": name,
                "reason": reason,
                "by": banned_by,
                "exp": expires_at,
                "gd": json.dumps(game_data or {}),
            },
        )

    def deactivate(self, ban_id: int) -> None:
        self._execute(
            "UPDATE bans SET is_active = 0 WHERE id = :id",
            {"id": ban_id},
        )

    def get_by_id(self, ban_id: int) -> dict | None:
        return self._fetchone("SELECT * FROM bans WHERE id = :id", {"id": ban_id})
27  backend/core/dal/base_repository.py  Normal file
@@ -0,0 +1,27 @@
"""Base repository with common DB helpers."""
from __future__ import annotations

from sqlalchemy import text
from sqlalchemy.engine import Connection


class BaseRepository:
    def __init__(self, db: Connection):
        self._db = db

    def _execute(self, query: str, params: dict | None = None):
        return self._db.execute(text(query), params or {})

    def _fetchone(self, query: str, params: dict | None = None) -> dict | None:
        row = self._db.execute(text(query), params or {}).fetchone()
        if row is None:
            return None
        return dict(row._mapping)

    def _fetchall(self, query: str, params: dict | None = None) -> list[dict]:
        rows = self._db.execute(text(query), params or {}).fetchall()
        return [dict(r._mapping) for r in rows]

    def _lastrowid(self, query: str, params: dict | None = None) -> int:
        result = self._db.execute(text(query), params or {})
        return result.lastrowid
163  backend/core/dal/config_repository.py  Normal file
@@ -0,0 +1,163 @@
"""
Manages the game_configs table.
Handles Fernet encryption/decryption of sensitive fields transparently.
"""
from __future__ import annotations

import json
from datetime import datetime, timezone

from core.dal.base_repository import BaseRepository
from core.utils.crypto import decrypt, encrypt, is_encrypted


class ConfigRepository(BaseRepository):

    def _encrypt_sensitive(
        self, config: dict, sensitive_fields: list[str]
    ) -> dict:
        """Return new dict with sensitive fields encrypted."""
        result = dict(config)
        for field in sensitive_fields:
            if field in result and result[field] and not is_encrypted(str(result[field])):
                result[field] = encrypt(str(result[field]))
        return result

    def _decrypt_sensitive(
        self, config: dict, sensitive_fields: list[str]
    ) -> dict:
        """Return new dict with sensitive fields decrypted."""
        result = dict(config)
        for field in sensitive_fields:
            if field in result and is_encrypted(str(result[field])):
                result[field] = decrypt(str(result[field]))
        return result

    def get_section(
        self,
        server_id: int,
        section: str,
        sensitive_fields: list[str] | None = None,
    ) -> dict | None:
        """Get a config section. Decrypts sensitive fields automatically."""
        row = self._fetchone(
            "SELECT * FROM game_configs WHERE server_id = :sid AND section = :sec",
            {"sid": server_id, "sec": section},
        )
        if row is None:
            return None
        config = json.loads(row["config_json"])
        if sensitive_fields:
            config = self._decrypt_sensitive(config, sensitive_fields)
        config["_meta"] = {
            "config_version": row["config_version"],
            "schema_version": row["schema_version"],
        }
        return config

    def get_all_sections(
        self,
        server_id: int,
        sensitive_fields_by_section: dict[str, list[str]] | None = None,
    ) -> dict[str, dict]:
        """Get all config sections for a server."""
        rows = self._fetchall(
            "SELECT * FROM game_configs WHERE server_id = :sid ORDER BY section",
            {"sid": server_id},
        )
        result = {}
        for row in rows:
            config = json.loads(row["config_json"])
            sf = (sensitive_fields_by_section or {}).get(row["section"], [])
            if sf:
                config = self._decrypt_sensitive(config, sf)
            config["_meta"] = {
                "config_version": row["config_version"],
                "schema_version": row["schema_version"],
            }
            result[row["section"]] = config
        return result

    def upsert_section(
        self,
        server_id: int,
        game_type: str,
        section: str,
        config_data: dict,
        schema_version: str,
        sensitive_fields: list[str] | None = None,
        expected_config_version: int | None = None,
    ) -> int:
        """
        Upsert a config section.
        If expected_config_version is provided, checks optimistic lock.
        Returns the new config_version.
        Raises ValueError on version conflict (caller returns 409).
        """
        now = datetime.now(timezone.utc).isoformat()

        # Strip _meta before storing
        data_to_store = {k: v for k, v in config_data.items() if k != "_meta"}

        # Encrypt sensitive fields
        if sensitive_fields:
            data_to_store = self._encrypt_sensitive(data_to_store, sensitive_fields)

        # Check if row exists
        existing = self._fetchone(
            "SELECT id, config_version FROM game_configs WHERE server_id = :sid AND section = :sec",
            {"sid": server_id, "sec": section},
        )

        if existing is None:
            # Insert
            self._execute(
                """
                INSERT INTO game_configs
                    (server_id, game_type, section, config_json, config_version, schema_version, updated_at)
                VALUES (:sid, :gt, :sec, :json, 1, :sv, :now)
                """,
                {
                    "sid": server_id, "gt": game_type, "sec": section,
                    "json": json.dumps(data_to_store), "sv": schema_version, "now": now,
                },
            )
            return 1
        else:
            current_version = existing["config_version"]
            if expected_config_version is not None and expected_config_version != current_version:
                raise ValueError(
                    f"CONFIG_VERSION_CONFLICT:{current_version}"
                )
            new_version = current_version + 1
            self._execute(
                """
                UPDATE game_configs
                SET config_json = :json, config_version = :cv,
                    schema_version = :sv, updated_at = :now
                WHERE server_id = :sid AND section = :sec
                """,
                {
                    "json": json.dumps(data_to_store),
                    "cv": new_version,
                    "sv": schema_version,
                    "now": now,
                    "sid": server_id,
                    "sec": section,
                },
            )
            return new_version

    def delete_sections(self, server_id: int) -> None:
        self._execute(
            "DELETE FROM game_configs WHERE server_id = :sid",
            {"sid": server_id},
        )

    def get_raw_sections(self, server_id: int) -> dict[str, dict]:
        """Get all sections without decryption, for config file generation."""
        rows = self._fetchall(
            "SELECT section, config_json FROM game_configs WHERE server_id = :sid",
            {"sid": server_id},
        )
        return {row["section"]: json.loads(row["config_json"]) for row in rows}
62  backend/core/dal/event_repository.py  Normal file
@@ -0,0 +1,62 @@
import json
from core.dal.base_repository import BaseRepository


class EventRepository(BaseRepository):

    def insert(
        self,
        server_id: int,
        event_type: str,
        actor: str = "system",
        detail: dict | None = None,
    ) -> None:
        self._execute(
            """
            INSERT INTO server_events (server_id, event_type, actor, detail)
            VALUES (:sid, :et, :actor, :detail)
            """,
            {
                "sid": server_id,
                "et": event_type,
                "actor": actor,
                "detail": json.dumps(detail) if detail else None,
            },
        )

    def get_events(
        self,
        server_id: int,
        limit: int = 50,
        offset: int = 0,
        event_type: str | None = None,
    ) -> list[dict]:
        if event_type:
            return self._fetchall(
                """
                SELECT * FROM server_events
                WHERE server_id = :sid AND event_type = :et
                ORDER BY created_at DESC LIMIT :limit OFFSET :offset
                """,
                {"sid": server_id, "et": event_type, "limit": limit, "offset": offset},
            )
        return self._fetchall(
            """
            SELECT * FROM server_events WHERE server_id = :sid
            ORDER BY created_at DESC LIMIT :limit OFFSET :offset
            """,
            {"sid": server_id, "limit": limit, "offset": offset},
        )

    def get_recent_all_servers(self, limit: int = 20) -> list[dict]:
        return self._fetchall(
            "SELECT * FROM server_events ORDER BY created_at DESC LIMIT :limit",
            {"limit": limit},
        )

    def cleanup_old(self, retention_days: int) -> None:
        """Delete events older than retention_days."""
        self._execute(
            "DELETE FROM server_events WHERE created_at < datetime('now', :delta)",
            {"delta": f"-{retention_days} days"},
        )
61  backend/core/dal/log_repository.py  Normal file
@@ -0,0 +1,61 @@
from core.dal.base_repository import BaseRepository


class LogRepository(BaseRepository):

    def insert(self, server_id: int, entry: dict) -> None:
        """entry = {timestamp, level, message}"""
        self._execute(
            """
            INSERT INTO logs (server_id, timestamp, level, message)
            VALUES (:sid, :ts, :level, :msg)
            """,
            {
                "sid": server_id,
                "ts": entry.get("timestamp", ""),
                "level": entry.get("level", "info"),
                "msg": entry.get("message", ""),
            },
        )

    def query(
        self,
        server_id: int,
        limit: int = 200,
        offset: int = 0,
        level: str | None = None,
        since: str | None = None,
        search: str | None = None,
    ) -> tuple[int, list[dict]]:
        conditions = ["server_id = :sid"]
        params: dict = {"sid": server_id, "limit": limit, "offset": offset}
        if level:
            conditions.append("level = :level")
            params["level"] = level
        if since:
            conditions.append("timestamp >= :since")
            params["since"] = since
        if search:
            conditions.append("message LIKE :search")
            params["search"] = f"%{search}%"

        where = " AND ".join(conditions)
        total_row = self._fetchone(f"SELECT COUNT(*) as cnt FROM logs WHERE {where}", params)
        total = total_row["cnt"] if total_row else 0
        rows = self._fetchall(
            f"SELECT * FROM logs WHERE {where} ORDER BY timestamp DESC LIMIT :limit OFFSET :offset",
            params,
        )
        return total, rows

    def clear(self, server_id: int) -> int:
        result = self._execute(
            "DELETE FROM logs WHERE server_id = :sid", {"sid": server_id}
        )
        return result.rowcount

    def cleanup_old(self, retention_days: int) -> None:
        self._execute(
            "DELETE FROM logs WHERE created_at < datetime('now', :delta)",
            {"delta": f"-{retention_days} days"},
        )
53  backend/core/dal/metrics_repository.py  Normal file
@@ -0,0 +1,53 @@
from core.dal.base_repository import BaseRepository


class MetricsRepository(BaseRepository):

    def insert(
        self, server_id: int, cpu_percent: float, ram_mb: float = 0.0, player_count: int = 0
    ) -> None:
        self._execute(
            """
            INSERT INTO metrics (server_id, cpu_percent, ram_mb, player_count)
            VALUES (:sid, :cpu, :ram, :pc)
            """,
            {"sid": server_id, "cpu": cpu_percent, "ram": ram_mb, "pc": player_count},
        )

    def query(
        self,
        server_id: int,
        from_ts: str | None = None,
        to_ts: str | None = None,
    ) -> list[dict]:
        conditions = ["server_id = :sid"]
        params: dict = {"sid": server_id}
        if from_ts:
            conditions.append("timestamp >= :from_ts")
            params["from_ts"] = from_ts
        if to_ts:
            conditions.append("timestamp <= :to_ts")
            params["to_ts"] = to_ts
        where = " AND ".join(conditions)
        return self._fetchall(
            f"SELECT * FROM metrics WHERE {where} ORDER BY timestamp ASC",
            params,
        )

    def get_latest(self, server_id: int) -> dict | None:
        return self._fetchone(
            "SELECT * FROM metrics WHERE server_id = :sid ORDER BY timestamp DESC LIMIT 1",
            {"sid": server_id},
        )

    def cleanup_old(self, retention_days: int = 1, server_id: int | None = None) -> None:
        if server_id is not None:
            self._execute(
                "DELETE FROM metrics WHERE server_id = :sid AND timestamp < datetime('now', :delta)",
                {"sid": server_id, "delta": f"-{retention_days} days"},
            )
        else:
            self._execute(
                "DELETE FROM metrics WHERE timestamp < datetime('now', :delta)",
                {"delta": f"-{retention_days} days"},
            )
70  backend/core/dal/player_repository.py  Normal file
@@ -0,0 +1,70 @@
import json
from datetime import datetime, timezone
from core.dal.base_repository import BaseRepository


class PlayerRepository(BaseRepository):

    def get_all(self, server_id: int) -> list[dict]:
        return self._fetchall(
            "SELECT * FROM players WHERE server_id = :sid ORDER BY slot_id",
            {"sid": server_id},
        )

    def count(self, server_id: int) -> int:
        row = self._fetchone(
            "SELECT COUNT(*) as cnt FROM players WHERE server_id = :sid",
            {"sid": server_id},
        )
        return row["cnt"] if row else 0

    def upsert(self, server_id: int, player: dict) -> None:
        now = datetime.now(timezone.utc).isoformat()
        self._execute(
            """
            INSERT INTO players (server_id, slot_id, name, guid, ip, ping, game_data, joined_at, updated_at)
            VALUES (:sid, :slot, :name, :guid, :ip, :ping, :gd, :now, :now)
            ON CONFLICT(server_id, slot_id) DO UPDATE SET
                name = excluded.name,
                guid = excluded.guid,
                ping = excluded.ping,
                game_data = excluded.game_data,
                updated_at = excluded.updated_at
            """,
            {
                "sid": server_id,
                "slot": str(player.get("slot_id", "")),
                "name": player.get("name", ""),
                "guid": player.get("guid"),
                "ip": player.get("ip"),
                "ping": player.get("ping"),
                "gd": json.dumps(player.get("game_data", {})),
                "now": now,
            },
        )

    def clear(self, server_id: int) -> None:
        self._execute("DELETE FROM players WHERE server_id = :sid", {"sid": server_id})

    def get_history(
        self,
        server_id: int,
        limit: int = 50,
        offset: int = 0,
        search: str | None = None,
    ) -> tuple[int, list[dict]]:
        conditions = ["server_id = :sid"]
        params: dict = {"sid": server_id, "limit": limit, "offset": offset}
        if search:
            conditions.append("name LIKE :search")
            params["search"] = f"%{search}%"
        where = " AND ".join(conditions)
        total_row = self._fetchone(
            f"SELECT COUNT(*) as cnt FROM player_history WHERE {where}", params
        )
        total = total_row["cnt"] if total_row else 0
        rows = self._fetchall(
            f"SELECT * FROM player_history WHERE {where} ORDER BY left_at DESC LIMIT :limit OFFSET :offset",
            params,
        )
        return total, rows
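`PlayerRepository.upsert` leans on SQLite's `ON CONFLICT ... DO UPDATE` clause together with the `UNIQUE(server_id, slot_id)` constraint, so repeated poll cycles update a player's row in place instead of duplicating it. A trimmed stdlib sketch of the same pattern (the table and `upsert` helper here are illustrative, with only a few columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE players ("
    " server_id INTEGER, slot_id TEXT, name TEXT, ping INTEGER,"
    " UNIQUE(server_id, slot_id))"  # conflict target for the upsert
)


def upsert(server_id: int, slot_id: str, name: str, ping: int) -> None:
    # Same ON CONFLICT pattern as PlayerRepository.upsert, reduced to 4 columns
    conn.execute(
        """
        INSERT INTO players (server_id, slot_id, name, ping)
        VALUES (?, ?, ?, ?)
        ON CONFLICT(server_id, slot_id) DO UPDATE SET
            name = excluded.name, ping = excluded.ping
        """,
        (server_id, slot_id, name, ping),
    )


upsert(1, "0", "alice", 30)
upsert(1, "0", "alice", 55)  # second call updates the same row, no duplicate
```

Note that `ON CONFLICT ... DO UPDATE` requires SQLite 3.24+, which ships with all currently supported CPython releases.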
111  backend/core/dal/server_repository.py  Normal file
@@ -0,0 +1,111 @@
from __future__ import annotations

from datetime import datetime, timezone

from core.dal.base_repository import BaseRepository


class ServerRepository(BaseRepository):

    def get_all(self, game_type: str | None = None) -> list[dict]:
        if game_type:
            return self._fetchall(
                "SELECT * FROM servers WHERE game_type = :gt ORDER BY name",
                {"gt": game_type},
            )
        return self._fetchall("SELECT * FROM servers ORDER BY name")

    def get_by_id(self, server_id: int) -> dict | None:
        return self._fetchone("SELECT * FROM servers WHERE id = :id", {"id": server_id})

    def create(
        self,
        name: str,
        game_type: str,
        exe_path: str,
        game_port: int,
        rcon_port: int | None = None,
        description: str | None = None,
        auto_restart: bool = False,
        max_restarts: int = 3,
    ) -> int:
        return self._lastrowid(
            """
            INSERT INTO servers
                (name, description, game_type, exe_path, game_port, rcon_port,
                 auto_restart, max_restarts)
            VALUES
                (:name, :desc, :game_type, :exe, :gp, :rp, :ar, :mr)
            """,
            {
                "name": name,
                "desc": description,
                "game_type": game_type,
                "exe": exe_path,
                "gp": game_port,
                "rp": rcon_port,
                "ar": int(auto_restart),
                "mr": max_restarts,
            },
        )

    def update(self, server_id: int, **fields) -> None:
        if not fields:
            return
        fields["updated_at"] = datetime.now(timezone.utc).isoformat()
        fields["id"] = server_id
        set_clause = ", ".join(f"{k} = :{k}" for k in fields if k != "id")
        self._execute(f"UPDATE servers SET {set_clause} WHERE id = :id", fields)

    def update_status(
        self,
        server_id: int,
        status: str,
        pid: int | None = None,
        started_at: str | None = None,
        stopped_at: str | None = None,
    ) -> None:
        now = datetime.now(timezone.utc).isoformat()
        self._execute(
            """
            UPDATE servers
            SET status = :status, pid = :pid, started_at = :sa,
                stopped_at = :sta, updated_at = :now
            WHERE id = :id
            """,
            {
                "status": status,
                "pid": pid,
                "sa": started_at,
                "sta": stopped_at,
                "now": now,
                "id": server_id,
            },
        )

    def delete(self, server_id: int) -> None:
        self._execute("DELETE FROM servers WHERE id = :id", {"id": server_id})

    def get_running(self) -> list[dict]:
        return self._fetchall(
            "SELECT * FROM servers WHERE status IN ('running', 'starting')"
        )

    def increment_restart_count(self, server_id: int) -> None:
        now = datetime.now(timezone.utc).isoformat()
        self._execute(
            """
            UPDATE servers
            SET restart_count = restart_count + 1,
                last_restart_at = :now,
                updated_at = :now
            WHERE id = :id
            """,
            {"now": now, "id": server_id},
        )

    def reset_restart_count(self, server_id: int) -> None:
        self._execute(
            "UPDATE servers SET restart_count = 0 WHERE id = :id",
            {"id": server_id},
        )
0  backend/core/events/__init__.py  Normal file

0  backend/core/games/__init__.py  Normal file

70  backend/core/games/router.py  Normal file
@@ -0,0 +1,70 @@
from fastapi import APIRouter, HTTPException, status

from adapters.registry import GameAdapterRegistry

router = APIRouter(prefix="/games", tags=["games"])


def _ok(data):
    return {"success": True, "data": data, "error": None}


@router.get("")
def list_games():
    return _ok(GameAdapterRegistry.list_game_types())


@router.get("/{game_type}")
def get_game(game_type: str):
    try:
        adapter = GameAdapterRegistry.get(game_type)
    except KeyError:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={"code": "GAME_TYPE_NOT_FOUND", "message": f"Unknown game type: {game_type}"},
        )
    caps = []
    for cap in ["config_generator", "process_config", "log_parser",
                "remote_admin", "mission_manager", "mod_manager", "ban_manager"]:
        if adapter.has_capability(cap):
            caps.append(cap)

    config_gen = adapter.get_config_generator()
    sections = list(config_gen.get_sections().keys())
    process_config = adapter.get_process_config()

    return _ok({
        "game_type": adapter.game_type,
        "display_name": adapter.display_name,
        "version": adapter.version,
        "schema_version": config_gen.get_config_version(),
        "capabilities": caps,
        "config_sections": sections,
        "allowed_executables": process_config.get_allowed_executables(),
    })


@router.get("/{game_type}/config-schema")
def get_config_schema(game_type: str):
    try:
        adapter = GameAdapterRegistry.get(game_type)
    except KeyError:
        raise HTTPException(status_code=404, detail={"code": "GAME_TYPE_NOT_FOUND"})
    config_gen = adapter.get_config_generator()
    schemas = {}
    for section, model_cls in config_gen.get_sections().items():
        schemas[section] = model_cls.model_json_schema()
    return _ok(schemas)


@router.get("/{game_type}/defaults")
def get_defaults(game_type: str):
    try:
        adapter = GameAdapterRegistry.get(game_type)
    except KeyError:
        raise HTTPException(status_code=404, detail={"code": "GAME_TYPE_NOT_FOUND"})
    config_gen = adapter.get_config_generator()
    defaults = {}
    for section in config_gen.get_sections():
        defaults[section] = config_gen.get_defaults(section)
    return _ok(defaults)
0  backend/core/jobs/__init__.py  Normal file

102  backend/core/jobs/cleanup_jobs.py  Normal file
@@ -0,0 +1,102 @@
"""
Cleanup jobs registered with APScheduler.

Jobs:
- cleanup_old_logs: Delete log entries older than 7 days, daily at 03:00
- cleanup_old_metrics: Delete metrics older than 1 day, every 6 hours
- cleanup_old_events: Delete events older than 30 days, weekly on Sunday
"""
from __future__ import annotations

import logging

from apscheduler.triggers.cron import CronTrigger
from apscheduler.triggers.interval import IntervalTrigger

from core.jobs.scheduler import get_scheduler
from database import get_thread_db
from core.dal.log_repository import LogRepository
from core.dal.metrics_repository import MetricsRepository
from core.dal.event_repository import EventRepository

logger = logging.getLogger(__name__)

_LOG_RETENTION_DAYS = 7
_METRICS_RETENTION_DAYS = 1
_EVENT_RETENTION_DAYS = 30


def register_cleanup_jobs() -> None:
    """Register all cleanup jobs with the scheduler. Call at startup."""
    sched = get_scheduler()

    sched.add_job(
        func=_cleanup_old_logs,
        trigger=CronTrigger(hour=3, minute=0),
        id="cleanup_old_logs",
        name="Clean up old log entries",
        replace_existing=True,
    )

    sched.add_job(
        func=_cleanup_old_metrics,
        trigger=IntervalTrigger(hours=6),
        id="cleanup_old_metrics",
        name="Clean up old metrics",
        replace_existing=True,
    )

    sched.add_job(
        func=_cleanup_old_events,
        trigger=CronTrigger(day_of_week="sun", hour=4, minute=0),
        id="cleanup_old_events",
        name="Clean up old events",
        replace_existing=True,
    )

    logger.info("Cleanup jobs registered")


def _cleanup_old_logs() -> None:
    logger.info("Running log cleanup (retention=%d days)", _LOG_RETENTION_DAYS)
    try:
        db = get_thread_db()
        try:
            log_repo = LogRepository(db)
            log_repo.cleanup_old(retention_days=_LOG_RETENTION_DAYS)
            db.commit()
        finally:
            db.close()
        logger.info("Log cleanup complete")
    except Exception as exc:
        logger.error("Log cleanup failed: %s", exc, exc_info=True)


def _cleanup_old_metrics() -> None:
    logger.info("Running metrics cleanup (retention=%d days)", _METRICS_RETENTION_DAYS)
    try:
        db = get_thread_db()
        try:
            metrics_repo = MetricsRepository(db)
            metrics_repo.cleanup_old(retention_days=_METRICS_RETENTION_DAYS)
            db.commit()
        finally:
            db.close()
        logger.info("Metrics cleanup complete")
    except Exception as exc:
        logger.error("Metrics cleanup failed: %s", exc, exc_info=True)


def _cleanup_old_events() -> None:
    logger.info("Running event cleanup (retention=%d days)", _EVENT_RETENTION_DAYS)
    try:
        db = get_thread_db()
        try:
            event_repo = EventRepository(db)
            event_repo.cleanup_old(retention_days=_EVENT_RETENTION_DAYS)
            db.commit()
        finally:
            db.close()
        logger.info("Event cleanup complete")
    except Exception as exc:
        logger.error("Event cleanup failed: %s", exc, exc_info=True)
40  backend/core/jobs/scheduler.py  Normal file
@@ -0,0 +1,40 @@
"""
APScheduler setup for background cleanup jobs.

One scheduler instance runs per process.
Jobs run in their own threads (ThreadPoolExecutor).
"""
from __future__ import annotations

import logging

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.executors.pool import ThreadPoolExecutor

logger = logging.getLogger(__name__)

_scheduler: BackgroundScheduler | None = None


def get_scheduler() -> BackgroundScheduler:
    global _scheduler
    if _scheduler is None:
        _scheduler = BackgroundScheduler(
            executors={"default": ThreadPoolExecutor(max_workers=2)},
            job_defaults={"coalesce": True, "max_instances": 1},
        )
    return _scheduler


def start_scheduler() -> None:
    sched = get_scheduler()
    if not sched.running:
        sched.start()
        logger.info("APScheduler started")


def stop_scheduler() -> None:
    global _scheduler
    if _scheduler is not None and _scheduler.running:
        _scheduler.shutdown(wait=False)
        logger.info("APScheduler stopped")
0
backend/core/logs/__init__.py
Normal file
0
backend/core/logs/__init__.py
Normal file
0
backend/core/metrics/__init__.py
Normal file
0
backend/core/metrics/__init__.py
Normal file
187
backend/core/migrations/001_initial_schema.sql
Normal file
187
backend/core/migrations/001_initial_schema.sql
Normal file
@@ -0,0 +1,187 @@
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL,
    role TEXT NOT NULL DEFAULT 'viewer',
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    last_login TEXT,
    CHECK (role IN ('admin', 'viewer'))
);

CREATE TABLE IF NOT EXISTS servers (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    description TEXT,
    game_type TEXT NOT NULL DEFAULT 'arma3',
    status TEXT NOT NULL DEFAULT 'stopped',
    pid INTEGER,
    exe_path TEXT NOT NULL,
    started_at TEXT,
    stopped_at TEXT,
    game_port INTEGER NOT NULL,
    rcon_port INTEGER,
    auto_restart INTEGER NOT NULL DEFAULT 0,
    max_restarts INTEGER NOT NULL DEFAULT 3,
    restart_window_seconds INTEGER NOT NULL DEFAULT 300,
    restart_count INTEGER NOT NULL DEFAULT 0,
    last_restart_at TEXT,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    updated_at TEXT NOT NULL DEFAULT (datetime('now')),
    CHECK (status IN ('stopped','starting','running','stopping','crashed','error')),
    CHECK (game_port BETWEEN 1024 AND 65535),
    CHECK (rcon_port IS NULL OR (rcon_port BETWEEN 1024 AND 65535))
);

CREATE INDEX IF NOT EXISTS idx_servers_status ON servers(status);
CREATE INDEX IF NOT EXISTS idx_servers_game_type ON servers(game_type);
CREATE INDEX IF NOT EXISTS idx_servers_game_port ON servers(game_port);

CREATE TABLE IF NOT EXISTS game_configs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    game_type TEXT NOT NULL,
    section TEXT NOT NULL,
    config_json TEXT NOT NULL DEFAULT '{}',
    config_version INTEGER NOT NULL DEFAULT 1,
    schema_version TEXT NOT NULL DEFAULT '1.0.0',
    updated_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE(server_id, section)
);

CREATE INDEX IF NOT EXISTS idx_game_configs_server ON game_configs(server_id);
CREATE INDEX IF NOT EXISTS idx_game_configs_type_section ON game_configs(game_type, section);

CREATE TABLE IF NOT EXISTS mods (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    game_type TEXT NOT NULL,
    name TEXT NOT NULL,
    folder_path TEXT NOT NULL,
    workshop_id TEXT,
    description TEXT,
    game_data TEXT DEFAULT '{}',
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE (game_type, folder_path)
);

CREATE TABLE IF NOT EXISTS server_mods (
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    mod_id INTEGER NOT NULL REFERENCES mods(id) ON DELETE CASCADE,
    is_server_mod INTEGER NOT NULL DEFAULT 0,
    sort_order INTEGER NOT NULL DEFAULT 0,
    game_data TEXT DEFAULT '{}',
    PRIMARY KEY (server_id, mod_id)
);

CREATE INDEX IF NOT EXISTS idx_server_mods_server ON server_mods(server_id);

CREATE TABLE IF NOT EXISTS missions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    filename TEXT NOT NULL,
    mission_name TEXT NOT NULL,
    terrain TEXT,
    file_size INTEGER,
    game_data TEXT DEFAULT '{}',
    uploaded_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE (server_id, filename)
);

CREATE INDEX IF NOT EXISTS idx_missions_server ON missions(server_id);

CREATE TABLE IF NOT EXISTS mission_rotation (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    mission_id INTEGER NOT NULL REFERENCES missions(id) ON DELETE CASCADE,
    sort_order INTEGER NOT NULL DEFAULT 0,
    difficulty TEXT,
    params_json TEXT NOT NULL DEFAULT '{}',
    game_data TEXT DEFAULT '{}',
    UNIQUE (server_id, sort_order)
);

CREATE INDEX IF NOT EXISTS idx_mission_rotation_server ON mission_rotation(server_id);

CREATE TABLE IF NOT EXISTS players (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    slot_id TEXT NOT NULL,
    name TEXT NOT NULL,
    guid TEXT,
    ip TEXT,
    ping INTEGER,
    game_data TEXT DEFAULT '{}',
    joined_at TEXT NOT NULL DEFAULT (datetime('now')),
    updated_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE (server_id, slot_id)
);

CREATE INDEX IF NOT EXISTS idx_players_server ON players(server_id);

CREATE TABLE IF NOT EXISTS player_history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    name TEXT NOT NULL,
    guid TEXT,
    ip TEXT,
    game_data TEXT DEFAULT '{}',
    joined_at TEXT NOT NULL,
    left_at TEXT NOT NULL DEFAULT (datetime('now')),
    session_duration_seconds INTEGER
);

CREATE INDEX IF NOT EXISTS idx_player_history_server ON player_history(server_id);
CREATE INDEX IF NOT EXISTS idx_player_history_guid ON player_history(guid);

CREATE TABLE IF NOT EXISTS bans (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    guid TEXT,
    name TEXT,
    reason TEXT,
    banned_by TEXT,
    banned_at TEXT NOT NULL DEFAULT (datetime('now')),
    expires_at TEXT,
    is_active INTEGER NOT NULL DEFAULT 1,
    game_data TEXT DEFAULT '{}',
    CHECK (is_active IN (0, 1))
);

CREATE INDEX IF NOT EXISTS idx_bans_server ON bans(server_id);
CREATE INDEX IF NOT EXISTS idx_bans_guid ON bans(guid);
CREATE INDEX IF NOT EXISTS idx_bans_active ON bans(is_active);

CREATE TABLE IF NOT EXISTS logs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    timestamp TEXT NOT NULL,
    level TEXT NOT NULL DEFAULT 'info',
    message TEXT NOT NULL,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    CHECK (level IN ('info', 'warning', 'error'))
);

CREATE INDEX IF NOT EXISTS idx_logs_server_ts ON logs(server_id, timestamp);
CREATE INDEX IF NOT EXISTS idx_logs_level ON logs(level);
CREATE INDEX IF NOT EXISTS idx_logs_created ON logs(created_at);

CREATE TABLE IF NOT EXISTS metrics (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    timestamp TEXT NOT NULL DEFAULT (datetime('now')),
    cpu_percent REAL,
    ram_mb REAL,
    player_count INTEGER
);

CREATE INDEX IF NOT EXISTS idx_metrics_server_ts ON metrics(server_id, timestamp);

CREATE TABLE IF NOT EXISTS server_events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    server_id INTEGER NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
    event_type TEXT NOT NULL,
    actor TEXT,
    detail TEXT,
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_events_server ON server_events(server_id, created_at);
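The port-range CHECK constraints in the migration are enforced by SQLite itself at insert time. A minimal sketch against an in-memory database (the `servers` table is trimmed to the relevant columns; paths and names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE servers (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        exe_path TEXT NOT NULL,
        game_port INTEGER NOT NULL,
        CHECK (game_port BETWEEN 1024 AND 65535)
    )
""")

# A port inside the allowed range inserts fine.
conn.execute(
    "INSERT INTO servers (name, exe_path, game_port) VALUES (?, ?, ?)",
    ("test", "/srv/arma3server", 2302),
)

# A privileged port (below 1024) violates the CHECK constraint.
try:
    conn.execute(
        "INSERT INTO servers (name, exe_path, game_port) VALUES (?, ?, ?)",
        ("bad", "/srv/arma3server", 80),
    )
    raised = False
except sqlite3.IntegrityError:
    raised = True

print(raised)  # True — the row was rejected
```

Keeping the range check in the schema means every writer (API, jobs, manual SQL) gets the same validation for free.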
0  backend/core/migrations/__init__.py  Normal file
0  backend/core/players/__init__.py  Normal file
0  backend/core/servers/__init__.py  Normal file
142  backend/core/servers/bans_router.py  Normal file
@@ -0,0 +1,142 @@
"""Ban management endpoints — create, list, and revoke bans."""
from __future__ import annotations

import logging
from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel, field_validator
from sqlalchemy.engine import Connection

from adapters.arma3.ban_manager import Arma3BanManager
from core.dal.ban_repository import BanRepository
from core.servers.service import ServerService
from database import get_db
from dependencies import get_current_user, require_admin

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/servers/{server_id}/bans", tags=["bans"])


def _ok(data):
    return {"success": True, "data": data, "error": None}


class CreateBanRequest(BaseModel):
    player_uid: str
    ban_type: str = "GUID"
    reason: str = ""
    duration_minutes: int = 0  # 0 = permanent

    @field_validator("ban_type")
    @classmethod
    def validate_ban_type(cls, v: str) -> str:
        if v not in ("GUID", "IP"):
            raise ValueError("ban_type must be 'GUID' or 'IP'")
        return v

    @field_validator("duration_minutes")
    @classmethod
    def validate_duration(cls, v: int) -> int:
        if v < 0:
            raise ValueError("duration_minutes cannot be negative")
        return v


@router.get("")
def list_bans(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _user: Annotated[dict, Depends(get_current_user)],
) -> dict:
    """List all active bans for the server."""
    ServerService(db).get_server(server_id)  # raises 404 if not found
    ban_repo = BanRepository(db)
    bans = ban_repo.get_all(server_id=server_id)
    return _ok(bans)


@router.post("", status_code=status.HTTP_201_CREATED)
def create_ban(
    server_id: int,
    body: CreateBanRequest,
    db: Annotated[Connection, Depends(get_db)],
    _admin: Annotated[dict, Depends(require_admin)],
) -> dict:
    """Create a new ban. Writes to DB and syncs to bans.txt."""
    ServerService(db).get_server(server_id)  # raises 404 if not found
    ban_repo = BanRepository(db)

    # Calculate expires_at if duration is set
    expires_at = None
    if body.duration_minutes > 0:
        from datetime import datetime, timezone, timedelta
        expires_at = (
            datetime.now(timezone.utc) + timedelta(minutes=body.duration_minutes)
        ).isoformat()

    ban_id = ban_repo.create(
        server_id=server_id,
        guid=body.player_uid if body.ban_type == "GUID" else None,
        name=None,
        reason=body.reason,
        banned_by=_admin["username"],
        expires_at=expires_at,
        game_data={"ban_type": body.ban_type, "duration_minutes": body.duration_minutes},
    )
    db.commit()

    ban = ban_repo.get_by_id(ban_id)

    # Sync to bans.txt (non-blocking — log error but don't fail request)
    _sync_ban_to_file(server_id, body.player_uid, body.ban_type, body.reason, body.duration_minutes)

    return _ok(ban)


@router.delete("/{ban_id}")
def revoke_ban(
    server_id: int,
    ban_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _admin: Annotated[dict, Depends(require_admin)],
) -> dict:
    """Revoke a ban (marks as inactive in DB, removes from bans.txt)."""
    ServerService(db).get_server(server_id)  # raises 404 if not found
    ban_repo = BanRepository(db)
    ban = ban_repo.get_by_id(ban_id)
    if ban is None or ban["server_id"] != server_id:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={"code": "NOT_FOUND", "message": "Ban not found"},
        )
    ban_repo.deactivate(ban_id)
    db.commit()

    # Remove from bans.txt
    _remove_ban_from_file(server_id, ban.get("guid") or "")

    return _ok({"message": f"Ban {ban_id} revoked"})


# ── File sync helpers ──

def _sync_ban_to_file(
    server_id: int, identifier: str, ban_type: str, reason: str, duration_minutes: int
) -> None:
    """Write ban to bans.txt. Log error but don't fail the request."""
    try:
        mgr = Arma3BanManager(server_id)
        mgr.add_ban(identifier, ban_type, reason, duration_minutes)
    except Exception as exc:
        logger.error("Failed to sync ban to bans.txt for server %d: %s", server_id, exc)


def _remove_ban_from_file(server_id: int, identifier: str) -> None:
    """Remove ban from bans.txt. Log error but don't fail the request."""
    try:
        mgr = Arma3BanManager(server_id)
        mgr.remove_ban(identifier)
    except Exception as exc:
        logger.error("Failed to remove ban from bans.txt for server %d: %s", server_id, exc)
115  backend/core/servers/missions_router.py  Normal file
@@ -0,0 +1,115 @@
"""Mission management endpoints — list, upload, delete mission files."""
from __future__ import annotations

import logging
from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException, UploadFile, File, status
from sqlalchemy.engine import Connection

from adapters.exceptions import AdapterError
from adapters.registry import GameAdapterRegistry
from core.servers.service import ServerService
from database import get_db
from dependencies import get_current_user, require_admin

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/servers/{server_id}/missions", tags=["missions"])

_MAX_UPLOAD_SIZE = 500 * 1024 * 1024  # 500 MB


def _ok(data):
    return {"success": True, "data": data, "error": None}


def _get_mission_manager(server_id: int, game_type: str):
    """Get MissionManager for the server's game type."""
    adapter = GameAdapterRegistry.get(game_type)
    if not adapter.has_capability("mission_manager"):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail={"code": "NOT_SUPPORTED", "message": f"Game type '{game_type}' does not support mission management"},
        )
    return adapter.get_mission_manager(server_id)


@router.get("")
def list_missions(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _user: Annotated[dict, Depends(get_current_user)],
) -> dict:
    """List all available mission files on disk."""
    server = ServerService(db).get_server(server_id)  # raises 404 if not found
    mgr = _get_mission_manager(server_id, server["game_type"])
    try:
        missions = mgr.list_missions()
    except AdapterError as exc:
        raise HTTPException(status_code=500, detail={"code": "ADAPTER_ERROR", "message": str(exc)})

    return _ok({
        "server_id": server_id,
        "missions": missions,
        "total": len(missions),
    })


@router.post("", status_code=status.HTTP_201_CREATED)
async def upload_mission(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _admin: Annotated[dict, Depends(require_admin)],
    file: UploadFile = File(...),
) -> dict:
    """
    Upload a mission .pbo file.
    Max size: 500 MB.
    """
    server = ServerService(db).get_server(server_id)  # raises 404 if not found

    if not file.filename:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail={"code": "NO_FILENAME", "message": "No filename provided"},
        )

    content = await file.read()
    if len(content) > _MAX_UPLOAD_SIZE:
        raise HTTPException(
            status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
            detail={"code": "FILE_TOO_LARGE", "message": f"File too large. Max size is {_MAX_UPLOAD_SIZE // (1024*1024)} MB"},
        )

    mgr = _get_mission_manager(server_id, server["game_type"])
    try:
        mission = mgr.upload_mission(file.filename, content)
    except AdapterError as exc:
        raise HTTPException(status_code=400, detail={"code": "ADAPTER_ERROR", "message": str(exc)})

    return _ok(mission)


@router.delete("/{filename}")
def delete_mission(
    server_id: int,
    filename: str,
    db: Annotated[Connection, Depends(get_db)],
    _admin: Annotated[dict, Depends(require_admin)],
) -> dict:
    """Delete a mission file by filename."""
    server = ServerService(db).get_server(server_id)  # raises 404 if not found
    mgr = _get_mission_manager(server_id, server["game_type"])
    try:
        deleted = mgr.delete_mission(filename)
    except AdapterError as exc:
        raise HTTPException(status_code=400, detail={"code": "ADAPTER_ERROR", "message": str(exc)})

    if not deleted:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={"code": "NOT_FOUND", "message": f"Mission '{filename}' not found"},
        )

    return _ok({"message": f"Mission '{filename}' deleted"})
101  backend/core/servers/mods_router.py  Normal file
@@ -0,0 +1,101 @@
"""Mod management endpoints — list available mods, set enabled mods."""
from __future__ import annotations

import logging
from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel
from sqlalchemy.engine import Connection

from adapters.exceptions import AdapterError
from adapters.registry import GameAdapterRegistry
from core.dal.config_repository import ConfigRepository
from core.servers.service import ServerService
from database import get_db
from dependencies import get_current_user, require_admin

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/servers/{server_id}/mods", tags=["mods"])


def _ok(data):
    return {"success": True, "data": data, "error": None}


class SetEnabledModsRequest(BaseModel):
    mods: list[str]


def _get_mod_manager(server_id: int, game_type: str):
    """Get ModManager for the server's game type."""
    adapter = GameAdapterRegistry.get(game_type)
    if not adapter.has_capability("mod_manager"):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail={"code": "NOT_SUPPORTED", "message": f"Game type '{game_type}' does not support mod management"},
        )
    return adapter.get_mod_manager(server_id)


@router.get("")
def list_mods(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _user: Annotated[dict, Depends(get_current_user)],
) -> dict:
    """List all available mods and which are enabled."""
    server = ServerService(db).get_server(server_id)  # raises 404 if not found
    mgr = _get_mod_manager(server_id, server["game_type"])

    config_repo = ConfigRepository(db)
    try:
        available = mgr.list_available_mods()
        enabled = set(mgr.get_enabled_mods(config_repo))
    except AdapterError as exc:
        raise HTTPException(status_code=500, detail={"code": "ADAPTER_ERROR", "message": str(exc)})

    for mod in available:
        mod["enabled"] = mod["name"] in enabled

    return _ok({
        "server_id": server_id,
        "mods": available,
        "enabled_count": len(enabled),
    })


@router.put("/enabled")
def set_enabled_mods(
    server_id: int,
    body: SetEnabledModsRequest,
    db: Annotated[Connection, Depends(get_db)],
    _admin: Annotated[dict, Depends(require_admin)],
) -> dict:
    """
    Set the list of enabled mods.
    Replaces the current enabled list entirely.
    Server must be restarted for changes to take effect.
    """
    server = ServerService(db).get_server(server_id)  # raises 404 if not found
    mgr = _get_mod_manager(server_id, server["game_type"])

    config_repo = ConfigRepository(db)
    try:
        mgr.set_enabled_mods(body.mods, config_repo)
    except AdapterError as exc:
        raise HTTPException(status_code=400, detail={"code": "ADAPTER_ERROR", "message": str(exc)})
    except ValueError as exc:
        if "CONFIG_VERSION_CONFLICT" in str(exc):
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={"code": "VERSION_CONFLICT", "message": "Config was modified by another request. Please retry."},
            )
        raise
    db.commit()

    return _ok({
        "message": "Enabled mods updated. Restart the server for changes to take effect.",
        "enabled_mods": body.mods,
    })
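The 409 path in `set_enabled_mods` relies on optimistic concurrency: the write fails when `config_version` has changed since the caller read the config. An illustrative in-memory sketch of that compare-and-bump scheme (this is not the project's `ConfigRepository` API, just the pattern):

```python
class InMemoryConfigStore:
    """Toy stand-in for a versioned config row."""
    def __init__(self):
        self._config = {"mods": [], "version": 1}

    def read(self) -> dict:
        return dict(self._config)

    def update_mods(self, mods: list[str], expected_version: int) -> None:
        # Apply only if the caller's snapshot is still current.
        if self._config["version"] != expected_version:
            raise ValueError("CONFIG_VERSION_CONFLICT")
        self._config["mods"] = list(mods)
        self._config["version"] += 1

store = InMemoryConfigStore()
snapshot = store.read()
store.update_mods(["@cba_a3"], snapshot["version"])   # succeeds, version -> 2

try:
    # A second write using the stale snapshot loses the race.
    store.update_mods(["@ace"], snapshot["version"])
    conflicted = False
except ValueError:
    conflicted = True
print(conflicted)  # True — the router would translate this into HTTP 409
```

This is why the router's error message tells the client to re-read and retry rather than silently overwriting the newer config.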
57  backend/core/servers/players_router.py  Normal file
@@ -0,0 +1,57 @@
"""Player endpoints — list current players for a running server."""
from __future__ import annotations

import logging
from typing import Annotated

from fastapi import APIRouter, Depends
from sqlalchemy.engine import Connection

from core.dal.player_repository import PlayerRepository
from core.servers.service import ServerService
from database import get_db
from dependencies import get_current_user

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/servers/{server_id}/players", tags=["players"])


def _ok(data):
    return {"success": True, "data": data, "error": None}


@router.get("")
def list_players(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _user: Annotated[dict, Depends(get_current_user)],
) -> dict:
    """List current players (cached from RemoteAdminPollerThread)."""
    ServerService(db).get_server(server_id)  # raises 404 if not found
    player_repo = PlayerRepository(db)
    players = player_repo.get_all(server_id=server_id)
    count = player_repo.count(server_id=server_id)
    return _ok({
        "server_id": server_id,
        "player_count": count,
        "players": players,
    })


@router.get("/history")
def player_history(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)],
    _user: Annotated[dict, Depends(get_current_user)],
    limit: int = 100,
    offset: int = 0,
    search: str | None = None,
) -> dict:
    """Get historical player sessions."""
    ServerService(db).get_server(server_id)  # raises 404 if not found
    player_repo = PlayerRepository(db)
    total, rows = player_repo.get_history(
        server_id=server_id, limit=limit, offset=offset, search=search,
    )
    return _ok({"total": total, "items": rows})
243  backend/core/servers/process_manager.py  Normal file
@@ -0,0 +1,243 @@
"""
ProcessManager singleton — owns all subprocess handles.
Game-agnostic: delegates exe validation and config to adapters.
"""
from __future__ import annotations

import logging
import subprocess
import threading
from pathlib import Path

import psutil

logger = logging.getLogger(__name__)


class ProcessManager:
    _instance: "ProcessManager | None" = None
    _init_lock = threading.Lock()

    def __init__(self):
        self._processes: dict[int, subprocess.Popen] = {}
        self._lock = threading.Lock()
        self._operation_locks: dict[int, threading.Lock] = {}
        self._ops_lock = threading.Lock()

    @classmethod
    def get(cls) -> "ProcessManager":
        if cls._instance is None:
            with cls._init_lock:
                if cls._instance is None:
                    cls._instance = ProcessManager()
        return cls._instance

    def get_operation_lock(self, server_id: int) -> threading.Lock:
        """Per-server lock that serializes start/stop/restart for the same server."""
        with self._ops_lock:
            if server_id not in self._operation_locks:
                self._operation_locks[server_id] = threading.Lock()
            return self._operation_locks[server_id]

    def start(
        self,
        server_id: int,
        exe_path: str,
        args: list[str],
        cwd: str | Path,
    ) -> int:
        """
        Start a game server process.
        Returns the PID.
        cwd is set to servers/{server_id}/ so relative config paths work.
        """
        with self._lock:
            if server_id in self._processes:
                proc = self._processes[server_id]
                if proc.poll() is None:
                    raise RuntimeError(f"Server {server_id} is already running (PID {proc.pid})")
                del self._processes[server_id]

        full_cmd = [exe_path] + args
        logger.info("Starting server %d: %s", server_id, ' '.join(full_cmd))

        proc = subprocess.Popen(
            full_cmd,
            cwd=str(cwd),
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            # On Windows, don't create a new console window
            creationflags=subprocess.CREATE_NO_WINDOW if hasattr(subprocess, "CREATE_NO_WINDOW") else 0,
        )

        with self._lock:
            self._processes[server_id] = proc

        logger.info("Server %d started with PID %d", server_id, proc.pid)
        return proc.pid

    def stop(self, server_id: int, timeout: int = 30) -> bool:
        """
        Send terminate signal and wait up to timeout seconds.
        On Windows, terminate() = hard kill (no SIGTERM).
        Returns True if process exited, False if still running.
        """
        with self._lock:
            proc = self._processes.get(server_id)
        if proc is None:
            return True

        try:
            proc.terminate()
        except ProcessLookupError:
            return True

        try:
            proc.wait(timeout=timeout)
            with self._lock:
                self._processes.pop(server_id, None)
            return True
        except subprocess.TimeoutExpired:
            return False

    def kill(self, server_id: int) -> bool:
        """Force-kill the process immediately."""
        with self._lock:
            proc = self._processes.get(server_id)
        if proc is None:
            return True
        try:
            proc.kill()
            proc.wait(timeout=5)
        except (ProcessLookupError, subprocess.TimeoutExpired):
            logger.debug("Process %d already exited or timed out during kill", server_id)
        with self._lock:
            self._processes.pop(server_id, None)
        return True

    def is_running(self, server_id: int) -> bool:
        with self._lock:
            proc = self._processes.get(server_id)
        if proc is None:
            return False
        return proc.poll() is None

    def get_pid(self, server_id: int) -> int | None:
        with self._lock:
            proc = self._processes.get(server_id)
        if proc is None or proc.poll() is not None:
            return None
        return proc.pid

    def get_process(self, server_id: int) -> subprocess.Popen | None:
        with self._lock:
            return self._processes.get(server_id)

    def list_running(self) -> list[int]:
        with self._lock:
            return [sid for sid, p in self._processes.items() if p.poll() is None]

    def recover_on_startup(self, db) -> None:
        """
        On app restart: check DB for servers marked 'running'.
        If the PID is still alive AND the process name matches the adapter's
        allowed executables, re-attach monitoring threads.
        Otherwise mark server as 'crashed'.
        """
        from adapters.registry import GameAdapterRegistry
        from core.dal.server_repository import ServerRepository
        from core.dal.event_repository import EventRepository
        from sqlalchemy import text

        running_servers = ServerRepository(db).get_running()
        for server in running_servers:
            pid = server.get("pid")
            if pid is None:
                self._mark_crashed(server, db, "No PID recorded")
                continue

            # Check if PID is alive
            if not psutil.pid_exists(pid):
                self._mark_crashed(server, db, f"PID {pid} no longer exists")
                continue

            # Check process name matches adapter allowlist
            try:
                proc = psutil.Process(pid)
                proc_name = proc.name()
                adapter = GameAdapterRegistry.get(server["game_type"])
                allowed = adapter.get_process_config().get_allowed_executables()
                if not any(proc_name.lower() == exe.lower() for exe in allowed):
                    self._mark_crashed(
                        server, db,
                        f"PID {pid} has name '{proc_name}', not in allowlist {allowed}"
                    )
                    continue
            except (psutil.NoSuchProcess, psutil.AccessDenied, KeyError) as e:
                self._mark_crashed(server, db, str(e))
                continue

            # PID is valid — re-attach the process and start monitoring threads
            logger.info(
                "Recovering server %d (PID %d, %s)", server['id'], pid, server['game_type']
            )
            proc_obj = self._get_popen_for_pid(pid)
            if proc_obj:
                with self._lock:
                    self._processes[server["id"]] = proc_obj

                # Re-start monitoring threads without re-launching the process
                try:
                    from core.threads.thread_registry import ThreadRegistry
                    ThreadRegistry.reattach_server_threads(server["id"], db)
                except Exception as e:
                    logger.warning("Could not re-attach threads for server %d: %s", server['id'], e)
            else:
                self._mark_crashed(server, db, f"Could not attach to PID {pid}")

    def _mark_crashed(self, server: dict, db, reason: str) -> None:
        from core.dal.server_repository import ServerRepository
        from core.dal.event_repository import EventRepository
        logger.warning("Server %d marked crashed on startup: %s", server['id'], reason)
        ServerRepository(db).update_status(server["id"], "crashed")
        EventRepository(db).insert(
            server["id"], "crashed", actor="system",
            detail={"reason": reason, "on_startup": True}
        )

    @staticmethod
    def _get_popen_for_pid(pid: int) -> subprocess.Popen | None:
        """
        Create a Popen-like wrapper that attaches to an existing PID.
        NOTE: This is a limited wrapper — we cannot use Popen() on existing PIDs.
        We use a sentinel object that wraps psutil.Process.
        """
        try:
            return _PsutilProcessWrapper(pid)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            return None


class _PsutilProcessWrapper:
    """
    Minimal Popen-compatible wrapper around an existing process (by PID).
    Used for startup recovery only.
    """
    def __init__(self, pid: int):
        self._psutil_proc = psutil.Process(pid)
        self.pid = pid

    def poll(self) -> int | None:
        """Return None if running, exit code if not (we use -1 for external termination)."""
        if self._psutil_proc.is_running():
            return None
        return -1

    def wait(self, timeout: int | None = None):
        self._psutil_proc.wait(timeout=timeout)

    def terminate(self):
        self._psutil_proc.terminate()

    def kill(self):
        self._psutil_proc.kill()
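ProcessManager uses two levels of locking: `_ops_lock` only guards the table of per-server operation locks, while each server's own lock serializes start/stop/restart for that server without blocking operations on other servers. A minimal sketch of the get-or-create pattern (`OpLocks` is a hypothetical stand-in, not the project class):

```python
import threading

class OpLocks:
    """Hand out one dedicated lock per server id, created lazily."""
    def __init__(self):
        self._locks: dict[int, threading.Lock] = {}
        self._table_lock = threading.Lock()  # guards the dict itself

    def get(self, server_id: int) -> threading.Lock:
        with self._table_lock:
            if server_id not in self._locks:
                self._locks[server_id] = threading.Lock()
            return self._locks[server_id]

locks = OpLocks()
# Same server id always yields the same lock object...
same = locks.get(1) is locks.get(1)
# ...while different servers get independent locks.
independent = locks.get(1) is not locks.get(2)
print(same, independent)  # True True
```

Holding `_table_lock` only for the dict lookup keeps the critical section tiny; a long-running stop on server 1 never blocks a start on server 2.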
233  backend/core/servers/router.py  Normal file
@@ -0,0 +1,233 @@
from __future__ import annotations

from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException, status
from fastapi.responses import Response
from pydantic import BaseModel
from sqlalchemy.engine import Connection

from core.servers.schemas import (
    CreateServerRequest, StopServerRequest, UpdateServerRequest,
)
from core.servers.service import ServerService
from database import get_db
from dependencies import get_current_user, require_admin

router = APIRouter(prefix="/servers", tags=["servers"])


def _ok(data):
    return {"success": True, "data": data, "error": None}


class SendCommandRequest(BaseModel):
    command: str


# ── Server CRUD ──────────────────────────────────────────────────────────────

@router.get("")
def list_servers(
    game_type: str | None = None,
    db: Annotated[Connection, Depends(get_db)] = None,
    _user: Annotated[dict, Depends(get_current_user)] = None,
):
    return _ok(ServerService(db).list_servers(game_type))


@router.post("", status_code=201)
def create_server(
    body: CreateServerRequest,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    return _ok(ServerService(db).create_server(
        name=body.name,
        game_type=body.game_type,
        exe_path=body.exe_path,
        game_port=body.game_port,
        rcon_port=body.rcon_port,
        description=body.description,
        auto_restart=body.auto_restart,
        max_restarts=body.max_restarts,
    ))


@router.get("/{server_id}")
def get_server(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _user: Annotated[dict, Depends(get_current_user)] = None,
):
    return _ok(ServerService(db).get_server(server_id))


@router.put("/{server_id}")
def update_server(
    server_id: int,
    body: UpdateServerRequest,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    return _ok(ServerService(db).update_server(server_id, **body.model_dump(exclude_none=True)))


@router.delete("/{server_id}", status_code=204)
def delete_server(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    ServerService(db).delete_server(server_id)
    return Response(status_code=204)


# ── Lifecycle ────────────────────────────────────────────────────────────────

@router.post("/{server_id}/start")
def start_server(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    return _ok(ServerService(db).start(server_id))


@router.post("/{server_id}/stop")
def stop_server(
    server_id: int,
    body: StopServerRequest | None = None,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    force = body.force if body else False
    return _ok(ServerService(db).stop(server_id, force=force))


@router.post("/{server_id}/restart")
def restart_server(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    return _ok(ServerService(db).restart(server_id))


@router.post("/{server_id}/kill")
def kill_server(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    return _ok(ServerService(db).kill(server_id))


# ── Config ───────────────────────────────────────────────────────────────────

@router.get("/{server_id}/config")
def get_config(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _user: Annotated[dict, Depends(get_current_user)] = None,
):
    return _ok(ServerService(db).get_config(server_id))


@router.get("/{server_id}/config/preview")
def get_config_preview(
    server_id: int,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    return _ok(ServerService(db).get_config_preview(server_id))


@router.get("/{server_id}/config/{section}")
def get_config_section(
    server_id: int,
    section: str,
    db: Annotated[Connection, Depends(get_db)] = None,
    _user: Annotated[dict, Depends(get_current_user)] = None,
):
    return _ok(ServerService(db).get_config_section(server_id, section))


@router.put("/{server_id}/config/{section}")
def update_config_section(
    server_id: int,
    section: str,
    body: dict,  # Dynamic — adapter-specific fields
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    expected_version = body.pop("config_version", None)
    return _ok(ServerService(db).update_config_section(
        server_id, section, body, expected_version
    ))


# ── RCon ──────────────────────────────────────────────────────────────────────

@router.post("/{server_id}/rcon/command")
def send_rcon_command(
    server_id: int,
    body: SendCommandRequest,
    db: Annotated[Connection, Depends(get_db)] = None,
    _admin: Annotated[dict, Depends(require_admin)] = None,
):
    """Send an RCon command to a running server."""
    from adapters.registry import GameAdapterRegistry
    from adapters.exceptions import RemoteAdminError
    from core.dal.config_repository import ConfigRepository
    from core.dal.server_repository import ServerRepository

    server = ServerRepository(db).get_by_id(server_id)
    if server is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={"code": "NOT_FOUND", "message": f"Server {server_id} not found"},
        )

    adapter = GameAdapterRegistry.get(server["game_type"])
    if not adapter.has_capability("remote_admin"):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail={"code": "NOT_SUPPORTED", "message": f"Game type {server['game_type']} does not support RCon"},
        )

    # Get RCon password from config
    remote_admin_factory = adapter.get_remote_admin()
    config_gen = adapter.get_config_generator()
    sensitive = config_gen.get_sensitive_fields("rcon") if "rcon" in config_gen.get_sections() else []
    config_repo = ConfigRepository(db)
    rcon_section = config_repo.get_section(server_id, "rcon", sensitive)
    if not rcon_section or not rcon_section.get("password"):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail={"code": "NO_RCON_PASSWORD", "message": "RCon password not configured for this server"},
        )
    password = rcon_section["password"]

    rcon_port = server.get("rcon_port") or (server["game_port"] + 3)
    client = remote_admin_factory.create_client(
        host="127.0.0.1",
        port=rcon_port,
        password=password,
    )
    try:
        client.connect()
        result = client.send_command(body.command)
        client.disconnect()
    except RemoteAdminError as exc:
        raise HTTPException(
            status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
            detail={"code": "RCON_ERROR", "message": f"RCon command failed: {exc}"},
        )
    except Exception as exc:
        raise HTTPException(
            status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
            detail={"code": "RCON_ERROR", "message": f"RCon connection failed: {exc}"},
        )

    return _ok({"response": result})
backend/core/servers/schemas.py (new file, 35 lines)
@@ -0,0 +1,35 @@
from __future__ import annotations

from pydantic import BaseModel, Field


class CreateServerRequest(BaseModel):
    name: str
    description: str | None = None
    game_type: str = "arma3"
    exe_path: str
    game_port: int = Field(ge=1024, le=65535)
    rcon_port: int | None = Field(default=None, ge=1024, le=65535)
    auto_restart: bool = False
    max_restarts: int = Field(default=3, ge=0, le=20)


class UpdateServerRequest(BaseModel):
    name: str | None = None
    description: str | None = None
    exe_path: str | None = None
    game_port: int | None = Field(default=None, ge=1024, le=65535)
    rcon_port: int | None = Field(default=None, ge=1024, le=65535)
    auto_restart: bool | None = None
    max_restarts: int | None = None


class StopServerRequest(BaseModel):
    force: bool = False
    reason: str | None = None


class UpdateConfigSectionRequest(BaseModel):
    config_version: int | None = None  # Required for optimistic locking on PUT
    # All other fields come from the adapter's JSON Schema — passed through as-is
    model_config = {"extra": "allow"}
backend/core/servers/service.py (new file, 503 lines)
@@ -0,0 +1,503 @@
"""
ServerService — orchestrates all server lifecycle operations.
Delegates game-specific work to the adapter.
"""
from __future__ import annotations

import logging
import shutil
from pathlib import Path

from fastapi import HTTPException, status
from sqlalchemy.engine import Connection

from adapters.registry import GameAdapterRegistry
from core.dal.config_repository import ConfigRepository
from core.dal.event_repository import EventRepository
from core.dal.server_repository import ServerRepository
from core.servers.process_manager import ProcessManager
from core.utils.file_utils import ensure_server_dirs, get_server_dir

logger = logging.getLogger(__name__)


def _ok_response(data):
    return {"success": True, "data": data, "error": None}


class ServerService:

    def __init__(self, db: Connection):
        self._db = db
        self._server_repo = ServerRepository(db)
        self._config_repo = ConfigRepository(db)
        self._event_repo = EventRepository(db)

    # ── CRUD ──────────────────────────────────────────────────────────────────

    def list_servers(self, game_type: str | None = None) -> list[dict]:
        """Return server list with live metrics merged in."""
        servers = self._server_repo.get_all(game_type)
        return [self._enrich_server(s) for s in servers]

    def get_server(self, server_id: int) -> dict:
        server = self._server_repo.get_by_id(server_id)
        if server is None:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail={"code": "NOT_FOUND", "message": f"Server {server_id} not found"},
            )
        return self._enrich_server(server)

    def _enrich_server(self, server: dict) -> dict:
        """Add live CPU/RAM/player count from DB."""
        from core.dal.metrics_repository import MetricsRepository
        from core.dal.player_repository import PlayerRepository
        result = dict(server)
        metrics = MetricsRepository(self._db).get_latest(server["id"])
        if metrics:
            result["cpu_percent"] = metrics["cpu_percent"]
            result["ram_mb"] = metrics["ram_mb"]
        else:
            result["cpu_percent"] = None
            result["ram_mb"] = None
        result["player_count"] = PlayerRepository(self._db).count(server["id"])
        return result
    def create_server(
        self,
        name: str,
        game_type: str,
        exe_path: str,
        game_port: int,
        rcon_port: int | None = None,
        description: str | None = None,
        auto_restart: bool = False,
        max_restarts: int = 3,
    ) -> dict:
        # Validate adapter exists
        try:
            adapter = GameAdapterRegistry.get(game_type)
        except KeyError:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail={"code": "GAME_TYPE_NOT_FOUND", "message": f"Unknown game type: {game_type}"},
            )

        # Validate exe
        process_config = adapter.get_process_config()
        exe_name = Path(exe_path).name
        if exe_name not in process_config.get_allowed_executables():
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail={
                    "code": "EXE_NOT_ALLOWED",
                    "message": f"Executable '{exe_name}' not allowed",
                    "allowed": process_config.get_allowed_executables(),
                },
            )

        # Determine rcon_port if not provided
        if rcon_port is None:
            rcon_port = process_config.get_default_rcon_port(game_port)

        # Check port conflicts against running servers
        from core.utils.port_checker import check_ports_against_running_servers
        conflicts = check_ports_against_running_servers(game_port, rcon_port, None, self._db)
        if conflicts:
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={
                    "code": "PORT_IN_USE",
                    "message": f"Ports already in use: {conflicts}",
                },
            )

        # Create DB row
        server_id = self._server_repo.create(
            name=name,
            game_type=game_type,
            exe_path=exe_path,
            game_port=game_port,
            rcon_port=rcon_port,
            description=description,
            auto_restart=auto_restart,
            max_restarts=max_restarts,
        )

        # Create directory layout
        layout = process_config.get_server_dir_layout()
        ensure_server_dirs(server_id, layout)

        # Seed default config sections
        config_gen = adapter.get_config_generator()
        schema_version = config_gen.get_config_version()
        for section in config_gen.get_sections():
            defaults = config_gen.get_defaults(section)
            sensitive = config_gen.get_sensitive_fields(section)
            self._config_repo.upsert_section(
                server_id=server_id,
                game_type=game_type,
                section=section,
                config_data=defaults,
                schema_version=schema_version,
                sensitive_fields=sensitive,
            )

        self._event_repo.insert(server_id, "created", actor="admin")
        return self.get_server(server_id)

    def update_server(self, server_id: int, **updates) -> dict:
        self.get_server(server_id)  # raises 404 if not found
        filtered = {k: v for k, v in updates.items() if v is not None}
        if filtered:
            self._server_repo.update(server_id, **filtered)
        return self.get_server(server_id)

    def delete_server(self, server_id: int) -> None:
        server = self.get_server(server_id)
        if server["status"] not in ("stopped", "crashed", "error"):
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={
                    "code": "SERVER_NOT_STOPPED",
                    "message": "Server must be stopped before deletion",
                },
            )
        self._server_repo.delete(server_id)
        # Delete server directory
        server_dir = get_server_dir(server_id)
        if server_dir.exists():
            shutil.rmtree(str(server_dir), ignore_errors=True)
    # ── Lifecycle ─────────────────────────────────────────────────────────────

    def start(self, server_id: int) -> dict:
        """
        Full start sequence:
          1. Load server + adapter
          2. Validate exe
          3. Check ports
          4. Write config files (atomic)
          5. Build launch args
          6. Start process
          7. Start monitoring threads
          8. Return status
        """
        from adapters.exceptions import (
            ConfigWriteError, LaunchArgsError, ConfigValidationError,
        )
        from core.utils.port_checker import check_ports_against_running_servers

        server = self.get_server(server_id)
        if server["status"] in ("running", "starting"):
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={"code": "SERVER_ALREADY_RUNNING", "message": "Server is already running"},
            )

        adapter = GameAdapterRegistry.get(server["game_type"])
        process_config = adapter.get_process_config()
        config_gen = adapter.get_config_generator()

        # Validate exe
        exe_name = Path(server["exe_path"]).name
        if exe_name not in process_config.get_allowed_executables():
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail={
                    "code": "EXE_NOT_ALLOWED",
                    "message": f"Executable '{exe_name}' not in adapter allowlist",
                    "allowed": process_config.get_allowed_executables(),
                },
            )

        # Check ports
        conflicts = check_ports_against_running_servers(
            server["game_port"], server.get("rcon_port"), server_id, self._db
        )
        if conflicts:
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={"code": "PORT_IN_USE", "message": f"Ports in use: {conflicts}"},
            )

        # Load config sections (decrypt sensitive fields for config generation)
        sensitive_by_section = {
            s: config_gen.get_sensitive_fields(s)
            for s in config_gen.get_sections()
        }
        sections = self._config_repo.get_all_sections(server_id, sensitive_by_section)
        # Remove _meta from each section before passing to adapter
        raw_sections = {
            section: {k: v for k, v in data.items() if k != "_meta"}
            for section, data in sections.items()
        }
        # Inject port into sections so build_launch_args can use it
        if "_port" not in raw_sections:
            raw_sections["_port"] = server["game_port"]

        # Get mod args if adapter supports mods
        mod_args: list[str] = []
        if adapter.has_capability("mod_manager"):
            from sqlalchemy import text
            mods = self._db.execute(
                text("""
                    SELECT m.folder_path, sm.is_server_mod, sm.sort_order
                    FROM server_mods sm JOIN mods m ON m.id = sm.mod_id
                    WHERE sm.server_id = :sid ORDER BY sm.sort_order
                """),
                {"sid": server_id},
            ).fetchall()
            mod_list = [dict(r._mapping) for r in mods]
            mod_args = adapter.get_mod_manager().build_mod_args(mod_list)

        # Write config files (atomic)
        server_dir = get_server_dir(server_id)
        try:
            config_gen.write_configs(server_id, server_dir, raw_sections)
        except ConfigWriteError as e:
            self._server_repo.update_status(server_id, "error")
            self._event_repo.insert(server_id, "config_write_error", detail={"error": str(e)})
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail={"code": "CONFIG_WRITE_ERROR", "message": str(e)},
            )
        except ConfigValidationError as e:
            raise HTTPException(
                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                detail={"code": "INVALID_CONFIG", "message": str(e), "errors": e.errors},
            )

        # Build launch args
        try:
            launch_args = config_gen.build_launch_args(raw_sections, mod_args)
        except LaunchArgsError as e:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail={"code": "INVALID_CONFIG", "message": str(e)},
            )

        # Start process
        pm = ProcessManager.get()
        with pm.get_operation_lock(server_id):
            pid = pm.start(server_id, server["exe_path"], launch_args, cwd=str(server_dir))

        # Update DB
        from datetime import datetime, timezone
        self._server_repo.update_status(
            server_id, "starting", pid=pid,
            started_at=datetime.now(timezone.utc).isoformat()
        )
        self._event_repo.insert(server_id, "started", detail={"pid": pid})

        # Start monitoring threads
        try:
            from core.threads.thread_registry import ThreadRegistry
            ThreadRegistry.start_server_threads(server_id, self._db)
        except Exception as e:
            logger.warning("Could not start monitoring threads: %s", e)

        return {"status": "starting", "pid": pid}
    def stop(self, server_id: int, force: bool = False) -> dict:
        server = self.get_server(server_id)
        if server["status"] in ("stopped", "crashed"):
            raise HTTPException(
                status_code=status.HTTP_409_CONFLICT,
                detail={"code": "SERVER_NOT_RUNNING", "message": "Server is not running"},
            )

        # Mark as "stopping" so ProcessMonitorThread doesn't treat this as a crash
        self._server_repo.update_status(server_id, "stopping")

        # Stop monitoring threads first so they don't fight with shutdown
        try:
            from core.threads.thread_registry import ThreadRegistry
            ThreadRegistry.stop_server_threads(server_id)
        except Exception as exc:
            logger.warning("Failed to stop monitoring threads for server %d during stop: %s", server_id, exc)

        # Try graceful shutdown via remote admin
        if not force:
            try:
                pm = ProcessManager.get()
                # NOTE: the adapter-specific graceful-shutdown call goes here;
                # for now we only log and fall through to terminate below.
                logger.info("Sending graceful shutdown to server %d", server_id)
            except Exception as e:
                logger.warning("Graceful shutdown failed: %s, falling back to terminate", e)

        pm = ProcessManager.get()
        with pm.get_operation_lock(server_id):
            exited = pm.stop(server_id, timeout=30)
            if not exited:
                logger.warning("Server %d did not exit in 30s, force-killing", server_id)
                pm.kill(server_id)

        from datetime import datetime, timezone
        self._server_repo.update_status(
            server_id, "stopped",
            pid=None, stopped_at=datetime.now(timezone.utc).isoformat()
        )

        from core.dal.player_repository import PlayerRepository
        PlayerRepository(self._db).clear(server_id)
        self._event_repo.insert(server_id, "stopped")

        return {"status": "stopped"}

    def restart(self, server_id: int) -> dict:
        self.stop(server_id)
        return self.start(server_id)

    def kill(self, server_id: int) -> dict:
        self.get_server(server_id)  # raises 404 if not found

        # Mark as "stopping" so ProcessMonitorThread doesn't treat this as a crash
        self._server_repo.update_status(server_id, "stopping")

        # Stop monitoring threads first
        try:
            from core.threads.thread_registry import ThreadRegistry
            ThreadRegistry.stop_server_threads(server_id)
        except Exception as exc:
            logger.warning("Failed to stop monitoring threads for server %d during kill: %s", server_id, exc)

        pm = ProcessManager.get()
        with pm.get_operation_lock(server_id):
            pm.kill(server_id)

        from datetime import datetime, timezone
        self._server_repo.update_status(server_id, "stopped", pid=None,
                                        stopped_at=datetime.now(timezone.utc).isoformat())
        from core.dal.player_repository import PlayerRepository
        PlayerRepository(self._db).clear(server_id)
        self._event_repo.insert(server_id, "killed")
        return {"status": "stopped"}
    # ── Config ────────────────────────────────────────────────────────────────

    def get_config(self, server_id: int) -> dict:
        self.get_server(server_id)
        adapter = GameAdapterRegistry.get(
            self._server_repo.get_by_id(server_id)["game_type"]
        )
        config_gen = adapter.get_config_generator()
        sensitive_by_section = {
            s: config_gen.get_sensitive_fields(s) for s in config_gen.get_sections()
        }
        sections = self._config_repo.get_all_sections(server_id, sensitive_by_section)
        # Mask sensitive fields in response (replace actual value with "***")
        for section, data in sections.items():
            sf = config_gen.get_sensitive_fields(section)
            for field in sf:
                if field in data and data[field]:
                    data[field] = "***"
        return sections

    def get_config_section(self, server_id: int, section: str) -> dict:
        server = self.get_server(server_id)
        adapter = GameAdapterRegistry.get(server["game_type"])
        config_gen = adapter.get_config_generator()
        if section not in config_gen.get_sections():
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail={"code": "NOT_FOUND", "message": f"Config section '{section}' not found"},
            )
        sensitive = config_gen.get_sensitive_fields(section)
        data = self._config_repo.get_section(server_id, section, sensitive)
        if data is None:
            data = config_gen.get_defaults(section)
            data["_meta"] = {"config_version": 0, "schema_version": config_gen.get_config_version()}
        # Mask sensitive fields
        for field in sensitive:
            if field in data and data[field]:
                data[field] = "***"
        return data

    def update_config_section(
        self,
        server_id: int,
        section: str,
        data: dict,
        expected_version: int | None = None,
    ) -> dict:
        server = self.get_server(server_id)
        adapter = GameAdapterRegistry.get(server["game_type"])
        config_gen = adapter.get_config_generator()

        sections = config_gen.get_sections()
        if section not in sections:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail={"code": "NOT_FOUND", "message": f"Config section '{section}' not found"},
            )

        # Validate against adapter's Pydantic model
        model_cls = sections[section]
        # Get current values, merge with update (partial update support)
        current = self._config_repo.get_section(
            server_id, section, config_gen.get_sensitive_fields(section)
        )
        if current:
            merged = {k: v for k, v in current.items() if k != "_meta"}
        else:
            merged = config_gen.get_defaults(section)
        # Apply updates
        for k, v in data.items():
            if k not in ("_meta", "config_version"):
                merged[k] = v

        # Validate
        try:
            model_cls(**merged)
        except Exception as e:
            raise HTTPException(
                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                detail={"code": "INVALID_CONFIG", "message": str(e)},
            )

        sensitive = config_gen.get_sensitive_fields(section)
        try:
            new_version = self._config_repo.upsert_section(
                server_id=server_id,
                game_type=server["game_type"],
                section=section,
                config_data=merged,
                schema_version=config_gen.get_config_version(),
                sensitive_fields=sensitive,
                expected_config_version=expected_version,
            )
        except ValueError as e:
            error_msg = str(e)
            if "CONFIG_VERSION_CONFLICT" in error_msg:
                current_version = int(error_msg.split(":")[1])
                current_data = self.get_config_section(server_id, section)
                raise HTTPException(
                    status_code=status.HTTP_409_CONFLICT,
                    detail={
                        "code": "CONFIG_VERSION_CONFLICT",
                        "message": "Config was modified by another user. Re-read and merge.",
                        "current_config": current_data,
                        "current_version": current_version,
                    },
                )
            raise

        self._event_repo.insert(
            server_id, "config_updated", detail={"section": section, "version": new_version}
        )
        return self.get_config_section(server_id, section)
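The CONFIG_VERSION_CONFLICT path above depends on the repository performing a compare-and-swap on `config_version`. A minimal, self-contained sketch of that contract with SQLite (table and column names here are illustrative, not the project's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (server_id INT, section TEXT, data TEXT, config_version INT)")
conn.execute("INSERT INTO config VALUES (1, 'rcon', '{}', 3)")

def upsert_section(server_id: int, section: str, data: str, expected_version: int) -> int:
    # Compare-and-swap: the UPDATE only matches if nobody bumped the version first.
    cur = conn.execute(
        "UPDATE config SET data = ?, config_version = config_version + 1 "
        "WHERE server_id = ? AND section = ? AND config_version = ?",
        (data, server_id, section, expected_version),
    )
    if cur.rowcount == 0:
        current = conn.execute(
            "SELECT config_version FROM config WHERE server_id = ? AND section = ?",
            (server_id, section),
        ).fetchone()[0]
        raise ValueError(f"CONFIG_VERSION_CONFLICT:{current}")
    return expected_version + 1

print(upsert_section(1, "rcon", '{"port": 2306}', 3))   # → 4
try:
    upsert_section(1, "rcon", '{"port": 2307}', 3)       # stale version
except ValueError as e:
    print(e)                                             # → CONFIG_VERSION_CONFLICT:4
```

This is why the service parses the current version out of the `ValueError` message and returns it to the client as `current_version` in the 409 payload.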
    def get_config_preview(self, server_id: int) -> dict[str, str]:
        server = self.get_server(server_id)
        adapter = GameAdapterRegistry.get(server["game_type"])
        config_gen = adapter.get_config_generator()
        sensitive_by_section = {
            s: config_gen.get_sensitive_fields(s) for s in config_gen.get_sections()
        }
        sections = self._config_repo.get_all_sections(server_id, sensitive_by_section)
        raw_sections = {k: {kk: vv for kk, vv in v.items() if kk != "_meta"} for k, v in sections.items()}
        server_dir = get_server_dir(server_id)
        return config_gen.preview_config(server_id, server_dir, raw_sections)
backend/core/system/__init__.py (new file, empty)

backend/core/system/router.py (new file, 32 lines)
@@ -0,0 +1,32 @@
from fastapi import APIRouter, Depends
from typing import Annotated
from dependencies import get_current_user
from adapters.registry import GameAdapterRegistry

router = APIRouter(prefix="/system", tags=["system"])


@router.get("/health")
def health():
    return {"status": "ok"}


@router.get("/status")
def system_status(_user: Annotated[dict, Depends(get_current_user)]):
    from sqlalchemy import text
    from database import get_engine
    with get_engine().connect() as db:
        running = db.execute(
            text("SELECT COUNT(*) FROM servers WHERE status IN ('running','starting')")
        ).fetchone()[0]
        total = db.execute(text("SELECT COUNT(*) FROM servers")).fetchone()[0]

    return {
        "success": True,
        "data": {
            "version": "1.0.0",
            "running_servers": running,
            "total_servers": total,
            "supported_games": [a.game_type for a in GameAdapterRegistry.all()],
        },
    }
backend/core/threads/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from core.threads.thread_registry import ThreadRegistry

__all__ = ["ThreadRegistry"]
backend/core/threads/base_thread.py (new file, 123 lines)
@@ -0,0 +1,123 @@
"""
|
||||
BaseServerThread — base class for all per-server background threads.
|
||||
|
||||
Rules every subclass MUST follow:
|
||||
- Call super().__init__(server_id, name) in __init__
|
||||
- Implement _run_loop() — called repeatedly until _stop_event is set
|
||||
- Do NOT override run() directly
|
||||
- Use self._db for all database operations — it is a thread-local connection
|
||||
- Call self._close_db() in your finally block if you open additional connections
|
||||
- Exceptions raised from _run_loop() are caught, logged, and the loop continues
|
||||
unless the exception is a fatal error — set self._fatal_error = True to stop
|
||||
"""
|
||||
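The subclass contract in the docstring above can be sketched with a stripped-down stand-in for the base class. Everything here is illustrative (the heartbeat thread is hypothetical, and the real base class additionally manages the thread-local DB connection and exception backoff):

```python
import threading
import time

class MiniBaseThread(threading.Thread):
    """Toy version of BaseServerThread: stop event + repeated _run_loop()."""
    def __init__(self, server_id: int, name: str):
        super().__init__(name=f"{name}-server-{server_id}", daemon=True)
        self.server_id = server_id
        self._stop_event = threading.Event()

    def stop_and_join(self, timeout: float = 5.0) -> None:
        self._stop_event.set()
        self.join(timeout=timeout)

    def run(self) -> None:
        while not self._stop_event.is_set():
            self._run_loop()

class HeartbeatThread(MiniBaseThread):
    """Subclass implements only _run_loop(); never overrides run()."""
    def __init__(self, server_id: int):
        super().__init__(server_id, "heartbeat")   # rule: call super().__init__
        self.beats = 0

    def _run_loop(self) -> None:                   # rule: implement _run_loop
        self.beats += 1
        self._stop_event.wait(0.01)                # interruptible sleep, not time.sleep

t = HeartbeatThread(server_id=7)
t.start()
time.sleep(0.1)
t.stop_and_join()
print(t.name, t.beats > 0)   # → heartbeat-server-7 True
```

Using `self._stop_event.wait(...)` instead of `time.sleep(...)` inside `_run_loop()` is what makes `stop()` take effect promptly rather than after a full sleep interval.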
from __future__ import annotations

import logging
import threading
from abc import ABC, abstractmethod

from database import get_thread_db

logger = logging.getLogger(__name__)

_EXCEPTION_BACKOFF_BASE = 2.0
_EXCEPTION_BACKOFF_MAX = 60.0
_EXCEPTION_BACKOFF_MULTIPLIER = 2.0


class BaseServerThread(ABC, threading.Thread):
    """
    Abstract base for all per-server background threads.

    Subclasses implement _run_loop(). This base class handles:
    - Stop event signaling
    - Thread-local DB connection lifecycle
    - Exception backoff to prevent tight crash loops
    - Structured logging with server_id context
    """

    def __init__(self, server_id: int, name: str) -> None:
        super().__init__(name=f"{name}-server-{server_id}", daemon=True)
        self.server_id = server_id
        self._stop_event = threading.Event()
        self._fatal_error = False
        self._db = None
        self._exception_count = 0

    # ── Public API ──

    def stop(self) -> None:
        """Signal the thread to stop. Does not block."""
        self._stop_event.set()

    def stop_and_join(self, timeout: float = 5.0) -> None:
        """Signal stop and wait for the thread to exit."""
        self._stop_event.set()
        self.join(timeout=timeout)

    @property
    def is_stopping(self) -> bool:
        return self._stop_event.is_set()

    # ── Thread entry point ──

    def run(self) -> None:
        logger.info("[%s] Starting", self.name)
        backoff = _EXCEPTION_BACKOFF_BASE

        try:
            self._db = get_thread_db()
            self._on_start()

            while not self._stop_event.is_set() and not self._fatal_error:
                try:
                    self._run_loop()
                    backoff = _EXCEPTION_BACKOFF_BASE
                    self._exception_count = 0
                except Exception as exc:
                    self._exception_count += 1
                    logger.error(
                        "[%s] Unhandled exception in _run_loop (count=%d): %s",
                        self.name, self._exception_count, exc, exc_info=True,
                    )
                    if self._fatal_error:
                        break
                    self._stop_event.wait(timeout=backoff)
                    backoff = min(backoff * _EXCEPTION_BACKOFF_MULTIPLIER, _EXCEPTION_BACKOFF_MAX)

        except Exception as exc:
            logger.critical("[%s] Fatal error in thread setup: %s", self.name, exc, exc_info=True)
        finally:
            self._on_stop()
            self._close_db()
            logger.info("[%s] Stopped", self.name)

    # ── Hooks for subclasses ──

    def _on_start(self) -> None:
        """Called once before the loop starts. Override for setup."""

    def _on_stop(self) -> None:
        """Called once after the loop ends. Override for cleanup."""

    @abstractmethod
    def _run_loop(self) -> None:
|
||||
"""
|
||||
Implement the thread's work here.
|
||||
Called repeatedly until stop() is called or _fatal_error is set.
|
||||
Should block for a short period (sleep or wait) to avoid busy-looping.
|
||||
"""
|
||||
|
||||
# ── Internal helpers ──
|
||||
|
||||
def _close_db(self) -> None:
|
||||
if self._db is not None:
|
||||
try:
|
||||
self._db.close()
|
||||
except Exception as exc:
|
||||
logger.debug("[%s] Error closing DB connection: %s", self.name, exc)
|
||||
self._db = None
|
||||
|
||||
def _sleep(self, seconds: float) -> None:
|
||||
"""Interruptible sleep — wakes up early if stop() is called."""
|
||||
self._stop_event.wait(timeout=seconds)
|
||||
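The exception backoff in BaseServerThread.run() above follows a simple geometric schedule. As a standalone sketch (constants copied from the module; the helper function name is illustrative), the wait applied after each consecutive failure looks like this:

```python
# Sketch of BaseServerThread's exception backoff: the delay starts at
# the base, doubles on each consecutive failure, and saturates at the cap.
_BASE = 2.0
_MAX = 60.0
_MULT = 2.0

def backoff_delays(failures: int) -> list[float]:
    """Return the wait applied after each of `failures` consecutive errors."""
    delays = []
    backoff = _BASE
    for _ in range(failures):
        delays.append(backoff)
        backoff = min(backoff * _MULT, _MAX)
    return delays

# Six consecutive failures wait 2, 4, 8, 16, 32, then 60 seconds (capped)
print(backoff_delays(6))  # → [2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

A successful _run_loop() iteration resets the schedule back to the base, so transient faults never inflate the delay for later, unrelated errors.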
167 backend/core/threads/log_tail.py Normal file
@@ -0,0 +1,167 @@
"""
LogTailThread — tails a server's log file, parses lines via LogParser,
and persists parsed entries to the logs table.

Design notes:
- Opens the log file in text mode with errors="replace" to handle encoding issues
- Detects log rotation by checking if the inode changes (Unix) or file shrinks (Windows)
- On rotation: closes old handle, reopens from position 0
- Flushes inserts in batches of up to LOG_BATCH_SIZE per loop iteration
"""
from __future__ import annotations

import logging
import os
import queue

from core.dal.log_repository import LogRepository
from core.threads.base_thread import BaseServerThread

logger = logging.getLogger(__name__)

_LOG_BATCH_SIZE = 50
_POLL_INTERVAL = 1.0
_REOPEN_DELAY = 2.0


class LogTailThread(BaseServerThread):
    """
    Tails a log file for a specific server.

    Args:
        server_id: The database server ID.
        log_path: Absolute path to the log file to tail.
        log_parser: LogParser adapter instance for this game type.
        broadcast_queue: Optional queue.Queue to push parsed events to BroadcastThread.
    """

    def __init__(
        self,
        server_id: int,
        log_path: str,
        log_parser,
        broadcast_queue=None,
    ) -> None:
        super().__init__(server_id, "LogTail")
        self._log_path = log_path
        self._log_parser = log_parser
        self._broadcast_queue = broadcast_queue
        self._file_handle = None
        self._last_inode = None
        self._last_size = 0

    # ── Lifecycle ──

    def _on_start(self) -> None:
        self._open_log_file()

    def _on_stop(self) -> None:
        self._close_file()

    # ── Main loop ──

    def _run_loop(self) -> None:
        if self._file_handle is None:
            self._stop_event.wait(timeout=_POLL_INTERVAL)
            self._open_log_file()
            return

        if self._detect_rotation():
            logger.info("[%s] Log rotation detected, reopening", self.name)
            self._close_file()
            self._stop_event.wait(timeout=_REOPEN_DELAY)
            # Read the rotated-in file from position 0 so no lines are skipped
            self._open_log_file(seek_to_end=False)
            return

        lines_read = 0
        entries_to_insert = []

        while lines_read < _LOG_BATCH_SIZE:
            line = self._file_handle.readline()
            if not line:
                break
            lines_read += 1
            line = line.rstrip("\r\n")
            if not line:
                continue

            parsed = self._log_parser.parse_line(line)
            if parsed is not None:
                entries_to_insert.append(parsed)

        if entries_to_insert and self._db is not None:
            log_repo = LogRepository(self._db)
            for entry in entries_to_insert:
                log_repo.insert(server_id=self.server_id, entry=entry)
            try:
                self._db.commit()
            except Exception as exc:
                logger.error("[%s] DB commit failed: %s", self.name, exc)
                self._db.rollback()

        if self._broadcast_queue is not None:
            for entry in entries_to_insert:
                try:
                    self._broadcast_queue.put_nowait({
                        "type": "log",
                        "server_id": self.server_id,
                        "data": entry,
                    })
                except queue.Full:
                    logger.debug("[%s] Broadcast queue full, dropping log event", self.name)

        if lines_read == 0:
            self._stop_event.wait(timeout=_POLL_INTERVAL)

    # ── File management ──

    def _open_log_file(self, seek_to_end: bool = True) -> None:
        if not os.path.exists(self._log_path):
            return
        try:
            self._file_handle = open(
                self._log_path, "r", encoding="utf-8", errors="replace"
            )
            if seek_to_end:
                # Initial attach: start tailing from the end of the file
                self._file_handle.seek(0, os.SEEK_END)
            self._last_size = self._file_handle.tell()
            stat = os.stat(self._log_path)
            self._last_inode = getattr(stat, "st_ino", None)
            logger.debug("[%s] Opened log file: %s", self.name, self._log_path)
        except OSError as exc:
            logger.warning("[%s] Cannot open log file %s: %s", self.name, self._log_path, exc)
            self._file_handle = None

    def _close_file(self) -> None:
        if self._file_handle is not None:
            try:
                self._file_handle.close()
            except OSError as exc:
                logger.debug("[%s] Error closing log file: %s", self.name, exc)
            self._file_handle = None
        self._last_inode = None
        self._last_size = 0

    def _detect_rotation(self) -> bool:
        """Returns True if the log file has been rotated."""
        try:
            stat = os.stat(self._log_path)
        except OSError:
            return True

        current_inode = getattr(stat, "st_ino", None)
        if current_inode is not None and self._last_inode is not None:
            if current_inode != self._last_inode:
                return True

        # Windows fallback: file shrunk below our read position
        current_size = stat.st_size
        if self._file_handle is not None:
            current_pos = self._file_handle.tell()
            if current_size < current_pos:
                return True
        self._last_size = current_size

        return False
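The rotation check above can be exercised in isolation. This is a minimal sketch (the `rotated` helper is illustrative, not part of the codebase) that simulates a logrotate-style move-and-recreate and shows both signals firing — the inode change on Unix, and the size-below-read-position fallback elsewhere:

```python
import os
import tempfile

def rotated(path: str, last_inode, read_pos: int) -> bool:
    """True when the file at `path` was rotated away from the reader."""
    try:
        stat = os.stat(path)
    except OSError:
        return True  # file vanished entirely — treat as rotated
    inode = getattr(stat, "st_ino", None)
    if inode is not None and last_inode is not None and inode != last_inode:
        return True  # a different file now lives at this path
    return stat.st_size < read_pos  # file shrank under our read position

with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "server.log")
    with open(log, "w") as f:
        f.write("old line\n")
    ino = os.stat(log).st_ino
    print(rotated(log, ino, 0))       # → False (same file, nothing shrank)
    os.replace(log, log + ".1")       # rotate the file away
    with open(log, "w"):              # a fresh, empty file appears
        pass
    print(rotated(log, ino, 9))       # → True (new inode, or 0 < 9)
```

Because the old file still exists under the `.1` name, the new file cannot reuse its inode, so the check is reliable on Unix; on filesystems without meaningful inodes the size fallback catches the same event.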
118 backend/core/threads/metrics_collector.py Normal file
@@ -0,0 +1,118 @@
"""
MetricsCollectorThread — collects CPU and memory usage for a server process
and persists to the metrics table every COLLECTION_INTERVAL seconds.

Uses psutil to inspect the process identified by ProcessManager.get_pid().
If the process is not running, the thread sleeps and retries.
"""
from __future__ import annotations

import logging
import queue

import psutil

from core.dal.metrics_repository import MetricsRepository
from core.threads.base_thread import BaseServerThread

logger = logging.getLogger(__name__)

_COLLECTION_INTERVAL = 10.0
_RETENTION_DAYS = 1


class MetricsCollectorThread(BaseServerThread):
    """
    Collects process metrics for a running game server.

    Args:
        server_id: Database server ID.
        process_manager: ProcessManager singleton instance.
        broadcast_queue: Optional queue.Queue for real-time metric pushes.
    """

    def __init__(
        self,
        server_id: int,
        process_manager,
        broadcast_queue=None,
    ) -> None:
        super().__init__(server_id, "MetricsCollector")
        self._process_manager = process_manager
        self._broadcast_queue = broadcast_queue
        self._psutil_process = None
        self._samples_since_cleanup = 0
        self._cleanup_every = 360  # ~1 hour at 10s intervals

    # ── Main loop ──

    def _run_loop(self) -> None:
        pid = self._process_manager.get_pid(self.server_id)
        if pid is None:
            self._psutil_process = None
            self._stop_event.wait(timeout=_COLLECTION_INTERVAL)
            return

        # Reuse or create psutil.Process handle
        if self._psutil_process is None or self._psutil_process.pid != pid:
            try:
                self._psutil_process = psutil.Process(pid)
                # Prime the CPU counter — the first cpu_percent(interval=None)
                # call on a fresh handle always reports 0.0
                self._psutil_process.cpu_percent(interval=None)
            except psutil.NoSuchProcess:
                self._psutil_process = None
                self._stop_event.wait(timeout=_COLLECTION_INTERVAL)
                return

        self._stop_event.wait(timeout=_COLLECTION_INTERVAL)

        if self._stop_event.is_set():
            return

        try:
            cpu_pct = self._psutil_process.cpu_percent(interval=None)
            mem_info = self._psutil_process.memory_info()
            mem_mb = round(mem_info.rss / (1024 * 1024), 2)
        except psutil.NoSuchProcess:
            logger.info("[%s] Process %d no longer exists", self.name, pid)
            self._psutil_process = None
            return
        except psutil.AccessDenied as exc:
            logger.warning("[%s] Access denied reading process %d: %s", self.name, pid, exc)
            return

        if self._db is None:
            return

        metrics_repo = MetricsRepository(self._db)
        metrics_repo.insert(
            server_id=self.server_id,
            cpu_percent=cpu_pct,
            ram_mb=mem_mb,
        )
        try:
            self._db.commit()
        except Exception as exc:
            logger.error("[%s] DB commit failed: %s", self.name, exc)
            self._db.rollback()
            return

        if self._broadcast_queue is not None:
            try:
                self._broadcast_queue.put_nowait({
                    "type": "metrics",
                    "server_id": self.server_id,
                    "data": {"cpu_percent": cpu_pct, "memory_mb": mem_mb},
                })
            except queue.Full:
                logger.debug("[%s] Broadcast queue full, dropping metrics event", self.name)

        # Periodic cleanup
        self._samples_since_cleanup += 1
        if self._samples_since_cleanup >= self._cleanup_every:
            self._samples_since_cleanup = 0
            try:
                metrics_repo.cleanup_old(server_id=self.server_id, retention_days=_RETENTION_DAYS)
                self._db.commit()
            except Exception as exc:
                logger.warning("[%s] Cleanup failed: %s", self.name, exc)
                self._db.rollback()
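Every thread in this commit pushes events to the shared broadcast queue with put_nowait and silently drops on overflow, so a slow WebSocket consumer can never block a collector. A minimal sketch of that pattern (the `push_event` wrapper is illustrative):

```python
import queue

def push_event(q: "queue.Queue", event: dict) -> bool:
    """Enqueue without blocking; return False when the bounded queue is full."""
    try:
        q.put_nowait(event)
        return True
    except queue.Full:
        return False  # drop the event — producers must never stall

# A queue of capacity 2: the third push is dropped, not blocked on
q = queue.Queue(maxsize=2)
results = [push_event(q, {"type": "metrics", "n": i}) for i in range(3)]
print(results)  # → [True, True, False]
```

Dropping real-time telemetry under backpressure is a deliberate trade-off here: the authoritative copy already lives in the metrics table, so a lost broadcast only delays the UI by one poll interval.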
158 backend/core/threads/process_monitor.py Normal file
@@ -0,0 +1,158 @@
"""
ProcessMonitorThread — watches a running game server process.

Responsibilities:
1. Detect when the process exits unexpectedly (crash).
2. On crash: update server status to "crashed" in DB, emit a crash event.
3. If auto_restart is enabled on the server record: trigger restart.
4. Respect max_restarts — if exceeded, leave server in "crashed" state.

Poll interval: 5 seconds.
"""
from __future__ import annotations

import logging
import queue

from core.dal.event_repository import EventRepository
from core.dal.server_repository import ServerRepository
from core.threads.base_thread import BaseServerThread

logger = logging.getLogger(__name__)

_POLL_INTERVAL = 5.0


class ProcessMonitorThread(BaseServerThread):
    """
    Monitors the OS process for a running game server.

    Args:
        server_id: Database server ID.
        process_manager: ProcessManager singleton (injected).
        broadcast_queue: Optional queue.Queue for crash notifications.
    """

    def __init__(
        self,
        server_id: int,
        process_manager,
        broadcast_queue=None,
    ) -> None:
        super().__init__(server_id, "ProcessMonitor")
        self._process_manager = process_manager
        self._broadcast_queue = broadcast_queue

    # ── Main loop ──

    def _run_loop(self) -> None:
        self._stop_event.wait(timeout=_POLL_INTERVAL)

        if self._stop_event.is_set():
            return

        if not self._process_manager.is_running(self.server_id):
            self._handle_unexpected_exit()
            # After handling, stop this monitor — the server is no longer running
            self._fatal_error = True

    # ── Crash handling ──

    def _handle_unexpected_exit(self) -> None:
        if self._db is None:
            return

        server_repo = ServerRepository(self._db)
        event_repo = EventRepository(self._db)

        server = server_repo.get_by_id(self.server_id)
        if server is None:
            return

        # Only treat as crash if the server was supposed to be running
        if server["status"] not in ("running", "starting"):
            return

        logger.warning(
            "[%s] Server %d process exited unexpectedly (status was '%s')",
            self.name, self.server_id, server["status"],
        )

        # Increment crash counter
        server_repo.increment_restart_count(self.server_id)
        restart_count = server["restart_count"] + 1
        max_restarts = server.get("max_restarts", 3)

        # Record crash event
        event_repo.insert(
            server_id=self.server_id,
            event_type="crash",
            detail={"restart_count": restart_count},
        )

        should_restart = (
            server.get("auto_restart", False)
            and restart_count <= max_restarts
        )

        if should_restart:
            server_repo.update_status(self.server_id, "restarting")
            event_repo.insert(
                server_id=self.server_id,
                event_type="restart_scheduled",
                detail={"attempt": restart_count, "max": max_restarts},
            )
        else:
            server_repo.update_status(self.server_id, "crashed")
            if restart_count > max_restarts:
                event_repo.insert(
                    server_id=self.server_id,
                    event_type="restart_limit_reached",
                    detail={"restart_count": restart_count, "max_restarts": max_restarts},
                )

        try:
            self._db.commit()
        except Exception as exc:
            logger.error("[%s] DB commit failed during crash handling: %s", self.name, exc)
            self._db.rollback()

        if self._broadcast_queue is not None:
            try:
                self._broadcast_queue.put_nowait({
                    "type": "server_status",
                    "server_id": self.server_id,
                    "data": {
                        "status": "restarting" if should_restart else "crashed",
                        "restart_count": restart_count,
                    },
                })
            except queue.Full:
                logger.debug("[%s] Broadcast queue full, dropping server_status event", self.name)

        # Trigger actual restart outside DB work
        if should_restart:
            self._trigger_restart()

    def _trigger_restart(self) -> None:
        """
        Calls ServerService.start() to restart the server.
        This is safe to call from a background thread.
        """
        try:
            from database import get_thread_db
            from core.servers.service import ServerService

            db = get_thread_db()
            try:
                service = ServerService(db)
                service.start(self.server_id)
            except Exception as exc:
                logger.error("[%s] Auto-restart start() failed: %s", self.name, exc, exc_info=True)
            finally:
                try:
                    db.close()
                except Exception as exc:
                    logger.debug("[%s] Error closing restart DB connection: %s", self.name, exc)
        except Exception as exc:
            logger.error("[%s] Auto-restart failed: %s", self.name, exc, exc_info=True)
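The restart decision in _handle_unexpected_exit reduces to a small pure function over the server row. As a sketch (the `next_action` helper is illustrative; the field names match the repository code above):

```python
def next_action(server: dict) -> tuple[str, int]:
    """Decide the next status after an unexpected process exit."""
    restart_count = server["restart_count"] + 1           # this crash counts
    max_restarts = server.get("max_restarts", 3)
    if server.get("auto_restart", False) and restart_count <= max_restarts:
        return "restarting", restart_count
    return "crashed", restart_count

print(next_action({"restart_count": 0, "auto_restart": True}))   # → ('restarting', 1)
print(next_action({"restart_count": 3, "auto_restart": True}))   # → ('crashed', 4)
print(next_action({"restart_count": 0, "auto_restart": False}))  # → ('crashed', 1)
```

Note that the monitor marks itself fatal after handling the exit; a fresh ProcessMonitorThread is created by the registry when the restarted server comes back up, so the counter semantics stay per-attempt rather than per-thread.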
169 backend/core/threads/remote_admin_poller.py Normal file
@@ -0,0 +1,169 @@
"""
RemoteAdminPollerThread — polls the game server's remote admin interface
(e.g. BattlEye RCon for Arma3) to sync the player list.

Design notes:
- Uses the RemoteAdminClient protocol injected at construction time
- Reconnects automatically on disconnect with exponential backoff
- Persists current player list to players table via PlayerRepository
- Emits player_join / player_leave events via EventRepository
- Pushes player list updates to broadcast_queue if provided

Poll interval: 30 seconds.
Reconnect backoff: 5s -> 10s -> 20s -> 40s -> 60s (cap).
"""
from __future__ import annotations

import logging
import queue

from core.dal.event_repository import EventRepository
from core.dal.player_repository import PlayerRepository
from core.threads.base_thread import BaseServerThread

logger = logging.getLogger(__name__)

_POLL_INTERVAL = 30.0
_RECONNECT_BACKOFF_BASE = 5.0
_RECONNECT_BACKOFF_MAX = 60.0
_RECONNECT_BACKOFF_MULT = 2.0


class RemoteAdminPollerThread(BaseServerThread):
    """
    Polls the remote admin interface for a game server.

    Args:
        server_id: Database server ID.
        remote_admin_client: Connected RemoteAdminClient instance.
        broadcast_queue: Optional queue.Queue for player list pushes.
    """

    def __init__(
        self,
        server_id: int,
        remote_admin_client,
        broadcast_queue=None,
    ) -> None:
        super().__init__(server_id, "RemoteAdminPoller")
        self._client = remote_admin_client
        self._broadcast_queue = broadcast_queue
        self._connected = False
        self._reconnect_backoff = _RECONNECT_BACKOFF_BASE
        self._known_players: dict[str, dict] = {}  # player_uid -> player data

    # ── Lifecycle ──

    def _on_stop(self) -> None:
        if self._connected and self._client is not None:
            try:
                self._client.disconnect()
            except Exception as exc:
                logger.debug("[%s] Error disconnecting remote admin on stop: %s", self.name, exc)
        self._connected = False

    # ── Main loop ──

    def _run_loop(self) -> None:
        if not self._connected:
            self._attempt_connect()
            return

        self._stop_event.wait(timeout=_POLL_INTERVAL)

        if self._stop_event.is_set():
            return

        try:
            players = self._client.get_players()
            self._reconnect_backoff = _RECONNECT_BACKOFF_BASE
            self._sync_players(players)
        except Exception as exc:
            logger.warning("[%s] Poll failed: %s — will reconnect", self.name, exc)
            self._connected = False
            try:
                if self._client is not None:
                    self._client.disconnect()
            except Exception as exc:
                logger.debug("[%s] Error disconnecting after poll failure: %s", self.name, exc)

    # ── Connection management ──

    def _attempt_connect(self) -> None:
        try:
            if hasattr(self._client, "connect"):
                self._client.connect()
            self._connected = True
            self._reconnect_backoff = _RECONNECT_BACKOFF_BASE
            logger.info("[%s] Connected to remote admin", self.name)
        except Exception as exc:
            logger.warning(
                "[%s] Connection failed: %s — retrying in %.1fs",
                self.name, exc, self._reconnect_backoff,
            )
            self._stop_event.wait(timeout=self._reconnect_backoff)
            self._reconnect_backoff = min(
                self._reconnect_backoff * _RECONNECT_BACKOFF_MULT,
                _RECONNECT_BACKOFF_MAX,
            )

    # ── Player sync ──

    def _sync_players(self, current_players: list[dict]) -> None:
        """
        Diff current_players against self._known_players.
        Insert join events for new players, leave events for departed ones.
        Upsert all current players in the DB.

        Each player dict must have at least: slot_id, name (other fields optional).
        """
        if self._db is None:
            return

        player_repo = PlayerRepository(self._db)
        event_repo = EventRepository(self._db)

        # Build uid sets for diffing — use slot_id as key
        current_slots = {str(p.get("slot_id", i)): p for i, p in enumerate(current_players)}
        current_keys = set(current_slots.keys())
        known_keys = set(self._known_players.keys())

        joined = current_keys - known_keys
        left = known_keys - current_keys

        for slot_key, player in current_slots.items():
            player_repo.upsert(server_id=self.server_id, player=player)
            if slot_key in joined:
                event_repo.insert(
                    server_id=self.server_id,
                    event_type="player_join",
                    detail={"name": player.get("name", ""), "slot": slot_key},
                )
                logger.debug("[%s] Player joined: %s (slot %s)", self.name, player.get("name"), slot_key)

        for slot_key in left:
            departed = self._known_players[slot_key]
            event_repo.insert(
                server_id=self.server_id,
                event_type="player_leave",
                detail={"name": departed.get("name", ""), "slot": slot_key},
            )
            logger.debug("[%s] Player left: %s (slot %s)", self.name, departed.get("name"), slot_key)

        try:
            self._db.commit()
        except Exception as exc:
            logger.error("[%s] DB commit failed during player sync: %s", self.name, exc)
            self._db.rollback()

        # Update known players
        self._known_players = current_slots

        if self._broadcast_queue is not None:
            try:
                self._broadcast_queue.put_nowait({
                    "type": "players",
                    "server_id": self.server_id,
                    "data": current_players,
                })
            except queue.Full:
                logger.debug("[%s] Broadcast queue full, dropping players event", self.name)
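The join/leave detection in _sync_players is a plain set difference keyed by slot_id, with the enumeration index as a fallback key. A standalone sketch of that diff (the `diff_players` function is illustrative):

```python
def diff_players(known: dict, current: list[dict]) -> tuple[set, set, dict]:
    """Return (joined keys, departed keys, current slot map) for one poll."""
    # Key by slot_id, falling back to list position when slot_id is absent
    slots = {str(p.get("slot_id", i)): p for i, p in enumerate(current)}
    joined = set(slots) - set(known)
    left = set(known) - set(slots)
    return joined, left, slots

known = {"1": {"name": "alice"}, "2": {"name": "bob"}}
current = [{"slot_id": 2, "name": "bob"}, {"slot_id": 3, "name": "carol"}]
joined, left, slots = diff_players(known, current)
print(sorted(joined), sorted(left))  # → ['3'] ['1']
```

One caveat worth noting: if the game reuses a slot number for a different player between two 30-second polls, a slot-keyed diff will miss the churn; a per-player UID key would be stricter where the remote admin protocol exposes one.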
257 backend/core/threads/thread_registry.py Normal file
@@ -0,0 +1,257 @@
"""
ThreadRegistry — manages the lifecycle of all per-server background threads.

One instance is created at app startup and stored in app.state.thread_registry.
Also provides class-level methods for convenience (called from ServerService).

Thread set per server:
- LogTailThread (started if adapter has "log_parser" capability and log_path is known)
- MetricsCollectorThread (always started)
- ProcessMonitorThread (always started)
- RemoteAdminPollerThread (started only if adapter has "remote_admin" capability)

Key methods:
    start_server_threads(server_id, db) — start all threads for a server
    stop_server_threads(server_id) — stop all threads for a server
    reattach_server_threads(server_id, db) — re-attach threads without restarting process
    stop_all() — called at app shutdown
"""
from __future__ import annotations

import logging
import queue

from adapters.registry import GameAdapterRegistry
from core.dal.config_repository import ConfigRepository
from core.dal.server_repository import ServerRepository
from core.threads.log_tail import LogTailThread
from core.threads.metrics_collector import MetricsCollectorThread
from core.threads.process_monitor import ProcessMonitorThread
from core.threads.remote_admin_poller import RemoteAdminPollerThread

logger = logging.getLogger(__name__)

# Module-level singleton for convenience (used by ServerService)
_instance: ThreadRegistry | None = None


class ThreadRegistry:
    """
    Manages all background threads for all running servers.
    """

    def __init__(
        self,
        process_manager,
        adapter_registry: GameAdapterRegistry | None = None,
        global_broadcast_queue: queue.Queue | None = None,
    ) -> None:
        self._process_manager = process_manager
        self._adapter_registry = adapter_registry or GameAdapterRegistry
        self._broadcast_queue = global_broadcast_queue or queue.Queue(maxsize=1000)
        self._bundles: dict[int, dict] = {}  # server_id -> thread bundle

    # ── Class-level convenience API ──

    @classmethod
    def _get_instance(cls) -> "ThreadRegistry | None":
        return _instance

    @classmethod
    def set_instance(cls, registry: "ThreadRegistry") -> None:
        global _instance
        _instance = registry

    @classmethod
    def start_server_threads(cls, server_id: int, db) -> None:
        """Class-level convenience — starts threads for a server using the singleton."""
        registry = cls._get_instance()
        if registry is not None:
            registry._start_server_threads(server_id, db)

    @classmethod
    def stop_server_threads(cls, server_id: int) -> None:
        """Class-level convenience — stops threads for a server using the singleton."""
        registry = cls._get_instance()
        if registry is not None:
            registry._stop_server_threads(server_id)

    @classmethod
    def reattach_server_threads(cls, server_id: int, db) -> None:
        """Class-level convenience — re-attaches threads for a recovered server."""
        registry = cls._get_instance()
        if registry is not None:
            registry._reattach_server_threads(server_id, db)

    @classmethod
    def stop_all(cls) -> None:
        """Class-level convenience — stops all threads."""
        registry = cls._get_instance()
        if registry is not None:
            registry._stop_all()

    # ── Instance methods ──

    def _start_server_threads(self, server_id: int, db) -> None:
        if server_id in self._bundles:
            logger.warning(
                "ThreadRegistry: threads already exist for server %d — stopping first",
                server_id,
            )
            self._stop_server_threads(server_id)

        bundle = self._build_bundle(server_id, db)
        self._bundles[server_id] = bundle
        self._start_bundle(server_id, bundle)

    def _stop_server_threads(self, server_id: int) -> None:
        bundle = self._bundles.pop(server_id, None)
        if bundle is None:
            return
        self._stop_bundle(server_id, bundle)

    def _reattach_server_threads(self, server_id: int, db) -> None:
        logger.info("ThreadRegistry: reattaching threads for server %d", server_id)
        self._start_server_threads(server_id, db)

    def _stop_all(self) -> None:
        server_ids = list(self._bundles.keys())
        for server_id in server_ids:
            self._stop_server_threads(server_id)
        logger.info("ThreadRegistry: all threads stopped")

    def get_thread_count(self, server_id: int) -> int:
        """Returns the number of running threads for a server."""
        bundle = self._bundles.get(server_id)
        if bundle is None:
            return 0
        return sum(
            1
            for key in ("log_tail", "metrics", "monitor", "rcon_poller")
            if bundle.get(key) is not None and bundle[key].is_alive()
        )

    # ── Bundle construction ──

    def _build_bundle(self, server_id: int, db) -> dict:
        """Reads server + config data from DB and constructs (but does not start) the thread bundle."""
        server_repo = ServerRepository(db)
        config_repo = ConfigRepository(db)

        server = server_repo.get_by_id(server_id)
        if server is None:
            raise ValueError(f"Server {server_id} not found in database")

        game_type = server["game_type"]
        adapter = self._adapter_registry.get(game_type)

        # Log path: read from config if present, else use adapter default
        log_path = None
        if adapter.has_capability("log_parser"):
            log_parser = adapter.get_log_parser()
            # Try to resolve log path via the adapter's log file resolver
            from core.utils.file_utils import get_server_dir
            server_dir = get_server_dir(server_id)
            if server_dir.exists():
                resolver = log_parser.get_log_file_resolver(server_id)
                resolved = resolver(server_dir)
                if resolved is not None:
                    log_path = str(resolved)

        bundle: dict = {
            "log_tail": None,
            "metrics": None,
            "monitor": None,
            "rcon_poller": None,
        }

        # Always: ProcessMonitorThread
        bundle["monitor"] = ProcessMonitorThread(
            server_id=server_id,
            process_manager=self._process_manager,
            broadcast_queue=self._broadcast_queue,
        )

        # Always: MetricsCollectorThread
        bundle["metrics"] = MetricsCollectorThread(
            server_id=server_id,
            process_manager=self._process_manager,
            broadcast_queue=self._broadcast_queue,
        )

        # Conditional: LogTailThread
        if log_path and adapter.has_capability("log_parser"):
            log_parser = adapter.get_log_parser()
            bundle["log_tail"] = LogTailThread(
                server_id=server_id,
                log_path=log_path,
                log_parser=log_parser,
                broadcast_queue=self._broadcast_queue,
            )

        # Conditional: RemoteAdminPollerThread
        if adapter.has_capability("remote_admin"):
            remote_admin = adapter.get_remote_admin()
            if remote_admin is not None:
                # Get RCon password from config
                rcon_password = self._get_remote_admin_password(server_id, config_repo)
                if rcon_password:
                    try:
                        rcon_port = server.get("rcon_port") or server.get("game_port", 0) + 1
                        client = remote_admin.create_client(
                            host="127.0.0.1",
                            port=rcon_port,
                            password=rcon_password,
                        )
                        bundle["rcon_poller"] = RemoteAdminPollerThread(
                            server_id=server_id,
                            remote_admin_client=client,
                            broadcast_queue=self._broadcast_queue,
                        )
                    except Exception as exc:
                        logger.warning(
                            "ThreadRegistry: could not create RCon client for server %d: %s",
                            server_id, exc,
                        )

        return bundle

    def _start_bundle(self, server_id: int, bundle: dict) -> None:
        started = []
        for key in ("monitor", "metrics", "log_tail", "rcon_poller"):
            thread = bundle.get(key)
            if thread is not None:
                thread.start()
                started.append(key)
        logger.info("ThreadRegistry: started threads for server %d: %s", server_id, started)

    def _stop_bundle(self, server_id: int, bundle: dict) -> None:
        for key in ("rcon_poller", "log_tail", "metrics", "monitor"):
            thread = bundle.get(key)
            if thread is not None and thread.is_alive():
                thread.stop_and_join(timeout=5.0)
        logger.info("ThreadRegistry: stopped all threads for server %d", server_id)

    # ── Helpers ──

    def _get_remote_admin_password(
        self, server_id: int, config_repo: ConfigRepository
    ) -> str | None:
        """Read the RCon password from the rcon config section."""
        # Sensitive fields are stored encrypted; name them so get_section can decrypt
        try:
            server = ServerRepository(config_repo._db).get_by_id(server_id)
            if server is None:
                return None
            adapter = self._adapter_registry.get(server["game_type"])
            config_gen = adapter.get_config_generator()
            sensitive = config_gen.get_sensitive_fields("rcon") if "rcon" in config_gen.get_sections() else []
        except Exception as exc:
            logger.debug("Could not determine sensitive fields for RCon config: %s", exc)
            sensitive = []

        rcon_section = config_repo.get_section(server_id, "rcon", sensitive)
        if rcon_section is None:
            return None
        return rcon_section.get("password") or None
1  backend/core/utils/__init__.py  Normal file
@@ -0,0 +1 @@
"""Core utility modules."""
32  backend/core/utils/crypto.py  Normal file
@@ -0,0 +1,32 @@
"""Field-level encryption using Fernet (AES-128-CBC with HMAC-SHA256)."""
from __future__ import annotations

from cryptography.fernet import Fernet

_fernet: Fernet | None = None


def get_fernet() -> Fernet:
    global _fernet
    if _fernet is None:
        from config import settings
        _fernet = Fernet(settings.encryption_key.encode())
    return _fernet


def encrypt(plaintext: str) -> str:
    """Encrypt a plaintext string. Returns 'encrypted:<base64-token>'."""
    token = get_fernet().encrypt(plaintext.encode()).decode()
    return f"encrypted:{token}"


def decrypt(ciphertext: str) -> str:
    """Decrypt an 'encrypted:<token>' string. Returns the plaintext."""
    if not ciphertext.startswith("encrypted:"):
        return ciphertext  # Not encrypted, return as-is
    token = ciphertext[len("encrypted:"):]
    return get_fernet().decrypt(token.encode()).decode()


def is_encrypted(value: str) -> bool:
    return isinstance(value, str) and value.startswith("encrypted:")
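The `encrypted:<token>` framing above is what lets `decrypt()` pass legacy plaintext values through unchanged. A sketch of just that framing, with a base64 stand-in for the Fernet calls so the example needs only the standard library:

```python
import base64

PREFIX = "encrypted:"

def fake_encrypt(plaintext: str) -> str:
    # Stand-in for Fernet.encrypt; base64 is used here only to keep the
    # sketch stdlib-only and is NOT encryption.
    token = base64.urlsafe_b64encode(plaintext.encode()).decode()
    return PREFIX + token

def fake_decrypt(value: str) -> str:
    if not value.startswith(PREFIX):
        return value  # legacy plaintext passes through unchanged
    return base64.urlsafe_b64decode(value[len(PREFIX):].encode()).decode()

token = fake_encrypt("rcon-secret")
print(token.startswith(PREFIX))        # True
print(fake_decrypt(token))             # rcon-secret
print(fake_decrypt("plain-password"))  # plain-password
```

The prefix makes the migration path cheap: values written before encryption was enabled keep working, and `is_encrypted()` can find them for re-encryption later.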
65  backend/core/utils/file_utils.py  Normal file
@@ -0,0 +1,65 @@
"""Game-agnostic file operations."""
from __future__ import annotations

import re
from pathlib import Path


def get_server_dir(server_id: int) -> Path:
    """Return the absolute directory path for a server's data."""
    from config import settings
    base = Path(settings.servers_dir).resolve()
    return base / str(server_id)


def ensure_server_dirs(server_id: int, layout: list[str] | None = None) -> None:
    """
    Create servers/{id}/ and any subdirectories from the adapter layout.
    layout example: ["server", "battleye", "mpmissions"]
    """
    server_dir = get_server_dir(server_id)
    server_dir.mkdir(parents=True, exist_ok=True)
    if layout:
        for subdir in layout:
            (server_dir / subdir).mkdir(parents=True, exist_ok=True)


def safe_delete_file(path: Path) -> bool:
    """Delete a file if it exists. Returns True if deleted."""
    try:
        path.unlink(missing_ok=True)
        return True
    except OSError:
        return False


def sanitize_filename(filename: str) -> str:
    """
    Sanitize a filename for safe disk storage.

    Rules:
    - Strip path separators (/ \\ and ..)
    - Allow only alphanumeric characters, dots, hyphens, underscores, @ signs
    - Collapse consecutive dots (prevents ../ tricks)
    - Truncate to 255 characters
    - Raise ValueError if the result is empty
    """
    # Take only the basename — strip any directory components
    filename = filename.replace("\\", "/").split("/")[-1]

    # Remove null bytes and control characters
    filename = re.sub(r"[\x00-\x1f\x7f]", "", filename)

    # Allow only safe characters: alphanumerics, dot, hyphen, underscore, @
    filename = re.sub(r"[^\w.\-@]", "_", filename)

    # Collapse consecutive dots to prevent tricks like ".../.."
    filename = re.sub(r"\.{2,}", ".", filename)

    # Truncate
    filename = filename[:255]

    if not filename or filename in (".", ".."):
        raise ValueError("Filename is not safe for storage")

    return filename
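To see how the sanitization rules combine, here is a condensed copy of `sanitize_filename` run against a few hostile inputs (the mod and config names are invented):

```python
import re

def sanitize_filename(filename: str) -> str:
    # Condensed copy of core.utils.file_utils.sanitize_filename, for illustration
    filename = filename.replace("\\", "/").split("/")[-1]   # basename only
    filename = re.sub(r"[\x00-\x1f\x7f]", "", filename)     # drop control chars
    filename = re.sub(r"[^\w.\-@]", "_", filename)          # whitelist characters
    filename = re.sub(r"\.{2,}", ".", filename)             # collapse dot runs
    filename = filename[:255]
    if not filename or filename in (".", ".."):
        raise ValueError("Filename is not safe for storage")
    return filename

print(sanitize_filename("..\\..\\@CUP Terrains.pbo"))  # @CUP_Terrains.pbo
print(sanitize_filename("logs/../server.cfg"))         # server.cfg
```

Note the ordering: taking the basename first means traversal components never survive to the whitelist step, and collapsing dots afterwards catches anything the substitution step produced.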
87  backend/core/utils/port_checker.py  Normal file
@@ -0,0 +1,87 @@
"""Game-agnostic port availability checking."""
from __future__ import annotations

import logging
import socket

logger = logging.getLogger(__name__)


def is_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if the port is already bound."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True


def check_server_ports_available(
    game_port: int,
    rcon_port: int | None = None,
    host: str = "127.0.0.1",
    port_conventions: dict[str, int] | None = None,
) -> list[int]:
    """
    Check all ports for a server instance.
    If port_conventions is provided (from the adapter), checks all derived ports.
    Returns the list of ports that are already in use (empty = all available).
    """
    ports_to_check: set[int] = set()

    if port_conventions:
        ports_to_check.update(port_conventions.values())
    else:
        ports_to_check.add(game_port)

    if rcon_port is not None:
        ports_to_check.add(rcon_port)

    return [p for p in sorted(ports_to_check) if is_port_in_use(p, host)]


def check_ports_against_running_servers(
    new_server_game_port: int,
    new_server_rcon_port: int | None,
    exclude_server_id: int | None,
    db,
) -> list[int]:
    """
    Cross-game port conflict detection.
    Checks the new server's full port set against all running servers' full port sets.
    Returns the list of conflicting ports.
    """
    from adapters.registry import GameAdapterRegistry
    from sqlalchemy import text

    rows = db.execute(
        text("SELECT id, game_type, game_port, rcon_port FROM servers WHERE status IN ('running','starting')")
    ).fetchall()

    occupied_ports: set[int] = set()
    for row in rows:
        if exclude_server_id and row[0] == exclude_server_id:
            continue
        try:
            adapter = GameAdapterRegistry.get(row[1])
            conventions = adapter.get_process_config().get_port_conventions(row[2])
            occupied_ports.update(conventions.values())
        except KeyError:
            logger.debug("Unknown game type '%s', falling back to game_port only", row[1])
            occupied_ports.add(row[2])
            if row[3] is not None:
                occupied_ports.add(row[3])

    # Check the new server's ports against the occupied set
    try:
        adapter = GameAdapterRegistry.get("arma3")  # temporary — will be passed in
    except KeyError:
        logger.debug("No 'arma3' adapter for port conventions, using defaults")

    new_ports: set[int] = {new_server_game_port}
    if new_server_rcon_port:
        new_ports.add(new_server_rcon_port)

    return sorted(new_ports & occupied_ports)
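The availability probe is just a throwaway bind attempt. The sketch below repeats `is_port_in_use` so it runs standalone, grabs an ephemeral port from the OS, and shows the probe reporting the collision while another socket owns the port:

```python
import socket

def is_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # Same probe as core.utils.port_checker: try to bind; OSError means taken
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

# Hold a port so the probe has something to collide with
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free one
port = holder.getsockname()[1]
print(is_port_in_use(port))      # True while `holder` owns it
holder.close()
```

One caveat worth keeping in mind: a bind probe only sees TCP sockets on the probed host address, so a game server bound only on UDP (common for game traffic) will not show up. That is why the cross-game check above also consults the database of running servers.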
4  backend/core/websocket/__init__.py  Normal file
@@ -0,0 +1,4 @@
from core.websocket.manager import WebSocketManager
from core.websocket.broadcast_thread import BroadcastThread

__all__ = ["WebSocketManager", "BroadcastThread"]
116  backend/core/websocket/broadcast_thread.py  Normal file
@@ -0,0 +1,116 @@
"""
BroadcastThread — the single bridge between OS threads and the asyncio WebSocket world.

Reads events from a queue.Queue (written by background server threads) and
forwards them to the WebSocketManager running in the asyncio event loop.

Design:
- Runs as a daemon thread — no cleanup needed on shutdown.
- queue.Queue is thread-safe — multiple producer threads, single consumer.
- asyncio.run_coroutine_threadsafe() schedules the WebSocketManager.broadcast()
  coroutine on the event loop from this non-asyncio thread.
- If the event loop is closed or the broadcast fails, the event is dropped silently.

Queue item format (dict):
    {
        "type": str,          # "log", "metrics", "players", "server_status", etc.
        "server_id": int,     # Which server this event belongs to
        "data": dict | list,  # Payload — varies by type
    }
"""
from __future__ import annotations

import asyncio
import logging
import queue
import threading

logger = logging.getLogger(__name__)

_QUEUE_GET_TIMEOUT = 1.0
_DROP_LOG_THRESHOLD = 100


class BroadcastThread(threading.Thread):
    """
    Bridge from thread-world to asyncio-world.

    Args:
        event_queue: The shared queue.Queue that all background threads write to.
        ws_manager: The WebSocketManager instance (asyncio-side).
        loop: The asyncio event loop running in the main thread.
    """

    def __init__(
        self,
        event_queue: queue.Queue,
        ws_manager,  # WebSocketManager — type annotation omitted to avoid a circular import
        loop: asyncio.AbstractEventLoop,
    ) -> None:
        super().__init__(name="BroadcastThread", daemon=True)
        self._queue = event_queue
        self._ws_manager = ws_manager
        self._loop = loop
        self._stop_event = threading.Event()
        self._dropped = 0

    def stop(self) -> None:
        self._stop_event.set()

    def run(self) -> None:
        logger.info("BroadcastThread: started")
        while not self._stop_event.is_set():
            try:
                item = self._queue.get(timeout=_QUEUE_GET_TIMEOUT)
            except queue.Empty:
                continue

            self._forward(item)

        # Drain remaining items on shutdown
        while not self._queue.empty():
            try:
                item = self._queue.get_nowait()
                self._forward(item)
            except queue.Empty:
                break

        logger.info("BroadcastThread: stopped")

    def _forward(self, item: dict) -> None:
        """Schedule a broadcast on the asyncio event loop."""
        if self._loop.is_closed():
            self._dropped += 1
            if self._dropped % _DROP_LOG_THRESHOLD == 0:
                logger.warning(
                    "BroadcastThread: event loop closed, dropped %d messages",
                    self._dropped,
                )
            return

        server_id = item.get("server_id")
        event_type = item.get("type", "unknown")
        data = item.get("data", {})

        message = {
            "type": event_type,
            "server_id": server_id,
            "data": data,
        }

        try:
            future = asyncio.run_coroutine_threadsafe(
                self._ws_manager.broadcast(server_id, message),
                self._loop,
            )
            # Fire and forget — suppress unhandled exception warnings
            future.add_done_callback(self._on_broadcast_done)
        except RuntimeError as exc:
            logger.debug("BroadcastThread: could not schedule broadcast: %s", exc)

    def _on_broadcast_done(self, future) -> None:
        """Called when the broadcast coroutine completes. Log exceptions only."""
        try:
            future.result()
        except Exception as exc:
            logger.debug("BroadcastThread: broadcast error: %s", exc)
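The thread-to-asyncio handoff can be demonstrated in miniature: a worker thread pulls one event dict (in the queue item format above) and schedules a coroutine on the main loop with `run_coroutine_threadsafe`, which is the mechanism `_forward` relies on. The `bridge` and `deliver` names are invented stand-ins for `BroadcastThread.run()` and `WebSocketManager.broadcast()`:

```python
import asyncio
import queue
import threading

def bridge(event_queue: queue.Queue, loop: asyncio.AbstractEventLoop, results: list) -> None:
    # Runs in a plain OS thread, like BroadcastThread.run()
    item = event_queue.get(timeout=2.0)

    async def deliver() -> None:
        # Stands in for WebSocketManager.broadcast(); executes on the event loop
        results.append((item["type"], item["server_id"]))

    future = asyncio.run_coroutine_threadsafe(deliver(), loop)
    future.result(timeout=2.0)  # block this thread until the loop ran deliver()

async def main() -> list:
    loop = asyncio.get_running_loop()
    q: queue.Queue = queue.Queue()
    results: list = []
    t = threading.Thread(target=bridge, args=(q, loop, results), daemon=True)
    t.start()
    q.put({"type": "metrics", "server_id": 1, "data": {"cpu": 12.5}})
    while not results:            # yield to the loop so deliver() can run
        await asyncio.sleep(0.01)
    return results

print(asyncio.run(main()))        # [('metrics', 1)]
```

The key property is that the producer thread never touches WebSocket objects directly; only the event loop thread does, which is why `WebSocketManager` can get away with no locking.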
96  backend/core/websocket/manager.py  Normal file
@@ -0,0 +1,96 @@
"""
WebSocketManager — asyncio-side manager for WebSocket connections.

All methods are coroutines and must be called from the asyncio event loop.
No locking is needed — the event loop is single-threaded.

Subscription model:
- Each connection subscribes to zero or more server_ids.
- Subscribing to server_id=None means "all servers".
- broadcast(server_id, message) sends to all clients subscribed to that server_id
  plus all clients subscribed to None (global subscribers).
"""
from __future__ import annotations

import json
import logging
from typing import Optional

from fastapi import WebSocket

logger = logging.getLogger(__name__)


class WebSocketManager:
    """Manages active WebSocket connections and delivers broadcast messages."""

    def __init__(self) -> None:
        # Maps WebSocket -> set of subscribed server_ids (None = all)
        self._connections: dict[WebSocket, set[Optional[int]]] = {}

    # ── Connection lifecycle ──

    async def connect(self, ws: WebSocket, server_ids: Optional[list[int]] = None) -> None:
        """
        Accept a WebSocket connection and register it.

        Args:
            ws: The FastAPI WebSocket instance.
            server_ids: List of server IDs to subscribe to, or None for all.
        """
        await ws.accept()
        subscriptions: set[Optional[int]] = set(server_ids) if server_ids else {None}
        self._connections[ws] = subscriptions
        logger.info(
            "WebSocketManager: client connected, subscriptions=%s, total=%d",
            subscriptions,
            len(self._connections),
        )

    async def disconnect(self, ws: WebSocket) -> None:
        """Remove a disconnected WebSocket."""
        self._connections.pop(ws, None)
        logger.info(
            "WebSocketManager: client disconnected, total=%d",
            len(self._connections),
        )

    # ── Broadcast ──

    async def broadcast(self, server_id: Optional[int], message: dict) -> None:
        """
        Send a message to all clients subscribed to the given server_id.
        Also sends to clients subscribed to None (global subscribers).

        Disconnected clients are removed automatically.
        """
        if not self._connections:
            return

        payload = json.dumps(message)
        disconnected = []

        for ws, subscriptions in self._connections.items():
            if None in subscriptions or server_id in subscriptions:
                try:
                    await ws.send_text(payload)
                except Exception as exc:
                    logger.debug("WebSocketManager: send failed, marking disconnected: %s", exc)
                    disconnected.append(ws)

        for ws in disconnected:
            await self.disconnect(ws)

    async def send_to_connection(self, ws: WebSocket, message: dict) -> None:
        """Send a message to a single specific connection."""
        try:
            await ws.send_text(json.dumps(message))
        except Exception as exc:
            logger.debug("WebSocketManager: direct send failed, disconnecting: %s", exc)
            await self.disconnect(ws)

    # ── Stats ──

    @property
    def connection_count(self) -> int:
        return len(self._connections)
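The routing rule in `broadcast()` (deliver to subscribers of this server_id, plus `None` global subscribers) is easy to isolate. A sketch using label strings instead of WebSocket objects; the client labels are invented:

```python
from typing import Optional

def recipients(connections: dict[str, set[Optional[int]]], server_id: int) -> list[str]:
    # Mirrors WebSocketManager.broadcast's delivery condition:
    # send when the client subscribed to this server_id, or to None (= all servers)
    return [name for name, subs in connections.items()
            if None in subs or server_id in subs]

conns = {
    "dashboard": {None},   # global subscriber: sees every server
    "ops-1":     {1},      # watches server 1 only
    "ops-2":     {2, 3},
}
print(recipients(conns, 1))   # ['dashboard', 'ops-1']
print(recipients(conns, 3))   # ['dashboard', 'ops-2']
```

Encoding "all servers" as a `None` member of the subscription set keeps the delivery check to a single `or`, rather than a separate global-subscriber list.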
90  backend/core/websocket/router.py  Normal file
@@ -0,0 +1,90 @@
"""
WebSocket endpoint.

URL: /ws
     /ws?server_id=1
     /ws?server_id=1&server_id=2

Authentication: the JWT is passed as a query parameter `token` because the
browser WebSocket API does not support custom headers.
If the token is missing or invalid, the connection is closed with code 4001.

After authentication, the client receives:
- A "connected" welcome message with the list of subscribed server IDs
- All events for subscribed servers pushed by BroadcastThread
"""
from __future__ import annotations

import logging
from typing import Optional

from fastapi import APIRouter, Query, WebSocket, WebSocketDisconnect

from core.auth.utils import decode_access_token

logger = logging.getLogger(__name__)

router = APIRouter(tags=["websocket"])


@router.websocket("/ws")
async def websocket_endpoint(
    ws: WebSocket,
    token: Optional[str] = Query(default=None),
    server_id: Optional[list[int]] = Query(default=None),
) -> None:
    """
    WebSocket endpoint for real-time server events.

    Query parameters:
        token: JWT access token (required)
        server_id: One or more server IDs to subscribe to (optional, default=all)
    """
    # Authenticate before accepting
    if not token:
        await ws.close(code=4001, reason="Missing token")
        return

    try:
        user = decode_access_token(token)
    except Exception as exc:
        logger.warning("WebSocket: token decode failed: %s", exc)
        user = None

    if user is None:
        await ws.close(code=4001, reason="Invalid or expired token")
        return

    # Get the WebSocketManager from app state
    ws_manager = ws.app.state.ws_manager

    await ws_manager.connect(ws, server_ids=server_id)
    logger.info(
        "WebSocket: user '%s' connected, subscribed to servers=%s",
        user.get("sub"),
        server_id,
    )

    try:
        # Send the welcome message
        await ws_manager.send_to_connection(ws, {
            "type": "connected",
            "data": {
                "user": user.get("sub"),
                "subscriptions": server_id or "all",
            },
        })

        # Keep connection alive — wait for client to disconnect.
        # Incoming client messages are read and ignored (server-push only).
        while True:
            await ws.receive_text()

    except WebSocketDisconnect:
        logger.info(
            "WebSocket: user '%s' disconnected",
            user.get("sub"),
        )
    except Exception as exc:
        logger.error("WebSocket: unexpected error: %s", exc)
    finally:
        await ws_manager.disconnect(ws)
114  backend/database.py  Normal file
@@ -0,0 +1,114 @@
"""SQLAlchemy engine setup, migration runner, and session helpers."""
from __future__ import annotations

import logging
import threading
from pathlib import Path

from sqlalchemy import create_engine, event, text
from sqlalchemy.engine import Connection, Engine

logger = logging.getLogger(__name__)

_engine: Engine | None = None
_thread_local = threading.local()


def get_engine() -> Engine:
    global _engine
    if _engine is not None:
        return _engine

    from config import settings
    db_path = Path(settings.db_path).resolve()
    db_path.parent.mkdir(parents=True, exist_ok=True)

    _engine = create_engine(
        f"sqlite:///{db_path}",
        connect_args={"check_same_thread": False},
        echo=False,
    )

    # Apply pragmas on every new connection
    @event.listens_for(_engine, "connect")
    def set_sqlite_pragma(dbapi_conn, connection_record):
        cursor = dbapi_conn.cursor()
        cursor.execute("PRAGMA journal_mode=WAL")
        cursor.execute("PRAGMA foreign_keys=ON")
        cursor.execute("PRAGMA busy_timeout=5000")
        cursor.close()

    return _engine


def get_db():
    """FastAPI dependency. Yields a SQLAlchemy Connection, closed after the request."""
    engine = get_engine()
    with engine.connect() as conn:
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise


def get_thread_db() -> Connection:
    """
    Return a thread-local DB connection for background threads.
    Each thread gets its own connection (SQLite requires this).
    Call conn.close() in thread teardown.
    """
    if not hasattr(_thread_local, "conn") or _thread_local.conn is None:
        _thread_local.conn = get_engine().connect()
    return _thread_local.conn


def run_migrations(engine: Engine) -> None:
    """Apply all pending SQL migration files in order."""
    migrations_dir = Path(__file__).parent / "core" / "migrations"
    migration_files = sorted(migrations_dir.glob("*.sql"))

    with engine.connect() as conn:
        # Ensure the tracking table exists
        conn.execute(text("""
            CREATE TABLE IF NOT EXISTS schema_migrations (
                version INTEGER PRIMARY KEY,
                applied_at TEXT NOT NULL DEFAULT (datetime('now'))
            )
        """))
        conn.commit()

        applied = {
            row[0] for row in conn.execute(
                text("SELECT version FROM schema_migrations")
            )
        }

        for mfile in migration_files:
            # Extract the version number from the filename: 001_initial.sql -> 1
            version_str = mfile.name.split("_")[0]
            try:
                version = int(version_str)
            except ValueError:
                logger.warning("Skipping migration with non-numeric prefix: %s", mfile.name)
                continue

            if version in applied:
                continue

            logger.info("Applying migration: %s", mfile.name)
            sql = mfile.read_text(encoding="utf-8")

            # Execute each statement separately: the sqlite3 driver runs one
            # statement per execute(). This naive split means migrations must
            # avoid semicolons inside string literals or trigger bodies.
            for statement in sql.split(";"):
                statement = statement.strip()
                if statement:
                    conn.execute(text(statement))

            conn.execute(
                text("INSERT INTO schema_migrations (version) VALUES (:v)"),
                {"v": version},
            )
            conn.commit()
            logger.info("Migration %d applied.", version)
86  backend/dependencies.py  Normal file
@@ -0,0 +1,86 @@
"""Reusable FastAPI dependencies."""
from __future__ import annotations

import logging
from typing import Annotated

from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from jose import JWTError
from sqlalchemy.engine import Connection

from core.auth.utils import decode_access_token
from database import get_db

logger = logging.getLogger(__name__)
_security = HTTPBearer()


def get_current_user(
    credentials: Annotated[HTTPAuthorizationCredentials, Depends(_security)],
    db: Annotated[Connection, Depends(get_db)],
) -> dict:
    """Decode the JWT and return the user dict. Raises 401 on any failure."""
    token = credentials.credentials
    try:
        payload = decode_access_token(token)
    except JWTError:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail={"code": "UNAUTHORIZED", "message": "Invalid or expired token"},
        )
    # Verify the user still exists in the DB
    from sqlalchemy import text
    row = db.execute(
        text("SELECT id, username, role FROM users WHERE id = :id"),
        {"id": int(payload["sub"])},
    ).fetchone()
    if row is None:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail={"code": "UNAUTHORIZED", "message": "User not found"},
        )
    return dict(row._mapping)


def require_admin(
    user: Annotated[dict, Depends(get_current_user)],
) -> dict:
    """Raise 403 if the user is not an admin."""
    if user["role"] != "admin":
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail={"code": "FORBIDDEN", "message": "Admin role required"},
        )
    return user


def get_server_or_404(server_id: int, db: Connection) -> dict:
    """Load a server by ID or raise 404."""
    from sqlalchemy import text
    row = db.execute(
        text("SELECT * FROM servers WHERE id = :id"), {"id": server_id}
    ).fetchone()
    if row is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={"code": "NOT_FOUND", "message": f"Server {server_id} not found"},
        )
    return dict(row._mapping)


def get_adapter_for_server(server_id: int, db: Connection):
    """Load the server and resolve its adapter. Raises 404 if the server is not found."""
    server = get_server_or_404(server_id, db)
    from adapters.registry import GameAdapterRegistry
    try:
        return GameAdapterRegistry.get(server["game_type"])
    except KeyError:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={
                "code": "GAME_TYPE_NOT_FOUND",
                "message": f"No adapter for game type '{server['game_type']}'",
            },
        )
186  backend/main.py  Normal file
@@ -0,0 +1,186 @@
"""
FastAPI application factory.
Entry point: uvicorn main:app --reload
"""
from __future__ import annotations

import asyncio
import logging
import queue
from contextlib import asynccontextmanager

from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

from config import settings

logging.basicConfig(
    level=getattr(logging, settings.log_level.upper(), logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)

limiter = Limiter(key_func=get_remote_address)


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Startup + shutdown logic."""
    # ── Startup ──
    logger.info("Starting Languard...")

    # 1. Init the DB and run migrations
    from database import get_engine, run_migrations
    engine = get_engine()
    run_migrations(engine)

    # 2. Register adapters
    from adapters import initialize_adapters
    initialize_adapters()

    # 3. Create the WebSocket manager (asyncio-only)
    from core.websocket.manager import WebSocketManager
    ws_manager = WebSocketManager()
    app.state.ws_manager = ws_manager

    # 4. Create the global broadcast queue and BroadcastThread
    broadcast_queue = queue.Queue(maxsize=1000)
    app.state.broadcast_queue = broadcast_queue

    from core.websocket.broadcast_thread import BroadcastThread
    loop = asyncio.get_running_loop()
    broadcast_thread = BroadcastThread(
        event_queue=broadcast_queue,
        ws_manager=ws_manager,
        loop=loop,
    )
    broadcast_thread.start()
    app.state.broadcast_thread = broadcast_thread

    # 5. Create the ThreadRegistry
    from core.threads.thread_registry import ThreadRegistry
    from core.servers.process_manager import ProcessManager
    from adapters.registry import GameAdapterRegistry

    process_manager = ProcessManager.get()
    thread_registry = ThreadRegistry(
        process_manager=process_manager,
        adapter_registry=GameAdapterRegistry,
        global_broadcast_queue=broadcast_queue,
    )
    ThreadRegistry.set_instance(thread_registry)
    app.state.thread_registry = thread_registry

    # 6. Recover processes that survived a restart
    with engine.connect() as conn:
        process_manager.recover_on_startup(conn)

    # 7. Reattach threads for running servers
    from core.dal.server_repository import ServerRepository
    with engine.connect() as db:
        server_repo = ServerRepository(db)
        running_servers = server_repo.get_running()
        for server in running_servers:
            try:
                thread_registry.reattach_server_threads(server["id"], db)
                logger.info("Reattached threads for server %d", server["id"])
            except Exception as exc:
                logger.error("Failed to reattach threads for server %d: %s", server["id"], exc)

    # 8. Seed a default admin if no users exist
    from core.auth.service import AuthService
    with engine.connect() as db:
        svc = AuthService(db)
        generated_password = svc.seed_admin_if_empty()
        db.commit()
    if generated_password:
        logger.warning("=" * 60)
        logger.warning(" FIRST RUN — default admin created")
        logger.warning(" Username: admin")
        logger.warning(" Password: %s", generated_password)
        logger.warning(" Change this password immediately!")
        logger.warning("=" * 60)

    # 9. Register and start the APScheduler cleanup jobs
    from core.jobs.scheduler import start_scheduler
    from core.jobs.cleanup_jobs import register_cleanup_jobs
    register_cleanup_jobs()
    start_scheduler()

    yield

    # ── Shutdown ──
    logger.info("Shutting down Languard...")
    try:
        ThreadRegistry.stop_all()
    except Exception as e:
        logger.error("Thread shutdown error: %s", e)
    broadcast_thread.stop()
    broadcast_thread.join(timeout=5.0)

    from core.jobs.scheduler import stop_scheduler
    stop_scheduler()


def create_app() -> FastAPI:
    app = FastAPI(
        title="Languard Server Manager",
        version="1.0.0",
        lifespan=lifespan,
        docs_url="/docs",
        redoc_url="/redoc",
    )

    # ── Middleware ──
    app.state.limiter = limiter
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

    app.add_middleware(
        CORSMiddleware,
        allow_origins=settings.cors_origins,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    # ── Global exception handler ──
    @app.exception_handler(Exception)
    async def generic_exception_handler(request: Request, exc: Exception):
        logger.error("Unhandled exception: %s", exc, exc_info=True)
        return JSONResponse(
            status_code=500,
            content={
                "success": False,
                "data": None,
                "error": {"code": "INTERNAL_ERROR", "message": "An unexpected error occurred"},
            },
        )

    # ── Routers ──
    from core.auth.router import router as auth_router
    from core.games.router import router as games_router
    from core.system.router import router as system_router
    from core.servers.router import router as servers_router
    from core.servers.players_router import router as players_router
    from core.servers.bans_router import router as bans_router
    from core.servers.missions_router import router as missions_router
    from core.servers.mods_router import router as mods_router
    from core.websocket.router import router as ws_router

    app.include_router(auth_router, prefix="/api")
    app.include_router(games_router, prefix="/api")
    app.include_router(system_router, prefix="/api")
    app.include_router(servers_router, prefix="/api")
    app.include_router(players_router, prefix="/api")
    app.include_router(bans_router, prefix="/api")
    app.include_router(missions_router, prefix="/api")
    app.include_router(mods_router, prefix="/api")
    app.include_router(ws_router)

    return app


app = create_app()
50  backend/requirements.txt  Normal file
@@ -0,0 +1,50 @@
annotated-doc==0.0.4
annotated-types==0.7.0
anyio==4.13.0
APScheduler==3.11.2
bcrypt==5.0.0
certifi==2026.2.25
cffi==2.0.0
click==8.3.2
colorama==0.4.6
cryptography==46.0.7
Deprecated==1.3.1
ecdsa==0.19.2
fastapi==0.135.3
greenlet==3.4.0
h11==0.16.0
httpcore==1.0.9
httptools==0.7.1
httpx==0.28.1
idna==3.11
iniconfig==2.3.0
limits==5.8.0
packaging==26.1
passlib==1.7.4
pluggy==1.6.0
psutil==7.2.2
pyasn1==0.6.3
pycparser==3.0
pydantic==2.13.1
pydantic-settings==2.13.1
pydantic_core==2.46.1
Pygments==2.20.0
pytest==9.0.3
pytest-asyncio==1.3.0
python-dotenv==1.2.2
python-jose==3.5.0
python-multipart==0.0.26
PyYAML==6.0.3
rsa==4.9.1
six==1.17.0
slowapi==0.1.9
SQLAlchemy==2.0.49
starlette==1.0.0
typing-inspection==0.4.2
typing_extensions==4.15.0
tzdata==2026.1
tzlocal==5.3.1
uvicorn==0.44.0
watchfiles==1.1.1
websockets==16.0
wrapt==2.1.2
24  frontend/.gitignore  vendored  Normal file
@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
73  frontend/README.md  Normal file
@@ -0,0 +1,73 @@
# React + TypeScript + Vite

This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.

Currently, two official plugins are available:

- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Oxc](https://oxc.rs)
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/)

## React Compiler

The React Compiler is not enabled in this template because of its impact on dev & build performance. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).

## Expanding the ESLint configuration

If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:

```js
export default defineConfig([
  globalIgnores(['dist']),
  {
    files: ['**/*.{ts,tsx}'],
    extends: [
      // Other configs...

      // Remove tseslint.configs.recommended and replace with this
      tseslint.configs.recommendedTypeChecked,
      // Alternatively, use this for stricter rules
      tseslint.configs.strictTypeChecked,
      // Optionally, add this for stylistic rules
      tseslint.configs.stylisticTypeChecked,

      // Other configs...
    ],
    languageOptions: {
      parserOptions: {
        project: ['./tsconfig.node.json', './tsconfig.app.json'],
        tsconfigRootDir: import.meta.dirname,
      },
      // other options...
    },
  },
])
```

You can also install [eslint-plugin-react-x](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-x) and [eslint-plugin-react-dom](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-dom) for React-specific lint rules:

```js
// eslint.config.js
import reactX from 'eslint-plugin-react-x'
import reactDom from 'eslint-plugin-react-dom'

export default defineConfig([
  globalIgnores(['dist']),
  {
    files: ['**/*.{ts,tsx}'],
    extends: [
      // Other configs...
      // Enable lint rules for React
      reactX.configs['recommended-typescript'],
      // Enable lint rules for React DOM
      reactDom.configs.recommended,
    ],
    languageOptions: {
      parserOptions: {
        project: ['./tsconfig.node.json', './tsconfig.app.json'],
        tsconfigRootDir: import.meta.dirname,
      },
      // other options...
    },
  },
])
```
23  frontend/eslint.config.js  Normal file
@@ -0,0 +1,23 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import tseslint from 'typescript-eslint'
import { defineConfig, globalIgnores } from 'eslint/config'

export default defineConfig([
  globalIgnores(['dist']),
  {
    files: ['**/*.{ts,tsx}'],
    extends: [
      js.configs.recommended,
      tseslint.configs.recommended,
      reactHooks.configs.flat.recommended,
      reactRefresh.configs.vite,
    ],
    languageOptions: {
      ecmaVersion: 2020,
      globals: globals.browser,
    },
  },
])
13  frontend/index.html  Normal file
@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/favicon.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>frontend</title>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="/src/main.tsx"></script>
  </body>
</html>
5672  frontend/package-lock.json  generated  Normal file
File diff suppressed because it is too large
54  frontend/package.json  Normal file
@@ -0,0 +1,54 @@
{
  "name": "frontend",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "tsc -b && vite build",
    "lint": "eslint .",
    "preview": "vite preview",
    "test": "vitest run",
    "test:watch": "vitest",
    "test:e2e": "playwright test",
    "test:e2e:ui": "playwright test --ui"
  },
  "dependencies": {
    "@hookform/resolvers": "^5.2.2",
    "@tanstack/react-query": "^5.99.0",
    "@tanstack/react-query-devtools": "^5.99.0",
    "axios": "^1.15.0",
    "clsx": "^2.1.1",
    "lucide-react": "^1.8.0",
    "react": "^19.2.4",
    "react-dom": "^19.2.4",
    "react-hook-form": "^7.72.1",
    "react-router-dom": "^7.14.1",
    "zod": "^4.3.6",
    "zustand": "^5.0.12"
  },
  "devDependencies": {
    "@eslint/js": "^9.39.4",
    "@playwright/test": "^1.59.1",
    "@testing-library/jest-dom": "^6.9.1",
    "@testing-library/react": "^16.3.2",
    "@testing-library/user-event": "^14.6.1",
    "@types/node": "^24.12.2",
    "@types/react": "^19.2.14",
    "@types/react-dom": "^19.2.3",
    "@vitejs/plugin-react": "^6.0.1",
    "@vitest/coverage-v8": "^4.1.4",
    "autoprefixer": "^10.5.0",
    "eslint": "^9.39.4",
    "eslint-plugin-react-hooks": "^7.0.1",
    "eslint-plugin-react-refresh": "^0.5.2",
    "globals": "^17.4.0",
    "jsdom": "^29.0.2",
    "postcss": "^8.5.10",
    "tailwindcss": "^3.4.19",
    "typescript": "~6.0.2",
    "typescript-eslint": "^8.58.0",
    "vite": "^8.0.4",
    "vitest": "^4.1.4"
  }
}
33  frontend/playwright.config.ts  Normal file
@@ -0,0 +1,33 @@
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  testDir: "./tests-e2e",
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  reporter: [
    ["html", { outputFolder: "playwright-report" }],
    ["list"],
  ],
  use: {
    baseURL: process.env.BASE_URL || "http://localhost:5173",
    trace: "on-first-retry",
    screenshot: "only-on-failure",
    video: "retain-on-failure",
    actionTimeout: 10_000,
    navigationTimeout: 30_000,
  },
  projects: [
    {
      name: "chromium",
      use: { ...devices["Desktop Chrome"] },
    },
  ],
  webServer: {
    command: "npm run dev",
    url: "http://localhost:5173",
    reuseExistingServer: !process.env.CI,
    timeout: 120_000,
  },
});
6  frontend/postcss.config.js  Normal file
@@ -0,0 +1,6 @@
export default {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
}
1  frontend/public/favicon.svg  Normal file
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 9.3 KiB
24  frontend/public/icons.svg  Normal file
@@ -0,0 +1,24 @@
<svg xmlns="http://www.w3.org/2000/svg">
  <symbol id="bluesky-icon" viewBox="0 0 16 17">
    <g clip-path="url(#bluesky-clip)"><path fill="#08060d" d="M7.75 7.735c-.693-1.348-2.58-3.86-4.334-5.097-1.68-1.187-2.32-.981-2.74-.79C.188 2.065.1 2.812.1 3.251s.241 3.602.398 4.13c.52 1.744 2.367 2.333 4.07 2.145-2.495.37-4.71 1.278-1.805 4.512 3.196 3.309 4.38-.71 4.987-2.746.608 2.036 1.307 5.91 4.93 2.746 2.72-2.746.747-4.143-1.747-4.512 1.702.189 3.55-.4 4.07-2.145.156-.528.397-3.691.397-4.13s-.088-1.186-.575-1.406c-.42-.19-1.06-.395-2.741.79-1.755 1.24-3.64 3.752-4.334 5.099"/></g>
    <defs><clipPath id="bluesky-clip"><path fill="#fff" d="M.1.85h15.3v15.3H.1z"/></clipPath></defs>
  </symbol>
  <symbol id="discord-icon" viewBox="0 0 20 19">
    <path fill="#08060d" d="M16.224 3.768a14.5 14.5 0 0 0-3.67-1.153c-.158.286-.343.67-.47.976a13.5 13.5 0 0 0-4.067 0c-.128-.306-.317-.69-.476-.976A14.4 14.4 0 0 0 3.868 3.77C1.546 7.28.916 10.703 1.231 14.077a14.7 14.7 0 0 0 4.5 2.306q.545-.748.965-1.587a9.5 9.5 0 0 1-1.518-.74q.191-.14.372-.293c2.927 1.369 6.107 1.369 8.999 0q.183.152.372.294-.723.437-1.52.74.418.838.963 1.588a14.6 14.6 0 0 0 4.504-2.308c.37-3.911-.63-7.302-2.644-10.309m-9.13 8.234c-.878 0-1.599-.82-1.599-1.82 0-.998.705-1.82 1.6-1.82.894 0 1.614.82 1.599 1.82.001 1-.705 1.82-1.6 1.82m5.91 0c-.878 0-1.599-.82-1.599-1.82 0-.998.705-1.82 1.6-1.82.893 0 1.614.82 1.599 1.82 0 1-.706 1.82-1.6 1.82"/>
  </symbol>
  <symbol id="documentation-icon" viewBox="0 0 21 20">
    <path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="m15.5 13.333 1.533 1.322c.645.555.967.833.967 1.178s-.322.623-.967 1.179L15.5 18.333m-3.333-5-1.534 1.322c-.644.555-.966.833-.966 1.178s.322.623.966 1.179l1.534 1.321"/>
    <path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M17.167 10.836v-4.32c0-1.41 0-2.117-.224-2.68-.359-.906-1.118-1.621-2.08-1.96-.599-.21-1.349-.21-2.848-.21-2.623 0-3.935 0-4.983.369-1.684.591-3.013 1.842-3.641 3.428C3 6.449 3 7.684 3 10.154v2.122c0 2.558 0 3.838.706 4.726q.306.383.713.671c.76.536 1.79.64 3.581.66"/>
    <path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M3 10a2.78 2.78 0 0 1 2.778-2.778c.555 0 1.209.097 1.748-.047.48-.129.854-.503.982-.982.145-.54.048-1.194.048-1.749a2.78 2.78 0 0 1 2.777-2.777"/>
  </symbol>
  <symbol id="github-icon" viewBox="0 0 19 19">
    <path fill="#08060d" fill-rule="evenodd" d="M9.356 1.85C5.05 1.85 1.57 5.356 1.57 9.694a7.84 7.84 0 0 0 5.324 7.44c.387.079.528-.168.528-.376 0-.182-.013-.805-.013-1.454-2.165.467-2.616-.935-2.616-.935-.349-.91-.864-1.143-.864-1.143-.71-.48.051-.48.051-.48.787.051 1.2.805 1.2.805.695 1.194 1.817.857 2.268.649.064-.507.27-.857.49-1.052-1.728-.182-3.545-.857-3.545-3.87 0-.857.31-1.558.8-2.104-.078-.195-.349-1 .077-2.078 0 0 .657-.208 2.14.805a7.5 7.5 0 0 1 1.946-.26c.657 0 1.328.092 1.946.26 1.483-1.013 2.14-.805 2.14-.805.426 1.078.155 1.883.078 2.078.502.546.799 1.247.799 2.104 0 3.013-1.818 3.675-3.558 3.87.284.247.528.714.528 1.454 0 1.052-.012 1.896-.012 2.156 0 .208.142.455.528.377a7.84 7.84 0 0 0 5.324-7.441c.013-4.338-3.48-7.844-7.773-7.844" clip-rule="evenodd"/>
  </symbol>
  <symbol id="social-icon" viewBox="0 0 20 20">
    <path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M12.5 6.667a4.167 4.167 0 1 0-8.334 0 4.167 4.167 0 0 0 8.334 0"/>
    <path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M2.5 16.667a5.833 5.833 0 0 1 8.75-5.053m3.837.474.513 1.035c.07.144.257.282.414.309l.93.155c.596.1.736.536.307.965l-.723.73a.64.64 0 0 0-.152.531l.207.903c.164.715-.213.991-.84.618l-.872-.52a.63.63 0 0 0-.577 0l-.872.52c-.624.373-1.003.094-.84-.618l.207-.903a.64.64 0 0 0-.152-.532l-.723-.729c-.426-.43-.289-.864.306-.964l.93-.156a.64.64 0 0 0 .412-.31l.513-1.034c.28-.562.735-.562 1.012 0"/>
  </symbol>
  <symbol id="x-icon" viewBox="0 0 19 19">
    <path fill="#08060d" fill-rule="evenodd" d="M1.893 1.98c.052.072 1.245 1.769 2.653 3.77l2.892 4.114c.183.261.333.48.333.486s-.068.089-.152.183l-.522.593-.765.867-3.597 4.087c-.375.426-.734.834-.798.905a1 1 0 0 0-.118.148c0 .01.236.017.664.017h.663l.729-.83c.4-.457.796-.906.879-.999a692 692 0 0 0 1.794-2.038c.034-.037.301-.34.594-.675l.551-.624.345-.392a7 7 0 0 1 .34-.374c.006 0 .93 1.306 2.052 2.903l2.084 2.965.045.063h2.275c1.87 0 2.273-.003 2.266-.021-.008-.02-1.098-1.572-3.894-5.547-2.013-2.862-2.28-3.246-2.273-3.266.008-.019.282-.332 2.085-2.38l2-2.274 1.567-1.782c.022-.028-.016-.03-.65-.03h-.674l-.3.342a871 871 0 0 1-1.782 2.025c-.067.075-.405.458-.75.852a100 100 0 0 1-.803.91c-.148.172-.299.344-.99 1.127-.304.343-.32.358-.345.327-.015-.019-.904-1.282-1.976-2.808L6.365 1.85H1.8zm1.782.91 8.078 11.294c.772 1.08 1.413 1.973 1.425 1.984.016.017.241.02 1.05.017l1.03-.004-2.694-3.766L7.796 5.75 5.722 2.852l-1.039-.004-1.039-.004z" clip-rule="evenodd"/>
  </symbol>
</svg>
After Width: | Height: | Size: 4.9 KiB
184  frontend/src/App.css  Normal file
@@ -0,0 +1,184 @@
.counter {
  font-size: 16px;
  padding: 5px 10px;
  border-radius: 5px;
  color: var(--accent);
  background: var(--accent-bg);
  border: 2px solid transparent;
  transition: border-color 0.3s;
  margin-bottom: 24px;

  &:hover {
    border-color: var(--accent-border);
  }
  &:focus-visible {
    outline: 2px solid var(--accent);
    outline-offset: 2px;
  }
}

.hero {
  position: relative;

  .base,
  .framework,
  .vite {
    inset-inline: 0;
    margin: 0 auto;
  }

  .base {
    width: 170px;
    position: relative;
    z-index: 0;
  }

  .framework,
  .vite {
    position: absolute;
  }

  .framework {
    z-index: 1;
    top: 34px;
    height: 28px;
    transform: perspective(2000px) rotateZ(300deg) rotateX(44deg) rotateY(39deg)
      scale(1.4);
  }

  .vite {
    z-index: 0;
    top: 107px;
    height: 26px;
    width: auto;
    transform: perspective(2000px) rotateZ(300deg) rotateX(40deg) rotateY(39deg)
      scale(0.8);
  }
}

#center {
  display: flex;
  flex-direction: column;
  gap: 25px;
  place-content: center;
  place-items: center;
  flex-grow: 1;

  @media (max-width: 1024px) {
    padding: 32px 20px 24px;
    gap: 18px;
  }
}

#next-steps {
  display: flex;
  border-top: 1px solid var(--border);
  text-align: left;

  & > div {
    flex: 1 1 0;
    padding: 32px;
    @media (max-width: 1024px) {
      padding: 24px 20px;
    }
  }

  .icon {
    margin-bottom: 16px;
    width: 22px;
    height: 22px;
  }

  @media (max-width: 1024px) {
    flex-direction: column;
    text-align: center;
  }
}

#docs {
  border-right: 1px solid var(--border);

  @media (max-width: 1024px) {
    border-right: none;
    border-bottom: 1px solid var(--border);
  }
}

#next-steps ul {
  list-style: none;
  padding: 0;
  display: flex;
  gap: 8px;
  margin: 32px 0 0;

  .logo {
    height: 18px;
  }

  a {
    color: var(--text-h);
    font-size: 16px;
    border-radius: 6px;
    background: var(--social-bg);
    display: flex;
    padding: 6px 12px;
    align-items: center;
    gap: 8px;
    text-decoration: none;
    transition: box-shadow 0.3s;

    &:hover {
      box-shadow: var(--shadow);
    }
    .button-icon {
      height: 18px;
      width: 18px;
    }
  }

  @media (max-width: 1024px) {
    margin-top: 20px;
    flex-wrap: wrap;
    justify-content: center;

    li {
      flex: 1 1 calc(50% - 8px);
    }

    a {
      width: 100%;
      justify-content: center;
      box-sizing: border-box;
    }
  }
}

#spacer {
  height: 88px;
  border-top: 1px solid var(--border);
  @media (max-width: 1024px) {
    height: 48px;
  }
}

.ticks {
  position: relative;
  width: 100%;

  &::before,
  &::after {
    content: '';
    position: absolute;
    top: -4.5px;
    border: 5px solid transparent;
  }

  &::before {
    left: 0;
    border-left-color: var(--border);
  }
  &::after {
    right: 0;
    border-right-color: var(--border);
  }
}
57  frontend/src/App.tsx  Normal file
@@ -0,0 +1,57 @@
import { BrowserRouter, Routes, Route, Navigate } from "react-router-dom";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { ReactQueryDevtools } from "@tanstack/react-query-devtools";

import { useAuthStore } from "@/store/auth.store";
import { Sidebar } from "@/components/layout/Sidebar";
import { LoginPage } from "@/pages/LoginPage";
import { DashboardPage } from "@/pages/DashboardPage";
import { ServerDetailPage } from "@/pages/ServerDetailPage";
import { CreateServerPage } from "@/pages/CreateServerPage";
import { SettingsPage } from "@/pages/SettingsPage";

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 10_000,
      retry: 2,
      refetchOnWindowFocus: false,
    },
  },
});

function ProtectedLayout() {
  const isAuthenticated = useAuthStore((s) => s.isAuthenticated);

  if (!isAuthenticated) {
    return <Navigate to="/login" replace />;
  }

  return (
    <div className="flex h-screen overflow-hidden">
      <Sidebar />
      <main className="flex-1 overflow-y-auto bg-surface-base">
        <Routes>
          <Route path="/" element={<DashboardPage />} />
          <Route path="/servers/:serverId" element={<ServerDetailPage />} />
          <Route path="/servers/new" element={<CreateServerPage />} />
          <Route path="/settings" element={<SettingsPage />} />
        </Routes>
      </main>
    </div>
  );
}

export default function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <BrowserRouter>
        <Routes>
          <Route path="/login" element={<LoginPage />} />
          <Route path="/*" element={<ProtectedLayout />} />
        </Routes>
      </BrowserRouter>
      <ReactQueryDevtools initialIsOpen={false} />
    </QueryClientProvider>
  );
}
124  frontend/src/__tests__/DashboardPage.test.tsx  Normal file
@@ -0,0 +1,124 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen } from "@testing-library/react";
import { MemoryRouter, Route, Routes } from "react-router-dom";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";

import { DashboardPage } from "@/pages/DashboardPage";
import { useServers } from "@/hooks/useServers";
import type { Server } from "@/hooks/useServers";

const mockMutation = () => ({
  mutateAsync: vi.fn(() => Promise.resolve()),
  mutate: vi.fn(),
  isPending: false,
  isSuccess: false,
  isError: false,
  reset: vi.fn(),
});

vi.mock("@/hooks/useServers", () => ({
  useServers: vi.fn(),
  useStartServer: vi.fn(() => mockMutation()),
  useStopServer: vi.fn(() => mockMutation()),
  useRestartServer: vi.fn(() => mockMutation()),
  useCreateServer: vi.fn(() => mockMutation()),
  useDeleteServer: vi.fn(() => mockMutation()),
}));
vi.mock("@/hooks/useWebSocket", () => ({
  useWebSocket: vi.fn(),
}));

const mockServer: Server = {
  id: 1,
  name: "Arma3 Test",
  game_type: "arma3",
  status: "running",
  port: 2302,
  max_players: 64,
  current_players: 32,
  restart_count: 0,
  auto_restart: true,
  created_at: "2026-01-01T00:00:00Z",
};

function renderDashboard() {
  const queryClient = new QueryClient({
    defaultOptions: { queries: { retry: false } },
  });

  return render(
    <QueryClientProvider client={queryClient}>
      <MemoryRouter>
        <Routes>
          <Route path="*" element={<DashboardPage />} />
        </Routes>
      </MemoryRouter>
    </QueryClientProvider>,
  );
}

describe("DashboardPage", () => {
  beforeEach(() => {
    vi.mocked(useServers).mockReturnValue({
      data: undefined,
      isLoading: true,
      isError: false,
      error: null,
    } as unknown as ReturnType<typeof useServers>);
  });

  it("should show loading state", () => {
    renderDashboard();
    expect(screen.getByText("Loading servers...")).toBeInTheDocument();
  });

  it("should show error state", () => {
    vi.mocked(useServers).mockReturnValue({
      data: undefined,
      isLoading: false,
      isError: true,
      error: new Error("fail"),
    } as unknown as ReturnType<typeof useServers>);

    renderDashboard();
    expect(screen.getByText("Failed to load servers")).toBeInTheDocument();
  });

  it("should show empty state when no servers", () => {
    vi.mocked(useServers).mockReturnValue({
      data: [],
      isLoading: false,
      isError: false,
      error: null,
    } as unknown as ReturnType<typeof useServers>);

    renderDashboard();
    expect(screen.getByText("No servers configured yet.")).toBeInTheDocument();
    expect(screen.getByText("Add your first server")).toBeInTheDocument();
  });

  it("should render server cards", () => {
    vi.mocked(useServers).mockReturnValue({
      data: [mockServer],
      isLoading: false,
      isError: false,
      error: null,
    } as unknown as ReturnType<typeof useServers>);

    renderDashboard();
    expect(screen.getByText("Arma3 Test")).toBeInTheDocument();
    expect(screen.getByText("1 server configured")).toBeInTheDocument();
  });

  it("should show Add Server link", () => {
    vi.mocked(useServers).mockReturnValue({
      data: [mockServer],
      isLoading: false,
      isError: false,
      error: null,
    } as unknown as ReturnType<typeof useServers>);

    renderDashboard();
    expect(screen.getByText("Add Server")).toBeInTheDocument();
  });
});
95  frontend/src/__tests__/LoginPage.test.tsx  Normal file
@@ -0,0 +1,95 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, waitFor } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { MemoryRouter, Route, Routes } from "react-router-dom";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";

import { LoginPage } from "@/pages/LoginPage";

vi.mock("@/lib/api", () => ({
  apiClient: {
    post: vi.fn(),
  },
}));

import { apiClient } from "@/lib/api";

function renderLoginPage() {
  const queryClient = new QueryClient({
    defaultOptions: { queries: { retry: false } },
  });

  return {
    user: userEvent.setup(),
    ...render(
      <QueryClientProvider client={queryClient}>
        <MemoryRouter initialEntries={["/login"]}>
          <Routes>
            <Route path="/login" element={<LoginPage />} />
            <Route path="/" element={<div>Dashboard</div>} />
          </Routes>
        </MemoryRouter>
      </QueryClientProvider>,
    ),
  };
}

describe("LoginPage", () => {
  beforeEach(() => {
    vi.mocked(apiClient.post).mockReset();
  });

  it("should render login form", () => {
    renderLoginPage();
    expect(screen.getByLabelText("Username")).toBeInTheDocument();
    expect(screen.getByLabelText("Password")).toBeInTheDocument();
    expect(screen.getByRole("button", { name: /sign in/i })).toBeInTheDocument();
  });

  it("should show validation errors on empty submit", async () => {
    const { user } = renderLoginPage();
    await user.click(screen.getByRole("button", { name: /sign in/i }));
    await waitFor(() => {
      expect(screen.getByText("Username is required")).toBeInTheDocument();
    });
  });

  it("should call API on valid submit", async () => {
    vi.mocked(apiClient.post).mockResolvedValueOnce({
      data: {
        success: true,
        data: {
          access_token: "test-token",
          user: { id: 1, username: "admin", role: "admin" as const },
        },
      },
    });

    const { user } = renderLoginPage();
    await user.type(screen.getByLabelText("Username"), "admin");
    await user.type(screen.getByLabelText("Password"), "password");
    await user.click(screen.getByRole("button", { name: /sign in/i }));

    await waitFor(() => {
      expect(apiClient.post).toHaveBeenCalledWith("/api/auth/login", {
        username: "admin",
        password: "password",
      });
    });
  });

  it("should show error on failed login", async () => {
    vi.mocked(apiClient.post).mockRejectedValueOnce({
      response: { data: { detail: "Invalid credentials" } },
    });

    const { user } = renderLoginPage();
    await user.type(screen.getByLabelText("Username"), "admin");
    await user.type(screen.getByLabelText("Password"), "wrong");
    await user.click(screen.getByRole("button", { name: /sign in/i }));

    await waitFor(() => {
      expect(screen.getByText("Invalid credentials")).toBeInTheDocument();
    });
  });
});
173  frontend/src/__tests__/ServerCard.handlers.test.tsx  Normal file
@@ -0,0 +1,173 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";

import { ServerCard } from "@/components/servers/ServerCard";
import type { Server } from "@/hooks/useServers";
import {
  useStartServer,
  useStopServer,
  useRestartServer,
} from "@/hooks/useServers";
import { useUIStore } from "@/store/ui.store";

vi.mock("@/hooks/useServers", () => ({
  useStartServer: vi.fn(),
  useStopServer: vi.fn(),
  useRestartServer: vi.fn(),
}));

const baseServer: Server = {
  id: 1,
  name: "Test Arma3",
  game_type: "arma3",
  status: "running",
  port: 2302,
  max_players: 64,
  current_players: 32,
  restart_count: 3,
  auto_restart: true,
  created_at: "2026-01-01T00:00:00Z",
};

function renderCard(server: Partial<Server> = {}) {
  const fullServer: Server = { ...baseServer, ...server };
  const queryClient = new QueryClient({
    defaultOptions: { queries: { retry: false } },
  });
  return {
    user: userEvent.setup(),
    ...render(
      <QueryClientProvider client={queryClient}>
        <ServerCard server={fullServer} />
      </QueryClientProvider>,
    ),
  };
}

function mockMutationResult(
  overrides: Partial<{ mutateAsync: ReturnType<typeof vi.fn>; isPending: boolean }> = {},
) {
  return {
    mutateAsync: vi.fn(() => Promise.resolve()),
    isPending: false,
    isSuccess: false,
    isError: false,
    reset: vi.fn(),
    mutate: vi.fn(),
    ...overrides,
  };
}

describe("ServerCard handlers", () => {
  beforeEach(() => {
    useUIStore.setState({ notifications: [] });
    vi.mocked(useStartServer).mockReturnValue(
      mockMutationResult() as unknown as ReturnType<typeof useStartServer>,
    );
    vi.mocked(useStopServer).mockReturnValue(
      mockMutationResult() as unknown as ReturnType<typeof useStopServer>,
    );
    vi.mocked(useRestartServer).mockReturnValue(
      mockMutationResult() as unknown as ReturnType<typeof useRestartServer>,
    );
  });

  it("should add success notification on start success", async () => {
    const { user } = renderCard({ status: "stopped" });
    await user.click(screen.getByLabelText("Start Test Arma3"));

    const state = useUIStore.getState();
    expect(state.notifications).toHaveLength(1);
    expect(state.notifications[0].type).toBe("success");
  });

  it("should add error notification on start failure", async () => {
    const startMutation = mockMutationResult({
      mutateAsync: vi.fn(() => Promise.reject(new Error("fail"))),
    });
    vi.mocked(useStartServer).mockReturnValue(
      startMutation as unknown as ReturnType<typeof useStartServer>,
    );

    const { user } = renderCard({ status: "stopped" });
    await user.click(screen.getByLabelText("Start Test Arma3"));

    const state = useUIStore.getState();
    expect(state.notifications.some((n) => n.type === "error")).toBe(true);
  });

  it("should call stopServer on Stop click", async () => {
    const stopMutation = mockMutationResult();
    vi.mocked(useStopServer).mockReturnValue(
      stopMutation as unknown as ReturnType<typeof useStopServer>,
    );

    const { user } = renderCard({ status: "running" });
    await user.click(screen.getByLabelText("Stop Test Arma3"));
    expect(stopMutation.mutateAsync).toHaveBeenCalledWith({ serverId: 1 });
  });

  it("should add error notification on stop failure", async () => {
    const stopMutation = mockMutationResult({
      mutateAsync: vi.fn(() => Promise.reject(new Error("fail"))),
    });
    vi.mocked(useStopServer).mockReturnValue(
      stopMutation as unknown as ReturnType<typeof useStopServer>,
    );

    const { user } = renderCard({ status: "running" });
    await user.click(screen.getByLabelText("Stop Test Arma3"));

    const state = useUIStore.getState();
    expect(state.notifications.some((n) => n.type === "error")).toBe(true);
  });

  it("should call restartServer on Restart click", async () => {
    const restartMutation = mockMutationResult();
    vi.mocked(useRestartServer).mockReturnValue(
      restartMutation as unknown as ReturnType<typeof useRestartServer>,
    );

    const { user } = renderCard({ status: "running" });
    await user.click(screen.getByLabelText("Restart Test Arma3"));
    expect(restartMutation.mutateAsync).toHaveBeenCalledWith(1);
  });

  it("should add error notification on restart failure", async () => {
||||
const restartMutation = mockMutationResult({
|
||||
mutateAsync: vi.fn(() => Promise.reject(new Error("fail"))),
|
||||
});
|
||||
vi.mocked(useRestartServer).mockReturnValue(
|
||||
restartMutation as unknown as ReturnType<typeof useRestartServer>,
|
||||
);
|
||||
|
||||
const { user } = renderCard({ status: "running" });
|
||||
await user.click(screen.getByLabelText("Restart Test Arma3"));
|
||||
|
||||
const state = useUIStore.getState();
|
||||
expect(state.notifications.some((n) => n.type === "error")).toBe(true);
|
||||
});
|
||||
|
||||
it("should disable Restart button when server is starting", () => {
|
||||
renderCard({ status: "starting" });
|
||||
const restartBtn = screen.getByLabelText("Restart Test Arma3");
|
||||
expect(restartBtn).toBeDisabled();
|
||||
});
|
||||
|
||||
it("should disable Stop and Restart when server is restarting", () => {
|
||||
renderCard({ status: "restarting" });
|
||||
expect(screen.getByLabelText("Stop Test Arma3")).toBeDisabled();
|
||||
expect(screen.getByLabelText("Restart Test Arma3")).toBeDisabled();
|
||||
});
|
||||
|
||||
it("should disable Start button while start is pending", () => {
|
||||
vi.mocked(useStartServer).mockReturnValue(
|
||||
mockMutationResult({ isPending: true }) as unknown as ReturnType<typeof useStartServer>,
|
||||
);
|
||||
|
||||
renderCard({ status: "stopped" });
|
||||
expect(screen.getByLabelText("Start Test Arma3")).toBeDisabled();
|
||||
});
|
||||
});
|
||||