Reproducible, auditable Trust & Safety decisions — on-prem.

Swiftward is a modular on-prem policy engine for UGC, AI outputs, and internal workflows — with a full audit trail.

Who it's for

  • Teams shipping AI features (agents, copilots, auto-summaries) that need guardrails + audit.
  • UGC platforms where moderation mistakes have real cost.
  • Regulated orgs (fintech/health/enterprise) needing deterministic compliance evidence.

If you're already hiring Trust & Safety / Policy / Compliance roles — you likely feel this pain.

What Swiftward is

Swiftward is a modular, on-premise Trust & Safety engine for enforcing deterministic, auditable policies across UGC, AI outputs, and internal workflows — from a single-node setup to distributed deployments.

It fits environments where incorrect decisions have real cost (operational, financial, reputational) and where reproducible outcomes with a clear audit trail are required.

Why not just X?

Why not OPA?

OPA: Policy decision engine — great for authorization/DevOps.

Swiftward: Policy runtime + event processing + state management + audit trails + DLQ/replay. Built for Trust & Safety workflows.

Why not prompt/LLM guardrails?

Those protect prompts/responses; Swiftward is a general policy runtime over events + workflows.

Why not build in-house?

You'll end up rebuilding policy versioning, deterministic execution ordering, audit trails, DLQ/replay, and integrations.

Key capabilities

Deterministic decisions

DAG signals, ordering guarantees, two-phase execution (evaluation ≠ side effects). Same event + same policy = same verdict, always.

Full audit trail

Every signal, rule match, state mutation, and action logged per event. Traces, investigations, replay/DLQ for compliance.

On-prem control

No SaaS lock-in, data stays inside. Deploy anywhere: Docker, Kubernetes, or bare metal.

Scale without rewrites

Postgres-first design scales to Kafka/Redis/ClickHouse via adapters. Switch backends without changing policy logic.
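
For illustration, switching adapters could be a deployment-config change along these lines. The key names below are assumptions for the sketch, not the shipped schema; the point is that policy files stay untouched while backends change.

# Hypothetical deployment config sketch (key names illustrative, not the shipped schema)
queue:
  adapter: postgres        # default: queue, DLQ, and state all in Postgres
cache:
  adapter: none
analytics:
  adapter: none

# Later, the same policies run unchanged against heavier backends, e.g.:
# queue:
#   adapter: kafka
#   brokers: ["kafka-1:9092", "kafka-2:9092"]
# cache:
#   adapter: redis
#   url: "redis://redis:6379"
# analytics:
#   adapter: clickhouse
#   dsn: "clickhouse://clickhouse:9000/swiftward"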

Example output

A UGC moderation policy that blocks PII exposure, and the decision trace it produces.

In a pilot, we map this to your event schema on days 1–2.

Policy (YAML)

signals:
  pii_detected:
    udf: pii/scanner
    params:
      text: "{{ event.data.text }}"
      types: ["email", "phone", "ssn"]

rules:
  block_pii_exposure:
    enabled: true
    all:
      - path: "signals.pii_detected.found"
        op: eq
        value: true
    effects:
      verdict: rejected
      state_changes:
        set_labels: ["pii_violation"]
        change_counters:
          pii_violations: 1
      response:
        blocked: true
        reason: "Content contains personal information"

Decision trace

trace_id:       tr_ugc_YYYYMMDD_001
policy_version: ugc_moderation_v1
policy_hash:    sha256:abc123...
duration:       106ms

SIGNALS COMPUTED
+ pii_detected = FOUND (12ms, pii/scanner)
  -> Email detected: john@example.com

RULES EVALUATED
[P90]  block_pii_exposure     MATCHED

VERDICT: REJECTED
Source:  block_pii_exposure
Reason:  Content contains personal information

STATE MUTATIONS
+ SET LABELS:      ["pii_violation"]
+ CHANGE COUNTERS: pii_violations += 1

Architecture & scaling

Swiftward runs as a single binary that can operate as one process or be deployed as role-based components (ingestion, workers, control API) for horizontal scaling.

[Architecture flow: Event Source → Ingestion (HTTP/gRPC) → Queue (Postgres) → Worker (pulls events) → Rules Engine → State Store (Postgres) → Actions (Webhooks) → Decision Trace]
  • Single executable → role-based services: Run as one process for small teams, or deploy role-based components for scale.
  • Reference deployments: Docker Compose profiles for minimal (Postgres only), full (+ Kafka, Redis), and analytics (+ ClickHouse, VictoriaMetrics); a Compose sketch follows this list.
  • Postgres-first design: Queues, DLQ, state, rule versions, and execution history in one database for moderate workloads.
  • Horizontal scaling: Stream partitioning by entity_id preserves per-entity ordering.
  • Optional adapters: Kafka (ingestion/buffering), Redis (caching/rate limiting), ClickHouse/Druid (analytics) via config.
  • Switch backends without changing policy logic: No code changes required, only deployment configuration.
  • Two-phase execution: Deterministic rule evaluation separate from state mutations and external actions.
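
As a rough illustration, a role-based deployment might look like the following Compose sketch. The image name, the --role flag, and the environment variables are assumptions for this sketch, not the shipped reference deployment.

# Hypothetical docker-compose sketch; image name, --role flag, and env vars are illustrative
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: swiftward
      POSTGRES_PASSWORD: changeme

  ingestion:
    image: swiftward/swiftward:latest    # same binary for every role
    command: ["--role=ingestion"]        # HTTP/gRPC event intake
    environment:
      DATABASE_URL: postgres://postgres:changeme@postgres:5432/swiftward
    ports: ["8080:8080"]
    depends_on: [postgres]

  worker:
    image: swiftward/swiftward:latest
    command: ["--role=worker"]           # pulls events, runs the rules engine
    environment:
      DATABASE_URL: postgres://postgres:changeme@postgres:5432/swiftward
    depends_on: [postgres]
    deploy:
      replicas: 2                        # per-entity ordering preserved via entity_id partitioning

  control-api:
    image: swiftward/swiftward:latest
    command: ["--role=control-api"]      # policy versions, traces, investigations
    environment:
      DATABASE_URL: postgres://postgres:changeme@postgres:5432/swiftward
    depends_on: [postgres]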

What it is NOT

Swiftward is purpose-built for Trust & Safety and policy enforcement, not general business orchestration.

  • Not a SaaS moderation API
  • Not a black-box classifier
  • Not a BPM/workflow engine
  • Not a chatbot wrapper

Design partner pilot offer

I'll personally help you install and run a pilot. This is a paid engagement to integrate Swiftward with your systems and validate fit.

Outcome: a go/no-go decision for adopting Swiftward with a working policy pack and reproducible traces on your data.

Target time-to-first-value

5–10 business days after prerequisites are ready; see the pilot guide for a detailed timeline.

Typical end-to-end pilot: 3–6 weeks including security/access, integration, and evaluation.

Deliverables

  • Policy pack for 1–2 real use cases
  • Audit/investigation workflow demo
  • Performance baseline + go/no-go criteria

Success criteria

  • We can express your policies and reproduce verdicts deterministically
  • We can produce a complete audit trace for any decision
  • Latency/throughput targets met for your workload (or acceptable async)

Week 1 outcome

First event source integrated + first policy running + trace visible

What we need from you

  • Access to environment (k8s/docker/vm) + Postgres
  • Sample events/test dataset (10–50 events, anonymized OK)
  • Integration endpoint owner (webhook/API contact)
  • Success criteria & owner (one technical contact)

Scope and pricing

Paid on-prem pilot (fixed-fee): typically $5k–$10k depending on scope. Final scope + fee confirmed after a 30-minute scoping call. See pilot guide for details.

Request a pilot

Built by Konstantin Trunin — hands-on CTO (Go, distributed systems). LinkedIn ↗

FAQ

Do we need to learn a new DSL?

Yes, but it's YAML-first; we provide examples and handle onboarding. Policies are defined declaratively using constants, UDF/action profiles, signals, and rules.
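
As a rough sketch, a policy file is organized along these lines. The constants and profile section names below are assumptions based on the description above; the signals and rules sections match the example earlier on this page.

# Illustrative policy skeleton; section names other than signals/rules are assumptions
constants:                 # shared values referenced by signals and rules
  pii_types: ["email", "phone", "ssn"]

udf_profiles:              # reusable UDF configurations (detectors, LLM checks, ...)
  pii_scanner:
    udf: pii/scanner

action_profiles:           # reusable side-effect targets (webhooks, queues, ...)
  notify_moderation:
    webhook: "https://hooks.example.internal/moderation"   # placeholder URL

signals:                   # facts computed per event
  pii_detected:
    udf: pii/scanner
    params:
      text: "{{ event.data.text }}"
      types: "{{ constants.pii_types }}"

rules:                     # match on signals, produce verdicts and effects
  block_pii_exposure:
    enabled: true
    all:
      - path: "signals.pii_detected.found"
        op: eq
        value: true
    effects:
      verdict: rejected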

Do you have prebuilt detectors/UDFs?

Some, with an expanding library. AI calls are generic UDFs, and you can bring your own. UDFs implement domain-specific logic such as heuristics, scoring, parsing, enrichment, or LLM checks.
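
For instance, a bring-your-own LLM check would plug in as a generic UDF along these lines; the UDF name and parameters below are hypothetical.

signals:
  toxicity_check:
    udf: llm/classify                                  # hypothetical generic LLM-call UDF
    params:
      endpoint: "https://llm.internal/v1/classify"     # your own model/endpoint
      prompt: "Classify toxicity of: {{ event.data.text }}"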

Human-in-the-loop?

Not in v1; available as an extension. Tell us your workflow and we can discuss integration options.

Documentation