AI safety and content moderation — with decisions you can explain and prove.

Block bad content, trace every verdict, replay any event. On-prem policy engine for deterministic, auditable enforcement.

Request a pilot
Design-partner slots available; fee may be waived.

What teams use Swiftward for

AI safety & guardrails

Problem: Your LLM responds with threats, jailbreaks, or harmful content. Users complain. You can't explain what happened.

With Swiftward: Block or flag unsafe outputs in real time. Get a full trace of why each decision was made — for debugging, appeals, and audits.

UGC & content moderation

Problem: Users post PII, spam, or policy-violating content. Moderation mistakes cost reputation and legal risk.

With Swiftward: Enforce content policies consistently. Same content + same policy = same decision, every time. Full audit trail for compliance.

Fraud & abuse detection

Problem: Coordinated spam rings, fake accounts, or abuse patterns that stateless APIs can't catch.

With Swiftward: Built-in entity state (labels, counters) lets you detect patterns across events. "5 new accounts posted the same link in 10 minutes" → block.
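As an illustration of how such a rule might read (the field names and syntax below are hypothetical, not the actual Swiftward DSL — actual policies are defined per the docs):

```yaml
# Hypothetical policy sketch: block coordinated link spam.
# Keys and expressions are illustrative only.
signals:
  - name: same_link_new_accounts
    type: counter                  # built-in entity state: counters
    key: event.link_url            # count per shared link
    filter: entity.account_age_hours < 24
    window: 10m

rules:
  - name: block_link_spam_ring
    when: signals.same_link_new_accounts >= 5
    actions:
      - block
      - label: "spam_ring_suspect"
      - alert: "#trust-and-safety"
```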

Compliance & audit trail

Problem: Regulators or legal ask "why did you block this user?" and you can't reproduce the decision.

With Swiftward: Every decision is traced with signal values, rule matches, and state changes. Replay any event against any policy version.

Who it's for

  • Teams shipping AI features (agents, copilots, auto-summaries) that need guardrails + audit.
  • UGC platforms where moderation mistakes have real cost.
  • Regulated orgs (fintech/health/enterprise) needing deterministic compliance evidence.

If you're already hiring Trust & Safety / Policy / Compliance roles — you likely feel this pain.

Key capabilities

On-prem control

No SaaS lock-in; your data stays inside your environment. Deploy anywhere: Docker, Kubernetes, or bare metal.

Full audit trail

Every signal, rule match, state mutation, and action logged per event. Traces, investigations, replay/DLQ for compliance.

Deterministic decisions

Same event + same policy version = same verdict (replayable). Under the hood: ordering guarantees + two-phase execution to keep side effects consistent.

Scale without rewrites

Postgres-first design scales to Kafka/Redis/ClickHouse via adapters. Switch backends without changing policy logic.

How it works

Event in → policy evaluated → decision + trace out


AI guardrails: Block user prompts containing threats or harassment.

Input: user prompt with threatening content
Decision: REJECTED (rule: block_threats)
Effects: Block prompt + Alert #ai-safety + Label "threat_detected"
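To make "decision + trace out" concrete, a trace for the example above might look roughly like this (field names are assumed for illustration, not the actual trace schema):

```yaml
# Hypothetical shape of a decision trace (illustrative field names):
event_id: evt_123
policy_version: 42            # verdicts are replayable against this version
decision: REJECTED
matched_rules:
  - rule: block_threats
    signals:
      threat_score: 0.97      # recorded signal values make the verdict reproducible
effects:
  - block_prompt
  - alert: "#ai-safety"
  - label: "threat_detected"
```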

Architecture & scaling

Swiftward runs as a single binary that can operate as one process or be deployed as role-based components (ingestion, workers, control API) for horizontal scaling.

Event flow: Event Source → Ingestion (HTTP/gRPC) → Queue (Postgres) → Worker (pulls events) → Rules Engine + State Store (Postgres) → Actions (Webhooks) → Decision Trace.
  • Single executable → role-based services: Run as one process for small teams, or deploy role-based components for scale.
  • Reference deployments: Docker Compose profiles for minimal (Postgres only), full (+ Kafka, Redis), and analytics (+ ClickHouse, VictoriaMetrics).
  • Postgres-first design: Queues, DLQ, state, rule versions, and execution history in one database for moderate workloads.
  • Horizontal scaling: Stream partitioning by entity_id preserves per-entity ordering.
  • Optional adapters: Kafka (ingestion/buffering), Redis (caching/rate limiting), ClickHouse/Druid (analytics) via config.
  • Switch backends without changing policy logic: No code changes required, only deployment configuration.
  • Two-phase execution: Deterministic rule evaluation separate from state mutations and external actions.
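A minimal single-node deployment along the lines of the "minimal" Compose profile could look like this (service and image names are assumptions, not the published artifacts):

```yaml
# Hypothetical minimal Docker Compose profile (names illustrative):
services:
  swiftward:
    image: swiftward:latest
    command: ["serve", "--all-roles"]   # one process: ingestion + workers + control API
    environment:
      DATABASE_URL: postgres://swiftward:secret@db:5432/swiftward
    ports:
      - "8080:8080"
    depends_on: [db]
  db:
    image: postgres:16                  # queues, DLQ, state, and traces all live here
```

Splitting into role-based services for scale would, per the architecture above, mean running the same binary with different roles against the same Postgres (or Kafka/Redis adapters).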

Why not just X?

Why not OPA?

OPA: Policy decision engine — great for authorization/DevOps.

Swiftward: Policy runtime + event processing + state management + audit trails + DLQ/replay. Built for content moderation and safety workflows.

Why not prompt/LLM guardrails?

Those protect prompts/responses; Swiftward is a general policy runtime over any events + workflows.

Why not build in-house?

You'll re-create policy versioning, deterministic execution ordering, audit trails, DLQ/replay, and integrations.

What it is NOT

Swiftward is purpose-built for Trust & Safety and policy enforcement, not general business orchestration.

  • Not a SaaS moderation API
  • Not a black-box classifier
  • Not a BPM/workflow engine
  • Not a chatbot wrapper

Pilot offer

I'll personally help you install and run a pilot. Swiftward pilots are offered in two formats, depending on fit and goals.

Design partner pilot

Limited slots. License fee may be waived in exchange for active participation, structured feedback, and a reference if the pilot is successful.

Standard paid pilot

Fixed-fee on-prem engagement, credited toward a commercial license.

Outcome: a go/no-go decision for adopting Swiftward with a working policy pack and reproducible traces on your data.

Target time-to-first-value

5–10 business days after prerequisites are ready — see pilot guide for detailed timeline

Typical end-to-end pilot: 3–6 weeks including security/access, integration, and evaluation.

Deliverables

  • Policy pack for 1–2 real use cases
  • Audit/investigation workflow demo
  • Performance baseline + go/no-go criteria

Success criteria

  • We can express your policies and reproduce verdicts deterministically
  • We can produce a complete audit trace for any decision
  • Latency/throughput targets met for your workload (or acceptable async)

Week 1 outcome

First event source integrated + first policy running + trace visible

What we need from you

  • Access to environment (k8s/docker/vm) + Postgres
  • Sample events/test dataset (10–50 events, anonymized OK)
  • Integration endpoint owner (webhook/API contact)
  • Success criteria & owner (one technical contact)

Standard paid pilot pricing

Paid on-prem pilot (fixed-fee): typically $5k–$10k depending on scope. Final scope and fee confirmed after a 30-minute scoping call. See pilot guide for details.

Request a pilot

Built by Konstantin Trunin — hands-on CTO (Go, distributed systems). LinkedIn ↗

FAQ

Is the design partner pilot free?

For a limited number of design partners, the pilot license fee may be waived. This requires active participation, real use cases, and agreement to provide feedback and a reference if successful.

Do we need to learn a new DSL?

Yes, but it's YAML-first; we provide examples and do onboarding. Policies are defined declaratively using constants, UDF and action profiles, signals, and rules.
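A skeleton showing how those building blocks could fit together (keys are illustrative, not the real schema):

```yaml
# Hypothetical policy skeleton: constants, UDF profiles, signals, rules.
constants:
  threat_threshold: 0.9
udf_profiles:
  toxicity_model:
    type: llm_check       # UDFs can wrap heuristics, scoring, or LLM calls
    timeout_ms: 500
signals:
  - name: threat_score
    udf: toxicity_model
rules:
  - name: block_threats
    when: signals.threat_score >= constants.threat_threshold
    action: reject
```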

Do you have prebuilt detectors/UDFs?

Some, with an expanding library. AI calls are generic UDFs; you can bring your own. UDFs implement domain-specific logic such as heuristics, scoring, parsing, enrichment, or LLM checks.

Human-in-the-loop?

Not in v1; available as an extension. Tell us your workflow and we can discuss integration options.

Documentation