OpenSourceAIHub

OpenRouter Alternative for Secure AI Infrastructure

OpenRouter is a popular model aggregator that gives developers a single endpoint to access 200+ LLMs. For hobbyist projects and experimentation, it works well — pick a model, send a request, get a response.

But when you move from experimentation to production — when your prompts carry customer data, when multiple teams share the same AI infrastructure, when a compliance auditor asks “what controls do you have on AI data flows?” — aggregation is not enough. You need governance: per-project DLP policies, real-time PII redaction, versioned audit trails, vision OCR scanning, and hard budget enforcement. This is what OpenSourceAIHub provides.

Why This Matters

The gap between “model aggregator” and “enterprise AI governance” is the gap between a tool that sends requests and a platform that controls them.

Aggregation

Routes your request to a model. Does not inspect, filter, or enforce policies on the content. Every prompt — including those with SSNs, API keys, or patient records — is forwarded as-is.

Governance

Scans every request for 28 entity types, enforces per-project DLP policies, redacts or blocks sensitive data, logs violations with correlation IDs, and rejects requests that exceed the budget — before anything reaches a provider.

What OpenRouter Provides

OpenRouter is well-executed for its core use case — giving developers broad model access through a single API:

Unified model marketplace

Access to 200+ models from dozens of providers through one endpoint. Discover new models, compare prices, and switch instantly.

OpenAI-compatible endpoint

Standard POST /v1/chat/completions format works with the OpenAI SDK, making integration straightforward.

Prepaid credits

Top up a balance and pay per token. No need to sign up with individual providers.

Community and discovery

Model rankings, usage stats, and a community-driven ecosystem help developers find the right model for their use case.

Governance Gaps for Production Teams

When your AI integration moves from a side project to a production system — one that handles real customer data and must satisfy compliance requirements — model aggregation alone leaves critical gaps:

Critical: No PII detection or redaction

Prompts containing emails, SSNs, credit cards, medical records, and API keys are forwarded directly to model providers. There is no scanning layer, no entity detection, and no redaction engine. If a customer support agent pastes a ticket containing a customer's SSN, it reaches the LLM unmodified.

Critical: No per-project DLP policies

There is no concept of project-level security policies. You cannot define different rules for different teams or applications — a healthcare project and a marketing chatbot share the same (nonexistent) security posture.

Critical: No prompt injection firewall

Jailbreak attempts and prompt injection patterns are forwarded to the model without interception. There is no heuristic detection and no ability to BLOCK malicious inputs at the gateway level.

Critical: No vision / image OCR scanning

Images sent via multi-modal APIs are passed through without inspection. Screenshots of dashboards, medical records, or bank statements containing PII are forwarded directly to providers.

High: No structured audit logging

Basic usage history exists, but there is no per-request log of which entity types were detected, what actions were taken, or correlation IDs for tracing individual requests through a compliance audit.

Medium: No BYOK with zero markup

You cannot bring your own provider API keys and use the platform purely as a security/governance layer at zero cost. All requests go through their billing system.

High: No hard budget enforcement

Credit limits exist but are soft — there is no pre-flight balance check that rejects requests before they are processed, and no automatic max_tokens capping based on remaining balance.

The OpenSourceAIHub Approach: Enterprise Governance

OpenSourceAIHub provides model routing and a governance layer that actively inspects, enforces, and audits every request. The key architectural differences:

Project-Level DLP Policies

Each project defines its own security rules: which entity types to scan, whether to REDACT or BLOCK, and custom regex patterns for proprietary data. Policies are versioned — every save creates an immutable snapshot with a full audit trail.

DLP policy docs

28-Entity PII Firewall

Pattern matching, checksum validation, intelligent entity recognition, and context heuristics detect SSNs, credit cards, API keys, emails, and 21 more entity types. Detected PII is redacted in-flight — the provider never sees the raw data.

PII redaction deep dive
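To make the checksum-validation step concrete, here is a minimal TypeScript sketch of one detector: a regex finds credit-card-shaped candidates, and a Luhn check confirms them before redaction. The function names and the `[CREDIT_CARD]` placeholder are illustrative, not the Hub's actual implementation.

```typescript
// Luhn checksum: confirms a digit string is a plausible card number,
// which filters out random digit runs (order IDs, timestamps, etc.)
function luhnValid(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

function redactCreditCards(text: string): string {
  // Candidate: 13-16 digits, optionally separated by spaces or dashes
  return text.replace(/\b(?:\d[ -]?){13,16}\b/g, (match) => {
    const digits = match.replace(/[ -]/g, "");
    return luhnValid(digits) ? "[CREDIT_CARD]" : match;
  });
}

console.log(redactCreditCards("Card: 4111 1111 1111 1111, order #1234567890123"));
// The valid Visa test number is redacted; the order number fails Luhn and is kept.
```

The checksum pass is what keeps false positives down: pattern matching alone would flag any 13-16 digit run.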

Vision OCR Security

Base64-encoded images are OCR-extracted and scanned with the full DLP engine. A screenshot containing PII is blocked before it reaches the provider. Images are processed in RAM only — never stored.

Vision security docs

Pre-flight Budget Enforcement

In Managed Mode, every request is cost-estimated before forwarding. If the wallet balance can't cover the estimate, the gateway returns HTTP 402 Payment Required with exact balance details; otherwise, output tokens are automatically capped to what the remaining balance can afford.

Budget enforcement deep dive
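A simplified model of that pre-flight check might look like the following TypeScript sketch. The credit prices, field names, and rounding are assumptions for illustration, not the gateway's real accounting.

```typescript
interface Wallet {
  credits: number; // 1 credit = $0.000001 ($1 = 1M Hub Credits)
}

interface Quote {
  allowed: boolean;
  maxTokens: number; // possibly capped below the requested value
  status: number;    // 200 or 402
}

function preflight(
  wallet: Wallet,
  promptTokens: number,
  requestedMaxTokens: number,
  creditsPerInputToken: number,
  creditsPerOutputToken: number,
): Quote {
  const inputCost = promptTokens * creditsPerInputToken;
  const remaining = wallet.credits - inputCost;
  if (remaining <= 0) {
    // Balance can't even cover the prompt: reject before forwarding
    return { allowed: false, maxTokens: 0, status: 402 };
  }
  // Cap output tokens so the worst-case total cost fits the balance
  const affordable = Math.floor(remaining / creditsPerOutputToken);
  return {
    allowed: true,
    maxTokens: Math.min(requestedMaxTokens, affordable),
    status: 200,
  };
}
```

The key property is that the check happens before any provider call, so a depleted wallet fails fast with a 402 instead of accruing cost mid-stream.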

Feature Comparison

Side-by-side comparison of governance capabilities.

| Feature | OpenRouter | OpenSourceAIHub |
| --- | --- | --- |
| Multi-provider model access | 200+ models | 100+ models, 9 providers |
| OpenAI SDK compatible | Supported | Drop-in (baseURL + apiKey) |
| PII detection & redaction | Not available | 28 entity types, real-time |
| Prompt injection firewall | Not available | Multi-layer detection + BLOCK |
| Vision / image OCR scanning | Not available | Base64 OCR with DLP enforcement |
| Per-project DLP policies | Not available | Custom per-entity rules, templates, versioning |
| Policy version history | Not available | Immutable versions with full audit trail |
| Custom regex patterns | Not available | Enterprise IP Guard per project |
| Budget enforcement (402) | Credit limits (soft) | Pre-flight hard stop, auto max_tokens cap |
| Managed wallet credits | Prepaid credits | Prepaid wallet with Smart Router |
| BYOK (zero-cost passthrough) | Not available | Free — your keys, your cost |
| Smart cost routing | Basic (by model availability) | Indexes pricing, selects cheapest provider |
| Per-request audit logging | Usage history | Entity types, actions, correlation IDs, scan timing |
| Project-scoped API keys | Not available | oah_* keys with isolated policies and analytics |
| Self-hostable / open-source | Hosted only | Open-source gateway |

Audit Logs and Policy Versioning

Compliance auditors don't ask “which model did you use?” — they ask “what controls were in place, what data was intercepted, and can you prove it?” OpenSourceAIHub provides the evidence:

Per-request metadata logging

Every request is logged with entity types detected, actions taken (REDACT/BLOCK), scan timing (x-hub-scan-ms), model used, provider selected, and a unique correlation ID (x-hub-correlation-id). The actual prompt content is never stored.

Immutable policy versions

Every DLP policy save creates a new immutable version (v1, v2, v3...). Old versions are deactivated but never deleted. You can see who created each version, when, what entities it covered, and what it blocked.

Policy restore with history

Revert to any previous policy version with one click. The revert creates a new version (preserving the full history), so you always have a complete timeline of policy changes.

Per-project usage dashboards

Token usage, cost, and violation counts are tracked per project. Dashboard charts include timeline markers at each policy change point, so you can measure the impact of policy updates.

Response headers — audit trail on every request
HTTP/1.1 200 OK
x-hub-scan-ms: 14
x-hub-violations: EMAIL_ADDRESS,US_SSN
x-hub-correlation-id: req_a1b2c3d4
x-hub-model: llama-3-70b
x-hub-provider: groq
content-type: application/json

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "oah/llama-3-70b",
  "choices": [...],
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 120,
    "total_tokens": 165
  }
}
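Since the x-hub-violations header above is a plain comma-separated list, client code can turn it into structured data for its own dashboards or alerts. This tiny helper is an illustration, not part of any SDK.

```typescript
// Parse the comma-separated x-hub-violations header into entity names.
// Headers.get() returns null when the header is absent (no violations).
function parseViolations(header: string | null): string[] {
  if (!header) return [];
  return header
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// e.g. parseViolations(res.headers.get("x-hub-violations"))
```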

Project-Level DLP Policies

Different applications have different security requirements. A healthcare app needs to block SSNs; a developer tool needs to catch API keys; a fintech product needs credit card detection. OpenSourceAIHub lets you define per-project policies so each team gets exactly the controls they need:

DLP Policy — Custom per-project configuration (JSON)
{
  "version": 3,
  "action": "BLOCK",
  "entities": [
    "US_SSN",
    "CREDIT_CARD",
    "EMAIL_ADDRESS",
    "PHONE_NUMBER",
    "API_KEY",
    "AWS_ACCESS_KEY",
    "AWS_SECRET_KEY",
    "PRIVATE_KEY",
    "GITHUB_TOKEN",
    "PROMPT_INJECTION"
  ],
  "custom_regex": [
    {
      "name": "INTERNAL_PROJECT_CODE",
      "pattern": "PRJ-[A-Z]{2}-\d{4,}",
      "description": "Internal project identifiers"
    },
    {
      "name": "CUSTOMER_ACCOUNT_ID",
      "pattern": "CUST-\d{8,12}",
      "description": "Customer account numbers"
    }
  ]
}

Policy Resolution Flow

# Every request follows this resolution path:

1. Request arrives → authenticate key

2. Is it a project key (oah_*)?
   ├─ Yes → Does project have a custom DLP policy?
   │   ├─ Yes → Use project policy (v3: BLOCK, 10 entities + 2 regex)
   │   └─ No → Use global default (REDACT, all 28 entities)
   └─ No → Is it a Hub key (os_hub_*)?
       └─ Use global default (REDACT, all 28 entities)

3. Scan all messages with resolved policy

4. Redact or block → forward to LLM (if not blocked)
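The same resolution order can be sketched in TypeScript. The types and helper below are illustrative, not the gateway's real internals.

```typescript
type Action = "REDACT" | "BLOCK";

interface DlpPolicy {
  action: Action;
  entities: string[];
  version?: number;
}

// Global default: REDACT across all entity types (abbreviated here)
const GLOBAL_DEFAULT: DlpPolicy = {
  action: "REDACT",
  entities: ["US_SSN", "CREDIT_CARD", "EMAIL_ADDRESS" /* ...all 28 */],
};

function resolvePolicy(
  apiKey: string,
  projectPolicies: Map<string, DlpPolicy>, // keyed by project API key
): DlpPolicy {
  if (apiKey.startsWith("oah_")) {
    // Project key: prefer the project's custom policy when one exists
    return projectPolicies.get(apiKey) ?? GLOBAL_DEFAULT;
  }
  // Hub keys (os_hub_*) always use the global default
  return GLOBAL_DEFAULT;
}
```

Note the fallback direction: a project key without a custom policy degrades to the global default, never to no scanning at all.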

Zero-Config Protection

Even without custom policies, every request is protected by the Global Default Policy — all 28 entity types scanned, all PII redacted, prompt injection patterns blocked. You are secure from your very first API call.

BYOK Mode: Enterprise Security at Zero Cost

OpenRouter requires all requests to go through their billing system. OpenSourceAIHub offers a fundamentally different model: Bring Your Own Key (BYOK) — store your existing provider API keys in the Hub and use the entire governance layer for free.

BYOK Mode

  • Store your own provider API keys (AES-256-GCM encrypted)
  • Zero Hub cost — provider bills you directly
  • Full DLP firewall, vision OCR, audit logging — all included
  • Per-project policies and analytics still apply

Managed Mode

  • Prepaid wallet ($1 = 1M Hub Credits)
  • Smart Router selects cheapest provider
  • Pre-flight balance check + 402 enforcement
  • 25% markup (open-source) / 30% (closed-source)
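Based on the pricing above ($1 = 1M Hub Credits, 25% or 30% markup), converting a provider's USD cost into Hub Credits is simple arithmetic. The exact rounding rule the Hub applies is an assumption in this sketch.

```typescript
// Convert a provider's USD cost into Hub Credits under Managed Mode pricing.
// $1 buys 1,000,000 Hub Credits; markup is 25% (open-source) or 30% (closed).
// Rounding to the nearest credit is an assumption for this sketch.
function hubCredits(providerCostUsd: number, openSourceModel: boolean): number {
  const markup = openSourceModel ? 1.25 : 1.3;
  return Math.round(providerCostUsd * markup * 1_000_000);
}

// A $0.002 open-source completion costs 2,500 Hub Credits ($0.0025).
```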
BYOK — use the governance layer for free
import OpenAI from "openai";

// Your existing provider keys are securely managed in the Hub dashboard
// with enterprise-grade authenticated encryption.
// The Hub uses them to route requests — you pay the provider directly.

const client = new OpenAI({
  apiKey: "oah_proj_production_xxxxx",  // project-scoped key
  baseURL: "https://api.opensourceaihub.ai/v1",
});

const response = await client.chat.completions.create({
  model: "oah/gpt-4.1",
  messages: [{ role: "user", content: prompt }],
  max_tokens: 512,
});

// What happens under the hood:
// 1. Hub authenticates your project key
// 2. Resolves project DLP policy (v3: BLOCK on 10 entities + 2 regex)
// 3. Scans all messages — redacts EMAIL_ADDRESS, blocks US_SSN
// 4. Detects BYOK key for OpenAI → uses YOUR OpenAI credentials
// 5. Forwards cleaned request to the resolved provider
// 6. Logs: entity types, actions, timing, correlation ID
// 7. Hub cost: $0.00 (BYOK mode)

When to Use Each

OpenRouter

Best for model exploration and personal projects:

  • Trying out new models from many providers
  • Personal projects with non-sensitive data
  • Benchmarking and model comparison research
  • Community-driven model discovery

OpenSourceAIHub

Built for production AI systems with governance requirements:

  • Applications processing customer PII
  • Teams with GDPR, HIPAA, or PCI-DSS requirements
  • Multi-team orgs needing per-project policy isolation
  • Enterprises that need auditable policy versioning
  • Vision/multi-modal apps with sensitive image data
  • Teams wanting BYOK with free governance

Add Enterprise Governance to Your AI Infrastructure

Create an account, get your API key, and every request is automatically scanned, policy-enforced, and audited — from your very first API call. No configuration required.