🔒 Internal Draft
Privacy Infrastructure for AI Agents

The privacy layer
your AI agent is missing.

Encrypted memory. User-held keys. Zero-retention inference. Agent runtime. All in a single TypeScript SDK — so you can ship a private AI product without building six systems first.

Start building → Read the docs

Open-core  ·  MIT licensed  ·  Hosted on krava.ai

Why This Exists

Building AI agents that handle sensitive data
means building six things correctly.
Most teams build zero.

Passkey authentication. Client-side cryptography. Encrypted vector storage. TEE-routed inference. Capability-based access control. Tamper-evident audit logs.

Get any one wrong and the privacy guarantee breaks. Build all six yourself and you've spent a quarter not shipping your product.

Krava is those six systems, packaged as an SDK drop-in.

// 01

No keys in your stack

Every API call routes through Krava's encrypted proxy. Your agent never touches a model API key directly — even if the container is compromised.

// 02

No plaintext in the database

User data is encrypted client-side before it reaches storage. The key lives with the user. Krava cannot read it. Neither can law enforcement with a subpoena.

// 03

No inference logs

For sensitive workloads, inference runs inside NVIDIA hardware enclaves (TEEs). The GPU operator cannot see the data in memory during processing. Not a policy. A hardware guarantee.

The SDK

Drop in. Ship private.

Two entry points. Same privacy guarantee underneath.

Track 1 — Chat Agent

Custom Persona

You define the persona. Krava provides the encrypted substrate. Streamed conversations with user-keyed encrypted memory and zero-retention TEE inference — behind a single SDK call.

import { createKravaClient } from '@krava/sdk'

const krava = createKravaClient({
  appKey: process.env.KRAVA_APP_KEY
})

// Provision a user — returns encrypted userToken
const { userToken } = await krava.provisionUser({
  externalUserId: user.id
})

// That's it. Encrypted memory,
// passkey identity, zero-retention inference.

Track 2 — Autonomous Agent

Agent Runtime

Provision a per-user agent pod in a zero-trust environment. Built-in tools: web search, email, calendar, Telegram. Each user gets their own containerized runtime — not a shared service.

// Spin up an autonomous agent for this user
const agent = await krava.provisionAgent({
  userToken,
  region: 'eu',     // GDPR-aligned by default
  tools: ['search', 'email', 'calendar']
})

// Agent runs in an isolated container.
// All inference routes through Krava's proxy.
// Agent never holds an API key.
await agent.start()

SDK: TypeScript · MIT licensed · npm install @krava/sdk · OpenAI-compatible inference API · 3-region deployment (EU · US · SE)

Architecture

Your app. Krava in the middle.
Your users' data stays theirs.

Your Lovable / Bolt App
React · Vite · Web
        ↓ userToken (browser-safe)
Krava Platform
auth · encryption · memory · inference · agent runtime · audit trail
        ↓ encrypted at rest + in transit
Krava Infrastructure
Supabase · RunPod · Tinfoil TEE · Anthropic · Fireworks · OSS

step_01 — register

Register your app.

You get an appKey. Keep it server-side. Never expose it to the browser.

step_02 — provision

Provision your users.

Call Krava from your server with the appKey. Each user gets a userToken — a time-limited credential safe to pass to the browser.
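As a sketch of that server-side exchange (the handler shape and the injected `provision` callback are illustrative, not SDK API): authenticate the user yourself first, then mint their userToken. The appKey never leaves the server.

```typescript
// Hypothetical server-side handler: the appKey stays on the server;
// only the short-lived userToken is returned to the browser.
type ProvisionFn = (args: { externalUserId: string }) => Promise<{ userToken: string }>

async function handleProvision(
  sessionUserId: string | null,
  provision: ProvisionFn // e.g. krava.provisionUser, bound to your server-side appKey
): Promise<{ status: number; body: { userToken?: string; error?: string } }> {
  // Only provision users your own auth layer has already verified.
  if (!sessionUserId) {
    return { status: 401, body: { error: 'not signed in' } }
  }
  const { userToken } = await provision({ externalUserId: sessionUserId })
  return { status: 200, body: { userToken } }
}
```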

step_03 — ship

Ship.

Your frontend calls Krava with the userToken. Every user now has encrypted memory, passkey identity, and model-agnostic inference.
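A hedged sketch of what that frontend call could look like against the OpenAI-compatible inference API. The endpoint path and payload shape here are assumptions for illustration, not documented API; the point is that the browser holds only the userToken, never a provider key.

```typescript
// Hypothetical: build an OpenAI-compatible chat request authorized by the
// browser-safe userToken. No provider API key ever appears client-side.
function buildChatRequest(userToken: string, content: string) {
  return {
    url: 'https://api.krava.ai/v1/chat/completions', // assumed endpoint path
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${userToken}`, // user-scoped, time-limited
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'auto', // let Krava route by sensitivity, cost, capability
        messages: [{ role: 'user', content }],
        stream: true,
      }),
    },
  }
}

// Usage: const { url, init } = buildChatRequest(userToken, 'Hello')
//        const res = await fetch(url, init)
```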

"The security model is Stripe's: a server-side secret key that never touches the browser, and a client-safe token for everything downstream."

The Privacy Stack

Production-grade primitives.
Not a wrapper. Not a contract.

Four privacy guarantees. Each independent. All running together.

🔑

Passkey Identity

No username. No password. No email. Users authenticate with Face ID or Touch ID via WebAuthn. Their cryptographic handle is a hash — Krava stores it, but cannot reverse it to a real identity. Even under legal compulsion.

🔐

AES-256-GCM Encryption

Every memory item is encrypted with a key derived from the user's authentication token — 100,000 rounds of PBKDF2, per-user salt, per-message nonce. The ciphertext reaches the database. The key never does.

🖥️

Tinfoil TEE Inference

For sensitive workloads, inference runs inside NVIDIA H100/H200 Trusted Execution Environments. The GPU operator cannot read data in memory during processing. SOC 2 Type II. Open-source stack — cryptographically verifiable, not just auditable.

🔀

Model-Agnostic Routing

Route to Anthropic, OpenAI, Fireworks, or self-hosted models — based on task sensitivity, cost, and capability. Sensitive tasks go to Tinfoil; general tasks use commercial APIs. Users and developers are never locked to a single provider.
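A routing policy of that shape might look like the following. The provider names come from this page; the decision function and its inputs are a simplified assumption, not the SDK's actual router.

```typescript
// Hypothetical routing policy: sensitive work goes to TEE inference,
// everything else is split by capability vs cost.
type Sensitivity = 'sensitive' | 'general'
type Provider = 'tinfoil-tee' | 'anthropic' | 'fireworks'

function routeModel(sensitivity: Sensitivity, needsFrontierModel: boolean): Provider {
  if (sensitivity === 'sensitive') return 'tinfoil-tee' // hardware-enclave inference
  return needsFrontierModel ? 'anthropic' : 'fireworks' // capability vs cost
}
```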

Built with Krava

We eat our own cooking.

Before asking other developers to trust this infrastructure, we built two real products on top of it.

Live at krava.ai/coach

Krava Coach

The private AI for senior leaders.

A coaching product for executives and founders working through sensitive decisions — board dynamics, layoffs, fundraising, co-founder conflict — where existing options all fail the privacy bar.

Krava Coach runs entirely on the SDK: passkey login, encrypted conversation memory, TEE inference for the highest-sensitivity sessions, and three coaching modes (Vent · Decision Lab · Reframe & Reset). It has paying subscribers.

Every feature a developer could build is visible and documented in the open-source repository.

stack: passkey_auth · aes-256-gcm_memory · tinfoil_tee · track_1_sdk
In development

Krava Agent

A private AI agent that lives in Telegram.

An autonomous AI assistant — model-agnostic, Telegram-native, capable of multi-step work (research, drafting, inbox triage, meeting prep, decision support) — where the user holds the keys to everything the agent knows about them.

Each user gets their own containerized agent instance. The agent never holds an API key. All inference routes through Krava's encrypted proxy.

This is what an AI agent looks like when it is architecturally incapable of leaking — not just contractually prohibited.

stack: openclaw_runtime · runpod_per-user · tinfoil_tee · track_2_sdk
Pricing

Start free. Pay for what you use.

Open-core: self-host forever, free. Hosted: managed gateway, usage-based.

open_source

Free

Forever. No limits.

MIT Licensed
  • Full SDK · MIT licensed
  • Self-host on your own infrastructure
  • All privacy primitives included
  • Community support via Discord
  • No usage limits on self-hosted
Clone on GitHub →

enterprise

Custom

Regulated verticals · dedicated infra

  • HIPAA BAA
  • Dedicated infrastructure
  • Custom SLA
  • Founder-led onboarding
  • Regulated-vertical pricing
Talk to us →

Your AI.
Your data.
Your control.

AI agents will touch more sensitive data in the next two years than every enterprise SaaS system of the last decade. The model companies are not going to solve this — it is architecturally opposed to their interests.

Krava is infrastructure for developers who want to build agents that are incapable of leaking — not just contractually prohibited.

The privacy story is real today. The keys are yours. The code is open.

Start with the SDK →