OMEGA Ω
GitHub →

OMEGA Ω

Personal AI Agent Infrastructure

Your AI. Your server. Your rules.

Technical Whitepaper v1.0 · February 2026
Ivan Lozada · ivan@omegacortex.ai
omegacortex.ai · github.com/omega-cortex/omega
Rust · 11,127 lines · 6 crates · SQLite · MIT License

Contents

  1. Abstract
  2. The Problem
  3. Architecture
  4. The Gateway Pipeline
  5. Memory System
  6. Provider Strategy
  7. Security Model
  8. Skills & Extensibility
  9. Distribution & Installation
  10. Competitive Analysis
  11. Trading Integration
  12. Roadmap
  13. Conclusion

1. Abstract

Omega is a personal AI agent built in Rust that connects messaging platforms (Telegram, WhatsApp) to AI providers (Claude Code CLI, Anthropic API, OpenAI, Ollama) through a secure, memory-persistent gateway. It runs as a single binary on user-owned hardware with no cloud dependency, no subscription, and no data leaving the machine beyond the AI provider API calls.

At 11,127 lines of Rust across 6 crates, Omega delivers persistent conversation memory, fact extraction, scheduled tasks, OS-level sandboxing (Seatbelt on macOS, Landlock on Linux), prompt injection filtering, and a markdown-based skill system — in a binary that uses approximately 20 MB of RAM at runtime.

This paper describes the architecture, security model, and design philosophy behind Omega, and positions it within the emerging landscape of personal AI agents.

2. The Problem

The current generation of AI agents suffers from three fundamental weaknesses: bloated codebases that expand the attack surface, cloud dependencies that surrender user data to third parties, and shallow security models that treat sandboxing as an afterthought.

OpenClaw, the most popular open-source AI agent as of early 2026 with over 100,000 GitHub stars, exemplifies these tradeoffs. Built in JavaScript across 430,000+ lines of code, it carries CVE-2026-25253 (CVSS 8.8), consumes 200–500 MB of RAM, and exposes over 21,000 instances to the public internet. Users routinely connect it to funded cryptocurrency wallets, email accounts, and messaging platforms with minimal isolation between the AI's execution environment and sensitive system resources.

The AI agent space has prioritized feature velocity over architectural integrity. Omega takes the opposite approach: security-first design, minimal attack surface, and memory safety guaranteed by the Rust compiler.

3. Architecture

Omega is a Cargo workspace with 6 crates, each responsible for a single domain. The workspace compiles to a single binary with no runtime dependencies beyond the AI provider CLI.

CratePurposeLines
omega-coreTypes, traits, config, error handling, prompt sanitization1,288
omega-providersAI backends: Claude Code CLI, Anthropic, OpenAI, Ollama, OpenRouter669
omega-channelsMessaging: Telegram, WhatsApp (with session store)1,904
omega-memorySQLite storage, conversations, facts, audit log, migrations1,516
omega-skillsMarkdown-based skill/plugin system, MCP integration971
omega-sandboxOS-level isolation: Seatbelt (macOS), Landlock (Linux)294
src/ (binary)Gateway, commands, init wizard, selfcheck, service manager4,478

The architecture follows a strict dependency hierarchy: channels and providers depend on core, memory depends on core, skills depends on core and memory, and the binary crate depends on everything. No circular dependencies exist. The Rust compiler enforces this at build time.

4. The Gateway Pipeline

Every message entering Omega passes through a 7-stage pipeline in gateway.rs (2,304 lines). No stage can be bypassed. The pipeline is sequential and deterministic:

StageFunctionFailure Mode
1. AUTHVerify sender against allowed_users listReject with "Not authorized"
2. SANITIZEStrip prompt injection patterns from inputClean input, log attempt
3. WELCOMEDetect first-time users, send localized greetingSkip if returning user
4. COMMANDSHandle /commands locally (no AI call)Pass through if not a command
5. MEMORYBuild context from history + facts + summariesProceed with empty context
6. PROVIDERSend to AI backend, receive responseReturn error message to user
7. STORE+AUDITSave exchange to SQLite, log to audit trailLog failure, continue

Three background loops run concurrently: a Summarizer that compresses idle conversations (30+ minutes of inactivity) into summaries for future context, a Scheduler that executes timed tasks, and a Heartbeat that monitors system health. All loops are async Tokio tasks with graceful shutdown via SIGTERM handling.

5. Memory System

Omega's memory is backed by SQLite via the sqlx crate with compile-time verified queries. The memory store (1,413 lines) manages four data types:

The database schema evolves through numbered migrations (currently 5). Each migration is idempotent and runs automatically on startup. The memory database is local to the machine and never transmitted externally.

Context assembly follows a priority hierarchy: current conversation messages first, then extracted facts, then recent summaries. A configurable max_context_messages parameter (default: 50) prevents token overflow while maintaining conversational coherence.

6. Provider Strategy

Omega implements a trait-based provider abstraction that decouples the gateway from any specific AI backend. The Provider trait defines a single async method: send a context and receive a response.

ProviderStatusMechanismCost
Claude Code CLIProductionSubprocess: claude -p --output-format jsonClaude Max ($200/mo)
Anthropic APIStubDirect HTTPS to api.anthropic.comPay per token
OpenAI / CodexStubDirect HTTPS or Codex CLI subprocessPay per token
OllamaStubLocal HTTP to localhost:11434Free (local GPU)
OpenRouterStubHTTPS proxy to multiple providersPay per token

The Claude Code CLI provider (657 lines) is the primary production backend. It invokes the claude binary as a subprocess with structured JSON output, handles max_turns auto-resume (when Claude hits its turn limit, Omega automatically restarts with context), and parses tool use results. The provider respects the user's local Claude authentication, requiring no additional API key management.

7. Security Model

Security in Omega operates at four layers, each independent of the others.

7.1 Memory Safety

Rust's ownership model eliminates buffer overflows, use-after-free, and data races at compile time. The entire codebase compiles with zero unsafe blocks. This is not a feature — it is a structural property of the language choice.

7.2 Authentication

Every incoming message is checked against an allowed_users list configured in config.toml. Messages from unauthorized senders are rejected immediately at stage 1 of the pipeline, before any processing occurs. The auth system supports both Telegram user IDs (numeric) and WhatsApp phone numbers.

7.3 Prompt Injection Filtering

The sanitize module (146 lines) strips known prompt injection patterns from user input before it reaches the AI provider. This includes attempts to override the system prompt, inject hidden instructions, or manipulate the conversation context. Patterns are matched and neutralized without blocking the underlying user intent.

7.4 OS-Level Sandboxing

When the AI provider executes commands (via Claude Code's Bash tool, for example), Omega can restrict what the AI is allowed to access at the operating system level:

ModeReadWriteUse Case
sandboxWorkspace onlyWorkspace onlyNon-technical users (default)
rxEntire hostWorkspace onlyAnalysis tasks, log reading
rwxEntire hostEntire hostDevelopers, sysadmins

On macOS, sandboxing is enforced via Seatbelt (the same mechanism used by the App Store). On Linux, it uses Landlock, a kernel-level access control system available since Linux 5.13. Both mechanisms are OS-native and cannot be circumvented by the AI provider.

8. Skills & Extensibility

Omega's skill system (971 lines) allows users to extend the agent's capabilities by dropping SKILL.md files into a directory. Skills are markdown documents that define:

Four skills ship with Omega: claude-code (default reasoning engine), google-workspace (Gmail, Drive, Calendar integration), playwright-mcp (browser automation via MCP), and skill-creator (a meta-skill for creating new skills). Users can create custom skills by writing a markdown file — no code compilation required.

The Model Context Protocol (MCP) is supported through skill definitions that reference MCP servers. When a skill with an MCP configuration is active, Omega connects to the specified MCP server and makes its tools available to the AI provider.

9. Distribution & Installation

Omega is distributed as a single binary for three platforms. The installation experience is designed around a 3-minute target for non-technical users.

9.1 Native Installers

macOS users download a .dmg, Windows users download a .exe, and Linux users download a .deb or .AppImage. All platforms also support a one-line shell installer:

curl -fsSL https://omegacortex.ai/install.sh | bash

9.2 The Installer Wizard

The native installer runs a guided setup that asks four questions: which messaging platform (WhatsApp, Telegram, or Discord), which AI provider (Ollama for free local inference, Claude API, or OpenAI), which security mode (sandbox, read+execute, or full access), and then displays a QR code for WhatsApp or prompts for a bot token for Telegram/Discord. The entire process completes in under 3 minutes.

9.3 Developer Installation

Developers can build from source via Cargo. The repository includes a flake.nix for reproducible builds via Nix. After building, omega init runs an interactive TUI wizard (built with cliclack) that configures the agent and optionally installs it as a system service (macOS LaunchAgent or Linux systemd unit).

10. Competitive Analysis

MetricOmegaOpenClawTypical Agent
LanguageRustJavaScriptPython / JS
Lines of code11,127430,000+varies
Runtime memory~20 MB200–500 MBvaries
Known CVEs0CVE-2026-25253varies
DistributionSingle binarynpm + dependenciespip / npm
OS-level sandbox
Prompt injection filter
Persistent memory✓ (SQLite)✓ (SQLite)
Multi-provider✓ (5 backends)✗ (Claude only)varies
Algo trading✓ (Binance TWAP)LLM-based (vibes)
Self-hosted✗ (usually cloud)

The comparison is not about feature count. OpenClaw offers more integrations (13+ messaging platforms, dozens of community skills). Omega's thesis is that for a personal AI agent — one that manages your messages, files, and potentially your finances — security properties are more important than integration breadth. A smaller, auditable codebase with OS-level isolation is a fundamentally different product than a large, extensible platform with known vulnerabilities.

11. Trading Integration

Omega includes an experimental algorithmic trading skill built as a standalone Rust CLI (binance-algo-rs) that interfaces with Binance Futures via HMAC-SHA256 signed REST API calls. The trading system supports TWAP (Time-Weighted Average Price) and VP (Volume Participation) execution algorithms — institutional-grade order types designed to minimize market impact.

The design philosophy explicitly rejects the "vibes trading" approach prevalent in the AI agent space, where users give LLMs direct access to funded wallets and ask them to make buy/sell decisions based on sentiment analysis. This approach has been shown to produce poor risk-adjusted returns (Sortino ratios below 0.05 in controlled benchmarks).

Instead, Omega's trading integration follows three principles:

12. Roadmap

PhaseDeliverableStatus
1Foundation: gateway, auth, memory, Telegram, Claude Code✓ Complete
2WhatsApp channel, skill system, sandbox, init wizard✓ Complete
3Native installers (.dmg, .exe, .deb), QR-based setupIn progress
4Ollama + OpenAI + Anthropic API providersPlanned
5Discord channel, voice messages (Whisper), filesystem watcherPlanned
6Trading skill GA, additional exchange integrationsPlanned

13. Conclusion

Omega represents a different philosophy in the AI agent space: that the agent managing your personal communications, files, and financial transactions should be built with the same engineering rigor applied to critical infrastructure. Memory safety is not optional. Sandboxing is not an afterthought. Auditability is not a requirement — it is the point.

At 11,127 lines of Rust, Omega is not the most feature-rich agent available. It is, by design, the most auditable. Every line can be read and understood by a single developer in a single sitting. The entire binary fits in 20 MB of RAM. The entire codebase fits in a single code review.

The question is not whether AI agents will become ubiquitous — they will. The question is whether users will trust them with access to the most sensitive parts of their digital lives. Omega is built for the users who demand that trust be earned through engineering, not marketing.