Air-Gapped · Local-First · Private

Your AI journey. Your data. Your terms.

DREA Desktop is the privacy-first, air-gapped AI app for macOS. She runs local LLMs on Apple Silicon for on-device discovery, research, and evaluation—and safely bridges you to external AI agents like Claude and ChatGPT through a built-in trust proxy, so you can explore fearlessly without ever compromising what's yours.

0 secrets exposed
Any AI safely connected
100% local by default
DREA v2.0.2

You're already using AI.
DREA makes it safe.

You're a professional exploring what AI can do for your work—analyzing contracts, synthesizing research, evaluating strategies. The possibilities are extraordinary. But every step forward comes with a question: where is my data going?

DREA is built for that question.

She's your companion on this journey—a local-first platform that processes documents, runs AI reasoning, and manages your knowledge base right on your machine. And when you want to reach further—to leverage Claude, ChatGPT, or any external AI—DREA acts as a trust layer, sanitizing sensitive information before anything leaves your environment.

You lead the discovery. You direct the research. You make the evaluations. DREA is simply the tool that ensures you can do all of it without compromise.

🔒
Trust Layer: Sanitizes data before it reaches any external AI
🧠
Deep Thinker: Chain-of-thought reasoning via DeepSeek R1
📚
Total Recall: Reads PDFs, DOCX, videos, images, and more
⚡
Always Ready: Works fully offline or alongside cloud AI—your call

Use any AI.
Compromise nothing.

DREA Desktop combines on-device document processing, local semantic search, and air-gapped AI inference on Apple Silicon—and when you need more, she safely bridges you to external AI without exposing what matters.

Safe External AI Collaboration

The heart of DREA: work with Claude, ChatGPT, Copilot, or any AI agent—without your sensitive data ever reaching them. DREA's trust proxy automatically sanitizes PII and proprietary information before anything leaves your environment, with full audit logging so you always know exactly what was shared.

Your knowledge, amplified by any AI. Your secrets, kept by DREA.

Multi-Format Ingestion

Drag and drop PDFs, Word docs, PowerPoint, Excel, HTML, Markdown, images with OCR, and even videos up to 500MB. All processed locally.

PDF · DOCX · PPTX · XLSX · HTML · MP4 · Images

Intelligent Search

Every query is classified, expanded into semantic variants, and reranked by a dedicated helper AI—delivering grouped evidence cards with relevance notes, not raw excerpts.

Local LLM Reasoning on Apple Silicon

A lightweight helper LLM (Qwen3-4B) routes every query, while DeepSeek R1 handles deep chain-of-thought reasoning—both run natively on Apple Silicon, fully offline, no cloud needed.

Agentic System

Specialized agents for documents, queries, system management, and workspace operations—coordinating your research autonomously.

Zero-Trust, Air-Gapped Architecture

API key auth, path traversal protection, network scope validation, and trust-proxy PII sanitization. Privacy-first security at every layer of the DREA desktop app.

Admin Dashboard

Real-time health monitoring, agent session tracking, log viewer, hook management, and full configuration control.

See DREA in action.

A short walkthrough of the discovery flow — from blank prompt to cited answer, all on-device.

DREA · discovery.mov Recording

Local foundation.
Global reach.

Every core component runs on your machine. When you choose to reach external AI, DREA's trust proxy stands between your data and the outside world.

DREA Desktop (Tauri + Rust)
  • Search UI · Vue 3
  • Service Manager · Rust
Embedded services
  • DREA API · FastAPI :8000
  • ChromaDB · Embedded
  • llama.cpp LLM · :8080
Docker / Podman Compose (optional server stack)
  • DREA API · FastAPI :8000
  • Admin UI · Vue 3 :8001
  • Qdrant Vector DB · :6333
  • Embeddings · BGE GGUF + GME
  • llama.cpp LLM · :8080

How Your Data Flows

📄
Documents
🔧
Parse & Chunk
🧮
Embed Locally
🗃
Vector Store
💡
AI-Powered Answers
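The five steps above can be sketched end to end in standard-library Python. The `chunk`, `embed`, and `search` helpers are illustrative assumptions: in the real pipeline, parsing is format-aware and embedding is done by BGE-small rather than a word hash, but the shape of the flow is the same.

```python
import hashlib
import math

def chunk(text: str, size: int = 40) -> list[str]:
    """Parse & Chunk: split a document into overlapping word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def embed(text: str, dims: int = 16) -> list[float]:
    """Embed Locally: toy stand-in for a real embedding model.
    Hashes words into a fixed-size, normalized bag-of-words vector."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

store: list[tuple[str, list[float]]] = []  # Vector Store: (chunk, embedding)

def ingest(doc: str) -> None:
    for c in chunk(doc):
        store.append((c, embed(c)))

def search(query: str, k: int = 1) -> list[str]:
    """Answers start here: nearest chunks by inner-product similarity."""
    q = embed(query)
    scored = sorted(store, key=lambda e: -sum(a * b for a, b in zip(q, e[1])))
    return [text for text, _ in scored[:k]]

ingest("The vendor contract renews every January and requires 60 days notice to cancel.")
print(search("when does the contract renew"))
```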

Professionals exploring AI
who refuse to compromise.

Wherever your AI journey takes you, DREA Desktop keeps sensitive work grounded on-device—a private, air-gapped AI that stays local even when you reach for the cloud.

⚖️

Law Firms

Search every contract, brief, and filing in seconds. When local answers aren't enough, DREA drafts a safe instruction for external AI — stripped of client names, case details, and privileged information — and waits for your approval before sending.

🏥

Healthcare Organizations

Search patient records, clinical notes, and research locally. Need broader medical knowledge? DREA drafts an external AI instruction with all patient identifiers removed — you see exactly what will be sent and approve it before anything leaves.

💼

Consulting & Advisory

Client deliverables never leave your laptop. When you want external AI to help with analysis, DREA prepares a safe instruction that protects your client's information — you control what goes out and what stays private.

🏭

Government & Public Sector

Keep sensitive documents on your machine where they belong. If you ever want external AI input, DREA writes a safe, de-identified instruction for your review — you decide whether to send, edit, or discard it.

🔧

Engineering & Manufacturing

Technical specs, BOMs, and inspection reports searchable as clean tables. When you need industry standards or best practices, DREA drafts the instruction for external AI — your proprietary designs stay on your machine.

🏢

Small Business & Operations

Invoices, vendor contracts, employee docs, inventory spreadsheets — DREA turns messy files into searchable knowledge. External AI is there if you need it — DREA drafts the instruction, you make the call.

Runs on the Mac
you already have.

DREA Desktop is built for Apple Silicon and tuned for local, air-gapped AI. Pick the Mac configuration that fits how you work — you can always grow into more capability later.

Start Here
Intelligent Search

DREA Lite

Apple Silicon · 16GB Memory
Qwen3-4B + BGE-small · Helper LLM routes every query
  • AI-powered query classification, expansion, and reranking
  • Grouped evidence cards with relevance notes
  • Spreadsheet data rendered as formatted tables
  • Safely use Claude, ChatGPT, or Copilot via trust proxy
  • Detect topics, duplicates, and related content
  • Process PDFs, Word, Excel, images, and more
Maximum Power

DREA Pro

Apple Silicon · 64GB+ Memory
Full Model Stack + Larger LLMs · 32B+ reasoning models + full platform
  • Everything in Full AI, plus:
  • Highest quality reasoning with larger models (32B-70B)
  • Handle thousands of documents at once
  • Map relationships across your entire library
  • Built for teams and organizations

What you can do at each level

What DREA Can Do · 16GB Search & Discover · 24GB Full AI · 64GB+ Professional
Intelligent Search (Qwen3-4B Helper LLM)
  • AI query classification, expansion, and reranking
  • Grouped evidence cards with relevance notes
  • Spreadsheet data rendered as formatted tables
  • Safely work with Claude, ChatGPT, or Copilot
  • Automatically summarize, tag, and classify documents
  • Process PDFs, Word, Excel, images, video & more
Deep Analysis (Requires Reasoning Model, 24GB+)
  • Written answers with step-by-step reasoning
  • Back-and-forth conversations about your docs
  • Multi-LLM orchestration (local + cloud AI)
  • Automated multi-step research and analysis
  • Image and video search (with vision model)
  • Map relationships across your entire library

With 16GB, DREA is a smart research assistant — Qwen3-4B classifies, expands, and reranks every query, delivers grouped evidence cards, and safely connects to external AI via the trust proxy. At 24GB, DREA becomes a full AI orchestration platform — local reasoning, cloud AI, and multi-step analysis working together through one interface. You start wherever you are and grow when you're ready.

System Requirements

OS macOS 12.0 (Monterey) or later; Sonoma or Sequoia recommended
Chip Apple Silicon
RAM 16GB minimum, 24GB+ recommended
Storage 20GB+ free (models + vector database)
Python 3.10+ (bundled with DREA)
Network Not required (fully offline operation)

The AI that powers DREA.
All local. All yours.

Every AI model runs entirely on your Mac. No cloud, no accounts, no data leaving your device. Two models handle every search out of the box—two more unlock vision and deep reasoning when you're ready.

Core — Powers Every Search
Core Engine

Qwen3-4B

The Brain Behind Every Search

Qwen3-4B is the intelligence layer that makes DREA's search actually smart. Before your query ever hits the document index, this model classifies your intent, expands your words into semantic variants, and reranks every result by relevance with a reason. It also powers the trust proxy that keeps your data safe when using External AI.

Why this model? At 2.5 GB quantized (Q4_K_M), Qwen3-4B handles six distinct tasks—classify, expand, rerank, summarize, tag, and conversational fallback—with sub-second response times. Built by the Alibaba Qwen Team, it's optimized for structured JSON output, making it ideal for pipeline utility tasks without the overhead of a large reasoning model. It runs on any Apple Silicon Mac, even with 16 GB of memory.
What Qwen3-4B does on every search:
Classifies your query as search, conversation, or needs-clarification
Expands your words into 3 semantic variants for better recall
Scores and reranks every result with a relevance reason
Sanitizes queries for External AI with zero data leakage
Always Active · Open Source · 2.5 GB · Sub-second · 6 Tasks
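The three per-search steps can be mimicked with rule-based stand-ins to show the shape of the pipeline. In DREA these decisions come from Qwen3-4B as structured JSON; the heuristics and function names below are assumptions, for illustration only.

```python
# Rule-based stand-ins for the three LLM steps. In the real pipeline,
# each of these is a structured-JSON response from Qwen3-4B.

def classify(query: str) -> str:
    """Step 1: route the query as search, conversation, or needs-clarification."""
    words = query.lower().split()
    if len(words) < 2:
        return "needs-clarification"
    if words[0].strip("?,.!") in {"hi", "hello", "hey", "thanks"}:
        return "conversation"
    return "search"

def expand(query: str) -> list[str]:
    """Step 2: produce semantic variants to improve recall."""
    return [query, f"information about {query}", f"documents mentioning {query}"]

def rerank(query: str, results: list[str]) -> list[tuple[str, str]]:
    """Step 3: score each result with a relevance reason (word overlap here)."""
    q_words = set(query.lower().split())
    scored = []
    for r in results:
        overlap = q_words & set(r.lower().split())
        scored.append((len(overlap), r, f"matches terms: {sorted(overlap)}"))
    scored.sort(reverse=True)
    return [(r, reason) for _, r, reason in scored]

print(classify("termination clauses in vendor contracts"))  # search
top = rerank("vendor contracts",
             ["office supply invoice", "vendor contracts for 2024"])
print(top[0])
```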

BGE-small GGUF

Document Memory

When you search for something, BGE-small is why DREA finds it—even when you don't use the exact right words. It converts every document into a searchable vector, enabling semantic matching by meaning, not just keywords.

Why this model? At just 35 MB in a single GGUF file, it loads in under 2 seconds and runs on any Apple Silicon Mac with zero slowdown. Built by the Beijing Academy of AI (BAAI), BGE-small is one of the most widely deployed open-source embedding models. DREA ships the Q8-quantized variant—near-lossless quality at a fraction of the original size.
Always Active · Open Source · 35 MB · 2s Cold Start
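Matching by meaning rather than keywords comes down to comparing embedding vectors. Here is a toy illustration with hand-picked 3-dimensional vectors; real BGE-small embeddings are 384-dimensional and learned, not hand-written.

```python
import math

# Hand-picked toy vectors standing in for real embeddings. "car" and
# "automobile" share no letters but point in nearly the same direction;
# "banana" points somewhere else entirely.
embeddings = {
    "car":        [0.9, 0.1, 0.0],
    "automobile": [0.85, 0.15, 0.05],
    "banana":     [0.05, 0.1, 0.95],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(round(cosine(embeddings["car"], embeddings["automobile"]), 3))
print(round(cosine(embeddings["car"], embeddings["banana"]), 3))
```

A search for "automobile" still surfaces documents about cars because the vectors are close, which is exactly why exact wording stops mattering.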
Optional — Expand Your Capabilities

GME-Qwen2-VL

Visual Intelligence

Adds image and video understanding. Search across screenshots, charts, slides, and video frames with natural language. Built by Alibaba DAMO Academy. Without it, DREA still searches all text content.

Add Vision · 4 GB
Add Reasoning

Reasoning Model

Your Choice of AI Analyst

Adds conversational Q&A and written analysis. This model writes answers, holds conversations, and explains its thinking step by step. You choose what runs here—or skip it entirely.

Default Recommendation
DeepSeek R1 · Shows its reasoning process · Open source · Optimized for Apple Silicon
Also Compatible
Llama · Mistral · Phi · Gemma · Any GGUF model
Why is this optional? DREA already finds and ranks your documents intelligently without a reasoning model—Qwen3-4B and BGE-small handle that. Adding a reasoning model unlocks written answers and multi-step analysis, but requires 24GB+ of memory. We give you the choice so DREA works on the hardware you have.
What a reasoning model unlocks:
Ask questions, get written answers
Back-and-forth conversations about your docs
Step-by-step reasoning you can follow
Automated multi-step research
Optional · Swappable · 24GB+ Recommended
Where Your Data Lives

The AI models above understand your documents. The databases below remember everything they learn—every meaning, every connection, every visual detail—so search is instant and nothing is ever lost.

Built In

ChromaDB

Your Document Vault

ChromaDB is where DREA stores everything it learns about your documents—meanings, relationships, visual content, and more. When you search and get results in under a second, this is why. It runs inside DREA itself, so there's nothing extra to install, configure, or manage.

Why this database? ChromaDB is embedded directly into DREA—no separate server, no background process, no network connection. It works fully offline and keeps your data entirely on your machine. For personal and single-user workflows, it handles thousands of documents without breaking a sweat. You never have to think about it—it just works.
Always Active · Embedded · Zero Config
You Choose

Qdrant

Scalable Storage

When you need more—more users, more documents, or a dedicated database server—Qdrant steps in. It's a standalone vector database that separates your storage from the app, giving you the flexibility to scale without limits.

Why is this optional? ChromaDB handles everything for personal use beautifully. Qdrant is there when you outgrow what's built in—team environments, enterprise deployments, or document libraries in the tens of thousands. DREA can switch between them seamlessly, and your data works the same way regardless of which one you choose.
When to consider Qdrant:
Multiple users accessing the same document library
Very large document collections (10,000+)
Separating the database from the application
Server or enterprise deployments
Optional · Server-Based · Team Ready
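Switching seamlessly between an embedded store and a server store usually comes down to a shared interface, so the rest of the app never knows which backend is active. A sketch of that pattern with in-memory stand-ins; the class names are hypothetical, this is not DREA's actual code, and a real Qdrant client would talk to a server on :6333 instead of a local dict.

```python
from typing import Protocol

class VectorStore(Protocol):
    """Minimal interface both backends satisfy (hypothetical names)."""
    def add(self, doc_id: str, vector: list[float]) -> None: ...
    def count(self) -> int: ...

class EmbeddedStore:
    """Stand-in for the embedded ChromaDB path: lives inside the app."""
    def __init__(self) -> None:
        self._vectors: dict[str, list[float]] = {}
    def add(self, doc_id: str, vector: list[float]) -> None:
        self._vectors[doc_id] = vector
    def count(self) -> int:
        return len(self._vectors)

class ServerStore:
    """Stand-in for a Qdrant client; a real one would send network requests."""
    def __init__(self, url: str = "http://localhost:6333") -> None:
        self.url = url
        self._vectors: dict[str, list[float]] = {}
    def add(self, doc_id: str, vector: list[float]) -> None:
        self._vectors[doc_id] = vector
    def count(self) -> int:
        return len(self._vectors)

def make_store(backend: str) -> VectorStore:
    """One config switch chooses the backend; everything else is unchanged."""
    return ServerStore() if backend == "qdrant" else EmbeddedStore()

store = make_store("embedded")
store.add("doc-1", [0.1, 0.2])
print(store.count())
```

Because both backends present the same interface, "your data works the same way regardless of which one you choose" is a property of the design, not a promise bolted on afterward.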

Your documents are costing you money.
DREA pays for itself.

Stop losing hours digging through files, re-reading reports, and chasing down answers your team already has. DREA Desktop puts everything you know at your fingertips — privately, instantly, on your Apple Silicon Mac. One price. No subscriptions.

Have questions? Need a demo? Want to talk about your use case?

We'd love to hear from you.

Reach out to us — [email protected]