How PAI works

The interface layer that makes your existing AI stack feel domain‑native.

PAI isn't a new model or a competing assistant. It's a system for turning your real‑world needs into structured interfaces that any LLM can use. On this page, we'll walk through how PAI thinks, what it produces, and how it fits into your existing tools and workflows.

PAI sits between your domain and your assistants

Most AI setups jump straight from "user prompt" to "LLM output". PAI adds a missing layer in between:

  1. Understand the domain and intent. What are you actually trying to build or change? What systems are in play?

  2. Shape that into structure. PAI produces scopes, interfaces, contracts, and outlines instead of raw prose.

  3. Feed that structure into your preferred assistant or model. ChatGPT, Claude, Gemini, or internal tools: whatever you already use.

PAI doesn't try to replace your models. It tells them what they're working with and what "good" looks like in your world.

Core concepts inside PAI

PAI's behavior is built around a small set of powerful concepts:

Scopes

A scope is a structured description of a problem space. It captures:

  • What you're building or changing

  • Inputs and outputs

  • Constraints and edge cases

  • Relationships to other tools and services

You can think of scopes as PAI's way of saying: "Here is the exact box we're working inside."
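To make this concrete, here is an illustrative sketch of what a scope could look like as structured data. This is not PAI's actual format; the field names here are assumptions chosen to mirror the bullets above:

```python
from dataclasses import dataclass, field

@dataclass
class Scope:
    """A structured description of a problem space (illustrative only)."""
    objective: str                                    # what you're building or changing
    inputs: list[str] = field(default_factory=list)   # what the solution consumes
    outputs: list[str] = field(default_factory=list)  # what it must produce
    constraints: list[str] = field(default_factory=list)       # limits and edge cases
    related_systems: list[str] = field(default_factory=list)   # neighboring tools/services

scope = Scope(
    objective="Generate monthly settlement reports",
    inputs=["settlement records", "merchant metadata"],
    outputs=["PDF report", "CSV export"],
    constraints=["read-only access to the ledger", "runs on the 1st of each month"],
    related_systems=["ledger service", "report storage"],
)
print(scope.objective)  # -> Generate monthly settlement reports
```

The point is not the exact shape, but that every part of "the box we're working inside" is an explicit field rather than prose the model has to infer.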

Domain models

PAI builds and reuses domain models: named concepts, data shapes, and relationships that show up again and again in your environment.

These models let your assistants stop guessing:

  • What does a "customer" look like here?

  • What is a "payment" in this system?

  • How does this system model data?

Instead of discovering that from scratch each time, PAI gives them a clear, reusable map.
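As a sketch of what such a reusable map could contain (the concept names and shapes below are made-up examples, not PAI's real representation):

```python
# Hypothetical domain model: named concepts, their data shapes, and relationships.
domain_model = {
    "concepts": {
        "Customer": {"fields": ["id", "name", "country"]},
        "Payment": {"fields": ["id", "customer_id", "amount", "currency", "status"]},
    },
    "relationships": [
        ("Payment", "belongs_to", "Customer"),
    ],
}

def describe(concept: str) -> str:
    """Answer 'what does X look like here?' from the map instead of guessing."""
    fields = domain_model["concepts"][concept]["fields"]
    return f"{concept}({', '.join(fields)})"

print(describe("Payment"))  # -> Payment(id, customer_id, amount, currency, status)
```

An assistant given this map doesn't need to rediscover what a "payment" means in your system on every request.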

Tools and interfaces

Scopes and domain models come together in interfaces:

  • What tools exist

  • What they accept and return

  • How they should be called

  • What "success" and "failure" look like

PAI treats tools as first-class citizens, not side notes in a prompt. That makes it much easier for assistants to call tools correctly, and for you to evolve them over time.
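A minimal sketch of what "tools as first-class citizens" could mean in practice. The structure and field names below are assumptions for illustration, not PAI's actual interface format:

```python
from dataclasses import dataclass

@dataclass
class ToolInterface:
    """An explicit contract for a tool, rather than a side note in a prompt."""
    name: str
    accepts: dict[str, str]   # parameter name -> type/constraint description
    returns: str              # what a call yields
    on_success: str           # what "success" looks like
    on_failure: str           # what "failure" looks like

fraud_check = ToolInterface(
    name="fraud_check",
    accepts={"payment_id": "string", "threshold": "float between 0 and 1"},
    returns="a risk score plus a pass/fail verdict",
    on_success="verdict == 'pass'; the payment proceeds",
    on_failure="verdict == 'fail'; the payment is held for review",
)
print(fraud_check.name)
```

Because the contract is explicit data, an assistant can check a call against `accepts` before making it, and you can evolve the interface without rewriting prose in a dozen prompts.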

Knowledgebases

PAI can draw from public, vetted knowledgebases, such as the docs for PAI itself, pai-socket, UI component libraries, and other systems you choose to expose.

For the web interface, those knowledgebases are:

  • Public only

  • Read-only

  • Curated specifically for safe use

This keeps the assistant grounded without giving it direct write access to anything.

Strict prompt structures

Every interaction with underlying LLMs happens through strict, programmatically defined prompt structures.

  • External users never send raw prompts straight into the model.

  • PAI assembles prompts from scopes, domain models, and vetted content.

  • The result is predictable, inspectable, and much easier to reason about.

This is the same discipline you apply to code, applied to prompts.
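The idea can be sketched in a few lines. Assuming (hypothetically) that a scope and a domain model are available as dictionaries, a strict prompt structure means the user's text is slotted into a fixed template rather than sent to the model raw:

```python
def assemble_prompt(scope: dict, domain_model: dict, user_request: str) -> str:
    """Build a prompt programmatically from vetted structure.

    The user's text never reaches the model unframed: it fills one slot
    in a fixed template alongside the scope and domain model.
    """
    return "\n".join([
        "## Scope",
        f"Objective: {scope['objective']}",
        f"Constraints: {'; '.join(scope['constraints'])}",
        "## Domain model",
        ", ".join(domain_model["concepts"]),
        "## Request",
        user_request.strip(),
    ])

prompt = assemble_prompt(
    {"objective": "Add a fraud-check step", "constraints": ["under 200 ms added latency"]},
    {"concepts": ["Customer", "Payment"]},
    "Draft the fraud-check integration.",
)
print(prompt)
```

Because the template is code, every prompt is inspectable and reproducible, which is what makes the behavior predictable.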

From "I need X" to "my assistant knows what to do"

Here's what using PAI looks like in practice.

1. You describe what you're trying to do

You start inside your existing workflow: a terminal, a code sandbox, or your favorite assistant.

Examples:

  • "I need a new tool for generating monthly settlement reports."

  • "I want to add a fraud-check step to this payment flow."

  • "I need an outline for a webhook service that notifies partners about refunds."

2. PAI scopes the problem

PAI turns that vague need into a scope:

  • What entities are involved

  • What inputs/outputs are expected

  • What constraints apply

  • How it interacts with existing tools or services

This scope is structured data, something that both humans and models can work with.
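For instance, the request "I want to add a fraud-check step to this payment flow" could come back as a scope along these lines. The exact fields and values here are invented for illustration; they simply mirror the four bullets above:

```python
request = "I want to add a fraud-check step to this payment flow."

# A hypothetical scope produced from that request:
scope = {
    "entities": ["Payment", "Customer", "FraudVerdict"],          # what's involved
    "inputs": ["payment id", "customer history"],                 # expected inputs
    "outputs": ["risk score", "pass/fail verdict"],               # expected outputs
    "constraints": [                                              # what applies
        "must not add more than 200 ms latency",
        "no PII leaves the payment service",
    ],
    "interacts_with": ["payment flow", "fraud scoring service"],  # existing tools/services
}
print(len(scope["entities"]))  # -> 3
```

Notice that the vague request has been decomposed into fields a model can act on, and a human can review each field before handing it off.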

3. You feed the scope into your assistant

Once the scope looks right, you hand it off:

  • Paste it into ChatGPT, Claude, Gemini, etc.

  • Or wire PAI in as a tool (for example, via MCP) so your assistant can call it directly.

Now the assistant isn't starting from a blank prompt. It's starting from a clear, domain-aware framing of the problem.

4. The assistant generates code, docs, or plans

Your assistant uses the scope to generate:

  • Code and configuration

  • Tests and validation steps

  • Documentation and runbooks

  • Migration or rollout plans

PAI doesn't generate these artifacts itself; it prepares the ground so your assistant can do that well.

5. Iterate with PAI in the loop

As you refine the solution:

  • Update scopes as requirements shift.

  • Enrich domain models as you discover new concepts.

  • Evolve interfaces as you add or remove tools.

PAI becomes the living definition of how your domain fits together, and your assistants benefit from that on every request.

How the paicodes web experience uses PAI

On paicodes, you'll interact with PAI through a web chat and UI. Behind the scenes:

  • pai-socket handles the streaming WebSocket connection and orchestration.

  • PAI builds the scopes, structures, and prompts for each interaction.

  • The underlying models produce responses based on that structure and the public knowledgebases you've allowed.

As a user of PAI, you don't need to touch pai-socket directly. You primarily work with:

  • The PAI CLI in your own development environment

  • PAI as a tool / MCP-style service for your existing assistants

  • The web UI for exploration and experimentation

pai-socket is part of the infrastructure that makes the experience feel real-time and responsive, not a separate product you have to manage.

The stack that built itself

PAI is not just a framework we designed on paper.

The entire PAI stack (the CLI, the internal services, and this site) was built using the same techniques PAI gives you:

  • We used PAI to scope new tools, services, and flows.

  • Those scopes went into LLMs to generate code, tests, and documentation.

  • We iterated with PAI in the loop, refining domain models and interfaces as we went.

We even ran a head-to-head comparison between PAI and Claude Code on a real integration task. Read the case study →

When you adopt PAI, you're not adopting a theory. You're stepping into the workflow that created PAI itself.

Built for the moment when AI is "just computing"

LLMs feel like "AI" today. Soon they'll just feel like normal infrastructure.

PAI is designed for that world:

  • Models come and go

  • Providers change

  • New assistants appear

What endures are the interfaces you define: your scopes, domain models, tools, and contracts. PAI is where you define those in a way that any assistant can use, now and later.