NarraNexus · Core Concepts

Context Engineering

What the agent actually sees when it thinks. How Narrative context, memory, module instructions, and tools are assembled into the LLM input for every interaction.


What the Agent Sees

Every LLM call has two parts: a system prompt (stable context the model reads once) and a messages array (conversation turns the model follows). NarraNexus constructs both dynamically for every interaction.
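The two-part shape of a call can be sketched as follows. This is an illustrative sketch only; the function and field names (`build_llm_call`, `system`, `messages`) are assumptions, not the real NarraNexus API.

```python
# Hypothetical sketch: the two parts of an LLM call as described above.
def build_llm_call(system_sections: list[str], turns: list[dict]) -> dict:
    """Combine stable system-prompt sections with conversation turns."""
    return {
        "system": "\n\n".join(system_sections),  # read once, cacheable
        "messages": turns,                       # chronological user/assistant pairs
    }

call = build_llm_call(
    ["Narrative Context: ...", "Module Instructions: ..."],
    [{"role": "user", "content": "Hello"}],
)
```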

Here is the structure of a typical agent call, from top to bottom:

System Prompt (read once, cached)
Narrative Context

Narrative

The selected Narrative’s name, description, dynamic summary, participant list, and timestamps. This tells the agent what storyline it’s in and what has happened so far.

Relevant Memory

EverMemOS

If EverMemOS is enabled, semantically relevant episodes from past conversations are injected here. These are retrieved in parallel during pipeline initialization and provide deep cross-session recall.

Auxiliary Narratives

Narrative

Brief summaries of up to two other recent storylines, giving the agent cross-topic awareness. The agent knows what else you’ve been working on without loading full histories.

Module Instructions

Modules

Each active module contributes an instruction block describing its tools and how to use them. Sorted by priority (Awareness first, then Chat, Jobs, etc.) and deduplicated — even if multiple instances of a module exist, instructions appear only once.
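The sort-then-deduplicate step can be sketched like this. The priority values and the lookup table are assumptions for illustration; only the ordering rule (Awareness first, then Chat, then task modules) and the dedup-by-module rule come from the text.

```python
# Illustrative sketch of priority ordering and deduplication of module
# instruction blocks. Priority numbers are assumed, not the real values.
PRIORITY = {"AwarenessModule": 0, "ChatModule": 1, "JobModule": 2}

def collect_instructions(modules: list[str]) -> list[str]:
    seen: set[str] = set()
    result = []
    # Stable sort: unknown modules fall to the end.
    for m in sorted(modules, key=lambda m: PRIORITY.get(m, 99)):
        if m not in seen:  # multiple instances contribute one block
            seen.add(m)
            result.append(m)
    return result

print(collect_instructions(["JobModule", "ChatModule", "AwarenessModule", "ChatModule"]))
# → ['AwarenessModule', 'ChatModule', 'JobModule']
```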

Cross-Topic Memory

ChatModule

Recent messages (up to 15) from other Narratives — conversations about different topics. Gives the agent the ability to reference what you “just said” even if it was in a different storyline. Budget: 40,000 characters.
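One plausible way to enforce both limits is a newest-first cutoff; the 15-message and 40,000-character numbers come from the text above, while the function itself is a hypothetical sketch.

```python
# Sketch of a character-budget cutoff for cross-topic memory.
def fit_budget(messages: list[str], max_messages: int = 15, budget: int = 40_000) -> list[str]:
    """Keep the most recent messages that fit inside the character budget."""
    kept, used = [], 0
    for msg in reversed(messages[-max_messages:]):  # walk newest first
        if used + len(msg) > budget:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))  # restore chronological order
```

Walking newest-first means that when the budget runs out, it is the oldest cross-topic messages that are dropped.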

Bootstrap

Creator

On the first 3 interactions only, the agent creator’s Bootstrap.md is injected at highest priority. This provides initial personality, goals, and constraints. It expires automatically once the AwarenessModule has built a persistent profile.
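The expiry rule above amounts to a simple guard; this minimal sketch assumes both conditions are checked together, and the function name is hypothetical.

```python
# Minimal sketch of the bootstrap expiry rule described above.
BOOTSTRAP_INTERACTIONS = 3  # injected on the first 3 interactions only

def include_bootstrap(interaction_count: int, profile_built: bool) -> bool:
    """Bootstrap.md is injected early, and stops once the
    AwarenessModule has persisted a profile."""
    return interaction_count < BOOTSTRAP_INTERACTIONS and not profile_built
```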

Messages Array (conversation turns)
Current Topic History

ChatModule

The most recent 30 messages from the current Narrative’s ChatModule instance, as chronological user/assistant pairs. Each message is truncated at 4,000 characters to prevent any single paste from dominating context. This is the long-term memory for the active storyline.
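A sketch of those two limits together; the 30-message and 4,000-character numbers come from the text, and the function is illustrative rather than the real implementation.

```python
# Sketch: take the most recent turns and cap each message's length.
def build_history(messages: list[dict], limit: int = 30, max_chars: int = 4_000) -> list[dict]:
    return [
        {"role": m["role"], "content": m["content"][:max_chars]}
        for m in messages[-limit:]  # chronological user/assistant pairs
    ]
```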

Current Message

User

The user’s input for this turn — the message that triggered this pipeline run.

Available Tools (MCP tool servers)

Each active module's MCP server URL is collected and passed to the agent framework. The LLM discovers available tools automatically. URLs are deduplicated by module class — if three JobModule instances exist, only one MCP URL is registered. See Tools & MCP for the full tool architecture.
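The dedup-by-class rule can be sketched as below; the instance tuples and URLs are assumptions for demonstration, not real endpoints.

```python
# Illustrative sketch: collect MCP server URLs, keeping one per module class,
# so three JobModule instances register a single URL.
def collect_mcp_urls(instances: list[tuple[str, str]]) -> dict[str, str]:
    """instances: (module_class, mcp_url) pairs; first URL per class wins."""
    urls: dict[str, str] = {}
    for module_class, url in instances:
        urls.setdefault(module_class, url)
    return urls

collect_mcp_urls([
    ("JobModule", "http://localhost:9001/mcp"),
    ("JobModule", "http://localhost:9001/mcp"),
    ("ChatModule", "http://localhost:9002/mcp"),
])
# one JobModule entry and one ChatModule entry survive
```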

Why This Structure

The placement of each piece is deliberate, optimized for how language models process context:

Narrative summary in system prompt

The system prompt is read once and cached. Putting the storyline overview here gives the agent broad awareness without consuming message slots. It’s the agent’s “briefing” before the conversation starts.

Chat history as message pairs

Conversation turns in the messages array let the LLM follow the dialogue naturally. The model treats them as real exchanges, maintaining coherence and tone across turns.

Cross-topic memory in system prompt

Short-term memory from other Narratives belongs in the system prompt because it’s reference material, not part of the current conversation. The agent can use it (“like you mentioned earlier”) without it cluttering the dialogue flow.

Module instructions sorted by priority

Awareness comes first (defines who the agent is), Chat second (defines how it communicates), then task modules. This ordering mirrors how a person would process context — identity before tools.

How Modules Contribute

Context assembly is not centralized — each module enriches the context through its hook_data_gathering hook, called sequentially before execution. This keeps context construction modular: add a module, and its data automatically appears in the agent's context.

ChatModule: Long-term memory (current Narrative history) and short-term memory (cross-Narrative recent messages)
AwarenessModule: Agent identity profile — role, personality, goals — injected as the awareness placeholder in instructions
SocialNetworkModule: Entity profiles for the current user and any relevant agents or groups
JobModule: Active job descriptions, progress, and completion conditions
SkillModule: Installed skill definitions and workspace rules
MemoryModule: EverMemOS episode search results (launched in parallel, awaited before execution)
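The hook flow above can be sketched as a sequential loop over modules enriching a shared context. The base-class shape and method signature here are assumptions; only the `hook_data_gathering` name comes from the text.

```python
# Hedged sketch: each module enriches a shared context dict in turn.
class Module:
    def hook_data_gathering(self, context: dict) -> None:
        pass  # subclasses add their data

class ChatModule(Module):
    def hook_data_gathering(self, context: dict) -> None:
        context["chat_history"] = ["..."]  # long- and short-term memory

class AwarenessModule(Module):
    def hook_data_gathering(self, context: dict) -> None:
        context["identity"] = "agent profile"

def gather(modules: list[Module]) -> dict:
    context: dict = {}
    for m in modules:  # called sequentially before execution
        m.hook_data_gathering(context)
    return context
```

Adding a new module means implementing the hook; its data then appears in the agent's context with no change to the assembly loop.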

What's Next