
Nostr Protocol Library Architecture: A Decentralized Design

Foundational Principle

A frozen protocol core library (go-roots, roots.js, roots-rs, etc.) implements only mathematical invariants that define protocol compliance: event structure, serialization, cryptographic signatures, and filter matching. This core never changes after v1.0.0. All other functionality—transport, storage, semantic interpretation—lives in separate repositories as independent implementations competing in a permissionless marketplace.

The Boundary Question

Core contains what implementations must agree on to interoperate. Everything else belongs elsewhere. The test: if two implementations can disagree without breaking message exchange, it's not core protocol. Tag extraction helpers, kind classifications, address formats, threading logic—these are all conveniences or conventions that vary by use case. Core handles only the cryptographic proof that an event is protocol-valid.
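
As a concrete illustration, a tag-extraction helper fails the test and therefore lives outside Core. A minimal sketch, assuming the Core Event type described in the next section; the module path and names are hypothetical:

package niptags // hypothetical convenience module, deliberately outside Core

import roots "github.com/nostr-protocol/go-roots" // assumed Core module path

// FirstTagValue returns the first value of the named tag. Two libraries
// can implement this differently (or not at all) and still exchange
// identical, protocol-valid events: exactly why it is not Core.
func FirstTagValue(ev *roots.Event, name string) (string, bool) {
	for _, tag := range ev.Tags {
		if len(tag) >= 2 && tag[0] == name {
			return tag[1], true
		}
	}
	return "", false
}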

Four-Layer Architecture

Core (Frozen)

Event and Filter structures, serialization for ID calculation, Schnorr signatures, validation functions, key generation. No transport, no storage, no semantic interpretation. Pure protocol mathematics.
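
A sketch of what that frozen surface could look like in go-roots. The Event and Filter fields follow NIP-01; the package layout and function signatures are assumptions, not the actual go-roots API:

package roots

// Event is the NIP-01 wire structure. Field names match the canonical
// JSON serialization that the event ID is computed over.
type Event struct {
	ID        string     `json:"id"`         // sha256 of the canonical serialization
	PubKey    string     `json:"pubkey"`     // 32-byte x-only public key, hex
	CreatedAt int64      `json:"created_at"` // unix seconds
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
	Sig       string     `json:"sig"`        // 64-byte Schnorr signature, hex
}

// Filter is the NIP-01 subscription filter.
type Filter struct {
	IDs     []string `json:"ids,omitempty"`
	Authors []string `json:"authors,omitempty"`
	Kinds   []int    `json:"kinds,omitempty"`
	Since   int64    `json:"since,omitempty"`
	Until   int64    `json:"until,omitempty"`
	Limit   int      `json:"limit,omitempty"`
}

// Verify recomputes the ID and checks the BIP-340 Schnorr signature.
// Body elided: pure protocol mathematics, frozen at v1.0.0.
func Verify(ev *Event) error { return nil }

// Match reports whether ev satisfies f. Body elided for the same reason.
func Match(f *Filter, ev *Event) bool { return true }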

Transport Layer

Establishes connections, sends/receives messages, maintains health, routes by type. Treats Events and Filters as opaque data. Never validates, matches, or interprets—purely a message pipe. Examples: WebSocket, LibP2P, Tor routing.
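
One plausible Go shape for a transport like honeybee; the interface is an assumption, but note that Event and Filter cross it untouched:

package honeybee // hypothetical WebSocket transport

import (
	"context"

	roots "github.com/nostr-protocol/go-roots" // assumed Core module path
)

// Transport is a pure message pipe. Events and Filters pass through as
// opaque Core structs; validation and matching happen elsewhere.
type Transport interface {
	Publish(ctx context.Context, ev *roots.Event) error
	Subscribe(ctx context.Context, f *roots.Filter) (<-chan *roots.Event, error)
	Close() error
}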

Storage Layer

Applies replaceable/ephemeral/addressable logic, maintains indices, executes queries, returns ordered streams. Graph-native operations: traverse relationships, filter by patterns, stream results. Calls Core validation but doesn't reimplement it. Examples: Neo4j, PostgreSQL, IndexedDB.
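
A storage engine like heartwood might expose something along these lines. The channel-based, timestamp-ordered streaming is part of this design; the exact signatures are assumptions:

package heartwood // hypothetical PostgreSQL-backed storage

import (
	"context"

	roots "github.com/nostr-protocol/go-roots" // assumed Core module path
)

type Store interface {
	// Save calls roots.Verify, then applies replaceable/ephemeral/
	// addressable rules. It never reimplements protocol validation.
	Save(ctx context.Context, ev *roots.Event) error

	// Query streams matches in timestamp order over a channel rather
	// than buffering the entire result set into a slice.
	Query(ctx context.Context, f *roots.Filter) (<-chan *roots.Event, error)

	// Watch keeps the query active and pushes future matches as they
	// arrive, so applications never poll.
	Watch(ctx context.Context, f *roots.Filter) (<-chan *roots.Event, error)
}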

Extensions Layer

Transforms protocol-valid Events into domain types. Each NIP is a set of pure functions: Event → DomainType and DomainType → Event. No I/O, no state, no side effects. Validates semantic correctness (does this profile have required fields?), not protocol correctness (is the signature valid?).
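
For example, a NIP-01 kind-0 metadata extension reduces to one pure parser. A sketch: the Profile fields follow the kind-0 convention, while the module name, the required-field rule, and the Core API are illustrative assumptions:

package symbiosnip01 // hypothetical per-NIP extension module

import (
	"encoding/json"
	"errors"

	roots "github.com/nostr-protocol/go-roots" // assumed Core module path
)

// Profile is the domain type for kind-0 metadata events.
type Profile struct {
	Name    string `json:"name"`
	About   string `json:"about"`
	Picture string `json:"picture"`
}

// ParseProfile is Event → DomainType: no I/O, no state, no side effects.
// It checks semantic correctness only; protocol validity is Core's job.
func ParseProfile(ev *roots.Event) (*Profile, error) {
	if ev.Kind != 0 {
		return nil, errors.New("not a kind-0 metadata event")
	}
	var p Profile
	if err := json.Unmarshal([]byte(ev.Content), &p); err != nil {
		return nil, err
	}
	if p.Name == "" { // illustrative semantic rule, not a protocol rule
		return nil, errors.New("profile missing name")
	}
	return &p, nil
}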

Convenience Layer (Optional)

Composes other layers via dependency injection. No novel logic, just wiring. Applications configure which implementations to use via the options pattern. Can be skipped entirely—applications wire the layers themselves if preferred.
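
A sketch of that wiring under the interfaces assumed above; every name here is illustrative:

package symbios // hypothetical convenience layer: wiring only, no logic

import (
	"context"

	roots "github.com/nostr-protocol/go-roots" // assumed Core module path
)

// The convenience layer declares the interfaces it composes; any
// implementation with a matching method set can be injected.
type Transport interface {
	Publish(ctx context.Context, ev *roots.Event) error
}

type Store interface {
	Save(ctx context.Context, ev *roots.Event) error
}

type Client struct {
	transport Transport
	store     Store
}

type Option func(*Client)

func WithTransport(t Transport) Option { return func(c *Client) { c.transport = t } }
func WithStore(s Store) Option         { return func(c *Client) { c.store = s } }

// New composes the injected implementations and adds no behavior.
func New(opts ...Option) *Client {
	c := &Client{}
	for _, o := range opts {
		o(c)
	}
	return c
}

An application would call something like symbios.New(symbios.WithTransport(ws), symbios.WithStore(db)), or skip this package and hold the two interface values itself.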

Dependency Structure

Hub-and-spoke: Core at center, Transport/Storage/Extensions as spokes, Convenience importing all. Critical constraint: lateral imports forbidden. Transport never imports Storage. Storage never imports Extensions. Each spoke depends only on Core. This prevents circular dependencies and maintains data coupling.

All coupling is data-only. Layers pass Core structs (Event, Filter) as arguments and return values. No control flags, no callbacks, no middleware patterns. Each layer performs its function and returns. Applications compose layers in their own code.
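
In application code that skips the convenience layer, composition is just moving Core structs between spokes. A fragment, reusing the hypothetical types sketched above:

// Application-owned wiring: each Event flows Transport → Core → Storage
// as a plain value. No control flags, callbacks, or middleware cross
// the layer boundary.
func ingest(ctx context.Context, sub <-chan *roots.Event, db heartwood.Store) error {
	for ev := range sub {
		if err := roots.Verify(ev); err != nil {
			continue // protocol-invalid: drop it and keep streaming
		}
		if err := db.Save(ctx, ev); err != nil {
			return err
		}
	}
	return nil
}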

Repository Separation

Each implementation is a separate repository with independent versioning and ownership. Not go-roots-transport-websocket but honeybee. Not go-roots-storage-postgres but heartwood. Creative names signal independence and philosophical stance. Discovery via GitHub topic tags (nostr-transport, go-roots-compatible), not naming conventions.

Applications declare exact implementations:

require (
    github.com/nostr-protocol/go-roots v1.0.0
    github.com/alice/honeybee v2.3.1
    github.com/alice/heartwood v1.5.0
    github.com/alice/symbios-social v3.0.0
)

No default implementations. No blessed stacks. Choice based on performance, license, trust, maintenance activity.

Per-NIP Granularity

NIPs suffer from governance capture. Repository separation at NIP level converts specification debates into implementation competition. alice/symbios-nip01 and bob/herald-nip01 offer competing interpretations of metadata structure. Applications choose based on observed behavior, not committee consensus.

Bundles (symbios-social) re-export multiple NIPs as curated sets. Applications choose between granular control (individual NIPs) or convenience (bundles). Both patterns coexist.

Cross-NIP dependencies declared explicitly in go.mod. NIP-10 threading depends on NIP-01 text notes? Declare it. Applications get transitive dependencies automatically.
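
A go.mod for such a threading module might read as follows (module paths follow the examples above; versions are hypothetical):

module github.com/alice/symbios-nip10

require (
    github.com/nostr-protocol/go-roots v1.0.0
    github.com/alice/symbios-nip01 v1.2.0 // NIP-10 threading builds on NIP-01 text notes
)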

Version Coordination

Core commits to protocol immutability. v1.0.0 → v1.0.99 are compatible. Bug fixes only. If protocol must change, that's a new project (go-roots-v2), not a version bump.

Layers use semantic versioning independently. honeybee v2.3.1 works with Core v1.0.x. Breaking API change bumps major version. New features bump minor. Bug fixes bump patch. No lockstep releases across layers.

Version conflicts surface immediately in go mod tidy. If Alice's transport requires Core v1.0.0 and Carol's storage requires v1.0.1, Go's minimal version selection resolves both to v1.0.1, which is safe because Core guarantees patch-level compatibility. A genuine incompatibility, such as one layer built against the separate go-roots-v2 module, fails at compile time when its types meet v1 consumers. Resolution: wait for an update, fork the lagging layer, or switch implementations. Friction is intentional—it prevents silent incompatibilities.

Discovery and Testing

Community registry aggregates by topic tags, not naming prefix. GitHub search for topic:nostr-transport topic:go-roots-compatible returns all transports. Registry displays stars, last update, Core version, notes. No gatekeeping—any repo matching topics is listed.

Layer self-tests verify logic against Core. Integration test repository verifies layer combinations. Contract tests export test suites that applications run against their chosen implementations. Applications test composition boundaries in their own suite.
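
A contract test suite could be exported as an ordinary Go package that applications run against whichever implementation they chose. A minimal sketch under the storage interface assumed earlier:

package storetest // hypothetical exported contract suite

import (
	"context"
	"testing"

	roots "github.com/nostr-protocol/go-roots" // assumed Core module path
)

// Store mirrors the storage interface under contract.
type Store interface {
	Save(ctx context.Context, ev *roots.Event) error
	Query(ctx context.Context, f *roots.Filter) (<-chan *roots.Event, error)
}

// Run exercises any implementation: applications call it from their own
// test suite against heartwood, sapwood, or a fork. A real suite would
// generate a key and sign the probe event through Core first.
func Run(t *testing.T, s Store) {
	ctx := context.Background()
	ev := &roots.Event{Kind: 1, Content: "contract probe"}
	if err := s.Save(ctx, ev); err != nil {
		t.Fatalf("Save: %v", err)
	}
	out, err := s.Query(ctx, &roots.Filter{Kinds: []int{1}})
	if err != nil {
		t.Fatalf("Query: %v", err)
	}
	if _, ok := <-out; !ok {
		t.Fatal("expected at least one stored event")
	}
}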

Module Strength Analysis

Core achieves functional strength—each function performs one protocol operation. Transport achieves functional strength—each connection handler maintains one bidirectional channel. Storage achieves informational strength—multiple operations on single data structure (event store), but each operation is functional. Extensions achieve functional strength—each parser is one pure transformation. Convenience achieves informational strength—multiple workflows sharing composed context.

No procedural or coincidental modules. Every module has clear singular purpose or operates on singular data structure.

Resistance to Social Takeover

Repository separation eliminates single points of control. Capturing one repo provides no leverage over others. Fork costs are negligible: change the import path, make minimal code changes. Competition is structural—multiple implementations are expected. Authority must be earned continually—maintainers satisfy users every release or lose adoption.

Traditional governance bottlenecks (RFC approval, merge permissions, roadmap control) don't exist. There is nothing to capture because there is no central authority. An attacker who invests 24 months in controlling one layer watches users switch to a competitor in 24 hours.

Attack surface analysis: Core capture gains nothing (formal spec exists, multiple implementations cross-validate, easy fork). Layer capture gains nothing (users switch implementations immediately). Simultaneous capture of all implementations is impractical (3x-10x resource cost, one honest fork defeats it).

Game theory: the ROI of a monorepo takeover is high (12-24 months yields ecosystem control). The ROI of a separated-repo takeover is negative (12-24 months per repo yields control of one optional layer). Fork viability determines resistance: low fork costs make capture economically irrational.

Implementation Principles

Data coupling only: Layers receive Core structs, call Core functions, return Core structs. No shared utilities except Core.

Interface-first design: Layers export minimal interfaces. Competing implementations satisfy same interface. Applications depend on interface, not concrete type.
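
In Go, the consumer declares the interface, so even the compile-time check stays in application code and the no-lateral-imports rule holds. A fragment with hypothetical concrete types:

// Declared by the application, next to where it is consumed.
type Transport interface {
	Publish(ctx context.Context, ev *roots.Event) error
}

// Compile-time proof that competing implementations satisfy the same
// interface; swapping one for the other is a one-line change.
var _ Transport = (*honeybee.Client)(nil)
var _ Transport = (*carrierpigeon.Relay)(nil)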

Channel-based results: Storage and Transport return channels, not slices. Events stream in timestamp order. No buffering entire result sets.

Reactive updates: Storage pushes new matches to active queries immediately. Applications don't poll.

Pure functions in Extensions: Every parser is Event → DomainType. No I/O, state, or side effects. Composable via function calls.

Composition over inheritance in Convenience: No base classes with virtual methods. Small interfaces composed via options pattern.

Ecosystem Dynamics

Phase 1: Organic naming and implementation. Discovery via word of mouth, topic search, tutorials.

Phase 2: Pattern recognition. Tree metaphors for storage (heartwood, sapwood, bedrock), insect metaphors for transport (honeybee, carrier-pigeon), symbiosis metaphors for extensions (symbios, herald).

Phase 3: Informal standards. Popular authors establish naming conventions in their ecosystems. Applications choose entire ecosystem or mix layers.

Phase 4: Market competition. Best-named, best-maintained, most-adopted layers rise. No authority picks winners.

Documentation strategy: Core docs are the authoritative protocol reference. Layer docs explain design choices. Ecosystem docs provide comparison matrices and migration guides. No central approval process.

Critical Success Factors

Protocol immutability: Once frozen, Core never breaks. Spec exists outside code for verification.

Zero lateral coupling: Layers compose in application code, never import each other.

Trivial fork cost: Single-layer forks complete in hours. Import path change migrates users.

Explicit choice: Applications declare exact implementations. No hidden dependencies or magic defaults.

Competing interpretations: Multiple implementations of same layer are healthy, not exceptional.

Decentralized trust: No blessed implementations, no official stacks, no approval boards.

The architecture doesn't prevent disagreement—it makes disagreement productive by converting specification battles into implementation competition. Code speaks. Users choose. Market decides.