<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Design Document — ItsGoin</title>
<meta name="description" content="Full design document for ItsGoin: vision, architecture, protocol, encryption, content distribution, and roadmap. Version 0.3.1.">
<link rel="stylesheet" href="style.css">
<style>
.toc { margin: 1.5rem 0; }
.toc a { display: block; padding: 0.3rem 0; color: var(--text-muted); font-size: 0.9rem; }
.toc a:hover { color: var(--text); text-decoration: none; }
.scorecard td:first-child { font-weight: 500; color: var(--text); }
.version-badge {
  display: inline-block;
  background: var(--accent-dim);
  color: var(--accent);
  padding: 0.3rem 0.8rem;
  border-radius: 6px;
  font-size: 0.85rem;
  font-weight: 600;
  margin-bottom: 0.75rem;
}
</style>
</head>
<body>
<nav>
<a href="index.html" class="logo">ItsGoin</a>
<div class="links">
<a href="index.html">About</a>
<a href="tech.html">How It Works</a>
<a href="design.html" class="active">Design</a>
<a href="download.html">Download</a>
</div>
</nav>

<div class="container wide">
<section>
<span class="version-badge">v0.3.1 — 2026-03-13</span>
<h1 style="font-size: 2rem; font-weight: 800; letter-spacing: -0.03em; margin-bottom: 0.5rem;">Design Document</h1>
<p>This is the canonical technical reference for ItsGoin. It describes the vision, the architecture, and the current state of every subsystem — with full implementation detail. This document is versioned; each update records what changed.</p>
<div class="card" style="margin-top: 1rem;">
<strong style="font-size: 0.85rem; text-transform: uppercase; letter-spacing: 0.05em;">Changelog</strong>
<p style="margin-top: 0.5rem;"><strong>v0.3.1</strong> (2026-03-13): Share links + QUIC proxy + content search. Share link format: <code>itsgoin.net/p/&lt;postid_hex&gt;/&lt;author_nodeid_hex&gt;</code> — simple, no host encoding needed. The itsgoin.net web handler acts as a QUIC proxy: it receives the browser request, searches the network for the post, fetches it on demand via PostFetch (0xD4/0xD5), renders HTML, and serves it to the browser; no fetched content is permanently stored. Extended worm search — <code>WormQuery</code> now carries optional <code>post_id</code> and <code>blob_id</code> fields for unified node/post/blob search; each peer checks local storage, the CDN downstream tree (up to 100 hosts per post), and the blob store, and <code>WormResponse</code> gains <code>post_holder</code> and <code>blob_holder</code> fields. Nova fan-out pattern — burst peers include one N2 wide referral; the referred peer does its own 101-burst, reaching ~10K nodes with ~202 relay hops. PostFetch (0xD4/0xD5) — lightweight single-post retrieval once the worm finds a holder, much lighter than a full PullSync. itsgoin.net node deployed as anchor + web handler (<code>--web 8080</code>). “Unavailable” page with an honest explanation of the network model and an install CTA; Universal Links / App Links planned for native app interception. Engagement sync — pull sync now fetches reactions, comments, and policies via BlobHeaderRequest/Response after every sync. Profile push fix — profile updates now go to all connected mesh peers, not just the audience. Auto-sync on follow — following a peer triggers an immediate post pull + engagement fetch. Popover UI — notification settings, network diagnostics, and message threads now open as popovers. Notification settings — per-key settings table in SQLite; configurable message/post/nearby notifications via the JS Notification API. Tiered DM polling — smart message refresh based on conversation recency. Reaction display — posts show the top 5 most popular emoji plus a total response count. UI cleanup — removed the Suggested Peers and Find Nearby sections, changed the placeholder text to “How’s it goin?”, made node IDs in the activity log clickable.</p>
<p><strong>v0.3.0</strong> (2026-03-12): Full rename distsoc → ItsGoin. ALPN, crypto contexts, data paths, Android package ID all changed. Clean break — incompatible with prior versions.</p>
<p><strong>v0.2.11</strong> (2026-03-12): Engagement system — reactions (public + private encrypted via X25519 DH + ChaCha20-Poly1305), inline comments with ed25519 signatures, author-controlled comment/react policies (audience-only, public, none), blocklist enforcement. CDN tree for all posts — new <code>post_downstream</code> table (keyed by PostId, max 100 peers) gives every post a propagation tree; <code>PostDownstreamRegister</code> (0xD3) sent when any peer stores a post. 4 new wire messages: BlobHeaderDiff (0xD0) for incremental engagement propagation, BlobHeaderRequest/Response (0xD1/0xD2), PostDownstreamRegister (0xD3). 6 new SQLite tables, 9 new IPC commands. Thread splitting — headers exceeding 16KB auto-split oldest comments into linked thread posts. Frontend: emoji picker, reaction pills, comment threads, policy selects in compose area.</p>
<p><strong>v0.2.10</strong> (2026-03-12): Per-family NAT classification — IPv4 and IPv6 public reachability now detected independently. Previously, a public IPv6 address incorrectly set <code>has_public_v4=true</code>, causing nodes behind IPv4 NAT to skip hole punching. STUN now always runs (unless <code>--bind</code>) so IPv6-only anchors correctly classify their IPv4 NAT. Anchor advertised address fallback — anchors without <code>--bind</code> or UPnP now advertise their first public bound address (e.g. IPv6 SLAAC), so peers store them in <code>known_anchors</code> for preferential reconnection. Bootstrap anchor deprioritization — startup connection sequence now tries discovered (non-bootstrap) anchors first, falling back to hardcoded bootstrap anchors only when no discovered anchor is reachable. Reduces load on bootstrap infrastructure as the network grows.</p>
<p><strong>v0.2.9</strong> (2026-03-12): ConnectionManager actor redesign — replaced the single <code>Arc&lt;Mutex&lt;ConnectionManager&gt;&gt;</code> with a two-layer actor pattern: <code>ConnHandle</code> (cheap-to-clone command sender) + <code>ConnectionActor</code> (dedicated tokio task, owns state, processes commands via mpsc/oneshot channels). Eliminated lock contention from 14 code paths that previously held the mutex during network I/O (up to 15s for QUIC connects). All network.rs and node.rs callers now use ConnHandle (~60 call sites migrated). I/O-heavy functions extracted as standalone: broadcast_diff, push_circle_profile, push_visibility, pull_from_peer, send_relay_introduce, send_anchor_register, request_anchor_referrals. Public <code>conn_mgr()</code> accessor removed — <code>Arc&lt;Mutex&gt;</code> is now an internal implementation detail of the actor.</p>
<p><strong>v0.2.8</strong> (2026-03-11): NAT filter probe (0xC6/0xC7) — anchor probes node’s filtering type by attempting QUIC connect from a different source port; address-restricted (Open) vs port-restricted determined in 2s, eliminating unnecessary scanning for most connections. Role-based NAT traversal — EIM nodes punch every 2s (stable port visible to peer scanner), EDM/Unknown nodes walk outward at ~100 ports/sec (opening firewall entries for peer punches to land). Steady scan replaces burst tiers (was 37K tasks, now ~20 in-flight). IPv4 vs IPv6 public differentiation — startup reports v4-only/v6-only/v4+v6, “Public” no longer assumes Open filtering. Task cleanup via JoinSet::abort_all().</p>
<p><strong>v0.2.7</strong> (2026-03-11): Port scanning refinement — scan only the anchor-observed IP (relay-injected first address) instead of all self-reported addresses, avoiding wasted scan budget on unreachable VPN/cellular IPs. Scanning now triggers when peer NAT type is unknown, not just when explicitly EDM.</p>
<p><strong>v0.2.6</strong> (2026-03-11): Anchor self-verification implemented (Section 8) — AnchorProbeRequest/Result (0xC3/0xC4) wire messages, witness-based cold reachability testing via N2 strangers, candidacy checklist (UPnP/public + 50 connections + 2h uptime + non-mobile), periodic re-probe in anchor register cycle, 2-failure revocation. Advanced NAT traversal implemented (Section 10) — NatMapping (EIM/EDM) + NatFiltering (Open/PortRestricted) profile types, <code>hole_punch_with_scanning()</code> replaces hard+hard skip at all 5 call sites, tiered port scanning (±500, ±2000, full ephemeral) at 50 concurrent probes, behavioral filtering inference from connection outcomes, PortScanHeartbeat (0xC5) message type. NAT profile shared in InitialExchange (<code>nat_mapping</code>/<code>nat_filtering</code> fields).</p>
<p><strong>v0.2.5</strong> (2026-03-11): Advanced NAT traversal design (Section 10) — relay-assisted port scanning protocol for EDM/symmetric NATs, full NAT combination matrix (mapping × filtering), tiered scan from observed port at 250/sec, 2s relay heartbeat feedback loop, makes hard+hard pairs solvable without full relay. Reconnection race fix — <code>run_mesh_streams</code> checks <code>stable_id()</code> before cleanup to prevent reconnecting peers from losing their connection entry.</p>
<p><strong>v0.2.4</strong> (2026-03-11): Anchor self-verification probe design (Section 8) — witness-based cold reachability testing via N2 strangers, candidacy checklist, periodic re-probe. Anchor selection simplified to LIFO on last_seen, removed success_count weighting, stale anchor cleanup (7-day probe). BlobHeader separation from blob content (Section 18) — immutable BLAKE3-addressed blobs require separate mutable headers, BlobHeader struct replaces CdnManifest, 25+25 post neighborhood, BlobHeaderDiff incremental propagation. Removed 3x hosting quota — CDN is attention-driven delivery infrastructure, not storage; author owns durability. Keep-alive session ceilings (Section 16) — desktop ~300-500, mobile ~25-50, mobile priority stack, hysteresis for borderline reachability. Mesh stranger controls — mutual mesh blacklist for targeted stranger relationships, --max-mesh CLI flag for topology testing. Phase 2 reciprocity simplified — attention model makes quota enforcement unnecessary.</p>
<p><strong>v0.2.3</strong> (2026-03-11): NAT type detection implemented (Section 10) — raw STUN probing classifies NAT as Public/Easy/Hard/Unknown on startup, shared in InitialExchange, stored per-peer, skip hole punch for hard+hard NAT pairs. LAN Discovery spec (Section 12) — mDNS scan loop for automatic LAN peer connection, keep-alive LAN sessions, local relay design. Pruning & timeout tuning — preferred peer prune 24h→7d, watcher expiry 24h→30d, N2/N3 startup sweep. Growth loop lock fix — resolve_address no longer blocks conn_mgr during network I/O.</p>
<p><strong>v0.2.2</strong> (2026-03-10): Hole punch fixes (Section 10) — session peers now fully participate in relay introduction (observed address injection for both requester and target), all hole punch paths use <code>hole_punch_parallel()</code> (parallel addresses, no more sequential timeouts), requester self-reported addresses filtered to publicly-routable only.</p>
<p><strong>v0.2.1</strong> (2026-03-10): Added UPnP port mapping (Section 11) — best-effort NAT traversal for desktop/home networks, external address in N+10 and peer advertisements, lease renewal cycle.</p>
<p><strong>v0.2.0</strong> (2026-03-09): Major design updates — three-layer architecture (Mesh/Social/File), N+10 identification, keep-alive sessions, 3-tier revocation, multi-device identity, growth loop redesign, pull sync from social/file layers, relay pipes default to own-device-only, remove anchor register loop.</p>
<p><strong>v0.1.0</strong> (2026-03-09): First versioned edition. Consolidated from ARCHITECTURE.md, code review, and gap analysis into a single source of truth.</p>
</div>
</section>

<section>
<div class="toc">
<strong style="font-size: 0.85rem; text-transform: uppercase; letter-spacing: 0.05em; color: var(--text-muted);">Contents</strong>
<a href="#vision">1. The Vision</a>
<a href="#identity">2. Identity & Bootstrap</a>
<a href="#nplus10">3. N+10 Identification</a>
<a href="#connections">4. Connections & Growth</a>
<a href="#lifecycle">5. Connection Lifecycle</a>
<a href="#layers">6. Network Knowledge Layers (N1/N2/N3)</a>
<a href="#three-layers">7. Three-Layer Architecture (Mesh / Social / File)</a>
<a href="#anchors">8. Anchors</a>
<a href="#referrals">9. Referrals</a>
<a href="#relay">10. Relay & NAT Traversal</a>
<a href="#upnp">11. UPnP Port Mapping</a>
<a href="#lan">12. LAN Discovery</a>
<a href="#worm">13. Worm Search</a>
<a href="#preferred">14. Preferred Peers</a>
<a href="#social-routing">15. Social Routing</a>
<a href="#keep-alive">16. Keep-Alive Sessions</a>
<a href="#content">17. Content Propagation</a>
<a href="#files">18. Files & Storage</a>
<a href="#sync">19. Sync Protocol</a>
<a href="#encryption">20. Encryption</a>
<a href="#deletes">21. Delete Propagation</a>
<a href="#privacy">22. Social Graph Privacy</a>
<a href="#multidevice">23. Multi-Device Identity</a>
<a href="#phase2">24. Phase 2: Reciprocity</a>
<a href="#http-delivery">25. HTTP Post Delivery</a>
<a href="#share-links">26. Share Links</a>
<a href="#timeouts">Appendix A: Timeout Reference</a>
<a href="#constraints">Appendix B: Design Constraints</a>
<a href="#scorecard">Appendix C: Implementation Scorecard</a>
<a href="#roadmap">Appendix D: Roadmap</a>
<a href="#not-built">Appendix E: Features Designed But Not Built</a>
<a href="#filemap">Appendix F: File Map</a>
</div>
</section>

<!-- 1. Vision -->
<section id="vision">
<h2>1. The Vision</h2>
<div class="card">
<p><em>"A decentralized fetch-cache-re-serve content network that supports public and private sharing without a central server. It replaces 'upload to a platform' with 'publish into a swarm' where attention creates distribution, privacy is client-side encryption, and availability comes from caching, not money."</em></p>
</div>
<p><strong>The honest promise</strong>: The CDN is an attention-driven delivery amplifier, not a storage guarantee. Hot content spreads naturally through demand; cold content decays unless intentionally hosted. Authors are responsible for their own content durability — a post backup/export tool is the author's safety net, not the network's job. The system is a loss-risk network — best-effort availability, not durability guarantees.</p>
<h3>Guiding principles</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Our distributed network first, direct connections always preferred</li>
<li>Social graph and friendly UX in front, infrastructure truth in back</li>
<li>Privacy by design: public profile is minimal, private profiles are per-circle, social graph visibility is controlled</li>
<li>Don't break content addressing (<code>PostId = BLAKE3(post)</code>, visibility is separate metadata)</li>
<li>Your feed is yours: reverse-chronological by default, no algorithmic ranking, user-controlled discovery</li>
<li>Three separate layers — Mesh (structural backbone), Social (follows/audience/DMs), File (content storage/distribution) — each with its own connections and routing</li>
</ul>
</section>

<!-- 2. Identity & Bootstrap -->
<section id="identity">
<h2>2. Identity & Bootstrap</h2>

<h3>First startup</h3>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Identity</strong>: Load or generate ed25519 keypair from <code>{data_dir}/identity.key</code>. <code>NodeId</code> = 32-byte public key. A unique <strong>device identity</strong> is also generated for multi-device coordination (see <a href="#multidevice">Section 23</a>).</li>
<li><strong>Storage</strong>: Open SQLite database (<code>distsoc.db</code>), auto-migrate schema.</li>
<li><strong>Blob store</strong>: Create <code>{data_dir}/blobs/</code> with 256 hex-prefix shards (<code>00/</code> through <code>ff/</code>).</li>
<li><strong>UPnP mapping</strong>: Attempt UPnP/NAT-PMP port mapping (2s timeout). If successful, store external address for advertisements. Do not block startup if unavailable. See <a href="#upnp">Section 11</a>.</li>
<li><strong>NAT type detection</strong>: STUN probes to two public servers (3s timeout each). Classifies as Public/Easy/Hard/Unknown. UPnP success overrides to Public. Anchors skip probing. Result stored on <code>ConnectionManager</code>, shared in <code>InitialExchangePayload</code>, stored per-peer. See <a href="#relay">Section 10</a>.</li>
<li><strong>Stale N2/N3 sweep</strong>: Remove all N2/N3 entries tagged to peers not in the current mesh. Clears stale reach data from previous sessions (e.g., unclean shutdown).</li>
<li><strong>Bootstrap anchors</strong>: Load from <code>{data_dir}/anchors.json</code>. If missing, use hardcoded default anchor.</li>
<li><strong>Bootstrap</strong>: If the peers table is empty, connect to a bootstrap anchor. Request referrals and matchmaking (unless either node is an anchor). Persist on that anchor's referral list until released (at the referral count limit) while beginning the growth loop immediately.</li>
</ol>
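The 256-shard blob layout from step 3 can be sketched as follows. This is a minimal illustration, not the actual ItsGoin code; <code>shard_path</code> is a hypothetical helper name, and the only assumption taken from the text is that the shard directory is the first byte of the BLAKE3 hash in hex.

```rust
use std::path::PathBuf;

/// Sketch: a blob whose 32-byte BLAKE3 hash starts with byte 0xAB
/// lives under `{data_dir}/blobs/ab/<full-hash-hex>`.
/// (`shard_path` is an illustrative helper, not the real API.)
fn shard_path(data_dir: &str, blob_hash: &[u8; 32]) -> PathBuf {
    let shard = format!("{:02x}", blob_hash[0]);
    let name: String = blob_hash.iter().map(|b| format!("{:02x}", b)).collect();
    PathBuf::from(data_dir).join("blobs").join(shard).join(name)
}

fn main() {
    let mut hash = [0u8; 32];
    hash[0] = 0xab;
    let p = shard_path("/data", &hash);
    // The path is rooted in the two-hex-digit shard directory.
    assert!(p.starts_with("/data/blobs/ab"));
    println!("{}", p.display());
}
```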

<h3>Startup cycles</h3>
<p>Spawned after bootstrap completes:</p>
<table>
<tr><th>Cycle</th><th>Interval</th><th>Purpose</th></tr>
<tr><td>Pull sync</td><td>On demand (3h self-last-encounter threshold)</td><td>Pull new posts from social + upstream file peers</td></tr>
<tr><td>Routing diff</td><td>120s (2 min)</td><td>Broadcast N1/N2 changes to mesh + keep-alive sessions</td></tr>
<tr><td>Rebalance</td><td>600s (10 min)</td><td>Clean dead connections, reconnect preferred, signal growth</td></tr>
<tr><td>Growth loop</td><td>60s + reactive (on N2/N3 receipt)</td><td>Fill empty mesh slots until 101 (90% threshold for reactive mode)</td></tr>
<tr><td>Recovery loop</td><td>Reactive (mesh empty)</td><td>Emergency reconnect via anchors</td></tr>
<tr><td>Social/File connectivity check</td><td>60s</td><td>Verify &lt;N4 access to N+10 of active social + file peers; open keep-alive sessions as needed</td></tr>
<tr><td>UPnP lease renewal</td><td>2700s (45 min)</td><td>Refresh UPnP port mapping before TTL expiry (desktop only)</td></tr>
</table>
<div class="note">
<strong>Removed</strong>: Anchor register loop. Anchors are for forming initial mesh connections when bootstrapping, not for ongoing registration. Nodes only connect to anchors during bootstrap or recovery.
</div>
</section>

<!-- 3. N+10 Identification -->
<section id="nplus10">
<h2>3. N+10 Identification</h2>

<h3>Concept</h3>
<p>Every node is identified not just by its NodeId but by its <strong>N+10</strong>: the node's own NodeId plus the NodeIds of its 10 preferred peers. This makes any node much faster to locate — if you can reach any of the 11 nodes in someone's N+10, you can find them.</p>

<h3>Where N+10 appears</h3>
<table>
<tr><th>Context</th><th>What's included</th></tr>
<tr><td><strong>Self identification</strong></td><td>All self-identification messages include the sender's N+10</td></tr>
<tr><td><strong>Following someone</strong></td><td>When you follow a peer, you store and maintain their N+10 in your social routes</td></tr>
<tr><td><strong>Post headers</strong></td><td>Every post header includes the author's current N+10. Updated whenever they post.</td></tr>
<tr><td><strong>Blob headers</strong></td><td>Blob/file headers include: (1) the author's N+10, (2) the upstream file source's N+10 (if not the author), (3) N+10s of up to 100 downstream file hosts</td></tr>
<tr><td><strong>Recent post lists</strong></td><td>Author manifests include the author's N+10 alongside their recent post list</td></tr>
</table>

<h3>Why this works</h3>
<p>Preferred peers are bilateral agreements — stable, long-lived connections. By including them in identification, any node that can find any of your 10 preferred peers can transitively find you within one hop. This eliminates most discovery cascades for socially-connected nodes.</p>
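The one-hop property can be sketched in a few lines. This is an illustrative model only (the struct and function names are hypothetical, and NodeIds are the 32-byte ed25519 keys from Section 2): a target is findable if any of the 11 IDs in its N+10 is already reachable.

```rust
use std::collections::HashSet;

type NodeId = [u8; 32]; // ed25519 public key

/// Hypothetical sketch of an N+10 identifier: the node itself
/// plus up to 10 preferred-peer NodeIds.
struct NPlus10 {
    node: NodeId,
    preferred: Vec<NodeId>,
}

/// Given the set of nodes we can already reach, check whether a
/// target's N+10 gives us a path to it within one hop.
fn reachable_via_nplus10(reachable: &HashSet<NodeId>, target: &NPlus10) -> bool {
    reachable.contains(&target.node)
        || target.preferred.iter().any(|p| reachable.contains(p))
}

fn main() {
    let mut reachable = HashSet::new();
    reachable.insert([1u8; 32]); // we can reach one of the target's preferred peers
    let target = NPlus10 { node: [9u8; 32], preferred: vec![[1u8; 32]] };
    assert!(reachable_via_nplus10(&reachable, &target));
}
```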

<h3>Status: <span class="badge badge-partial">Partial</span></h3>
<p>N+10 is partially implemented — preferred peers exist and are tracked, but N+10 is not yet included in all identification contexts (post headers, blob headers, self-identification messages). Currently <code>preferred_tree</code> in social routes provides similar functionality for relay selection.</p>
</section>

<!-- 4. Connections & Growth -->
<section id="connections">
<h2>4. Connections & Growth</h2>

<h3>Connection types</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Mesh connection</strong> — long-lived routing slot. Structural backbone for discovery and propagation. DB table: <code>mesh_peers</code>.</li>
<li><strong>Keep-alive session</strong> — long-lived connection for social or file layer peers that aren't in the mesh 101. Participates in N2/N3 routing. See <a href="#keep-alive">Section 16</a>.</li>
<li><strong>Session connection</strong> — short-lived, held open for active interaction (DM conversations, group activity, anchor matchmaking). Tracks <code>remote_addr</code> so the relay can inject observed addresses for session peers during introductions.</li>
<li><strong>Ephemeral connection</strong> — single request/response, no slot allocation.</li>
</ul>

<h3>Slot architecture</h3>
<table>
<tr><th>Slot kind</th><th>Desktop</th><th>Mobile</th><th>Purpose</th></tr>
<tr><td>Preferred</td><td>10</td><td>3</td><td>Bilateral agreements, eviction-protected</td></tr>
<tr><td>Non-preferred</td><td>91</td><td>12</td><td>Growth loop fills these with diverse peers</td></tr>
<tr><td><strong>Total mesh</strong></td><td><strong>101</strong></td><td><strong>15</strong></td><td>Long-lived routing backbone</td></tr>
<tr><td>Keep-alive sessions</td><td>No hard limit</td><td>No hard limit</td><td>Social/file layer peers not in mesh (max 50% of session capacity reserved for keep-alive)</td></tr>
<tr><td>Sessions (interactive)</td><td>No hard limit</td><td>No hard limit</td><td>Active DM, group interaction, anchor matchmaking</td></tr>
<tr><td>Relay pipes</td><td>10</td><td>2</td><td>Own-device relay by default; opt-in for relaying for others</td></tr>
</table>
<div class="note">
<strong>v0.2.0 change</strong>: Removed the distinction between "local" (71) and "wide" (20) non-preferred slots. The growth loop goes wide by default. Session counts are no longer hard-limited — an average computer can sustain ~1000 QUIC sessions without strain. The 50% keep-alive reservation ensures sessions remain available for interactive use.
</div>

<h3>MeshConnection struct</h3>
<p>Each mesh connection tracks: <code>node_id</code>, <code>connection</code> (QUIC), <code>slot_kind</code> (Preferred or NonPreferred), <code>remote_addr</code> (captured from Incoming before accept), <code>last_activity</code> (AtomicU64), <code>created_at</code>.</p>

<h3>Mutual mesh blacklist <span class="badge badge-planned">Planned</span></h3>
<p>Targeted two-node stranger relationship. Both nodes opt in, maintaining genuine N2 stranger status indefinitely regardless of growth loop behavior. Stored in a local <code>mesh_blacklist { node_id }</code> table.</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Growth loop skips blacklisted nodes during candidate selection</li>
<li>Incoming mesh upgrade from blacklisted node → respond with <code>RefuseRedirect</code> (<code>0x05</code>)</li>
<li>Both nodes must add each other — asymmetric blacklist is valid but only prevents the blacklisting side from upgrading</li>
<li>Blacklisted nodes remain visible in N2 via shared N1 peers</li>
<li>Full session/ephemeral interaction still works — messages, probes, routing participation</li>
<li>Never consume each other's mesh slots</li>
</ul>
<p><strong>Production utility</strong>: Operators maintaining intentional stranger relationships for network diversity, preventing specific nodes from becoming preferred peers, or any scenario where two nodes want to cooperate at session level without mesh entanglement.</p>

<h3><code>--max-mesh &lt;n&gt;</code> CLI flag <span class="badge badge-planned">Planned</span></h3>
<p>Topology control at network scale. Forces a node to cap its mesh connections, keeping it permanently in N2 of other nodes. <strong>Testing affordance only</strong> — not for production use.</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><code>--max-mesh 0</code>: Pure N2 participant, never takes mesh slots. <strong>Warning</strong>: free rider — consumes routing knowledge without carrying mesh load.</li>
<li><code>--max-mesh 3</code>: Partial mesh, useful for testing sparse topologies</li>
<li><code>--max-mesh 101</code>: Default, full normal behavior</li>
<li>Node responds to all protocol messages normally, never initiates or accepts mesh upgrades beyond the cap</li>
<li>Reuses existing <code>RefuseRedirect</code> (<code>0x05</code>) — no new protocol machinery</li>
</ul>

<h3>Keepalive</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Interval</strong>: 30 seconds (<code>MeshKeepalive</code> message, <code>0xE0</code>)</li>
<li><strong>Zombie detection</strong>: No stream activity for 600s (10 min) = zombie, removed in rebalance</li>
<li><code>last_activity</code> updated on every stream accept</li>
</ul>
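The zombie rule above reduces to a single timestamp comparison. A minimal sketch, assuming Unix-second timestamps like the <code>last_activity</code> AtomicU64 described in the MeshConnection struct (the function name is illustrative):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// No stream activity for 600 s marks a mesh connection as a
/// zombie, to be removed in the next rebalance cycle.
const ZOMBIE_AFTER_SECS: u64 = 600;

fn is_zombie(last_activity_unix: u64, now_unix: u64) -> bool {
    now_unix.saturating_sub(last_activity_unix) > ZOMBIE_AFTER_SECS
}

fn main() {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    assert!(!is_zombie(now, now));      // fresh activity: alive
    assert!(is_zombie(now - 601, now)); // idle past the 600 s threshold: zombie
}
```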
</section>

<!-- 5. Connection Lifecycle -->
<section id="lifecycle">
<h2>5. Connection Lifecycle</h2>

<h3>5.1 Growth Loop (60s timer + reactive on N2/N3 receipt)</h3>
<p><strong>Timer</strong>: Fires every 60 seconds. Checks current mesh count. If &lt; 101, runs a growth cycle.</p>
<p><strong>Reactive trigger</strong>: Fires immediately after receiving a peer's N2/N3 list (from initial exchange or routing diff). Continues firing on each new N2/N3 receipt until mesh is 90% full (~91 connections). After 90%, switches to timer-only mode.</p>

<p><strong>Candidate selection</strong> (N2 diversity scoring):</p>
<pre><code>score = 1.0 / reporter_count + (0.3 if not_in_N3)</code></pre>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Fewer reporters = higher diversity = better candidate</li>
<li>Bonus for locally-discovered peers (not transitive)</li>
<li>Sorted descending, best candidate tried first</li>
<li>Growth loop goes wide by default — no local/wide distinction</li>
<li><strong>Blacklist filter</strong>: Skip nodes in <code>mesh_blacklist</code> table (see <a href="#connections">Section 4</a>)</li>
</ul>
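The scoring formula and descending sort above can be expressed directly. A minimal sketch — the <code>Candidate</code> fields stand in for lookups against the local N2/N3 tables, and the names are illustrative:

```rust
/// Diversity score from the formula above: fewer reporters means a
/// more diverse candidate; locally-discovered peers (not in N3) get
/// a flat 0.3 bonus.
struct Candidate {
    id: u64,
    reporter_count: u32, // how many peers reported this node in their N1
    in_n3: bool,         // known transitively via a peer's N2 share
}

fn diversity_score(c: &Candidate) -> f64 {
    1.0 / c.reporter_count as f64 + if c.in_n3 { 0.0 } else { 0.3 }
}

fn main() {
    let mut cands = vec![
        Candidate { id: 1, reporter_count: 4, in_n3: true },  // 0.25
        Candidate { id: 2, reporter_count: 1, in_n3: false }, // 1.30
        Candidate { id: 3, reporter_count: 2, in_n3: true },  // 0.50
    ];
    // Sorted descending: the best candidate is tried first.
    cands.sort_by(|a, b| diversity_score(b).partial_cmp(&diversity_score(a)).unwrap());
    assert_eq!(cands[0].id, 2);
    assert_eq!(cands[1].id, 3);
}
```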
<p><strong>Connection attempt cascade</strong>:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Direct connect</strong> (15s timeout) — use stored/resolved address</li>
<li><strong>Introduction fallback</strong> — find N2 reporters who know this peer, ask each to relay-introduce us</li>
</ol>
<p><strong>Failure handling</strong>: Track consecutive failures. After 3 consecutive failures, back off (break loop, wait for next signal). Mark unreachable peers for future skipping.</p>

<h3>5.2 Rebalance Cycle (every 600s)</h3>
<p>Executed in priority order:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Dead connection removal</strong>: Remove connections with <code>close_reason()</code> set, or idle > 600s (zombie)</li>
<li><strong>Stale entry pruning</strong>: N2/N3 entries tagged to a peer that is no longer connected are pruned immediately (on disconnect and on startup sweep). Age-based fallback: entries older than 7 days. Social route watchers older than 30 days.</li>
<li><strong>Priority 0 — Preferred peer reconnection</strong>: Iterate <code>preferred_peers</code> table, reconnect any that are disconnected. If at capacity, evict the lowest-diversity non-preferred peer to make room. Prune preferred peers unreachable for 7+ days (slot released, does NOT auto-return on reconnect — must re-negotiate via MeshPrefer). After 7 days, social checkin frequency drops from 1–3 hours to daily until the 30-day reconnect watcher expires.</li>
<li><strong>Priority 1 — Reconnect recently dead</strong>: Re-establish dropped non-preferred connections. <strong>Skip blacklisted nodes</strong> — do not attempt reconnection to peers in <code>mesh_blacklist</code>.</li>
<li><strong>Priority 2 — Signal growth loop</strong>: Fill remaining empty slots via growth loop</li>
<li><strong>Idle session cleanup</strong>: Reap interactive sessions idle > 300s (5 min). Keep-alive sessions are NOT reaped by idle timeout.</li>
<li><strong>Relay intro dedup pruning</strong>: Clear <code>seen_intros</code> entries older than 30s, cap at 500</li>
</ol>
<div class="note">
<strong>Note</strong>: Low diversity score alone does NOT trigger eviction. The only eviction path is Priority 0 (making room for a preferred peer).
</div>

<h3>5.3 Recovery Loop (reactive, mesh empty)</h3>
<p><strong>Trigger</strong>: <code>disconnect_peer()</code> fires when last mesh connection drops.</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Debounce 2 seconds (wait for cascading disconnects to settle)</li>
<li>Gather anchors: <code>known_anchors</code> table ordered by <code>last_seen DESC</code> (LIFO — most recently seen is most likely still reachable) → fallback to hardcoded default anchor(s) only if known_anchors empty or exhausted</li>
<li>For each anchor: connect, request referrals and matchmaking, try direct connect to each referral, fallback to hole punch via anchor for unreachable referrals</li>
<li>Persist on anchor's referral list until released, begin growth loop immediately</li>
<li><strong>Post-bootstrap stale anchor cleanup</strong>: After successful bootstrap/recovery, probe <code>known_anchors</code> entries where <code>last_seen > 7 days</code>. Success: update <code>last_seen</code>. Failure: DELETE from <code>known_anchors</code>. Reuses existing anchor probe machinery (<code>0xC3</code>/<code>0xC4</code>). No new cycle or timer — runs as final step of bootstrap/recovery.</li>
</ol>

<h3>5.4 Initial Exchange (on every new connection)</h3>
<p>When two nodes connect, they exchange:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>N+10</strong>: Our NodeId + 10 preferred peers' NodeIds</li>
<li><strong>N1 share</strong>: mesh peers + social contacts NodeIds (merged, no addresses)</li>
<li><strong>N2 share</strong>: deduplicated N2 NodeIds (no addresses)</li>
<li><strong>Profile</strong>: PublicProfile (display name, bio, avatar CID, <code>public_visible</code> flag)</li>
<li><strong>Delete records</strong>: Signed post deletions</li>
<li><strong>Post IDs</strong>: All local post IDs (for replica tracking)</li>
<li><strong>Peer addresses</strong>: N+10 address list for connected peers</li>
</ul>
<p><strong>Processing</strong>: Their N1 → our N2 table (tagged to reporter). Their N2 → our N3 table (tagged to reporter). Store profile, apply deletes, record replica overlaps. <strong>Trigger growth loop</strong> immediately with new N2/N3 candidates if mesh &lt; 90% full.</p>
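The N1→N2 / N2→N3 promotion with reporter tagging can be sketched as follows. This is an illustrative in-memory model, not the actual SQLite schema; the type and method names are hypothetical, and NodeIds are shortened to <code>u64</code> for brevity.

```rust
use std::collections::{HashMap, HashSet};

type NodeId = u64; // stand-in for the 32-byte ed25519 key

/// Sketch of initial-exchange processing: the peer's N1 share lands
/// in our N2 table and their N2 share lands in our N3 table, each
/// entry tagged with the reporting peer so it can be pruned on
/// disconnect (as the startup sweep and rebalance cycle require).
#[derive(Default)]
struct Routes {
    n2: HashMap<NodeId, HashSet<NodeId>>, // node -> set of reporters
    n3: HashMap<NodeId, HashSet<NodeId>>,
}

impl Routes {
    fn apply_exchange(&mut self, reporter: NodeId, their_n1: &[NodeId], their_n2: &[NodeId]) {
        for &n in their_n1 {
            self.n2.entry(n).or_default().insert(reporter);
        }
        for &n in their_n2 {
            self.n3.entry(n).or_default().insert(reporter);
        }
    }

    /// Drop every entry tagged to a reporter once it disconnects;
    /// entries with other surviving reporters are kept.
    fn prune_reporter(&mut self, reporter: NodeId) {
        for table in [&mut self.n2, &mut self.n3] {
            table.retain(|_, reps| {
                reps.remove(&reporter);
                !reps.is_empty()
            });
        }
    }
}

fn main() {
    let mut routes = Routes::default();
    routes.apply_exchange(7, &[1, 2], &[3]);
    assert!(routes.n2.contains_key(&1) && routes.n3.contains_key(&3));
    routes.prune_reporter(7);
    assert!(routes.n2.is_empty() && routes.n3.is_empty());
}
```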

        <h3>5.5 Incremental Routing Diffs (every 120s + on change)</h3>
        <p><code>NodeListUpdate</code> (<code>0x01</code>) contains N1 added/removed, N2 added/removed. Sent via uni-stream to all mesh peers <strong>and keep-alive sessions</strong>. Receiver processes: their N1 adds → our N2 adds, their N2 adds → our N3 adds, etc.</p>
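
        <p>Applied symmetrically for removals, the receiver's rule set can be sketched as (illustrative pseudocode):</p>
        <pre><code>on_node_list_update(from, diff):          // 0x01
  for id in diff.n1_added:   insert_reachable_n2(id, reporter = from)
  for id in diff.n1_removed: delete_reachable_n2(id, reporter = from)
  for id in diff.n2_added:   insert_reachable_n3(id, reporter = from)
  for id in diff.n2_removed: delete_reachable_n3(id, reporter = from)</code></pre>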
    </section>

    <!-- 6. Network Knowledge Layers -->
    <section id="layers">
        <h2>6. Network Knowledge Layers (N1/N2/N3)</h2>

        <table>
            <tr><th>Layer</th><th>Source</th><th>Contains</th><th>Shared?</th><th>Stored in</th></tr>
            <tr><td>N1</td><td>Our connections + social contacts</td><td>NodeIds only</td><td>Yes (as "N1 share")</td><td><code>mesh_peers</code> + <code>social_routes</code></td></tr>
            <tr><td>N2</td><td>Peers' N1 shares</td><td>NodeIds tagged by reporter</td><td>Yes (as "N2 share")</td><td><code>reachable_n2</code></td></tr>
            <tr><td>N3</td><td>Peers' N2 shares</td><td>NodeIds tagged by reporter</td><td>Never</td><td><code>reachable_n3</code></td></tr>
        </table>

        <h3>&lt;N4 access</h3>
        <p>A node has <strong>&lt;N4 access</strong> to a target if the target appears in its N1, N2, or N3 tables. This means the target is reachable within 3 hops without needing worm search or relay introduction. The social/file connectivity check (see <a href="#keep-alive">Section 16</a>) uses &lt;N4 access to determine whether keep-alive sessions are needed.</p>
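
        <p>The check is a plain membership test across the tables above (illustrative pseudocode):</p>
        <pre><code>has_lt_n4_access(target):
  return target in mesh_peers       // N1: live connections
      or target in social_routes    // N1: social contacts
      or target in reachable_n2     // N2
      or target in reachable_n3     // N3</code></pre>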

        <h3>What is NEVER shared</h3>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li>Addresses (resolved on-demand via chain queries)</li>
            <li>N3 entries (search-only, never forwarded)</li>
            <li>Duplication counts (topology leak)</li>
            <li>Which NodeIds are social contacts vs mesh peers (merged in N1 share)</li>
        </ul>

        <h3>Address resolution cascade (<code>connect_by_node_id</code>)</h3>
        <table>
            <tr><th>Step</th><th>Method</th><th>Timeout</th><th>Source</th></tr>
            <tr><td>0</td><td>Social route cache</td><td>—</td><td><code>social_routes</code> table (cached addresses for follows/audience)</td></tr>
            <tr><td>1</td><td>Peers table</td><td>—</td><td>Stored address from previous connection</td></tr>
            <tr><td>2</td><td>N2 ask reporter</td><td>varies</td><td>Ask the mesh peer who reported target in their N1</td></tr>
            <tr><td>3</td><td>N3 chain resolve</td><td>varies</td><td>Ask reporter's reporter (2-hop chain)</td></tr>
            <tr><td>4</td><td>Worm search</td><td>3s total</td><td>Burst to all peers → nova to N2 referrals (each does own burst)</td></tr>
            <tr><td>5</td><td>Relay introduction</td><td>15s</td><td>Hole punch via intermediary relay</td></tr>
            <tr><td>6</td><td>Session relay</td><td>—</td><td>Pipe traffic through intermediary (own-device or opt-in)</td></tr>
        </table>
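
        <p>The cascade falls through in table order, returning on the first step that yields a connection (illustrative pseudocode):</p>
        <pre><code>connect_by_node_id(target):
  for step in [ social_route_cache,      // 0
                peers_table,             // 1
                n2_ask_reporter,         // 2
                n3_chain_resolve,        // 3
                worm_search,             // 4 (3s total)
                relay_introduction,      // 5 (15s)
                session_relay ]:         // 6 (own-device or opt-in)
    if conn = step(target): return conn
  return unreachable</code></pre>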
    </section>

    <!-- 7. Three-Layer Architecture -->
    <section id="three-layers">
        <h2>7. Three-Layer Architecture (Mesh / Social / File)</h2>

        <p>The network operates across three distinct layers, each with its own connections, routing, and purpose. The separation enables specialized behavior without the layers interfering with each other.</p>

        <table>
            <tr><th>Layer</th><th>Purpose</th><th>Connections</th><th>Sync trigger</th></tr>
            <tr><td><strong>Mesh</strong></td><td>Structural backbone: N1/N2/N3 routing, diversity, discovery</td><td>101 mesh slots (preferred + non-preferred)</td><td>N/A — mesh is infrastructure, not content</td></tr>
            <tr><td><strong>Social</strong></td><td>Follows, audience, DMs — the human relationships</td><td>Social routes + keep-alive sessions as needed</td><td>Pull posts when Self Last Encounter > 3 hours</td></tr>
            <tr><td><strong>File</strong></td><td>Content storage and distribution — blobs, CDN trees</td><td>Upstream/downstream file peers + keep-alive sessions as needed</td><td>Pull on blob request, push on post creation</td></tr>
        </table>

        <h3>Key principle: mesh is not for content</h3>
        <p>Pull sync does <strong>not</strong> pull posts from mesh peers. Mesh connections exist for routing diversity and discovery. Content flows through the social layer (posts from people you follow) and the file layer (blobs from upstream/downstream hosts). This separation means mesh connections can be optimized purely for network topology without social bias.</p>

        <h3>Cross-layer benefits</h3>
        <p>Each layer's connections contribute to finding nodes and referrals for the other layers. Keep-alive sessions from the social and file layers participate in N2/N3 routing, which improves &lt;N4 access for all three layers. A social keep-alive session might provide the N2 entry that helps the mesh growth loop find a diverse new peer, and vice versa.</p>
    </section>

    <!-- 8. Anchors -->
    <section id="anchors">
        <h2>8. Anchors</h2>
        <h3>Intent</h3>
        <p>Anchors are "just peers that are directly reachable" — standard ItsGoin nodes with a routable address. They run the same code with no special protocol. Their value comes from being directly connectable for <strong>bootstrapping</strong> new nodes into the network and <strong>matchmaking</strong> (introducing peers to each other). Anchors include VPS-deployed nodes (always-on) and desktop nodes with UPnP port mappings (see <a href="#upnp">Section 11</a>).</p>
        <p>Each profile can carry a preferred anchor list — infrastructure addresses, not social signals.</p>

        <h3>Status: <span class="badge badge-complete">Complete</span> (with gaps)</h3>

        <h3>When anchors are used</h3>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Bootstrap</strong>: First startup with empty peers table. Connect to anchor, request referrals and matchmaking, persist on referral list while growing mesh.</li>
            <li><strong>Recovery</strong>: When mesh drops to 0 connections. Same flow as bootstrap.</li>
            <li><strong>Not ongoing</strong>: Nodes do NOT register with anchors on a loop. Anchors are for forming initial connections, not for ongoing presence.</li>
            <li><strong>itsgoin.net node</strong>: A permanent, well-connected ItsGoin node runs on itsgoin.net as part of the share link redirect infrastructure (see <a href="#share-links">Section 26</a>). This node participates in the network as a standard anchor — it bootstraps new nodes, accepts referral requests, and is included in <code>known_anchors</code> by peers that connect through it. It is not special-cased in the protocol. Its value as an anchor comes from permanent uptime and high mesh connectivity, not from any privileged role.</li>
        </ul>

        <h3>Anchor referral mechanics</h3>
        <p>When a bootstrapping node connects, the anchor provides referrals from its mesh and referral list. The node persists on the anchor's referral list until released at the referral count limit. During this time, the anchor can matchmake — introducing the new node to other peers requesting referrals.</p>

        <h3>Anchor selection order</h3>
        <ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong><code>known_anchors</code> table</strong> — <code>ORDER BY last_seen DESC</code> (LIFO). The most recently seen anchor is most likely still reachable, particularly given short-lived home desktop anchors.</li>
            <li><strong>Hardcoded default anchor(s)</strong> — only if <code>known_anchors</code> is empty or exhausted. A brand-new node hits hardcoded anchors once on first bootstrap, populates <code>known_anchors</code> from that session, and the hardcoded list recedes to pure fallback.</li>
        </ol>
        <p>No scoring, no success counting, no prediction. Attempt, move to next on failure. The <code>known_anchors</code> table stores only: <code>node_id</code>, <code>addresses</code>, <code>last_seen</code>.</p>
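
        <p>The table is deliberately minimal. As a sketch (illustrative SQL; the columns are exactly those listed above):</p>
        <pre><code>CREATE TABLE known_anchors (
  node_id   TEXT PRIMARY KEY,
  addresses TEXT NOT NULL,     -- serialized address list
  last_seen INTEGER NOT NULL   -- selection is ORDER BY last_seen DESC
);</code></pre>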

        <h3>Anchor self-verification <span class="badge badge-complete">Complete</span></h3>
        <p>Nodes with UPnP-mapped IPv4 or IPv6 public addresses cannot self-certify as anchors — they need external verification that they are genuinely reachable by cold direct connect. A node is a viable anchor only if a complete stranger can connect to it directly with no introduction, no hole punch, and no relay.</p>

        <h4>Witness selection</h4>
        <p>Node A (candidate anchor) selects a witness from its own N2 table entries NOT present in its N1. These are genuine strangers — no prior connection, no cached address, no warm path. A selects one (call it C) and knows C's address via the N1 reporter (call it B) who reported C in their N1 share.</p>

        <h4>Probe message flow</h4>
        <pre><code>A → B (N1 reporter of C): AnchorProbeRequest {
  target_addr,   // A's external address to test
  witness,       // C's NodeId
  return_via,    // B's NodeId (for failure reporting)
}

B → C: forward AnchorProbeRequest

C: cold direct QUIC connect to target_addr
   — MUST use only raw QUIC connect (step 1 of connect_by_node_id)
   — MUST skip entire resolution cascade, hole punch, introduction, relay
   — 15s timeout

SUCCESS: C → A directly (on new connection): AnchorProbeResult { reachable: true }
FAILURE: C → B → A: AnchorProbeResult { reachable: false }</code></pre>
        <p><strong>Asymmetric return path</strong>: If cold connect fails, by definition there is no direct path from C to A. C reports failure through B (who has a live connection to A). On success, C has a fresh direct connection and uses it. The <code>return_via</code> field tells C which node to route failure through.</p>
        <p><strong>Why bypass the cascade</strong>: The normal <code>connect_by_node_id</code> cascade has 7 steps including hole punch and relay. If C uses the full cascade, a successful result via relay is a false positive. The probe handler must be a special code path: raw QUIC connect only.</p>

        <h4>Anchor candidacy checklist</h4>
        <pre><code>is_anchor_candidate():
  - has UPnP mapping OR has IPv6 public address
  - probe succeeded within last 30 minutes
  - mesh ≥ 50 peers (sufficient N2 density)
  - uptime ≥ 2 hours continuous
  - NOT mobile (platform check at build time)</code></pre>

        <h4>Probe refresh schedule</h4>
        <table>
            <tr><th>Trigger</th><th>Action</th></tr>
            <tr><td>Startup (after UPnP attempt)</td><td>Run initial probe</td></tr>
            <tr><td>UPnP renewal if address changed</td><td>Re-probe</td></tr>
            <tr><td>Every 30 minutes while anchor-declared</td><td>Periodic re-probe</td></tr>
            <tr><td>Any failed inbound connection</td><td>Immediate re-probe</td></tr>
            <tr><td>Two consecutive probe failures</td><td>Stop advertising as anchor, revert to normal peer</td></tr>
        </table>

        <h3>Session fallback for full anchors</h3>
        <p>When an anchor's mesh is full (101/101), new nodes fall back to a session connection for matchmaking. The anchor accepts referral requests over session connections, not just mesh.</p>

        <h3>Remaining gaps</h3>
        <table>
            <tr><th>Gap</th><th>Impact</th></tr>
            <tr><td>Profile anchor lists not used for discovery</td><td>Profiles have an <code>anchors</code> field but it's not consulted during address resolution</td></tr>
            <tr><td>No anchor-to-anchor awareness</td><td>Anchors don't discover each other unless they connect through normal mesh growth</td></tr>
            <tr><td>Bootstrap chicken-and-egg</td><td>A fresh anchor with few peers produces few N2 candidates for new nodes. Growth stalls because there's nothing to grow from.</td></tr>
        </table>
    </section>

    <!-- 9. Referrals -->
    <section id="referrals">
        <h2>9. Referrals</h2>
        <h3>Status: <span class="badge badge-complete">Complete</span></h3>

        <h3>Referral list mechanics (anchor side)</h3>
        <p>Anchors maintain an in-memory HashMap of registered peers. Each entry: <code>{ node_id, addresses, use_count, disconnected_at }</code>.</p>
        <table>
            <tr><th>Property</th><th>Value</th></tr>
            <tr><td>Tiered usage caps</td><td>3 uses if list < 50, 2 uses at 50+, 1 use at 100+</td></tr>
            <tr><td>Disconnect grace</td><td>2 minutes before pruning</td></tr>
            <tr><td>Sort order</td><td>Least-used first (distributes load)</td></tr>
            <tr><td>Auto-supplement</td><td>When explicit list is sparse (< 3 entries), supplement with random mesh peers</td></tr>
        </table>
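
        <p>The tiered cap and sort order combine into a simple selection rule (illustrative pseudocode):</p>
        <pre><code>max_uses(list_len):
  if list_len >= 100: return 1
  if list_len >= 50:  return 2
  return 3                              // list < 50

pick_referrals(n):
  sort entries by use_count ascending   // least-used first distributes load
  return first n entries with use_count < max_uses(len(entries))</code></pre>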
    </section>

    <!-- 10. Relay & NAT Traversal -->
    <section id="relay">
        <h2>10. Relay & NAT Traversal</h2>
        <h3>Status: <span class="badge badge-complete">Complete</span></h3>

        <h3>Relay selection (<code>find_relays_for</code>)</h3>
        <p>Find up to 3 relay candidates, prioritized:</p>
        <ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Preferred tree intersection</strong>: Target's <code>preferred_tree</code> (from <code>social_routes</code>, ~100 NodeIds) intersected with our connections. Prefer our own preferred peers within that tree. TTL=0.</li>
            <li><strong>N2 reporters</strong>: Our mesh peers who reported the target in their N1 share. TTL=0.</li>
            <li><strong>N3 via preferred tree</strong>: Target's <code>preferred_tree</code> intersected with N3 reporters. TTL=1.</li>
            <li><strong>N3 reporters</strong>: Any N3 reporter for the target. TTL=1.</li>
        </ol>
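
        <p>The priority order can be sketched as (illustrative pseudocode):</p>
        <pre><code>find_relays_for(target):
  tree = social_routes[target].preferred_tree        // ~100 NodeIds
  c1 = our_connections ∩ tree                        // TTL=0, prefer our preferred peers
  c2 = mesh peers whose N1 share contained target    // TTL=0
  c3 = n3_reporters(target) ∩ tree                   // TTL=1
  c4 = n3_reporters(target)                          // TTL=1
  return first 3 of (c1, c2, c3, c4)</code></pre>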

        <h3>RelayIntroduce flow (<code>0xB0</code>/<code>0xB1</code>)</h3>
        <ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Requester</strong> → opens bi-stream to relay, sends <code>RelayIntroduce { target, requester, requester_addresses, ttl }</code></li>
            <li><strong>Relay</strong> handles three cases:
                <ul style="margin-top: 0.3rem;">
                    <li><strong>We ARE the target</strong>: Return our addresses, spawn hole punch to requester</li>
                    <li><strong>Target is our mesh or session peer</strong>: Forward request to target on new bi-stream, relay response back. Inject observed public addresses for both parties (session peers carry <code>remote_addr</code> from their inbound connection).</li>
                    <li><strong>TTL > 0 and target in our N2</strong>: Forward to the reporter with TTL-1 (chain forwarding, max TTL=2)</li>
                </ul>
            </li>
            <li><strong>Requester</strong> receives <code>RelayIntroduceResult { target_addresses, relay_available }</code>, then:
                <ul style="margin-top: 0.3rem;">
                    <li><code>hole_punch_parallel()</code>: Try all returned addresses in parallel, retry every 2s, 30s total timeout</li>
                    <li>If hole punch fails and <code>relay_available</code>: open <code>SessionRelay</code> (<code>0xB2</code>) pipe through the intermediary</li>
                </ul>
            </li>
        </ol>

        <h3>Session relay (relay pipes)</h3>
        <p>Intermediary splices bi-streams between requester and target. Desktop: max 10 concurrent pipes. Mobile: max 2. Each pipe has a 50MB byte cap and 2-min idle timeout.</p>
        <div class="note">
            <strong>v0.2.0 change</strong>: Relay pipes are <strong>own-device-only by default</strong>. A node will only relay traffic between its own devices (same identity key, different device identity). Users can opt in to relaying for others in Settings, but this is not enabled automatically. This prevents nodes from unknowingly burning bandwidth for random peers while still enabling personal multi-device routing.
        </div>

        <h3>Deduplication & cooldowns</h3>
        <table>
            <tr><th>Mechanism</th><th>Window</th><th>Purpose</th></tr>
            <tr><td><code>seen_intros</code></td><td>30s</td><td>Prevents forwarding loops</td></tr>
            <tr><td><code>relay_cooldowns</code></td><td>5 min per target</td><td>Prevents relay spamming</td></tr>
        </table>

        <h3>Hole punch mechanics</h3>
        <p>Both sides filter self-reported addresses to publicly-routable only (no Docker bridge, VPN, or LAN IPs) and prepend UPnP external address if available. The relay injects each party's observed public address (from the QUIC connection) at the front of the list. All paths use <code>hole_punch_parallel()</code>: parse returned addresses into QUIC <code>EndpointAddr</code>, spawn parallel connect attempts to every address simultaneously. Each attempt: 2s timeout, retried until 30s total deadline. First successful connection wins.</p>
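
        <p>A sketch of the parallel attempt loop (illustrative pseudocode; the 2s/30s timings are as stated above):</p>
        <pre><code>hole_punch_parallel(addresses):
  deadline = now + 30s
  for addr in addresses:               // all addresses simultaneously
    spawn task:
      while now < deadline:
        if quic_connect(addr, timeout = 2s) succeeds: report(addr)
  return first reported connection, or fail at deadline</code></pre>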

        <h3>NAT type detection</h3>
        <h3>Status: <span class="badge badge-complete">Complete</span> (interim: public STUN servers)</h3>

        <p>On startup, each node classifies its NAT type as one of four categories:</p>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Public</strong> — observed address matches local, or UPnP-mapped. Directly reachable.</li>
            <li><strong>Easy</strong> — same mapped port from multiple probes (endpoint-independent mapping / cone NAT). Hole punch will likely succeed.</li>
            <li><strong>Hard</strong> — different mapped ports per destination (symmetric / address-dependent mapping). Port is unpredictable.</li>
            <li><strong>Unknown</strong> — detection failed or not yet run.</li>
        </ul>

        <h4>Current implementation (interim)</h4>
        <p>Raw STUN Binding Requests (20 bytes, no crate dependency) sent to <code>stun.l.google.com:19302</code> and <code>stun.cloudflare.com:3478</code> from a single UDP socket. XOR-MAPPED-ADDRESS parsed from each response (IPv4 + IPv6 supported). Comparison: same mapped port = Easy, different = Hard, matches local = Public. 3s timeout per server. UPnP success overrides to Public. Anchors skip probing entirely (already Public).</p>
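
        <p>The comparison step reduces to (illustrative pseudocode; <code>mapped_a</code>/<code>mapped_b</code> are the XOR-MAPPED-ADDRESS results from the two servers):</p>
        <pre><code>classify_nat(local, mapped_a, mapped_b):
  if upnp_succeeded or mapped_a == local: return Public
  if mapped_a.port == mapped_b.port:      return Easy   // endpoint-independent mapping
  return Hard                                           // port differs per destination</code></pre>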

        <h4>Target design (multi-anchor STUN)</h4>
        <p>When the network has enough anchors, replace public STUN servers with anchor-reported <code>your_observed_addr</code> from InitialExchange. Connecting to <strong>two or more anchors</strong> at different public IPs provides the same classification without external dependencies.</p>

        <h4>NAT type sharing</h4>
        <p>NAT type is included as a string field (<code>"public"</code>/<code>"easy"</code>/<code>"hard"</code>/<code>"unknown"</code>) in <code>InitialExchangePayload</code>. Stored per-peer in the <code>peers</code> table (<code>nat_type TEXT</code> column). Available for hole punch decisions before any connection attempt.</p>

        <h4>Hole punch strategy</h4>
        <table>
            <tr><th>Peer A</th><th>Peer B</th><th>Strategy</th></tr>
            <tr><td>Public / Easy</td><td>Any</td><td>Hole punch (likely success)</td></tr>
            <tr><td>Hard NAT</td><td>Easy NAT</td><td>Hole punch (B's port is predictable)</td></tr>
            <tr><td>Hard NAT</td><td>Hard NAT</td><td><strong>Port scanning</strong> — <code>hole_punch_with_scanning()</code> tries standard punch first, then escalates to tiered port scanning (±500, ±2000, full ephemeral range)</td></tr>
        </table>
        <p>All hole punch paths use <code>hole_punch_with_scanning()</code>, which replaces the former hard+hard skip. NAT profiles (NatMapping + NatFiltering) from InitialExchange determine whether scanning is attempted. Behavioral inference updates filtering classification from connection outcomes.</p>

        <h3>Advanced NAT traversal</h3>
        <h3>Status: <span class="badge badge-complete">Complete</span></h3>

        <p>NAT "hardness" has two independent dimensions:</p>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Mapping</strong>: Endpoint-Independent Mapping (EIM / "easy") uses the same external port for all destinations. Endpoint-Dependent Mapping (EDM / "hard") assigns a different port per destination.</li>
            <li><strong>Filtering</strong>: Address-restricted (Open) accepts from any port on an IP the host has sent to. Port-restricted accepts only from the exact IP:port the host has sent to.</li>
        </ul>
        <p>STUN probing at startup classifies mapping (EIM/EDM). Filtering is determined reliably via the anchor filter probe.</p>

        <h4>NAT filter probe (0xC6/0xC7)</h4>
        <p>After anchor registration, each node with Unknown filtering sends a <code>NatFilterProbe</code> bi-stream request to its anchor. The anchor creates a temporary QUIC endpoint on a random port and attempts to connect to the node's observed address (2s timeout). If the connection succeeds, the node is <strong>Open</strong> (address-restricted or better — accepts packets from any port on the anchor's IP). If it times out, the node is <strong>PortRestricted</strong>.</p>
        <p>This probe runs once at startup (during the anchor register cycle) and the result feeds into all subsequent InitialExchange payloads, so peers know each other's exact filtering type.</p>
        <p><strong>Note:</strong> "Public" NAT type does not automatically mean Open filtering. A node may be public on IPv6 but NATed on IPv4. The filter probe tests actual reachability from a different port, regardless of self-declared NAT type. Startup logs now report <code>public (v4 only)</code>, <code>public (v6 only)</code>, or <code>public (v4+v6)</code>.</p>

        <h4>NAT combination matrix</h4>
        <table>
            <tr><th>Side A</th><th>Side B</th><th>Result</th></tr>
            <tr><td>addr-restricted, EIM</td><td>addr-restricted, EDM</td><td>Basic hole punch</td></tr>
            <tr><td>port-restricted, EIM</td><td>addr-restricted, EDM</td><td>A scans to find+open port; B punches A's stable port regularly</td></tr>
            <tr><td>addr-restricted, EDM</td><td>port-restricted, EDM</td><td>B scans to find+open port; A waits then responds</td></tr>
            <tr><td>port-restricted, EDM</td><td>port-restricted, EDM</td><td>Both scan+punch alternately</td></tr>
            <tr><td>addr-restricted, EIM</td><td>addr-restricted, EIM</td><td>Basic hole punch</td></tr>
            <tr><td>port-restricted, EIM</td><td>addr-restricted, EIM</td><td>Basic hole punch</td></tr>
            <tr><td>addr-restricted, EDM</td><td>port-restricted, EIM</td><td>B scans to find+open port; A punches B's stable port regularly</td></tr>
            <tr><td>port-restricted, EDM</td><td>port-restricted, EIM</td><td>B scans to find+open port; A punches B's stable port regularly</td></tr>
        </table>
        <p>Key insight: if both sides have <strong>Open</strong> (address-restricted) filtering, scanning is never needed — <code>should_try_scanning()</code> returns false and basic hole punch handles it.</p>
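
        <p>Consistent with the matrix above, the gating check can be sketched as (illustrative pseudocode):</p>
        <pre><code>should_try_scanning(a, b):
  if a.filtering == Open and b.filtering == Open:
    return false                  // basic hole punch always suffices
  return a.mapping == EDM or b.mapping == EDM</code></pre>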

        <h4>Role-based scanning protocol</h4>
        <p>Each side independently determines its role based on its own NAT profile:</p>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>EIM (stable port) → Puncher</strong>: Punch peer's anchor-observed address every 2s. Our port is stable — the peer's scanner will find us.</li>
            <li><strong>EDM or Unknown → Scanner+Puncher</strong>: Walk outward from peer's anchor-observed base port at ~100 ports/sec (base, base+1, base-1, base+2, base-2, ...). Each probe opens a firewall entry on our NAT. Also punch every 2s to check if peer has opened their port for us.</li>
        </ul>
        <p>The scanner opens ports on its own firewall. The other side's periodic punch (one every 2s to the scanner's observed address) checks if the scanner has opened a port matching the puncher's actual port. For both-EDM pairs, both sides scan and punch simultaneously.</p>
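
        <p>The outward walk's port order can be sketched as (illustrative pseudocode):</p>
        <pre><code>scan_order(base):                  // base = peer's anchor-observed port
  yield base
  for d in 1, 2, 3, ...:
    yield base + d                 // base, base+1, base-1, base+2, base-2, ...
    yield base - d                 // skip values outside 1..65535
  // one probe every 10ms ≈ 100 ports/sec; ±2000 is covered in ~40s</code></pre>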

        <h4>Scan parameters</h4>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Rate</strong>: ~100 ports/sec (one probe every 10ms)</li>
            <li><strong>In-flight</strong>: ~20 concurrent (100/sec × 200ms connect timeout)</li>
            <li><strong>Direction</strong>: Outward walk from anchor-observed base port</li>
            <li><strong>Target address</strong>: Anchor-observed (relay-injected) address only — not VPN/cellular/LAN addresses</li>
            <li><strong>Max duration</strong>: 5 minutes (covering the full 65K port space at ~100/sec would take ~11 minutes; the ±2000 range is covered within the first 40 seconds)</li>
            <li><strong>Task management</strong>: JoinSet with abort_all() on success or exhaustion — no orphaned tasks</li>
            <li><strong>Punch interval</strong>: Every 2s to peer's anchor-observed address</li>
        </ul>

        <h4>Why 5-minute scan duration is acceptable</h4>
        <p>The cost is time, not resources (~20 in-flight at any time, ~100 probes/sec). For connections that would otherwise be <em>impossible</em> (both EDM + port-restricted), accepting a longer setup time is far better than giving up entirely. Most successful connections resolve within the first 40 seconds (±2000 port range).</p>

        <div class="note">
            <strong>Design principle</strong>: This protocol eliminates the need for full relay in virtually all NAT scenarios. Session relay remains <strong>opt-in only</strong> — it is never used as an automatic fallback. The scanning approach respects the user's intent that peers communicate directly whenever physically possible.
        </div>
    </section>

    <!-- 11. UPnP Port Mapping -->
    <section id="upnp">
        <h2>11. UPnP Port Mapping</h2>
        <h3>Status: <span class="badge badge-complete">Complete</span></h3>

        <h3>Purpose</h3>
        <p>UPnP (Universal Plug and Play) allows a node to request its home router to forward an external port to its local QUIC port. This makes the node <strong>directly reachable from the internet</strong> without hole punching — any peer with the external address can connect immediately. This dramatically improves connection success rates for desktop nodes on home networks.</p>

        <h3>Startup flow</h3>
        <pre><code>bind Endpoint → attempt UPnP mapping (2s timeout) → store external addr → bootstrap</code></pre>
        <ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Discover gateway</strong>: Search for UPnP/NAT-PMP gateway with a 2-second timeout. If no gateway found, proceed without — do not block startup.</li>
            <li><strong>Request mapping</strong>: Map both UDP and TCP for the local QUIC port to the same external port (or next available). UDP is required for QUIC (existing). TCP enables HTTP post delivery (see <a href="#http-delivery">Section 25</a>). Both use the same external port number. If the router supports one but not the other, accept the partial mapping gracefully — QUIC connectivity is not affected by TCP mapping failure. Request lease TTL of 3600s.</li>
            <li><strong>Store external address</strong>: The resulting external <code>SocketAddr</code> is stored alongside iroh's observed addresses. It feeds into N+10 identification, InitialExchange, anchor registration, and all peer address advertisements.</li>
            <li><strong>Log result</strong>: Clearly log whether UPnP succeeded, failed, or was unavailable. This is critical for diagnosing connectivity issues.</li>
        </ol>
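
        <p>The startup attempt and the renewal cycle together can be sketched as (illustrative pseudocode; <code>igd-next</code>'s actual API differs):</p>
        <pre><code>upnp_startup(local_port):
  gateway = discover_gateway(timeout = 2s) or return   // never block startup
  ext = gateway.map(UDP, local_port, lease = 3600s)
        gateway.map(TCP, local_port, lease = 3600s)    // partial success is fine
  store_external_addr(ext); log(result)

every 45 min:                                          // renew before the 3600s lease expires
  if !gateway.renew(): drop_external_addr()            // fall back to hole punch / relay</code></pre>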

        <h3>Lease renewal cycle (every 2700s / 45 min)</h3>
        <p>UPnP mappings have a TTL (typically 3600s but varies by router). A renewal loop runs every 45 minutes to refresh the mapping before it expires. If renewal fails, the external address is removed from advertisements and the node falls back to hole punch / relay paths gracefully.</p>

        <h3>Shutdown</h3>
        <p>Explicitly release the UPnP mapping on clean shutdown. Routers have finite mapping tables — releasing is good citizenship. Tauri's shutdown hook handles this.</p>

        <h3>Integration with existing address logic</h3>
        <p>The UPnP external address is treated the same as any other address the node knows about. It feeds into:</p>
        <ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>N+10 identification</strong>: Included in self-identification so peers store a routable address</li>
            <li><strong>InitialExchange</strong>: Advertised to new connections</li>
            <li><strong>Anchor registration</strong>: Included in bootstrap/recovery registration</li>
            <li><strong>Social routing</strong>: Available in social route address cache for follows/audience</li>
            <li><strong>Relay introduction results</strong>: Returned alongside hole-punch candidate addresses</li>
            <li><strong>Share link host lists</strong>: The UPnP external address, when mapped for TCP, determines whether this node includes itself in share link host lists (see <a href="#share-links">Section 26</a>). A node only self-includes if it has confirmed TCP reachability — either via UPnP TCP mapping or a known public IPv6 address.</li>
        </ul>

        <h3>Why this matters for mobile</h3>
        <p>Mobile devices on cellular networks cannot use UPnP (carrier NAT doesn't expose it). However, if the <strong>peers they're trying to reach</strong> (especially desktop nodes and anchors) have UPnP mappings, those peers become directly reachable from the phone without hole punching. The phone doesn't need UPnP — the other side does.</p>

        <h3>Honest limitations</h3>
        <table>
            <tr><th>Limitation</th><th>Impact</th></tr>
            <tr><td>UPnP disabled on router</td><td>Some ISPs ship routers with UPnP off. Mapping silently fails, fallback to hole punch.</td></tr>
            <tr><td>Double NAT</td><td>ISP modem + user router: mapping reaches inner router but not outer. Partial help at best.</td></tr>
            <tr><td>Cellular networks</td><td>No UPnP at all. This is purely a desktop/home-network feature.</td></tr>
            <tr><td>Carrier-grade NAT (CGNAT)</td><td>ISP shares one public IP across many customers. UPnP maps to the ISP's NAT, not the internet. Same as double NAT.</td></tr>
        </table>

        <div class="note">
            <strong>Design principle</strong>: UPnP is a best-effort enhancement that improves direct connection reliability for the common case. It is not a dependency. The hole punch + relay fallback chain already handles all failure cases — UPnP just reduces how often you fall back to them.
        </div>

        <h3>UPnP nodes are anchors</h3>
        <p>A node with a successful UPnP mapping is directly reachable from the internet — which is the only thing that makes an anchor an anchor. When UPnP mapping succeeds, the node <strong>self-declares as an anchor</strong> (<code>is_anchor = true</code>). Other peers will add it to their <code>known_anchors</code> table, providing diverse bootstrap paths back into the network.</p>
        <p>When the UPnP mapping is lost (lease renewal fails, shutdown), the node reverts to non-anchor. Peers that stored it as an anchor will naturally age it out via <code>last_seen</code> — LIFO ordering means stale anchors drop to the bottom. The 7-day post-bootstrap cleanup probes stale entries and removes failures. No special cleanup is needed beyond the existing anchor infrastructure.</p>
        <p>This means any desktop on a home network with a UPnP-capable router becomes a potential bootstrap point for the network, dramatically increasing the number of available anchors without any manual server deployment.</p>

        <h3>Implementation</h3>
        <p>Crate: <code>igd-next</code> (async support, well-maintained fork of <code>igd</code>). Implementation lives in <code>network.rs</code> alongside the iroh Endpoint — UPnP mapping is an Endpoint concern, not a connection concern.</p>
    </section>

    <!-- 12. LAN Discovery -->
    <section id="lan">
        <h2>12. LAN Discovery</h2>
        <h3>Status: <span class="badge badge-planned">Planned</span></h3>

        <p>iroh's mDNS address lookup broadcasts peer presence on the local network via multicast DNS (service name <code>"irohv1"</code>, backed by the <code>swarm-discovery</code> crate). Currently this is configured as a passive address resolver — if we already know a peer's NodeId, mDNS can resolve its LAN address. But mDNS also <strong>discovers</strong> unknown peers on the same network, and iroh exposes this via <code>MdnsAddressLookup::subscribe()</code>.</p>

        <h3>Discovery flow</h3>
        <ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
            <li><strong>Hold the mDNS handle</strong>: Build <code>MdnsAddressLookup</code> explicitly (not via the endpoint builder) so we retain a clone for subscribing.</li>
            <li><strong>Spawn a LAN scan loop</strong>: Call <code>mdns.subscribe().await</code> to get a stream of <code>DiscoveryEvent::Discovered</code> and <code>DiscoveryEvent::Expired</code> events.</li>
            <li><strong>On discovery</strong>: Extract NodeId + LAN addresses from the event. If not already connected, initiate a direct connection + initial exchange. Register as a <strong>LAN session</strong> (a keep-alive session tagged as local).</li>
            <li><strong>On expiry</strong>: Clean up the LAN session. Peer left the network or powered off.</li>
        </ol>
|
|
|
|
<h3>LAN sessions</h3>
|
|
<p>LAN peers are special: zero-cost bandwidth, sub-millisecond latency, and very likely someone you know (same household/office). They deserve their own treatment beyond regular mesh or session slots:</p>
|
|
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
|
|
<li><strong>Automatic keep-alive</strong>: LAN sessions stay open as long as the peer is on the network (mDNS heartbeat). No idle timeout. Not counted against session slot limits.</li>
|
|
<li><strong>Sync priority</strong>: Pull sync and push notifications go to LAN peers first — instant delivery over the local link.</li>
|
|
<li><strong>Local relay</strong>: LAN peers can relay for each other to the wider internet. A phone behind carrier NAT can relay through the desktop's UPnP-mapped connection. Bandwidth is free (local network), so relay limits can be much more generous than over the internet.</li>
|
|
<li><strong>Blob transfer</strong>: Large blob transfers between LAN peers are essentially free. Prefer LAN peers as blob sources when available.</li>
|
|
</ul>
|
|
|
|
<h3>Design rationale</h3>
|
|
<p>Today, two distsoc devices on the same WiFi network can only find each other if they happen to share a peer that reports them in N2. This is absurd — they're on the same network segment. LAN discovery turns mDNS from a passive address resolver into an active peer source, exploiting the fact that local bandwidth is essentially unlimited.</p>
|
|
<p>The keep-alive + relay pattern means a household with one well-connected desktop and several phones creates its own mini-mesh: the desktop provides anchor-like connectivity, the phones stay connected through it, and everyone syncs instantly over the LAN even when the internet connection drops.</p>
|
|
|
|
<div class="note">
|
|
<strong>Implementation note</strong>: iroh's <code>MdnsAddressLookup::subscribe()</code> returns a <code>Stream<DiscoveryEvent></code>. The <code>DiscoveryEvent::Discovered</code> variant includes <code>EndpointInfo</code> with NodeId + IP addresses. Custom <code>user_data</code> can be set via <code>endpoint.set_user_data_for_address_lookup()</code> to embed distsoc-specific metadata (e.g., display name) in the mDNS TXT record.
|
|
</div>
|
|
</section>
|
|
|
|
<!-- 13. Worm Search -->
<section id="worm">
<h2>13. Worm Search</h2>
<h3>Status: <span class="badge badge-complete">Complete</span></h3>
<p>Used at step 4 of <code>connect_by_node_id</code>, after N2/N3 resolution fails.</p>

<h3>Algorithm</h3>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Build needles</strong>: target NodeId + target's N+10 (up to 10 preferred peers from their profile/cached N+10)</li>
<li><strong>Local check</strong>: Search own connections + N2/N3 for any of the 11 needles. Also check local storage, CDN downstream tree, and blob store for any requested post/blob content.</li>
<li><strong>Burst</strong> (500ms timeout): Send <code>WormQuery{ttl=0}</code> (<code>0x60</code>) to all mesh peers in parallel. Each peer checks their local connections + N2/N3, plus local storage and CDN tree for post/blob content.</li>
<li><strong>Nova</strong> (1.5s timeout): Each burst response includes a random "wide referral" — an N2 peer. Connect to those referrals and send <code>WormQuery{ttl=1}</code>. The referred peer does its own 101-burst (fans out to all its mesh peers with ttl=0). This reaches ~10K nodes with only ~202 relay hops, keeping network pressure low by expanding one hop at a time rather than flooding.</li>
<li><strong>Total timeout</strong>: 3 seconds for the entire search.</li>
</ol>
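<p>The "~10K nodes, ~202 relay hops" figures fall out of simple arithmetic. A minimal sketch, assuming an idealized full mesh of 101 peers, one wide referral per burst response, and no overlap between meshes (real meshes overlap and run below capacity; <code>nova_reach</code> is a hypothetical helper, not project code):</p>

```rust
/// Idealized reach of a burst + nova, counting only queries the searcher
/// itself sends as "relay hops" — assumptions noted in the lead-in.
fn nova_reach(mesh_size: u64) -> (u64, u64) {
    let burst_nodes = mesh_size;              // ttl=0 fan-out to our own mesh
    let referrals = mesh_size;                // one N2 referral per burst response
    let nova_nodes = referrals * mesh_size;   // each referral bursts its own mesh
    let relay_hops = burst_nodes + referrals; // 101 burst sends + 101 nova sends
    (burst_nodes + nova_nodes, relay_hops)
}

fn main() {
    // 101 + 101 * 101 = 10,302 nodes reached for 202 sends.
    assert_eq!(nova_reach(101), (10302, 202));
}
```

With overlapping real-world meshes the reach is lower, but the cost stays linear in mesh size rather than quadratic — the referred peers do the quadratic fan-out locally.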

<h3>Content search</h3>
<p><code>WormQuery</code> carries optional <code>post_id</code> and <code>blob_id</code> fields, enabling unified search for nodes, posts, and blobs in a single query. Each peer checks:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Posts</strong>: local storage (direct match), CDN downstream tree (<code>post_downstream</code> — up to 100 known hosts per post)</li>
<li><strong>Blobs</strong>: local blob store, CDN post ownership (<code>get_blob_post_id</code> → <code>post_downstream</code>)</li>
</ul>
<p><code>WormResponse</code> carries <code>post_holder</code> and <code>blob_holder</code> fields alongside the existing node search results. A content hit (post or blob holder found) is treated as a successful response even without a node match.</p>
<p>The CDN layer is the key multiplier: each node's downstream tree can cover hundreds of posts across dozens of hosts, giving every peer thousands of "I know where that is" answers. Combined with social layer knowledge, even a 202-hop nova covers enormous content space.</p>

<h3>PostFetch (<code>0xD4</code>/<code>0xD5</code>)</h3>
<p>Lightweight single-post retrieval after worm search identifies a holder. Opens a bi-stream to the holder and requests one post by ID. Much lighter than full <code>PullSync</code> — no follow filtering, no batch processing, just the target post.</p>

<h3>Dedup & cooldown</h3>
<table>
<tr><th>Mechanism</th><th>Window</th><th>Purpose</th></tr>
<tr><td><code>seen_worms</code></td><td>10s</td><td>Prevents loops during fan-out</td></tr>
<tr><td>Miss cooldown</td><td>5 min (in DB)</td><td>Prevents repeated searches for unreachable targets</td></tr>
</table>
</section>
|
|
|
|
<!-- 14. Preferred Peers -->
<section id="preferred">
<h2>14. Preferred Peers</h2>
<h3>Status: <span class="badge badge-complete">Complete</span></h3>

<h3>Negotiation (<code>MeshPrefer</code>, <code>0xB3</code>)</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Bilateral</strong>: Requester sends <code>MeshPrefer{requesting: true}</code>, responder accepts/rejects</li>
<li><strong>Acceptance</strong>: Both sides persist to <code>preferred_peers</code> table, upgrade slot to <code>PeerSlotKind::Preferred</code></li>
<li><strong>Rejection reasons</strong>: "not connected", "preferred slots full (N/M)"</li>
</ul>

<h3>Properties</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Eviction-protected</strong>: Never evicted during rebalance (only non-preferred peers can be evicted)</li>
<li><strong>Priority reconnect</strong>: Reconnected first in rebalance (Priority 0), before any growth</li>
<li><strong>Pruned after 7 days unreachable</strong>: If a preferred peer can't be reached for 7 days, the slot is released. The bilateral agreement is cleared — reconnection requires a new MeshPrefer handshake. A reconnect watcher persists for 30 days at low priority (daily check). This prevents churn from aggressive pruning while ensuring slots aren't held indefinitely for offline peers.</li>
<li><strong>N+10 component</strong>: Your 10 preferred peers' NodeIds are included in your N+10 for all identification (see <a href="#nplus10">Section 3</a>)</li>
<li><strong>Preferred tree</strong>: Each social route caches a <code>preferred_tree</code> (~100 NodeIds) — the target's preferred peers' preferred peers. Used for relay selection.</li>
</ul>
</section>
|
|
|
|
<!-- 15. Social Routing -->
<section id="social-routing">
<h2>15. Social Routing</h2>
<h3>Status: <span class="badge badge-complete">Complete</span></h3>
<p>Caches addresses for follows and audience members, separate from mesh connections.</p>

<h3><code>social_routes</code> table</h3>
<table>
<tr><th>Field</th><th>Purpose</th></tr>
<tr><td><code>node_id</code></td><td>The social contact's NodeId</td></tr>
<tr><td><code>nplus10</code></td><td>Their N+10 (NodeId + 10 preferred peers)</td></tr>
<tr><td><code>addresses</code></td><td>Their known IP addresses</td></tr>
<tr><td><code>peer_addresses</code></td><td>Their N+10 contacts (PeerWithAddress list)</td></tr>
<tr><td><code>relation</code></td><td>Follow / Audience / Mutual</td></tr>
<tr><td><code>status</code></td><td>Online / Disconnected</td></tr>
<tr><td><code>last_connected_ms</code></td><td>When we last connected</td></tr>
<tr><td><code>reach_method</code></td><td>Direct / Relay / Indirect</td></tr>
<tr><td><code>preferred_tree</code></td><td>~100 NodeIds for relay tree</td></tr>
</table>

<h3>Wire messages</h3>
<table>
<tr><th>Code</th><th>Name</th><th>Stream</th><th>Purpose</th></tr>
<tr><td><code>0x70</code></td><td>SocialAddressUpdate</td><td>Uni</td><td>Sent when a social contact's address changes or they reconnect</td></tr>
<tr><td><code>0x71</code></td><td>SocialDisconnectNotice</td><td>Uni</td><td>Sent when a social contact disconnects</td></tr>
<tr><td><code>0x72</code></td><td>SocialCheckin</td><td>Bi</td><td>Keepalive with address + N+10 updates</td></tr>
</table>

<h3>Reconnect watchers</h3>
<p><code>reconnect_watchers</code> table: when peer A asks about disconnected peer B, A is registered as a watcher. When B reconnects, A gets a <code>SocialAddressUpdate</code> notification. Watchers pruned after 30 days. Low priority — daily check frequency for watchers older than 7 days.</p>

<h3>Social route lifecycle</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Follow</strong> → store their N+10, upgrade to Mutual (if audience)</li>
<li><strong>Unfollow</strong> → downgrade/remove</li>
<li><strong>Approve audience</strong> → Mutual/Audience</li>
</ul>
</section>
|
|
|
|
<!-- 16. Keep-Alive Sessions -->
<section id="keep-alive">
<h2>16. Keep-Alive Sessions</h2>
<h3>Status: <span class="badge badge-planned">Planned</span></h3>

<h3>Purpose</h3>
<p>When the mesh 101 doesn't provide &lt;N4 access to all the nodes we need for social and file operations, keep-alive sessions bridge the gap. These are long-lived connections that participate in N2/N3 routing but are <strong>not part of the mesh 101</strong>.</p>

<h3>Social/File connectivity check (every 60s)</h3>
<p>Periodically check whether we can reach every node we need. A node is considered reachable if <strong>either</strong>:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>We have &lt;N4 access to their N+10 (within N1/N2/N3), <strong>or</strong></li>
<li>There is an <strong>anchor within N2</strong> of them — we can ask that anchor to matchmake on demand without maintaining a persistent connection</li>
</ul>
<p>Only when neither condition is met do we open a keep-alive session. With UPnP auto-anchors (see <a href="#upnp">Section 11</a>) scattered throughout the network, the odds of an anchor being within N2 of any given peer increase significantly, reducing the number of keep-alive sessions needed.</p>
<p>Nodes to check:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Nodes we DM'd in the last 4 hours</li>
<li>All follows</li>
<li>All audience members</li>
<li>All file upstream peers (for blobs we host)</li>
<li>All file downstream peers (for blobs we serve)</li>
</ul>
<p>For any node whose N+10 is NOT reachable within N3, open a <strong>keep-alive session</strong> to the closest available node in their N+10 (or to them directly if possible). This ensures we can always find and reach our social and file contacts without worm search.</p>

<h3>Keep-alive session behavior</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>N2/N3 routing</strong>: Keep-alive sessions exchange N1/N2 diffs and participate in routing, similar to mesh connections. They expand our network knowledge without consuming mesh slots.</li>
<li><strong>Not counted in mesh 101</strong>: Keep-alive sessions are a separate pool. They don't affect mesh diversity scoring or slot management.</li>
<li><strong>Capacity limit</strong>: Max 50% of total session capacity is reserved for keep-alive sessions. The other 50% remains available for interactive sessions (DMs, group activity).</li>
<li><strong>Not idle-reaped</strong>: Unlike interactive sessions (5-min idle timeout), keep-alive sessions persist as long as the connectivity need exists.</li>
<li><strong>Reevaluated periodically</strong>: The 60s connectivity check closes keep-alive sessions that are no longer needed (e.g., the target now appears in N3 via a mesh connection).</li>
</ul>

<h3>Practical ceilings</h3>
<table>
<tr><th>Platform</th><th>Ceiling</th><th>Binding constraint</th></tr>
<tr><td>Desktop</td><td>~300–500</td><td>Routing diff broadcast overhead — <code>NodeListUpdate</code> to all sessions every 120s. Memory and connection count are not the bottleneck.</td></tr>
<tr><td>Mobile</td><td>~25–50</td><td>Battery (radio wake-ups per heartbeat cycle) and OS background restrictions (iOS/Android will kill background sockets).</td></tr>
</table>

<h3>Mobile priority stack</h3>
<p>When approaching the mobile ceiling, keep-alive sessions are prioritized:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>DMs last 30 min</strong> — active conversations take highest priority</li>
<li><strong>Follows</strong> — people you follow</li>
<li><strong>Audience</strong> — people following you</li>
<li><strong>File peers</strong> — upstream/downstream blob hosts</li>
</ol>
<p>Lower-priority sessions are closed first to make room.</p>
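<p>The priority stack is a plain ordering over session kinds. A minimal sketch, assuming the four tiers above (the enum and <code>to_close</code> helper are illustrative names, not the real API):</p>

```rust
/// Lower discriminant = higher priority; derived Ord matches the stack order.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum KeepAlivePriority {
    RecentDm = 0, // DM in the last 30 min
    Follow = 1,
    Audience = 2,
    FilePeer = 3, // upstream/downstream blob hosts
}

/// Given sessions over the ceiling, return the IDs to close:
/// lowest-priority sessions first, keeping at most `ceiling`.
fn to_close(mut sessions: Vec<(String, KeepAlivePriority)>, ceiling: usize) -> Vec<String> {
    sessions.sort_by_key(|(_, p)| *p); // stable sort: best priorities first
    sessions
        .split_off(ceiling.min(sessions.len()))
        .into_iter()
        .map(|(id, _)| id)
        .collect()
}

fn main() {
    let sessions = vec![
        ("file".to_string(), KeepAlivePriority::FilePeer),
        ("dm".to_string(), KeepAlivePriority::RecentDm),
        ("follow".to_string(), KeepAlivePriority::Follow),
    ];
    // Ceiling of 2 keeps the DM and the follow; the file peer is closed.
    assert_eq!(to_close(sessions, 2), vec!["file".to_string()]);
}
```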

<h3>Hysteresis</h3>
<p>Don't open a keep-alive session for a contact who just barely fell outside N3. Wait for <strong>persistent unreachability</strong> — the contact must be absent from N1/N2/N3 for multiple consecutive connectivity checks (e.g., 3 checks = 3 minutes) before opening a keep-alive. This prevents churn from nodes that transiently appear and disappear at the N3 boundary.</p>
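<p>The hysteresis is a consecutive-miss counter that resets on any sighting. A minimal sketch, assuming the 3-check threshold suggested above (names are hypothetical):</p>

```rust
/// Tracks consecutive 60s connectivity-check misses for one contact.
struct Hysteresis {
    misses: u32,
    threshold: u32,
}

impl Hysteresis {
    fn new(threshold: u32) -> Self {
        Self { misses: 0, threshold }
    }

    /// Feed one check result; returns true when a keep-alive should open.
    fn check(&mut self, reachable_in_n3: bool) -> bool {
        if reachable_in_n3 {
            self.misses = 0; // any sighting resets the counter
            return false;
        }
        self.misses += 1;
        self.misses >= self.threshold
    }
}

fn main() {
    let mut h = Hysteresis::new(3);
    assert!(!h.check(false)); // miss 1
    assert!(!h.check(false)); // miss 2
    assert!(h.check(false));  // miss 3 → open keep-alive
    assert!(!h.check(true));  // sighting resets
    assert!(!h.check(false)); // back to miss 1
}
```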

<h3>Reject + redirect</h3>
<p>When a node is at its keep-alive session capacity (50% of total sessions), it refuses new keep-alive requests with a redirect — offering a random N2 node that also has &lt;N4 access to the target. Same pattern as mesh <code>RefuseRedirect</code> but for the keep-alive pool. The requester tries the suggested peer instead.</p>

<h3>Cross-layer benefit</h3>
<p>Keep-alive sessions from the social and file layers feed N2/N3 entries back into the mesh layer. A social keep-alive to a friend's preferred peer might provide N2 entries that help the mesh growth loop. Similarly, a file keep-alive to an upstream host might provide access to nodes the mesh has never seen. The three layers compound each other's reach.</p>
</section>
|
|
|
|
<!-- 17. Content Propagation -->
<section id="content">
<h2>17. Content Propagation</h2>

<h3>Intent</h3>
<p><strong>"Attention creates propagation"</strong>: when you view something, you cache it. The cache is optionally offered for serving. Hot content spreads naturally through demand. Cold content decays unless intentionally hosted.</p>
<p>The CDN vision: every file by author X carries an author manifest with the author's N+10 and recent post list. If you hold any file by author X, you passively know X's recent posts and can find X through their N+10.</p>

<h3>Status: <span class="badge badge-partial">Partial</span></h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><code>BlobRequest</code>/<code>BlobResponse</code> (<code>0x90</code>/<code>0x91</code>) for peer-to-peer blob fetch</li>
<li>AuthorManifest (ed25519-signed, 25+25 post neighborhood) travels with blob responses</li>
<li>CDN hosting tree (1 upstream + 100 downstream per blob)</li>
<li>ManifestPush propagates updates down the tree</li>
<li>BlobDeleteNotice for tree healing on eviction</li>
<li>Blob eviction with social-aware priority scoring</li>
</ul>

<h3>Passive discovery via neighborhood diffs</h3>
<p>Passive file-chain propagation is enabled through BlobHeader neighborhood diffs. Every blob header carries the author's 25+25 post neighborhood (25 previous + 25 following). When a host receives a <code>BlobHeaderDiff</code> (<code>0xD0</code>), it learns about the author's newer posts without explicit subscription. Hosts of old content are naturally pulled toward the same author's new content — attention creates propagation.</p>

<h3>Remaining gaps</h3>
<table>
<tr><th>Gap</th><th>Impact</th></tr>
<tr><td>N+10 not yet in file headers</td><td>Blob headers should include author N+10, upstream N+10, and downstream N+10s. Currently only AuthorManifest travels with blobs.</td></tr>
<tr><td>No "fetch from any peer who has it"</td><td>Blobs are fetched from specific peers. No content-addressed routing ("who has blob X?").</td></tr>
</table>
</section>
|
|
|
|
<!-- 18. Files & Storage -->
<section id="files">
<h2>18. Files & Storage</h2>

<h3>Blob storage <span class="badge badge-complete">Complete</span></h3>
<table>
<tr><th>Property</th><th>Value</th></tr>
<tr><td>CID format</td><td>BLAKE3 hash of blob data (32 bytes, hex-encoded)</td></tr>
<tr><td>Filesystem path</td><td><code>{data_dir}/blobs/{hex[0..2]}/{hex}</code> (256 shards)</td></tr>
<tr><td>Metadata table</td><td><code>blobs</code> (cid, post_id, author, size_bytes, created_at, last_accessed_at, pinned)</td></tr>
<tr><td>Max blob size</td><td>10 MB</td></tr>
<tr><td>Max attachments per post</td><td>4</td></tr>
</table>
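<p>The shard layout takes the first two hex characters of the CID as a 256-way subdirectory. A minimal sketch of the path construction (the <code>blob_path</code> helper is illustrative, not the real storage API; it assumes a CID of at least two hex characters):</p>

```rust
use std::path::PathBuf;

/// Builds {data_dir}/blobs/{hex[0..2]}/{hex} for a hex-encoded BLAKE3 CID.
/// Panics on a CID shorter than 2 chars — real CIDs are 64 hex chars.
fn blob_path(data_dir: &str, cid_hex: &str) -> PathBuf {
    [data_dir, "blobs", &cid_hex[0..2], cid_hex].iter().collect()
}

fn main() {
    assert_eq!(
        blob_path("/data", "abcd1234"),
        PathBuf::from("/data/blobs/ab/abcd1234")
    );
}
```

Sharding on the first byte keeps any one directory to roughly 1/256 of the blob count, which matters on filesystems that degrade with very large directories.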

<h3>Blob content immutability</h3>
<p>Blob data is BLAKE3-addressed — the CID <em>is</em> the hash of the content. This means blob content is <strong>immutable by definition</strong>. Any mutable metadata (neighborhood, host lists, signatures) MUST be stored separately in a <strong>BlobHeader</strong>. Inline mutable headers are architecturally incompatible with content addressing.</p>

<h3>BlobHeader <span class="badge badge-planned">Planned</span></h3>
<p>Formal mutable structure replacing/extending CdnManifest. Stored and transmitted separately from blob data.</p>
<pre><code>BlobHeader {
  cid,                  // BLAKE3 hash of blob content
  author_nplus10,       // Author's N+10 (NodeId + 10 preferred peers)
  author_recent_posts,  // 25 previous + 25 following PostIds (neighborhood)
  upstream_nplus10,     // Upstream file source's N+10 (if not author)
  downstream_hosts,     // Up to min(100, floor(170MB / blob_size)) downstream hosts
  author_signature,     // ed25519 signature over author fields
  host_signature,       // ed25519 signature by current host
  updated_at,           // Timestamp of last header update
}</code></pre>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Post neighborhood</strong>: 25 previous + 25 following PostIds. Forward slots are empty at publish time and populate via <code>BlobHeaderDiff</code> propagation as the author continues posting. Empty forward slots are not an error condition.</li>
<li><strong>Downstream host count</strong>: <code>min(100, floor(170MB / blob_size_bytes))</code> — smaller blobs allow more downstream hosts, larger blobs reduce the count to cap per-host storage overhead.</li>
<li><strong>BlobHeaderRequest</strong>: Lightweight header-only fetch — retrieve just the header without retransferring blob data. Useful for neighborhood updates and host discovery.</li>
<li><strong>Self Last Encounter</strong>: Stored per-author, becomes the newer of what's stored and "file last update." Determines when to trigger pull sync.</li>
</ul>
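<p>The downstream cap is a one-line formula. A sketch, assuming "170MB" means 170 × 1024 × 1024 bytes (the document doesn't specify the unit; <code>max_downstream_hosts</code> is a hypothetical helper):</p>

```rust
/// min(100, floor(170 MB / blob_size)): the per-blob downstream host cap.
/// Guards against a zero-size blob to avoid division by zero.
fn max_downstream_hosts(blob_size_bytes: u64) -> u64 {
    const BUDGET_BYTES: u64 = 170 * 1024 * 1024; // assumed MiB-based "170MB"
    100.min(BUDGET_BYTES / blob_size_bytes.max(1))
}

fn main() {
    // A 1 MB blob hits the 100-host ceiling; a max-size 10 MB blob gets 17.
    assert_eq!(max_downstream_hosts(1024 * 1024), 100);
    assert_eq!(max_downstream_hosts(10 * 1024 * 1024), 17);
}
```

The effect is that total downstream-tracking overhead per blob stays bounded (~170 MB of replicated content referenced per tree level) regardless of blob size.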

<h3>Blob transfer flow (<code>0x90</code>/<code>0x91</code>)</h3>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Requester sends <code>BlobRequest { cid, requester_addresses }</code></li>
<li>Host checks local BlobStore:
<ul style="margin-top: 0.3rem;">
<li><strong>Has blob</strong>: Return base64-encoded data + CDN manifest + file header (N+10s, recent posts). Try to register requester as downstream (max 100). If full, return existing downstream as redirect candidates.</li>
<li><strong>No blob</strong>: Return <code>found: false</code></li>
</ul>
</li>
<li>Requester verifies CID, stores blob locally, records upstream in <code>blob_upstream</code> table. Updates Self Last Encounter for the author based on file header.</li>
</ol>

<h3>CDN hosting tree <span class="badge badge-complete">Complete</span></h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>AuthorManifest</strong>: ed25519-signed by post author, contains post neighborhood (25 previous + 25 following posts — see BlobHeader above), author N+10, author addresses</li>
<li><strong>CdnManifest</strong>: AuthorManifest + hosting metadata (host NodeId/addresses, source, downstream count)</li>
<li><strong>Tree structure</strong>: Each blob has 1 upstream source + up to 100 downstream hosts</li>
<li><strong>ManifestPush</strong> (<code>0x94</code>): Author/admin pushes updated manifests downstream, which relay to their downstream</li>
<li><strong>ManifestRefreshRequest/Response</strong> (<code>0x92</code>/<code>0x93</code>): Check if manifest has been updated since last fetch</li>
<li><strong>BlobDeleteNotice</strong> (<code>0x95</code>): Notify tree when blob is deleted; includes upstream info for tree healing</li>
</ul>

<h3>Blob eviction <span class="badge badge-complete">Complete</span></h3>
<pre><code>priority = pin_boost + (relationship * heart_recency * freshness / (peer_copies + 1))</code></pre>
<table>
<tr><th>Factor</th><th>Calculation</th></tr>
<tr><td><code>pin_boost</code></td><td>1000.0 if pinned, else 0.0. Own blobs auto-pinned.</td></tr>
<tr><td><code>relationship</code></td><td>5.0 (us), 3.0 (mutual follow+audience), 2.0 (follow), 1.0 (audience), 0.1 (stranger)</td></tr>
<tr><td><code>heart_recency</code></td><td>Linear decay over 30 days: <code>max(0, 1 - age/30d)</code></td></tr>
<tr><td><code>freshness</code></td><td><code>1 / (1 + post_age_days)</code></td></tr>
<tr><td><code>peer_copies</code></td><td>Known replica count (from <code>post_replicas</code>, only if < 1 hour old)</td></tr>
</table>
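<p>The formula and factor table translate directly into a pure scoring function. A sketch (function and parameter names are illustrative; the factor values are from the table above):</p>

```rust
/// priority = pin_boost + relationship * heart_recency * freshness / (peer_copies + 1)
fn blob_priority(
    pinned: bool,
    relationship: f64,   // 5.0 us, 3.0 mutual, 2.0 follow, 1.0 audience, 0.1 stranger
    heart_age_days: f64, // days since the last heart on the post
    post_age_days: f64,
    peer_copies: u64,    // known replica count (only trusted if < 1h old)
) -> f64 {
    let pin_boost = if pinned { 1000.0 } else { 0.0 };
    let heart_recency = (1.0 - heart_age_days / 30.0).max(0.0); // linear 30-day decay
    let freshness = 1.0 / (1.0 + post_age_days);
    pin_boost + relationship * heart_recency * freshness / (peer_copies as f64 + 1.0)
}

fn main() {
    // A fresh own blob scores its full relationship weight (5.0 * 1 * 1 / 1).
    assert_eq!(blob_priority(false, 5.0, 0.0, 0.0, 0), 5.0);
    // A pinned blob never drops below the 1000.0 pin floor, however stale.
    assert_eq!(blob_priority(true, 0.1, 100.0, 1000.0, 0), 1000.0);
}
```

Dividing by <code>peer_copies + 1</code> makes well-replicated blobs the cheapest to evict, while the 1000.0 pin boost dominates every unpinned score, so pinned blobs are only evicted against each other.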

<h3>Pin modes <span class="badge badge-planned">Planned</span></h3>
<p>The CDN is delivery infrastructure, not storage. Authors own durability. Pinning extends content in the local delivery pool — it is not a network obligation.</p>
<table>
<tr><th>Concept</th><th>Status</th></tr>
<tr><td>Anchor pin vs Fork pin</td><td>Not started. Anchor pin = host the original (author retains control). Fork pin = independent copy (you become key owner).</td></tr>
<tr><td>Personal vault</td><td>Not started. Private durability for saved/pinned items.</td></tr>
</table>
</section>
|
|
|
|
<!-- 19. Sync Protocol -->
<section id="sync">
<h2>19. Sync Protocol</h2>

<h3>Wire format</h3>
<pre><code>[1 byte: MessageType] [4 bytes: length (big-endian)] [length bytes: JSON payload]</code></pre>
<p>Max payload: 16 MB. ALPN: <code>itsgoin/3</code>.</p>
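<p>The framing is small enough to sketch end to end. A minimal encoder/decoder under the stated format, treating the JSON payload as opaque bytes (helper names are illustrative, not the real codec):</p>

```rust
const MAX_PAYLOAD: usize = 16 * 1024 * 1024; // 16 MB cap from the spec

/// [type u8][len u32 big-endian][payload]; None if over the cap.
fn encode_frame(msg_type: u8, payload: &[u8]) -> Option<Vec<u8>> {
    if payload.len() > MAX_PAYLOAD {
        return None;
    }
    let mut frame = Vec::with_capacity(5 + payload.len());
    frame.push(msg_type);
    frame.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    frame.extend_from_slice(payload);
    Some(frame)
}

/// Returns (msg_type, payload) for a complete, well-formed frame.
fn decode_frame(frame: &[u8]) -> Option<(u8, &[u8])> {
    let header = frame.get(0..5)?;
    let len = u32::from_be_bytes(header[1..5].try_into().ok()?) as usize;
    let rest = &frame[5..];
    if len > MAX_PAYLOAD || rest.len() != len {
        return None;
    }
    Some((header[0], rest))
}

fn main() {
    let frame = encode_frame(0x60, b"{}").unwrap(); // a tiny WormQuery shell
    assert_eq!(decode_frame(&frame), Some((0x60, &b"{}"[..])));
}
```

A real reader would read the 5-byte header first, then read exactly <code>len</code> more bytes from the stream; rejecting oversized lengths before allocating is what makes the 16 MB cap a safety limit.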

<h3>Pull sync: social + file layers, not mesh</h3>
<div class="note">
<strong>v0.2.0 change</strong>: Pull sync pulls posts from <strong>social layer peers</strong> (follows, audience) and <strong>upstream file peers</strong>, NOT from mesh peers. Mesh connections exist for routing diversity, not content. This separates infrastructure from content flow.
</div>
<p><strong>Self Last Encounter</strong>: For each peer we sync with, we track the timestamp of our last successful sync. When Self Last Encounter ages beyond <strong>3 hours</strong>, a pull sync is triggered. Self Last Encounter is updated to the newer of: (a) what's currently stored, or (b) the "file last update" timestamp from file headers received during blob transfers. Since file headers include the author's recent post list, downloading a blob from any peer hosting that author's content can update Self Last Encounter for the author.</p>

<h3>Pull sync filtering</h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>PullSyncRequest</strong>: Includes requester's follow list + post IDs they already have</li>
<li><strong>PullSyncResponse</strong>: Sender filters posts through <code>should_send_post()</code>:
<ol style="margin-top: 0.3rem;">
<li>Author is requester → always send (own posts relayed back)</li>
<li>Public + author in requester's follows → send</li>
<li>Encrypted + requester in wrapped key recipients → send</li>
<li>Otherwise → skip</li>
</ol>
</li>
</ul>
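<p>The four filter rules are a pure decision over the post's author, visibility, and the requester's follow list. A minimal sketch of that logic (types and field names are illustrative, not the real <code>should_send_post()</code> signature):</p>

```rust
use std::collections::HashSet;

enum Visibility {
    Public,
    Encrypted { recipients: Vec<String> }, // wrapped-key recipient NodeIds
}

struct Post {
    author: String,
    visibility: Visibility,
}

/// Rules 1–4 from the list above, in order.
fn should_send_post(post: &Post, requester: &str, follows: &HashSet<String>) -> bool {
    if post.author == requester {
        return true; // own posts are always relayed back
    }
    match &post.visibility {
        Visibility::Public => follows.contains(&post.author),
        Visibility::Encrypted { recipients } => recipients.iter().any(|r| r == requester),
    }
}

fn main() {
    let follows: HashSet<String> = ["alice".to_string()].into_iter().collect();
    let own = Post { author: "me".into(), visibility: Visibility::Public };
    assert!(should_send_post(&own, "me", &follows)); // rule 1
    let stranger = Post { author: "bob".into(), visibility: Visibility::Public };
    assert!(!should_send_post(&stranger, "me", &follows)); // rule 4
}
```

Note the responder filters on the <em>requester's</em> follow list (sent in the request), so a peer never ships posts the requester wouldn't show in its feed anyway.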
|
|
|
|
<h3>Message types (41 total)</h3>
|
|
<table>
|
|
<tr><th>Hex</th><th>Name</th><th>Stream</th><th>Purpose</th></tr>
|
|
<tr><td><code>0x01</code></td><td>NodeListUpdate</td><td>Uni</td><td>Incremental N1/N2 diff broadcast</td></tr>
|
|
<tr><td><code>0x02</code></td><td>InitialExchange</td><td>Bi</td><td>Full state exchange on connect</td></tr>
|
|
<tr><td><code>0x03</code></td><td>AddressRequest</td><td>Bi</td><td>Resolve NodeId → address via reporter</td></tr>
|
|
<tr><td><code>0x04</code></td><td>AddressResponse</td><td>Bi</td><td>Address resolution reply</td></tr>
|
|
<tr><td><code>0x05</code></td><td>RefuseRedirect</td><td>Uni</td><td>Refuse mesh + suggest alternative</td></tr>
|
|
<tr><td><code>0x40</code></td><td>PullSyncRequest</td><td>Bi</td><td>Request posts filtered by follows</td></tr>
|
|
<tr><td><code>0x41</code></td><td>PullSyncResponse</td><td>Bi</td><td>Respond with filtered posts</td></tr>
|
|
<tr><td><code>0x42</code></td><td>PostNotification</td><td>Uni</td><td>Lightweight "new post" push to social contacts</td></tr>
|
|
<tr><td><code>0x43</code></td><td>PostPush</td><td>Uni</td><td>Direct encrypted post delivery to recipients</td></tr>
|
|
<tr><td><code>0x44</code></td><td>AudienceRequest</td><td>Bi</td><td>Request audience member list</td></tr>
|
|
<tr><td><code>0x45</code></td><td>AudienceResponse</td><td>Bi</td><td>Audience list reply</td></tr>
|
|
<tr><td><code>0x50</code></td><td>ProfileUpdate</td><td>Uni</td><td>Push profile changes</td></tr>
|
|
<tr><td><code>0x51</code></td><td>DeleteRecord</td><td>Uni</td><td>Signed post deletion</td></tr>
|
|
<tr><td><code>0x52</code></td><td>VisibilityUpdate</td><td>Uni</td><td>Re-wrapped visibility after revocation</td></tr>
|
|
<tr><td><code>0x60</code></td><td>WormQuery</td><td>Bi</td><td>Burst/nova search for nodes, posts, or blobs beyond N3</td></tr>
|
|
<tr><td><code>0x61</code></td><td>WormResponse</td><td>Bi</td><td>Worm search reply (node + post_holder + blob_holder)</td></tr>
|
|
<tr><td><code>0x70</code></td><td>SocialAddressUpdate</td><td>Uni</td><td>Social contact address changed</td></tr>
|
|
<tr><td><code>0x71</code></td><td>SocialDisconnectNotice</td><td>Uni</td><td>Social contact disconnected</td></tr>
|
|
<tr><td><code>0x72</code></td><td>SocialCheckin</td><td>Bi</td><td>Keepalive + address + N+10 update</td></tr>
|
|
<tr><td><code>0x90</code></td><td>BlobRequest</td><td>Bi</td><td>Fetch blob by CID</td></tr>
|
|
<tr><td><code>0x91</code></td><td>BlobResponse</td><td>Bi</td><td>Blob data + CDN manifest + file header</td></tr>
|
|
<tr><td><code>0x92</code></td><td>ManifestRefreshRequest</td><td>Bi</td><td>Check manifest freshness</td></tr>
|
|
<tr><td><code>0x93</code></td><td>ManifestRefreshResponse</td><td>Bi</td><td>Updated manifest reply</td></tr>
|
|
<tr><td><code>0x94</code></td><td>ManifestPush</td><td>Uni</td><td>Push updated manifests downstream</td></tr>
|
|
<tr><td><code>0x95</code></td><td>BlobDeleteNotice</td><td>Uni</td><td>CDN tree healing on eviction</td></tr>
|
|
<tr><td><code>0xA0</code></td><td>GroupKeyDistribute</td><td>Uni</td><td>Distribute circle group key to member</td></tr>
|
|
<tr><td><code>0xA1</code></td><td>GroupKeyRequest</td><td>Bi</td><td>Request group key for a circle</td></tr>
|
|
<tr><td><code>0xA2</code></td><td>GroupKeyResponse</td><td>Bi</td><td>Group key reply</td></tr>
|
|
<tr><td><code>0xB0</code></td><td>RelayIntroduce</td><td>Bi</td><td>Request relay introduction</td></tr>
|
|
<tr><td><code>0xB1</code></td><td>RelayIntroduceResult</td><td>Bi</td><td>Introduction result with addresses</td></tr>
|
|
<tr><td><code>0xB2</code></td><td>SessionRelay</td><td>Bi</td><td>Splice bi-streams (own-device default)</td></tr>
|
|
<tr><td><code>0xB3</code></td><td>MeshPrefer</td><td>Bi</td><td>Preferred peer negotiation</td></tr>
|
|
<tr><td><code>0xB4</code></td><td>CircleProfileUpdate</td><td>Uni</td><td>Encrypted circle profile variant</td></tr>
|
|
<tr><td><code>0xC0</code></td><td>AnchorRegister</td><td>Uni</td><td>Register with anchor (bootstrap/recovery only)</td></tr>
|
|
<tr><td><code>0xC1</code></td><td>AnchorReferralRequest</td><td>Bi</td><td>Request peer referrals from anchor</td></tr>
|
|
<tr><td><code>0xC2</code></td><td>AnchorReferralResponse</td><td>Bi</td><td>Referral list reply</td></tr>
|
|
<tr><td><code>0xC3</code></td><td>AnchorProbeRequest</td><td>Bi</td><td>A → B → C: test cold reachability of address</td></tr>
|
|
<tr><td><code>0xC4</code></td><td>AnchorProbeResult</td><td>Bi</td><td>C → A (success) or C → B → A (failure)</td></tr>
|
|
<tr><td><code>0xD0</code></td><td>BlobHeaderDiff</td><td>Uni</td><td>Incremental engagement update (reactions, comments, policy, thread splits)</td></tr>
|
|
<tr><td><code>0xD1</code></td><td>BlobHeaderRequest</td><td>Bi</td><td>Request full engagement header for a post</td></tr>
|
|
<tr><td><code>0xD2</code></td><td>BlobHeaderResponse</td><td>Bi</td><td>Full engagement header response (JSON)</td></tr>
|
|
<tr><td><code>0xD3</code></td><td>PostDownstreamRegister</td><td>Uni</td><td>Register as downstream for a post (CDN tree entry)</td></tr>
|
|
<tr><td><code>0xD4</code></td><td>PostFetchRequest</td><td>Bi</td><td>Request a single post by ID from a known holder</td></tr>
|
|
<tr><td><code>0xD5</code></td><td>PostFetchResponse</td><td>Bi</td><td>Single post response (SyncPost or not-found)</td></tr>
|
|
<tr><td><code>0xD6</code></td><td>TcpPunchRequest</td><td>Bi</td><td>Ask holder to punch TCP toward browser IP</td></tr>
|
|
<tr><td><code>0xD7</code></td><td>TcpPunchResult</td><td>Bi</td><td>Punch result + HTTP address for redirect</td></tr>
|
|
<tr><td><code>0xE0</code></td><td>MeshKeepalive</td><td>Uni</td><td>30s connection heartbeat</td></tr>
|
|
</table>
|
|
|
|
<h3>Engagement propagation</h3>
<p>Reactions, comments, and policy changes propagate via <code>BlobHeaderDiff</code> (0xD0) through the CDN tree:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Push (real-time)</strong>: On react/comment, the diff is sent to both <strong>downstream</strong> peers (CDN tree children) and the <strong>upstream</strong> peer (the peer we received the post from). Each intermediate node re-propagates in both directions, excluding the sender. This flows the diff up to the author and down to all holders.</li>
<li><strong>Auto downstream registration</strong>: Nodes that receive a post via pull sync or push notification automatically send <code>PostDownstreamRegister</code> (0xD3) to the sender, ensuring bidirectional diff flow.</li>
<li><strong>Pull (safety net)</strong>: Every 5 minutes, the pull cycle sends <code>BlobHeaderRequest</code> (0xD1) with the local header timestamp. Peers respond with the full header only if theirs is newer. The merge is additive — <code>store_reaction</code> upserts, <code>store_comment</code> inserts with ON CONFLICT DO NOTHING.</li>
<li><strong>Planned</strong>: Pull engagement from both upstream and downstream peers to catch missed diffs from either direction.</li>
</ul>
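<p>The push rule above reduces to a small target-selection function. A minimal sketch, assuming hypothetical <code>upstream</code>/<code>downstream</code> peer identifiers as stand-ins for real NodeIds; this is not the actual implementation:</p>

```rust
// Decide where to forward a BlobHeaderDiff: both directions, minus the sender.
// Peer identifiers here are illustrative stand-ins for real NodeIds.
fn propagation_targets(
    upstream: Option<&str>,
    downstream: &[&str],
    sender: Option<&str>,
) -> Vec<String> {
    let mut targets = Vec::new();
    // Forward up toward the author, unless the diff arrived from there.
    if let Some(up) = upstream {
        if Some(up) != sender {
            targets.push(up.to_string());
        }
    }
    // Forward down to CDN tree children, excluding the sender.
    for peer in downstream {
        if Some(*peer) != sender {
            targets.push(peer.to_string());
        }
    }
    targets
}

fn main() {
    // A diff received from upstream only continues downward.
    assert_eq!(
        propagation_targets(Some("author"), &["c1", "c2"], Some("author")),
        vec!["c1", "c2"]
    );
    // A locally-originated diff (no sender) flows both directions.
    assert_eq!(
        propagation_targets(Some("author"), &["c1"], None),
        vec!["author", "c1"]
    );
    println!("ok");
}
```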
</section>

<!-- 20. Encryption -->
<section id="encryption">
<h2>20. Encryption</h2>

<h3>Envelope encryption (1-layer) <span class="badge badge-complete">Complete</span></h3>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Generate random 32-byte CEK (Content Encryption Key)</li>
<li>Encrypt content: <code>ChaCha20-Poly1305(plaintext, CEK, random_nonce)</code></li>
<li>Store as: <code>base64(nonce[12] || ciphertext || tag[16])</code></li>
<li>For each recipient (including self):
<ul style="margin-top: 0.3rem;">
<li>X25519 DH: <code>our_ed25519_private (as X25519) * their_ed25519_public (as montgomery)</code></li>
<li>Derive wrapping key: <code>BLAKE3_derive_key("distsoc/cek-wrap/v1", shared_secret)</code></li>
<li>Wrap CEK: <code>ChaCha20-Poly1305(CEK, wrapping_key, random_nonce)</code> → 60 bytes per recipient</li>
</ul>
</li>
</ol>
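<p>The sizes in this scheme fall directly out of the primitive lengths: each wrapped CEK is nonce (12) plus encrypted CEK (32) plus Poly1305 tag (16). A small sanity-check sketch; the ~500-bytes-of-JSON-per-WrappedKey estimate is the assumption from Appendix B, not a measurement:</p>

```rust
// Primitive lengths for the ChaCha20-Poly1305 envelope scheme.
const NONCE_LEN: usize = 12;
const CEK_LEN: usize = 32;
const TAG_LEN: usize = 16;

/// Raw size of one wrapped-CEK entry: 12 + 32 + 16 = 60 bytes.
fn wrapped_key_len() -> usize {
    NONCE_LEN + CEK_LEN + TAG_LEN
}

/// base64 length of the stored envelope `base64(nonce || ciphertext || tag)`.
fn envelope_b64_len(plaintext_len: usize) -> usize {
    let raw = NONCE_LEN + plaintext_len + TAG_LEN;
    (raw + 2) / 3 * 4 // base64: every 3 bytes become 4 chars (padded)
}

fn main() {
    assert_eq!(wrapped_key_len(), 60);
    // A 256 KB visibility cap at ~500 bytes of JSON per WrappedKey entry
    // allows on the order of 500 recipients.
    assert_eq!(256 * 1024 / 500, 524);
    println!("1 KB post stores as {} base64 chars", envelope_b64_len(1024));
}
```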

<h3>Visibility variants</h3>
<table>
<tr><th>Variant</th><th>Overhead</th><th>Audience limit</th></tr>
<tr><td><code>Public</code></td><td>None</td><td>Unlimited</td></tr>
<tr><td><code>Encrypted { recipients }</code></td><td>~60 bytes per recipient</td><td>~500 (256KB cap)</td></tr>
<tr><td><code>GroupEncrypted { group_id, epoch, wrapped_cek }</code></td><td>~100 bytes total</td><td>Unlimited (one CEK wrap for the group)</td></tr>
</table>
<h3>PostId integrity</h3>
<p><code>PostId = BLAKE3(Post)</code> covers the ciphertext, NOT the recipient list. Visibility is separate metadata. This means visibility can be updated (re-wrapped) without changing the PostId.</p>
<h3>Group keys (circles) <span class="badge badge-complete">Complete</span></h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Each circle gets its own ed25519 keypair</li>
<li><code>group_id = BLAKE3(initial_public_key)</code> — permanent identifier</li>
<li>Group seed wrapped per-member via X25519 DH (KDF domain: <code>"distsoc/group-key-wrap/v1"</code>)</li>
<li><strong>Epoch rotation</strong>: On member removal, generate new keypair, increment epoch, re-wrap for remaining members</li>
<li>Wire: <code>GroupKeyDistribute</code> (<code>0xA0</code>), <code>GroupKeyRequest/Response</code> (<code>0xA1</code>/<code>0xA2</code>)</li>
</ul>
<h3>Three-tier access revocation</h3>
<p>Three levels of revocation, chosen based on threat level:</p>

<div class="card">
<h3>Tier 1: Remove Going Forward (default)</h3>
<p>Revoked member is excluded from future posts automatically. They retain access to anything they already received. This is the default behavior when removing a circle member — no special action needed.</p>
<p><strong>When to use</strong>: Normal membership changes. Someone leaves a group, you unfollow someone. The common case.</p>
<p><strong>Cost</strong>: Zero. Just stop including them in future recipient lists.</p>
</div>
<div class="card">
<h3>Tier 2: Rewrap Old Posts (cleanup)</h3>
<p>Same CEK, re-wrap for remaining recipients only. The revoked member can no longer unwrap the CEK even if they later obtain the ciphertext. Propagate updated visibility headers via <code>VisibilityUpdate</code> (<code>0x52</code>).</p>
<p><strong>When to use</strong>: Revoked member never synced the post (common with pull-based sync — encrypted posts only sent to recipients). You want to clean up access lists.</p>
<p><strong>Cost</strong>: One WrappedKey operation per remaining recipient, no content re-encryption.</p>
</div>
<div class="card">
<h3>Tier 3: Delete &amp; Re-encrypt (nuclear)</h3>
<p>Generate new CEK, re-encrypt content, wrap new CEK for remaining recipients, push delete for old post ID, repost with new content but same logical identity. Well-behaved nodes honor the delete.</p>
<p><strong>When to use</strong>: Revoked member already has the ciphertext and could unwrap the old CEK. Only for content that poses an actual danger/risk if the revoked member retains access. <strong>Recommended against</strong> in most cases.</p>
<p><strong>Cost</strong>: Full re-encryption + delete propagation + new post propagation. Heavy.</p>
</div>

<div class="note">
<strong>Trust model</strong>: The app honors delete requests from content authors by default. A modified client could ignore deletes, but this is true of any decentralized system. For legal purposes: the author has proof they issued the delete and revoked access.
</div>
<h3>Private profiles (Phase D-4) <span class="badge badge-complete">Complete</span></h3>
<p>Different profile versions per circle, encrypted with the circle/group key. A peer sees the profile version for the most-privileged circle they belong to. <code>CircleProfileUpdate</code> (<code>0xB4</code>) wire message. Public profiles can be hidden (<code>public_visible=false</code> strips display_name/bio).</p>
</section>
<!-- 21. Delete Propagation -->
<section id="deletes">
<h2>21. Delete Propagation</h2>
<h3>Status: <span class="badge badge-complete">Complete</span></h3>

<h3>Delete records</h3>
<p><code>DeleteRecord { post_id, author, timestamp_ms, signature }</code> — ed25519-signed by the author. Stored in the <code>deleted_posts</code> table (INSERT OR IGNORE) and applied by deleting from the <code>posts</code> table where both <code>post_id</code> and <code>author</code> match.</p>
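<p>The author-match guard is the load-bearing detail: a delete record removes a post only when both fields agree. A minimal model with illustrative field names (signature verification, which happens before a record is stored, is omitted here):</p>

```rust
use std::collections::HashMap;

// Simplified stand-ins for the stored rows; not the actual schema.
struct Post {
    author: &'static str,
}
struct DeleteRecord {
    post_id: &'static str,
    author: &'static str,
}

/// Apply a delete record: remove the post only if the record's author
/// matches the stored post's author (mirrors the WHERE clause above).
fn apply_delete(posts: &mut HashMap<&'static str, Post>, rec: &DeleteRecord) -> bool {
    match posts.get(rec.post_id) {
        Some(p) if p.author == rec.author => {
            posts.remove(rec.post_id);
            true
        }
        _ => false,
    }
}

fn main() {
    let mut posts = HashMap::new();
    posts.insert("p1", Post { author: "alice" });
    // A record naming the wrong author is a no-op.
    assert!(!apply_delete(&mut posts, &DeleteRecord { post_id: "p1", author: "mallory" }));
    // The author's own record deletes the post.
    assert!(apply_delete(&mut posts, &DeleteRecord { post_id: "p1", author: "alice" }));
    assert!(posts.is_empty());
    println!("ok");
}
```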

<h3>Propagation paths</h3>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>InitialExchange</strong>: All delete records exchanged on connect</li>
<li><strong>DeleteRecord message</strong> (<code>0x51</code>): Pushed via uni-stream to connected peers on creation</li>
<li><strong>PullSync</strong>: Included in responses for eventual consistency</li>
</ol>
<h3>CDN cascade on delete</h3>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Send <code>BlobDeleteNotice</code> to all downstream hosts (with our upstream info for tree healing)</li>
<li>Send <code>BlobDeleteNotice</code> to upstream</li>
<li>Clean up blob metadata, manifests, downstream/upstream records</li>
<li>Delete blob from filesystem</li>
</ol>
</section>
<!-- 22. Social Graph Privacy -->
<section id="privacy">
<h2>22. Social Graph Privacy</h2>
<h3>Status: <span class="badge badge-complete">Complete</span></h3>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Follows are never shared in gossip or profiles</li>
<li>N1 share merges mesh peers + social contacts into one list (indistinguishable)</li>
<li>No addresses ever shared in routing updates</li>
<li>N3 is never shared outward (search-only)</li>
</ul>
<div class="note">
<strong>Known temporary weakness</strong>: An observer who diffs your N1 share over time can infer your social contacts (they're the stable members while mesh peers rotate). This will be addressed when CDN file-swap peers are added to N1, making the stable set larger and harder to distinguish.
</div>
</section>
<!-- 23. Multi-Device Identity -->
<section id="multidevice">
<h2>23. Multi-Device Identity</h2>
<h3>Status: <span class="badge badge-planned">Planned</span></h3>

<h3>Concept</h3>
<p>Multiple devices share the <strong>same identity key</strong> (ed25519 keypair, same NodeId). All devices ARE the same node from the network's perspective. Posts from any device appear as the same author.</p>

<h3>Device identity</h3>
<p>Each device also generates a unique <strong>device identity</strong> (separate ed25519 keypair). This device-specific key is used to:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Find each other</strong>: Devices with the same shared identity can search for each other using their device identities to facilitate syncs and self-routing</li>
<li><strong>Own-device relay</strong>: Route traffic through your own devices (e.g., home computer relaying for your phone) using the device identity for authentication</li>
<li><strong>Conflict resolution</strong>: When devices post simultaneously, device identity helps order and deduplicate</li>
</ul>

<h3>Setup</h3>
<p>Export <code>identity.key</code> from one device, import on another. The device identity is generated automatically on each device. Once two devices share an identity key, they can discover each other through normal network routing (same NodeId appears at multiple addresses).</p>
</section>
<!-- 24. Phase 2 -->
<section id="phase2">
<h2>24. Phase 2: Reciprocity (Reconsidered)</h2>
<h3>Status: <span class="badge badge-planned">Reconsidered</span></h3>
<p>The original Phase 2 design centered on hosting quotas (3x rule), chunk audits, and tit-for-tat QoS. On reflection, the attention-driven delivery model makes quota enforcement unnecessary. The CDN is a delivery amplifier, not a storage system — hot content propagates through demand, cold content decays. Authors are responsible for their own content durability.</p>
<p>Tit-for-tat QoS solves the wrong problem: it optimizes for fairness in a storage-obligation model that no longer exists. What matters is that the delivery network functions efficiently, which it does through natural attention dynamics.</p>
<p>If reciprocity mechanisms are needed at scale, they should address <strong>delivery quality</strong> (bandwidth, latency, uptime) rather than storage quotas. This remains an open design area.</p>
</section>
<!-- 25. HTTP Post Delivery -->
<section id="http-delivery">
<h2>25. HTTP Post Delivery</h2>

<h3>Intent</h3>
<p>Every ItsGoin node that is publicly reachable can serve its cached public posts directly to browsers over HTTP — no extra infrastructure, no additional dependencies, no new binary. Alongside the QUIC UDP listener, the node opens a TCP listener on the same port number: UDP goes to the QUIC stack as always, while TCP goes to a minimal raw HTTP/1.1 handler baked into the binary.</p>
<p>This makes every publicly-reachable node a browser-accessible content endpoint, enabling share links that deliver content peer-to-browser without routing any post bytes through itsgoin.net.</p>

<h3>Dual listener architecture</h3>
<pre><code><port>/UDP → QUIC (existing app protocol)
<port>/TCP → HTTP/1.1 (new, read-only, single route)</code></pre>
<p>Both listeners bind on the same port. The OS routes UDP and TCP to separate sockets — no conflict, no protocol ambiguity.</p>

<h3>HTTP handler</h3>
<p>The handler is intentionally minimal — implemented with raw <code>tokio::net::TcpListener</code>, no HTTP crate, no new dependencies. Approximately 150–200 lines of Rust.</p>
<p>Single valid route: <code>GET /p/<postid_hex> HTTP/1.1</code></p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><code>postid_hex</code> must be exactly 64 lowercase hex characters (BLAKE3 hash)</li>
<li>Any other path, method, or malformed request: hard close with no response (not even a 400). Do not be helpful to malformed requests.</li>
<li>Post must be public (<code>PostVisibility::Public</code>). Encrypted posts are never served over HTTP regardless of whether the node holds the content.</li>
</ul>
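<p>The whole route grammar fits in one function. A sketch of the validation described above, not the shipping handler; any failure returns <code>None</code> and the caller closes the socket without responding:</p>

```rust
/// Extract the post id from the only valid request line, or reject.
/// Rejection means a hard close: no status line, no body.
fn parse_request_line(line: &str) -> Option<&str> {
    let rest = line.strip_prefix("GET /p/")?; // only GET, only /p/
    let hex = rest.strip_suffix(" HTTP/1.1")?;
    let valid = hex.len() == 64
        && hex.bytes().all(|b| matches!(b, b'0'..=b'9' | b'a'..=b'f'));
    valid.then_some(hex)
}

fn main() {
    let id = "a".repeat(64);
    assert!(parse_request_line(&format!("GET /p/{id} HTTP/1.1")).is_some());
    assert!(parse_request_line("GET / HTTP/1.1").is_none()); // wrong path
    assert!(parse_request_line("POST /p/abc HTTP/1.1").is_none()); // wrong method
    let upper = "A".repeat(64);
    assert!(parse_request_line(&format!("GET /p/{upper} HTTP/1.1")).is_none()); // not lowercase
    println!("ok");
}
```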

<p>Response: Minimal HTML page containing the post content with a small footer:</p>
<pre><code><footer>
This post is on the ItsGoin network — content lives on people's devices,
not servers. <a href="https://itsgoin.com">Get ItsGoin</a>
</footer></code></pre>
<p>The footer HTML is a static string constant compiled into the binary (~2KB). No template engine, no dynamic footer generation.</p>
<h3>Security constraints</h3>
<table>
<tr><th>Concern</th><th>Mitigation</th></tr>
<tr><td>Connection exhaustion</td><td>Hard cap: 20 concurrent HTTP connections. New connections over the cap are immediately closed. No queue, no wait.</td></tr>
<tr><td>Slow HTTP attacks</td><td>5-second read timeout for complete request headers. Exceeded → hard close.</td></tr>
<tr><td>Content enumeration</td><td>Identical response (hard close) for “post not found” and “post not public.” No timing oracle, no distinguishable error codes.</td></tr>
<tr><td>Malformed requests</td><td>Hard close only. No error response.</td></tr>
<tr><td>Encrypted content</td><td>Never served. Public visibility check is mandatory before any response.</td></tr>
</table>
<h3>Which nodes serve HTTP</h3>
<p>A node serves HTTP only if it is publicly TCP-reachable:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>IPv6 public address</strong> — serves directly</li>
<li><strong>IPv4 + UPnP mapping</strong> — serves if TCP is included in the UPnP mapping (see <a href="#upnp">Section 11</a> update)</li>
<li><strong>IPv4 behind NAT without UPnP</strong> — cannot serve HTTP, but can still appear as a host in share links for app-protocol delivery. The CDN tree and itsgoin.net redirect handler route around unreachable nodes automatically.</li>
</ul>
<h3>302 load shedding via CDN tree</h3>
<p>When a node is overwhelmed (at the 20-connection cap) or chooses to redirect:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Query <code>post_downstream</code> table for the requested postid</li>
<li>Filter downstream hosts to those with a known public address (IPv6 or UPnP-mapped IPv4)</li>
<li><code>302 → http://[their_address]:<port>/p/<postid></code></li>
</ol>
<p>The receiving node applies the same logic recursively if needed. This mirrors the app-layer CDN tree behavior at the HTTP layer — the same attention-driven propagation model, the same tree structure, now accessible to browsers.</p>
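<p>The redirect itself is a one-line response. A hypothetical helper, assuming the downstream host's public address and port are already known from <code>post_downstream</code>; note that IPv6 literals need brackets in URLs:</p>

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

/// Build the 302 response pointing the browser at a downstream holder.
fn redirect_response(ip: IpAddr, port: u16, postid_hex: &str) -> String {
    let host = match ip {
        IpAddr::V6(v6) => format!("[{v6}]"), // bracketed IPv6 literal
        IpAddr::V4(v4) => v4.to_string(),
    };
    format!(
        "HTTP/1.1 302 Found\r\nLocation: http://{host}:{port}/p/{postid_hex}\r\nConnection: close\r\n\r\n"
    )
}

fn main() {
    let r = redirect_response(IpAddr::V6(Ipv6Addr::LOCALHOST), 4433, "ab12");
    assert!(r.contains("Location: http://[::1]:4433/p/ab12"));
    let r4 = redirect_response(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 9)), 4433, "ab12");
    assert!(r4.contains("http://203.0.113.9:4433/p/ab12"));
    println!("ok");
}
```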

<h3>Binary size impact</h3>
<p>Zero new dependencies. Negligible compiled size delta (~10–20KB). No App Store size concerns. No install size impact for existing users.</p>
</section>
<!-- 26. Share Links -->
<section id="share-links">
<h2>26. Share Links</h2>

<h3>Intent</h3>
<p>Every public post can be shared as a URL that works for both app users and browser users:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>App installed</strong>: OS intercepts the URL via Universal Links (iOS) / App Links (Android) before the browser loads. App opens directly to the post, fetched via QUIC. Zero browser involvement.</li>
<li><strong>No app</strong>: Browser loads itsgoin.net, which searches the ItsGoin network for the post and redirects the browser to a live node serving it over HTTP. The share link becomes a product demo and install opportunity.</li>
</ul>
<h3>URL format</h3>
<pre><code>https://itsgoin.net/p/<postid_hex>/<encoded_hostlist></code></pre>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><code>postid_hex</code>: 64 hex characters (BLAKE3 post hash)</li>
<li><code>encoded_hostlist</code>: base64url-encoded binary list of up to 5 host entries (see encoding below)</li>
</ul>
<p>Example: <code>https://itsgoin.net/p/3a7f...c921/AAEC...Zg==</code></p>
<h3>Host list encoding</h3>
<p>Compact binary encoding — optimized for QR code scannability:</p>
<pre><code>Per IPv6 host: [0x06][16 bytes IP][2 bytes port] = 19 bytes
Per IPv4 host: [0x04][4 bytes IP][2 bytes port] = 7 bytes

5× IPv6: 95 bytes → ~127 chars base64url (comfortably scannable QR)</code></pre>
<p>All integers big-endian. base64url-encoded (URL-safe, no padding).</p>
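<p>The layout above can be encoded in a few lines. A sketch of the per-entry encoder plus the length arithmetic behind the "~127 chars" figure:</p>

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

/// Encode one host entry: tag byte, raw IP octets, big-endian port.
fn encode_host(ip: IpAddr, port: u16) -> Vec<u8> {
    let mut out = Vec::new();
    match ip {
        IpAddr::V4(v4) => {
            out.push(0x04);
            out.extend_from_slice(&v4.octets()); // 4 bytes
        }
        IpAddr::V6(v6) => {
            out.push(0x06);
            out.extend_from_slice(&v6.octets()); // 16 bytes
        }
    }
    out.extend_from_slice(&port.to_be_bytes()); // 2 bytes, big-endian
    out
}

fn main() {
    assert_eq!(encode_host(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 9)), 4433).len(), 7);
    assert_eq!(encode_host(IpAddr::V6(Ipv6Addr::LOCALHOST), 4433).len(), 19);
    // 5 IPv6 entries = 95 raw bytes. base64 of 95 bytes is 128 chars padded;
    // dropping the single '=' (base64url without padding) leaves 127.
    let raw = 5 * 19;
    assert_eq!((raw + 2) / 3 * 4, 128);
    println!("ok");
}
```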

<h3>Host list generation (at share time)</h3>
<p>When a user taps “Share” on a post:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>Query <code>post_downstream</code> for this postid</li>
<li>Filter to hosts with a known public address (IPv6 or UPnP-mapped IPv4)</li>
<li>Select up to 5 — prefer IPv6 public over UPnP IPv4, prefer most recently seen over stale</li>
<li>Include self if this node is publicly reachable</li>
<li>Encode and embed in URL</li>
</ol>
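<p>Step 3's preference order is a single sort key. A selection sketch with an illustrative <code>Host</code> shape (the real <code>post_downstream</code> row differs):</p>

```rust
use std::cmp::Reverse;

// Illustrative host candidate; not the actual post_downstream row.
#[derive(Debug, Clone, PartialEq)]
struct Host {
    addr: &'static str,
    ipv6: bool,
    last_seen_ms: u64,
}

/// Prefer IPv6 public over UPnP IPv4, then most recently seen; take up to `max`.
fn select_hosts(mut candidates: Vec<Host>, max: usize) -> Vec<Host> {
    candidates.sort_by_key(|h| (!h.ipv6, Reverse(h.last_seen_ms)));
    candidates.truncate(max);
    candidates
}

fn main() {
    let picked = select_hosts(
        vec![
            Host { addr: "v4-fresh", ipv6: false, last_seen_ms: 300 },
            Host { addr: "v6-stale", ipv6: true, last_seen_ms: 100 },
            Host { addr: "v6-fresh", ipv6: true, last_seen_ms: 200 },
        ],
        2,
    );
    let addrs: Vec<_> = picked.iter().map(|h| h.addr).collect();
    // IPv6 beats a fresher IPv4; among IPv6, freshest first.
    assert_eq!(addrs, vec!["v6-fresh", "v6-stale"]);
    println!("ok");
}
```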

<h3>Availability math</h3>
<p>At 80% per-node uptime (conservative for a mix of home and mobile nodes), 5 independent hosts gives <strong>1 - (0.2<sup>5</sup>) = 99.97%</strong> link availability. Hosts are selected from nodes that have already demonstrated they cached this specific post — not random peers.</p>
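<p>The availability figure, computed directly (assuming independent host uptimes):</p>

```rust
/// P(at least one of `hosts` independent nodes is up).
fn link_availability(per_node_uptime: f64, hosts: i32) -> f64 {
    1.0 - (1.0 - per_node_uptime).powi(hosts)
}

fn main() {
    let a = link_availability(0.8, 5);
    // 1 - 0.2^5 = 1 - 0.00032 = 0.99968, i.e. ~99.97%.
    assert!((a - 0.99968).abs() < 1e-9);
    println!("5 hosts at 80% uptime: {:.3}% availability", a * 100.0);
}
```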

<h3>itsgoin.net QUIC proxy handler</h3>
<p>Route: <code>GET /p/<postid_hex>/<author_nodeid_hex></code></p>
<pre><code>1. Check local storage (fast path — post already fetched recently)
2. Connect to author via QUIC (connect_by_node_id cascade) → PostFetch
3. Content search (extended worm with post_id) → find holder → PostFetch
4. Found? Store temporarily + render HTML + serve to browser
5. Not found? Serve "unavailable" page</code></pre>
<p>The itsgoin.net server acts as a QUIC proxy — it fetches posts on-demand from the peer network and renders them as HTML for the browser. Posts are not permanently stored on the server. This is a marketing/convenience service, not a data store. The proxy model works because most nodes are behind NAT and can't serve HTTP directly to browsers.</p>
<p>Scalable via additional instances: <code>1.itsgoin.net</code>, <code>2.itsgoin.net</code>, etc. Each runs its own ItsGoin node with its own mesh connections, increasing total search coverage.</p>
<h3>itsgoin.net node</h3>
<p>itsgoin.net runs a permanent, well-connected ItsGoin node (<code>--bind 0.0.0.0:4433 --daemon --web 8080</code>). This serves two purposes:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>Content search</strong> — when a share link is visited, the server-side node uses the extended worm search (with <code>post_id</code>) to find any peer holding the post. The CDN tree means each peer knows about hundreds of posts across their downstream hosts, making hits highly likely even for content the server has never synced.</li>
<li><strong>Anchor</strong> — a permanently-online, high-uptime node that bootstrap peers can rely on. Strengthens the network’s anchor infrastructure without any special protocol — it’s just a well-connected peer that happens to also serve the web handler.</li>
</ol>
<h3>Share link format</h3>
<p><code>https://itsgoin.net/p/<postid_hex>/<author_nodeid_hex></code></p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>PostID: 64 hex chars (32 bytes, BLAKE3 content hash)</li>
<li>Author NodeID: 64 hex chars (32 bytes, ed25519 public key)</li>
<li>Author ID enables direct QUIC connection as fast path; worm search handles the case where author is unreachable</li>
<li>Only public posts can generate share links</li>
</ul>
<h3>Unavailable page</h3>
<pre><code>⬡ This content isn't currently reachable.

It may be available again when someone
who has it comes back online.

[ Install ItsGoin to find it when it resurfaces ]</code></pre>
<p>This is not a 404. It communicates the honest model: content lives on devices, not servers. Cold content decays. The install CTA is the honest answer to “how do I get this.”</p>
<h3>Universal Links / App Links</h3>
<p>Same URL — <code>itsgoin.net/p/...</code> — intercepts to the native app for users who have ItsGoin installed. No separate URL scheme, no <code>app://</code> links.</p>
<p>Required static files on itsgoin.net:</p>
<pre><code>/.well-known/apple-app-site-association   (iOS Universal Links)
/.well-known/assetlinks.json              (Android App Links)</code></pre>
<p>Both are static JSON deployed once, pointing to the ItsGoin app package ID for the path pattern <code>/p/*</code>.</p>
<p><strong>App-side handling</strong>: Register the URL pattern in Tauri config. On receipt of <code>itsgoin.net/p/<postid>/<hostlist></code>, parse the postid, decode the hostlist, fetch via QUIC from the hostlist peers. If the post is already in local SQLite, render immediately. Universal Links intercept before the browser loads — itsgoin.net sees zero traffic for app users.</p>
<p><strong>iOS caveat</strong>: Universal Links require the app to have been opened manually at least once before OS interception activates. First-time tap from a link goes to the browser fallback. All subsequent taps open the app directly. The browser fallback is the full loading screen experience — first-time users see the product demo, which is the right outcome anyway.</p>
<h3>QR codes</h3>
<p>Share links are also valid QR codes. At ~127 chars base64url for 5 IPv6 hosts plus postid and domain, total URL length stays well under 200 characters — comfortably scannable at low error correction.</p>
<p>QR codes for share links use the same Universal Links / App Links interception path. A generic phone camera scanning the QR sees an itsgoin.net URL and offers to open it — either in the app (if installed) or in the browser.</p>
<p>No custom QR scheme needed. The HTTPS URL is the QR payload.</p>
<h3>Chrome HTTPS (October 2026)</h3>
<p>Chrome 154 (October 2026) enables “Always Use Secure Connections” by default, warning before HTTP sites. This does not affect the share link architecture:</p>
<ul style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li><strong>itsgoin.net is HTTPS</strong> — no warning, Universal Links work normally</li>
<li>The 302 redirect takes the browser off the HTTPS page before content loads, so no mixed-content issue</li>
<li>Node HTTP endpoints are raw IP:port addresses, which Chrome treats as private network addresses (exempt from the public-site warning requirement)</li>
<li>The redirect itself is a header only — no content flows through itsgoin.net</li>
</ul>
<p>No architecture changes needed before or after October 2026.</p>
</section>
<!-- Appendix A: Timeouts -->
<section id="timeouts">
<h2>Appendix A: Timeout Reference</h2>
<table>
<tr><th>Constant</th><th>Value</th><th>Purpose</th></tr>
<tr><td>MESH_KEEPALIVE_INTERVAL</td><td>30s</td><td>Ping to prevent zombie detection</td></tr>
<tr><td>ZOMBIE_TIMEOUT</td><td>600s (10 min)</td><td>No activity → dead connection</td></tr>
<tr><td>SESSION_IDLE_TIMEOUT</td><td>300s (5 min)</td><td>Reap idle interactive sessions (NOT keep-alive)</td></tr>
<tr><td>SELF_LAST_ENCOUNTER_THRESHOLD</td><td>10800s (3 hours)</td><td>Trigger pull sync when last encounter exceeds this</td></tr>
<tr><td>QUIC_CONNECT_TIMEOUT</td><td>15s</td><td>Direct connection establishment</td></tr>
<tr><td>HOLE_PUNCH_TIMEOUT</td><td>30s</td><td>Overall hole punch window</td></tr>
<tr><td>HOLE_PUNCH_ATTEMPT</td><td>2s</td><td>Per-address attempt within window</td></tr>
<tr><td>RELAY_INTRO_TIMEOUT</td><td>15s</td><td>Relay introduction request</td></tr>
<tr><td>RELAY_PIPE_IDLE</td><td>120s (2 min)</td><td>Relay pipe idle before close</td></tr>
<tr><td>RELAY_COOLDOWN</td><td>300s (5 min)</td><td>Per-target relay cooldown</td></tr>
<tr><td>RELAY_INTRO_DEDUP</td><td>30s</td><td>Dedup intro forwarding</td></tr>
<tr><td>WORM_TOTAL_TIMEOUT</td><td>3s</td><td>Entire worm search</td></tr>
<tr><td>WORM_FAN_OUT_TIMEOUT</td><td>500ms</td><td>Per-peer fan-out query</td></tr>
<tr><td>WORM_BLOOM_TIMEOUT</td><td>1.5s</td><td>Bloom round to wide referrals</td></tr>
<tr><td>WORM_DEDUP</td><td>10s</td><td>In-flight worm dedup</td></tr>
<tr><td>WORM_COOLDOWN</td><td>300s (5 min)</td><td>Miss cooldown before retry</td></tr>
<tr><td>REFERRAL_DISCONNECT_GRACE</td><td>120s (2 min)</td><td>Anchor keeps peer in referral list after disconnect</td></tr>
<tr><td>N2/N3_STALE_PRUNE</td><td>Immediate on disconnect + 7 day fallback</td><td>Remove reach entries tagged to disconnected peers; age-based fallback for stragglers</td></tr>
<tr><td>N2/N3_STARTUP_SWEEP</td><td>On boot</td><td>Remove all N2/N3 entries tagged to peers not in current mesh</td></tr>
<tr><td>PREFERRED_UNREACHABLE_PRUNE</td><td>7 days</td><td>Release preferred slot (must re-negotiate MeshPrefer on reconnect)</td></tr>
<tr><td>RECONNECT_WATCHER_EXPIRY</td><td>30 days</td><td>Low-priority reconnect awareness; daily check after 7 days</td></tr>
<tr><td>GROWTH_LOOP_TIMER</td><td>60s</td><td>Periodic growth loop check</td></tr>
<tr><td>CONNECTIVITY_CHECK</td><td>60s</td><td>Social/file <N4 access check for keep-alive sessions</td></tr>
<tr><td>DM_RECENCY_WINDOW</td><td>14400s (4 hours)</td><td>DM'd nodes included in connectivity check</td></tr>
<tr><td>UPNP_DISCOVERY_TIMEOUT</td><td>2s</td><td>Gateway discovery on startup (do not block)</td></tr>
<tr><td>UPNP_LEASE_RENEWAL</td><td>2700s (45 min)</td><td>Refresh port mapping before TTL expiry</td></tr>
<tr><td>ANCHOR_PROBE_INTERVAL</td><td>1800s (30 min)</td><td>Periodic re-probe while anchor-declared</td></tr>
<tr><td>ANCHOR_PROBE_TIMEOUT</td><td>15s</td><td>Cold connect attempt by witness</td></tr>
<tr><td>ANCHOR_STALE_THRESHOLD</td><td>7 days</td><td>Post-bootstrap cleanup probes known_anchors older than this</td></tr>
</table>
</section>
<!-- Appendix B: Design Constraints -->
<section id="constraints">
<h2>Appendix B: Design Constraints</h2>
<table>
<tr><th>Constraint</th><th>Value</th><th>Notes</th></tr>
<tr><td>Visibility metadata cap</td><td>256 KB</td><td>Applies to WrappedKey lists in encrypted posts</td></tr>
<tr><td>Max recipients (per-recipient wrapping)</td><td>~500</td><td>256KB / ~500 bytes JSON per WrappedKey</td></tr>
<tr><td>Max blob size</td><td>10 MB</td><td>Per attachment</td></tr>
<tr><td>Max attachments per post</td><td>4</td><td></td></tr>
<tr><td>Public post encryption overhead</td><td>Zero</td><td>No WrappedKeys, no sharding, unlimited audience</td></tr>
<tr><td>Max payload (wire)</td><td>16 MB</td><td>Length-prefixed JSON framing</td></tr>
<tr><td>Mesh slots</td><td>101 (Desktop) / 15 (Mobile)</td><td>Preferred + non-preferred, no local/wide distinction</td></tr>
<tr><td>Keep-alive session cap</td><td>50% of session capacity</td><td>Ensures interactive sessions remain available</td></tr>
<tr><td>Keep-alive ceiling (desktop)</td><td>~300–500</td><td>Binding constraint: routing diff broadcast overhead</td></tr>
<tr><td>Keep-alive ceiling (mobile)</td><td>~25–50</td><td>Binding constraint: battery + OS background restrictions</td></tr>
<tr><td><code>mesh_blacklist</code> table</td><td><code>{ node_id }</code></td><td>Targeted mutual stranger relationships for testing/diversity</td></tr>
<tr><td><code>known_anchors</code> table</td><td><code>{ node_id, addresses, last_seen }</code></td><td>LIFO ordered, 7-day stale cleanup via probe</td></tr>
</table>
</section>
<!-- Appendix C: Scorecard -->
<section id="scorecard">
<h2>Appendix C: Implementation Scorecard</h2>
<table class="scorecard">
<tr><th>Area</th><th>Status</th></tr>
<tr><td>Mesh connection architecture (101 slots, preferred/non-preferred)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>N1/N2/N3 knowledge layers</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Growth loop (60s timer + reactive on N2/N3)</td><td><span class="badge badge-partial">Partial</span> (timer exists, reactive trigger needs update)</td></tr>
<tr><td>Preferred peers + bilateral negotiation</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>N+10 identification</td><td><span class="badge badge-partial">Partial</span> (preferred peers exist, N+10 not in all headers)</td></tr>
<tr><td>Worm search (nodes + content search for posts/blobs)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Relay introduction + hole punch</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Session relay (own-device default)</td><td><span class="badge badge-partial">Partial</span> (relay works, own-device restriction not implemented)</td></tr>
<tr><td>Social routing cache</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Three-layer architecture (Mesh/Social/File)</td><td><span class="badge badge-partial">Partial</span> (layers exist conceptually, pull sync still uses mesh)</td></tr>
<tr><td>Keep-alive sessions</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Self Last Encounter sync trigger</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Algorithm-free reverse-chronological feed</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Envelope encryption (1-layer)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Group keys for circles</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Three-tier access revocation</td><td><span class="badge badge-partial">Partial</span> (Tier 1+2 work, Tier 3 crypto exists but no UI)</td></tr>
<tr><td>Private profiles per circle</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Pull-based sync with follow filtering</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Push notifications (post/profile/delete)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Blob storage + transfer</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>CDN hosting tree + manifests</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Blob eviction with priority scoring</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Anchor bootstrap + referrals</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Delete propagation + CDN cascade</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Multi-device identity</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>UPnP port mapping (desktop)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>NAT type detection (STUN) + hard+hard skip</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Advanced NAT traversal (role-based scanning + filter probe)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>LAN discovery (mDNS scan + auto-connect)</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Content propagation via attention</td><td><span class="badge badge-partial">Partial</span></td></tr>
<tr><td>BlobHeader separation from blob content</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>25+25 neighborhood with HeaderDiff propagation</td><td><span class="badge badge-partial">Partial</span> (engagement diffs work, neighborhood diffs planned)</td></tr>
<tr><td>BlobHeaderDiff message (engagement)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Reactions (public + private encrypted)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Comments + author policy enforcement</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Engagement sync via BlobHeaderRequest after pull sync</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Notification settings (messages/posts/nearby)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Tiered DM polling (recency-based schedule)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Auto-sync on follow</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Post CDN tree (post_downstream)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Anchor self-verification (reachability probe)</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Mutual mesh blacklist</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td><code>--max-mesh</code> flag (test affordance)</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Audience sharding</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Custom feeds</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>HTTP post delivery (TCP listener, single route, load shedding)</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Share link generation (postid + author NodeId)</td><td><span class="badge badge-complete">Complete</span></td></tr>
|
|
<tr><td>itsgoin.net QUIC proxy handler (on-demand fetch + render)</td><td><span class="badge badge-complete">Complete</span></td></tr>
|
|
<tr><td>PostFetch (0xD4/0xD5) single-post retrieval</td><td><span class="badge badge-complete">Complete</span></td></tr>
|
|
<tr><td>Universal Links / App Links (itsgoin.net/p/*)</td><td><span class="badge badge-planned">Planned</span></td></tr>
|
|
<tr><td>itsgoin.net ItsGoin node (anchor + web handler)</td><td><span class="badge badge-complete">Complete</span></td></tr>
|
|
<tr><td>UPnP TCP port mapping alongside UDP</td><td><span class="badge badge-planned">Planned</span></td></tr>
|
|
</table>
|
|
</section>
<!-- Appendix D: Roadmap -->
<section id="roadmap">
<h2>Appendix D: Critical Path Forward</h2>
<p>The highest-impact items, in priority order:</p>
<div class="card">
<h3>1. Three-layer separation (pull sync from social/file, not mesh)</h3>
<p>Implement Self Last Encounter tracking and move pull sync to social and upstream file peers. This is the foundation for the layered architecture.</p>
</div>
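<p>A minimal sketch of what Self Last Encounter tracking could look like, in Rust. Every name here (<code>EncounterTracker</code>, <code>SyncLayer</code>, <code>due_for_sync</code>) is hypothetical, not the current API; the sketch only illustrates the intended behavior — mesh peers are never pull targets, while social and file peers become due for a pull once their last encounter goes stale.</p>

```rust
use std::collections::HashMap;

// Which layer we last encountered a peer on. Illustrative, not the real enum.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SyncLayer { Mesh, Social, File }

// Hypothetical tracker: peer id -> (layer, unix seconds of last encounter).
pub struct EncounterTracker {
    last_encounter: HashMap<String, (SyncLayer, u64)>,
}

impl EncounterTracker {
    pub fn new() -> Self {
        Self { last_encounter: HashMap::new() }
    }

    pub fn record(&mut self, peer: &str, layer: SyncLayer, now: u64) {
        self.last_encounter.insert(peer.to_string(), (layer, now));
    }

    // Social/file peers whose last encounter is at least `stale_after`
    // seconds old are due for a pull sync; mesh peers are skipped entirely.
    pub fn due_for_sync(&self, now: u64, stale_after: u64) -> Vec<String> {
        let mut due: Vec<String> = self
            .last_encounter
            .iter()
            .filter(|(_, (layer, ts))| {
                *layer != SyncLayer::Mesh && now.saturating_sub(*ts) >= stale_after
            })
            .map(|(peer, _)| peer.clone())
            .collect();
        due.sort();
        due
    }
}
```

<p>The point of the separation: the mesh stays a routing substrate, while sync load lands on the peers that actually care about the content.</p>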
<div class="card">
<h3>2. N+10 in all identification</h3>
<p>Add N+10 (NodeId + 10 preferred peers) to self-identification, post headers, blob headers, and social routes. Dramatically improves findability.</p>
</div>
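<p>One way an N+10 advertisement could be shaped, as a std-only Rust sketch. The struct and field names, the hand-rolled JSON, and the 4-byte big-endian length prefix are all assumptions for illustration — the real wire format lives in <code>protocol.rs</code> (length-prefixed JSON framing) and may differ in detail.</p>

```rust
// Hypothetical N+10 advertisement: a node id plus at most ten preferred
// peers. Field names and the 4-byte big-endian length prefix are
// assumptions, not the wire format defined in protocol.rs.
pub struct NPlus10 {
    pub node_id: String,
    pub preferred: Vec<String>,
}

impl NPlus10 {
    // Cap the preferred-peer list at ten entries (the "+10").
    pub fn new(node_id: &str, mut preferred: Vec<String>) -> Self {
        preferred.truncate(10);
        Self { node_id: node_id.to_string(), preferred }
    }

    // Hand-rolled JSON (std only); assumes ids are hex strings that need
    // no escaping.
    pub fn to_json(&self) -> String {
        let peers: Vec<String> =
            self.preferred.iter().map(|p| format!("\"{}\"", p)).collect();
        format!(
            "{{\"node_id\":\"{}\",\"preferred\":[{}]}}",
            self.node_id,
            peers.join(",")
        )
    }

    // Length-prefixed frame: 4-byte big-endian body length, then the body.
    pub fn to_frame(&self) -> Vec<u8> {
        let body = self.to_json().into_bytes();
        let mut frame = (body.len() as u32).to_be_bytes().to_vec();
        frame.extend_from_slice(&body);
        frame
    }
}
```

<p>Embedding this in every header means any single object a peer holds carries ten extra routes back to its author.</p>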
<div class="card">
<h3>3. Keep-alive sessions</h3>
<p>Implement the social/file connectivity check and keep-alive sessions for peers not reachable within N3, with cross-layer N2/N3 routing derived from those sessions.</p>
</div>
<div class="card">
<h3>4. UPnP port mapping</h3>
<p>Best-effort NAT traversal for desktop/home networks. Makes nodes directly reachable without hole punching. The discovered external address feeds into N+10 and all peer advertisements. Especially impactful for mobile-to-desktop connectivity.</p>
</div>
<div class="card">
<h3>5. Growth loop reactive trigger</h3>
<p>Fire the growth loop immediately on N2/N3 receipt until the mesh is 90% full. Currently the loop is timer-based only.</p>
</div>
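<p>The trigger condition is simple enough to sketch. This is a hypothetical shape, not the existing growth-loop code: <code>below_growth_threshold</code>, <code>on_diff_receipt</code>, and the <code>run_growth</code> callback are illustrative names, and the 90% figure is taken from the item above.</p>

```rust
// Hypothetical reactive trigger: on every N2/N3 diff receipt, run a growth
// round while the mesh is below 90% of capacity. `run_growth` stands in
// for the existing timer-driven growth-loop body.
pub fn below_growth_threshold(mesh_peers: usize, mesh_capacity: usize) -> bool {
    mesh_capacity > 0 && (mesh_peers as f64) / (mesh_capacity as f64) < 0.9
}

// Returns true when a growth round was actually fired.
pub fn on_diff_receipt(
    mesh_peers: usize,
    mesh_capacity: usize,
    run_growth: &mut dyn FnMut(),
) -> bool {
    if below_growth_threshold(mesh_peers, mesh_capacity) {
        run_growth();
        true
    } else {
        false
    }
}
```

<p>Keeping the timer as a fallback and adding this reactive path means a freshly bootstrapped node fills its mesh as fast as diffs arrive instead of waiting out timer intervals.</p>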
<div class="card">
<h3>6. Multi-device identity</h3>
<p>Use the same identity key across devices, with a device-specific identity for self-discovery and own-device relay.</p>
</div>
<div class="card">
<h3>7. File-chain propagation</h3>
<p>Make the AuthorManifest with N+10 and recent posts work passively, enabling discovery of new content from any blob holder.</p>
</div>
<div class="card">
<h3>8. Share links + HTTP post delivery</h3>
<p>The viral growth mechanism: every share becomes a product demo for non-app users and opens natively for app users. Dependencies, in order:</p>
<ol style="padding-left: 1.25rem; margin: 0.5rem 0; color: var(--text-muted);">
<li>UPnP TCP mapping (small addition to existing UPnP code)</li>
<li>Raw TCP HTTP listener (150–200 lines, zero new dependencies)</li>
<li>Host list generation at share time (query post_downstream, encode, embed in URL)</li>
<li>itsgoin.net redirect handler + known_good DB (server-side, independent of app releases)</li>
<li>itsgoin.net loading screen</li>
<li>Universal Links / App Links registration (static JSON files + Tauri config)</li>
<li>itsgoin.net ItsGoin node (run the binary, configure as anchor)</li>
</ol>
<p>Steps 4–7 are itsgoin.net infrastructure, deployable independently of app releases. Steps 1–3 ship in the app. Step 6 requires an app store release to activate but can be deployed to itsgoin.net ahead of time.</p>
</div>
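<p>The raw TCP listener in step 2 can be sketched with nothing but the standard library: a single <code>GET /p/&lt;postid&gt;</code> route plus 503-based load shedding. <code>handle_request</code>, <code>serve</code>, and the <code>render</code> callback are hypothetical names, and the exact route shape and shedding policy are assumptions; this only shows that the step needs no HTTP framework.</p>

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Single-route handler (hypothetical shape): GET /p/<postid> renders a post,
// anything else is a 404, and when `active` connections exceed `max` we
// shed load with a 503 instead of queueing.
pub fn handle_request(
    request_line: &str,
    active: usize,
    max: usize,
    render: &dyn Fn(&str) -> String,
) -> String {
    if active > max {
        return "HTTP/1.1 503 Service Unavailable\r\nRetry-After: 5\r\nContent-Length: 0\r\n\r\n"
            .to_string();
    }
    let mut parts = request_line.split_whitespace();
    let method = parts.next().unwrap_or("");
    let path = parts.next().unwrap_or("");
    if method == "GET" && path.starts_with("/p/") {
        let body = render(&path[3..]);
        format!(
            "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        )
    } else {
        "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n".to_string()
    }
}

// Accept loop: one thread per connection, a shared atomic counter feeding
// the shedding decision. `render` stands in for the real post renderer.
pub fn serve(
    addr: &str,
    max: usize,
    render: Arc<dyn Fn(&str) -> String + Send + Sync>,
) -> std::io::Result<()> {
    let listener = TcpListener::bind(addr)?;
    let active = Arc::new(AtomicUsize::new(0));
    for stream in listener.incoming() {
        let mut stream = stream?;
        let active = Arc::clone(&active);
        let render = Arc::clone(&render);
        thread::spawn(move || {
            let n = active.fetch_add(1, Ordering::SeqCst) + 1;
            let mut buf = [0u8; 1024];
            if let Ok(read) = stream.read(&mut buf) {
                let req = String::from_utf8_lossy(&buf[..read]);
                let line = req.lines().next().unwrap_or("");
                let resp = handle_request(line, n, max, render.as_ref());
                let _ = stream.write_all(resp.as_bytes());
            }
            active.fetch_sub(1, Ordering::SeqCst);
        });
    }
    Ok(())
}
```

<p>Shedding with an immediate 503 plus <code>Retry-After</code> keeps the node responsive to its real P2P workload even when a share link goes viral.</p>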
<div class="card">
<h3>9. Own-device relay restriction</h3>
<p>Restrict relay pipes to own-device traffic by default, with opt-in relaying for others.</p>
</div>
</section>
<!-- Appendix E: Not Built -->
<section id="not-built">
<h2>Appendix E: Features Designed But Not Built</h2>
<p>Several of these designs have since shipped; rows marked Complete are kept for traceability against their original source.</p>
<table>
<tr><th>Feature</th><th>Source</th><th>Status</th></tr>
<tr><td>Three-layer pull sync (social/file, not mesh)</td><td>v0.2.0 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>N+10 in all identification &amp; headers</td><td>v0.2.0 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Keep-alive sessions</td><td>v0.2.0 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Multi-device identity</td><td>v0.2.0 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Own-device relay restriction</td><td>v0.2.0 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Self Last Encounter sync trigger</td><td>v0.2.0 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Anchor pin vs Fork pin distinction</td><td>project discussion.txt</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Audience sharding for groups > 250</td><td>ARCHITECTURE.md</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Repost as first-class post type</td><td>project discussion.txt</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Custom feeds (keyword/media/family rules)</td><td>project discussion.txt</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Bounce routing (social graph as routing)</td><td>ARCHITECTURE.md</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Reactions (public + private encrypted)</td><td>v0.2.11</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>RefuseRedirect handling (retry suggested peer)</td><td>protocol.rs</td><td><span class="badge badge-partial">Partial</span> (send-only)</td></tr>
<tr><td>Profile anchor list used for discovery</td><td>ARCHITECTURE.md</td><td><span class="badge badge-partial">Partial</span> (field exists)</td></tr>
<tr><td>File-chain propagation (passive post discovery)</td><td>Design</td><td><span class="badge badge-partial">Partial</span> (manifest exists)</td></tr>
<tr><td>Anchor-to-anchor gossip/registry</td><td>Observed gap</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>BlobHeader as separate mutable structure</td><td>v0.2.11</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>BlobHeaderDiff incremental propagation (engagement)</td><td>v0.2.11</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Post export/backup tooling (author durability)</td><td>v0.2.4 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Anchor reachability probe (self-verification)</td><td>v0.2.6</td><td><span class="badge badge-complete">Complete</span></td></tr>
<tr><td>Mutual mesh blacklist</td><td>v0.2.4 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td><code>--max-mesh</code> flag (test topology control)</td><td>v0.2.4 design</td><td><span class="badge badge-planned">Planned</span></td></tr>
<tr><td>Relay-assisted port scanning (advanced NAT traversal)</td><td>v0.2.6</td><td><span class="badge badge-complete">Complete</span></td></tr>
</table>
</section>
<!-- Appendix F: File Map -->
<section id="filemap">
<h2>Appendix F: File Map</h2>
<pre><code>crates/core/
  src/
    lib.rs        — module registration, parse_connect_string, parse_node_id_hex
    types.rs      — Post, PostId, NodeId, PublicProfile, PostVisibility, WrappedKey,
                    VisibilityIntent, Circle, PeerRecord, Attachment
    content.rs    — compute_post_id (BLAKE3), verify_post_id
    crypto.rs     — X25519 key conversion, DH, encrypt_post, decrypt_post, BLAKE3 KDF
    blob.rs       — BlobStore, compute_blob_id, verify_blob
    storage.rs    — SQLite: posts, peers, follows, profiles, circles, circle_members,
                    mesh_peers, reachable_n2/n3, social_routes, blobs, group_keys,
                    preferred_peers, known_anchors; auto-migration
    protocol.rs   — MessageType enum (39 types), ALPN (itsgoin/3),
                    length-prefixed JSON framing, read/write helpers
    connection.rs — ConnectionManager + ConnHandle/ConnectionActor (actor pattern):
                    mesh QUIC connections (MeshConnection), session connections,
                    slot management, initial exchange, N1/N2 diff broadcast,
                    pull sync, relay introduction. All external access via ConnHandle.
    network.rs    — iroh Endpoint, accept loop, connect_to_peer,
                    connect_by_node_id (7-step cascade), mDNS discovery
    node.rs       — Node struct (ties identity + storage + network), post CRUD,
                    follow/unfollow, profile CRUD, circle CRUD, encrypted post creation,
                    startup cycles, bootstrap, anchor register cycle
    web.rs        — itsgoin.net web handler: QUIC proxy for share links,
                    on-demand post fetch via content search, blob serving
    http.rs       — HTML rendering for shared posts (render_post_html)

crates/cli/
  src/main.rs     — interactive REPL + anchor mode (--bind, --daemon, --web)

crates/tauri-app/
  src/lib.rs      — Tauri v2 commands (38 IPC handlers), DTOs

frontend/
  index.html      — single-page UI: 5 tabs (Feed / My Posts / People / Messages / Settings)
  app.js          — Tauri invoke calls, rendering, identicon generator, circle CRUD
  style.css       — dark theme, post cards, visibility badges, transitions</code></pre>
</section>
<section>
<h2>License</h2>
<p>ItsGoin is released under the <strong>Apache License, Version 2.0</strong>. You may use, modify, and distribute this software freely under the terms of that license.</p>
<p>This is a gift. Use it well.</p>
</section>
</div>

<footer>
<p>ItsGoin — Apache 2.0 License — <a href="https://itsgoin.com">itsgoin.com</a></p>
</footer>
</body>
</html>