From 9f2603f981958d69fbd73b4ab434311ed4de7d3e Mon Sep 17 00:00:00 2001 From: Scott Reimers Date: Tue, 21 Apr 2026 18:06:26 -0400 Subject: [PATCH] =?UTF-8?q?0.5.3=20=E2=86=92=20stable;=20document=200.6.x?= =?UTF-8?q?=20Identity=20Architecture=20plan?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Promote 0.5.3 to stable on download page - "Coming in Beta" section describes 0.6.x privacy features - Add design.html §28: Identity Architecture (Planned) — network/posting ID split, multi-persona, ephemeral DM IDs, file-holder CDN, CDN-only DM privacy - IMPLEMENTATION_PLAN_0.6.md: phased rollout across 0.6.0 through 0.6.5, each backward compatible, each a standalone release Co-Authored-By: Claude Opus 4.7 (1M context) --- IMPLEMENTATION_PLAN_0.6.md | 199 ++++++++++++++++++++++++++++++++ website/design.html | 224 ++++++++++++++++++++++++++++++++----- website/download.html | 48 ++++---- 3 files changed, 417 insertions(+), 54 deletions(-) create mode 100644 IMPLEMENTATION_PLAN_0.6.md diff --git a/IMPLEMENTATION_PLAN_0.6.md b/IMPLEMENTATION_PLAN_0.6.md new file mode 100644 index 0000000..da06ecd --- /dev/null +++ b/IMPLEMENTATION_PLAN_0.6.md @@ -0,0 +1,199 @@ +# Implementation Plan: Identity Architecture Rollout (0.6.x beta cycle) + +## Context + +0.5.3-beta is graduating to stable. The 0.6.x beta line introduces the Network-ID / Posting-ID split, multi-persona support, ephemeral rotating DM identities, file-holder CDN restructure, and CDN-only DM privacy. + +Full architectural plan: `/home/sologretto/.claude/plans/woolly-nibbling-glade.md` +Canonical reference: `website/design.html` §28 +Memory summary: `reference_identity_architecture.md` + +Each phase below is a standalone release. Each phase is backward compatible with peers on earlier versions (they degrade gracefully or stay on the old path). After all phases ship, 0.7.0-beta consolidates and 0.7.x becomes the candidate for the next stable promotion. 
+ +--- + +## Phase 1 (v0.6.0-beta): Remove direct `PostPush` for encrypted posts + +**Goal:** Eliminate the sender→recipient traffic signal. Encrypted DMs propagate via the existing ManifestPush / CDN tree, indistinguishable on the wire from any other encrypted post. + +**Scope:** +- Remove or gate `push_post_to_recipients` in `crates/core/src/node.rs` +- Ensure new encrypted posts still trigger a normal header update on neighbor posts (already done by existing CDN logic) so they propagate +- Verify the existing ManifestPush fan-out path reaches recipients who follow the author's posting ID (today they do since follows pull posts) +- Add a "CDN delivery SLA" doc note: expect ~seconds to tens-of-seconds latency for DMs to followers, minutes worst case for offline-then-online recipients + +**Verification:** +- Send an encrypted DM with two test devices. Confirm the recipient receives it without any direct-push message firing (network log inspection) +- Measure p50/p95 delivery latency on a small mesh +- Backward compat: verify a v0.6 client can DM a v0.5 client and vice versa + +**Risks:** +- Latency regression for non-follower recipients (they won't get it until Phase 3 or comment-intro shipping). For Phase 1, keep the push path behind a compile flag for non-follower recipients if safety is a concern — or explicitly document that "DMs to non-followers don't reach yet; follow or comment on their post" + +--- + +## Phase 2 (v0.6.1-beta): File-holder CDN + header-diff propagation + +**Goal:** Replace the upstream/downstream tree with per-file flat holder sets. Prerequisite for multi-device (because upstream-toward-author doesn't work with multiple authoring network IDs). 
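The flat holder-set bookkeeping this phase introduces can be sketched in Rust. This is an illustrative in-memory model only, assuming `u64` stand-ins for peer and file IDs; the names `FileHolders`, `touch`, and `diff_targets` are hypothetical, not the project's actual types (the real state lives in the `file_holders` table):

```rust
use std::collections::HashMap;

const MAX_HOLDERS: usize = 5; // plan: cap per file_id at 5 holders

// Hypothetical in-memory mirror of the `file_holders` table.
#[derive(Debug, Clone, PartialEq)]
struct Holder {
    peer_id: u64,             // stand-in for the real peer ID type
    last_interaction_ms: u64, // LRU key
    sent: bool,               // direction: sent vs received with this peer
}

#[derive(Default)]
struct FileHolders {
    by_file: HashMap<u64, Vec<Holder>>, // file_id -> flat holder set, no tree
}

impl FileHolders {
    // Record an interaction; at the cap, evict the least-recently-used holder.
    fn touch(&mut self, file_id: u64, peer_id: u64, now_ms: u64, sent: bool) {
        let holders = self.by_file.entry(file_id).or_default();
        if let Some(h) = holders.iter_mut().find(|h| h.peer_id == peer_id) {
            h.last_interaction_ms = now_ms; // refresh an existing holder
            h.sent = sent;
            return;
        }
        if holders.len() >= MAX_HOLDERS {
            // LRU eviction on last_interaction_ms
            let (idx, _) = holders
                .iter()
                .enumerate()
                .min_by_key(|(_, h)| h.last_interaction_ms)
                .unwrap();
            holders.remove(idx);
        }
        holders.push(Holder { peer_id, last_interaction_ms: now_ms, sent });
    }

    // Destinations for a header diff on this file: its known holders
    // (replacing the old "send to upstream" rule).
    fn diff_targets(&self, file_id: u64) -> Vec<u64> {
        self.by_file
            .get(&file_id)
            .map(|hs| hs.iter().map(|h| h.peer_id).collect())
            .unwrap_or_default()
    }
}

fn main() {
    let mut fh = FileHolders::default();
    for peer in 0..7u64 {
        fh.touch(42, peer, peer, false); // 7 distinct peers, cap of 5
    }
    let targets = fh.diff_targets(42);
    assert_eq!(targets.len(), 5);
    // The two oldest interactions were evicted.
    assert!(!targets.contains(&0) && !targets.contains(&1));
}
```

The same LRU-on-`last_interaction_ms` rule would apply at the storage layer; duplicate diff delivery from overlapping holder sets is tolerable because application is idempotent.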
+ +**Scope:** +- New `file_holders` table: `(file_id, peer_id, last_interaction_ms, direction)` — direction records whether we sent or received the file with this peer, for potential reuse +- Populate on ManifestPush receive, blob fetch, blob serve, engagement diff exchange +- Cap per file_id at 5 holders (LRU on `last_interaction_ms`) +- Refactor engagement diff delivery: instead of "send to upstream," send to the file's up-to-5 known holders +- Keep `post_upstream` / `post_downstream` tables for compat reads during the transition +- ManifestPush propagation logic: when header of file A changes, look up A's holders, send diff to them +- Receiver behavior: apply header diff, pull any referenced new files + +**Wire protocol:** +- `BlobHeaderDiff` and `ManifestPush` messages stay the same on the wire +- What changes is the sender's list of destinations (was: upstreams; now: holders) + +**Verification:** +- Mesh test: create a post, track its propagation; confirm holders accumulate up to 5 diverse peers +- Engagement propagation: a reaction on a deeply-nested post still reaches the author (now: via the post's holders) +- Backward compat: a v0.6.1 sending to a v0.6.0 holder should still work (v0.6.0 applies header diff normally) + +**Risks:** +- Churn in holder sets during network instability (holders going offline trigger LRU replacement). Need soak testing. +- Potential duplicate diff delivery if holder sets overlap. Idempotent application is already required by existing code. + +--- + +## Phase 3 (v0.6.2-beta): Merged pull + recipient-match + +**Goal:** Nodes can find DMs addressed to them without a distinguishable "searching for DMs" traffic pattern. Handles the case where a DM exists on a peer you're connected to but you don't follow the author. 
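The broadened matching rule can be sketched as a pure predicate. The types here (`Post`, `WrappedKey`, `u64` IDs) are simplified stand-ins for illustration; the real handler evaluates the same OR against storage, which is why the index on the recipient field is mandatory:

```rust
use std::collections::HashSet;

// Hypothetical simplified shapes, not the real crates/core types.
struct WrappedKey {
    recipient: u64,
}

struct Post {
    author: u64,
    wrapped_keys: Vec<WrappedKey>,
}

// Phase 3 rule: return a post when its author is in the pull query's ID
// list OR any wrapped key is addressed to an ID in that list.
fn matches_pull(post: &Post, query_ids: &HashSet<u64>) -> bool {
    query_ids.contains(&post.author)
        || post
            .wrapped_keys
            .iter()
            .any(|wk| query_ids.contains(&wk.recipient))
}

fn main() {
    // B (id 7) does not follow A (id 1), but A addressed a DM to B.
    let dm = Post { author: 1, wrapped_keys: vec![WrappedKey { recipient: 7 }] };
    let unrelated = Post { author: 2, wrapped_keys: vec![] };
    // B always includes its own NodeId in the pull query.
    let query: HashSet<u64> = [7u64].into_iter().collect();
    assert!(matches_pull(&dm, &query));        // recipient match
    assert!(!matches_pull(&unrelated, &query)); // neither author nor recipient
}
```

Because the recipient match rides inside the ordinary pull query, an observer sees no distinct "searching for DMs" traffic shape.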
+ +**Scope:** +- Add index on `wrapped_key.recipient` in storage (migration) +- Extend `PullSyncRequestPayload` handling: peer returns posts matching `author ∈ query_ids` OR `wrapped_key.recipient ∈ query_ids` +- Client: always include own NodeId in the pull query's NodeId list +- No new message type; existing PullSync handler gets smarter +- UX: no user-visible change (pull is internal) + +**Wire protocol:** +- No wire format change. Same fields, broader server-side matching logic. + +**Verification:** +- Send a DM from A to B where A is not followed by B. Verify B receives it on next pull cycle (via the recipient-match). +- Benchmark pull query cost with the new OR-clause. Should be near-zero with the index. +- Backward compat: a v0.6.2 pulling from a v0.6.1 peer still works (peer ignores the "recipient match" intent since it only indexes author) + +**Risks:** +- Query cost grows with posts held × recipient list length. Index is mandatory. +- Peer can keep a log of "query included NodeId X" — tied to your connection anyway, not new info. + +--- + +## Phase 4 (v0.6.3-beta): Posting-key / network-key split + +**Goal:** Decouple signing identity from network identity. Foundation for multi-device and multi-persona. This phase ships WITHOUT UI for creating multiple personas — it's the plumbing. + +**Scope:** +- `PostingIdentity` struct in `crates/core/src/types.rs`: `{ node_id, secret_seed, display_name, created_at }` +- Storage: `posting_identities` table (list of all held posting keys), `active_default_posting_id` setting +- Storage: migrate existing identity → single posting identity with the same key as the network key (no behavior change for existing users) +- `crypto.rs`: separate signing primitives — `sign_with_posting_key` vs `sign_with_network_key`. 
Keep existing `sign_manifest` working by delegating to posting-key variant when available +- `Node`: load posting keys alongside network key at startup +- `BlobHeader.author`: populated from posting key (was: network key, but they were equal) +- Posts signed with posting key; connections still use network key +- Export/import bundle includes posting key (in addition to network key) +- Wire: no new message types; `InitialExchange` doesn't need to change because posting IDs are only relevant for signed content, not connection setup + +**Wire protocol:** +- `BlobHeader` already has an `author` field. We just populate it from posting key instead of network key. +- For mixed-version networks: v0.6.3 posts signed by a posting_id that happens to equal the author's network_id are indistinguishable from v0.6.2 posts. Backward compat is automatic. +- First-run migration: existing users have network_id == posting_id. Nothing changes for them until they explicitly create a second persona. + +**Verification:** +- Existing identity still works; no data loss +- Posts from upgraded clients still validate on older clients +- Posts from older clients still decrypt and render on upgraded clients + +**Risks:** +- Signature verification regression if `author` field handling changes subtly. Need extensive cross-version testing. +- Storage migration needs transaction safety. + +--- + +## Phase 5 (v0.6.4-beta): Multi-persona UX + +**Goal:** Let users create and use multiple posting identities with clean UX. 
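The persona features in this phase sit directly on the Phase 4 split; a minimal sketch of that foundation helps make the migration invariant concrete. These are hypothetical simplified types, not the real definitions in `crates/core/src/types.rs`:

```rust
// Hypothetical simplified shapes for illustration only.
#[derive(Clone, Debug, PartialEq)]
struct PostingIdentity {
    node_id: [u8; 32],
    secret_seed: [u8; 32],
    display_name: String,
    created_at: u64,
}

struct NetworkIdentity {
    node_id: [u8; 32],
    secret_seed: [u8; 32],
}

// First-run migration from Phase 4: the existing single identity becomes
// one posting identity with the SAME key as the network key, so posts from
// an upgraded client are indistinguishable from pre-split posts.
fn migrate_existing(net: &NetworkIdentity, now: u64) -> PostingIdentity {
    PostingIdentity {
        node_id: net.node_id,
        secret_seed: net.secret_seed,
        display_name: String::new(),
        created_at: now,
    }
}

fn main() {
    let net = NetworkIdentity { node_id: [1; 32], secret_seed: [2; 32] };
    let posting = migrate_existing(&net, 1_000);
    // Backward-compat invariant: posting_id == network_id after migration.
    assert_eq!(posting.node_id, net.node_id);
}
```

Everything in this phase then reduces to managing additional `PostingIdentity` rows and choosing which one signs a given post.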
+ +**Scope:** +- IPC: `list_posting_identities`, `create_posting_identity(name, avatar)`, `set_default_posting_identity(id)`, `delete_posting_identity(id)` +- Frontend: Settings > Personas page with create/list/delete +- Compose box: persona picker (avatar + name + dropdown) +- Contextual defaults: posting to a circle uses that circle's last-used persona +- Feed: merged view, filter pills per persona +- Reply/comment: default persona = whichever decrypted the post +- Subtle per-post labels showing which persona's follow surfaced each item + +**Wire protocol:** +- No changes (posting keys are already supported from Phase 4) + +**Verification:** +- Create three personas, post from each, confirm peers see three distinct authors +- DM each persona from a peer; confirm messages route to separate inbox threads locally +- Social graph separation test: follow one peer from Persona A and a different peer from Persona B; confirm merged feed shows both but filter isolates each + +**Risks:** +- UX complexity regression — the merged feed with filters is non-trivial. Start with two-persona users and expand. +- Existing users with single identity should see ZERO UI change until they opt in to creating a second persona. + +--- + +## Phase 6 (v0.6.5-beta): Ephemeral rotating DM IDs + local archive + +**Goal:** Maximum traffic-graph concealment for DMs. Each thread gets a rotating posting ID, messages include handshake for the next ID, local archive preserves history. 
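The sliding-window acceptance this phase calls for can be sketched as follows. This models only the window bookkeeping: `u64` values stand in for ephemeral posting IDs, and `ThreadRotation`/`accept` are hypothetical names, not the real implementation:

```rust
use std::collections::VecDeque;

const WINDOW: usize = 10; // plan: last 10 accepted IDs per thread

// Hypothetical per-thread rotation state.
struct ThreadRotation {
    accepted: VecDeque<u64>,
}

impl ThreadRotation {
    fn new(initial_id: u64) -> Self {
        let mut accepted = VecDeque::new();
        accepted.push_back(initial_id);
        ThreadRotation { accepted }
    }

    // A message arrives signed by `id` and carries `next_posting_id` inside
    // its encrypted payload. Accept it if `id` is in the window, record the
    // handshake ID, and slide the window forward.
    fn accept(&mut self, id: u64, next_posting_id: u64) -> bool {
        if !self.accepted.contains(&id) {
            return false; // not a known ID for this thread
        }
        if !self.accepted.contains(&next_posting_id) {
            self.accepted.push_back(next_posting_id);
            if self.accepted.len() > WINDOW {
                self.accepted.pop_front(); // forget the oldest wire-level ID
            }
        }
        true
    }
}

fn main() {
    let mut t = ThreadRotation::new(0);
    // Each message introduces the next ID via the handshake field.
    for i in 0..15u64 {
        assert!(t.accept(i, i + 1));
    }
    assert!(t.accepted.len() <= WINDOW);
    // An ID that slid out of the window is no longer accepted on the wire;
    // its messages remain readable only through the local archive.
    assert!(!t.accept(0, 99));
}
```

The UI-level continuity (one thread despite rotating wire IDs) and the encrypted-to-self archive sit on top of this window; the window itself only answers "is this signer currently plausible for this thread?"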
+ +**Scope:** +- Per-thread ephemeral posting ID generation +- Handshake field in encrypted post payload: `next_posting_id: SecretSeed` +- Sliding window of last 10 accepted IDs per thread +- Local archive post: encrypted-to-self, replicates across user's linked devices via the multi-device shared-posting-key mechanism +- UX: DMs appear as continuous thread in the UI despite wire-level rotation +- Group thread rotation + +**Wire protocol:** +- No new message types — ephemeral posting IDs are just short-lived posting keys. The handshake field lives inside the encrypted payload. + +**Verification:** +- Observer test: capture a DM thread's wire traffic; confirm no cryptographic tie between successive messages +- Resilience: drop a message mid-thread; confirm the next message catches up via the sliding window +- Archive test: scroll back through a 100-message thread; confirm all messages visible via local archive even though wire-level IDs are long forgotten + +**Risks:** +- Complexity of rotation + archive is high. Lots of state to get right. +- UX: users need to understand that "they can't recover messages from a new device without importing the archive" — this is true already for any encrypted history, but doubly so here. + +--- + +## Version promotion plan + +- **0.5.3-beta → stable** (this work). Last stable of the pre-split architecture. +- 0.6.0 through 0.6.5 are beta releases shipping Phases 1-6. +- **0.7.0-beta** consolidates and cleans up deprecated code paths (post_upstream/downstream tables dropped, old push code removed). +- **0.7.x-beta** gets real-world soak testing as the full new architecture. +- **0.7.N → stable** once the new architecture is proven. + +## Order-of-operations recommendations + +- Phases 1 and 2 can overlap (CDN restructure can happen while PostPush is being removed — they touch different code paths). +- Phase 3 depends on Phase 2 being in place (the file-holder refactor touches pull handling). 
+- Phase 4 is a prerequisite for 5 and 6 but doesn't need to wait for 1-3 (it's local/storage only, wire protocol unchanged). +- Phase 5 UX should wait for Phase 4 to ship and stabilize so persona-aware code is well tested. +- Phase 6 is the most complex; ship last. + +## What must not regress + +- Existing users upgrading from 0.5.3 should see no behavior change until they explicitly opt in to new features. +- Cross-version interoperability is required at every phase boundary. +- Data integrity: all migrations must be transaction-safe and reversible (keep old tables until N+1 phase). + +## Not in scope for 0.6.x + +- Search/discovery improvements (existing Worm still used) +- Anchor/directory redesign (separate planned track) +- Erasure-coded CDN replication (separate planned track) +- Reciprocity / Phase 2 economic features (still deferred) diff --git a/website/design.html b/website/design.html index a2f6b58..b961cdf 100644 --- a/website/design.html +++ b/website/design.html @@ -41,35 +41,7 @@
v0.5.3-beta — 2026-04-19

Design Document

-

This is the canonical technical reference for ItsGoin. It describes the vision, the architecture, and the current state of every subsystem — with full implementation detail. This document is versioned; each update records what changed.

-
- Changelog -

v0.4.4 (2026-03-23): UI overhaul — sticky header with tabs as one floating block on desktop, fixed header+bottom nav on mobile. Full-width dark header (#0a0a1a) edge-to-edge with 15px fade gradient into content. Tab icons visible on desktop (inline) and mobile (stacked). Safe area inset support for phone notches/camera cutouts. Lightbox close on tab switch. Profiles lightbox (name, bio, visibility, circle profiles) moved from settings to My Posts. Redundancy lightbox moved from settings to My Posts. Sync All and Stored Anchors moved into Network Diagnostics popover. Network indicator click opens diagnostics. Diagnostics buttons centered in rows. Settings streamlined — removed profile editor, diagnostics button, sync button, redundancy panel, anchor management. Attach button centered in compose. Manage Circles released from full-width constraint.

-

v0.4.3 (2026-03-22): Lock contention overhaul — all conn_mgr lock holds during network I/O eliminated. PostFetch, TcpPunch, PullFromPeer, FetchEngagement, ResolveAddress, AnchorProbe, WormLookup, ContentSearch now use brief locks for data gathering only. Bi-stream handlers (BlobRequest, WormQuery, RelayIntroduce, PostFetchRequest, ManifestRefresh) fully lock-free for I/O. ConnectionActor hoists shared Arcs (storage, blob_store, endpoint) for lock-free access. ResolveAddress adds 5s per-query timeout (was unbounded). Worm cascade uses connection snapshots. Initial exchange failure now aborts mesh upgrade (was silently continuing). connect_to_peer/connect_to_anchor use 15s timeout. StoragePool — 8 concurrent SQLite connections in WAL mode replace single Mutex<Storage>. Reads run fully parallel; writes serialize only at SQLite level. Bottom nav bar for mobile/tablet (≤768px) with icon tabs. Text sizes: XS 75%, S 100%, M 125% (default), L 150%, XL 200%. Text size persisted to localStorage for instant restore. Fix: blocking_lock panic inside async runtime (prevented app startup). StoragePool reduced to 4 connections for Android compatibility. Keepalive fix — tokio::time::sleep inside select! was resetting every loop iteration, keepalives never fired; switched to tokio::time::interval. Auto-reconnect on unexpected disconnect — 3s delay then direct reconnect to last known address; falls back to growth loop. notify_growth on disconnect — immediately signals growth loop to fill empty slot instead of waiting 10min rebalance. Tab badge fix — updateTabBadge was using textContent which destroyed icon+label spans; now updates only the label and manages badge span separately. Feed re-render skip during media playback — prevents video echo from DOM destruction.

-

v0.4.2 (2026-03-22): Welcome screen — startup shows “How’s it goin?” with staggered counters (connections, posts, messages, reacts, comments) while backend bootstraps. Status ticker — header ticker for new posts, messages, reactions, comments, connection changes. Notification improvements — Tauri plugin → Web Notification → notify-rust fallback chain, Linux native notifications. Responsive text scaling — Small/Normal/Large (100%/150%/200%), persisted via settings. Diagnostics popover — diagnostics moved from inline section to overlay, connections on-demand, timers removed. Share details lightbox with QR code. Connect string prefers external address (UPnP/public IPv6/observed). Stale N1 fix — disconnected social routes excluded from N1 share. Replication handler fix — actively fetches posts + blobs from requester after accepting replication. Hole punch fix — target-side registers publicly routable remote address for relay introduction. Replication semaphore (3 concurrent max). Peer labels show truncated node ID.

-

v0.4.1 (2026-03-21): Security hardening — reaction signatures (ed25519), comment signature verification on receipt, reaction removal authorization, BlobHeader author verification. Lock contention fixes — ManifestPush discovery (cm lock released during I/O), pull request handler (filter without lock), pull sender (split into brief locks), engagement checker (batch writes per chunk). Data cleanup — post deletion cleans downstream/upstream/seen tables.

-

v0.4.0 (2026-03-21): Protocol v4 — header-driven sync. ManifestPush as primary post notification. Slim PullSyncRequest (per-author timestamps, not full post ID list). Tiered engagement checks (5min/1hr/4hr/24hr by content age). Multi-upstream (3 max) with fallback chain. Auto-prefetch followed authors <90d. Self Last Encounter per-author tracking. Encrypted-but-not-for-us CDN caching. Serial engagement polling. ~90% bandwidth reduction for established nodes.

-

v0.3.6 (2026-03-20): Active CDN replication — all devices proactively replicate recent posts to peers (desktops > anchors > phones priority). ReplicationRequest/Response (0xE1/0xE2). Device roles (Intermittent/Available/Persistent) advertised in InitialExchange. Bandwidth budgets: replication (pull to cache) + delivery (serve requests), hourly auto-reset, phones 100MB/1GB, desktops 200MB/2GB, anchors 200MB/1GB. Cache management: 1GB default, configurable, eviction cycle activated with share-link priority boost. Engagement distribution fix — BlobHeader JSON rebuilt after diff ops. Tombstone system — deleted reactions/comments tombstoned, propagate via pull sync. Persistent notifications via seen_engagement/seen_messages tables. DOS hardening: fan-out cap (10), prefetch cap (20), downstream registration cap (50), delivery budget enforcement. Pull preference reordered: non-anchors first. Network indicator — header dot (black/red/yellow/green) + capability labels. Tab badges — contextual counts (new posts, engagement, online, unread). Message read tracking on open/close/send. Stats bar removed.

-

v0.3.5 (2026-03-20): Private blob encryption — attachments on encrypted posts (Friends/Circle/Direct) now encrypted with same CEK as post text; public blobs unchanged; CID on ciphertext. Blob prefetch on sync — attachments eagerly fetched after post pull for offline availability. Crypto refactoring — extracted reusable primitives (encrypt/decrypt_bytes_with_cek, unwrap_cek_for_recipient, unwrap_group_cek). Intent-based post filtering — feed/myposts/messages filter on intentKind instead of encryption state. Blob decryption API (get_blob_for_post). Download filename sanitization. Encrypted receipt & comment slots — private posts carry noise-prefilled encrypted slots in BlobHeader for delivery/read/react receipts and private comments; CDN-propagated as opaque bytes; slot key derived from post CEK; 3 new BlobHeaderDiffOps (WriteReceiptSlot, WriteCommentSlot, AddCommentSlots). Message UI — DM delivery indicators (checkmark/double/blue/emoji), auto-seen on view, react button on messages.

-

v0.3.4 (2026-03-18): Comment edit & delete with trust-based propagation. Native notifications via Tauri plugin (messages, posts, reactions, comments). Forward-compatible BlobHeaderDiffOp::Unknown variant. Following Online/Offline lightbox. Comment threading scoping fix. Dropdown text legibility fix. Mobile hamburger nav for website.

-

v0.3.3 (2026-03-16): Connection rate limiting — incoming auth failures rate-limited per source IP (3 attempts, exponential backoff to ~256s). Schema versioning — PRAGMA user_version tracks DB version with migration framework. N2/N3 freshness — TTL 7d→5h, full N1/N2 re-broadcast every 4h, startup sweep clears stale entries. Bootstrap isolation recovery — 24h check verifies bootstrap is in N1/N2/N3, reconnects + sticky N1 advertisement if absent. IPv6 HTTP address fix — nodes advertise actual public IPv6 (not 0.0.0.0) for share link redirects. Upstream tracking — post_upstream table records post source for engagement diff routing toward author. Video preload fix — share links and in-app videos use preload=auto. Following Online/Offline split. DM filter from My Posts. Any-type file attachments with download prompt + trust warning. Image lightbox. Audio player.

-

v0.3.2 (2026-03-14): Bidirectional engagement propagation — BlobHeaderDiff flows upstream + downstream through CDN tree. Auto downstream registration on pull sync/push notification. TCP hole punch protocol (TcpPunchRequest/Result 0xD6/0xD7). Tiered web serving (redirect → TCP punch → QUIC proxy). Video playback fix (asset protocol + blob URL fallback). On-demand blob fetch for synced posts missing blob data.

-

v0.3.1 (2026-03-13): Share links + QUIC proxy + content search. Share link format: itsgoin.net/p/<postid_hex>/<author_nodeid_hex> — simple, no host encoding needed. itsgoin.net web handler acts as QUIC proxy: receives browser request, searches the network for the post, fetches it on-demand via PostFetch (0xD4/0xD5), renders HTML, serves to browser. No permanent storage of fetched content. Extended worm search — WormQuery now carries optional post_id and blob_id fields for unified node/post/blob search. Each peer checks local storage, CDN downstream tree (up to 100 hosts per post), and blob store. WormResponse gains post_holder and blob_holder fields. Nova fan-out pattern — burst peers include one N2 wide referral; referred peer does its own 101-burst, reaching ~10K nodes with ~202 relay hops. PostFetch (0xD4/0xD5) — lightweight single-post retrieval after worm finds a holder, much lighter than full PullSync. itsgoin.net node deployed as anchor + web handler (--web 8080). “Unavailable” page with honest network model explanation + install CTA. Universal Links / App Links planned for native app interception. | Engagement sync — pull sync now fetches reactions, comments, and policies via BlobHeaderRequest/Response after every sync. Profile push fix — profile updates now sent to all connected mesh peers (not just audience). Auto-sync on follow — following a peer triggers immediate post pull + engagement fetch. Popover UI — notifications settings, network diagnostics, and message threads now open as popovers. Notification settings — per-key settings table in SQLite, configurable message/post/nearby notifications with JS Notification API. Tiered DM polling — smart message refresh based on conversation recency. Reaction display — posts show top 5 most popular emoji + total response count. UI cleanup — removed Suggested Peers and Find Nearby sections, placeholder text changed to “How’s it goin?”, clickable node IDs in activity log.

-

v0.3.0 (2026-03-12): Full rename distsoc → ItsGoin. ALPN, crypto contexts, data paths, Android package ID all changed. Clean break — incompatible with prior versions.

-

v0.2.11 (2026-03-12): Engagement system — reactions (public + private encrypted via X25519 DH + ChaCha20-Poly1305), inline comments with ed25519 signatures, author-controlled comment/react policies (audience-only, public, none), blocklist enforcement. CDN tree for all posts — new post_downstream table (keyed by PostId, max 100 peers) gives every post a propagation tree; PostDownstreamRegister (0xD3) sent when any peer stores a post. 4 new wire messages: BlobHeaderDiff (0xD0) for incremental engagement propagation, BlobHeaderRequest/Response (0xD1/0xD2), PostDownstreamRegister (0xD3). 6 new SQLite tables, 9 new IPC commands. Thread splitting — headers exceeding 16KB auto-split oldest comments into linked thread posts. Frontend: emoji picker, reaction pills, comment threads, policy selects in compose area.

-

v0.2.10 (2026-03-12): Per-family NAT classification — IPv4 and IPv6 public reachability now detected independently. Previously, a public IPv6 address incorrectly set has_public_v4=true, causing nodes behind IPv4 NAT to skip hole punching. STUN now always runs (unless --bind) so IPv6-only anchors correctly classify their IPv4 NAT. Anchor advertised address fallback — anchors without --bind or UPnP now advertise their first public bound address (e.g. IPv6 SLAAC), so peers store them in known_anchors for preferential reconnection. Bootstrap anchor deprioritization — startup connection sequence now tries discovered (non-bootstrap) anchors first, falling back to hardcoded bootstrap anchors only when no discovered anchor is reachable. Reduces load on bootstrap infrastructure as the network grows.

-

v0.2.9 (2026-03-12): ConnectionManager actor redesign — replaced single Arc<Mutex<ConnectionManager>> with two-layer actor pattern: ConnHandle (cheap-to-clone command sender) + ConnectionActor (dedicated tokio task, owns state, processes commands via mpsc/oneshot channels). Eliminated lock contention from 14 code paths that previously held the mutex during network I/O (up to 15s for QUIC connects). All network.rs and node.rs callers now use ConnHandle (~60 call sites migrated). I/O-heavy functions extracted as standalone: broadcast_diff, push_circle_profile, push_visibility, pull_from_peer, send_relay_introduce, send_anchor_register, request_anchor_referrals. Public conn_mgr() accessor removed — Arc<Mutex> is now an internal implementation detail of the actor.

-

v0.2.8 (2026-03-11): NAT filter probe (0xC6/0xC7) — anchor probes node’s filtering type by attempting QUIC connect from a different source port; address-restricted (Open) vs port-restricted determined in 2s, eliminating unnecessary scanning for most connections. Role-based NAT traversal — EIM nodes punch every 2s (stable port visible to peer scanner), EDM/Unknown nodes walk outward at ~100 ports/sec (opening firewall entries for peer punches to land). Steady scan replaces burst tiers (was 37K tasks, now ~20 in-flight). IPv4 vs IPv6 public differentiation — startup reports v4-only/v6-only/v4+v6, “Public” no longer assumes Open filtering. Task cleanup via JoinSet::abort_all().

-

v0.2.7 (2026-03-11): Port scanning refinement — scan only the anchor-observed IP (relay-injected first address) instead of all self-reported addresses, avoiding wasted scan budget on unreachable VPN/cellular IPs. Scanning now triggers when peer NAT type is unknown, not just when explicitly EDM.

-

v0.2.6 (2026-03-11): Anchor self-verification implemented (Section 8) — AnchorProbeRequest/Result (0xC3/0xC4) wire messages, witness-based cold reachability testing via N2 strangers, candidacy checklist (UPnP/public + 50 connections + 2h uptime + non-mobile), periodic re-probe in anchor register cycle, 2-failure revocation. Advanced NAT traversal implemented (Section 10) — NatMapping (EIM/EDM) + NatFiltering (Open/PortRestricted) profile types, hole_punch_with_scanning() replaces hard+hard skip at all 5 call sites, tiered port scanning (±500, ±2000, full ephemeral) at 50 concurrent probes, behavioral filtering inference from connection outcomes, PortScanHeartbeat (0xC5) message type. NAT profile shared in InitialExchange (nat_mapping/nat_filtering fields).

-

v0.2.5 (2026-03-11): Advanced NAT traversal design (Section 10) — relay-assisted port scanning protocol for EDM/symmetric NATs, full NAT combination matrix (mapping × filtering), tiered scan from observed port at 250/sec, 2s relay heartbeat feedback loop, makes hard+hard pairs solvable without full relay. Reconnection race fix — run_mesh_streams checks stable_id() before cleanup to prevent reconnecting peers from losing their connection entry.

-

v0.2.4 (2026-03-11): Anchor self-verification probe design (Section 8) — witness-based cold reachability testing via N2 strangers, candidacy checklist, periodic re-probe. Anchor selection simplified to LIFO on last_seen, removed success_count weighting, stale anchor cleanup (7-day probe). BlobHeader separation from blob content (Section 18) — immutable BLAKE3-addressed blobs require separate mutable headers, BlobHeader struct replaces CdnManifest, 25+25 post neighborhood, BlobHeaderDiff incremental propagation. Removed 3x hosting quota — CDN is attention-driven delivery infrastructure, not storage; author owns durability. Keep-alive session ceilings (Section 16) — desktop ~300-500, mobile ~25-50, mobile priority stack, hysteresis for borderline reachability. Mesh stranger controls — mutual mesh blacklist for targeted stranger relationships, --max-mesh CLI flag for topology testing. Phase 2 reciprocity simplified — attention model makes quota enforcement unnecessary.

-

v0.2.3 (2026-03-11): NAT type detection implemented (Section 10) — raw STUN probing classifies NAT as Public/Easy/Hard/Unknown on startup, shared in InitialExchange, stored per-peer, skip hole punch for hard+hard NAT pairs. LAN Discovery spec (Section 12) — mDNS scan loop for automatic LAN peer connection, keep-alive LAN sessions, local relay design. Pruning & timeout tuning — preferred peer prune 24h→7d, watcher expiry 24h→30d, N2/N3 startup sweep. Growth loop lock fix — resolve_address no longer blocks conn_mgr during network I/O.

-

v0.2.2 (2026-03-10): Hole punch fixes (Section 10) — session peers now fully participate in relay introduction (observed address injection for both requester and target), all hole punch paths use hole_punch_parallel() (parallel addresses, no more sequential timeouts), requester self-reported addresses filtered to publicly-routable only.

-

v0.2.1 (2026-03-10): Added UPnP port mapping (Section 11) — best-effort NAT traversal for desktop/home networks, external address in N+10 and peer advertisements, lease renewal cycle.

-

v0.2.0 (2026-03-09): Major design updates — three-layer architecture (Mesh/Social/File), N+10 identification, keep-alive sessions, 3-tier revocation, multi-device identity, growth loop redesign, pull sync from social/file layers, relay pipes default to own-device-only, remove anchor register loop.

-

v0.1.0 (2026-03-09): First versioned edition. Consolidated from ARCHITECTURE.md, code review, and gap analysis into a single source of truth.

-
+

This is the canonical technical reference for ItsGoin. It describes the vision, the architecture, and the current state of every subsystem — with full implementation detail. See the download page for the release changelog.

@@ -102,6 +74,8 @@ 24. Phase 2: Reciprocity 25. HTTP Post Delivery 26. Share Links + 27. Directory Service (Planned) + 28. Identity Architecture (Planned) Appendix A: Timeout Reference Appendix B: Design Constraints Appendix C: Implementation Scorecard @@ -1596,6 +1570,198 @@ END

No architecture changes needed before or after October 2026.

+ +
+

27. Directory Service (Planned)

+ +
+

The directory is an opt-in convenience layer for discovery and creator protection. It is not node access — losing directory presence does not disconnect anyone from the network or from their existing connections. This asymmetry is load-bearing: humans with mature relationships shrug off directory loss; bots and content thieves depend on it entirely.


Scope

  • Whitelist track — discoverability, vouch-based entry, graph-scoped visibility.
  • Blacklist track — community-flagged accounts and content; voluntary node-level replication refusal.
  • Out of scope — node access, message delivery, post sync, existing follows. All continue to work without directory membership.

Because directory loss is a low-cost outcome for real humans and a high-cost outcome for bad actors, enforcement thresholds can be deliberately aggressive without meaningful false-positive risk.


Entry


Two paths to directory listing, both yielding equal discoverability but different trust-building capacity:

  • Vouch entry — 1 vouch from a directory member with remaining vouch capacity.
  • Paid entry — fee in lieu of vouch. Grants directory presence only; vouch capacity starts at zero and grows only from received human vouches. Prevents ring-bootstrapping: a bad actor cannot pay their way to vouch power.

Vouch capacity


A member's outbound vouch capacity is derived from received vouches:

  • Base rate: 1 outbound vouch per 30 days per received vouch, up to a hard cap of 5 outbound vouches per 30 days.
  • A member with zero received vouches (paid-entry only) has zero outbound capacity.
  • Capacity regenerates; unused capacity does not accumulate beyond the monthly window.
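The capacity rule above reduces to a min-with-cap. A minimal sketch — the constant and function names are illustrative, not from the ItsGoin codebase:

```rust
/// Hard cap on outbound vouches per 30-day window (illustrative name).
const DIRECTORY_VOUCH_CAP: u64 = 5;

/// 1 outbound vouch per 30-day window per received vouch, hard-capped.
/// Paid-entry members with zero received vouches get zero capacity.
fn outbound_vouch_capacity(received_vouches: u64) -> u64 {
    received_vouches.min(DIRECTORY_VOUCH_CAP)
}
```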

Trust signals are internal, not purchasable. Aged off-platform accounts (FB, X, etc.) can be bought cheaply; in-system tenure crossed with graph density cannot. Vouch weight derives from the voucher's in-system graph depth and their history of non-revoked vouches — not account age.


Cascade punishment


A "bad vouch" is a vouch later determined to have been extended to a bot, impersonator, content thief, or other directory-removable actor. Punishment is asymmetric — mild for humans who miscalled a single vouch, devastating for botnets whose strength is their dense internal vouch graph.

| Offense | Immediate consequence | Recovery requirement |
|---|---|---|
| 1st bad vouch | All given and received vouches invalidated (member remains listed, but cannot vouch) | 2 NEW vouches received before outbound vouching resumes |
| 2nd bad vouch | Removed from directory | 1 new vouch to relist + 2 additional new vouches before vouching again |
| 3rd bad vouch | Removed from directory; 1-year outbound vouch freeze | 4 new vouches to relist; no outbound vouching for 12 months regardless of received vouches |
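One illustrative way to encode the escalation schedule — variant and field names are hypothetical; the values are taken from the table above:

```rust
/// Penalty tiers for bad vouches (hypothetical names, values per the table).
enum VouchPenalty {
    /// 1st offense: all vouches invalidated, member stays listed.
    InvalidateAll { recovery_vouches: u32 },
    /// 2nd offense: removed from directory.
    Delist { relist_vouches: u32, extra_vouches: u32 },
    /// 3rd offense and beyond: removed plus a 12-month outbound freeze.
    DelistAndFreeze { relist_vouches: u32, freeze_days: u32 },
}

fn penalty_for(bad_vouch_count: u32) -> VouchPenalty {
    match bad_vouch_count {
        1 => VouchPenalty::InvalidateAll { recovery_vouches: 2 },
        2 => VouchPenalty::Delist { relist_vouches: 1, extra_vouches: 2 },
        // Third and any later offense use the harshest tier.
        _ => VouchPenalty::DelistAndFreeze { relist_vouches: 4, freeze_days: 365 },
    }
}
```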

"NEW" is strictly defined: a voucher who has never previously vouched this member and is not within the first-degree graph of any prior voucher for this member. This blocks ring members from cycling each other through as "recovery" vouchers.
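The "NEW" predicate can be sketched as below, assuming a first-degree adjacency lookup; `first_degree` and the string-keyed identities are illustrative stand-ins, not a real API:

```rust
use std::collections::HashSet;

/// A recovery voucher is NEW iff they never previously vouched this member
/// AND sit outside the first-degree graph of every prior voucher.
fn is_new_voucher(
    candidate: &str,
    prior_vouchers: &HashSet<String>,
    first_degree: impl Fn(&str) -> HashSet<String>,
) -> bool {
    !prior_vouchers.contains(candidate)
        && prior_vouchers
            .iter()
            .all(|prior| !first_degree(prior).contains(candidate))
}
```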


Cascade radius. Invalidation of received vouches propagates the effective penalty upward: the bad actor's vouchers lose the credibility granted by that downstream relationship. One verified human assertion against a single botnet node can cascade through the entire dense cluster because bots' own topology carries the invalidation. They build the weapon that points back at them.


Graph-relative visibility


The directory is not global. A viewer sees directory entries N hops from their own social graph (N tunable; start with N=3). Bots at the fringe of the graph are structurally invisible to most humans. Visibility is further rate-limited (Y new profiles per viewer per day, Y tunable) to make harvesting economically unattractive without affecting normal discovery.
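Hop-limited visibility is a bounded breadth-first search. A minimal sketch, assuming a simple adjacency-map graph and u64 identities:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Directory entries visible to `viewer`: everything within `max_hops`
/// (DIRECTORY_GRAPH_HOPS, initially 3) of their social graph.
fn visible_within(
    graph: &HashMap<u64, Vec<u64>>,
    viewer: u64,
    max_hops: u32,
) -> HashSet<u64> {
    let mut seen = HashSet::from([viewer]);
    let mut queue = VecDeque::from([(viewer, 0u32)]);
    while let Some((node, hops)) = queue.pop_front() {
        if hops == max_hops {
            continue; // stop expanding at the visibility radius
        }
        for &next in graph.get(&node).into_iter().flatten() {
            if seen.insert(next) {
                queue.push_back((next, hops + 1));
            }
        }
    }
    seen
}
```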


This scoping is also why the vouch system's bot-ring problem is bounded: even a successful ring has no harvest value if it cannot be seen by real users.


Verification (circuit breaker)


Verification is an emergency override, not a standing credential. In normal operation, the vouch system runs unassisted. When a creator notices their content suppressed or a bot cluster drowning legitimate signals, they invoke verification on their own terms.

  • Trigger: creator-initiated only. No automatic verification prompts.
  • Method: human interaction + fresh content submission (specifics TBD; must resist CAPTCHA-farmed and LLM-automated completion).
  • Effect: verified status outweighs normal vouches for the duration it applies. A verified creator's reports of content theft or impersonation trigger cascade invalidation across the offender's vouch graph.
  • Cost asymmetry: a botnet must sustain attacks indefinitely; the defender verifies once.

Reporting


The reporting pipeline is lightweight, member-facing, and feeds a single review queue with multiple severity tracks:

  • Impersonation — a member reports an account copying their identity or that of someone they know. High-confidence signal: verifiable via profile comparison and content signatures.
  • Bad vouch — a member flags a directory entry they vouched for, or vouches in their extended graph. Triggers cascade punishment on confirmation.
  • Content theft — a creator reports unauthorized reposting. Evidence is the signed original versus the repost.
  • Automated topology detection — graph analysis flags ring structures, abnormal clustering coefficients, and low-bridging clusters. Feeds the same queue as human reports.

Reports from verified accounts carry higher weight. A confirmed report against one member in a tight cluster opens review on the whole cluster.
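One possible shape for the queue's report types and weighting — the numeric weights are placeholders invented for illustration; only their ordering (verified > unverified > automated) reflects the text:

```rust
/// Severity tracks feeding the single review queue (hypothetical names).
enum Report {
    Impersonation { reporter_verified: bool },
    BadVouch,
    ContentTheft,
    TopologyFlag,
}

/// Illustrative weights: verified reporters outrank unverified ones,
/// and automated topology flags rank lowest.
fn report_weight(report: &Report) -> u32 {
    match report {
        Report::Impersonation { reporter_verified: true } => 3,
        Report::Impersonation { .. } | Report::BadVouch | Report::ContentTheft => 2,
        Report::TopologyFlag => 1,
    }
}
```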


Blacklist


Blacklist is a higher-severity tier than directory suspension. Two states:

  • Under review — enough independent reports to warrant scrutiny; member still discoverable but flagged.
  • Confirmed blacklisted — entry persists at the identity level. A bad actor creating a new identity starts from zero; the old identity's blacklist entry remains as a reputational cost that outlasts the account.

Escalation paths:

  • Confirmed impersonation → direct blacklist.
  • Vouch violations → directory suspension first, blacklist only on repeated/escalated offenses.
  • Content theft → voluntary replication refusal (below); blacklist for repeat offenders.

The blacklist must be slower and more evidence-bound than directory suspension. Suspension costs a bot everything but costs a human almost nothing; blacklist has real network consequences (below) and must not be a censorship weapon.


Voluntary network compliance


Nodes may configure policies to decline replication or delivery of blacklisted content and accounts. This is opt-in, not forced. The effect is architectural starvation: stolen or abusive content cannot sustain replication when hosts collectively decline to be its infrastructure, while the legitimately signed original continues propagating normally.


This is the key lever for creator protection. ItsGoin does not enforce copyright; it gives creators a network that structurally prefers their signed original over any derivative copy. Combined with embedded ads (below), there is no version of content theft that economically benefits the thief inside ItsGoin.


Creator-embedded ads


The platform does not insert or intermediate ads. Creators may embed ads directly in their content or feed, as part of the signed post.

  • Ads inside signed content cannot be stripped without breaking the ed25519 signature — tampering is detectable.
  • A repost that strips ads produces an unsigned-or-differently-signed copy, which compliance rules (below) treat as non-compliant.
  • A repost that preserves ads intact monetizes the original creator automatically.
  • Ad revenue therefore follows the legitimate copy by construction, not by enforcement.
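A toy illustration of why stripping ads is detectable: because the ad block sits inside the signed payload, removing it changes what verification sees. Here std's `DefaultHasher` stands in for the real ed25519 signature — this is a demonstration of the principle only, not a signature scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for signing the post body (NOT cryptographic — illustration only).
fn sign_stub(body: &str) -> u64 {
    let mut h = DefaultHasher::new();
    body.hash(&mut h);
    h.finish()
}

/// Stand-in for signature verification over the received body.
fn verify_stub(body: &str, sig: u64) -> bool {
    sign_stub(body) == sig
}
```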

Repost framework


A two-track fair-use model. Compliant reposts are unblocked; non-compliant reposts feed the content-theft reporting pipeline.

| Track | Content limit | Ads | Backlink | Value requirement |
|---|---|---|---|---|
| Amplification | Full | Must preserve | Required to signed original | Reach is value; no further justification needed |
| Discussion / criticism | ≤1 minute per 4 minutes of original | Not required | Required to signed original | Commentary, review, or response |
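The discussion-track limit is a simple ratio check that can be done in integer arithmetic. A sketch with a hypothetical helper name:

```rust
/// Discussion-track limit: the embedded excerpt may not exceed 1/4 of the
/// original's duration (REPOST_DISCUSSION_RATIO = 1:4).
fn discussion_excerpt_allowed(excerpt_secs: u64, original_secs: u64) -> bool {
    // Integer form of excerpt/original <= 1/4, avoiding float comparison.
    excerpt_secs * 4 <= original_secs
}
```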

Edge cases (what counts as "amplification" vs. "wholesale copy masquerading as amplification") route to human reviewer capacity. The signed original is cryptographic evidence against the repost in any contested case — reviewers do not need to take anyone's word for what the original contained.


Philosophical position


Most platforms have a structural conflict of interest with creators: fakes inflate engagement metrics, thieves generate content volume, both drive ad revenue. ItsGoin's incentives are inverted by design. Fakes degrade the vouch system; thieves attack the network's most valuable users; bots pollute the replication layer the network depends on. Every bad actor makes ItsGoin worse as software, not just ethically. The enforcement mechanisms above are therefore load-bearing, not policy theater.


Implementation status


Designed, not implemented. Requires:

  • Directory storage schema (vouch records, blacklist records, report queue, graph-density caches)
  • New protocol messages (vouch, revoke, report, verify challenge/response)
  • Graph-topology analysis job (automated ring/cluster detection)
  • Client UX for discovery, vouching, reporting, verification
  • Node-side replication policy config (blacklist honoring)
  • Review-queue tooling for contested cases

Recommended staging: minimum viable slice = invite-tree directory + single-vouch entry + impersonation reports → manual review queue. Defer cascade math, automated topology detection, verification override, and repost compliance until real graph data exists to calibrate thresholds.


Tunable parameters

| Constant | Initial value | Purpose |
|---|---|---|
| DIRECTORY_VOUCH_INTERVAL | 30 days | Time between outbound vouches per received vouch |
| DIRECTORY_VOUCH_CAP | 5 | Hard cap on outbound vouches per 30-day window |
| DIRECTORY_GRAPH_HOPS | 3 | Visibility radius for discovery |
| DIRECTORY_DISCOVERY_RATE | TBD / day | New profiles visible per viewer per day |
| RECOVERY_VOUCHES_1ST | 2 | NEW vouches to restore voucher status after 1st bad vouch |
| RECOVERY_VOUCHES_2ND | 1 + 2 | Relist + additional vouches after 2nd bad vouch |
| RECOVERY_VOUCHES_3RD | 4 | Relist vouches after 3rd bad vouch |
| RECOVERY_FREEZE_3RD | 365 days | Outbound vouch freeze after 3rd bad vouch |
| REPOST_DISCUSSION_RATIO | 1:4 | Max embed duration relative to original (discussion track) |

28. Identity Architecture (Planned)


The 0.6.x beta line introduces a separation between network identity (per-device routing/connection key) and posting identity (the face/persona authoring content). This is the architectural foundation for multi-device, multi-persona, and DM-level traffic-graph privacy.


28.1 Two layers of identity


Each device has ONE network key — used for QUIC connections, endpoint ID, mesh routing. It's never linked on the wire to any posting key.


Each user can hold MANY posting keys simultaneously — no "active" persona, no switching. Each posting key is a persona (Public, Private, Work, Family, per-conversation ephemeral, etc). Posts are signed with the posting key chosen at compose time.


Privacy invariant: peers cannot determine which network IDs belong to a given posting ID, which posting IDs belong to the same network ID, or which posting IDs belong to the same user. These associations are private to the device owner.
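The key material on one device can be sketched as below. Names are hypothetical; the point is structural — one network seed per device, any number of posting seeds, and the mapping between them never leaves the device:

```rust
/// Per-device key material (hypothetical sketch, not the real types).
struct DeviceKeys {
    network_seed: [u8; 32],        // QUIC connections, endpoint ID, mesh routing
    posting_seeds: Vec<[u8; 32]>,  // one per persona; chosen at compose time
}

impl DeviceKeys {
    /// Creates a new persona and returns its index.
    fn add_persona(&mut self) -> usize {
        // Placeholder seed; a real implementation draws from the platform CSPRNG.
        self.posting_seeds.push([0u8; 32]);
        self.posting_seeds.len() - 1
    }
}
```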


28.2 Persona types

  • Public posting IDs — main persona(s), openly associated with "you"
  • Private posting IDs — smaller-context personas for close contacts or specific groups
  • Contextual / ephemeral posting IDs — per-relationship, per-thread, or one-off; auto-generated and isolated

28.3 Multi-device is a special case


"Two devices holding the same posting key" is a trivial case of the multi-key model. Linking happens out-of-band between the user's own devices (QR / file / copy-paste bundle). The network sees no cross-device message announcing the relationship. Each device pulls content for its posting IDs via the normal CDN — the fact that two network nodes hold the same posting key is discoverable only by an observer with private knowledge (which no observer should have).


28.4 Ephemeral rotating IDs for DM threads


DM threads and group messages use per-thread unique posting IDs that rotate per message. Each encrypted message includes a handshake field — the next posting ID to use. An observer sees a stream of distinct posting IDs with no cryptographic tie between them, defeating thread-level traffic correlation.


Desync recovery: receivers accept messages signed by any of the last N ephemeral IDs (sliding window). Message history (which can't be searched by rotating ID) is kept in a local encrypted-to-self archive — the archive is implemented as normal encrypted posts with recipient = user's own archive persona, replicating across the user's linked devices via self-follow.
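The sliding-window acceptance rule can be sketched as below. The window size `N` and the 32-byte ID type are illustrative assumptions:

```rust
use std::collections::VecDeque;

/// Illustrative window size N (the design leaves the exact value open).
const WINDOW: usize = 8;

/// Per-thread record of the last N announced ephemeral posting IDs.
struct ThreadWindow {
    recent_ids: VecDeque<[u8; 32]>,
}

impl ThreadWindow {
    fn new(first_id: [u8; 32]) -> Self {
        Self { recent_ids: VecDeque::from([first_id]) }
    }

    /// Accept a message if its signer is within the window; each accepted
    /// message carries the next ID in its handshake field.
    fn accept(&mut self, signer: &[u8; 32], next_id: [u8; 32]) -> bool {
        if !self.recent_ids.contains(signer) {
            return false; // outside the sliding window: reject
        }
        self.recent_ids.push_back(next_id);
        if self.recent_ids.len() > WINDOW {
            self.recent_ids.pop_front(); // forget the oldest ID
        }
        true
    }
}
```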


28.5 CDN restructure: per-file holder sets


The current upstream/downstream tree (which assumed a single author network endpoint) is replaced by a flat per-file holder set with header-diff propagation.


Each file (post, blob, manifest) has its own holder set. Each holder tracks up to 5 peers it recently interacted with about that specific file. When a new post is created, the creator updates the headers of recent prior posts (their manifests now reference the new post), then pushes the header diff to the up-to-5 known holders of each updated prior post. Recipients apply the header diff, see the reference to the new post, and pull it through normal sync.
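The per-file holder bookkeeping might look like the sketch below — names are hypothetical and peer IDs are simplified to u64; the 5-peer cap is from the text:

```rust
/// Cap on tracked recent counterparties per file, per the text above.
const MAX_HOLDERS: usize = 5;

/// Flat per-file holder set: the last few peers we exchanged this file with.
struct HolderSet {
    peers: Vec<u64>, // network IDs of recent counterparties, newest last
}

impl HolderSet {
    /// Record an interaction, keeping at most MAX_HOLDERS most-recent peers.
    fn record_interaction(&mut self, peer: u64) {
        self.peers.retain(|&p| p != peer); // move repeat peers to newest slot
        self.peers.push(peer);
        if self.peers.len() > MAX_HOLDERS {
            self.peers.remove(0); // evict the least recently seen
        }
    }
}
```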


Notifications thus route via network-ID peers who happen to hold related files — not via any tree rooted at the author. Content always arrives via pull, never pushed directly.


28.6 DM privacy model


Three complementary mechanisms eliminate the "A messaged B" traffic signal:

  1. CDN-only propagation. Direct PostPush for encrypted posts is removed. All encrypted posts propagate via the file-holder CDN, indistinguishable from any other encrypted content.
  2. Merged pull + recipient-match. Pull sync's query is extended so peers return posts matching author ∈ query_list OR recipient ∈ wrapped_keys. Client always includes its own NodeId alongside follows. The search pattern is indistinguishable from routine pull — no "searching for DMs" traffic fingerprint.
  3. Comment-as-introduction for cold contact. Any public post with open comments serves as a message-request surface. Comments flow via engagement diffs through the post's holder network, reaching the author's linked devices via normal pull.
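The merged pull predicate (mechanism 2) can be sketched as a single combined check — the signature is hypothetical and IDs are simplified to u64:

```rust
/// A peer returns a post when its author is in the follow list OR one of
/// its wrapped keys is addressed to the querying node. One merged check,
/// so a DM lookup is indistinguishable from a routine follow pull.
fn post_matches_query(
    author: u64,
    wrapped_key_recipients: &[u64],
    follows: &[u64],
    own_node_id: u64,
) -> bool {
    follows.contains(&author) || wrapped_key_recipients.contains(&own_node_id)
}
```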

28.7 What the user sees

  • One merged incoming feed (all content to all personas), with filter-by-persona pills
  • Reply/comment defaults to the persona whose key decrypted the post (override available)
  • Persona picker only appears at post-creation time, with contextual defaults
  • DMs to different personas appear as distinct conversation threads in the inbox
  • "New conversation with Alice" can offer a fresh per-thread ephemeral ID

28.8 Key/collision safety


Posting keys and network keys are ed25519 seeds (256 bits of entropy). The birthday paradox reaches a 50% collision probability at ~2^128 keys generated — not a concern even at aggressive rotation rates across a global userbase. The operational risk is weak RNG during key generation; we rely on the platform CSPRNG everywhere.


28.9 Phased rollout

  1. Phase 1 — Remove direct PostPush for encrypted posts (keeps existing CDN tree). Encrypted DMs propagate via ManifestPush like any other content.
  2. Phase 2 — File-holder model + header-diff propagation replaces upstream/downstream. Diverse lateral holder networks per file.
  3. Phase 3 — Merged pull + recipient-match search. DM search becomes indistinguishable from follow-pull.
  4. Phase 4 — Posting-key / network-key split. Local-only linked-devices hints. Multi-persona UX.
  5. Phase 5 — Ephemeral rotating IDs for DM threads + local self-archive.

Each phase is backward compatible with peers still on the previous mechanisms.


Appendix A: Timeout Reference

diff --git a/website/download.html b/website/download.html

Download — ItsGoin

Available for Android and Linux. Free and open source.

Stable Release

Version 0.5.3 — April 19, 2026 (replaces 0.4.4 — March 23, 2026)


Multi-identity, export/import, fast startup, pagination, SAF support, AppImage video, and more. Recommended for daily use.


Coming in Beta (0.6.x)


The next beta introduces a major privacy architecture overhaul:

  • Multiple personas on one device — Public, Private, Work, per-conversation. No switching; all active at once.
  • Network-ID / Posting-ID split — peers can no longer correlate network traffic to a posting identity.
  • Ephemeral rotating DM identities — observers cannot link successive messages in a thread.
  • CDN-only DM propagation — removes the sender→recipient traffic signal entirely.
  • Multi-device support — phone + desktop share a persona without exposing the relationship.
  • File-holder CDN — replaces the upstream/downstream tree with flat lateral replication.

See the Identity Architecture section of the design doc for the full plan. Beta releases will appear here as each phase ships.