Switch ALL propagation-decision reads to the flat holder set.
push_manifest_to_downstream now targets file_holders instead of
blob_downstream. The ManifestPush receive-side relay does the same:
known holders fan out to up to the 5 most-recent peers instead of
along a directional tree.
Blob delete notices: single flat fan-out to file_holders. The legacy
upstream_node tree-healing field is emitted as None (wire-stable via
serde default) and ignored on receive, since the post-0.6 flat model
doesn't need a sender-role distinction. send_blob_delete_notices keeps
its Option<&Upstream> parameter as an unused placeholder so call-site
signatures stay stable in this commit.
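A minimal sketch of the flat fan-out described above, assuming the
5-peer cap and recency ordering from this commit; the `Holder` struct
and its fields are illustrative stand-ins, not the real schema:

```rust
use std::time::{Duration, Instant};

// Hypothetical holder record; only the peer id and last-seen time matter here.
#[derive(Clone)]
struct Holder {
    peer_id: u64,
    last_seen: Instant,
}

/// Pick up to `max` most-recently-seen holders to relay to,
/// excluding the peer the message arrived from.
fn fanout_targets(holders: &[Holder], from_peer: u64, max: usize) -> Vec<u64> {
    let mut candidates: Vec<&Holder> =
        holders.iter().filter(|h| h.peer_id != from_peer).collect();
    // Most recent first.
    candidates.sort_by(|a, b| b.last_seen.cmp(&a.last_seen));
    candidates.into_iter().take(max).map(|h| h.peer_id).collect()
}
```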
Other reads migrated:
- blob fetch cascade: step 2 now tries "known holders" (up to 5)
instead of a single upstream
- manifest refresh: downstream_count reported from file_holder_count
- web/http post holder enumeration
- Worm search post/blob holder fallback (both connection.rs paths)
- DeleteRecord fan-out rewires to file_holders
- Under-replication check: flags files with < 2 holders
Storage additions:
- get_file_holder_count(file_id)
- remove_file_holder(file_id, peer_id)
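The two new storage reads have roughly these semantics. The real
implementation sits on the SQLite file_holders table; the in-memory
`FileHolders` map here is an illustrative stand-in:

```rust
use std::collections::HashMap;

// Illustrative stand-in for the file_holders table: file_id -> holder peer ids.
type FileHolders = HashMap<u64, Vec<u64>>;

/// Count of peers currently holding a file (0 if unknown).
fn get_file_holder_count(holders: &FileHolders, file_id: u64) -> usize {
    holders.get(&file_id).map_or(0, |v| v.len())
}

/// Drop one peer from a file's holder set, removing the entry when empty.
fn remove_file_holder(holders: &mut FileHolders, file_id: u64, peer_id: u64) {
    if let Some(v) = holders.get_mut(&file_id) {
        v.retain(|&p| p != peer_id);
        if v.is_empty() {
            holders.remove(&file_id);
        }
    }
}
```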
Legacy upstream/downstream writes still happen in the Phase 2b code
paths; removing those writes, along with the tables themselves, is
deferred to Phase 2e.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Eliminate all conn_mgr lock holds during network I/O across 14 actor commands
and bi-stream handlers. PostFetch, TcpPunch, PullFromPeer, FetchEngagement,
ResolveAddress, AnchorProbe use brief locks for data gathering only. WormLookup,
ContentSearch, WormQuery use connection snapshots for lock-free cascade fan-out.
RelayIntroduce extracts forwarding data under brief lock, does I/O outside.
BlobRequest, PostFetchRequest, ManifestRefresh use Arc clones instead of conn_mgr
lock. ConnectionActor hoists shared Arcs (storage, blob_store, endpoint) for
lock-free access. ResolveAddress adds 5s per-query timeout (was unbounded).
Initial exchange failure now aborts the mesh upgrade (previously it
silently continued with a broken connection). connect_to_peer/connect_to_anchor
use a consistent 15s timeout.
Rebalance connects outside the lock via pending_connects pattern.
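The brief-lock pattern these commands share reduces to: clone what you
need under the lock, let the guard drop, then do the slow work
lock-free. A std-only sketch, with the network send simulated (the
`PeerInfo` fields and `fan_out` name are illustrative):

```rust
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct PeerInfo {
    peer_id: u64,
    addr: String,
}

/// Snapshot targets under a brief lock, then "send" with the lock released.
fn fan_out(conn_mgr: &Arc<Mutex<Vec<PeerInfo>>>) -> Vec<u64> {
    // Brief lock: snapshot only, no I/O while the guard is alive.
    let snapshot: Vec<PeerInfo> = conn_mgr.lock().unwrap().clone();
    // Guard dropped at the end of the statement above;
    // the slow network I/O below runs lock-free.
    snapshot
        .into_iter()
        .map(|p| {
            // send_to_peer(&p.addr) would go here; simulated as a no-op.
            let _ = &p.addr;
            p.peer_id
        })
        .collect()
}
```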
StoragePool: 8 concurrent SQLite connections in WAL mode replace single
Mutex<Storage>. Reads run fully parallel; writes serialize at SQLite level only.
PRAGMA busy_timeout=5000 for graceful write contention.
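The pool itself is just a checkout/checkin queue over N pre-opened
connections. A generic std-only sketch of that shape; the real pool
opens 8 rusqlite connections and applies the WAL and busy_timeout
PRAGMAs at open time, which this sketch does not attempt:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

/// Fixed-size pool: callers check a connection out, use it, check it back in.
struct Pool<C> {
    idle: Mutex<VecDeque<C>>,
}

impl<C> Pool<C> {
    fn new(conns: impl IntoIterator<Item = C>) -> Arc<Self> {
        Arc::new(Pool { idle: Mutex::new(conns.into_iter().collect()) })
    }

    /// Non-blocking checkout; None when every connection is in use.
    fn try_checkout(&self) -> Option<C> {
        self.idle.lock().unwrap().pop_front()
    }

    fn checkin(&self, conn: C) {
        self.idle.lock().unwrap().push_back(conn);
    }
}
```

With SQLite in WAL mode behind each connection, checked-out readers
proceed in parallel while SQLite itself serializes the writers.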
Mobile bottom nav bar (<=768px) with icon tabs. Text sizes: XS/S/M/L/XL
(75%/100%/125%/150%/200%), default M. localStorage persistence for instant
restore. Toast repositioned above mobile nav.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Security & stability:
- Incoming auth-fail rate limiting per source IP (3 attempts, then exponential backoff)
- Schema versioning via PRAGMA user_version with migration framework
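The per-IP auth-fail limiter can be sketched as a map from source IP
to (failure count, earliest-retry time), with the delay doubling after
the third failure. The 1s base delay and the shift cap here are
illustrative choices, not the shipped values:

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

const FREE_ATTEMPTS: u32 = 3;

struct AuthLimiter {
    // source IP -> (consecutive failures, earliest time a retry is allowed)
    entries: HashMap<IpAddr, (u32, Instant)>,
}

impl AuthLimiter {
    fn new() -> Self {
        AuthLimiter { entries: HashMap::new() }
    }

    /// True if this source may attempt authentication now.
    fn check(&self, ip: IpAddr, now: Instant) -> bool {
        match self.entries.get(&ip) {
            Some(&(fails, retry_at)) if fails >= FREE_ATTEMPTS => now >= retry_at,
            _ => true,
        }
    }

    /// Record a failed attempt; backoff doubles per failure past the free ones.
    fn record_failure(&mut self, ip: IpAddr, now: Instant) {
        let entry = self.entries.entry(ip).or_insert((0, now));
        entry.0 += 1;
        if entry.0 >= FREE_ATTEMPTS {
            // Illustrative 1s base, doubling: 1s, 2s, 4s, ... (shift capped).
            let exp = (entry.0 - FREE_ATTEMPTS).min(10);
            entry.1 = now + Duration::from_secs(1u64 << exp);
        }
    }

    /// A successful auth clears the source's history.
    fn record_success(&mut self, ip: IpAddr) {
        self.entries.remove(&ip);
    }
}
```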
Networking:
- IPv6 http_addr fix: advertise actual public IPv6 instead of 0.0.0.0
- N2/N3 TTL reduced from 7 days to 5 hours
- Full N1/N2 state re-broadcast every 4 hours
- Bootstrap isolation recovery: 24h check with sticky N1 advertising
- Bidirectional engagement propagation (upstream + downstream)
- Auto downstream registration on pull sync and push notification
- post_upstream table for CDN tree traversal
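The TTL and re-broadcast changes above amount to timer checks like the
following; the durations are the ones in this commit, but the constant
and function names are illustrative:

```rust
use std::time::{Duration, Instant};

// Values from this commit; names are illustrative.
const N2_N3_TTL: Duration = Duration::from_secs(5 * 3600); // was 7 days
const REBROADCAST_INTERVAL: Duration = Duration::from_secs(4 * 3600);

/// An N2/N3 entry expires once nothing has been heard within the TTL.
fn is_expired(last_heard: Instant, now: Instant) -> bool {
    now.duration_since(last_heard) > N2_N3_TTL
}

/// Full N1/N2 state goes out again every 4 hours.
fn rebroadcast_due(last_broadcast: Instant, now: Instant) -> bool {
    now.duration_since(last_broadcast) >= REBROADCAST_INTERVAL
}
```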
Media & UI:
- Video preload="auto" for share links and in-app blob URLs
- Following: Online/Offline split with last-seen timestamps
- DMs filtered from My Posts tab
- Image lightbox, audio player, file attachments with download prompt
- Share link unroutable address filtering
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>