per-author feed filter, ignore primitive

The old People tab was built on network-layer presence (`is_online`, `last_seen` from the mesh), a signal that was lost when v0.6.1 split the network id from the posting id. Every named follow is authored under a posting id that never appears in the connection-layer tables, so the "Online" section listed nobody useful, and Discover depended on the same broken signal. Both are replaced with signals derived from signed content:

- Following is sorted by most-recent-post timestamp (the real meaning of "activity" in a post-anonymization world).
- Discover lists named peers we've received signed profile posts from (via Phase 2d), filtered by follows / ignores / self.
- Clicking a name surfaces a bio modal with View Posts / Follow / Message / Ignore actions.
- The author-scoped feed filter (`View Posts` on any person) renders a "Showing posts from X" banner with a Clear button.
- Ignore is a new local-only primitive; ignored peers' posts and profiles are excluded everywhere, and the ignored list is editable in Settings.

Core changes:

- New `ignored_peers(node_id, ignored_at)` table + storage helpers (`add_ignored_peer`, `remove_ignored_peer`, `list_ignored_peers`, `is_ignored_peer`). The schema is created fresh; no migration is needed since the table is purely additive and empty on prior installs.
- All 6 feed-query sites now also exclude `author IN ignored_peers`.
- New `Storage::last_activity_for_authors(&[NodeId])` — one batched query returning the max post timestamp per author, excluding non-feed intents (Control / Profile / Announcement / GroupKeyDistribute).
- New `Storage::list_discoverable_profiles(&self_id)` — named profile rows where node_id is not self, not in follows, not in ignored, and `public_visible = 1`. Sorted by profile `updated_at` DESC.
- New `Storage::delete_setting(key)` — the missing counterpart to set/get.
- Node wrappers: `last_activity_for_follows`, `ignore_peer` (which also drops any follow + social route for the ignored peer), `unignore_peer`, `list_ignored_peers`, `list_discoverable_profiles`.
- The `list_follows` Tauri command now sources `last_activity_ms` from the posts-driven batched query rather than the network peer record.
- New Tauri commands: `list_discover`, `ignore_peer`, `unignore_peer`, `list_ignored_peers`.

Frontend:

- Following list: see-new-activity button pattern (staged data + an explicit user click to rearrange, so the list doesn't reorder under a tap mid-scroll). Periodic people-tab polling stages the data and lights up the button; clicking it re-renders.
- Discover: rewrites the old peer-table-based list as a profile-post feed. Each card shows name + bio + profile-update age, plus Follow / Posts / Ignore actions.
- Bio modal: reuses the existing generic popover. Loads display name + bio via `resolve_display`, shows follow state, and offers View Posts / Follow-or-Unfollow / Message / Ignore-or-Unignore.
- Author filter: a banner renders at the top of the feed when active; the Clear button restores the full feed. Filter state is a single `authorFilterNodeId` field consumed by `filterFeedPosts`.
- Settings → Ignored section lists ignored peers with unignore buttons.

124 / 124 core tests pass.
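The semantics of the batched activity query plus the ignore exclusion can be sketched in plain Rust. This is a minimal model of the behavior described above, not the real `Storage` implementation (which is a single SQL query); the row tuple, the shortened `NodeId` alias, and the intent strings are illustrative assumptions:

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-in; real ids are 32-byte ed25519 public keys.
type NodeId = u8;

/// Intents that never count as feed activity (mirrors the exclusion list
/// above: Control / Profile / Announcement / GroupKeyDistribute).
const NON_FEED: &[&str] = &["control", "profile", "announcement", "group_key_distribute"];

/// One batched pass over (author, intent, timestamp_ms) rows: max feed-post
/// timestamp per requested author, skipping ignored authors entirely.
fn last_activity_for_authors(
    rows: &[(NodeId, &str, u64)],
    authors: &[NodeId],
    ignored: &HashSet<NodeId>,
) -> HashMap<NodeId, u64> {
    let wanted: HashSet<NodeId> = authors.iter().copied().collect();
    let mut out: HashMap<NodeId, u64> = HashMap::new();
    for &(author, intent, ts) in rows {
        if !wanted.contains(&author) || ignored.contains(&author) {
            continue;
        }
        if NON_FEED.contains(&intent) {
            continue;
        }
        let entry = out.entry(author).or_insert(0);
        if ts > *entry {
            *entry = ts;
        }
    }
    out
}

fn main() {
    let rows = [
        (1, "post", 100),
        (1, "profile", 900), // profile updates don't count as activity
        (2, "post", 300),
        (3, "post", 400), // author 3 is ignored
    ];
    let ignored: HashSet<NodeId> = [3].into_iter().collect();
    let acts = last_activity_for_authors(&rows, &[1, 2, 3], &ignored);
    assert_eq!(acts.get(&1), Some(&100));
    assert_eq!(acts.get(&2), Some(&300));
    assert!(acts.get(&3).is_none());
}
```

The same shape explains why the backfilled profile posts don't perturb the Following sort: a Profile-intent row never wins the per-author max.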
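The see-new-activity pattern on the Following list boils down to a small state machine: polling stages a fresh ordering without touching the rendered one, and only a click applies it. A sketch (in Rust for consistency with the codebase; the actual frontend is JavaScript, and the names here are illustrative):

```rust
/// Background polling never mutates the rendered order directly; it stages
/// the fresh ordering and lights up a button. Only an explicit click applies.
struct FollowingList {
    rendered: Vec<String>,       // order currently on screen
    staged: Option<Vec<String>>, // fresh order waiting for a user click
}

impl FollowingList {
    fn new(initial: Vec<String>) -> Self {
        Self { rendered: initial, staged: None }
    }

    /// Called by the periodic poll. Stages a re-render instead of doing it.
    fn poll(&mut self, fresh: Vec<String>) {
        if fresh != self.rendered {
            self.staged = Some(fresh); // light up the button
        } else {
            self.staged = None;
        }
    }

    /// Whether the "see new activity" button should be shown.
    fn button_visible(&self) -> bool {
        self.staged.is_some()
    }

    /// The user clicked the button: now it's safe to reorder.
    fn apply(&mut self) {
        if let Some(fresh) = self.staged.take() {
            self.rendered = fresh;
        }
    }
}

fn main() {
    let mut list = FollowingList::new(vec!["alice".to_string(), "bob".to_string()]);
    list.poll(vec!["bob".to_string(), "alice".to_string()]); // bob posted
    assert!(list.button_visible());
    assert_eq!(list.rendered[0], "alice"); // still stable under the user's thumb
    list.apply();
    assert_eq!(list.rendered[0], "bob");
    assert!(!list.button_visible());
}
```

The design choice is the point: the list can only rearrange at a moment the user chose, never mid-scroll under a tap.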
5228 lines
220 KiB
Rust
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering as AtomicOrdering};
use std::sync::Arc;

use tracing::{debug, info, warn};

use crate::activity::{ActivityCategory, ActivityEvent, ActivityLevel, ActivityLog};
use crate::blob::BlobStore;
use crate::content::compute_post_id;
use crate::crypto;
use crate::network::Network;
use crate::storage::StoragePool;
use crate::types::{
    Attachment, Circle,
    DeviceProfile, DeviceRole, NodeId, PeerRecord, PeerSlotKind, PeerWithAddress, Post, PostId,
    PostVisibility, PublicProfile, ReachMethod, RevocationMode, SessionReachMethod, SocialRelation,
    SocialRouteEntry, SocialStatus, VisibilityIntent, WormResult,
};

/// Built-in default anchor — always available as a bootstrap fallback.
/// Bootstrap anchor connect string. The NodeId here is the anchor's CURRENT
/// network identity (used for QUIC handshake / cert verification). It was
/// rotated from `17af14...` to `ab2b72...` by v0.6.1's upgrade path on the
/// anchor host at 2026-04-22 22:57 UTC. The old key became the anchor's
/// posting identity (see `DEFAULT_ANCHOR_POSTING_ID` in lib.rs) and is
/// used to verify signed announcements; it is NOT used for connection
/// verification.
///
/// Clients compiled against the pre-rotation value fail the TLS handshake
/// with "UnknownIssuer" because they pin the wrong cert identity.
const DEFAULT_ANCHOR: &str = "ab2b7258ef0b75b2c6ee8bf6595232055f6199d584d3c0fc10b15a1ed549aa13@itsgoin.net:4433";

/// A distsoc node: ties together identity, storage, and networking
pub struct Node {
    pub data_dir: PathBuf,
    pub storage: Arc<StoragePool>,
    pub network: Arc<Network>,
    /// Network identity — used for QUIC connections / routing. Stays hidden
    /// from peers after the posting-key split ships end-to-end.
    pub node_id: NodeId,
    pub blob_store: Arc<BlobStore>,
    /// Active default posting identity's public NodeId. Used as `author` on
    /// content signed by this device.
    pub default_posting_id: NodeId,
    /// Active default posting identity's secret seed. Used to sign content
    /// (posts, manifests, reactions, comments, deletes) and to wrap/unwrap
    /// encryption keys.
    default_posting_secret: [u8; 32],
    bootstrap_anchors: tokio::sync::Mutex<Vec<(NodeId, iroh::EndpointAddr)>>,
    /// True if an anchor reported another instance of this identity is already active
    pub duplicate_detected: Arc<AtomicBool>,
    #[allow(dead_code)]
    profile: DeviceProfile,
    pub activity_log: Arc<std::sync::Mutex<ActivityLog>>,
    pub last_rebalance_ms: Arc<AtomicU64>,
    pub last_anchor_register_ms: Arc<AtomicU64>,
    /// CDN replication budget: bytes remaining we're willing to pull and cache this hour
    replication_budget_remaining: Arc<AtomicU64>,
    /// CDN delivery budget: bytes remaining we're willing to serve this hour
    delivery_budget_remaining: Arc<AtomicU64>,
    /// Last budget reset timestamp (ms)
    budget_last_reset_ms: Arc<AtomicU64>,
}

impl Node {
    /// Create or open a node in the given data directory (Desktop profile)
    pub async fn open(data_dir: impl AsRef<Path>) -> anyhow::Result<Self> {
        Self::open_with_bind(data_dir, None, DeviceProfile::Desktop).await
    }

    /// Create or open a mobile node in the given data directory
    pub async fn open_mobile(data_dir: impl AsRef<Path>) -> anyhow::Result<Self> {
        Self::open_with_bind(data_dir, None, DeviceProfile::Mobile).await
    }

    /// Create or open a node, optionally binding to a specific address
    pub async fn open_with_bind(
        data_dir: impl AsRef<Path>,
        bind_addr: Option<SocketAddr>,
        profile: DeviceProfile,
    ) -> anyhow::Result<Self> {
        let data_dir = data_dir.as_ref().to_path_buf();
        std::fs::create_dir_all(&data_dir)?;

        // Load or generate identity key (network secret — QUIC endpoint only,
        // never used as content author under the v0.6.1+ clean model).
        let key_path = data_dir.join("identity.key");
        let (mut secret_key, mut secret_seed) = if key_path.exists() {
            let key_bytes = std::fs::read(&key_path)?;
            let bytes: [u8; 32] = key_bytes
                .try_into()
                .map_err(|_| anyhow::anyhow!("invalid key file"))?;
            (iroh::SecretKey::from_bytes(&bytes), bytes)
        } else {
            let key = iroh::SecretKey::generate(&mut rand::rng());
            let seed = key.to_bytes();
            std::fs::write(&key_path, seed)?;
            info!("Generated new network identity key");
            (key, seed)
        };

        // Open storage
        let db_path = data_dir.join("itsgoin.db");
        let storage = Arc::new(StoragePool::open(&db_path)?);

        // Startup sweep: clear stale N2/N3 and mesh_peers from prior session
        {
            let s = storage.get().await;
            let n_cleared = s.clear_all_n2_n3().unwrap_or(0);
            let m_cleared = s.clear_all_mesh_peers().unwrap_or(0);
            if n_cleared > 0 || m_cleared > 0 {
                info!(n2_n3 = n_cleared, mesh_peers = m_cleared, "Startup sweep: cleared stale entries");
            }
        }

        // Ensure a default posting identity exists, INDEPENDENT of the network
        // key. On a fresh install we generate a new random ed25519 key as the
        // default persona. Peers who see our posts never learn our network key.
        {
            let s = storage.get().await;
            if s.count_posting_identities()? == 0 {
                let pk = iroh::SecretKey::generate(&mut rand::rng());
                let seed = pk.to_bytes();
                let nid: NodeId = *pk.public().as_bytes();
                let now = std::time::SystemTime::now()
                    .duration_since(std::time::UNIX_EPOCH)?
                    .as_millis() as u64;
                s.upsert_posting_identity(&crate::types::PostingIdentity {
                    node_id: nid,
                    secret_seed: seed,
                    display_name: String::new(),
                    created_at: now,
                })?;
                s.set_default_posting_id(&nid)?;
                // Mark this as the disposable auto-gen persona from the
                // fresh-install flow. If the user subsequently imports, we
                // prune this id iff it's still pristine (no name, no posts,
                // no engagement). See `try_prune_first_run_auto_persona`.
                let _ = s.set_setting("first_run_auto_persona_id", &hex::encode(nid));
                info!(posting_id = %hex::encode(nid), "Generated initial posting identity (independent of network key)");
            }
        }

        // v0.6.0 → v0.6.1 migration: if the default posting key equals the
        // network key (which is what the Phase 4 migration did on upgrade from
        // v0.5), rotate the network key so they become independent. The old
        // key stays as the default posting identity — peers keep seeing the
        // same author; only the QUIC NodeId changes.
        {
            let s = storage.get().await;
            if let Some(default_id) = s.get_default_posting_id()? {
                if let Some(default_pi) = s.get_posting_identity(&default_id)? {
                    if default_pi.secret_seed == secret_seed {
                        let new_key = iroh::SecretKey::generate(&mut rand::rng());
                        let new_seed = new_key.to_bytes();
                        std::fs::write(&key_path, new_seed)?;
                        info!("v0.6.1 migration: rotated network key to decouple from default posting key");
                        secret_key = new_key;
                        secret_seed = new_seed;
                    }
                }
            }
        }

        // Open blob store
        let blob_store = Arc::new(BlobStore::open(&data_dir)?);

        // Activity log + timer atomics
        let activity_log = Arc::new(std::sync::Mutex::new(ActivityLog::new()));
        let last_rebalance_ms = Arc::new(AtomicU64::new(0));
        let last_anchor_register_ms = Arc::new(AtomicU64::new(0));

        // Start network (v2: single ALPN, connection manager)
        let network = Arc::new(
            Network::new(secret_key, Arc::clone(&storage), bind_addr, secret_seed, Arc::clone(&blob_store), profile, Arc::clone(&activity_log)).await?,
        );
        let node_id = network.node_id_bytes();

        // Resolve default posting identity (now guaranteed to exist).
        let (default_posting_id, default_posting_secret) = {
            let s = storage.get().await;
            let default_id = s.get_default_posting_id()?
                .ok_or_else(|| anyhow::anyhow!("default posting identity missing after initialization"))?;
            let pi = s.get_posting_identity(&default_id)?
                .ok_or_else(|| anyhow::anyhow!("default posting identity row missing"))?;
            (pi.node_id, pi.secret_seed)
        };

        // Auto-follow our default posting identity so our own posts show in
        // the feed. The network NodeId is not followed — it's never an author.
        {
            let s = storage.get().await;
            s.add_follow(&default_posting_id)?;
        }

        // Build the node (fast path — no network I/O beyond endpoint creation)
        let activity_log_ref = Arc::clone(&activity_log);

        let role = network.device_role();
        let (replication_budget, delivery_budget) = (role.replication_limit(), role.delivery_limit());
        let replication_budget_remaining = Arc::new(AtomicU64::new(replication_budget));
        let delivery_budget_remaining = Arc::new(AtomicU64::new(delivery_budget));
        let budget_last_reset_ms = Arc::new(AtomicU64::new(
            std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH)
                .unwrap_or_default().as_millis() as u64
        ));
        blob_store.set_delivery_budget(delivery_budget);

        let node = Self {
            data_dir: data_dir.clone(),
            storage: Arc::clone(&storage),
            network: Arc::clone(&network),
            node_id,
            blob_store,
            default_posting_id,
            default_posting_secret,
            bootstrap_anchors: tokio::sync::Mutex::new(Vec::new()),
            duplicate_detected: Arc::new(AtomicBool::new(false)),
            profile,
            activity_log: activity_log_ref,
            last_rebalance_ms,
            last_anchor_register_ms,
            replication_budget_remaining,
            delivery_budget_remaining,
            budget_last_reset_ms,
        };

        // Startup backfill: any named persona without a profile post gets
        // one synthesized at its own `created_at`. Makes legacy / imported
        // named personas Discover-able without requiring a manual rename.
        // Swallow errors — backfill is best-effort; no reason to block init.
        if let Err(e) = node.backfill_profile_posts_for_named_personas().await {
            warn!(error = %e, "Profile-post backfill failed; continuing init");
        }

        Ok(node)
    }

    /// Bootstrap: connect to anchors, pull initial data, NAT probe, referrals.
    /// Can be called during open_with_bind (blocking startup) or deferred to background.
    pub async fn run_bootstrap(&self, data_dir: &Path) -> anyhow::Result<()> {
        let storage = &self.storage;
        let network = &self.network;
        let node_id = self.node_id;

        // Bootstrap: if peers table is empty, try bootstrap.json then default anchor
        {
            let s = storage.get().await;
            let has_peers = s.has_peers()?;
            drop(s);

            if !has_peers {
                let mut entries = Vec::new();
                let bootstrap_path = data_dir.join("bootstrap.json");
                if bootstrap_path.exists() {
                    info!("Loading bootstrap peers from {:?}", bootstrap_path);
                    if let Ok(data) = std::fs::read_to_string(&bootstrap_path) {
                        if let Ok(file_entries) = serde_json::from_str::<Vec<String>>(&data) {
                            entries.extend(file_entries);
                        }
                    }
                }
                let default = DEFAULT_ANCHOR.to_string();
                if !entries.contains(&default) {
                    entries.push(default);
                }

                for entry in entries {
                    match crate::parse_connect_string(&entry) {
                        Ok((nid, addr)) => {
                            if nid == node_id {
                                continue;
                            }
                            info!(peer = hex::encode(nid), "Bootstrap: connecting to peer");
                            let ip_addrs: Vec<_> = addr.ip_addrs().copied().collect();
                            {
                                let s = storage.get().await;
                                if ip_addrs.is_empty() {
                                    let _ = s.add_peer(&nid);
                                } else {
                                    let _ = s.upsert_peer(&nid, &ip_addrs, None);
                                }
                                // Mark as anchor — bootstrap peers are infrastructure, not social follows
                                let _ = s.set_peer_anchor(&nid, true);
                            }
                            // Connect persistently
                            match network.connect_to_peer(nid, addr).await {
                                Ok(()) => {
                                    info!(peer = hex::encode(nid), "Bootstrap: connected");
                                    // Pull posts from the bootstrap peer
                                    match network.pull_from_all().await {
                                        Ok(stats) => {
                                            info!(
                                                "Bootstrap pull: {} posts from {} peers",
                                                stats.posts_received, stats.peers_pulled
                                            );
                                        }
                                        Err(e) => warn!(error = %e, "Bootstrap pull failed"),
                                    }
                                    // Always store anchor in known_anchors (even before referrals)
                                    // so the periodic cycle can re-register and request referrals later
                                    {
                                        let s = storage.get().await;
                                        let anchor_addrs: Vec<std::net::SocketAddr> = s.get_peer_record(&nid)
                                            .ok().flatten()
                                            .map(|r| r.addresses).unwrap_or_default();
                                        if !anchor_addrs.is_empty() {
                                            let _ = s.upsert_known_anchor(&nid, &anchor_addrs);
                                        } else if !ip_addrs.is_empty() {
                                            let _ = s.upsert_known_anchor(&nid, &ip_addrs);
                                        }
                                    }

                                    // Request referrals from anchor (10s timeout)
                                    match tokio::time::timeout(std::time::Duration::from_secs(10), network.request_anchor_referrals(&nid)).await {
                                        Ok(Ok(referrals)) if !referrals.is_empty() => {
                                            info!(count = referrals.len(), "Bootstrap: got anchor referrals");
                                            // Spawn referral connections in background — don't block startup
                                            let net = Arc::clone(&network);
                                            let my_id = node_id;
                                            let anchor = nid;
                                            tokio::spawn(async move {
                                                for referral in referrals {
                                                    if referral.node_id == my_id {
                                                        continue;
                                                    }
                                                    if let Some(addr_str) = referral.addresses.first() {
                                                        let connect_str = format!(
                                                            "{}@{}",
                                                            hex::encode(referral.node_id),
                                                            addr_str,
                                                        );
                                                        if let Ok((rid, raddr)) = crate::parse_connect_string(&connect_str) {
                                                            let connect_fut = async {
                                                                match net.connect_to_peer(rid, raddr).await {
                                                                    Ok(()) => { info!(peer = hex::encode(rid), "Connected to referred peer"); Ok(()) },
                                                                    Err(e) => {
                                                                        debug!(error = %e, peer = hex::encode(rid), "One-sided connect failed, requesting introduction from anchor");
                                                                        match net.connect_via_introduction(rid, anchor).await {
                                                                            Ok(()) => { info!(peer = hex::encode(rid), "Connected to referred peer via hole punch"); Ok(()) },
                                                                            Err(e2) => Err(e2),
                                                                        }
                                                                    }
                                                                }
                                                            };
                                                            match tokio::time::timeout(std::time::Duration::from_secs(15), connect_fut).await {
                                                                Ok(Ok(())) => {},
                                                                Ok(Err(e)) => debug!(error = %e, peer = hex::encode(rid), "Bootstrap referral connect failed"),
                                                                Err(_) => debug!(peer = hex::encode(rid), "Bootstrap referral connect timed out"),
                                                            }
                                                        }
                                                    }
                                                }
                                                net.notify_growth().await;
                                            });
                                        }
                                        Ok(Ok(_)) => debug!("Bootstrap: no referrals from anchor (first to register)"),
                                        Ok(Err(e)) => debug!(error = %e, "Bootstrap: referral request failed"),
                                        Err(_) => debug!("Bootstrap: referral request timed out"),
                                    }
                                    break;
                                }
                                Err(e) => {
                                    warn!(error = %e, "Bootstrap peer failed, trying next");
                                }
                            }
                        }
                        Err(e) => {
                            warn!(entry = %entry, error = %e, "Invalid bootstrap entry");
                        }
                    }
                }
            }
        }

        // Load bootstrap anchors: anchors.json + built-in default
        let mut bootstrap_anchors = Vec::new();
        let mut anchor_ids = std::collections::HashSet::new();

        let anchors_path = data_dir.join("anchors.json");
        if anchors_path.exists() {
            if let Ok(data) = std::fs::read_to_string(&anchors_path) {
                if let Ok(entries) = serde_json::from_str::<Vec<String>>(&data) {
                    for entry in entries {
                        match crate::parse_connect_string(&entry) {
                            Ok((nid, addr)) => {
                                info!(peer = hex::encode(nid), "Loaded bootstrap anchor");
                                anchor_ids.insert(nid);
                                bootstrap_anchors.push((nid, addr));
                            }
                            Err(e) => {
                                warn!(entry = %entry, error = %e, "Invalid bootstrap anchor entry");
                            }
                        }
                    }
                }
            }
        }

        if let Ok((nid, addr)) = crate::parse_connect_string(DEFAULT_ANCHOR) {
            if nid != node_id && !anchor_ids.contains(&nid) {
                info!("Including built-in default anchor");
                bootstrap_anchors.push((nid, addr));
            }
        }

        // Collect bootstrap anchor node IDs so we can deprioritize them
        let bootstrap_anchor_ids: std::collections::HashSet<NodeId> =
            bootstrap_anchors.iter().map(|(nid, _)| *nid).collect();

        // Update known_anchors + peers with freshly DNS-resolved bootstrap addresses.
        // Without this, stale IPv6 addresses from previous sessions can block reconnection
        // on devices without IPv6 connectivity (see bugs-fixed.md #1).
        {
            let s = storage.get().await;
            for (nid, addr) in &bootstrap_anchors {
                let ip_addrs: Vec<std::net::SocketAddr> = addr.ip_addrs().copied().collect();
                if !ip_addrs.is_empty() {
                    let _ = s.upsert_known_anchor(nid, &ip_addrs);
                    let _ = s.upsert_peer(nid, &ip_addrs, None);
                }
            }
        }

        // Rebuild social routes from follows + audience
        {
            let s = storage.get().await;
            match s.rebuild_social_routes() {
                Ok(count) if count > 0 => info!(count, "Rebuilt social routes on startup"),
                _ => {}
            }
        }

        // Startup connection: try discovered anchors FIRST, bootstrap anchors LAST.
        // This keeps load off bootstrap anchors — they're only needed when nothing else works.
        // Order: known non-bootstrap anchors → mDNS (via iroh) → bootstrap anchors
        {
            let conn_count = network.connection_count().await;
            if conn_count < 5 {
                let known = {
                    let s = storage.get().await;
                    s.list_known_anchors().unwrap_or_default()
                };
                // Split into discovered anchors (priority) and bootstrap anchors (fallback)
                let (discovered, bootstrap_known): (Vec<_>, Vec<_>) = known.into_iter()
                    .partition(|(nid, _)| !bootstrap_anchor_ids.contains(nid));

                // Phase 1: Try discovered (non-bootstrap) anchors first
                let mut connected_anchor = None;
                for (anchor_nid, anchor_addrs) in &discovered {
                    if *anchor_nid == node_id || network.is_peer_connected_or_session(anchor_nid).await {
                        continue;
                    }
                    let endpoint_id = match iroh::EndpointId::from_bytes(anchor_nid) {
                        Ok(eid) => eid,
                        Err(_) => continue,
                    };
                    let mut addr = iroh::EndpointAddr::from(endpoint_id);
                    for sa in anchor_addrs {
                        addr = addr.with_ip_addr(*sa);
                    }
                    info!(peer = hex::encode(anchor_nid), "Trying discovered anchor");
                    match tokio::time::timeout(std::time::Duration::from_secs(10), network.connect_to_anchor(*anchor_nid, addr)).await {
                        Ok(Ok(())) => {
                            info!(peer = hex::encode(anchor_nid), "Connected to discovered anchor");
                            connected_anchor = Some(*anchor_nid);
                            break;
                        }
                        Ok(Err(e)) => debug!(error = %e, peer = hex::encode(anchor_nid), "Discovered anchor: connect failed"),
                        Err(_) => debug!(peer = hex::encode(anchor_nid), "Discovered anchor: connect timed out"),
                    }
                }

                // Phase 2: Fall back to bootstrap anchors only if no discovered anchor worked
                if connected_anchor.is_none() {
                    for (anchor_nid, anchor_addrs) in &bootstrap_known {
                        if *anchor_nid == node_id || network.is_peer_connected_or_session(anchor_nid).await {
                            continue;
                        }
                        let endpoint_id = match iroh::EndpointId::from_bytes(anchor_nid) {
                            Ok(eid) => eid,
                            Err(_) => continue,
                        };
                        let mut addr = iroh::EndpointAddr::from(endpoint_id);
                        for sa in anchor_addrs {
                            addr = addr.with_ip_addr(*sa);
                        }
                        info!(peer = hex::encode(anchor_nid), "Trying bootstrap anchor (fallback)");
                        match tokio::time::timeout(std::time::Duration::from_secs(10), network.connect_to_anchor(*anchor_nid, addr)).await {
                            Ok(Ok(())) => {
                                info!(peer = hex::encode(anchor_nid), "Connected to bootstrap anchor");
                                connected_anchor = Some(*anchor_nid);
                                break;
                            }
                            Ok(Err(e)) => debug!(error = %e, peer = hex::encode(anchor_nid), "Bootstrap anchor: connect failed"),
                            Err(_) => debug!(peer = hex::encode(anchor_nid), "Bootstrap anchor: connect timed out"),
                        }
                    }
                }

                // Phase 3: NAT probe + referrals from whichever anchor we connected to
                if let Some(anchor_nid) = connected_anchor {
                    match tokio::time::timeout(
                        std::time::Duration::from_secs(15),
                        network.request_nat_filter_probe(&anchor_nid),
                    ).await {
                        Ok(Ok(())) => info!("NAT filter probe completed during bootstrap"),
                        Ok(Err(e)) => warn!(error = %e, "NAT filter probe failed during bootstrap"),
                        Err(_) => warn!("NAT filter probe timed out during bootstrap"),
                    }
                    match tokio::time::timeout(std::time::Duration::from_secs(10), network.request_anchor_referrals(&anchor_nid)).await {
                        Ok(Ok(referrals)) if !referrals.is_empty() => {
                            info!(count = referrals.len(), "Got anchor referrals");
                            let net = Arc::clone(&network);
                            let my_id = node_id;
                            let anchor = anchor_nid;
                            tokio::spawn(async move {
                                for referral in referrals {
                                    if referral.node_id == my_id {
                                        continue;
                                    }
                                    if let Some(addr_str) = referral.addresses.first() {
                                        let connect_str = format!(
                                            "{}@{}",
                                            hex::encode(referral.node_id),
                                            addr_str,
                                        );
                                        if let Ok((rid, raddr)) = crate::parse_connect_string(&connect_str) {
                                            let connect_fut = async {
                                                match net.connect_to_peer(rid, raddr).await {
                                                    Ok(()) => { info!(peer = hex::encode(rid), "Connected to referred peer"); Ok(()) },
                                                    Err(_) => {
                                                        match net.connect_via_introduction(rid, anchor).await {
                                                            Ok(()) => { info!(peer = hex::encode(rid), "Connected via hole punch"); Ok(()) },
                                                            Err(e) => Err(e),
                                                        }
                                                    }
                                                }
                                            };
                                            match tokio::time::timeout(std::time::Duration::from_secs(15), connect_fut).await {
                                                Ok(Ok(())) => {},
                                                Ok(Err(e)) => debug!(error = %e, peer = hex::encode(rid), "Referral connect failed"),
                                                Err(_) => debug!(peer = hex::encode(rid), "Referral connect timed out"),
                                            }
                                        }
                                    }
                                }
                                net.notify_growth().await;
                            });
                        }
                        Ok(Ok(_)) => debug!("No referrals from anchor"),
                        Ok(Err(e)) => debug!(error = %e, "Referral request failed"),
                        Err(_) => debug!("Referral request timed out"),
                    }
                }
            }
        }

        // Store bootstrap anchors on the node
        *self.bootstrap_anchors.lock().await = bootstrap_anchors;

        Ok(())
    }

    /// Get recent activity events (for diagnostics UI).
    pub fn get_activity_log(&self, limit: usize) -> Vec<ActivityEvent> {
        self.activity_log.lock().unwrap().recent(limit)
    }

    /// Get timer state: (last_rebalance_ms, last_anchor_register_ms).
    pub fn timer_state(&self) -> (u64, u64) {
        (
            self.last_rebalance_ms.load(AtomicOrdering::Relaxed),
            self.last_anchor_register_ms.load(AtomicOrdering::Relaxed),
        )
    }

    /// Get the secret seed bytes (for crypto operations by consumers like Tauri)
    pub fn secret_seed_bytes(&self) -> [u8; 32] {
        self.default_posting_secret
    }

    // --- CDN Replication Budget ---

    /// Reset budgets if an hour has elapsed since last reset.
    fn maybe_reset_budgets(&self) {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default()
            .as_millis() as u64;
        let last = self.budget_last_reset_ms.load(AtomicOrdering::Relaxed);
        if now.saturating_sub(last) >= 3_600_000 {
            let role = self.network.device_role();
            self.replication_budget_remaining.store(role.replication_limit(), AtomicOrdering::Relaxed);
            self.delivery_budget_remaining.store(role.delivery_limit(), AtomicOrdering::Relaxed);
            self.budget_last_reset_ms.store(now, AtomicOrdering::Relaxed);
            debug!(role = %role, "CDN budgets reset for new hour");
        }
    }

    /// Try to consume replication budget. Returns true if within budget.
    pub fn consume_replication_budget(&self, bytes: u64) -> bool {
        self.maybe_reset_budgets();
        let prev = self.replication_budget_remaining.fetch_update(
            AtomicOrdering::Relaxed,
            AtomicOrdering::Relaxed,
            |current| {
                if current >= bytes { Some(current - bytes) } else { None }
            },
        );
        prev.is_ok()
    }

    /// Try to consume delivery budget. Returns true if within budget.
    pub fn consume_delivery_budget(&self, bytes: u64) -> bool {
        self.maybe_reset_budgets();
        let prev = self.delivery_budget_remaining.fetch_update(
            AtomicOrdering::Relaxed,
            AtomicOrdering::Relaxed,
            |current| {
                if current >= bytes { Some(current - bytes) } else { None }
            },
        );
        prev.is_ok()
    }

    /// Get remaining replication budget bytes.
    pub fn replication_budget_remaining(&self) -> u64 {
        self.maybe_reset_budgets();
        self.replication_budget_remaining.load(AtomicOrdering::Relaxed)
    }

    /// Get remaining delivery budget bytes.
    pub fn delivery_budget_remaining(&self) -> u64 {
        self.maybe_reset_budgets();
        self.delivery_budget_remaining.load(AtomicOrdering::Relaxed)
    }

    // ---- Posting identities (multi-persona) ----

    /// List all posting identities held by this device.
    pub async fn list_posting_identities(&self) -> anyhow::Result<Vec<crate::types::PostingIdentity>> {
        let s = self.storage.get().await;
        s.list_posting_identities()
    }

    /// Create a new posting identity with a fresh ed25519 key. Auto-follows
    /// the new identity so its own posts show in the merged feed.
    pub async fn create_posting_identity(
        &self,
        display_name: String,
    ) -> anyhow::Result<crate::types::PostingIdentity> {
        let key = iroh::SecretKey::generate(&mut rand::rng());
        let seed: [u8; 32] = key.to_bytes();
        let node_id: NodeId = *key.public().as_bytes();
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;
        let identity = crate::types::PostingIdentity {
            node_id,
            secret_seed: seed,
            display_name: display_name.clone(),
            created_at: now,
        };
        {
            let s = self.storage.get().await;
            s.upsert_posting_identity(&identity)?;
            // Auto-follow this persona so its own posts reach its own feed.
            s.add_follow(&node_id)?;
        }

        // If the user supplied a non-empty display name at creation time,
        // emit a signed profile post immediately. This makes the persona
        // Discover-able by other nodes even before the user posts anything
        // under it. `publish_profile_post_as` signs with the persona's own
        // secret (not the default posting secret) and propagates via the
        // normal neighbor-manifest CDN path.
        if !display_name.is_empty() {
            if let Err(e) = self.publish_profile_post_as(&node_id, &seed, &display_name, "", None).await {
                warn!(persona = hex::encode(node_id), error = %e, "Failed to emit initial profile post for new persona");
            }
        }

        Ok(identity)
    }

    /// Build + store + propagate a `VisibilityIntent::Profile` post authored
    /// by the given persona (not the default posting identity). Extracted so
    /// both `create_posting_identity` and the startup backfill can use it.
    async fn publish_profile_post_as(
        &self,
        posting_id: &NodeId,
        posting_secret: &[u8; 32],
        display_name: &str,
        bio: &str,
        avatar_cid: Option<[u8; 32]>,
    ) -> anyhow::Result<()> {
        let profile_post = crate::profile::build_profile_post(
            posting_id,
            posting_secret,
            display_name,
            bio,
            avatar_cid,
        );
        let profile_post_id = crate::content::compute_post_id(&profile_post);
        let timestamp_ms = profile_post.timestamp_ms;

        {
            let storage = self.storage.get().await;
            storage.store_post_with_intent(
                &profile_post_id,
                &profile_post,
                &PostVisibility::Public,
                &VisibilityIntent::Profile,
            )?;
            crate::profile::apply_profile_post_if_applicable(
                &*storage,
                &profile_post,
                Some(&VisibilityIntent::Profile),
            )?;
        }
        self.update_neighbor_manifests_as(
            posting_id,
            posting_secret,
            &profile_post_id,
            timestamp_ms,
        ).await;
        Ok(())
    }

    /// Backfill: for every posting identity with a non-empty display_name
    /// that doesn't already have a `VisibilityIntent::Profile` post,
    /// synthesize one so the persona becomes Discover-able. Uses the
    /// persona's `created_at` as the post timestamp so chronology matches
    /// the persona's history.
    ///
    /// Called once from `Node::open_with_bind` after all migrations. Safe to
    /// re-run: the `has_profile_post_by_author` check makes it idempotent.
    async fn backfill_profile_posts_for_named_personas(&self) -> anyhow::Result<usize> {
        let personas = {
            let storage = self.storage.get().await;
            storage.list_posting_identities()?
        };
        let mut backfilled = 0usize;
        for pi in personas {
            if pi.display_name.is_empty() {
                continue;
            }
            {
                let storage = self.storage.get().await;
                if storage.has_profile_post_by_author(&pi.node_id)? {
                    continue;
                }
            }
            // Build a profile post whose internal timestamp equals the
            // persona's created_at. This stops the backfilled post from
            // later losing a monotonicity check against a real profile
            // update the user authors in the future.
            let signature = crate::crypto::sign_profile(
                &pi.secret_seed,
                &pi.display_name,
                "",
                &None,
                pi.created_at,
            );
            let content = crate::types::ProfilePostContent {
                display_name: pi.display_name.clone(),
                bio: String::new(),
                avatar_cid: None,
                timestamp_ms: pi.created_at,
                signature,
            };
            let post = Post {
                author: pi.node_id,
                content: serde_json::to_string(&content).unwrap_or_default(),
                attachments: vec![],
                timestamp_ms: pi.created_at,
            };
            let post_id = crate::content::compute_post_id(&post);
            {
                let storage = self.storage.get().await;
                storage.store_post_with_intent(
                    &post_id,
                    &post,
                    &PostVisibility::Public,
                    &VisibilityIntent::Profile,
                )?;
                crate::profile::apply_profile_post_if_applicable(
                    &*storage,
                    &post,
                    Some(&VisibilityIntent::Profile),
                )?;
            }
            self.update_neighbor_manifests_as(
                &pi.node_id,
                &pi.secret_seed,
                &post_id,
                pi.created_at,
            ).await;
            backfilled += 1;
        }
        if backfilled > 0 {
|
||
info!(count = backfilled, "Backfilled profile posts for named personas without one");
|
||
}
|
||
Ok(backfilled)
|
||
}

    /// If the fresh-install auto-gen persona is still pristine (no name, no
    /// posts, no engagement, not the current default), delete it. Called at
    /// the end of `import_as_personas` so an "import as persona" flow
    /// doesn't leave an orphan blank persona around.
    ///
    /// Any of four sticky conditions prevents deletion:
    /// - the user set a display_name
    /// - the user authored a post under this persona
    /// - the user authored a reaction or comment under this persona
    /// - this persona is still the current default (no imported identity
    ///   replaced it)
    pub async fn try_prune_first_run_auto_persona(&self) -> anyhow::Result<bool> {
        let (marker_hex, current_default) = {
            let s = self.storage.get().await;
            let m = s.get_setting("first_run_auto_persona_id")?;
            let d = s.get_default_posting_id()?;
            (m, d)
        };
        let Some(hex_str) = marker_hex else { return Ok(false); };
        let Ok(marker_id) = crate::parse_node_id_hex(&hex_str) else {
            // Corrupt marker — clear and move on.
            let s = self.storage.get().await;
            let _ = s.delete_setting("first_run_auto_persona_id");
            return Ok(false);
        };

        let storage = self.storage.get().await;

        // Still the default? Import didn't replace it — keep.
        if current_default == Some(marker_id) {
            let _ = storage.delete_setting("first_run_auto_persona_id");
            return Ok(false);
        }
        // Persona still exists?
        let Some(pi) = storage.get_posting_identity(&marker_id)? else {
            let _ = storage.delete_setting("first_run_auto_persona_id");
            return Ok(false);
        };
        // User named it? Keep.
        if !pi.display_name.is_empty() {
            let _ = storage.delete_setting("first_run_auto_persona_id");
            return Ok(false);
        }
        // User authored anything under it? Keep.
        if storage.has_any_post_by_author(&marker_id)? {
            let _ = storage.delete_setting("first_run_auto_persona_id");
            return Ok(false);
        }
        if storage.has_any_engagement_by_author(&marker_id)? {
            let _ = storage.delete_setting("first_run_auto_persona_id");
            return Ok(false);
        }
        // All gates passed — persona is definitively pristine and no longer
        // the default. Safe to drop.
        storage.delete_posting_identity(&marker_id)?;
        let _ = storage.remove_follow(&marker_id);
        let _ = storage.delete_setting("first_run_auto_persona_id");
        info!(persona = %hex_str, "Pruned pristine fresh-install persona after import");
        Ok(true)
    }

    /// Delete a posting identity. Refuses to delete the currently default
    /// posting identity unless the caller has already switched the default.
    pub async fn delete_posting_identity(&self, node_id: &NodeId) -> anyhow::Result<()> {
        let s = self.storage.get().await;
        if let Some(default) = s.get_default_posting_id()? {
            if default == *node_id {
                anyhow::bail!("cannot delete the default posting identity; set a different default first");
            }
        }
        s.delete_posting_identity(node_id)?;
        // Best-effort: remove the auto-follow row for this persona.
        let _ = s.remove_follow(node_id);
        Ok(())
    }

    /// Switch the default posting identity. Takes effect on next restart for
    /// the Node's cached fields, but new posts created via `create_post_as`
    /// can already use the new identity immediately.
    pub async fn set_default_posting_identity(&self, node_id: &NodeId) -> anyhow::Result<()> {
        let s = self.storage.get().await;
        if s.get_posting_identity(node_id)?.is_none() {
            anyhow::bail!("unknown posting identity");
        }
        s.set_default_posting_id(node_id)?;
        Ok(())
    }

    // ---- Identity export/import ----

    pub fn secret_seed(&self) -> [u8; 32] {
        self.default_posting_secret
    }

    pub fn export_identity_hex(&self) -> anyhow::Result<String> {
        let key_path = self.data_dir.join("identity.key");
        let key_bytes = std::fs::read(&key_path)?;
        Ok(hex::encode(key_bytes))
    }

    pub fn import_identity(data_dir: &Path, hex_key: &str) -> anyhow::Result<()> {
        std::fs::create_dir_all(data_dir)?;
        let key_path = data_dir.join("identity.key");
        if key_path.exists() {
            anyhow::bail!("identity.key already exists in {:?} — refusing to overwrite", data_dir);
        }
        let bytes = hex::decode(hex_key)?;
        if bytes.len() != 32 {
            anyhow::bail!("key must be exactly 32 bytes (64 hex chars), got {} bytes", bytes.len());
        }
        std::fs::write(&key_path, &bytes)?;
        Ok(())
    }
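
    // Example (sketch, not part of this module's API): the key contract that
    // `import_identity` enforces is "64 hex chars == 32 raw bytes". A caller
    // can pre-validate before touching the filesystem:
    //
    //     let hex_key = "ab".repeat(32);                      // 64 hex chars
    //     assert_eq!(hex::decode(&hex_key).unwrap().len(), 32); // accepted
    //     assert_ne!(hex::decode("abcd").unwrap().len(), 32);   // rejected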

    /// Get up to 10 currently-connected peer NodeIds (for recent_peers in profile).
    /// Prefers social peers, then wide.
    async fn current_recent_peers(&self) -> Vec<NodeId> {
        let conns = self.network.connection_info().await;
        let mut social: Vec<NodeId> = Vec::new();
        let mut wide: Vec<NodeId> = Vec::new();
        for (nid, kind, _) in conns {
            if nid == self.node_id {
                continue;
            }
            match kind {
                PeerSlotKind::Preferred | PeerSlotKind::Local => social.push(nid),
                PeerSlotKind::Wide => wide.push(nid),
            }
        }
        let mut result = social;
        result.extend(wide);
        result.truncate(10);
        result
    }

    // ---- Posts ----

    pub async fn create_post(&self, content: String) -> anyhow::Result<(PostId, Post)> {
        let (id, post, _vis) = self
            .create_post_with_visibility(content, VisibilityIntent::Public, vec![])
            .await?;
        Ok((id, post))
    }

    pub async fn create_post_with_visibility(
        &self,
        content: String,
        intent: VisibilityIntent,
        attachment_data: Vec<(Vec<u8>, String)>,
    ) -> anyhow::Result<(PostId, Post, PostVisibility)> {
        self.create_post_inner(
            &self.default_posting_id,
            &self.default_posting_secret,
            content,
            intent,
            attachment_data,
        ).await
    }

    /// Create a post authored by a specific posting identity held by this
    /// device. Looks up the posting secret and routes through the same post
    /// creation pipeline as the default.
    pub async fn create_post_as(
        &self,
        posting_id: &NodeId,
        content: String,
        intent: VisibilityIntent,
        attachment_data: Vec<(Vec<u8>, String)>,
    ) -> anyhow::Result<(PostId, Post, PostVisibility)> {
        let identity = {
            let s = self.storage.get().await;
            s.get_posting_identity(posting_id)?
                .ok_or_else(|| anyhow::anyhow!("unknown posting identity"))?
        };
        self.create_post_inner(
            &identity.node_id,
            &identity.secret_seed,
            content,
            intent,
            attachment_data,
        ).await
    }

    async fn create_post_inner(
        &self,
        posting_id: &NodeId,
        posting_secret: &[u8; 32],
        content: String,
        intent: VisibilityIntent,
        attachment_data: Vec<(Vec<u8>, String)>,
    ) -> anyhow::Result<(PostId, Post, PostVisibility)> {
        // Validate attachments
        if attachment_data.len() > 4 {
            anyhow::bail!("max 4 attachments per post");
        }
        for (data, _) in &attachment_data {
            if data.len() > 10 * 1024 * 1024 {
                anyhow::bail!("attachment exceeds 10MB limit");
            }
        }

        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        // Determine encryption parameters and generate CEK if needed.
        // The CEK is generated BEFORE both content and blob encryption so they share the same key.
        enum EncryptionMode {
            Public,
            Recipient { cek: [u8; 32], recipients: Vec<NodeId> },
            Group { cek: [u8; 32], group_id: [u8; 32], epoch: u64, group_seed: [u8; 32], group_pubkey: [u8; 32] },
        }

        let mode = match &intent {
            VisibilityIntent::Public => EncryptionMode::Public,
            VisibilityIntent::Circle(circle_name) => {
                // Try group encryption first
                let group_info = {
                    let storage = self.storage.get().await;
                    storage.get_group_key_by_circle(circle_name)?
                        .and_then(|gk| {
                            storage.get_group_seed(&gk.group_id, gk.epoch).ok().flatten()
                                .map(|seed| (gk.group_id, gk.epoch, seed, gk.group_public_key))
                        })
                };
                if let Some((group_id, epoch, group_seed, group_pubkey)) = group_info {
                    let mut cek = [0u8; 32];
                    rand::RngCore::fill_bytes(&mut rand::rng(), &mut cek);
                    EncryptionMode::Group { cek, group_id, epoch, group_seed, group_pubkey }
                } else {
                    let recipients = self.resolve_recipients(&intent).await?;
                    if recipients.is_empty() {
                        anyhow::bail!("no recipients resolved for this visibility");
                    }
                    let mut cek = [0u8; 32];
                    rand::RngCore::fill_bytes(&mut rand::rng(), &mut cek);
                    EncryptionMode::Recipient { cek, recipients }
                }
            }
            _ => {
                let recipients = self.resolve_recipients(&intent).await?;
                if recipients.is_empty() {
                    anyhow::bail!("no recipients resolved for this visibility");
                }
                let mut cek = [0u8; 32];
                rand::RngCore::fill_bytes(&mut rand::rng(), &mut cek);
                EncryptionMode::Recipient { cek, recipients }
            }
        };

        // Store blob files — for encrypted posts, encrypt each blob with the shared CEK.
        // CID is computed on the ciphertext so peers can verify what they store.
        let mut attachments = Vec::with_capacity(attachment_data.len());
        for (data, mime) in &attachment_data {
            let (store_data, size) = match &mode {
                EncryptionMode::Public => {
                    (data.clone(), data.len() as u64)
                }
                EncryptionMode::Recipient { cek, .. } | EncryptionMode::Group { cek, .. } => {
                    let encrypted = crypto::encrypt_bytes_with_cek(data, cek)?;
                    let sz = encrypted.len() as u64;
                    (encrypted, sz)
                }
            };
            let cid = crate::blob::compute_blob_id(&store_data);
            self.blob_store.store(&cid, &store_data)?;
            attachments.push(Attachment {
                cid,
                mime_type: mime.clone(),
                size_bytes: size,
            });
        }

        // Encrypt content and build visibility
        let (final_content, visibility) = match mode {
            EncryptionMode::Public => (content, PostVisibility::Public),
            EncryptionMode::Recipient { cek, recipients } => {
                let (encrypted, wrapped_keys) =
                    crypto::encrypt_post_with_cek(&content, &cek, posting_secret, posting_id, &recipients)?;
                (
                    encrypted,
                    PostVisibility::Encrypted {
                        recipients: wrapped_keys,
                    },
                )
            }
            EncryptionMode::Group { cek, group_id, epoch, group_seed, group_pubkey } => {
                let (encrypted, wrapped_cek) =
                    crypto::encrypt_post_for_group_with_cek(&content, &cek, &group_seed, &group_pubkey)?;
                (
                    encrypted,
                    PostVisibility::GroupEncrypted {
                        group_id,
                        epoch,
                        wrapped_cek,
                    },
                )
            }
        };

        let post = Post {
            author: *posting_id,
            content: final_content,
            attachments,
            timestamp_ms: now,
        };

        let post_id = compute_post_id(&post);

        {
            let storage = self.storage.get().await;
            storage.store_post_with_intent(&post_id, &post, &visibility, &intent)?;
            for att in &post.attachments {
                storage.record_blob(&att.cid, &post_id, posting_id, att.size_bytes, &att.mime_type, now)?;
                // Auto-pin own blobs so they're never evicted before foreign content
                let _ = storage.pin_blob(&att.cid);
            }

            // Initialize encrypted receipt + comment slots for non-public posts
            if !matches!(visibility, PostVisibility::Public) {
                let participant_count = match &visibility {
                    PostVisibility::Encrypted { recipients } => recipients.len(),
                    PostVisibility::GroupEncrypted { .. } => {
                        // For group posts we don't know the exact member count at
                        // creation time; fall back to the circle member count
                        // resolved earlier (+1 for the author), or 2.
                        match &intent {
                            VisibilityIntent::Circle(circle_name) => {
                                storage.get_circle_members(circle_name)
                                    .map(|m| m.len() + 1) // +1 for author
                                    .unwrap_or(2)
                            }
                            _ => 2,
                        }
                    }
                    PostVisibility::Public => unreachable!(),
                };

                let receipt_slots: Vec<Vec<u8>> = (0..participant_count)
                    .map(|_| crypto::random_slot_noise(64))
                    .collect();
                let comment_slot_count = (participant_count + 2) / 3; // ceil(participants / 3)
                let comment_slots: Vec<Vec<u8>> = (0..comment_slot_count)
                    .map(|_| crypto::random_slot_noise(256))
                    .collect();

                let blob_header = crate::types::BlobHeader {
                    post_id,
                    author: *posting_id,
                    reactions: vec![],
                    comments: vec![],
                    policy: Default::default(),
                    updated_at: now,
                    thread_splits: vec![],
                    receipt_slots,
                    comment_slots,
                    prior_author: None,
                };
                let header_json = serde_json::to_string(&blob_header)?;
                storage.store_blob_header(&post_id, posting_id, &header_json, now)?;
            }
        }

        // Build and store CDN manifests for blobs
        if !post.attachments.is_empty() {
            let storage = self.storage.get().await;
            let (previous, _following) = storage.get_author_post_neighborhood(posting_id, now, 10)?;
            drop(storage);

            let manifest = crate::types::AuthorManifest {
                post_id,
                author: *posting_id,
                author_addresses: self.network.our_addresses(),
                created_at: now,
                updated_at: now,
                previous_posts: previous,
                following_posts: vec![],
                signature: vec![],
            };
            let sig = crypto::sign_manifest(posting_secret, &manifest);
            let mut manifest = manifest;
            manifest.signature = sig;

            let manifest_json = serde_json::to_string(&manifest)?;
            {
                let storage = self.storage.get().await;
                for att in &post.attachments {
                    storage.store_cdn_manifest(&att.cid, &manifest_json, posting_id, now)?;
                }
            }

            // Update previous posts' manifests to include this new post as a following_post
            self.update_neighbor_manifests_as(posting_id, posting_secret, &post_id, now).await;

            // Push updated manifests to downstream peers
            let manifests_to_push = {
                let storage = self.storage.get().await;
                storage.get_manifests_for_author_blobs(posting_id).unwrap_or_default()
            };
            let our_addrs = self.network.our_addresses();
            for (push_cid, push_json) in &manifests_to_push {
                if let Ok(author_manifest) = serde_json::from_str::<crate::types::AuthorManifest>(push_json) {
                    let cdn_manifest = crate::types::CdnManifest {
                        author_manifest,
                        host: self.node_id,
                        host_addresses: our_addrs.clone(),
                        source: self.node_id,
                        source_addresses: our_addrs.clone(),
                        downstream_count: 0,
                    };
                    self.network.push_manifest_to_downstream(push_cid, &cdn_manifest).await;
                }
            }
        }

        // v0.6.2: posts propagate ONLY via the CDN (pull + header-diff
        // neighbor propagation). Persona-signed direct pushes (PostPush,
        // PostNotification) are gone — they exposed sender→recipient traffic.
        info!(post_id = hex::encode(post_id), "Created new post");
        Ok((post_id, post, visibility))
    }
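
    // Worked example (sketch): the slot sizing in `create_post_inner` above
    // is one receipt slot per participant and ceil(participants / 3) comment
    // slots, computed with integer arithmetic:
    //
    //     let ceil3 = |n: usize| (n + 2) / 3;
    //     // 4 participants -> 4 receipt slots, 2 comment slots
    //     assert_eq!(ceil3(4), 2);
    //     // 7 participants -> 7 receipt slots, 3 comment slots
    //     assert_eq!(ceil3(7), 3);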

    /// Update the manifests of recent prior posts to include a newly created post
    /// in their following_posts list. Re-signs each updated manifest.
    async fn update_neighbor_manifests_as(
        &self,
        posting_id: &NodeId,
        posting_secret: &[u8; 32],
        new_post_id: &PostId,
        new_timestamp_ms: u64,
    ) {
        let storage = self.storage.get().await;
        let manifests = match storage.get_manifests_for_author_blobs(posting_id) {
            Ok(m) => m,
            Err(e) => {
                warn!("Failed to get manifests for neighbor update: {}", e);
                return;
            }
        };
        drop(storage);

        let new_entry = crate::types::ManifestEntry {
            post_id: *new_post_id,
            timestamp_ms: new_timestamp_ms,
            has_attachments: true,
        };

        for (cid, json) in manifests {
            let mut manifest: crate::types::AuthorManifest = match serde_json::from_str(&json) {
                Ok(m) => m,
                Err(_) => continue,
            };
            // Only update if this manifest's post was created before the new post
            if manifest.created_at >= new_timestamp_ms {
                continue;
            }
            // Don't add duplicate
            if manifest.following_posts.iter().any(|e| e.post_id == *new_post_id) {
                continue;
            }
            // Keep max 10 following_posts
            if manifest.following_posts.len() >= 10 {
                continue;
            }
            manifest.following_posts.push(new_entry.clone());
            manifest.updated_at = new_timestamp_ms;
            manifest.signature = crypto::sign_manifest(posting_secret, &manifest);

            let updated_json = match serde_json::to_string(&manifest) {
                Ok(j) => j,
                Err(_) => continue,
            };
            let storage = self.storage.get().await;
            let _ = storage.store_cdn_manifest(&cid, &updated_json, posting_id, new_timestamp_ms);
            drop(storage);
        }
    }

    async fn resolve_recipients(&self, intent: &VisibilityIntent) -> anyhow::Result<Vec<NodeId>> {
        let storage = self.storage.get().await;
        match intent {
            VisibilityIntent::Public => Ok(vec![]),
            VisibilityIntent::Friends => storage.list_public_follows(),
            VisibilityIntent::Circle(name) => storage.get_circle_members(name),
            VisibilityIntent::Direct(ids) => Ok(ids.clone()),
            // Control / Profile / Announcement posts are always Public on
            // the wire; GroupKeyDistribute posts build their own recipient
            // list in `group_key_distribution::build_distribution_post`.
            // None of these use the standard resolver.
            VisibilityIntent::Control
            | VisibilityIntent::Profile
            | VisibilityIntent::GroupKeyDistribute
            | VisibilityIntent::Announcement => Ok(vec![]),
        }
    }

    pub async fn get_feed(
        &self,
    ) -> anyhow::Result<Vec<(PostId, Post, PostVisibility, Option<String>)>> {
        let (raw, group_seeds, personas) = {
            let storage = self.storage.get().await;
            let posts = storage.get_feed()?;
            let seeds = storage.get_all_group_seeds_map().unwrap_or_default();
            let personas = storage.list_posting_identities().unwrap_or_default();
            (posts, seeds, personas)
        };
        Ok(Self::decrypt_posts(raw, &group_seeds, &personas))
    }

    pub async fn get_all_posts(
        &self,
    ) -> anyhow::Result<Vec<(PostId, Post, PostVisibility, Option<String>)>> {
        let (raw, group_seeds, personas) = {
            let storage = self.storage.get().await;
            let posts = storage.list_posts_reverse_chron()?;
            let seeds = storage.get_all_group_seeds_map().unwrap_or_default();
            let personas = storage.list_posting_identities().unwrap_or_default();
            (posts, seeds, personas)
        };
        Ok(Self::decrypt_posts(raw, &group_seeds, &personas))
    }

    pub async fn get_feed_page(
        &self,
        before_ms: Option<u64>,
        limit: usize,
    ) -> anyhow::Result<Vec<(PostId, Post, PostVisibility, Option<String>)>> {
        let (raw, group_seeds, personas) = {
            let storage = self.storage.get().await;
            let posts = storage.get_feed_page(before_ms, limit)?;
            let seeds = storage.get_all_group_seeds_map().unwrap_or_default();
            let personas = storage.list_posting_identities().unwrap_or_default();
            (posts, seeds, personas)
        };
        Ok(Self::decrypt_posts(raw, &group_seeds, &personas))
    }

    pub async fn get_all_posts_page(
        &self,
        before_ms: Option<u64>,
        limit: usize,
    ) -> anyhow::Result<Vec<(PostId, Post, PostVisibility, Option<String>)>> {
        let (raw, group_seeds, personas) = {
            let storage = self.storage.get().await;
            let posts = storage.list_posts_page(before_ms, limit)?;
            let seeds = storage.get_all_group_seeds_map().unwrap_or_default();
            let personas = storage.list_posting_identities().unwrap_or_default();
            (posts, seeds, personas)
        };
        Ok(Self::decrypt_posts(raw, &group_seeds, &personas))
    }

    /// Attempt to decrypt each post using all held posting identities as
    /// candidate recipients. The first persona whose secret matches a
    /// wrapped_key recipient wins; if none match, the post remains opaque.
    fn decrypt_posts(
        posts: Vec<(PostId, Post, PostVisibility)>,
        group_seeds: &std::collections::HashMap<(crate::types::GroupId, crate::types::GroupEpoch), ([u8; 32], [u8; 32])>,
        personas: &[crate::types::PostingIdentity],
    ) -> Vec<(PostId, Post, PostVisibility, Option<String>)> {
        posts
            .into_iter()
            .map(|(id, post, vis)| {
                let decrypted = match &vis {
                    PostVisibility::Public => None,
                    PostVisibility::Encrypted { recipients } => {
                        personas.iter().find_map(|pi| {
                            crypto::decrypt_post(
                                &post.content,
                                &pi.secret_seed,
                                &pi.node_id,
                                &post.author,
                                recipients,
                            )
                            .ok()
                            .flatten()
                        })
                    }
                    PostVisibility::GroupEncrypted { group_id, epoch, wrapped_cek } => {
                        group_seeds.get(&(*group_id, *epoch))
                            .and_then(|(seed, pubkey)| {
                                crypto::decrypt_group_post(
                                    &post.content,
                                    seed,
                                    pubkey,
                                    wrapped_cek,
                                ).ok()
                            })
                    }
                };
                (id, post, vis, decrypted)
            })
            .collect()
    }

    // ---- Follows ----

    pub async fn follow(&self, node_id: &NodeId) -> anyhow::Result<()> {
        let connected = self.network.is_connected(node_id).await;
        let storage = self.storage.get().await;
        storage.add_follow(node_id)?;

        // Upsert social route. v0.6.2: audience removed; only Follow exists.
        let addresses = storage.get_peer_record(node_id)?
            .map(|r| r.addresses).unwrap_or_default();
        let peer_addresses = storage.build_peer_addresses_for(node_id)?;
        let now = std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default().as_millis() as u64;
        let preferred_tree = storage.build_preferred_tree_for(node_id).unwrap_or_default();
        storage.upsert_social_route(&SocialRouteEntry {
            node_id: *node_id,
            addresses,
            peer_addresses,
            relation: SocialRelation::Follow,
            status: if connected { SocialStatus::Online } else { SocialStatus::Disconnected },
            last_connected_ms: 0,
            last_seen_ms: now,
            reach_method: ReachMethod::Direct,
            preferred_tree,
        })?;

        Ok(())
    }

    pub async fn unfollow(&self, node_id: &NodeId) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.remove_follow(node_id)?;
        // v0.6.2: audience removed; unfollow drops the social route entirely.
        storage.remove_social_route(node_id)?;
        Ok(())
    }

    pub async fn list_follows(&self) -> anyhow::Result<Vec<NodeId>> {
        let storage = self.storage.get().await;
        storage.list_follows()
    }

    /// Batch: for each followed author, return the last-post timestamp we
    /// hold locally. Used by the Following UI to sort by recency (which
    /// replaces the broken "online" indicator since the network/posting
    /// key split anonymized presence).
    pub async fn last_activity_for_follows(&self) -> anyhow::Result<std::collections::HashMap<NodeId, u64>> {
        let storage = self.storage.get().await;
        let follows = storage.list_follows()?;
        storage.last_activity_for_authors(&follows)
    }
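
    // Usage sketch (hypothetical caller, e.g. the Following-tab backend):
    // sort follows by most-recent post, newest first, treating authors with
    // no local posts as least recent:
    //
    //     let activity = node.last_activity_for_follows().await?;
    //     let mut follows = node.list_follows().await?;
    //     follows.sort_by_key(|id| std::cmp::Reverse(activity.get(id).copied().unwrap_or(0)));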

    // ---- Ignored peers ----

    pub async fn ignore_peer(&self, node_id: &NodeId) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.add_ignored_peer(node_id)?;
        // If the peer was in follows, also drop them — ignoring implies
        // no-longer-following. Best-effort; errors are logged by callers.
        let _ = storage.remove_follow(node_id);
        let _ = storage.remove_social_route(node_id);
        Ok(())
    }

    pub async fn unignore_peer(&self, node_id: &NodeId) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.remove_ignored_peer(node_id)
    }

    pub async fn list_ignored_peers(&self) -> anyhow::Result<Vec<NodeId>> {
        let storage = self.storage.get().await;
        storage.list_ignored_peers()
    }
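
    // Behavior sketch: ignoring implies unfollowing, so after `ignore_peer`
    // a peer no longer appears in `list_follows` but does appear in
    // `list_ignored_peers`:
    //
    //     node.ignore_peer(&peer).await?;
    //     assert!(!node.list_follows().await?.contains(&peer));
    //     assert!(node.list_ignored_peers().await?.contains(&peer));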

    // ---- Discover ----

    /// Named peers we aren't following and haven't ignored — driven entirely
    /// by signed profile posts we've received through the CDN.
    pub async fn list_discoverable_profiles(&self) -> anyhow::Result<Vec<PublicProfile>> {
        let storage = self.storage.get().await;
        storage.list_discoverable_profiles(&self.default_posting_id)
    }

    // ---- Profiles ----

    /// Set the default posting identity's profile (display_name, bio,
    /// preserving any existing avatar). Creates a signed
    /// `VisibilityIntent::Profile` post authored by the posting identity and
    /// propagates it via the normal neighbor-manifest CDN path. The locally
    /// stored profile row is keyed by the posting identity — peers who pull
    /// the profile post apply the same update on their side.
    pub async fn set_profile(&self, display_name: String, bio: String) -> anyhow::Result<PublicProfile> {
        let posting_id = self.default_posting_id;
        let posting_secret = self.default_posting_secret;

        // Preserve existing avatar if present.
        let avatar_cid = {
            let storage = self.storage.get().await;
            storage.get_profile(&posting_id).ok().flatten().and_then(|p| p.avatar_cid)
        };

        let profile_post = crate::profile::build_profile_post(
            &posting_id,
            &posting_secret,
            &display_name,
            &bio,
            avatar_cid,
        );
        let profile_post_id = crate::content::compute_post_id(&profile_post);
        let timestamp_ms = profile_post.timestamp_ms;

        // Store post with VisibilityIntent::Profile + apply (upserts profile row).
        // If naming the fresh-install auto-gen persona with a non-empty name,
        // clear the disposability marker — user has claimed this persona.
        {
            let storage = self.storage.get().await;
            storage.store_post_with_intent(
                &profile_post_id,
                &profile_post,
                &PostVisibility::Public,
                &VisibilityIntent::Profile,
            )?;
            crate::profile::apply_profile_post_if_applicable(
                &*storage,
                &profile_post,
                Some(&VisibilityIntent::Profile),
            )?;
            if !display_name.is_empty() {
                if let Ok(Some(marker)) = storage.get_setting("first_run_auto_persona_id") {
                    if marker == hex::encode(posting_id) {
                        let _ = storage.delete_setting("first_run_auto_persona_id");
                    }
                }
            }
        }

        // Propagate via neighbor-manifest header diffs like any other post.
        self.update_neighbor_manifests_as(
            &posting_id,
            &posting_secret,
            &profile_post_id,
            timestamp_ms,
        ).await;

        let profile = {
            let storage = self.storage.get().await;
            storage.get_profile(&posting_id)?
                .unwrap_or_else(|| PublicProfile {
                    node_id: posting_id,
                    display_name: display_name.clone(),
                    bio: bio.clone(),
                    updated_at: timestamp_ms,
                    anchors: vec![],
                    recent_peers: vec![],
                    preferred_peers: vec![],
                    public_visible: true,
                    avatar_cid,
                })
        };

        info!(
            posting_id = hex::encode(posting_id),
            profile_post_id = hex::encode(profile_post_id),
            "Published profile post"
        );
        Ok(profile)
    }

    pub async fn set_anchors(&self, anchors: Vec<NodeId>) -> anyhow::Result<PublicProfile> {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let recent_peers = self.current_recent_peers().await;
        let profile = {
            let storage = self.storage.get().await;
            let existing = storage.get_profile(&self.node_id)?;
            let (display_name, bio, public_visible, avatar_cid) = match existing {
                Some(p) => (p.display_name, p.bio, p.public_visible, p.avatar_cid),
                None => (String::new(), String::new(), true, None),
            };
            let preferred_peers = storage.list_preferred_peers().unwrap_or_default();

            let profile = PublicProfile {
                node_id: self.node_id,
                display_name,
                bio,
                updated_at: now,
                anchors,
                recent_peers,
                preferred_peers,
                public_visible,
                avatar_cid,
            };

            storage.store_profile(&profile)?;
            profile
        };

        let pushed = self.network.push_profile(&profile).await;
        if pushed > 0 {
            info!(pushed, "Pushed anchor update to peers");
        }

        Ok(profile)
    }

    pub async fn get_peer_anchors(&self, node_id: &NodeId) -> anyhow::Result<Vec<NodeId>> {
        let storage = self.storage.get().await;
        storage.get_peer_anchors(node_id)
    }

    pub async fn get_profile(&self, node_id: &NodeId) -> anyhow::Result<Option<PublicProfile>> {
        let storage = self.storage.get().await;
        storage.get_profile(node_id)
    }

    /// v0.6.2: the user's own display profile lives under the default
    /// posting identity (published as a signed Profile post), not the
    /// network NodeId.
    pub async fn my_profile(&self) -> anyhow::Result<Option<PublicProfile>> {
        let storage = self.storage.get().await;
        storage.get_profile(&self.default_posting_id)
    }

    pub async fn has_profile(&self) -> anyhow::Result<bool> {
        let storage = self.storage.get().await;
        Ok(storage.get_profile(&self.default_posting_id)?.is_some())
    }

    pub async fn get_display_name(&self, node_id: &NodeId) -> anyhow::Result<Option<String>> {
        let storage = self.storage.get().await;
        storage.get_display_name(node_id)
    }

    // ---- Blobs ----

    /// Get a blob by CID from local store.
    pub async fn get_blob(&self, cid: &[u8; 32]) -> anyhow::Result<Option<Vec<u8>>> {
        let data = self.blob_store.get(cid)?;
        if data.is_some() {
            let storage = self.storage.get().await;
            let _ = storage.touch_blob_access(cid);
        }
        Ok(data)
    }
|
||
|
||

    /// Decrypt a blob in the context of a post's visibility.
    /// Public posts pass through unchanged. Encrypted/group-encrypted posts decrypt with the CEK.
    fn decrypt_blob_for_post(
        &self,
        data: Vec<u8>,
        post: &Post,
        visibility: &PostVisibility,
        group_seeds: &std::collections::HashMap<([u8; 32], u64), ([u8; 32], [u8; 32])>,
    ) -> anyhow::Result<Option<Vec<u8>>> {
        match visibility {
            PostVisibility::Public => Ok(Some(data)),
            PostVisibility::Encrypted { recipients } => {
                let cek = crypto::unwrap_cek_for_recipient(
                    &self.default_posting_secret,
                    &self.node_id,
                    &post.author,
                    recipients,
                )?;
                match cek {
                    Some(cek) => {
                        let plaintext = crypto::decrypt_bytes_with_cek(&data, &cek)?;
                        Ok(Some(plaintext))
                    }
                    None => Ok(None),
                }
            }
            PostVisibility::GroupEncrypted { group_id, epoch, wrapped_cek } => {
                if let Some((seed, pubkey)) = group_seeds.get(&(*group_id, *epoch)) {
                    let cek = crypto::unwrap_group_cek(seed, pubkey, wrapped_cek)?;
                    let plaintext = crypto::decrypt_bytes_with_cek(&data, &cek)?;
                    Ok(Some(plaintext))
                } else {
                    Ok(None)
                }
            }
        }
    }

    /// Get a blob by CID, decrypting it in the context of the given post.
    /// For public posts, returns raw blob data. For encrypted posts, decrypts with the post's CEK.
    pub async fn get_blob_for_post(
        &self,
        cid: &[u8; 32],
        post_id: &PostId,
    ) -> anyhow::Result<Option<Vec<u8>>> {
        // Get raw blob data (local — no lock needed)
        let raw_data = match self.blob_store.get(cid)? {
            Some(d) => d,
            None => return Ok(None),
        };

        // Single lock acquisition for all DB reads
        let (post, visibility, group_seeds) = {
            let storage = self.storage.get().await;
            let _ = storage.touch_blob_access(cid);
            match storage.get_post_with_visibility(post_id)? {
                Some((post, vis)) => {
                    let seeds = if matches!(vis, PostVisibility::GroupEncrypted { .. }) {
                        storage.get_all_group_seeds_map().unwrap_or_default()
                    } else {
                        std::collections::HashMap::new()
                    };
                    (post, vis, seeds)
                }
                None => return Ok(Some(raw_data)), // No post context — return raw
            }
        };
        // Lock released — decrypt without lock
        match &visibility {
            PostVisibility::Public => Ok(Some(raw_data)),
            _ => self.decrypt_blob_for_post(raw_data, &post, &visibility, &group_seeds),
        }
    }
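
    // Illustrative read path (hypothetical caller sketch, not part of this
    // impl): attachment rendering is expected to go through
    // `get_blob_for_post` so encrypted and group-encrypted posts decrypt
    // transparently. `node`, `att`, and `render_attachment` are assumed
    // bindings for the sketch only:
    //
    //     if let Some(bytes) = node.get_blob_for_post(&att.cid, &post_id).await? {
    //         render_attachment(&att.mime_type, &bytes);
    //     }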

    /// Prefetch blobs for recently synced posts from a peer.
    /// Scans recent posts (newest first) for missing blobs, caps at 20 per cycle.
    /// Runs outside any locks.
    const MAX_PREFETCH_PER_CYCLE: usize = 20;

    pub async fn prefetch_blobs_from_peer(&self, peer_id: &NodeId) {
        // Brief lock: get post IDs and their attachment info
        let posts_with_atts: Vec<(PostId, NodeId, Vec<crate::types::Attachment>)> = {
            let storage = self.storage.get().await;
            let post_ids = storage.list_post_ids().unwrap_or_default();
            let mut result = Vec::new();
            for pid in post_ids {
                if result.len() >= Self::MAX_PREFETCH_PER_CYCLE {
                    break;
                }
                if let Ok(Some(post)) = storage.get_post(&pid) {
                    if !post.attachments.is_empty() {
                        result.push((pid, post.author, post.attachments.clone()));
                    }
                }
            }
            result
        };
        // Lock released — check blob store and filter without lock
        let mut missing: Vec<(PostId, NodeId, Vec<crate::types::Attachment>)> = Vec::new();
        let mut total_missing = 0usize;
        for (pid, author, atts) in posts_with_atts {
            if total_missing >= Self::MAX_PREFETCH_PER_CYCLE {
                break;
            }
            let missing_atts: Vec<_> = atts
                .into_iter()
                .filter(|a| !self.blob_store.has(&a.cid))
                .collect();
            if !missing_atts.is_empty() {
                total_missing += missing_atts.len();
                missing.push((pid, author, missing_atts));
            }
        }

        if missing.is_empty() {
            return;
        }

        let mut fetched = 0usize;
        for (post_id, author, attachments) in &missing {
            for att in attachments {
                if fetched >= Self::MAX_PREFETCH_PER_CYCLE {
                    break;
                }
                match self.fetch_blob_with_fallback(
                    &att.cid, post_id, author, &att.mime_type, 0,
                ).await {
                    Ok(Some(_)) => fetched += 1,
                    Ok(None) => {}
                    Err(e) => {
                        tracing::debug!(
                            cid = hex::encode(att.cid),
                            error = %e,
                            "Blob prefetch failed"
                        );
                    }
                }
            }
        }
        if fetched > 0 {
            tracing::info!(fetched, peer = hex::encode(peer_id), "Prefetched blobs after sync");
        }
    }

    /// Check if a blob exists locally.
    pub fn has_blob(&self, cid: &[u8; 32]) -> bool {
        self.blob_store.has(cid)
    }

    /// Fetch a blob from a peer, storing it locally and recording CDN metadata.
    pub async fn fetch_blob_from_peer(
        &self,
        cid: &[u8; 32],
        from_peer: &NodeId,
        post_id: &PostId,
        author: &NodeId,
        mime_type: &str,
        created_at: u64,
    ) -> anyhow::Result<Option<Vec<u8>>> {
        // Check local first
        if let Some(data) = self.blob_store.get(cid)? {
            return Ok(Some(data));
        }

        // Fetch with CDN metadata
        let (data, response) = self.network.fetch_blob_full(cid, from_peer).await?;
        if let Some(ref data) = data {
            // Store blob locally
            self.blob_store.store(cid, data)?;
            let storage = self.storage.get().await;
            storage.record_blob(cid, post_id, author, data.len() as u64, mime_type, created_at)?;

            // Store AuthorManifest if provided (extract from CdnManifest wrapper)
            if let Some(ref cdn_manifest) = response.manifest {
                if crypto::verify_manifest_signature(&cdn_manifest.author_manifest) {
                    let author_json =
                        serde_json::to_string(&cdn_manifest.author_manifest).unwrap_or_default();
                    let _ = storage.store_cdn_manifest(
                        cid,
                        &author_json,
                        &cdn_manifest.author_manifest.author,
                        cdn_manifest.author_manifest.updated_at,
                    );
                }
            }

            // Record upstream source
            let source_addrs: Vec<String> = response
                .manifest
                .as_ref()
                .map(|m| m.host_addresses.clone())
                .unwrap_or_default();
            let _ = storage.touch_file_holder(
                cid,
                from_peer,
                &source_addrs,
                crate::storage::HolderDirection::Received,
            );
        }
        Ok(data)
    }

    /// Fetch a blob with CDN-aware cascade, preferring non-anchor sources to save anchor
    /// delivery budget:
    /// 1. Local → 2. Existing upstream → 3. Lateral peers (non-anchor first)
    /// → 4. Replicas → 5. Author → 6. Redirect peers
    /// Anchors are deprioritized at each step via storage-level ordering.
    pub async fn fetch_blob_with_fallback(
        &self,
        cid: &[u8; 32],
        post_id: &PostId,
        author: &NodeId,
        mime_type: &str,
        created_at: u64,
    ) -> anyhow::Result<Option<Vec<u8>>> {
        // 1. Check local
        if let Some(data) = self.blob_store.get(cid)? {
            let storage = self.storage.get().await;
            let _ = storage.touch_blob_access(cid);
            return Ok(Some(data));
        }

        // Collect redirect peers from responses in case we need them later
        let mut redirect_peers: Vec<crate::types::PeerWithAddress> = Vec::new();

        // 2. Try known holders (up to 5 most-recent peers we've interacted
        // with about this file).
        let known_holders = {
            let storage = self.storage.get().await;
            storage.get_file_holders(cid).unwrap_or_default()
        };
        for (holder_nid, _addrs) in &known_holders {
            match self.fetch_blob_from_peer(cid, holder_nid, post_id, author, mime_type, created_at).await {
                Ok(Some(data)) => return Ok(Some(data)),
                Ok(None) => {}
                Err(e) => warn!(error = %e, "blob fetch from known holder failed"),
            }
        }

        // 3. Lateral N0-N2: mesh peers + N2 peers who have the author's posts
        // (sorted by get_lateral_blob_sources: non-anchors first)
        let lateral_sources = {
            let storage = self.storage.get().await;
            storage.get_lateral_blob_sources(author, post_id).unwrap_or_default()
        };
        for lateral in lateral_sources {
            if lateral == *author {
                continue; // Author tried separately below
            }
            match self.network.fetch_blob_full(cid, &lateral).await {
                Ok((Some(data), response)) => {
                    self.blob_store.store(cid, &data)?;
                    let storage = self.storage.get().await;
                    storage.record_blob(cid, post_id, author, data.len() as u64, mime_type, created_at)?;
                    if let Some(ref cdn_manifest) = response.manifest {
                        if crypto::verify_manifest_signature(&cdn_manifest.author_manifest) {
                            let author_json =
                                serde_json::to_string(&cdn_manifest.author_manifest).unwrap_or_default();
                            let _ = storage.store_cdn_manifest(
                                cid,
                                &author_json,
                                &cdn_manifest.author_manifest.author,
                                cdn_manifest.author_manifest.updated_at,
                            );
                        }
                    }
                    let _ = storage.touch_file_holder(
                        cid,
                        &lateral,
                        &[],
                        crate::storage::HolderDirection::Received,
                    );
                    return Ok(Some(data));
                }
                Ok((None, response)) => {
                    redirect_peers.extend(response.cdn_redirect_peers);
                }
                Err(e) => warn!(peer = hex::encode(lateral), error = %e, "lateral blob fetch failed"),
            }
        }

        // 4. Try replica peers (before author — replicas are often closer/cheaper)
        let replicas = {
            let storage = self.storage.get().await;
            storage.get_replica_peers(post_id, 3_600_000)?
        };
        for replica in replicas {
            match self.fetch_blob_from_peer(cid, &replica, post_id, author, mime_type, created_at).await {
                Ok(Some(data)) => return Ok(Some(data)),
                Ok(None) => {}
                Err(e) => warn!(peer = hex::encode(replica), error = %e, "blob fetch from replica failed"),
            }
        }

        // 5. Try author
        match self.fetch_blob_from_peer(cid, author, post_id, author, mime_type, created_at).await {
            Ok(Some(data)) => return Ok(Some(data)),
            Ok(None) => {}
            Err(e) => warn!(error = %e, "blob fetch from author failed"),
        }

        // 6. Try redirect peers (from any step that returned cdn_redirect_peers)
        for rp in &redirect_peers {
            if let Ok(nid_bytes) = hex::decode(&rp.n) {
                if let Ok(nid) = <[u8; 32]>::try_from(nid_bytes.as_slice()) {
                    match self.fetch_blob_from_peer(cid, &nid, post_id, author, mime_type, created_at).await {
                        Ok(Some(data)) => return Ok(Some(data)),
                        Ok(None) => {}
                        Err(e) => warn!(peer = &rp.n, error = %e, "redirect blob fetch failed"),
                    }
                }
            }
        }

        Ok(None)
    }
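
    // Hedged usage sketch (assumed caller, not part of this file): sync code
    // would typically funnel misses through the cascade rather than naming a
    // single peer, so anchor delivery budget is only spent once laterals and
    // replicas have failed. `node` and `mime` are illustrative bindings:
    //
    //     match node.fetch_blob_with_fallback(&cid, &post_id, &author, mime, 0).await {
    //         Ok(Some(_)) => {} // stored locally by the cascade
    //         Ok(None) => tracing::debug!("blob unavailable from all sources"),
    //         Err(e) => tracing::warn!(error = %e, "cascade failed"),
    //     }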

    // ---- Circles ----

    pub async fn create_circle(&self, name: String) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.create_circle(&name)?;
        drop(storage);
        self.create_group_key_for_circle(&name).await?;
        Ok(())
    }

    pub async fn delete_circle(&self, name: String) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        // Delete group key and associated data
        if let Ok(Some(gk)) = storage.get_group_key_by_circle(&name) {
            let _ = storage.delete_group_key(&gk.group_id);
        }
        storage.delete_circle(&name)
    }

    pub async fn add_to_circle(&self, circle_name: String, node_id: NodeId) -> anyhow::Result<()> {
        {
            let storage = self.storage.get().await;
            storage.add_circle_member(&circle_name, &node_id)?;
        }

        // v0.6.2: distribute the seed via an encrypted key-distribution
        // post (CDN-propagated), replacing the direct GroupKeyDistribute
        // push. Only the admin (holder of the group seed) does this.
        let post_to_propagate: Option<(PostId, u64, NodeId, [u8; 32])> = {
            let storage = self.storage.get().await;
            if let Ok(Some(gk)) = storage.get_group_key_by_circle(&circle_name) {
                if gk.admin == self.default_posting_id {
                    if let Ok(Some(seed)) = storage.get_group_seed(&gk.group_id, gk.epoch) {
                        // Record our own wrapped member key locally (so we
                        // still track membership in group_member_keys for
                        // rotation math).
                        if let Ok(wrapped_new) = crypto::wrap_group_key_for_member(
                            &self.default_posting_secret, &node_id, &seed,
                        ) {
                            let _ = storage.store_group_member_key(
                                &gk.group_id,
                                &crate::types::GroupMemberKey {
                                    member: node_id,
                                    epoch: gk.epoch,
                                    wrapped_group_key: wrapped_new,
                                },
                            );
                        }

                        match crate::group_key_distribution::build_distribution_post(
                            &self.default_posting_id,
                            &self.default_posting_secret,
                            &gk,
                            &seed,
                            &[node_id],
                        ) {
                            Ok((post_id, post, visibility)) => {
                                storage.store_post_with_intent(
                                    &post_id,
                                    &post,
                                    &visibility,
                                    &VisibilityIntent::GroupKeyDistribute,
                                )?;
                                Some((post_id, post.timestamp_ms, self.default_posting_id, self.default_posting_secret))
                            }
                            Err(e) => {
                                warn!(error = %e, "failed to build key-distribution post");
                                None
                            }
                        }
                    } else { None }
                } else { None }
            } else { None }
        };

        if let Some((post_id, ts, posting_id, posting_secret)) = post_to_propagate {
            self.update_neighbor_manifests_as(&posting_id, &posting_secret, &post_id, ts).await;
        }

        Ok(())
    }

    pub async fn remove_from_circle(
        &self,
        circle_name: String,
        node_id: NodeId,
    ) -> anyhow::Result<()> {
        {
            let storage = self.storage.get().await;
            storage.remove_circle_member(&circle_name, &node_id)?;
        }

        // Rotate group key if we're the admin
        self.rotate_group_key(&circle_name).await;

        Ok(())
    }

    /// Create a group key for a circle (called on circle creation).
    async fn create_group_key_for_circle(&self, circle_name: &str) -> anyhow::Result<()> {
        self.create_group_key_inner(circle_name, None).await
    }

    // ---- Groups (v0.6.2) ----

    /// Create a new group anchored at `root_post_id`. Unlike circles, groups
    /// are many-way: every member can post to the group once they've
    /// received the wrapped group seed. Returns the `(GroupId, circle_name)`
    /// pair used internally; the circle_name is synthesised from the root
    /// post id so there's no user-visible naming step.
    pub async fn create_group_from_post(
        &self,
        root_post_id: PostId,
        initial_members: Vec<NodeId>,
    ) -> anyhow::Result<(crate::types::GroupId, String)> {
        let circle_name = format!("group:{}", hex::encode(&root_post_id[..6]));

        // Create the backing circle row + initialize group key with
        // canonical_root_post_id set, then add each initial member (which
        // wraps + distributes the key).
        {
            let storage = self.storage.get().await;
            storage.create_circle(&circle_name)?;
        }
        self.create_group_key_inner(&circle_name, Some(root_post_id)).await?;

        for member in initial_members {
            if member == self.node_id {
                continue;
            }
            if let Err(e) = self.add_to_circle(circle_name.clone(), member).await {
                warn!(member = hex::encode(member), error = %e, "failed to add group member");
            }
        }

        let group_id = {
            let storage = self.storage.get().await;
            storage.get_group_key_by_circle(&circle_name)?
                .map(|gk| gk.group_id)
                .ok_or_else(|| anyhow::anyhow!("group key missing after creation"))?
        };

        info!(
            root = hex::encode(root_post_id),
            group_id = hex::encode(group_id),
            circle_name = %circle_name,
            "Created group from post"
        );
        Ok((group_id, circle_name))
    }

    /// Post to a group anchored at `root_post_id`. Any member holding the
    /// group seed can call this. Encrypts the content with the group key and
    /// records a `ThreadMeta` link from the new post back to the root so
    /// `list_group_posts_by_root` can later cluster all contributions.
    pub async fn post_to_group(
        &self,
        root_post_id: PostId,
        content: String,
        attachment_data: Vec<(Vec<u8>, String)>,
    ) -> anyhow::Result<(PostId, Post, PostVisibility)> {
        let circle_name = {
            let storage = self.storage.get().await;
            storage.get_group_by_canonical_root(&root_post_id)?
                .map(|gk| gk.circle_name)
                .ok_or_else(|| anyhow::anyhow!("no group found for canonical root post"))?
        };

        let result = self.create_post_with_visibility(
            content,
            VisibilityIntent::Circle(circle_name),
            attachment_data,
        ).await?;

        // Link the new post back to the canonical root so the group can be
        // reconstructed by `list_group_posts_by_root`.
        {
            let storage = self.storage.get().await;
            storage.store_thread_meta(&crate::types::ThreadMeta {
                post_id: result.0,
                parent_post_id: root_post_id,
            })?;
        }

        Ok(result)
    }

    /// List all posts that belong to the group rooted at `root_post_id`.
    /// Reads the ThreadMeta parent index + returns the full posts. Callers
    /// decrypt as needed (same as any other GroupEncrypted content).
    pub async fn list_group_posts_by_root(
        &self,
        root_post_id: PostId,
    ) -> anyhow::Result<Vec<(PostId, Post, PostVisibility)>> {
        let storage = self.storage.get().await;
        let child_ids = storage.get_thread_children(&root_post_id)?;
        let mut out = Vec::with_capacity(child_ids.len());
        for pid in child_ids {
            if let Some((post, vis)) = storage.get_post_with_visibility(&pid)? {
                out.push((pid, post, vis));
            }
        }
        Ok(out)
    }
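
    // Hypothetical round-trip sketch (names assumed, not part of this impl):
    // the root author creates a group from a post, any seed-holding member
    // posts into it, and readers cluster contributions via the ThreadMeta
    // index. `node`, `root_id`, and `members` are illustrative bindings:
    //
    //     let (group_id, _circle) = node.create_group_from_post(root_id, members).await?;
    //     node.post_to_group(root_id, "hello group".into(), vec![]).await?;
    //     let posts = node.list_group_posts_by_root(root_id).await?;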

    // ---- end Groups ----

    // ---- Announcements ----

    /// Publish a signed network-wide announcement. Only succeeds when run
    /// on the bootstrap anchor — the default posting identity must be
    /// `DEFAULT_ANCHOR_POSTING_ID`. Called from `itsgoin announce` during
    /// release deploys.
    pub async fn publish_announcement(
        &self,
        category: String,
        title: String,
        body: String,
        release: Option<crate::types::ReleaseAnnouncement>,
    ) -> anyhow::Result<PostId> {
        if self.default_posting_id != crate::DEFAULT_ANCHOR_POSTING_ID {
            anyhow::bail!(
                "refusing to publish announcement: default posting identity is not the bootstrap anchor"
            );
        }
        let post = crate::announcement::build_announcement_post(
            &self.default_posting_id,
            &self.default_posting_secret,
            &category,
            &title,
            &body,
            release,
        );
        let post_id = crate::content::compute_post_id(&post);
        let timestamp_ms = post.timestamp_ms;

        {
            let storage = self.storage.get().await;
            storage.store_post_with_intent(
                &post_id,
                &post,
                &PostVisibility::Public,
                &VisibilityIntent::Announcement,
            )?;
            crate::announcement::apply_announcement_if_applicable(
                &*storage,
                &post,
                Some(&VisibilityIntent::Announcement),
            )?;
        }
        self.update_neighbor_manifests_as(
            &self.default_posting_id,
            &self.default_posting_secret,
            &post_id,
            timestamp_ms,
        ).await;
        info!(
            post_id = hex::encode(post_id),
            category = %category,
            "Published network-wide announcement"
        );
        Ok(post_id)
    }

    /// Return the latest stored release announcement for the given channel
    /// ("stable" or "beta"), or `None` if none is known yet.
    pub async fn latest_release_announcement(
        &self,
        channel: &str,
    ) -> anyhow::Result<Option<crate::announcement::StoredAnnouncement>> {
        let storage = self.storage.get().await;
        crate::announcement::latest_release(&*storage, channel)
    }

    /// Scan any newly-received `VisibilityIntent::GroupKeyDistribute` posts
    /// and apply ones we can decrypt with one of our posting identities.
    /// Intended to run after a sync pass so group seeds propagate to members
    /// without a direct push. Returns the count of applied distributions.
    pub async fn process_group_key_distributions(&self) -> anyhow::Result<usize> {
        let storage = self.storage.get().await;
        let personas = storage.list_posting_identities()?;
        crate::group_key_distribution::process_pending(&*storage, &personas)
    }

    /// Shared group-key creation used by both circles (canonical_root=None)
    /// and groups (canonical_root=Some).
    async fn create_group_key_inner(
        &self,
        circle_name: &str,
        canonical_root_post_id: Option<PostId>,
    ) -> anyhow::Result<()> {
        let (seed, pubkey) = crypto::generate_group_keypair();
        let group_id = crypto::compute_group_id(&pubkey);
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let record = crate::types::GroupKeyRecord {
            group_id,
            circle_name: circle_name.to_string(),
            epoch: 1,
            group_public_key: pubkey,
            admin: self.node_id,
            created_at: now,
            canonical_root_post_id,
        };

        let storage = self.storage.get().await;
        storage.create_group_key(&record, Some(&seed))?;
        storage.store_group_seed(&group_id, 1, &seed)?;

        // Wrap for ourselves
        let self_wrapped =
            crypto::wrap_group_key_for_member(&self.default_posting_secret, &self.node_id, &seed)?;
        let self_mk = crate::types::GroupMemberKey {
            member: self.node_id,
            epoch: 1,
            wrapped_group_key: self_wrapped,
        };
        storage.store_group_member_key(&group_id, &self_mk)?;

        // Wrap for existing circle members (if any) and distribute the seed
        // via a single encrypted key-distribution post. v0.6.2 replaces the
        // per-member uni-stream GroupKeyDistribute push with this
        // CDN-propagated post (one post per epoch, recipients = all non-self
        // members).
        let other_members: Vec<NodeId> = storage.get_circle_members(circle_name)?
            .into_iter()
            .filter(|m| *m != self.node_id)
            .collect();

        for member in &other_members {
            if let Ok(wrapped) = crypto::wrap_group_key_for_member(
                &self.default_posting_secret, member, &seed,
            ) {
                let _ = storage.store_group_member_key(
                    &group_id,
                    &crate::types::GroupMemberKey {
                        member: *member,
                        epoch: 1,
                        wrapped_group_key: wrapped,
                    },
                );
            }
        }
        drop(storage);

        if !other_members.is_empty() {
            match crate::group_key_distribution::build_distribution_post(
                &self.default_posting_id,
                &self.default_posting_secret,
                &record,
                &seed,
                &other_members,
            ) {
                Ok((post_id, post, visibility)) => {
                    let ts = post.timestamp_ms;
                    {
                        let storage = self.storage.get().await;
                        storage.store_post_with_intent(
                            &post_id, &post, &visibility, &VisibilityIntent::GroupKeyDistribute,
                        )?;
                    }
                    self.update_neighbor_manifests_as(
                        &self.default_posting_id, &self.default_posting_secret, &post_id, ts,
                    ).await;
                }
                Err(e) => {
                    warn!(error = %e, "failed to build key-distribution post");
                }
            }
        }

        info!(circle = %circle_name, group_id = hex::encode(group_id), "Created group key for circle");
        Ok(())
    }

    /// Rotate the group key for a circle (called on member removal).
    async fn rotate_group_key(&self, circle_name: &str) {
        let rotate_result = {
            let storage = self.storage.get().await;
            let gk = match storage.get_group_key_by_circle(circle_name) {
                Ok(Some(gk)) if gk.admin == self.node_id => gk,
                _ => return,
            };
            let remaining_members = match storage.get_circle_members(circle_name) {
                Ok(m) => m,
                Err(_) => return,
            };
            // Always include ourselves
            let mut all_members = remaining_members;
            if !all_members.contains(&self.node_id) {
                all_members.push(self.node_id);
            }
            match crypto::rotate_group_key(&self.default_posting_secret, gk.epoch, &all_members) {
                Ok((new_seed, new_pubkey, new_epoch, member_keys)) => {
                    Some((gk.group_id, new_seed, new_pubkey, new_epoch, member_keys, circle_name.to_string(), gk.canonical_root_post_id))
                }
                Err(e) => {
                    warn!(error = %e, "Failed to rotate group key");
                    None
                }
            }
        };

        if let Some((group_id, new_seed, new_pubkey, new_epoch, member_keys, circle_name, canonical_root)) = rotate_result {
            // Update storage
            {
                let storage = self.storage.get().await;
                let _ = storage.update_group_epoch(&group_id, new_epoch, &new_pubkey, Some(&new_seed));
                let _ = storage.store_group_seed(&group_id, new_epoch, &new_seed);
                for mk in &member_keys {
                    let _ = storage.store_group_member_key(&group_id, mk);
                }
            }

            // v0.6.2: distribute the new seed via an encrypted
            // key-distribution post instead of per-member unicast pushes.
            let recipients: Vec<NodeId> = member_keys
                .iter()
                .map(|mk| mk.member)
                .filter(|m| *m != self.default_posting_id)
                .collect();

            if !recipients.is_empty() {
                let record = crate::types::GroupKeyRecord {
                    group_id,
                    circle_name: circle_name.clone(),
                    epoch: new_epoch,
                    group_public_key: new_pubkey,
                    admin: self.default_posting_id,
                    created_at: std::time::SystemTime::now()
                        .duration_since(std::time::UNIX_EPOCH)
                        .map(|d| d.as_millis() as u64)
                        .unwrap_or(0),
                    canonical_root_post_id: canonical_root,
                };
                match crate::group_key_distribution::build_distribution_post(
                    &self.default_posting_id,
                    &self.default_posting_secret,
                    &record,
                    &new_seed,
                    &recipients,
                ) {
                    Ok((post_id, post, visibility)) => {
                        let ts = post.timestamp_ms;
                        {
                            let storage = self.storage.get().await;
                            let _ = storage.store_post_with_intent(
                                &post_id, &post, &visibility, &VisibilityIntent::GroupKeyDistribute,
                            );
                        }
                        self.update_neighbor_manifests_as(
                            &self.default_posting_id, &self.default_posting_secret, &post_id, ts,
                        ).await;
                    }
                    Err(e) => {
                        warn!(error = %e, "failed to build rotate distribution post");
                    }
                }
            }

            info!(circle = %circle_name, epoch = new_epoch, "Rotated group key");
        }
    }

    pub async fn list_circles(&self) -> anyhow::Result<Vec<Circle>> {
        let storage = self.storage.get().await;
        storage.list_circles()
    }

    // ---- Circle Profiles ----

    /// Set a circle profile: store locally, encrypt with group key, push to connected peers.
    pub async fn set_circle_profile(
        &self,
        circle_name: String,
        display_name: String,
        bio: String,
        avatar_cid: Option<[u8; 32]>,
    ) -> anyhow::Result<crate::types::CircleProfile> {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let cp = crate::types::CircleProfile {
            author: self.default_posting_id,
            circle_name: circle_name.clone(),
            display_name,
            bio,
            avatar_cid,
            updated_at: now,
        };

        // Get group key for this circle
        let (encrypted_payload, wrapped_cek, group_id, epoch) = {
            let storage = self.storage.get().await;
            // Verify circle exists
            let circles = storage.list_circles()?;
            if !circles.iter().any(|c| c.name == circle_name) {
                anyhow::bail!("circle '{}' does not exist", circle_name);
            }

            let gk = storage.get_group_key_by_circle(&circle_name)?
                .ok_or_else(|| anyhow::anyhow!("no group key for circle '{}'", circle_name))?;

            if gk.admin != self.node_id {
                anyhow::bail!("not admin of circle '{}'", circle_name);
            }

            let seed = storage.get_group_seed(&gk.group_id, gk.epoch)?
                .ok_or_else(|| anyhow::anyhow!("group seed not found for circle '{}'", circle_name))?;

            // Encrypt circle profile as JSON
            let json = serde_json::to_string(&cp)?;
            let (encrypted, wrapped) = crypto::encrypt_post_for_group(&json, &seed, &gk.group_public_key)?;

            // Store plaintext + encrypted form
            storage.set_circle_profile(&cp)?;
            storage.store_remote_circle_profile(
                &self.node_id,
                &circle_name,
                &cp,
                &encrypted,
                &wrapped,
                &gk.group_id,
                gk.epoch,
            )?;

            (encrypted, wrapped, gk.group_id, gk.epoch)
        };

        // Push to all connected mesh peers
        let payload = crate::protocol::CircleProfileUpdatePayload {
            author: self.default_posting_id,
            circle_name,
            group_id,
            epoch,
            encrypted_payload,
            wrapped_cek,
            updated_at: now,
        };
        let pushed = self.network.push_circle_profile(&payload).await;
        if pushed > 0 {
            info!(pushed, "Pushed circle profile update to peers");
        }

        Ok(cp)
    }

    /// Delete a circle profile and push tombstone.
    pub async fn delete_circle_profile(&self, circle_name: String) -> anyhow::Result<()> {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let payload = {
            let storage = self.storage.get().await;
            let gk = storage.get_group_key_by_circle(&circle_name)?
                .ok_or_else(|| anyhow::anyhow!("no group key for circle '{}'", circle_name))?;
            let seed = storage.get_group_seed(&gk.group_id, gk.epoch)?
                .ok_or_else(|| anyhow::anyhow!("group seed not found"))?;

            // Encrypt empty string as tombstone
            let (encrypted, wrapped) = crypto::encrypt_post_for_group("", &seed, &gk.group_public_key)?;

            storage.delete_circle_profile(&self.node_id, &circle_name)?;

            crate::protocol::CircleProfileUpdatePayload {
                author: self.default_posting_id,
                circle_name,
                group_id: gk.group_id,
                epoch: gk.epoch,
                encrypted_payload: encrypted,
                wrapped_cek: wrapped,
                updated_at: now,
            }
        };

        self.network.push_circle_profile(&payload).await;
        Ok(())
    }

    /// Set public_visible flag and push profile update.
    pub async fn set_public_visible(&self, visible: bool) -> anyhow::Result<()> {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let recent_peers = self.current_recent_peers().await;
        let profile = {
            let storage = self.storage.get().await;
            let existing = storage.get_profile(&self.node_id)?;
            let (display_name, bio, avatar_cid) = match existing {
                Some(p) => (p.display_name, p.bio, p.avatar_cid),
                None => (String::new(), String::new(), None),
            };
            let existing_anchors = storage.get_peer_anchors(&self.node_id).unwrap_or_default();
            let preferred_peers = storage.list_preferred_peers().unwrap_or_default();

            let profile = PublicProfile {
                node_id: self.node_id,
                display_name,
                bio,
                updated_at: now,
                anchors: existing_anchors,
                recent_peers,
                preferred_peers,
                public_visible: visible,
                avatar_cid,
            };

            storage.store_profile(&profile)?;
            profile
        };

        self.network.push_profile(&profile).await;
        Ok(())
    }

    /// Resolve display info for any peer, taking circle profiles into account.
    pub async fn resolve_display_name(
        &self,
        author: &NodeId,
    ) -> anyhow::Result<(String, String, Option<[u8; 32]>)> {
        let storage = self.storage.get().await;
        storage.resolve_display_for_peer(author, &self.node_id)
    }

    /// Get our own circle profile for a given circle.
    pub async fn get_circle_profile(
        &self,
        circle_name: &str,
    ) -> anyhow::Result<Option<crate::types::CircleProfile>> {
        let storage = self.storage.get().await;
        storage.get_circle_profile(&self.node_id, circle_name)
    }

    /// Get the public_visible setting for our own profile.
    pub async fn get_public_visible(&self) -> anyhow::Result<bool> {
        let storage = self.storage.get().await;
        Ok(storage
            .get_profile(&self.node_id)?
            .map(|p| p.public_visible)
            .unwrap_or(true))
    }
// ---- Settings ----
|
||
|
||
/// Get a setting value by key.
|
||
pub async fn get_setting(&self, key: &str) -> anyhow::Result<Option<String>> {
|
||
let storage = self.storage.get().await;
|
||
storage.get_setting(key)
|
||
}
|
||
|
||
/// Set a setting value (upsert).
|
||
pub async fn set_setting(&self, key: &str, value: &str) -> anyhow::Result<()> {
|
||
let storage = self.storage.get().await;
|
||
storage.set_setting(key, value)
|
||
}
|
||
|
||
// ---- Cache stats & pressure ----
|
||
|
||
/// Get cache statistics: (used_bytes, max_bytes, blob_count).
|
||
/// max_bytes comes from the `cache_size_bytes` setting (default 1 GB, 0 = unlimited).
|
||
pub async fn get_cache_stats(&self) -> anyhow::Result<(u64, u64, u64)> {
|
||
let storage = self.storage.get().await;
|
||
let used = storage.total_blob_bytes()?;
|
||
let count = storage.count_blobs()?;
|
||
let max_str = storage.get_setting("cache_size_bytes")?.unwrap_or_default();
|
||
let max: u64 = max_str.parse().unwrap_or(1_073_741_824);
|
||
Ok((used, max, count))
|
||
}
|
||
|
||
    /// Compute cache pressure score (0-255), read as willingness to accept
    /// new content: 255 = plenty of room (cache empty, all blobs elevated,
    /// or the lowest-priority blob is >72 h old and cheap to evict);
    /// 0 = the lowest-priority blob is brand new, so nothing can be evicted.
    /// Scales linearly with that blob's age: 0 h → 0, 36 h → 128, 72 h → 255.
    pub async fn compute_cache_pressure(&self) -> anyhow::Result<u8> {
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let staleness_ms = 3600 * 1000; // 1 hour

        let (candidates, follows) = {
            let storage = self.storage.get().await;
            let candidates = storage.get_eviction_candidates(staleness_ms)?;
            let follows = storage.list_follows().unwrap_or_default();
            (candidates, follows)
        };

        if candidates.is_empty() {
            return Ok(255); // Empty cache = max willingness to accept
        }

        // Filter to non-elevated blobs (not pinned, not own content, not followed author)
        let non_elevated: Vec<_> = candidates.iter().filter(|c| {
            !c.pinned && c.author != self.node_id && !follows.contains(&c.author)
        }).collect();

        if non_elevated.is_empty() {
            return Ok(255); // All blobs are elevated — plenty of room for new content
        }

        // Find the lowest priority (oldest/least-valuable) blob
        let mut min_priority = f64::MAX;
        let mut min_created_at = u64::MAX;
        for c in &non_elevated {
            let priority = self.compute_blob_priority(c, &follows, now);
            if priority < min_priority {
                min_priority = priority;
                min_created_at = c.created_at;
            }
        }

        // Scale based on age of the oldest non-elevated blob
        let age_hours = now.saturating_sub(min_created_at) as f64 / (3600.0 * 1000.0);
        let pressure = if age_hours >= 72.0 {
            255
        } else {
            ((age_hours / 72.0) * 255.0) as u8
        };

        Ok(pressure)
    }

    // ---- Seen engagement tracking ----

    /// Get seen engagement counts for a post.
    pub async fn get_seen_engagement(&self, post_id: &PostId) -> anyhow::Result<(u32, u32)> {
        let storage = self.storage.get().await;
        storage.get_seen_engagement(post_id)
    }

    /// Mark a post's engagement as seen (upsert).
    pub async fn set_seen_engagement(&self, post_id: &PostId, react_count: u32, comment_count: u32) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.set_seen_engagement(post_id, react_count, comment_count)
    }

    /// Get last-read timestamp for a conversation partner.
    pub async fn get_last_read_message(&self, partner_id: &NodeId) -> anyhow::Result<u64> {
        let storage = self.storage.get().await;
        storage.get_last_read_message(partner_id)
    }

    /// Mark a conversation as read up to the given timestamp.
    pub async fn set_last_read_message(&self, partner_id: &NodeId, timestamp_ms: u64) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.set_last_read_message(partner_id, timestamp_ms)
    }

    // ---- Delete / Revocation ----

    pub async fn delete_post(&self, post_id: &PostId) -> anyhow::Result<()> {
        // Load the target post and the posting identity of its author. Only
        // the author can delete their own content, so the signing key must be
        // one we hold in posting_identities.
        let (target_author, author_secret) = {
            let storage = self.storage.get().await;
            let post = storage
                .get_post(post_id)?
                .ok_or_else(|| anyhow::anyhow!("post not found"))?;
            let pi = storage
                .get_posting_identity(&post.author)?
                .ok_or_else(|| anyhow::anyhow!("cannot delete: not authored by a persona on this device"))?;
            (pi.node_id, pi.secret_seed)
        };

        // Build the control-delete post signed by the target's author.
        let control_post = crate::control::build_delete_control_post(
            &target_author,
            &author_secret,
            post_id,
        );
        let control_post_id = crate::content::compute_post_id(&control_post);
        let now = control_post.timestamp_ms;

        // Clean up blob storage local-side. Blobs in remote holders become
        // orphans and get evicted naturally via LRU — BlobDeleteNotice is
        // gone in v0.6.2.
        let blob_cids = {
            let storage = self.storage.get().await;
            let cids = storage.delete_blobs_for_post(post_id)?;
            for cid in &cids {
                let _ = storage.cleanup_cdn_for_blob(cid);
            }
            cids
        };
        for cid in &blob_cids {
            if let Err(e) = self.blob_store.delete(cid) {
                warn!(cid = hex::encode(cid), error = %e, "Failed to delete blob file");
            }
        }

        // Store the control post locally with VisibilityIntent::Control so
        // feeds filter it and propagation queries find it. Apply the op under
        // the same guard so delete recording + target cleanup happen with the
        // control-post insert atomically.
        {
            let storage = self.storage.get().await;
            storage.store_post_with_intent(
                &control_post_id,
                &control_post,
                &PostVisibility::Public,
                &VisibilityIntent::Control,
            )?;
            crate::control::apply_control_post_if_applicable(
                &*storage,
                &control_post,
                Some(&VisibilityIntent::Control),
            )?;
        }

        // Propagate via the normal neighbor-manifest CDN path: include the
        // control post in the author's other posts' `following_posts` lists
        // and push manifest diffs to their file_holders. Peers who follow
        // any of the author's posts pick up the control post and apply it.
        self.update_neighbor_manifests_as(
            &target_author,
            &author_secret,
            &control_post_id,
            now,
        ).await;

        info!(
            post_id = hex::encode(post_id),
            control_post_id = hex::encode(control_post_id),
            blobs_removed = blob_cids.len(),
            "Deleted post via control post",
        );
        Ok(())
    }

    pub async fn revoke_post_access(
        &self,
        post_id: &PostId,
        revoked: &NodeId,
        mode: RevocationMode,
    ) -> anyhow::Result<Option<PostId>> {
        let (post, visibility) = {
            let storage = self.storage.get().await;
            storage
                .get_post_with_visibility(post_id)?
                .ok_or_else(|| anyhow::anyhow!("post not found"))?
        };

        if post.author != self.node_id {
            anyhow::bail!("cannot revoke: you are not the author");
        }

        let existing_recipients = match &visibility {
            PostVisibility::Public => anyhow::bail!("cannot revoke access on a public post"),
            PostVisibility::Encrypted { recipients } => recipients,
            PostVisibility::GroupEncrypted { .. } => {
                anyhow::bail!("cannot revoke individual access on a group-encrypted post; remove from circle instead")
            }
        };

        let new_recipient_ids: Vec<NodeId> = existing_recipients
            .iter()
            .map(|wk| wk.recipient)
            .filter(|r| r != revoked)
            .collect();

        if new_recipient_ids.len() == existing_recipients.len() {
            anyhow::bail!("revoked node was not a recipient of this post");
        }

        match mode {
            RevocationMode::SyncAccessList => {
                let new_wrapped = crypto::rewrap_visibility(
                    &self.default_posting_secret,
                    &self.node_id,
                    existing_recipients,
                    &new_recipient_ids,
                )?;
                let new_vis = PostVisibility::Encrypted {
                    recipients: new_wrapped,
                };
                {
                    let storage = self.storage.get().await;
                    storage.update_post_visibility(post_id, &new_vis)?;
                }

                // Propagate via a signed control-visibility post rather than a
                // direct push. Only the target's author can make such a post.
                let author_secret = {
                    let s = self.storage.get().await;
                    s.get_posting_identity(&post.author)?
                        .map(|pi| pi.secret_seed)
                        .ok_or_else(|| anyhow::anyhow!("missing posting secret for post author"))?
                };
                let control_post = crate::control::build_visibility_control_post(
                    &post.author,
                    &author_secret,
                    post_id,
                    &new_vis,
                );
                let control_post_id = crate::content::compute_post_id(&control_post);
                let now = control_post.timestamp_ms;
                {
                    let storage = self.storage.get().await;
                    storage.store_post_with_intent(
                        &control_post_id,
                        &control_post,
                        &PostVisibility::Public,
                        &VisibilityIntent::Control,
                    )?;
                }
                self.update_neighbor_manifests_as(
                    &post.author,
                    &author_secret,
                    &control_post_id,
                    now,
                ).await;
                info!(post_id = hex::encode(post_id), control_post_id = hex::encode(control_post_id), "Revoked access (sync mode) via control post");
                Ok(None)
            }
            RevocationMode::ReEncrypt => {
                let (new_content, new_wrapped) = crypto::re_encrypt_post(
                    &post.content,
                    &self.default_posting_secret,
                    &self.node_id,
                    existing_recipients,
                    &new_recipient_ids,
                )?;
                let new_vis = PostVisibility::Encrypted {
                    recipients: new_wrapped,
                };

                let new_post = Post {
                    author: self.default_posting_id,
                    content: new_content,
                    attachments: post.attachments.clone(),
                    timestamp_ms: post.timestamp_ms,
                };
                let new_post_id = compute_post_id(&new_post);

                {
                    let storage = self.storage.get().await;
                    storage.store_post_with_visibility(&new_post_id, &new_post, &new_vis)?;
                }

                // delete_post already pushes the DeleteRecord.
                // Replacement post propagates via the CDN to remaining recipients.
                self.delete_post(post_id).await?;

                info!(
                    old_id = hex::encode(post_id),
                    new_id = hex::encode(new_post_id),
                    "Re-encrypted post (revoke)"
                );
                Ok(Some(new_post_id))
            }
        }
    }

    pub async fn revoke_circle_access(
        &self,
        circle_name: &str,
        revoked: &NodeId,
        mode: RevocationMode,
    ) -> anyhow::Result<usize> {
        let posts = {
            let storage = self.storage.get().await;
            storage.find_posts_by_circle_intent(circle_name, &self.node_id)?
        };

        let mut count = 0;
        for (post_id, _post, vis) in &posts {
            if let PostVisibility::Encrypted { recipients } = vis {
                if recipients.iter().any(|wk| &wk.recipient == revoked) {
                    match self.revoke_post_access(post_id, revoked, mode).await {
                        Ok(_) => count += 1,
                        Err(e) => {
                            warn!(
                                post_id = hex::encode(post_id),
                                error = %e,
                                "Failed to revoke post access"
                            );
                        }
                    }
                }
            }
        }

        info!(circle = circle_name, count, "Revoked circle access");
        Ok(count)
    }

    pub async fn get_redundancy_summary(&self) -> anyhow::Result<(usize, usize, usize, usize)> {
        let storage = self.storage.get().await;
        storage.get_redundancy_summary(&self.node_id, 3_600_000)
    }

    // ---- Networking ----

    pub fn endpoint_addr(&self) -> iroh::EndpointAddr {
        self.network.endpoint_addr()
    }

    /// Connect to a peer by node ID using address resolution:
    /// 0. Already connected or has session → done
    /// 1. Social route cache → try cached address
    /// 2. Peers table → connect directly
    /// 3. N2/N3 lookup → ask tagged reporter for address
    /// 4. Worm lookup → fan-out search beyond N3
    /// 5. Relay introduction → coordinate hole punch via relay peer
    /// 6. Session relay fallback → pipe through intermediary
    pub async fn connect_by_node_id(&self, peer_id: NodeId) -> anyhow::Result<()> {
        if self.network.is_connected(&peer_id).await {
            return Ok(());
        }

        // Step 0: check if we already have a session connection
        if self.network.conn_handle().has_session(&peer_id).await {
            return Ok(());
        }

        // Check if this peer is known to be behind NAT / unreachable directly
        let skip_direct = self.network.conn_handle().is_likely_unreachable(&peer_id).await;

        // Step 1: try social route cache (skipped for known-unreachable peers)
        if !skip_direct {
            let storage = self.storage.get().await;
            if let Some(route) = storage.get_social_route(&peer_id)? {
                // Try cached addresses directly
                for addr in &route.addresses {
                    let endpoint_id = match iroh::EndpointId::from_bytes(&peer_id) {
                        Ok(eid) => eid,
                        Err(_) => continue,
                    };
                    let ep_addr = iroh::EndpointAddr::from(endpoint_id).with_ip_addr(*addr);
                    drop(storage);
                    if self.network.connect_to_peer(peer_id, ep_addr).await.is_ok() {
                        info!(peer = hex::encode(peer_id), "Connected via social route cache");
                        return Ok(());
                    }
                    // The storage guard was dropped above, so only the first
                    // address from the route is tried directly.
                    break;
                }

                // Try peer_addresses: connect to their known peers and ask for target
                for pa in &route.peer_addresses {
                    if let Ok(pa_nid) = crate::parse_node_id_hex(&pa.n) {
                        if self.network.is_connected(&pa_nid).await {
                            // Already connected to this peer — ask them
                            let resolved = self.network.conn_handle().resolve_address(&peer_id).await.unwrap_or(None);
                            if let Some(addr_str) = resolved {
                                if let Ok((_nid, ep_addr)) = crate::parse_connect_string(
                                    &format!("{}@{}", hex::encode(peer_id), addr_str)
                                ) {
                                    if self.network.connect_to_peer(peer_id, ep_addr).await.is_ok() {
                                        info!(peer = hex::encode(peer_id), via = &pa.n[..12], "Connected via social route peer referral");
                                        return Ok(());
                                    }
                                }
                            }
                        } else if let Some(pa_addr_str) = pa.a.first() {
                            // Try connecting to the peer first, then ask
                            if let Ok(pa_sock) = pa_addr_str.parse::<std::net::SocketAddr>() {
                                let pa_eid = match iroh::EndpointId::from_bytes(&pa_nid) {
                                    Ok(eid) => eid,
                                    Err(_) => continue,
                                };
                                let pa_ep = iroh::EndpointAddr::from(pa_eid).with_ip_addr(pa_sock);
                                if self.network.connect_to_peer(pa_nid, pa_ep).await.is_ok() {
                                    let resolved = self.network.conn_handle().resolve_address(&peer_id).await.unwrap_or(None);
                                    if let Some(addr_str) = resolved {
                                        if let Ok((_nid, ep_addr)) = crate::parse_connect_string(
                                            &format!("{}@{}", hex::encode(peer_id), addr_str)
                                        ) {
                                            if self.network.connect_to_peer(peer_id, ep_addr).await.is_ok() {
                                                info!(peer = hex::encode(peer_id), via = &pa.n[..12], "Connected via social route peer referral (new conn)");
                                                return Ok(());
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

        // Steps 2-4: direct connection attempts (skipped for known-unreachable peers)
        if !skip_direct {
            // Step 2: try direct address from peers table
            if let Some(addr) = self.network.addr_from_storage(&peer_id).await {
                if self.network.connect_to_peer(peer_id, addr).await.is_ok() {
                    return Ok(());
                }
            }

            // Step 3: try address resolution via N2/N3
            let resolved = self.network.conn_handle().resolve_address(&peer_id).await.unwrap_or(None);

            if let Some(addr_str) = resolved {
                if let Ok(addr) = crate::parse_connect_string(&format!("{}@{}", hex::encode(peer_id), addr_str)) {
                    if self.network.connect_to_peer(peer_id, addr.1).await.is_ok() {
                        return Ok(());
                    }
                }
            }

            // Step 4: try worm lookup (fan-out search beyond N3)
            info!(peer = hex::encode(peer_id), "Trying worm lookup...");
            if let Ok(Some(wr)) = self.network.worm_lookup(&peer_id).await {
                if wr.node_id == peer_id {
                    if let Some(addr_str) = wr.addresses.first() {
                        if let Ok(addr) = crate::parse_connect_string(&format!("{}@{}", hex::encode(peer_id), addr_str)) {
                            if self.network.connect_to_peer(peer_id, addr.1).await.is_ok() {
                                return Ok(());
                            }
                        }
                    }
                } else {
                    info!(
                        target = hex::encode(peer_id),
                        found_via = hex::encode(wr.node_id),
                        "Worm found target via recent peer"
                    );
                    if let Some(addr_str) = wr.addresses.first() {
                        if let Ok(needle_addr) = crate::parse_connect_string(&format!("{}@{}", hex::encode(wr.node_id), addr_str)) {
                            if self.network.connect_to_peer(wr.node_id, needle_addr.1).await.is_ok() {
                                let resolved = self.network.conn_handle().resolve_address(&peer_id).await.unwrap_or(None);
                                if let Some(target_addr_str) = resolved {
                                    if let Ok(target_addr) = crate::parse_connect_string(&format!("{}@{}", hex::encode(peer_id), target_addr_str)) {
                                        if self.network.connect_to_peer(peer_id, target_addr.1).await.is_ok() {
                                            return Ok(());
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }

            // All direct attempts failed — mark peer as likely unreachable
            self.network.conn_handle().mark_unreachable(&peer_id);
        }

        // Step 5: relay introduction — find relay peer(s) and request introduction
        {
            let on_cooldown = {
                let storage = self.storage.get().await;
                storage.is_relay_cooldown(&peer_id, 300_000).unwrap_or(false)
            };

            if !on_cooldown {
                let relay_candidates = self.network.conn_handle().find_relays_for(&peer_id).await;

                let mut had_capacity_reject = false;
                let mut last_intro_id: Option<crate::connection::IntroId> = None;
                let mut last_relay_peer: Option<NodeId> = None;
                let mut last_relay_available = false;

                for (relay_peer, ttl) in &relay_candidates {
                    info!(
                        target = hex::encode(peer_id),
                        relay = hex::encode(relay_peer),
                        ttl,
                        "Attempting relay introduction"
                    );

                    let intro_result = tokio::time::timeout(
                        std::time::Duration::from_secs(15),
                        self.network.send_relay_introduce_standalone(relay_peer, &peer_id, *ttl),
                    ).await;

                    match intro_result {
                        Ok(Ok(result)) if result.accepted => {
                            info!(
                                target = hex::encode(peer_id),
                                addrs = ?result.target_addresses,
                                relay_available = result.relay_available,
                                "Relay introduction accepted, attempting hole punch"
                            );

                            // Save for potential session relay fallback
                            last_intro_id = Some(result.intro_id);
                            last_relay_peer = Some(*relay_peer);
                            last_relay_available = result.relay_available;

                            // Try direct connection to target's addresses (hole punch with scanning)
                            let our_profile = self.network.conn_handle().our_nat_profile().await;
                            let peer_profile = {
                                let s = self.storage.get().await;
                                s.get_peer_nat_profile(&peer_id)
                            };
                            if let Some(conn) = crate::connection::hole_punch_with_scanning(
                                self.network.endpoint(),
                                &peer_id,
                                &result.target_addresses,
                                our_profile,
                                peer_profile,
                            ).await {
                                self.network.conn_handle().add_session(peer_id, conn, SessionReachMethod::HolePunch, None).await;
                                self.network.conn_handle().mark_reachable(&peer_id);
                                info!(peer = hex::encode(peer_id), "Connected via hole punch");
                                return Ok(());
                            }

                            // Intro accepted but hole punch failed — try session relay below
                            break;
                        }
                        Ok(Ok(result)) => {
                            let reason = result.reject_reason.as_deref().unwrap_or("unknown");
                            if reason.contains("capacity") {
                                debug!(
                                    relay = hex::encode(relay_peer),
                                    "Relay at capacity, trying next candidate"
                                );
                                had_capacity_reject = true;
                                continue; // Try next relay candidate
                            }
                            debug!(
                                target = hex::encode(peer_id),
                                reason,
                                "Relay introduction rejected"
                            );
                            // Target explicitly rejected — don't try more relays
                            break;
                        }
                        Ok(Err(e)) => {
                            debug!(error = %e, "Relay introduction failed, trying next candidate");
                            continue; // Network error — try next relay
                        }
                        Err(_) => {
                            debug!("Relay introduction timed out, trying next candidate");
                            continue; // Timeout — try next relay
                        }
                    }
                }

                // Step 6: session relay fallback — intro was accepted but hole punch failed
                if let (Some(intro_id), Some(relay_peer)) = (last_intro_id, last_relay_peer) {
                    if last_relay_available {
                        info!(
                            target = hex::encode(peer_id),
                            relay = hex::encode(relay_peer),
                            "Hole punch failed, attempting session relay"
                        );

                        match self.attempt_session_relay(&relay_peer, &peer_id, &intro_id).await {
                            Ok(()) => {
                                info!(peer = hex::encode(peer_id), "Connected via session relay");
                                return Ok(());
                            }
                            Err(e) => {
                                debug!(error = %e, "Session relay failed");
                            }
                        }
                    }
                }

                // Record cooldown on failure (skip if all rejections were capacity-related)
                if !relay_candidates.is_empty() && !had_capacity_reject {
                    let storage = self.storage.get().await;
                    let _ = storage.record_relay_miss(&peer_id);
                }
            }
        }

        anyhow::bail!(
            "cannot resolve address for peer {} (tried social routes, peers table, N2/N3, worm lookup, and relay introduction)",
            hex::encode(peer_id)
        )
    }

    /// Attempt to establish a session relay through an intermediary.
    async fn attempt_session_relay(
        &self,
        relay_peer: &NodeId,
        target: &NodeId,
        intro_id: &crate::connection::IntroId,
    ) -> anyhow::Result<()> {
        use crate::protocol::{
            write_typed_message, MessageType, SessionRelayPayload,
        };

        let relay_conn = self.network.conn_handle().get_connection(relay_peer).await
            .ok_or_else(|| anyhow::anyhow!("relay peer disconnected"))?;

        let (mut send, _recv) = relay_conn.open_bi().await?;

        let payload = SessionRelayPayload {
            intro_id: *intro_id,
            target: *target,
        };
        write_typed_message(&mut send, MessageType::SessionRelay, &payload).await?;

        self.network.conn_handle().add_session(*target, relay_conn, SessionReachMethod::Relayed, None).await;

        Ok(())
    }

    /// Worm lookup: fan-out search for a peer beyond the 3-hop discovery map.
    pub async fn worm_lookup(&self, target: &NodeId) -> anyhow::Result<Option<WormResult>> {
        self.network.worm_lookup(target).await
    }

    /// Connect to a peer and establish a mesh connection
    pub async fn sync_with(&self, peer_id: NodeId) -> anyhow::Result<()> {
        self.connect_by_node_id(peer_id).await?;
        // Reset last_sync_ms for this author so the responder sends ALL posts,
        // not just posts newer than our last sync timestamp.
        {
            let storage = self.storage.get().await;
            let _ = storage.update_follow_last_sync(&peer_id, 0);
        }
        let stats = self.network.conn_handle().pull_from_peer(&peer_id).await?;
        // Also fetch engagement data (reactions, comments) for posts we hold
        let engagement = self.network.conn_handle().fetch_engagement_from_peer(&peer_id).await.unwrap_or(0);
        info!(
            peer = hex::encode(peer_id),
            posts = stats.posts_received,
            engagement_headers = engagement,
            "Sync complete"
        );
        // Prefetch blobs for posts we just received
        if stats.posts_received > 0 {
            self.prefetch_blobs_from_peer(&peer_id).await;
        }
        Ok(())
    }

    /// Connect to a peer using full address
    pub async fn sync_with_addr(&self, addr: iroh::EndpointAddr) -> anyhow::Result<()> {
        let peer_id = *addr.id.as_bytes();
        self.network.connect_to_peer(peer_id, addr).await?;
        let stats = self.network.conn_handle().pull_from_peer(&peer_id).await?;
        info!(
            peer = hex::encode(peer_id),
            posts = stats.posts_received,
            "Sync complete"
        );
        Ok(())
    }

    /// Pull from all connected peers
    pub async fn sync_all(&self) -> anyhow::Result<()> {
        let stats = self.network.pull_from_all().await?;
        info!(
            "Pull complete: {} posts from {} peers",
            stats.posts_received, stats.peers_pulled
        );
        // v0.6.2: apply any newly-received key-distribution posts so group
        // seeds propagate automatically after sync.
        if let Ok(n) = self.process_group_key_distributions().await {
            if n > 0 { info!(applied = n, "Applied group key distributions"); }
        }
        Ok(())
    }

    pub async fn add_peer(&self, peer_id: NodeId) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.add_peer(&peer_id)?;
        Ok(())
    }

    pub async fn list_peers(&self) -> anyhow::Result<Vec<NodeId>> {
        let storage = self.storage.get().await;
        storage.list_peers()
    }

    pub async fn list_peer_records(&self) -> anyhow::Result<Vec<PeerRecord>> {
        let storage = self.storage.get().await;
        storage.list_peer_records()
    }

    pub async fn list_bootstrap_anchors(&self) -> Vec<(NodeId, iroh::EndpointAddr)> {
        self.bootstrap_anchors.lock().await.clone()
    }

    /// Get connection info for display: (node_id, slot_kind, connected_at)
    pub async fn list_connections(&self) -> Vec<(NodeId, PeerSlotKind, u64)> {
        self.network.connection_info().await
    }

    pub async fn stats(&self) -> anyhow::Result<NodeStats> {
        let storage = self.storage.get().await;
        Ok(NodeStats {
            post_count: storage.post_count()?,
            peer_count: storage.list_peers()?.len(),
            follow_count: storage.list_follows()?.len(),
        })
    }

    /// Start the accept loop (run in background)
    pub fn start_accept_loop(&self) -> tokio::task::JoinHandle<anyhow::Result<()>> {
        let network = Arc::clone(&self.network);
        tokio::spawn(async move { network.run_accept_loop().await })
    }

    /// Start pull cycle: Protocol v4 tiered pull — 60s ticks, full pull on first tick,
    /// then only pull for stale authors (last_sync_ms > 4 hours old).
    pub fn start_pull_cycle(self: &Arc<Self>, _interval_secs: u64) -> tokio::task::JoinHandle<()> {
        let node = Arc::clone(self);
        tokio::spawn(async move {
            let mut interval =
                tokio::time::interval(std::time::Duration::from_secs(60));
            let mut is_first_tick = true;
            loop {
                interval.tick().await;

                if is_first_tick {
                    // Full pull on startup
                    let _ = node.network.pull_from_all().await;
                    is_first_tick = false;
                    // Prefetch after initial sync
                    let peers = node.network.conn_handle().connected_peers().await;
                    for peer_id in peers {
                        node.prefetch_blobs_from_peer(&peer_id).await;
                    }
                    continue;
                }

                // Tiered: only pull for stale authors (4-hour default)
                let stale_authors = {
                    let storage = node.storage.get().await;
                    storage.get_stale_follows(4 * 3600 * 1000).unwrap_or_default()
                };

                if stale_authors.is_empty() {
                    continue; // Most ticks skip — no stale authors
                }

                // Find a connected peer and pull
                let peers = node.network.conn_handle().connected_peers().await;
                if let Some(peer_id) = peers.first() {
                    match node.network.conn_handle().pull_from_peer(peer_id).await {
                        Ok(stats) => {
                            if stats.posts_received > 0 {
                                tracing::debug!(
                                    posts = stats.posts_received,
                                    "Tiered pull complete"
                                );
                                node.prefetch_blobs_from_peer(peer_id).await;
                            }
                        }
                        Err(e) => tracing::debug!(error = %e, "Tiered pull failed"),
                    }
                }
            }
        })
    }

    /// Start diff cycle: every interval_secs, broadcast N1/N2 changes to connected peers.
    pub fn start_diff_cycle(&self, interval_secs: u64) -> tokio::task::JoinHandle<()> {
        let network = Arc::clone(&self.network);
        // Full re-broadcast every 4 hours; clamp to at least 1 tick so the
        // modulo below cannot divide by zero (interval_secs of 0 or > 4 h).
        let full_sync_interval = ((4 * 60 * 60) / interval_secs.max(1)).max(1);
        tokio::spawn(async move {
            let mut interval =
                tokio::time::interval(std::time::Duration::from_secs(interval_secs));
            let mut tick_count: u64 = 0;
            loop {
                interval.tick().await;
                tick_count += 1;

                if tick_count % full_sync_interval == 0 {
                    // Full state re-broadcast every 4 hours to catch missed diffs
                    match network.broadcast_full_state().await {
                        Ok(count) => {
                            if count > 0 {
                                tracing::info!(count, "Full N1/N2 state broadcast (4h cycle)");
                            }
                        }
                        Err(e) => {
                            tracing::debug!(error = %e, "Full state broadcast failed");
                        }
                    }
                } else {
                    match network.broadcast_diff().await {
                        Ok(count) => {
                            if count > 0 {
                                tracing::debug!(count, "Broadcast routing diff");
                            }
                        }
                        Err(e) => {
                            tracing::debug!(error = %e, "Routing diff broadcast failed");
                        }
                    }
                }
            }
        })
    }

    /// Start rebalance cycle: every interval_secs, rebalance connection slots.
    pub fn start_rebalance_cycle(&self, interval_secs: u64) -> tokio::task::JoinHandle<()> {
        let network = Arc::clone(&self.network);
        let timer = Arc::clone(&self.last_rebalance_ms);
        tokio::spawn(async move {
            let mut interval =
                tokio::time::interval(std::time::Duration::from_secs(interval_secs));
            loop {
                interval.tick().await;
                let now = std::time::SystemTime::now()
                    .duration_since(std::time::UNIX_EPOCH)
                    .unwrap_or_default()
                    .as_millis() as u64;
                timer.store(now, AtomicOrdering::Relaxed);
                if let Err(e) = network.rebalance().await {
                    tracing::debug!(error = %e, "Rebalance failed");
                }
            }
        })
    }

    /// Start the reactive growth loop: wakes on signal, sequentially fills local
    /// slots with the most diverse N2 candidates. Each connection updates N2/N3
    /// knowledge before picking the next candidate.
    pub fn start_growth_loop(&self) -> tokio::task::JoinHandle<()> {
        let network = Arc::clone(&self.network);
        let (tx, rx) = tokio::sync::mpsc::channel(1);
        tokio::spawn(async move {
            network.set_growth_tx(tx.clone()).await;
            // Initial kick: bootstrap may have already populated N2 before this started
            let _ = tx.try_send(());
            network.run_growth_loop(rx).await;
        })
    }

    /// Start recovery loop: triggered when mesh drops below 2 connections.
    /// Immediately reconnects to anchors and requests referrals.
    pub fn start_recovery_loop(&self) -> tokio::task::JoinHandle<()> {
        let network = Arc::clone(&self.network);
        let storage = Arc::clone(&self.storage);
        let node_id = self.node_id;
        let alog = Arc::clone(&self.activity_log);
        let (tx, mut rx) = tokio::sync::mpsc::channel::<()>(1);
        tokio::spawn(async move {
            let log_evt = |level: ActivityLevel, cat: ActivityCategory, msg: String, peer: Option<NodeId>| {
                if let Ok(mut log) = alog.try_lock() { log.log(level, cat, msg, peer); }
            };
            network.set_recovery_tx(tx).await;
            while rx.recv().await.is_some() {
                tracing::info!("Recovery triggered: reconnecting to anchors");
                log_evt(ActivityLevel::Warn, ActivityCategory::Recovery, "Recovery triggered: mesh empty".into(), None);
                // Debounce: wait briefly for more disconnects to settle
                tokio::time::sleep(std::time::Duration::from_secs(2)).await;
                // Drain any queued signals
                while rx.try_recv().is_ok() {}

                // Gather anchors: known_anchors table, then anchor peers fallback
                let anchors: Vec<(crate::types::NodeId, Vec<std::net::SocketAddr>)> = {
                    let s = storage.get().await;
                    let known = s.list_known_anchors().unwrap_or_default();
                    if !known.is_empty() {
                        known
                    } else {
                        s.list_anchor_peers().unwrap_or_default()
                            .into_iter()
                            .map(|r| (r.node_id, r.addresses))
                            .collect()
                    }
                };

                for (anchor_nid, anchor_addrs) in &anchors {
                    if *anchor_nid == node_id { continue; }
                    // Connect to anchor (mesh or session fallback)
                    if !network.is_peer_connected_or_session(anchor_nid).await {
                        let endpoint_id = match iroh::EndpointId::from_bytes(anchor_nid) {
                            Ok(eid) => eid,
                            Err(_) => continue,
                        };
                        let mut addr = iroh::EndpointAddr::from(endpoint_id);
                        for sa in anchor_addrs {
                            addr = addr.with_ip_addr(*sa);
                        }
                        match network.connect_to_anchor(*anchor_nid, addr).await {
                            Ok(()) => {
                                log_evt(ActivityLevel::Info, ActivityCategory::Recovery, "Connected to anchor".into(), Some(*anchor_nid));
                            }
                            Err(e) => {
                                tracing::debug!(error = %e, "Recovery: anchor connect failed");
                                log_evt(ActivityLevel::Warn, ActivityCategory::Recovery, format!("Anchor connect failed: {}", e), Some(*anchor_nid));
                                continue;
                            }
                        }
                    }
                    // Register with anchor
                    let _ = network.send_anchor_register(anchor_nid).await;
                    // Request referrals
                    match network.request_anchor_referrals(anchor_nid).await {
                        Ok(referrals) => {
                            for referral in referrals {
                                if referral.node_id == node_id { continue; }
                                if let Some(addr_str) = referral.addresses.first() {
                                    let connect_str = format!(
                                        "{}@{}", hex::encode(referral.node_id), addr_str,
                                    );
                                    if let Ok((rid, raddr)) = crate::parse_connect_string(&connect_str) {
                                        match network.connect_to_peer(rid, raddr).await {
                                            Ok(()) => {
                                                tracing::info!(peer = hex::encode(rid), "Recovery: connected to referred peer");
                                                log_evt(ActivityLevel::Info, ActivityCategory::Recovery, "Connected to referred peer".into(), Some(rid));
                                            }
                                            Err(_) => {
                                                match network.connect_via_introduction(rid, *anchor_nid).await {
                                                    Ok(()) => {
                                                        tracing::info!(peer = hex::encode(rid), "Recovery: connected via hole punch");
                                                        log_evt(ActivityLevel::Info, ActivityCategory::Recovery, "Connected via hole punch".into(), Some(rid));
|
||
}
|
||
Err(e) => {
|
||
tracing::debug!(error = %e, peer = hex::encode(rid), "Recovery: hole punch failed");
|
||
log_evt(ActivityLevel::Warn, ActivityCategory::Recovery, format!("Hole punch failed: {}", e), Some(rid));
|
||
}
|
||
}
|
||
}
|
||
}
|
||
}
|
||
}
|
||
}
|
||
}
|
||
Err(e) => tracing::debug!(error = %e, "Recovery: referral request failed"),
|
||
}
|
||
}
|
||
let conn_count = network.connection_count().await;
|
||
tracing::info!(connections = conn_count, "Recovery complete");
|
||
log_evt(ActivityLevel::Info, ActivityCategory::Recovery, format!("Recovery complete, {} connections", conn_count), None);
|
||
}
|
||
})
|
||
}
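The recovery loop's debounce-and-drain trigger handling is worth isolating as a pattern: after the first wake, sleep briefly, then drain the channel so a burst of disconnect signals collapses into a single recovery pass. A minimal sketch using std's `mpsc` (the real loop uses a bounded tokio channel; `drain_pending` is a hypothetical helper name, not part of the codebase):

```rust
use std::sync::mpsc;

/// After waking on one trigger, drain any further queued triggers so a
/// burst of disconnects produces a single recovery pass, not several.
fn drain_pending(rx: &mpsc::Receiver<()>) -> usize {
    let mut drained = 0;
    while rx.try_recv().is_ok() {
        drained += 1;
    }
    drained
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate a burst of trigger signals.
    for _ in 0..5 {
        tx.send(()).unwrap();
    }
    // The first recv consumes one signal; the rest are coalesced away.
    rx.recv().unwrap();
    assert_eq!(drain_pending(&rx), 4);
}
```

The capacity-1 channel in the real code serves the same goal from the sender side: a second trigger while one is already queued is simply dropped by `try_send`.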

/// Start social checkin cycle: every interval_secs, refresh stale social routes.
/// Uses ephemeral connections if not persistently connected.
pub fn start_social_checkin_cycle(&self, interval_secs: u64) -> tokio::task::JoinHandle<()> {
    let network = Arc::clone(&self.network);
    let storage = Arc::clone(&self.storage);
    tokio::spawn(async move {
        let mut interval =
            tokio::time::interval(std::time::Duration::from_secs(interval_secs));
        loop {
            interval.tick().await;
            let stale = {
                let s = storage.get().await;
                s.list_stale_social_routes(interval_secs * 1000).unwrap_or_default()
            };
            for route in stale {
                let our_addrs: Vec<String> = network.endpoint_addr().ip_addrs()
                    .map(|s| s.to_string()).collect();
                let result = network.send_social_checkin(
                    &route.node_id, &our_addrs, &[],
                ).await;
                match result {
                    Ok(reply) => {
                        let s = storage.get().await;
                        let addrs: Vec<std::net::SocketAddr> = reply.addresses.iter()
                            .filter_map(|a| a.parse().ok()).collect();
                        let _ = s.touch_social_route_connect(
                            &reply.node_id, &addrs, ReachMethod::Direct,
                        );
                        let _ = s.update_social_route_peer_addrs(
                            &reply.node_id, &reply.peer_addresses,
                        );
                    }
                    Err(e) => {
                        tracing::debug!(
                            peer = hex::encode(route.node_id),
                            error = %e,
                            "Social checkin failed"
                        );
                    }
                }
            }
        }
    })
}

/// Register with all connected anchor peers. Returns count registered.
pub async fn register_with_anchors(&self) -> usize {
    let conns = self.network.connection_info().await;
    let mut count = 0;
    for (nid, _, _) in &conns {
        if self.network.is_anchor_peer(nid).await {
            match self.network.send_anchor_register(nid).await {
                Ok(()) => {
                    count += 1;
                    info!(anchor = hex::encode(nid), "Registered with anchor");
                }
                Err(e) => debug!(error = %e, anchor = hex::encode(nid), "Anchor register failed"),
            }
        }
    }
    count
}

/// Start anchor register cycle: periodically re-register with anchors and request referrals
/// when connection count is low.
pub fn start_anchor_register_cycle(&self, interval_secs: u64) -> tokio::task::JoinHandle<()> {
    let network = Arc::clone(&self.network);
    let storage = Arc::clone(&self.storage);
    let node_id = self.node_id;
    let alog = Arc::clone(&self.activity_log);
    let timer = Arc::clone(&self.last_anchor_register_ms);
    tokio::spawn(async move {
        let log_evt = |level: ActivityLevel, cat: ActivityCategory, msg: String, peer: Option<NodeId>| {
            if let Ok(mut log) = alog.try_lock() { log.log(level, cat, msg, peer); }
        };
        let mut interval =
            tokio::time::interval(std::time::Duration::from_secs(interval_secs));
        loop {
            interval.tick().await;
            let now = std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap_or_default()
                .as_millis() as u64;
            timer.store(now, AtomicOrdering::Relaxed);

            // Re-register with connected anchors (mesh + session)
            let conns = network.connection_info().await;
            let session_peers = network.session_peer_ids().await;
            let mut registered_anchors = std::collections::HashSet::new();
            // Mesh-connected anchors
            for (nid, _, _) in &conns {
                if network.is_anchor_peer(nid).await {
                    match network.send_anchor_register(nid).await {
                        Ok(()) => {
                            log_evt(ActivityLevel::Info, ActivityCategory::Anchor, "Re-registered with anchor".into(), Some(*nid));
                            registered_anchors.insert(*nid);
                        }
                        Err(e) => {
                            tracing::debug!(error = %e, "Anchor re-register failed");
                            log_evt(ActivityLevel::Warn, ActivityCategory::Anchor, format!("Re-register failed: {}", e), Some(*nid));
                        }
                    }
                }
            }
            // Session-connected anchors (e.g. anchor with full mesh)
            for nid in &session_peers {
                if registered_anchors.contains(nid) { continue; }
                if network.is_anchor_peer(nid).await {
                    match network.send_anchor_register(nid).await {
                        Ok(()) => {
                            log_evt(ActivityLevel::Info, ActivityCategory::Anchor, "Re-registered with anchor (session)".into(), Some(*nid));
                        }
                        Err(e) => {
                            tracing::debug!(error = %e, "Anchor session re-register failed");
                        }
                    }
                }
            }

            // If few connections, try requesting referrals from known anchors
            let conn_count = network.connection_count().await;
            if conn_count < 10 {
                log_evt(ActivityLevel::Info, ActivityCategory::Anchor, format!("Low connections ({}), requesting referrals", conn_count), None);
                let known = {
                    let s = storage.get().await;
                    s.list_known_anchors().unwrap_or_default()
                };
                for (anchor_nid, anchor_addrs) in known {
                    if anchor_nid == node_id {
                        continue;
                    }
                    // Connect if not already connected (mesh or session)
                    if !network.is_peer_connected_or_session(&anchor_nid).await {
                        let endpoint_id = match iroh::EndpointId::from_bytes(&anchor_nid) {
                            Ok(eid) => eid,
                            Err(_) => continue,
                        };
                        let mut addr = iroh::EndpointAddr::from(endpoint_id);
                        for sa in &anchor_addrs {
                            addr = addr.with_ip_addr(*sa);
                        }
                        if let Err(e) = network.connect_to_anchor(anchor_nid, addr).await {
                            tracing::debug!(error = %e, "Anchor cycle: connect failed");
                            continue;
                        }
                    }
                    match network.request_anchor_referrals(&anchor_nid).await {
                        Ok(referrals) => {
                            for referral in referrals {
                                if referral.node_id == node_id {
                                    continue;
                                }
                                if let Some(addr_str) = referral.addresses.first() {
                                    let connect_str = format!(
                                        "{}@{}",
                                        hex::encode(referral.node_id),
                                        addr_str,
                                    );
                                    if let Ok((rid, raddr)) = crate::parse_connect_string(&connect_str) {
                                        match network.connect_to_peer(rid, raddr).await {
                                            Ok(()) => {
                                                tracing::info!(peer = hex::encode(rid), "Anchor cycle: connected to referred peer");
                                                log_evt(ActivityLevel::Info, ActivityCategory::Anchor, "Connected to referred peer".into(), Some(rid));
                                            }
                                            Err(_) => {
                                                match network.connect_via_introduction(rid, anchor_nid).await {
                                                    Ok(()) => {
                                                        tracing::info!(peer = hex::encode(rid), "Anchor cycle: connected via hole punch");
                                                        log_evt(ActivityLevel::Info, ActivityCategory::Anchor, "Connected via hole punch".into(), Some(rid));
                                                    }
                                                    Err(e) => {
                                                        tracing::debug!(error = %e, peer = hex::encode(rid), "Anchor cycle: hole punch failed");
                                                        log_evt(ActivityLevel::Warn, ActivityCategory::Anchor, format!("Hole punch failed: {}", e), Some(rid));
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                        Err(e) => tracing::debug!(error = %e, "Anchor cycle: referral request failed"),
                    }
                }
            }

            // Anchor self-verification probe
            {
                let probe_due = network.conn_handle().probe_due().await;
                if probe_due {
                    log_evt(ActivityLevel::Info, ActivityCategory::Anchor, "Initiating anchor self-verification probe".into(), None);
                    match network.conn_handle().initiate_anchor_probe().await {
                        Ok(true) => {},  // success already logged inside
                        Ok(false) => {}, // failure already logged inside
                        Err(e) => {
                            tracing::debug!(error = %e, "Anchor probe error");
                        }
                    }
                }
            }
        }
    })
}
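Both the recovery loop and the anchor register cycle turn a referral into a `<hex-node-id>@<socket-addr>` string and hand it to `crate::parse_connect_string`. A minimal sketch of parsing that format, assuming a 32-byte node id; this local `parse_connect_string` is an illustration of the string's shape, not the crate's actual implementation:

```rust
use std::net::SocketAddr;

/// Parse "<64-hex-node-id>@<ip:port>" into (node_id, addr).
/// Simplified sketch; the real `crate::parse_connect_string` may differ.
fn parse_connect_string(s: &str) -> Result<([u8; 32], SocketAddr), String> {
    let (id_hex, addr_str) = s.split_once('@').ok_or_else(|| "missing '@'".to_string())?;
    if id_hex.len() != 64 {
        return Err("node id must be 64 hex chars".into());
    }
    let mut id = [0u8; 32];
    for (i, pair) in id_hex.as_bytes().chunks(2).enumerate() {
        // Decode one byte from two hex nibbles.
        let hi = (pair[0] as char).to_digit(16).ok_or_else(|| "bad hex".to_string())?;
        let lo = (pair[1] as char).to_digit(16).ok_or_else(|| "bad hex".to_string())?;
        id[i] = ((hi << 4) | lo) as u8;
    }
    let addr: SocketAddr = addr_str.parse().map_err(|e| format!("bad addr: {e}"))?;
    Ok((id, addr))
}

fn main() {
    let s = format!("{}@{}", "ab".repeat(32), "127.0.0.1:4433");
    let (id, addr) = parse_connect_string(&s).unwrap();
    assert_eq!(id, [0xab; 32]);
    assert_eq!(addr.port(), 4433);
    assert!(parse_connect_string("no-at-sign").is_err());
}
```

Only the first referral address is tried in the loops above; if the direct dial fails, the anchor-mediated introduction (hole punch) is the fallback.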

/// Start bootstrap connectivity check: 24 hours after startup, verify the bootstrap
/// anchor is within our network knowledge (N1/N2/N3). If not, we may be in an isolated
/// segment — reconnect to bootstrap and request referrals to bridge back.
pub fn start_bootstrap_connectivity_check(self: &Arc<Self>) -> tokio::task::JoinHandle<()> {
    let node = Arc::clone(self);
    tokio::spawn(async move {
        // Wait 24 hours before first check
        tokio::time::sleep(std::time::Duration::from_secs(24 * 60 * 60)).await;

        let mut interval = tokio::time::interval(std::time::Duration::from_secs(24 * 60 * 60));
        loop {
            interval.tick().await;

            // Parse bootstrap anchor NodeId
            let bootstrap_nid = match crate::parse_connect_string(DEFAULT_ANCHOR) {
                Ok((nid, _)) => nid,
                Err(_) => continue,
            };

            // Skip if we ARE the bootstrap
            if bootstrap_nid == node.node_id {
                continue;
            }

            // Check if bootstrap is in N1 (mesh), N2, or N3
            let is_reachable = {
                let connected = node.network.is_connected(&bootstrap_nid).await;
                if connected {
                    true
                } else {
                    let storage = node.storage.get().await;
                    let in_n2 = storage.find_in_n2(&bootstrap_nid).unwrap_or_default();
                    if !in_n2.is_empty() {
                        true
                    } else {
                        let in_n3 = storage.find_in_n3(&bootstrap_nid).unwrap_or_default();
                        !in_n3.is_empty()
                    }
                }
            };

            if is_reachable {
                tracing::debug!("Bootstrap connectivity check: bootstrap in reach, network OK");
                continue;
            }

            // Bootstrap not in N1/N2/N3 — we may be isolated
            tracing::info!("Bootstrap connectivity check: bootstrap not in reach, reconnecting");

            // Connect to bootstrap and request referrals
            if let Err(e) = node.connect_by_node_id(bootstrap_nid).await {
                tracing::warn!(error = %e, "Bootstrap connectivity: failed to connect");
                continue;
            }

            // Report bootstrap in our N1 for 24 hours so peers learn about it
            node.network.conn_handle().add_sticky_n1(&bootstrap_nid, 24 * 60 * 60 * 1000);

            match node.network.request_anchor_referrals(&bootstrap_nid).await {
                Ok(referrals) => {
                    tracing::info!(count = referrals.len(), "Bootstrap connectivity: got referrals");
                    for referral in referrals {
                        if referral.node_id == node.node_id { continue; }
                        if let Some(addr_str) = referral.addresses.first() {
                            let connect_str = format!("{}@{}", hex::encode(referral.node_id), addr_str);
                            if let Ok((rid, raddr)) = crate::parse_connect_string(&connect_str) {
                                let _ = node.network.connect_to_peer(rid, raddr).await;
                            }
                        }
                    }
                }
                Err(e) => {
                    tracing::warn!(error = %e, "Bootstrap connectivity: referral request failed");
                }
            }
        }
    })
}

/// Start CDN manifest refresh cycle: periodically ask upstream for newer manifests.
/// Manifests older than `max_age_ms` are refreshed from their upstream source.
pub fn start_manifest_refresh_cycle(&self, interval_secs: u64, max_age_ms: u64) -> tokio::task::JoinHandle<()> {
    let network = Arc::clone(&self.network);
    let storage = Arc::clone(&self.storage);
    tokio::spawn(async move {
        let mut interval =
            tokio::time::interval(std::time::Duration::from_secs(interval_secs));
        loop {
            interval.tick().await;
            // Saturate so a max_age_ms larger than the current clock can't underflow.
            let cutoff = (std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap_or_default()
                .as_millis() as u64)
                .saturating_sub(max_age_ms);
            let stale_cids = {
                let s = storage.get().await;
                s.get_stale_manifest_cids(cutoff).unwrap_or_default()
            };
            for cid in &stale_cids {
                // Get current updated_at + pick a holder to refresh from
                let (current_updated_at, refresh_source) = {
                    let s = storage.get().await;
                    let updated_at = s.get_cdn_manifest(cid).ok().flatten()
                        .and_then(|json| serde_json::from_str::<crate::types::AuthorManifest>(&json).ok())
                        .map(|m| m.updated_at)
                        .unwrap_or(0);
                    let source = s.get_file_holders(cid)
                        .unwrap_or_default()
                        .into_iter()
                        .next()
                        .map(|(nid, _)| nid);
                    (updated_at, source)
                };
                let Some(upstream_nid) = refresh_source else { continue; };
                match network.request_manifest_refresh(cid, &upstream_nid, current_updated_at).await {
                    Ok(Some(cdn_manifest)) => {
                        if crypto::verify_manifest_signature(&cdn_manifest.author_manifest) {
                            let author_json = serde_json::to_string(&cdn_manifest.author_manifest).unwrap_or_default();
                            let s = storage.get().await;
                            let _ = s.store_cdn_manifest(
                                cid,
                                &author_json,
                                &cdn_manifest.author_manifest.author,
                                cdn_manifest.author_manifest.updated_at,
                            );
                            // Relay to known holders (flat set)
                            let holders = s.get_file_holders(cid).unwrap_or_default();
                            drop(s);
                            if !holders.is_empty() {
                                network.push_manifest_to_downstream(cid, &cdn_manifest).await;
                            }
                            tracing::debug!(
                                cid = hex::encode(cid),
                                "Refreshed stale manifest from upstream"
                            );
                        }
                    }
                    Ok(None) => {} // No update available
                    Err(e) => {
                        tracing::debug!(
                            cid = hex::encode(cid),
                            upstream = hex::encode(&upstream_nid),
                            error = %e,
                            "Manifest refresh from upstream failed"
                        );
                    }
                }
            }
        }
    })
}

/// Build our N+10:Addresses (our connected peers with their addresses).
pub async fn build_peer_addresses(&self) -> Vec<PeerWithAddress> {
    let conns = self.network.connection_info().await;
    let storage = self.storage.get().await;
    let mut result = Vec::new();
    for (nid, kind, _) in conns {
        if nid == self.node_id {
            continue;
        }
        // Prefer social peers
        if kind != PeerSlotKind::Local && result.len() >= 10 {
            continue;
        }
        let addrs: Vec<String> = storage.get_peer_record(&nid)
            .ok()
            .flatten()
            .map(|r| r.addresses.iter().map(|a| a.to_string()).collect())
            .unwrap_or_default();
        result.push(PeerWithAddress {
            n: hex::encode(nid),
            a: addrs,
        });
        if result.len() >= 10 {
            break;
        }
    }
    result
}

/// List all social routes (for CLI/Tauri display).
pub async fn list_social_routes(&self) -> anyhow::Result<Vec<SocialRouteEntry>> {
    let storage = self.storage.get().await;
    storage.list_social_routes()
}

// ---- Blob Eviction ----

/// Compute priority score for a blob. Higher score = keep longer.
pub fn compute_blob_priority(
    &self,
    candidate: &crate::storage::EvictionCandidate,
    follows: &[NodeId],
    now_ms: u64,
) -> f64 {
    compute_blob_priority_standalone(candidate, &self.node_id, follows, now_ms)
}

/// Delete a blob locally. BlobDeleteNotice was removed in v0.6.2; remote
/// holders notice eviction via their own LRU / replica-miss handling.
pub async fn delete_blob_local(&self, cid: &[u8; 32]) -> anyhow::Result<()> {
    {
        let storage = self.storage.get().await;
        storage.cleanup_cdn_for_blob(cid)?;
        storage.remove_blob(cid)?;
    }
    let _ = self.blob_store.delete(cid);

    Ok(())
}

/// Evict lowest-priority blobs until total storage is under max_bytes.
pub async fn evict_blobs(&self, max_bytes: u64) -> anyhow::Result<usize> {
    let total = {
        let storage = self.storage.get().await;
        storage.total_blob_bytes()?
    };

    if total <= max_bytes {
        return Ok(0);
    }

    let now = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)?
        .as_millis() as u64;

    // 1-hour staleness for replica counts
    let staleness_ms = 3600 * 1000;

    let (candidates, follows) = {
        let storage = self.storage.get().await;
        let candidates = storage.get_eviction_candidates(staleness_ms)?;
        let follows = storage.list_follows().unwrap_or_default();
        (candidates, follows)
    };

    // Score and sort ascending (lowest priority first)
    let mut scored: Vec<(f64, &crate::storage::EvictionCandidate)> = candidates
        .iter()
        .map(|c| (self.compute_blob_priority(c, &follows, now), c))
        .collect();
    scored.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap_or(std::cmp::Ordering::Equal));

    let mut bytes_freed: u64 = 0;
    let target_free = total - max_bytes;
    let mut evicted = 0;

    for (score, candidate) in &scored {
        if bytes_freed >= target_free {
            break;
        }
        if let Err(e) = self.delete_blob_local(&candidate.cid).await {
            warn!(cid = hex::encode(candidate.cid), error = %e, "Failed to evict blob");
            continue;
        }
        bytes_freed += candidate.size_bytes;
        evicted += 1;
        info!(
            cid = hex::encode(candidate.cid),
            score = score,
            size = candidate.size_bytes,
            "Evicted blob"
        );
    }

    info!(evicted, bytes_freed, "Blob eviction complete");
    Ok(evicted)
}
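The selection step of `evict_blobs` reduces to: sort candidates by ascending score, then walk the list until enough bytes are freed. A self-contained sketch of just that logic (scores stand in for `compute_blob_priority` output; `select_evictions` is a hypothetical name, not a function in this codebase):

```rust
/// Choose evictions from (score, size_bytes) pairs until at least
/// `target_free` bytes would be freed. Returns (evicted_count, bytes_freed).
fn select_evictions(mut scored: Vec<(f64, u64)>, target_free: u64) -> (usize, u64) {
    // Ascending: lowest-priority blobs go first. f64 has no total order,
    // so NaN scores are treated as equal rather than panicking.
    scored.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap_or(std::cmp::Ordering::Equal));
    let mut freed = 0u64;
    let mut evicted = 0usize;
    for (_score, size) in scored {
        if freed >= target_free {
            break;
        }
        freed += size;
        evicted += 1;
    }
    (evicted, freed)
}

fn main() {
    // (priority score, size in bytes); need to free 400 bytes.
    let blobs = vec![(0.9, 100), (0.1, 300), (0.5, 200)];
    // Evicts the 0.1-scored (300 B) then the 0.5-scored (200 B) blob.
    assert_eq!(select_evictions(blobs, 400), (2, 500));
}
```

Note the real loop may free slightly more than `target_free` (it stops only once the threshold is crossed), which is the intended behavior: storage ends up under `max_bytes`.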

/// Start a periodic eviction cycle.
pub fn start_eviction_cycle(
    node: Arc<Self>,
    interval_secs: u64,
    max_bytes: u64,
) -> tokio::task::JoinHandle<()>
where
    Self: Send + Sync + 'static,
{
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(std::time::Duration::from_secs(interval_secs));
        loop {
            interval.tick().await;
            match node.evict_blobs(max_bytes).await {
                Ok(0) => {}
                Ok(n) => info!(evicted = n, "Eviction cycle complete"),
                Err(e) => warn!(error = %e, "Eviction cycle failed"),
            }
        }
    })
}

/// Start UPnP lease renewal cycle. Renews every lease_secs/2.
/// On 3 consecutive failures: clears is_anchor and logs a warning.
pub fn start_upnp_renewal_cycle(&self) -> Option<tokio::task::JoinHandle<()>> {
    let mapping = self.network.upnp_mapping()?;
    let local_port = mapping.local_port;
    let external_port = mapping.external_addr.port();
    let interval_secs = (mapping.lease_secs / 2) as u64;
    let network = Arc::clone(&self.network);
    let alog = Arc::clone(&self.activity_log);

    Some(tokio::spawn(async move {
        let mut interval =
            tokio::time::interval(std::time::Duration::from_secs(interval_secs));
        let mut consecutive_failures: u32 = 0;
        loop {
            interval.tick().await;
            if crate::upnp::renew_upnp_mapping(local_port, external_port).await {
                consecutive_failures = 0;
                debug!("UPnP: lease renewed (port {})", external_port);
            } else {
                consecutive_failures += 1;
                warn!("UPnP: renewal failed ({}/3)", consecutive_failures);
                if consecutive_failures >= 3 {
                    network.clear_anchor();
                    if let Ok(mut log) = alog.try_lock() {
                        log.log(
                            ActivityLevel::Warn,
                            ActivityCategory::Connection,
                            "UPnP lease lost after 3 renewal failures, auto-anchor disabled".into(),
                            None,
                        );
                    }
                    warn!("UPnP: 3 consecutive renewal failures, auto-anchor disabled");
                    return; // stop the cycle
                }
            }
        }
    }))
}

// --- HTTP Post Delivery ---

/// Start the HTTP server for serving public posts to browsers.
/// Only starts if this node is publicly TCP-reachable.
pub fn start_http_server(&self) -> Option<tokio::task::JoinHandle<()>> {
    if !self.network.is_http_capable() {
        debug!("HTTP server not started: node is not publicly TCP-reachable");
        return None;
    }
    let port = self.network.bound_port();
    if port == 0 {
        return None;
    }
    let storage = Arc::clone(&self.storage);
    let blob_store = Arc::clone(&self.blob_store);
    let downstream_addrs = Arc::new(tokio::sync::Mutex::new(
        std::collections::HashMap::<[u8; 32], Vec<std::net::SocketAddr>>::new(),
    ));

    // Advertise HTTP capability to peers
    let http_addr = self.network.http_addr();
    self.network.conn_handle().set_http_info(true, http_addr.clone());
    // Also update the ConnectionManager's fields for payload construction
    {
        let rt = tokio::runtime::Handle::current();
        let conn_mgr = Arc::clone(&self.network.conn_mgr_arc());
        rt.spawn(async move {
            let mut cm = conn_mgr.lock().await;
            cm.http_capable = true;
            cm.http_addr = http_addr;
        });
    }

    info!("Starting HTTP server on TCP port {}", port);
    Some(tokio::spawn(async move {
        if let Err(e) = crate::http::run_http_server(port, storage, blob_store, downstream_addrs).await {
            warn!("HTTP server stopped: {}", e);
        }
    }))
}

/// Start the web redirect handler (itsgoin.net share link resolution).
pub fn start_web_handler(self: &Arc<Self>, port: u16) -> tokio::task::JoinHandle<()> {
    let node = Arc::clone(self);
    info!("Starting web redirect handler on port {}", port);
    tokio::spawn(async move {
        if let Err(e) = crate::web::run_web_handler(port, node).await {
            warn!("Web redirect handler stopped: {}", e);
        }
    })
}

/// Start UPnP TCP lease renewal cycle alongside the UDP renewal.
pub fn start_upnp_tcp_renewal_cycle(&self) -> Option<tokio::task::JoinHandle<()>> {
    if !self.network.has_upnp_tcp() {
        return None;
    }
    let mapping = self.network.upnp_mapping()?;
    let local_port = mapping.local_port;
    let external_port = mapping.external_addr.port();
    let interval_secs = (mapping.lease_secs / 2) as u64;

    Some(tokio::spawn(async move {
        let mut interval =
            tokio::time::interval(std::time::Duration::from_secs(interval_secs));
        loop {
            interval.tick().await;
            if !crate::upnp::renew_upnp_tcp_mapping(local_port, external_port).await {
                warn!("UPnP: TCP lease renewal failed");
                // Don't stop the cycle — TCP is best-effort
            }
        }
    }))
}

/// Generate a share link URL for a public post.
/// Returns None if post is not public or not found.
pub async fn generate_share_link(&self, post_id: &PostId) -> anyhow::Result<Option<String>> {
    // Look up the post to verify it's public and get the author
    let (post, visibility) = {
        let store = self.storage.get().await;
        match store.get_post_with_visibility(post_id)? {
            Some(pv) => pv,
            None => return Ok(None),
        }
    };

    if !matches!(visibility, PostVisibility::Public) {
        return Ok(None);
    }

    let post_hex = hex::encode(post_id);
    let author_hex = hex::encode(post.author);
    Ok(Some(format!("https://itsgoin.net/p/{}/{}", post_hex, author_hex)))
}

// --- Engagement API ---

/// React to a post with an emoji. If `private`, encrypts payload for post author only.
pub async fn react_to_post(
    &self,
    post_id: PostId,
    emoji: String,
    private: bool,
) -> anyhow::Result<crate::types::Reaction> {
    let our_node_id = self.default_posting_id;
    let now = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)?
        .as_millis() as u64;

    // For private reactions, look up the post author and encrypt
    let encrypted_payload = if private {
        let storage = self.storage.get().await;
        let post = storage.get_post(&post_id)?
            .ok_or_else(|| anyhow::anyhow!("post not found"))?;
        drop(storage);
        let seed = self.default_posting_secret;
        let payload_json = serde_json::json!({
            "emoji": emoji,
            "reactor": hex::encode(our_node_id),
            "timestamp_ms": now,
        }).to_string();
        Some(crate::crypto::encrypt_private_reaction(&seed, &post.author, &payload_json)?)
    } else {
        None
    };

    let signature = crate::crypto::sign_reaction(&self.default_posting_secret, &our_node_id, &post_id, &emoji, now);
    let reaction = crate::types::Reaction {
        reactor: our_node_id,
        emoji: emoji.clone(),
        post_id,
        timestamp_ms: now,
        encrypted_payload,
        deleted_at: None,
        signature,
    };

    // Store locally
    let storage = self.storage.get().await;
    storage.store_reaction(&reaction)?;
    drop(storage);

    // Propagate via BlobHeaderDiff to downstream + upstream
    {
        let network = &self.network;
        let diff = crate::protocol::BlobHeaderDiffPayload {
            post_id,
            author: our_node_id,
            ops: vec![crate::types::BlobHeaderDiffOp::AddReaction(reaction.clone())],
            timestamp_ms: now,
        };
        // propagate_engagement_diff targets all file_holders (flat set, max 5)
        // which already subsumes what used to be upstream + downstream.
        network.propagate_engagement_diff(&post_id, &diff, &our_node_id).await;
    }

    Ok(reaction)
}

/// Remove a reaction from a post.
pub async fn remove_reaction(&self, post_id: PostId, emoji: String) -> anyhow::Result<()> {
    let our_node_id = self.default_posting_id;
    let storage = self.storage.get().await;
    storage.remove_reaction(&our_node_id, &post_id, &emoji)?;
    drop(storage);

    // Propagate removal
    {
        let network = &self.network;
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;
        let diff = crate::protocol::BlobHeaderDiffPayload {
            post_id,
            author: our_node_id,
            ops: vec![crate::types::BlobHeaderDiffOp::RemoveReaction {
                reactor: our_node_id,
                emoji,
                post_id,
            }],
            timestamp_ms: now,
        };
        network.propagate_engagement_diff(&post_id, &diff, &our_node_id).await;
    }

    Ok(())
}

/// Get all reactions for a post. Decrypts private reactions if we're the post author.
pub async fn get_reactions(&self, post_id: PostId) -> anyhow::Result<Vec<crate::types::Reaction>> {
    let storage = self.storage.get().await;
    let reactions = storage.get_reactions(&post_id)?;
    let post_info = storage.get_post(&post_id)?;
    drop(storage);

    let our_node_id = self.default_posting_id;

    // If we're the author, decrypt private reactions
    if let Some(post) = post_info {
        if post.author == our_node_id {
            let seed = self.default_posting_secret;
            return Ok(reactions.into_iter().map(|mut r| {
                if let Some(ref enc) = r.encrypted_payload {
                    if let Ok(decrypted) = crate::crypto::decrypt_private_reaction(&seed, &r.reactor, enc) {
                        r.encrypted_payload = Some(decrypted);
                    }
                }
                r
            }).collect());
        }
    }

    Ok(reactions)
}

/// Get reaction counts grouped by emoji for a post.
pub async fn get_reaction_counts(&self, post_id: PostId) -> anyhow::Result<Vec<(String, u64, bool)>> {
    let our_node_id = self.default_posting_id;
    let storage = self.storage.get().await;
    let counts = storage.get_reaction_counts(&post_id, &our_node_id)?;
    Ok(counts)
}
|
||
|
||
/// Add a plain inline comment to a post (signed with our posting key).
|
||
/// The comment's `content` is the full text; `ref_post_id` is None.
|
||
pub async fn comment_on_post(
|
||
&self,
|
||
post_id: PostId,
|
||
content: String,
|
||
) -> anyhow::Result<crate::types::InlineComment> {
|
||
self.comment_on_post_inner(post_id, content, None).await
|
||
}
|
||
|
||
/// Add a rich comment: the full body lives in `ref_post_id` (typically a
|
||
/// newly-created public post by the commenter that carries attachments
|
||
/// or a long body). The inline `preview` text appears in the parent
|
||
/// post's header-diff and is what most clients render by default; the
|
||
/// expanded view fetches the referenced post. Signature binds the
|
||
/// preview + ref_post_id so a peer can't rewrite either independently.
|
||
pub async fn comment_on_post_with_ref(
|
||
&self,
|
||
post_id: PostId,
|
||
preview: String,
|
||
ref_post_id: PostId,
|
||
) -> anyhow::Result<crate::types::InlineComment> {
|
||
        self.comment_on_post_inner(post_id, preview, Some(ref_post_id)).await
    }

    async fn comment_on_post_inner(
        &self,
        post_id: PostId,
        content: String,
        ref_post_id: Option<PostId>,
    ) -> anyhow::Result<crate::types::InlineComment> {
        let our_node_id = self.default_posting_id;
        let seed = self.default_posting_secret;
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let signature = crate::crypto::sign_comment(
            &seed,
            &our_node_id,
            &post_id,
            &content,
            now,
            ref_post_id.as_ref(),
        );

        let comment = crate::types::InlineComment {
            author: our_node_id,
            post_id,
            content,
            timestamp_ms: now,
            signature,
            deleted_at: None,
            ref_post_id,
        };

        let storage = self.storage.get().await;
        storage.store_comment(&comment)?;
        drop(storage);

        // Propagate via BlobHeaderDiff to the target post's known holders.
        {
            let network = &self.network;
            let diff = crate::protocol::BlobHeaderDiffPayload {
                post_id,
                author: our_node_id,
                ops: vec![crate::types::BlobHeaderDiffOp::AddComment(comment.clone())],
                timestamp_ms: now,
            };
            network.propagate_engagement_diff(&post_id, &diff, &our_node_id).await;
        }

        Ok(comment)
    }

    /// Edit one of your own comments on a post.
    pub async fn edit_comment(
        &self,
        post_id: PostId,
        timestamp_ms: u64,
        new_content: String,
    ) -> anyhow::Result<()> {
        let our_node_id = self.default_posting_id;
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let storage = self.storage.get().await;
        storage.edit_comment(&our_node_id, &post_id, timestamp_ms, &new_content)?;
        drop(storage);

        // Propagate via BlobHeaderDiff
        {
            let network = &self.network;
            let diff = crate::protocol::BlobHeaderDiffPayload {
                post_id,
                author: our_node_id,
                ops: vec![crate::types::BlobHeaderDiffOp::EditComment {
                    author: our_node_id,
                    post_id,
                    timestamp_ms,
                    new_content,
                }],
                timestamp_ms: now,
            };
            network.propagate_engagement_diff(&post_id, &diff, &our_node_id).await;
        }
        Ok(())
    }

    /// Delete one of your own comments on a post.
    pub async fn delete_comment(
        &self,
        post_id: PostId,
        timestamp_ms: u64,
    ) -> anyhow::Result<()> {
        let our_node_id = self.default_posting_id;
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        let storage = self.storage.get().await;
        storage.delete_comment(&our_node_id, &post_id, timestamp_ms)?;
        drop(storage);

        // Propagate via BlobHeaderDiff
        {
            let network = &self.network;
            let diff = crate::protocol::BlobHeaderDiffPayload {
                post_id,
                author: our_node_id,
                ops: vec![crate::types::BlobHeaderDiffOp::DeleteComment {
                    author: our_node_id,
                    post_id,
                    timestamp_ms,
                }],
                timestamp_ms: now,
            };
            network.propagate_engagement_diff(&post_id, &diff, &our_node_id).await;
        }
        Ok(())
    }

    /// Get all comments for a post.
    pub async fn get_comments(&self, post_id: PostId) -> anyhow::Result<Vec<crate::types::InlineComment>> {
        let storage = self.storage.get().await;
        let comments = storage.get_comments(&post_id)?;
        Ok(comments)
    }

    /// Set the comment/reaction policy for a post (author-only).
    pub async fn set_comment_policy(
        &self,
        post_id: PostId,
        policy: crate::types::CommentPolicy,
    ) -> anyhow::Result<()> {
        let storage = self.storage.get().await;
        storage.set_comment_policy(&post_id, &policy)?;
        drop(storage);

        // Propagate policy change
        {
            let network = &self.network;
            let our_node_id = self.default_posting_id;
            let now = std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)?
                .as_millis() as u64;
            let diff = crate::protocol::BlobHeaderDiffPayload {
                post_id,
                author: our_node_id,
                ops: vec![crate::types::BlobHeaderDiffOp::SetPolicy(policy)],
                timestamp_ms: now,
            };
            network.propagate_engagement_diff(&post_id, &diff, &our_node_id).await;
        }

        Ok(())
    }

    /// Get the comment policy for a post.
    pub async fn get_comment_policy(&self, post_id: PostId) -> anyhow::Result<Option<crate::types::CommentPolicy>> {
        let storage = self.storage.get().await;
        let policy = storage.get_comment_policy(&post_id)?;
        Ok(policy)
    }

    /// Get the full comment thread for a post (inline comments + split posts merged).
    pub async fn get_comment_thread(&self, post_id: PostId) -> anyhow::Result<Vec<crate::types::InlineComment>> {
        let storage = self.storage.get().await;

        // 1. Inline comments
        let mut comments = storage.get_comments(&post_id)?;

        // 2. Split posts (thread children)
        let children = storage.get_thread_children(&post_id)?;
        for child_id in children {
            if let Ok(Some(child_post)) = storage.get_post(&child_id) {
                // Split posts store comments as JSON in content
                if let Ok(split_comments) = serde_json::from_str::<Vec<crate::types::InlineComment>>(&child_post.content) {
                    comments.extend(split_comments);
                }
            }
        }

        // Dedup by (author, timestamp_ms) and sort
        let mut seen = std::collections::HashSet::new();
        comments.retain(|c| seen.insert((c.author, c.timestamp_ms)));
        comments.sort_by_key(|c| c.timestamp_ms);

        Ok(comments)
    }

    // --- Encrypted receipt/comment slot methods ---

    /// Unwrap the CEK for a post we are a participant of, returning (cek, sorted_participants).
    /// Returns None if this is a public post or we cannot decrypt.
    async fn get_post_cek_and_participants(
        &self,
        post_id: &PostId,
    ) -> anyhow::Result<Option<([u8; 32], Vec<NodeId>)>> {
        let storage = self.storage.get().await;
        let (post, visibility) = match storage.get_post_with_visibility(post_id)? {
            Some(pv) => pv,
            None => return Ok(None),
        };
        drop(storage);

        match &visibility {
            PostVisibility::Public => Ok(None),
            PostVisibility::Encrypted { recipients } => {
                let cek = crypto::unwrap_cek_for_recipient(
                    &self.default_posting_secret,
                    &self.node_id,
                    &post.author,
                    recipients,
                )?;
                match cek {
                    Some(cek) => {
                        let mut participants: Vec<NodeId> = recipients.iter().map(|wk| wk.recipient).collect();
                        participants.sort();
                        participants.dedup();
                        Ok(Some((cek, participants)))
                    }
                    None => Ok(None),
                }
            }
            PostVisibility::GroupEncrypted { group_id, epoch, wrapped_cek } => {
                let storage = self.storage.get().await;
                let group_seeds = storage.get_all_group_seeds_map().unwrap_or_default();
                let group_key_record = storage.get_group_key(group_id)?;
                let members = if let Some(ref gk) = group_key_record {
                    storage.get_circle_members(&gk.circle_name).unwrap_or_default()
                } else {
                    vec![]
                };
                drop(storage);

                if let Some((seed, pubkey)) = group_seeds.get(&(*group_id, *epoch)) {
                    let cek = crypto::unwrap_group_cek(seed, pubkey, wrapped_cek)?;
                    let mut participants: Vec<NodeId> = members;
                    // Ensure the author is included
                    if !participants.contains(&post.author) {
                        participants.push(post.author);
                    }
                    participants.sort();
                    participants.dedup();
                    Ok(Some((cek, participants)))
                } else {
                    Ok(None)
                }
            }
        }
    }

    /// Write our receipt slot for an encrypted post.
    /// `state` is the receipt state, `emoji` is optional (only used when state == Reacted).
    pub async fn write_receipt_slot(
        &self,
        post_id: PostId,
        state: crate::types::ReceiptState,
        emoji: Option<String>,
    ) -> anyhow::Result<()> {
        let (cek, participants) = self.get_post_cek_and_participants(&post_id).await?
            .ok_or_else(|| anyhow::anyhow!("not a participant of this encrypted post"))?;
        let slot_key = crypto::derive_slot_key(&cek);

        // Find our slot index (sorted participant position)
        let our_slot = participants.iter().position(|nid| nid == &self.node_id)
            .ok_or_else(|| anyhow::anyhow!("our node_id not found in participants"))?;

        // Build plaintext: [1 byte state][8 bytes timestamp_ms][23 bytes emoji+padding]
        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;
        let mut plaintext = [0u8; 32];
        plaintext[0] = state as u8;
        plaintext[1..9].copy_from_slice(&now.to_le_bytes());
        if let Some(ref emoji_str) = emoji {
            let emoji_bytes = emoji_str.as_bytes();
            let copy_len = emoji_bytes.len().min(23);
            plaintext[9..9 + copy_len].copy_from_slice(&emoji_bytes[..copy_len]);
        }

        let encrypted = crypto::encrypt_slot(&plaintext, &slot_key)?;

        // Update the BlobHeader
        let storage = self.storage.get().await;
        let header = storage.get_blob_header(&post_id)?;
        let mut blob_header = if let Some((json, _ts)) = header {
            serde_json::from_str::<crate::types::BlobHeader>(&json)
                .unwrap_or_else(|_| crate::types::BlobHeader {
                    post_id,
                    author: self.default_posting_id,
                    reactions: vec![],
                    comments: vec![],
                    policy: Default::default(),
                    updated_at: now,
                    thread_splits: vec![],
                    receipt_slots: vec![],
                    comment_slots: vec![],
                    prior_author: None,
                })
        } else {
            crate::types::BlobHeader {
                post_id,
                author: self.default_posting_id,
                reactions: vec![],
                comments: vec![],
                policy: Default::default(),
                updated_at: now,
                thread_splits: vec![],
                receipt_slots: vec![],
                comment_slots: vec![],
                prior_author: None,
            }
        };

        // Ensure enough slots exist
        while blob_header.receipt_slots.len() <= our_slot {
            blob_header.receipt_slots.push(crypto::random_slot_noise(64));
        }
        blob_header.receipt_slots[our_slot] = encrypted.clone();
        blob_header.updated_at = now;

        let header_json = serde_json::to_string(&blob_header)?;
        storage.store_blob_header(&post_id, &blob_header.author, &header_json, now)?;
        drop(storage);

        // Propagate via BlobHeaderDiff
        let diff = crate::protocol::BlobHeaderDiffPayload {
            post_id,
            author: self.default_posting_id,
            ops: vec![crate::types::BlobHeaderDiffOp::WriteReceiptSlot {
                post_id,
                slot_index: our_slot as u32,
                data: encrypted,
            }],
            timestamp_ms: now,
        };
        self.network.propagate_engagement_diff(&post_id, &diff, &self.node_id).await;

        Ok(())
    }

    /// Write a private comment to an encrypted post's comment slot.
    pub async fn write_comment_slot(
        &self,
        post_id: PostId,
        content: String,
    ) -> anyhow::Result<()> {
        let (cek, _participants) = self.get_post_cek_and_participants(&post_id).await?
            .ok_or_else(|| anyhow::anyhow!("not a participant of this encrypted post"))?;
        let slot_key = crypto::derive_slot_key(&cek);

        let now = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)?
            .as_millis() as u64;

        // Build plaintext: [32 bytes author_node_id][8 bytes timestamp_ms][216 bytes content+padding]
        let mut plaintext = [0u8; 256];
        plaintext[..32].copy_from_slice(&self.node_id);
        plaintext[32..40].copy_from_slice(&now.to_le_bytes());
        let content_bytes = content.as_bytes();
        let copy_len = content_bytes.len().min(216);
        plaintext[40..40 + copy_len].copy_from_slice(&content_bytes[..copy_len]);

        let encrypted = crypto::encrypt_slot(&plaintext, &slot_key)?;

        // Find first available comment slot or add new ones
        let storage = self.storage.get().await;
        let header = storage.get_blob_header(&post_id)?;
        let mut blob_header = if let Some((json, _ts)) = header {
            serde_json::from_str::<crate::types::BlobHeader>(&json)
                .unwrap_or_else(|_| crate::types::BlobHeader {
                    post_id,
                    author: self.default_posting_id,
                    reactions: vec![],
                    comments: vec![],
                    policy: Default::default(),
                    updated_at: now,
                    thread_splits: vec![],
                    receipt_slots: vec![],
                    comment_slots: vec![],
                    prior_author: None,
                })
        } else {
            crate::types::BlobHeader {
                post_id,
                author: self.default_posting_id,
                reactions: vec![],
                comments: vec![],
                policy: Default::default(),
                updated_at: now,
                thread_splits: vec![],
                receipt_slots: vec![],
                comment_slots: vec![],
                prior_author: None,
            }
        };

        // Try to find an empty slot by attempting decryption
        let mut target_index = None;
        for (i, slot) in blob_header.comment_slots.iter().enumerate() {
            if let Ok(decrypted) = crypto::decrypt_slot(slot, &slot_key) {
                // Check if all 256 plaintext bytes are zero (empty)
                if decrypted.len() == 256 && decrypted.iter().all(|&b| b == 0) {
                    target_index = Some(i);
                    break;
                }
            } else {
                // Cannot decrypt — could be random noise (empty), use it
                target_index = Some(i);
                break;
            }
        }

        let (slot_index, add_new) = if let Some(idx) = target_index {
            (idx, false)
        } else {
            // No available slots — add one
            let idx = blob_header.comment_slots.len();
            blob_header.comment_slots.push(crypto::random_slot_noise(256));
            (idx, true)
        };

        blob_header.comment_slots[slot_index] = encrypted.clone();
        blob_header.updated_at = now;

        let header_json = serde_json::to_string(&blob_header)?;
        storage.store_blob_header(&post_id, &blob_header.author, &header_json, now)?;
        drop(storage);

        // Propagate
        let op = if add_new {
            crate::types::BlobHeaderDiffOp::AddCommentSlots {
                post_id,
                count: 1,
                slots: vec![encrypted],
            }
        } else {
            crate::types::BlobHeaderDiffOp::WriteCommentSlot {
                post_id,
                slot_index: slot_index as u32,
                data: encrypted,
            }
        };

        let diff = crate::protocol::BlobHeaderDiffPayload {
            post_id,
            author: self.default_posting_id,
            ops: vec![op],
            timestamp_ms: now,
        };
        self.network.propagate_engagement_diff(&post_id, &diff, &self.node_id).await;

        Ok(())
    }

    /// Read and decrypt all receipt slots for an encrypted post.
    pub async fn read_receipt_slots(
        &self,
        post_id: PostId,
    ) -> anyhow::Result<Vec<crate::types::ReceiptSlotData>> {
        let (cek, participants) = self.get_post_cek_and_participants(&post_id).await?
            .ok_or_else(|| anyhow::anyhow!("not a participant of this encrypted post"))?;
        let slot_key = crypto::derive_slot_key(&cek);

        let storage = self.storage.get().await;
        let header = storage.get_blob_header(&post_id)?;
        drop(storage);

        let blob_header = match header {
            Some((json, _ts)) => serde_json::from_str::<crate::types::BlobHeader>(&json)?,
            None => return Ok(vec![]),
        };

        let mut results = Vec::new();
        for (i, slot) in blob_header.receipt_slots.iter().enumerate() {
            let participant_id = participants.get(i).copied();
            match crypto::decrypt_slot(slot, &slot_key) {
                Ok(plaintext) if plaintext.len() >= 9 => {
                    let state = crate::types::ReceiptState::from_u8(plaintext[0]);
                    let timestamp_ms = u64::from_le_bytes(
                        plaintext[1..9].try_into().unwrap_or([0u8; 8]),
                    );
                    let emoji = if state == crate::types::ReceiptState::Reacted && plaintext.len() >= 32 {
                        let emoji_bytes = &plaintext[9..32];
                        let end = emoji_bytes.iter().position(|&b| b == 0).unwrap_or(23);
                        if end > 0 {
                            String::from_utf8(emoji_bytes[..end].to_vec()).ok()
                        } else {
                            None
                        }
                    } else {
                        None
                    };
                    results.push(crate::types::ReceiptSlotData {
                        slot_index: i as u32,
                        node_id: participant_id,
                        state,
                        timestamp_ms,
                        emoji,
                    });
                }
                _ => {
                    // Could not decrypt — noise/uninitialized slot
                    results.push(crate::types::ReceiptSlotData {
                        slot_index: i as u32,
                        node_id: participant_id,
                        state: crate::types::ReceiptState::Empty,
                        timestamp_ms: 0,
                        emoji: None,
                    });
                }
            }
        }

        Ok(results)
    }

    /// Read and decrypt all comment slots for an encrypted post.
    pub async fn read_comment_slots(
        &self,
        post_id: PostId,
    ) -> anyhow::Result<Vec<crate::types::CommentSlotData>> {
        let (cek, _participants) = self.get_post_cek_and_participants(&post_id).await?
            .ok_or_else(|| anyhow::anyhow!("not a participant of this encrypted post"))?;
        let slot_key = crypto::derive_slot_key(&cek);

        let storage = self.storage.get().await;
        let header = storage.get_blob_header(&post_id)?;
        drop(storage);

        let blob_header = match header {
            Some((json, _ts)) => serde_json::from_str::<crate::types::BlobHeader>(&json)?,
            None => return Ok(vec![]),
        };

        let mut results = Vec::new();
        for (i, slot) in blob_header.comment_slots.iter().enumerate() {
            match crypto::decrypt_slot(slot, &slot_key) {
                Ok(plaintext) if plaintext.len() >= 40 => {
                    // Check if it's an empty slot (all zeros)
                    if plaintext.iter().all(|&b| b == 0) {
                        continue;
                    }
                    let mut author = [0u8; 32];
                    author.copy_from_slice(&plaintext[..32]);
                    // Skip if author is all zeros (empty)
                    if author == [0u8; 32] {
                        continue;
                    }
                    let timestamp_ms = u64::from_le_bytes(
                        plaintext[32..40].try_into().unwrap_or([0u8; 8]),
                    );
                    let content_bytes = &plaintext[40..];
                    let end = content_bytes.iter().position(|&b| b == 0).unwrap_or(content_bytes.len());
                    let content = String::from_utf8_lossy(&content_bytes[..end]).to_string();

                    results.push(crate::types::CommentSlotData {
                        slot_index: i as u32,
                        author,
                        timestamp_ms,
                        content,
                    });
                }
                _ => {
                    // Cannot decrypt or too short — skip
                }
            }
        }

        results.sort_by_key(|c| c.timestamp_ms);
        Ok(results)
    }
}

pub struct NodeStats {
    pub post_count: usize,
    pub peer_count: usize,
    pub follow_count: usize,
}

/// Standalone priority scoring for testing.
/// score = pin_boost + share_boost
///       + (relationship × heart_recency × freshness / (peer_copies + 1))
pub fn compute_blob_priority_standalone(
    candidate: &crate::storage::EvictionCandidate,
    our_node_id: &NodeId,
    follows: &[NodeId],
    now_ms: u64,
) -> f64 {
    let pin_boost = if candidate.pinned { 1000.0 } else { 0.0 };

    // Share-link popularity boost: high downstream count indicates the blob
    // has been shared via share links and is actively being served to others.
    let share_boost = if candidate.downstream_count >= 3 {
        100.0
    } else if candidate.downstream_count >= 1 {
        50.0 * candidate.downstream_count as f64 / 3.0
    } else {
        0.0
    };

    // v0.6.2: audience removed. Relationship is author-of-ours vs followed vs other.
    let relationship = if candidate.author == *our_node_id {
        5.0
    } else if follows.contains(&candidate.author) {
        2.0
    } else {
        0.1
    };

    let thirty_days_ms = 30u64 * 24 * 3600 * 1000;
    let access_age_ms = now_ms.saturating_sub(candidate.last_accessed_at);
    let heart_recency = (1.0 - (access_age_ms as f64 / thirty_days_ms as f64)).max(0.0);

    let post_age_days = now_ms.saturating_sub(candidate.created_at) as f64 / (24.0 * 3600.0 * 1000.0);
    let freshness = 1.0 / (1.0 + post_age_days);

    let copies_factor = 1.0 / (candidate.peer_copies as f64 + 1.0);

    pin_boost + share_boost + (relationship * heart_recency * freshness * copies_factor)
}

// --- Active Replication Cycle ---

impl Node {
    /// Start the active replication cycle: periodically ask peers to hold our
    /// under-replicated recent content. All device roles initiate.
    pub fn start_replication_cycle(self: &Arc<Self>, interval_secs: u64) -> tokio::task::JoinHandle<()> {
        let node = Arc::clone(self);
        tokio::spawn(async move {
            // Wait 2 minutes before first cycle (let connections establish)
            tokio::time::sleep(std::time::Duration::from_secs(120)).await;
            let mut interval = tokio::time::interval(std::time::Duration::from_secs(interval_secs));
            loop {
                interval.tick().await;
                node.run_replication_check().await;
            }
        })
    }

    /// Single replication check iteration.
    async fn run_replication_check(&self) {
        // All devices initiate replication — phones need their content replicated
        // before they go to sleep.

        // Get own posts < 72h old
        let seventy_two_hours_ms = 72u64 * 3600 * 1000;
        let now_ms = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default()
            .as_millis() as u64;
        let since_ms = now_ms.saturating_sub(seventy_two_hours_ms);

        // Get connected peers first (no storage lock needed)
        let connected = self.network.connected_peers().await;
        if connected.is_empty() {
            debug!("No peers for replication");
            return;
        }

        // Priority: Available (desktops) > Persistent (anchors) > Intermittent (phones)
        let role_priority = |role: &DeviceRole| -> u16 {
            match role {
                DeviceRole::Available => 300,    // desktops — best replication targets
                DeviceRole::Persistent => 200,   // anchors — good but save for web
                DeviceRole::Intermittent => 100, // phones — last resort but still useful
            }
        };

        // Single lock: get under-replicated posts AND peer roles/pressure
        let (under_replicated, suitable_peers) = {
            let storage = self.storage.get().await;
            let recent_ids = match storage.get_own_recent_post_ids(&self.node_id, since_ms) {
                Ok(ids) => ids,
                Err(e) => {
                    debug!(error = %e, "Replication: failed to get own recent posts");
                    return;
                }
            };

            // Filter to under-replicated (< 2 holders)
            let mut needs_replication = Vec::new();
            for pid in &recent_ids {
                match storage.get_file_holder_count(pid) {
                    Ok(count) if count < 2 => {
                        needs_replication.push(*pid);
                    }
                    _ => {}
                }
            }

            // Get peer roles + cache pressure in same lock
            let mut candidates = Vec::new();
            for peer_id in &connected {
                if *peer_id == self.node_id { continue; }
                let role_str = storage.get_peer_device_role(peer_id)
                    .ok()
                    .flatten()
                    .unwrap_or_default();
                let role = DeviceRole::from_str_label(&role_str);
                let pressure = storage.get_peer_cache_pressure(peer_id)
                    .ok()
                    .flatten()
                    .unwrap_or(128) as u16;
                // Combined score: role priority + cache pressure
                let score = role_priority(&role) + pressure;
                candidates.push((*peer_id, score));
            }

            (needs_replication, candidates)
        };

        // If none need replication, skip silently
        if under_replicated.is_empty() {
            return;
        }

        if suitable_peers.is_empty() {
            debug!("No peers available for replication");
            return;
        }

        // Pick best candidate (highest combined score)
        let best_peer = suitable_peers
            .iter()
            .max_by_key(|(_, score)| *score)
            .map(|(id, _)| *id)
            .unwrap();

        // Cap at 20 post IDs per request, one request per cycle
        let batch: Vec<PostId> = under_replicated.into_iter().take(20).collect();
        let batch_len = batch.len();

        // Send ReplicationRequest
        match self.network.send_replication_request(&best_peer, batch, 128).await {
            Ok(accepted) => {
                if accepted.is_empty() {
                    debug!(
                        peer = hex::encode(best_peer),
                        "Replication: peer rejected all posts"
                    );
                } else {
                    debug!(
                        peer = hex::encode(best_peer),
                        accepted = accepted.len(),
                        requested = batch_len,
                        "Replication: peer accepted posts"
                    );
                }
            }
            Err(e) => {
                debug!(
                    peer = hex::encode(best_peer),
                    error = %e,
                    "Replication: request failed"
                );
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::storage::EvictionCandidate;

    fn make_node_id(byte: u8) -> NodeId {
        [byte; 32]
    }

    fn make_candidate(
        author: NodeId,
        pinned: bool,
        created_at: u64,
        last_accessed_at: u64,
        peer_copies: u32,
    ) -> EvictionCandidate {
        EvictionCandidate {
            cid: [0u8; 32],
            post_id: [0u8; 32],
            author,
            size_bytes: 1000,
            created_at,
            last_accessed_at,
            pinned,
            peer_copies,
            downstream_count: 0,
        }
    }

    #[test]
    fn own_pinned_scores_highest() {
        let our_id = make_node_id(1);
        let now = 10_000_000_000u64; // ~115 days in ms
        let candidate = make_candidate(our_id, true, now - 86400_000, now, 0);

        let score = compute_blob_priority_standalone(
            &candidate, &our_id, &[], now,
        );
        assert!(score > 1000.0, "own pinned should score >1000, got {}", score);
    }
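
    // Sketch test (added): exercises the share-link boost branch of
    // compute_blob_priority_standalone in isolation. Assumes the
    // `downstream_count` field of EvictionCandidate is directly assignable
    // in-module, as its struct-literal use in `make_candidate` suggests.
    #[test]
    fn share_boost_raises_priority_for_shared_blobs() {
        let our_id = make_node_id(1);
        let stranger_id = make_node_id(7);
        let now = 10_000_000_000u64;

        // Identical candidates except for downstream_count.
        let unshared = make_candidate(stranger_id, false, now - 86400_000, now, 0);
        let mut shared = make_candidate(stranger_id, false, now - 86400_000, now, 0);
        shared.downstream_count = 3;

        let unshared_score = compute_blob_priority_standalone(&unshared, &our_id, &[], now);
        let shared_score = compute_blob_priority_standalone(&shared, &our_id, &[], now);

        // Everything else held equal, the gap is the flat 100.0 share boost.
        assert!((shared_score - unshared_score - 100.0).abs() < 1e-9,
            "expected ~100.0 share boost, got {}", shared_score - unshared_score);
    }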

    #[test]
    fn follow_recent_scores_higher_than_stranger_stale() {
        let our_id = make_node_id(1);
        let follow_id = make_node_id(2);
        let stranger_id = make_node_id(3);
        let now = 10_000_000_000u64;

        let follow_candidate = make_candidate(follow_id, false, now - 86400_000, now, 0);
        let follow_score = compute_blob_priority_standalone(
            &follow_candidate, &our_id, &[follow_id], now,
        );

        let stranger_candidate = make_candidate(
            stranger_id, false,
            now - 10 * 86400_000,
            now - 20 * 86400_000,
            5,
        );
        let stranger_score = compute_blob_priority_standalone(
            &stranger_candidate, &our_id, &[], now,
        );

        assert!(follow_score > stranger_score,
            "follow recent ({}) should score higher than stranger stale ({})",
            follow_score, stranger_score);
    }

    #[test]
    fn no_relationship_scores_near_zero() {
        let our_id = make_node_id(1);
        let stranger = make_node_id(99);
        let now = 10_000_000_000u64;

        let candidate = make_candidate(
            stranger, false,
            now - 30 * 86400_000,
            now - 30 * 86400_000,
            10,
        );
        let score = compute_blob_priority_standalone(
            &candidate, &our_id, &[], now,
        );

        assert!(score < 0.01, "stranger stale should score near 0, got {}", score);
    }

    #[test]
    fn priority_ordering() {
        let our_id = make_node_id(1);
        let follow_id = make_node_id(2);
        let stranger_id = make_node_id(4);
        let now = 10_000_000_000u64;

        let own = make_candidate(our_id, true, now - 86400_000, now, 0);
        let follow = make_candidate(follow_id, false, now - 86400_000, now, 0);
        let stranger = make_candidate(stranger_id, false, now - 30 * 86400_000, now - 30 * 86400_000, 10);

        let own_score = compute_blob_priority_standalone(&own, &our_id, &[follow_id], now);
        let follow_score = compute_blob_priority_standalone(&follow, &our_id, &[follow_id], now);
        let stranger_score = compute_blob_priority_standalone(&stranger, &our_id, &[follow_id], now);

        assert!(own_score > follow_score, "own ({}) > follow ({})", own_score, follow_score);
        assert!(follow_score > stranger_score, "follow ({}) > stranger ({})", follow_score, stranger_score);
    }
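
    // Sketch test (added): checks that wide replication dampens priority via
    // the copies_factor term 1 / (peer_copies + 1). Uses only the helpers
    // already defined in this module; everything except peer_copies is held equal.
    #[test]
    fn more_peer_copies_lowers_priority() {
        let our_id = make_node_id(1);
        let follow_id = make_node_id(2);
        let now = 10_000_000_000u64;

        let rare = make_candidate(follow_id, false, now - 86400_000, now, 0);
        let common = make_candidate(follow_id, false, now - 86400_000, now, 9);

        let rare_score = compute_blob_priority_standalone(&rare, &our_id, &[follow_id], now);
        let common_score = compute_blob_priority_standalone(&common, &our_id, &[follow_id], now);

        // copies_factor is 1.0 for the rare copy vs 0.1 for the common one.
        assert!(rare_score > common_score,
            "rare ({}) should outrank widely replicated ({})", rare_score, common_score);
    }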
}