v0.3.6: Active CDN replication, device roles, budgets, tombstones, engagement fix, DoS hardening

Active CDN replication:
- All devices proactively replicate recent posts (<72h old, <2 known replicas) to peers
- Target priority: desktops (300) > anchors (200) > phones (100), combined with each peer's cache_pressure score (see the sketch after this list)
- ReplicationRequest/Response (0xE1/0xE2) wire messages
- 10-min cycle, 2-min initial delay, cap of 20 posts per request
- Degrades gracefully on small networks (1 peer = 1 replica, 0 peers = silent skip)
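
A minimal sketch of the target-scoring idea described above, with hypothetical `DeviceRole`/`PeerInfo` types. The commit does not say how cache_pressure enters the score, so treating higher pressure as less attractive is an assumption:

```rust
/// Illustrative sketch only; real type and field names may differ.
#[derive(Clone, Copy)]
enum DeviceRole {
    Intermittent, // phone
    Available,    // desktop
    Persistent,   // anchor
}

struct PeerInfo {
    role: DeviceRole,
    cache_pressure: u8, // 0-255, advertised by the peer
}

/// Higher score = preferred replication target.
fn replication_score(peer: &PeerInfo) -> u32 {
    let base: u32 = match peer.role {
        DeviceRole::Available => 300,    // desktops first
        DeviceRole::Persistent => 200,   // then anchors
        DeviceRole::Intermittent => 100, // phones last
    };
    // Assumption: fuller caches make a peer a worse target.
    base.saturating_sub(peer.cache_pressure as u32)
}
```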

Device roles & budgets:
- Intermittent (phone), Available (desktop), Persistent (anchor)
- Advertised in InitialExchange, stored per-peer
- Replication budget: phones 100 MB/hr, desktops/anchors 200 MB/hr
- Delivery budget: phones 1 GB/hr, desktops 2 GB/hr, anchors 1 GB/hr
- Hourly auto-reset; enforced when serving blobs (sketch below)
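
A minimal sketch of an hourly byte budget with auto-reset; `HourlyBudget` and `try_spend` are illustrative names, not the real API:

```rust
use std::time::{Duration, Instant};

/// Illustrative sketch of a per-role hourly byte budget.
struct HourlyBudget {
    limit_bytes: u64,
    spent_bytes: u64,
    window_start: Instant,
}

impl HourlyBudget {
    fn new(limit_bytes: u64) -> Self {
        Self { limit_bytes, spent_bytes: 0, window_start: Instant::now() }
    }

    /// Records the spend and returns true if `bytes` still fits this hour.
    fn try_spend(&mut self, bytes: u64) -> bool {
        // Hourly auto-reset: open a fresh window once an hour has elapsed.
        if self.window_start.elapsed() >= Duration::from_secs(3600) {
            self.spent_bytes = 0;
            self.window_start = Instant::now();
        }
        if self.spent_bytes + bytes <= self.limit_bytes {
            self.spent_bytes += bytes;
            true
        } else {
            false
        }
    }
}
```

For example, a desktop's delivery budget would be `HourlyBudget::new(2 * 1024 * 1024 * 1024)`, checked before each blob is served.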

Cache management:
- 1 GB default cache limit, configurable in the settings UI
- Eviction cycle now actually started (it was implemented but never scheduled)
- Share-link priority boost (+100 for posts with 3+ downstream registrations)
- Cache pressure score (0-255) advertised for replication targeting (sketch below)
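
One plausible way to derive the advertised 0-255 score from the numbers exposed by `get_cache_stats` (the linear mapping is an assumption):

```rust
/// Illustrative sketch: map cache fullness onto the 0-255 pressure score.
fn cache_pressure(used_bytes: u64, max_bytes: u64) -> u8 {
    if max_bytes == 0 {
        return 255; // no cache budget configured: report maximum pressure
    }
    let ratio = used_bytes as f64 / max_bytes as f64;
    (ratio.clamp(0.0, 1.0) * 255.0) as u8
}
```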

Engagement distribution fix:
- BlobHeader JSON is rebuilt after BlobHeaderDiff ops are applied (sketch below)
- Previously, reactions/comments were written to their tables but the serialized header stayed stale
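
A sketch of the shape of the fix; every name here is an illustrative stub. The point is the final step: re-serialize the header after applying ops instead of leaving the stored JSON stale:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Engagement {
    reactions: u32,
    comments: u32,
}

struct PostRecord {
    header_json: String, // the BlobHeader JSON that gets distributed
    engagement: Engagement,
}

fn apply_header_diff(post: &mut PostRecord, new_reactions: u32, new_comments: u32) {
    // 1. Apply the diff ops to the underlying state (tables, in the real code).
    post.engagement.reactions += new_reactions;
    post.engagement.comments += new_comments;
    // 2. Rebuild the header JSON so distributed copies see the new counts.
    post.header_json =
        serde_json::to_string(&post.engagement).expect("engagement serializes");
}
```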

Tombstone system:
- deleted_at column on the reactions and comments tables
- Tombstones propagate through pull sync (the additive merge respects timestamps)
- UI queries filter with WHERE deleted_at IS NULL (sketch below)
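
A sketch of the soft-delete pattern, shown with rusqlite; the actual storage driver, the `id`/`post_id` column names, and millisecond timestamps are all assumptions:

```rust
use rusqlite::{params, Connection, Result};

/// Soft-delete: set deleted_at instead of removing the row, so the
/// tombstone can propagate to peers through pull sync.
fn delete_reaction(conn: &Connection, reaction_id: &str, now_ms: u64) -> Result<()> {
    conn.execute(
        "UPDATE reactions SET deleted_at = ?1 WHERE id = ?2",
        params![now_ms, reaction_id],
    )?;
    Ok(())
}

/// UI reads filter tombstones out.
fn visible_reaction_count(conn: &Connection, post_id: &str) -> Result<u32> {
    conn.query_row(
        "SELECT COUNT(*) FROM reactions WHERE post_id = ?1 AND deleted_at IS NULL",
        params![post_id],
        |row| row.get(0),
    )
}
```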

Persistent notifications:
- seen_engagement and seen_messages tables replace in-memory Sets
- Notifications fire only for genuinely unseen content and survive restarts (sketch below)
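
A sketch of the resulting check, built on the persisted counters (`get_seen_engagement` in the diff below); the comparison logic is an assumption:

```rust
/// A restart no longer re-fires notifications: the seen counters are
/// read from the database rather than an in-memory Set.
fn should_notify(
    current_reacts: u32,
    current_comments: u32,
    seen_reacts: u32,
    seen_comments: u32,
) -> bool {
    current_reacts > seen_reacts || current_comments > seen_comments
}
```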

DoS hardening:
- BlobHeaderDiff fan-out: single batched task, max 10 concurrent sends via JoinSet (sketch below)
- Blob prefetch: capped at 20 per cycle, newest first
- PostDownstreamRegister: capped at 50 per sync
- Delivery budget enforced in the BlobRequest handler
- Pull preference: non-anchors first, preserving anchor delivery budget
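
A minimal sketch of the bounded fan-out pattern with `tokio::task::JoinSet`; `Peer`, `Diff`, and `send_diff` are stand-ins for the real send path:

```rust
use tokio::task::JoinSet;

#[derive(Clone)]
struct Diff(Vec<u8>); // stand-in for the BlobHeaderDiff payload
struct Peer(String);  // stand-in for a connected-peer handle

async fn send_diff(_peer: Peer, _diff: Diff) { /* real send elided */ }

/// Fan a diff out to all peers with at most 10 sends in flight,
/// so one popular post cannot spawn an unbounded number of tasks.
async fn fan_out_diff(peers: Vec<Peer>, diff: Diff) {
    const MAX_CONCURRENT: usize = 10;
    let mut set = JoinSet::new();
    for peer in peers {
        // Before exceeding the cap, wait for one in-flight send to finish.
        while set.len() >= MAX_CONCURRENT {
            let _ = set.join_next().await;
        }
        let diff = diff.clone();
        set.spawn(send_diff(peer, diff));
    }
    while set.join_next().await.is_some() {} // drain remaining sends
}
```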

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Scott Reimers 2026-03-20 21:00:28 -04:00
parent b7f2d369fa
commit a7e632de88
16 changed files with 1254 additions and 158 deletions

@@ -1,6 +1,6 @@
 [package]
 name = "itsgoin-desktop"
-version = "0.3.5"
+version = "0.3.6"
 edition = "2021"
 
 [lib]

@@ -1328,6 +1328,25 @@ async fn get_public_visible(
         .map_err(|e| e.to_string())
 }
 
+#[derive(Serialize)]
+#[serde(rename_all = "camelCase")]
+struct CacheStatsDto {
+    used_bytes: u64,
+    max_bytes: u64,
+    blob_count: u64,
+}
+
+#[tauri::command]
+async fn get_cache_stats(state: State<'_, AppState>) -> Result<CacheStatsDto, String> {
+    let node = state.inner();
+    let (used, max, count) = node.get_cache_stats().await.map_err(|e| e.to_string())?;
+    Ok(CacheStatsDto {
+        used_bytes: used,
+        max_bytes: max,
+        blob_count: count,
+    })
+}
+
 #[tauri::command]
 async fn get_setting(state: State<'_, AppState>, key: String) -> Result<Option<String>, String> {
     let node = state.inner();
@@ -1340,6 +1359,56 @@ async fn set_setting(state: State<'_, AppState>, key: String, value: String) ->
     node.set_setting(&key, &value).await.map_err(|e| e.to_string())
 }
 
+#[tauri::command]
+async fn mark_post_seen(
+    state: State<'_, AppState>,
+    post_id: String,
+    react_count: u32,
+    comment_count: u32,
+) -> Result<(), String> {
+    let node = state.inner();
+    let pid = hex_to_postid(&post_id)?;
+    node.set_seen_engagement(&pid, react_count, comment_count).await.map_err(|e| e.to_string())
+}
+
+#[tauri::command]
+async fn mark_conversation_read(
+    state: State<'_, AppState>,
+    partner_id: String,
+) -> Result<(), String> {
+    let node = state.inner();
+    let nid = parse_node_id(&partner_id)?;
+    let now_ms = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .map(|d| d.as_millis() as u64)
+        .unwrap_or(0);
+    node.set_last_read_message(&nid, now_ms).await.map_err(|e| e.to_string())
+}
+
+#[tauri::command]
+async fn get_seen_engagement(
+    state: State<'_, AppState>,
+    post_id: String,
+) -> Result<serde_json::Value, String> {
+    let node = state.inner();
+    let pid = hex_to_postid(&post_id)?;
+    let (rc, cc) = node.get_seen_engagement(&pid).await.map_err(|e| e.to_string())?;
+    Ok(serde_json::json!({
+        "seenReactCount": rc,
+        "seenCommentCount": cc,
+    }))
+}
+
+#[tauri::command]
+async fn get_last_read_message(
+    state: State<'_, AppState>,
+    partner_id_hex: String,
+) -> Result<u64, String> {
+    let node = state.inner();
+    let nid = parse_node_id(&partner_id_hex)?;
+    node.get_last_read_message(&nid).await.map_err(|e| e.to_string())
+}
+
 #[tauri::command]
 async fn generate_share_link(state: State<'_, AppState>, post_id_hex: String) -> Result<Option<String>, String> {
     let node = state.inner();
@@ -1890,6 +1959,18 @@ pub fn run() {
             n.start_upnp_tcp_renewal_cycle(); // UPnP TCP lease renewal (for HTTP serving)
             n.start_http_server(); // HTTP post delivery (if publicly reachable)
             n.start_bootstrap_connectivity_check(); // 24h isolation check
+            n.start_replication_cycle(600); // 10 min active replication
+
+            // Start blob eviction cycle (every 5 min)
+            let cache_max_bytes: u64 = {
+                let storage = n.storage.lock().await;
+                storage.get_setting("cache_size_bytes")
+                    .ok()
+                    .flatten()
+                    .and_then(|s| s.parse().ok())
+                    .unwrap_or(1_073_741_824u64) // default 1 GB
+            };
+            Node::start_eviction_cycle(Arc::clone(&n), 300, cache_max_bytes);
 
             Ok::<_, anyhow::Error>(n)
         })?;
@@ -1964,8 +2045,13 @@ pub fn run() {
             write_message_comment,
             get_message_receipts,
             get_message_comments,
+            get_cache_stats,
             get_setting,
             set_setting,
+            mark_post_seen,
+            mark_conversation_read,
+            get_seen_engagement,
+            get_last_read_message,
             generate_share_link,
         ])
         .build(tauri::generate_context!())

@@ -1,6 +1,6 @@
 {
   "productName": "itsgoin",
-  "version": "0.3.5",
+  "version": "0.3.6",
   "identifier": "com.itsgoin.app",
   "build": {
     "frontendDist": "../../frontend",