v0.4.3: Lock contention overhaul, StoragePool, mobile bottom nav, text scaling

Eliminate all conn_mgr lock holds during network I/O across 14 actor commands
and bi-stream handlers. PostFetch, TcpPunch, PullFromPeer, FetchEngagement,
ResolveAddress, AnchorProbe use brief locks for data gathering only. WormLookup,
ContentSearch, WormQuery use connection snapshots for lock-free cascade fan-out.
RelayIntroduce extracts forwarding data under brief lock, does I/O outside.
BlobRequest, PostFetchRequest, ManifestRefresh use Arc clones instead of conn_mgr
lock. ConnectionActor hoists shared Arcs (storage, blob_store, endpoint) for
lock-free access. ResolveAddress adds a 5s per-query timeout (previously unbounded).
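The brief-lock snapshot pattern these commands share can be sketched roughly as follows. `ConnMgr`, `peers`, and the string "I/O" are illustrative stand-ins, not the real types; the actual handlers perform network I/O where this sketch formats strings:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical stand-in for the real connection manager: peer id -> address.
struct ConnMgr {
    peers: Mutex<HashMap<String, String>>,
}

impl ConnMgr {
    fn fan_out(&self) -> Vec<String> {
        // Brief lock: clone a snapshot of the peer list, nothing else.
        let snapshot: Vec<(String, String)> = {
            let guard = self.peers.lock().unwrap();
            guard.iter().map(|(k, v)| (k.clone(), v.clone())).collect()
        }; // guard dropped here, before any slow work

        // "I/O" runs lock-free on the snapshot; other tasks can take the
        // lock concurrently while this fan-out is in flight.
        snapshot
            .into_iter()
            .map(|(id, addr)| format!("query {id}@{addr}"))
            .collect()
    }
}
```

The key property is that the mutex guard never lives across the slow phase, so a stalled peer cannot block every other command behind the conn_mgr lock.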

Initial exchange failure now aborts the mesh upgrade (previously it silently
continued with a broken connection). connect_to_peer/connect_to_anchor now use a
consistent 15s timeout. Rebalance connects outside the lock via the
pending_connects pattern.
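A rough illustration of the pending_connects idea, with a hypothetical `Mesh` type standing in for the real actor state; the actual code performs network connects where this sketch returns strings:

```rust
use std::sync::Mutex;

// Hypothetical stand-in for the rebalancer's shared state.
struct Mesh {
    known: Mutex<Vec<String>>,
}

impl Mesh {
    fn rebalance(&self) -> Vec<String> {
        // Phase 1: under the lock, only decide what to do.
        let pending_connects: Vec<String> = {
            let guard = self.known.lock().unwrap();
            guard
                .iter()
                .filter(|addr| addr.starts_with("relay:"))
                .cloned()
                .collect()
        };

        // Phase 2: lock released; perform the slow "connects" (simulated
        // here) without holding any shared state.
        pending_connects
            .into_iter()
            .map(|addr| format!("connected {addr}"))
            .collect()
    }
}
```

Splitting decide-under-lock from act-outside-lock is what lets a 15s connect timeout elapse without freezing every other task that needs the same state.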

StoragePool: 8 concurrent SQLite connections in WAL mode replace the single
Mutex<Storage>. Reads run fully in parallel; writes serialize only at the SQLite
level. PRAGMA busy_timeout=5000 makes writers wait up to 5s under contention
instead of failing immediately.
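The pool's acquisition strategy can be demonstrated with a std-mutex analogue (the real StoragePool in the diff below uses tokio::sync::Mutex so guards stay Send across .await points; `Pool` here is a simplified stand-in):

```rust
use std::sync::{Mutex, MutexGuard};

// Simplified analogue of StoragePool's slot-scanning acquisition:
// try each slot without blocking, and only block on slot 0 as a
// last resort when every slot is busy.
struct Pool<T> {
    slots: Vec<Mutex<T>>,
}

impl<T> Pool<T> {
    fn get(&self) -> MutexGuard<'_, T> {
        for slot in &self.slots {
            if let Ok(guard) = slot.try_lock() {
                return guard;
            }
        }
        // All slots busy: fall back to a blocking lock on the first.
        self.slots[0].lock().unwrap()
    }
}
```

With slot 0 held, a second `get()` skips it and hands out slot 1, which is what lets concurrent readers proceed in parallel instead of queueing on one mutex.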

Mobile bottom nav bar (<=768px) with icon tabs. Text sizes: XS/S/M/L/XL
(75%/100%/125%/150%/200%), default M. localStorage persistence for instant
restore. Toast repositioned above mobile nav.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Scott Reimers 2026-03-22 21:35:38 -04:00
parent f17535d61d
commit 43adbbdf7d
15 changed files with 1546 additions and 618 deletions


@@ -30,6 +30,44 @@ pub struct Storage {
    conn: Connection,
}

/// Pool of Storage connections for concurrent SQLite access in WAL mode.
/// Each connection is independently locked — readers don't block each other.
/// Uses tokio::sync::Mutex so guards are Send (safe across .await points).
pub struct StoragePool {
    slots: Vec<tokio::sync::Mutex<Storage>>,
}

const STORAGE_POOL_SIZE: usize = 8;

impl StoragePool {
    /// Create a pool of Storage connections to the same database.
    pub fn open(path: impl AsRef<std::path::Path>) -> anyhow::Result<Self> {
        let mut slots = Vec::with_capacity(STORAGE_POOL_SIZE);
        // First connection does schema init + migration.
        let first = Storage::open(path.as_ref())?;
        slots.push(tokio::sync::Mutex::new(first));
        // Additional connections just open + set WAL mode (schema already exists).
        for _ in 1..STORAGE_POOL_SIZE {
            let conn = Connection::open(path.as_ref())?;
            conn.execute_batch("PRAGMA journal_mode=WAL; PRAGMA busy_timeout=5000;")?;
            slots.push(tokio::sync::Mutex::new(Storage { conn }));
        }
        Ok(Self { slots })
    }

    /// Get an available Storage connection. Tries each slot with try_lock;
    /// if all are busy, awaits the first (rare under normal load).
    pub async fn get(&self) -> tokio::sync::MutexGuard<'_, Storage> {
        for slot in &self.slots {
            if let Ok(guard) = slot.try_lock() {
                return guard;
            }
        }
        // All busy — await the first.
        self.slots[0].lock().await
    }
}
/// Current schema version. Bump this when making schema or data changes
/// that require migration. Old databases with a lower version will be migrated.
/// If the gap is too large (major version mismatch), the DB is reset instead.