Platform: Reset wipe, empty name, Android browse + backup-off, import as personas

Reset All Data:
- Sentinel is now written to the app-level data_dir instead of the
  active identity's subdir. On Android the subdir path was never
  checked at startup, so reset silently did nothing.
- On detection, wipe EVERYTHING under the app data_dir: identity.key,
  itsgoin.db + WAL + SHM, blobs, all identity subdirs. Next launch
  is truly fresh — new network key, new posting key, no prior data.
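
  The startup check described above can be sketched as follows. This is
  a minimal illustration, not the actual implementation: the sentinel
  file name (`reset-requested`) and the helper name are assumptions.

  ```rust
  use std::path::Path;

  /// If the reset sentinel exists under the app-level data_dir, wipe
  /// everything in that directory (identity.key, db + WAL/SHM, blobs,
  /// identity subdirs) so the next launch starts truly fresh.
  /// Returns true if a wipe was performed.
  fn check_and_wipe(data_dir: &Path) -> std::io::Result<bool> {
      // Sentinel name is illustrative; the real codebase may differ.
      let sentinel = data_dir.join("reset-requested");
      if !sentinel.exists() {
          return Ok(false);
      }
      // Remove every entry under data_dir, files and subdirs alike.
      // The sentinel itself lives in data_dir, so it is wiped too.
      for entry in std::fs::read_dir(data_dir)? {
          let path = entry?.path();
          if path.is_dir() {
              std::fs::remove_dir_all(&path)?;
          } else {
              std::fs::remove_file(&path)?;
          }
      }
      Ok(true)
  }
  ```

  Wiping at the data_dir level rather than per-identity is what makes
  the reset cover the network key and all identity subdirs at once.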

First-run name:
- Display name is optional. Blank submits as anonymous.
- First-run modal + profile overlay placeholder updated to say
  "Display name (optional)".

Android file picker:
- pick_file on Android now uses tauri-plugin-android-fs'
  show_open_file_dialog (Storage Access Framework OPEN_DOCUMENT).
  It reads the picked URI's bytes, stages them in the app's private
  cache as a timestamped file, and returns the staged path so the
  existing import_* code can read it as a regular filesystem path.
- Zip filter passes application/zip + application/octet-stream (some
  file providers report the latter for .zip).
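
  The staging step can be sketched like this. The SAF read itself goes
  through tauri-plugin-android-fs and is omitted here; the function
  name and timestamped-file naming scheme are illustrative, not the
  project's actual code.

  ```rust
  use std::path::{Path, PathBuf};
  use std::time::{SystemTime, UNIX_EPOCH};

  /// Write bytes read from a SAF content URI into the app's private
  /// cache under a timestamped name, and return the staged path so
  /// downstream import code can treat it as a normal filesystem file.
  fn stage_picked_file(cache_dir: &Path, bytes: &[u8], ext: &str) -> std::io::Result<PathBuf> {
      let ts = SystemTime::now()
          .duration_since(UNIX_EPOCH)
          .map(|d| d.as_millis())
          .unwrap_or(0);
      // Timestamped name avoids collisions between successive picks.
      let staged = cache_dir.join(format!("picked-{ts}.{ext}"));
      std::fs::write(&staged, bytes)?;
      Ok(staged)
  }
  ```

  Staging into the private cache sidesteps SAF entirely for the rest of
  the import pipeline, which only knows how to read ordinary paths.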

Android auto-backup off:
- AndroidManifest: allowBackup="false", fullBackupContent="false",
  dataExtractionRules pointing at new data_extraction_rules.xml
- New data_extraction_rules.xml excludes all domains from both
  cloud-backup and device-transfer. Prior default (allowBackup=true)
  silently replicated identity.key to Google Drive for any user with
  cloud backup on — which effectively published the root secret to
  a third party without asking. Users who want off-device backup use
  Settings -> Export (explicit zip they control).
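
  The manifest attributes and rules file described above would look
  roughly like this. The attribute values and the resource name come
  from the description; the domain list is an assumption (excluding
  every standard domain, per the "excludes all domains" intent).

  ```xml
  <!-- AndroidManifest.xml: on the <application> element -->
  <application
      android:allowBackup="false"
      android:fullBackupContent="false"
      android:dataExtractionRules="@xml/data_extraction_rules">
  </application>

  <!-- res/xml/data_extraction_rules.xml: exclude everything from both
       cloud backup and device-to-device transfer (Android 12+) -->
  <data-extraction-rules>
      <cloud-backup>
          <exclude domain="root" />
          <exclude domain="file" />
          <exclude domain="database" />
          <exclude domain="sharedpref" />
          <exclude domain="external" />
      </cloud-backup>
      <device-transfer>
          <exclude domain="root" />
          <exclude domain="file" />
          <exclude domain="database" />
          <exclude domain="sharedpref" />
          <exclude domain="external" />
      </device-transfer>
  </data-extraction-rules>
  ```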

Import as personas:
- New import_as_personas function in core/import.rs + new
  import_as_personas_cmd Tauri IPC.
- Reads identity.key from the bundle and adds it to posting_identities
  as a persona. Also reads posting_identities.json (v0.6+ bundles)
  and adds each entry. Dedupes by node_id.
- Posts stay AS-AUTHORED — original post_id, original author,
  original signatures, original wrapped_key recipients. No
  re-encryption. Content encrypted to any of the imported keys
  becomes decryptable because we now hold the secrets.
- Blobs, follows, profiles copied across.
- If current device has <=1 posting identity (the fresh-install one)
  and the bundle brings more, auto-switch the default to the first
  imported persona. Covers first-run-then-import flow cleanly.

Import wizard UI:
- New default option: "Restore as personas" — posts keep original
  authors; source's keys become personas you can post as.
- Old "Merge with decryption key" retained as "Consolidate under
  current default persona (requires source key)" for the case where
  a user intentionally abandons a persona.
- "Public posts only" and "Add as separate identity" retained.

deploy.sh made executable (chmod +x tracked).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Scott Reimers 2026-04-22 17:40:21 -04:00
parent 4a1db1ce7f
commit 7e1e1dd738
7 changed files with 365 additions and 21 deletions


@@ -119,6 +119,235 @@ struct ParsedImport {
    skipped: usize,
}

/// Staged content from a bundle, ready to write into the current identity
/// without reparenting. Posts keep their original author/post_id/signatures;
/// the bundle's posting keys become personas on this device.
struct StagedImport {
    /// Posting identities to register (includes the bundle's identity.key and
    /// any entries from posting_identities.json). Deduped by node_id.
    posting_identities: Vec<PostingIdentity>,
    /// Posts in the form (post_id, Post, PostVisibility, blobs).
    posts: Vec<(crate::types::PostId, Post, PostVisibility, Vec<(Attachment, Vec<u8>)>)>,
    /// Follows to add to the current identity's follow list.
    follows: Vec<NodeId>,
    /// Profiles to upsert (keyed by their own node_id, which becomes one of
    /// our personas or a remote peer).
    profiles: Vec<crate::types::PublicProfile>,
}
/// Import a bundle as personas: add the source's posting keys to our
/// `posting_identities`, and insert their posts AS-AUTHORED (no reparenting).
/// Content encrypted to any of the imported keys becomes decryptable because
/// we now hold those secrets. Idempotent — duplicate post ids / posting keys
/// are skipped via ON CONFLICT handling.
pub async fn import_as_personas(
    zip_path: &Path,
    storage: &StoragePool,
    blob_store: &BlobStore,
) -> anyhow::Result<ImportResult> {
    let staged = {
        let zip_path = zip_path.to_path_buf();
        tokio::task::spawn_blocking(move || -> anyhow::Result<StagedImport> {
            let file = std::fs::File::open(&zip_path)?;
            let mut archive = zip::ZipArchive::new(file)?;

            // identity.key — the source device's primary key, which now
            // becomes a posting persona on our device.
            let mut posting_identities: Vec<PostingIdentity> = Vec::new();
            if let Ok(mut entry) = archive.by_name("itsgoin-export/identity.key") {
                let mut key_bytes = Vec::new();
                entry.read_to_end(&mut key_bytes)?;
                if key_bytes.len() == 32 {
                    let seed: [u8; 32] = key_bytes.as_slice().try_into().unwrap();
                    let sk = iroh::SecretKey::from_bytes(&seed);
                    let nid: NodeId = *sk.public().as_bytes();
                    let now = std::time::SystemTime::now()
                        .duration_since(std::time::UNIX_EPOCH)
                        .map(|d| d.as_millis() as u64)
                        .unwrap_or(0);
                    posting_identities.push(PostingIdentity {
                        node_id: nid,
                        secret_seed: seed,
                        display_name: String::new(),
                        created_at: now,
                    });
                }
            }
            // posting_identities.json (v0.6+ bundles) — additional personas.
            if let Ok(mut entry) = archive.by_name("itsgoin-export/posting_identities.json") {
                let mut buf = String::new();
                entry.read_to_string(&mut buf)?;
                if let Ok(ids) = serde_json::from_str::<Vec<PostingIdentity>>(&buf) {
                    for id in ids {
                        if !posting_identities.iter().any(|p| p.node_id == id.node_id) {
                            posting_identities.push(id);
                        }
                    }
                }
            }

            // posts.json
            let posts_raw: Vec<ExportedPost> = match archive.by_name("itsgoin-export/posts.json") {
                Ok(mut entry) => {
                    let mut buf = String::new();
                    entry.read_to_string(&mut buf)?;
                    serde_json::from_str(&buf).unwrap_or_default()
                }
                Err(_) => Vec::new(),
            };

            let mut staged_posts = Vec::new();
            for ep in &posts_raw {
                let id_bytes = hex::decode(&ep.id).unwrap_or_default();
                let post_id: crate::types::PostId = match id_bytes.as_slice().try_into() {
                    Ok(id) => id,
                    Err(_) => continue,
                };
                let author_bytes = hex::decode(&ep.author).unwrap_or_default();
                let author: NodeId = match author_bytes.as_slice().try_into() {
                    Ok(a) => a,
                    Err(_) => continue,
                };
                let attachments: Vec<Attachment> = serde_json::from_str(&ep.attachments_json)
                    .unwrap_or_default();
                let vis: PostVisibility = serde_json::from_str(&ep.visibility_json)
                    .unwrap_or(PostVisibility::Public);
                let post = Post {
                    author,
                    content: ep.content.clone(),
                    attachments: attachments.clone(),
                    timestamp_ms: ep.timestamp_ms,
                };

                // Read attached blobs.
                let mut blobs = Vec::new();
                for att in &attachments {
                    let path = format!("itsgoin-export/blobs/{}", hex::encode(att.cid));
                    if let Ok(mut blob_entry) = archive.by_name(&path) {
                        let mut data = Vec::new();
                        blob_entry.read_to_end(&mut data)?;
                        blobs.push((att.clone(), data));
                    }
                }
                staged_posts.push((post_id, post, vis, blobs));
            }
            // follows.json (optional)
            let follows: Vec<NodeId> = match archive.by_name("itsgoin-export/follows.json") {
                Ok(mut entry) => {
                    let mut buf = String::new();
                    entry.read_to_string(&mut buf)?;
                    serde_json::from_str::<Vec<String>>(&buf)
                        .unwrap_or_default()
                        .into_iter()
                        .filter_map(|s| {
                            let b = hex::decode(&s).ok()?;
                            <[u8; 32]>::try_from(b.as_slice()).ok()
                        })
                        .collect()
                }
                Err(_) => Vec::new(),
            };

            // profiles.json (optional)
            let profiles: Vec<crate::types::PublicProfile> =
                match archive.by_name("itsgoin-export/profiles.json") {
                    Ok(mut entry) => {
                        let mut buf = String::new();
                        entry.read_to_string(&mut buf)?;
                        serde_json::from_str(&buf).unwrap_or_default()
                    }
                    Err(_) => Vec::new(),
                };

            Ok(StagedImport {
                posting_identities,
                posts: staged_posts,
                follows,
                profiles,
            })
        })
        .await??
    };
    // Phase 2: write into storage.
    let mut imported_posts = 0usize;
    let mut imported_blobs = 0usize;
    let mut imported_personas = 0usize;
    let mut skipped_posts = 0usize;

    // Posting identities first so the decrypt-any-persona path (feed render)
    // can find them immediately after this import call returns. If the
    // current device has exactly one posting identity (typically the one
    // auto-created on first launch) and we're importing additional ones,
    // switch the default to the first imported persona — the user's intent is
    // to pick up where they left off, not to post under a fresh throwaway.
    {
        let s = storage.get().await;
        let prior_count = s.count_posting_identities().unwrap_or(0);
        for pi in &staged.posting_identities {
            s.upsert_posting_identity(pi)?;
            imported_personas += 1;
        }
        if prior_count <= 1 {
            if let Some(first) = staged.posting_identities.first() {
                let _ = s.set_default_posting_id(&first.node_id);
            }
        }
    }

    // Posts + blobs. Content keeps its original post_id, author, signatures.
    for (post_id, post, vis, blobs) in &staged.posts {
        let s = storage.get().await;
        if s.get_post(post_id).ok().flatten().is_some() {
            skipped_posts += 1;
            continue;
        }
        if crate::content::verify_post_id(post_id, post) {
            // Bundle doesn't carry intent — fall back to Public for public posts,
            // Friends for encrypted (closest match for re-surfacing via circles).
            let intent = match &vis {
                PostVisibility::Public => crate::types::VisibilityIntent::Public,
                _ => crate::types::VisibilityIntent::Friends,
            };
            s.store_post_with_intent(post_id, post, vis, &intent)?;
            imported_posts += 1;
        } else {
            warn!(post_id = hex::encode(post_id), "Skipping post with invalid signature during import");
            skipped_posts += 1;
            continue;
        }
        for (att, data) in blobs {
            if !blob_store.has(&att.cid) {
                blob_store.store(&att.cid, data)?;
            }
            s.record_blob(&att.cid, post_id, &post.author, data.len() as u64, &att.mime_type, post.timestamp_ms)?;
            let _ = s.pin_blob(&att.cid);
            imported_blobs += 1;
        }
    }

    // Follows + profiles.
    {
        let s = storage.get().await;
        for f in &staged.follows {
            let _ = s.add_follow(f);
        }
        for p in &staged.profiles {
            let _ = s.store_profile(p);
        }
    }

    Ok(ImportResult {
        posts_imported: imported_posts,
        posts_skipped: skipped_posts,
        blobs_imported: imported_blobs,
        message: format!(
            "Imported {} personas, {} posts ({} skipped), {} blobs",
            imported_personas, imported_posts, skipped_posts, imported_blobs
        ),
    })
}
/// Import public posts from a ZIP into the current identity.
/// Creates new posts with the current node_id as author, preserving original timestamps.
pub async fn import_public_posts(