Platform: Reset wipe, empty name, Android browse + backup-off, import as personas

Reset All Data:
- Sentinel is now written at the app-level data_dir instead of the
  active identity's subdir. The startup check reads the app-level
  path, so the subdir sentinel was never seen on Android and reset
  silently did nothing.
- On detection, wipe EVERYTHING under the app data_dir: identity.key,
  itsgoin.db + WAL + SHM, blobs, all identity subdirs. Next launch
  is truly fresh — new network key, new posting key, no prior data.
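
The startup side of this can be sketched as follows: a simplified, self-contained version of the wipe loop (the real code in this diff also logs and runs inside Tauri setup; the function name here is illustrative):

```rust
use std::fs;
use std::path::Path;

/// Simplified sketch of the startup reset check: if the `.reset`
/// sentinel exists at the app-level data dir, delete everything
/// under it (identity subdirs, DB + WAL/SHM, blobs), then the
/// sentinel itself, so the next launch starts truly fresh.
fn apply_reset_if_requested(app_data_dir: &Path) -> std::io::Result<()> {
    let sentinel = app_data_dir.join(".reset");
    if !sentinel.exists() {
        return Ok(());
    }
    for entry in fs::read_dir(app_data_dir)?.flatten() {
        let path = entry.path();
        if path == sentinel {
            continue; // remove the sentinel last
        }
        if path.is_dir() {
            let _ = fs::remove_dir_all(&path);
        } else {
            let _ = fs::remove_file(&path);
        }
    }
    fs::remove_file(&sentinel)
}
```

Wiping the parent directory rather than the identity subdir is what makes the reset catch all identities, not just the active one.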

First-run name:
- Display name is optional. Blank submits as anonymous.
- First-run modal + profile overlay placeholder updated to say
  "Display name (optional)".

Android file picker:
- pick_file on Android now uses tauri-plugin-android-fs'
  show_open_file_dialog (Storage Access Framework OPEN_DOCUMENT).
  Read the picked URI's bytes, stage them in the app's private cache
  as a timestamped file, return the staged path so existing
  import_* code can read it as a regular filesystem path.
- Zip filter passes application/zip + application/octet-stream (some
  file providers report the latter for .zip).

Android auto-backup off:
- AndroidManifest: allowBackup="false", fullBackupContent="false",
  dataExtractionRules pointing at new data_extraction_rules.xml
- New data_extraction_rules.xml excludes all domains from both
  cloud-backup and device-transfer. Prior default (allowBackup=true)
  silently replicated identity.key to Google Drive for any user with
  cloud backup on — which effectively published the root secret to
  a third party without asking. Users who want off-device backup use
  Settings -> Export (explicit zip they control).

Import as personas:
- New import_as_personas function in core/import.rs + new
  import_as_personas_cmd Tauri IPC.
- Reads identity.key from the bundle and adds it to posting_identities
  as a persona. Also reads posting_identities.json (v0.6+ bundles)
  and adds each entry. Dedupes by node_id.
- Posts stay AS-AUTHORED — original post_id, original author,
  original signatures, original wrapped_key recipients. No
  re-encryption. Content encrypted to any of the imported keys
  becomes decryptable because we now hold the secrets.
- Blobs, follows, profiles copied across.
- If current device has <=1 posting identity (the fresh-install one)
  and the bundle brings more, auto-switch the default to the first
  imported persona. Covers first-run-then-import flow cleanly.
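
The "decryptable because we now hold the secrets" claim reduces to a lookup at render time: match a post's author against the held posting identities and use that identity's seed. A toy sketch with stub types (the real lookup and `decrypt_post` call appear later in this diff):

```rust
type NodeId = [u8; 32];

struct PostingIdentity {
    node_id: NodeId,
    secret_seed: [u8; 32],
}

/// Find the held identity matching a post's author, if any.
/// Imported personas land in the same `posting_identities` list,
/// so posts authored under the old keys resolve here too.
fn seed_for_author<'a>(
    held: &'a [PostingIdentity],
    author: &NodeId,
) -> Option<&'a [u8; 32]> {
    held.iter()
        .find(|p| &p.node_id == author)
        .map(|p| &p.secret_seed)
}
```

Since posts are stored as-authored, no re-encryption is needed; only this author-to-seed resolution changes after import.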

Import wizard UI:
- New default option: "Restore as personas" — posts keep original
  authors; source's keys become personas you can post as.
- Old "Merge with decryption key" retained as "Consolidate under
  current default persona (requires source key)" for the case where
  a user intentionally abandons a persona.
- "Public posts only" and "Add as separate identity" retained.

deploy.sh made executable (chmod +x tracked).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Scott Reimers 2026-04-22 17:40:21 -04:00
parent 4a1db1ce7f
commit 7e1e1dd738
7 changed files with 365 additions and 21 deletions


@@ -119,6 +119,235 @@ struct ParsedImport {
skipped: usize,
}
/// Staged content from a bundle, ready to write into the current identity
/// without reparenting. Posts keep their original author/post_id/signatures;
/// the bundle's posting keys become personas on this device.
struct StagedImport {
/// Posting identities to register (includes the bundle's identity.key and
/// any entries from posting_identities.json). Deduped by node_id.
posting_identities: Vec<PostingIdentity>,
/// Posts in the form (post_id, Post, PostVisibility, blobs).
posts: Vec<(crate::types::PostId, Post, PostVisibility, Vec<(Attachment, Vec<u8>)>)>,
/// Follows to add to current identity's follow list.
follows: Vec<NodeId>,
/// Profiles to upsert (keyed by their own node_id, which becomes one of
/// our personas or a remote peer).
profiles: Vec<crate::types::PublicProfile>,
}
/// Import a bundle as personas: add the source's posting keys to our
/// `posting_identities`, and insert their posts AS-AUTHORED (no reparenting).
/// Content encrypted to any of the imported keys becomes decryptable because
/// we now hold those secrets. Idempotent — duplicate post ids / posting keys
/// are skipped via ON CONFLICT handling.
pub async fn import_as_personas(
zip_path: &Path,
storage: &StoragePool,
blob_store: &BlobStore,
) -> anyhow::Result<ImportResult> {
let staged = {
let zip_path = zip_path.to_path_buf();
tokio::task::spawn_blocking(move || -> anyhow::Result<StagedImport> {
let file = std::fs::File::open(&zip_path)?;
let mut archive = zip::ZipArchive::new(file)?;
// identity.key — the source device's primary key, which now
// becomes a posting persona on our device.
let mut posting_identities: Vec<PostingIdentity> = Vec::new();
if let Ok(mut entry) = archive.by_name("itsgoin-export/identity.key") {
let mut key_bytes = Vec::new();
entry.read_to_end(&mut key_bytes)?;
if key_bytes.len() == 32 {
let seed: [u8; 32] = key_bytes.as_slice().try_into().unwrap();
let sk = iroh::SecretKey::from_bytes(&seed);
let nid: NodeId = *sk.public().as_bytes();
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_millis() as u64)
.unwrap_or(0);
posting_identities.push(PostingIdentity {
node_id: nid,
secret_seed: seed,
display_name: String::new(),
created_at: now,
});
}
}
// posting_identities.json (v0.6+ bundles) — additional personas.
if let Ok(mut entry) = archive.by_name("itsgoin-export/posting_identities.json") {
let mut buf = String::new();
entry.read_to_string(&mut buf)?;
if let Ok(ids) = serde_json::from_str::<Vec<PostingIdentity>>(&buf) {
for id in ids {
if !posting_identities.iter().any(|p| p.node_id == id.node_id) {
posting_identities.push(id);
}
}
}
}
// posts.json
let posts_raw: Vec<ExportedPost> = match archive.by_name("itsgoin-export/posts.json") {
Ok(mut entry) => {
let mut buf = String::new();
entry.read_to_string(&mut buf)?;
serde_json::from_str(&buf).unwrap_or_default()
}
Err(_) => Vec::new(),
};
let mut staged_posts = Vec::new();
for ep in &posts_raw {
let id_bytes = hex::decode(&ep.id).unwrap_or_default();
let post_id: crate::types::PostId = match id_bytes.as_slice().try_into() {
Ok(id) => id,
Err(_) => continue,
};
let author_bytes = hex::decode(&ep.author).unwrap_or_default();
let author: NodeId = match author_bytes.as_slice().try_into() {
Ok(a) => a,
Err(_) => continue,
};
let attachments: Vec<Attachment> = serde_json::from_str(&ep.attachments_json)
.unwrap_or_default();
let vis: PostVisibility = serde_json::from_str(&ep.visibility_json)
.unwrap_or(PostVisibility::Public);
let post = Post {
author,
content: ep.content.clone(),
attachments: attachments.clone(),
timestamp_ms: ep.timestamp_ms,
};
// Read attached blobs
let mut blobs = Vec::new();
for att in &attachments {
let path = format!("itsgoin-export/blobs/{}", hex::encode(att.cid));
if let Ok(mut blob_entry) = archive.by_name(&path) {
let mut data = Vec::new();
blob_entry.read_to_end(&mut data)?;
blobs.push((att.clone(), data));
}
}
staged_posts.push((post_id, post, vis, blobs));
}
// follows.json (optional)
let follows: Vec<NodeId> = match archive.by_name("itsgoin-export/follows.json") {
Ok(mut entry) => {
let mut buf = String::new();
entry.read_to_string(&mut buf)?;
serde_json::from_str::<Vec<String>>(&buf)
.unwrap_or_default()
.into_iter()
.filter_map(|s| {
let b = hex::decode(&s).ok()?;
<[u8; 32]>::try_from(b.as_slice()).ok()
})
.collect()
}
Err(_) => Vec::new(),
};
// profiles.json (optional)
let profiles: Vec<crate::types::PublicProfile> = match archive.by_name("itsgoin-export/profiles.json") {
Ok(mut entry) => {
let mut buf = String::new();
entry.read_to_string(&mut buf)?;
serde_json::from_str(&buf).unwrap_or_default()
}
Err(_) => Vec::new(),
};
Ok(StagedImport {
posting_identities,
posts: staged_posts,
follows,
profiles,
})
}).await??
};
// Phase 2: write into storage.
let mut imported_posts = 0usize;
let mut imported_blobs = 0usize;
let mut imported_personas = 0usize;
let mut skipped_posts = 0usize;
// Posting identities first so the decrypt-any-persona path (feed render)
// can find them immediately after this import call returns. If the
// current device has exactly one posting identity (typically the one
// auto-created on first launch) and we're importing additional ones,
// switch the default to the first imported persona — the user's intent is
// to pick up where they left off, not to post under a fresh throwaway.
{
let s = storage.get().await;
let prior_count = s.count_posting_identities().unwrap_or(0);
for pi in &staged.posting_identities {
s.upsert_posting_identity(pi)?;
imported_personas += 1;
}
if prior_count <= 1 {
if let Some(first) = staged.posting_identities.first() {
let _ = s.set_default_posting_id(&first.node_id);
}
}
}
// Posts + blobs. Content keeps its original post_id, author, signatures.
for (post_id, post, vis, blobs) in &staged.posts {
let s = storage.get().await;
if s.get_post(post_id).ok().flatten().is_some() {
skipped_posts += 1;
continue;
}
if crate::content::verify_post_id(post_id, post) {
// Bundle doesn't carry intent — fall back to Public for public posts,
// Friends for encrypted (closest match for re-surfacing via circles).
let intent = match &vis {
PostVisibility::Public => crate::types::VisibilityIntent::Public,
_ => crate::types::VisibilityIntent::Friends,
};
s.store_post_with_intent(post_id, post, vis, &intent)?;
imported_posts += 1;
} else {
warn!(post_id = hex::encode(post_id), "Skipping post with invalid signature during import");
skipped_posts += 1;
continue;
}
for (att, data) in blobs {
if !blob_store.has(&att.cid) {
blob_store.store(&att.cid, data)?;
}
s.record_blob(&att.cid, post_id, &post.author, data.len() as u64, &att.mime_type, post.timestamp_ms)?;
let _ = s.pin_blob(&att.cid);
imported_blobs += 1;
}
}
// Follows + profiles.
{
let s = storage.get().await;
for f in &staged.follows {
let _ = s.add_follow(f);
}
for p in &staged.profiles {
let _ = s.store_profile(p);
}
}
Ok(ImportResult {
posts_imported: imported_posts,
posts_skipped: skipped_posts,
blobs_imported: imported_blobs,
message: format!(
"Imported {} personas, {} posts ({} skipped), {} blobs",
imported_personas, imported_posts, skipped_posts, imported_blobs
),
})
}
/// Import public posts from a ZIP into the current identity.
/// Creates new posts with the current node_id as author, preserving original timestamps.
pub async fn import_public_posts(


@@ -16,7 +16,10 @@
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:theme="@style/Theme.itsgoin_desktop"
android:usesCleartextTraffic="${usesCleartextTraffic}">
android:usesCleartextTraffic="${usesCleartextTraffic}"
android:allowBackup="false"
android:fullBackupContent="false"
android:dataExtractionRules="@xml/data_extraction_rules">
<activity
android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale|smallestScreenSize|screenLayout|uiMode"
android:launchMode="singleTask"


@@ -0,0 +1,29 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
Disable cloud backup and device-to-device transfer of app data.
The identity secret in identity.key grants full access to all of a user's
private content (DMs, encrypted posts, persona keys). Silently replicating
it to Google Drive / device-transfer without a conscious user action is not
an acceptable default. Users who want backup can use in-app
Settings -> Export, which produces a ZIP the user explicitly handles.
Android 12+ (API 31+) reads this file. Combined with allowBackup="false"
and fullBackupContent="false" in AndroidManifest.xml for older Android.
-->
<data-extraction-rules>
<cloud-backup>
<exclude domain="root" />
<exclude domain="file" />
<exclude domain="database" />
<exclude domain="sharedpref" />
<exclude domain="external" />
</cloud-backup>
<device-transfer>
<exclude domain="root" />
<exclude domain="file" />
<exclude domain="database" />
<exclude domain="sharedpref" />
<exclude domain="external" />
</device-transfer>
</data-extraction-rules>


@@ -299,7 +299,9 @@ async fn post_to_dto(
}
}
/// Decrypt a just-created post for immediate display.
/// Decrypt a just-created post for immediate display. The post was authored
/// by one of our held posting identities (default or a specific persona);
/// look up that identity's secret to decrypt.
async fn decrypt_just_created(
node: &Node,
post: &Post,
@@ -308,11 +310,15 @@ async fn decrypt_just_created(
match vis {
PostVisibility::Public => None,
PostVisibility::Encrypted { recipients } => {
let author_identity = {
let s = node.storage.get().await;
s.get_posting_identity(&post.author).ok().flatten()
}?;
itsgoin_core::crypto::decrypt_post(
&post.content,
&node.secret_seed_bytes(),
&node.node_id,
&node.node_id,
&author_identity.secret_seed,
&author_identity.node_id,
&author_identity.node_id,
recipients,
)
.ok()
@@ -1946,10 +1952,47 @@ async fn pick_file(app: tauri::AppHandle, title: String, filter_name: Option<Str
let path = builder.blocking_pick_file();
Ok(path.map(|p| p.to_string()))
}
#[cfg(any(target_os = "android", target_os = "ios"))]
#[cfg(target_os = "android")]
{
// Android: SAF "open document" dialog. The dialog returns a content
// URI, not a filesystem path, so we read the bytes via the plugin
// and stage them in the app's private cache so existing import code
// (which expects a path) can read the file normally.
let _ = (title,);
use tauri_plugin_android_fs::AndroidFsExt;
let mime_types: Vec<&str> = match filter_ext.as_deref() {
Some(exts) if exts.iter().any(|e| e == "zip") => vec!["application/zip", "application/octet-stream"],
_ => vec!["*/*"],
};
let api = app.android_fs();
let uris = api.show_open_file_dialog(None, &mime_types, false)
.map_err(|e| format!("Open dialog failed: {}", e))?;
let uri = match uris.into_iter().next() {
Some(u) => u,
None => return Ok(None),
};
let data = api.read(&uri).map_err(|e| format!("Read failed: {}", e))?;
// Stage in private cache so import_* can open it by path.
let cache_dir = app.path().app_cache_dir()
.map_err(|e| format!("no cache dir: {}", e))?;
std::fs::create_dir_all(&cache_dir).map_err(|e| e.to_string())?;
// Name includes a timestamp so repeated picks don't clobber.
let stamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_millis())
.unwrap_or(0);
let filename = match filter_ext.as_deref() {
Some(exts) if exts.iter().any(|e| e == "zip") => format!("import-{}.zip", stamp),
_ => format!("import-{}", stamp),
};
let dest = cache_dir.join(filename);
std::fs::write(&dest, &data).map_err(|e| format!("Stage write failed: {}", e))?;
Ok(Some(dest.to_string_lossy().to_string()))
}
#[cfg(target_os = "ios")]
{
let _ = (app, title, filter_name, filter_ext);
Ok(None) // Mobile: file picker not supported via this command
Ok(None)
}
}
@@ -2202,7 +2245,14 @@ async fn request_referrals(state: State<'_, AppNode>) -> Result<String, String>
#[tauri::command]
async fn reset_data(state: State<'_, AppNode>) -> Result<String, String> {
let node = get_node(&state).await;
let sentinel = node.data_dir.join(".reset");
// Write the sentinel at the APP-level data_dir (parent of the active
// identity's dir). The startup sentinel check runs at the same level.
// Earlier versions wrote to node.data_dir which is the identity subdir,
// making the check miss on Android.
let app_data_dir = node.data_dir.parent()
.ok_or_else(|| "no parent data dir".to_string())?
.to_path_buf();
let sentinel = app_data_dir.join(".reset");
std::fs::write(&sentinel, b"reset").map_err(|e| e.to_string())?;
Ok("Reset scheduled. Restart the app to apply.".to_string())
}
@@ -2730,6 +2780,23 @@ async fn import_public_posts(
Ok(result.message)
}
/// Import a bundle as personas on the current identity. The bundle's posting
/// keys become additional personas; imported content keeps its original author
/// and encrypted content becomes decryptable because we now hold those keys.
#[tauri::command]
async fn import_as_personas_cmd(
state: State<'_, AppNode>,
zip_path: String,
) -> Result<String, String> {
let node = get_node(&state).await;
let result = itsgoin_core::import::import_as_personas(
std::path::Path::new(&zip_path),
&node.storage,
&node.blob_store,
).await.map_err(|e| e.to_string())?;
Ok(result.message)
}
#[tauri::command]
async fn import_as_new_identity(
state: State<'_, AppIdentity>,
@@ -2812,12 +2879,24 @@ pub fn run() {
};
std::fs::create_dir_all(&data_dir)?;
// Check for reset sentinel from previous session
// Check for reset sentinel from previous session. A "Reset All
// Data" request wipes EVERYTHING under the app data dir so the
// next launch starts truly fresh — new network key, new posting
// key, no posts, no blobs, no identities.
let sentinel = data_dir.join(".reset");
if sentinel.exists() {
info!("Reset sentinel found — clearing data");
let _ = std::fs::remove_file(data_dir.join("itsgoin.db"));
let _ = std::fs::remove_dir_all(data_dir.join("blobs"));
info!("Reset sentinel found — wiping all app data");
if let Ok(entries) = std::fs::read_dir(&data_dir) {
for entry in entries.flatten() {
let path = entry.path();
if path.ends_with(".reset") { continue; }
if path.is_dir() {
let _ = std::fs::remove_dir_all(&path);
} else {
let _ = std::fs::remove_file(&path);
}
}
}
let _ = std::fs::remove_file(&sentinel);
}
@@ -2992,6 +3071,7 @@ pub fn run() {
import_summary,
import_public_posts,
import_as_new_identity,
import_as_personas_cmd,
import_merge_with_key,
])
.build(tauri::generate_context!())