AI training policy
What we do (and don't) train AI on
Plain English first. The trust contract matters more than the model. We don't train AI on the private things you say to us. We never will.
Off-limits forever
These categories are never used for training, regardless of consent. This list is binding and changes only after 30 days' notice and re-consent.
- Direct messages (DMs and group chats)
- Memory entries — anything stored under your assistant's memory
- Vault items — private notes, files, photos, videos, audio
- Onboarding answers and personal-context inputs
- Support / clinic surface posts
- Dating profiles, matches, and chats (Once app)
- Kid-safe sessions
- Anything authored by a child (under 18)
- Crisis events and panic-button interactions
- Anything you have not actively chosen to make public
Opt-in only · default off
If we ever build a Future Assistants–trained model, the following may be used as training data only with your explicit opt-in, per category, revocable at any time via Settings → Privacy → AI training.
- Public portal posts you've authored (Forum, Feed, Lounge, Channels — only posts marked public)
- Public Nexus Profile content (bio, links, public sections only)
- Marketplace listings you've published
- Feedback ratings (👍 / 👎) you give to model outputs, when you choose to send them
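For the technically minded, the combined rules in "Off-limits forever" and this opt-in list amount to a simple eligibility gate. The sketch below is purely illustrative — the category names, function, and data shapes are hypothetical, not our actual implementation:

```python
# Illustrative sketch only — names and structure are hypothetical,
# not Future Assistants' production code.

OFF_LIMITS = {  # never trainable, regardless of any consent
    "dm", "memory", "vault", "onboarding", "support",
    "dating", "kid_safe", "crisis", "non_public",
}

OPT_IN_CATEGORIES = {  # trainable only with explicit per-category opt-in
    "public_post", "public_profile", "marketplace_listing", "feedback_rating",
}

def trainable(category: str, author_is_minor: bool, opted_in: set[str]) -> bool:
    """Return True only if an item may enter a training set."""
    if author_is_minor:           # under-18 content is never trainable
        return False
    if category in OFF_LIMITS:    # binding list; consent cannot override it
        return False
    # Default off: only explicitly opted-in public categories qualify.
    return category in OPT_IN_CATEGORIES and category in opted_in

# A DM is never trainable, even with "consent" attached:
assert not trainable("dm", False, {"dm"})
# A public post is trainable only after opt-in:
assert not trainable("public_post", False, set())
assert trainable("public_post", False, {"public_post"})
```

Note that the off-limits check runs before the opt-in check, so no consent flag can ever make an off-limits category trainable.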
Inference vs training
Inference (the live AI replies you see) is separate from training (what shapes the model). Inference uses third-party providers (OpenRouter, Anthropic, OpenAI, Google, Inworld for voice). We have configured zero-retention / no-training options where the provider supports them. No private surface (DMs, memory, vault, etc.) is sent to a provider that retains or trains on data.
Read the full AI Transparency Notice for which providers handle which surfaces.
Specialised models — our plan
We expect to ship small, specialised first-party models over time, in this order: FA Personal Mini (memory + lane awareness) → family / kid-safe → support-aware → domain models if a clear case emerges. Initial training uses synthetic conversations and public-domain corpora — never user data. If, later, opt-in user contributions help refine these models, contributors will be credited (where they choose), notified before each training run, and able to revoke consent through Settings.
Children
Under-18 content is never used for training, even with consent. Parental consent does not unlock training rights for a child's data. This is non-negotiable.
Your controls
- Settings → Privacy → AI training — per-category opt-in toggles (default off)
- Settings → Privacy → Export my data — full export of your content
- Settings → Privacy → Delete account — full removal across surfaces
- Withdraw consent and remove prior contributions — same panel, under "Remove my prior contributions"
Questions: privacy@futureassistants.co.uk. Complaints can be escalated to the UK Information Commissioner's Office.
