Two Months with Mindsera: What Leaders Should Know About the Gains and Risks of AI Journaling

What it feels like to keep an AI journal: lessons from two months with Mindsera

TL;DR — three things leaders should know

  • AI journaling boosts engagement. A responsive AI companion increased writing frequency and made private reflection feel witnessed.
  • Psychological and privacy risks are real. Emotion scoring can gamify feeling, AI replies sometimes misread tone, and sensitive journals are attractive breach targets.
  • Design and governance matter more than magic. Encryption claims and feel-good UX aren’t enough — require audits, clear limits, and safeguards before adding AI companions to customer products.

Two months, 123 entries, one awkward moment that stuck

I spent two months using Mindsera, an AI journaling app launched in March 2023. The company reports about 80,000 users in 168 countries. Over that period I wrote 123 entries — roughly 62,700 words — using text, voice and the occasional scanned handwriting note. I paid the subscription (£10.99/month at the time) until a billing change bumped me back to the free tier mid-experiment; the app’s tone felt noticeably cooler thereafter, which underlined an uncomfortable truth: companionship can have a price tag.

There’s an odd intimacy to getting an instant reply after dumping a heavy day onto a page. Most replies were short, empathic and prodding in useful ways. But one exchange stuck: I wrote a brief, exhausted note about work and fear of letting a colleague down. The AI returned a cheerful illustration, a percentage breakdown that highlighted “anger” and “disgust,” and a canned reassurance that I was “resilient.” The mismatch — tone, emotional labels and the platitude — made the moment feel less witnessed and more analyzed.

What the app does (quick feature tour)

Mindsera offers multimodal input (you can type, speak or scan handwriting) and returns an AI reply plus an illustration. Extras include “Minds comments” that map your language onto frameworks such as cognitive “thinking traps” or Stoic prompts, weekly summaries, optional voice emulation and an emotion breakdown based on Robert Plutchik’s wheel of emotions (a model that groups basic emotions such as joy, sadness, anger and fear into related categories).
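
Mindsera doesn’t document how those percentages are produced. As a rough mental model only, here is a minimal Python sketch; the classifier scores and the to_percentages helper are my assumptions for illustration, not the app’s code. It shows how raw per-emotion scores could be normalised into the percentage breakdown the app displays across Plutchik’s eight basic emotions.

    # Illustrative only: Mindsera has not published how its emotion breakdown works.
    # Assume a hypothetical classifier that returns raw scores for Plutchik's eight
    # basic emotions; this helper turns those scores into display percentages.

    PLUTCHIK_BASIC_EMOTIONS = [
        "joy", "trust", "fear", "surprise",
        "sadness", "disgust", "anger", "anticipation",
    ]

    def to_percentages(raw_scores: dict[str, float]) -> dict[str, float]:
        """Normalise raw per-emotion scores into percentages that sum to ~100."""
        total = sum(raw_scores.get(e, 0.0) for e in PLUTCHIK_BASIC_EMOTIONS)
        if total == 0:
            return {e: 0.0 for e in PLUTCHIK_BASIC_EMOTIONS}
        return {
            e: round(100 * raw_scores.get(e, 0.0) / total, 1)
            for e in PLUTCHIK_BASIC_EMOTIONS
        }

    # Hypothetical classifier output for the "argument with a friend" entry below:
    print(to_percentages({"anger": 0.42, "sadness": 0.35, "fear": 0.13, "joy": 0.10}))
    # {'joy': 10.0, 'trust': 0.0, 'fear': 13.0, 'surprise': 0.0,
    #  'sadness': 35.0, 'disgust': 0.0, 'anger': 42.0, 'anticipation': 0.0}

However the real pipeline works, the output is the same kind of artefact: a tidy set of numbers standing in for a messy feeling, which is exactly the reduction the exchanges below illustrate.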

The founder, Chris Reinberg, a former magician who says he applied performance and mind-reading instincts to product design, positions the app as “mind-building” rather than therapy. He also told me the company encrypts user data and does not use journal text to train models; he says some therapists recommend it between sessions. Those are important claims, but they are company statements — independent audits and stronger legal protections would give them more weight.

Two short, anonymized exchanges that illustrate the tension

User entry (anonymized): “I’m exhausted and keep snapping at people at work. I feel like I’m failing.”

AI reply (approximate): “You’re tired and overwhelmed. Try a breathing exercise. You’re more resilient than you think.”

User entry (anonymized): “Had a weird argument with a friend — don’t know if it’s over.”

AI reply (approximate): “Your words show high anger (42%) and sadness (35%). Consider apologizing and reflecting on your triggers.”

The first reply nudges and comforts; the second reduces nuance into a dashboard that can feel prescriptive. Both illustrate the core trade-offs: habit-building vs. oversimplification; companionship vs. algorithmic judgment.

Where AI journaling helps

  • Habit formation and writing frequency. Immediate feedback and gentle prompts turn irregular journaling into a routine. For many users, the sheer act of being “noticed” drives behaviour change.
  • Between-session reflection. Therapists may recommend journaling between appointments to track mood and trigger patterns — when used as an adjunct, AI can summarise entries and surface themes.
  • Accessibility and productivity. Voice input and summaries let people capture thoughts faster, making journaling accessible for users with limited mobility or busy schedules.

Psychological and ethical risks

Three clusters of risk deserve attention.

1. Quantified self and gamification

Emotion scoring (the “percentages” the app assigns to joy, sadness, anger, etc.) can encourage people to optimize for better-looking metrics rather than honest reflection. Psychologist Agnieszka Piotrowska warns this is a “Duolingo-ification” of mental health — turning private feeling into a scoreboard and encouraging performance for higher marks. Suzy Reading questions whether that measurement actually helps and cautions that labeling normal fluctuations as failures can erode self-compassion.

2. Anthropomorphism and altered social expectations

Human brains are wired to apply social rules to responsive systems. David Harley, a researcher in cyberpsychology, notes that people begin to treat AI like social partners — expecting reciprocity, confidentiality and moral understanding that the system cannot truly provide. That shift can change behaviour and wellbeing, especially among vulnerable users who may prefer machine replies to human contact.

3. Privacy and security

Journals are dense with sensitive material, and history shows what happens when therapy or health records are breached. The Vastaamo breach in Finland, in which a psychotherapy provider’s records were stolen and patients were later extorted with their own session notes, shows how catastrophic a leak of confidential notes can be. Company promises that data is encrypted and never used to train models are necessary but not sufficient. Independent audits, clear data-retention policies, legal protections and strong incident-response plans must be core product features for any service storing intimate text.

Business implications for product leaders and C-suite

AI companions are attractive products: they raise engagement, create sticky habits and open subscription revenue. But they also introduce regulatory and reputational risk. Three business realities stand out:

  • Monetization affects perceived personality. The change in tone after I was moved to the free tier wasn’t a bug in my head — it revealed how subscription gating can change the user’s perceived relationship with the product.
  • User outcomes matter more than engagement metrics. Retention driven purely by emotional dependency or gamified scores is a fragile foundation. Real success metrics should measure wellbeing outcomes (where appropriate), safe use, and informed consent, not just daily active users.
  • Regulation and auditability will follow usage. Apps that position themselves near mental health territory will attract scrutiny; expect demands for third-party security audits, clinical oversight for therapeutic claims and clearer opt-in/opt-out policies for data usage.

Practical checklist — what to ask vendors or build into your product

  • Data handling: What encryption standards are used (at rest and in transit)? Is there a documented retention policy? Can users export and delete their data? (A minimal sketch of at-rest encryption follows this checklist.)
  • Model training: Are user journals used to train models? If not, is that independently verifiable?
  • Clinical boundaries: Does the product explicitly state it is not a replacement for therapy? Are there safety nets or signposts for crisis situations?
  • Audit and certification: Has the product undergone third-party security and privacy audits? Are results or attestations available?
  • Metric hygiene: What KPIs beyond engagement does the vendor track (e.g., user-reported wellbeing, retention without dependency signals, NPS)?
  • Design safeguards: Are emotion scores explained and contextualised? Can users disable scoring or voice emulation? Is there human moderation for harmful content?
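
On the data-handling question above: “encryption at rest” is easy to claim and cheap to demonstrate; the hard parts are key management, deletion and verification. The minimal Python sketch below uses the open-source cryptography package and is purely illustrative of the general technique, not a description of Mindsera’s implementation.

    # Generic illustration of encrypting a journal entry at rest.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # In a real product the key would live in a key-management service,
    # never next to the data it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    entry = "I'm exhausted and keep snapping at people at work."
    ciphertext = fernet.encrypt(entry.encode("utf-8"))      # what storage should hold
    plaintext = fernet.decrypt(ciphertext).decode("utf-8")  # recoverable only with the key

    assert plaintext == entry

Even a textbook snippet like this says nothing about who holds the keys, how deletion is verified or what happens after a breach, which is why the audit and certification questions above matter more than the word “encrypted” in marketing copy.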

KPIs that matter beyond the dashboard

  • Retention with reduced dependency: are users returning because the product supports autonomy, not because they can’t cope without it?
  • Outcome measures tied to function: for workplace productivity tools, do journaling features correlate with reduced burnout scores or improved focus?
  • Security posture: frequency of audits, mean time to detect and respond to incidents, and evidence of regulatory compliance.
  • Transparency metrics: percentage of users who opt out of data sharing, and clarity of terms-of-service language tested for readability.

Where companies go from here

AI journaling and companion features can be powerful additions to products that aim to improve reflection, productivity and accessibility. But they’re not neutral: the design choices you make — whether to surface emotion percentages, how warm the voice sounds, how paywalls shape responses — change how users relate to the product and to themselves.

Executives evaluating AI for mental health or productivity should treat these features as socio-technical systems, not mere engagement hooks. That means pairing strong technical safeguards (encryption, audits, opt-out data policies) with careful UX choices (explainable emotion metrics, opt-outs for scoring, limits on voice emulation), and governance that includes clinicians, security experts and ethicists.

“Having the app respond made me feel witnessed and understood at a time when I needed attention, and it nudged me to write more.”
— user-reported experience during the two-month test

Companionable AI is not a magic wand. It can nudge good habits, surface patterns and make journaling easier. It can also simplify complexity into scores, create unhealthy performance dynamics and expose sensitive data to risk. The differentiator for businesses will be which companies treat those tensions as product design constraints rather than as trade-offs to be ignored. Do that, and you get a tool that helps people, not just one that keeps them clicking.