Screenshots are awesome. They capture everything we actually do online: what we want, where we've been, what made us laugh. At Seenit, I designed and built a system that turned those screenshots into AI-generated statuses for friends. In a 10-day beta, 22 users generated 705 POVs.

Making sharing repeatable

In beta, only 30% of cards were being shared. I identified recurring patterns in the cards users did share, and built a testing sandbox to isolate and manipulate 8 core model attributes, which allowed for rapid iteration on a new system-prompt layer.
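A sandbox like this can be sketched as a small harness that composes prompt variants from attribute overrides, so each axis can be varied in isolation. The attribute names below are illustrative stand-ins, not Seenit's actual set:

```python
from itertools import product

# Illustrative attribute axes -- the real sandbox exposed 8 such knobs;
# these names and values are hypothetical, not Seenit's actual attributes.
ATTRIBUTES = {
    "tone": ["deadpan", "playful"],
    "length": ["one-liner", "two sentences"],
    "perspective": ["first person", "observer"],
}

BASE_PROMPT = "Write a status from this screenshot."

def build_prompt(overrides: dict) -> str:
    """Compose a system-prompt layer from one combination of attribute values."""
    lines = [BASE_PROMPT]
    for attr, value in overrides.items():
        lines.append(f"{attr}: {value}")
    return "\n".join(lines)

def sweep():
    """Enumerate every attribute combination for side-by-side comparison."""
    keys = list(ATTRIBUTES)
    for combo in product(*ATTRIBUTES.values()):
        yield build_prompt(dict(zip(keys, combo)))

prompts = list(sweep())
print(len(prompts))  # 2 * 2 * 2 = 8 prompt variants
```

Holding every axis but one fixed is what makes the iteration rapid: a regression can be traced to a single attribute rather than a whole rewritten prompt.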

I designed a rubric scoring gate, but found that a model scoring its own outputs is inherently biased. I introduced an independent judge model to evaluate candidates comparatively rather than in isolation. The model's second attempt consistently outperformed the first.
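A minimal sketch of that comparative setup, with stubbed functions standing in for the real generator and judge models:

```python
# Hypothetical sketch: instead of the generator scoring its own output
# against a rubric (biased toward itself), an independent judge model is
# shown two candidates side by side and picks a winner.

def generate(screenshot: str, attempt: int) -> str:
    # Stand-in for the generation model call.
    return f"candidate-{attempt} for {screenshot}"

def judge(candidate_a: str, candidate_b: str) -> str:
    # Stand-in for a separate judge model prompted to compare candidates,
    # not to score them in isolation. Deterministically prefers the second
    # attempt here, mirroring the finding that second attempts won.
    return candidate_b

def best_of_two(screenshot: str) -> str:
    first = generate(screenshot, attempt=1)
    second = generate(screenshot, attempt=2)
    return judge(first, second)

print(best_of_two("beach.png"))  # candidate-2 for beach.png
```

The design point is the comparison itself: a judge choosing between two concrete candidates sidesteps both self-scoring bias and the difficulty of calibrating an absolute rubric score.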

Designing around the miss

When the AI gets it wrong, it breaks trust. A single tap lets users ask the model for a different interpretation rather than editing or dismissing the card. This keeps misses low-stakes and turns a hallucination into a playful interaction rather than a product failure.
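That tap can be sketched as a regeneration call that explicitly excludes the rejected interpretation rather than simply retrying. The function names and prompt wording here are hypothetical:

```python
# Hypothetical regeneration flow: a tap re-runs generation with the prior
# interpretation excluded, instead of asking the user to edit or dismiss.

def call_model(prompt: str) -> str:
    # Stub standing in for the real model call.
    return "new-angle status"

def reinterpret(screenshot: str, previous: str) -> str:
    prompt = (
        "Write a status from this screenshot.\n"
        f"Avoid this interpretation and try a different angle: {previous}"
    )
    return call_model(prompt)

print(reinterpret("concert.png", "stuck in a boring meeting"))
```

Passing the rejected interpretation back in matters: without it, the model is likely to repeat the same miss.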

Designing around the wait

I opted for a pipeline that prioritised output quality and cost over speed, which meant building around the constraint. In addition to an in-app processing experience, I built a push notification and Live Activity flow so users could share without opening the app or composing anything.

Outcome and learnings

  1. In a 10-day beta, 22 users generated 705 POVs. The habit was forming before distribution was solved.
  2. A model scoring its own outputs is inherently biased: it rates what it generated highly because it generated it.
  3. Presence > identity. Friends care more about what you're in the middle of than how you present yourself.
