Your personal experimentation cockpit

Chat with the Cloudflare-hosted intake LLM, trigger the optimisation engine to run experiments, watch the progress bar update in real time, and export artifacts the moment they are produced.

  • Collecting intake requirements
  • Running optimisation engine
  • Preparing mentor response

Workflow transparency

See status in motion

The progress bar animates through intake, optimisation, and mentor response, so you always know which worker is in control.
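As a rough sketch, the three stages above could be modelled client-side like this; the stage names and the `progressFor`/`advance` helpers are illustrative assumptions, not the app's actual API.

```typescript
// Hypothetical client-side stage model for the progress bar.
// Stage names mirror the workflow listed above; not a real API.
type Stage = "intake" | "optimisation" | "mentor";

const STAGES: Stage[] = ["intake", "optimisation", "mentor"];

// Fraction of the bar to fill once a given stage is active.
function progressFor(stage: Stage): number {
  return (STAGES.indexOf(stage) + 1) / STAGES.length;
}

// Advance to the next stage, staying on the last one when done.
function advance(stage: Stage): Stage {
  const i = STAGES.indexOf(stage);
  return STAGES[Math.min(i + 1, STAGES.length - 1)];
}
```

A UI would call `advance` as each worker hands off, and `progressFor` to set the bar width.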

Artifacts on demand

Downloads are generated locally: one click gives you the curated JSON, CSV, and PNG chart for the run—ready for Slack or Notion.
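The JSON and CSV artifacts could be assembled entirely in the browser along these lines; the `RunResult` shape and field names are illustrative assumptions, not the app's export schema, and PNG rendering is omitted.

```typescript
// Hypothetical run result; field names are illustrative only.
interface RunResult {
  experiment: string;
  score: number;
}

// Curated JSON artifact for a run.
function toJsonArtifact(results: RunResult[]): string {
  return JSON.stringify({ results }, null, 2);
}

// Flat CSV artifact with a header row.
function toCsvArtifact(results: RunResult[]): string {
  const header = "experiment,score";
  const rows = results.map((r) => `${r.experiment},${r.score}`);
  return [header, ...rows].join("\n");
}
```

In a browser, each string would then be wrapped in a `Blob` and handed to an object URL to trigger the one-click download.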

Edge-native performance

Everything runs close to your users so responses stay snappy—no heavyweight servers to manage or scale.
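To give a sense of the edge-native shape: a Cloudflare Worker is just a module exporting a `fetch` handler that runs at the edge on every request. The `/status` route and payload below are hypothetical, shown only to illustrate the pattern.

```typescript
// Minimal Cloudflare Worker-style module. The /status route and
// its payload are hypothetical examples, not the app's real API.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/status") {
      return Response.json({ stage: "intake", ok: true });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Because the handler is stateless and deployed to every edge location, there is no server fleet to provision or scale.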