Seed Heritage

Building an organisational learning system that delivers commercial returns.

The Challenge

Seed Heritage invested in Optimizely to drive growth, but a year in, the experimentation program was underperforming: low test volume, inconsistent results, and no clear link to business outcomes - a growing cost at scale.

When TIP engaged with the team, we began with an honest diagnosis. What we found was not a single point of failure - it was four interconnected gaps that together explained why the program had stalled.

The program was generating ideas, not insights: hypotheses were intuition-led, with little use of data or behavioural signals to identify real customer problems or high-value opportunities. Prioritisation was subjective, so effort was spread across low-impact changes. There was no consistent experimentation framework - processes, reporting, and knowledge sharing were fragmented, with no clear backlog or learning loop - so insights weren't retained or reused. In short, Optimizely was underutilised not because of the tool, but because the capability and structure to use it effectively hadn't yet been established.

Our Approach

Sustained capability change requires pressure from the top and energy from the team simultaneously. TIP designed a program that worked in both directions at once.

1. Data-led foundations

We put a structured discovery methodology - analytics, session recordings, heuristics, behavioural signals - at the front of every test cycle, and introduced a T-score prioritisation model that gave every experiment a defensible commercial reason to be on the roadmap.

2. Rituals, process and institutional knowledge

A structured test journal, backlog process, and reporting cadence that transformed individual test outcomes into compounding institutional knowledge - with ceremonies and governance to keep the program running long after our engagement closed.

3. Connected technology stack

We introduced Contentsquare and integrated it directly with Optimizely - creating a closed loop between behavioural insight and experimentation execution. The team could now identify friction, quantify its revenue cost, and know what a fix was worth before building a single test.

4. Engineering capacity and program visibility

Specialist experimentation engineering removed the build bottleneck keeping the program in the shallows. Formal training, bootcamps, and a monthly experiment showcase turned experimentation from a hidden delivery function into a visible contributor to business performance.

The Solution

A transformation built from both ends of the organisation.

The Impossible Premise designed and delivered a transformation program that worked from both the top of the organisation down and the team up, because sustained capability change requires both.

We started by aligning senior stakeholders on what a mature experimentation program could deliver in commercial terms - linking it directly to revenue growth, customer lifetime value, and competitive advantage to secure executive visibility and make experimentation a business priority, not just a delivery function.

At the same time, we built the foundations within the team: introducing structured discovery using data, behavioural signals and analytics; creating a data-led test journal and backlog process; and implementing a T-score model to prioritise experiments based on value, impact, statistical confidence, and effort, so every test had a defensible rationale.

We then strengthened the technology and insight layer by integrating Contentsquare with Optimizely and improving instrumentation and data flows - turning the stack into a connected system that surfaced friction points, quantified revenue opportunity, and informed better experimentation decisions before build.

Finally, to remove delivery constraints, we added specialist engineering support to increase testing capacity and enable more complex, higher-impact experiments. Alongside this, we ran training, bootcamps, and monthly experiment showcases to embed capability across the organisation and make experimentation a visible driver of business performance.
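The case study names the T-score dimensions (value, impact, statistical confidence, effort) but not the formula itself. As a purely illustrative sketch of how such a scoring model can work in practice - all names, scales, and the weighting below are assumptions, not Seed Heritage's actual model:

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    value: int       # commercial value if the test wins (1-5)
    impact: int      # expected effect size and reach (1-5)
    confidence: int  # strength of the supporting data (1-5)
    effort: int      # build and analysis cost (1-5)

    def t_score(self) -> float:
        # Benefit dimensions are averaged, then discounted by effort,
        # so a cheap, well-evidenced test outranks an expensive hunch.
        return (self.value + self.impact + self.confidence) / (3 * self.effort)

def prioritise(backlog: list[ExperimentIdea]) -> list[ExperimentIdea]:
    """Return the backlog sorted highest T-score first."""
    return sorted(backlog, key=lambda e: e.t_score(), reverse=True)

# Hypothetical backlog entries for illustration only.
backlog = [
    ExperimentIdea("Checkout trust badges", value=2, impact=2, confidence=2, effort=1),
    ExperimentIdea("Redesigned PDP gallery", value=4, impact=3, confidence=2, effort=5),
    ExperimentIdea("Simplified delivery options", value=4, impact=4, confidence=5, effort=2),
]
ranked = prioritise(backlog)
```

The point of any model in this family is not the exact arithmetic but that every experiment on the roadmap carries an explicit, comparable rationale instead of a subjective vote.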