
I like to start every new cohort of the Platform Engineering Fundamentals course by having everyone fill in a Platform Engineering Maturity Model template based on their current setup or initiative.
It’s a great way to get a sense of where everyone is, and it also makes a pretty powerful, fundamental point right at the get-go.
Here are 6 randomly grabbed maturity models from the last 3 classes. Can you spot what this newsletter is going to be about?

I could share 10, or even 100, more of these course maturity models and the story would be the same: there is wide variety across every area EXCEPT measurement.
It’s not just the course participants either. As I’ve shared before, data from the State of Platform Engineering, Volume 3, highlights that 45% of respondents DO NOT MEASURE.

So why is measuring so difficult? Platform engineering tends to touch almost the entire organization, so its benefits are wide in scope and often several steps removed from broader business outcomes, making clear correlations hard to draw. How can you prove your platform works when it touches 10 different places and does 100 different things?
Well, therein lies the recurring big mistake. It’s the same one that gets repeated again and again in the course and in this newsletter, and it’s probably the mistake you’re making in the platform you’re building right now.
Your platform is too big: too unfocused, covering too many areas, with too unclear a scope.
When you build your platform, you need to identify a clear and specific problem that you are trying to solve. (You likely have many, but trust me - you want to start smaller.)
If you’re trying to solve DevEx problems, security problems, and compliance problems all in one fell swoop, it will be impossible to set your platform moving in the right direction, or to understand how you should be measuring it.
So… let’s take a look at an example Minimum Viable Platform process and how it highlights what good platform engineering measurement looks like. This MVP starts with one representative, engaged team - and importantly, this team’s issues are reflective of the wider organization.
Step 1: Be very clear about what the goal of your platform is.
Example goal: Decrease cognitive load for Developer team XYZ
(Keep in mind that you don’t want to build 50 individual platforms for teams of 20 or so. This should be something that can expand outward, so be careful of being overly niche or too specific.)
Step 2: Define the parameters of the goal, and understand the team.
This developer team has 25 members. They are a mix of a few experienced, and a large group of less experienced devs.
Share a survey to get feedback from the team, organize a call to discuss issues, interview a few key members + interview the head of the team.
The team has to spend a lot of time messing around with their own Terraform. The junior devs struggle with this and so rely on a large number of tickets to a supporting Ops team. Due to delays in getting help, they frequently Slack the experienced devs instead. This puts increased pressure on the senior devs, who have to fulfill their own tasks plus a shadow Ops role. Senior devs feel they spend too much time training junior devs and running Ops, while junior devs constantly feel out of their depth.
Step 3: Set the baseline measurements for your platform
Now you’ve identified the key issues. In this case: devs wrestling with Terraform, senior devs feeling like they’re running shadow Ops, and junior devs feeling underwater.
You can share an (anonymized) survey on specific points like these:
- How are you interacting with our Terraform?
- How many times per day do you interact with our Terraform?
- How many requests per day do you get from other members of the team for help with Terraform?
- On a scale of 1-10, how do you feel about your current workload (with 1 being exceptionally overwhelmed, and 10 being perfectly content)? Or something similar - this is a subjective question, trying to gauge perception and feeling.
- What do you find most challenging about your current workload?
You could then also track:
- # of Jira tickets
- Changes to TF files
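As a rough sketch of how you might track that second metric: assuming the team's Terraform lives in a single Git repo, you could tally how often `.tf` files change from `git log --name-only` output. The helper and sample below are hypothetical, just to show the idea - adapt the command and time window to your own setup.

```python
def count_tf_changes(git_log_output: str) -> int:
    """Count entries in `git log --name-only` output that touch .tf files.

    With `--name-only`, each changed file appears on its own line, so every
    .tf line is one change event we can tally for the baseline.
    """
    return sum(
        1
        for line in git_log_output.splitlines()
        if line.strip().endswith(".tf")
    )

# Example: paste in the output of something like
#   git log --since="30 days ago" --name-only --pretty=format:
sample = """\
main.tf
modules/network/vpc.tf
README.md
main.tf
"""
print(count_tf_changes(sample))  # 3
```

Run the same count again after the MVP ships and you have a simple before/after signal for how much raw Terraform the team is still touching.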
You then have a sense of what to focus on for this MVP. Six months after you’ve built the MVP for this use case, you can easily look back, track these metrics again, and reshare the survey.
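To make that before-and-after comparison concrete, here's a minimal sketch with entirely made-up numbers (the metric names and values are hypothetical, just to show the shape of the comparison):

```python
from statistics import mean

# Hypothetical baseline vs. 6-month follow-up snapshots for team XYZ:
# per-dev workload scores (1-10 survey question) and monthly Ops tickets.
baseline = {"workload_scores": [3, 4, 2, 5, 3], "ops_tickets_per_month": 48}
followup = {"workload_scores": [6, 7, 5, 8, 6], "ops_tickets_per_month": 12}

def summarize(label: str, snapshot: dict) -> None:
    """Print the average workload score and ticket volume for one snapshot."""
    avg = mean(snapshot["workload_scores"])
    print(f"{label}: avg workload {avg:.1f}/10, "
          f"{snapshot['ops_tickets_per_month']} Ops tickets/month")

summarize("Baseline ", baseline)
summarize("Follow-up", followup)
```

Even a two-number summary like this (perceived workload up, ticket volume down) is the kind of evidence that turns "we think the platform helps" into a measurable claim.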
Those responses then make it far easier to prove value, secure more funding, and expand outward to other teams.
This MVP of course features a relatively easy and clear-cut case. Life tends not to be so simple.
But it highlights how platform as a product and product management principles are what drive platform engineering success.
This process - honing in on the specifics of your goal, using user research to set its parameters clearly, and then using surveys plus clear metrics to set the baseline - is crucial to measuring platform engineering, no matter what your objectives are.
These are all concepts from product management, which are supposed to be what makes platform engineering a key differentiator (and are, unsurprisingly, what’s missing from the teams who struggle with measurement).
This really is platform engineering’s biggest problem.
But a LOT is happening to solve it. I’ll be sharing a lot of content over the coming months going into more details on this topic, on the MVP Success Metrics Framework we use in the Platform Engineering Fundamentals course, and on best practices you can use for measuring your own (likely more complicated) challenges.
Stay tuned✌️
