Measuring software maturity
Most organisations have a number of software components that come together to realise the business capabilities making up the end-to-end value chain of the business.
There is an established practice called Capability Maturity Model Integration (CMMI). It specifies five levels of maturity across various process areas such as Project Planning and Configuration Management. Whilst this might be useful in an Enterprise Architecture sense, it doesn’t do much to help us quantify and measure the maturity of the software components within a Software as a Service (SaaS) offering that realise those capabilities.
I propose considering 3 simple levels of software maturity:
- Immature - Unpredictable, unreliable, not repeatable and poorly controlled;
- Stable - Predictable, reliable, repeatable and controlled;
- Optimised - Exemplary state.
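As a sketch, one way to apply these levels is to rate a component per area and treat its overall maturity as its weakest rating. The area names and the weakest-link rule below are my own illustration, not part of the model itself:

```python
from enum import IntEnum


class Maturity(IntEnum):
    """The three levels, ordered so they can be compared."""
    IMMATURE = 1   # unpredictable, unreliable, not repeatable
    STABLE = 2     # predictable, reliable, repeatable, controlled
    OPTIMISED = 3  # exemplary state


def component_maturity(ratings: dict[str, Maturity]) -> Maturity:
    """A component is only as mature as its weakest area."""
    return min(ratings.values())


# Hypothetical assessment of a single component:
ratings = {
    "code": Maturity.STABLE,
    "build": Maturity.OPTIMISED,
    "monitor": Maturity.IMMATURE,
}
print(component_maturity(ratings).name)  # IMMATURE
```

Using the minimum rather than an average stops a shiny build pipeline from masking, say, non-existent monitoring.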
Hopefully they seem reasonable and familiar, given that they have a close resemblance to unofficial open source levels of maturity.
Immature

You have just built something; it basically hangs together OK for demos, but you’d never want it in production. It could be the experience level of the team in the technology or the speed of development, but components at this level are generally unreliable, and when they do fail it takes a while to sort them out.
Here are my criteria for an immature system:
- Code - Not all the team understands it, documentation varies, but at least it’s all in one place;
- Build - Stored and released from Source Control. Probably doesn’t have much CI automation;
- Test - Unit tests cover most of the code base, integration testing quality varies;
- Package - Inconsistently spread across multiple archives;
- Configure - Some infrastructure provisioning automation exists, but it’s not complete and so isn’t repeatable;
- Monitor - Logging levels vary, it’s not centrally collected and little or no monitoring is in place;
- Secure - Secure by design and implementation (mostly), but workarounds are still in place to give people the access they need;
- Recover - Team knows how they might go about it, it’s backed-up, but nobody has really tested it;
- Access - Always an issue, workarounds frequently put in place to ‘just get it working’.
Do not put components at this level into production; if you do, be prepared for a massive headache.
Stable

Everything works as it should: it’s secure, reliable, repeatable, and changes are well managed.
You can safely put components at this level into production.
Here are my criteria for a stable system:
- Code - Understood by all the team, well documented, everybody knows where to find stuff;
- Build - Stored and released from Source Control. CI pipelines completely manage releases;
- Test - Achieved minimum levels of automated unit and integration testing;
- Package - Application can be deployed as a single archive, requires no modification between environments;
- Configure - Infrastructure provisioning and configuration is treated as code and is completely automated. Environments are created and destroyed as required and are fully representative;
- Monitor - Application is logging all states. Server logs are collected and can be analysed in real time. Monitoring and alerting are in place to respond to errors;
- Secure - Through design, implementation and operation, the solution meets the organisation’s security criteria;
- Recover - All data is resilient and backed up; it can be immediately accessed for restoration purposes, and full recovery from backups is regularly tested;
- Access - Everybody who needs access has the access they need, a process exists to maintain accounts, and activity is audited.
It is likely that in this state full disaster recovery and knowledge transfer have not been completed, and it may also be the case that manual intervention is required to handle failures. Releases tend to be a bigger deal than they should be.
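One way to satisfy the Package and Configure criteria above ("requires no modification between environments") is to keep every environment-specific value out of the deployed archive and read it at runtime. A minimal Python sketch, with illustrative variable names:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """All environment-specific values; the archive itself never changes."""
    database_url: str
    log_level: str


def load_settings(env=os.environ) -> Settings:
    # Fail fast if a required value is missing, rather than at first use.
    return Settings(
        database_url=env["DATABASE_URL"],
        log_level=env.get("LOG_LEVEL", "INFO"),
    )


# The same build artifact is promoted unchanged between environments;
# only the surrounding environment variables differ:
settings = load_settings({"DATABASE_URL": "postgres://prod-db/app"})
```

The point of the frozen dataclass is that configuration is resolved once at start-up, so a stable component behaves identically however it was packaged.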
Optimised

These things have stood the test of time; they are fully automated, self-healing and support continuous deployment.
People working on these components focus on knowledge, incremental improvements and innovation.
Here are my criteria for an optimised system, in addition to stable:
- Code - The ‘master’ branch never breaks, everybody pushes to master frequently, feature toggles are extensively used;
- Build - Continuous Delivery is used with zero downtime, rolling deployments. Release schedules are fully automated, master/mainline is always in production;
- Test - Maximum reasonable levels of automated testing are in place for the entire stack, including the UX and underlying data;
- Package - As stable;
- Configure - No gaps in automation. Infrastructure is also immutable: servers are never modified; new ones are provisioned and the old ones terminated;
- Monitor - Errors are automatically detected and handled, solution can recover completely automatically from internal failure;
- Secure - System can automatically detect and respond to security related events;
- Recover - As stable;
- Access - As stable.
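The feature toggles mentioned under Code can be as simple as a flag lookup that defaults to off, letting half-finished work merge to master without being exposed to users. A hypothetical minimal sketch (flag names are made up):

```python
# Minimal feature-toggle sketch: flags default to off, so incomplete
# code can live on master without ever running in production.
FLAGS: dict[str, bool] = {
    "new_checkout_flow": False,  # merged to master, not yet released
    "dark_mode": True,
}


def is_enabled(flag: str) -> bool:
    # Unknown flags are always off, which keeps the default safe.
    return FLAGS.get(flag, False)


def checkout() -> str:
    if is_enabled("new_checkout_flow"):
        return "new flow"
    return "old flow"


print(checkout())  # old flow
```

In practice the flag store would be external (a config service or database) so toggles can be flipped without a deployment, which is what makes "master is always in production" workable.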