Video: AI Planning - Principle 3: Capability

In this video, Graeme Cox explains his third principle of planning an AI Strategy - Capability.

Set up a call with Graeme

Please submit this form if you would like to set up a call with Graeme.


The third element in this is your capability.

It's really important to work on your internal machine learning and AI capability as you develop these solutions. Even if you take somebody like Attacop or some other third party to actually build you a solution, you can have that done to the extent that it is delivered over an API and even hosted externally. Your consumption is then very simple: you send the raw data out over the API and you get your answers back, whether that is a predictive solution, a natural language solution, a knowledge retrieval solution, whatever it might be. So you don't have to take these solutions directly into your data centers and run your own DevOps, but you have to understand them at the very least.
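The pattern described here, where the vendor hosts the model and your only integration is a request/response exchange, can be sketched roughly as follows. The endpoint, payload shape, and field names are illustrative assumptions, not a real vendor API, and the network call is stubbed out:

```python
import json

# Hypothetical externally hosted endpoint run by the third-party vendor.
API_URL = "https://vendor.example.com/v1/predict"

def build_request(raw_records):
    """Package your raw business data as the JSON body you would POST to the vendor."""
    return json.dumps({"records": raw_records})

def parse_response(body):
    """Unpack the vendor's answer: predictions, extracted entities, retrieved answers, etc."""
    return json.loads(body)["predictions"]

# Round trip, with the actual HTTP call replaced by a simulated reply for illustration:
request_body = build_request([{"customer_id": 1, "spend": 120.0}])
simulated_reply = '{"predictions": [{"customer_id": 1, "churn_risk": 0.18}]}'
predictions = parse_response(simulated_reply)
print(predictions[0]["churn_risk"])
```

The point of the sketch is that nothing model-specific runs in your own data center: the integration surface is just serialization in and deserialization out, which is exactly why your teams still need to understand what happens in between.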

And your technology teams have to understand them. They have to understand how they work, where they break, and what the implications are of having AI as features in the organization. So developing traditional software teams to have deeper AI understanding is important regardless of how strongly you lean on a third party.

But at least as important, if not more so, is executive AI maturity: an understanding first of what AI can do for your business, and then an understanding of how you should treat AI in context. What are the risks of AI? What are the limitations of AI? How do you handle AI, and how do you communicate about it to your people? This is very strongly rooted in the fourth pillar, advisability, and the responsible AI piece of it: how you keep employees on board with you as you start to build copilot solutions that you can see could one day become autopilot solutions. A really stark example of this at the moment is the software development industry itself.

Over the couple of years I've spent doing technical due diligence for private equity firms, I've looked at somewhere in the region of fifty or sixty different businesses and had the privilege of peering over CTOs' and CIOs' shoulders and having a look at how they run things. And I've seen these businesses move from a tiny percentage of code being automatically generated just a couple of years ago to as much as twenty-five percent of the total output of code today being generated by AI copilots.

Now I hope that in all cases that code is still put through the same QA process as all human-generated code, which means that it's reviewed and it's tested. It's not just accepted because it came out of a machine, because these things have flaws. But the rate of efficiency improvement I see in software development teams is enormous.

We are now firmly in the copilot era for software development, and you can see, if you roll forwards, that there is the potential for this to turn into an autopilot over time.