It may not be everyone's favorite corporate function...but it's very necessary.
No corporate buzzword elicits as many reactions - most of them negative - as “governance”. Whether it’s a Forum, Committee, or Tribe, anything governance-related is often perceived as something that gets in the way of progress, even if people acknowledge that it’s necessary.
When it comes to AI, this dichotomy is particularly stark because of the nature of the technology. It’s exciting, it changes quickly, and people want to play around with it…but it’s also unpredictable and can put companies and their customers at risk.
Hence the need for governance - whether you like it or not.
But AI governance doesn’t have to be cumbersome and overbearing. In fact, there are some simple ways to integrate it into your existing governance processes so that you don’t need a standalone set of meetings, reports, or templates that nobody wants to attend or deal with.
First, start by understanding what governance is “for” at your company. Most teams think of governance as a system of controls and checks that ensure that goals are being achieved in the right way. This might include considerations such as:
Cost-Effectiveness - is the project on budget?
Quality - is the work product being created at a high level of quality?
Speed - are milestones being met as scheduled?
Risk - is the team or company exposed in some way?
These elements are usually discussed in a forum to ensure that they are within tolerances; if they’re not, corrective action is usually required.
AI projects can be included in these discussions, as cost, quality, speed and risk are all important for them as well. But the tolerances may need to be different due to the unique nature of the tech. For example, outputs from LLMs can be unpredictable, so holding a generative AI system to the same quality standard as an e-commerce website is unreasonable. Similarly, it might be easier to run over budget when building a chatbot, because the gap between technical capability and desired customer experience may be wider than expected, requiring more experimentation and trial and error. And the risk from unproven AI applications has been well documented; suffice it to say that it needs to be a central topic of conversation at any governance forum addressing AI.
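To make the tolerance idea concrete, here's a minimal sketch - all project names, thresholds, and metrics are hypothetical - of how a governance forum might encode different tolerances per project type, so that an AI project is judged against looser bounds than conventional software:

```python
# Hypothetical per-project-type tolerances. The chatbot gets a wider
# budget band and a lower minimum quality score than the website,
# reflecting the experimental nature of generative AI work.
TOLERANCES = {
    "ecommerce_site": {"max_budget_overrun_pct": 5,  "min_quality_score": 0.99},
    "genai_chatbot":  {"max_budget_overrun_pct": 20, "min_quality_score": 0.90},
}

def needs_corrective_action(project_type: str,
                            budget_overrun_pct: float,
                            quality_score: float) -> bool:
    """Return True if the project is outside its agreed tolerances."""
    t = TOLERANCES[project_type]
    over_budget = budget_overrun_pct > t["max_budget_overrun_pct"]
    below_quality = quality_score < t["min_quality_score"]
    return over_budget or below_quality

# The same numbers - 15% over budget, 0.92 quality - are within tolerance
# for the chatbot but would trigger corrective action for the website.
print(needs_corrective_action("genai_chatbot", 15, 0.92))   # False
print(needs_corrective_action("ecommerce_site", 15, 0.92))  # True
```

The point isn't the code itself but the design choice: the governance check is the same for every project; only the tolerances differ by project type.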
So you can use existing governance processes to manage AI projects - great. You might be wondering, is there anything unique about AI that requires a new process or capability to be created in order to govern it effectively?
I’m glad you asked. The answer is yes.
Testing is simultaneously the most important and least understood element of AI governance. Most organisations have things they’d like to improve about their current testing capabilities (to put it mildly). But asking test teams to take on the additional challenge of learning how to test AI applications - with their experimental nature and unpredictable outputs - can be quite daunting. Yet without a good testing framework that is specific to AI, it’s difficult to get the inputs you need to participate in a governance forum. You need to be able to articulate the current state of quality in an AI solution in order to participate in the conversation.
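One way to see why AI testing differs from conventional testing: deterministic software can be checked with exact-match assertions, but LLM outputs vary from run to run, so AI test suites often score a batch of sampled outputs and assert a pass *rate* against an agreed threshold instead. Here's a minimal, hypothetical sketch of that pattern (the quality check and threshold are illustrative, not a recommended framework):

```python
# Instead of asserting one exact output, score many sampled outputs
# and compare the pass rate to a governance-agreed tolerance.

def contains_required_facts(answer: str, required: list[str]) -> bool:
    """A crude quality check: did the answer mention every required fact?"""
    return all(fact.lower() in answer.lower() for fact in required)

def pass_rate(answers: list[str], required: list[str]) -> float:
    """Fraction of sampled answers that pass the quality check."""
    passed = sum(contains_required_facts(a, required) for a in answers)
    return passed / len(answers)

# Three sampled chatbot answers to "What is our refund window?"
answers = [
    "Refunds are available within 30 days of purchase.",
    "You can get a refund within 30 days.",
    "Our policy allows returns.",  # vague - misses the specific window
]

rate = pass_rate(answers, required=["30 days"])
print(f"{rate:.0%} of sampled answers passed")  # 2 of 3 pass
assert rate >= 0.6, "Quality below agreed governance tolerance"
```

A pass rate like this is exactly the kind of input a test team can bring to a governance forum: a current, quantified statement of quality rather than a binary "it works."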
Unpacking the full approach for testing AI applications is outside the scope of this week’s edition - we’ll cover that in a future post.
AI governance isn’t something to be afraid of - in fact, it's necessary, and it needn’t be invasive. The more teams can adapt their processes to the unique needs of AI projects, the more they’ll be able to manage cost, speed, quality and risk without sacrificing progress.