The 80s and 90s were the golden age of action movies, with studios falling all over themselves to capitalize on the most bankable stars with as many original films and sequels as they could get a green light for. One such film was 1994’s Speed, a no-holds-barred thrill ride set on an LA bus during rush hour. It established Keanu Reeves and Sandra Bullock as leading man/lady material, won two Academy Awards (yes, really), and left audiences wanting more. Approval for a sequel seemed like a foregone conclusion, and lo and behold, Speed 2: Cruise Control was released three years later.
As you can probably tell from the above image, it did not fare nearly as well and is consistently ranked as one of the worst sequels of all time (though I, for one, thoroughly enjoyed re-watching it a decade after its release). Much like the titular vessel in the movie, AI implementation projects are at risk of crashing and burning if leaders are so locked in on the end goal that they’re not attuned to changing conditions that require a course correction.
How’s that for a segue?
In this week’s edition of The Path, we’ll talk about some of the ways AI efforts go wrong and what teams can do about them.
(1) Trying to do too much
It’s easy to get carried away with the potential of AI when you’re first exposed to what it can do. You start thinking about all of the manual processes in your business that could be improved with AI and suddenly you’ve got a team spun up working on a custom solution to “AI-ify” everything, burning through tens of thousands of dollars a month. A tale as old as time.
The challenge with this approach, as tempting as it is, is that AI’s inherent unpredictability and inconsistency mean that the more tasks you try to cram into one solution, the more unreliable it becomes. Solutions for end-to-end workflows usually involve multiple discrete agents/bots working together and talking to each other, and even those require the ability to define workarounds or revert to manual process steps when one of the AI components stops functioning as it should.
What to do about it: if you’re looking at a long or complex process as an opportunity for AI, try breaking it down into sub-processes or individual steps first. Look at each one with a critical eye and determine which would add the most value and are simple enough that an AI solution could handle them with a high degree of confidence every single time. If you start there, you’ll get a sufficient amount of value (per the 80/20 rule) to make the effort worthwhile, and you’ll have a strong foundation to build on in the future.
(2) Over-relying on sales literature
When implementing an OOTB AI tool, be it something as wide-ranging as MS Copilot or something more narrowly scoped, many teams have a tendency to assume that the tool will do exactly what it says it will, or that it’s as easy to configure as the box claims. This is flawed thinking: the amount of AI-washing in the market today makes it very difficult to tell fact from fiction, and building an entire AI initiative around the implementation of one tool (rather than defining and focusing on the problem to be solved first) can lead to disappointment.
What to do about it: two things can help you avoid this pitfall. One is to have a very clear idea of what you are trying to achieve with AI before you decide on a solution. If you are just trying to “implement AI”, you’re likely to spend money on the wrong tool, or worse, a tool you don’t need at all. The second is to have a defined rollout and experimentation plan to make sure that the tool you’ve selected is a good fit for the problem you want to solve. Better to invest a small amount in some licenses for testing purposes before making a decision than to go “all in” without a plan and regret it later.
(3) Not governing appropriately
We’ve talked before in The Path about AI governance, and while it’s no more popular today than it was a few months ago, governance is still an incredibly important element of successful AI projects. Because AI is an unpredictable, emerging technology, it needs stronger controls to ensure value is being created safely. Leaders and budget-holders will pull the plug on projects that don’t live up to their business case or that introduce unnecessary or uncontrolled risks. At the same time, it’s important not to sacrifice all of your team’s innovative spirit and freedom to experiment on the governance altar.
What to do about it: introducing a lightweight governance framework to help steer your project doesn’t require a massive effort. It can be as simple as creating a best-guess business case, communicating risks and reporting on mitigations, consistently reporting on progress, and evaluating your solution against the value levers you’re trying to move. If you do these things, you’ll find that you are better able to anticipate and react to problems, which will lead to better outcomes down the road.
(4) Losing sight of the human element
It’s no secret that we at Pathfindr believe that AI innovation starts and ends with people. People are the ones who come up with the best ideas for where AI can make a difference, and they’re the ones who will need to use it effectively to realize its full potential. Teams that forget this do so at their peril: you could invest serious money in building an AI solution that is difficult to use or that nobody wants. That’s why it’s critical to seek feedback early and often during ideation and development.
What to do about it: involve your end users during the initial requirements gathering phase, and get them a working prototype of the product as soon as possible for them to react to. There’s no better way to gather or refine requirements than to build something first and put it in front of your audience - they’ll tell you what works and what doesn’t, what they like and what they don’t.
If you feel like your AI voyage is perilously close to drifting out of control (that’s the last nautical metaphor - I promise), or if you don’t know where to start, you can join our workshop next week that will help you get a handle on your AI strategy and give you lots of ideas for making a difference in your organization. It’s the only AI masterclass that will leave you saying “AI can do WHAT?!?!”