Nate Buchanan Director, Pathfindr
Previously, we talked about different ways to calculate value from AI implementation. We focused on the different types of value, where it could be found across an organization and the things to keep in mind when you’re trying to track it.
What we DIDN’T focus on was the other side of the discussion. Value is all well and good - everyone wants it, and if you believe what the AI influencer community has to say, it’s incredibly easy to get from this world-changing technology. But what’s it going to cost you?
This week we’re breaking down that question into three separate components that we believe are important considerations when you’re deciding where and when to invest money into AI solutions. They are:
- Cost; or, how much will we need to spend?
- Complexity; or, how difficult is it going to be?
- Risk; or, what should we be worried about?
Each of these is very important in its own right, and they are often intertwined. For example, a solution that is highly complex is likely to cost more or introduce more risk. Evaluating each AI use case that your team is considering against these three elements will help you get the full picture and make an informed decision on whether to proceed.
Let’s look at cost first. There are three components of cost that need to be included in your analysis:
- Infrastructure Cost - this is the cost of building the platform required for development, testing and hosting of the AI solution. If you’ve already invested in AI capabilities from the likes of Microsoft, Google, or Amazon, and have a team in place able to build solutions on one of their platforms, that is a sunk cost that should be baked in already. But if you are starting from scratch, or if the use case you’re considering requires major updates to the platform you’ve got in place, this can be a major factor.
- Build Cost - assuming your infrastructure is good to go, build cost is the cost of developing and testing the specific use case. This could be treated like building any other application, including the cost of confirming requirements, developing the code, testing it, and deploying it into production.
- Operating Cost - with AI applications, you need to break your operating cost into two buckets: the cost to maintain and support the application in production (fixing defects, answering user questions, etc.) like you would any other product, and the compute cost required to operate the solution. For complex, text- or API-heavy applications, the monthly bills can start to add up, so it’s important to baseline usage every month to get some predictability in your budget.
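To make the three cost components concrete, here is a minimal sketch of a first-year cost estimate. The function name and all figures are illustrative assumptions, not real benchmarks; a platform you’ve already paid for is treated as a sunk cost of zero, per the point above.

```python
# Hypothetical sketch: a first-year cost estimate combining the three
# components above. All figures are illustrative, not benchmarks.

def first_year_cost(infrastructure, build, monthly_support, monthly_compute):
    """Total first-year cost: one-off costs plus 12 months of operating cost."""
    operating = 12 * (monthly_support + monthly_compute)
    return infrastructure + build + operating

# Example: platform already in place (sunk cost counted as zero),
# a $60k build, and modest monthly support and compute bills.
total = first_year_cost(infrastructure=0, build=60_000,
                        monthly_support=2_000, monthly_compute=1_500)
print(total)  # 102000
```

Even a back-of-the-envelope model like this makes the operating tail visible: here, twelve months of support and compute add up to 70% of the build cost.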
Next, let’s consider complexity. While some elements of cost may be unknown during the ideation phase of a use case, complexity is usually pretty simple to figure out during requirements gathering.
- Data Complexity - we’ve talked in previous editions about how your data doesn’t need to be perfect for you to get benefit from AI. While that’s true, it certainly doesn’t hurt to have high-quality data in the right place at the right time. Depending on the application you’re trying to build, user experience and outcomes can vary widely based on your data situation, and you might need to make some key decisions as to whether data needs to be cleaned up or reorganized before proceeding.
- Process Complexity - at Pathfindr we love to dive deep into a process to really understand it before we make any recommendations regarding AI. Processes tend to be more complex than they need to be, and it’s good to look at them end-to-end with a critical eye before deciding what to automate or change. Once you’ve done this exercise, you’ll have a good idea of how difficult it will be to apply AI as an accelerator.
- Oversight Complexity - most AI solutions require a human in the loop at some point (sometimes multiple points). In some cases, even after a process has been simplified as much as possible, the need for manual oversight remains, whether due to regulatory requirements, the unreliability of AI, or some other factor. It’s important to evaluate how much human input will be required once the target state is achieved so that you can account for that when making your decision.
Last - but certainly not least - we have risk. Much has been written about AI risk elsewhere, and we don’t have the space to cover all of that here in this edition. But here are three elements of risk that are worth considering as you weigh the ROI of an AI idea.
- Privacy Risk - this is probably the most common worry related to AI applications. Everyone is concerned about company data or client data being used to train models, or being accidentally (or maliciously) released to the public. Fortunately, there are safeguards available, particularly if you are using one of the hyperscalers as your AI platform of choice. It’s something to be aware of, but not a reason to avoid AI altogether.
- Regulatory Risk - this is particularly relevant for regulated industries such as finance, utilities, health care and so on. Your industry or the government may have restrictions on how you can use AI in your business. In Australia at least, there is no regulation or law on the books today restricting how AI can be used in business, but legal frameworks for AI regulation are currently being established in the EU and UK. Expect Australia to follow suit.
- Reputational Risk - this area of risk is largely up to you to control and manage. If you are implementing an internal use case for a small number of staff, it’s unlikely that you’ll end up in the paper for the wrong reasons. Conversely, if you create a customer chatbot and widely advertise it, only for it to insult people when they use it, that could potentially be an issue.
As you can see, there are lots of things to consider when deciding what AI application to build next. If you can pull together a 360-degree view as an input to the decision-making process, you’ll give everyone comfort that you’ve thought through all the angles, which will inspire confidence no matter which direction the team decides to go.
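One lightweight way to pull that 360-degree view together is a simple scoring rubric across the three lenses. The weights, scale, and example use cases below are assumptions for illustration only, not a prescribed method; the point is that scoring cost, complexity, and risk side by side makes trade-offs between candidate use cases easy to compare.

```python
# Hypothetical sketch: score each candidate use case 1-5 on the three
# lenses (lower is better), then rank by a weighted total.
# Weights and scores are illustrative assumptions.

WEIGHTS = {"cost": 0.4, "complexity": 0.3, "risk": 0.3}

def weighted_score(scores):
    """Combine 1-5 lens scores into one number; lower is easier to justify."""
    return sum(WEIGHTS[lens] * scores[lens] for lens in WEIGHTS)

use_cases = {
    "internal document search": {"cost": 2, "complexity": 2, "risk": 1},
    "customer-facing chatbot":  {"cost": 3, "complexity": 4, "risk": 5},
}

# Rank from easiest to hardest to justify.
for name, scores in sorted(use_cases.items(), key=lambda kv: weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```

A rubric like this won’t make the decision for you, but it forces every use case through the same three questions before any money is committed.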
Other Blogs from Nate
AI for Quality Engineering
Continuing our AI series that we began in last week’s edition with our deep-dive on how AI can make a difference in private equity, this week we’ll focus on a capability instead of an industry.
AI for Private Equity
Occasionally at The Path, we like to take a break from our regular, Pulitzer-worthy content to write a deep dive on how AI can make a difference in a particular industry. This week we’re focusing on private equity and how GPs and their management teams can use AI to manage risk, optimize performance, and seize opportunities that others might miss.
It's not too late
Specifically, we’re going to unpack a particular finding in The State of Generative AI in the Enterprise, a report based on data gathered in 2023 and published by Menlo Ventures. Over 450 enterprise executives were surveyed to get their thoughts on how Gen AI adoption has been going at their companies.
Good AI Governance
It may not be everyone's favorite corporate function... but it's very necessary.
No corporate buzzword elicits as many reactions - most of them negative - as “governance”. Whether it’s a Forum, Committee, or Tribe, anything governance-related is often perceived as something that gets in the way of progress, even if people acknowledge that it’s necessary.
Great AI, Great Responsibility
For every article, post, or video excitedly talking about the potential of AI, there is another one warning about its dangers. Given the press and hype around each new AI breakthrough, it’s no surprise that governments, business leaders, and academics are closely tracking the development of the technology and trying to put guardrails in place to ensure public safety.
AI for CFOs
For those who think about corporate financials all day, it’s tough out there right now. That won’t come as a surprise to CFOs, or people who work in a CFO’s organization, but it was certainly a wake up call for me as I started learning on the job at Pathfindr.
Bang for your AI Buck
In this blog, we will show you how to put together a value framework that will help your team decide where to invest in AI capabilities and how to maximize the return on that investment.
Build vs. Buy vs. Wait
In this blog, Nathan Buchanan explains why strategic decisions around AI implementation can be so difficult to make.
Righting the AI Ship
In this week’s edition of the Path we’ll talk about some ways that AI efforts go wrong, and what teams can do about them.
AI for Purpose
If you're a Not For Profit, you've probably heard that AI can help you address these needs, but you’re not sure where to start, or how to afford it even if you did. What can you do?
AI - The only thing to fear is inaction
Why you shouldn't be afraid to take a stab at AI in your business
AI Revolution
Spoiler Alert: No.