Considering feasibility, desirability, and viability when developing AI products
Artificial Intelligence is rapidly evolving as it is used in a growing number of applications across business and society. The majority of AI research is, of course, dominated by computer science, focusing on i) developing more innovative algorithms and ii) designing the processors and storage needed for different applications.
While a huge range of AI and machine learning (ML) prototypes are being developed for a variety of applications, only a slim few make it into production and have a long-lasting impact. As more of them creep into the mainstream, however, understanding the underlying basics is becoming more relevant for many product managers. But today’s ‘product people’ are a relatively heterogeneous bunch. For some, the focus is almost exclusively on user experience (for example, their product’s value proposition might rest wholly on an amazing user interface), whereas other product managers might already manage products that require a deep understanding of data and code.
As ML systems become more powerful and ubiquitous, understanding ML will become a necessity for product managers at both ends of the spectrum, but for slightly different reasons. For the former UI-focused product managers, ML features will radically change how users interact with their products. On the other end of the spectrum, product managers who are in charge of technical platforms and APIs will be more concerned with how AI algorithms are integrated.
Given that product management is as big a topic as machine learning itself, we’re inevitably oversimplifying here. So, let’s take a step back and consider a fundamental question: When is it worthwhile to develop an AI product?
A very useful tool that most product managers have heard of is the so-called innovation sweet spot that was popularized by IDEO in the early 2000s. It explores the feasibility, desirability, and viability of a product, and worthwhile ideas tend to have a good measure of all three. If you aren't familiar with the framework, you can start by reading this article. Let’s explore this concept in the context of machine learning products.
Although feasibility isn’t the traditional starting point when evaluating a product idea, it’s where machine learning products differ most from traditional software products. This is especially the case since we are still in the very early days of ML systems.
Although making estimates can be difficult, we have become, and continue to become, very good at evaluating what’s possible with software. When you describe a problem to a developer, for example, they’re able to think of relevant technologies and libraries, while you might be thinking of previous products where the problem has been solved. These familiar thought patterns are bound to pop up when evaluating ML feasibility, but they don’t transfer as reliably.
Some of the questions you might ask yourself and your team when assessing feasibility might include:
What problem are we solving?
Putting the problem into perspective is necessary for any issue that needs solving, but it’s even more important when dealing with significant uncertainty. For instance, if your idea is to detect flawed products on a production line, then granularity becomes important: how big a reduction in faulty products reaching the customer are we looking for?
Do we have data about the problem? If not, how can we get it?
To say machine learning is all about data is a massive understatement. While ML doesn’t always need to revolve around so-called ‘big data’, there needs to be enough high-quality data related to a problem in order to solve it. Some data, such as user actions within an application, are easy to acquire; they’re probably already stored somewhere that you can access easily.
In contrast, other data is more difficult to get hold of, such as large, high-quality labeled image sets. Consider a production line as an example: you’ll need an image set that contains both perfect and defective products, with flaws clearly marked and labeled. These images also need to be shot from the same angles that the sensors on the production line will use.
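To make “clearly marked and labeled” concrete, a labeled image set in practice often boils down to a simple manifest alongside the image files. The field names and values below are purely hypothetical, a sketch of what one entry might look like:

```python
import json

# Hypothetical manifest entry for one production-line image.
# Defect regions are marked as bounding boxes in pixel coordinates.
example = {
    "image": "line3_cam2_000481.png",
    "camera_angle": "cam2",       # must match a sensor position on the line
    "label": "defective",         # or "ok" for flawless products
    "defects": [
        {"type": "scratch", "bbox": [412, 130, 478, 162]},  # x1, y1, x2, y2
    ],
}
print(json.dumps(example, indent=2))
```

Producing thousands of such entries, consistently and from the right camera angles, is exactly the labeling effort that makes this kind of data expensive.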
Are patterns present that will make sense for an algorithm?
The most difficult thing about ML is that there’s often a lot of upfront work, led by data scientists, to evaluate a dataset and experiment to see whether it contains patterns that an ML model can make sense of. Unlike with traditional software, feasibility is difficult to evaluate without taking a deep dive and getting your hands dirty—speaking metaphorically, of course.
In traditional software, feasibility can almost be described as binary (possible or not). In machine learning, however, feasibility is more of a scale, and that spills over into what’s desirable.
Knowing what your potential users want can be tricky, and this is particularly true when it comes to AI products and ML systems. At a basic level, evaluating the desirability of a machine learning product is the same as evaluating a regular piece of software, but on a deeper level, there’s something of a trap.
While having AI sitting at the core of every product seems desirable—and, indeed, many companies are looking at implementing AI in their full product offerings—there are additional questions that need to be asked.
How well will the algorithm perform (and how well does it need to)?
This question relates to both the feasibility and desirability of a machine learning solution. There are no simple answers here, however.
You might find that an algorithm detects faults at a rapid rate but also produces false positives along the way. What impact will this have in production? Returning to the example of a production line, workers may come to ignore the algorithm altogether because of its false positives, which would simply make your ML solution undesirable.
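The false-positive trap is easy to underestimate because of base rates: when genuine defects are rare, even a seemingly accurate detector produces mostly false alarms. A back-of-the-envelope sketch, using purely illustrative numbers (the defect rate, sensitivity, and false-positive rate are assumptions, not figures from any real line):

```python
# Illustrative production-line numbers (assumptions, not real data).
defect_rate = 0.01          # 1% of items are actually faulty
sensitivity = 0.95          # detector flags 95% of true faults
false_positive_rate = 0.05  # detector flags 5% of good items

items = 100_000
true_faults = items * defect_rate                           # 1,000 items
flagged_faults = true_faults * sensitivity                  # 950 caught
false_alarms = (items - true_faults) * false_positive_rate  # 4,950 false alarms

# Precision: of everything the detector flags, how much is really faulty?
precision = flagged_faults / (flagged_faults + false_alarms)
print(f"False alarms: {false_alarms:.0f}")
print(f"Precision: {precision:.1%}")
```

Under these assumptions, roughly five in six flags are false alarms, which is exactly the kind of experience that teaches line workers to ignore the system.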
How much control does the user have over the solution? Is it trusted?
Trust in machine learning is a major topic and one that we’ve covered previously. This is because ML models are often seen as black boxes not just by users but also by the people developing them. An important thing to consider is how much information you’ll need to convey to convince the user, and how much control over its decisions you give them.
In the production line example, will you need to present the algorithm’s confidence for each prediction, or will staff on the line be able to adjust the threshold at which a defective product is discarded? Keep in mind that this is another oversimplification; handing over control of an algorithm is technically difficult to implement and likely to be difficult for users to understand.
Once you’ve got over the feasibility and desirability hurdles, the question becomes one of viability and worth. There are many ways to evaluate viability, and product managers will usually need to consider whether their new product idea aligns with strategy. Some of the ML-specific questions that relate to viability might include:
Will long-term value outweigh the short-term cost?
Developing machine learning systems can be very expensive, especially when you consider that data science and ML engineering are two of the most sought-after professions today. These people don’t come cheap. Even if you’ve got them within your team, there’s plenty of other work that they could be taking on. In addition to this, you need to remember that true subject matter experts are in even shorter supply. Quality data can also be expensive to acquire, and training models isn’t exactly free either.
Then, when it comes to building a model, there’s a lot of code to write and infrastructure to set up. Data collection, model serving, and eliminating bias are all complicated tasks. In addition, ML is new to most people, and there’s a significant amount of education needed to get stakeholders up to speed so that they don’t hold back progress.
Due to the sheer cost and expertise that’s needed, it’s always worthwhile to begin by asking whether the problem can be solved with a purpose-built system rather than AI. If you do proceed with an AI solution, ask how your team can reuse research and technologies built by others—this could save you a lot of money.
Will the problem change over time?
There are two aspects that can affect the viability of an AI product over time. The first is how dynamic the environment in which the solution is deployed is. The second is whether your solution can be replicated for other, similar problems, meaning you can reuse most of the engineering over and over again.
Returning to the production line example for the first point: if the product being manufactured changes frequently, you’ll need to consider the cost of acquiring new data and re-training the model. For the second point, if fault detection has been designed for a single product model, a later change to that model could render your ML solution unfit for purpose.
It can be argued that there’s a certain level of naivety among product managers when it comes to machine learning. However, as we’ve explored, feasibility, desirability, and viability are hugely interconnected in AI products, and uncovering their relationships is a necessity for all product managers, whether technical or non-technical.