No – machine learning engineers needn’t worry too much. While AI is being increasingly used in the field of computer science, this is often done to augment the skills of professionals like software and machine learning engineers rather than replace them outright.
But that doesn’t stop some people from worrying. The question of automation is one that comes up often. And it’s no wonder when you consider just how powerful artificial intelligence (AI) is. After all, it can write its own songs, drive cars, play video games, and help to treat patients. It’s also beginning to write its own code. But will machine learning engineers be automated as a result of further advances in this still-emerging field?
While many will be quick to dismiss this question as fanciful hyperbole, there is some merit to it. According to estimates by McKinsey, around 45 percent of the work activities people are paid to perform could be automated using current technology. That’s a worrying statistic, especially as more and more processes are being automated. As the years go by, we’re undoubtedly going to see some jobs become obsolete.
You can see this happening to some extent already in the field of software development. As demand rose for faster, better software testing (“Quality at Speed”), automation tools began springing up almost out of nowhere.
This naturally raises the question of whether developers are making themselves obsolete by building these automation tools. If developers keep building machines that can write their own code, what will we need human developers and machine learning engineers for in the future?
Before we tackle this question, let’s build a little more context by looking briefly at the history of AI and what it’s capable of.
While the term artificial intelligence was coined in the 1950s, the field didn’t gain real momentum until the 1990s, when research accelerated after decades of slow progress.
AI’s first mainstream win came in 1997, when IBM’s Deep Blue beat chess champion Garry Kasparov. Fourteen years later, in 2011, IBM’s question-answering AI system Watson won the quiz show “Jeopardy!” by beating two of its most successful champions, which pushed AI further into the mainstream.
While AI has become highly developed over the last 25-or-so years and the above achievements are impressive, teaching a computer to play chess is one thing; successfully teaching it to automate an entire profession is another.
So, what can AI do?
Well, as we’ve seen, it can beat chess grandmasters and Jeopardy! champions. But chess is a game with a finite number of moves and outcomes. And as for Watson, it was trained on thousands of existing clues and responses that were labeled as either correct or incorrect.
It’s worth noting that although Watson did develop the ability to understand the meaning of language to an extent through machine learning, natural language processing is a long way away from machine learning engineering, a field in which highly trained (and very highly paid!) humans spend years crafting highly complicated code to enable computers to automate tasks.
So, although AI has come a long way and is capable of driving cars (with relative safety, though there’s plenty of room for improvement), booking appointments for you over the phone, providing customer support, writing social media ads, predicting which Netflix shows you might like, and understanding you when you speak to it (most of the time), there’s plenty it can’t do. It can’t gain you admission into university or run a Twitter account without supervision, for example. Indeed, there are far more things AI can’t do than things it can, many more than most people think.
One question that commonly crops up is, “Can AI write its own code?”
The short answer is: sort of.
In June 2021, GitHub released a technical preview of a program that uses AI to assist programmers. When a programmer begins typing a command, a database query, or a request to an API, the program, known as Copilot, will guess the programmer’s intent and write the rest.

A year earlier, in July 2020, an AI language-generating system called GPT-3, the successor to the so-called “world’s most dangerous AI” GPT-2, was used to build a web page from nothing more than descriptions written out in plain English.
But just as AI can, to an extent, write code like humans, that code, like human-written code, can be prone to bugs.
In the case of Copilot, those testing the program found that subtle errors were able to easily creep into their code when accepting proposals. This raises many concerns, not least of which is that of complacency: Programmers who get too comfortable with programs like Copilot could easily let these bugs slip the net.
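To see how easily that can happen, consider a hypothetical completion (invented for illustration, not an actual Copilot suggestion) that looks right at a glance but contains an off-by-one error:

```python
# Hypothetical illustration: an auto-completed helper that "looks right"
# but is subtly wrong, alongside the corrected version.

def sum_up_to_buggy(n):
    """Intended: sum of 1..n inclusive -- but range() excludes its stop value."""
    return sum(range(1, n))       # bug: should be range(1, n + 1)

def sum_up_to(n):
    """Corrected: includes n in the sum."""
    return sum(range(1, n + 1))

print(sum_up_to_buggy(5), sum_up_to(5))  # 10 vs. the correct 15
```

A reviewer skimming the buggy version would likely wave it through, which is exactly the complacency risk: the code runs, produces plausible numbers, and fails only on closer inspection.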
Meanwhile, in the case of GPT-3, testing has shown that the code it produces isn’t always useful. It also has a tendency to commit errors that are difficult to correct without human intervention. For example, when the program was asked the simple mathematical question of “What number comes before a million?”, GPT-3 replied with, “Nine hundred thousand and ninety-nine,” which is clearly wrong.
While AI and machine learning are very closely related, they’re not the same. Machine learning is a subset of AI and concerns the development of computer systems that can learn and adapt without following explicit instructions. In recent years, however, it has developed into an area of its own. Indeed, in many ways, machine learning takes AI to the next level by removing the limitations of finite outcomes and defined rules that constrained early AI technologies such as IBM’s Deep Blue.
At the most basic level, this is achieved with algorithms and statistical models that can analyze and draw inferences from patterns found in data. This is the very reason why data is such a valuable asset in our modern world. As the importance of data and machine learning has grown, so too has the demand for so-called “machine learning engineers” who are responsible for creating programs and algorithms that enable machines to act and learn without being given explicit instructions.
Machine learning’s approach is to look at millions upon millions of data points and learn from them over time. Unlike traditional computer programs, machine learning isn’t bound by rules-based programming and thus doesn’t have the same scalability limits and testing problems that are associated with traditional programming. The compromise, however, is that machine learning is in many ways an unknown; a black box. Even the engineers who work on these algorithms don’t entirely know how their machine learning models make their decisions.
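To make the contrast with rules-based programming concrete, here’s a minimal sketch in plain Python (with made-up toy data) of what “drawing inferences from patterns in data” means: a tiny least-squares fit infers a rule from examples instead of having the rule hard-coded.

```python
# Minimal sketch: "learning" a rule from data instead of hard-coding it.
# The model's parameters come from the examples, not from explicit rules.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy training data that follows the hidden pattern y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)               # the learned parameters: 2.0, 1.0
print(slope * 10 + intercept)         # prediction for unseen x = 10: 21.0
```

Real machine learning models do this with far more parameters and far more data, which is precisely why their decision-making becomes hard to inspect.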
Just like in software engineering, automation has started to show up in machine learning. By automating the modeling tasks necessary to develop and deploy machine learning models, automated machine learning (or “AutoML”) enables machine learning solutions to be implemented with ease.
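As a rough illustration of the idea behind AutoML (a miniature sketch, not the API of any real AutoML framework), the snippet below automates model selection: it scores a few candidate models on held-out data and keeps the best one. The candidate models and data are invented for the example.

```python
# AutoML in miniature: a loop, rather than an engineer, picks the model.
# Candidate "models" here are simple prediction functions; real AutoML
# systems search over model families, hyperparameters, and pipelines.

def mse(model, xs, ys):
    """Mean squared error of a model on a validation set."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_y = [2, 4, 6, 8]               # training targets for x = 1..4
val_x, val_y = [5, 6], [10, 12]      # held-out validation data

mean_y = sum(train_y) / len(train_y)
candidates = {
    "constant_mean": lambda x: mean_y,   # always predict the training mean
    "identity": lambda x: x,             # predict y = x
    "double": lambda x: 2 * x,           # predict y = 2x
}

# Automated model selection: keep the candidate with the lowest error.
best_name = min(candidates, key=lambda name: mse(candidates[name], val_x, val_y))
print(best_name)  # "double" -- the candidate that matches the data
```

The point is that the selection loop, not a human, decides which model ships; the engineer’s judgment moves up a level, to choosing the candidates and the evaluation criteria.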
But programming algorithms and modeling only represent a small part of the workload of a machine learning engineer. Data management, such as data munging and data cleaning, is a far bigger and more resource-intensive process. Automating certain elements of machine learning frees up an organization’s machine learning engineers and enables them to focus on more complex issues.
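To illustrate why the data-management side of the job is so labor-intensive, here’s a small hand-rolled sketch of typical cleaning steps (the records and field names are invented for the example): dropping incomplete rows, normalizing inconsistent formatting, and removing duplicates.

```python
# A sketch of routine data cleaning: the unglamorous work that dominates
# an ML engineer's time before any modeling can happen.

raw_rows = [
    {"name": " Alice ", "age": "34"},
    {"name": "alice", "age": "34"},    # duplicate once formatting is normalized
    {"name": "Bob", "age": None},      # missing value
    {"name": "Carol", "age": "29"},
]

def clean(rows):
    seen = set()
    cleaned = []
    for row in rows:
        if row["age"] is None:                 # drop incomplete records
            continue
        name = row["name"].strip().title()     # normalize formatting
        if name in seen:                       # drop duplicates
            continue
        seen.add(name)
        cleaned.append({"name": name, "age": int(row["age"])})
    return cleaned

print(clean(raw_rows))
# → [{'name': 'Alice', 'age': 34}, {'name': 'Carol', 'age': 29}]
```

Notice how many of these decisions (is a lowercase “alice” the same person? should missing ages be dropped or imputed?) are judgment calls, which is exactly why this work resists full automation.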
So, while there are parts of core machine learning that can be and are regularly automated, the data management side cannot be automated away. This is because removing humans and human judgment from the equation can lead to bias, a problem that is detrimental to machine learning models.
Regardless, there’s still the ever-present worry on people’s minds that developments in AI and machine learning are going to render their jobs redundant.
Indeed, when you consider how quickly computer science automation is changing the playing field in technology, including in areas like software development and machine learning, it’s understandable why some are nervous about the future.
This isn’t helped when you read statistics like “43% of businesses plan to reduce their workforce due to technology integration” or “computer support specialists face a 72% chance of automation” – but these figures are often quoted out of context and don’t paint a complete picture.
The truth is that machine learning engineers, software developers, and other computer science professionals should understand that the future of AI in engineering isn’t one of complete automation. Rather, it’s one of automating specific, often repetitive tasks to help human engineers work faster and more efficiently, with fewer bugs and setbacks. In essence, and in many ways paradoxically, AI is going to help human engineers rather than be a threat or a burden.
As previously mentioned, we’re already seeing software development being heavily augmented by clever AI tools that dramatically improve workflows by enabling better analysis, faster work, and more comprehensive testing.
Just look at Snyk Code (previously DeepCode) as an example.
In Snyk Code, machine learning is used to analyze and clean up code written by software engineers in real time. In many ways, it is to software engineers what Grammarly is to writers and editors. And guess what? None of these professionals have seen their work fully automated because of these tools, but their workflows have benefited massively. Even though more advanced tools do exist that purport to “automate copywriting”, they are severely limited because machine learning models simply do not have the same level of contextual understanding as humans, nor are they capable of critical thought.
Coming back from that slight tangent, let’s return to our initial question of whether machine learning engineers will be automated and replaced by AI.
The answer is in some ways yes, but also no.
Only a human programmer can build code based on their own understanding of precise specifications and requirements. As we have explored, only programmers can make sense of tricky concepts and questions that don’t have exact answers, or that have multiple possible answers.
While it’s true that some aspects of a machine learning or software engineer’s day-to-day are going to be automated and augmented by AI, this doesn’t mean they’re going to lose their jobs to it. In fact, the opposite is likely to happen: It’s going to make engineers better at what they do.
Not only that: paradoxically, automation will also increase the demand for engineers’ skills.
As technology matures, more and more businesses are going to want to deploy software solutions or use machine learning models to benefit their bottom lines. Since automation reduces the cost of developing such solutions by improving engineer workflows and making them more productive, it reduces barriers to adoption for businesses.
In summary, engineers needn’t fret about the prospect of AI stealing their jobs. Although the field is changing and engineer workflows will be automated more in the future, engineers and AI tools will coexist in a symbiotic relationship, with the engineer as the net beneficiary.
Engineers should therefore embrace the positive changes that automation is bringing to the industry and be aware of tools like GPT-3 that can make their jobs easier.