KENNESAW, Ga. | Dec 19, 2022
Like many of you reading this, I've been pondering the implications of the leaps forward AI has taken in recent months. Between DALL-E, ChatGPT, and others, we've seen some sudden jumps in what is possible with AI. At the same time, there's been a lot of talk about what implications these technologies have for the job market. In this blog, I'm going to put forth an alternative, and somewhat contrarian, view that the jobs soon most at risk from AI could be some of the jobs we long thought least at risk. I'll outline how I came to this view by first reviewing some common data science themes and then drawing upon them conceptually to make my case about jobs.
In the world of analytics and data science, there are both supervised and unsupervised models. In the case of a supervised model, there is a clear target and there are precise ways to measure a model's effectiveness at predicting its target. While there is much debate about which assessment metric (or combination of metrics) is best to use to choose the winner, each of the metrics is well defined. For example, accuracy, recall, F1 score, and more are common for assessing classic response models. Once we pick a target, we can rank model performance cleanly and precisely. This is because we know with certainty the right and wrong answer for each prediction, and there is no debate over what the model is supposed to predict for each observation.
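To make that concrete, here is a minimal sketch of my own (assuming scikit-learn and made-up labels, none of which come from any real project) showing that once the true answers are known, these metrics are computed the same way by everyone:

```python
# A minimal sketch of how supervised metrics are unambiguous once the
# true labels are known. The labels and predictions are made up.
from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # known right answers
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # a model's predictions

# Each metric has a single, precise definition, so any two people scoring
# the same predictions against the same target get the same numbers.
print(accuracy_score(y_true, y_pred))  # fraction predicted correctly
print(recall_score(y_true, y_pred))    # fraction of true positives caught
print(f1_score(y_true, y_pred))        # harmonic mean of precision and recall
```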
Unsupervised models aren't as clean because there is no objectively correct answer. I spent a lot of time segmenting customer files using cluster analysis early in my career. With a cluster analysis, there is no provably correct target for either the number of clusters or what those clusters should be. There is a subjective process used to decide which of the many potential solutions to go with, and, in the end, there is no way to prove that one solution is superior to another. This subjectivity and lack of an objectively correct answer are why methodologies like clustering and marketing attribution can be so difficult to get agreement on, as I've written about in the past.
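As a rough illustration of that subjectivity (again my own sketch, using scikit-learn and randomly generated stand-in data rather than a real customer file), several cluster counts each produce a perfectly usable segmentation, and quality scores only inform a judgment call rather than prove an answer:

```python
# A minimal sketch of why clustering has no provably correct answer:
# several values of k each yield a valid solution, and a quality score
# like silhouette only informs a subjective choice.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
customers = rng.normal(size=(200, 4))  # stand-in for customer attributes

for k in (3, 4, 5, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(customers)
    # The silhouette score rates cohesion vs. separation, but it does not
    # prove which k is "right" -- the final choice is still a judgment call.
    print(k, round(silhouette_score(customers, labels), 3))
```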
Most discussions I've heard in the past about AI replacing jobs focused on the equivalent of supervised learning. In other words, jobs that have tasks with a specific and defined outcome are presumed at high risk of job loss. This is a valid argument to a large extent. We've seen factories replace humans on assembly lines for years, and AI will likely only accelerate existing trends like that.
However, people also talk about programmers, data scientists, data engineers, and similar professionals being displaced by AI. This could happen over time to some extent, but each of those professions involves more than simply solving the exact same problem again and again. For example, it is true that once I've defined a business problem, collected the proper data, cleansed that data, and created the proper metrics from it, the final step of building a model can be highly automated.
But there is a ton of higher-level thinking and problem-solving that goes into all of those initial steps that will be brutally difficult to replace with AI anytime soon. People who think that designing new, innovative code and analytical processes to solve novel problems can be handled with AI will be disappointed for years to come. AI will realistically focus on repetitive, already solved programming and data science tasks. Those aren't activities that will keep today's data and analytics professionals employed with a high salary anyway.
The mix of tasks that professionals in "supervised" jobs like data science perform could change, but AI will likely enable these professionals to get a lot more done a lot more quickly. This would be a win all around: an AI-enhanced professional can charge less for the same work because it takes them less time, and they can then take on more work to make up the gap. It is a classic price elasticity scenario: though the price of each analysis is lower, so many more outputs will be created that the value generated and income earned will both increase. Those who get left behind in supervised jobs will mostly be those who don't embrace AI.
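To put purely hypothetical numbers on that elasticity point (these figures are illustrative assumptions, not data from any study), a lower price per analysis can still mean higher total income if demand rises enough:

```python
# A purely hypothetical illustration of the price elasticity point: if AI cuts
# the time per analysis, a lower price per analysis can still mean more income.
hours_before, hours_after = 10, 2          # assumed hours per analysis
rate = 150                                  # assumed hourly rate
price_before = hours_before * rate          # 1500 per analysis
price_after = hours_after * rate            # 300 per analysis
analyses_before, analyses_after = 20, 120   # assumed demand at each price
print(price_before * analyses_before)       # 30000 earned before AI
print(price_after * analyses_after)         # 36000 earned with AI assistance
```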
What's recently dawned on me is that some of the jobs thought of in the past as the most creative and least automatable might well be the ones that fall hardest to AI in the near future. Jobs involving art, acting, or music, for example, are "unsupervised" in the sense that there is no objective way to specify what is good and what is bad. One person's art is another person's junk. One person's favorite actress is another person's unbearable hack. While there are actors, paintings, and songs that people will tend to agree are good or bad, it isn't possible to explain exactly why or to get everyone to agree.
Let's consider actors. Today, if you want a famous actor to play a role, you've got to pay them an (often large) fee. If they are hard to work with, you and the crew will have to deal with that daily. If the actor isn't quite getting across what you want, you'll have to shoot multiple takes. At some point, the subjective decision will be made that a scene is good enough to run with.
With the advances in AI, a director will soon be able to generate an artificial actor that looks and sounds exactly how they envision the character. They'll be able to prompt the avatar to deliver lines with precise cadence, tone, and emphasis. In other words, they'll be able to get to the subjectively good enough scene without ever dealing with a live person and for a fraction of the cost. Why put up with the management, drama (pun intended), and cost of a live actor when you can use an avatar that produces the same outcome? Similarly, why pay someone a huge fee for a custom portrait that can take weeks or months if you can generate one just as good and much more quickly with AI?
I made some bold statements here to get readers thinking, and I am sure that artists and actors will not be happy with my hypothesis. I also doubt those jobs will be taken away in the next year or two. However, I do think people are missing the fact that creative, "unsupervised" jobs may be at as much risk from AI as the "supervised" jobs that more often get discussed, if not more.
Regardless of the type of job, the constant will be that AI advances are going to force people to change what they do, how they do it, and in what mix. Nobody's job is safe, whether supervised or unsupervised, if we don't accept that reality and take active steps to learn how to adapt to the new reality of AI.
Originally published by the International Institute for Analytics
By Bill Franks | February 28, 2023