Guess how many of the 270 jobs reported in the 1950 census were eliminated by automation. Are you contemplating a quarter? Perhaps a third?
“Just one,” says Kellogg Assistant Professor of Management and Organizations Hatim Rahman. “And in case you are wondering, that job was an elevator operator.”
It’s a statistic he wants us to remember as we think about the future of work, notably the “fear that AI is going to suddenly lead to mass unemployment.” Rahman discussed how AI will affect our careers and society in a recent Insightful Leader Live webinar.
What is his take? “Decades of research shows that fear is unfounded.”
Instead, he adds, where we go from here is largely up to us, though we’ll need to be proactive to avoid having it decided for us. Here are four key points from his talk.
Even “Rapid” Transformation Occurs More Slowly Than We Believe
As much as it appears that technology is evolving so quickly that we can barely keep up, it takes far longer for those advancements to become fully integrated into society. This will be true for artificial intelligence as well. “It’s going to take a long time for it to penetrate an industry, especially in ways that will affect your career,” Rahman says.
Individual illustrators, translators, and journalists who have already lost jobs to generative AI may find this disappointing. However, taking a step back, it is evident that, while these occupations are the first to be impacted by technology, they are far from obsolete.
Instead, they are evolving as AI is gradually introduced into multiple job streams and new infrastructure is developed to support it. “The more complex the technology, the more technical, human, and monetary resources are needed to develop, integrate, and maintain [the] technology,” Rahman explains.
This means that we, as a society, have plenty of time to decide how to deploy artificial intelligence.

It’s Up to Us to Decide How AI Is Used
You’ve most likely heard the notion that AI is neither good nor bad, but simply a tool. Much will depend on how we choose to deploy it. And we have time to make this decision jointly and wisely.
We can use AI to replace as many workers as possible, or we can use it to strengthen talent and identify it in underserved areas. We can let robots make the majority of decisions about our healthcare, education, and defense, or we can keep people at the helm, ensuring that human values and objectives rule the day.
And it is not naive to believe that we will have a choice. There are already instances where we have chosen to prioritize human involvement. Take aviation: despite predictions that more than 90% of a pilot’s responsibilities could be automated, our society has chosen to keep well-trained pilots, capable of flying manually, in the cockpit in case something goes wrong.
According to Rahman, automation has worked fairly well for pilots. “In fact, in aggregate, the number of pilots and their pay has increased for years.”
Diverse Voices Are Needed When Determining Priorities
Of course, pilots have more than just their training; they also have powerful professional organizations that can lobby for them. Not all employees are as lucky, and it is unlikely that everyone will have the ability to influence employment decisions that affect them.
This is a genuine issue, states Rahman, “because without diverse voices and stakeholders, the design and implementation of AI has [reflected], and will reflect, a very narrow group of people’s interest.”
For example, much of the current discussion around generative AI has come from tech companies eager to find commercial applications for their technologies. Perhaps it is not surprising, then, that “a lot of the way we talk about AI is a hammer searching for a nail: ‘Here are large language models. How can we use them?’” Rahman says. “I don’t think that strategy will help us thrive. Instead, we should consider what output the AI aims to assess, forecast, or generate. And should we employ artificial intelligence to make such predictions? This is where we need diverse voices and professionals in the room to address the question.”
He suggests that everyone do whatever they can to contribute to the discourse. Even grassroots organizations of like-minded laypeople can help push businesses and local governments to develop and use AI in mutually beneficial ways.
And, because it is hard to rationally advocate for our interests if we are kept in the dark about decisions that affect us, we must likewise demand transparency in how AI systems are trained, deployed, and double-checked.
Let Machines Be Machines, and Humans Be Humans
Finally, Rahman argues that the term “artificial intelligence” is misleading because the technology is neither “artificial” nor “intelligent.”
AI is not truly “artificial,” because it is trained on massive amounts of human-generated data and fine-tuned by a small army of low-wage human workers. (It also has a substantial carbon footprint.) And AI is not truly “intelligent,” because it cannot reason in a meaningful way. Rahman demonstrated that if you ask a model like GPT-4, “What is the fifth word in this sentence?” it will give you a different answer each time, usually a wrong one.
Instead, AI is, for practical purposes, a processor of human inputs.
However, it can be extremely powerful. “AI tends to excel with efficiency, speed, and the scale at which it can be implemented,” according to Rahman. “AI doesn’t get tired or bored in the same way humans do.”
In contrast, he explains, people excel at innovation, emotional intelligence, and rapid adaptation to new situations. Whereas AI may require thousands of training examples to learn to distinguish between cats and dogs, a human toddler can make the same distinction after seeing only a few Goldendoodles.
With these relative strengths and weaknesses in mind, it becomes easier to think of AI systems as collaborators rather than replacements, capable of amplifying whatever human values and priorities we set.
Source: Forbes