TechTalk / Viewpoint
GenAI will address many of the risks it creates
The current debate about generative AI focuses disproportionately on the disruption it might unleash. While it is true that technological advances always disrupt legacy industries and existing systems and processes, one must not ignore the opportunities they can create or the risks they can mitigate
Michael R. Strain 19 May 2024

Pessimism suffuses current discussions about generative artificial intelligence (AI). A YouGov survey in March found that Americans primarily feel “cautious” or “concerned” about AI, whereas only one in five are “hopeful” or “excited.” Around four in ten are very or somewhat concerned that AI could put an end to the human race.

Such fears illustrate the human tendency to focus more on what could be lost than on what could be gained from technological change. Advances in AI will cause disruption. But creative destruction creates as well as destroys, and that process ultimately is beneficial. Often, the problems created by a new technology can also be solved by it. We are already seeing this with AI, and we will see more of it in the coming years.

Recall the panic that swept through schools and universities when OpenAI first demonstrated that its ChatGPT tool could write in natural language. Many educators raised valid concerns that generative AI would help students cheat on assignments and exams, short-changing their education. But the same technology that enables this abuse also enables its detection and prevention.

Moreover, generative AI can help to improve education quality. The long-standing classroom model of education faces serious challenges. Aptitude and preparation vary widely across students within a given classroom, as do styles of learning and levels of engagement, attention, and focus. In addition, the quality of teaching varies across classrooms.

AI could address these issues by acting as a private tutor for every student. If a particular student learns math best by playing math games, AI can play math games. If another student learns better by quietly working on problems and asking for help when needed, AI can accommodate that. If one student is falling behind while another in the same classroom has already mastered the material and grown bored, AI tutors can work on remediation with the former student and more challenging material with the latter. AI systems will also serve as customized teaching assistants, helping teachers develop lesson plans and shape classroom instruction.

The economic benefits of these applications would be substantial. When every child has a private AI tutor, educational outcomes will improve overall, with less-advantaged students and pupils in lower-quality schools likely benefiting disproportionately. These better-educated students will then grow into more productive workers who can command higher wages. They also will be wiser citizens, capable of brightening the outlook for democracy. Because democracy is a foundation for long-term prosperity, this, too, will have salutary economic effects.

Many commentators worry that AI will undermine democracy by supercharging misinformation and disinformation. They ask us to imagine a “deep fake” of, say, President Joe Biden announcing that the United States is withdrawing from NATO, or perhaps of Donald Trump suffering a medical event. Such a viral video might be so convincing as to affect public opinion in the run-up to the US election in November.

But while deep fakes of political leaders and candidates for high office are a real threat, concerns about AI-driven risks to democracy are overblown. Again, the same technology that allows for deep fakes and other forms of information warfare can also be deployed to counter them. Such tools are already being introduced. For example, SynthID, a watermarking tool developed by Google DeepMind, imbues AI-generated content with a digital signature that is imperceptible to humans but detectable by software. Three months ago, OpenAI added watermarks to all images generated by ChatGPT.

Will AI weapons create a more dangerous world? It is too early to say. But as with the examples above, the same technology that can create better offensive weapons can also create better defences. Many experts believe that AI will increase security by mitigating the “defender’s dilemma”: the asymmetry whereby bad actors need to succeed only once, whereas defensive systems must work every time.

In February, Google CEO Sundar Pichai reported that his firm had developed a large language model designed specifically for cyber defence and threat intelligence. “Some of our tools are already up to 70% better at detecting malicious scripts and up to 300% more effective at identifying files that exploit vulnerabilities,” he wrote.

The same logic applies to national security threats. Military strategists worry that swarms of low-cost, easy-to-make drones could threaten large, expensive aircraft carriers, fighter jets and tanks – all systems that the US military relies on – if they are controlled and coordinated by AI. But the same underlying technology is already being used to create defences against such attacks.

Finally, many experts and citizens are concerned about AI displacing human workers. But, as I wrote a few months ago, this common fear reflects a zero-sum mentality that misunderstands how economies evolve. Though generative AI will displace many workers, it also will create new opportunities. Work in the future will look vastly different from work today because generative AI will create new goods and services whose production will require human labour. A similar process happened with previous technological advances. As the MIT economist David Autor and his colleagues have shown, the majority of today’s jobs are in occupations introduced after 1940.

The current debate around generative AI focuses disproportionately on the disruption it might unleash. But technological advances not only disrupt; they also create. There will always be bad actors seeking to wreak havoc with new technologies. Fortunately, there is an enormous financial incentive to counter such risks – and to preserve and generate profits by doing so.

The personal computer and the internet empowered thieves, facilitated the spread of false information, and led to substantial labour-market disruptions. Yet very few today would turn back the clock. History should inspire confidence – but not complacency – that generative AI will lead to a better world.

Michael R. Strain is the director of economic policy studies at the American Enterprise Institute.

Copyright: Project Syndicate
