A-Z of Artificial Intelligence, Part 3
This three-part article covers thirty terms that the BBC has identified as essential to understanding artificial intelligence (AI). Much of the series is based on an excellent article I read on the BBC website.
Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.
Imagine going back in time to the 1970s, and trying to explain to somebody what it means “to google”, what a “URL” is, or why it’s good to have “fibre-optic broadband”. You’d probably struggle.
For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.
That’s no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose.
“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.” Ginni Rometty
A-Z of AI
Open-source
Years ago, biologists realised that publishing details of dangerous pathogens on the internet is probably a bad idea – allowing potential bad actors to learn how to make killer diseases. Despite the benefits of open science, the risks seem too great.
Recently, AI researchers and companies have been facing a similar dilemma: how much should AI be open source? Given that the most advanced AI is currently in the hands of a few private companies, some are calling for greater transparency and democratisation of the technologies. However, disagreement remains about how to achieve the best balance between openness and safety.
Prompt engineering
AIs are now impressively proficient at understanding natural language. However, getting the very best results from them requires the ability to write effective “prompts”: the text you type in matters.
Some believe that “prompt engineering” may represent a new frontier for job skills, akin to when mastering Microsoft Excel made you more employable decades ago. If you’re good at prompt engineering, goes the wisdom, you can avoid being replaced by AI – and may even command a high salary. Whether this continues to be the case remains to be seen.
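To make the idea concrete, here is a purely illustrative sketch of the difference a well-engineered prompt makes – the prompts, figures and role are invented, and no particular model or API is assumed; the code only builds the prompt text.

```python
# Illustrative only: two ways of asking a chat model the same question.
# A vague prompt leaves everything to chance; a structured prompt sets a
# role, a task, constraints, and the data to work from.

vague_prompt = "Tell me about our sales."

structured_prompt = """You are a financial analyst.
Summarise the quarterly sales figures below in three bullet points,
flag any quarter with a decline, and keep the tone neutral.

Q1: 120k  Q2: 135k  Q3: 128k  Q4: 151k"""

print(structured_prompt)
```

In practice, prompt engineers iterate on wording, add examples of the desired output, and test their prompts systematically across many inputs.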
Quantum machine learning
In terms of maximum hype, a close second to AI in 2023 would be quantum computing. It would be reasonable to expect that the two would combine at some point. Using quantum processes to supercharge machine learning is something that researchers are now actively exploring. As a team of Google AI researchers wrote in 2021: “Learning models made on quantum computers may be dramatically more powerful… potentially boasting faster computation [and] better generalisation on less data.” It’s still early days for the technology, but one to watch.
Race to the bottom
As AI has advanced rapidly, mainly in the hands of private companies, some researchers have raised concerns that competition between them could trigger a “race to the bottom” in terms of impacts. As chief executives and politicians compete to put their companies and countries at the forefront of AI, the technology could accelerate too fast for safeguards to be created, appropriate regulation to be put in place, and ethical concerns to be allayed. With this in mind, earlier this year, various key figures in AI signed an open letter calling for a six-month pause in training powerful AI systems. In June 2023, the European Parliament adopted a new AI Act to regulate the use of the technology, in what will be the world’s first detailed law on artificial intelligence if EU member states approve it.
Reinforcement learning
The AI equivalent of a doggy treat. When an AI is learning, it benefits from feedback to point it in the right direction. Reinforcement learning rewards outputs that are desirable, and punishes those that are not.
A new area of machine learning that has emerged in the past few years is “reinforcement learning from human feedback”. Researchers have shown that involving humans in the learning process can improve the performance of AI models, and crucially may also help with the challenges of human-machine alignment, bias, and safety.
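As a concrete illustration of the reward-and-feedback loop described above, here is a toy tabular Q-learning sketch – a classic reinforcement learning algorithm, written from scratch with invented names rather than taken from any library. An agent on a six-cell track earns a reward (the “doggy treat”) only at the final cell, and from that feedback alone learns which way to move.

```python
import random

# Toy Q-learning: the agent starts at cell 0 and is rewarded only on
# reaching cell 5. It learns a value q[(state, action)] for each move.

N_STATES = 6                  # cells 0..5; the reward sits at cell 5
ACTIONS = (-1, 1)             # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def greedy(state):
    # best-known action, breaking ties at random
    best = max(q[(state, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # mostly exploit what is known; occasionally explore at random
        action = rng.choice(ACTIONS) if rng.random() < EPSILON else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # nudge the value of (state, action) toward reward + discounted future
        target = reward + GAMMA * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = next_state

policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)  # the learned policy: move right, toward the reward
```

The reward is the only signal the agent ever receives; everything else it works out by trial and error, which is exactly the dynamic the doggy-treat analogy captures.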
Superintelligence & shoggoths
Superintelligence is the term for machines that would vastly outstrip our own mental capabilities. This goes beyond “artificial general intelligence” to describe an entity with abilities that the world’s most gifted human minds could not match, or perhaps even imagine. Since we are currently the world’s most intelligent species, and use our brains to control the world, it raises the question of what happens if we were to create something far smarter than us.
A dark possibility is the “shoggoth with a smiley face”: a nightmarish, Lovecraftian creature that some have proposed could represent AI’s true nature as it approaches superintelligence. To us, it presents a congenial, happy AI – but hidden deep inside is a monster, with alien desires and intentions totally unlike ours.
Training data
Analysing training data is how an AI learns before it can make predictions – so what’s in the dataset, whether it is biased, and how big it is all matter. The training data used to create OpenAI’s GPT-3 was an enormous 45TB of text data from various sources, including Wikipedia and books. If you ask ChatGPT how big that is, it estimates around nine billion documents.
Unsupervised learning
Unsupervised learning is a type of machine learning where an AI learns from unlabelled training data without any explicit guidance from human designers. As BBC News explains in this visual guide to AI, you can teach an AI to recognise cars by showing it a dataset with images labelled “car”. But to do so unsupervised, you’d allow it to form its own concept of what a car is, by building connections and associations itself. This hands-off approach, perhaps counterintuitively, leads to so-called “deep learning” and potentially more knowledgeable and accurate AIs.
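A minimal sketch of the hands-off approach, assuming nothing beyond the Python standard library: k-means clustering, a classic unsupervised algorithm, is handed unlabelled 2-D points and discovers the grouping entirely by itself.

```python
import random

# k-means clustering in pure Python: no labels, no human guidance.
# The algorithm alternates between assigning each point to its nearest
# cluster centre and moving each centre to the mean of its cluster.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centres = rng.sample(points, k)          # start from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centres[c][0]) ** 2 + (y - centres[c][1]) ** 2)
            clusters[i].append((x, y))
        # update step: move each centre to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centres[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centres, clusters

# Two unlabelled blobs of points: the algorithm finds them on its own
points = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4),
          (5.1, 5.0), (5.3, 4.8), (4.9, 5.2)]
centres, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: one cluster per blob
```

No point was ever labelled, yet the two groups fall out of the data – the same principle, scaled up enormously, that lets an unsupervised model form its own concept of “car”.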
Voice cloning
Given only a minute of a person speaking, some AI tools can now quickly put together a “voice clone” that sounds remarkably similar. Here the BBC investigated the impact that voice cloning could have on society – from scams to the 2024 US election.
Weak AI
It used to be the case that researchers would build AI that could play single games, like chess, by training it with specific rules and heuristics. An example would be IBM’s Deep Blue, a so-called “expert system”. Many AIs like this can be extremely good at one task, but poor at anything else: this is “weak” AI.
However, this is changing fast. More recently, AIs like DeepMind’s MuZero have been released with the ability to teach themselves to master chess, Go, shogi and 42 Atari games without knowing the rules. Another of DeepMind’s models, called Gato, can “play Atari, caption images, chat, stack blocks with a real robot arm and much more”. Researchers have also shown that ChatGPT can pass various exams that students take at law, medical and business school (although not always with flying colours).
Such flexibility has raised the question of how close we are to the kind of “strong” AI that is indistinguishable from the abilities of the human mind (see “Artificial General Intelligence”).
X-risk
Could AI bring about the end of humanity? Some researchers and technologists believe AI has become an “existential risk”, alongside nuclear weapons and bioengineered pathogens, so its continued development should be regulated, curtailed or even stopped. What was a fringe concern a decade ago has now entered the mainstream, as various senior researchers and intellectuals have joined the fray.
It’s important to note that there are differences of opinion within this amorphous group – not all are total doomists, and not all outside this group are Silicon Valley cheerleaders. What unites most of them is the idea that, even if there’s only a small chance that AI supplants our own species, we should devote more resources to preventing that happening. There are some researchers and ethicists, however, who believe such claims are too uncertain and possibly exaggerated, serving to support the interests of technology companies.
YOLO
YOLO – which stands for “You Only Look Once” – is an object detection algorithm that is widely used by AI image recognition tools because of how fast it works. (Its creator, Joseph Redmon of the University of Washington, is also known for his rather esoteric CV design.)
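This is not YOLO itself – the full algorithm divides an image into a grid and predicts boxes and class probabilities in a single pass – but one core building block of any object detector is scoring a predicted bounding box against the true one, measured as “intersection over union” (IoU). A minimal sketch, with boxes as (x1, y1, x2, y2) corner coordinates:

```python
# Intersection over union: the overlap area of two boxes divided by the
# area of their combined footprint. 1.0 is a perfect match, 0.0 no overlap.

def iou(a, b):
    # corners of the overlapping rectangle (if any)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

Detectors like YOLO use scores of this kind both to judge predictions during training and to discard duplicate, overlapping boxes at detection time.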
Zero-shot learning
When an AI delivers a zero-shot answer, that means it is responding to a concept or object it has never encountered before.
So, as a simple example, if an AI designed to recognise images of animals has been trained on images of cats and dogs, you’d assume it’d struggle with horses or elephants. But through zero-shot learning, it can use what it knows about horses semantically – such as their number of legs or lack of wings – to compare those attributes with the animals it has been trained on.
The rough human equivalent would be an “educated guess”. AIs are getting better and better at zero-shot learning, but as with any inference, it can be wrong.
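A toy sketch of that “educated guess”, with entirely invented attributes: every class – including ones never seen in training – is described by a semantic attribute vector, so a new observation can be matched to “horse” without a single horse training example.

```python
# Attribute-based zero-shot classification. The model only ever "saw" cats
# and dogs, but each class has a semantic description, so unseen classes
# can still be identified. All attributes and values here are invented.

# description: (legs, has_wings, typical_weight_kg, rideable)
class_attributes = {
    "cat":   (4, 0, 5,   0),
    "dog":   (4, 0, 20,  0),
    "horse": (4, 0, 450, 1),   # unseen class, known only by description
    "eagle": (2, 1, 5,   0),   # unseen class, known only by description
}

def classify(observation):
    """Match an observed attribute vector to the closest class description."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_attributes, key=lambda c: dist(class_attributes[c], observation))

# A large, rideable, four-legged, wingless animal is observed:
print(classify((4, 0, 430, 1)))  # "horse", despite no horse training data
```

Like any educated guess, the match can be wrong – a zebra observation would also land on “horse” here, because nothing in the attribute list distinguishes them.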
This three-part article is based on an excellent post on the BBC website.
This final article in the series covered AI terms O-Z.
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” Eliezer Yudkowsky
About the Author
Stephen John Leonard is the founder of ADEPT Decisions and has held a wide range of roles in the banking and risk industry since 1985.