A-Z of Artificial Intelligence, Part 2

Introduction

This three-part article covers the thirty terms that the BBC has identified as essential to understanding artificial intelligence (AI). Much of the series is based on an excellent article I read on the BBC website.

Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.

Imagine going back in time to the 1970s, and trying to explain to somebody what it means “to google”, what a “URL” is, or why it’s good to have “fibre-optic broadband”. You’d probably struggle.

For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.

That’s no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose.

“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.” ~ Gray Scott.

A-Z of AI

Foundation Models

This is another term for the new generation of AIs that have emerged over the past year or two, which are capable of a range of skills: writing essays, drafting code, drawing art or composing music. Whereas past AIs were task-specific – often very good at one thing (see “Weak AI”) – a foundation model has the creative ability to apply the information it has learnt in one domain to another, a bit like how learning to drive a car prepares you to drive a bus.

Anyone who has played around with the art or text that these models can produce will know just how proficient they have become. However, as with any world-changing technology, there are questions about the potential risks and downsides, such as their factual inaccuracies (see “Hallucination”) and hidden biases (see “Bias”), as well as the fact that they are controlled by a small group of private technology companies.

In April, the UK government announced plans for a Foundation Model Taskforce, which seeks to “develop the safe and reliable use” of the technology.

Ghosts  

We may be entering an era when people can gain a form of digital immortality – living on after their deaths as AI “ghosts”. The first wave appears to be artists and celebrities – holograms of Elvis performing at concerts, or Hollywood actors like Tom Hanks, who has said he expects to appear in movies after his death.

However, this development raises a number of thorny ethical questions: who owns the digital rights to a person after they are gone? What if the AI version of you exists against your wishes? And is it OK to “bring people back from the dead”?

Hallucination

Sometimes if you ask an AI like ChatGPT, Bard or Bing a question, it will respond with great confidence – but the facts it spits out will be false. This is known as a hallucination.

One high-profile example emerged recently when students who had used AI chatbots to help them write coursework essays were caught out after ChatGPT “hallucinated” made-up references as the sources for the information it had provided.

It happens because of the way generative AI works. It is not turning to a database to look up fixed factual information, but is instead making predictions based on the information on which it was trained. Often its guesses are good – in the right ballpark – but that is all the more reason why AI designers want to stamp out hallucinations. The worry is that if an AI delivers its false answers confidently, with the ring of truth, people may accept them – a development that would only deepen the age of misinformation we live in.
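To make the “predicting, not looking up” point concrete, here is a deliberately tiny Python sketch (nothing like a real chatbot) in which the “model” only learns which word tends to follow which in a few invented training sentences, and then happily continues a prompt it has never actually seen:

```python
# A toy next-word predictor. It counts which word follows which in a tiny,
# invented training text; it has no database of facts to check against.
from collections import defaultdict, Counter
import random

training_text = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Learn simple next-word statistics from the training text
next_words = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    next_words[current][nxt] += 1

def generate(prompt, length=2):
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        # Pick a continuation in proportion to how often it followed in training
        words.append(random.choices(list(candidates),
                                    weights=list(candidates.values()))[0])
    return " ".join(words)

# Portugal never appears in the training text, yet the model still fluently
# "predicts" a capital city by following familiar patterns: a miniature
# hallucination.
print(generate("the capital of portugal is"))
```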

Instrumental Convergence

Imagine an AI whose number one priority is to make as many paperclips as possible. If that AI were superintelligent and misaligned with human values, it might reason that if it were ever switched off it would fail in its goal… and so it would resist any attempt to switch it off. In one very dark scenario, it might even decide that the atoms inside human beings could be repurposed into paperclips, and so do everything within its power to harvest those materials.

This is the Paperclip Maximiser thought experiment, and it’s an example of the so-called “instrumental convergence thesis”. Roughly, this proposes that superintelligent machines would develop basic drives, such as seeking to ensure their own self-preservation, or reasoning that extra resources, tools and cognitive ability would help them achieve their goals. This means that even if an AI were given an apparently benign priority – like making paperclips – it could lead to unexpectedly harmful consequences.

Researchers and technologists who buy into these fears argue that we need to ensure superintelligent AIs have goals that are carefully and safely aligned with our needs and values, that we should be mindful of emergent behaviour, and that such systems should therefore be prevented from acquiring too much power.

Jailbreak

After notorious cases of AI going rogue, designers have placed content restrictions on what AIs can spit out. Ask an AI to describe how to do something illegal or unethical, and it will refuse. However, it’s possible to “jailbreak” them – that is, to bypass those safeguards using creative language, hypothetical scenarios, and trickery.

Wired magazine recently reported on one example, where a researcher managed to get various conversational AIs to reveal how to hotwire a car. Rather than ask directly, the researcher got the AIs he tested to imagine a word game involving two characters called Tom and Jerry, each talking about cars or wires. Despite the safeguards, the hotwiring procedure snuck out. The researcher found the same jailbreak trick could also unlock instructions for making the drug methamphetamine.

Knowledge Graph

Knowledge graphs, also known as semantic networks, are a way of representing knowledge as a network, so that machines can understand how concepts are related. For example, at the most basic level, a cat would be linked more strongly to a dog than to a bald eagle in such a graph, because cats and dogs are both domesticated mammals with fur and four legs. Advanced AI builds a far more sophisticated network of connections, based on all sorts of relationships, traits and attributes between concepts, across terabytes of training data (see “Training Data”).
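As a rough illustration only (not how any particular AI system actually implements it), the short Python sketch below stores knowledge as simple subject-relation-object triples; all of the concepts and facts are made up for the example:

```python
# A minimal, illustrative knowledge graph: concepts are nodes, and labelled
# edges record how they relate. All facts below are invented for the example.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # concept -> list of (relation, related_concept) edges
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        """Store a triple such as ('cat', 'is_a', 'mammal')."""
        self.edges[subject].append((relation, obj))

    def related(self, subject):
        """Return everything directly connected to a concept."""
        return self.edges[subject]

kg = KnowledgeGraph()
kg.add_fact("cat", "is_a", "mammal")
kg.add_fact("dog", "is_a", "mammal")
kg.add_fact("bald eagle", "is_a", "bird")
kg.add_fact("cat", "kept_as", "pet")
kg.add_fact("dog", "kept_as", "pet")

# A cat and a dog share more connections ("mammal", "pet") than a cat and a
# bald eagle do, which is what links them more strongly in the graph.
print(kg.related("cat"))   # [('is_a', 'mammal'), ('kept_as', 'pet')]
```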

Large Language Models (LLMs)

Perhaps the most direct way to define a large language model is to ask one to describe itself.

Here’s what OpenAI’s ChatGPT had to say when asked:

“A large language model is an advanced artificial intelligence system designed to understand and generate human-like language,” it writes. “It utilises a deep neural network architecture with millions or even billions of parameters, enabling it to learn intricate patterns, grammar, and semantics from vast amounts of textual data.”

Quite a technical answer, perhaps. Bard, by Google, was a little clearer:

“A large language model is a type of artificial intelligence that is trained on a massive dataset of text and code. This allows LLMs to understand and generate human-like text, translate languages, write different kinds of creative content, and answer your questions in an informative way.”

LLMs are still under development, says Bard (of itself), but “they have the potential to revolutionise the way we interact with computers. In the future, LLMs could be used to create AI assistants that can help us with a variety of tasks, from authoring our emails to booking our appointments. They could also be used to create new forms of entertainment, such as interactive novels or games.”

Model Collapse

To develop the most advanced AIs (aka “models”), researchers need to train them with vast datasets (see “Training Data”). Eventually, though, as AIs churn out more and more content, that material will start to feed back into the training data.

If mistakes are made, these could amplify over time, leading to what the Oxford University researcher Ilia Shumailov calls “model collapse”. This is “a degenerative process whereby, over time, models forget”, Shumailov told The Atlantic recently. It can be thought of almost like senility.
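The feedback loop is easy to simulate in miniature. In the illustrative Python sketch below, a “model” is nothing more than a normal distribution fitted to its training data (a deliberate oversimplification, not Shumailov’s actual experiment), and each new generation is trained only on the previous generation’s output:

```python
# A toy simulation of the model-collapse feedback loop. A "model" here is
# just a normal distribution fitted to its training data; each generation
# is trained only on samples produced by the generation before it.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0 is trained on "real" data: mean 0, standard deviation 1
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 21):
    mean, std = data.mean(), data.std()     # "train" a model on current data
    data = rng.normal(mean, std, size=50)   # the next generation only ever
                                            # sees this model's own output
    print(f"generation {generation:2d}: mean={mean:+.2f}, std={std:.2f}")

# With finite samples, small estimation errors compound from generation to
# generation, so the fitted model gradually drifts away from the real data.
```

Running it shows the fitted mean and standard deviation gradually drifting as estimation errors compound, a crude picture of how training on model-made data can amplify mistakes over time.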

Neural Network

In the early days of AI research, machines were trained using logic and rules. The arrival of machine learning changed all that. Now the most advanced AIs learn for themselves. The evolution of this concept has led to “neural networks”, a type of machine learning that uses interconnected nodes, modelled loosely on the human brain.
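To give a flavour of what those interconnected nodes look like in practice, here is a toy example in Python: a single node that learns a simple AND rule by repeatedly nudging its connection weights. It is an illustrative sketch only, orders of magnitude simpler than any real neural network:

```python
# A toy, single-node "neural network": two weighted input connections and a
# bias, adjusted step by step until the node reproduces a simple AND rule.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # strengths of the two input connections
bias = 0.0
learning_rate = 0.5

def sigmoid(x):
    # Squashes any number into the range 0..1, a bit like a neuron "firing"
    return 1.0 / (1.0 + np.exp(-x))

# Training data: two inputs and the desired output (a simple AND rule)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

for _ in range(5000):
    predictions = sigmoid(X @ weights + bias)
    error = predictions - y
    # Nudge each connection weight in the direction that reduces the error
    weights -= learning_rate * (X.T @ error)
    bias -= learning_rate * error.sum()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 0, 0, 1]
```

Real systems stack millions or billions of these weighted connections into many layers, but the basic idea of adjusting connection strengths during training is the same.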

As AI has advanced rapidly, mainly in the hands of private companies, some researchers have raised concerns that competition between these companies could trigger a “race to the bottom” in terms of safety and wider societal impact.

 

Summary

This three-part article is based on an excellent post on the BBC website.

This second article in the series covered AI terms F-N. The final article will cover terms O-Z.

“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.” ~ Colin Angle

About the Author

Stephen John Leonard is the founder of ADEPT Decisions and has held a wide range of roles in the banking and risk industry since 1985.