The A-Z of Artificial Intelligence, Part 1

Introduction

This three-part article covers the thirty terms that the BBC has identified as essential for understanding artificial intelligence (AI). Much of this three-part series is based on an excellent article I read on the BBC website: https://www.bbc.com/future/article/20230717-what-you-should-know-about-artificial-intelligence-from-a-z

Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.

Imagine going back in time to the 1970s, and trying to explain to somebody what it means “to google”, what a “URL” is, or why it’s good to have “fibre-optic broadband”. You’d probably struggle.

For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.

That’s no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose.

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilisation a billion-fold.” – Ray Kurzweil


A-Z of AI

Artificial General Intelligence (AGI)

Most of the AIs developed to date have been “narrow” or “weak”. So, for example, an AI may be capable of crushing the world’s best chess player, but if you asked it how to cook an egg or author an essay, it’d fail. That’s quickly changing: AI can now teach itself to perform multiple tasks, raising the prospect that “artificial general intelligence” is on the horizon.

An AGI would be an AI with the same flexibility of thought as a human – and possibly even the consciousness too – plus the super-abilities of a digital mind. Companies such as OpenAI and DeepMind have made it clear that creating AGI is their goal. OpenAI argues that it would “elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge” and become a “great force multiplier for human ingenuity and creativity”.

However, some fear that going a step further – creating a superintelligence far smarter than human beings – could bring great dangers (see “Superintelligence” and “X-risk”).

Alignment

While we often focus on our individual differences, humanity shares many common values that bind our societies together, from the importance of family to the moral imperative not to murder. Certainly, there are exceptions, but they’re not the majority.

However, we’ve never had to share the Earth with a powerful non-human intelligence. How can we be sure AI’s values and priorities will align with our own?

This alignment problem underpins fears of an AI catastrophe: that a form of superintelligence emerges that cares little for the beliefs, attitudes and rules that underpin human societies. If we’re to have safe AI, ensuring it remains aligned with us will be crucial (see “X-Risk”).

In early July 2023, OpenAI – one of the companies developing advanced AI – announced plans for a “superalignment” programme, designed to ensure AI systems much smarter than humans follow human intent. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the company said.

Bias

For an AI to learn, it needs to learn from us. Unfortunately, humanity is hardly bias-free. If an AI acquires its abilities from a dataset that is skewed – for example, by race or gender – then it has the potential to spew out inaccurate, offensive stereotypes. And as we hand over more and more gatekeeping and decision-making to AI, many worry that machines could enact hidden prejudices, preventing some people from accessing certain services or knowledge. This discrimination would be obscured by supposed algorithmic impartiality.
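To make the idea concrete, here is a minimal Python sketch of one simple bias check, sometimes called demographic parity: comparing how often a model approves people from different groups. The decision records below are invented purely for illustration.

```python
# Hypothetical decision log: (group, approved) pairs. Invented data,
# purely to illustrate a group-level skew check.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Fraction of applicants in `group` that the model approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("group_a", "group_b"):
    print(group, approval_rate(decisions, group))
```

Here the rates are 0.75 versus 0.25; a gap that wide is one simple warning sign that a model may be reproducing a skew in its training data, though a single statistic like this is far from a complete fairness audit.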

In the worlds of AI ethics and safety, some researchers believe that bias – along with other near-term problems such as surveillance misuse – is a far more pressing problem than proposed future concerns such as extinction risk.

In response, some catastrophic risk researchers point out that the various dangers posed by AI are not necessarily mutually exclusive – for example, if rogue nations misused AI, it could suppress citizens’ rights and create catastrophic risks. However, there is strong disagreement about which dangers should be prioritised in terms of government regulation and oversight, and whose concerns should be heard.

Compute

Not a verb, but a noun. Compute refers to the computational resources – such as processing power – required to train AI. It can be quantified, making it a useful proxy for how quickly AI is advancing (as well as how costly and resource-intensive it is).

Since 2012, the amount of compute has doubled every 3.4 months, which means that, when OpenAI’s GPT-3 was trained in 2020, it required 600,000 times more computing power than one of the most cutting-edge machine learning systems from 2012. Opinions differ on how long this rapid rate of change can continue, and whether innovations in computing hardware can keep up: will it become a bottleneck?
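As a back-of-envelope illustration, the Python sketch below works through what a 3.4-month doubling period implies, and how many doublings a 600,000-fold increase in compute represents. Only the arithmetic is shown here; the underlying figures come from the paragraph above.

```python
import math

DOUBLING_MONTHS = 3.4  # reported doubling period for AI training compute

def growth_factor(months: float) -> float:
    """Total compute growth after `months` at the stated doubling rate."""
    return 2 ** (months / DOUBLING_MONTHS)

# One year of growth at this pace is roughly an 11.6x increase.
print(f"12 months -> {growth_factor(12):.1f}x")

# A 600,000x increase corresponds to about 19 doublings,
# or roughly 5.4 years at the 3.4-month rate.
doublings = math.log2(600_000)
print(f"600,000x is about {doublings:.1f} doublings "
      f"(~{doublings * DOUBLING_MONTHS / 12:.1f} years at this rate)")
```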

Diffusion Models

A few years ago, one of the dominant techniques for getting AI to create images was the generative adversarial network (GAN). These algorithms worked in opposition to each other – one network trained to produce images while the other checked its work against real examples, leading to continual improvement.

However, a new breed of machine learning model called the “diffusion model” has recently shown greater promise, often producing superior images. Essentially, these models acquire their intelligence by destroying their training data with added noise, and then learning to recover that data by reversing this process. They’re called diffusion models because this noise-based learning process echoes the way gas molecules diffuse.
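The Python sketch below illustrates the forward half of that process: data is repeatedly shrunk and mixed with Gaussian noise until little of the original signal remains. The noise schedule here is a toy one chosen for illustration, and the neural network that diffusion models train to reverse the corruption is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_step(x: np.ndarray, beta: float) -> np.ndarray:
    """One forward diffusion step: shrink the signal slightly, add Gaussian noise."""
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)

x = rng.standard_normal((8, 8))  # stand-in for a tiny 8x8 image

# Toy noise schedule; real diffusion models use hundreds or thousands of steps.
for beta in np.linspace(0.01, 0.2, 10):
    x = noise_step(x, beta)

# After enough steps, x is close to pure noise. A trained diffusion model
# learns to run this corruption in reverse, generating data from noise.
```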

Emergence & Explainability

Emergent behaviour describes what happens when an AI does something unanticipated, surprising and sudden, apparently beyond its creators’ intention or programming. As AI learning has become more opaque, building connections and patterns that even its makers can’t unpick, emergent behaviour becomes a more likely scenario.

The average person might assume that to understand an AI, you’d lift up the metaphorical hood and look at how it was trained. Modern AI is not so transparent; its workings are often hidden in a so-called “black box”. So, while its designers may know what training data they used, they have no idea how it formed the associations and predictions inside the box (see “Unsupervised Learning”).

That’s why researchers are now focused on improving the “explainability” (or “interpretability”) of AI – essentially making its internal workings more transparent and understandable to humans. This is particularly important as AI makes decisions in areas that affect people’s lives directly, such as law or medicine. If a hidden bias exists in the black box, we need to know.
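One widely used interpretability technique is permutation importance: shuffle a single input feature and measure how much the model’s accuracy falls; a big drop means the black box was leaning heavily on that feature. The Python sketch below demonstrates the idea on synthetic data (the dataset and the choice of a random-forest model are illustrative assumptions, not drawn from any particular study).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: the label depends almost entirely on feature 0.
X = rng.standard_normal((500, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    perm = rng.permutation(len(X_shuffled))
    X_shuffled[:, feature] = X_shuffled[perm, feature]  # break the feature-label link
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {feature}: accuracy drop {drop:.3f}")
```

Feature 0 should show by far the largest accuracy drop, exposing what the model actually relies on – exactly the kind of insight explainability research aims to provide at much larger scale.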

The worry is that if an AI delivers its false answers confidently with the ring of truth, people may accept them – a development that would only deepen the age of misinformation we live in.

Summary

This three-part article is based on an excellent post on the BBC website.

This first article in the series covered AI terms A-E. The two future articles will cover terms F-Z.

“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” – Larry Page

About the Author

Stephen John Leonard is the founder of ADEPT Decisions and has held a wide range of roles in the banking and risk industry since 1985.