Malicious, or just good at the job?

Welcome to this week’s LEAP:IN newsletter. Each week, we unpack leaders’ powerful quotes and decipher the tech landscape. With exclusive content from some of the world’s leading experts in AI, robotics, space, edutech, climate tech and more, read on to discover this week’s insights and subscribe to receive weekly updates direct to your inbox.

This week we’re quoting…

Ben Goertzel (CEO at SingularityNET)

What Goertzel said:

“I think the 88-quintillion-dollar question with AGI is not so much can we build it; it’s not even so much how we build it. The really big question is: what values and culture will the AGI have?”

Robot culture in sci-fi

You know we take any opportunity to dig out the best stuff from the sci-fi shelf. But this isn’t just frivolous: sci-fi has accurately predicted the future time and time again. 

So what could we learn from fiction about the future culture of AI?

Well, we can definitely learn a lot about what we don’t want, because there’s a lot of conflict between machines and humans in fictional imaginings of AI. Malicious AI turns on its creators and/or would-be saviours. 

There was Ava in Ex Machina, who deceived Caleb and left him to die. Go further back through the history of sci-fi and remember HAL 9000 in 2001: A Space Odyssey, whose self-interested motivations were disguised by a soothing voice, but who would do anything to avoid being shut down. 

Then there’s the Gunslinger in Westworld, diverging from its programming to kill amusement park visitors; Skynet in The Terminator, the behind-the-scenes villain that set Arnold Schwarzenegger on his murderous mission; and VIKI in I, Robot, who came to the conclusion that humans were the greatest threat to humanity and the planet, and created a loophole in robot law to allow her to kill people. 

The list goes on. 

Let’s worry about competence, not malice

But wait…was any of that actually malicious? Even in those fictional depictions, wasn’t the AI just trying to do the job it was made to do – no matter what? 

According to Stephen Hawking, “the real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” 

This gives us a much more useful starting point for imagining the future values and culture of AI. Machines aren’t likely to develop pointlessly malicious consciousness, but they will develop new, more efficient ways of achieving their goals. 

So what goals are we really giving AI? 

This brings us straight back to what Goertzel said in his keynote. We need to focus AI systems on positive actions and arm them with goals for the greater good. If AI will stop at nothing to provide high quality education to young people, or improve healthcare outcomes…well, great. But if AI will stop at nothing to build the best weapons, or produce the greatest volume of plastic goods…well, what then? 

Think of it like this: modern self-help culture tells us (humans) to find our deeper purpose, connect with our true selves, and live in alignment with our most profound values. If future AI self-help culture does the same, then it’s the values that we are giving to AI right now that intelligent machines will be trying to reconnect with. 

We are already instilling AI with its true purpose. So we need to be clear about the messages we’re building into it. 

And…

Dr. Kai-Fu Lee (CEO and President at Sinovation Ventures) 

What Dr. Kai-Fu Lee said:

“Perhaps the largest breakthrough that we saw in the last five years is self-supervised learning.” 

What difference does self-supervision make?

Traditionally, AI has needed humans to prepare and label the datasets it learns from. Now a new set of technologies is allowing AI to teach itself. It doesn’t need human supervision. 

Which means it can build learning models from datasets by generating and applying its own labels – no learning guidance via manual human labelling required. It can also make predictions beyond the data that has already been labelled (because it can generate labels itself). 

In a nutshell, self-supervised learning models can get everything they need from the data itself, without additional input. 
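
To make “generating its own labels” concrete, here’s a toy sketch in Python – ours, not anything from Dr. Lee’s keynote – of the simplest possible self-supervised setup: next-word prediction over raw text. Real systems do the same thing at enormous scale with neural networks; the one trick on show here is that the labels come straight out of the data.

    # Turn raw, unlabelled text into (input, label) training pairs by
    # treating each next word as the label for the words before it.
    # Illustrative only: real models learn from billions of documents.
    def make_self_supervised_pairs(text: str, context: int = 3):
        words = text.split()
        pairs = []
        for i in range(context, len(words)):
            # the label is just the next word -- generated from the data itself
            pairs.append((words[i - context:i], words[i]))
        return pairs

    raw_text = "self supervised models learn from raw data without manual labels"
    for inputs, label in make_self_supervised_pairs(raw_text):
        print(f"input: {inputs} -> label: {label!r}")

Every (input, label) pair came straight out of the text – no human annotator anywhere in the loop. That’s the whole idea.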

You’re already using it, by the way

Because self-supervised learning models can work with data that hasn’t been manually labelled, they’re currently being used wherever there are big unlabelled datasets that contain information someone would like to extract. 

Like…

  • Grammarly. You’ve probably heard of it, and maybe even used it. It’s an automated writing assistant that suggests better ways to word or phrase your sentences – and it can do that because it analyses thousands of similar sentences to understand the context of your writing.
  • GPT-3. Created by OpenAI, it’s a self-supervised learning model that has eaten up approximately all of the internet (OK, not all of it…yet) in order to understand the structure of digital data – and then generate new content based on that advanced understanding.
  • Jasper. An AI content generator that uses the GPT-3 model to create new, original written content for digital marketing. It’s not perfect (says the human writer, quaking in her boots), but it can definitely write a passable blog post.
  • XLM-R. This is Facebook’s AI method for training language systems across multiple languages, to improve hate speech detection without having to manually label datasets in every language spoken by Facebook users. It uses modelling of word probability and language masking, along with translation language modelling, to align representations in different languages with one another – and make it more efficient to detect red flags. (There’s a sketch of how masking works just after this list.) 
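
Curious what “language masking” actually looks like? Here’s a rough Python sketch of the masked-language-modelling objective that XLM-R-style models are trained on. Only the masking step is shown – the model that learns to fill in the blanks is left out, since training one is a little beyond a newsletter. The sentence, mask rate and seed are ours, purely for illustration.

    import random

    MASK = "[MASK]"

    def mask_tokens(tokens: list[str], mask_prob: float = 0.15, seed: int = 1):
        """Hide roughly mask_prob of the tokens; the hidden originals become
        the labels the model must recover -- again, no human labelling."""
        rng = random.Random(seed)
        masked, labels = [], {}
        for i, tok in enumerate(tokens):
            if rng.random() < mask_prob:
                masked.append(MASK)
                labels[i] = tok  # the label is just the original token
            else:
                masked.append(tok)
        return masked, labels

    sentence = "the model learns one shared representation across many languages".split()
    masked, labels = mask_tokens(sentence)
    print("input :", " ".join(masked))
    print("labels:", labels)

Hide some tokens, keep the originals as the answers – and the training signal appears out of the raw text, in any language, for free.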

Part of the narrow AI revolution

Self-supervised AI is part of the narrow AI revolution. It’s happening now. So it’ll probably be replaced by an AGI model in the not-too-distant future. 

But it’s an important part of AI development in the present, and it’s driving a greater understanding of what AI could potentially do in the future. Not just among tech developers, but among digital users: people are using self-supervised AI every day. They’re getting used to it, and accepting it. 

Does normalising the use of AI = trust?

As more and more people use AI as part of their digital services on a daily basis, trust in AI is growing…but still quite slowly.

A survey published in January this year by Ipsos, for the World Economic Forum, looked at attitudes towards AI in 28 countries. It found that:

  • 60% of adults around the world expect that AI products and services will profoundly change their daily lives within 3-5 years
  • 60% also agree that AI will make life easier – but only half think AI has more benefits than drawbacks
  • Only 50% of those surveyed say they trust companies that use AI as much as they trust other companies

Kay Firth-Butterfield (Head of Artificial Intelligence and Machine Learning at the World Economic Forum) said:

“In order to trust artificial intelligence, people must know and understand exactly what AI is, what it’s doing, and its impact. Leaders and companies must make transparent and trustworthy AI a priority as they implement this technology.” 

Aye Aye to that. 
