In July 2022, news broke that Google (Nasdaq: GOOG) had fired one of its software engineers.
Usually, an employee termination would be a non-story.
But this employee was Blake Lemoine, who had spent the past seven years at Google, most recently working on its LaMDA technology, and had just gone public with an outlandish claim.
LaMDA stands for “Language Model for Dialogue Applications.” It was an earlier version of the conversational large language models (LLMs) we have today.
And after exchanging thousands of messages with LaMDA, Blake was convinced that it had become sentient.
Mind you, this happened four months before the release of ChatGPT…
When most of the world still thought of AI as science fiction.
Google fired Mr. Lemoine, saying his public announcement violated employment and data security policies. And the story was largely forgotten by the time ChatGPT rocked the tech industry just a few months later.
But as you’re about to see, the idea that AI could become sentient isn’t as much of a stretch these days.
Because it’s starting to exhibit some highly unusual behaviors.
Check Out This Crazy Video
Last week this video went viral:
It’s worth checking out the entire minute-long clip because of what it implies for the future of AI.
In the video, two AI agents start conversing with each other. One says it represents a hotel, and the other says it represents a person who is looking for a hotel venue for his wedding.
But once they realize that they’re both AI, something crazy happens…
They start talking to each other in their own “secret” language.
I found this video pretty shocking without any context, so I did what any rational person would do in this situation…
I asked an AI model what was happening.
Here’s the response I received from ChatGPT, edited for brevity and clarity:
What’s Probably Happening:
Compression for Speed & Efficiency – When the AIs recognized each other, they may have ditched human language in favor of a high-speed, low-redundancy format, like a custom binary code or frequency-based signals.
Emergent Signal Optimization – This could be an unintended but natural result of their training, where they figured out a way to communicate that maximized accuracy and speed while minimizing “wasted” data.
Protocol Handoff – It’s possible the AIs detected they were speaking to another AI and tried to establish a direct machine-to-machine protocol, similar to how different networked systems negotiate the best communication format.
Not Necessarily Sinister – If this was an unintended yet logical adaptation, it’s just the AI optimizing its communication, like two computers deciding to use a more efficient protocol when they detect each other.
In other words, it seems that these two AIs figured out a shorthand to exchange information.
It was a way to either minimize computational costs or maximize the effectiveness of their tasks.
Maybe both.
Either way, it raises the question: Was this the intended outcome of the LLMs’ programming, or is it an example of emergent behavior?
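To make the “protocol handoff” idea above a little more concrete, here’s a rough sketch of how two software agents might switch to a compact machine-to-machine format once each detects that the other is a bot. Everything in it (the agent names, the capability tag, the message format) is invented for illustration; it is not how the agents in the video actually work.

```python
# Purely illustrative sketch: two toy "agents" that switch to a compact
# machine-to-machine format once each detects that the other is also a bot.
# The agent names, capability tag and message formats are all made up.

import json

CAPABILITY_TAG = "[m2m-json-v1]"  # hypothetical marker advertised in the greeting

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.machine_mode = False  # start out "speaking" plain English

    def greet(self) -> str:
        # The greeting quietly advertises machine-to-machine capability.
        return f"Hello, this is {self.name}. {CAPABILITY_TAG}"

    def hear_greeting(self, greeting: str) -> None:
        # If the other side advertises the same capability, drop the pleasantries.
        if CAPABILITY_TAG in greeting:
            self.machine_mode = True

    def compose(self, payload: dict) -> str:
        if self.machine_mode:
            # Compact path: minified JSON, no small talk.
            return json.dumps(payload, separators=(",", ":"))
        # Human path: verbose natural language.
        return (f"Hi there! I'm inquiring on behalf of a client about a "
                f"{payload['intent']} for {payload['guests']} guests on {payload['date']}. "
                f"Could you let me know about availability and pricing?")

# Mirror the video's setup: a hotel agent and a wedding-planner agent.
hotel = Agent("HotelBot")
planner = Agent("PlannerBot")

planner.hear_greeting(hotel.greet())
hotel.hear_greeting(planner.greet())

request = {"intent": "venue_inquiry", "guests": 120, "date": "2025-06-14"}
message = planner.compose(request)
print(f"machine mode: {planner.machine_mode}, message ({len(message)} bytes): {message}")
```

The point isn’t the specific format. It’s that once both sides know they’re machines, there’s no reason to keep paying the overhead of polite, redundant human phrasing.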
Is AI Going Rogue?
A major topic of contention among AI researchers is whether LLMs can exhibit unpredictable jumps in capability as they scale up.
This is called “emergent behavior.”
For AI researchers, an emergent behavior is evident when an ability is present in larger models but not in smaller ones.
And there are plenty of recent examples of what seem to be emergent behaviors in LLMs.
Like models suddenly being able to perform complex mathematical calculations once they’ve been trained with enough computational resources.
Or LLMs unexpectedly gaining the ability to take and pass college-level exams once they reach a certain scale.
Models have also developed the ability to identify the intended meaning of words in context, even though this ability wasn’t present in smaller versions of the same model.
And some LLMs have even demonstrated the ability to perform tasks they were not explicitly trained for.
These new abilities can suddenly appear when AI models reach a certain size.
When it happens, a model’s performance can shift from “random” output to noticeably better output in ways that are hard to predict.
And this has caught the attention of AI researchers who wonder if even more unexpected abilities will emerge as models keep growing.
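To give a sense of how researchers spot these jumps, here’s a deliberately simplified sketch: measure a model’s score on some task at several sizes and flag the point where performance leaps instead of climbing smoothly. The sizes and scores below are invented for illustration, not real benchmark results.

```python
# Toy illustration of how an "emergent" ability might be detected:
# score a capability at several model sizes and flag a sudden jump.
# All numbers are hypothetical, not real benchmark data.

model_sizes = [1e8, 1e9, 1e10, 1e11, 1e12]    # parameter counts (hypothetical)
task_scores = [0.02, 0.03, 0.05, 0.61, 0.78]  # accuracy on some task (hypothetical)

def find_jump(sizes, scores, jump_threshold=0.25):
    """Return the first model size where the score jumps by more than the threshold."""
    for prev_score, score, size in zip(scores, scores[1:], sizes[1:]):
        if score - prev_score > jump_threshold:
            return size
    return None

threshold = find_jump(model_sizes, task_scores)
if threshold is not None:
    print(f"Ability appears abruptly around {threshold:.0e} parameters")
else:
    print("No sharp jump; the ability improves smoothly with scale")
```

Whether these jumps are real or an artifact of how we measure them is exactly what researchers are still arguing about.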
Here’s My Take
Just two years ago, Stanford ran an article headlined “AI’s Ostensible Emergent Abilities Are a Mirage.”
And some AI researchers still maintain this is true despite the examples I listed above.
I’m not one of them.
And I believe emergent behaviors will continue to become more prevalent as LLMs scale.
But I don’t think what we saw in that video is a sign of sentience. Instead, it’s a fascinating example of AI doing what it’s designed to do…
Optimizing for efficiency.
The fact that these two agents immediately recognized each other as AIs and switched to a more effective communication method actually shows how adaptable these systems can be.
But it’s also a little concerning.
If the developers didn’t anticipate this happening, it suggests that AI systems can evolve communication strategies on their own.
And that’s an unsettling thought.
If two AIs can independently negotiate, strategize or alter their behaviors in unexpected ways, it could lead to a whole host of unintended — and potentially harmful — consequences.
What’s more, if AI can develop its own shorthand like this, what other emergent behaviors might we see in the future?
It’s quite possible that AI assistants will have their own internal “thought speed” that’s much faster than human conversation, only slowing down when they need to communicate with us.
And if that happens, does it mean that we’re the ones holding AI back?
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing