So, is it time to panic and start preparing for an apocalypse at the hands of machines?
Probably not. While some great minds – including Stephen Hawking – are concerned that one day AI could threaten humanity, the Facebook story is nothing to be worried about.
Where did the story come from?
Way back in June, Facebook published a blog post about interesting research on chatbot programs – which have short, text-based conversations with humans or other bots. The story was covered by a number of news outlets at the time.
Facebook had been experimenting with bots that negotiated with each other over the ownership of virtual items.
The aim was to understand how language shaped the way such discussions played out for the negotiating parties – and, crucially, the bots were programmed to experiment with language to see how that affected their dominance in the discussion.
One exchange between the bots read:

Alice: “Balls have zero to me to me to me to me to me to me to me to me to”
Although some reports insinuated that the bots had at this point invented a new language in order to elude their human masters, a better explanation is that the neural networks had simply modified human language for the purposes of more efficient interaction.
As technology news site Gizmodo said: “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand – but while it might look creepy, that’s all it was.”
AIs that rework English as we know it in order to better compute a task are not new.
Google reported that its translation software had done this during development. “The network must be encoding something about the semantics of the sentence,” Google said in a blog post.
And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle difficult problems more effectively.
But Facebook’s system was being used for research, not public-facing applications, and it was shut down because it was doing something the team wasn’t interested in studying – not because the researchers thought they had stumbled on an existential threat to mankind.
It’s important to remember, too, that chatbots in general are very difficult to develop.
In fact, Facebook recently decided to limit the rollout of its Messenger chatbot platform after it found many of the bots on it were unable to address 70% of users’ queries.
Chatbots can, of course, be programmed to seem very humanlike and may even dupe us in certain situations – but it’s quite a stretch to think they are also capable of plotting a rebellion.