Update: This article has been amended to stress that the experiment was abandoned because the programs were not doing the work required, not because they were afraid of the results, as has been reported elsewhere.
Hardly fake news when the idea is that an AI was getting different results than expected, outside its programmed path, by doing things on its own. I've linked this specific article because it is the correct story.
AI development is starting to resemble sci-fi, what with this little experiment and Google's own translation AI as examples of how far things have come.
Using an intermediate 'universal' language step was always a goal for translation; at least that's what I was taught back in '94 in a course on natural language parsing for AI. Of course back then, as the article states, we were still trying to find a universal structure, while now faster computers have made it possible to brute-force match countless documents. Yet it seems it's still easily 'gamed'.
I don't see anything going outside their programmed path, though. It's pattern recognition with access to the vast memory of all digitized knowledge. Impressive engineering, but there's no thinking involved on the AI's part. Going outside its programming would mean answering a question instead of simply translating it.
I was playing around with it a bit, and it still struggles a lot with words that have multiple meanings, failing to choose the right translation based on context, and sometimes it comes up with things that don't make sense. I doubt second-order context (based on story events) will work at all if it can't even get single sentences right.
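To illustrate why multiple meanings trip up translation without context, here's a deliberately naive toy sketch (not Google's system, and the tiny English-to-Dutch lexicon is made up for this example): a word-by-word translator that has no way to pick the right sense of a polysemous word.

```python
# Toy illustration only: a hypothetical English->Dutch lexicon where
# polysemous words list multiple candidate senses. A word-by-word
# translator that ignores context can only guess which sense applies.
lexicon = {
    "bank": ["bank", "oever"],   # financial institution vs. river bank
    "the": ["de"],
    "river": ["rivier"],
}

def naive_translate(sentence):
    # Always picks the first listed sense -- surrounding words are ignored,
    # so "river bank" gets the financial-institution sense of "bank".
    return " ".join(lexicon.get(word, word)[0]
                    for word in sentence.lower().split())

print(naive_translate("the river bank"))  # -> "de rivier bank" (wrong sense)
```

Real neural systems do far better than this by conditioning on the whole sentence, but as the comment above notes, they still pick the wrong sense often enough that context clearly isn't fully solved.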
It will be a long time before AI can translate sayings, songs, poetry, humor, etc. and make them sound right in another language. There is no understanding of language on display here at all.