Scientists, ethicists and science fiction authors have long worried about AI systems developing skills their programmers never intended. Recent interviews with Google executives may be adding to those concerns.
James Manyika, Google’s SVP of Technology and Society, spoke on CBS’s 60 Minutes on April 16th about how one AI system taught itself Bengali despite not being trained in the language. He said that, given only a few prompts in Bengali, the system could translate the language.
Sundar Pichai, CEO of Google, confirmed that AI systems still have elements that surprise experts. "There's an aspect to this that we all call a black box. You don't understand. You can't tell exactly why it said that." Pichai stated the company had "some ideas" as to why this might be the case but needed more research to fully understand how it functions.
CBS's Scott Pelley questioned why the system was being opened to the public when its developers didn't understand it. Pichai replied: "I do not think that we understand the human mind either."
AI development also has flaws, which can enable fakes, deepfakes and weaponization. The industry also contends with what it calls "hallucinations": answers a model delivers with confidence even though they are false.

Asked whether such errors were anticipated, Pichai replied, "Yes, that's to be expected." Nobody in the field has solved the problem of hallucinations; Pichai stated that "all models have this issue". The solution, he said, was to develop a "more robust safety layer before we build and before we deploy more powerful models".
Pichai is a long-time advocate of global AI regulation. Twitter CEO Elon Musk, among others, has called for a pause in the development of new, more powerful AI models. Chinese legislators have already established new rules, while the regulatory process in Europe and the US remains in its infancy.