Last week, an “open letter” signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and representatives from various fields (including robotics, machine learning, and computer science) urged a 6-month pause on “giant AI experiments,” arguing that they represent a risk to humanity. Since then, I’ve been following some specialists’ opinions, and I invite you to join me in a reflection on this scenario.

The Open Letter

“Pause Giant AI Experiments: An Open Letter,” which currently has almost 6k signatures, asks, as an urgent matter, that artificial intelligence laboratories pause some projects.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the highlight in the header. It warns of “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” And it also paints an “apocalyptic” future: “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

What Is the Real “Weight” of This Letter?

At first, it’s easy to sympathize with the cause, but let’s reflect on all the global contexts involved.
Despite being endorsed by leading technology authorities, such as Google and Meta engineers, the letter has generated controversy because some signatories have been inconsistent in their own practices regarding safety limits on their technologies, including Elon Musk. Musk himself fired Twitter’s “Ethical AI” team last year, as reported by Wired, Futurism, and many other news outlets at the time. It’s worth mentioning that Musk, who co-founded OpenAI and left the company in 2018, has repeatedly attacked it on Twitter with scathing criticism of ChatGPT’s advances.