Should we risk loss of control of our civilization?
Posted: Sun Jan 05, 2025 10:38 am
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the highlight in the header. It warns of “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” It also paints an “apocalyptic” future: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

What Is the Real “Weight” of This Letter?

At first, it’s easy to sympathize with the cause, but let’s reflect on the global context involved.
Despite being endorsed by leading technology authorities, such as Google and Meta engineers, the letter has generated controversy because some signatories, including Elon Musk, have been inconsistent in their own practices regarding the safety limits of their technologies. Musk himself fired his “Ethical AI” team last year, as reported by Wired, Futurism, and many other news outlets at the time. It’s worth mentioning that Musk, who co-founded OpenAI and left the company in 2018, has repeatedly attacked it on Twitter with scathing criticism of ChatGPT’s advances.
Sam Altman, co-founder of OpenAI, in a conversation with podcaster Lex Fridman, asserts that concerns around AGI experiments are legitimate and acknowledges that risks such as misinformation are real. In an interview with the WSJ, Altman also says the company has long been concerned with the safety of its technologies and spent more than six months testing the tool before its release.

What Are Its Practical Effects?

Andrew Ng, founder and CEO of Landing AI, founder of DeepLearning.