AI could benefit society, but it could also become a monster. To guide the way, we need leadership and understanding
We’re at a Frankenstein moment.
An artificial intelligence boom is taking over Silicon Valley, with hi-tech firms racing to develop everything from self-driving cars to chatbots capable of writing poetry.
Yet AI could also spread conspiracy theories and lies even more quickly than the internet already does – fueling political polarization, hate, violence and mental illness in young people. It could undermine national security with deepfakes.
In recent weeks, members of Congress have sounded the alarm over the dangers of AI, but no bill has been proposed to protect individuals or to halt the development of AI's most threatening applications.
Most lawmakers don’t even know what AI is, according to Representative Jay Obernolte, the only member of Congress with a master’s degree in artificial intelligence.
What to do?
Many tech executives claim they can simultaneously look out for their company’s interests and for society’s. Rubbish. Why should we assume that their profit motives align perfectly with the public’s needs?
Sam Altman – the CEO of OpenAI, the company responsible for some of the most mind-blowing recent advances in AI – believes no company, including his, should be trusted to solve these problems. The boundaries of AI should be decided, he says, not by “Microsoft or OpenAI, but society, governments, something like that”.