9 Problems with Generative AI
In the rapidly evolving landscape of artificial intelligence, generative AI tools are demonstrating incredible potential. However, their capacity for harm is also becoming increasingly apparent.
Together with our partner VERSES, we have visualized some concerns regarding generative AI tools using data from a variety of sources. Many of these concerns fall into one of three categories: quality control and data accuracy, ethical considerations, or technical challenges, with a certain degree of overlap between them.
Let’s dive into it.
Bias In, Bias Out
Theme: Quality Control & Accuracy
One of the critical issues with generative AI lies in its tendency to reproduce biases present in the data it has been trained on. Rather than mitigating these biases, such tools often magnify or perpetuate them, undermining the accuracy of their outputs and raising broader ethical concerns.
The Black Box Problem
Theme: Ethical & Legal Considerations
Another significant hurdle in embracing generative AI is the lack of transparency in its decision-making. Because their internal processes are often uninterpretable, these AI systems struggle to explain their outputs, especially when errors occur on critical matters.
It’s worth noting that this is a broader problem with AI systems and not just generative tools.
High Cost to Train and Maintain
Theme: Complexity & Technical Challenges
Training generative AI models such as the large language models (LLMs) behind ChatGPT is extremely expensive, with costs often reaching millions of dollars due to the computational power and infrastructure required. For instance, OpenAI's now-former CEO Sam Altman confirmed that GPT-4 cost a whopping $100 million to train.