Last fall, as Microsoft was rushing to stuff AI tech into its Bing search engine, its partner OpenAI warned the tech giant about the dangers of integrating its GPT-4 large language model (LLM) early, without further training, the Wall Street Journal reports.
Despite OpenAI cautioning Microsoft that jumping the gun and integrating an unreleased version of the LLM could lead to lies and nonsense responses, Microsoft went ahead and launched the tech anyway in early February.
The result is well-documented: a barrage of "hallucinations" that undermined the tech's usefulness as a search assistant and damaged its perception in the public eye.
The incident underscores that these consequences were almost entirely predictable and avoidable — yet Microsoft ignored OpenAI's warnings, likely in a bid to position itself better during the early days of the burgeoning AI wars.
While Microsoft and OpenAI have made at least some headway in addressing the hallucination issue, Bing’s AI assistant still spits out plenty of fabrications. Worse yet, Bing’s AI has even started citing hallucinations from other AI chatbots, like Google’s Bard.
Meanwhile, OpenAI's own chatbot ChatGPT has left Bing in the dust, amassing 200 million monthly users, according to the WSJ.
Microsoft spent billions to get early access, inextricably tying itself to the successes and failures of OpenAI. Yet despite its sizable investment, anonymous sources told the WSJ that employees are still being shut out of the inner workings of OpenAI’s tech.