How will OpenAI’s Whisper model impact AI applications?


Last week, OpenAI released Whisper, an open-source deep learning model for speech recognition. OpenAI’s tests on Whisper show promising results in transcribing audio not only in English, but also in several other languages.

Developers and researchers who have experimented with Whisper are also impressed with what the model can do. Equally important, however, is what Whisper's release tells us about the shifting culture of artificial intelligence (AI) research and the kinds of applications we can expect in the future.

A return to openness?

OpenAI has been widely criticized for not open-sourcing its models. GPT-3 and DALL-E, two of its most impressive deep learning models, are available only behind paid API services, with no way to download and examine them.

In contrast, Whisper was released as a pretrained, open-source model that anyone can download and run on a computing platform of their choice. Its release continues a trend toward greater openness among commercial AI research labs over the past few months.
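
To give a sense of how accessible this makes the model, here is a minimal sketch of transcribing an audio file with OpenAI's open-source whisper package (installed via pip as openai-whisper); the model size ("base") and the file name are placeholder choices, not part of the original announcement:

```python
import whisper

# Download and load a pretrained checkpoint; smaller sizes like "base"
# run comfortably on a laptop, larger ones trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a local audio file (replace with your own recording).
# Whisper detects the spoken language automatically unless one is specified.
result = model.transcribe("audio.mp3")

print(result["text"])
```

Because the weights ship with the package, the whole pipeline runs locally, with no API key or paid service required.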