Not to be outdone by Meta’s Make-A-Video, Google today detailed its work on Imagen Video, an AI system that can generate video clips given a text prompt (e.g. “a teddy bear washing dishes”). While the results aren’t perfect — the looping clips the system generates tend to have artifacts and noise — Google claims that Imagen Video is a step toward a system with a “high degree of controllability” and world knowledge, including the ability to generate footage in a range of artistic styles.
As my colleague Devin Coldewey noted in his piece about Make-A-Video, text-to-video systems aren’t new. Earlier this year, a group of researchers from Tsinghua University and the Beijing Academy of Artificial Intelligence released CogVideo, which can translate text into reasonably high-fidelity short clips. But Imagen Video appears to be a significant leap over the previous state of the art, showing an aptitude for animating captions that existing systems would have trouble understanding.
“It’s definitely an improvement,” Matthew Guzdial, an assistant professor at the University of Alberta studying AI and machine learning, told TechCrunch via email. “As you can see from the video examples, even though the comms team is selecting the best outputs there’s still weird blurriness and artificing. So this definitely is not going to be used directly in animation or TV anytime soon. But it, or something like it, could definitely be embedded in tools to help speed some things up.”