The Poynter Institute for Media Studies is a non-profit journalism school and research organization in St. Petersburg, Florida. The school owns the Tampa Bay Times newspaper and operates the International Fact-Checking Network and PolitiFact. In journalism circles, Poynter is considered a trusted source.
So when Poynter publishes a headline like this, it’s a good idea to read:
Here’s the piece, written by Ren LaForme, Tony Elkins and Alex Mahadevan…
The artificial intelligence research organization OpenAI unveiled a stunningly realistic text-to-video tool on Thursday. It’s difficult to overstate the reaction from AI enthusiasts, researchers and journalists. A few representative headlines:
CBS News: “OpenAI’s new text-to-video tool, Sora, has one artificial intelligence expert ‘terrified’.”
ABC News: “OpenAI video-generator Sora risks fueling propaganda and bias, experts say.”
The New York Times: “OpenAI Unveils A.I. That Instantly Generates Eye-Popping Videos.”
On Monday, I called up Tony Elkins, a Poynter faculty member and a founding member of the News Product Alliance, and Alex Mahadevan, director of MediaWise at Poynter, to get their takes on the development. Elkins and Mahadevan both meticulously track the evolution of AI and test new models in their roles at Poynter. This conversation has been edited for brevity and clarity.
Ren LaForme: We’ve seen the breathless reports about OpenAI’s new text-to-video tool, Sora. There are a lot of unknowns. But I thought I’d start by asking you if you could tell me what we do know about it.
Tony Elkins: It is a fairly significant out-of-the-box demo. It looks really good for their first try. From where AI video existed a year ago — and even some tools I just started testing like Pika — the jump between that and this is just ridiculous.
Did you see the video with the woman in bed with the cat? It’s very realistic at first glance, but when she rolls over there’s no arm there, and then the cat has an arm that comes out of nowhere. But it wasn’t super jarring. You had to really pay attention to know it was AI.
To me, the most significant part is that this is a demo. What’s the second release going to look like? It took several versions of DALL-E and Midjourney to produce realistic images.
Alex Mahadevan: I agree. I was very impressed. I saw the cat one as well and the physics of the cat batting the woman’s face and the comforter rolling over. There’s another video I saw of the grandmother who’s showing her hands and then preparing some gnocchi. Her hand turns into a spoon.
Clearly in a lot of these videos, there are absurdities that are comical and quite scary. And that highlights major weaknesses in this technology.
For the rest, click here.