Meta Announces New AI Tool to Generate Realistic Video and Audio Depicting Real People

So, I know this is uncomfortable, but the thing is… we are already at the point where we can’t actually know whether a video is real, so the default position should be that anything you see on the internet is fake unless you have reason to believe otherwise.

This is a video that was going around a couple weeks ago:

That was apparently made with open source software. Although I don’t think closed source AI is going to remain superior to open source AI, it is right now, so these companies (and therefore the government) already have the ability to do a lot better than that video. And in a year’s time, the open source stuff will catch up and be indistinguishable from reality.

People should exploit this situation and create massive amounts of disinformation in order to promote their political agenda. The other side is definitely going to be doing that.

The Guardian:

Meta, the owner of Facebook and Instagram, announced on Friday it had built a new artificial intelligence model called Movie Gen that can create realistic-seeming video and audio clips in response to user prompts, claiming it can rival tools from leading media generation startups like OpenAI and ElevenLabs.

Samples of Movie Gen’s creations provided by Meta showed videos of animals swimming and surfing, as well as clips using people’s real photos to depict them performing actions like painting on a canvas.

Movie Gen also can generate background music and sound effects synced to the content of the videos, Meta said in a blogpost. Users can also edit existing videos with the model.

That’s what’s going to be easiest at first – replacing faces with other faces and changing real video.

Horny Korean teenage boys are leading the way here, apparently.

In one such video, Meta had the tool insert pompoms into the hands of a man running by himself in the desert, while in another it changed a parking lot on which a man was skateboarding from dry ground into one covered by a splashing puddle.

Videos created by Movie Gen can be up to 16 seconds long, while audio can be up to 45 seconds long, Meta said. It shared data showing blind tests indicating that the model performs favorably compared with offerings from startups including Runway, OpenAI, ElevenLabs and Kling.

Meta spokespeople said the company was unlikely to release Movie Gen for open use by developers, as it has with its Llama series of large language models, saying it considers the risks individually for each model. They declined to comment on Meta’s assessment for Movie Gen specifically.

They probably will release it, because they want people working with their models and that’s the way to do it. But it doesn’t matter whether they do or don’t: the genie is out of the bottle. Closed source, scam that it is, cannot possibly contain AI development. AI is too important for that nonsense.

Elvis Dunderhoff contributed to this article.