Meet Black Forest Labs, the startup powering Elon Musk’s unhinged AI image generator


Elon Musk’s Grok released a new AI image generation feature on Tuesday night that, just like the AI chatbot, has very few safeguards. That means you can generate fake images of Donald Trump smoking marijuana on the Joe Rogan Show, for example, and upload them straight to the X platform. But it’s a new startup, not Musk’s own company, that is behind the controversial feature.

The social media site is already flooded with outrageous images from the new feature. That certainly raises concerns heading into an election cycle, but strictly speaking, it’s not really Elon Musk’s AI company powering the madness. Musk seems to have found a company that sympathizes with his vision for Grok as an “anti-woke chatbot,” without the strict guardrails found in OpenAI’s DALL-E or Google’s Imagen. On Tuesday, xAI announced a collaboration with Black Forest Labs, an AI image and video startup launched on August 1, to power Grok’s image generator using its FLUX.1 model.

Black Forest Labs is based in Germany and recently came out of stealth with $31 million in seed funding, led by Andreessen Horowitz, according to a press release. Other notable investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup’s co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, were formerly researchers who helped create Stability AI’s Stable Diffusion models.

According to Artificial Analysis, Black Forest Labs’ FLUX.1 models surpass Midjourney’s and OpenAI’s AI image generators in terms of quality, at least as ranked by users in its image arena.

The startup says it is “making our models available to a wide audience,” with open-source AI image generation models on Hugging Face and GitHub. Soon, the company says it plans to create a text-to-video model as well.

Black Forest Labs did not immediately respond to TechCrunch’s request for comment.

In its launch release, the company says it aims to “enhance trust in the safety of these models”; however, some might say the flood of its AI-generated images on X on Wednesday did the opposite. Many images users created with Grok and Black Forest Labs’ tool, such as Pikachu holding an assault rifle, could not be recreated with Google’s or OpenAI’s image generators. There’s certainly no doubt that copyrighted imagery was used for the model’s training.

That’s kind of the point

This lack of safeguards is likely a major reason Musk chose this collaborator. Musk has made clear that he believes safeguards actually make AI models less safe. “The danger of training AI to be woke – in other words, lie – is deadly,” said Musk in a tweet from 2022.

Anjney Midha, a board director at Black Forest Labs, posted on X a series of comparisons between images generated on day one of launch by Google Gemini and by Grok’s FLUX collaboration. The thread highlights Google Gemini’s well-documented issues with creating historically accurate images of people, specifically its tendency to inject racial diversity into images inappropriately.

“I’m glad @ibab and team took this seriously and made the right choice,” said Midha in a tweet, referring to FLUX.1’s seeming avoidance of this issue (and mentioning the account of xAI lead researcher Igor Babuschkin).

Because of that flub, Google apologized and turned off Gemini’s ability to generate images of people in February. As of today, the company still doesn’t let Gemini generate images of people.

A firehose of misinformation

This general lack of safeguards could cause problems for Musk. The X platform drew criticism when sexually explicit AI-generated deepfake images of Taylor Swift went viral on the platform. Beyond that incident, Grok generates hallucinated headlines that appear to users on X almost weekly.

Just last week, five secretaries of state urged X to stop spreading misinformation about Kamala Harris. Earlier this month, Musk reshared a video that used AI to clone Harris’ voice, making it appear as if the vice president admitted to being a “diversity hire.”

Musk seems intent on letting misinformation like this pervade the platform. By allowing users to post Grok’s AI images, which appear to lack any watermarks, directly on the platform, he has essentially opened a firehose of misinformation pointed at everyone’s X newsfeed.
