OpenAI has announced welcome news for ChatGPT users, particularly those on the platform's free tier. The company is now allowing free users to generate up to two images per day with DALL-E 3, its advanced generative AI model. The feature, previously exclusive to ChatGPT Plus subscribers, is now available to a broader audience.
The announcement was made on X, where OpenAI shared that free-tier users can now take advantage of this tool to create images based on text prompts. Whether it’s for crafting visuals for a presentation, designing a custom card, or bringing a creative idea to life, free users can now generate two images each day.
“We’re excited to introduce the ability for ChatGPT Free users to create up to two images per day with DALL·E 3,” OpenAI stated. They encouraged users to experiment with the tool by offering sample prompts, such as imagining a superhero flying around Mount Everest with a powerful physique.
DALL-E 3, much like OpenAI’s GPT-4, is a state-of-the-art model; it specializes in converting text into a wide array of visual styles, from whimsical illustrations to hyper-realistic images. While the new feature gives free users a taste of these capabilities, unlimited image generation remains a benefit reserved for ChatGPT Plus subscribers, who also gain early access to new tools, higher usage limits with GPT-4, and more.
This update marks a significant step in making generative AI more accessible, allowing a wider range of users to explore and utilize OpenAI’s advanced technologies.
Meanwhile, OpenAI has also expressed concerns about the potential for users to form emotional attachments to the newly introduced Voice Mode feature in ChatGPT. These concerns were highlighted in the company’s “System Card” for GPT-4o, a detailed document outlining the risks of the AI model and the safeguards around it. One key risk identified is users anthropomorphizing the chatbot, attributing human-like qualities to the AI in ways that could lead to unintended emotional connections.