This type of program, which Meta describes as an AI research concept, isn’t new, but the precision of Make-a-Scene sets it apart.
Make-a-Scene works by incorporating user-created sketches into its text-based image generation, outputting a 2,048 x 2,048-pixel image.
Meta says Make-a-Scene “could enable new forms of AI-powered creative expression while putting creators and their visions at the center of the process.”
The new system improves on previous image generators because its results are easier to control. Meta says that with earlier systems, if a user wanted an image of a zebra riding a bicycle, the zebra might come out the wrong size or the bicycle might face the wrong direction.
In its announcement, Meta says Make-a-Scene isn’t just for artists. One of its program managers, Andy Boyatzis, used the program to create art with his two- and four-year-old children. “If they wanted to draw something, I just said, ‘What if…?’ and that led them to creating wild things, like a blue giraffe and a rainbow plane. It just shows the limitlessness of what they could dream up.”
Meta says this is just the beginning of its exploration into these types of creative programs, which could lead to users, possibly even the ever-so-coveted young users, spending more time on its platforms.
In internal testing, Meta found that a panel of human evaluators chose the text-and-sketch image over the text-only image as better aligned with the original sketch 99.54% of the time, and as better aligned with the original text description 66% of the time.
“Through projects like Make-A-Scene, we’re continuing to explore how AI can expand creative expression,” the company said.
Meta has shared the technology with a group of AI artists, but a public release date has not yet been announced.