
AI and the Music Video

Whether or not Artificial Intelligence can ever be creative is an ongoing question. The Unreal blog highlighted one such project in Words, camera, action: Creating an AI music video with Unreal Engine 5. As is usually the case with creative projects, it was a collaboration between human creators and a multiplicity of Machine Learning ‘algorithms’, used primarily to apply looks.

What if you could create an Akira-inspired cyberpunk film—using nothing but text? 

Whether or not this particular project appeals to you aesthetically, it’s a strong indicator of how the visualization power of Unreal can bring human and machine creatives together.

Generating visuals from text prompts is a relatively new but rapidly evolving field, spanning the amazing DALL-E 2, Google’s Imagen, and the still-private Midjourney, and it will change how we source original imagery. Whether blending the computer-generated imagery into backgrounds, populating the video with a MetaHuman, or using Machine Learning algorithms to process the images to look more like Katsuhiro Otomo’s iconic Akira anime style, AI has its fingerprints all over the end result, but it was ‘guided’ by the human co-creators.