Over the past few years, and especially in recent months, we have seen rapid growth in artificial intelligence technologies that create stunning visual content from text. One of the best-known (but certainly not the only one) is DALL·E 2. A new viral video shows what might be one of the first examples of the same technology applied to video content.
AI Creating Visual Content
The idea of machines being able to create art (or at least some form of unique visual representation) is not new. As far back as the early 1970s, inventors were experimenting with art-producing code. Artist Harold Cohen wrote an algorithm that allowed a computer to create interesting freehand drawings.
The software Cohen created, called Aaron, was one of the earliest autonomous picture creators, and according to Cohen it generated forms he had never imagined before.
While Aaron and similar algorithms kept improving over the following two to three decades, AI in its true form (as we understand it today) had to wait for the first and second decades of the 21st century and several important technological developments in machine learning, computer vision, and AI in general.
Generative Adversarial Networks (GANs) came into use for creating art around 2017, giving a huge boost to the field of generative art. That finally brings us to where we are today, with services such as DALL·E 2, which has received a tremendous amount of coverage since it was announced in early 2022 (and, at the moment, is still in closed beta).
AI Does Video VFX
Until now we have only discussed AI creating images, but what about video? You might think this would be significantly more difficult, but technology seems to move faster than many of us anticipate. In the video above, Josh from the YouTube channel Olufemii shares a short social clip of a tennis player in which the surroundings change based on what a person is typing.
While the same effect can be achieved using VFX software, that requires some skill and time to produce. But what if you could actually do it as shown in the clip, typing a different background or scenery and getting a different yet quite believable surrounding?
Some users commented that the clip must be fake, produced using VFX, but according to Josh it is real and is a sneak peek at a technology currently being developed by runwayml, a web-based video editing service.
Josh was able to talk to the company, and they confirmed that they are working on this technology and that it should come to their service soon.
We can guess that this technology will have lots of limitations initially, but if the speed at which GANs have developed over the past five years has taught us anything, it is that it won't be long before we see it in real-world use.
What do you think about AI-based VFX? Is this another useful tool for video editors, or will it eventually replace the need for professional video editors altogether?
If you are interested in some more AI art history you should check out Naomi Rea’s article “How Did A.I. Art Evolve?”.