Creativity has long seemed immune to advances in artificial intelligence and computational technology. We have always thought of it as an exclusive strength and intrinsic component of what makes us human.
But times are changing. Artificial intelligence is breaking the creativity frontier. There is already software that can create movie trailers, write pop songs, imitate the style of a specific composer or painter, and automatically generate news articles, or even jokes, as we saw at our session in Cannes earlier this year when we invited Dr Heather Knight to demonstrate the comedic talents of Ginger the Robot.
A key recent development in this space is Adobe Sensei, one of the most powerful AI platforms creative professionals can already use today. Since Adobe moved to the Creative Cloud subscription-based model, its understanding of users’ video, imaging, photography and illustration workflows has improved dramatically.
Sensei now uses this wealth of data to automate workflows such as image and video editing and content search, and to provide recommendations to its users. The objective is to make algorithms a partner for creative professionals, letting them spend more time on value-added creative tasks while delegating the more mechanical ones to an intelligent, automated system.
Why does it matter for marketers?
Visual search on Typekit is one of the features already available through Sensei. Users can upload a photo of a typeface they have seen on a billboard, poster or artwork and, through visual search, find the fonts most similar to it.
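Adobe has not published how Sensei's visual search works under the hood, but features like this are commonly framed as nearest-neighbour search over feature vectors ("embeddings") extracted from images. The sketch below is purely illustrative: the font names, vectors and function names are invented for the example, and a real system would generate the embeddings with a neural network trained on glyph images.

```python
# Illustrative sketch only: not Adobe's implementation.
# Visual font search framed as nearest-neighbour search over embeddings.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_similar_fonts(query_embedding, font_embeddings, top_k=3):
    """Rank catalogue fonts by similarity to the query photo's embedding.

    `font_embeddings` maps font name -> feature vector; in a real system
    these vectors would come from a model trained on glyph images.
    """
    scores = {
        name: cosine_similarity(query_embedding, vec)
        for name, vec in font_embeddings.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy catalogue of hypothetical fonts with 4-dimensional embeddings.
catalogue = {
    "Serif A":  np.array([0.9, 0.1, 0.0, 0.2]),
    "Sans B":   np.array([0.1, 0.9, 0.3, 0.0]),
    "Script C": np.array([0.0, 0.2, 0.9, 0.5]),
}
query = np.array([0.85, 0.15, 0.05, 0.25])  # embedding of the user's photo
print(find_similar_fonts(query, catalogue, top_k=2))  # → ['Serif A', 'Sans B']
```

The interesting design point for marketers is that the heavy lifting happens offline: every font in the catalogue is embedded once, so each user query only needs a fast similarity lookup.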
Other features to support creativity aren’t available yet, but have been demonstrated at Adobe’s latest Max conference. These include: searching images that Sensei has automatically tagged; performing tasks in Photoshop by voice alone; radically improving mobile portrait photography (aka selfies); and finding stock images based on a user’s sketch.
In the image space, another feature, Scene Stitch, removes and replaces unwanted content by drawing on Adobe’s entire library of stock photos. Similarly, for video, users can remove a specific element from a clip with the click of a button and in a few seconds, instead of working through the footage frame by frame.
The impact that this kind of automation in video and image editing could have on fake news content is cause for concern, but the time savings for creative professionals are significant.
What are the implications for the future?
Expert opinion on the role of AI in creativity varies. Will AI truly become a partner driving the creative potential of professionals even further? Or will it be able to direct artistic efforts to create works of art by itself? Software can be used to define what creativity can look, sound, or feel like based on a number of parameters, but can it develop its own individual sense of creativity? And is creativity something that can be learned by a machine fed huge amounts of data?
More remains to be seen before we can draw conclusions. But even if it is possible, is this what we should be aiming for? Adobe is clear that its focus with Sensei is not to recreate human creativity but to enable interactions between humans and machines that will foster the creativity of its users.
What seems irrefutable is that artificial intelligence will increasingly influence creative expression. Just as in other areas where AI has made a significant impact, the frameworks we create to inform its development will remain key to its effect on the creative professions, fake news content, and human-machine interaction in general.
For now, we can count on any form of collaboration - including those with machines - to be a key driver of creativity. So, let’s welcome the creative algorithms to the team.
Suggestions for further reading
A report from this year’s Cannes Lions Festival where social roboticist Dr Heather Knight and her robot stand up comedian Ginger joined DigitasLBi’s Chris Clarke on stage.