In the bowels of Publicis Groupe’s UK HQ on Baker Street there’s a machine that offers insights and tips to any marketer needing inspiration. Scan a code and the Zoltar-like Beardy Planner will offer up bons mots such as “Tradition might be the future” and “Involve consumers as partners”. But what does AI mean for beardy planners in real life – and more importantly, what does it mean for communications planning?
Algorithms are already affecting the way we do strategy. Our social feeds are algorithmically programmed, as are the ads we see and, depending on how we use Spotify and Netflix, so are the music we listen to and the TV shows we watch. The marketing community is bathing in broadly similar information, news and culture, selected for us by machine. Perhaps the desire to escape our filter bubbles is one factor behind resurgent interest in observational research. No conference on customer experience is complete without examples of this applied ethnography, such as the supermarket chain that insists its branch managers have dinner with a local family when they start in a new area.
Just as machine learning is already affecting our decision-making as consumers, it’s also going to speed up strategic decision-making. Monte Carlo tree search helped AlphaGo Zero achieve its result through applied randomness. Rather than running ‘what if’ scenarios, DeepMind’s AI uses huge computational power to crunch millions of game simulations and learn from them. This “self-play learning” is like having the world’s brainiest, and most eager to learn, strategist on your side, trying out new hypotheses on an industrial scale. AI can be used to reinforce, rather than replace, our intelligence.
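The “applied randomness” at the heart of Monte Carlo methods is simpler than it sounds: run a huge number of random trials and let the aggregate statistics guide the decision. A minimal Python sketch – a toy illustration, nothing like DeepMind’s actual system, and with invented win probabilities – might look like this:

```python
import random

def simulate(win_prob, trials=100_000, seed=42):
    """Estimate a strategy's win rate by playing many random games.

    win_prob is a stand-in for a real game simulator: each trial
    'wins' with that probability. With enough trials the observed
    rate converges on the true one.
    """
    rng = random.Random(seed)
    wins = sum(rng.random() < win_prob for _ in range(trials))
    return wins / trials

# Compare two hypothetical strategies purely on simulated outcomes
# (the 0.55 and 0.48 figures are made up for illustration):
rate_a = simulate(0.55)
rate_b = simulate(0.48)
best = "A" if rate_a > rate_b else "B"
```

The point is the method, not the numbers: decisions emerge from massed simulation rather than from a single cleverly reasoned scenario.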
Yet for every scary clip of a back-flipping robot, there’s an example of machines struggling to go beyond the literal. It’s why artist James Bridle was able to trap a self-driving car with a bag of salt, encircling it with a solid line it couldn’t cross. Examples of machine-to-machine interaction will proliferate as our assistants take on any task that involves a decision tree. As Bridle has also noted when writing about the phenomenon of automated videos aimed at kids on YouTube, bots creating content for other bots is worrying, not just for the arbitrary way in which it features violence but because it forces us to act more like machines – with their own biases – in order to be discovered.
As representatives of the customer inside the organisation, it will be strategists’ job to critically check the inputs, outputs and biases of machine learning. At the same time, we need to ask ourselves whether the way we’re applying technology is appropriate: putting up a large display that highlights jaywalkers in real time may work in China, but would it be right in Germany?
Dov Seidman, who writes on technology and advises companies on ethical behaviour, thinks we’re entering an era of the heart: the more machines and software influence our lives, the more people will seek out human-to-human connections. As machines do more of our thinking for us, the one thing they won’t be able to do is feel for us. So there is still some way to go.
Suggestions for further reading
New York Times journalist Thomas L. Friedman talks to Dov Seidman, CEO of LRN, about what it means to be human in the age of intelligent machines.
How James Bridle was able to trap a self-driving car with a bag of salt.
The story behind Frank Lantz’s game Universal Paperclips, which borrows from Nick Bostrom’s thought experiment on how a paperclip-making AI could destroy the universe.