NewsWhip CEO Paul Quigley returns to the topic of Artificial Intelligence, and its implications for publishers.
Last week I wrote about why NewsWhip is building an AI layer. Long story short: the volume of information and media being generated each day means that the long-suffering journalists (the information creators, curators and DJs) are navigating a blizzard of digital artifacts. We need to ensure they get the smartest and most important information, quickly and intelligently. Otherwise we’re just more noise.
Fears of an AI Layer in Media Distribution
On the flip side, I expect there will be some hesitation about any automation or robo-processes entering media production, best encapsulated by the dystopian robo-GIFs that pepper John Herrman’s essays at The Awl.
John has written about publishers and broadcasters losing control of distribution as Google and Facebook become the consumer’s new front pages, and about the battle for social traffic from aggregating last night’s TV. The GIFs of clumsy bots trying to do human things are creepy at best, harbingers of doom at worst. I believe this is how many in media regard the technology companies and platforms that have superseded their control of the information pipes.
There should be a touch of that hesitation here too. If we introduce AI into the signal technology used by news professionals, we are trusting “the algorithms” to learn about our information diet and needs, spot patterns, understand our role or profile in information distribution, and start pushing notifications when we need them. We may become dependent on them, or get bad results. Plus, for many journalists, there’s a fear that delivering signals will push people toward aggregating content instead of creating original material.
From our vantage point, we don’t think these fears will come to pass. Here are my observations based on over two years of customers using Spike, our signaling technology:
Won’t this take away journalists’ freedom to just write what they believe is important and dumb down the news?
This question is easily addressed. Signal technology and listening to the audience does not make content “dumb” – hard news gets as much coverage and sharing activity as fluffy news. When we replaced the front covers of the world’s major English language newspapers with their “most shared” stories of that day, the results were not any “dumber” than the stories picked by the editors.
It may be true, though, that many publishers will chase the same viral video or the same story at once because they all get a signal. But that’s always been the case – signal technology will just make it easier to quantify whether something is worth going after.
Won’t this just be more noise for busy, overwhelmed journos?
Automated event detection will empower journalists, not bury them. Right now, many journalists and communicators spend a big chunk of their day searching for stories, with hit-and-miss results. Users of signaling technology save loads of time finding stories and produce much better results – stories people want to read and share. Customers estimate our current tech saves them one to five hours of work a day. When it gets smarter, how much more time will be saved? How much more quality media can be produced?
As the daily mountain of digital recordings and representations of live events (all the stuff formerly known as UGC) grows, what’s the alternative for the modern “information DJ”? They can spend their day sifting and searching without any help, or get some help finding events, then have more time to use editorial judgment and skills to produce stories. As AI gets better at ferreting out significant events and reactions, why not use it?
What if AI is crap and gives us invalid signals?
AI technology will have to prove itself. People use technology that works, and abandon stuff that doesn’t. With our own tech, journalists use the features that give results and ignore the others. A good technology will unearth stories and events, track popularity and see what’s getting shared online each day. If it doesn’t work well, it won’t be adopted, and no harm will be done.
Won’t this just cause loads of aggregating of content?
Use of signal technology won’t make news producers aggregators. When editors get a signal that something is popular, they do not “aggregate it” – they put resources into exploring and covering that story. That may involve moving it forward through investigation, comment, analysis, or research. Our big news customers, such as the AP, The Washington Post, the BBC, The Guardian, and the LA Times, are not content aggregators, but they do use our technology to help make decisions each day.
Meanwhile, people producing art (and cultural criticism) can keep doing what they do. Technology is optional. It’s people writing about real time events with a wide brief that could do with some robo-help.
I’m sure there’s a good debate coming on this topic – the use of technology in determining our information diet. It’s already happening all over the place: in your Facebook feed, in how and when articles are being selected by publishers for sharing to different social audiences, in the robo-writing of simple sports and business articles.
For now: the possibility of being magically prompted with the information you need, when you need it, is quickly going from possible to inevitable. For better or worse, boring stories beware.