Do we welcome our new robot overlords? How AI will affect society and scholarly publishing
Artificial intelligence is rapidly moving from science fiction to science fact, with AI now driving cars and beating Go grandmasters, causing both excitement and anxiety. Following the US National Science and Technology Council report “Preparing for the Future of Artificial Intelligence” last month, the Cambridge Union debating society tackled the rise of AI and the SpotOn conference considered whether AI could be put to use in scholarly peer review.
This House Fears The Rise of Artificial Intelligence
“I’m sorry, Dave. I’m afraid I can’t do that”
– HAL 9000, 2001: A Space Odyssey
The Cambridge Union debate was sponsored by Hindawi and I was invited to take a front-row seat last month as the two sides sparred.
Arguing for the motion were Kathryn Parsons of digital literacy company Decoded, Seán Ó hÉigeartaigh, Executive Director of the Centre for the Study of Existential Risk in Cambridge, and Sir Nigel Shadbolt, Professor of Computer Science at Oxford.
Kathryn stoked the fear by suggesting that technology is replacing people in jobs and could bring mass unemployment, though data scientists, Zumba instructors, choreographers, and theologians may breathe a sigh of relief.
Seán pointed out that neural networks, a common type of AI, are “black boxes” – they may get the right answer, but if they can’t say how they know the answer, can we trust them? AI may also heighten the risk of privacy breaches of our personal data. He argued “we don’t need to fear AI”, which I thought undermined his side, though his point was subtle – it is not AI itself we need worry about, but rather the rise of AI. Seán has worked with philosophers in “effective altruism”, who place risks from AI at the top of their global research priorities – I put it to him later that these may be a little self-serving, as AI may put some philosophers out of business, and the second priority is promoting effective altruism… ahead of climate change or global health.
Nigel’s introduction as coming from Oxford was met with sharp intakes of breath, showing the Cambridge audience may really be more worried about the Other Place than AI. He built on Seán’s point to note that weaponization of AI – such as cyber offensives and missile guidance – is a real risk, as with any dual-use technology. It is the uses AI might be put to by people that are worrying, not AI itself: the “enduring natural stupidity of our political class” makes him afraid.
“Don’t panic” was the message of the BBC’s Rory Cellan-Jones, Murray Shanahan, a robotics professor at Imperial who consulted for the film Ex Machina, and Ben Medlock, whose SwiftKey technology has been used by Prof. Stephen Hawking.
Rory and Ben noted wryly that they had been thrown a curveball the day before by Hawking, who declared that “powerful AI will be either the best, or the worst thing, ever to happen to humanity”. Putting on a brave face, Rory noted that AI can do single tasks very well, but is not able to do lots of tasks together – this is still a unique human skill. Murray argued it is anthropomorphizing artificial intelligence that makes us afraid, as we think it will have the worst human qualities – but there is no reason to build such AIs. Ben played down the risks of the fully conscious AI we see in movies, as general intelligence must be embodied and there is no current prospect of that with AI. Essentially, the media – and the proposition – were scaremongering.
Speakers from both sides acknowledged afterwards that they agreed more than they disagreed, perhaps a feature of scientific as opposed to political debate. But one side had to win, and the ayes had it, by 106 to 86: the Cambridge Union is afraid of the rise of AI. You can see my live tweets and more at #CUSAI.
Applying Artificial Intelligence to Peer Review… What Could Possibly go Wrong?
“Finally, robotic beings rule the [world] journal
The [humans] reviewers are dead”
– Flight of the Conchords (edited)
The Cambridge Union is so interested in AI that they discussed it again this month, focussing this time on the impact on industry by 2026. By coincidence, the SpotOn London conference (on science policy, outreach and tools online) this month also looked to the future, asking what peer review might look like in 2030. SpotOn also covered AI, again with a negative slant.
Science fiction writer John Gilbey set the scene, referencing a cartoon from a piece he wrote in 1988 showing a robot kicking a researcher out of a lab! John echoed the Cambridge Union debate – he is not worried about HAL 9000 or Skynet, but rather about subtler issues. The fears of AI among researchers have mostly been replaced with interest in how AI might help with data analysis and modelling. Voting with our accept, reject, major revisions, and minor revisions paddles, most of the audience agreed AI will be part of peer review by 2030, though there was some disagreement over whether this is desirable. In fact, 2030 may be a conservative estimate – it’s not yet routine, but AI is already being adopted in scholarly publishing.
Years ago, I would dream of a robot editor to enhance or even replace the human task of finding peer reviewers. We now have relatively simple but surprisingly effective clustering tools like JANE; Hindawi uses an algorithm to offer suggestions to our editors; and companies like ÜberResearch are applying machine learning to the problem.
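The core idea behind such matching tools can be sketched very simply: represent the manuscript and each candidate reviewer’s publications as TF-IDF term vectors, then rank reviewers by cosine similarity to the manuscript. This is a minimal illustration of the general technique, not the actual algorithm used by JANE, Hindawi, or ÜberResearch; the reviewer names and texts are invented.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Turn a list of texts into sparse TF-IDF term vectors."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for tokens in tokenized for term in set(tokens))
    n = len(docs)
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # Weight each term by frequency in this doc and rarity across docs.
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def rank_reviewers(manuscript, reviewers):
    """Rank (name, publication text) pairs by similarity to the manuscript."""
    vectors = tf_idf_vectors([manuscript] + [text for _, text in reviewers])
    scores = [(cosine(vectors[0], vec), name)
              for (name, _), vec in zip(reviewers, vectors[1:])]
    return [name for _, name in sorted(scores, reverse=True)]
```

Real systems add much more – synonym handling, co-authorship conflicts, reviewer workload – but the ranking step is essentially this similarity search.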
Stephanie Dawson asked whether text and data mining could replace peer review and assess soundness, while humans are left to judge importance. Indeed, machines are already reading manuscripts. The panel showcased the StatReviewer tool, which uses natural language processing to work out what a paper is about and applies the EQUATOR Network reporting guidelines to know what content to expect – flagging up anything it believes is missing. This is similar to the Bayesian cognition human reviewers unconsciously apply. The Penelope tool has a similar promise, and Meta offers to find the right journal, missing citations, and more (via William Gunn). Will editors be on the scrapheap as AI hits the mainstream?
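At its simplest, a checklist-based screen can be sketched as a rule-based pass over the text: each checklist item is a set of cue phrases, and an item is flagged as possibly missing if none of its cues appear in the manuscript. The items and cue phrases below are invented for illustration – they are a crude stand-in for StatReviewer’s NLP and not the actual EQUATOR criteria.

```python
# Toy checklist: item name -> phrases suggesting the item is present.
# Invented cues for illustration, not real EQUATOR or StatReviewer rules.
CHECKLIST = {
    "sample size justification": ["sample size", "power analysis", "power calculation"],
    "randomization method": ["randomized", "randomised", "random allocation"],
    "statistical software": ["spss", "stata", "python", "sas"],
}

def flag_missing_items(manuscript):
    """Return checklist items for which no cue phrase appears in the text."""
    text = manuscript.lower()
    return [item for item, cues in CHECKLIST.items()
            if not any(cue in text for cue in cues)]
```

A tool like StatReviewer goes far beyond keyword spotting, of course, but the output is the same shape: a list of expected content the reviewer should double-check.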
But digital humanities researcher Matt Hayler was wary – can AI review the cutting edge of fields, if it is basing its definition of “good” on what has gone before? Timothy Houle, who works on StatReviewer, raised an ethical question – their system could use authors’ prior work, good or bad, to feed into its analysis of their work, but should it? This is one of the questions in the debate about double-blind review. You can see tweets about the panel at #SpotOnAI.
The text and images in this blog post are by Hindawi and are distributed under the Creative Commons Attribution License (CC-BY). Cambridge Union term-card is copyright Cambridge Union.