Computer science was born of a rebellious, hacker culture, a spirit that lives on in the publishing culture of artificial intelligence (AI). The burgeoning field is increasingly turning to conference publications and free, open-review websites while shunning traditional outlets—sentiments dramatically expressed in a growing boycott of a high-profile AI journal. As of 15 May, about 3000 people, mostly academic computer scientists, had signed a petition promising not to submit, review, or edit articles for Nature Machine Intelligence (NMI), a new journal from the publisher Springer Nature set to begin publication in January 2019.
The petition, signed by many prominent researchers in AI, is more than just a call for open access. It decries not only closed-access, subscription-based journals such as NMI, but also author-fee publications: open-access journals that are free to read but require researchers to pay to publish. Instead, the signatories call for more “zero-cost” open-access journals.
The purpose of the boycott is “to lower the barriers to research progress” for resource-strapped scientists, says Thomas Dietterich, a computer scientist at Oregon State University in Corvallis, who began the boycott last month. The field is moving too fast for traditional publishing, and AI's potential for both great benefit and great harm requires openness, he says. “Locking up our research papers behind a paywall would make public scrutiny more difficult.”
Paul Ginsparg, a physicist at Cornell University and founder of the preprint repository arXiv, where computer scientists often publish, applauds what he calls “a principled stand.” But, he adds, “I personally have no animus towards the subscription model.” And he thinks the petition signers may have unrealistic hopes for zero-cost journals. Servers are cheap, but “systematic quality control is labor-intensive, and that costs real money.”
Springer Nature is not backing away from its plans for the journal, Susie Winter, a Springer Nature spokesperson in London, said in a statement. “At present, we believe that the fairest way of producing highly selective journals like this one and ensuring their long-term sustainability as a resource for the widest possible community is to spread the associated costs among many readers.” Dietterich says he did not include Springer Nature's flagship journal, Nature, in the boycott because computer scientists tend not to publish in general-interest journals anyway. (Google DeepMind, which published prominent papers on its AlphaGo AI in Nature, is an exception, although multiple DeepMind employees have signed the boycott.)
Journals from nonprofit societies such as AAAS (which publishes Science and Science Robotics), the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery also got a pass, Dietterich says, because of their missions and low fees.
[Graph omitted. Graphic: E. Hand/Science; data: Yoav Shoham et al., The AI Index 2017 Annual Report (2017).]
In computer science, most of the action does not take place in journals, anyway. Often, papers are posted to arXiv and then submitted, generally for free, to conferences, where they get a limited form of peer review: comments, and acceptance or rejection. Computer scientists have gravitated to arXiv because of the slow reviewing process at journals, says Yann LeCun, Facebook's chief AI scientist in New York City. Moreover, for academic advancement in computer science, conference papers and not journal papers have become the coin of the realm, says Leslie Kaelbling, a computer scientist at the Massachusetts Institute of Technology in Cambridge. “We are all very practiced, everywhere, at arguing to our deans and provosts and so on at tenure time that, ‘Yes, this person has almost no journal articles but it's OK.’”
AI is now moving toward not just open access, but open review. In 2013, Andrew McCallum, a computer scientist at the University of Massachusetts in Amherst, launched OpenReview, a site that allows authors to submit conference papers and invites reviewers to post their comments and decisions openly. Anyone else can add a review, too. Major AI conferences have begun using the site, and McCallum says fears of flame wars or soft reviews proved unfounded. “Furthermore, some magical things happened.” For example, for a 2013 AI paper on data analysis, a mathematician from outside computer science noted errors in a proof and shared an idea for fixing them. “This is the way science should be working, right?” McCallum says. He adds that he and Ginsparg have discussed using OpenReview to provide an independent overlay on arXiv articles, which don't currently allow comments.
Kaelbling says the explosion of AI research is stressing existing publications, and that sites such as OpenReview can help by spreading out the reviewing effort and curbing low-quality submissions. One upcoming conference, the Conference on Neural Information Processing Systems (NIPS), asked Kaelbling for help finding 2750 reviewers for submitted papers. “But I feel reasonably sure that it will be very hard to find 2750 qualified NIPS reviewers,” she says. “It's crazy.”
The torrent of AI publications may be unsustainable, but it is exhilarating, McCallum says. He tells a story about a colleague who posted a paper on computer vision to arXiv. Within months, other papers had built on it, been posted to arXiv, and been built on themselves. When one of those papers was presented at a conference, the authors didn't just discuss their own paper. They discussed a year of progress. “It was more like, ‘Here, let me tell you a retrospective of seven generations of scientific research,’” McCallum says. “This never would have happened in a closed publishing world.”