Scientific communication is a research problem
A recent article in "The Atlantic" has been the subject of many comments in my Twittersphere. It's about scientific communication in the age of computer-aided research, which requires communicating computations (i.e. code, data, and results) in addition to the traditional narrative of a paper. The article focuses on computational notebooks, a technology introduced in the late 1980s by Mathematica but which has become accessible to most researchers only since Project Jupyter (formerly known as the IPython notebook) started to offer an open-source implementation supporting a wide range of programming languages. The gist of the article is that today's practice of publishing science in PDF files is obsolete, and that notebooks are the future.
One interesting follow-up thread on Twitter explored whether any scientific papers had actually been published in the form of Jupyter notebooks. It seems that the answer is no. Notebooks are published as supplementary material to standard papers, or as informal communication outside of the official scientific record, in particular for teaching purposes, but no one could point to a paper indexed in any article database that was written as a Jupyter notebook. As to the question of why this hasn't happened, all answers remain speculative in the absence of research into the subject. Publishers' format requirements are certainly a part of the problem, but limitations of today's notebook format also matter. In particular, notebooks lack support for bibliographies and for cross-referencing.
Another interesting follow-up is a blog post by Luis Pedro Coelho, who predicts that PDFs will stay with us for many years to come, because none of the proposed successors is actually mature enough for use in real life. In particular, he points out the complexity and lack of longevity and stability of most of today's computational tools. My personal experience is very similar to his. He also asks the very relevant question of whether a notebook-style presentation of results and computations is actually a good idea in the context of a scientific paper. I suspect nobody can provide an evidence-based answer at this time.
As these discussions illustrate, scientific communication about computer-aided research remains a research problem. As a community, we do not know how to explain, share, or review computer-aided research in a satisfactory way. Most of us agree that PDFs are no longer sufficient, and that we need to share code and data. However, we do not yet have good enough practices for doing so, at least not for all practically relevant situations. Nor do we know whether sharing code and data will actually be sufficient to enable effective communication. It is quite possible that we will also need to develop practices for better explaining computations to each other, and have them peer reviewed in some form.
From this point of view, all of today's technology, be it Jupyter, Org mode, knitr or similar tools, should best be seen as support tools for performing experiments in scientific communication. What is still largely missing is systematic research that evaluates these experiments with the goal of summarizing the collective experience and drawing conclusions. There are promising starts, such as this study on the actual use of Jupyter notebooks, but their number is negligible compared to the number of articles proclaiming that this or that technology is going to revolutionize scientific communication without providing any tangible evidence.
I think it is time for the scientific community to acknowledge that it doesn't really know how to communicate computer-aided research effectively, and encourage research into the question. Experimenting with the various proposed approaches is essential, but analyzing the outcomes of these experiments is essential as well. In my opinion, we currently over-emphasize tool development, community building, and teaching, which are all directed at implementing new practices, but neglect research into what these practices actually should be. Future generations of scientists may well remember today's hot developments as sources of technical debt.
A personal anecdote provides an illustration of the dominant attitude. My ActivePapers project is clearly labeled as research. Its goal is to explore how non-trivial computations (long run times, big data sets) can be performed, archived, and published reproducibly. For first results, see this paper. Whenever I present this project, I know there is one question someone in the audience will ask: What are your plans for increasing your user base? I answer that I am doing research and not product development, and that I am not recruiting users but at best collaborators. This always causes surprise and sometimes animated discussions. It almost seems that doing research on doing research is a strange idea for professional scientists. On the other hand, my other research project on scientific communication, the digital scientific notation Leibniz, does not generate this kind of reaction, but then it hasn't seen that much exposure yet. It explores the question of how we can explain a complex computation in a way that allows readers to verify its scientific assumptions. For a first account, see this preprint.
Finally, readers might be interested in two of my earlier blog posts that are related to notebooks:
"Beyond Jupyter: what’s in a notebook?" looks at notebooks as digital documents, focusing on the information content rather than on the tool for doing computations.
"From facts to narratives" explores various approaches, one of them being notebooks, to combining the formal elements of a computation (code, data) with an explanatory narrative.