Can open science help to make research more accessible?

Authors: Elta Smith with Anna Knack, Catie Lichten, Sarah Parks, Salil Gunashekar, RAND Europe

Today’s connected society presents new and exciting opportunities to share and use information that supports research and innovation. Recent scientific advances, such as the discovery of the Higgs boson and gravitational waves, have been the result of idea and data collaborations between thousands of scientists, facilitated by digital technologies.

Every day new data is generated through the activities of billions of people, and rich and complex data continues to proliferate through the use of digital devices. New approaches to scientific research are enabled by this environment and the opportunities for discovery and innovation could increase dramatically when research results and research data are more accessible and re-usable.

In this context, ‘open science’ broadly refers to a set of attitudes, beliefs, and practices that are characterised by open access to scientific research publications, open research data, and other forms of multi-directional exchange between academic researchers themselves and with the public.

International research collaboration is not new; it has occurred for decades. In the post-1945 period, for example, cross-national economic theorisation, modelling and data sharing helped to rebuild the European economy. International data cooperation has been extensive since the 1970s, with international co-authorship rising in the early 1980s alongside a growing trend towards producer-user cooperation and collaboration in scientific research.

Open science is therefore not ‘new’, but rather a way of describing a research process that is ever-evolving in the wake of rapid advancements in digital technology. It also describes the bottom-up expansion of the scientific research community to include other knowledge co-producers – including non-scientists – which affects the entire research cycle and wider society. In the short term, proponents of open science expect it to increase transparency and collaboration; in the long term, it could make science more efficient and more responsive to grand societal challenges.

Free circulation of knowledge, the sharing of research results, and transparency of methodology are core tenets of the scientific method. Open science facilitates these features of research and helps to remove barriers to sharing outputs, resources, methods or tools in the research process. The leading principle of open science is that anyone, whether they are part of the research community or the public, should be able to access scientific knowledge.

Over the last three years, RAND Europe has undertaken a series of studies focusing on different aspects of open science, including a study in 2016-17 for the European Commission to develop the world’s first online monitor to track global trends in open science. The monitor has informed the work of the Open Science Policy Platform, which is charged with the development and implementation of open science policy in Europe.

Our work has demonstrated the need for and importance of a more open research system. An interesting example is the Reproducibility Project, which developed out of concerns about the reliability of published scientific findings, particularly in the field of psychology. Several well-regarded studies have identified problems with replicating research results, including a survey of psychology researchers that found only about half of the respondents’ attempted replications had been successful.

However, the problem is not unique to psychology. In the medical sciences, replication is also an issue. A 2011 study that attempted to replicate studies on potential drug targets found that only one quarter of 67 attempts were successful. Another widely-cited study found that only 11% of landmark cancer treatment studies (6 out of 53) were reportedly replicable. In the context of life sciences research, one study estimated that a staggering $28 billion (USD) is spent every year in the U.S. alone on pre-clinical research that is not reproducible.

These developments and wider events in the psychology community led psychologist Brian Nosek to initiate the Reproducibility Project. The main objective of the study, which was funded by the Laura and John Arnold Foundation (LJAF), was to systematically assess – in an open and transparent way – the degree to which psychology studies are replicable. The project was coordinated by the Center for Open Science in the U.S. and spent three years attempting to reproduce 100 different psychology studies. The results, published in August 2015, reflected a large-scale collaborative effort involving over 350 contributors and volunteers from around the world. The findings were striking: fewer than 40% of the 100 original published studies could be reproduced.

The work sparked discussion and debate within the research community about the presence of systemic issues related to reliability and replicability in psychology and indeed other fields, and about ways to overcome these problems. Publishers and funders have taken a keen interest in this area and have started making changes to their policies and practices. Numerous editorials and commentaries on the need to improve scientific reproducibility and transparency cite the Reproducibility Project.

Furthermore, changes have been made to some journal policies, particularly in psychology journals, to better support and encourage reproducibility – for example, requiring the sharing of data and materials and the pre-registration of study protocols. Funders such as LJAF, the National Institutes of Health, the National Science Foundation, and the Netherlands Organisation for Scientific Research have begun discussing and/or rolling out initiatives to support reproducibility research. More recently, ‘follow-on’ projects in other areas have begun to examine reproducibility and transparency – most notably, the LJAF has funded another study (‘Reproducibility Project: Cancer Biology’) to examine the replicability of findings in 50 published cancer biology studies.

Very simply, evidence should be at the heart of science policy decision making. Now is a critical moment to ensure that the opportunities offered by open science are fed into research policy discourse internationally. This can illustrate the potential of a more open research system to create a transparent and accessible evidence base from which to examine the reliability, credibility and usefulness of research results.

Full references can be found in the published version.

Footnote:

Elta Smith is an associate research group director, Sarah Parks and Salil Gunashekar are senior analysts, Catie Lichten is an analyst and Anna Knack is a research assistant at RAND Europe. All five are involved in RAND Europe’s work on open science.