
Open Science

Updated: Apr 23, 2019

Reproducibility, Replicability & Reliability made easy (hopefully!)

After attending a series of workshops at the British Neuroscience Association conference, I realised how little I really knew about Open Science. I have decided to curate a list of resources to help me (and hopefully others) learn more about how to make our research more open. This list is a work in progress (and by no means exhaustive), and suggestions for resources to add are more than welcome (email me or tweet @AleLautarescu).


Why open science?

A good starting point is reading Munafo et al. (2017) - A manifesto for reproducible science

"Open science refers to the process of making the content and process of producing evidence and claims transparent and accessible to others."

Threats to reproducible science (Munafo et al. 2017)

What can you do?

It is important to understand that open science is not an all-or-nothing approach. You can start by implementing only one or two of the suggestions below.

You can join your local UK Reproducibility Network hub. A list of hubs can be found on the UK Reproducibility Network website.

Planning your research project

  • Conduct a systematic review when formulating your hypotheses. Systematic reviews reveal uncited studies in a way that reading reference lists may not.

  • Conduct a sample size estimation (low statistical power increases the likelihood of obtaining both false-positive and false-negative results!). You can also find and use existing datasets.

  • Preregister your study design, primary outcomes and analysis plan, for example on the Open Science Framework. Journals have also started adopting Registered Reports (i.e. peer review before the results are known). This helps eliminate the bias against negative results and prevents HARKing (i.e. hypothesising after the results are known) and p-hacking (i.e. analysis decisions made with knowledge of the observed data). Lists of journals accepting Registered Reports are available online. Lastly, there are other options for publishing your peer-reviewed protocol.
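To give a feel for the sample size estimation mentioned above, here is a minimal Python sketch using the normal approximation for a two-sided, two-sample comparison. The function name and the example effect sizes (Cohen's d of 0.5 and 0.2) are my own illustrative choices, not values from this post; dedicated tools (e.g. G*Power, or power functions in R) are what you would use in practice.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_groups(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed PER GROUP for a two-sided,
    two-sample comparison, via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{1-power}) / d)^2
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z(power)           # value giving the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) already needs more participants per
# group than many small studies recruit in total:
print(sample_size_two_groups(0.5))  # → 63
# Small effects need far larger samples:
print(sample_size_two_groups(0.2))  # → 393
```

Note how quickly the required sample grows as the plausible effect size shrinks; this is exactly why underpowered studies produce unreliable results.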


Analysing your data

  • Attend training courses in methodology and statistics (these will help you understand what p-values actually mean, the importance of statistical power, effect sizes, etc.). There are some good free online courses on Coursera, as well as tools you can use such as statcheck, which checks for errors in statistical reporting.

  • Use open-source software (such as R) for data analysis. Because the analysis takes the form of scripts, you automatically have a full, reproducible record of your analysis path (which you can then share). For guidance on making your analyses reproducible with R, see online courses on Open Science & Reproducibility and on R Markdown.

  • Make sure you are explicit about which analyses are hypothesis-driven and which are exploratory (preregistering helps with this).

Publishing / Sharing your data

  • Publish in open access journals

  • Share your study protocol (e.g. on Nature's Protocol Exchange).

  • Share your code (e.g. on GitHub). Friendly introductions to GitHub are available online.

  • If possible/applicable, share your data.

  • Improve the quality of reporting to allow replication (reporting guidelines are available online).

  • Post preprints. These have pros and cons, so make sure you read up on them before deciding.

  • Publish negative findings! Don't selectively report. There's a good article in Nature about how research cannot be self-correcting when information is missing. Several journals have started to publish negative findings (e.g. European Journal of Neuroscience, Brain and Neuroscience Advances, BMC Research Notes). I have yet to find an exhaustive list but will update this if I do.

  • Be clear about what each author has contributed to the paper. A good resource with examples of roles to allocate (e.g. conceptualization, resources, formal analysis) is available online.

Additional resources:

  • Further lists of Open Science resources are available online.


