
Open Science

Updated: Apr 23, 2019

Reproducibility, Replicability & Reliability made easy (hopefully!)


After attending a series of workshops at the British Neuroscience Association conference, I realised how little I really knew about Open Science. I have decided to curate a list of resources to help me (and hopefully others) learn more about how to make our research more open. This list is a work in progress (and by no means exhaustive) and suggestions for resources to be added are more than welcome (email me or tweet @AleLautarescu).

 

Why open science?


A good starting point is reading Munafò et al. (2017), "A manifesto for reproducible science": https://www.nature.com/articles/s41562-016-0021.pdf

"Open science refers to the process of making the content and process of producing evidence and claims transparent and accessible to others."


[Figure: Threats to reproducible science, from Munafò et al. (2017)]



What can you do?


It is important to understand that open science is not an all-or-nothing approach. You can start by implementing only one or two of the suggestions below.


You can join your local UK Reproducibility Network hub. A list of hubs can be found at https://www.bristol.ac.uk/psychology/research/ukrn/networks/


Planning your research project

  • Conduct a systematic review when formulating your hypotheses. Systematic reviews reveal uncited studies in a way that reading reference lists may not.

  • Conduct a sample size estimation (low statistical power increases the likelihood of obtaining both false-positive and false-negative results!). You can also find and use existing datasets (https://toolbox.google.com/datasetsearch). A minimal power-analysis sketch follows this list.

  • Preregister your study design, primary outcome and analysis plans. You can pre-register on the Open Science Framework (http://osf.io/) or AsPredicted.org. Journals have also started adopting Registered Reports (i.e. peer review before results are known). This helps eliminate the bias against negative results and prevents HARKing (i.e. hypothesising after results are known) and p-hacking (i.e. analysis decisions made with knowledge of the observed data). You can find a list of journals accepting Registered Reports at https://cos.io/rr/. Lastly, there are other options for publishing your peer-reviewed protocol (such as https://bio-protocol.org/default.aspx).
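
To make the power-analysis step concrete, here is a minimal sketch in R using the pwr package. The test type, effect size, alpha and power below are illustrative assumptions; substitute values appropriate to your own design (G*Power is a popular point-and-click alternative).

```r
# Minimal power-analysis sketch (assumed example: independent-samples
# t-test, medium effect of Cohen's d = 0.5, alpha = .05, 80% power).
# install.packages("pwr")   # if not already installed
library(pwr)

result <- pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80,
                     type = "two.sample", alternative = "two.sided")
result$n  # required sample size PER GROUP (~64 in this example)
```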


[Infographic from osf.io]


Analysing your data

  • Attend training courses in methodology and stats (this will help you understand what p-values actually mean, the importance of statistical power, effect sizes, etc.). There are some good free online courses on Coursera (such as https://www.coursera.org/learn/statistical-inferences) and other resources that you can use, such as statcheck, which checks for errors in statistical reporting (http://statcheck.io/); a quick statcheck sketch follows this list.

  • Use open-source software (such as R) for data analysis. As the analysis takes the form of scripts, you already have a full, reproducible record of your analysis path (which you can then share); a minimal example script also follows this list. For info on how to make your analyses reproducible with R, see online courses on Open Science & Reproducibility (https://github.com/cbahlai/OSRR_course) and R Markdown (https://github.com/libscie/rmarkdown-workshop).

  • Make sure you are explicit about which analyses are hypothesis-based and which are exploratory (pre-registering helps with this).
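
As promised above, a quick sketch of running statcheck from R (the web version at http://statcheck.io/ does the same job). The result string below is a hypothetical example.

```r
# statcheck recomputes p-values from APA-formatted test statistics and
# flags inconsistencies between reported and recomputed values.
# install.packages("statcheck")
library(statcheck)

statcheck("t(58) = 2.20, p = .032")  # hypothetical APA-style result string
```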
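And a minimal sketch of what a reproducible R analysis script can look like. The file paths and variable names are illustrative assumptions; the point is that every step, from raw data to reported result, lives in one rerunnable script.

```r
# Reproducible analysis sketch: one script from raw data to saved results.
set.seed(2019)  # fix randomness so the script reruns identically

data  <- read.csv("data/raw_scores.csv")    # hypothetical input file
model <- t.test(score ~ group, data = data) # hypothetical analysis

# Save the output and the package versions used, so others can rerun
# and audit the full analysis path.
capture.output(model, file = "output/t_test_results.txt")
writeLines(capture.output(sessionInfo()), "output/session_info.txt")
```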


Publishing / Sharing your data

  • Publish in open access journals

  • Share your study protocol (https://www.protocols.io/, or Nature's protocol exchange)

  • Share your code (https://github.com/). For a friendly intro to GitHub, see https://kirstiejane.github.io/friendly-github-intro/; a sketch of setting this up from R follows this list.

  • If possible/applicable, share your data (https://datahub.io/)

  • Improve the quality of reporting to allow replication (reporting guidelines on http://www.equator-network.org/)

  • Preprints (such as https://www.biorxiv.org/). These have pros and cons so make sure you read up on them before deciding.

  • Publish negative findings! Don't selectively report. There's a good article in Nature talking about how research cannot be self-correcting when information is missing (https://www.nature.com/articles/d41586-017-07325-2). Several journals have started to publish negative findings (e.g. European Journal of Neuroscience, Brain and Neuroscience Advances, BMC Research Notes). I have yet to find an exhaustive list but will update this if I do.

  • Be clear about what each author has contributed to the paper. A good resource with examples of roles to allocate (e.g. conceptualization, resources, formal analysis) is the CRediT taxonomy at https://www.casrai.org/credit.html
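
For the code-sharing step, here is a minimal sketch of putting an analysis project on GitHub from within R, using the usethis package (my choice of tool, not a requirement; plain git on the command line works just as well).

```r
# Sketch: create a project, put it under version control, push to GitHub.
# install.packages("usethis")
library(usethis)

create_project("~/my-analysis")  # hypothetical project folder
use_git()                        # initialise a local git repository
use_github()                     # create and link a GitHub repository
                                 # (requires a configured GitHub token)
```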



Additional resources:

  • Great list of resources on https://opensciencemooc.eu/resources/


