ESMARCONF RECORDINGS

In the listing below, you can find all previous ESMARConf talks, sessions and workshops. Search for a particular speaker or title/abstract word, and filter by activity, category and presentation type.

Each entry gives: Name | Title and Abstract | Activity | Category | Year | Type | YouTube link.
Neal Haddaway: ESMARConf2021 Opening Session livestream

The ESMARConf organising team will open the event and welcome participants.
Activity: Communities of practice/research practices generally; General (any / all stages) | Category: Summary / overview | Year: 2021 | Type: session | YouTube: https://youtu.be/J1R9SYL-zN0
Matthew Grainger: Meta-Analysis in R: a thematic analysis and content analysis of meta-analytic R packages

Which packages are already available for meta-analysis in R, and how are they inter-related? Using R functions to build a dependency network, we show that there are currently 95 R packages on CRAN that are focused on meta-analysis and 546 "supporting" packages that underpin functions in the meta-analysis packages. We then use thematic analysis to identify clusters of meta-analysis packages that (based on their descriptions) carry out similar functions.
Activity: Communities of practice/research practices generally; General (any / all stages) | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/mVv5NGU5N8I
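
The dependency-network approach described above can be sketched with base R tooling plus igraph; the handful of package names below is an illustrative stand-in for the 95 packages the talk identifies.

```r
# Minimal sketch of a CRAN dependency network (package list illustrative)
library(igraph)

db <- available.packages(repos = "https://cloud.r-project.org")
meta_pkgs <- c("metafor", "meta", "netmeta")  # stand-ins for the 95 packages

deps <- tools::package_dependencies(meta_pkgs, db = db,
                                    which = c("Depends", "Imports"))
edges <- do.call(rbind, lapply(names(deps), function(p) {
  if (length(deps[[p]]) == 0) return(NULL)
  data.frame(from = p, to = deps[[p]])
}))
g <- graph_from_data_frame(edges)
plot(g, vertex.size = 8, vertex.label.cex = 0.7)
```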
Vivian Welch: The future of evidence synthesis: an insider's perspective

Vivian Welch, Editor in Chief of the Campbell Collaboration, introduces her perspectives on where evidence synthesis developments seem to be headed.
Activity: Communities of practice/research practices generally; General (any / all stages) | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/Ngi5rHnJ-DM
Neal Haddaway: ESMARConf2021 Special Session 1: Automation: text analysis livestream

Activity: Study selection / screening; Qualitative analysis / synthesis (including text analysis and qualitative synthesis) | Category: Combination of code (chunks or packages) from multiple sources; Code package / library; Theoretical framework / proposed process or concept; Method validation study / practical case study | Year: 2021 | Type: session | YouTube: https://youtu.be/upMLe6_khDk
Arindam Basu: Using computational text analysis to filter titles and abstracts of initial search for meta-analysis: the case of quanteda and tidytext

A key step in conducting an evidence synthesis (systematic review and meta-analysis) is screening articles on the basis of titles and abstracts against pre-specified inclusion and exclusion criteria. This happens following an initial search of the literature, and is usually a manual process in which one or more investigators inspect each title and abstract for semantic information and keywords, deciding whether the article can be retained for further processing or removed from the pool. In this presentation, I will demonstrate that, using natural language processing tools such as tidytext and quanteda, it is possible to create a corpus of texts and use the screening criteria to rapidly identify articles that can be retained for further processing in the context of systematic reviews. The rationale and steps will be demonstrated, and the relevant code will be shared with the audience so that they can work on their own.
Activity: Study selection / screening | Category: Combination of code (chunks or packages) from multiple sources | Year: 2021 | Type: talk | YouTube: https://youtu.be/Agg9qCPZJ8s
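
A minimal sketch of the keyword-screening idea with tidytext (not the speaker's code; the record data and keywords here are invented):

```r
library(dplyr)
library(tidytext)

records <- data.frame(
  id = 1:3,
  abstract = c("A randomised trial of exercise in adults.",
               "A qualitative study of clinician attitudes.",
               "A randomised trial of statins in older adults.")
)
keywords <- c("randomised", "trial", "statins")  # pre-specified criteria terms

hits <- records |>
  unnest_tokens(word, abstract, drop = FALSE) |>  # one row per token
  filter(word %in% keywords) |>
  distinct(id, abstract)
hits  # candidate records to retain for further processing
```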
Martin Westgate: An introduction to revtools - R and Shiny App for article screening and topic modelling

There are a large number of R packages that support meta-analysis in R, but comparatively few that support earlier stages of the systematic review process. This is a problem because locating, acquiring and screening scientific information is a time-consuming process that could benefit from improved support in R. The ‘revtools’ package supports manual screening of article titles and abstracts via custom shiny apps. It also allows the user to visualise and screen patterns in article text via topic modelling. In this talk, I will premiere the upcoming version which includes better integration with standard NLP packages (quanteda and stm) and new tools for screening as part of a team. These options will greatly increase the utility of revtools for a range of synthesis-related applications.
Activity: Study selection / screening; Qualitative analysis / synthesis (including text analysis and qualitative synthesis) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/WMpPCvBeILQ
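
revtools' documented screening functions look like this in practice (the bibliography file name is illustrative):

```r
library(revtools)

refs <- read_bibliography("search_results.bib")  # import citations
screened <- screen_abstracts(refs)  # Shiny app for title/abstract decisions
topics <- screen_topics(refs)       # topic-model view of the corpus
```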
Max Callaghan: Introduction to the concept of robust stopping criteria for using machine learning classifiers for inclusion decisions in evidence syntheses

Active learning for systematic review screening promises to reduce the human effort required to identify relevant documents for a systematic review. Machines and humans work together, with humans providing training data and the machine optimising the order in which documents are screened. This enables the identification of all relevant documents after viewing only a fraction of the total documents. However, current approaches lack robust stopping criteria, so reviewers do not know when they have seen all, or a given proportion of, the relevant documents. This means that such systems are hard to implement in live reviews. This talk introduces a workflow with flexible statistical stopping criteria, which offer real reductions in workload by rejecting the hypothesis of having missed a given recall target at a given level of confidence. These criteria and their performance are presented here, along with open source R code to put this into practice.
Activity: Study selection / screening | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/GssusSa3zeg
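
The flavour of such a criterion can be sketched with a hypergeometric test. This is a simplified illustration in the spirit of the talk, not the authors' exact implementation, and it assumes documents were screened in random order:

```r
# p-value for H0: "we have missed the recall target", based on seeing
# zero relevant records in the most recent run of screened documents
stop_p_value <- function(N, seen, relevant_found, n_last_irrelevant,
                         recall_target = 0.95) {
  # smallest total count of relevant docs that would violate the target
  k_total <- floor(relevant_found / recall_target) + 1
  k_missed <- k_total - relevant_found
  # documents still unseen when the irrelevant run began
  n_urn <- N - (seen - n_last_irrelevant)
  phyper(0, k_missed, n_urn - k_missed, n_last_irrelevant)
}

# 20,000 records; 10,000 screened; 400 relevant; none in the last 1,000:
stop_p_value(20000, 10000, 400, 1000)  # stop screening if below, say, 0.05
```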
Richard Cornford: Automated identification of articles for ecological datasets

Synthesising data from multiple studies is necessary to understand broad-scale ecological patterns. However, current collation methods can be slow, involving extensive human input. Given rapid and increasing rates of scientific publication, manually identifying data sources amongst hundreds of thousands of articles is a significant challenge. Automated text-classification approaches, implemented via R and Python, can substantially increase the rate at which relevant papers are discovered, and we demonstrate these techniques on two global biodiversity indicator databases. The best classifiers distinguish relevant from non-relevant articles with over 90% accuracy when using readily available abstracts and titles. Our results also indicate that, given a modest initial sample of just 100 relevant papers, high-performing classifiers could be generated quickly by iteratively updating the training texts based on targeted literature searches. Ongoing work to facilitate the wider application of these methods includes the development of an easy-to-use Shiny app/R package and named-entity recognition to assist the screening procedure. Additional research will also help to identify and mitigate potential biases that automated classifiers could propagate, and to evaluate model performance in other domains of evidence synthesis.
Activity: Study selection / screening | Category: Method validation study / practical case study | Year: 2021 | Type: talk | YouTube: https://youtu.be/i-ADHETikcE
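
An illustrative classifier sketch in this spirit (not the authors' pipeline): TF-IDF features from quanteda feeding a regularised logistic regression in glmnet, with invented toy texts:

```r
library(quanteda)
library(glmnet)

texts <- c("long-term monitoring of bird population trends",
           "gene expression networks in yeast",
           "mammal abundance time series from camera traps",
           "protein folding simulation benchmarks",
           "amphibian population declines across decades",
           "compiler optimisation for sparse matrices")
relevant <- c(1, 0, 1, 0, 1, 0)  # 1 = candidate source for the database

x <- convert(dfm_tfidf(dfm(tokens(texts))), to = "matrix")
fit <- glmnet(x, relevant, family = "binomial")
predict(fit, x, s = 0.01, type = "response")  # screening scores
```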
Luke McGuinness: ESMARConf2021 Special Session 2: Automation: other livestream

Activity: Searching / information retrieval; Report write-up / documentation / reporting; Communities of practice/research practices generally; Document / record management (including deduplication) | Category: Code package / library; Code chunk (e.g. single R or javascript function); Theoretical framework / proposed process or concept | Year: 2021 | Type: session | YouTube: https://youtu.be/2o4PBUBbWAE
Neal Haddaway: GSscraper: an R package for scraping search results from Google Scholar

This presentation introduces the GSscraper package, a suite of functions that scrapes search results from Google Scholar, pausing before downloading each page of results to avoid IP blocking, and then scrapes the locally saved files for citation-relevant information. These functions help to radically improve transparency in (particularly grey) literature searching, and support integration of GS search results with other citations in the screening process of a review. In particular, GSscraper allows DOIs to be scraped from the hyperlinks in the search results, facilitating deduplication and cross-referencing with existing citations. Challenges remain in avoiding blocking by GS's bot detection, but options to minimise this risk exist and are in development.
Activity: Searching / information retrieval | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/unUOUpG8dOg
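
The pause-and-save pattern the abstract describes can be illustrated generically (this is not GSscraper's actual API; the URL construction and DOI pattern are placeholders):

```r
library(rvest)

save_pages <- function(base_url, pages, dir = "gs_pages", wait = 30) {
  dir.create(dir, showWarnings = FALSE)
  for (p in pages) {
    Sys.sleep(wait)  # pause between requests to reduce the risk of blocking
    url <- paste0(base_url, "&start=", (p - 1) * 10)
    download.file(url, file.path(dir, paste0("page_", p, ".html")),
                  quiet = TRUE)
  }
}

# later: scrape DOIs from the locally saved files
links <- read_html("gs_pages/page_1.html") |>
  html_elements("a") |>
  html_attr("href")
dois <- regmatches(links, regexpr("10\\.[0-9]{4,}/[^&#?]+", links))
```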
Wolfgang Viechtbauer: Automated report generation for meta-analyses using the R package metafor

After running a meta-analysis, users are often inundated with a wide variety of statistics and plots that represent various aspects of the data (e.g., forest and funnel plots, an estimate of the average effect and its uncertainty, the amount of heterogeneity and a test thereof, checks for the presence of potential outliers, tests for the presence of small-study effects / publication bias). One of the challenges is to translate this information into a coherent textual narrative based on current reporting standards. In this talk, I will describe how the R package 'metafor' can be used to automate this process. In particular, for a given meta-analysis, a report can be generated (either as an html, pdf, or docx file) that describes the statistical methods used and includes the various pieces of information in the way they would typically be reported in the results section of a research article.
Activity: Report write-up / documentation / reporting | Category: Code chunk (e.g. single R or javascript function) | Year: 2021 | Type: talk | YouTube: https://youtu.be/gAc66E4r-aU
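
This workflow is a documented part of metafor; with its bundled BCG vaccine data:

```r
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat)
reporter(res)  # writes an HTML report; PDF and Word output are also supported
```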
Emily Hennessy: Promoting synthesis-ready research for immediate and lasting scientific impact

Synthesis of evidence from the totality of relevant research is essential to inform clear directions across scientific disciplines. Many barriers impede comprehensive evidence synthesis, leading to uncertainty about the generalizability of findings, including: inaccurate terminology in titles/abstracts/keywords (hampering literature search efforts); ambiguous reporting of study methods (resulting in inaccurate assessments of study rigor); and poorly reported participant characteristics, outcomes, and key variables (obstructing the calculation of an overall effect or the examination of effect modifiers). To address these issues and improve the reach of primary studies through their inclusion in evidence syntheses, we provide a set of practical guidelines to help scientists prepare synthesis-ready research. We highlight several tools and practices that can aid authors in these efforts, such as creating a repository for each project to host all study-related data files. We also provide step-by-step guidance and software suggestions for standardizing data design and public archiving to facilitate synthesis-ready research.
Activity: Communities of practice/research practices generally | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/Ti0MsRPIxVs
Kaitlyn Hair: Identifying duplicate publications with the ASySD (Automated Systematic Search De-duplication) tool

Researchers who perform systematic searches across multiple databases often identify duplicate publications. De-duplication can be extremely time-consuming, but failure to remove these records can, in the worst instance, lead to the wrongful inclusion of duplicate data. EndNote is proprietary software commonly used for this purpose, but its automated duplicate removal has been found to miss many duplicates in practice, and it lacks interoperability with other automated evidence synthesis tools. I developed the ASySD (Automated Systematic Search Deduplication) tool as an R function and created a user-friendly web application in R Shiny. Within ASySD, records undergo several formatting steps to enhance matching, text similarity scores are obtained using the RecordLinkage R package, and matching records are passed through a number of filtering steps to maximise specificity. I tested the tool on 5 unseen biomedical systematic search datasets of various sizes (1,845-79,880 records) and compared its performance to EndNote and a comparator automated de-duplication tool, the Systematic Review Accelerator (SRA). ASySD identified more duplicates than SRA and EndNote (sensitivity 0.95-0.99) and had a false-positive rate comparable to human performance (specificity 0.94-0.99). For duplicate removal in biomedical systematic reviews, the ASySD tool is a highly sensitive, reliable, and time-saving approach. It is open source and freely available online.
Activity: Document / record management (including deduplication) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/WL0VDgxcUNE
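
A sketch of the ASySD workflow based on the package's documentation (install from GitHub at camaradesuk/ASySD; the file name is illustrative and the function names should be checked against the current docs):

```r
library(ASySD)

citations <- load_search("search_results.csv", method = "csv")
result <- dedup_citations(citations)
unique_citations <- result$unique  # automatically deduplicated records
to_check <- result$manual_dedup    # borderline pairs flagged for human review
```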
Matthew Grainger: ESMARConf2021 Special Session 3: Quantitative Synthesis livestream

Activity: Data wrangling / curating; Quantitative analysis / synthesis (including meta-analysis) | Category: Combination of code (chunks or packages) from multiple sources; Theoretical framework / proposed process or concept; Code package / library | Year: 2021 | Type: session | YouTube: https://youtu.be/Q9Nce5pxebY
James E. Pustejovsky: Synthesis of dependent effect sizes: Versatile models through clubSandwich and metafor

Across scientific fields, large meta-analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-analysis model, even when the nature of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models (available in the robumeta package) are limited to each describing a single type of dependence. We describe a workflow combining two existing packages, metafor and clubSandwich, that can be used to implement an expanded set of working models, offering benefits in terms of better capturing the types of data structures that occur in practice and improving the efficiency of meta-analytic model estimates.
Activity: Data wrangling / curating; Quantitative analysis / synthesis (including meta-analysis) | Category: Combination of code (chunks or packages) from multiple sources | Year: 2021 | Type: talk | YouTube: https://youtu.be/STVQc5OqpuE
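
The described workflow, using metafor's bundled dat.assink2016 data (multiple effect sizes nested in studies); the working correlation r = 0.6 is an assumption:

```r
library(metafor)
library(clubSandwich)

dat <- dat.assink2016

# working model: constant correlation r = 0.6 among effects within a study
V <- impute_covariance_matrix(dat$vi, cluster = dat$study, r = 0.6)
res <- rma.mv(yi, V, random = ~ 1 | study / esid, data = dat)

# cluster-robust (CR2) tests remain valid if the working model is wrong
coef_test(res, vcov = "CR2")
```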
Maya Mathur: R package PublicationBias: Sensitivity analysis for publication bias

I will discuss the R package PublicationBias, which implements the methods described in this JRSSC paper (https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssc.12440): We propose sensitivity analyses for publication bias in meta-analyses. We consider a publication process such that 'statistically significant' results are more likely to be published than negative or 'non-significant' results by an unknown ratio, η. Our proposed methods also accommodate some plausible forms of selection based on a study's standard error. Using inverse probability weighting and robust estimation that accommodates non-normal population effects, small meta-analyses, and clustering, we develop sensitivity analyses that enable statements such as 'For publication bias to shift the observed point estimate to the null, "significant" results would need to be at least 30-fold more likely to be published than negative or "non-significant" results'. Comparable statements can be made regarding shifting to a chosen non-null value or shifting the confidence interval. To aid interpretation, we describe empirical benchmarks for plausible values of η across disciplines. We show that a worst-case meta-analytic point estimate for maximal publication bias under the selection model can be obtained simply by conducting a standard meta-analysis of only the negative and 'non-significant' studies; this method sometimes indicates that no amount of such publication bias could 'explain away' the results.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/YV3EdjKtr7s
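
The worst-case bound mentioned at the end of the abstract can be computed with plain metafor; this sketch assumes escalc-style data dat (columns yi, vi) in which positive effects are in the favoured, 'significant' direction:

```r
library(metafor)

pvals <- 2 * pnorm(abs(dat$yi) / sqrt(dat$vi), lower.tail = FALSE)
nonsig_or_neg <- dat$yi < 0 | pvals > 0.05

worst <- rma(yi, vi, data = dat, subset = nonsig_or_neg)
worst  # if still clearly positive, no such selection 'explains away' the effect
```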
Philip Martin: Dynamic meta-analysis: Providing evidence to guide local decisions using a global evidence base

Traditionally, meta-analysis presents a set of statistical analyses for a large body of literature, providing a global answer. However, people who use evidence often want answers that address relatively local contexts. For example, a meta-analysis of interventions for managing invasive plant species might find that herbicides are highly effective, but a practitioner may wish to know about the effects of glyphosate on Japanese knotweed specifically, an answer they cannot get from the original hypothetical meta-analysis. In this talk, I will introduce our solution to this problem using a new website and Shiny app, metadataset.com. This tool allows users to easily choose outcomes and interventions of interest to them and then filter or weight data before running a bespoke meta-analysis. The aim is to make analyses as relevant to user needs as possible. We call this process 'dynamic meta-analysis'. During the talk I will run through an example of the tool's use and highlight areas in which we want to engage with the wider evidence synthesis community.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/oqStBL4eKgc
Wolfgang Viechtbauer: Selection models for publication bias in meta-analysis

The non-replicability of certain findings in various disciplines has brought further attention to the problem that the published literature - which predominantly forms the evidence basis of research syntheses - may not be representative of all research that has been conducted on a particular topic. More specifically, concerns have been raised for a long time that statistically significant findings are overrepresented in the published literature, a phenomenon usually referred to as publication bias, which in turn can lead to biased conclusions. Various methods have been proposed in the meta-analytic literature for detecting the presence of publication bias, estimating its potential impact, and correcting for it. So-called selection models are among the most sophisticated methods for this purpose, as they attempt to directly model the selection process. If a particular selection model is an adequate approximation for the underlying selection process, then the model provides estimates of the parameters of interest (e.g., the average true effect and the amount of heterogeneity in the true effects) that are 'corrected' for this selection process (i.e., they are estimates of the parameters in the population of studies before any selection has taken place). In this talk, I will briefly describe a variety of models for this purpose and illustrate their application with the metafor package in R.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Theoretical framework / proposed process or concept; Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/ucmOCuyCk-c
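
metafor's selection-model interface, shown on its bundled BCG data with a one-step (p = 0.025) selection function:

```r
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat, method = "ML")  # ML fit required by selmodel()
sel <- selmodel(res, type = "stepfun", steps = 0.025)
sel  # selection-adjusted average effect and heterogeneity estimates
```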
Emily Hennessy: ESMARConf2021 Special Session 4: Quantitative Synthesis: Network Meta-Analysis livestream

Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Code package / library; Theoretical framework / proposed process or concept | Year: 2021 | Type: session | YouTube: https://youtu.be/d4ufa__hGbY
Guido Schwarzer: Network meta-analysis with netmeta - Present and Future

Network meta-analysis is a more recent development that provides methods to combine direct and indirect evidence, and thus constitutes an extension of pairwise meta-analysis for combining the results from studies that used different comparison groups (Salanti, 2012). The R package netmeta (Rücker et al., 2020) implements a frequentist approach for network meta-analysis based on graph-theoretical methods (Rücker, 2012) and is one of the most popular R packages for network meta-analysis. The main objective of netmeta is to provide a comprehensive set of R functions for network meta-analysis in a user-friendly implementation. In this presentation, we will give a brief overview of the methods implemented in netmeta, as well as planned future extensions. References: Salanti G (2012): Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Research Synthesis Methods. 3(2):80-97. Rücker G (2012): Network meta-analysis, electrical networks and graph theory. Research Synthesis Methods. 3(4):312-24. Rücker G et al. (2020): netmeta: Network Meta-Analysis using Frequentist Methods. R package version 1.2-1. https://CRAN.R-project.org/package=netmeta
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/CYdUUuGthGI
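
The basic netmeta workflow on its bundled smoking-cessation data, as in the package documentation:

```r
library(netmeta)
data(smokingcessation)

p1 <- pairwise(treat = list(treat1, treat2, treat3),
               event = list(event1, event2, event3),
               n = list(n1, n2, n3),
               data = smokingcessation, sm = "OR")
net1 <- netmeta(p1)  # frequentist network meta-analysis
netgraph(net1)       # network plot
forest(net1)         # forest plot of effects versus the reference
```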
David Phillippo: multinma: An R package for Bayesian network meta-analysis of individual and aggregate data

Network meta-analysis (NMA) extends pairwise meta-analysis to synthesise evidence on multiple treatments of interest from a connected network of studies. Standard pairwise and network meta-analysis methods combine aggregate data from multiple studies, assuming that any factors that interact with treatment effects (effect modifiers) are balanced across populations. Population adjustment methods aim to relax this assumption by adjusting for differences in effect modifiers. The "gold standard" approach is to analyse individual patient data (IPD) from every study in a meta-regression model; however, such levels of data availability are rare. Multilevel network meta-regression (ML-NMR) is a recent method that generalises NMA to synthesise evidence from a mixture of IPD and aggregate data studies, whilst avoiding aggregation bias and non-collapsibility bias, and can produce estimates relevant to a decision target population. We introduce a new R package, multinma: a suite of tools for performing ML-NMR and NMA with IPD, aggregate data, or mixtures of both, for a range of outcome types. The package includes functions that streamline the setup of NMA and ML-NMR models; perform model fitting and facilitate diagnostics; produce posterior summaries of relative effects, rankings, and absolute predictions; and create flexible graphical outputs that leverage ggplot and ggdist. Models are estimated in a Bayesian framework using the state-of-the-art Stan sampler.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/aNpwY-6nPjY
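
A sketch following multinma's documented set-up/fit pattern on its bundled aggregate smoking-cessation data (as in the package vignettes):

```r
library(multinma)

net <- set_agd_arm(smoking, study = studyn, trt = trtc,
                   r = r, n = n, trt_ref = "No intervention")
fit <- nma(net, trt_effects = "random")  # estimated with Stan
fit
```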
Thodoros Papakonstantinou: How to estimate the contribution of each study in network meta-analysis

In network meta-analysis, where multiple interventions are synthesised, a study's contribution to the summary estimate of interest depends not only on its precision, as in pairwise meta-analysis, but also on its position in the network. The contribution matrix contains, for every direct or indirect comparison, the contribution of each study, derived by following the flow of information [1]. The calculation of the contribution matrix is now included in the netmeta package. We will demonstrate the use of the function and present the use case of judging risk of bias in a network. We will also present the open access database accessible through the nmadb package. The application of the contribution matrix to the database resulted in the study of the relative contribution of network paths of different lengths [2]. [1] Papakonstantinou T, Nikolakopoulou A, Rücker G et al. Estimating the contribution of studies in network meta-analysis: paths, flows and streams [version 3; peer review: 2 approved, 1 approved with reservations]. F1000Research 2018, 7:610 (https://doi.org/10.12688/f1000research.14770.3) [2] Papakonstantinou T, Nikolakopoulou A, Egger M & Salanti G. In network meta-analysis, most of the information comes from indirect evidence: empirical study. Journal of Clinical Epidemiology, 2020, 124, 42-49
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/N6hpfqgxU3Q
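
The contribution matrix is exposed in netmeta through netcontrib():

```r
library(netmeta)
# net1: a fitted netmeta object, e.g. from the smoking-cessation example above
netcontrib(net1)  # contributions of direct evidence to each network estimate
```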
Hugo Pedder: MBNMAdose: An R package for incorporating dose-response information into Network Meta-Analysis

Network meta-analysis (NMA) is used to synthesise results from multiple treatments where the RCTs form a connected network of treatments. It provides a framework for comparative effectiveness and assessment of consistency between the direct and indirect evidence, and is extensively employed in health economic modelling to inform healthcare policy. Multiple doses of different agents in an NMA are typically "split" or "lumped". Splitting involves modelling different doses of an agent as independent nodes in the network, making no assumptions regarding how they are related, and can result in sparse or even disconnected networks in which NMA is impossible. Lumping assumes different doses have the same efficacy, which can introduce heterogeneity or inconsistency. MBNMAdose is an R package that allows dose-response relationships to be explicitly modelled using Model-Based NMA (MBNMA). As well as avoiding problems arising from lumping/splitting, this modelling framework can improve the precision of estimates over those from standard NMA, allow for interpolation/extrapolation of predicted responses based on the dose-response relationship, and allow for the linking of disconnected networks via the dose-response relationship. MBNMAdose provides a suite of functions that make it easy to implement Bayesian MBNMA models, evaluate their suitability given the data, and produce meaningful outputs from the analyses that can be used in decision-making.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/QGjzFul66EU
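
A sketch of the MBNMAdose workflow on its bundled triptans example data; the dose-response argument has changed across package versions, so treat this as illustrative rather than definitive:

```r
library(MBNMAdose)

net <- mbnma.network(triptans)        # define the dose-response network
mod <- mbnma.run(net, fun = demax())  # Emax dose-response MBNMA
summary(mod)
```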
Michael Seo: bnma: Bayesian Network Meta-Analysis using 'JAGS'

Recently, there have been many developments of Bayesian network meta-analysis (NMA) packages in R. The general aim of these packages is to provide users who are not familiar with Bayesian NMA a tool that can automate the modelling. Many of these packages implement the Lu and Ades framework, often referred to as the contrast-based model. Currently, none of these packages incorporates options to utilize baseline risk. We have developed a package named 'bnma' which provides most of the implementations in 'gemtc' and additionally includes options to model baseline risk. Our objectives are (1) to describe a general framework for incorporating baseline risk in Bayesian NMA and (2) to illustrate how to implement this framework in 'bnma' using a dataset on smoking cessation counseling programs. We implemented two different approaches to modelling with baseline risk. 'bnma' can model baseline risk as exchangeable, and can implement the model commonly referred to as the contrast-based model with a random study intercept. Furthermore, by including baseline risk as a trial-level covariate, we can potentially reduce both heterogeneity and inconsistency in NMA and improve the overall model fit. Different assumptions can be made when using baseline risk as a covariate, i.e. common, exchangeable, or independent; we show how each can be fitted in 'bnma'. We compare differences in the analysis under different assumptions on baseline risk.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/jDP1y8wq5FU
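
A sketch based on bnma's network.data()/network.run() interface; the blocker example data and the baseline argument shown here are assumptions based on the package documentation and the abstract:

```r
library(bnma)

net <- with(blocker, network.data(Outcomes, Study, Treat, N = N,
                                  response = "binomial",
                                  baseline = "exchangeable"))
fit <- network.run(net)  # runs the model in JAGS
summary(fit)
```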
Ciara Keenan: ESMARConf2021 Special Session 5: User Interfaces livestream

Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Graphical user interface (including Shiny apps); Code package / library | Year: 2021 | Type: session | YouTube: https://youtu.be/kDs3HezTrZ8
Clareece Nevill: Pilot for an Interactive Living Network Meta-Analysis web application

The Complex Reviews Support Unit in the UK currently hosts a freely available web application for conducting network meta-analysis (NMA), namely MetaInsight. Built with R Shiny, MetaInsight offers an interactive point-and-click platform and presents visually intuitive results. The application leverages established analysis routines (specifically the 'netmeta' and 'gemtc' packages in R) whilst requiring no specialist software for the user to install. Through these features, MetaInsight is a powerful tool to support novice NMA users. The coronavirus-2019 (COVID-19) global pandemic acted as a motivating example to pilot a 'living' version of MetaInsight. Subsequently, MetaInsight Covid-19 was developed: a tool for exploration, re-analysis, sensitivity analysis, and interrogation of data from living systematic reviews (LSRs) of COVID-19 treatments over a 5-month pilot period [crsu.shinyapps.io/metainsightcovid]. The functionality within MetaInsight was carried forward, and a front page was added presenting a summary analysis of the 'living' dataset. Data were continuously extracted and updated every week, with the results presented in the app automatically updated. The functionality of MetaInsight Covid-19 will be demonstrated and its inner workings explained, the challenges that were faced and overcome will be shared, and the potential for the pilot to be extended to a generic Living MetaInsight for LSRs will be discussed.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Graphical user interface (including Shiny apps) | Year: 2021 | Type: talk | YouTube: https://youtu.be/b0Chqk-T2pU
Suzanne Freeman: MetaDTA: An interactive web-based tool for meta-analysis of diagnostic test accuracy studies

Diagnostic tests are routinely used in healthcare settings to assign patients to one of two groups: diseased and non-diseased individuals. The accuracy of diagnostic tests is measured in terms of two outcomes: sensitivity and specificity. The recommended approach for synthesising diagnostic test accuracy (DTA) studies is to use either the bivariate or the hierarchical summary receiver operating characteristic (HSROC) model, which account for the correlation between sensitivity and specificity, with the results presented either around a mean accuracy point or as a summary receiver operating characteristic (SROC) curve. Software options for fitting bivariate and HSROC models are available in R but require statistical knowledge. We developed MetaDTA, a freely available web-based "point and click" interactive tool, which allows users to input their DTA study data and conduct meta-analyses of DTA studies, including sensitivity analyses. MetaDTA is a Shiny application which uses the lme4 package for conducting statistical analyses and 22 other R packages to provide an intuitive, easy-to-use interface. MetaDTA incorporates novel approaches to visualising SROC curves, allowing users to visualise study quality and/or percentage study contributions on the same plot as the individual study estimates and the meta-analysis results. Multiple features can be combined within a single plot. All tables and plots can be downloaded. MetaDTA is available at: https://crsu.shinyapps.io/dta_ma/.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Graphical user interface (including Shiny apps) | Year: 2021 | Type: talk | YouTube: https://youtu.be/-FHX3kiAx0w
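
Behind interfaces like MetaDTA sits a bivariate model that can be written directly in lme4; one common formulation (the data frame and column names here are assumptions) is:

```r
library(lme4)

# dat_long: two rows per study, with measure = "sens" (TP out of n diseased)
# or "spec" (TN out of n non-diseased), and k successes out of n
fit <- glmer(cbind(k, n - k) ~ 0 + measure + (0 + measure | study),
             data = dat_long, family = binomial)
summary(fit)  # logit sensitivity/specificity with correlated random effects
```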
W. Kyle Hamilton: Jamovi as a Platform for Running a Meta-Analysis

Meta-analysis is a key quantitative component of evidence synthesis. Over the last 40 years, the popularity and utility of meta-analytic methods has had an enormous impact in biology, ecology, medicine, and the social sciences. Within the fields of medicine and public health, it has become an essential method for medical professionals and scholars for summarizing the known effects of medical treatments and interventions. For example, these analyses can combine the results from multiple published studies to evaluate the effectiveness of a drug or investigate the effect of a behavior on health. MAJOR is an add-on module for the open source Jamovi statistical platform which allows users to produce a publication-quality meta-analysis. Both software projects are quickly being adopted for use in classes and workshops by colleges, universities, and medical schools around the world as a replacement for expensive proprietary statistical software. Jamovi uses an intuitive point-and-click interface which lowers the learning curve required to run analyses. Also, both Jamovi and MAJOR use R to run the analyses and can create reproducible analysis scripts for use in the native R environment. MAJOR combines a variety of packages from the R community, including the popular metafor package, to produce fixed-, random-, and mixed-effects meta-analytic models, methods for detecting publication bias, effect size calculators, and publication-grade graphics.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Graphical user interface (including Shiny apps); Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/O9NTMSAzDjs
Martin Westgate: ESMARConf2021 Special Session 6: Reporting livestream

Activity: Data visualisation; Report write-up / documentation / reporting | Category: Code package / library | Year: 2021 | Type: session | YouTube: https://youtu.be/3lMxPMIENE0
Neal Haddaway: Producing interactive flow diagrams for systematic reviews using the PRISMA2020 package

Systematic review flow charts have great potential as a tool for communication and transparency when used not only as static graphics, but also as interactive 'site maps' for reviews. This package makes use of the DiagrammeR R package to develop a customisable flowchart that conforms to PRISMA2020 standards. It allows the user to specify whether previous and other study arms should be included, and allows interactivity to be embedded through the use of mouseover tooltips and hyperlinks on box clicks. The package has the following capabilities: 1) to allow the user to produce systematic review flow charts that conform to the latest update of the PRISMA statement (Page et al. 2020); 2) to adapt this code and publish a free-to-use, web-based tool (a Shiny app) for producing publication-quality flow chart figures without any prior coding experience; 3) to allow users to produce interactive versions of the flow charts that include hyperlinks to specific web pages, files or document sections. This presentation will introduce the R package and Shiny app and discuss the benefits that such an easily usable tool brings, along with the use of interactivity in such visualisations.
Activity: Data visualisation; Report write-up / documentation / reporting | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/D9CIm2co2dQ
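
The package's documented usage pattern (the CSV is the data template distributed with the package, filled in for your review):

```r
library(PRISMA2020)

csv <- read.csv("PRISMA.csv")  # the package's data template
data <- PRISMA_data(csv)
PRISMA_flowdiagram(data, interactive = TRUE)  # tooltips and clickable boxes
```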
Charles Gray: nmareporting:: a computational waystation for toolchains in network meta-analysis reporting

nmareporting:: is a living package and website that provides a collaborative open resource of practical toolchains for reporting Bayesian network meta-analyses in R. Included is a suite of R functions from packages such as multinma:: and nmathresh::, with scope for custom functions for specific use cases. Reporting model results to stakeholders from diverse backgrounds and disciplines, according to particular bodies' protocols, is a key feature of evidence synthesis. As such, the vignettes provided on the associated site give guidance on reporting for different scientific reports, such as Cochrane reviews. Anticipating the adoption of threshold analysis, tools and guidance are provided for practitioners to augment standard sensitivity reporting. nmareporting:: explores a template for collaborative and open development of practical toolchain walkthroughs that address specific scientific protocols. In addition to its intrinsic value for reporting network meta-analyses, nmareporting:: is also a case study in how we can bring together practitioners and stakeholders from different scientific organisations to collaborate on an open, shared tool for evidence synthesis.
Activity: Data visualisation; Report write-up / documentation / reporting | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/bUkdsBwYaLs
Luke McGuinness: ESMARConf2021 Special Session 7: Data Viz livestream

Activity: Data visualisation; Quality assessment / critical appraisal; Data wrangling / curating | Category: Code package / library; Graphical user interface (including Shiny apps); Theoretical framework / proposed process or concept | Year: 2021 | Type: session | YouTube: https://youtu.be/dxxUcYEv4D8
Aaron Conway: Using the flextable package to create graphical summaries of study characteristics in systematic reviews

It is very common for the characteristics of each study included in a systematic review to be summarised in a table consisting entirely of text. This presentation demonstrates how the flextable package can be used to replace some of that text with graphical features in a study characteristics table. Printing the table in landscape format in a knitted Word document, along with the remainder of the report, can be achieved using the officedown package. Features from the flextable package used in the table include: coloured inline 'minibar' images to show the male:female ratio in included studies (flextable::minibar); images of flags to indicate the country (flextable::as_image), which, in addition to being aesthetically more pleasing than text, reduces the amount of horizontal space required for the column, useful for static-print tables that need to be incorporated into Word documents; and inline images so the reader can more quickly and clearly distinguish between studies that included higher numbers of participants (flextable::minibar) and measurements (flextable::lollipop) than if the information was presented as text. Importantly, using the flextable package to create this table meets the requirements of most journals, which stipulate that tables must be submitted in Word document format.
Activity: Data visualisation | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/aHAeR97Zllw
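
A small sketch of the described technique using flextable's documented helpers inside compose(); the data are invented:

```r
library(flextable)

dat <- data.frame(study = c("Smith 2019", "Lee 2020"),
                  male = c(40, 25), female = c(60, 75))

ft <- flextable(dat, col_keys = c("study", "sex_ratio"))
ft <- compose(ft, j = "sex_ratio",
              value = as_paragraph(minibar(value = male, max = male + female)))
ft <- set_header_labels(ft, sex_ratio = "Male:female")
autofit(ft)
```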
Luke McGuinness"On the shoulders of giants": advantages and challenges to building on established evidence synthesis packages, using the {robvis} package as a case study

{robvis} is an established R package which allows users to create publication-quality risk-of-bias plots. Recently, it has expanded its scope to allow for new functionality, focusing in particular on better integration with the {metafor} package for meta-analysis. Following a public discussion of the proposed functionality run through GitHub and Twitter, in addition to consultation with the maintainer of the {metafor} package, two new functions were added to {robvis}. The first creates paired risk-of-bias plots, where a "traffic light"-style risk-of-bias plot is appended onto a standard forest plot so that the risk of bias for each result in the meta-analysis is readily available to the reader. This function builds on the output of the {metafor} forest plot function to create these graphs. In addition, as users frequently wish to stratify their analysis by the level of bias in each included study, a new function has been developed which automatically subsets the data by this variable and presents a subgroup meta-analysis for each subgroup. This presentation will use the {robvis} package as a case study to demonstrate how collaboration between packages in the evidence synthesis landscape can result in new functionality and increased ease-of-use for end users. It will also highlight the benefit of thinking of the evidence synthesis workflow as a whole when developing and expanding new functionality, rather than seeing packages as standalone silos.
Activity: Quality assessment / critical appraisal; Data visualisation | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/yJOTTc3y4iw
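
robvis' core plotting functions, shown on the example data bundled with the package:

```r
library(robvis)

rob_summary(data_rob2, tool = "ROB2")        # weighted summary bar plot
rob_traffic_light(data_rob2, tool = "ROB2")  # per-study traffic-light plot
```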
Alexander Fonseca Martinez: An interactive forest plot for visualizing risk of bias assessment

A forest plot is the most common way for research synthesizers to combine the results of multiple evaluations. As part of a living systematic review, a web application was built using Shiny. As part of this web app, we developed an interactive forest plot with three novel functionalities: 1) users can hover over the point estimate square of any study in the forest plot, and the point estimate and confidence interval for the measure of association are displayed as text; 2) users can click on the point estimate square of any study, and the risk-of-bias judgment is displayed in an interactive table alongside the forest plot, using a colour-coded system indicating the level of bias; 3) in the risk-of-bias table, the user can click on the risk-of-bias judgment for a study, and a pop-up appears containing the written risk-of-bias rationale for each domain evaluated. It is possible to visualize multiple associations at once by clicking on multiple point estimate squares. This interactive forest plot relies on the R packages ggplot2, ggiraph and formattable. We aim to implement an R package to allow users to upload their own data, integrate it into projects covering different aspects of systematic review and meta-analysis such as the metaverse, and develop a web app to make the tool accessible even to those without previous knowledge of R.
Activity: Quality assessment / critical appraisal; Data visualisation | Category: Graphical user interface (including Shiny apps) | Year: 2021 | Type: talk | YouTube: https://youtu.be/q8a8Y9RZ3ZE
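
The hover interaction can be sketched with ggplot2 plus ggiraph, the packages the abstract names (the data are invented):

```r
library(ggplot2)
library(ggiraph)

dat <- data.frame(study = c("A", "B", "C"),
                  est = c(0.8, 1.1, 0.95),
                  lo = c(0.6, 0.9, 0.7), hi = c(1.1, 1.4, 1.3),
                  rob = c("Low", "Some concerns", "High"))

p <- ggplot(dat, aes(y = study, x = est)) +
  geom_errorbarh(aes(xmin = lo, xmax = hi), height = 0.2) +
  geom_point_interactive(aes(tooltip = sprintf("%s: %.2f [%.2f, %.2f]; RoB: %s",
                                               study, est, lo, hi, rob),
                             data_id = study),
                         shape = 15, size = 4) +
  geom_vline(xintercept = 1, linetype = "dashed")

girafe(ggobj = p)  # hover a square for estimate, CI and risk-of-bias judgment
```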
Charles Gray: Structuring data for systematic review: towards a future of open and interoperable analyses

Most systematic reviews and maps design bespoke data extraction tools. These databases obviously share similarities, for example citation information, study year, location, etc., and there are necessary differences depending on the focus of the review. However, each database uses different conventions for column names, cell content formatting and structure. Because of the lack of templates across the evidence synthesis community, reviewers often learn the hard way that databases designed for data extraction are often not suitable for immediate visualisation, analysis or sharing. Using R functions, we outline an approach to structuring databases in which data are shared in tables wherein each row denotes an observation and each column a variable, and develop context-agnostic tools to translate databases between different formats for specific uses. This project offers computational workflows for reviewers to develop open syntheses and takes a step toward standardisation of systematic review/map databases.
Activity: Data wrangling / curating; Data visualisation | Category: Theoretical framework / proposed process or concept; Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/HfR7ifhnbLI
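
A minimal illustration of the tidy structure described (one row per observation, one column per variable); the columns are invented:

```r
library(tidyr)

wide <- data.frame(study = c("S1", "S2"),
                   effect_2019 = c(1.2, 0.8),
                   effect_2020 = c(1.5, 0.9))

long <- pivot_longer(wide, starts_with("effect_"),
                     names_to = "year", names_prefix = "effect_",
                     values_to = "effect")
long
```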
Andrew Feierman: EviAtlas – tool for producing visuals from evidence synthesis data

Systematic reviews and maps typically involve the production of databases showing the key attributes of the studies included in the review. These databases are a major intellectual contribution that can assist data users and future reviewers alike, but they are typically published as spreadsheets that are difficult to interpret and investigate without specialist knowledge or tools. To address this problem, we created eviatlas in 2018 as an online app to allow users to investigate their own datasets within a free, online portal (https://estech.shinyapps.io/eviatlas/). While useful, this tool only allowed reviewers to view their own data; it did not facilitate access by external stakeholders. Further, building the sorts of interactive websites needed to investigate data in this way is beyond the budget or skillset of most organisations. To address this gap, we have developed a 'packaged' version of eviatlas that users can call from within the R statistical environment. This new version expands the functionality of the original by allowing users to build and deploy their own apps, providing open access to their data for external users. Once complete, we envisage that eviatlas will provide an easy way to deploy interactive databases to the web, while allowing advanced users to fully customize the resulting websites by supplying additional information such as markdown, CSS and image files. In this talk we will provide an introduction to the software and demonstrate how it can enable publication of interactive websites with only a few lines of code.
Activity: Data visualisation | Category: Graphical user interface (including Shiny apps); Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/v9TV_uhs2wU
Ciara Keenan: evimappr: an R package for producing interactive bubble plots of evidence

evimappr is an R package designed to help evidence synthesis authors display the volume of evidence across three categorical variables. The package produces bubble plots that display a quantitative variable (e.g. the number of studies in a review) across an x axis, a y axis and a third dimension (ideally with fewer than about six levels). The package allows users to make their plots interactive, enabling tooltips and hyperlinks for each bubble. We give an example that uses this interactivity to link to a subset of the studies corresponding to a particular bubble in a web-based HTML table.
Activity: Data visualisation | Category: Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/1m2CuN8QRF0
Neal Haddaway: ESMARConf2021 Special Session 8: Research Weaving livestream

Activity: Evidence mapping / mapping synthesis; Quantitative analysis / synthesis (including meta-analysis); Data visualisation | Category: Theoretical framework / proposed process or concept; Method validation study / practical case study; Graphical user interface (including Shiny apps); Code package / library | Year: 2021 | Type: session | YouTube: https://youtu.be/JGEYkRKXIP4
Sarah Young: What can networks tell us in an evidence and gap map context? Vegetated strips in agricultural fields as a case study

Evidence and gap maps (EGMs) bring together scattered and siloed knowledge from existing research to develop decision-making tools for evidence-based policy. In addition to the thematic categorization of studies by characteristics relevant to decision-makers, EGMs also often include a basic bibliometric analysis to understand trends in publishing and authorship. In 2019, the concept of 'research weaving' was introduced by Nakagawa and colleagues to describe a more advanced bibliometric analysis, including network visualization and text analysis, that provides insights into collaboration and citation dynamics in a systematic mapping context. The current work seeks to apply these concepts to a previously published EGM on the role of vegetated strips in agricultural fields. Taking this EGM as a case study, we demonstrate the added value of including analyses of co-authorship, citation and keyword co-occurrence networks in EGMs: what these analyses can tell us about the social dynamics of the research communities that contribute knowledge to this area, and how they shed light on the terminology used to describe key concepts, which could aid in the search for additional literature. We hope to provide a clear path forward using existing open source tools in R, and other open platforms like VOSviewer, to enable researchers to add new layers of understanding to EGMs that could help drive collaboration and facilitate the evidence synthesis process.
Activity: Evidence mapping / mapping synthesis | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/WDUbLnACypc
Tim Alamenciak: Analyzing Canadian ecological restoration literature with bibliometric analysis and a systematic map

Our presentation will discuss a novel use of bibliometric analysis (using bibliometrix) paired with a systematic map to characterize and synthesize a broad selection of research. Ecological Restoration Knowledge Synthesis (ERKS) is a nationally funded knowledge synthesis project that has involved a systematic literature review, interviews and case studies to assess and synthesize the current state of ecological restoration knowledge in Canada. We will demonstrate how we used the bibliometrix R package to draw conclusions about a selection of 3,013 peer-reviewed journal articles. The analysis from bibliometrix highlighted key clusters of literature. We then conducted a systematic map of studies that measured the outcomes of ecological interventions. The bibliometric analysis helped inform the scope of our systematic map by providing insights about the body of literature. The systematic map was conducted using CADIMA to track the exclusions and data extraction. The extracted data were analyzed using R to cluster the results and create heatmaps for specific subject areas. The two approaches taken together allowed us to synthesize a broad sweep of the academic literature. The resulting analysis blends bibliometric analysis with a systematic map, resulting in a methodology that can be used to characterize a wide body of subject-specific literature. Our talk will highlight how these two methods of synthesis work together to highlight gaps in the research landscape.
Activity: Evidence mapping / mapping synthesis | Category: Method validation study / practical case study | Year: 2021 | Type: talk | YouTube: https://youtu.be/-u2NPzfwZBw
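
bibliometrix's documented entry points for this kind of analysis (the export file name is illustrative):

```r
library(bibliometrix)

M <- convert2df("savedrecs.bib", dbsource = "wos", format = "bibtex")
results <- biblioAnalysis(M)
summary(results, k = 10)  # top authors, sources, countries and keywords
```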
Loretta Gasparini: Introducing metalabR: A package to facilitate living meta-analyses and dynamic meta-analytic visualizations

Developmental psychologists often make statements of the form "babies do X at age Y". Such summaries can misrepresent a messy and complex evidence base, yet meta-analyses are under-used in the field. To facilitate developmental researchers' access to current meta-analyses, we created MetaLab (metalab.stanford.edu), a platform for open, dynamic meta-analytic datasets. In 5 years the site has grown to 29 meta-analyses with data from 45,000 infants and children. A key feature is the unique standardized data storage format, which allows a unified framework for analysis and visualization, and facilitates the addition of new datapoints to ensure living meta-analyses that give the most up-to-date summary of the body of literature. metalabR facilitates and standardizes the process of conducting and integrating meta-analyses with the MetaLab platform. Existing key features focus on ensuring adherence to our standardized data format by providing functions for reading, validating, and cleaning new datasets and added datapoints. Furthermore, metalabR helps access existing MetaLab functionalities for quantitative analysis, building on metafor (Viechtbauer, 2010). In progress are visualization tools for developmental meta-analyses and a report summarizing results of random-effects models appropriate for developmental psychology.
Activity: Quantitative analysis / synthesis (including meta-analysis); Data visualisation | Category: Graphical user interface (including Shiny apps); Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/x4nu6wGqdeg
Ciara Keenan: ESMARConf2021 Special Session 9: Open Synthesis livestream

Activity: Communities of practice/research practices generally; General (any / all stages); Quantitative analysis / synthesis (including meta-analysis); Data / meta-data extraction; Document / record management (including deduplication); Education / capacity building; Report write-up / documentation / reporting | Category: Theoretical framework / proposed process or concept; Graphical user interface (including Shiny apps); Code package / library; Method validation study / practical case study | Year: 2021 | Type: session | YouTube: https://youtu.be/ICKTBPB2x0E
Neal Haddaway: Open Synthesis and R

This presentation will introduce the concept of Open Synthesis (the application of Open Science principles to evidence synthesis), ongoing work by the evidence synthesis community to develop definitions and operationalise the concept, and how R facilitates Open Synthesis practices.
Activity: Communities of practice/research practices generally; General (any / all stages) | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/a485bqzeY2A
Tanja Burgard: PsychOpen CAMA - A platform for open and cumulative meta-analyses in psychology

Typically, meta-analyses are published as printed articles. This practice leads to serious limitations for the re-usability of meta-analytic data. Results of printed meta-analyses can often not be replicated, nor can the sensitivity of the results to subjective decisions, such as the choice of statistical model, be examined. Results quickly become outdated, and without access to the meta-analytic dataset, the process of meta-analytic data collection starts from scratch. One solution for an infrastructure for continuous updating of meta-analytic evidence is the concept of CAMA (Community-Augmented Meta-Analysis). A CAMA is an open repository for meta-analytic data that provides a GUI for meta-analytic tools. PsychOpen CAMA serves the field of psychology. The PHP application relies on an OpenCPU server to process requests from the analyses called from the GUI. All functions needed for these analyses are stored in an R package. To ensure interoperability, all data available on the platform follow certain conventions defined in a data template. The results from the operations on the OpenCPU server are returned as output in the GUI. Meta-analyses published on the platform are accessible and can be augmented continuously by the research community. A first release of the service will be available in early 2021. All meta-analytic functionalities (data exploration, meta-regression, publication bias, power analyses) can already be demonstrated with our test version.
Activity: Quantitative analysis / synthesis (including meta-analysis) | Category: Graphical user interface (including Shiny apps); Code package / library | Year: 2021 | Type: talk | YouTube: https://youtu.be/jI62P-HTQqs
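
The GUI-to-R round trip uses OpenCPU's generic /ocpu/library/{package}/R/{function} endpoint; in this sketch the server, package, function and argument names are hypothetical:

```r
library(httr)

r <- POST("https://example.org/ocpu/library/psychopencama/R/run_meta/json",
          body = list(dataset = '"ds1"'))  # arguments are passed as R literals
content(r)  # JSON results, rendered back into the GUI
```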
Marc Lajeunesse: Challenges and lessons for automating data extractions from scientific plots

Here I present challenges to the semi-automation of data extraction from published figures. I first discuss failed implementations with an existing R package, metagear, and then focus on new technologies available in R that can help resolve deficiencies. I end with best practices for scientists to help make published figures more accessible to automated technologies.
Activity: Data / meta-data extraction | Category: Theoretical framework / proposed process or concept; Method validation study / practical case study | Year: 2021 | Type: talk | YouTube: https://youtu.be/60312q3ivqg
Thomas Leuchtefeld: Sysrev - An Open Access Platform for Review

Sysrev is an open access platform for data extraction and review. It supports FAIR data principles (findable, accessible, interoperable, and reusable) with an emphasis on interoperability. Sysrev currently supports three forms of interoperability: a GraphQL server, an R package (RSysrev), and a Python client (PySysrev). Sysrev provides a free, open access platform that can be used by developers as a source of review data or as a platform for supporting new reviews. In this talk, we'll briefly demonstrate how Sysrev can be used to create a named entity recognition dataset for genes (sysrev.com/p/3144) and how that dataset can be used to create a named entity recognition model with PySysrev. We will also demonstrate how RSysrev can be used to read open access review data, and how it can be used to generate new data for review. Sysrev is eager to partner with developers who want to improve the way humans and machines work together; we provide a simple open-access platform that lets developers build applications that rely on review data, without needing to recreate a review platform from scratch.
Activity: General (any / all stages); Document / record management (including deduplication) | Category: Graphical user interface (including Shiny apps) | Year: 2021 | Type: talk | YouTube: https://youtu.be/8Bqm867i4TU
Alexandra Bannach-Brown: Research Ecosystems & the Role of R in Effective Evidence Synthesis: building bridges between researchers

Open Research Ecosystems are communities of researchers, evidence synthesists, tool makers, information specialists, research data managers, etc., that collaboratively recognise evidence synthesis as the end goal of research. Research Ecosystems support researchers to design, undertake, and report primary research and evidence synthesis in a way that optimises reuse, translation, and sharing of the data. Research Ecosystems are based on shared open principles, transparency of research methods in evidence synthesis and primary research, a code of conduct, and sharing of materials for collaboration and communication. This talk will present the concept of open research ecosystems to grow community and improve primary research and evidence synthesis. We present a pilot project idea, funded by Stifterverband (DE), to establish research ecosystems in biomedical translational research. This talk will explore the role of R in developing Research Ecosystems and ways to build bridges between researchers for effective evidence synthesis. Let's revolutionise the way we synthesise evidence.
Activity: Communities of practice/research practices generally; Education / capacity building | Category: Theoretical framework / proposed process or concept | Year: 2021 | Type: talk | YouTube: https://youtu.be/TrwjoVKb3i4
Matteo Mancini: A new ERA for meta-analysis: building Executable Research Articles

When we think about meta-analyses, and scientific papers more generally, we think about a manuscript in PDF with figures and tables. This format has remained almost unchanged for the last twenty years and reduces the reader experience to reading text and looking at charts, without any real chance to understand more about either the methodology or the results. As reproducibility has become a central concern in several fields, we need to rethink the publication medium. I will share some insights from our recent experience in putting together and publishing a meta-analysis in the format of an executable research article (ERA). ERAs can not only embed text and code but also allow readers to execute code on the fly and generate interactive figures. I will discuss the potential and challenges of this new format.
Report write-up / documentation / reportingTheoretical framework / proposed process or concept2021talkhttps://youtu.be/cRsm62eSq34
Martin WestgateWorkshop 1: Writing an R function and developing a package

This short workshop provides walkthroughs, examples and advice on how to go about building R functions and packages, and why you might wish to do so in the first place. It aims to discuss the benefits of using functions and packages to support your work and the work of others, and provides practical advice about when a package might be ready to 'go public'.
General (any / all stages); Report write-up / documentation / reportingWorkshop; R coding; Package development2021workshophttps://youtu.be/h5-gbq2-NJg
Neal HaddawayWorkshop 2: Systematic review coordinating bodies and how they can help you: panel discussion

This workshop and panel discussion focus on what the major systematic review coordinating bodies, the Campbell Collaboration, CEE and Cochrane, can provide by way of support to anyone wishing to conduct a robust evidence synthesis. Each organisation briefly presents itself, followed by a panel discussion with questions from the conference participants.
General (any / all stages); Communities of practice/research practices generally; Collaboration; Education / capacity building; Protocol development; Communication; Report write-up / documentation / reportingWorkshop; Theoretical framework / proposed process or concept2021workshophttps://youtu.be/COUOgAiN-mU
Luke McGuinnessWorkshop 3: Introduction to GitHub

This workshop will provide walkthroughs, examples and advice on how to use GitHub to support your work in R, whether developing packages or managing projects.
General (any / all stages); Report write-up / documentation / reportingWorkshop; GitHub; Version control; R coding; Package development2021workshophttps://youtu.be/BpIsVDmq9NU
Emily HennessyWorkshop 4: Collaborating to reduce research waste

This panel discussion focuses on how the evidence synthesis and technology development communities can work to ensure that research waste and redundancy of effort are minimised when tools to support evidence synthesis are developed, and how this can be balanced with innovation and bespoke tool development.
General (any / all stages); Communities of practice/research practices generally; CollaborationWorkshop; Theoretical framework / proposed process or concept2021workshophttps://youtu.be/L-Zw7ywXcTU
Neal HaddawayClosing Session with a panel on Training in R and Evidence Synthesis

This panel discussion covers why evidence synthesis capacity development is vital for rigorous synthesis and evidence-informed decision-making. The panellists discuss the beneficial role that systems like R can play in increasing awareness of and use of robust methods, and possible challenges with Open platforms for training future generations of evidence synthesists.
General (any / all stages); Communities of practice/research practices generally; Collaboration; Education / capacity buildingWorkshop; Theoretical framework / proposed process or concept2021sessionhttps://youtu.be/tOJVrlvQf_U
Haddaway, NealOpening Session livestream

Communities of practice/research practices generally; CollaborationTheoretical framework2022sessionhttps://youtu.be/gaIzk9-1L2U
Haddaway, NealWelcome to ESMARConf2022

This presentation opens the official ESMARConf2022 programme. Learn about ESMARConf's objectives and values, details from this year's funder, Code for Science & Society, and more about our Accessibility Policy and Code of Conduct.
Communities of practice/research practices generallyTheoretical framework2022talkhttps://youtu.be/7mP_6yA4oX4
Pigott, TerriKeynote presentation: Synthesizing Communities: Improving Evidence Synthesis through Collaboration

We are delighted and honoured to welcome Terri Pigott to give this year's ESMARConf conference opening lecture. In her talk, Terri will reflect on her 30+ years in research on evidence synthesis and discuss the importance of interdisciplinary collaborations to improve the field.
Communities of practice/research practices generally; CollaborationTheoretical framework2022talkhttps://youtu.be/sSVTQdUNkS8
Hennessy, EmilySpecial Session 1: Review processes from A to Z (part 1)

Study selection / screening; Qualitative analysis / synthesis (including text analysis and qualitative synthesis); Searching / information retrieval; Evidence mapping / mapping synthesis; Data visualisationMethod validation study / practical case study; Graphical user interface (including Shiny apps); Code package / library2022sessionhttps://youtu.be/_7yNNrIzcU0
Hunter, BronwenUsing state-of-the-art transformer models to automate text classification in R

The utilisation of automated classification tools from the field of Natural Language Processing (NLP) can massively decrease the amount of time required for the article screening stage of evidence synthesis. To achieve high accuracy, models often require huge volumes of ‘gold-standard’ labelled training data, which can be expensive and time-consuming to produce. As a result, ‘transfer learning’, in which NLP models pre-trained on large corpora are downloaded and fine-tuned on a smaller number of hand-labelled texts, is an increasingly popular method for achieving high-performance text classification. The availability of state-of-the-art transformer models via the open-source Hugging Face library has also improved the accessibility of this approach. However, materials outlining how to make use of such resources in R are limited. At ESMARConf 2022, I will introduce and demonstrate how transfer learning can be carried out in R and seamlessly integrated with data collection from academic databases and internet sources.
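As a taster of the approach (not the speaker's own code), a minimal sketch of calling a pre-trained Hugging Face model from R; it assumes the reticulate package and a Python environment with the transformers library installed, and the model name and labels are purely illustrative:

    library(reticulate)
    transformers <- import("transformers")
    # zero-shot classification: a quick first pass that needs no fine-tuning
    classifier <- transformers$pipeline("zero-shot-classification",
                                        model = "facebook/bart-large-mnli")
    classifier("Efficacy of CBT for depression: a randomised trial",
               candidate_labels = c("include", "exclude"))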
Study selection / screening; Qualitative analysis / synthesis (including text analysis and qualitative synthesis)Method validation study / practical case study2022talkhttps://youtu.be/f31cTAr12F8
Takola, ElinaTowards an automated Research Weaving

Here we present a systematic study of the concept of the ecological niche. The ecological niche has been described in various ways: from habitat to role, and from biotope to hypervolume. Although it has many different definitions, it remains one of the most fundamental concepts in ecology. Our aim is to apply the research weaving framework to a large body of literature relevant to the ecological niche, in order to illustrate how the concept has evolved since its introduction in the early 20th century. We analysed over 29,000 publications using systematic maps and bibliometric webs. Our synthesis consisted of eight components: phylogeny, type/validity, temporal trends, spatial patterns, contents, terms, authors, and citations. We used bibliometric analyses, quantitative analyses of publication metadata, and text mining algorithms. This integrative presentation of the development of the ecological niche concept provides an overview of how its dynamics changed over time. It also allows us to detect knowledge gaps, while presenting a systematic summary of existing knowledge. To our knowledge, this is one of the first projects to implement the research weaving framework using exclusively automated processes.
Evidence mapping / mapping synthesisMethod validation study / practical case study2022talkhttps://youtu.be/94Fkcm2zjOg
Haddaway, Nealcitationchaser: a tool for transparent and efficient forwards and backwards citation chasing in systematic searching

Systematic searching aims to find all possibly relevant research records from multiple sources to collate an unbiased and comprehensive set of bibliographic records. Along with bibliographic databases, systematic reviewers use a variety of additional methods to minimise procedural bias, including assessing records that are cited by and that cite a set of articles of known relevance (citation chasing). Citation chasing exploits connections between research articles to identify relevant records for consideration in a review by making use of explicit mentions of one article within another. Citation chasing is a popular supplementary search method because it helps to build on the work of primary research and review authors. It does so by identifying potentially relevant studies that might otherwise not be retrieved by other search methods; for example, because they did not use the review authors’ search terms in the specified combinations in their titles, abstracts or keywords. Here, we describe an open source tool that allows for rapid forward and backward citation chasing. We introduce citationchaser, an R package and Shiny app for conducting forward and backward citation chasing from a starting set of articles. We describe the sources of data, the backend code functionality, and the user interface provided in the Shiny app.
Searching / information retrievalGraphical user interface (including Shiny apps); Code package / library2022talkhttps://youtu.be/pyt2YgPUVfs
Polanin, JoshuaAn Evidence Gap Map Shiny Application for Effect Size or Summary Level Data

Evidence Gap Maps (EGMs) provide a structured visual framework designed to identify areas where research has and has not been conducted. Traditional EGMs combine at least two characteristics (e.g., outcome measurement and research design) mapped onto the x- and y-axes to form a grid. EGMs can be in table, graph, or chart format. The intersections of the axes on the grid contain, at minimum, information on the number of studies conducted for each combination of the levels of the characteristics. We created this Shiny app to ease the construction of EGMs in graph form. The app allows users to upload their dataset, use point-and-click options to summarize data for combinations of factors, and then create an EGM using the ggplot2 package in R (Wickham, 2011). We also provide an example dataset for instructional purposes. Further, the app outputs the R syntax used to create the plot; users can download the syntax and customize the graph if needed.
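For readers who want to see the idea in code, a minimal sketch (invented data, not the app's internals) of a bubble-grid EGM built with ggplot2:

    library(ggplot2)
    set.seed(1)
    studies <- data.frame(
      outcome = sample(c("Reading", "Math", "Behaviour"), 50, replace = TRUE),
      design  = sample(c("RCT", "Quasi-experimental"), 50, replace = TRUE)
    )
    egm <- dplyr::count(studies, outcome, design)   # studies per grid cell
    ggplot(egm, aes(x = design, y = outcome, size = n)) +
      geom_point() +
      labs(x = "Research design", y = "Outcome", size = "Studies") +
      theme_minimal()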
Evidence mapping / mapping synthesis; Data visualisationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/-4WhXPgUQD4
Hennessy, EmilySpecial Session 2: Review processes from A to Z (part 2) livestream

Data wrangling/curating; Data / meta-data extraction; Qualitative analysis / synthesis (including text analysis and qualitative synthesis); Updating / living evidence syntheses; CommunicationTemplate (e.g. HTML web page or markdown file); Method validation study / practical case study; Graphical user interface (including Shiny apps)2022sessionhttps://youtu.be/w9c-EOKKhc4
Zimsen, StephAutomating data-cleaning and documentation of extracted data using interactive R-markdown notebooks

At the Institute for Health Metrics and Evaluation, we conduct ~40 systematic reviews each year. In our general process of search > screen > extract > analyze, we found we needed an intervening step: cleaning extracted data before analysis. The problem arises from a feature of our workflow: one person extracts the data, while another analyzes it. Clean-up falls through the gap as we hand off data. Analysts must then spend time cleaning, though the extractor is far more familiar with the dataset. To work faster with fewer errors, we developed a stepwise cleaning checklist, then wrote code modules to fix common problems. But juggling Excel, R and a checklist still takes time and attention. To streamline further, we are developing a systematic solution: an interactive R Markdown notebook that takes in parameters of the specific extraction dataset, cleans and validates the data, and returns a new cleaned dataset. We are testing it with a recent systematic review dataset of ~2800 observations from >150 sources. This semi-automated interactive code has other benefits besides valid, upload-ready analysis data. First, a flexible, parameterized template enables faster, easily repeated work. The code can also reproducibly generate documentation of the cleaning performed, the extraction history, or other reports on data, parameters, and results. And critically, an interactive notebook makes sophisticated coding accessible to data extractors, who tend to have less coding experience than research analysts.
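As an illustration of the parameterized-notebook pattern (file names and the cleaning step are hypothetical, not IHME's actual template), a minimal R Markdown skeleton:

    ---
    title: "Extraction data-cleaning report"
    output: html_document
    params:
      data_file: "extraction.csv"
    ---

    ```{r clean}
    raw <- read.csv(params$data_file)
    cleaned <- raw[!duplicated(raw), ]   # one typical checklist step
    write.csv(cleaned, "cleaned_extraction.csv", row.names = FALSE)
    ```

Rendering with rmarkdown::render("clean.Rmd", params = list(data_file = "new.csv")) re-runs the same checklist on a new extraction and leaves an HTML audit trail.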
Data wrangling/curatingTemplate (e.g. HTML web page or markdown file)2022talkhttps://youtu.be/H64Bw6FvnMw
Wong, CharisDeveloping a systematic framework to identify, evaluate and report evidence for drug selection in motor neuron disease clinical trials.

Motor neuron disease (MND) is a rapidly progressive, disabling and incurable disease, with an average time from diagnosis to death of 18–30 months. Despite decades of clinical trials, effective disease-modifying treatment options remain limited. The Motor Neuron Disease – Systematic Multi-Arm Adaptive Randomisation Trial (MND-SMART; ClinicalTrials.gov registration number: NCT04302870) is an adaptive platform trial aimed at testing a pipeline of candidate drugs in a timely and efficient way. To inform the selection of future candidate drugs to take to trial, we identify, evaluate and report evidence from (i) published literature via the Repurposing Living Systematic Review (ReLiSyR-MND), a machine-learning-assisted, crowdsourced, three-part living systematic review evaluating clinical literature of MND and other neurodegenerative diseases which may share similar pathways, animal in vivo MND studies, and in vitro MND studies; (ii) experimental drug screening, including high-throughput screening of human induced pluripotent stem cell based assays; (iii) pathway and network analysis; (iv) drug and trial databases; and (v) expert opinion. Our workflow implements automation and text mining techniques for evidence synthesis, and uses R Shiny to provide interactive, curated living evidence summaries to guide decision making.
Data / meta-data extraction; Qualitative analysis / synthesis (including text analysis and qualitative synthesis)Method validation study / practical case study2022talkhttps://youtu.be/jJsL8QVW6og
Ramirez, VicenteSniffing though the Evidence: Leveraging Shiny to Conduct Meta Analysis on COVID-19 and Smell Loss

Early in the coronavirus pandemic, scientists sought to understand the symptoms associated with COVID-19. Among the most frequently reported was the loss of the senses of taste and smell. To estimate the prevalence of smell loss, we conducted a meta-analysis. However, the steady arrival of new literature required us to continue to track and update our analysis. To address this, we leveraged the ability of R Shiny applications to update and disseminate our analysis. From June 2020 to May 2021, our web-based dashboard provided the public with daily analysis updates estimating the prevalence of smell loss. This approach proved to be an effective method of disseminating findings to our field's broader community. While the coronavirus pandemic is an exceptional example of rapid updates to the literature, the framework presented may apply to many other fields and topics.
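A minimal sketch (invented data, not the dashboard's code) of the kind of prevalence pooling such a dashboard can re-run on each data update, using the meta package:

    library(meta)
    dat <- data.frame(study = c("A", "B", "C"),
                      cases = c(30, 45, 12),   # participants reporting smell loss
                      n     = c(60, 80, 40))
    m <- metaprop(cases, n, studlab = study, data = dat,
                  sm = "PFT")   # Freeman-Tukey double-arcsine transform
    forest(m)   # wrapped in a Shiny app, this refreshes as new studies arrive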
Updating / living evidence syntheses; CommunicationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/R5vKhlI3-oY
Haddaway, NealSpecial Session 3: Graphical user interfaces livestream

Communities of practice/research practices generally; Education / capacity building; Quality assessment / critical appraisal; Quantitative analysis / synthesis (including meta-analysis); Data visualisation; Communication; CollaborationGraphical user interface (including Shiny apps)2022sessionhttps://youtu.be/GzLLdBGWk3s
Harrer, MathiasDoing Meta-Analysis with R: Motivation, Concept and Features of an Open-Source Guide for Beginners

Meta-analytic methods have become an indispensable tool in many research disciplines. Worldwide, students and applied researchers acquire meta-analytic skills to address scientific questions pertinent to their field. Along with its extensions, R now arguably provides the most comprehensive, state-of-the-art toolkit for conducting meta-analyses. For novices, however, this wealth of R-based tools is often difficult to navigate and translate into practice, which may limit the uptake of available infrastructure. The “Doing Meta-Analysis with R” guide is one example of a project aiming to facilitate access to the R meta-analysis universe. It is primarily geared towards individuals without prior knowledge of R, meta-analysis, or both. We present the motivation, teaching concept, and core features of the guide. A brief overview of the technical implementation as an online, open-source resource based on {bookdown}, {shiny} and GitHub is also provided. Lastly, we discuss potential limitations of our approach, point to other user-friendly tools for new meta-analysts, and share general ideas to make the R meta-analysis infrastructure more accessible for everyone.
Communities of practice/research practices generally; Education / capacity buildingGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/b-FJ9GnrXRQ
Karyotaki, EiriniMetapsy: A meta-analytic database and graphical user interface for psychotherapy research synthesis

The number of trials on psychotherapies for depression is very large and quickly growing. Because of this large body of knowledge, it is important that the results of these studies are summarized and integrated in meta-analytic studies. More than a decade ago, we developed a meta-analytic database of these trials which is updated yearly through systematic literature searches. Currently, our database includes more than 800 trials and has been used for numerous systematic reviews and meta-analyses. We developed an open-access website, which includes all the trials of our database and all data we have extracted so far. The prototype of this freely accessible website provides a graphical user interface based on {shiny} to run full meta-analyses, subgroup, risk-of-bias, and publication-bias analyses on subsets of studies. We hope that this public database can be used as a resource for researchers, clinicians, and other stakeholders who want to conduct systematic reviews and meta-analyses on psychotherapies for depression. We also discuss future plans to extend the functionality of the website and integrate databases on other mental disorders.
Quality assessment / critical appraisal; Quantitative analysis / synthesis (including meta-analysis); Data visualisation; CommunicationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/iZFnyPtWkcw
Gasparini, LorettaMetaLab: Interactive tools for conducting and exploring community-augmented meta-analyses in developmental psychology

Meta-analyses are costly to conduct, often impossible to reanalyze, and outdated as soon as a new study emerges. How can we lower these hurdles, make data more accessible to researchers, and transform meta-analyses into a living resource? MetaLab (https://metalab.stanford.edu/) is an interactive platform that hosts community-augmented meta-analyses in the field of developmental psychology. On MetaLab, community members can contribute full datasets or update existing meta-analyses. To ensure that new records comply with our format and to make automatic processing possible, we provide a validator using a graphical user interface (GUI). This greatly facilitates the continuous growth of MetaLab and ensures that data contributors can almost instantly benefit from the rich infrastructure we provide. To allow an even broader range of researchers to leverage meta-analytic data, our interactive visualization and power analysis tools allow exploring meta-analytic datasets and planning future experiments using the best evidence available. We will provide a tour of these tools to demonstrate how contributing, updating, and exploring a meta-analysis is greatly facilitated through our GUI.
Collaboration; Quantitative analysis / synthesis (including meta-analysis); Data visualisationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/RRsIZeU-s2w
Hair, KaitlynR Shiny: why turn your R scripts into interactive web applications?

Researchers often want to share their datasets, complex analyses, or tools with others. However, if collaborators or decision makers lack coding expertise, this can be a significant barrier to engagement. Shiny is a package and framework for R users to create interactive online applications, without the need for web development skills. In this presentation, I will introduce the basic architecture of a Shiny application, highlight use-cases via example applications, and provide some tips for getting started.
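The basic architecture mentioned here fits in a few lines; a minimal, self-contained example:

    library(shiny)

    ui <- fluidPage(                      # the user interface definition
      sliderInput("k", "Number of studies:", min = 2, max = 50, value = 10),
      plotOutput("hist")
    )

    server <- function(input, output) {   # the reactive server logic
      output$hist <- renderPlot(hist(rnorm(input$k),
                                     main = "Simulated effect sizes"))
    }

    shinyApp(ui = ui, server = server)    # links the two and serves the app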
Education / capacity building; General (any / all stages); CommunicationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/sef9DSHK_Wo
Keenan, CiaraSpecial Session 4: Quantitative synthesis - NMA livestream

Quantitative analysis / synthesis (including meta-analysis); Data visualisation; Quality assessment / critical appraisalGraphical user interface (including Shiny apps); Code package / library2022sessionhttps://youtu.be/BF_fa1VQxiw
Metelli, SilviaNMAstudio: a fully interactive web-application for producing and visualizing network meta-analyses

Several software tools have been developed in recent years for network meta-analysis (NMA), but the presentation and interpretation of findings from large networks of interventions remain challenging. We developed a novel online tool, ‘NMAstudio’, to facilitate the production and visualization of key NMA outputs in a fully interactive environment. NMAstudio is a Python web application that provides a direct connection between a customizable network plot and all NMA outputs. The user interacts with the network by clicking one or more nodes (treatments) or edges (comparisons). Based on their selection, different outputs and information are displayed: (a) boxplots of effect modifiers assisting the evaluation of transitivity; (b) pairwise or NMA forest plots, and bi-dimensional plots if two outcomes are given; (c) league tables coloured by risk of bias or confidence ratings from the CINeMA framework; (d) incoherence tests; (e) comparison-adjusted funnel plots; (f) ranking plots; and (g) the evolution of the network over time. Pop-up windows with extra information are enabled. Analyses are performed in R using ‘netmeta’, and results are transformed into interactive and downloadable visualizations using reactive Python libraries such as ‘Plotly’ and ‘Dash’. A network of 20 drugs for chronic plaque psoriasis is used to demonstrate NMAstudio in practice. In summary, our application provides a truly interactive and user-friendly tool to display, enhance and communicate NMA findings.
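Behind an interface like this, the estimation step is a standard netmeta call; a minimal sketch with invented contrast-level data (column names follow netmeta's documented interface):

    library(netmeta)
    pairs <- data.frame(
      TE      = c(-0.5, -0.2, 0.1),   # pairwise effects, e.g. log odds ratios
      seTE    = c(0.20, 0.25, 0.30),
      treat1  = c("Drug A", "Drug A", "Drug B"),
      treat2  = c("Placebo", "Drug B", "Placebo"),
      studlab = c("Study 1", "Study 2", "Study 3")
    )
    nma <- netmeta(TE, seTE, treat1, treat2, studlab, data = pairs, sm = "OR")
    netgraph(nma)   # the static counterpart of a clickable network plot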
Quantitative analysis / synthesis (including meta-analysis); Data visualisationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/C63qKTG6kAw
Chiocchia, VirginiaThe ROB-MEN Shiny app to evaluate risk of bias due to missing evidence in network meta-analysis

We recently proposed a framework to evaluate the impact of reporting bias on the meta-analysis of a network of interventions, which we called ROB-MEN (Risk Of Bias due to Missing Evidence in Network meta-analysis). In this presentation we will show the ROB-MEN Shiny app, which we developed to facilitate this risk of bias evaluation process. ROB-MEN first evaluates the risk of bias due to missing evidence for each pairwise comparison separately. This step considers possible bias due to the presence of studies with unavailable results and the potential for unpublished studies. The second step combines the overall judgements about the risk of bias in pairwise comparisons with the percentage contribution of direct comparisons to the network meta-analysis (NMA) estimates, the likelihood of small-study effects, and any bias from unobserved comparisons. Then, a level of “low risk”, “some concerns” or “high risk” of bias due to missing evidence is assigned to each estimate. The ROB-MEN Shiny app runs the required analyses, semi-automates some of the steps with a built-in algorithm to assign the overall risk-of-bias level for the NMA estimates, and produces the tool’s output tables. We will present how the ROB-MEN app works using an illustrative example from a published NMA. ROB-MEN is the first tool for assessing the risk of bias due to missing evidence in NMA and is also incorporated in the reporting bias domain of the CINeMA software for evaluating confidence in NMA results.
Quality assessment / critical appraisalGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/LccPtoFsdS4
Nevill, ClareeceDevelopment of a Novel Multifaceted Graphical Visualisation for Treatment Ranking within an Interactive Network Meta-Analysis Web Application

Network meta-analysis (NMA) compares the effectiveness of multiple treatments simultaneously. This project aimed to develop novel graphics within MetaInsight (an interactive NMA web app: crsu.shinyapps.io/MetaInsight) to aid assessment of the ‘best’ intervention(s). The most granular results are Bayesian rank probabilities, which can be visualised with (cumulative) rank-o-grams. Summary measures exist; simpler measures (e.g. probability best) may be easier to interpret but are often more unstable and do not encompass the whole analysis. The surface under the cumulative ranking curve (SUCRA) is popular, linking directly with cumulative rank-o-grams. A critical assessment of the current literature on ranking methodology and visualisation directed the creation of graphics in R using ‘ggplot’ and ‘shiny’. The Litmus Rank-O-Gram presents a cumulative rank-o-gram alongside a ‘litmus strip’ of SUCRA values acting as a key. The Radial SUCRA plot presents SUCRA values for each treatment radially, with a network diagram of the evidence overlaid. To aid interpretation and facilitate sensitivity analysis, the new graphics are interactive and presented alongside treatment effect and study quality results. Treatment ranking is powerful but should be interpreted cautiously, with transparent, all-encompassing visualisations. This interactive tool will be pivotal for improving how researchers and stakeholders use and interpret ranking results.
Quantitative analysis / synthesis (including meta-analysis); Data visualisationGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/cP0_cWOXhUo
Hamza, Tasnimcrossnma: A new R package to synthesize cross-design evidence and cross-format data

Network meta-analysis (NMA) is commonly used to compare multiple interventions simultaneously by synthesising the available evidence. That evidence is obtained from either non-randomized studies (NRS) or randomized controlled trials, and is accessible as individual participant data (IPD) or aggregate data (AD). We have developed a new R package, crossnma, which allows these different pieces of information to be combined while accounting for their differences. The package conducts Bayesian NMA and meta-regression to synthesize cross-design evidence and cross-format data. It runs a range of models with JAGS, generating the code automatically from the user’s input. A three-level hierarchical model is implemented to combine IPD and AD, and we also integrate four different models for combining the different study designs: (a) ignoring their differences in risk of bias; (b) using NRS to construct discounted treatment-effect priors; and (c, d) adjusting for the risk of bias in each study in two different ways. Up to three study- or patient-level covariates can also be included, which may help explain some of the heterogeneity and inconsistency across trials. TH and GS are supported by the HTx project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement Nº 825162.
Quantitative analysis / synthesis (including meta-analysis)Code package / library2022talkhttps://youtu.be/46qjSMJ0ml0
Pritchard, ChrisSpecial Session 5: Other quantitative synthesis livestream

Quantitative analysis / synthesis (including meta-analysis)Method validation study / practical case study; Theoretical framework; Graphical user interface (including Shiny apps); Code package / library2022sessionhttps://youtu.be/ieMKKxJxFKY
Plessen, Constantin YvesWhat if…? A very short primer on conducting multiverse meta-analyses in R

Even though conventional meta-analyses provide an overview of the published literature, they do not consider the different paths that could have been taken in selecting or analyzing the data. At times, multiple meta-analyses with overlapping research questions reach different conclusions due to differences in inclusion and exclusion criteria or data-analytical decisions. It is therefore crucial to evaluate the influence such choices might have on the result of each meta-analysis. Were the meta-analytical method and exclusion criteria decisive, or is the same result reached via multiple analytical strategies? What if a meta-analyst had decided to go down a different path: would the same outcome occur? To ensure that the conclusions of a meta-analysis are not disproportionately influenced by data-analytical decisions, a multiverse meta-analysis can provide the entire picture and underpin the robustness of the findings, or reveal the lack thereof, by conducting all possible and reasonable meta-analyses at once. In this way, multiverse meta-analysis provides a research integration like umbrella reviews, yet additionally investigates the influence that flexibility in data analysis could have on the resulting summary effect size. During the talk I will give insight into this potent method and run through the multiverse of meta-analyses on the efficacy of psychological treatments for depression as an empirical example.
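A minimal sketch of the multiverse logic (the analytic choices are illustrative, the data are metafor's bundled dat.normand1999, and this is not the speaker's code): enumerate the specifications, run them all, and inspect the spread of pooled effects:

    library(metafor)
    dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
                  m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat.normand1999)
    specs <- expand.grid(method = c("FE", "REML", "DL"),
                         exclude_small = c(TRUE, FALSE),
                         stringsAsFactors = FALSE)
    smd <- numeric(nrow(specs))
    for (i in seq_len(nrow(specs))) {
      d <- if (specs$exclude_small[i]) subset(dat, n1i + n2i >= 50) else dat
      smd[i] <- coef(rma(yi, vi, data = d, method = specs$method[i]))
    }
    cbind(specs, smd)   # the (tiny) multiverse of pooled effects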
Quantitative analysis / synthesis (including meta-analysis)Theoretical framework2022talkhttps://youtu.be/qYUwIyRNOHU
Joshi, Meghawildmeta: Cluster Wild Bootstrapping for Meta-Analysis

Evidence synthesists are often interested in whether certain features of samples, interventions, or study designs are systematically associated with the strength of intervention effects. In the framework of meta-analysis, such questions can be examined through moderator analyses. In practice, moderator analyses are complicated by the fact that meta-analytic data often include multiple dependent effect sizes per primary study. A common method to handle dependence, robust variance estimation (RVE), leads to excessive false-positive results when the number of studies is small. Small-sample corrections for RVE have been proposed, but they have low power, especially for multiple-contrast hypothesis tests (e.g., tests of whether average effects are equal across three different types of studies). Joshi, Pustejovsky & Beretvas (2021) examined an alternative method for handling dependence, cluster wild bootstrapping, showing through simulation studies that it maintains adequate rates of false-positive results while providing more power than existing small-sample correction methods. In this presentation, I will introduce a package, called wildmeta, that implements cluster wild bootstrapping specifically for meta-analysis. The presentation will cover when and why meta-analysts should use cluster wild bootstrapping and how to use the functions in the package with robumeta and metafor models.
Quantitative analysis / synthesis (including meta-analysis)Code package / library2022talkhttps://youtu.be/WzT301yAtdE
Nicol-Harper, AlexUsing sub-meta-analyses to maintain independence among spatiotemporally-replicated demographic datasets

We use population modelling to inform conservation for the common eider, a well-studied seaduck of the circumpolar Northern Hemisphere. Our models are parameterised by vital rates measuring survival and reproduction, which we collated through literature review and a call for data. We performed precision-weighted meta-analysis (Doncaster & Spake, 2018) for vital rates with >20 independent estimates: adult annual survival, clutch size (number of eggs laid) and hatching success (proportion of eggs producing hatchlings). We excluded estimates without an associated sample size, and included variance estimates where provided or calculable, otherwise inputting the imputed mean variance. A random-effects error structure allowed for likely variation in population means across this species’ wide range; however, all I² values were <1%, suggesting that most between-study variation was due to chance rather than true heterogeneity. In many cases, studies presented multiple estimates for a given vital rate, e.g. over different study areas and/or multiple years. Where appropriate, we conducted sub-meta-analyses to generate single estimates that could be handled equivalently to non-disaggregated estimates from other studies. These decisions align with the suggestions of Mengersen et al. (2013) and Haddaway et al. (2020) for maintaining independence among heterogeneous samples, and our workflow ensured that the overall meta-analysis was conducted on independent replicate observations for each vital rate.
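A minimal sketch (invented numbers) of the two-stage logic: pool each study's multiple estimates first, then meta-analyse the now-independent study-level estimates:

    library(metafor)
    ests <- data.frame(
      study = c("S1", "S1", "S1", "S2", "S3"),
      yi    = c(0.82, 0.86, 0.79, 0.88, 0.84),   # e.g. annual survival estimates
      vi    = c(0.004, 0.006, 0.005, 0.003, 0.004)
    )
    pooled <- do.call(rbind, lapply(split(ests, ests$study), function(d) {
      fit <- rma(yi, vi, data = d, method = "FE")   # within-study sub-meta-analysis
      data.frame(study = d$study[1], yi = coef(fit), vi = vcov(fit)[1, 1])
    }))
    rma(yi, vi, data = pooled)   # random-effects model across independent studies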
Quantitative analysis / synthesis (including meta-analysis)Method validation study / practical case study2022talkhttps://youtu.be/Umyd9_rFEbc
Llambrich, MariaA new approach for meta-analysis using overall results: Amanida

The combination, analysis and evaluation of different studies that try to answer the same scientific question, also known as meta-analysis, plays a crucial role in answering relevant clinical questions. Unfortunately, metabolomics studies rarely disclose all the statistical information needed to perform a meta-analysis in the traditional manner. Public meta-analysis tools can only be applied to data with standard deviations or directly to raw data. Currently there is no available methodology for meta-analysis based on studies that only disclose overall results. Here we present Amanida, a meta-analysis approach using only the most commonly reported statistical parameters in this field: the p-value and fold-change. The p-values are combined via Fisher’s method and the fold-changes by averaging, both weighted by the study size (n). The amanida package includes several visualization options: a volcano plot for quantitative results, a vote plot of total regulation behaviours (up/down-regulation) for each compound, and an explore plot of the vote-counting results with the number of times a compound is found up- or down-regulated. In this way, it is very easy to detect discrepancies between studies at first glance. We have now developed a Shiny app to perform meta-analyses with the Amanida approach and make it more accessible to the community.
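The two combination rules are simple enough to show in base R; a minimal sketch with invented numbers (the n-weighting shown is an illustration of the stated idea, not necessarily Amanida's exact implementation):

    p  <- c(0.03, 0.20, 0.001)   # per-study p-values for one compound
    fc <- c(1.8, 1.4, 2.1)       # per-study fold-changes
    n  <- c(25, 60, 40)          # study sizes

    # Fisher's method: X = -2 * sum(log p) follows a chi-squared with 2k df
    X <- -2 * sum(log(p))
    p_combined <- pchisq(X, df = 2 * length(p), lower.tail = FALSE)

    # weighted average of fold-changes on the log scale
    fc_combined <- exp(weighted.mean(log(fc), w = n))
    c(p = p_combined, fc = fc_combined)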
Quantitative analysis / synthesis (including meta-analysis)Theoretical framework; Graphical user interface (including Shiny apps); Code package / library2022talkhttps://youtu.be/bdUqN2-R24g
Stojanova, JanaSpecial Session 6: Quantitative synthesis with a Bayesian lens livestream

Quantitative analysis / synthesis (including meta-analysis)Code package / library2022sessionhttps://youtu.be/kaaeMCQkhqQ
Bartoš, FrantišekAdjusting for Publication Bias with Bayesian Model-Averaging and the RoBMA R Package

Publication bias presents a vital threat to meta-analysis and cumulative science. It can lead to overestimation of effect sizes and overstating of the evidence against the null hypothesis. To mitigate the impact of publication bias, multiple adjustment methods have been developed. However, their performance varies with the true data-generating process, and different methods often lead to conflicting conclusions. We developed a robust Bayesian meta-analysis (RoBMA) framework that uses model-averaging to combine different meta-analytic models based on their relative predictive performance. In other words, it allows researchers to weight inference by how well the different models predicted the data. We implemented the framework in the RoBMA R package. The package allows specification of various meta-analytic publication bias adjustment models and of default and informed prior distributions, and provides summaries and visualizations for the combined ensemble.
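A minimal sketch with invented effect sizes; the argument names follow our reading of the package documentation and should be checked there:

    library(RoBMA)
    d  <- c(0.35, 0.20, 0.51, 0.12)   # standardised mean differences
    se <- c(0.10, 0.12, 0.15, 0.09)   # their standard errors
    fit <- RoBMA(d = d, se = se, seed = 1)   # fits and averages the model ensemble
    summary(fit)   # model-averaged effect, heterogeneity and bias components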
Quantitative analysis / synthesis (including meta-analysis)Code package / library2022talkhttps://youtu.be/SOtjQ1tgSwY
Röver, ChristianUsing the bayesmeta R package for Bayesian random-effects meta-regression

The bayesmeta R package facilitates Bayesian meta-analysis within the simple normal-normal hierarchical model (NNHM). Using the same numerical approach, we extended the bayesmeta package to include several covariables instead of only a single "overall mean" parameter. We demonstrate the use of the package for several meta-regression applications, including modifications of the regressor matrix and prior settings to implement model variations. Possible applications include the consideration of continuous covariables, comparison of study subgroups, and network meta-analysis.
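A minimal sketch of the underlying NNHM fit on which the meta-regression extension builds (invented data; the prior settings are purely illustrative):

    library(bayesmeta)
    fit <- bayesmeta(y = c(0.30, 0.10, 0.45),        # study effect estimates
                     sigma = c(0.12, 0.15, 0.20),    # their standard errors
                     labels = c("S1", "S2", "S3"),
                     mu.prior.mean = 0, mu.prior.sd = 4,
                     tau.prior = function(t) dhalfnormal(t, scale = 0.5))
    fit$summary   # posteriors for mu (overall mean) and tau (heterogeneity)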
Quantitative analysis / synthesis (including meta-analysis)Code package / library2022talkhttps://youtu.be/5rQSNYJIJgc
Bannach-Brown, AlexSpecial Session 7: Building an evidence ecosystem for tool design livestream

Searching / information retrieval; Communities of practice/research practices generally; General (any / all stages)Method validation study / practical case study; Theoretical framework; Graphical user interface (including Shiny apps); Code package / library2022sessionhttps://youtu.be/8lLoNZgItxA
Waffenschmidt, SiwSearch strategy development at a German Health Technology Assessment agency: our experience with R from an end user perspective

IQWiG is a German health technology assessment (HTA) agency that has been using text mining tools to develop search strategies for bibliographic databases for more than 10 years. Originally we used the R package tm for this purpose. We will describe the features we used and how we used them; we will also discuss why we have since switched to a commercial tool for text analysis and why we are currently looking for a new solution. In addition, we will summarize our requirements and explain which functions we think a new tool could have that go beyond simple text analysis.
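For orientation, a minimal sketch of the kind of term-frequency analysis the tm package supports (the mini-corpus is invented, not IQWiG's data):

    library(tm)
    abstracts <- c("diabetes screening in primary care",
                   "screening programmes for type 2 diabetes",
                   "primary care management of diabetes")
    corpus <- VCorpus(VectorSource(abstracts))
    corpus <- tm_map(corpus, content_transformer(tolower))
    corpus <- tm_map(corpus, removeWords, stopwords("en"))
    tdm <- TermDocumentMatrix(corpus)
    findFreqTerms(tdm, lowfreq = 2)   # candidate search terms occurring 2+ times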
Searching / information retrievalMethod validation study / practical case study2022talkhttps://youtu.be/g6agTLcY128
Riley, TrevorThe value of accessible packages for stakeholders in government

Federal research groups support information gathering and evidence synthesis for both primary research and policy/decision making. This presentation will discuss the various ways in which research products are used and discuss the value of accessible tools and evidence.
Communities of practice/research practices generally; General (any / all stages)Theoretical framework2022talkhttps://youtu.be/Vk9L3dcX6cA
Grames, ElizaIncreasing accessibility of evidence synthesis methods through tool development and capacity-building

Building methods and tools is only a first step toward facilitating and supporting evidence synthesis -- improving access for end users is a critical next step in making these methods and tools usable, sustainable and effective. This presentation will discuss how access to the R package litsearchr, used for information retrieval in evidence synthesis, was improved through two approaches: training and curriculum development and the development of a graphical user interface. We'll reflect on considerations for developers and end users in the building and maintenance of open source tools for access and accessibility.
General (any / all stages); Searching / information retrievalTheoretical framework; Graphical user interface (including Shiny apps); Code package / library2022talkhttps://youtu.be/DhfWuW6ld98
Hennessy, EmilySpecial Session 8: Developing the synthesis community livestream

Quantitative analysis / synthesis (including meta-analysis); Education / capacity building; CollaborationCode package / library; Theoretical framework; Method validation study / practical case study; Graphical user interface (including Shiny apps)2022sessionhttps://youtu.be/8IiVbgevHe8
Viechtbauer, WolfgangThe metadat Package: A Collection of Meta-Analysis Datasets for R

The metadat package is a data package for R that contains a large collection of meta-analysis datasets. Development of the package started at the 2019 Evidence Synthesis Hackathon at UNSW Canberra with a first version of the package released on CRAN on 2021-08-20. As of right now, the package contains 70 datasets from published meta-analyses covering a wide variety of disciplines (e.g., education, psychology, sociology, criminology, social work, medicine, epidemiology, ecology). The datasets are useful for teaching purposes, illustrating and testing meta-analytic methods, and validating published analyses. Aside from providing detailed documentation of all included variables, each dataset is also tagged with one or multiple 'concept terms' that refer to various aspects of a dataset, such as the field/topic of research, the outcome measure used for the analysis, the model(s) used for analyzing the data, and the methods/concepts that can be illustrated with the dataset. The package also comes with detailed instructions and some helper functions for contributing additional datasets to the package.
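A minimal example of the intended use: load a bundled dataset and reproduce a standard analysis with metafor (dat.bcg is the classic BCG vaccine trials example):

    library(metadat)
    library(metafor)
    head(dat.bcg)
    dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
                  data = dat.bcg)
    rma(yi, vi, data = dat)   # random-effects model on the log risk ratios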
Quantitative analysis / synthesis (including meta-analysis); Education / capacity buildingCode package / library2022talkhttps://youtu.be/4Mc5dxeqvH4
Lajeunesse, MarcLessons on leveraging large enrollment courses to screen studies for systematic reviews

Here I describe eight semesters of experimentation with various abstract screening tools, including R, HTML, CANVAS, and Adobe, with the aims of (1) improving science literacy among undergraduate students, and (2) leveraging large enrollment courses to process and code vast amounts of bibliographic information for systematic reviews. I then discuss the promise of undergraduate participation for screening and classification, but emphasise (1) consistent failures of tools, in terms of student accessibility and the ability to combine and compare student screening decisions, and (2) my consistent inability to get consistent, high-quality screening outcomes from students.
Education / capacity building; CollaborationTheoretical framework; Method validation study / practical case study2022talkhttps://youtu.be/C6IFlRl3rzg
Hobby, David‘LearnR’ & ‘shiny’ to support the teaching of meta-analysis of data from systematic review of animal studies.

Teaching meta-analysis involves combining theoretical statistical knowledge and applying the theory in practice. Teaching sessions for non-technical students involving R are often beset with technical problems such as outdated software versions, missing and conflicting dependencies, and a tendency for students to arrive on the session day without having installed the required software. This causes the first hour(s) of practical sessions to turn into technical troubleshooting. To circumvent these problems, we have created a self-contained web app using the ‘shiny’ and ‘learnr’ R packages to demonstrate the capabilities of R in meta-analysis. The app runs in a web browser, without the need for students to run R or install packages on their own devices, allowing instructors to focus on teaching rather than troubleshooting. Using a dataset and code from a previously published systematic review and meta-analysis of animal studies, students are walked through steps demonstrating the theoretical and mathematical foundations of meta-analysis and ultimately replicate the analysis and results. The app supports our live educational workshops but is also designed to be a stand-alone learning resource. At each step, there are multiple-choice questions for students to check their understanding of the material. We have demonstrated the use of existing R packages to generate a user interface for students to learn meta-analysis in practice.
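A minimal sketch of such a tutorial's skeleton (the contents of a learnr .Rmd file; the exercise is illustrative, not taken from the workshop):

    ---
    title: "Meta-analysis walkthrough"
    output: learnr::tutorial
    runtime: shiny_prerendered
    ---

    ```{r setup, include = FALSE}
    library(learnr)
    library(metafor)
    ```

    ```{r forest, exercise = TRUE}
    # students edit and run this chunk in the browser, no local install needed
    res <- rma(yi, vi, data = dat.konstantopoulos2011)
    forest(res)
    ```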
Education / capacity buildingGraphical user interface (including Shiny apps)2022talkhttps://youtu.be/LicJYxq87IE
Haddaway, NealClosing session

Communities of practice/research practices generally; Collaboration; General (any / all stages)Summary / overview2022sessionhttps://youtu.be/8rDuvoHCZ2s
Viechtbauer, WolfgangWorkshop 1: Introduction to meta-analysis in R

We will start by looking at methods for quantifying the results from individual studies included in a meta-analysis in terms of various effect size or outcome measures (e.g., raw or standardized mean differences, ratios of means, risk/odds ratios, risk differences, correlation coefficients). We will then delve into methods for combining the observed outcomes (i.e., via equal- and random-effects models) and for examining whether the outcomes depend on the characteristics of the studies from which they were derived (i.e., via meta-regression and subgrouping). A major problem that may distort the results of a meta-analysis is publication bias (i.e., when the studies included in a meta-analysis are not representative of all the relevant research that has been conducted on a particular topic). Therefore, current methods for detecting and dealing with publication bias will be discussed next. Finally, time permitting, we will look at some advanced methods for meta-analysis to handle more complex data structures that frequently arise in practice, namely when studies contribute multiple effect sizes to the same analysis, leading to dependencies in the data that need to be accounted for (via multilevel/multivariate models and robust variance estimation).
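The workshop's arc maps onto a few metafor calls; a minimal sketch using the bundled BCG dataset (illustrative, not the workshop materials):

    library(metafor)
    dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
                  data = dat.bcg)
    rma(yi, vi, data = dat, method = "FE")    # equal-effects model
    res <- rma(yi, vi, data = dat)            # random-effects model (REML)
    rma(yi, vi, mods = ~ ablat, data = dat)   # meta-regression on latitude
    regtest(res)                              # funnel-plot asymmetry test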
Education / capacity building; Quantitative analysis / synthesis (including meta-analysis); Data visualisationSummary / overview; Theoretical framework / proposed process or concept; Method validation study / practical case study2022workshophttps://www.wvbauer.com/doku.php/workshop_ma_esmarconf
Bethel, AlisonWorkshop 2: Searching for studies in meta-analyses and evidence syntheses

This workshop will provide an overview of why searching for studies in a meta-analysis or other evidence synthesis is a vital step that should be carefully planned and conducted. It will highlight methods that can be used to improve comprehensiveness, reduce risk of bias, and increase your working efficiency.
Protocol development; Searching / information retrieval; Report write-up / documentation / reportingSummary / overview; Theoretical framework / proposed process or concept; Method validation study / practical case study2022workshophttps://youtu.be/dip0sCk3emM
Grainger, MatthewWorkshop 3: Collaborative coding and version control - an introduction to Git and GitHub

This workshop will provide walkthroughs, examples and advice on how to use GitHub to support your work in R, whether developing packages or managing projects.
Collaboration; General (any / all stages); Document / record management (including deduplication); Data wrangling/curatingSummary / overview; Theoretical framework / proposed process or concept; Method validation study / practical case study2022workshophttps://youtu.be/UC0gAOlxVYg
Garside, RuthWorkshop 4: The Collaboration for Environmental Evidence and what it can do for you

This workshop focuses on what the Collaboration for Environmental Evidence (CEE), a key non-profit systematic review coordinating body, can provide by way of support to anyone wishing to conduct a robust evidence synthesis in the field of environmental science, conservation, ecology, evolution, etc. The workshop will involve a presentation of the organisation, its role and free services and support, followed by a Q&A.
Communities of practice/research practices generally; Education / capacity building; Collaboration; General (any / all stages); Stakeholder engagement; Protocol development; Report write-up / documentation / reporting; CommunicationSummary / overview; Theoretical framework / proposed process or concept2022workshophttps://youtu.be/HFsGNzZFEJ8
Basu, ArindamWorkshop 5: Structural Equation modelling

Meta-analysis of trials and observational studies can be conceptualised as mixed-effects modelling, where fixed-effect meta-analyses are special cases of random-effects meta-analyses. Structural equation modelling can be used to conduct meta-analyses in ways that extend their scope. In this workshop, we will show step by step how to use structural equation modelling for conducting meta-analyses in R with the metaSEM, lme4, and OpenMx packages. As an attendee, you will not need any previous experience with these packages, as we will work from start to finish with a set of preconfigured data; you can later try it with your own data sets. In the workshop, the instructor will conduct live coding and attendees will follow along, with questions and answers. All materials will be openly distributed in a GitHub repository and be available before and after the workshop. We will use a hosted RStudio instance, so please RSVP for this workshop so that accounts can be set up ahead of time.
Quantitative analysis / synthesis (including meta-analysis)Summary / overview; Theoretical framework / proposed process or concept; Method validation study / practical case study2022workshophttps://youtu.be/z42HHrMRbV8
Westgate, MartinWorkshop 6: Introduction to writing R functions/packages

This workshop provides walkthroughs, examples and advice on how to go about building R functions and packages, and why you might wish to do so in the first place. It aims to discuss the benefits of using functions and packages to support your work and the work of others, and provides practical advice about when a package might be ready to 'go public'.
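A minimal sketch of the workshop's starting point: turning repeated code into a documented function, the seed of a future package:

    #' Convert a correlation to Fisher's z
    #'
    #' @param r A numeric vector of correlation coefficients.
    #' @return The r-to-z transformed values.
    r_to_z <- function(r) {
      stopifnot(is.numeric(r), all(abs(r) < 1))
      0.5 * log((1 + r) / (1 - r))
    }
    r_to_z(c(0.3, 0.5))

From there, usethis::create_package() scaffolds a package around such functions (assuming the usethis package).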
General (any / all stages); Quantitative analysis / synthesis (including meta-analysis); Data visualisationSummary / overview; Theoretical framework / proposed process or concept; Method validation study / practical case study2022workshophttps://youtu.be/A5XBh8zAMfo
Gerit WagnerCoLRev: A pipeline for collaborative and git-based literature reviews

Conducting highly collaborative literature reviews remains a key challenge, and compelling visions for pipelines covering the literature review process end-to-end, i.e., from problem formulation to write-up, have yet to be proposed. We contend that, as in other reproducible research contexts, git offers a viable data management foundation for literature reviews. However, the unique characteristics of literature reviews have yet to be fully considered, and corresponding design principles are yet to be proposed. In this context, our work builds on years of iterative prototype development, evaluation, and refinement. The proposed talk focuses on the aforementioned challenges and proposes potential solutions. Our objectives are twofold: first, to demonstrate a novel data management and tool pipeline (CoLRev), and second, to summarize its key principles. Integrating such a pipeline with other emerging tools, we see exciting opportunities for evidence synthesis communities to transform the conduct of collaborative literature reviews.
Document / record management (including deduplication), Updating / living evidence syntheses, Collaboration / team working, General (any / all stages)Theoretical framework / proposed process or concept, Combination of code (chunks or packages) from multiple sources, Code package / library2023talkhttps://youtu.be/yfGGraQC6vs
Theodoros DiakonidisscreenmedR: a package for automating the screening of publications for meta-analyses or systematic reviews, using the PubMed database.

We introduce a new, fast and accurate R program that can save researchers time when sifting the publications returned by a PubMed search to find those relevant to a systematic review or meta-analysis. Supplied with an input of 4-5 publications that the researcher believes belong to their study, the program, and most specifically the function screenmed(), can reduce the initial PubMed result set by 60% to 80% with almost 100% accuracy. The program uses the abstracts of the publications and applies text-mining methods, in combination with hierarchical clustering (an unsupervised machine-learning technique) and cosine similarity, to find similarities among them. It also provides two functions that use MeSH term definitions to find similarities between publications: mesh_clean_bq(), which finds common MeSH terms between two groups of publications, and mesh_by_name_bq(), which finds specific MeSH terms in a group of publications.
Study selection / screeningCode package / library2023talkhttps://youtu.be/7SWoQSvDgWY
Clareece NevillMetaImpact: Designing future studies whilst considering the totality of current evidence

Motivation: When designing new studies, there is a balancing act regarding how many people to recruit. Too few, and the trial may not detect an effect; too many, and some participants would have undergone inferior treatments unnecessarily. Both scenarios are wasteful and unethical. In the UK, governing bodies must approve new treatments before offering them to the public. To aid decision-making, systematic reviews and meta-analyses are often presented, in which all relevant evidence is systematically found and combined to give an overall picture. Therefore, new trials should ideally add to the current evidence and influence a future review. Plan: Sutton et al. developed a method for estimating the sample size of a new trial such that it influences a future review. This involves simulating a new trial using parameters from the current meta-analysis, adding it to the review, and then seeing how the results change. Repeating this many times estimates the ‘power’ of the sample size: the proportion of simulations that give the desired effect. This project aimed to create an interactive web app for researchers to easily utilise these methods themselves. Web app: Using R and Shiny, we created a free web app, MetaImpact, to estimate the power of a future study with a certain sample size to have an impact on a review. Educational features range from information boxes explaining different elements of the calculator to plots illustrating how the estimate is calculated. Impact: Past reviews were used to assess the benefit of MetaImpact by removing the most recent addition, adding a new trial generated using MetaImpact to the review, and comparing the result to the original. MetaImpact has the potential to benefit patients and research by encouraging ethical sample sizes and reducing ‘wasteful’ trials.
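A minimal sketch (invented data, not MetaImpact's code) of the Sutton-style power calculation described above:

    library(metafor)
    set.seed(1)
    cur <- data.frame(yi = c(0.42, 0.18, 0.31), vi = c(0.04, 0.05, 0.03))
    res <- rma(yi, vi, data = cur)            # the current meta-analysis
    n_arm <- 100                              # candidate per-arm sample size
    hits <- replicate(1000, {
      v_new <- 2 / n_arm + coef(res)^2 / (4 * n_arm)        # approx. SMD variance
      y_new <- rnorm(1, coef(res), sqrt(v_new + res$tau2))  # simulated new trial
      upd <- rma(c(cur$yi, y_new), c(cur$vi, v_new))        # updated review
      upd$pval < 0.05
    })
    mean(hits)   # estimated 'power' of the new trial to shift the updated review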
Quantitative analysis / synthesis (including meta-analysis), Data visualisation, Updating / living evidence synthesesTheoretical framework / proposed process or concept, Graphical user interface (including Shiny apps)2023talkhttps://youtu.be/tcban07zOiw
Trevor RileyAn Introduction to CiteSource: Analyzing Sources, Methods, and Search Strategies

This tutorial will review the CiteSource R package and Shiny app. CiteSource began as part of ESMARConf2022 and gives users the ability to deduplicate reference files while maintaining custom metadata fields. This functionality allows users to compare search results across literature sources, search methodologies, and strategies. The package also allows users to analyze search results, methods, etc. across both title/abstract screening and full-text screening phases. This tutorial will provide an overview of the CiteSource R package/Shiny app and will explore various use cases through unique vignettes.
Searching / information retrieval, Document / record management (including deduplication), Study selection / screening, Quality assessment / critical appraisal, Data visualisation, Report write-up / documentation / reporting, Communities of practice / research practices generally, Education / capacity buildingSummary / overview, Code package / library2023talkhttps://youtu.be/xBW1wQDHk5g
Konstantinos BougioukasccaR: a package for assessing primary study overlap across systematic reviews in overviews

An overview of reviews aims to collect, assess, and synthesize evidence from multiple systematic reviews (SRs) on a specific topic using rigorous and reproducible methods. An important methodological challenge in conducting an overview of reviews is the management of overlapping information and data due to the inclusion of the same primary studies in SRs. We present an open-source R package called ccaR (https://github.com/thdiakon/ccaR) that provides easy-to-use functions for assessing the degree of overlap of primary studies in an overview of reviews with the use of the corrected cover area (CCA) index. A worked example with and without consideration of chronological structural missingness is outlined, illustrating the simple steps involved in calculating the CCA index and creating a publication-ready heatmap. We expect ccaR to be useful for overview authors, methodologists, and reviewers who are familiar with the basics of R. We also hope our package will contribute to the discussion about different methodological approaches in implementing the CCA index. Future research could further investigate the functionality of our package and other potential uses as well as the limitations.
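The CCA index itself is a one-liner; a minimal sketch with an invented citation matrix (rows are primary studies, columns are reviews), following Pieper et al.'s formula CCA = (N - r) / (rc - r):

    cca <- function(m) {
      N <- sum(m)    # total inclusions of primary studies across reviews
      r <- nrow(m)   # unique primary studies
      c <- ncol(m)   # systematic reviews
      (N - r) / (r * c - r)
    }
    m <- matrix(c(1, 1, 0,
                  1, 0, 1,
                  1, 0, 0,
                  0, 1, 1,
                  0, 0, 1), nrow = 5, byrow = TRUE)
    cca(m)   # 0.3, i.e. 30% overlap, conventionally read as very high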
Data visualisationGraphical user interface (including Shiny apps), Code package / library2023talkhttps://youtu.be/7Asp-HkMoks
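The corrected covered area index at the heart of the package can be illustrated directly from a study-by-review inclusion matrix (CCA = (N - r) / (r * (c - 1)), where N is the total number of inclusions, r the number of distinct primary studies, and c the number of reviews). The helper below is a base-R illustration of that formula, not ccaR's own API:

cca_index <- function(m) {
  m <- (m > 0) * 1          # m: rows = primary studies, cols = systematic reviews
  N <- sum(m)               # total inclusions across all reviews
  r <- nrow(m)              # distinct primary studies
  c <- ncol(m)              # reviews in the overview
  (N - r) / (r * (c - 1))
}

m <- matrix(c(1, 1, 0,
              1, 0, 1,
              0, 1, 1,
              1, 1, 1), nrow = 4, byrow = TRUE)
cca_index(m)                # 0.625 here; values above 0.15 are conventionally 'very high' overlap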
Leonie TwenteImplementing text mining to optimize search term selection for systematic reviews in language education: a case study

Systematic reviews (SRs) collate all available empirical findings to answer a clearly formulated question. The systematic and transparent approach minimizes the risk of bias in the selection and evaluation of relevant studies. Ideally, the search strategy finds both all documents relevant to the question (“sensitivity”) and as few irrelevant documents as possible (“precision”). Identifying search terms for searches in electronic databases is a challenge, in particular for SRs in educational research, where there is no standardized system of terms such as "MeSH" in medicine. Since SRs in highly interdisciplinary fields require searching databases of different disciplines, the keywords assigned in thesauri are not an optimal solution. Selecting keywords based on the knowledge of a few experienced individuals, however, introduces bias and reduces the likelihood of finding research one does not know. One possible solution is text mining, which allows relevant search terms to be determined automatically from a large data set (cf. Grames, Stillman, Tingley & Elphick, 2019a). As part of a systematic review on the effect of language-sensitive subject teaching approaches (Vasylyeva, Woerfel, Twente & Höfler, in prep.), a text mining method based on co-occurrence networks was used to optimize the search strategy. Using the R package "litsearchr" (Grames, Stillman, Tingley & Elphick, 2019b), terms that occur together and have a specific binding strength were identified in a collection of naively searched literature (2,668 documents from a free and controlled search in Scopus and ERIC, and 58 from FIS Bildung). These terms, which are particularly representative of the content of relevant documents, supplement a multi-stage search term selection process. This presentation presents the application of litsearchr in a German- and English-language SR in language education and discusses problems and benefits of the application.
Searching / information retrievalMethod validation study / practical case study2023talkhttps://youtu.be/Why9lYZjWMo
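A condensed sketch of the litsearchr workflow described above; function names follow the package documentation, while file paths and thresholds are placeholders to adapt:

library(litsearchr)

naive <- import_results(directory = "naive_search/")    # e.g. Scopus/ERIC exports
terms <- extract_terms(text = paste(naive$title, naive$abstract),
                       method = "fakerake", min_freq = 2, min_n = 2)

dfm   <- create_dfm(elements = paste(naive$title, naive$abstract), features = terms)
graph <- create_network(dfm, min_studies = 3)           # co-occurrence network

cutoff  <- find_cutoff(graph, method = "cumulative", percent = 0.8)
reduced <- reduce_graph(graph, cutoff_strength = cutoff)
get_keywords(reduced)                                   # candidate search terms to review manually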
Jakub RuszkowskiKey role of citation chasing in the evidence synthesis on the gastrointestinal symptoms prevalence in chronic kidney disease: a case study

Even though meta-analyses are mainly performed to establish the effectiveness of various treatments on health and social outcomes, they can also be conducted to improve our understanding of patients’ experience of a disease. In our recently published systematic review and meta-analysis of lower gastrointestinal symptoms in patients with chronic kidney disease, we showed that citation chasing (using the Citationchaser app) on articles introducing symptom questionnaires is an essential step in data collection, as a quarter of the papers would not have been found using the standard database search method. Results for each prevalence outcome expressed as single proportions were pooled and visualized using the “meta” package. Using the “altmeta” package, we showed that both a conventional two-step method (with a Freeman–Tukey double arcsine transformation) and generalized linear mixed models (regardless of the choice of link function: logit, probit, cauchit, cloglog) provide relatively similar results. To assess “reporting biases” such as selective non-publication (publication bias) and selective non-reporting of results, we conducted Peters’ regression test, calculated the Luis Furuya-Kanamori index, and generated both funnel and Doi plots using functions from the “meta” and “metasens” packages. To sum up, our case supports using R packages and shiny apps to conduct a meta-analysis of prevalence.
Searching / information retrieval, Quantitative analysis / synthesis (including meta-analysis), Data visualisationMethod validation study / practical case study, Graphical user interface (including Shiny apps)2023talkhttps://youtu.be/GcPiS7EICUs
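The comparison of pooling methods described above can be reproduced in outline with the 'meta' package alone; the data below are invented for illustration:

library(meta)

events <- c(12, 30, 8, 45, 22)
n      <- c(80, 150, 60, 200, 120)

m_pft  <- metaprop(events, n, sm = "PFT")                     # two-step, Freeman-Tukey transform
m_glmm <- metaprop(events, n, sm = "PLOGIT", method = "GLMM") # one-step GLMM, logit link

forest(m_pft)   # the case study's point: both approaches give similar pooled prevalences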
Marc LajeunesseA study-centric reference manager for research synthesis in R

Reference managers like Zotero, Mendeley, and EndNote are often shoehorned into key tasks like retrieving, organizing, screening, and coding studies for systematic reviews and meta-analyses. However, the biblio-centric, spreadsheet-like UIs of these tools are less than ideal for the fastidious study-level work typically needed for research synthesis. Here I introduce an experimental R package that offers an alternative reference-managing design, de-emphasizing tabular interfaces in favour of a more study-centric UI that enhances interactivity, task tracing, coding, and reference readability. The primary goal of the software is to improve user experience and make the diverse and repetitive tasks of research synthesis more palatable.
Searching / information retrieval, Document / record management (including deduplication), Study selection / screening, Quality assessment / critical appraisal, Data / meta-data extraction, Data wrangling / curating, Evidence mapping / mapping synthesis, Data visualisation, Collaboration / team workingSummary / overview, Graphical user interface (including Shiny apps), Code package / library2023talkhttps://youtu.be/aY194U7Y7GQ
Emma WilsonThe benefits of using R for systematic review reference management

Good reference management is essential when conducting systematic reviews or other evidence synthesis research. Many different reference management software programs are available to researchers, including Zotero, Mendeley, EndNote and Papers. However, reference management software can often struggle to handle large numbers of references, and the lack of version control means that changes made to references may not be recorded or reproducible. Many features of reference management software relevant to systematic reviews – such as importing references from database searches, storing and organising references, filtering references, and retrieving full-text documents – can be performed using the R programming language. Additionally, changes made to references can be documented in a reproducible way using R scripts or RMarkdown files and GitHub. In this talk, I discuss how R can be used to effectively manage systematic review references, outline the benefits of using R to do so, and show examples from my own systematic review projects.
Searching / information retrieval, Document / record management (including deduplication), Data wrangling / curating, Report write-up / documentation / reportingSummary / overview, Combination of code (chunks or packages) from multiple sources2023talkhttps://youtu.be/T2ogiLRSbEw
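A small sketch of the kind of scripted, version-controllable reference handling the talk describes, using the synthesisr package's read_refs()/write_refs() import and export functions; file names and the filtering rule are placeholders:

library(synthesisr)
library(dplyr)

refs <- read_refs(c("medline.ris", "embase.ris"))   # import database exports as a data frame

refs_recent <- refs %>%
  filter(!is.na(year), as.numeric(year) >= 2015)    # a reproducible, documented filtering step

write_refs(refs_recent, format = "ris", file = "refs_filtered.ris")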
Claudia KappText analysis for search strategies: implementing an approach with R

As information specialists, our job is to think about how to create and optimize search strategies for systematic reviews. During this presentation, we will introduce the first version of a new R package which implements an updated approach for search strategy development based on Hausner et al., 2012 (1). We will discuss why we chose to create our own package, although we are all novices to programming. Furthermore, we will address what needs we see for new packages and tools for evidence synthesis from the perspective of information specialists. 1. Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Syst Rev. 2012;1:19.
Searching / information retrievalCode package / library2023talkhttps://youtu.be/pmVjjy2QAYE
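The Hausner et al. approach derives search terms objectively from a development set of known relevant records. A minimal sketch of the underlying idea (ranking title/abstract terms by frequency in that set) using tidytext, with invented example data; this is not the presented package itself:

library(dplyr)
library(tidytext)

development_set <- data.frame(
  id = 1:3,
  text = c("language-sensitive teaching in science classrooms",
           "effects of language support on subject learning",
           "teaching approaches for language learners in schools")
)

development_set %>%
  unnest_tokens(word, text) %>%              # tokenize titles/abstracts
  anti_join(get_stopwords(), by = "word") %>%
  count(word, sort = TRUE)                   # frequent terms become candidate search terms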
Qiyang ZhangPaperfetcher: A tool to automate handsearching and citation searching for systematic reviews

Systematic reviews are vital instruments for researchers to understand broad trends in a field and synthesize evidence on the effectiveness of interventions in addressing specific issues. The quality of a systematic review depends critically on having comprehensively surveyed all relevant literature on the review topic. In addition to database searching, handsearching is an important supplementary technique that helps increase the likelihood of identifying all relevant studies in a literature search. Traditional handsearching requires reviewers to manually browse through a curated list of field-specific journals and conference proceedings to find articles relevant to the review topic. This manual process is not only time-consuming, laborious, costly, and error-prone due to human fatigue, but it also lacks replicability due to its cumbersome manual nature. To address these issues, we present a free and open-source Python package and an accompanying web-app, Paperfetcher, to automate the retrieval of article metadata for handsearching. We will also demonstrate how Paperfetcher can be used in R! With Paperfetcher's assistance, researchers can retrieve article metadata from designated journals within a specified time frame in just a few clicks. In addition to handsearching, it also incorporates a beta version of citation searching in both forward and backward directions. Paperfetcher has an easy-to-use interface, which allows researchers to download the metadata of retrieved studies as a list of DOIs or as an RIS file to facilitate seamless import into systematic review screening software. To the best of our knowledge, Paperfetcher is the first tool to automate handsearching with high usability and a multi-disciplinary focus.
Searching / information retrieval, Data / meta-data extractionSummary / overview, Graphical user interface (including Shiny apps), Code package / library, Code chunk (e.g. single R or javascript function)2023talkhttps://youtu.be/eCoM6omPaIo
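Paperfetcher is a Python package, so using it from R goes through reticulate; the sketch below shows only that bridging step, since Paperfetcher's own handsearch/citation-search API is not reproduced here and should be taken from its documentation:

library(reticulate)

# py_install("paperfetcher")        # one-off installation into the active Python env
pf <- import("paperfetcher")        # the Python module is then callable from R as pf$...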
Neal HaddawayGSscraper and greylitsearcher - useful but flawed tools for searching for studies in evidence syntheses

Web scraping is a useful technique for extracting patterned data when searching for studies in evidence syntheses. It holds promise where search results cannot be exported directly in bulk, and allows data to be integrated into eligibility screening pipelines. Here, I report on two tools (GSscraper and greylitsearcher) built using basic web scraping in R and hosted as Shiny apps. I explain the problems associated with these methods and call for additional support in helping to make these web scraping tools resilient to changes in the code of the underlying websites (Google and Google Scholar).
Searching / information retrieval, Document / record management (including deduplication), Report write-up / documentation / reportingTheoretical framework / proposed process or concept, Graphical user interface (including Shiny apps), Code package / library2023talkhttps://youtu.be/XffNRf2BD-E
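A generic sketch of the basic scraping pattern such tools build on, using rvest; the URL and CSS selector are hypothetical placeholders, and their fragility when a site changes its markup is exactly the problem the talk raises:

library(rvest)

page   <- read_html("https://example.org/search?q=systematic+review")  # placeholder URL
titles <- page %>%
  html_elements(".result-title") %>%   # selector breaks whenever the site changes its markup
  html_text2()
head(titles)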
Antonina DolgorukovaA meta-analysis of preclinical studies with complex data structure: a practical example of using a multilevel model to account for dependencies

The low reliability and reproducibility of preclinical study findings indicate the need for meta-analytic research that allows not only more accurate estimates but also the identification of risks of bias, publication bias, and design features potentially affecting the results. A common challenge in preclinical meta-analyses is a complex data structure involving dependent effect sizes, which, if ignored, can result in misleading statistical inferences. Multilevel modelling and robust variance estimation are the most reliable approaches for handling dependencies; however, they have not yet been widely adopted. Here we demonstrate a practical example of the application of these methods in a meta-analysis of controlled studies testing migraine treatments in an animal model of trigeminovascular nociception (study protocol at PROSPERO: CRD42021276448). Our systematic search identified 13 studies reporting on 21 experiments, some of which used a shared control group. A three-level model with robust variance estimation was built using the rma.mv() and robust() functions of the metafor package for R. The extent to which methodological features and the reporting of measures to reduce bias explain the observed heterogeneity was assessed in subgroup analyses (meta-regression). To test the robustness of the results, we also examined the presence of outliers and influential cases, followed by sensitivity analysis, and estimated potential publication bias. We believe that this work is a helpful example of using the metafor package for multilevel modelling in preclinical meta-analyses, and we would like to discuss the methodology used and the results.
Quantitative analysis / synthesis (including meta-analysis), Data visualisation, Communities of practice / research practices generallyMethod validation study / practical case study2023talkhttps://youtu.be/RB5EWNSb6qg
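A minimal sketch of the modelling step described above, with placeholder variable names: a three-level model (effect sizes nested in studies) followed by cluster-robust variance estimation:

library(metafor)

# dat: one row per effect size, with yi, vi, a study id, and an effect-size id
fit <- rma.mv(yi, vi,
              random = ~ 1 | study/es_id,   # level 3: study; level 2: effect size
              data = dat)

robust(fit, cluster = dat$study, clubSandwich = TRUE)  # RVE with small-sample correction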
Rebecca HarrisPre-eclampsia in pregnancy and offspring blood pressure: a multilevel multivariate meta-analysis of observational studies

Background: In studies pertaining to cardiovascular health, systolic and diastolic blood pressure are key outcomes of interest and are both usually reported in primary studies. When conducting meta-analysis, such outcomes cannot be combined in a standard pairwise meta-analysis because they are not independent. An appropriate approach for addressing multiple dependent outcomes in meta-analysis is through multilevel modelling which accounts for the correlation by specifying how each effect size is nested in the included studies. We present a case-study where we used multilevel meta-analysis to analyse multiple outcomes (systolic and diastolic blood pressure) and multiple follow-up measures from cohort studies assessing the impact of pre-eclampsia on offspring blood pressure. Methods and Results: To identify articles, we searched the Medline (via PubMed), CINAHL (via EBSCO) and Embase (via Elsevier) databases from their inception to January 31, 2022. Meta-analysis of 42 effect sizes from 12 studies was conducted using the metafor package in R. When analysing effect sizes adjusted for confounders, offspring exposed to a pre-eclamptic pregnancy had higher systolic (SMD: 0.157; 95%CI: 0.098, 0.216) and diastolic blood pressure (SMD: 0.136; 95% CI: 0.068, 0.203). Compared to univariate pairwise meta-analysis, pooled effects from multilevel multivariate analyses were stronger and precision around DBP was greater. Results from meta-regression tests to compare early and late onset of pre-eclampsia were not statistically significant. Conclusions: This multilevel meta-analysis confirms the positive association between pre-eclampsia and offspring blood pressure after accounting for potential confounders while accounting for the multilevel structure of the data.
Quantitative analysis / synthesis (including meta-analysis)Method validation study / practical case study2023talkhttps://youtu.be/FllJA9Jub44
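A sketch of the joint modelling structure described above, with placeholder names; V stands for the variance-covariance matrix of the dependent systolic/diastolic effect sizes:

library(metafor)

# dat: one row per effect size, with an 'outcome' factor (SBP vs DBP) and a study id
fit <- rma.mv(yi, V,
              mods   = ~ outcome - 1,       # separate pooled SMD for SBP and DBP
              random = ~ outcome | study,   # correlated true effects within each study
              struct = "UN",
              data   = dat)
summary(fit)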
Wolfgang ViechtbauerLocation-scale models for meta-analysis using the metafor package

The purpose of most meta-analyses is to estimate the size of the average effect and/or to examine under what circumstances the effect tends to be higher/lower. However, equally important is the question of how much the effect varies across studies. The latter question is focused on the amount of heterogeneity in the effects, which we can estimate under a random-effects model using well-established methods. However, these methods assume that the amount of heterogeneity does not depend on the study characteristics and hence is assumed to be constant (homoscedastic). An extension of the standard random-effects model - the meta-analytic location-scale model - relaxes this assumption and allows researchers to examine under what circumstances the amount of heterogeneity tends to be higher/lower. In this tutorial, I will demonstrate how such location-scale models can be fitted using the metafor package and discuss the potential and limitations of such models.
Quantitative analysis / synthesis (including meta-analysis)Code package / library2023talkhttps://youtu.be/589nU_9WO1o
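In metafor, a location-scale model adds a scale formula to rma(); a minimal sketch with a placeholder moderator:

library(metafor)

fit <- rma(yi, vi,
           mods  = ~ year,   # location part: does the average effect shift with year?
           scale = ~ year,   # scale part: does the amount of heterogeneity depend on year?
           data  = dat)
summary(fit)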
Shinichi NakagawaMeta-analyses with missing standard deviations with log response ratio

The log response ratio, lnRR, is the most frequently used effect size statistic in ecology. However, missing standard deviations (SDs) are often present in meta-analytic datasets, preventing us from obtaining the sampling variance for lnRR. We propose three new methods to deal with missing SDs. All three methods use the square of the weighted average coefficient of variation CV to obtain sampling variances for lnRR when SDs are missing. Using simulation, we find that using the average CV to estimate the sampling variances for all observations, regardless of missingness, performs best. Surprisingly, even where SDs are missing, this simple method performs better than the conventional analysis with no missing SDs. This is because the conventional method incorporates biased estimates of sampling variances as opposed to less biased sampling variances with the average CV. All future meta-analyses of lnRR could take advantage of our new approach along with the other methods.
Quantitative analysis / synthesis (including meta-analysis)Summary / overview, Theoretical framework / proposed process or concept2023talkhttps://youtu.be/1y50v47ojyY
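A sketch of the core idea, assuming columns for group means, SDs and sample sizes: compute a weighted average CV from the studies that do report SDs, then use its square in the standard first-order lnRR variance formula for every observation:

library(dplyr)

dat <- dat %>%
  mutate(cv_t = sd_t / mean_t, cv_c = sd_c / mean_c)   # NA where SDs are missing

# weighted average CV across studies reporting SDs (weights = sample sizes)
cv_bar_t <- with(dat[!is.na(dat$cv_t), ], weighted.mean(cv_t, n_t))
cv_bar_c <- with(dat[!is.na(dat$cv_c), ], weighted.mean(cv_c, n_c))

dat <- dat %>%
  mutate(yi = log(mean_t / mean_c),                    # lnRR
         vi = cv_bar_t^2 / n_t + cv_bar_c^2 / n_c)     # same average-CV variance for every row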
Sabine PatzlUsing multiverse and specification curve analyses as an assessment of generality of effects for MASEMs: A meta-analysis on creative potential and self-assessment measures

Creativity is not only a characteristic of, e.g., scientific geniuses and artists; it is generally held that everyone has a certain amount of creative potential. A person's creative potential does not necessarily lead to creative achievements, as there is evidence that the relationship might be partially explained through creative self-assessments (CSA) such as creative self-beliefs. However, the question of the relationship between CSA and actual creative potential remains unresolved. The main goal of this meta-analysis is to investigate whether two indicators of creative potential (i.e., divergent thinking and intelligence) are associated with CSA. Here, we use a meta-analytic structural equation modeling (MASEM) approach, as proposed by Wilson and colleagues (2016), to model expectable effect size dependencies. Furthermore, we expect to find a substantial amount of heterogeneity in the data due to effects of moderating variables. However, because MASEMs are limited in how many moderators can be included, we will use two approaches to investigate the effect generality of the scrutinized MASEM. First, we will apply subgroup analyses to test whether parameter estimates are equal across the different CSA types and age groups. Second, we will apply multiverse and specification curve analyses to all bivariate relationships. This allows us to investigate the influence of various study design variables and (reasonable) meta-analytical decisions simultaneously, and thus contributes to disentangling the causes of inconsistent study results concerning the relationship between creative potential and CSA in the available literature. We show how multiverse and specification curve analyses combined with MASEMs can be used to assess the generality of research synthesis outcomes.
Quantitative analysis / synthesis (including meta-analysis)Method validation study / practical case study2023talkhttps://youtu.be/L1VG8s_Cidk
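A generic illustration of the multiverse/specification-curve logic (not the authors' code): refit the meta-analysis under every combination of defensible analytic choices and inspect the ordered estimates. The choices and column names below are placeholders:

library(metafor)

specs <- expand.grid(method = c("REML", "DL"),
                     subset = c("all", "adults_only"),
                     stringsAsFactors = FALSE)

est <- apply(specs, 1, function(s) {
  d <- if (s[["subset"]] == "adults_only") dat[dat$age_group == "adult", ] else dat
  coef(rma(yi, vi, data = d, method = s[["method"]]))   # pooled estimate per specification
})

plot(sort(est), type = "b",
     xlab = "Specification (ordered)", ylab = "Pooled estimate")   # specification curve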
Daniel HeckmetaBMA: Bayesian Model Averaging for Meta-Analysis in R

Meta-analysis aims at the aggregation of observed effect sizes from a set of primary studies. Whereas fixed-effect meta-analysis assumes a single underlying effect size for all studies, random-effects meta-analysis assumes that the true effect size varies across studies. Often, the data may not support one of these assumptions unambiguously, especially when the number of studies under consideration is small. In such a case, selecting one of the two models results in confidence intervals that are too narrow when assuming fixed effects, or in low statistical power when assuming random effects. As a remedy, Bayesian model averaging can be used to combine the results of four Bayesian meta-analysis models: (1) fixed-effect null hypothesis, (2) fixed-effect alternative hypothesis, (3) random-effects null hypothesis, and (4) random-effects alternative hypothesis. Based on the posterior probabilities of these models, Bayes factors quantify the evidence for or against the two key questions: "Is the overall effect non-zero?" and "Is there between-study variability in effect size?". Besides accounting for model uncertainty, Bayesian inference enables researchers to include studies sequentially, updating a meta-analysis as new studies are added to the literature. The R package metaBMA facilitates the application of Bayesian model averaging for meta-analysis by providing an accessible interface for computing posterior model probabilities, Bayes factors, and model-averaged effect-size estimates.
Quantitative analysis / synthesis (including meta-analysis), Data visualisation, Updating / living evidence syntheses2023talkhttps://youtu.be/DcsRnRgY_co
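A minimal sketch of the package's main interface, assuming the meta_bma(y, SE, labels, data) signature from the package documentation; the data are invented and the default priors are used, which a real analysis should justify:

library(metaBMA)

dat <- data.frame(study = paste("Study", 1:5),
                  yi  = c(0.22, 0.10, 0.35, -0.05, 0.18),   # effect estimates
                  sei = c(0.10, 0.12, 0.15, 0.09, 0.11))    # standard errors

fit <- meta_bma(y = yi, SE = sei, labels = study, data = dat)
fit   # posterior model probabilities, Bayes factors, model-averaged estimate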
Jens FuenderichMetaPipeX: Data analysis & harmonization for multi-lab replications of experimental designs

The number of multi-lab replication studies (e.g. ManyLabs, Registered Replication Reports) in psychology is gradually increasing, with few uniform standards in data preparation or provision. This leads to challenges in both access and re-use of multi-lab replication data. The MetaPipeX framework takes on these challenges, serving as a novel proposal to standardize data structure, analysis code and reporting for experimental data from between-groups comparisons in replication projects. It provides users with both structure and tools to synthesize and analyse multi-lab replication data, select relevant subsets, or create helpful graphics such as violin, forest and funnel plots. MetaPipeX consists of three components: a descriptive pipeline for data transformations and analyses, analysis functions that implement the pipeline, and a Shiny app utilizing the standardized structure for insights into the data at different aggregation levels. The analysis functions are largely built around meta-analysis of effect sizes (components), utilizing the metafor::rma.mv function (Viechtbauer, 2010). The analysis results consist of replication statistics and meta-analytic model and heterogeneity estimates. Additionally, the functions provide documented data exports at various aggregation levels. All kinds of data subsets or graphics may be exported for further use. In this tutorial at ESMARConf we will present the framework and show personas ("prototypical users") with different use cases, ranging from data-analytical tasks to educational purposes. In order to contextualize the framework and its features, we will provide a brief summary of the current state of repositories from multi-lab replication projects and discuss potential benefits and limitations of standardization. Using the MetaPipeX framework, we aim to save other researchers countless hours of data manipulation and harmonization, building a foundation for future reproducible multi-lab replication studies.
Data wrangling / curating, Quantitative analysis / synthesis (including meta-analysis), Data visualisation, Collaboration / team working, Communities of practice / research practices generallyStructured methodology (e.g. critical appraisal tool or data extraction form), Graphical user interface (including Shiny apps), Code package / library2023talkhttps://youtu.be/m-W8O2yhReg
James PustejovskyClustered bootstrapping for handling dependent effect sizes in meta-analysis: Exploratory application for publication bias analysis

In many fields, quantitative meta-analyses involve dependent effect sizes, which occur when primary studies included in a synthesis contain more than one relevant estimate of the relation between constructs. When using meta-analysis methods to summarize findings or examine moderators, analysts can now apply well-established methods for handling dependent effect sizes. However, very few methods are available for examining publication bias issues when the data also include dependent effect sizes. Furthermore, applying existing tools for publication bias assessment without accounting for effect size dependency can produce misleading conclusions (e.g., too-narrow confidence intervals, hypothesis tests with inflated Type I error). In this presentation, we explore a potential solution: clustered bootstrapping, a general-purpose technique for quantifying uncertainty in data with clustered structures, which can be combined with many existing analytic models. We demonstrate how to implement the clustered bootstrap in combination with existing publication bias assessment techniques like selection models, PET-PEESE, trim-and-fill, or kinked meta-regression. After providing a brief introduction to the theory of bootstrapping, we will develop and demonstrate example code using existing R packages, including `boot` and `metafor`. Time permitting, we will also share findings from ongoing methodological studies on the performance of clustered bootstrap selection models.
Quality assessment / critical appraisal, Quantitative analysis / synthesis (including meta-analysis)Theoretical framework / proposed process or concept, Combination of code (chunks or packages) from multiple sources2023talkhttps://youtu.be/9DraJD6QDVs
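A bare-bones sketch of the study-level bootstrap idea: resample whole studies with replacement, refit a bias-adjustment model, and summarise the distribution of estimates. A simple PET-style meta-regression stands in for the selection models discussed in the talk; variable names are placeholders:

library(metafor)

cluster_boot <- function(dat, B = 999) {
  ids <- unique(dat$study)
  replicate(B, {
    # resample whole studies (clusters), keeping all their effect sizes together
    resampled <- do.call(rbind, lapply(sample(ids, replace = TRUE),
                                       function(id) dat[dat$study == id, ]))
    fit <- rma(yi, vi, mods = ~ sqrt(vi), data = resampled)  # PET-style regression
    coef(fit)[1]                                             # bias-adjusted intercept
  })
}

est <- cluster_boot(dat)
quantile(est, c(0.025, 0.975))   # percentile bootstrap confidence interval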
Matt JonesThree challenges from a recent meta-analysis and how I tried to deal with them

Each meta-analysis presents its own set of unique challenges that the meta-analyst must seek ways of dealing with – especially as meta-analysis is increasingly being applied in non-medical fields where study designs and reporting standards are often more diverse. Here, Matt will present some reflections on his experience conducting a meta-analysis of veterinary microbiology studies of the antibiotic use-resistance relationship in beef cattle. In this field, issues such as multiple publications related to the same study, unit of analysis problems, and the use of proportional measures of the outcome present particular challenges. Matt will share his reflections on dealing with these challenges using R in the hope of stimulating discussion around these issues that may help others or himself better deal with them in the future!
Quantitative analysis / synthesis (including meta-analysis)Structured methodology (e.g. critical appraisal tool or data extraction form)2023talkhttps://youtu.be/TtD74wcBrL0
Theodoros EvrenogloumetaCOVID: A web-application for living meta-analyses of Covid-19 trials

COVID-NMA is an international initiative that performs ‘living’ evidence synthesis for all treatments and vaccines used against Covid-19. Through its platform, COVID-NMA provides access to the most up-to-date findings regarding more than 300 treatment comparisons and more than 20 vaccines. The initiative has received recognition from important organizations such as the WHO and Cochrane, and many guideline developers have declared their engagement with the outputs of the platform. However, apart from real-time access to the data, stakeholders also need to investigate the data and the impact of different characteristics on the results, as well as to produce their preferred evidence summaries. To assist them, we developed and made freely available the metaCOVID application. This web application allows the end-users of the COVID-NMA platform and other external researchers to use the most up-to-date database directly and perform meta-analyses tailored to their needs in a user-friendly environment. Users can interact with the data and customize their analysis by clicking one or more of the buttons available in the user interface. Based on their selection, the default analysis can be modified in many ways: (a) type of meta-analysis model, (b) method for heterogeneity estimation, (c) subgroup analysis criteria, (d) exclusion of pre-prints from the analysis, (e) exclusion of studies according to their risk-of-bias status, (f) Hartung-Knapp adjustment for the confidence intervals. Analyses are performed using the R package metafor, and the results are presented through downloadable forest plots. These forest plots are enriched with several study characteristics as well as a risk-of-bias assessment for each study. In summary, metaCOVID offers open access to the most up-to-date database of Covid-19 trials for researchers, clinicians, or guideline developers interested in performing tailored meta-analyses and exploring the impact of certain characteristics on the results.
Stakeholder engagement, Quantitative analysis / synthesis (including meta-analysis), Data visualisation, Updating / living evidence synthesesGraphical user interface (including Shiny apps)2023talkhttps://youtu.be/IoOT2ncVGLI
Lukas RöselerCreating interactive ShinyApps for Meta-Analyses with metaUI

Meta-analyses are based on rich datasets that can be analyzed in numerous ways, and it is unlikely that authors and readers will always agree on the “best ways” to analyze the data. Whether it is the choice of model (e.g., random versus fixed effects), the methods for assessing or adjusting for publication bias (e.g., z-curve, p-curve, robust Bayesian meta-analysis), or the moderators to be included in a meta-regression, disagreements are likely to arise. This can lead to the inclusion of lengthy robustness checks and alternative analyses that are time-consuming and difficult to digest. Here we present metaUI, a new R package that supports researchers in creating an interactive web app that allows readers (and reviewers) to explore meta-analytic datasets in a variety of different ways. Apart from allowing readers (and reviewers) to assess the robustness and trustworthiness of results more comprehensively, metaUI allows others to focus on the results that are most relevant to them, for example by filtering the dataset to a specific group of participants, region, outcome variable, or research method. With the opportunity for users to download the dataset used and to upload alternatives, it will also facilitate the updating of meta-analyses. To date, some researchers have created similar web apps for their meta-analyses that have been well received, yet such apps require substantial time investment and advanced coding skills to build. With metaUI, researchers can get a working app by simply uploading their dataset and tagging key variables, while retaining the flexibility to tailor the display in line with their interests and requirements. In this session, we demonstrate the use of the package by creating interactive apps for illustrative datasets from the psymetadata package and collect initial feedback for further development.
Quantitative analysis / synthesis (including meta-analysis), Data visualisation, Report write-up / documentation / reporting, Updating / living evidence synthesesStructured methodology (e.g. critical appraisal tool or data extraction form), Graphical user interface (including Shiny apps), Code package / library, Template (e.g. HTML web page or markdown file)2023talkhttps://youtu.be/yRmjBBiE2Io
Daniel NobleMaking orchaRd plots for meta-analysis

Classic forest plots are often of limited use in meta-analyses with hundreds of effect sizes. We suggest that a new plot, called an orchard plot, is more useful across a broad array of meta-analytic research because it not only provides aggregated meta-analytic means along with 95% confidence intervals within sub-groups, but also visualises the raw effect size data (scaled by their precision) and 95% prediction intervals. The 95% prediction intervals allow readers to understand the range of effect sizes expected from future studies and are an ideal measure of heterogeneity in a meta-analysis. We give an overview of the functionality of our new R package, orchaRd, and show how it can be used to make orchard plots.
Data visualisationCode package / library2023talkhttps://youtu.be/NqL11El8kwM
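A minimal sketch of producing an orchard plot from a fitted metafor model, with argument names as given in the orchaRd documentation (verify against the current release); the data and moderator are placeholders:

# remotes::install_github("daniel1noble/orchaRd")
library(metafor)
library(orchaRd)

fit <- rma.mv(yi, vi, mods = ~ group - 1, random = ~ 1 | study, data = dat)

orchard_plot(fit, mod = "group", group = "study",
             xlab = "Standardised mean difference (SMD)")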
Georgios SeitidisGraphical tools for visualizing the results of network meta-analysis of multicomponent interventions

Network meta-analysis (NMA) is an established method for assessing the comparative efficacy and safety of competing interventions. It is often the case that we deal with interventions that consist of multiple, possibly interacting, components. Examples of intervention components include characteristics of the intervention, mode (face-to-face, remote, etc.), location (hospital, home, etc.), provider (physician, nurse, etc.), time of communication (synchronous, asynchronous, etc.) and other context-related components. Networks of multicomponent interventions are typically sparse, and classical NMA inference is not straightforward and is prone to confounding. Ideally, we would like to disentangle the effect of each component to find out what works (or does not work). To this end, we propose novel ways of visualizing NMA results, describe their use, and illustrate their application in real-life examples. We developed an R package, viscomp, to produce all the suggested figures.
Quantitative analysis / synthesis (including meta-analysis), Data visualisation, OtherStructured methodology (e.g. critical appraisal tool or data extraction form), Code package / library2023talkhttps://youtu.be/RnGbsmUWx3U
Chris PritchardSignificant updates to the PRISMA2020 package supporting use as an API

The PRISMA2020 flow diagram app was created to provide an easy way to produce flow diagrams compliant with the PRISMA 2020 reporting standards. Over the past year, significant updates have been made to the app, including production of PRISMA-S compliant flow diagrams and an improved way of integrating the app within other tools. This presentation provides an overview of the new features and demonstrates the global impact of this tool.
Data visualisation, Report write-up / documentation / reportingGraphical user interface (including Shiny apps), Code package / library2023talkhttps://youtu.be/oek215rn4uM
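A hedged sketch of scripted use of the PRISMA2020 package: counts are read from the CSV template distributed with the tool, converted with PRISMA_data(), and drawn with PRISMA_flowdiagram(); check the current documentation for exact arguments:

library(PRISMA2020)

csv  <- read.csv("PRISMA.csv")        # template downloadable from the app/repository
data <- PRISMA_data(csv)              # convert counts into the expected structure
plot <- PRISMA_flowdiagram(data, interactive = FALSE)
plot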
Yefeng Yang, Malgorzata Lagisz and Alfredo Sánchez-TójarTest, adjust for and report publication bias

Meta-analyses are essential for summarising cumulative science, but their validity can be compromised by publication bias. Thus, it is essential to test whether publication bias occurs and adjust for its impact when drawing meta-analytic inferences. Large-scale surveys in many fields have shown that meta-analyses often distort the estimated effect size and evidence when no bias correction is made. We have two aims: (1) raising awareness of the importance of performing publication bias tests when conducting a meta-analysis, (2) providing meta-analysts with a tutorial on advanced but easy-to-implement techniques to properly test, adjust for and report publication bias.
2023workshophttps://youtube.com/live/aGp43Ng3QAw?feature=share
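A compact sketch of the kind of easy-to-implement checks the workshop covers, applied to a fitted random-effects model in metafor; dat is a placeholder data frame with effect sizes (yi) and sampling variances (vi):

library(metafor)

fit <- rma(yi, vi, data = dat)

regtest(fit)                                  # Egger-type small-study-effects test
trimfill(fit)                                 # trim-and-fill adjusted estimate
rma(yi, vi, mods = ~ sqrt(vi), data = dat)    # PET: intercept = bias-adjusted effect
rma(yi, vi, mods = ~ vi,       data = dat)    # PEESE variant
funnel(fit)                                   # visual check for asymmetry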
Marc LajeunesseWrangling large teams for research synthesis

Sometimes there is the opportunity to include hundreds of participants in your research synthesis project -- but how do you harness that energy into something consistent? This workshop will provide tips, tricks, and tools for managing large-team research synthesis projects. Topics covered will include: management practices, maintaining consistency, open-access software, and open gaps for development.
2023workshophttps://youtube.com/live/HuaGnIFJnok?feature=share
Matthew PageReporting guidelines to ensure transparency of evidence syntheses: when and how to use them

The potential benefits of evidence syntheses are often not realised because of weaknesses in their reporting. Reporting guidelines, which typically comprise a checklist or explanatory text to guide authors in reporting, are designed to ensure the accuracy, completeness and transparency of research reports. One of the most widely used reporting guidelines for evidence syntheses is the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement, originally published in 2009 and recently updated (to PRISMA 2020). The purpose of this workshop is to introduce attendees to reporting guidelines for evidence syntheses and outline how to use them. The workshop will begin with a brief presentation about the key features of various reporting guidelines for evidence syntheses. We will then ask participants to form small groups (three to five people) and evaluate how well a systematic review on a health-related topic adheres to the PRISMA 2020 checklist. A group discussion will follow to allow participants to share their thoughts on the completeness of reporting of the systematic review, and to assess the level of agreement in assessments. The workshop will conclude with a facilitated, structured discussion to gather feedback on potential tech solutions to improve implementation of reporting guidelines for evidence syntheses.
2023workshophttps://youtube.com/live/FsqU2Xg7hqs?feature=share
Kirsten ZiesemerTesting (semi)-automated de-duplication methods in evidence synthesis

Literature searches for reviews require searching multiple bibliographic databases with overlapping content. Removal of the resulting duplicate references is essential to reduce reviewer workload when screening abstracts for relevance. Moreover, proper removal of duplicate references avoids the unintended removal of eligible studies, limiting potential bias in the literature review. De-duplication is a time- and resource-consuming step of evidence synthesis that (semi-)automation could reduce. The purpose of this interactive workshop is to run through de-duplication methods using R and to discuss and reflect on best practices for (semi-)automated duplicate removal. Based on a systematic literature search and a national de-duplication workshop, we identified several (semi-)automated de-duplication methods. In this interactive workshop we will perform de-duplication on a small available dataset (e.g. 1,000 references from three databases) using R. We will compare methods based on performance against a benchmark dataset (i.e. comparing the number of true positives, false positives, true negatives and false negatives, precision and sensitivity) and on results from the national de-duplication workshop. Ideally, strategies for improving de-duplication procedures using R in evidence synthesis will be formulated during this interactive workshop.
2023workshophttps://youtube.com/live/Sas4XNjlXgg?feature=share
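A deliberately simple base-R illustration of the exact-match step most (semi-)automated pipelines start from: normalise titles, then flag duplicates. Real tools add fuzzy matching on DOIs, authors and years; the file and column names are placeholders:

refs <- read.csv("combined_exports.csv", stringsAsFactors = FALSE)

norm_title <- gsub("[^a-z0-9 ]", "", tolower(refs$title))  # lowercase, strip punctuation
refs$duplicate <- duplicated(norm_title)                   # flag repeat titles

table(refs$duplicate)            # how many records would be removed
deduped <- refs[!refs$duplicate, ]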
Guido Schwarzer and Gerta RückerNetwork meta-analysis using R package netmeta

The aim of this workshop is to make participants familiar with methods for performing frequentist network meta-analysis (NMA) using R package netmeta (Balduzzi et al., 2022). The workshop will be divided into a presentation of about 60 minutes followed by practical exercises with R conducted by the participants. The results of the practicals will be discussed at the end of the session. All examples come from real medical applications.
2023workshophttps://youtube.com/live/A4foA25UylY?feature=share
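A minimal sketch of netmeta's contrast-level interface (one row per pairwise comparison), with invented data:

library(netmeta)

dat <- data.frame(
  studlab = c("S1", "S2", "S3", "S4"),
  treat1  = c("A", "A", "A", "B"),
  treat2  = c("B", "C", "B", "C"),
  TE      = c(0.5, 0.8, 0.4, 0.3),   # treatment effects (e.g. mean differences)
  seTE    = c(0.20, 0.25, 0.18, 0.22)
)

nma <- netmeta(TE, seTE, treat1, treat2, studlab, data = dat, sm = "MD")
netgraph(nma)   # network plot
summary(nma)    # league of relative effects under the common/random-effects models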
Jacqui EalesScreening studies for eligibility in evidence syntheses

A systematic process for deciding which studies to include is a key stage in any evidence synthesis . This workshop will present the principles of screening as transparently and objectively as possible, with worked examples and opportunities to pose questions.
2023workshophttps://youtube.com/live/JoCbc4XcPIs?feature=share
Chris Pritchard and Matt GraingerIntroduction to GitHub

In this introductory workshop you will learn how to create a new project in git using RStudio as well as how to clone and develop existing projects from GitHub in RStudio. We will cover basic terms and features of Git so that you can build git into your own workflows. If you haven’t used Git before, or want a refresher, this workshop will be ideal to help you to be a big part of the #ESMAR Community!
2023workshophttps://youtube.com/live/tjX7F5q73XE?feature=share
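The workshop works interactively in RStudio; the same setup steps can be scripted with the usethis package (a GitHub personal access token is assumed to be configured -- see usethis::gh_token_help()). "owner/repo" is a placeholder:

library(usethis)

create_project("~/my-esmar-project")   # new RStudio project
use_git()                              # initialise a git repository with a first commit
use_github()                           # create and link a GitHub repository

create_from_github("owner/repo")       # or: clone and open an existing GitHub project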
Chris Pritchard and Liz FeltonAdvanced Git & GitHub

In this advanced GitHub workshop, we’ll bring you on a journey from refreshing your knowledge of basic terms through to understanding some of the nuts and bolts of Git. You’ll learn how to use the command line to manipulate git repositories, and even get working on some continuous integration and delivery, so that your changes can be available to the world in real-time. We would need you to have a working knowledge of git and either GitHub or another similar platform; alternatively, this workshop follows on from the “Introduction to GitHub” workshop, so feel free to sign up for both!
2023workshophttps://youtube.com/live/7VNaaTj9qHs?feature=share
Kaitlyn HairIntroduction to R Shiny

Shiny is an R package which allows you to develop interactive web applications without the need to learn HTML, CSS, or JavaScript. This workshop will walk you through the steps required to produce your own Shiny web application, making use of a sample dataset. We will discuss the fundamentals of how Shiny apps work and the concepts of inputs, outputs, and reactivity. We will also discuss how to develop a user interface, and how to customise the layout and aesthetics for different use cases. Finally, we will discuss ways to publish Shiny applications online and share them with the world.
2023workshophttps://youtube.com/live/2mcjWh3ZYS4?feature=share
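The canonical minimal structure such a workshop builds on: a UI with one input and one output, and a server that links them reactively:

library(shiny)

ui <- fluidPage(
  sliderInput("n", "Number of points:", min = 10, max = 500, value = 100),
  plotOutput("scatter")
)

server <- function(input, output, session) {
  output$scatter <- renderPlot({
    plot(rnorm(input$n), rnorm(input$n))   # re-runs whenever input$n changes
  })
}

shinyApp(ui, server)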
Geoff Frampton and Paul WhaleyHow the FEAT framework can help you select study appraisal tools suitable for your systematic review

Critical appraisal is a complex and challenging stage of systematic review. Published systematic reviews vary widely in whether and how they have assessed their included studies, and in how the assessments were applied to inform their conclusions. For example, 85% of SRs in toxicology and environmental health have clear issues with the rigour of the appraisal methods they apply (Menon et al. 2022; Whaley & Roth 2022), and more than half of SRs in the CEEDER environmental management database have conducted no appraisal at all (Pullin et al. 2022). Part of the reason for inconsistency in study assessment is that choosing or adapting appraisal tools is very challenging. Many tools exist, they ask different questions, and they were developed for different contexts. Many appraisal instruments also do not differentiate between risk of bias and other aspects of study validity or “quality”. How is an SR author to choose a tool that is appropriate, or modify a tool so that it successfully supports the appraisal task they need to do? To answer that question, this workshop will present the “FEAT” criteria (Focus, Extent, Application, and Transparency). FEAT is a new general conceptual framework for structuring the critical appraisal of research. It was recently included in the CEE Guidelines and Standards for Evidence Synthesis in Environmental Management (Pullin et al. (eds) 2022, Chapter 7). Participants will be taken through interactive examples of using FEAT in critiquing and modifying appraisal tools for risk-of-bias assessment in a systematic review. Participants will also be able to contribute to the development of a FEAT checklist that will help researchers consistently and transparently assess and modify appraisal tools for use in SRs.
2023workshophttps://youtube.com/live/pcVvPb_oius?feature=share
Claudia KappConsiderations around tools for information retrieval, including text analysis

This panel discussion covers tools and frameworks for building search strings and conducting systematic searching, particularly using novel tools and technologies, such as text analysis.
2023panel discussionhttps://youtube.com/live/NilTm91SEIU?feature=share
Chris PritchardHow do we scale evidence synthesis education and capacity building?

This discussion will focus on how capacity building and training in evidence synthesis can be scaled to ensure communities grow and future syntheses are as rigorous and well conducted as possible.
2023panel discussionhttps://youtube.com/live/bZf_NV7jNG4?feature=share
Mathias Harrer, Yves PlessenControlling for Publication Bias: Challenges & Future Directions

This panel discussion will focus on the challenges of identifying and mitigating for publication bias, and future directions for tools and frameworks in the area.
2023panel discussionhttps://youtube.com/live/HrQe_dEVyAM?feature=share
Matthew GraingerBuilding a community of practice

Join an engaging discussion between evidence synthesis community leaders and experts to find out what communities of practice in evidence synthesis can do for you, and how they can be fostered and nurtured.
2023panel discussionhttps://youtube.com/live/OdTDoynQy90?feature=share
Trevor RileyThe benefits (and challenges) of taking part in a hackathon

This panel discussion will feature previous hackathon participants and organisers to explain what it means to take part in a hackathon and what can be produced collectively.
2023panel discussionhttps://youtu.be/KchCMSdbYus
Matthew GraingerStakeholder engagement and evidence synthesis

Join a fascinating discussion about the importance of stakeholder engagement in evidence synthesis, including real world examples of projects that have engaged with end users, rightsholders and stakeholders from across disciplines.
2023panel discussionhttps://youtube.com/live/H54oxbvFlxs?feature=share
Matthew GraingerThe role of rapid reviews in the R evidence synthesis ecosystem

This panel discussion will focus on what role rapid reviews can play in the evidence synthesis ecosystem, with a special focus on how they fit into the R evidence synthesis landscape.
2023panel discussionhttps://youtube.com/live/iyfsvF8Rw9M?feature=share
Matt Lloyd JonesQ&A with coders - common problems, not so common solutions?

Join this panel discussion for a lively and fun chat about coding - the common problems coders have and how they overcome them!
2023panel discussionhttps://youtube.com/live/zqQIcBfHuVw?feature=share
Emily HennessyBarriers to Open synthesis and how to remove them

This panel discussion focuses on the concept of Open Synthesis - the application of Open Science principles to evidence synthesis. The panellists will discuss what Open Synthesis means and how Open syntheses are in practice today, what challenges and barriers exist to truly Open Synthesis, and how we can break down these barriers to make all syntheses more Open.
2023panel discussionhttps://youtu.be/JxDxyfCfdjA