Introduction

This is an R Notebook, prepared using R Markdown so that you can read along and interact with the code in the boxes. When you conduct this yourself you will likely want to use RStudio; there, the code in the boxes is used in the same way, just without the extra explanations I have written for teaching purposes.

We will not spend much time talking about how R works in general; instead we will focus on what you need to do specifically to run meta-analyses. If you are interested you can find out more here: https://cran.r-project.org/doc/manuals/R-intro.pdf

Before we start you can see how R code works below.

First let’s ask R to do something simple: multiply 3 by 7. You can execute the code by pressing the green arrow or Ctrl+Shift+Enter. If you want, you can run an individual line by clicking on it and pressing Ctrl+Enter.

3*7
## [1] 21

So we have our answer, but if we want to store it for later we can save it to an object. Here I have chosen “output” as the name of the object and then asked R to store the result in it with “<-”

output <- 3*7

That is now saved to “output.” We can check this by calling it.

output
## [1] 21

We can also now use that saved object in another calculation, here I want to add ten to the output:

output+10
## [1] 31
#You can also add notes that are not included in the code with "#"
#Packages have extra functions we can use.
#Below I demonstrate how to load a package; if it is not already installed then we need to install it first before we load it.

#Install package
if(!require(tidyverse))
  install.packages("tidyverse")
#Load package
library(tidyverse)

Tidyverse is an incredibly useful collection of packages that allow us to work with data in a tidy format. To find out more about tidy data you can read here: https://r4ds.hadley.nz/

Below I use a feature of the tidyverse package, the pipe “%>%”. This is an incredibly useful tool for chaining lines together. It essentially asks r to apply the outcome of the brackets in the line before the pipe to the brackets in the line after the pipe.

#Here I have created a function that squares the outcome of the brackets, I have assigned this to "square"
square <- function(x) x^2

#I then use the pipe to take the value stored in "output" and add 10 to it, in the next line we then apply the square function we defined earlier and apply it to the outcome
(output+10) %>% 
  square()
## [1] 961

Hopefully that all makes sense. We will not require very complex code for meta-analysis, as the packages cover most of that for us, but it is still useful to understand the basics.

Why Conduct Meta-Analysis and When?

You can view the presentation slides here.

Setting Up

The first thing we want to do is install our packages. As we have already installed tidyverse we don’t need to do it again, but this code will check that for us. meta and metafor are essential for running meta-analyses, and esc allows us to calculate effect sizes.

if(!require(tidyverse))
  install.packages("tidyverse")
if(!require(meta))
  install.packages("meta")
if(!require(metafor))
  install.packages("metafor")
if(!require(esc))
  install.packages("esc")

Then we load the packages.

library(tidyverse)
library(meta)
library(metafor)
library(esc)

Calculating Effect Sizes

You might get lucky and find that all of the studies you have collected report the same effect size, which you can plug straight into a meta-analysis. More likely they will not have reported the effect size you want, or did not report an effect size at all!

In that case we need to do some detective work. Firstly we have to check if we have the data available to calculate effect sizes. There are many web tools that do this and you may want to use these to populate a spreadsheet you import into R; however, you can also do the calculations in R.

Standardised mean difference

Between groups

For comparing the means of two independent groups we can calculate the standardised mean difference (Cohen’s d). To do this we need the following information for the two groups: sample size, mean, and standard deviation. In the code below try adjusting the means, standard deviations and sample sizes to see the effect on the output.

Notice that the direction of the effect size is negative when group 1 is smaller than group 2 and positive when group 1 is bigger.

#This requires esc to be loaded which we did above
esc_mean_sd(grp1m = 20, grp1sd = 4, grp1n = 100,
            grp2m = 10, grp2sd = 4, grp2n = 100)
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: mean and sd to effect size d
##     Effect Size:   2.5000
##  Standard Error:   0.1887
##        Variance:   0.0356
##        Lower CI:   2.1301
##        Upper CI:   2.8699
##          Weight:  28.0702

Try it yourself. If I have a group with m = 2.4, SD = 0.2 and n = 20, and another group with m = 3.5, SD = 0.3 and n = 20, what is the effect size?

esc_mean_sd(grp1m = 0, grp1sd = 0, grp1n = 0,
            grp2m = 0, grp2sd = 0, grp2n = 0)
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: mean and sd to effect size d
##     Effect Size:      NaN
##  Standard Error:      NaN
##        Variance:      NaN
##        Lower CI:      NaN
##        Upper CI:      NaN
##          Weight:      NaN

However, we do not even need to go this far, as long as we have those values in our dataset we will see later that the packages can calculate effect sizes for us.

Within Groups

For repeated measures data within one group, we use the within-group Cohen’s d. For this we do need to calculate the effect size and standard error ourselves, as it will not be done for us. To do this we need the two means, the sample size, the standard deviation at time point 1, and the correlation between the measurements at the two time points.

# We can change the values here
m1 = 5.1
m2 = 4.3
sd1 = 0.3
n = 100
r = 0.5

# This calculates the effect size (d) and standard error (se)
ingroupd <- function() {
  d <- (m2 - m1) / sd1
  se <- sqrt(((2 * (1 - r)) / n) + (d^2 / (2 * n)))
  cat("d =", d, "\n")
  cat("se =", se, "\n")
}
# We can then view the values here
ingroupd()
## d = -2.666667 
## se = 0.2134375

Try changing the values and see what changes.

Try it yourself. If I have the same 80 participants measured at 2 time points with a mean of 23 and standard deviation of 0.8 at time point 1 and a mean of 29 at time point 2 with a correlation strength of 0.6, what are the effect size and standard error?

Note that the formula is m2-m1 meaning that an increased mean from m1 to m2 will lead to a positive effect size.

m1 = 0
m2 = 0
sd1 = 0
n = 0
r = 0

ingroupd <- function() {
  d <- (m2 - m1) / sd1
  se <- sqrt(((2 * (1 - r)) / n) + (d^2 / (2 * n)))
  cat("d =", d, "\n")
  cat("se =", se, "\n")
}

ingroupd()
## d = NaN 
## se = NaN

Hedges’ g

For small sample sizes both types of Cohen’s d can overestimate the true effect size, which means that in meta-analysis small studies may bias the findings. As such, this is usually corrected by calculating Hedges’ g.

To do this we just need Cohen’s d and the sample size.

# We can change these values
cohensd = 0.6
n = 100
# This is the function
g <- hedges_g(cohensd, n)
g
## [1] 0.5953964

Try changing the sample size; note that the smaller the sample size, the smaller Hedges’ g becomes relative to Cohen’s d.
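If you are curious, the adjustment itself is simple. Below is a minimal sketch of the standard small-sample correction factor (correct_d is just an illustrative name, and I am assuming esc applies this common approximation internally; the value matches what hedges_g gave us above):

```r
# Small-sample correction applied to Cohen's d (standard approximation)
correct_d <- function(d, n) d * (1 - 3 / (4 * n - 9))
correct_d(0.6, 100)
## [1] 0.5953964
```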

Ratios

Odds Ratios

To calculate odds ratios from 2 groups we need binary data (often yes/no). From that we need the overall sample size, and then the number of events (actual occurrences of the outcome of interest, often a yes) per group.

#We can change these values
sample1 = 194
grp1yes = 112
sample2 = 187
grp2yes = 82

#This is the function
grp1no=sample1-grp1yes
grp2no=sample2-grp2yes
esc_2x2(grp1yes = grp1yes, grp1no = grp1no,
        grp2yes = grp2yes, grp2no = grp2no,
        es.type = "or")
## 
## Effect Size Calculation for Meta Analysis
## 
##      Conversion: 2x2 table (OR) coefficient to effect size odds ratio
##     Effect Size:   1.7490
##  Standard Error:   0.2070
##        Variance:   0.0428
##        Lower CI:   1.1657
##        Upper CI:   2.6240
##          Weight:  23.3412

Try changing es.type = “or” to es.type = “logit” to calculate log odds ratios rather than odds ratios.
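For reference, the effect size and standard error in the output above follow from the standard 2x2 formulae. A quick manual check (a sketch only; esc_2x2 does this for us):

```r
# Manual check of the odds ratio and its log-scale standard error
g1yes <- 112; g1no <- 194 - 112
g2yes <- 82;  g2no <- 187 - 82
or <- (g1yes * g2no) / (g1no * g2yes)                  # odds ratio
se_logor <- sqrt(1/g1yes + 1/g1no + 1/g2yes + 1/g2no)  # SE of log(OR)
round(c(or = or, se = se_logor), 4)
```

Note that the standard error reported by esc_2x2 is on the log scale, which is why it matches se_logor here.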

Risk Ratios

To calculate risk ratios we need the same information as for odds ratios.

#We can change these values
egevents <- 42
cgevents <- 98    
egsample <- 153
cgsample <- 147

#This is the calculation
riskratio <- (egevents/egsample)/(cgevents/cgsample)
riskratio
## [1] 0.4117647

However, you will likely want the log risk ratio and its standard error, which we can calculate from this:

logrr <- function() {
  logriskratio <- log(riskratio)
  seriskratio <- sqrt((1/egevents) + (1/cgevents) - (1/egsample) - (1/cgsample))
  cat("log risk ratio =", logriskratio, "\n")
  cat("standard error =", seriskratio, "\n")
}

logrr()
## log risk ratio = -0.8873032 
## standard error = 0.1437878

Other designs

We can also conduct meta-analysis using means if we also have standard deviations and sample sizes; proportions if we have the number of events for a group and the overall sample size; and correlations if we have correlation coefficients and sample sizes.
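For correlations in particular, pooling is done on the Fisher’s z scale (the metacor output later notes this transformation). A minimal self-contained sketch with made-up values:

```r
# Fisher's z transformation of a correlation and its standard error
r <- 0.30
n <- 50
z  <- 0.5 * log((1 + r) / (1 - r))  # Fisher's z
se <- 1 / sqrt(n - 3)               # standard error of z
c(z = z, se = se)
```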

Preparing Our Data

There are many ways to prepare our data for meta-analysis. For simplicity we will use an Excel spreadsheet; however, you can input your data in many formats. For more information on importing data to R you can read here: https://cran.r-project.org/doc/manuals/r-release/R-data.html.

The data R requires is based on the type of effect size as discussed above. We will work through several scenarios: continuous data based on means and SDs or from precalculated effect sizes, correlation data and binary data.

Please note that the authors and data are entirely fictional and not based on any real world studies.

#Import Excel Spreadsheets
#We need the readxl package to do this:
library(readxl)

#Continuous Data
url <- "https://calummacgillivray.github.io/assets/files/Continuous_Data.xlsx"
destfile <- "Continuous_Data.xlsx"
curl::curl_download(url, destfile)
contdata <- read_excel(destfile)

#if importing from a file saved on your computer the code is shorter: instead you would do contdata <- read_excel("filepath/here/in/quotes.xlsx")

#View the imported dataset
View(contdata)
#Inspect the variable names
names(contdata)
## [1] "Study"                 "CG_Mean"               "CG_Standard_Deviation"
## [4] "CG_Sample_Size"        "EG_Mean"               "EG_Standard_Deviation"
## [7] "EG_Sample_Size"
#correlation data
url <- "https://calummacgillivray.github.io/assets/files/Correlation_Data.xlsx"
destfile <- "Correlation_Data.xlsx"
curl::curl_download(url, destfile)
cordata <- read_excel(destfile)
View(cordata)
names(cordata)
## [1] "Study"       "Sample_Size" "r"
#binary data
url <- "https://calummacgillivray.github.io/assets/files/Binary_Data.xlsx"
destfile <- "bindata.xlsx"
curl::curl_download(url, destfile)
bindata <- read_excel(destfile)
View(bindata)
names(bindata)
## [1] "Study"          "EG_Events"      "EG_Sample_Size" "CG_Events"     
## [5] "CG_Sample_Size"
#As an unnecessary step, to demonstrate meta-analysis with already calculated effect sizes, we will calculate Hedges' g for the continuous dataset.
contdata_effsize <- pmap_dfr(
  contdata,
  function(EG_Mean, EG_Standard_Deviation, EG_Sample_Size, 
           CG_Mean, CG_Standard_Deviation, CG_Sample_Size, 
           Study)
    esc_mean_sd(grp1m = EG_Mean, grp1sd = EG_Standard_Deviation, grp1n = EG_Sample_Size,
                grp2m = CG_Mean, grp2sd = CG_Standard_Deviation, grp2n = CG_Sample_Size,
                study = Study,
                es.type = "g") %>% 
  as.data.frame())
View(contdata_effsize)
#We will use these datasets later to explore publication bias
url <- "https://calummacgillivray.github.io/assets/files/publication_bias_1.xlsx"
destfile <- "pubbias1.xlsx"
curl::curl_download(url, destfile)
pubbias1 <- read_excel(destfile)
names(pubbias1)
## [1] "Study"          "Effect Size"    "Standard Error"
url <- "https://calummacgillivray.github.io/assets/files/publication_bias_2.xlsx"
destfile <- "pubbias2.xlsx"
curl::curl_download(url, destfile)
pubbias2 <- read_excel(destfile)
names(pubbias2)
## [1] "Study"          "Effect Size"    "Standard Error"

Meta-Analysis of Continuous Data

For our first example we will assume a scenario in which we want to use between-groups Hedges’ g with independent control and experimental groups. This means we either need precalculated Hedges’ g, or the sample size, mean, and standard deviation for each study.

The first thing that will be useful is to see what variables we have in the dataset.

names(contdata)
## [1] "Study"                 "CG_Mean"               "CG_Standard_Deviation"
## [4] "CG_Sample_Size"        "EG_Mean"               "EG_Standard_Deviation"
## [7] "EG_Sample_Size"

For all examples we will use the meta package, which includes intuitively named functions. For this example we will use metacont, as “cont” is short for continuous. The first thing we do is name the object where we will store the results of our analysis; here we will call it “cont_output”. Next we choose the appropriate function for our continuous data, “metacont”.

After this we need to align our variables (on the right) with those expected by the function (on the left). Note that R code is always case sensitive. The function expects sample size (n), mean, and standard deviation (sd) for both experimental (.e) and control (.c) groups; our labels do not match this, so we tell the function what goes where.

We also tell the function what our study labels (studlab) are called; in our dataset they are under “Study”. Finally, we specify the dataset we are using, which we saved earlier as “contdata”.

Under sm we declare whether we are using mean difference (MD) or standardised mean difference (SMD). As we are using SMD we will also need the extra argument method.smd, where we specify which calculation we are using; in this case we will calculate Hedges’ g by specifying “Hedges”.

For this example we will conduct a random effects meta-analysis. To do this we specify that the fixed effect model (known as common in the meta package) is FALSE and random is TRUE. If we wanted to conduct a fixed effect analysis we could set common to TRUE, and if we wanted to run both we can set both common and random to TRUE.

As we discussed previously, when conducting random effects meta-analysis there are various methods of estimating between-study heterogeneity; in this case we will specify the Restricted Maximum Likelihood method (REML). Some other codes are Maximum Likelihood (ML), DerSimonian-Laird (DL), Sidik-Jonkman (SJ), and Paule-Mandel (PM). We must also specify whether we use the Hartung-Knapp (HK) adjustment to calculate confidence intervals, which is often advisable with random effects analyses.

 cont_output <- metacont(n.e = EG_Sample_Size,
                   mean.e = EG_Mean,
                   sd.e = EG_Standard_Deviation,
                   n.c = CG_Sample_Size,
                   mean.c = CG_Mean,
                   sd.c = CG_Standard_Deviation,
                   studlab = Study,
                   data = contdata,
                   sm = "SMD",
                   method.smd = "Hedges",
                   common = FALSE,
                   random = TRUE,
                   method.tau = "REML",
                   method.random.ci = "HK",
                   title = "Example Continuous Data")
 summary(cont_output)
## Review:     Example Continuous Data
## 
##                                  SMD             95%-CI %W(random)
## Zeus et al. (2024)            0.7662 [ 0.2490;  1.2833]        9.8
## Hera (2019)                  -0.4422 [-0.8735; -0.0109]       10.7
## Poseidon et al. (2025)        0.7747 [ 0.2092;  1.3403]        9.3
## Apollo and Artemis (2017)    -0.3251 [-0.6884;  0.0383]       11.3
## Demeter (2024)                0.8952 [ 0.2573;  1.5331]        8.6
## Athena et al. (2019)          0.5412 [ 0.2236;  0.8589]       11.7
## Aphrodite (2023)              0.5884 [ 0.1021;  1.0748]       10.1
## Ares et al. (2025)            0.1627 [-0.2300;  0.5553]       11.0
## Hephaestus and Hermes (2019)  0.7528 [ 0.2187;  1.2869]        9.6
## Dionysus (2022)               0.9284 [ 0.2031;  1.6537]        7.8
## 
## Number of studies: k = 10
## Number of observations: o = 776 (o.e = 388, o.c = 388)
## 
##                         SMD           95%-CI    t p-value
## Random effects model 0.4266 [0.0669; 0.7863] 2.68  0.0251
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0.1960 [0.0600; 0.7652]; tau = 0.4427 [0.2450; 0.8748]
##  I^2 = 78.3% [60.4%; 88.1%]; H = 2.15 [1.59; 2.90]
## 
## Test of heterogeneity:
##      Q d.f.  p-value
##  41.43    9 < 0.0001
## 
## Details of meta-analysis methods:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 9)
## - Hedges' g (bias corrected standardised mean difference; using exact formulae)

Take a moment to look at the output here. We first see the effect sizes and CIs for the individual studies. Further down we see the pooled, weighted effect size and CI across the studies. We can also see the significance test p-value.

Importantly, we also have information regarding heterogeneity: we can see tau^2, I^2 and Cochran’s Q.

Precalculated Effect Sizes

When we want to use precalculated effect sizes we can use the generic function metagen. Helpfully, everything except the data specification remains the same as in the example where the function calculated the effect sizes for us. We will save this to a different object, this time “precalc_cont”. The key difference is that rather than means, Ns and SDs, we assign our effect size to TE and our standard error to seTE.

To do this let’s have a look at our precalculated dataset.

names(contdata_effsize)
## [1] "study"       "es"          "weight"      "sample.size" "se"         
## [6] "var"         "ci.lo"       "ci.hi"       "measure"

We can see that effect size is “es” and standard error is “se”; we should also note that our study label is lowercase this time, “study”.

precalc_cont <- metagen(TE = es,
                 seTE = se,
                 studlab = study,
                 data = contdata_effsize,
                 sm = "SMD",
                 common = FALSE,
                 random = TRUE,
                 method.tau = "REML",
                 method.random.ci = "HK",
                 title = "Example Precalculated Continuous Data")
summary(precalc_cont)
## Review:     Example Precalculated Continuous Data
## 
##                                  SMD             95%-CI %W(random)
## Zeus et al. (2024)            0.7662 [ 0.2497;  1.2827]        9.8
## Hera (2019)                  -0.4422 [-0.8734; -0.0111]       10.6
## Poseidon et al. (2025)        0.7747 [ 0.2101;  1.3394]        9.3
## Apollo and Artemis (2017)    -0.3251 [-0.6884;  0.0382]       11.3
## Demeter (2024)                0.8953 [ 0.2590;  1.5315]        8.6
## Athena et al. (2019)          0.5412 [ 0.2236;  0.8588]       11.7
## Aphrodite (2023)              0.5884 [ 0.1025;  1.0744]       10.1
## Ares et al. (2025)            0.1627 [-0.2300;  0.5553]       11.0
## Hephaestus and Hermes (2019)  0.7528 [ 0.2194;  1.2862]        9.6
## Dionysus (2022)               0.9284 [ 0.2058;  1.6511]        7.8
## 
## Number of studies: k = 10
## 
##                              SMD           95%-CI    t p-value
## Random effects model (HK) 0.4269 [0.0672; 0.7866] 2.68  0.0250
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0.1961 [0.0601; 0.7654]; tau = 0.4428 [0.2453; 0.8749]
##  I^2 = 78.3% [60.5%; 88.1%]; H = 2.15 [1.59; 2.90]
## 
## Test of heterogeneity:
##      Q d.f.  p-value
##  41.50    9 < 0.0001
## 
## Details of meta-analysis methods:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 9)

This result should be nearly identical to our previous output, as the two analyses are substantively the same, just with different methods of inputting the data.

Meta-Analysis of Correlations

We do something similar when conducting meta-analysis of correlation data; however, we require fewer fields: just the sample size (n), correlation coefficient (cor), and label for the study (studlab).

Let’s have a look at the variables we have in our correlation dataset.

names(cordata)
## [1] "Study"       "Sample_Size" "r"

We will conduct a random effects meta-analysis again using similar settings. In the code below I have left the labels for cor, n and studlab blank. Based on the information in the code chunk above, try to fill those in yourself and run the analysis. Please note that code in R is case sensitive.

cor_output <- metacor(cor = r, 
                 n = Sample_Size,
                 studlab = Study,
                 data = cordata,
                 common = FALSE,
                 random = TRUE,
                 method.tau = "REML",
                 method.random.ci = "HK",
                 title = "Example Correlation Data")
summary(cor_output)
## Review:     Example Correlation Data
## 
##                             COR            95%-CI %W(random)
## Ra et al., 2023          0.2100 [-0.0033; 0.4050]        7.6
## Osiris and Sekhmet, 2019 0.3300 [ 0.1602; 0.4808]       10.8
## Horus, 2024              0.1800 [-0.0870; 0.4229]        4.9
## Anubis, 2020             0.2700 [ 0.1364; 0.3940]       18.2
## Seth et al., 2022        0.4200 [ 0.2118; 0.5917]        6.5
## Thoth, 2024              0.3600 [ 0.1709; 0.5236]        8.5
## Hathor and Bastet, 2019  0.2900 [ 0.1304; 0.4350]       12.6
## Geb, 2022                0.1500 [-0.1015; 0.3835]        5.5
## Nephthys, 2018           0.3100 [ 0.1715; 0.4365]       16.3
## Ptah et al., 2025        0.2500 [ 0.0584; 0.4239]        9.1
## 
## Number of studies: k = 10
## Number of observations: o = 1115
## 
##                         COR           95%-CI     t  p-value
## Random effects model 0.2869 [0.2376; 0.3347] 12.62 < 0.0001
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0 [0.0000; 0.0133]; tau = 0 [0.0000; 0.1151]
##  I^2 = 0.0% [0.0%; 62.4%]; H = 1.00 [1.00; 1.63]
## 
## Test of heterogeneity:
##     Q d.f. p-value
##  5.34    9  0.8033
## 
## Details of meta-analysis methods:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 9)
## - Fisher's z transformation of correlations

Meta-Analysis of Binary Data

For binary data we use another similar format, this time with metabin. For demonstration purposes, we will conduct both a random and a fixed effect meta-analysis. You will note that all we need to change to include the fixed effect analysis is common = TRUE. In the example below we also tell the function we are working with risk ratios by setting sm = “RR”. This time we will use the Paule-Mandel estimator for the random effects model, as we are working with binary data; we can do this with method.tau = “PM”.

names(bindata)
## [1] "Study"          "EG_Events"      "EG_Sample_Size" "CG_Events"     
## [5] "CG_Sample_Size"
bin_output <- metabin(event.e = EG_Events, 
                 n.e = EG_Sample_Size,
                 event.c = CG_Events,
                 n.c = CG_Sample_Size,
                 studlab = Study,
                 data = bindata,
                 sm = "RR",
                 common = TRUE,
                 random = TRUE,
                 method.tau = "PM",
                 method.random.ci = "HK",
                 title = "Example Binary Data")
summary(bin_output)
## Review:     Example Binary Data
## 
##                            RR           95%-CI %W(common) %W(random)
## Odin et al., 2022      0.6353 [0.3312; 1.2186]        7.7        7.0
## Thor, 2017             0.6500 [0.3830; 1.1033]       11.4       10.6
## Freyja et al., 2024    0.6818 [0.2843; 1.6354]        4.2        3.9
## Heimdall and Tyr, 2025 0.6562 [0.4140; 1.0402]       15.5       14.0
## Frigg, 2023            0.7371 [0.4147; 1.3101]        8.4        9.0
## Sif et al., 2021       0.7797 [0.4703; 1.2926]       10.4       11.7
## Loki, 2023             0.7619 [0.5157; 1.1257]       17.2       19.5
## Bragi et al., 2018     0.7143 [0.3324; 1.5351]        5.3        5.1
## Freyr, 2022            0.6852 [0.4160; 1.1285]       12.9       12.0
## Mimir et al., 2025     0.8388 [0.4404; 1.5977]        6.9        7.2
## 
## Number of studies: k = 10
## Number of observations: o = 2300 (o.e = 1115, o.c = 1185)
## Number of events: e = 434
## 
##                          RR           95%-CI    z|t  p-value
## Common effect model  0.7122 [0.5993; 0.8465]  -3.85   0.0001
## Random effects model 0.7145 [0.6712; 0.7605] -12.18 < 0.0001
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0; tau = 0; I^2 = 0.0% [0.0%; 62.4%]; H = 1.00 [1.00; 1.63]
## 
## Test of heterogeneity:
##     Q d.f. p-value
##  0.88    9  0.9997
## 
## Details of meta-analysis methods:
## - Mantel-Haenszel method (common effect model)
## - Inverse variance method (random effects model)
## - Paule-Mandel estimator for tau^2
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 9)

Here we can see two lines: the common (fixed) effect model and the random effects model. The pooled risk ratios are quite similar despite the different models. What do you think is a major contributing factor to the similarity?

Other Options

Before we move on it is worth flagging that there are plenty of other meta-analytic methods you can use that we haven’t covered, well documented here: https://doing-meta.guide/.

If you have found the above straightforward then you should have no problem with the metaprop and metamean functions of meta which can be used to pool proportions or means respectively.

If you want to conduct subgroup analyses you can also do this with meta using the subgroup argument.
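As a sketch only (our example data has no grouping column, so "Design" below is a hypothetical variable), a subgroup analysis can be requested by updating a saved model with the meta package's update method:

```r
# Hypothetical: assumes contdata had a grouping column called "Design"
cont_subgroups <- update(cont_output, subgroup = Design)
summary(cont_subgroups)
```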

You may also want to do further reading on other more advanced forms of analysis, such as meta-regression, meta-analysis using multi-level modelling, structural equation modelling, or Bayesian modelling.

Visualisation

To visualise the data, the most common solution is the forest plot. Before we do this, let’s look again at the analysis we will use. Note that we previously saved this to “cont_output”.

summary(cont_output)
## Review:     Example Continuous Data
## 
##                                  SMD             95%-CI %W(random)
## Zeus et al. (2024)            0.7662 [ 0.2490;  1.2833]        9.8
## Hera (2019)                  -0.4422 [-0.8735; -0.0109]       10.7
## Poseidon et al. (2025)        0.7747 [ 0.2092;  1.3403]        9.3
## Apollo and Artemis (2017)    -0.3251 [-0.6884;  0.0383]       11.3
## Demeter (2024)                0.8952 [ 0.2573;  1.5331]        8.6
## Athena et al. (2019)          0.5412 [ 0.2236;  0.8589]       11.7
## Aphrodite (2023)              0.5884 [ 0.1021;  1.0748]       10.1
## Ares et al. (2025)            0.1627 [-0.2300;  0.5553]       11.0
## Hephaestus and Hermes (2019)  0.7528 [ 0.2187;  1.2869]        9.6
## Dionysus (2022)               0.9284 [ 0.2031;  1.6537]        7.8
## 
## Number of studies: k = 10
## Number of observations: o = 776 (o.e = 388, o.c = 388)
## 
##                         SMD           95%-CI    t p-value
## Random effects model 0.4266 [0.0669; 0.7863] 2.68  0.0251
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0.1960 [0.0600; 0.7652]; tau = 0.4427 [0.2450; 0.8748]
##  I^2 = 78.3% [60.4%; 88.1%]; H = 2.15 [1.59; 2.90]
## 
## Test of heterogeneity:
##      Q d.f.  p-value
##  41.43    9 < 0.0001
## 
## Details of meta-analysis methods:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 9)
## - Hedges' g (bias corrected standardised mean difference; using exact formulae)

Producing a forest plot is very straightforward, although if you would like to explore more advanced options have a look here: https://cran.r-project.org/web/packages/forestploter/vignettes/forestploter-intro.html

For now, we will look at the easy way.

meta::forest(cont_output, 
             sortvar = TE,
             prediction = TRUE)

There are a few pre-programmed formats we can use instead, check them out with the code below:

meta::forest(cont_output, layout = "JAMA")

meta::forest(cont_output, layout = "RevMan5")

Saving Forest Plots

There are other formats you can save this in, but the most common tends to be PNG, which we do with the png() function. First we tell R the details of our PNG: the width, height and resolution. It then saves the output of the subsequent code with those details. Finally we use dev.off() to tell R to stop writing to the PNG. We may need to adjust the height and width if the image is cut off.

png("contforestplot.png", width = 4000, height = 2000, res = 300)

meta::forest(cont_output, 
             sortvar = TE,
             prediction = TRUE)

dev.off()
## png 
##   2

We can find this PNG saved in our working directory. If you don’t know where that is, you can use the following code to find the file path. When you are working on your own projects it is a good idea to set a folder for your working directory to keep everything in one place. However, as we are working from a Jupyter notebook in Colab this will instead get saved in the temporary /content directory. You can view this by selecting the folder icon from the menu on the left.

getwd()
## [1] "C:/Users/calum/OneDrive - University of Dundee/PhD/Meta-analysis Workshop"

You can change your working directory when working directly in R with the code setwd(“filepath/here”), swapping filepath/here for your desired file path.

Publication Bias

Funnel Plots

We talked earlier about the theory behind small study bias and why funnel plots can be useful. They are quite easy to produce in R. With few studies these plots are less useful so for demonstration purposes the dataset we will use has 80 studies.

names(pubbias1)
## [1] "Study"          "Effect Size"    "Standard Error"

This dataset includes precalculated effect sizes so we will use metagen for the analysis.

Note that the variable headers contain spaces, which will confuse the code. The best way to deal with this is to change them before you import the data; however, if you do need to refer to a variable with a space you can wrap it in backticks, e.g. `Effect Size`.
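A small self-contained sketch (made-up values, base R) of fixing the names up front by swapping spaces for underscores, after which no backticks are needed:

```r
# A one-row stand-in with the same awkward headers
demo <- data.frame(a = 0.4, b = 0.1)
names(demo) <- c("Effect Size", "Standard Error")
# Replace spaces with underscores in every column name
names(demo) <- gsub(" ", "_", names(demo))
names(demo)
```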

pubbias1_output <- metagen(
                 TE = `Effect Size`,
                 seTE = `Standard Error`,
                 studlab = Study,
                 data = pubbias1,
                 sm = "SMD",
                 common = FALSE,
                 random = TRUE,
                 method.tau = "REML",
                 method.random.ci = "HK",
                 title = "Publication Bias Example 1")
summary(pubbias1_output)
## Review:     Publication Bias Example 1
## 
##              SMD            95%-CI %W(random)
## Study_1   0.4214 [ 0.2447; 0.5982]        2.2
## Study_2   0.2958 [-0.1994; 0.7909]        0.3
## Study_3   0.3625 [ 0.1455; 0.5795]        1.5
## Study_4   0.0904 [-0.4239; 0.6047]        0.3
## Study_5   0.3118 [ 0.1390; 0.4846]        2.3
## Study_6   0.1153 [-0.4505; 0.6811]        0.2
## Study_7   0.3419 [ 0.0409; 0.6430]        0.8
## Study_8   0.4855 [ 0.2754; 0.6956]        1.6
## Study_9   0.6850 [ 0.1743; 1.1957]        0.3
## Study_10  0.3973 [ 0.0649; 0.7297]        0.6
## Study_11  0.3433 [ 0.0426; 0.6441]        0.8
## Study_12  0.4660 [ 0.2775; 0.6545]        2.0
## Study_13  0.3600 [-0.1055; 0.8256]        0.3
## Study_14  0.5356 [ 0.1475; 0.9238]        0.5
## Study_15  0.3658 [-0.0674; 0.7991]        0.4
## Study_16  0.2735 [-0.0651; 0.6121]        0.6
## Study_17  0.4374 [-0.0723; 0.9472]        0.3
## Study_18  0.1577 [-0.4249; 0.7403]        0.2
## Study_19  0.2553 [-0.2705; 0.7811]        0.3
## Study_20  0.1623 [-0.2021; 0.5266]        0.5
## Study_21  0.4356 [-0.1318; 1.0030]        0.2
## Study_22  0.5223 [ 0.2777; 0.7669]        1.2
## Study_23  0.2924 [-0.0473; 0.6320]        0.6
## Study_24  0.3955 [ 0.0743; 0.7167]        0.7
## Study_25  0.4198 [ 0.1206; 0.7190]        0.8
## Study_26  0.3141 [ 0.0541; 0.5740]        1.0
## Study_27  0.4595 [ 0.3430; 0.5759]        5.1
## Study_28  0.6877 [ 0.1604; 1.2151]        0.3
## Study_29  0.8249 [ 0.3762; 1.2735]        0.3
## Study_30 -0.1040 [-0.5506; 0.3426]        0.3
## Study_31  0.4004 [ 0.2083; 0.5925]        1.9
## Study_32  0.5413 [ 0.1759; 0.9067]        0.5
## Study_33  0.3813 [-0.1943; 0.9570]        0.2
## Study_34  0.5510 [ 0.4021; 0.6999]        3.1
## Study_35  0.4007 [ 0.3023; 0.4990]        7.2
## Study_36  0.3961 [-0.1469; 0.9391]        0.2
## Study_37  0.4025 [ 0.1823; 0.6227]        1.4
## Study_38  0.2835 [ 0.1037; 0.4633]        2.2
## Study_39  0.3726 [ 0.2511; 0.4940]        4.7
## Study_40  0.6360 [ 0.2704; 1.0016]        0.5
## Study_41  0.3988 [ 0.2701; 0.5276]        4.2
## Study_42  0.1584 [-0.2799; 0.5968]        0.4
## Study_43  0.6440 [ 0.0820; 1.2061]        0.2
## Study_44  0.3929 [ 0.0683; 0.7175]        0.7
## Study_45  0.4330 [ 0.0068; 0.8593]        0.4
## Study_46 -0.1512 [-0.6959; 0.3935]        0.2
## Study_47  0.4740 [ 0.3261; 0.6218]        3.2
## Study_48  0.4141 [ 0.2384; 0.5897]        2.3
## Study_49  0.0608 [-0.2072; 0.3287]        1.0
## Study_50  0.2548 [-0.2199; 0.7295]        0.3
## Study_51  0.6918 [ 0.2017; 1.1818]        0.3
## Study_52  0.4201 [ 0.2704; 0.5697]        3.1
## Study_53  0.4124 [ 0.1157; 0.7091]        0.8
## Study_54  0.2242 [-0.0037; 0.4522]        1.3
## Study_55  0.7828 [ 0.4382; 1.1273]        0.6
## Study_56  0.5495 [ 0.1843; 0.9148]        0.5
## Study_57  0.0763 [-0.4101; 0.5628]        0.3
## Study_58  0.3632 [ 0.1777; 0.5486]        2.0
## Study_59  0.4974 [ 0.2215; 0.7733]        0.9
## Study_60  0.5450 [ 0.1298; 0.9602]        0.4
## Study_61  0.3186 [ 0.0175; 0.6196]        0.8
## Study_62  0.3943 [ 0.2877; 0.5008]        6.1
## Study_63  0.4007 [ 0.2764; 0.5250]        4.5
## Study_64  0.3596 [-0.1651; 0.8844]        0.3
## Study_65  0.8140 [ 0.3674; 1.2606]        0.3
## Study_66  0.4671 [ 0.0785; 0.8557]        0.5
## Study_67  0.1785 [-0.0646; 0.4216]        1.2
## Study_68  0.2093 [-0.2475; 0.6662]        0.3
## Study_69  0.1126 [-0.2448; 0.4700]        0.5
## Study_70  0.2926 [ 0.0094; 0.5758]        0.9
## Study_71  0.4227 [-0.1058; 0.9511]        0.2
## Study_72  0.3922 [ 0.1287; 0.6556]        1.0
## Study_73  0.3783 [ 0.2683; 0.4883]        5.7
## Study_74  0.5464 [ 0.1504; 0.9423]        0.4
## Study_75  0.4102 [ 0.1405; 0.6798]        1.0
## Study_76  0.3538 [ 0.2007; 0.5070]        3.0
## Study_77  0.1756 [-0.3937; 0.7449]        0.2
## Study_78  0.2082 [-0.2854; 0.7018]        0.3
## Study_79  0.6604 [ 0.2766; 1.0442]        0.5
## Study_80  0.7074 [ 0.4250; 0.9898]        0.9
## 
## Number of studies: k = 80
## 
##                              SMD           95%-CI     t  p-value
## Random effects model (HK) 0.3981 [0.3720; 0.4241] 30.44 < 0.0001
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0 [0.0000; 0.0080]; tau = 0 [0.0000; 0.0892]
##  I^2 = 0.0% [0.0%; 26.9%]; H = 1.00 [1.00; 1.17]
## 
## Test of heterogeneity:
##      Q d.f. p-value
##  74.58   79  0.6198
## 
## Details of meta-analysis methods:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 79)

Now that we have conducted the analysis it is very straightforward to produce a funnel plot. In the brackets we put the object we saved the analysis to, “pubbias1_output”, and the second line adds a title if we want one.

meta::funnel(pubbias1_output)
title("Example Data")

You can see here that the studies are quite evenly spread around the pooled effect, and the plot does not suggest anything particularly odd.
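
For intuition, a funnel plot is simply each study’s effect size plotted against its standard error, with the y-axis reversed so the most precise studies sit at the top. The base-R sketch below uses simulated data (not the pubbias1 dataset) to show the idea:

```r
# Simulated, symmetric "meta-analysis": 80 studies around a true effect of 0.4
set.seed(42)
se <- runif(80, 0.05, 0.5)            # hypothetical standard errors
te <- rnorm(80, mean = 0.4, sd = se)  # effects drawn with no asymmetry

# Plot effect size against standard error, y-axis reversed
plot(te, se, ylim = rev(range(se)),
     xlab = "Standardised Mean Difference", ylab = "Standard Error")
abline(v = mean(te), lty = 2)  # dashed line at the mean effect
```

Because these simulated effects are symmetric around the mean, the points fill out the classic inverted-funnel shape; publication bias would show up as a gap, typically among the small, imprecise studies at the bottom.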

Now let’s look at another dataset.

pubbias2_output <- metagen(
                 TE = `Effect Size`,
                 seTE = `Standard Error`,
                 studlab = Study,
                 data = pubbias2,
                 sm = "SMD",
                 common = FALSE,
                 random = TRUE,
                 method.tau = "REML",
                 method.random.ci = "HK",
                 title = "Publication Bias Example 2")
summary(pubbias2_output)
## Review:     Publication Bias Example 2
## 
##              SMD            95%-CI %W(random)
## Study_1   0.3454 [ 0.0185; 0.6723]        0.8
## Study_2   0.3809 [ 0.1615; 0.6002]        1.7
## Study_3   0.2731 [ 0.1610; 0.3852]        6.5
## Study_4   0.3984 [ 0.1031; 0.6936]        0.9
## Study_5   0.5359 [ 0.0851; 0.9866]        0.4
## Study_6   0.2415 [ 0.0347; 0.4483]        1.9
## Study_7   0.3897 [ 0.1756; 0.6038]        1.8
## Study_8   0.4358 [ 0.1199; 0.7517]        0.8
## Study_9   0.4562 [ 0.1760; 0.7365]        1.0
## Study_10  0.6303 [ 0.0944; 1.1663]        0.3
## Study_11  0.0604 [-0.1769; 0.2977]        1.4
## Study_12  0.3308 [ 0.1008; 0.5607]        1.5
## Study_13  0.6961 [ 0.2454; 1.1468]        0.4
## Study_14  0.2271 [ 0.0299; 0.4243]        2.1
## Study_15 -0.0393 [-0.4850; 0.4063]        0.4
## Study_16  0.3877 [ 0.1072; 0.6682]        1.0
## Study_17  0.7422 [ 0.2314; 1.2531]        0.3
## Study_18  0.1867 [ 0.0299; 0.3435]        3.3
## Study_19  0.3815 [-0.1516; 0.9145]        0.3
## Study_20  0.2344 [ 0.0470; 0.4218]        2.3
## Study_21  0.6216 [ 0.3458; 0.8973]        1.1
## Study_22  0.3029 [ 0.1914; 0.4145]        6.5
## Study_23  0.4530 [ 0.1159; 0.7901]        0.7
## Study_24  0.4459 [ 0.2446; 0.6472]        2.0
## Study_25  0.4406 [ 0.0002; 0.8810]        0.4
## Study_26  0.3221 [-0.2560; 0.9003]        0.2
## Study_27  0.5171 [ 0.1301; 0.9041]        0.5
## Study_28  0.3025 [ 0.1725; 0.4324]        4.8
## Study_29  0.3111 [ 0.1999; 0.4223]        6.6
## Study_30  0.2397 [ 0.0957; 0.3837]        3.9
## Study_31  0.2197 [ 0.0766; 0.3628]        4.0
## Study_32  0.4243 [ 0.1216; 0.7270]        0.9
## Study_33  0.3315 [ 0.1868; 0.4762]        3.9
## Study_34  0.5224 [ 0.1705; 0.8743]        0.7
## Study_35  0.4543 [-0.1073; 1.0159]        0.3
## Study_36  0.4496 [ 0.1370; 0.7623]        0.8
## Study_37  0.4442 [ 0.0558; 0.8325]        0.5
## Study_38  0.3755 [ 0.2640; 0.4870]        6.5
## Study_39  0.4124 [ 0.2566; 0.5682]        3.3
## Study_40  0.2679 [ 0.0997; 0.4361]        2.9
## Study_41  0.4905 [ 0.2480; 0.7331]        1.4
## Study_42  0.4989 [ 0.0459; 0.9519]        0.4
## Study_43  0.4266 [ 0.1007; 0.7525]        0.8
## Study_44  0.2756 [-0.1281; 0.6793]        0.5
## Study_45  0.3387 [ 0.2375; 0.4398]        7.9
## Study_46  0.3607 [ 0.2324; 0.4890]        4.9
## Study_47  0.4140 [ 0.1507; 0.6774]        1.2
## Study_48  0.3138 [ 0.0867; 0.5408]        1.6
## Study_49  0.2985 [ 0.0451; 0.5520]        1.3
## Study_50  0.4611 [ 0.0303; 0.8918]        0.4
## 
## Number of studies: k = 50
## 
##                              SMD           95%-CI     t  p-value
## Random effects model (HK) 0.3302 [0.3026; 0.3578] 24.04 < 0.0001
## 
## Quantifying heterogeneity (with 95%-CIs):
##  tau^2 = 0 [0.0000; 0.0055]; tau = 0 [0.0000; 0.0741]
##  I^2 = 0.0% [0.0%; 33.0%]; H = 1.00 [1.00; 1.22]
## 
## Test of heterogeneity:
##      Q d.f. p-value
##  43.78   49  0.6841
## 
## Details of meta-analysis methods:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Calculation of I^2 based on Q
## - Hartung-Knapp adjustment for random effects model (df = 49)

And then run the funnel plot.

meta::funnel(pubbias2_output)

What do you notice here?

Egger’s Test

To test for small-study effects statistically we can use Egger’s test. Here we use the function metabias, put the saved analysis object in the brackets, and request Egger’s test with method.bias = "linreg".

Have a look at the test for the first dataset.

metabias(pubbias1_output, method.bias = "linreg")
## Review:     Publication Bias Example 1
## 
## Linear regression test of funnel plot asymmetry
## 
## Test result: t = -0.67, df = 78, p-value = 0.5040
## Bias estimate: -0.1499 (SE = 0.2233)
## 
## Details:
## - multiplicative residual heterogeneity variance (tau^2 = 0.9507)
## - predictor: standard error
## - weight:    inverse variance
## - reference: Egger et al. (1997), BMJ
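
For intuition, the "linreg" version of Egger’s test is essentially a weighted regression of effect size on standard error: the coefficient on the standard error is the bias estimate, and its t-test is the test of asymmetry. A hand-rolled sketch with simulated data (not the tutorial datasets):

```r
# Simulated symmetric data, so we expect no evidence of asymmetry
set.seed(123)
se <- runif(40, 0.05, 0.5)            # hypothetical standard errors
te <- rnorm(40, mean = 0.4, sd = se)  # effects drawn with no asymmetry

# Weighted regression of effect size on standard error;
# the "se" coefficient corresponds to the bias estimate
egger <- lm(te ~ se, weights = 1 / se^2)
summary(egger)$coefficients
```

In practice you should use metabias() rather than rolling your own, as it handles details such as the residual heterogeneity variance for you; this sketch is only to show where the numbers come from.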

Now see if you can run the same test but this time for the analysis we saved under pubbias2_output.

You can also use Peters’ test with binary data; find out more here: https://doing-meta.guide/pub-bias#peters-test

Other methods

We won’t have time in this introductory session to cover every method for assessing publication bias; however, you may want to read about other approaches such as p-curve analysis, the trim-and-fill method, and selection models, which are well explained here: https://doing-meta.guide/pub-bias#pub-bias

Further Reading

You can find helpful resources at the end of the presentation, linked to above.