Drivers and ethical impacts of insufficient validation of antibodies in research

Biddle, M.; Cooper, J.; Blades, K.; Ruddy, D.; Krockow, E. M.; Virk, H.

2026-02-20 · cell biology · bioRxiv
DOI: 10.64898/2026.02.19.706766
Lack of antibody validation by researchers frequently misdirects biomedical research, yet the ethical consequences -- particularly avoidable use of animal and human biological materials -- remain unquantified. Using focus groups (n=12), surveys (n=107), and systematic analysis of 785 publications, we examined how researchers select and validate antibodies, and quantified the downstream ethical costs. Antibody selection is substantially influenced by social factors such as previous laboratory use and peer recommendations, while systematic evaluation of performance characteristics remains limited. From a dataset of 614 antibodies subjected to rigorous characterisation using knockout controls, 97 (15.8%) failed across all tested applications; we systematically searched for publications linked to these antibodies. Within the 760 publications (those where validation status could be confirmed), only 120 (15.8%) presented any validation evidence -- despite 72.0% of surveyed researchers reporting having used at least one recommended validation method. The remaining 640 papers consumed a minimum of 8,064 animal samples and 4,424 human tissue samples using antibodies with demonstrated poor performance, without any evidence confirming fitness for the specific experimental purpose used. Conservative extrapolation suggests millions of animal and human tissue samples have been consumed globally in experiments using antibodies that would fail independent testing. Researchers identified time, cost, and lack of supervisor support as primary barriers, whilst strongly supporting open data sharing, dedicated validation funding, and publisher requirements as solutions. These findings quantify for the first time the ethical costs of inadequate antibody validation and highlight the need for coordinated stakeholder interventions to reduce avoidable biological sample waste in biomedical research.
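The abstract quotes several derived figures; a quick arithmetic sketch (using only the numbers quoted above) confirms they are internally consistent:

```python
# Figures quoted in the abstract; each assertion checks one reported ratio.
failed, characterised = 97, 614   # antibodies failing all tested applications
validated, confirmed = 120, 760   # publications presenting validation evidence

assert round(100 * failed / characterised, 1) == 15.8  # "97 (15.8%)" of 614
assert round(100 * validated / confirmed, 1) == 15.8   # "120 (15.8%)" of 760
assert confirmed - validated == 640                    # "the remaining 640 papers"
print("abstract figures consistent")
```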

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                                                  Papers in training set  Percentile  Probability
 1    PLOS ONE                                                                 4510                    Top 8%      19.7%
 2    PLOS Biology                                                              408                    Top 0.1%    13.2%
 3    eLife                                                                    5422                    Top 5%      10.7%
 4    Scientific Reports                                                       3102                    Top 29%      4.2%
 5    BMC Biology                                                               248                    Top 0.2%     3.8%
      -- 50% of probability mass above this line --
 6    Nature Communications                                                    4913                    Top 42%      3.2%
 7    F1000Research                                                              79                    Top 0.6%     3.2%
 8    Life Science Alliance                                                     263                    Top 0.2%     1.8%
 9    Open Biology                                                               95                    Top 0.9%     1.4%
10    Communications Biology                                                    886                    Top 12%      1.4%
11    Proceedings of the National Academy of Sciences                          2130                    Top 40%      0.9%
12    Scientific Data                                                           174                    Top 2%       0.9%
13    Frontiers in Cellular and Infection Microbiology                           98                    Top 5%       0.8%
14    BMJ Open                                                                  554                    Top 12%      0.8%
15    FEBS Letters                                                               42                    Top 0.3%     0.8%
16    Science                                                                   429                    Top 19%      0.8%
17    Data in Brief                                                              13                    Top 0.6%     0.7%
18    Philosophical Transactions of the Royal Society B: Biological Sciences     53                    Top 2%       0.7%
19    European Respiratory Journal                                               54                    Top 2%       0.7%
20    SLAS Discovery                                                             25                    Top 0.2%     0.7%
21    The Lancet Digital Health                                                  25                    Top 1%       0.7%
22    Thorax                                                                     32                    Top 0.9%     0.7%
23    SLAS Technology                                                            11                    Top 0.4%     0.5%
24    Epidemics                                                                 104                    Top 2%       0.5%
25    PLOS Computational Biology                                               1633                    Top 28%      0.5%
26    Royal Society Open Science                                                193                    Top 6%       0.5%
27    Wellcome Open Research                                                      57                   Top 3%       0.5%
28    Nature                                                                    575                    Top 17%      0.5%
29    PLOS Digital Health                                                        91                    Top 3%       0.5%
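The "top 5 journals account for 50% of the predicted probability mass" claim can be verified with a short sketch over the ranked probabilities listed above (top ten entries shown):

```python
# Ranked journal probabilities from the list above, in percent (top ten).
probs = [19.7, 13.2, 10.7, 4.2, 3.8, 3.2, 3.2, 1.8, 1.4, 1.4]

cumulative = 0.0
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:
        # The fifth-ranked journal is the first to push the total past 50%.
        print(f"rank {rank}: cumulative mass {cumulative:.1f}%")
        break
```

Running this confirms the cutoff falls at rank 5, with 51.6% of the mass accumulated.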