A framework for evaluating edited cell libraries created by massively parallel genome engineering

Cawley, S.; Abbate, E.; Abraham, C. G.; Alvarez, S.; Barber, M.; Bolte, S.; Bruand, J.; Church, D. M.; Davis, C.; Estes, M.; Federowicz, S.; Fox, R.; Gander, M. W.; Garst, A. D.; Gencer, G.; Hardenbol, P.; Hraha, T.; Jain, S.; Johnson, C.; Juneau, K.; Krishnamurthy, N.; Lambert, S.; Leland, B.; Pearson, F.; Ray, J. C. J.; Sanada, C. D.; Shaver, T. M.; Shepherd, T. R.; Spindler, E. C.; Struble, C. A.; Swat, M. H.; Tanner, S.; Tian, T.; Wishart, K.; Graige, M. S.

2021-09-23 · genomics · bioRxiv · doi:10.1101/2021.09.23.458228
Genome engineering methodologies are transforming biological research and discovery. Approaches based on CRISPR technology have been broadly adopted and there is growing interest in the generation of massively parallel edited cell libraries. Comparing the libraries generated by these varying approaches is challenging and researchers lack a common framework for defining and assessing the characteristics of these libraries. Here we describe a framework for evaluating massively parallel libraries of edited genomes based on established methods for sampling complex populations. We define specific attributes and metrics that are informative for describing a complex cell library and provide examples for estimating these values. We also connect this analysis to generic phenotyping approaches, using either pooled (typically via a selection assay) or isolate (often referred to as screening) phenotyping approaches. We approach this from the context of creating massively parallel, precisely edited libraries with one edit per cell, though the approach holds for other types of modifications, including libraries containing multiple edits per cell (combinatorial editing). This framework is a critical component for evaluating and comparing new technologies as well as understanding how a massively parallel edited cell library will perform in a given phenotyping approach.
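The abstract's idea of evaluating an edited cell library with metrics borrowed from sampling complex populations can be sketched in code. The helper below is illustrative only — the function name, the inputs, and the choice of metrics (design coverage, Shannon diversity and evenness, and a Good-Turing estimate of unseen probability mass) are my assumptions, not the paper's actual definitions.

```python
import math

def library_metrics(read_counts, n_designed):
    """Summary statrics for a sampled edited-cell library.

    read_counts -- dict mapping edit/design ID to observed read count
    n_designed  -- number of distinct designs intended in the library
    (Hypothetical helper; metric choices are illustrative.)
    """
    total = sum(read_counts.values())
    freqs = [c / total for c in read_counts.values()]
    observed = len(read_counts)
    # Fraction of intended designs actually detected in the sample.
    coverage = observed / n_designed
    # Shannon diversity and its normalized form (evenness).
    shannon = -sum(p * math.log(p) for p in freqs)
    evenness = shannon / math.log(observed) if observed > 1 else 1.0
    # Good-Turing estimate of unseen probability mass: singletons / total reads.
    singletons = sum(1 for c in read_counts.values() if c == 1)
    unseen_mass = singletons / total
    return {"coverage": coverage, "shannon": shannon,
            "evenness": evenness, "unseen_mass": unseen_mass}
```

For example, a sample with counts {50, 30, 19, 1} over 5 intended designs yields coverage 0.8 and an unseen-mass estimate of 0.01, flagging both a missing design and a rare, possibly under-sampled edit.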

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                              Papers in training set  Percentile  Probability
 1    G3: Genes, Genomes, Genetics                          222                    Top 0.1%    21.9%
 2    G3 Genes|Genomes|Genetics                             351                    Top 0.1%    14.3%
 3    BMC Genomics                                          328                    Top 0.4%     6.1%
 4    NAR Genomics and Bioinformatics                       214                    Top 0.5%     4.2%
 5    Nucleic Acids Research                               1128                    Top 5%       3.8%
----- 50% of probability mass above -----------------------------------------------------------------
 6    PLOS ONE                                             4510                    Top 41%      3.5%
 7    Genetics                                              225                    Top 1%       3.5%
 8    PLOS Computational Biology                           1633                    Top 12%      2.5%
 9    Cell Genomics                                         162                    Top 2%       2.5%
10    GENETICS                                              189                    Top 0.5%     2.0%
11    Cell Reports Methods                                  141                    Top 2%       2.0%
12    BMC Bioinformatics                                    383                    Top 4%       1.8%
13    Bioinformatics                                       1061                    Top 7%       1.6%
14    Bioinformatics Advances                               184                    Top 3%       1.6%
15    Scientific Reports                                   3102                    Top 61%      1.6%
16    Frontiers in Genetics                                 197                    Top 5%       1.6%
17    Genome Biology                                        555                    Top 5%       1.3%
18    Nature Biotechnology                                  147                    Top 5%       1.3%
19    Nature Communications                                4913                    Top 57%      1.2%
20    Cell Systems                                          167                    Top 9%       1.2%
21    The CRISPR Journal                                     33                    Top 0.2%     1.2%
22    Computational and Structural Biotechnology Journal    216                    Top 7%       0.9%
23    G3                                                     33                    Top 0.4%     0.9%
24    Biology Methods and Protocols                          53                    Top 2%       0.9%
25    GigaScience                                           172                    Top 3%       0.8%
26    Genome Research                                       409                    Top 4%       0.7%
27    Life Science Alliance                                 263                    Top 2%       0.7%
28    Proceedings of the National Academy of Sciences      2130                    Top 46%      0.7%
29    PeerJ                                                 261                    Top 18%      0.6%
30    mSystems                                              361                    Top 8%       0.6%
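The "50% of probability mass" cutoff above can be reproduced by accumulating the ranked probabilities until the threshold is crossed. A minimal sketch (the function name and signature are my own, not part of the tool):

```python
def mass_cutoff(probs, threshold=0.5):
    """Return the number of top-ranked probabilities whose cumulative
    sum first reaches the given threshold (hypothetical helper)."""
    cum = 0.0
    # Sort descending so we accumulate from the highest-probability item.
    for i, p in enumerate(sorted(probs, reverse=True), start=1):
        cum += p
        if cum >= threshold:
            return i
    # Threshold never reached: every item is needed.
    return len(probs)
```

With the listed values, the first five entries sum to 21.9 + 14.3 + 6.1 + 4.2 + 3.8 = 50.3%, so the cutoff lands after rank 5, matching the divider in the table.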