
Abstract

Advancing scientific discovery requires investigators to embrace research practices that increase transparency and disclosure about materials, methods, and outcomes. Several research advocacy and funding organizations have produced guidelines and recommended practices to enhance reproducibility through detailed and rigorous research approaches; however, confusion over vocabulary and slow adoption of the suggested practices have stymied implementation. Although reproducibility of research findings cannot be guaranteed, given the many variables inherent in any attempt at experimental repetition, the scientific community can advocate for generalizability in the application of data outcomes, so that comparative findings in animals translate broadly and effectively to human research. This review draws on work with National Institutes of Health advisory groups to suggest ways of improving rigor and transparency in animal research through experimental design, statistical assessment, and reporting practices that promote the generalizability of comparative outcomes between animals and humans.

DOI: 10.1146/annurev-animal-021022-043531 · 2024-02-15 · 2024-04-29

  • Article Type: Review Article