Comparing methods for accounting for seasonal variability in a biomarker when only a single sample is available: insights from simulations based on serum 25-hydroxyvitamin D

Am J Epidemiol. 2009 Jul 1;170(1):88-94. doi: 10.1093/aje/kwp086. Epub 2009 Apr 30.

Abstract

In biomarker-disease association studies, the long-term average level of a biomarker is often considered the optimal measure of exposure. Long-term average levels may not be accurately measured from a single sample, however, because of systematic temporal variation. For example, serum 25-hydroxyvitamin D (25(OH)D) concentrations may fluctuate because of seasonal variation in sun exposure. Association studies of 25(OH)D and cancer risk have used different strategies to minimize bias from such seasonal variation, including adjusting for date of sample collection (DOSC), often after matching on DOSC, and/or using season-specific cutpoints to assign subjects to exposure categories. To evaluate and understand the impact of such strategies on potential bias, the authors simulated a population in which 25(OH)D levels varied between individuals and by season, and disease risk was determined by long-term average 25(OH)D. Ignoring temporal variation resulted in bias toward the null. When cutpoints that did not account for DOSC were used, adjustment for DOSC sometimes resulted in bias away from the null. Using season- or month-specific cutpoints reduced bias toward the null and did not cause bias away from the null. To avoid potential bias away from the null, using season- or month-specific cutpoints may be preferable to adjusting for DOSC.
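To make the simulation logic concrete, the sketch below is a minimal illustration (not the authors' actual code) of the setup described in the abstract: each individual's long-term average 25(OH)D drives disease risk, but only a single measurement contaminated by a seasonal component is observed, and exposure quartiles are formed either from pooled cutpoints or from month-specific cutpoints. All numerical parameters (means, seasonal amplitude, logistic coefficients) are illustrative assumptions.

```python
# Illustrative sketch of the simulation design, under assumed parameter values.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Long-term average 25(OH)D (nmol/L), varying between individuals.
long_term = rng.normal(60, 20, n)

# Single measurement: long-term average plus a sinusoidal seasonal component
# (peaking in summer) plus measurement noise, at a random date of collection.
day = rng.integers(0, 365, n)                       # day of sample collection
seasonal = 15 * np.sin(2 * np.pi * (day - 80) / 365)
measured = long_term + seasonal + rng.normal(0, 5, n)

# Disease risk is determined by the LONG-TERM average, not the single sample.
logit = -3.0 - 0.03 * (long_term - 60)              # lower 25(OH)D -> higher risk
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def or_low_vs_high(exposure, disease, groups=None):
    """Odds ratio for lowest vs. highest quartile of the exposure measure,
    with quartile cutpoints computed within each group (e.g. month)."""
    if groups is None:
        groups = np.zeros(len(exposure), dtype=int)  # pooled cutpoints
    quartile = np.empty(len(exposure), dtype=int)
    for g in np.unique(groups):
        m = groups == g
        cuts = np.quantile(exposure[m], [0.25, 0.5, 0.75])
        quartile[m] = np.searchsorted(cuts, exposure[m])
    low, high = quartile == 0, quartile == 3
    a, b = disease[low].sum(), (disease[low] == 0).sum()
    c, d = disease[high].sum(), (disease[high] == 0).sum()
    return (a * d) / (b * c)

month = day // 30                                    # crude month indicator
print("Quartiles of long-term average (reference):",
      round(or_low_vs_high(long_term, disease), 2))
print("Single sample, pooled cutpoints:           ",
      round(or_low_vs_high(measured, disease), 2))
print("Single sample, month-specific cutpoints:   ",
      round(or_low_vs_high(measured, disease, month), 2))
```

Under these assumptions, classifying subjects with pooled cutpoints attenuates the odds ratio relative to the reference based on the long-term average, while month-specific cutpoints remove most of the seasonal component from the ranking and recover an estimate closer to the reference, qualitatively matching the pattern reported in the abstract.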

Publication types

  • Comparative Study

MeSH terms

  • Biomarkers / blood*
  • Humans
  • Incidence
  • Models, Theoretical*
  • Odds Ratio
  • Risk Factors
  • Seasons*
  • Vitamin D / analogs & derivatives*
  • Vitamin D / blood
  • Vitamin D Deficiency / blood*
  • Vitamin D Deficiency / epidemiology

Substances

  • Biomarkers
  • Vitamin D
  • 25-hydroxyvitamin D