Annals of the American Thoracic Society

Real-world research can use observational or clinical trial designs, in both cases putting emphasis on high external validity, to complement the classical efficacy randomized controlled trials (RCTs) with high internal validity. Real-world research is made necessary by the variety of factors that can play an important role in modulating effectiveness in real life but are often tightly controlled in RCTs, such as comorbidities and concomitant treatments, adherence, inhalation technique, access to care, strength of doctor–caregiver communication, and socio-economic and other organizational factors. Real-world studies belong to two main categories: pragmatic trials and observational studies, which can be prospective or retrospective. Focusing on comparative database observational studies, the process aimed at ensuring high-quality research can be divided into three parts: preparation of research, analyses and reporting, and discussion of results. Key points include a priori planning of data collection and analyses, identification of appropriate database(s), proper outcome definitions, study registration with a commitment to publish, bias minimization through matching and adjustment processes accounting for potential confounders, and sensitivity analyses testing the robustness of results. When these conditions are met, observational database studies can reach a sufficient level of evidence to inform guidelines (i.e., clinical and regulatory decision-making).

The main aim of this paper is to (1) improve the reader’s knowledge and understanding of methodological issues specifically related to comparative observational studies using clinical and administrative datasets and (2) provide checklists (for researchers and reviewers) of key markers of quality for conducting and appraising such studies. Apprehending issues surrounding quality assessment of observational real-world research studies first requires understanding how the methodologies of these studies compare with other types of research designs, especially in terms of studied populations and modalities of care. This introduction is designed to establish the context of real-world research and precedes a discussion of key considerations for the design, performance, and reporting of observational studies, as illustrated by an example provided in the online supplement.

“Real-world” research has been the focus of an increasing number of scientific medical publications in recent years. Comparative effectiveness studies aim to evaluate the relative benefits of different available therapeutic options as used in real clinical practice situations (i.e., in unselected patients receiving usual care). Such studies can use observational or clinical trial designs but in both cases put emphasis on high external validity (1, 2). Their goal is to complement classical efficacy randomized controlled trials (RCTs), with high internal validity, which are required for the registration of treatments (3). In the respiratory field, and more specifically in asthma, it has been shown that the highly selected patient populations recruited to registration RCTs represent less than 5% of the general target patient population (4, 5). Thus, although these efficacy trials are rigorous in design and address important questions regarding the risk/benefit profile of new therapies, their conclusions strictly apply only to the selected population recruited to the trial. In other words, RCT findings are limited in the extent to which they can be extrapolated to reflect the treatment effects achievable at the population level. Outside the strictly controlled environment of the classical RCT, many factors can interfere with a therapeutic option’s potential efficacy. Smokers, for example, are routinely excluded from registration RCTs assessing inhaled corticosteroid therapy in asthma because cigarette smoking is known to diminish inhaled corticosteroid efficacy (6). Other patient characteristics that can affect therapeutic efficacy, and which are controlled for by tight RCT design, include excess weight, the presence of other comorbidities (7–9), concomitant treatments, and certain environmental exposures.
Similarly, in RCTs some clinical management issues that can modulate the efficacy signal of a therapy are addressed through strict control of the extent, nature, and consistency of physician intervention; patient behavior; and the doctor–patient relationship. These issues include adherence, inhalation technique, access to care, strength of doctor–caregiver communication, and socio-economic and other organizational factors (10–12). Thus, to ensure the widest possible generalizability of results, highly controlled RCTs conducted in highly selected populations must be complemented by larger studies performed in target populations (i.e., populations in whom we intend to use the intervention), settings, and durations that mimic the real world (13). The need for such research in asthma has been advocated by several groups in recent years (14, 15). “Real-world” studies belong to two main categories: pragmatic trials and observational studies (16).

Figure 1 depicts a simplified framework proposed by the Respiratory Effectiveness Group (REG) to enable classification of clinical research studies in terms of their general design. This framework is intended to complement the previously proposed PRECIS wheel for describing studies, which proposes 10 trial design domains along a pragmatic-explanatory continuum that includes eligibility criteria, flexibility and practitioner expertise regarding experimental and comparison interventions, follow-up intensity, primary outcome and analysis, participant compliance, and practitioner adherence (17). The REG framework relies on two axes: one describing the type of studied population in relation to the broadest target population and the other describing the “ecology of care” (or management approach) in relation to usual standard of care in the community (18). The position of a study within the framework serves as a description of a study, not as a comment on the quality of evidence it provides. In other words, this framework is intended to be used as a tool for describing the basic characteristics of the study design and population, not as an evidence assessment tool. Multiple studies can be placed relative to each other with respect to their relevance to the general target population, and for each study the appropriate quality assessment tools can be identified. Comprehensive quality assessment tools are available for efficacy trials (e.g., CONSORT Statement) (19) and for their pragmatic counterparts (CONSORT Statement extension) (20), for observational studies in epidemiology (e.g., STROBE statement) (21), and, more specifically, for pharmacoepidemiology and pharmacovigilance studies (EMA-ENCePP checklist for study protocols) (22). Another useful initiative called SPIRIT has recently published recommendations for describing clinical trials protocols (23). 
Quality criteria and minimal dataset requirements for observational studies are also the topic of the UNLOCK initiative (24). For meta-analyses, there is the QUOROM (quality of reporting of meta-analyses) initiative (25) and its successor, PRISMA (preferred reporting items for systematic reviews and meta-analyses) (26).

There is often a perception that observational studies can provide, at best, weak evidence to support treatment recommendations, and efficacy RCTs still represent the top-level evidence in many guideline documents (27–30). However, there are some questions that usual RCTs cannot answer. It is not possible, for example, to have a noninterventional RCT, so questions of “usual care” are best addressed via alternative means. Thus, several guideline developers are including observational studies in their systematic literature reviews (31) and are acknowledging that the quality of evidence from well-designed observational studies may be moderate (or even strong) if the treatment effect is large and the evaluation has accounted for all of the plausible confounders and biases in properly adjusted analyses (32). Accordingly, the GRADE system acknowledges the possibility of upgrading the level of evidence provided by an observational study when all relevant quality criteria are satisfied and the evidence is deemed “overwhelming.” Conversely, failure to meet these criteria will result in a downgrading of the evidence provided (33). One of the first proposals on quality criteria for observational comparative database studies was published in 2001 (34), an update of which is presented in Table 1.

Table 1. Quality criteria for observational database comparative studies

Section Quality Criteria
Background Clear underlying hypotheses and specific research question(s)
Methods  
 Study design Observational comparative effectiveness database study
  Independent steering committee involved in a priori definition of the study methodology (including statistical analysis plan), review of analyses and interpretation of results
  Registration in a public repository with a commitment to publish results
 Database(s) High-quality database(s) with few missing data for measures of interest
  Validation studies
Outcomes Clearly defined primary and secondary outcomes, chosen a priori
  The use of proxy and composite measures is justified and explained.
  The validity of proxy measures has been checked.
 Length of observation Sufficient duration to reliably assess outcomes of interest and long-term treatment effects
 Patients Well-described inclusion and exclusion criteria, reflecting target patients’ characteristics in the real world
 Analyses Study groups are compared at baseline using univariate analyses.
  Avoid biases related to baseline differences using matching and/or adjustments.
  Sensitivity analyses are performed to check the robustness of results.
 Sample size Sample size is calculated based on clear a priori hypotheses regarding the occurrence of outcomes of interest and target effect of studied treatment vs. comparator.
Results Flow chart explaining all exclusions
  Detailed description of patients’ characteristics, including demographics, characteristics of the disease of interest, comorbidities, and concomitant treatments
  If patients are lost to follow-up, their characteristics are compared with those of patients remaining in the analyses.
  Extensive presentation of results obtained in unmatched and matched populations (if matching was performed) using univariate and multivariate, unadjusted and adjusted analyses
  Sensitivity analyses and/or analyses of several databases go in the same direction as primary analyses.
Discussion Summary and interpretation of findings, focusing first on whether they confirm or contradict a priori hypotheses
  Discussion of differences with results of efficacy randomized control trials
  Discussion of possible biases and confounding factors, especially related to the observational nature of the study
  Suggestions for future research to challenge, strengthen, or extend study results
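The sample size criterion in Table 1 can be made concrete with a standard two-proportion calculation. The sketch below is illustrative only: the outcome rates, significance level, and power are hypothetical assumptions, not values from the text.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for comparing two event proportions
    (normal approximation, two-sided test at level alpha)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical a priori hypothesis: the comparator carries a 30% annual
# exacerbation risk, and the studied treatment is expected to reduce it to 25%
n = n_per_group(0.30, 0.25)
```

In practice, database studies often inflate such an estimate further to allow for patients excluded during matching and for the covariates consumed by adjustment.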

Whatever the study design, it is crucial that a study is reported in such a way that the appropriateness and quality of the chosen methodology and the relevance of the results can be assessed by readers (e.g., caregivers, researchers, guideline developers, policy makers, patient associations, journal editors, and reviewers) so they can determine whether (and how) they should use the findings.

The process aimed at ensuring the quality of comparative database observational studies can be divided into three parts: preparation of research, analyses and reporting, and discussion of results (35).

Preparation of Research

Several possible sources of bias have to be kept in mind when preparing an observational database study (36, 37). The main potential limitations of observational studies are (1) selection bias (e.g., confounding by severity or indication when treatments are differentially prescribed depending on the severity of the disease or other uncaptured patient characteristics), (2) information bias (e.g., data that lead to misclassification), (3) recall bias (when assessment of treatment exposure and/or outcomes depends on patients’ or caregivers’ recall; this bias is minimized when prospectively recorded databases are used vs. clinical observation), and (4) detection bias (when an event of interest is less, or more, likely to be captured in one treatment group than in the other) (38).

High-quality observational studies are not the result of “fishing” strategies; akin to RCTs, they have to define, a priori, their hypothesis, objectives, and analysis plan. All research components, including definition of the study population and primary and secondary objectives and endpoints, must be conceived and developed before any analyses are performed. In other words, although observational studies using previously collected data are retrospective by nature, they have to be designed prospectively. This assists in ensuring that all potentially relevant variables that are required to characterize patients are included and that the key outcomes of interest can be assessed.

Once the study question is established, a suitable database should be identified or appropriate prospective data collection should be planned. Candidate database(s) must be assessed to ensure that they contain adequate information on their constitution and enough good-quality data available for analysis from a sufficiently representative population. The UNLOCK initiative was created in 2010 to set minimum dataset requirements for observational therapeutic studies (24). A detailed database extraction and statistical analysis plan must then be prepared (see the “Statistical Issues” section below). Clear definition of planned outcomes is essential within the analysis plan, especially when surrogate and/or combined (i.e., composite) endpoints are used. Similarly, the study population and subgroups of interest must be precisely defined. To decrease the risk of bias, possible confounders have to be identified and accounted for appropriately by matching and/or adjustment strategies (see Statistical Issues below) (39). Mimicking therapeutic interventional trials, well-designed observational effectiveness studies should also define an index event (e.g., treatment initiation or change) that can be reliably identified in the database. A dedicated independent steering committee should be set up to guide these steps and ensure that no step is unduly influenced by any party (e.g., by an interest group or sponsor).

The preparation process for observational studies should include the registration and, if possible, the publication of a study protocol in a public repository, with a commitment to publish regardless of the results.

Conduct of Analyses and Reporting of Results

As with research planning, there are a number of features of observational study analyses that can be used to improve the quality and robustness of the findings. It is of value to demonstrate the robustness of results by (1) exploring whether the studied database population is representative of the target patients, (2) establishing the consistency of results through sensitivity analyses and across relevant patient subgroups, and (3) demonstrating their reproducibility in different datasets where similar criteria have been used to define the target populations, index events, and outcomes (40, 41). For most research questions, another quality marker is the consistent use of the same predefined population for all components of analyses (e.g., effectiveness, tolerance, and medico-economic outcomes), rather than the use of subgroup populations selected to optimize a desired effect size.

The process of results reporting should begin with a flow chart (similar to the conventional CONSORT diagram) that allows readers and reviewers to follow the patient selection process used and to understand the characteristics of excluded patients and the relative size of the included/excluded populations. The demographic and medical characteristics of the studied population have to be described in detail and compared between treatment groups. As far as possible, medical characteristics should include markers of disease severity, comorbid diseases, and concomitant treatments as well as key demographic characteristics. When some of these characteristics differ between groups, patient matching or statistical modeling to adjust for differences may have to be performed (see below).

The results of all analyses conducted (e.g., matched, unmatched, adjusted, and unadjusted) should be reported. Presentation of the unadjusted results helps to demonstrate the robustness of the chosen method of analysis; matched or adjusted results that differ substantially from the unmatched/unadjusted results can reduce confidence in the matched/adjusted trends observed.

Discussion of Findings

Given the nature of observational comparative studies, the discussion has to address the specific aspects of the study design. First, the results have to be considered from the perspective of the initial hypotheses before being viewed from a broader perspective. In other words, before listing all possible interpretations of results, one has to determine whether these data confirm or contradict the underlying study hypothesis because this is the only question the data may confidently answer. Second, the results of observational database effectiveness studies should be set in context by comparing them with those of efficacy RCTs on the same topic. When trying to interpret any difference that may have been observed, the first plausible explanation that needs to be considered is the presence of some potential bias in the observational study that has not been adequately addressed by the study design (e.g., matching and/or adjusting). The authors then have to present the rationale for their analysis approach and discuss whether they feel it successfully reduced the risk of bias. Limitations of the study must also be acknowledged. Finally, conclusions should be qualified with a note about the level of confidence that readers should have in the reliability, robustness, and generalizability of results (i.e., the level of evidence provided by their study), and new studies should be suggested to challenge, strengthen, or extend the conclusions.

Due to the lack of randomization in observational effectiveness studies, there is a greater likelihood of confounders, either observed or hidden, that can introduce bias into any detected treatment difference and that require a great deal of thought when planning the analysis. Methods to reduce the risk of bias include adjusted analyses and matching processes. This article does not intend to provide a comprehensive statistical review on this topic. Readers will find useful information on bias reduction in, for example, a Task Force report from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) that addresses good research practices for comparative effectiveness research, focusing on nonrandomized studies of treatment effects using secondary data sources (37, 42).

Adjusted Analysis

With this approach, baseline data are summarized, and an unadjusted analysis is performed to identify any baseline covariates that may influence the final results using standard statistical methods (see online supplement). A conservative approach is then to consider covariates where the P value is less than, for example, 0.10; such a threshold is arbitrary and varies between studies. Any known predictive variables should also be considered in the adjusted analysis, even if they do not meet the specified criterion of P < 0.10 in univariate analysis. In addition to identifying confounding baseline covariates/predictors, correlation between these variables should be explored and identified before the adjusted analysis. After a thorough baseline assessment, looking for any imbalance in the data, an adjusted outcomes analysis can be performed using statistical techniques appropriate for binary outcomes, ordered categorical data, count data, and “survival” data (see online supplement).
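The two-step approach described above can be sketched on synthetic data. This is a minimal illustration, not the authors' procedure: all variable names, effect sizes, and the simulated confounding-by-severity structure are hypothetical assumptions, and only the liberal P < 0.10 screening threshold comes from the text.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

# Synthetic cohort (names and effect sizes are hypothetical, for illustration)
rng = np.random.default_rng(0)
n = 5000
age = rng.normal(50.0, 12.0, n)
severity = rng.normal(2.0, 0.8, n)          # baseline disease-severity score
smoker = rng.binomial(1, 0.25, n)
# Confounding by severity: sicker patients more often receive the treatment
treated = rng.binomial(1, 1 / (1 + np.exp(-0.8 * (severity - 2.0))))
# Outcome (e.g., exacerbation) driven by severity and smoking, plus a true
# treatment benefit of -0.5 on the logit scale
logit = -2.0 + 0.9 * (severity - 2.0) + 0.6 * smoker - 0.5 * treated
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Step 1: univariate screening of baseline covariates between treatment
# groups, retained at a deliberately liberal threshold (here P < 0.10)
covariates = {"age": age, "severity": severity, "smoker": smoker}
selected = []
for name, x in covariates.items():
    if set(np.unique(x)) <= {0, 1}:         # binary covariate: chi-square test
        table = np.array([[np.sum((x == a) & (treated == b)) for b in (0, 1)]
                          for a in (0, 1)])
        p = stats.chi2_contingency(table)[1]
    else:                                   # continuous covariate: t test
        p = stats.ttest_ind(x[treated == 1], x[treated == 0])[1]
    if p < 0.10:
        selected.append(name)

# Step 2: adjusted analysis, modelling the outcome on treatment plus the
# retained covariates (known predictors would be forced in regardless of P)
X = np.column_stack([treated] + [covariates[c] for c in selected])
fit = LogisticRegression(max_iter=1000).fit(X, outcome)
adjusted_or = float(np.exp(fit.coef_[0][0]))  # adjusted odds ratio for treatment
```

In this simulation the imbalanced severity marker is retained by screening, and the adjusted odds ratio recovers the protective treatment effect that confounding by severity partially masks in the crude comparison.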

Matching

If the differences are too great to apply adjusted analyses alone, matching should be considered as an additional tool for ensuring similarity of patients based on key demographic characteristics and markers of disease severity. This can be done using propensity scores, where patients are assigned a score based on their baseline profile and matched to other patients with a similar score (43). Alternatively, it can be approached by matching individual patients using a predefined set of key matching criteria (44). Both of these processes require close liaison between medical experts and statisticians to agree on suitable criteria for matching. If applied, correct matching is essential to subsequent analysis of the data, and time must be taken to get this right.
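A propensity score matching pass might be sketched as follows on synthetic data. This is one common recipe (greedy 1:1 nearest-neighbour matching without replacement, with a caliper of 0.2 SD on the logit of the score), not the only valid one; the cohort, covariates, and effect sizes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic cohort; variable names and effect sizes are hypothetical
rng = np.random.default_rng(1)
n = 3000
severity = rng.normal(2.0, 0.8, n)          # baseline disease-severity score
age = rng.normal(55.0, 10.0, n)
# Confounding by severity: sicker patients more often receive the treatment
treated = rng.binomial(1, 1 / (1 + np.exp(-1.2 * (severity - 2.0))))

# Propensity model fitted on baseline covariates only (never on outcomes)
X = np.column_stack([severity, age])
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the logit of the propensity score,
# without replacement, within a caliper of 0.2 SD (a common rule of thumb)
logit_ps = np.log(ps / (1 - ps))
caliper = 0.2 * logit_ps.std()
control_pool = list(np.where(treated == 0)[0])
pairs = []
for t in np.where(treated == 1)[0]:
    if not control_pool:
        break
    d = np.abs(logit_ps[control_pool] - logit_ps[t])
    j = int(np.argmin(d))
    if d[j] <= caliper:
        pairs.append((t, control_pool.pop(j)))

matched_t = np.array([a for a, _ in pairs])
matched_c = np.array([b for _, b in pairs])
# Residual imbalance in the key severity marker, before vs. after matching
gap_before = abs(severity[treated == 1].mean() - severity[treated == 0].mean())
gap_after = abs(severity[matched_t].mean() - severity[matched_c].mean())
```

Balance on every matching covariate should then be re-checked in the matched sample (e.g., with standardized differences), exactly as the unmatched baseline comparison was reported.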

Even after careful matching and adjustment, two main sources of bias can persist. The first corresponds to unknown potential confounders that may not have been captured. The second relates to variables whose effects may vary over time (i.e., time-dependent covariates). Any suspected time-dependent covariate can be explored by plotting outcomes over time and introducing a time interaction term into the model. For survival analysis, time dependence can be identified by testing for departures from the proportional hazards assumption. If time-dependent variables are identified, further subanalyses are required to limit or adjust for their influence.
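The time interaction term mentioned above can be illustrated with a small simulation. This sketch is hypothetical throughout: it invents a treatment benefit that wanes in a later follow-up period and shows how an interaction coefficient exposes it; in a survival model the analogous check is a proportional hazards test.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data; all effect sizes are hypothetical
rng = np.random.default_rng(2)
n = 4000
treated = rng.binomial(1, 0.5, n)
period = rng.binomial(1, 0.5, n)    # 0 = early follow-up, 1 = late follow-up
# A treatment benefit that is present early but wanes in the late period
logit = -1.5 - 0.8 * treated * (1 - period)
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Model with a treatment-by-time interaction term
X = np.column_stack([treated, period, treated * period])
fit = LogisticRegression(max_iter=1000).fit(X, outcome)
b_treated, b_period, b_interaction = fit.coef_[0]
# A clearly nonzero interaction coefficient flags time dependence; if found,
# the text's advice is to run subanalyses (e.g., by period) to adjust for it
```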

Comprehensive assessment of treatments and therapeutic strategies requires evaluation of their efficacy under optimal conditions (clinical trials) and their effectiveness in more naturalistic, real-world situations. Observational studies and pragmatic trials are not designed to replace or oppose RCTs but to complement them and provide new insights into the use and outcomes possible with available therapies when used in a non-RCT population and/or follow-up setting. Although observational studies will never be able to achieve the high internal validity of a registration RCT, when the analyses are properly performed to minimize potential confounders (as discussed above), they can provide useful complementary data, helping to answer questions that RCTs do not or are unable to address (for reasons of feasibility, ethics, or affordability).

The goal should be to achieve a more integrated approach to evidence evaluation that complements data of high internal validity (classical RCTs) with those of greater external validity (pragmatic trials and observational studies) to inform clinical decision making, guidance, and policy. Before the results of different study designs can be integrated into clinical practice, guidelines, organization of care, or new research projects, the reliability and generalizability of their results must be determined. Once a study has been characterized in terms of the generalizability of its ecology of care and study population (Figure 1), its reliability can be assessed in more depth using design-specific tools.

Further work is required in this area to turn the well-intended calls for better integration of different study approaches into meaningful action. A systematic review of the existing respiratory guidelines is required to identify where real-world studies can add useful complementary data (e.g., management of smokers with asthma, of patients with comorbid conditions, etc.). There is also a need to test the REG’s integrated research framework, to apply it to published research, and to use it to critically appraise the quality of the existing real-world evidence base. Thereby, high-quality studies could be identified to help address the limitations of current guidelines. These types of activities are planned by the REG as part of a structured initiative to move the field of real-world respiratory research forward, establishing standards, improving quality, and working toward better integration of high-quality data into clinical guidance, decision making, and policy development. Until these systematic reviews are complete, this paper seeks to bring together the various challenges and considerations faced by those conducting and reviewing observational research and to provide useful checklists of key quality markers for observational research. The checklists are not absolute and are not directly applicable to all observational studies. They should be used as guidance, such that their principles of a priori planning (definition of the study design, target population, outcomes of interest, and appropriate analyses that take potential confounders into account) and transparency (preregistration and commitment to publish) are embraced by all those seeking to conduct high-quality observational research and recognized by those appraising it.

1. Krishnan JA, Schatz M, Apter AJ. A call for action: comparative effectiveness research in asthma. J Allergy Clin Immunol 2011;127:123–127.
2. Travers J, Marsh S, Williams M, Weatherall M, Caldwell B, Shirtcliffe P, Aldington S, Beasley R. External validity of randomised controlled trials in asthma: to whom do the results of the trials apply? Thorax 2007;62:219–223.
3. Price D, Chisholm A, van der Molen T, Roche N, Hillyer EV, Bousquet J. Reassessing the evidence hierarchy in asthma: evaluating comparative effectiveness. Curr Allergy Asthma Rep 2011;11:526–538.
4. Herland K, Akselsen J-P, Skjønsberg OH, Bjermer L. How representative are clinical study patients with asthma or COPD for a larger “real life” population of patients with obstructive lung disease? Respir Med 2005;99:11–19.
5. Costa DJ, Amouyal M, Lambert P, Ryan D, Schünemann HJ, Daures JP, Bousquet J, Bousquet PJ, Languedoc-Roussillon Teaching General Practitioners Group. How representative are clinical study patients with allergic rhinitis in primary care? J Allergy Clin Immunol 2011;127:920–926.e1.
6. Chalmers GW, Macleod KJ, Little SA, Thomson LJ, McSharry CP, Thomson NC. Influence of cigarette smoking on inhaled corticosteroid treatment in mild asthma. Thorax 2002;57:226–230.
7. Peters-Golden M, Swern A, Bird SS, Hustad CM, Grant E, Edelman JM. Influence of body mass index on the response to asthma controller agents. Eur Respir J 2006;27:495–503.
8. Price DB, Swern A, Tozzi CA, Philip G, Polos P. Effect of montelukast on lung function in asthma patients with allergic rhinitis: analysis from the COMPACT trial. Allergy 2006;61:737–742.
9. Thomas M. Allergic rhinitis: evidence for impact on asthma. BMC Pulm Med 2006;6:S4.
10. Molimard M, Raherison C, Lignot S, Depont F, Abouelfath A, Moore N. Assessment of handling of inhaler devices in real life: an observational study in 3811 patients in primary care. J Aerosol Med 2003;16:249–254.
11. Giraud V, Allaert F-A, Roche N. Inhaler technique and asthma: feasability and acceptability of training by pharmacists. Respir Med 2011;105:1815–1822.
12. Price D, Bjermer L, Haughney J, Roche N, Bousquet J, Hillyer EV, Chisholm A. Real-life asthma strategies: the missing piece in the jigsaw. Treatment Strategies 2012;3:37–46.
13. Silverman SL. From randomized controlled trials to observational studies. Am J Med 2009;122:114–120.
14. Reddel HK, Taylor DR, Bateman ED, Boulet L-P, Boushey HA, Busse WW, Casale TB, Chanez P, Enright PL, Gibson PG, et al.; American Thoracic Society/European Respiratory Society Task Force on Asthma Control and Exacerbations. An official American Thoracic Society/European Respiratory Society statement: asthma control and exacerbations: standardizing endpoints for clinical asthma trials and clinical practice. Am J Respir Crit Care Med 2009;180:59–99.
15. Holgate S, Bisgaard H, Bjermer L, Haahtela T, Haughney J, Horne R, McIvor A, Palkonen S, Price DB, Thomas M, et al. The Brussels Declaration: the need for change in asthma management. Eur Respir J 2008;32:1433–1442.
16. Berger ML, Dreyer N, Anderson F, Towse A, Sedrakyan A, Normand S-L. Prospective observational studies to assess comparative effectiveness: the ISPOR good research practices task force report. Value Health 2012;15:217–230.
17. Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, Tunis S, Bergel E, Harvey I, Magid DJ, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol 2009;62:464–475.
18. Roche N, Reddel H, Agusti A, Bateman ED, Krishnan JA, Martin R, Papi A, Postma D, Thomas M, Brusselle G, et al.; Respiratory Effectiveness Group. Integrating real-life studies in the global therapeutic research framework. Lancet Respir Med 2013;1:e29–e30.
19. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang T; CONSORT GROUP (Consolidated Standards of Reporting Trials). The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001;134:663–694.
20. Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, Oxman AD, Moher D; CONSORT group; Pragmatic Trials in Healthcare (Practihc) group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
21. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M; STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Ann Intern Med 2007;147:W163–W194.
22. The European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP). Guide on methodological standards in pharmacoepidemiology (revision 1) [accessed 2014 Jan 15]. Available from: http://www.encepp.eu/standards_and_guidances/documents/ENCePPGuideofMethStandardsinPE.pdf
23. Chan A-W, Tetzlaff JM, Altman DG, Dickersin K, Moher D. SPIRIT 2013: new guidance for content of clinical trial protocols. Lancet 2013;381:91–92.
24. Chavannes N, Ställberg B, Lisspers K, Roman M, Moran A, Langhammer A, Crockett A, Cave A, Williams S, Jones R, et al. UNLOCK: Uncovering and Noting Long-term Outcomes in COPD to enhance knowledge. Prim Care Respir J 2010;19:408.
25. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet 1999;354:1896–1900.
26. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009;339:b2700.
27. Gruffydd-Jones K, Loveridge C. The 2010 NICE COPD Guidelines: how do they compare with the GOLD guidelines? Prim Care Respir J 2011;20:199–204.
28. Global Initiative for Asthma report. Global strategy for asthma management and prevention [accessed 2014 Jan 15]. Available from: http://www.ginasthma.com
29. Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management and prevention of chronic obstructive lung disease [accessed 2014 Jan 15]. Available from: http://www.goldcopd.com.
30. ATS/ERS. Standards for the diagnosis and management of patients with COPD [accessed 2014 Jan 15]. Available from: http://www.thoracic.org/clinical/copd/
31. SIGN website [accessed 2014 Jan 15]. Available from: http://www.sign.ac.uk/guidelines/fulltext/50/checklist3.html
32. Guyatt G, Akl EA, Oxman A, Wilson K, Puhan MA, Wilt T, Gutterman D, Woodhead M, Antman EM, Schünemann HJ; ATS/ERS Ad Hoc Committee on Integrating and Coordinating Efforts in COPD Guideline Development. Synthesis, grading, and presentation of evidence in guidelines: article 7 in Integrating and coordinating efforts in COPD guideline development. An official ATS/ERS workshop report. Proc Am Thorac Soc 2012;9:256–261.
33. Schünemann HJ, Oxman AD, Akl EA, Brozek JL, Montori VM, Heffner J, Hill S, Woodhead M, Campos-Outcalt D, Alderson P, et al.; ATS/ERS Ad Hoc Committee on Integrating and Coordinating Efforts in COPD Guideline Development. Moving from evidence to developing recommendations in guidelines: article 11 in Integrating and coordinating efforts in COPD guideline development. An official ATS/ERS workshop report. Proc Am Thorac Soc 2012;9:282–292.
34. Thomas M, Cleland J, Price D. Database studies in asthma pharmacoeconomics: uses, limitations and quality markers. Expert Opin Pharmacother 2003;4:351–358.
35. Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part I. Value Health 2009;12:1044–1052.
36. Dreyer NA, Schneeweiss S, McNeil BJ, Berger ML, Walker AM, Ollendorf DA, Gliklich RE; GRACE Initiative. GRACE principles: recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care 2010;16:467–471.
37. Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report—Part II. Value Health 2009;12:1053–1061.
38. MacMahon S, Collins R. Reliable assessment of the effects of treatment on mortality and major morbidity, II: observational studies. Lancet 2001;357:455–462.
39. Takahashi Y, Nishida Y, Asai S. Utilization of health care databases for pharmacoepidemiology. Eur J Clin Pharmacol 2012;68:123–129.
40. Kemp L, Haughney J, Barnes N, Sims E, von Ziegenweidt J, Hillyer EV, Lee AJ, Chisholm A, Price D. Cost-effectiveness analysis of corticosteroid inhaler devices in primary care asthma management: a real world observational study. Clinicoecon Outcomes Res 2010;2:75–85.
41 . Colice G, Martin RJ, Israel E, Roche N, Barnes N, Burden A, Polos P, Dorinsky P, Hillyer EV, Lee AJ, et al. Asthma outcomes and costs of therapy with extrafine beclomethasone and fluticasone. J Allergy Clin Immunol 2013;132:4554.
42 . Johnson ML, Crown W, Martin BC, Dormuth CR, Siebert U. Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part III. Value Health 2009;12:10621073.
43 . Short PM, Williamson PA, Elder DHJ, Lipworth SIW, Schembri S, Lipworth BJ. The impact of tiotropium on mortality and exacerbations when added to inhaled corticosteroids and long-acting β-agonist therapy in COPD. Chest 2012;141:8186.
44 . Janson C, Larsson K, Lisspers KH, Ställberg B, Stratelis G, Goike H, Jörgensen L, Johansson G. Pneumonia and pneumonia related mortality in patients with COPD treated with fixed combinations of inhaled corticosteroid and long acting β2 agonist: observational matched cohort study (PATHOS). BMJ 2013;346:f3306.
Correspondence and requests for reprints should be addressed to Prof. Nicolas Roche, M.D., Ph.D., Pneumologie et Soins Intensifs Respiratoires, Groupe Hospitalier Cochin, Site Val de Grâce, 4eC, 74 Bd de Port Royal, 75005 Paris, France. E-mail:

Supported by the Respiratory Effectiveness Group, which paid the publication costs.

Author Contributions: All authors contributed to conception, critical revision, and final approval of the manuscript. N.R. wrote the first draft.

This article has an online supplement, which is accessible from this issue's table of contents at www.atsjournals.org.

Author disclosures are available with the text of this article at www.atsjournals.org.

Annals of the American Thoracic Society, Vol. 11, Supplement 2