Volume 117, Issue S15 p. 3551-3562
Supplement

Assessing the impact of patient navigation§

Prevention and early detection metrics

Tracy A. Battaglia MD, MPH (Corresponding Author)

Women's Health Unit, Section of General Internal Medicine, Department of Medicine, and Women's Health Interdisciplinary Research Center, Boston University School of Medicine, Boston, Massachusetts

801 Massachusetts Avenue, Crosstown Building, Suite 470, Boston, MA 02118; Fax: (617) 638-8096
Linda Burhansstipanov MSPH, DrPH

Native American Cancer Research Corporation, Pine, Colorado
Samantha S. Murrell MPH

Women's Health Unit, Section of General Internal Medicine, Department of Medicine, and Women's Health Interdisciplinary Research Center, Boston University School of Medicine, Boston, Massachusetts
Andrea J. Dwyer BS

Colorado School of Public Health, University of Colorado Cancer Center, Denver, Colorado
and Sarah E. Caron MPH

Women's Health Unit, Section of General Internal Medicine, Department of Medicine, and Women's Health Interdisciplinary Research Center, Boston University School of Medicine, Boston, Massachusetts
on behalf of The Prevention and Early Detection Workgroup from the National Patient Navigation Leadership Summit
First published: 20 July 2011

This conference and supplement were cosponsored by Pfizer Oncology, Livestrong (Lance Armstrong Foundation), Susan G. Komen for the Cure, the Oncology Nursing Society (ONS), the American College of Surgeons Commission on Cancer, the American Cancer Society, and AstraZeneca.

The articles in this supplement are based on presentations at the “National Patient Navigator Leadership Summit”; March 23-24, 2010; Atlanta, GA.

§ The opinions or views expressed in this supplement are those of the authors, and do not necessarily reflect the opinions or recommendations of the publisher, the editors, the University of Illinois at Chicago, the American Cancer Society, or the Ralph Lauren Center for Cancer Care and Prevention.

National Patient Navigator Leadership Summit (NPNLS): Measuring the Impact and Potential of Patient Navigation, Supplement to Cancer.

Abstract

BACKGROUND.

The lack of comparable metrics to evaluate prevention and early detection patient navigation programs impedes the ability to identify best practices.

METHODS.

The Prevention and Early Detection Workgroup of the Patient Navigation Leadership Summit was charged with making recommendations for common clinical metrics specific to the prevention and early detection phase of the cancer care continuum. The workgroup began with a review of existing literature to characterize variability in published navigation metrics; then developed a list of priority recommendations that would be applicable to the range of navigation settings (clinical, academic, or community-based).

RESULTS.

Recommendations for researchers and program evaluators included the following: 1) Clearly document key program characteristics; 2) Use a set of core data elements to form the basis of your reported metrics; and 3) Prioritize data collection using methods with the least amount of bias.

CONCLUSIONS.

If navigation programs explicitly state the context of their evaluation and choose from among the common set of data elements, meaningful comparisons among existing programs should be feasible. Cancer 2011;117(15 suppl):3551–3562. © 2011 American Cancer Society.

INTRODUCTION

Cancer control begins with primary and secondary prevention efforts, which aim to reduce cancer incidence and advanced disease, respectively. The evidence is clear that certain cancers (those caused by tobacco use, viruses, or sun exposure, for example) can be prevented completely. Regular use of proven screening modalities, such as Pap tests for cervical cancer and colonoscopy for colorectal cancer, also results in prevention through the removal of precancerous lesions. Other screening tests can detect cancers of the breast, colon, rectum, cervix, prostate, oral cavity, and skin at early stages and translate into a direct mortality benefit when abnormal screening is followed by prompt diagnosis and treatment. Mounting evidence suggests that the delivery of prevention and early detection (PED) services is responsible for a substantial portion of the documented reduction in both cancer incidence1 and mortality1, 2 in the United States.

It is also well documented that not all populations benefit equally from these prevention efforts, in part because our current healthcare delivery system does not provide consistent, high-quality care to all.3 Whether defined by age, gender, race, insurance status, geographic location, or comorbid medical condition, certain populations face significant barriers to accessing timely, quality cancer PED services consistently, if at all.4-6 Patient navigation, which targets barriers faced by vulnerable populations in accessing timely, quality cancer care,7 was designed to address the critical disconnect between the discovery and delivery of life-saving cancer care services. In fact, the first patient navigation program was started in Harlem, New York, to increase the delivery of mammography screening to Black women who too often presented with advanced cancer as a result of a lack of screening.8 This groundbreaking work used lay navigators from the local community to help at-risk women overcome barriers to accessing screening and diagnostic services, and it resulted in profound improvements in breast cancer care.8 Since then, a growing number of studies documenting the promise of navigation have resulted in its widespread adoption as a means to deliver PED services.9-15

As navigation becomes integrated into standard cancer care services across the country,16 the lack of comparable metrics to evaluate these programs in different settings with diverse target populations impedes our ability to identify best practices and realize the full potential of this promising intervention. Thus, we aim here to provide recommendations for researchers and program evaluators to consider adopting when measuring the impact of their PED navigation programs. The intent is to facilitate consistent use of priority metrics, including process and intermediate outcome measures, that document the type and quality of work performed by prevention and early detection Patient Navigators (PN) working in diverse settings (clinical, academic, community-based organizations). Through the use of such measures, public health and health reform policies may be generated to provide reimbursement for services that ensure the delivery of timely, quality cancer prevention.

METHODS

In March 2010, the American Cancer Society hosted the first National Patient Navigation Leadership Summit, where it convened cancer clinicians, researchers, and practicing public health experts to develop a national evaluation agenda for patient navigation. The Prevention and Early Detection (PED) Workgroup was charged with making recommendations for common clinical metrics specific to the prevention and early detection phase of the cancer care continuum. The workgroup comprised 10 individuals representing community-based organizations, clinical programs, and academia, with decades of experience implementing and evaluating patient navigation programs across diverse populations. The workgroup began with a review of existing literature to characterize variability in published metrics, then, through discussion and consensus, developed a list of priority recommendations.

In early 2010, the Summit Planning Committee conducted a comprehensive review of the navigation literature to guide discussion at the March meeting. The PED workgroup updated the literature review in October 2010. We searched the PubMed database to identify original articles published in any year, in English, using the key terms “patient navigation,” “patient navigator,” “navigation,” “navigator,” or “case management.” We also searched the references of each publication for additional relevant literature. In keeping with the scope of navigation as outlined by Dr. Freeman at the Summit, we included only intervention studies in which navigators actively link patients to clinical services. Educational or outreach navigation for the delivery of prevention education in community settings was excluded. Likewise, studies that used labels for functional navigators, such as community outreach worker, community health advisor/aide, promotores, or lay health advisor/educator, may have been excluded. We present here findings from a synthesis of 32 published articles that we believe exemplify the breadth of published metrics. Although not meant to be an exhaustive review of the extensive literature, the studies included are representative of the variability in existing metrics.

FINDINGS FROM LITERATURE REVIEW

Most studies target breast and colorectal cancer, with fewer targeting cervical, lung, and/or prostate cancer. Reported clinical outcome metrics fall at 2 discrete points along the continuum of cancer care: 1) screening and 2) diagnosis; the remaining metrics focus on processes specific to the navigation program. To date, no patient navigation intervention study has reported final endpoints such as survival or mortality. Rather, the current literature focuses mainly on intermediate clinical outcomes in the form of the delivery of recommended cancer prevention services. Only 2 studies17, 18 document a potential mortality benefit in the form of a stage shift at the time of diagnosis. As discussed below and summarized in Tables 1 and 2, there is wide variation both in the reporting of nonmodifiable program characteristics and in how study outcome metrics are defined and reported.

Table 1. Variability in Published Patient Navigation Studies: Screening Metrics

Study | Setting | Disease | Eligibility Criteria | Navigation Mode | Outcome Metric | Follow-Up Period | Data Collection
Weber, 1997 | Urban clinical | Breast | Age 52-77; last mammo >24 mo | Telephone | Receipt of mammo | 10 mo | Medical record review
Burhansstipanov L, 2000 | Urban community | Breast | Age ≥39; last mammo >18 mo | Face-to-face; telephone | Receipt of screening test | NR | Self-report
Dignan, 2005 | Urban community | Breast | Age 40+; last mammo >18 mo | Face-to-face; telephone | Receipt of mammo | 12 mo | Self-report
Jandorf, 2005 | Urban clinical | Colorectal | Age 50+; FOBT >1 yr; FS or BE >5 yr; colonoscopy >10 yr | Telephone | Receipt of CRC screening | 6 mo | Medical record review
Dietrich, 2006 | Urban clinical | Breast, cervical, colorectal | Age 50-69; overdue screening | Telephone | Adherence to recommended screening | 18 mo | Medical record review
Ford, 2006 | Urban clinical | Prostate, lung, colorectal | Age ≥55 | Telephone | Receipt of next scheduled screening test | Time to next trial screening | Medical record review
Nash, 2006 | Urban clinical | Colorectal | All colonoscopies | NR | Receipt of colonoscopy | 11 mo | Medical record review
Paskett, 2006 | Rural community | Breast | Age ≥41; last mammo >12 mo | Face-to-face; telephone | Receipt of mammo; barriers to mammo | 12 mo | Medical record review
Myers, 2008 | NR, clinical | Colorectal | Age 50-79; last visit <24 mo | Telephone | Receipt of CRC screening | 6 mo | Self-report; medical record review
Percac-Lima, 2008 | Urban clinical | Colorectal | Age 52-79; FOBT >1 yr; FS or BE >5 yr; colonoscopy >10 yr | Telephone | Receipt of CRC screening; barriers to CRC screening | 9 mo | Medical record review
Clark, 2009 | Urban clinical | Breast | Age 18-75 | Telephone | Receipt of mammo; maintenance screening behavior; timely adherence to diagnostic resolution | 3 yr | Medical record review
Fernandez, 2009 | Rural community | Breast, cervical | Age ≥50; farm worker status | Face-to-face | Receipt of mammo, Pap test | 6 mo | Self-report; medical record review for validity
Han, 2009 | NR, community | Breast | Age 40+; last mammo >24 mo | Face-to-face; telephone | Receipt of mammo | 6 mo | Self-report
Lasser, 2009 | Urban clinical | Colorectal | Age 52-80; no CRC screening | Telephone | Receipt of CRC screening; # patients contacted; # hours spent navigating | 6 mo | Medical record review
Ma, 2009 | NR, community | Colorectal | Age 50+; ACS guidelines for CRC screening | Face-to-face | Receipt of CRC screening; barriers to CRC screening | 12 mo | Self-report verified with physician's office
Burhansstipanov L, 2010 | Urban community | Breast | Age ≥39; last mammo >18 mo | Face-to-face; telephone | Receipt of screening mammo | NR | Self-report
Phillips, 2010 | Urban clinical | Breast | Age 51-70; last mammo >18 mo | Telephone | Adherence to recommended screening | 24 mo | Medical record review
Wang, 2010 | Urban community | Cervical | Age 18+; last Pap >12 mo | Face-to-face | Receipt of Pap test | 12 mo | Self-report

ACS, American Cancer Society; BE, barium enema; CRC, colorectal cancer; FOBT, fecal occult blood test; FS, flexible sigmoidoscopy; mammo, mammogram; NR, not reported.
Table 2. Variability in Published Patient Navigation Studies: Diagnostic Metrics

Study | Setting | Disease | Eligibility Criteria | Navigation Mode | Outcome Metric | Follow-Up Period | Data Collection
Ell, Cancer Pract, 2002 | Urban clinical | Breast | Age: none; prescribed follow-up screening/diagnostic test | Telephone | Receipt of diagnostic resolution; timely adherence to diagnostic resolution; barriers to diagnostic resolution | NR; ACR 4&5: 60 days; ACR 3: 240 days | Appointment records; self-report
Ell, J Wmn's Hlth Gend Based Med, 2002 | Urban clinical | Cervical | Age: none; LGSIL & HGSIL; prescribed follow-up screening/diagnostic | Telephone | Receipt of diagnostic resolution; timely adherence to diagnostic resolution | 12 mo; 30 days | Medical record review
Oluwole, 2003 | Urban clinical | Breast | Age: none; clinic patients | NR | Stage at diagnosis | Retrospective, cross-sectional | Medical record review
Battaglia, 2007 | Urban clinical | Breast | Age: >18; referred for evaluation | Telephone | Timely adherence to diagnostic resolution | 120 days | Medical record review; self-report
Ell, 2007 | Urban hospital | Breast | Age: none; ACR 3-5 | Telephone | Receipt of diagnostic resolution; timely adherence to diagnostic resolution; barriers to diagnostic resolution | 8 mo; ACR 4&5: 60 days; ACR 3: 240 days | Medical record review
Ferrante, 2007 | Urban clinical | Breast | Age: ≥21; BIRADS 4 & 5 | Telephone; face-to-face | Timely adherence to diagnostic resolution; time to diagnostic resolution | 60 days; unclear | Medical record review
Gabram, 2008 | Urban clinical | Breast | Age: none; clinic patients | NR | Stage at diagnosis | Retrospective, cross-sectional | Medical record review
Palmieri, 2009 | Urban clinical | Breast | Age: none; 200% FPL | NR | Timely adherence to diagnostic resolution; receipt of diagnostic resolution | 60 days; NR | Medical record review
Bastani, 2010 | Urban clinical | Breast | Age: none; referred to surgery or radiology for breast abnormality | Telephone | Timely adherence to diagnostic resolution | 6 mo | Medical record review

ACR, American College of Radiology; BIRADS, Breast Imaging Reporting and Data System; FPL, Federal Poverty Level; HGSIL, high-grade squamous intraepithelial lesion; LGSIL, low-grade squamous intraepithelial lesion; NR, not reported.

Screening

We reviewed 20 navigation studies that targeted cancer screening as an outcome (Table 1). We include studies with community- and clinically based navigators in urban19-25 and rural settings.14, 26 The studies targeted diverse populations, including American Indians,14, 15, 19, 26 Korean Americans,27 Chinese women,28 Latinas,14, 29 Blacks,14, 26, 30, 31 non-English-speaking patients,32 poor Whites,14 and low-income populations.32-34 Few programs were comprehensive, targeting multiple cancer sites;29, 30, 33 most targeted only 1 disease-specific screening.14, 15, 19-28, 31, 32, 34-36 Even among studies targeting the same disease, eligibility criteria for inclusion vary, including the age of participants and the time since their last screening. For example, 1 mammography screening navigation study included women aged 52 to 77 years who had not had a mammogram in the previous 2 years,21 while another included women over 40 years whose last mammogram was 12 or more months prior.26

Most studies document receipt of a screening test as the goal of navigation and report the outcome simply as a screening rate, defined as the proportion of eligible subjects who complete a recommended test, such as a mammogram, Pap test, or colonoscopy, during the intervention period. The range of the intervention period across studies was wide, such that the time subjects were followed to assess the outcome varied from 6 months21, 24, 32-34 to 3 years.30 The most common follow-up period was 6 months.22, 23, 27, 29, 36 Two studies document adherence to recommended screening20, 33 as the goal of navigation and report the outcome as an adherence rate, defined as the proportion of eligible subjects who are up to date with a screening test as recommended by an existing guideline or standard. These 2 breast navigation studies differed in how they defined “adherent”: 1 used United States Preventive Services Task Force (USPSTF) guidelines,33 the other used Healthcare Effectiveness Data and Information Set (HEDIS) criteria.20 Only 1 study reports maintenance screening behavior,31 which was defined as the percentage of annual mammograms that were actually obtained during the study period. Data collection methods were either self-reported behaviors13-15, 19, 27-29, 35 or objective evidence from medical record review.20-24, 26, 31-33, 36
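To make these 2 reporting styles concrete, the minimal sketch below computes a screening rate and a guideline-based adherence rate for a toy cohort. The record layout, field names, and 6-month window are illustrative assumptions, not a schema drawn from any of the reviewed studies.

```python
from datetime import date

# Hypothetical minimal records: one dict per navigated subject (field names assumed).
subjects = [
    {"enrolled": date(2010, 1, 5), "test_completed": date(2010, 3, 1), "up_to_date_per_guideline": True},
    {"enrolled": date(2010, 1, 8), "test_completed": None, "up_to_date_per_guideline": False},
    {"enrolled": date(2010, 2, 2), "test_completed": date(2010, 9, 20), "up_to_date_per_guideline": True},
]

FOLLOW_UP_DAYS = 180  # the most common follow-up period among the reviewed studies

def screening_rate(subjects, window_days):
    """Proportion of eligible subjects completing the test within the follow-up window."""
    done = sum(
        1 for s in subjects
        if s["test_completed"] is not None
        and (s["test_completed"] - s["enrolled"]).days <= window_days
    )
    return done / len(subjects)

def adherence_rate(subjects):
    """Proportion up to date with screening per a stated guideline (eg, USPSTF or HEDIS)."""
    return sum(s["up_to_date_per_guideline"] for s in subjects) / len(subjects)

print(f"screening rate: {screening_rate(subjects, FOLLOW_UP_DAYS):.0%}")  # 33%
print(f"adherence rate: {adherence_rate(subjects):.0%}")                  # 67%
```

Note how the 2 rates can diverge for the same cohort, which is why the follow-up window and the guideline used must always be stated.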

Diagnosis

Of the 13 included studies targeting the diagnostic phase of the cancer care continuum, 12 targeted breast cancer diagnosis,8-10, 12, 13, 17, 18, 31, 37-40 while only 1 targeted cervical cancer.11 As shown in Table 2, we include studies with a range of program settings that target diverse populations with variable eligibility criteria. As with the screening navigation studies, the range of the intervention period across studies was wide, and the time subjects were followed to assess the outcome varied. Studies report 4 clinical metrics at the point of diagnostic evaluation: 1) receipt of diagnostic resolution,8-11, 38, 40 2) time to diagnostic resolution,8, 37-39 3) timely adherence to diagnostic resolution,9-13, 31, 37, 38 and, less commonly, 4) stage at diagnosis.17, 18

Five studies report receipt of diagnostic resolution8-11, 38, 40 as the goal of navigation. These studies present this outcome simply as a resolution rate, defined as the proportion of eligible subjects who complete diagnostic testing during the intervention period. The majority of studies reviewed report timeliness of diagnostic care as the goal of patient navigation. These studies report timeliness in 2 distinct ways: either 1) the time to diagnostic resolution8, 37-39 as a continuous outcome, or 2) the rate of timely adherence to diagnostic resolution as a dichotomous outcome.9-13, 31, 37, 38

The most striking finding in reviewing these metrics is the lack of a consistent definition for what constitutes “diagnostic resolution” or the “timely” diagnostic interval. Most studies use the date the abnormal screening test was performed as the index event or start date.9, 10, 31, 37, 38 However, there is widespread variability in the data point indicating diagnosis, diagnostic resolution, or adherence to recommended follow-up, ranging from the date of arrival at the first diagnostic clinical visit12 to the actual date a tissue sample was obtained.8 When tissue diagnosis is not recommended, studies vary in reporting how a “diagnostic resolution” is determined. For example, 1 study reports the endpoint as “until negative mammogram, benign biopsy, 6 month follow-up test, or start of cancer treatment,”10 while other studies only include benign or malignant tissue as a diagnosis.39 There is similar variability in how investigators define “timely,” ranging from 60 to 180 days.9, 13

The Patient Navigation Research Program (PNRP), a collaborative multisite research program designed to evaluate the efficacy of navigation after abnormal cancer screening, developed a set of “common” data points using the National Comprehensive Cancer Network (NCCN) guidelines as the major focus of clinical outcomes.41 While the results of this program are not yet published, the PNRP is the largest study to date of PED navigation. In this program, diagnostic resolution is defined as completion of the diagnostic test that results in a diagnosis, or a clinical evaluation that determines that no further evaluation is indicated. For example, a colonoscopy with a biopsy confirming a malignant polyp and a colonoscopy in which no malignant lesion is identified would both serve as a diagnostic resolution.
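As an illustration only, the sketch below encodes this PNRP-style definition of diagnostic resolution as a small classification function; the event structure and field names are our assumptions, not the PNRP's actual data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DiagnosticEvent:
    # Hypothetical encoding of a diagnostic work-up endpoint (all names assumed).
    event_date: date
    kind: str                      # "diagnostic_test" or "clinical_evaluation"
    yields_diagnosis: bool         # test produced a benign/malignant diagnosis
    further_workup_indicated: bool

def is_diagnostic_resolution(event: DiagnosticEvent) -> bool:
    """Resolution per the PNRP-style definition: a diagnostic test that yields a
    diagnosis, or a clinical evaluation concluding no further evaluation is needed."""
    if event.kind == "diagnostic_test" and event.yields_diagnosis:
        return True
    if event.kind == "clinical_evaluation" and not event.further_workup_indicated:
        return True
    return False

# Both a colonoscopy with a biopsy-confirmed malignant polyp and a clean work-up
# with no further evaluation indicated count as resolution under this definition.
assert is_diagnostic_resolution(DiagnosticEvent(date(2010, 5, 1), "diagnostic_test", True, False))
assert is_diagnostic_resolution(DiagnosticEvent(date(2010, 5, 1), "clinical_evaluation", False, False))
```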

The 2 studies reporting breast cancer stage at diagnosis as the outcome similarly reported population-level data to assess the impact of a navigation program targeting individuals.17, 18 These studies suggest a positive impact of patient navigation; however, because of the methods used, a causal association cannot be determined.

Process Metrics

In addition to intermediate clinical outcomes, 6 of the studies included here report metrics that evaluate whether the intervention was implemented as intended. Five studies report navigator-documented barriers to care.9, 10, 25, 31, 35 One study by Lasser et al documents the median number of contacts per patient and the mean hours of telephone outreach per patient.23 A descriptive study by Lin et al documented the types of barriers to care and the time spent by the navigator.42 The PNRP is collecting the following process metrics in its multisite program: barriers to care identified by the navigator, actions taken by the navigator, and details of navigation encounters such as type of encounter and time spent.41 Only 1 of these studies33 has examined the association between these process measures and outcomes, which represents an area in critical need of further study.

Recommendations for PED Metrics

In keeping with the goal of having a common set of priority metrics for navigation programs to measure impact on individuals and populations, it would be ideal to have consistent study characteristics, including eligibility criteria, follow-up time intervals, and outcome metrics. In practice, this is not possible, for several reasons. First, certain program characteristics are inherently nonmodifiable, such as the program setting and the population served. In addition, the specific needs of populations appropriately dictate the intended outcomes of navigation, the ideal mode of navigator contact, and specific navigator activities. Finally, there is wide variability in the amount and type of resources available for evaluation efforts. Community programs wishing to evaluate their program may well have fewer resources than a federally funded research project such as the PNRP. Regardless, the existing literature illuminates the need for consistency in reporting both modifiable and nonmodifiable program characteristics. Stating these clearly will facilitate meaningful program comparisons even in the absence of common outcome metrics.

Therefore, our first recommendation is to clearly document the following minimal set of program characteristics (Table 3):

1. Program setting. At a minimum, the geographic setting (urban, rural, or suburban) is an important distinction. More important, the system setting is essential to know when considering replicating a program. Beyond describing whether a program is community- or clinically based, some detail on the specific area within a clinical setting (eg, primary care vs radiology) or community setting (eg, church vs YWCA) is important.

2. Eligibility criteria of navigated subjects. These programmatic elements are necessary in order to interpret the outcomes and their potential impact for other populations. Most important are age, race/ethnicity, primary spoken language, and time since last screening.

3. Mode of navigation. The primary mode of delivering the navigation program is a minimal program element essential to comparing study findings. Specifically, did the navigator interact with the target community in person (in a community setting, in a clinical setting) or by telephone? How many encounters did the navigator have over the course of the intervention? In addition, the amount of time spent per patient and the overall navigator caseload are important to document (how many patients the navigator interacts with per week, and how many navigators worked with the same patient to access follow-up diagnostic services).

4. Time interval of the follow-up period at which outcomes are assessed. This detail is critical to interpreting the meaning of a defined outcome. For example, it would be important to know whether 2 programs reporting a similar clinical outcome (ie, 90% of program participants completed their mammogram) each measured that outcome at a different time interval (ie, 1 year vs 6 months).

Table 3. Reportable Program Characteristics for Prevention and Early Detection Navigation Programs

Construct | Common Metric
Setting | Urban v. rural v. suburban; clinical v. community
Eligibility criteria of patients | Age; race; primary language spoken; time since last screening
Mode of navigation | In person v. telephone; # encounters per patient; time spent per patient; # patients navigated
Follow-up interval | # months or years
Many of the observed differences in published PED outcomes lie not in the data elements collected but rather in the nomenclature used to describe them or the analyses used to report them. Thus, defining a common set of data elements, rather than firm outcome metrics, is a much more realistic approach and comprises our second set of recommendations. Prioritizing the collection of these data elements will allow for the variability inherent in navigation programs that target different communities and systems of care while still allowing for meaningful comparisons. From these data elements, common metrics representing prevention and early detection constructs can be created (see Tables 4, 5, and 6).

Table 4. Recommended Common Data Elements for Screening Metrics

Construct | Common Data Elements | Common Outcome Metrics
Receipt of screening test | A. Date enrolled into navigation; B. Date referred for screening; C. Date test scheduled (#1, #2, #3, etc); D. Date test completed; E. Date test results are read/reported; F. Date patient informed of test result | Completion of screening test (Yes/No); timely completion of screening (Yes/No), must define “timely”; time to complete screening (# days A-D)
Adherence to single recommended screening interval | A. Name of professional guideline that defines recommended screening (ie, USPSTF, NCCN); B. Date current test completed; C. Date most recent screening test completed | Adherent to single recommended screening (Yes/No)
Adherence to longitudinal recommended screening (maintenance) | A. Name of professional guideline that defines recommended screening maintenance (ie, USPSTF, NCCN); B. Date current screening test completed; C. Date most recent screening test completed; D. Date past screening tests completed | Adherent to longitudinal screening (Yes/No), must define “longitudinal screening”
Table 5. Recommended Common Data Elements for Diagnostic Metrics

Construct | Common Data Elements | Common Outcome Metrics
Receipt of diagnosis or resolution of screening abnormality | A. Date index screening test performed; B. Date patient informed of test result; C. Date enrolled into navigation/clinical evaluation; D. Date of first scheduled diagnostic test/clinic visit (date of second scheduled, third scheduled, etc); E. Date of completion of first diagnostic test/clinic visit; F. Date of completion of final diagnostic test/clinic visit; G. Result of final test performed (cancer Yes/No); H. Date diagnostic test read/reported; I. Date patient informed of test results | Completion of diagnostic resolution (Yes/No), must define “diagnosis” and “resolution”; time to completion of diagnostic resolution42 (# days A-F); timely completion of diagnostic resolution (Yes/No), must define “timely”
Adherence to recommended diagnostic testing | A. Name of professional organization/guidelines; B. Type of test resulting in diagnostic resolution | Adherent to recommended diagnostic testing (Yes/No); completion of appropriate test (Yes/No), must define appropriate test (eg, percutaneous biopsy v. open biopsy)
Stage at diagnosis | A. TNM Classification of Malignant Tumors cancer staging criteria56 | Stage 0-4
Table 6. Recommended Common Data Elements for Process Metrics

Construct | Common Data Elements | Common Outcome Metrics
Phase of cancer care targeted by navigation program | Outreach; screening or diagnostic clinical visit; follow-up | Phase of cancer care
Adherence to scheduled clinical visit | A. Date of appointment; B. Type of appointment; C. Status of appointment (arrive, no show, cancel, reschedule) | Adherent to appointment (Yes/No)
Caseload | A. # of patients navigated per navigator; B. Time spent per patient (minutes, hours); C. # days in navigation | Navigator caseload (# patients/time period)
Communication | A. Encounter type: in person, phone, letter; B. Interpreter used (Yes/No); C. Date of first encounter; D. Date of last encounter | Mode of communication
Barriers | A. See PNRP methods paper41 | # of barriers per patient; type of barriers
Actions | A. See PNRP methods paper41 | # of actions per barrier or per patient; type of actions

Table 4 displays recommendations for common data elements that may be used to create a set of intermediate outcome metrics that fit within the constructs of screening. The first construct is receipt of the screening test. Documenting the core data elements to measure this construct allows for the reporting of dichotomous outcome metrics such as “completion of screening test (yes/no)” or “timely completion of screening test (yes/no)” as well as the continuous outcome of “time to complete screening.” Any of these metrics may be reported using any or all of the common data elements outlined in Table 4. This measure is limited when comparing programs with different eligibility criteria and follow-up time periods. Thus, a more comparable construct to consider is adherence to recommended screening, which of course requires that a screening guideline (such as the USPSTF) be stated explicitly. This common metric allows programs to compare their adherence rates across different populations. Because the full benefit of screening on survival depends on the longitudinal use of “routine” screening tests, or maintenance of screening over time, there should be an emphasis on documenting screening maintenance behavior.
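As a minimal sketch of how Table 4's elements translate into metrics, the following fragment derives the 3 screening outcome metrics from a single hypothetical record keyed to elements A through F; the field names and the 90-day “timely” threshold are assumptions a program would replace with its own definitions.

```python
from datetime import date

# Hypothetical record keyed to Table 4's elements A-F (field names are assumptions).
record = {
    "A_enrolled":         date(2010, 1, 10),
    "B_referred":         date(2010, 1, 12),
    "C_scheduled":        [date(2010, 2, 1), date(2010, 2, 20)],  # test #1, #2, ...
    "D_completed":        date(2010, 2, 20),
    "E_result_reported":  date(2010, 2, 27),
    "F_patient_informed": date(2010, 3, 3),
}

TIMELY_DAYS = 90  # "timely" must be defined explicitly by each program

completed = record["D_completed"] is not None
days_to_complete = (record["D_completed"] - record["A_enrolled"]).days  # # days A-D
timely = completed and days_to_complete <= TIMELY_DAYS

print(f"completion of screening test: {'Yes' if completed else 'No'}")
print(f"timely completion of screening (<={TIMELY_DAYS} days): {'Yes' if timely else 'No'}")
print(f"time to complete screening: {days_to_complete} days")
```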

Table 5 displays recommendations for common data elements that may be used to create a set of intermediate outcome metrics that fit within the constructs of diagnostic outcomes. Common metrics for reporting the construct of diagnostic resolution must begin with a clear definition of which core data elements constitute diagnosis and/or resolution of the screening abnormality. Once this is clear, reporting of the dichotomous outcome metrics may include “completion of diagnostic resolution (yes/no),” “timely completion of diagnostic resolution (yes/no),” and the continuous outcome of “time to complete diagnostic resolution.” These metrics may be reported using any or all of the common data elements outlined in Table 5. Resources and program intent will create variability in which data elements programs are interested in, and capable of, collecting. The priority should be an explicit and consistent definition of diagnostic resolution and collection of the date corresponding with that definition, as recommended by the PNRP.41 When the diagnostic resolution is a diagnosis of cancer, metrics such as stage at diagnosis are also important to record. Another recommended construct that is often omitted from program evaluation is adherence to recommended testing, as determined by documentation of the type of diagnostic test performed in approaching diagnostic resolution. This common metric, completion of appropriate test, is another measure of quality to ensure populations have access to appropriate diagnostic testing.
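A parallel sketch for Table 5: given a hypothetical record containing element A (index screening test performed) and element F (completion of the final diagnostic test), the diagnostic resolution metrics follow the same pattern. The 60-day threshold below is only one of the published definitions of “timely.”

```python
from datetime import date

# Hypothetical record keyed to Table 5's elements (field names are assumptions).
record = {
    "A_index_screen":    date(2010, 4, 1),   # abnormal index screening test performed
    "F_final_test_done": date(2010, 5, 15),  # completion of final diagnostic test/visit
    "G_cancer":          False,              # result of final test (cancer yes/no)
}

TIMELY_DAYS = 60  # published definitions of "timely" ranged from 60 to 180 days

resolved = record["F_final_test_done"] is not None
time_to_resolution = (record["F_final_test_done"] - record["A_index_screen"]).days  # # days A-F
timely = resolved and time_to_resolution <= TIMELY_DAYS

print(f"completion of diagnostic resolution: {'Yes' if resolved else 'No'}")
print(f"time to diagnostic resolution: {time_to_resolution} days")
print(f"timely diagnostic resolution (<={TIMELY_DAYS} days): {'Yes' if timely else 'No'}")
```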

Our third set of recommendations calls for a minimal set of process data elements (Table 6). Process measures are intended to assess whether navigation was delivered as planned or designed. Without these details, replication of programs with successful outcomes is not possible. Knowledge of the specific components of a navigation program is necessary to apply lessons learned from 1 program to the next. The PED workgroup identified 3 distinct phases of PED in which processes of navigation may differ: 1) outreach/promotion (helping the community understand the need for and availability of cancer screening); 2) support during clinical appointments and tests; and 3) tracking and follow-up after appointments/tests are completed.

At a minimum, programs should document which phase(s) of PED their navigators address, as this broadly captures the types of activities involved in the navigation program.

In addition, we strongly recommend that programs document clinical appointment data to report health services process measures related to adherence to scheduled clinical visits. Also important to program function and impact are the number of patients navigated (over some specified time period: daily, weekly, or monthly) and the time spent with individual patients; from this information, measures of caseload may be created. The mode of communication, and whether an interpreter was used in an encounter, is another important process measure. Documenting the date of the last navigator encounter ensures a way to attribute the screening outcome to navigation; for example, we would not want to attribute a screening outcome to navigation if there had been no contact with a navigator in the prior 12 months.
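As a rough illustration of how caseload measures might be derived from encounter-level documentation, the sketch below aggregates a hypothetical navigator encounter log; the log format and identifiers are assumptions.

```python
from collections import defaultdict

# Hypothetical encounter log: (navigator, patient_id, minutes_spent, iso_week).
encounters = [
    ("PN1", "pt01", 30, "2010-W10"),
    ("PN1", "pt02", 45, "2010-W10"),
    ("PN1", "pt01", 20, "2010-W11"),
    ("PN2", "pt03", 60, "2010-W10"),
]

patients_per_week = defaultdict(set)    # navigator caseload: # patients / time period
minutes_per_patient = defaultdict(int)  # time spent per patient

for navigator, patient, minutes, week in encounters:
    patients_per_week[(navigator, week)].add(patient)
    minutes_per_patient[patient] += minutes

for (navigator, week), patients in sorted(patients_per_week.items()):
    print(f"{navigator} {week}: {len(patients)} patients")   # caseload per week
for patient, minutes in sorted(minutes_per_patient.items()):
    print(f"{patient}: {minutes} min total")                 # time per patient
```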

Considering that addressing barriers to care is at the very center of the conceptual model of navigation, it is essential to measure these barriers along with the navigator activities, or actions, taken to address them. Creating an optimal set of patient-level barriers to care is challenging given the specific needs of diverse populations, as barriers in 1 community may be vastly different from those in another. Freund et al describe the barriers used by the PNRP41 and provide a framework for documenting navigation activities that would facilitate meaningful comparisons. The Native American Cancer Research Corporation (NACR) provides another example of documenting barriers and actions routinely used in its program (Fig. 1).15 Finally, documenting healthcare usage along the screening process is an alternative way of capturing the benefits of navigation, such as a reduction in rates of missed appointments.

Figure 1. Native American Cancer Research Corporation tool for documenting common process data elements: Navigator Actions.

Our fourth and final set of recommendations is related to data collection efforts. Data elements may be collected using patient self-report, navigator logs, clinical data sources, and/or objective observation. At a minimum, programs should document their data source, given the limitations/strengths of these various sources. In a research context, it would be inappropriate to have navigators administer outcome assessments for their own patients as it would introduce potential bias. Whereas it would be acceptable to have navigators document process measures, programs should avoid using navigators to document clinical outcomes without extensive quality assurance in place.

PN daily logs are an obvious source for process measures. Electronic programs can be used for those PNs who have access to computers and/or the Internet. Effort should be made to ensure the layout uses a closed-question format or checkboxes that cover the most prevalent responses, with an “other” category that allows free-text input. These lists or checkboxes should include space to document the amount of time the PN spent on each task, or better yet, checkboxes with time intervals.
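One possible way to enforce such a closed-question layout in an electronic log is sketched below; the task categories and time intervals are illustrative placeholders, not a recommended taxonomy.

```python
from dataclasses import dataclass
from typing import Optional

# Closed-format response lists (categories here are illustrative, not prescriptive).
TASK_CODES = ["scheduling", "reminder call", "transportation", "interpretation", "other"]
TIME_INTERVALS = ["<15 min", "15-30 min", "30-60 min", ">60 min"]

@dataclass
class NavigatorLogEntry:
    task: str                         # must be one of TASK_CODES
    time_interval: str                # checkbox-style interval rather than free text
    other_text: Optional[str] = None  # free text allowed only when task == "other"

    def __post_init__(self):
        # Reject free-form entries so logged data stay comparable across navigators.
        assert self.task in TASK_CODES and self.time_interval in TIME_INTERVALS
        assert self.other_text is None or self.task == "other"

entry = NavigatorLogEntry(task="reminder call", time_interval="<15 min")
```

Constraining responses at entry time, rather than cleaning free text afterward, is what makes cross-navigator and cross-program comparisons tractable.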

Patient self-reported screening behaviors are often inaccurate,43 and how the questions are asked may influence the responses. However, if patient self-report is used, the phrasing of questions should be drawn from standardized, validated instruments such as the National Health Interview Survey (NHIS), the Behavioral Risk Factor Surveillance System (BRFSS), or the National Medical Care Expenditure Survey (NMEPS). Likewise, the types of responses differ when such instruments are administered face-to-face, over the telephone, completed by the patient, administered via CADI (Computer Aided Design Instrument) systems, and/or delivered through the Internet.44

While objective observation methods of patients and navigators have been developed,45 most programs will not have the resources to use them. With federal mandates requiring the transition to electronic medical records, there is tremendous opportunity to use objective clinical data sources to measure these outcomes, and this should be the standard to which navigation programs aspire. For example, electronic medical records may be queried for the presence of screening reports or as a means to complete certain data points, while electronic registration systems may be queried to report adherence outcomes for scheduled appointments.
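As a toy example of this kind of query, the sketch below checks an in-memory table standing in for an electronic medical record extract; the table and column names are entirely hypothetical and would differ in any real system.

```python
import sqlite3

# A toy query against a hypothetical EMR extract (table/column names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imaging_reports (patient_id TEXT, exam TEXT, exam_date TEXT)")
conn.execute("INSERT INTO imaging_reports VALUES ('pt01', 'screening mammogram', '2010-03-01')")

# Presence of a screening report within the follow-up window stands in for self-report.
rows = conn.execute(
    "SELECT patient_id FROM imaging_reports "
    "WHERE exam = 'screening mammogram' AND exam_date >= '2010-01-01'"
).fetchall()
print([r[0] for r in rows])  # -> ['pt01']
```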

DISCUSSION

Patient navigation programs that target the prevention and early detection spectrum of care share similar goals, yet vary widely in how they document their success. Differences in program structure, population needs, outcomes of interest, and reported evaluation metrics make cross-study comparisons impossible. However, a review of the literature suggests that a common set of evaluation metrics relevant to multiple stakeholders can be developed. Based on a synthesis of existing navigation literature and expert consensus, we present here a set of 4 recommendations related to measuring and reporting PED navigation program success so that dissemination of the evidence may be used to delineate best practices in the design of care processes across diverse settings.

Our recommendations call for a core set of quality indicators that measure the intent of navigation: to bridge the critical disconnect between the discovery and delivery of life-saving cancer care services. Knowledge of basic program characteristics is the starting point for contextualizing comparisons between programs. While clinical outcome measures of quality (eg, stage at diagnosis or mortality) are generally more difficult or not feasible to measure, we provide a framework of common data elements that may be used to report a common set of intermediate clinical metrics. Equally important, we provide recommendations for collecting and reporting process measures (activities performed while receiving care), which are the most frequently used quality indicators47, 48 because they are sensitive, unambiguous, and easily measured.49-51 Our review of the literature highlights the lack of evidence linking these processes to clinical outcomes, making these data elements a high priority for future study, as process measures should be associated with outcome measures for effective quality assessment.52, 53

A limitation of this study was restricting the literature review to PubMed, certainly the most commonly used database. However, some navigation projects and studies using labels for staff who function as navigators linking individuals from the community to the health system, such as community outreach worker, community health advisor/aide, promotores, or lay health advisor/educator, may have been excluded. Likewise, several such articles are accessible through publication databases that focus on education, social work, and psychology and may or may not also be indexed in PubMed. As a result, this article is less inclusive of community-based and academically based navigation programs and emphasizes clinical settings. Regardless, most of the recommended measures presented here are relevant to navigators working in these other settings as well.

Priorities should focus on defining the needs and demographics of the target population, which in turn should drive the expected outcomes of the intervention. As long as programs explicitly state the context of their evaluation and choose from among the core set of data elements, meaningful comparisons among existing programs should be feasible. While methods for collecting these metrics will depend on resources and existing infrastructure, programs should aspire to rigor, using objective sources when possible. When objective electronic data are not available, sites will need to be creative in determining the best way to retrieve the information, whether from manual chart abstraction or navigator documentation. These recommendations are a first step toward adopting a minimal dataset for PED navigation programs, as has been done in other population-based approaches to improving quality care.54

Navigation is emerging as an expected “standard” for cancer programs,54 yet the literature has yet to provide consistent insight into the activities or processes of navigation that are linked to favorable outcomes. We demonstrate here that the growing body of knowledge regarding the impact of prevention and early detection navigation on cancer care would benefit from thoughtful standardization. In keeping with recommendations from the 2001 IOM report to deliver patient-centered care that is timely, efficient, and equitable,3 it is imperative that we evaluate the ability of PED patient navigation programs to realize that potential. Only then can we “apply evidence to health care delivery” as recommended. The responsibility for the analysis and synthesis of this medical evidence falls on all of us involved in the delivery of these services.

CONFLICT OF INTEREST DISCLOSURES

The authors made no disclosures.