As my partner illustrated in his own blog post, CTC count is believed to be negatively correlated with patient survival: the higher the count, the more severe the cancer tends to be. The study of CTCs is relatively new, and there is no general consensus
on what dangerous levels of CTCs actually are. Therefore, most studies
determine a concentration of CTCs in the blood at which patients are believed to be at risk. This is done by taking a set of CTC concentrations that the experimenters believe to be relevant and noting the concentration at which the hazard ratio appears to increase significantly. A hazard ratio describes the relative risk of a complication occurring in one group of patients compared with another. In CTC studies, the groups are typically defined by CTC concentration, while the complication could be death or cancer progression; the exact choice of these variables is up to the researchers. Once a CTC concentration is found to be associated with a relevant change in the hazard ratio, it is labeled the cutoff value, which is then used to mark the boundary between high and low CTC counts.
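To illustrate the idea, here is a minimal sketch in Python with entirely made-up patient records. It approximates the hazard ratio at each candidate cutoff as a crude ratio of event rates per person-week in the high- and low-CTC groups; real studies estimate hazard ratios with proper survival models (such as Cox proportional-hazards regression), so treat this only as a stand-in for the concept.

```python
# Toy illustration: screen candidate CTC cutoffs with a crude hazard-ratio proxy.
# Each hypothetical patient record is (ctc_count per 7.5 mL, follow_up_weeks,
# event_occurred), where the "event" is the complication (e.g., progression).

patients = [
    (0, 40, False), (1, 34, True),  (2, 38, False), (3, 36, False),
    (4, 40, False), (5, 18, True),  (6, 22, True),  (7, 35, False),
    (8, 15, True),  (12, 20, True), (15, 12, True), (20, 10, True),
]

def crude_hazard_ratio(patients, cutoff):
    """Events per person-week in the high-CTC group divided by the same rate
    in the low-CTC group -- a rough stand-in for a true hazard ratio."""
    def event_rate(group):
        events = sum(1 for _, _, occurred in group if occurred)
        person_weeks = sum(weeks for _, weeks, _ in group)
        return events / person_weeks

    high = [p for p in patients if p[0] >= cutoff]
    low = [p for p in patients if p[0] < cutoff]
    return event_rate(high) / event_rate(low)

# Note where the relative risk for the high-CTC group jumps.
for cutoff in (2, 3, 5, 8, 10):
    print(f"cutoff >= {cutoff:>2} CTCs/7.5 mL: crude HR = "
          f"{crude_hazard_ratio(patients, cutoff):.1f}")
```

With this invented data the crude ratio happens to peak at a cutoff of 5; a real study would then confirm the chosen cutoff in new patients, as described below.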
This article from the New England Journal of Medicine details the difference in survival for patients with high and low CTC counts, based on a cutoff level determined by a hazard ratio. This particular study reported results for both a training set and a validation set. The training set shown here consists of the patients used to select the cutoff level; its results show how patient survival differs between high and low CTC concentrations as defined by that cutoff. The validation set is used to verify that the results of the training set are replicable with different patients under the same conditions, so its results are expected to be similar to those of the training set. In this study, once the cutoff value was validated, patients from both sets were pooled into a full set, which is simply the group of all patients, followed over a longer period of time than either the training or the validation set. Because of the larger patient pool and the longer follow-up, the full set should represent patient survival more accurately than the training and validation sets.
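To make the workflow concrete, here is a minimal sketch in Python. The patient records are randomly simulated, not the study's data, and the only check performed is a simple one (the share of each CTC group that remains event-free), but it mirrors the training/validation/full-set structure described above. The split sizes follow the study's division of 177 patients into 102 and 75; everything else is an assumption for illustration.

```python
import random

random.seed(0)

# Hypothetical patients: (ctc_count, weeks_followed, event_occurred).
# Higher CTC counts are simulated with a higher chance of an event;
# the probabilities here are invented, not taken from the study.
def simulate_patient():
    ctc = random.randint(0, 25)
    event = random.random() < (0.25 if ctc < 5 else 0.65)
    weeks = random.randint(10, 40) if event else 40
    return ctc, weeks, event

patients = [simulate_patient() for _ in range(177)]
random.shuffle(patients)

# Training set: used to pick the cutoff.  Validation set: checks that the
# same separation appears in different patients.  Full set: everyone pooled.
training, validation = patients[:102], patients[102:]
full_set = training + validation

CUTOFF = 5  # CTCs per 7.5 mL of blood, the value selected in the study

def event_free_fraction(group):
    """Share of a group with no event during follow-up."""
    return sum(1 for _, _, event in group if not event) / len(group) if group else float("nan")

for name, cohort in (("training", training), ("validation", validation), ("full", full_set)):
    low = [p for p in cohort if p[0] < CUTOFF]
    high = [p for p in cohort if p[0] >= CUTOFF]
    print(f"{name:10s}  < {CUTOFF} CTCs: {event_free_fraction(low):.2f} event-free    "
          f">= {CUTOFF} CTCs: {event_free_fraction(high):.2f} event-free")
```

If the cutoff generalizes, the low- and high-CTC fractions should look similar in the training and validation rows, just as the study's validation curves were expected to echo its training curves.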
The study enrolled 177 patients with metastatic breast cancer. The CellSearch and CellSpotter
Systems were used to identify, isolate, and enumerate CTCs from patient blood
draws. The cutoff level was determined using a training set consisting of 102
patients over a period of 40 weeks while the remaining 75 patients were used in
the validation set for 40 weeks. Patients from both sets were then pooled into
a full set in which blood was drawn over a period of 80 weeks. The results of
the study are displayed in the graphs below.
Figure 1. Kaplan-Meier Estimates of Patient Survival since
Baseline Blood Collection in Patients with Metastatic Breast Cancer. Panels A, B, and C show progression-free survival; panels D, E, and F show overall survival. The hazard ratios are 1.97 (A), 1.81 (B), 1.95 (C), 3.98 (D), 5.22 (E), and 4.39 (F). The cutoff value between high and low CTC counts was determined to be 5 CTCs per 7.5 mL of blood.
As illustrated by the figure, patients with a count of ≥ 5 CTCs per 7.5 mL of blood had a lower probability of both progression-free and overall survival than those with < 5 CTCs per 7.5 mL of blood in every set of data. Furthermore, the validation panels appear to confirm the cutoff value, given how closely both the curves' shapes and the median survival values match those of the training panels. Thus the cutoff value appears to mark a significant change in survival based on CTC concentration.
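The survival probabilities read off these curves are Kaplan-Meier estimates, which account for patients who are censored (still alive or progression-free when last observed) rather than simply dropping them. As a rough illustration of how such a curve is built, here is a bare-bones version of the calculation in Python with invented follow-up times; a real analysis would use an established statistics package.

```python
def kaplan_meier(times, events):
    """Return (week, survival_probability) points of a stepwise Kaplan-Meier curve."""
    survival = 1.0
    curve = [(0, 1.0)]
    records = list(zip(times, events))
    for t in sorted(set(times)):
        n_at_risk = sum(1 for time, _ in records if time >= t)
        n_events = sum(1 for time, occurred in records if time == t and occurred)
        if n_events:
            survival *= 1 - n_events / n_at_risk  # product-limit step
            curve.append((t, survival))
    return curve

# Hypothetical high-CTC group: follow-up in weeks; True = event observed,
# False = censored (no event by the last observation).
times  = [5, 8, 12, 12, 20, 26, 30, 33, 40, 40]
events = [True, True, True, False, True, True, False, True, False, False]

for week, prob in kaplan_meier(times, events):
    print(f"week {week:>2}: estimated survival {prob:.2f}")
```

The curve only steps down at weeks where an event occurs, which is why the published plots are staircase-shaped, and the median survival quoted in such studies is simply the week at which the curve crosses 50%.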
Because it is used to determine the cutoff value and is tied to the risk of complications, one might expect the magnitude of the hazard ratio on its own to be an accurate measure of patient survival. Interestingly, differences in this value across sets did not correspond to differences in survival. This was exemplified by what appeared to be a substantial difference in hazard ratios between the training and validation sets for overall survival: the training set had a ratio of 3.98 while the validation set had one of 5.22. Nonetheless, the two sets of curves looked similar. For the < 5 CTCs groups, the median overall survival was above 40 weeks and the curvatures of the lines were almost identical. Both of the ≥ 5 CTCs groups also had a median overall survival of over 40 weeks and had reached approximately 55% survival by week 40. The training-set curve did sit higher than the validation-set curve between weeks 20 and 25, but the plots matched each other everywhere else, and the experimenters likewise did not consider the difference significant. Thus, it seems the hazard ratio can only be used to estimate the relative degree of risk within a specific set of patients, and its use outside of that set is fairly limited.
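A toy calculation helps show why the hazard ratio alone pins down so little. If we pretend, purely for illustration, that hazards are constant over time (the real curves are not that simple), the high-CTC survival curve depends on the product of the baseline hazard and the hazard ratio, so different baseline/ratio pairs can land on nearly the same curve. The baseline hazards below are invented to roughly reproduce the ~55% survival at week 40 seen in the plots.

```python
import math

# Constant-hazard (exponential) toy model: S(t) = exp(-baseline_hazard * HR * t)
# for the high-CTC group. The hazard ratios are the study's overall-survival
# values; the baseline hazards are made up to match ~55% survival at week 40.
scenarios = [
    ("training-like",   0.0038, 3.98),  # (label, baseline hazard per week, HR)
    ("validation-like", 0.0029, 5.22),
]

for label, baseline, hazard_ratio in scenarios:
    survival_at_40 = math.exp(-baseline * hazard_ratio * 40)
    print(f"{label:16s} HR = {hazard_ratio:.2f}, "
          f"high-CTC survival at week 40 = {survival_at_40:.2f}")
```

Despite the different hazard ratios, both scenarios give essentially the same absolute survival, which matches the observation above that the ratio by itself is only a within-set measure of relative risk.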
References
Cristofanilli, Massimo, G. Thomas Budd, Matthew J. Ellis,
Alison Stopeck, Jeri Matera, M. Craig Miller, James M. Reuben, Gerald V. Doyle,
W. Jeffrey Allard, Leon W.M.M. Terstappen, and Daniel F. Hayes. "Circulating
Tumor Cells, Disease Progression, and Survival in Metastatic Breast
Cancer." New England Journal of Medicine 351.8 (2004): 781-791.
Web. 18 May 2014.