The concept of “predisease” arose in 1914, when Dr. William Rodman proposed intervening early in patients showing signs of a precancerous state. However, Rodman acknowledged that his thesis would be controversial when he noted: “I am aware that the term precancerous can be objected to for at least two reasons: first, there will not always be a precancerous state; second, even if such a state exists, it does not mean that cancer will develop”.
Nevertheless, with the goal of preventing the appearance of morbid events, predisease as a category makes sense if three conditions are met: 1) individuals who fall into this category must be more likely to develop the disease; 2) there must be an intervention that, when directed at these at-risk individuals, is effective in reducing the risk of progression to disease; and 3) the benefits of intervening on the predisease must exceed the risks. Currently, the state of predisease applies to various conditions: pre-diabetes, pre-hypertension, subclinical thyroid dysfunction, or even individuals who test positive for the human immunodeficiency virus. All of these clinical situations involve an increased risk of developing the disease. Although the time it takes for this to happen is uncertain in most cases, many studies have shown that damage at the molecular and cellular levels may already be impairing tissues and, at the same time, fueling the onset of the disease.
This leads me to wonder: are the cut-off points most commonly used in medical practice really reliable enough to advise a patient on whether he or she has an unhealthy condition?
In my opinion, a fundamental limitation of cut-off points is that their application to biological variables may be biased, since there is often no reliable foundation for doing so. Yet we keep labeling individuals as healthy or sick based on them. It has been this way for a long time, and it probably will remain so until we consider the problem more comprehensively and stop wading in shallow waters instead of diving into the deep end.
I acknowledge that, at present, the decision-making process would be very weak without cut-off points, but we must be very cautious when giving an opinion based on them.
In addition, it is fair to say that most of the cut-off points we use in day-to-day practice with patients are not autochthonous; they have been taken from guidelines, pathways, and similar documents that have little to do with the patient populations we actually deal with.
Can you imagine determining whether an individual living in the Middle East has a health condition using cut-off points issued by health institutions based in Canada, the USA, or Asia? It sounds like nonsense, yet we do it every day. Why? Because we have never thought about it. Sometimes we simply use what we have been given or taught as the best evidence, but this evidence is far removed from our patients in terms of ethnicity, genetics, and socioeconomic status. These factors may indeed have had a significant statistical impact in the countries where data were pooled to yield certain cut-off figures, but that does not mean those figures can be applied across regions and continents.
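To see why a borrowed cut-off can mislead, consider a minimal simulation. The numbers below are entirely hypothetical (no real clinical data): a biomarker whose healthy distribution simply sits a little higher in one population than in another. A cut-off derived from the first population then flags a disproportionate share of perfectly healthy people in the second.

```python
import random

random.seed(42)

# Hypothetical biomarker values in two HEALTHY populations.
# Population A: mean 100, sd 10; Population B: mean 110, sd 10.
pop_a = [random.gauss(100, 10) for _ in range(10_000)]
pop_b = [random.gauss(110, 10) for _ in range(10_000)]

# A cut-off chosen as the 95th percentile of Population A's healthy values,
# the kind of reference-range logic many guidelines use.
cutoff_a = sorted(pop_a)[int(0.95 * len(pop_a))]

# Fraction of each healthy population labelled "abnormal" by that cut-off.
flagged_a = sum(x > cutoff_a for x in pop_a) / len(pop_a)
flagged_b = sum(x > cutoff_a for x in pop_b) / len(pop_b)

print(f"Cut-off derived from Population A: {cutoff_a:.1f}")
print(f"Healthy individuals flagged in A: {flagged_a:.1%}")
print(f"Healthy individuals flagged in B: {flagged_b:.1%}")
```

By construction, about 5% of Population A is flagged; in Population B, whose normal values are simply shifted, the same threshold mislabels several times as many healthy people. The same arithmetic works in reverse: a cut-off that is too high for a given population will miss genuinely at-risk individuals.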
Accordingly, I think that each country’s medical society must dig deep and derive its own cut-off points; otherwise, we will continue to miss key elements when it comes to diagnosis in medicine.
In the end, my dear colleagues, the autochthonous is and always will be the most consistent and reliable. Being enticed by the names of well-established medical associations into taking everything they offer as absolute truth can be deceptive. So never fall into the “bandwagon” fallacy (if most people like them, they must be right).
In a future post, I will present some examples of steps already taken in this direction.
Note. This editorial has been written by Dr. Guillermo Alberto Perez Fernandez, author of this blog, and reflects his personal opinion about the topic.
Please rate this post