Tuesday, December 9, 2014

Pharmacogenomic Testing in Psychiatry

A fellow clinician asked me what I thought about the "pharmacogenomic testing" services which are now being offered and heavily promoted. I guess people think of me as some kind of expert on genetics in psychiatry (hardly!), and I've previously mentioned that I don't think these tests are worthwhile.

Basically, this is genetic testing only for variations in drug metabolism; there are no genetic tests for any of the usual psychiatric disorders, and no tests of that kind appear on these panels. (I've looked at specific companies that offer this, but I won't name names here; I'm just commenting on the state of the technology.) Inaccurate diagnosis is one of the main reasons people don't respond well to treatment, and this testing provides no guidance there. So, that's the first issue.

There also aren't any validated tests for subgroups of depression, bipolar disorder, or schizophrenia that can tell you which patients are most likely to respond to which drug class (i.e., TCA vs. SSRI vs. SNRI). That's #2. (That's probably coming next, though; there is a lot of research going on to identify biomarkers for drug response within a given diagnosis.)

#3 is that side effects and therapeutic effects (that's #4) don't always correspond to plasma levels, so they certainly don't correspond very well to variations in metabolism. And even if we just look at the levels achieved at a given dose (pharmacokinetics), there are variations in absorption, distribution, and elimination of the drug that have nothing to do with genetics but a lot to do with lifestyle, diet, weight, age, and so on.

An obvious example: smokers metabolize benzodiazepines at twice the rate of non-smokers, on average. So you might have a genotype that implies slow metabolism of benzos, but if you smoke, that gets you right back to average. These tests don't take that into account (some of them might ask whether the patient smokes, but that's not a genotype). An even simpler example: if you like a glass of grapefruit juice every morning, that makes you a very low 3A4 metabolizer, regardless of your genotype. So it's far more important to ask people about their diet and lifestyle than to test their cytochrome genes.
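To make that concrete, here's a minimal back-of-the-envelope sketch using the standard average steady-state relationship (Css = F·Dose / (CL·τ)). All the numbers are hypothetical, chosen only to illustrate the arithmetic, not any real drug: doubling clearance halves the steady-state level, whether the cause is a genotype or cigarette smoke.

```python
# Illustrative only: how clearance changes shift steady-state drug levels,
# regardless of whether the change is genetic or environmental.
# Standard average steady-state equation: Css = (F * dose) / (CL * tau)

def css_avg(dose_mg, tau_h, cl_l_per_h, f_bioavail=1.0):
    """Average steady-state plasma concentration (mg/L)."""
    return (f_bioavail * dose_mg) / (cl_l_per_h * tau_h)

dose, tau = 10.0, 24.0   # 10 mg once daily (hypothetical drug)
cl_baseline = 5.0        # L/h, hypothetical "average" clearance

# A slow-metabolizer genotype might cut clearance in half...
css_slow = css_avg(dose, tau, cl_baseline * 0.5)
# ...but smoking-induced enzyme induction might double it right back.
css_slow_smoker = css_avg(dose, tau, cl_baseline * 0.5 * 2.0)
css_average = css_avg(dose, tau, cl_baseline)

print(f"slow metabolizer:          {css_slow:.3f} mg/L")
print(f"slow metabolizer + smoker: {css_slow_smoker:.3f} mg/L")  # back to average
print(f"average metabolizer:       {css_average:.3f} mg/L")
```

The point of the sketch is that clearance is one number in the denominator; the genotype is only one of several things that move it.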

There's also a parallel pharmacodynamic effect (that would be #5 and #6): two people with the exact same drug level can have radically different side effects and therapeutic results. This could be purely subjective (for example, one person simply has a lower threshold for discomfort), but it could also reflect the actual physiological sensitivity of the target (usually a receptor and its signaling pathway, in the case of psych meds). There are many genetic variations in neurotransmitter receptors that are not fully understood (and not tested on these panels), and many more variations in the signaling pathways that determine the sensitivity of the target, as well as the off-target pathways that cause side effects.
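The standard way to express that pharmacodynamic point is an Emax (sigmoid concentration-effect) model. Here's a minimal sketch with hypothetical EC50 values, just to illustrate: two patients with the identical plasma level get very different effects if the sensitivity of the target differs.

```python
# Illustrative Emax model: identical plasma level, different target
# sensitivity (EC50), very different effect.

def emax_effect(conc, ec50, emax=100.0):
    """Percent of maximal effect at concentration `conc` (same units as ec50)."""
    return emax * conc / (ec50 + conc)

level = 50.0  # ng/mL, identical in both patients (hypothetical)

print(emax_effect(level, ec50=10.0))   # sensitive target:   ~83% of max effect
print(emax_effect(level, ec50=200.0))  # insensitive target: ~20% of max effect
```

A metabolism panel tells you something about `level`; it tells you nothing about EC50, which is where much of the person-to-person variation lives.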

Surprisingly few drugs show tight correlations between plasma levels and therapeutic effects; obviously, you need a certain amount in your bloodstream to do anything, but in cases where there is a tight correlation, you can usually test for the level directly (as with TCAs). For drugs like SSRIs, plasma levels aren't especially informative, and so testing for them isn't routinely done.

However, to my mind, the main reason these tests aren't especially useful (#7) is that you still need to use good clinical practice: in most cases you start the drug at a relatively low (subtherapeutic) dose, then increase gradually while monitoring for side effects. With very long half-life drugs like Prozac or Abilify, you might start at a full dose, but you're essentially titrating up by letting the drug accumulate. You still need to tell the patient which side effects to watch for, and you still need to assess whether an adequate response has been achieved, as opposed to a placebo response (another main reason for apparent treatment failure).
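As a rough illustration of why a long half-life amounts to a built-in slow titration, here's a sketch assuming simple first-order kinetics and an effective half-life of about a week, which is in the ballpark for fluoxetine once you count its active metabolite (the exact figure varies widely between patients):

```python
# Illustrative: fraction of steady state reached after t days of steady dosing,
# assuming first-order kinetics: fraction = 1 - 0.5 ** (t / half_life).

half_life_days = 7.0  # rough effective half-life (fluoxetine + norfluoxetine)

for day in (7, 14, 21, 28):
    frac = 1 - 0.5 ** (day / half_life_days)
    print(f"day {day:2d}: {frac:.0%} of steady-state level")
# day 7: 50%; day 14: 75%; day 21: 88%; day 28: 94%.
# Even at a "full" dose, the level climbs gradually over about a month.
```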

There's an argument to be made that this kind of testing can help avoid the (rare) serious adverse effects that occur in unusually sensitive individuals who are very slow metabolizers, though I think any decent clinician should catch these right away. One might also argue that it can help identify patients who will need higher doses for best effect (rapid metabolizers). Here again, a competent clinician should recognize that the dose needs to be pushed when the patient isn't getting a real therapeutic effect, though admittedly very few clinicians seem capable of reliably distinguishing a drug response from a placebo response. Still, you can't assume that a patient who metabolizes a drug rapidly, and needs more of it for a true therapeutic effect, can actually tolerate the higher dose. That could be a major downside of having this kind of information, especially if less competent clinicians take it as license to start at a high dose and push it up quickly.

In any case, it all comes down to cost. If this is cheap and easy, it could be useful in some cases, and post-marketing studies could make this kind of test even more useful. But, if it's expensive and time-consuming, I just don't think it's worth the trouble.