May 28, 2011

To PE or not PE?

I don't like pulmonary embolism.

My dislike for PE works on many levels. On the one hand, PE can be fatal, and I generally don't like it when my patients die. On the other hand, many or even most PE (or is it PEs? or PE's?*) aren't particularly dangerous.

Further, we don't really know the prevalence of PE, we don't know how to tell dangerous PE from not-particularly-dangerous PE, and we're not even sure about the treatment for PE.

As discussed at length in what I think is still the best episode to date of SMART-EM, the evidence is not only weak but maybe even suggests that perhaps we shouldn't anticoagulate patients with PE at all, despite the clear "standard of care." And while thrombolysis might benefit some very sick patients with PE, nobody really knows.

Further, some very smart people suggest that a main function of the lungs might be to filter out small clots before they can reach the brain.

That's not to say that PE isn't a dangerous entity. A new study by Weiner et al in the Archives of Internal Medicine notes that the introduction and widespread adoption of CT pulmonary angiography increased the diagnosis of PE by about 70%, while the case fatality rate fell by only about half that much and overall mortality decreased only modestly. To be honest, I'm not completely clear on whether "case fatality rate" means much; they define it as "the proportion of hospital deaths among patients with a PE."
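To see why a falling case fatality rate doesn't necessarily mean anyone is better off, here's a toy calculation in Python with made-up numbers (not taken from the Weiner paper): if CTPA finds lots of small PEs that were never going to kill anyone, the denominator grows, and the case fatality rate falls even if deaths barely change.

    # Toy numbers only, not from Weiner et al.
    deaths = 40                # PE deaths per 100,000 (held constant here)
    diagnosed_before = 60      # PE diagnoses per 100,000, pre-CTPA (made up)
    diagnosed_after = 100      # PE diagnoses per 100,000, post-CTPA (made up, ~70% more)
    print(f"case fatality before CTPA: {deaths / diagnosed_before:.0%}")  # 67%
    print(f"case fatality after CTPA:  {deaths / diagnosed_after:.0%}")   # 40%
    # Case fatality "improves" even though population mortality hasn't budged.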

Not surprisingly, the rate of what appear to be clinically important complications of anticoagulation also increased by about 70%.

This is certainly not a perfect study. The study authors use mortality from death certificates as the primary outcome measure, and as someone put it, cause of death is largely determined by interns.

As the study authors discuss, the increase in PE diagnosis could be a good thing if more diagnosis means better outcomes. However, in the absence of better outcomes, we are likely diagnosing clinically insignificant disease. And there's no way to determine if the PE I just diagnosed is one of the bad ones or not.

So we use a lot more CTPA (not news) to detect a whole lot more PE, with minimal benefit. And I don't think the "ease" of CTPA is the only part of it. The Wells Criteria, d-dimer, and PERC were supposed to help us safely decrease testing, but they aren't used properly, have led to increased testing, and may not even work. Specifically, many physicians seem to use the d-dimer to increase, not decrease, testing. Despite our best intentions, perhaps the overall increase in conversation about PE leads to increased testing on its own (i.e., availability bias).
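As a refresher, and as a rough sketch only, the PERC rule is just an eight-item checklist; here it is paraphrased from memory in Python. This is illustrative, not a validated implementation, and the whole point of the rule is that in a patient you already judge to be low risk, a fully negative checklist is supposed to end the workup: no d-dimer, no CTPA.

    # Rough sketch of the PERC rule (Pulmonary Embolism Rule-out Criteria),
    # paraphrased from memory; illustrative only, not a validated clinical tool.
    def perc_negative(age, heart_rate, room_air_sat, hemoptysis, estrogen_use,
                      prior_dvt_or_pe, unilateral_leg_swelling,
                      recent_surgery_or_trauma):
        """True if all eight criteria are negative in a low-gestalt-risk patient."""
        return (age < 50
                and heart_rate < 100
                and room_air_sat >= 95             # pulse oximetry, percent
                and not hemoptysis
                and not estrogen_use
                and not prior_dvt_or_pe
                and not unilateral_leg_swelling
                and not recent_surgery_or_trauma)  # within the past ~4 weeks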

Moreover, it's not just about diagnosing more patients with PE or deciding who to anticoagulate. It's not even about the very real radiation risks. As crowding becomes the norm in many EDs, many departments operate at full capacity much of the time. At both of the hospitals where I work, the CT scanners run non-stop. So each extra CT means that other patients have to wait for their own CT scans, each of them requiring nursing care, physician attention, and all of the other trappings of being in the department. In the age of ED crowding, everything has an opportunity cost. Crowding leads to overworked nurses, frustrated physicians, and all sorts of bad outcomes, including worse pain management; delays to antibiotics, thrombolytics, and PCI; missed quality measures; increased mortality; and decreased patient satisfaction. If an extra CT means 12 patients wait for an extra half hour (my estimate), that's 6 more patient-in-ED-hours, with a conservative estimate of 2% increased mortality for every 6 extra hours patients spend in the ED. Does that mean every 50 extra CTs could mean that 1 more patient dies from crowding?**
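To make that back-of-the-envelope arithmetic explicit, here is the same calculation spelled out in Python; every input is one of the rough guesses from the paragraph above, not measured data:

    # Back-of-the-envelope only; every number below is a rough assumption.
    patients_delayed_per_ct = 12      # guess: patients who wait because of one extra CT
    delay_per_patient_hours = 0.5     # guess: extra wait per delayed patient
    extra_patient_hours = patients_delayed_per_ct * delay_per_patient_hours   # 6
    mortality_increase_per_6h = 0.02  # conservative guess: ~2% per 6 extra ED hours
    extra_deaths_per_ct = (extra_patient_hours / 6) * mortality_increase_per_6h  # 0.02
    print(round(1 / extra_deaths_per_ct))  # 50: roughly one extra death per 50 extra CTs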

I'm not sure what to do with all of this. Concern about PE seems genuine. (Boehringer Ingelheim developed dabigatran for primary stroke prevention in afib, not to treat PE. Compare about 500 cases of afib per 100,000 vs. 112 cases of PE per 100,000.) PE can be fatal. CTPA can detect PE. Initiating treatment may save a patient's life, but anticoagulation probably has limited benefits and carries very real risks. There don't seem to be easy answers.

But some things are clear:
We irradiate far too many people, and some people are destined for a life of eating rat poison.


*I think it's either PE or PEs, but not PE's.
**This is admittedly a very rough, back-of-the-envelope estimate that I made up. Perhaps in the future it can be quantified more accurately.


UPDATE 7/21/2012:
2 interesting recent articles:

Joe Lex on LITFL's R&R:

Venkatesh AK, Kline JA, Courtney DM, Camargo CA, Plewa MC, Nordenholz KE, Moore CL, Richman PB, Smithline HA, Beam DM, Kabrhel C. Evaluation of pulmonary embolism in the emergency department and consistency with a national quality measure: quantifying the opportunity for improvement. Arch Intern Med. 2012 Jul 9;172(13):1028-32.

Ryan Radecki discusses:

Prasad V, Rho J, Cifu A. The Diagnosis and Treatment of Pulmonary Embolism: A Metaphor for Medicine in the Evidence-Based Medicine Era. Arch Intern Med. 2012 Apr 2. [Epub ahead of print]

May 24, 2011

Skip lecture.

Angry Birds © 2011 Rovio Entertainment Ltd
We've been talking lately about resident conference attendance at my shop. It has always seemed to me that sitting a bunch of adults down and reading slides to them is an incredibly poor way to have them learn anything.

As a thought experiment: try to remember even a conference TOPIC from your lecture series last month. Bet you can't.

For what it's worth, I spent most of my second year of medical school snoozing in the back of the lecture hall or doing crossword puzzles with Harold Bach. I studied, but lecture bored me nearly to death.

Since we're all a bunch of nerds (especially Seth) we looked up some data. Hern et al. showed that attendance at conference is poorly correlated with scores on the EM inservice exam. Michelle Lin has a much better discussion of this than I could ever manage over at ALIEM.

Mostly in conference I fiddle with my Blackberry and fantasize about switching to the Droid X. It is so silly to walk off a busy shift with critical patients to go listen to someone read PPT slides about pediatric abdominal pain; I wish the RRC would realize this and halve conference time.

The last conference I gave was Weingart-inspired: I talked about my practice with vascular access (a little bit presumptuous for a PGY3) and said some controversial things such as:
  • femoral lines suck (actually this is just true)
  • peripheral dopamine sucks
  • you can do subclavians in patients with mild to moderate coagulopathy without worrying too much
  • arterial lines can be quite dirty
Not everyone agreed with me, but we spent most of the time arguing back and forth. I didn't see anyone snoozing. I also used Prezi, which you should check out, instead of PowerPoint.

When I am king* I will ban the following:

  • epidemiology slides
  • progressive maps showing how fat/diabetic/hypertensive/old the US population is getting
  • pathophysiology slides that take up more than 0.04% of the total talk
  • attempts to make boring lectures clinically relevant by having mini-cases
  • lectures about basic life skills, e.g. how to talk with people on the phone

-MJP


*Seth will be court jester, housing czar, Defense Minister, and ambassador to Laos.


from seth:
To combat conference atrophy, we've implemented an Asynchronous Learning curriculum, where residents will review book chapters, podcast episodes, or other resources, summarize them online, and create a quiz. Residents can get credit either by posting their own content or by taking others' quizzes. Not a huge step, but the goal is to encourage and reward the use of some of the great educational resources out there while cutting down on didactic time, theoretically leaving the remaining traditional conference higher-yield (Grand Rounds, resident M&Ms, our new-and-improved journal clubs, and trauma/critical care talks).

Also, G. McMurtrie Godley's shoes will be tough to fill.

Finally, last month: hand emergencies & burns.




Hern HG, et al. Conference attendance does not correlate with emergency medicine residency in-training examination scores. Acad Emerg Med. 2009;16:S63-S66.

May 7, 2011

When quality measures go bad

This person is using two clipboards to assess quality measures.
Let me start out by saying that this is a shameless plug for one of my own research projects. Actually, the real credit for this study goes to Erin Quattromani, one of my co-residents, and Emilie Powell, a fellow at my program.

Basically we looked at a national inpatient sample of adults admitted for pneumonia. Erin and Emilie (with help from several other attendings) stratified hospitals according to their performance on the Centers for Medicare & Medicaid Services (CMS) quality measure for getting appropriate antibiotics to pneumonia patients within 6 hours.

Unsurprisingly for those familiar with the controversy surrounding this quality measure, we didn't find much difference in mortality.

Now there are plenty of limitations, and you could pick apart the methodology until you were blue in the face. All we really said was:
Hospitals that are the best at getting antibiotics within 6 hours are not the hospitals with the lowest inpatient mortality.
Again, pick it apart to your heart's content.

That said, I did some lit searching into how this rule came about and found that its scientific basis is shaky at best. For the best breakdown of this, see the Yu and Wyer paper I cite at the bottom.

Basically there were these two big studies of old, sick Medicare patients published in JAMA and in the Archives of Internal Medicine. They showed a trend toward increased survival with early antibiotics.

So there you go: because in 2004 someone showed that 84-year-olds with cancer and pneumonia do better with early antibiotics, hospitals get dinged when you don't get antibiotics into your otherwise healthy 45-year-old male on time.

The truly shocking thing is that there is decent research to suggest that attempted compliance with this silly rule has led to diagnostic errors, overtesting, and (worst of all) administration of antibiotics to patients who didn't need them.

Don't just do something-- stand there (at least until you know the damned diagnosis!).

-MJP

seth says:
One of the key differences between the data that suggest early antibiotics may be good and the CMS rule is that the studies were done with ED diagnoses of pneumonia, whereas CMS dings hospitals for missing the 4 hour window on patients with a discharge diagnosis of pneumonia.  
I can speculate that the patients who are not diagnosed initially with pneumonia may have more complex presentations and therefore might be sicker and more likely to die, but the truth is that no one knows.
Also, the PORT scoring system (or Pneumonia Severity Index) is a great tool to estimate mortality associated with pneumonia, but a lot of studies (and clinicians) use it as an admission criterion, although it has not been prospectively validated as a disposition tool.
-nst

References:
Quattromani E, Powell E, et al. Hospital-reported Data on the Pneumonia Quality Measure "Time to First Antibiotic Dose" Are Not Associated With Inpatient Mortality: Results of a Nationwide Cross-sectional Analysis. Acad Emerg Med. 2011;18:1-8.

Houck PM, Bratzler DW, Nsa W, et al. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community acquired pneumonia. Arch Intern Med. 2004;164:637-644.

Meehan TP, Fine MJ, Krumholz HM, et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA. 1997;278:2080-2084.

Yu KT, Wyer PC. Evidence behind the 4 hour rule for initiation of antibiotic therapy in CAP. Ann Emerg Med. 2008;51:651-662.

May 2, 2011

Unnecessary testing

In January, Emergency Medical Abstracts reviewed "Commonly prescribed medications and potential false-positive urine drug screens" by Brahm in the American Journal of Health-System Pharmacy.

It's not news that urine tox screens are useless; that's what I've been taught since medical school (admittedly, not all that long ago).

Emergency textbooks such as Tintinalli's (I checked) and PEER VII, the major board review for EM residents, agree. To me, the issue seems pretty much settled.

However, the people we admit our patients to -- internists and psychiatrists -- (anecdotally) always ask for the tox screen. The seemingly compelling argument from psychiatrists is that knowing what drugs were abused now helps with long-term care, i.e. they can catch their patients lying about drug abuse.* Internists seem to be genuinely interested in finding the cause of the patient's current illness, so that other causes can be lowered in the differential.

Although these goals are laudable, unfortunately, as the key table below from the Brahm paper highlights, tox screens are terrible tests. Not surprisingly, most labs put disclaimers in their u-tox results, such as:
This assay provides a preliminary qualitative analytical test result. A more specific alternate chemical method must be used to obtain a confirmed analytical result, and should be correlated with clinical findings.
Maybe the patient was smoking some marijuana laced with PCP while popping phenobarbital; or, maybe he had a headache and took some legal, over-the-counter ibuprofen.
My general practice has been to order the urine tox to placate the inpatient teams as I don't want to start fights all the time, but I clarify with the nurses that obtaining a urine sample is their lowest priority.

The point here isn't that tox screens or internists are useless, merely that we should know how good our tests are when we decide whether or not to use them. 

Routine "screening"** labs are no different.

Unfortunately, my department recently came to an agreement with our Internal Medicine department requiring standard labs to result before patients can be listed for admission. The general idea is that there are a few levels of care -- a nurse practitioner/hospitalist floor service, a teaching/resident floor service, a handful of stepdown beds, and the ICU. Some patients are too sick for the NP/hospitalist service, and routine labs may identify a subset of those.

But a large group of patients are clearly sick enough for the teaching service yet obviously not sick enough to need a stepdown or unit bed -- and routine labs are not going to identify any patients who might be. An INR of 7 or surprising new renal failure does not mandate a stepdown bed. Yet these patients now, by interdepartmental policy, cannot be listed for admission until their routine labs result, even though those results will not alter their disposition. This means that patients generally wait at least an extra hour before they can be admitted, leading to further ED boarding, which leads to, among other things, higher mortality. This is another unfortunate example of misapplied testing leading to worse patient care.

-nst
 


*This argument was presented by a psychiatry resident at a joint EM-Psych resident conference last year.
 
**The term "screening" for routine labs is broadly misapplied, occasionally by myself. A screening test should be highly sensitive (even at the cost of specificity) so that it catches essentially all possible instances of disease; routine labs (e.g., chemistry and complete blood count) are incredibly insensitive for disease processes.


Reference:
Brahm NC, Yeager LL, Fox MD, Farmer KC, Palmer TA. Commonly prescribed medications and potential false-positive urine drug screens. Am J Health Syst Pharm. 2010 Aug 15;67(16):1344-50.


most benzos are missed


summary, aka why urine tox is fairly useless