
June 20, 2013

A Graphic Reanalysis of the Leukocyte Count in Acute Appendicitis

The leukocyte count (aka white blood cell count; WBC in North America; WCC in Oz) has fallen out of favor with many, particularly in the diagnosis of acute appendicitis. Most EM docs I know love regaling each other with stories of the CT showing acute appy and the surgeon asking "what's the white count?" It always struck me as a bit like standing in the rain and asking about the weather report.

(In all seriousness, I have a lot of respect for surgeons -- most that I know are very smart and incredibly hard-working; please don't cut me with your surgery knifey-thing).

A fun little paper that Scott Weingart has posted on his webtext shows likelihood ratios* (LR+) for appendicitis at different WBC levels. Their sample is not huge, but it's still nice.

The main conclusion is that WBC is only really useful in diagnosing appendicitis if it's below 7 or greater than 17, which is pretty much nobody.

Here is a graphic representation I put together of their results:

And here is their table with all the actual values:

This isn't to say that every patient with a possible appy should forego a CBC or that they all need CTs. See, for example, Choosing Wisely.

The lesson is that we should know the value -- or lack thereof -- of the tests that we do.

What's the deal with the title?

*Explanation of LRs that I wrote for The NNT a while back:

LR, pretest probability and posttest (or posterior) probability are daunting terms that describe simple concepts that we all intuitively understand.

Let's start with pretest probability: that's just a fancy term for my initial impression, before we perform whatever test it is that we're going to use.

For example, if a patient with prior stents comes in sweating and clutching his chest in agony, I have a pretty high suspicion that he's having an MI -- let's say, 60%. That is my pretest probability.

He immediately gets an ECG (known here as the "test") showing an obvious STEMI.

Now, I know there are some STEMI mimics, so I'm not quite 100%, but based on my experience I'm 99.5% sure that he's having an MI right now. This is my posttest probability - the new impression I have that the patient has the disease after we did our test.

And the likelihood ratio? That's just the name for the statistical tool that converted the pretest probability to the posttest probability -- it's just a mathematical description of the strength of that test.

According to an online calculator, the LR+ that got me from 60% to 99.5% is 145, which is about as high an LR as you can get (and, as it happens, the actual LR for an emergency physician who thinks an ECG shows an obvious STEMI).
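For the curious, here's what that calculator is doing under the hood -- a minimal sketch in Python, using the numbers from the example above (the helper functions are just for illustration):

```python
# A minimal sketch of what the online calculator is doing: likelihood ratios
# act on odds, not probabilities, so convert, multiply, and convert back.
# Numbers are the ones from the example above (60% pretest, LR+ of 145).

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

pretest_p = 0.60        # my initial impression
lr_plus = 145           # the "obvious STEMI on ECG" LR+ from the example

pretest_odds = prob_to_odds(pretest_p)      # 0.60 / 0.40 = 1.5
posttest_odds = pretest_odds * lr_plus      # 1.5 * 145 = 217.5
posttest_p = odds_to_prob(posttest_odds)    # 217.5 / 218.5 ≈ 0.995

print(f"Posttest probability: {posttest_p:.1%}")   # 99.5%
```

Multiplying odds by the LR (rather than multiplying the probabilities directly) is what keeps the answer from drifting above 100%.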

Glossary Entry: Too Much Ultrasound

procrUStination (noun):

when residents spend too much time ultrasounding their patients, at the expense of other clinical duties.

The intern's been in with that patient for over an hour...

Don't forget to check out the other glossary entries!

June 17, 2013

Lactate Debate at BroomeDocs

Following the publication of this editorial by Marik & Bellomo, Casey Parker & I discuss the utility of lactate and/or physiology in the severely septic patient.

Broome Docs: The Lactate “Debate” with Dr Seth Trueger

EMCrit's response to Marik & Bellomo

Here's a little bit on the sepsis bundle as a pay-for-performance quality measure from Surviving Sepsis (overview)

June 12, 2013

This is NOT Gambling Advice





An adapted version of this post appears in print & online here in the August 2013 issue of EP Monthly

Subtitle: "Why You Can Stop a Trial Early for Harm but not Benefit"

This builds on a recent Twitter discussion with Jeremy Faust, David Marcus, Minh Le Cong, Pik Mukherji and CKB (and everyone else linked below).

It sounds odd when you first hear about it, but EBM experts say that you should stop a study early if it shows sufficient harm, while stopping a study early because it showed great results is shady.

Why? Well, it's tough to explain, so here's an analogy I came up with*:
Stopping a trial early for benefit is like winning money gambling at a casino. If you've ever won money, why aren't you there right now, winning more?
Take any of the big games where you play against the House: slot machines, blackjack, or craps. I like craps -- it's fun, when you win everybody wins (except the one guy sitting next to the dealer betting wrong), and I hear that it's the best odds in a casino, other than counting cards at blackjack or cheating.

Now I know that the odds are stacked against me. The House has an edge -- it wins something like 51% of the time. As the cliche goes, casinos are not built for me to make money.

But I know that the 51% House advantage is an average over time -- there are fluctuations around it. I saw a great video comparing it to walking a dog on a leash: the person walks in a straight line (the overall trend) while the dog wanders a little this way and a little that way (the variation), but overall they follow the same path.** I'm hoping to catch a little variation in my favor and quit playing before the game regresses back to the mean. And so is the drug company.

I know that overall, I am more likely to lose than win (the drug doesn't work). Now if I win some money in the first hour or two (early benefit) I know it's probably a fluke and not loaded dice (drug that works). I can take my money and walk away (stop trial) or I can keep playing. If I stopped with a few extra dollars in my pocket, would I conclude that I can win at craps (blockbuster drug!)? No -- I know that I just caught a little variation in the overall pattern, but in the long run, craps will cost me money.
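If you'd rather see that in numbers than in analogy, here's a minimal simulation sketch. The win probability and bet counts are my own assumptions (roughly a craps pass-line bet), not anything from an actual trial:

```python
# A minimal sketch of the "catch a little variation, then quit" idea above.
# The numbers are my own assumptions, not the post's: even-money bets with a
# ~49.3% win probability, roughly the craps pass line.

import random

random.seed(1)

WIN_P = 0.493            # slightly worse than a coin flip: the House edge
N_SESSIONS = 100_000     # simulated short trips to the casino

def session(n_bets):
    """Net result (in betting units) after n_bets even-money wagers."""
    return sum(1 if random.random() < WIN_P else -1 for _ in range(n_bets))

# Short sessions: plenty of players walk away ahead...
ahead_early = sum(session(50) > 0 for _ in range(N_SESSIONS)) / N_SESSIONS

# ...but the long run follows the House edge.
long_runs = [session(10_000) for _ in range(1_000)]
avg_long = sum(long_runs) / len(long_runs)

print(f"Fraction ahead after 50 bets:  {ahead_early:.1%}")      # roughly 40%
print(f"Average net after 10,000 bets: {avg_long:+.0f} units")  # roughly -140
```

Run it and roughly 40% of the short sessions end up ahead, while the long sessions reliably lose -- that gap between the short run and the long run is exactly what a trial stopped at the first good-looking interim result is exploiting.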

What about if I lose money (harm)? Maybe I can win it back, should I keep playing? The problem is that eventually, if I keep losing money (drug doesn't work), large men will come after my family (drug is really harmful), and I don't want that.
Do not want.

Of course, in a clinical trial, we don't actually know whether or not the therapy works. Clinical trials start from a position of equipoise: we don't think the drug is harmful, but it might benefit patients, and the risk of harm vs the chance of benefit is balanced. But if we show harm early, we lose that equipoise, and we have to stop the trial before we harm the study subjects too much (before burly men show up at my house), knowing that we may have given up on a worthwhile drug but that it was just too risky. While it seems that the two situations are symmetric, beneficence vs maleficence is not a symmetric equation.

Are there times when we should stop a trial early for a huge, obvious benefit? I think so, but only if the study is adequately powered at that point, which it's very unlikely to be, because studies are designed to be powered at the end of the study, not midway through.

Back to the casino: if I walk into a casino, drop a quarter into a slot machine, and on my first try win $1 million, would I conclude that the machine is a winner? What if I win $10,000 on 3 of my first 5 pulls? It would take a big enough benefit over a big enough sample to provide the power needed to end the study early, and that is rare.

At what point do I decide that the machine might be mis-calibrated (the drug works), and I should tell my parents to cash out their 401k and spend it all playing on this machine before the casino catches on (FDA approval)?

In summary: you have to stop early for harm because the study is hurting too many people, but stopping early for benefit isn't allowed, because it tends to claim benefits that don't really exist.

UPDATE June 13, 2013:

Minh Le Cong brought up some good points regarding the ethics of stopping a trial early for benefit, i.e. if your trial is showing good results, is it ethical to withhold treatment from the control group (or others who may benefit)? (Snippets of this conversation.)

My main response is that until the study has acquired adequate power, the trial isn't actually showing good results. This is the essential question in statistics: do the differences that the study shows represent a true difference between the groups, or are they just normal fluctuations in the data?

This is the point I was trying to make in the last few paragraphs above about the slot machine paying off early. Just because you win on the first few pulls (early benefit) does not mean the machine is a winner (true benefit). To quote myself: apparent benefit before adequate power is an illusion confusing our primitive brains.
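Here's a minimal sketch of that illusion in simulation form -- a "trial" of a drug that truly does nothing, peeked at every 50 patients per arm, and "stopped early" the first time a naive test dips under p < 0.05. The event rate, look schedule, and test are all assumptions for illustration:

```python
# A minimal sketch of why peeking is a problem: a "trial" of a drug that truly
# does nothing, analyzed after every 50 patients per arm with a naive
# two-proportion z-test, and "stopped early" the first time p < 0.05.
# The 30% event rate, the look schedule, and the test are all assumptions.

import math
import random

random.seed(1)

TRUE_RATE = 0.30              # identical outcome rate in both arms
LOOKS = range(50, 501, 50)    # interim analyses every 50 patients per arm
N_TRIALS = 5_000

def two_sided_p(x_a, n_a, x_b, n_b):
    """Naive two-proportion z-test, two-sided p-value."""
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (x_a / n_a - x_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

def stopped_early():
    events_a = events_b = 0
    for n in LOOKS:
        # enroll the next 50 patients per arm; both arms share the same true rate
        events_a += sum(random.random() < TRUE_RATE for _ in range(50))
        events_b += sum(random.random() < TRUE_RATE for _ in range(50))
        if n < 500 and two_sided_p(events_a, n, events_b, n) < 0.05:
            return True   # an interim look on a useless drug crossed p < 0.05
    return False

rate = sum(stopped_early() for _ in range(N_TRIALS)) / N_TRIALS
print(f"Null trials crossing p < 0.05 at an interim look: {rate:.1%}")  # well above 5%
```

Even with no real effect, far more than 5% of these simulated trials cross the line at some interim look -- that's the normal fluctuation that formal stopping rules (and waiting for adequate power) are designed to guard against.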

Some others chimed in with some great points as well:


Tessa Davis brings up another great point:

I apologize for being unclear. ALL of these results should be published, whether the trial was stopped early for harm or run to completion with positive, neutral, or negative results. Otherwise we end up with "publication bias" or the "file drawer problem." My discussion above is simply about when to end a trial early, not whether or not to publish the results.


*I would guess that others have had the same idea before, like Newton & Leibniz simultaneously discovering the calculus, and a whole bunch of people simultaneously coming up with the term "FOAMites"

**I tried and failed to find the video. Sorry. 

June 2, 2013

Sharpie?

Scott Weingart got me hooked on carrying a Sharpie (industrial) on shift, primarily for marking neck landmarks on people whose airway may deteriorate. He also advocates using it for marking external landmarks for LPs, before they get distorted by local anesthetic and hidden behind a fenestrated drape.

I've since found a few mostly obvious uses:

1) Cric landmarks

2) LP landmarks

3) Paracentesis: find a pocket of fluid with ultrasound, mark with Sharpie. Then sterilize and tap; no need to fumble with sterile US probe.

4) Outlining cellulitis

Some others:

5) Serial or alternate site ECGs:


6) Quick labeling of syringes (pretty much any sticker will work)


7) Alexander Sammel shared this one: marking added meds

8) Chris Edwards adds: signing kids' casts

Any other ideas?

Updates: 

Per Bryan Kitch's suggestion (and kicking myself for not thinking of it):
Also, I got tired of counting out 20 floor tiles every time I used the Snellen chart on the wall: