Which facts count?

Students who spend a month with me always want a session on topical steroids, that great undiscovered world they have to know but dread to explore. They’ve all seen those tables of steroid potency based on the rabbit-ear bioassay. These run to long columns (or several pages) of small print ordering the steroid universe from the aristocracy of Class 1 ("Supernovacort" 0.015%) down through the midrange ("Mediocricort" 0.026% ointment is Class 2, while Mediocricort 0.026% cream is only Class 3), down to the humble "Trivialicort" 32%, which on a good day is just a measly Class 6. All those multisyllabic names and numbers and classes bewilder and intimidate the poor kids. Even their earnest medical-student memorization skills leave them in despair of mastering all this stuff.

I ask them to ponder a mini-scenario: Your patient was given a topical steroid cream. He says it didn’t work. List all possible explanations.

The next day we discuss their answers. Most students manage to come up with several types of reasons. Maybe the steroid didn’t work because the diagnosis was wrong. (It was a fungus.) Perhaps the condition is inherently unresponsive (like knee psoriasis). Sometimes, the patient didn’t use the cream.

Then we break down that third category. Why would a patient not use the cream? Reasons include:

• The tube was too small (15 g for a full-body rash).

• The steroid did work, but the patient thought it didn’t because the eczema came back. (Eczema comes back.)

• The patient was afraid of steroids. ("I heard they thin your skin.")

I end our session by noting that this third group (the patient didn’t use the cream) is a) intellectually uninteresting; and b) the reason behind most cases of "the steroid didn’t work." By contrast, using the wrong steroid – as defined by the fine-grained distinctions on steroid potency tables – is rarely the difference between success and failure.

I give students a list of four generics, from weak to strong, and advise them not to clutter up their brains with any others. (Since most of them are headed for primary care, those four will be plenty, freeing brain space for board memorization.)

Ever since medical school, which is a rather long time ago by now, I’ve wondered why some things are taught and others left out. More particularly, why are some kinds of facts thought to be important (the ones you can quantify or put numbers next to, for instance) and others are too squishy to mention (such as knowing what the patient thinks about the treatment)?

After all, knowing what a patient thinks about what a treatment does – how it might harm them, and what a treatment "working" really means – has a lot to do with whether the treatment is used properly, or used at all. Why isn’t that important? Because you can’t put it into a table laced with decimal points and percentages?

The tendency to reduce everything to what you can measure has been around for a long time but seems to be getting worse. I read the other day about something called the Human Connectome Project, an effort to produce data to help answer the central question, "How do differences between you and me and how our brains are wired up, relate to differences in our behaviors, our thoughts, our emotions, our feelings, and our experiences?"

I am not the first to wonder whether functional MRIs, with those gaily colored snapshots of the brain in action, really tell us more about how the brain works than does talking with the people who own those brains. The assumption seems to be that pictures of brain circuits are "real," whereas mere talk is mush, not the stuff of science, whose fruits we physicians are supposed to apply. I am wired, therefore I am.

Suppose a patient thinks that topical steroids thin the skin? Suppose she expects your eczema cream to make the rash go away once and for all, and when it comes back, she takes that as proof that it "didn’t work" and stops using it because it’s clearly worthless? Would those opinions show up on a color photo of her amygdala?

Can my patients be the only ones whose opinions about health and disease matter more, and more often, than do the tabulated measures of clinical efficacy?

You know, the real stuff you have to memorize and document, to get in and to get by.

Dr. Rockoff practices dermatology in Brookline, Mass. He is on the clinical faculty at Tufts University School of Medicine, Boston, and has taught senior medical students and other trainees for 30 years. Dr. Rockoff has contributed to the Under My Skin column in Skin & Allergy News since 1997.
