Not Kidding: Yellow Dye 5 May Lead to Invisibility
Applying the dye to lab mice made their skin temporarily transparent, allowing Stanford University researchers to observe the rodents’ digestive system, muscle fibers, and blood vessels, according to a study published in Science.
“It’s a stunning result,” said senior author Guosong Hong, PhD, who is assistant professor of materials science and engineering at Stanford University in California. “If the same technique could be applied to humans, it could offer a variety of benefits in biology, diagnostics, and even cosmetics.”
The work drew upon optical concepts first described in the early 20th century to form a surprising theory: Applying a light-absorbing substance could render skin transparent by reducing the chaotic scattering of light as it strikes proteins, fats, and water in tissue.
A search for a suitable light absorber led to FD&C Yellow 5, also called tartrazine, a synthetic color additive certified by the Food and Drug Administration (FDA) for use in foods, cosmetics, and medications.
Rubbed on live mice (after areas of fur were removed using a drugstore depilatory cream), tartrazine rendered skin on their bellies, hind legs, and heads transparent within 5 minutes. With the naked eye, the researchers watched a mouse’s intestines, bladder, and liver at work. Using a microscope, they observed muscle fibers and saw blood vessels in a living mouse’s brain — all without making incisions. Transparency faded quickly when the dye was washed off.
Someday, the concept could be used in doctors’ offices and hospitals, Dr. Hong said.
“Instead of relying on invasive biopsies, doctors might be able to diagnose deep-seated tumors by simply examining a person’s tissue without the need for invasive surgical removal,” he said. “This technique could potentially make blood draws less painful by helping phlebotomists easily locate veins under the skin. It could also enhance procedures like laser tattoo removal by allowing more precise targeting of the pigment beneath the skin.”
From Cake Frosting to Groundbreaking Research
Yellow 5 food dye can be found in everything from cereal, soda, spices, and cake frosting to lipstick, mouthwash, shampoo, dietary supplements, and house paint. Although it’s in some topical medications, more research is needed before it could be used in human diagnostics, said Christopher J. Rowlands, PhD, a senior lecturer in the Department of Bioengineering at Imperial College London, England, where he studies biophotonic instrumentation — ways to image structures inside the body more quickly and clearly.
But the finding could prove useful in research. In a commentary published in Science, Dr. Rowlands and his colleague Jon Gorecki, PhD, an experimental optical physicist also at Imperial College London, noted that the dye could be an alternative to other optical clearing agents currently used in lab studies, such as glycerol, fructose, or acetic acid. The advantages are that the effect is reversible and that the dye works at lower concentrations with fewer side effects. This could broaden the types of studies possible in lab animals, so researchers don’t have to rely on naturally transparent creatures like nematodes and zebrafish.
The dye could also be paired with imaging techniques such as MRI or electron microscopy.
“Imaging techniques all have pros and cons,” Dr. Rowlands said. “MRI can see all the way through the body albeit with limited resolution and contrast. Electron microscopy has excellent resolution but limited compatibility with live tissue and penetration depth. Optical microscopy has subcellular resolution, the ability to label things, excellent biocompatibility but less than 1 millimeter of penetration depth. This clearing method will give a substantial boost to optical imaging for medicine and biology.”
The discovery could improve the imaging depth that equipment can achieve by tenfold, according to the commentary.
Brain research especially stands to benefit. “Neurobiology in particular will have great use for combinations of multiphoton, optogenetics, and tissue clearing to record and control neural activity over (potentially) the whole mouse brain,” he said.
Refraction, Absorption, and The Invisible Man
The dye discovery has distant echoes in H.G. Wells’ 1897 novel The Invisible Man, Dr. Rowlands noted. In the book, a serum makes the main character invisible by changing the light scattering — or refractive index (RI) — of his cells to match the air around him.
The Stanford engineers looked to the past for inspiration, but not to fiction. They turned to a concept first described in the 1920s called the Kramers-Kronig relations, a mathematical principle that links the way a material refracts light to the way it absorbs it. They also read up on the Lorentz oscillator model, which describes how electrons and atoms inside molecules respond to light.
They reasoned that light-absorbing compounds could equalize the differences between the light-scattering properties of proteins, lipids, and water that make skin opaque.
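As a rough sketch of that reasoning, one textbook form of the Kramers-Kronig relations (a simplified illustration, not an equation taken from the paper) ties the refractive index at a given frequency to the absorption spectrum integrated over all frequencies:

```latex
% Refractive index n at frequency \omega, obtained from the absorption
% coefficient \alpha via a Cauchy principal-value integral (\mathcal{P}):
n(\omega) = 1 + \frac{c}{\pi}\,\mathcal{P}\!\int_{0}^{\infty}
            \frac{\alpha(\omega')}{\omega'^{2}-\omega^{2}}\,d\omega'
```

On this reading, a dye that absorbs strongly in the blue, as tartrazine does, raises the refractive index of the surrounding water at the redder wavelengths used for imaging, bringing it closer to that of lipids and proteins and thereby reducing scattering.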
With that, the search was on. The study’s first author, postdoctoral researcher Zihao Ou, PhD, began testing strong dyes to find a candidate. Tartrazine was a front-runner.
“We found that dye molecules are more efficient in raising the refractive index of water than conventional RI-matching agents, thus resulting in transparency at a much lower concentration,” Dr. Hong said. “The underlying physics, explained by the Lorentz oscillator model and Kramers-Kronig relations, reveals that conventional RI matching agents like fructose are not as efficient because they are not ‘colored’ enough.”
What’s Next
Though the dye is already in products that people consume and apply to their skin, medical use is years away. In some people, tartrazine can cause skin or respiratory reactions.
The National Science Foundation (NSF), which helped fund the research, posted a home or classroom activity related to the work on its website. It involves painting a tartrazine solution on a thin slice of raw chicken breast, making it transparent. The experiment should only be done while wearing a mask, eye protection, lab coat, and lab-quality nitrile gloves for protection, according to the NSF.
Meanwhile, Dr. Hong said his lab is looking for new compounds that will improve visibility through transparent skin, removing a red tone seen in the current experiments. And they’re looking for ways to induce cells to make their own “see-through” compounds.
“We are exploring methods for cells to express intensely absorbing molecules endogenously, enabling genetically encoded tissue transparency in live animals,” he said.
A version of this article first appeared on Medscape.com.
FROM SCIENCE
Delayed Bleeding: The Silent Risk for Seniors
This discussion was recorded on August 2, 2024. This transcript has been edited for clarity.
Robert D. Glatter, MD: Today, we’ll be discussing the results of a new study published in The Journal of Emergency Medicine, looking at the incidence of delayed intracranial hemorrhage among older patients taking preinjury anticoagulants who present to the emergency department (ED) with blunt head trauma.
Joining me today is the lead author of the study, Dr. Richard Shih, professor of emergency medicine at Florida Atlantic University. Also joining me is Dr. Christina Shenvi, associate professor of emergency medicine at the University of North Carolina (UNC) Chapel Hill, with fellowship training in geriatric emergency medicine.
Welcome to both of you.
Richard D. Shih, MD: Thanks, Rob.
Christina L. Shenvi, MD, PhD, MBA: Thanks. Pleasure to be here.
ICH Study Methodology
Dr. Glatter: It’s a pleasure to have you. Rich, this is a great study and targeted toward a population we see daily in the emergency department. I want you to describe your methodology, patient selection, and how you went about organizing your study to look at this important finding of delayed intracranial hemorrhage, especially in those on anticoagulants.
Dr. Shih: This all started for our research team when we first read the 2012 Annals of Emergency Medicine paper. The first author was Vincenzo Menditto, and he looked at a group of patients that had minor head injury, were anticoagulated, and had negative initial head CTs.
There were about 100 patients, about 10 of whom did not consent. These were anticoagulated patients with negative initial head CTs. They hospitalized all of them and did a routine second CT at about 24 hours. They also followed them for a week, and it turned out a little over 7% of them had delayed bleeding.
We were wondering how many delayed intracranial hemorrhages we had missed because current practice for us was that, if patients had a good physical exam, their head CT was normal, and everything looked good, we would send them home.
Because of that, a number of people across the country wanted to verify those findings from the Menditto study. We tried to design a good study to answer that question. We happen to have a very large geriatric population in Florida, and our ED census is very high for age over 65, at nearly 60%.
There are two Level I trauma centers in Palm Beach County. We included a second multicenter hospital, and we prospectively enrolled patients. We know the current state of practice is not to routinely do second CTs, so we followed these patients over time and followed their medical records to try to identify delayed bleeding. That’s how we set up our methodology.
Is It Safe to Discharge Patients With Trauma After 24 Hours?
Dr. Glatter: For the bulk of these patients with negative head CTs, it’s been my practice that when they’re stable and they look fine and there’s no other apparent, distracting painful trauma, injuries and so forth, they’re safe to discharge.
The secondary outcome in your study is interesting: the need for neurosurgical intervention in terms of those with delayed intracranial hemorrhage.
Dr. Shih: I do believe that it’s certainly not the problem that Menditto described, which is 7%. There are two other prospective studies that have looked at this issue of delayed bleeding on anticoagulants. Both also showed a relatively low rate of delayed bleeding, between 0.2% and 1.0%. In our study, it was 0.4%.
The difference in the studies is that Menditto and colleagues routinely did 24-hour head CTs. They admitted everybody. For these other studies, routine head CT was not part of it. My bet is that there is a rate of delayed bleeding somewhere in between that seen in the Menditto study and that in all the other studies.
However, talking about significant intracranial hemorrhage, ones that perhaps need neurosurgery, I believe most of them are not significant. There’s some number that do occur, but the vast majority of those probably don’t need neurosurgery. We had 14 delayed bleeds out of 6000 patients with head trauma. One of them ended up requiring neurosurgery, so the answer is not zero, but I don’t think it’s 7% either.
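For readers who want to sanity-check these proportions, here is a minimal sketch in Python. The counts (14 delayed bleeds among roughly 6000 patients) come from the discussion above; the Wilson score interval is a standard method chosen here for illustration, not the analysis the study itself used.

```python
import math

def wilson_interval(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

events, n = 14, 6000   # delayed bleeds among head-trauma patients (as discussed)
rate = events / n      # about 0.23% overall
low, high = wilson_interval(events, n)
print(f"rate = {rate:.2%}, 95% CI {low:.2%} to {high:.2%}")
```

Even the upper bound of such an interval sits far below the 7% figure from the Menditto study, which is the point Dr. Shih is making.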
Dr. Glatter: Dr. Shenvi, I want to bring you into the conversation to talk about your experience at UNC, and how you run things in terms of older patients with blunt head trauma on preinjury anticoagulants.
Dr. Shenvi: Thanks, Rob. I remember when this paper came out showing this 7% rate of delayed bleeding, and the question was, “Should we be admitting all these people?” Partly because of the overwhelming capacity need that would create, it just wasn’t practical to say, “We’re going to admit every patient with a negative head CT to the hospital and rescan them.” That would be hundreds or thousands of patients each year in any given facility.
The other thing is that delayed bleeds don’t always happen just in the first 24 hours. It’s not even a matter of bringing patients into observation for 24 hours, watching them, and rescanning them if they have symptoms. It can occur several days out. That never, in almost any institution that I know of, became standard practice.
The way that it did change my care was to give good return precautions to patients, to make sure they have somebody with them to say, “Hey, sometimes you can have bleeding several days out after a fall, even though your CT scan here today looks perfect,” and to alert them that if they start having severe headaches, vomiting, or other symptoms of intracranial hemorrhage, that they should come back.
I don’t think it ever became standard practice, and for good reason, because that was one study. The subsequent studies that Richard mentioned, pretty quickly on the heels of that initial one, showed a much lower rate of delayed ICH with the caveats that the methodology was different.
Shift in Anticoagulants
Dr. Shenvi: One other big change from that original study, and now to Richard’s study, is the shift in anticoagulants. Back in the initial study you mentioned, it was all warfarin. We know from other studies looking at warfarin vs the direct oral anticoagulants (DOACs) that DOACs have lower rates of ICH after a head injury, lower rates of need for neurosurgical intervention, and lower rates of discharge to a skilled nursing facility after an intracranial hemorrhage.
Across the board, we know that the DOACs tend to do better. It’s difficult to compare newer studies because it’s a different medication. It did inform my practice to have an awareness of delayed intracranial hemorrhage so that I warn patients more proactively.
Dr. Glatter: I haven’t seen a patient on warfarin in years. I don’t know if either of you have, but it’s all DOACs now unless there’s some other reason. That shift is quite apparent.
Dr. Shih: The problem with looking at delayed bleeding for DOACs vs warfarin is the numbers were so low. I think we had 13 people, and seven were in the no-anticoagulant group. The numbers are even lower, so it’s hard to say.
I just wanted to comment on something that Dr. Shenvi said, and I pretty much agree with everything she said. The warfarin-era findings, including that Menditto study, have a carryover effect, and people group DOACs with warfarin in the same way. When a patient with head trauma is brought in, the first thing everyone talks about is, “Oh, they’re on an anticoagulant” or “They’re not on an anticoagulant.” It’s so ingrained.
I believe that, in emergency medicine, we’re pressed for space and time and we’re not as affected by that 24-hour observation. Maybe many of our surgeons will automatically admit those patients.
I haven’t seen a guideline from the United States, but there are two international guidelines. One is from Austria from 2019, and one is from Scandinavia. Both recommended 24-hour observation if you’re on an anticoagulant.
There is a bit of controversy left over with that. Hopefully, as more and more information, like our study, comes out, people will be a little bit clearer about it. I don’t think there’s a need to routinely admit them.
I do want to mention that the Menditto study had such a massive impact on everybody. They pointed out one subgroup (and it’s such a small number of patients). They had seven cases of delayed bleeding; four or five of them were within that 24 hours, and a couple were diagnosed later over the next couple of days.
Of those seven people, four of them had international normalized ratios (INRs) greater than 3. Of those four patients, I’ve heard people talk about this and recommend, “Okay, that’s the subgroup I would admit.” There’s a toss-up with what to do with DOAC because it’s very hard to tell whether there’s an issue, whether there are problems with their dosing, and whatever.
We actually recently looked at that. We have a much larger sample than four: close to 300 patients who were on warfarin. We looked at patients who had INRs below 3 and above 3, and we didn’t show a difference. We still don’t believe that warfarin is a big issue with delayed bleeding.
Should We Be Asking: ‘Are They on Blood Thinners?’
Dr. Shenvi: One of the interesting trends related to warfarin and the DOACs vs no anticoagulant is that, as you mentioned, Dr. Shih, the first question out of people’s mouths or the first piece of information emergency medical services gives you when they come in with a patient who’s had a head injury is, “Are they on blood thinners or not?”
Yet, the paradigm is shifting to say it’s not actually the blood thinners themselves that are giving older patients the higher risk for bleeding; it’s age and other comorbidities.
Certainly, if you’re on an anticoagulant and you start to bleed, your prognosis is much worse because the bleeding doesn’t stop. In terms of who has a bleeding event, there’s much less impact of anticoagulation than we used to think. That, in part, may be due to the change from warfarin to other medications.
This discussion was recorded on August 2, 2024. This transcript has been edited for clarity.
Robert D. Glatter, MD: Today, we’ll be discussing the results of a new study published in The Journal of Emergency Medicine, looking at the incidence of delayed intracranial hemorrhage among older patients taking preinjury anticoagulants who present to the emergency department (ED) with blunt head trauma.
Joining me today is the lead author of the study, Dr. Richard Shih, professor of emergency medicine at Florida Atlantic University. Also joining me is Dr. Christina Shenvi, associate professor of emergency medicine at the University of North Carolina (UNC) Chapel Hill, with fellowship training in geriatric emergency medicine.
Welcome to both of you.
Richard D. Shih, MD: Thanks, Rob.
Christina L. Shenvi, MD, PhD, MBA: Thanks. Pleasure to be here.
ICH Study Methodology
Dr. Glatter: It’s a pleasure to have you. Rich, this is a great study and targeted toward a population we see daily in the emergency department. I want you to describe your methodology, patient selection, and how you went about organizing your study to look at this important finding of delayed intracranial hemorrhage, especially in those on anticoagulants.
Dr. Shih: This all started for our research team when we first read the 2012 Annals of Emergency Medicine paper. The first author was Vincenzo Menditto, and he looked at a group of patients that had minor head injury, were anticoagulated, and had negative initial head CTs.
There were about 100 patients; roughly 10 of them did not consent, but all of the enrolled patients were hospitalized. These were anticoagulated patients with negative initial head CTs. They hospitalized the patients, did a routine second CT at about 24 hours, and also followed them for a week. It turned out that a little over 7% of them had a delayed intracranial hemorrhage.
We were wondering how many delayed intracranial hemorrhages we had missed because current practice for us was that, if patients had a good physical exam, their head CT was normal, and everything looked good, we would send them home.
Because of that, a number of people across the country wanted to verify those findings from the Menditto study, so we tried to design a good study to answer that question. We happen to have a very large geriatric population in Florida, and our ED census for patients over age 65 is very high, at nearly 60%.
There are two Level I trauma centers in Palm Beach County. We included a second multicenter hospital, and we prospectively enrolled patients. We know the current state of practice is not to routinely do second CTs, so we followed these patients over time and followed their medical records to try to identify delayed bleeding. That’s how we set up our methodology.
Is It Safe to Discharge Patients With Trauma After 24 Hours?
Dr. Glatter: For the bulk of these patients with negative head CTs, it's been my practice that when they're stable, they look fine, and there's no other apparent distracting painful trauma or injury, they're safe to discharge.
The secondary outcome in your study is interesting: the need for neurosurgical intervention among those with delayed intracranial hemorrhage.
Dr. Shih: I do believe that it's certainly not the problem that Menditto described, which is 7%. Two other prospective studies have looked at this issue of delayed bleeding on anticoagulants. Both also showed a relatively low rate of delayed bleeding, between 0.2% and 1.0%. In our study, it was 0.4%.
The difference in the studies is that Menditto and colleagues routinely did 24-hour head CTs. They admitted everybody. For these other studies, routine head CT was not part of it. My bet is that there is a rate of delayed bleeding somewhere in between that seen in the Menditto study and that in all the other studies.
However, when talking about significant intracranial hemorrhages, the ones that perhaps need neurosurgery, I believe most of these delayed bleeds are not significant. Some number do occur, but the vast majority probably don't need neurosurgery. We had 14 delayed bleeds out of 6000 patients with head trauma, and one of them ended up requiring neurosurgery, so the answer is not zero, but I don't think it's 7% either.
Dr. Glatter: Dr. Shenvi, I want to bring you into the conversation to talk about your experience at UNC, and how you run things in terms of older patients with blunt head trauma on preinjury anticoagulants.
Dr. Shenvi: Thanks, Rob. I remember when this paper came out showing a 7% rate of delayed bleeding, and the question was, "Should we be admitting all these people?" Partly just because of the overwhelming capacity that would require, it wasn't practical to say, "We're going to admit every patient with a negative head CT to the hospital and rescan them." That would be hundreds or thousands of patients each year in any given facility.
The other thing is that delayed bleeds don’t always happen just in the first 24 hours. It’s not even a matter of bringing patients into observation for 24 hours, watching them, and rescanning them if they have symptoms. It can occur several days out. That never, in almost any institution that I know of, became standard practice.
The way that it did change my care was to give good return precautions to patients, to make sure they have somebody with them to say, “Hey, sometimes you can have bleeding several days out after a fall, even though your CT scan here today looks perfect,” and to alert them that if they start having severe headaches, vomiting, or other symptoms of intracranial hemorrhage, that they should come back.
I don’t think it ever became standard practice, and for good reason, because that was one study. The subsequent studies that Richard mentioned, pretty quickly on the heels of that initial one, showed a much lower rate of delayed ICH with the caveats that the methodology was different.
Shift in Anticoagulants
Dr. Shenvi: One other big change from that original study, and now to Richard’s study, is the shift in anticoagulants. Back in the initial study you mentioned, it was all warfarin. We know from other studies looking at warfarin vs the direct oral anticoagulants (DOACs) that DOACs have lower rates of ICH after a head injury, lower rates of need for neurosurgical intervention, and lower rates of discharge to a skilled nursing facility after an intracranial hemorrhage.
Across the board, we know that the DOACs tend to do better. It’s difficult to compare newer studies because it’s a different medication. It did inform my practice to have an awareness of delayed intracranial hemorrhage so that I warn patients more proactively.
Dr. Glatter: I haven’t seen a patient on warfarin in years. I don’t know if either of you have, but it’s all DOACs now unless there’s some other reason. That shift is quite apparent.
Dr. Shih: The problem with looking at delayed bleeding for DOACs vs warfarin is that the numbers were so low. I think we had 13 people with delayed bleeds, and seven of them were in the no-anticoagulant group, so the numbers in each subgroup are even lower. It's hard to say.
I just wanted to comment on something that Dr. Shenvi said, and I pretty much agree with everything that she said. Anticoagulants and warfarin, and that Menditto study, have a carryover effect. People group DOACs with warfarin similarly. When a patient is brought in, the first thing they talk about with head trauma is, “Oh, they’re on an anticoagulant” or “They’re not on an anticoagulant.” It’s so ingrained.
I believe that, in emergency medicine, we’re pressed for space and time and we’re not as affected by that 24-hour observation. Maybe many of our surgeons will automatically admit those patients.
I haven’t seen a guideline from the United States, but there are two international guidelines. One is from Austria from 2019, and one is from Scandinavia. Both recommended 24-hour observation if you’re on an anticoagulant.
There is still a bit of controversy left over from that. Hopefully, as more information, like that from our study, comes out, people will be a little bit clearer about it. I don't think there's a need to routinely admit these patients.
I do want to mention that the Menditto study had such a massive impact on everybody. They pointed out one subgroup (and it’s such a small number of patients). They had seven cases of delayed bleeding; four or five of them were within that 24 hours, and a couple were diagnosed later over the next couple days.
Of those seven people, four had international normalized ratios (INRs) greater than 3. Regarding those four patients, I've heard people talk about this and recommend, "Okay, that's the subgroup I would admit." With DOACs, it's a toss-up because it's very hard to tell whether there's an issue or a problem with their dosing.
We actually recently looked at that. We have a much larger sample than four: close to 300 patients who were on warfarin. We looked at patients who had INRs below 3 and above 3, and we didn’t show a difference. We still don’t believe that warfarin is a big issue with delayed bleeding.
Should We Be Asking: ‘Are They on Blood Thinners?’
Dr. Shenvi: One of the interesting trends related to warfarin and the DOACs vs no anticoagulant is that, as you mentioned, Dr. Shih, the first question out of people's mouths, or the first piece of information emergency medical services gives you when they come in with a patient who's had a head injury, is, "Are they on blood thinners or not?"
Yet, the paradigm is shifting to say it’s not actually the blood thinners themselves that are giving older patients the higher risk for bleeding; it’s age and other comorbidities.
Certainly, if you’re on an anticoagulant and you start to bleed, your prognosis is much worse because the bleeding doesn’t stop. In terms of who has a bleeding event, there’s much less impact of anticoagulation than we used to think. That, in part, may be due to the change from warfarin to other medications.
Some of the experts I’ve talked to who have done the research on this have said, “Well, actually, warfarin was more of a marker for being much older and more frail, because it was primarily prescribed to older patients who have significant heart disease, atrial fibrillation, and so on.” It was more a marker for somebody who is at risk for an intracranial hemorrhage. There are many changes that have happened in the past 10 years with medications and also our understanding.
Challenges in Patient Follow-up
Dr. Glatter: That’s a great point. One thing, Rich, I want to ask you about is in terms of your proxy outcome assessment. When you use that at 14 and 60 days with telephone follow-up and then chart review at 60 and 90 days (because, obviously, everyone can’t get another head CT or it’s difficult to follow patients up), did you find that worked out well in your prospective cohort study, in terms of using that as a proxy, so to speak?
Dr. Shih: I would say to a certain extent. Unfortunately, we don’t have access to the patients to come back to follow up all of them, and there was obviously a large number of patients in our study.
The next best thing was that we had dedicated research assistants calling all of the patients at 14 days and 60 days. I’ve certainly read research studies where, when they call them, they get 80%-90% follow-up, but we did not achieve that.
I don’t know if people are more inundated with spam phone calls now, or the older people are just afraid of picking up their phone sometimes with all the scams and so forth. I totally understand, but in all honesty, we only had about a 30%-35% follow-up using that follow-up pathway.
Then the proxy pathway was to look at their charts at 60 and 90 days. Also, we looked at the Florida death registry, which is pretty good, and then finally, we had both Level I trauma centers in the county that we were in participating. It’s standard practice that if you have an intracranial hemorrhage at a non–Level I trauma center, you would be transferred to a Level I trauma center. That’s the protocol. I know that’s not followed 100% of the time, but that’s part of the proxy follow-up. You could criticize the study for not having closer to 90% actual contact, but that’s the best we could do.
Dr. Glatter: I think that's admirable. The paradigm you described certainly allows the reader to understand the difficulty of assessing patients who don't get a follow-up head CT, and hardly anyone does that, as we know.
To your point of having both Level I trauma centers in the county, that makes it pretty secure. If we’re going to do a study encompassing a similar type of regional aspect, it would be similar.
Dr. Shenvi: I think your proxies, to your credit, were as good as you can get. You can never get a 100% follow-up, but you really looked at all the different avenues by which patients might present, either in the death registry or a Level I center. Well done on that aspect.
Determining When to Admit Patients for Observation
Dr. Glatter: In terms of admissions: You admit a patient for observation out of concern for delayed bleeding, and then you hear back that the patient should not have been admitted because they had a negative head CT.
It's interesting. Based on your study, maybe the insurers will start looking at this in some capacity: Because delayed bleeding is so infrequent, admitting someone for that reason alone would be declined. Do you see that being an issue? In other words, do you see this leading to a pattern in terms of the payers?
Dr. Shih: Certainly, you could interpret it that way, and that would be unfortunate. The [incidence of] delayed bleeding is definitely not zero. That’s the first thing.
The second thing is that when you're dealing with an older population, having some sense that they're not doing well is an important contributor to fully assessing what's going on: whether they have a bleed, whether they're at risk for falling again and hitting their head and causing a second bleed, and whether they can manage the activities of daily living. There really should be some room for a physician to say, "They just got here, and we don't know them that well. There's something that bothers me about this person," and to have the ability to watch them for at least another 24 hours. That's how I feel.
Dr. Shenvi: In my location, it would be difficult to try to admit somebody purely for observation for delayed bleeding. I think we would get a lot of pushback on that. The reasons I might admit a patient after a fall with a negative head CT, though, are all the things that, Rob, you alluded to earlier — which are, what made them fall in the first place and were they unable to get up?
I had this happen just this week. A patient who fell couldn't get off the ground for 12 hours, and so now she's dehydrated and delirious with mild rhabdomyolysis. You end up admitting these patients either for the sequelae of the fall that are unrelated to intracranial hemorrhage, or because they are so debilitated and deconditioned that they cannot take care of themselves and need physical therapy. Often, we will have physical and occupational therapists see them in the ED during business hours and help assess whether they are safe to go home or likely to fall again. That can give more evidence for the need for admission.
Dr. Glatter: To bring artificial intelligence into this discussion: There are algorithms out there that say, "Push a button and the patient's safe for discharge." Well, this argues for clinical gestalt and a human being making the assessment. These predictive models are coming soon, and in some sense they're already here, but again, we have to use human clinical judgment.
Dr. Shih: I agree.
Advice for Primary Care Physicians
Dr. Glatter: What return precautions do you discuss with patients who’ve had blunt head trauma that maybe had a head CT, or even didn’t? What are the main things we’re looking for?
Dr. Shenvi: What I usually tell people is if you start to have a worse headache, nausea or vomiting, any weakness in one area of your body, or vision changes, and if there’s a family member or friend there, I’ll say, “If you notice that they’re acting differently or seem confused, come back.”
Dr. Shih: I agree with what she said, and I'm also going to add one thing. The most important part is trying to prevent a subsequent fall. We know that when patients have fallen and present to the ED, they're at even higher risk for falling and reinjuring themselves, and that's a population that's already at risk.
One of the secondary studies that we published out of this project was looking at follow-up with their primary care physicians, and there were two things that we wanted to address. The first was, how often did they do it? Then, when they did do it, did their primary care physicians try to address and prevent subsequent falls?
Both answers are actually bad. Amazingly, just over 60% followed up.
In some of our subsequent research, because we're in the midst of a randomized controlled trial involving home visits, when we initially see these individuals who have fallen, they'll schedule a home visit with us. Then a week or two later, when the visit approaches, many of them cancel because they think, Oh, that was a one-off and it's not going to happen again. Part of the problem is the patients themselves: Many believe they just slipped and fell, that it won't happen again, or that they're not prone to falling.
The second issue was that when patients did go to a primary care physician, we found that some primary care physicians believe falling and getting injured is just part of the normal aging process. A percentage of them don't assess fall risk or initiate fall-prevention treatments or programs.
I try to take that time to tell them that this is very common in their age group, and believe it or not, a fall from standing is the way people really injure themselves, and there may be ways to prevent subsequent falls and injuries.
Dr. Glatter: Absolutely. Do you find that their medications are a contributor in some sense? Say they're on an antihypertensive, have issues with orthostasis, or had a new medication added in the last week.
Dr. Shenvi: It’s all of the above. Sometimes it’s one thing, like they just started tamsulosin for their kidney stone, they stood up, they felt lightheaded, and they fell. Usually, it’s multifactorial with some changes in their gait, vision, balance, reflex time, and strength, plus the medications or the need for assistive devices. Maybe they can’t take care of their home as well as they used to and there are things on the floor. It’s really all of the above.
‘Harder to Unlearn Something Than to Learn It’
Dr. Glatter: Would either of you like to add any additional points to the discussion or add a few pearls?
Dr. Shenvi: This just highlights the challenge that it's harder to unlearn something than to learn it. One study may not have been looking at quite what we needed it to, or practice and prescribing patterns may have changed since, so it's no longer really relevant.
The things that we learned from that, or the fears that we instilled in our minds of, Uh oh, they could go home and have delayed bleeding, are much harder to unlearn, and it takes more studies to unlearn that idea than it did to actually put it into place.
I'm glad that your team has done this much larger, prospective study, which hopefully will reduce the concern about this entity.
Dr. Shih: I appreciate that segue. It is amazing that, for paramedics and medical students, the first thing out of their mouth is, “Are they on an anticoagulant?”
In terms of the risk of developing an intracranial hemorrhage, I think it's much less than the weight we've put on it before. However, I believe that if these patients do bleed, the bleeds are worse. It's kind of a double-edged sword. It's still an important factor, but it shouldn't trigger the Oh my gosh, they're on an anticoagulant reaction that everybody has.
No. 1 Cause of Traumatic Injury Is a Fall from Standing
Dr. Glatter: These are obviously ground-level falls in most patients, not motor vehicle crashes. That's an important aspect of the population you looked at, and it should be stated clearly.
Dr. Shih: I've been a program director for over 20 years, and geriatrics is not well taught in the curriculum. It's astonishing to many of our trainees, and to emergency physicians in general, that the number-one cause of traumatic injury is a fall from standing.
Certainly, we get patients coming in the trauma center like a 95-year-old person who’s on a ladder putting up his Christmas lights. I’m like, oh my God.
In the broader population, it's closer to 90%, but in our study, 80% of the patients fell from standing. That's the mechanism that causes these bleeds and these major injuries.
Dr. Shenvi: That’s reflective of what we see, so it’s good that that’s what you looked at also.
Dr. Glatter: Absolutely. Well, thank you both. This has been a very informative discussion. I appreciate your time, and our readers will certainly benefit from your knowledge and expertise. Thank you again.
Dr. Glatter, assistant professor of emergency medicine at Zucker School of Medicine at Hofstra/Northwell in Hempstead, New York, is a medical adviser for this news organization. He disclosed having no relevant financial conflicts. Dr. Shih is professor of emergency medicine at the Charles E. Schmidt College of Medicine at Florida Atlantic University, Boca Raton. His current grant funding and research interests involve geriatric emergency department patients with head injury and fall-related injury. He disclosed receiving a research grant from the Florida Medical Malpractice Joint Underwriting Association Grant for Safety of Health Care Services. Dr. Shenvi, associate professor of emergency medicine at the University of North Carolina at Chapel Hill, disclosed ties with the American College of Emergency Physicians, Institute for Healthcare Improvement, AstraZeneca, and CurvaFix.
A version of this article appeared on Medscape.com.
This discussion was recorded on August 2, 2024. This transcript has been edited for clarity.
Robert D. Glatter, MD: Today, we’ll be discussing the results of a new study published in The Journal of Emergency Medicine, looking at the incidence of delayed intracranial hemorrhage among older patients taking preinjury anticoagulants who present to the emergency department (ED) with blunt head trauma.
Joining me today is the lead author of the study, Dr. Richard Shih, professor of emergency medicine at Florida Atlantic University. Also joining me is Dr. Christina Shenvi, associate professor of emergency medicine at the University of North Carolina (UNC) Chapel Hill, with fellowship training in geriatric emergency medicine.
Welcome to both of you.
Richard D. Shih, MD: Thanks, Rob.
Christina L. Shenvi, MD, PhD, MBA: Thanks. Pleasure to be here.
ICH Study Methodology
Dr. Glatter: It’s a pleasure to have you. Rich, this is a great study and targeted toward a population we see daily in the emergency department. I want you to describe your methodology, patient selection, and how you went about organizing your study to look at this important finding of delayed intracranial hemorrhage, especially in those on anticoagulants.
Dr. Shih: This all started for our research team when we first read the 2012 Annals of Emergency Medicine paper. The first author was Vincenzo Menditto, and he looked at a group of patients that had minor head injury, were anticoagulated, and had negative initial head CTs.
There were about 100 patients, of which about 10 did not consent, but they hospitalized all of these patients. These were anticoagulated patients with negative first head CTs. They hospitalized the patients and then did a routine second CT at about 24 hours. They also followed them for a week, and it turned out a little over 7% of them had delayed bleeding on head CT.
We were wondering how many delayed intracranial hemorrhages we had missed because current practice for us was that, if patients had a good physical exam, their head CT was normal, and everything looked good, we would send them home.
Because of that, a number of people across the country wanted to verify those findings from the Menditto study. We tried to design a good study to answer that question. We happen to have a very large geriatric population in Florida, and our ED census is very high for age over 65, at nearly 60%.
There are two Level I trauma centers in Palm Beach County. We included a second multicenter hospital, and we prospectively enrolled patients. We know the current state of practice is not to routinely do second CTs, so we followed these patients over time and followed their medical records to try to identify delayed bleeding. That’s how we set up our methodology.
Is It Safe to Discharge Patients With Trauma After 24 Hours?
Dr. Glatter: For the bulk of these patients with negative head CTs, it’s been my practice that when they’re stable, they look fine, and there’s no other apparent distracting painful injury, they’re safe to discharge.
The secondary outcome in your study is interesting: the need for neurosurgical intervention in terms of those with delayed intracranial hemorrhage.
Dr. Shih: I do believe that it’s certainly not the problem that Menditto described, which is 7%. There are two other prospective studies that have looked at this issue of delayed bleeding on anticoagulants. Both of these also showed a relatively low rate of delayed bleeding, between 0.2% and 1.0%. In our study, it was 0.4%.
The difference in the studies is that Menditto and colleagues routinely did 24-hour head CTs. They admitted everybody. For these other studies, routine head CT was not part of it. My bet is that there is a rate of delayed bleeding somewhere in between that seen in the Menditto study and that in all the other studies.
However, talking about significant intracranial hemorrhage, ones that perhaps need neurosurgery, I believe most of them are not significant. There’s some number that do occur, but the vast majority of those probably don’t need neurosurgery. We had 14 delayed bleeds out of 6000 patients with head trauma. One of them ended up requiring neurosurgery, so the answer is not zero, but I don’t think it’s 7% either.
Dr. Glatter: Dr. Shenvi, I want to bring you into the conversation to talk about your experience at UNC, and how you run things in terms of older patients with blunt head trauma on preinjury anticoagulants.
Dr. Shenvi: Thanks, Rob. I remember when this paper came out showing this 7% rate of delayed bleeding and the question was, “Should we be admitting all these people?” Partly just from an overwhelming need for capacity that that would bring, it just wasn’t practical to say, “We’re going to admit every patient with a negative head CT to the hospital and rescan them.” That would be hundreds or thousands of patients each year in any given facility.
The other thing is that delayed bleeds don’t always happen just in the first 24 hours. It’s not even a matter of bringing patients into observation for 24 hours, watching them, and rescanning them if they have symptoms. It can occur several days out. That never, in almost any institution that I know of, became standard practice.
The way that it did change my care was to give good return precautions to patients, to make sure they have somebody with them to say, “Hey, sometimes you can have bleeding several days out after a fall, even though your CT scan here today looks perfect,” and to alert them that if they start having severe headaches, vomiting, or other symptoms of intracranial hemorrhage, that they should come back.
I don’t think it ever became standard practice, and for good reason, because that was one study. The subsequent studies that Richard mentioned, pretty quickly on the heels of that initial one, showed a much lower rate of delayed ICH with the caveats that the methodology was different.
Shift in Anticoagulants
Dr. Shenvi: One other big change from that original study, and now to Richard’s study, is the shift in anticoagulants. Back in the initial study you mentioned, it was all warfarin. We know from other studies looking at warfarin vs the direct oral anticoagulants (DOACs) that DOACs have lower rates of ICH after a head injury, lower rates of need for neurosurgical intervention, and lower rates of discharge to a skilled nursing facility after an intracranial hemorrhage.
Across the board, we know that the DOACs tend to do better. It’s difficult to compare newer studies because it’s a different medication. It did inform my practice to have an awareness of delayed intracranial hemorrhage so that I warn patients more proactively.
Dr. Glatter: I haven’t seen a patient on warfarin in years. I don’t know if either of you have, but it’s all DOACs now unless there’s some other reason. That shift is quite apparent.
Dr. Shih: The problem with looking at delayed bleeding for DOACs vs warfarin is the numbers were so low. I think we had 13 people, and seven were in the no-anticoagulant group. The numbers are even lower, so it’s hard to say.
I just wanted to comment on something that Dr. Shenvi said, and I pretty much agree with everything that she said. Anticoagulants and warfarin, and that Menditto study, have a carryover effect. People group DOACs with warfarin similarly. When a patient is brought in, the first thing they talk about with head trauma is, “Oh, they’re on an anticoagulant” or “They’re not on an anticoagulant.” It’s so ingrained.
I believe that, in emergency medicine, we’re pressed for space and time and we’re not as affected by that 24-hour observation. Maybe many of our surgeons will automatically admit those patients.
I haven’t seen a guideline from the United States, but there are two international guidelines. One is from Austria from 2019, and one is from Scandinavia. Both recommended 24-hour observation if you’re on an anticoagulant.
There is a bit of controversy left over with that. Hopefully, as more and more information, like that in our study, comes out, people will be a little clearer about it. I don’t think there’s a need to routinely admit these patients.
I do want to mention that the Menditto study had such a massive impact on everybody. They pointed out one subgroup (and it’s such a small number of patients). They had seven cases of delayed bleeding; four or five of them were within that 24 hours, and a couple were diagnosed later over the next couple days.
Of those seven people, four of them had international normalized ratios (INRs) greater than 3. Of those four patients, I’ve heard people talk about this and recommend, “Okay, that’s the subgroup I would admit.” There’s a toss-up with what to do with DOAC because it’s very hard to tell whether there’s an issue, whether there are problems with their dosing, and whatever.
We actually recently looked at that. We have a much larger sample than four: close to 300 patients who were on warfarin. We looked at patients who had INRs below 3 and above 3, and we didn’t show a difference. We still don’t believe that warfarin is a big issue with delayed bleeding.
Should We Be Asking: ‘Are They on Blood Thinners?’
Dr. Shenvi: One of the interesting trends related to warfarin and the DOACs vs no anticoagulant is that as you mentioned, Dr Shih, the first question out of people’s mouths or the first piece of information emergency medical services gives you when they come in with a patient who’s had a head injury is, “Are they on blood thinners or not?”
Yet, the paradigm is shifting to say it’s not actually the blood thinners themselves that are giving older patients the higher risk for bleeding; it’s age and other comorbidities.
Certainly, if you’re on an anticoagulant and you start to bleed, your prognosis is much worse because the bleeding doesn’t stop. In terms of who has a bleeding event, there’s much less impact of anticoagulation than we used to think. That, in part, may be due to the change from warfarin to other medications.
Some of the experts I’ve talked to who have done the research on this have said, “Well, actually, warfarin was more of a marker for being much older and more frail, because it was primarily prescribed to older patients who have significant heart disease, atrial fibrillation, and so on.” It was more a marker for somebody who is at risk for an intracranial hemorrhage. There are many changes that have happened in the past 10 years with medications and also our understanding.
Challenges in Patient Follow-up
Dr. Glatter: That’s a great point. One thing, Rich, I want to ask you about is in terms of your proxy outcome assessment. When you use that at 14 and 60 days with telephone follow-up and then chart review at 60 and 90 days (because, obviously, everyone can’t get another head CT or it’s difficult to follow patients up), did you find that worked out well in your prospective cohort study, in terms of using that as a proxy, so to speak?
Dr. Shih: I would say to a certain extent. Unfortunately, we didn’t have the access to bring all of the patients back for follow-up, and there was obviously a large number of patients in our study.
The next best thing was that we had dedicated research assistants calling all of the patients at 14 days and 60 days. I’ve certainly read research studies where, when they call them, they get 80%-90% follow-up, but we did not achieve that.
I don’t know if people are more inundated with spam phone calls now, or older people are just afraid of picking up the phone sometimes, with all the scams and so forth. I totally understand, but in all honesty, we only had about a 30%-35% follow-up using that follow-up pathway.
Then the proxy pathway was to look at their charts at 60 and 90 days. Also, we looked at the Florida death registry, which is pretty good, and then finally, we had both Level I trauma centers in the county that we were in participating. It’s standard practice that if you have an intracranial hemorrhage at a non–Level I trauma center, you would be transferred to a Level I trauma center. That’s the protocol. I know that’s not followed 100% of the time, but that’s part of the proxy follow-up. You could criticize the study for not having closer to 90% actual contact, but that’s the best we could do.
Dr. Glatter: I think that’s admirable. Using that paradigm of what you described certainly allows the reader to understand the difficulty in assessing patients that don’t get follow-up head CT, and hardly anyone does that, as we know.
To your point of having both Level I trauma centers in the county, that makes it pretty secure. If we’re going to do a study encompassing a similar type of regional aspect, it would be similar.
Dr. Shenvi: I think your proxies, to your credit, were as good as you can get. You can never get a 100% follow-up, but you really looked at all the different avenues by which patients might present, either in the death registry or a Level I center. Well done on that aspect.
Determining When to Admit Patients for Observation
Dr. Glatter: In terms of admissions: You admit a patient, and then you hear back that the patient should not have been admitted because the head CT was negative, even though you admitted them out of concern for delayed bleeding.
It’s interesting. Maybe the insurers will start looking at this in some capacity based on your study: because delayed bleeding is so infrequent, admitting someone for observation alone might be declined. Do you see that being an issue? In other words, [do you see] this leading to a pattern in terms of the payers?
Dr. Shih: Certainly, you could interpret it that way, and that would be unfortunate. The [incidence of] delayed bleeding is definitely not zero. That’s the first thing.
The second thing is that when you’re dealing with an older population, having some sense that they’re not doing well is an important contributor to trying to fully assess what’s going on — whether or not they have a bleed or whether they’re at risk for falling again and then hitting their head and causing a second bleed, and making sure they can do the activities of daily life. There really should be some room for a physician to say, “They just got here, and we don’t know them that well. There’s something that bothers me about this person” and have the ability to watch them for at least another 24 hours. That’s how I feel.
Dr. Shenvi: In my location, it would be difficult to try to admit somebody purely for observation for delayed bleeding. I think we would get a lot of pushback on that. The reasons I might admit a patient after a fall with a negative head CT, though, are all the things that, Rob, you alluded to earlier — which are, what made them fall in the first place and were they unable to get up?
I had this happen just this week. A patient who fell couldn’t get off the ground for 12 hours, and so now she’s dehydrated and delirious with slight rhabdomyolysis. Then you’re admitting them either for the sequelae of the fall that are not related to the intracranial hemorrhage, or the fact that they are so debilitated and deconditioned that they cannot take care of themselves. They need physical therapy. Often, we will have physical and occupational therapists come see them in the ED during business hours and help make an assessment of whether they are safe to go home or whether they fall again. That can give more evidence for the need for admission.
Dr. Glatter: To bring artificial intelligence into this discussion, there are algorithms out there that say, “Push a button and the patient’s safe for discharge.” Well, this argues for clinical gestalt and a human being making the assessment. These predictive models are coming; they’re going to be here soon, and in some sense they already are. Again, we have to use clinical human judgment.
Dr. Shih: I agree.
Advice for Primary Care Physicians
Dr. Glatter: What return precautions do you discuss with patients who’ve had blunt head trauma that maybe had a head CT, or even didn’t? What are the main things we’re looking for?
Dr. Shenvi: What I usually tell people is if you start to have a worse headache, nausea or vomiting, any weakness in one area of your body, or vision changes, and if there’s a family member or friend there, I’ll say, “If you notice that they’re acting differently or seem confused, come back.”
Dr. Shih: I agree with what she said, and I’m also going to add one thing. The most important part is trying to prevent a subsequent fall. We know that when they’ve fallen and they present to the ED, they’re at even higher risk for falling and reinjuring themselves, and that’s a population that’s already at risk.
One of the secondary studies that we published out of this project was looking at follow-up with their primary care physicians, and there were two things that we wanted to address. The first was, how often did they do it? Then, when they did do it, did their primary care physicians try to address and prevent subsequent falls?
Both of the answers are actually bad. Amazingly, only just over 60% followed up.
In some of our subsequent research (we’re in the midst of a randomized, controlled trial), when we initially see these individuals who have fallen, they’ll schedule a home visit with us. Then, a week or two later, when the home visit comes around, many of them cancel because they think, Oh, that was a one-off and it’s not going to happen again. Part of the problem is the patients themselves, because many of them believe that they just slipped and fell, that it’s not going to happen again, or that they’re not prone to falling.
The second issue was that when patients did go to a primary care physician, we found that some primary care physicians believe that falling and injuring themselves is just part of the normal aging process. A percentage of them don’t assess fall risk or initiate fall prevention treatments or programs.
I try to take that time to tell them that this is very common in their age group, and believe it or not, a fall from standing is the way people really injure themselves, and there may be ways to prevent subsequent falls and injuries.
Dr. Glatter: Absolutely. Do you find that their medications are a contributor in some sense? Say they’re on an antihypertensive, have issues with orthostasis, or a new medication was added in the last week.
Dr. Shenvi: It’s all of the above. Sometimes it’s one thing, like they just started tamsulosin for their kidney stone, they stood up, they felt lightheaded, and they fell. Usually, it’s multifactorial with some changes in their gait, vision, balance, reflex time, and strength, plus the medications or the need for assistive devices. Maybe they can’t take care of their home as well as they used to and there are things on the floor. It’s really all of the above.
‘Harder to Unlearn Something Than to Learn It’
Dr. Glatter: Would either of you like to add any additional points to the discussion or add a few pearls?
Dr. Shenvi: This just highlights how it’s harder to unlearn something than to learn it. One study maybe wasn’t quite looking at what we needed it to, or practice and prescribing patterns have changed, so it’s no longer really relevant.
The things that we learned from that, or the fears that we instilled in our minds of, Uh oh, they could go home and have delayed bleeding, are much harder to unlearn, and it takes more studies to unlearn that idea than it did to actually put it into place.
I’m glad that your team has done this much larger, prospective study and hopefully will reduce the concern about this entity.
Dr. Shih: I appreciate that segue. It is amazing that, for paramedics and medical students, the first thing out of their mouths is, “Are they on an anticoagulant?”
In terms of the risk of developing an intracranial hemorrhage, I think it’s much less than the weight we’ve put on it before. However, I believe if they have a bleed, the bleeds are worse. It’s kind of a double-edged sword. It’s still an important factor, but it doesn’t come with the Oh my gosh, they’re on an anticoagulant that everybody thinks about.
No. 1 Cause of Traumatic Injury Is a Fall from Standing
Dr. Glatter: These are obviously ground-level falls in most patients and not motor vehicle crashes. That’s an important part in the population that you looked at that should be mentioned clearly.
Dr. Shih: It’s astonishing. I’ve been a program director for over 20 years, and geriatrics is not well taught in the curriculum. It’s astonishing for many of our trainees and emergency physicians in general that the number-one cause for traumatic injury is a fall from standing.
Certainly, we get patients coming into the trauma center, like a 95-year-old person who’s on a ladder putting up his Christmas lights. I’m like, oh my God.
For the vast majority, it’s closer to 90%, but in our study, for the patients we looked at, it was 80% who fell from standing. That’s the mechanism that causes these bleeds and these major injuries.
Dr. Shenvi: That’s reflective of what we see, so it’s good that that’s what you looked at also.
Dr. Glatter: Absolutely. Well, thank you both. This has been a very informative discussion. I appreciate your time, and our readers will certainly benefit from your knowledge and expertise. Thank you again.
Dr. Glatter, assistant professor of emergency medicine at Zucker School of Medicine at Hofstra/Northwell in Hempstead, New York, is a medical adviser for this news organization. He disclosed having no relevant financial conflicts. Dr. Shih is professor of emergency medicine at the Charles E. Schmidt College of Medicine at Florida Atlantic University, Boca Raton. His current grant funding and area of research interest involves geriatric emergency department patients with head injury and fall-related injury. He disclosed receiving a research grant from The Florida Medical Malpractice Joint Underwriting Association Grant for Safety of Health Care Services. Dr. Shenvi, associate professor of emergency medicine at the University of North Carolina at Chapel Hill, disclosed ties with the American College of Emergency Physicians, Institute for Healthcare Improvement, AstraZeneca, and CurvaFix.
A version of this article appeared on Medscape.com.
High-Frequency Electric Nerve Block Shows Promise in Postamputation Pain Management
TOPLINE:
High-frequency electric nerve block showed promise in managing chronic postamputation pain in a new study, presenting a potential new therapeutic option for amputees.
METHODOLOGY:
- The study enrolled 180 patients with unilateral lower limb amputations who were experiencing severe post-procedure pain.
- Participants were randomized 1:1 to receive 3 months of treatment with either a high-frequency nerve block (Altius; Neuros Medical) or an active sham.
- Effectiveness was measured by the percentage of participants achieving at least a 50% reduction in pain in more than half of the treatment sessions.
- The researchers attempted to control for variables including pain type and baseline pain intensity.
TAKEAWAY:
- A total of 24.7% of patients in the group that received the nerve block were responders at 30 minutes post-treatment, significantly higher than 7.1% in the control group (P = .002).
- The rate of response rose to 46.8% in the treatment group at 120 minutes, compared with 22.2% in the sham group (P = .001).
- Patients who received the nerve block reported a greater improvement in their score on the Brief Pain Inventory than those in the sham arm — 2.3 ± 0.29 vs 1.3 ± 0.26, respectively (P = .01).
- Use of opioids trended toward a greater reduction in the treatment group, although that finding was not statistically significant.
IN PRACTICE:
The results suggested “high-frequency electric nerve block could be a viable option for managing chronic post-amputation pain, potentially improving patients’ quality of life and reducing reliance on opioids,” the authors wrote. “The study addresses a critical gap in treatment options for amputees suffering from persistent pain, offering evidence for a novel therapeutic approach.”
“We have never seen a study of this magnitude and rigor in this patient population,” said lead author Leonardo Kapural, MD, PhD, of the Carolinas Pain Institute in Winston-Salem, North Carolina, in a press release about the data. “The data demonstrated clear and lasting benefit of treatment for pain reduction and functional outcomes at 3 months, creating great optimism for the long-term study results. These findings represent a significant advancement for an at-risk and underserved patient population in desperate need of reliable and effective treatment.”
SOURCE:
The study was led by Leonardo Kapural, MD, PhD, of the Carolinas Pain Institute in Winston-Salem, North Carolina, and was published online in the Journal of Pain Research.
LIMITATIONS:
The sample size of 180 participants may limit the generalizability of the findings to all amputees. A 3-month duration for assessing treatment efficacy may not capture long-term outcomes and effects. The active-sham control design, while rigorous, may not fully account for the placebo effects inherent in pain perception studies.
DISCLOSURES:
The QUEST study was funded by Neuros Medical Inc. Dr. Kapural reported personal fees from various medical companies, unrelated to this work. No other conflicts of interest were reported in this work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
Commonly Used Meds Tied to Lower Risk for Brain Aneurysm Rupture
Several commonly prescribed medications may be associated with a lower risk for aneurysmal subarachnoid hemorrhage (aSAH), a drug-wide association study suggested.
The blood pressure drug lisinopril; the cholesterol drug simvastatin; the diabetes drug metformin; and the drug tamsulosin, prescribed for an enlarged prostate, were all associated with decreased aSAH risk, investigators found.
Conversely, four other drugs were associated with an increased risk for this severely morbid, often deadly, condition.
“The motivation for this study was the fact that we can currently prevent bleeding from intracranial aneurysms only by invasive treatment of those aneurysms with inherent complication risks,” said study investigator Ynte Ruigrok, MD, PhD, associate professor of neurology and neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands. “Drugs to reduce or eliminate this risk are not yet available. This study is a first step in identifying such drugs.”
The findings were published online in Neurology.
Surprising Results
For the study, the researchers used the Secure Anonymized Information Linkage data bank in Wales to identify 4879 patients with aSAH between January 2000 and December 2019 and 43,911 patients without aSAH matched on age, sex, and year of database entry. Clustering resulted in 2023 unique drugs, of which 205 were commonly prescribed.
After adjusting for other factors such as high blood pressure, alcohol abuse, smoking, and total number of health conditions, the results yielded two surprises, Dr. Ruigrok observed.
The first was a significant decrease in aSAH risk for current use of lisinopril, compared with nonuse (odds ratio [OR], 0.63; 95% confidence interval [CI], 0.44-0.90), and a nonsignificant decrease with current use of amlodipine (OR, 0.82; 95% CI, 0.65-1.04).
“Hypertension is a major risk factor for occurrence and bleeding from aneurysms. If there is indeed a specific blood pressure–lowering drug that not only has a blood pressure–lowering effect but also has additional protection against aSAH, then perhaps that drug should become the drug of choice in aneurysm patients in the future,” he said.
Notably, recent use of both drugs, defined as between 1 year and 3 months before the index date, was associated with an increased risk for aSAH. This trend was not found for other antihypertensives and was significant for amlodipine but not lisinopril.
The reasons are unclear, but “we trust the findings on lisinopril more,” Dr. Ruigrok said. “The findings on amlodipine may be due to confounding by indication, specifically caused by hypertension. Therefore, it is important to validate our findings in an independent research cohort, and we are in the process of doing so.”
The study’s second surprise was that the antidiabetic drug metformin and the cholesterol-lowering drug simvastatin were also associated with reduced aSAH risk, Dr. Ruigrok noted.
“We already knew from previous studies that diabetes and high cholesterol are protective factors for aSAH,” he said. “Our results suggest that perhaps not the conditions themselves are protective for aSAH but rather the drugs used to treat these conditions with are.”
The risk for a ruptured brain aneurysm among current users was 42% lower with metformin (OR, 0.58; 95% CI, 0.43-0.78), 22% lower with simvastatin (OR, 0.78; 95% CI, 0.64-0.96), and 45% lower with tamsulosin (OR, 0.55; 95% CI, 0.32-0.93).
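The percentage figures quoted above follow directly from the reported odds ratios: a minimal sketch of the arithmetic, with the caveat that reading "1 − OR" as a percent reduction in risk treats the odds ratio as a risk ratio, an approximation that is reasonable only because aSAH is a rare outcome.

```python
# Percent reduction implied by each reported odds ratio (OR).
# Caveat: (1 - OR) * 100 approximates percent lower risk only when
# the outcome is rare, as aSAH is; ORs overstate risk ratios otherwise.
odds_ratios = {"metformin": 0.58, "simvastatin": 0.78, "tamsulosin": 0.55}

for drug, or_value in odds_ratios.items():
    pct_lower = round((1 - or_value) * 100)
    print(f"{drug}: OR {or_value} -> about {pct_lower}% lower risk")
```

Running this reproduces the 42%, 22%, and 45% figures cited in the study.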
An increased risk for aSAH was found only in current users of warfarin (OR, 1.35; 95% CI, 1.02-1.79), venlafaxine (OR, 1.67; 95% CI, 1.01-2.75), prochlorperazine (OR, 2.15; 95% CI, 1.45-3.18), and co-codamol (OR, 1.31; 95% CI, 1.10-1.56).
Other drugs within the classes of vitamin K antagonists, serotonin reuptake inhibitors, conventional antipsychotics, and compound analgesics did not show an association with aSAH.
The study was limited by its reliance on prescription records; patients may not have taken their drugs or may have used them incorrectly, noted the researchers, led by Jos P. Kanning, MSc, also with University Medical Center Utrecht.
The study was supported by the European Research Council. The authors reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM NEUROLOGY
‘Big Breakthrough’: New Low-Field MRI Is Safer and Easier
For years, researchers and medical companies have explored low-field MRI systems (those with a magnetic field strength of less than 1 T) — searching for a feasible alternative to the loud, expensive machines requiring special rooms with shielding to block their powerful magnetic field.
Most low-field scanners in development are for brain scans only. In 2022, the US Food and Drug Administration (FDA) cleared the first portable MRI system — Hyperfine’s Swoop, designed for use at a patient’s bedside — for head and brain scans. But the technology has not been applied to whole-body MRI — until now.
In a new study published in Science, researchers from Hong Kong described a whole-body, ultra-low-field MRI.
The device uses a 0.05 T magnet — one sixtieth the magnetic field strength of the standard 3 T MRI model common in hospitals today, said lead author Ed Wu, PhD, professor of biomedical engineering at The University of Hong Kong.
Because the field strength is so low, no protective shielding is needed. Patients and bystanders can safely use smartphones nearby. And the scanner is safe for patients with implanted devices, such as a cochlear implant or pacemaker, or any metal on their body or clothes. No hearing protection is required, either, because the machine is so quiet.
If all goes well, the technology could be commercially available in as little as a few years, Dr. Wu said.
But first, funding and FDA approval would be needed. “A company is going to have to come along and say, ‘This looks fantastic. We’re going to commercialize this, and we’re going to go through this certification process,’ ” said Andrew Webb, PhD, professor of radiology and the founding director of the C.J. Gorter MRI Center at the Leiden University Medical Center, Leiden, the Netherlands. (Dr. Webb was not involved in the study.)
Improving Access to MRI
One hope for this technology is to bring MRI to more people worldwide. Africa has fewer than one MRI scanner per million residents, whereas the United States has about 40.
While a new 3 T machine can cost about $1 million, the low-field version is much cheaper — only about $22,000 in materials cost per scanner, according to Dr. Wu.
A low magnetic field means less electricity, too — the machine can be plugged into a standard wall outlet. And because a fully shielded room isn’t needed, that could save another $100,000 in materials, Dr. Webb said.
Its ease of use could improve accessibility in regions with fewer trained technicians, Dr. Webb pointed out.
“To be a technician is 2-3 years training for a regular MRI machine, a lot of it to do safety, a lot of it to do very subtle planning,” said Webb. “These [low-field] systems are much simpler.”
Challenges and the Future
The prototype weighs about 1.5 tons or 3000 lb. (A 3 T MRI can weigh between 6 and 13 tons or 12,000 and 26,000 lb.) That might sound like a lot, but it’s comparable to a mobile CT scanner, which is designed to be moved from room to room. Plus, “its weight can be substantially reduced if further optimized,” Dr. Wu said.
One challenge with low-field MRIs is image quality, which tends to be not as clear and detailed as those from high-power machines. To address this, the research team used deep learning (artificial intelligence) to enhance the image quality. “Computing power and large-scale data underpin our success, which tackles the physics and math problems that are traditionally considered intractable in existing MRI methodology,” Dr. Wu said.
Dr. Webb said he was impressed by the image quality shown in the study. The images "look much higher quality than you would expect from such a low-field system," he said. Still, only healthy volunteers were scanned. The true test will be using it to view subtle pathologies, Dr. Webb said.
That’s what Dr. Wu and his team are working on now — taking scans to diagnose various medical conditions. His group’s brain-only version of the low-field MRI has been used for diagnosis, he noted.
A version of this article appeared on Medscape.com.
Is Meningitis a Risk Factor for Trigeminal Neuralgia? New Data
In multivariate analysis, the odds of meningitis were threefold higher in patients admitted with trigeminal neuralgia than in matched controls without trigeminal neuralgia.
This is the first nationwide population-based study of the rare, chronic pain disorder to identify the prevalence of trigeminal neuralgia admissions in the United States and risk factors contributing to trigeminal neuralgia development.
“Our results affirm known associations between trigeminal neuralgia and comorbidities like multiple sclerosis, and they also identify meningitis as a novel risk factor for trigeminal neuralgia,” said investigator Megan Tang, BS, a medical student at the Icahn School of Medicine at Mount Sinai, New York City.
The findings were presented at the American Association of Neurological Surgeons (AANS) 2024 annual meeting.
Strong Clinical Risk Factors
Trigeminal neuralgia is a rare pain disorder involving neurovascular compression of the trigeminal nerve. Its etiology and risk factors are poorly understood. Current literature is based on limited datasets and reports inconsistent risk factors across studies.
To better understand the disorder, researchers used International Classification of Diseases (ICD)-9 codes to identify trigeminal neuralgia admissions in the National Inpatient Sample from 2016 to 2019, and then propensity matched them 1:1 to non-trigeminal neuralgia admissions based on demographics, socioeconomic status, and Charlson comorbidity index scores.
Univariate analysis identified 136,345 trigeminal neuralgia admissions, for an overall prevalence of 0.096%.
Trigeminal neuralgia admissions had lower morbidity than non-trigeminal neuralgia admissions and a higher prevalence of non-White patients, private insurance, and prolonged length of stay, Ms. Tang said.
Patients admitted for trigeminal neuralgia also had a higher prevalence of several chronic conditions, including hypertension, hyperlipidemia, and osteoarthritis; inflammatory conditions like lupus, meningitis, rheumatoid arthritis, and inflammatory bowel disease; and neurologic conditions including multiple sclerosis, epilepsy, stroke, and neurovascular compression disorders.
In multivariate analysis, investigators identified meningitis as a previously unknown risk factor for trigeminal neuralgia (odds ratio [OR], 3.1; P < .001).
Other strong risk factors were neurovascular compression disorders (OR, 39.82; P < .001) and multiple sclerosis (OR, 12.41; P < .001). Non-White race (Black: OR, 1.09; Hispanic: OR, 1.23; other: OR, 1.24) and use of Medicaid (OR, 1.07) and other insurance (OR, 1.17) were demographic risk factors for trigeminal neuralgia.
“This finding points us toward future work exploring the potential mechanisms of predictors, most notably inflammatory conditions in trigeminal neuralgia development,” Ms. Tang concluded.
She declined to comment further on the findings, noting the investigators are still finalizing the results and interpretation.
Ask About Meningitis, Fever
Commenting on the findings, Michael D. Staudt, MD, MSc, University Hospitals Cleveland Medical Center, said that many patients who present with classical trigeminal neuralgia will have a blood vessel on MRI that is pressing on the trigeminal nerve.
“Obviously, the nerve is bathed in cerebrospinal fluid. So, if there’s an inflammatory marker, inflammation, or infection that could be injuring the nerve in a way that we don’t yet understand, that could be something that could cause trigeminal neuralgia without having to see a blood vessel,” said Dr. Staudt, who was not involved in the study. “It makes sense, theoretically. Something that’s inflammatory, something that’s irritating, that’s novel.”
Currently, predictive markers include clinical history, response to classical medications such as carbamazepine, and MRI findings, Dr. Staudt noted.
“Someone shows up with symptoms and MRI, and it’s basically do they have a blood vessel or not,” he said. “Treatments are generally within the same categories, but we don’t think it’s the same sort of success rate as seeing a blood vessel.”
Further research is needed, but, in the meantime, Dr. Staudt said, “We can ask patients who show up with facial pain if they’ve ever had meningitis or some sort of fever that preceded their onset of pain.”
The study had no specific funding. Ms. Tang and coauthor Jack Y. Zhang, MS, reported no relevant financial disclosures. Dr. Staudt reported serving as a consultant for Abbott and as a scientific adviser and consultant for Boston Scientific.
A version of this article appeared on Medscape.com.
In multivariate analysis, the odds of meningitis were threefold higher in patients admitted with trigeminal neuralgia than in matched controls without trigeminal neuralgia.
This is the first nationwide population-based study of the rare, chronic pain disorder to identify the prevalence of trigeminal neuralgia admissions in the United States and risk factors contributing to trigeminal neuralgia development.
“Our results affirm known associations between trigeminal neuralgia and comorbidities like multiple sclerosis, and they also identify meningitis as a novel risk factor for trigeminal neuralgia,” said investigator Megan Tang, BS, a medical student at the Icahn School of Medicine at Mount Sinai, New York City.
The findings were presented at the American Association of Neurological Surgeons (AANS) 2024 annual meeting.
Strong Clinical Risk Factors
Trigeminal neuralgia is a rare pain disorder involving neurovascular compression of the trigeminal nerve. Its etiology and risk factors are poorly understood. Current literature is based on limited datasets and reports inconsistent risk factors across studies.
To better understand the disorder, researchers used International Classification of Diseases (ICD)-9 codes to identify trigeminal neuralgia admissions in the National Inpatient Sample from 2016 to 2019, and then propensity matched them 1:1 to non-trigeminal neuralgia admissions based on demographics, socioeconomic status, and Charlson comorbidity index scores.
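The matching step can be illustrated with a minimal greedy 1:1 nearest-neighbor sketch in Python. This is a simplification for illustration only; the investigators' actual matching software and model are not described in this article, and the variable names here are hypothetical.

```python
def propensity_match(ps_cases, ps_controls):
    """Greedy 1:1 nearest-neighbor match on propensity score.

    ps_cases / ps_controls: propensity scores (e.g., each admission's
    predicted probability of being a trigeminal neuralgia admission,
    from a logistic model on demographics, socioeconomic status, and
    Charlson comorbidity index). Returns (case_index, control_index) pairs.
    """
    available = list(range(len(ps_controls)))
    pairs = []
    for i, p in enumerate(ps_cases):
        if not available:
            break  # ran out of unmatched controls
        # closest remaining control by absolute score distance
        j = min(available, key=lambda k: abs(ps_controls[k] - p))
        pairs.append((i, j))
        available.remove(j)  # match without replacement
    return pairs

print(propensity_match([0.2, 0.8], [0.75, 0.25, 0.5]))  # [(0, 1), (1, 0)]
```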
Univariate analysis identified 136,345 trigeminal neuralgia admissions, for an overall prevalence of 0.096%.
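The unstated denominator can be back-calculated from the two reported figures; the total-admissions count below is our inference, not a number given in the study.

```python
tn_admissions = 136_345
prevalence = 0.096 / 100  # 0.096% of all sampled admissions

# Implied denominator: total admissions in the 2016-2019 sample
total_admissions = tn_admissions / prevalence
print(round(total_admissions / 1e6, 1))  # ≈ 142.0 (million admissions)
```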
Trigeminal neuralgia admissions had lower morbidity than non-trigeminal neuralgia admissions and a higher prevalence of non-White patients, private insurance, and prolonged length of stay, Ms. Tang said.
Patients admitted for trigeminal neuralgia also had a higher prevalence of several chronic conditions, including hypertension, hyperlipidemia, and osteoarthritis; inflammatory conditions like lupus, meningitis, rheumatoid arthritis, and inflammatory bowel disease; and neurologic conditions including multiple sclerosis, epilepsy, stroke, and neurovascular compression disorders.
In multivariate analysis, investigators identified meningitis as a previously unknown risk factor for trigeminal neuralgia (odds ratio [OR], 3.1; P < .001).
Other strong risk factors were neurovascular compression disorders (OR, 39.82; P < .001) and multiple sclerosis (OR, 12.41; P < .001). Non-White race (Black: OR, 1.09; Hispanic: OR, 1.23; other: OR, 1.24) and use of Medicaid (OR, 1.07) or other insurance (OR, 1.17) were demographic risk factors for trigeminal neuralgia.
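In a multivariate logistic model, each reported odds ratio is the exponential of that predictor's fitted coefficient, so the coefficients can be recovered by taking natural logs. A brief illustration using the ORs reported above (not the study's code):

```python
import math

# Odds ratios reported from the multivariate model
odds_ratios = {
    "neurovascular compression": 39.82,
    "multiple sclerosis": 12.41,
    "meningitis": 3.1,
}

# OR = exp(beta), so the underlying logistic coefficient is ln(OR)
coefficients = {k: math.log(v) for k, v in odds_ratios.items()}
print(round(coefficients["meningitis"], 2))  # 1.13
```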
“This finding points us toward future work exploring the potential mechanisms of predictors, most notably inflammatory conditions in trigeminal neuralgia development,” Ms. Tang concluded.
She declined to comment further on the findings, noting the investigators are still finalizing the results and interpretation.
Ask About Meningitis, Fever
Commenting on the findings, Michael D. Staudt, MD, MSc, University Hospitals Cleveland Medical Center, said that many patients who present with classical trigeminal neuralgia will have a blood vessel on MRI that is pressing on the trigeminal nerve.
“Obviously, the nerve is bathed in cerebrospinal fluid. So, if there’s an inflammatory marker, inflammation, or infection that could be injuring the nerve in a way that we don’t yet understand, that could be something that could cause trigeminal neuralgia without having to see a blood vessel,” said Dr. Staudt, who was not involved in the study. “It makes sense, theoretically. Something that’s inflammatory, something that’s irritating, that’s novel.”
Currently, predictive markers include clinical history, response to classical medications such as carbamazepine, and MRI findings, Dr. Staudt noted.
“Someone shows up with symptoms and MRI, and it’s basically do they have a blood vessel or not,” he said. “Treatments are generally within the same categories, but we don’t think it’s the same sort of success rate as seeing a blood vessel.”
Further research is needed, but, in the meantime, Dr. Staudt said, “We can ask patients who show up with facial pain if they’ve ever had meningitis or some sort of fever that preceded their onset of pain.”
The study had no specific funding. Ms. Tang and coauthor Jack Y. Zhang, MS, reported no relevant financial disclosures. Dr. Staudt reported serving as a consultant for Abbott and as a scientific adviser and consultant for Boston Scientific.
A version of this article appeared on Medscape.com.
FROM AANS 2024
Medtronic’s Duet EDMS Catheter Tubing Under Class I Recall
Medtronic is recalling the catheter tubing of its Duet External Drainage and Monitoring System (EDMS) because the tubing may disconnect. If this happens, potential harm to patients may include infections, cerebrospinal fluid (CSF) leakage, overdrainage of CSF, and abnormality of the ventricles. Uncontrolled overdrainage of CSF could lead to neurological injury or death if the disconnection is undetected.
The Food and Drug Administration has identified this as a Class I recall — the most serious type — due to the risk for serious injury or death. To date, there have been 26 reported injuries and no deaths related to this issue.
The recall includes 45,176 devices distributed in the United States between May 3, 2021, and January 9, 2024, with model numbers 46913, 46914, 46915, 46916, and 46917.
The Duet EDMS is used for temporary CSF drainage or sampling in patients who have surgery for open descending thoracic aortic aneurysm (TAA) or descending thoraco-abdominal aortic aneurysm (TAAA) or patients who have TAA/TAAA repair surgery and develop symptoms such as paraplegia.
Medtronic has sent an urgent medical device recall letter to all affected customers asking them to identify, quarantine, and return any unused recalled products.
Customers are also advised to check all Duet EDMS components for damage and ensure that all connections are secure and leak-free.
If a patient is currently connected to an impacted Duet EDMS and a leak or disconnection is detected, the device should be replaced with a new one using sterile technique.
It is not recommended that a Duet system device that is connected to a patient and working as intended be removed or replaced.
Customers in the United States with questions about this recall should contact Medtronic at 1-800-874-5797.
A version of this article appeared on Medscape.com.
Autoimmune Disease Risk May Rise Following Cushing Disease Remission After Surgery
Patients with Cushing disease have an increased risk for new-onset autoimmune disease in the 3 years after surgical remission, according to a new retrospective study published on February 20 in Annals of Internal Medicine.
Outcomes for patients with Cushing disease were compared with those of patients with nonfunctioning pituitary adenomas (NFPAs). New-onset autoimmune disease occurred in 10.4% of patients with Cushing disease and 1.6% of those with NFPAs (hazard ratio, 7.80; 95% CI, 2.88-21.10).
“Understanding and recognizing new and recurrent autoimmune disease in this setting is important to avoid misclassifying such patients with glucocorticoid withdrawal syndrome, which could result in failure to treat underlying autoimmune disease, as well as erroneous diagnosis of steroid withdrawal cases,” wrote Dennis Delasi Nyanyo of Massachusetts General Hospital and Harvard Medical School, Boston, and colleagues.
Given the general population’s annual incidence of major autoimmune diseases, estimated at about 100 cases per 100,000 people, and the 3-year incidence of 10.4% found in this study’s cohort, “our findings suggest that Cushing disease remission may trigger development of autoimmune disease,” the authors wrote.
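The comparison behind that statement can be made explicit with a crude calculation, assuming the annual background rate simply accumulates over 3 years (ignoring compounding and age structure):

```python
annual_incidence = 100 / 100_000             # ~100 per 100,000 per year
three_year_background = 3 * annual_incidence # ~0.3% expected over 3 years
observed = 0.104                             # 10.4% in the Cushing cohort

print(round(three_year_background * 100, 1))    # 0.3 (%)
print(round(observed / three_year_background))  # ~35-fold excess
```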
Monitor Patients With Family History of Autoimmune Disease?
The study results were not necessarily surprising to Anthony P. Heaney, MD, PhD, an endocrinologist and professor of medicine at the University of California, Los Angeles, because past research has raised similar questions. The authors’ suggestion that the rapid postsurgical drop in cortisol that occurs as a result of treating Cushing disease becomes some sort of autoimmune trigger is interesting but remains speculative, Dr. Heaney pointed out.
If future evidence supports that possibility, “it would suggest, in terms of managing those patients in the postoperative setting, that there may be some merit to giving them higher concentrations of glucocorticoids for a short period of time,” Dr. Heaney said, thereby bringing their levels down more gradually rather than taking them off a cliff, in a sense. Or, if more evidence bears out the authors’ hypothesis, another approach might be treating patients with medicine to bring down the cortisol before surgery, though there are challenges to that approach, Dr. Heaney said.
At the same time, those who developed new autoimmune disease remain a small subset of patients with Cushing disease, so such approaches may be worth considering only in patients with risk factors, such as a family history of autoimmune disease.
The researchers conducted a retrospective chart review of adult patients who underwent transsphenoidal surgery for either Cushing disease or NFPA at Massachusetts General Hospital between 2005 and 2019.
The study involved 194 patients with Cushing disease who had postsurgical remission and at least one follow-up visit with a pituitary expert and 92 patients with NFPA who were matched to patients with Cushing disease based on age and sex. The authors regarded autoimmune disease diagnosed within 36 months of the surgery to be temporally associated with Cushing disease remission. Among the autoimmune diseases considered were “rheumatoid arthritis, Sjögren syndrome, systemic lupus erythematosus, autoimmune thyroiditis, celiac disease, psoriasis, vitiligo, autoimmune neuropathy, multiple sclerosis, myasthenia gravis, and ulcerative colitis.”
Patients differed in average body mass index and tumor size, but family history of autoimmune disease was similar in both groups. Average BMI was 34.5 in the Cushing group and 29.5 in the NFPA group. Average tumor size was 5.7 mm in the Cushing group and 21.3 mm in the NFPA group.
Before surgery, 2.9% of patients with Cushing disease and 15.4% of patients with NFPA had central hypothyroidism, and 8% in the Cushing group and 56.8% in the NFPA group had hyperprolactinemia. Central adrenal insufficiency occurred in 11% with NFPA and in all with Cushing disease, by definition.
After surgery, 93.8% in the Cushing group and 16.5% in the NFPA group had adrenal insufficiency. In addition, patients with Cushing disease had lower postsurgical nadir serum cortisol levels (63.8 nmol/L) than those with NFPA (282.3 nmol/L).
Of the 17 patients with Cushing disease — all women — who developed autoimmune disease within 3 years, 6 had a personal history of autoimmune disease and 7 had a family history of it. In addition, 41.2% of them had adrenal insufficiency when they developed the new autoimmune disease. Among the diseases were six cases of autoimmune thyroiditis, three of Sjögren syndrome, and two of seronegative spondyloarthropathy.
Dr. Heaney said he found it interesting that more than half of the new autoimmune diseases in patients with Cushing disease were related to the thyroid. “In this kind of setting, where you have a patient who has been producing too much steroid over a period of time and then you take that away, it’s almost like you release a brake on the TSH [thyroid-stimulating hormone],” Dr. Heaney said. “So, there’s probably some rebound in TSH that occurs, and that could be driving the thyroiditis, to some extent, that we see in these patients.”
Only one patient with NFPA developed new-onset autoimmune disease, a woman who developed Graves disease 22 months after surgery. When the researchers excluded patients in both groups with central hypothyroidism, new-onset autoimmune disease was still significantly higher (11.4%) in the Cushing group than in the NFPA group (1.9%; HR, 7.02; 95% CI, 2.54-19.39).
Could Postoperative Adrenal Insufficiency Contribute to Risk?
Within the Cushing cohort, those who developed autoimmune disease had a lower BMI (31.8 vs 34.8) and a larger tumor size (7.2 vs 5.6 mm) than those who didn't develop new autoimmune disease. Patients who developed autoimmune disease also had a lower baseline urine free cortisol ratio (2.7 vs 6.3) before surgery and more often had a family history of autoimmune disease (41.2% vs 20.9%).
“The higher prevalence of adrenal insufficiency and the lower nadir serum cortisol levels in the Cushing disease group suggest that the postoperative adrenal insufficiency in the Cushing disease group might have contributed to autoimmune disease pathogenesis,” the authors wrote. “This finding is clinically significant because cortisol plays a pivotal role in modulating the immune system.”
Most postoperative management among patients with Cushing disease was similar, with all but one patient receiving 0.5 or 1 mg daily dexamethasone within the first week after surgery. (The one outlier received 5 mg daily prednisone.) However, fewer patients who developed autoimmune disease (17.6%) received supraphysiologic doses of glucocorticoid — equivalent to at least 25 mg hydrocortisone — compared with patients who didn’t develop autoimmune disease (41.8%).
“Although the daily average hydrocortisone equivalent replacement doses within the first month and during long-term follow-up were within the physiologic range in both subgroups, patients with Cushing disease who had autoimmune disease received slightly lower doses of glucocorticoid replacement within the first month after surgery,” the authors reported. “The immediate postoperative period might be a critical window where supraphysiologic glucocorticoids seem to be protective with regard to development of autoimmune disease,” they wrote, though they acknowledged the study’s retrospective design as a limitation in drawing that conclusion.
At the least, they suggested that new symptoms in patients with Cushing disease, particularly those with a family history of autoimmune disease, should prompt investigation of potential autoimmune disease.
Recordati Rare Diseases funded the study. The research was also conducted with support from Harvard Catalyst (the Harvard Clinical and Translational Science Center) as well as financial contributions from Harvard University and its affiliated academic healthcare centers. One author reported holding stocks in Pfizer and Amgen, and another reported receiving consulting fees from Corcept. Dr. Heaney reported receiving institutional grants for trials from Corcept, Ascendis, Crinetics, and Sparrow Pharm; serving on the advisory board for Xeris, Recordati, Corcept, Novo Nordisk, Lundbeck, and Crinetics; and serving as a speaker for Chiesi, Novo Nordisk, and Corcept.
A version of this article appeared on Medscape.com.
Patients with Cushing disease have an increased risk for new-onset autoimmune disease in the 3 years after surgical remission, according to a new retrospective study published on February 20 in Annals of Internal Medicine.
Outcomes for patients with Cushing disease were compared against those with nonfunctioning pituitary adenomas (NFPAs). New-onset autoimmune disease occurred in 10.4% with Cushing disease and 1.6% among patients with NFPA (hazard ratio, 7.80; 95% CI, 2.88-21.10).
“Understanding and recognizing new and recurrent autoimmune disease in this setting is important to avoid misclassifying such patients with glucocorticoid withdrawal syndrome, which could result in failure to treat underlying autoimmune disease, as well as erroneous diagnosis of steroid withdrawal cases,” wrote Dennis Delasi Nyanyo of Massachusetts General Hospital and Harvard Medical School, Boston, and colleagues.
Given the general population’s annual incidence of major autoimmune diseases, estimated at about 100 cases per 100,000 people, and the 3-year incidence of 10.4% found in this study’s cohort, “our findings suggest that Cushing disease remission may trigger development of autoimmune disease,” the authors wrote.
Monitor Patients With Family History of Autoimmune Disease?
The study results were not necessarily surprising to Anthony P. Heaney, MD, PhD, an endocrinologist and professor of medicine at the University of California, Los Angeles, because past research has raised similar questions. The authors’ suggestion that the rapid postsurgical drop in cortisol that occurs as a result of treating Cushing disease becomes some sort of autoimmune trigger is interesting but remains speculative, Dr. Heaney pointed out.
If future evidence supports that possibility, “it would suggest, in terms of managing those patients in the postoperative setting, that there may be some merit to giving them higher concentrations of glucocorticoids for a short period of time,” Dr. Heaney said, thereby bringing their levels down more gradually rather than taking them off a cliff, in a sense. Or, if more evidence bears out the authors’ hypothesis, another approach might be treating patients with medicine to bring down the cortisol before surgery, though there are challenges to that approach, Dr. Heaney said.
At the same time, those who developed new autoimmune disease remain a small subset of patients with Cushing disease, so such approaches may become only potentially appropriate to consider in patients with risk factors, such as a family history of autoimmune disease.
The researchers conducted a retrospective chart review of adult patients who underwent transsphenoidal surgery for either Cushing disease or NFPA at Massachusetts General Hospital between 2005 and 2019.
The study involved 194 patients with Cushing disease who had postsurgical remission and at least one follow-up visit with a pituitary expert and 92 patients with NFPA who were matched to patients with Cushing disease based on age and sex. The authors regarded autoimmune disease diagnosed within 36 months of the surgery to be temporally associated with Cushing disease remission. Among the autoimmune diseases considered were “rheumatoid arthritis, Sjögren syndrome, systemic lupus erythematosus, autoimmune thyroiditis, celiac disease, psoriasis, vitiligo, autoimmune neuropathy, multiple sclerosis, myasthenia gravis, and ulcerative colitis.”
Patients differed in average body mass index and tumor size, but family history of autoimmune disease was similar in both groups. Average BMI was 34.5 in the Cushing group and 29.5 in the NFPA group. Average tumor size was 5.7 mm in the Cushing group and 21.3 mm in the NFPA group.
Before surgery, 2.9% of patients with Cushing disease and 15.4% of patients with NFPA had central hypothyroidism, and 8% in the Cushing group and 56.8% in the NFPA group had hyperprolactinemia. Central adrenal insufficiency occurred in 11% with NFPA and in all with Cushing disease, by definition.
After surgery, 93.8% in the Cushing group and 16.5% in the NFPA group had adrenal insufficiency. In addition, patients with Cushing disease had lower postsurgical nadir serum cortisol levels (63.8 nmol/L) than those with NFPA (282.3 nmol/L).
Of the 17 patients with Cushing disease — all women — who developed autoimmune disease within 3 years, 6 had a personal history of autoimmune disease and 7 had a family history of it. In addition, 41.2% of them had adrenal insufficiency when they developed the new autoimmune disease. Among the diseases were six autoimmune thyroiditis cases, three Sjögren syndrome cases, and two autoimmune seronegative spondyloarthropathy.
Dr. Heaney said he found it interesting that more than half of the new autoimmune diseases in patients with Cushing disease were related to the thyroid. “In this kind of setting, where you have a patient who has been producing too much steroid over a period of time and then you take that away, it’s almost like you release a brake on the TSH [thyroid-stimulating hormone],” Dr. Heaney said. “So, there’s probably some rebound in TSH that occurs, and that could be driving the thyroiditis, to some extent, that we see in these patients.”
Only one patient with NFPA developed new-onset autoimmune disease, a woman who developed Graves disease 22 months after surgery. When the researchers excluded patients in both groups with central hypothyroidism, new-onset autoimmune disease was still significantly higher (11.4%) in the Cushing group than in the NFPA group (1.9%; HR, 7.02; 95% CI, 2.54-19.39).
Could Postoperative Adrenal Insufficiency Contribute to Risk?
Within the Cushing cohort, those who developed autoimmune disease had a lower BMI (31.8 vs 34.8) and larger tumor size (7.2 vs 5.6 mm) than those who didn’t develop new autoimmune disease. Patients who developed autoimmune disease also had a lower baseline urine free cortisol ratio (2.7 vs 6.3) before surgery and more family history of autoimmune disease (41.2% vs 20.9%) than those who didn’t develop one.
“The higher prevalence of adrenal insufficiency and the lower nadir serum cortisol levels in the Cushing disease group suggest that the postoperative adrenal insufficiency in the Cushing disease group might have contributed to autoimmune disease pathogenesis,” the authors wrote. “This finding is clinically significant because cortisol plays a pivotal role in modulating the immune system.”
Most postoperative management among patients with Cushing disease was similar, with all but one patient receiving 0.5 or 1 mg daily dexamethasone within the first week after surgery. (The one outlier received 5 mg daily prednisone.) However, fewer patients who developed autoimmune disease (17.6%) received supraphysiologic doses of glucocorticoid — equivalent to at least 25 mg hydrocortisone — compared with patients who didn’t develop autoimmune disease (41.8%).
“Although the daily average hydrocortisone equivalent replacement doses within the first month and during long-term follow-up were within the physiologic range in both subgroups, patients with Cushing disease who had autoimmune disease received slightly lower doses of glucocorticoid replacement within the first month after surgery,” the authors reported. “The immediate postoperative period might be a critical window where supraphysiologic glucocorticoids seem to be protective with regard to development of autoimmune disease,” they wrote, though they acknowledged the study’s retrospective design as a limitation in drawing that conclusion.
At the least, they suggested that new symptoms in patients with Cushing disease, particularly those with a family history of autoimmune disease, should prompt investigation of potential autoimmune disease.
Recordati Rare Diseases funded the study. The research was also conducted with support from Harvard Catalyst (the Harvard Clinical and Translational Science Center) as well as financial contributions from Harvard University and its affiliated academic healthcare centers. One author reported holding stocks in Pfizer and Amgen, and another reported receiving consulting fees from Corcept. Dr. Heaney reported receiving institutional grants for trials from Corcept, Ascendis, Crinetics, and Sparrow Pharm; serving on the advisory board for Xeris, Recordati, Corcept, Novo Nordisk, Lundbeck, and Crinetics; and serving as a speaker for Chiesi, Novo Nordisk, and Corcept.
A version of this article appeared on Medscape.com.
Patients with Cushing disease have an increased risk for new-onset autoimmune disease in the 3 years after surgical remission, according to a new retrospective study published on February 20 in Annals of Internal Medicine.
Outcomes for patients with Cushing disease were compared against those with nonfunctioning pituitary adenomas (NFPAs). New-onset autoimmune disease occurred in 10.4% with Cushing disease and 1.6% among patients with NFPA (hazard ratio, 7.80; 95% CI, 2.88-21.10).
“Understanding and recognizing new and recurrent autoimmune disease in this setting is important to avoid misclassifying such patients with glucocorticoid withdrawal syndrome, which could result in failure to treat underlying autoimmune disease, as well as erroneous diagnosis of steroid withdrawal cases,” wrote Dennis Delasi Nyanyo of Massachusetts General Hospital and Harvard Medical School, Boston, and colleagues.
Given the general population’s annual incidence of major autoimmune diseases, estimated at about 100 cases per 100,000 people, and the 3-year incidence of 10.4% found in this study’s cohort, “our findings suggest that Cushing disease remission may trigger development of autoimmune disease,” the authors wrote.
Monitor Patients With Family History of Autoimmune Disease?
The study results were not necessarily surprising to Anthony P. Heaney, MD, PhD, an endocrinologist and professor of medicine at the University of California, Los Angeles, because past research has raised similar questions. The authors’ suggestion that the rapid postsurgical drop in cortisol that occurs as a result of treating Cushing disease becomes some sort of autoimmune trigger is interesting but remains speculative, Dr. Heaney pointed out.
If future evidence supports that possibility, “it would suggest, in terms of managing those patients in the postoperative setting, that there may be some merit to giving them higher concentrations of glucocorticoids for a short period of time,” Dr. Heaney said, thereby bringing their levels down more gradually rather than taking them off a cliff, in a sense. Or, if more evidence bears out the authors’ hypothesis, another approach might be treating patients with medicine to bring down the cortisol before surgery, though there are challenges to that approach, Dr. Heaney said.
At the same time, those who developed new autoimmune disease remain a small subset of patients with Cushing disease, so such approaches may become only potentially appropriate to consider in patients with risk factors, such as a family history of autoimmune disease.
The researchers conducted a retrospective chart review of adult patients who underwent transsphenoidal surgery for either Cushing disease or NFPA at Massachusetts General Hospital between 2005 and 2019.
The study involved 194 patients with Cushing disease who had postsurgical remission and at least one follow-up visit with a pituitary expert and 92 patients with NFPA who were matched to patients with Cushing disease based on age and sex. The authors regarded autoimmune disease diagnosed within 36 months of the surgery to be temporally associated with Cushing disease remission. Among the autoimmune diseases considered were “rheumatoid arthritis, Sjögren syndrome, systemic lupus erythematosus, autoimmune thyroiditis, celiac disease, psoriasis, vitiligo, autoimmune neuropathy, multiple sclerosis, myasthenia gravis, and ulcerative colitis.”
Patients differed in average body mass index and tumor size, but family history of autoimmune disease was similar in both groups. Average BMI was 34.5 in the Cushing group and 29.5 in the NFPA group. Average tumor size was 5.7 mm in the Cushing group and 21.3 mm in the NFPA group.
Before surgery, 2.9% of patients with Cushing disease and 15.4% of patients with NFPA had central hypothyroidism, and 8% in the Cushing group and 56.8% in the NFPA group had hyperprolactinemia. Central adrenal insufficiency occurred in 11% of the NFPA group and, by definition, in all patients with Cushing disease.
After surgery, 93.8% in the Cushing group and 16.5% in the NFPA group had adrenal insufficiency. In addition, patients with Cushing disease had lower postsurgical nadir serum cortisol levels (63.8 nmol/L) than those with NFPA (282.3 nmol/L).
Of the 17 patients with Cushing disease — all women — who developed autoimmune disease within 3 years, 6 had a personal history of autoimmune disease and 7 had a family history of it. In addition, 41.2% of them had adrenal insufficiency when they developed the new autoimmune disease. Among the diseases were six cases of autoimmune thyroiditis, three cases of Sjögren syndrome, and two cases of autoimmune seronegative spondyloarthropathy.
Dr. Heaney said he found it interesting that more than half of the new autoimmune diseases in patients with Cushing disease were related to the thyroid. “In this kind of setting, where you have a patient who has been producing too much steroid over a period of time and then you take that away, it’s almost like you release a brake on the TSH [thyroid-stimulating hormone],” Dr. Heaney said. “So, there’s probably some rebound in TSH that occurs, and that could be driving the thyroiditis, to some extent, that we see in these patients.”
Only one patient with NFPA developed new-onset autoimmune disease, a woman who developed Graves disease 22 months after surgery. When the researchers excluded patients in both groups with central hypothyroidism, the rate of new-onset autoimmune disease remained significantly higher in the Cushing group (11.4%) than in the NFPA group (1.9%; HR, 7.02; 95% CI, 2.54-19.39).
Could Postoperative Adrenal Insufficiency Contribute to Risk?
Within the Cushing cohort, those who developed autoimmune disease had a lower BMI (31.8 vs 34.8) and larger tumor size (7.2 vs 5.6 mm) than those who didn’t develop new autoimmune disease. Patients who developed autoimmune disease also had a lower baseline urine free cortisol ratio (2.7 vs 6.3) before surgery and more family history of autoimmune disease (41.2% vs 20.9%) than those who didn’t develop one.
“The higher prevalence of adrenal insufficiency and the lower nadir serum cortisol levels in the Cushing disease group suggest that the postoperative adrenal insufficiency in the Cushing disease group might have contributed to autoimmune disease pathogenesis,” the authors wrote. “This finding is clinically significant because cortisol plays a pivotal role in modulating the immune system.”
Most postoperative management among patients with Cushing disease was similar, with all but one patient receiving 0.5 or 1 mg daily dexamethasone within the first week after surgery. (The one outlier received 5 mg daily prednisone.) However, fewer patients who developed autoimmune disease (17.6%) received supraphysiologic doses of glucocorticoid — equivalent to at least 25 mg hydrocortisone — compared with patients who didn’t develop autoimmune disease (41.8%).
“Although the daily average hydrocortisone equivalent replacement doses within the first month and during long-term follow-up were within the physiologic range in both subgroups, patients with Cushing disease who had autoimmune disease received slightly lower doses of glucocorticoid replacement within the first month after surgery,” the authors reported. “The immediate postoperative period might be a critical window where supraphysiologic glucocorticoids seem to be protective with regard to development of autoimmune disease,” they wrote, though they acknowledged the study’s retrospective design as a limitation in drawing that conclusion.
At the least, they suggested that new symptoms in patients with Cushing disease, particularly those with a family history of autoimmune disease, should prompt investigation of potential autoimmune disease.
Recordati Rare Diseases funded the study. The research was also conducted with support from Harvard Catalyst (the Harvard Clinical and Translational Science Center) as well as financial contributions from Harvard University and its affiliated academic healthcare centers. One author reported holding stocks in Pfizer and Amgen, and another reported receiving consulting fees from Corcept. Dr. Heaney reported receiving institutional grants for trials from Corcept, Ascendis, Crinetics, and Sparrow Pharm; serving on the advisory board for Xeris, Recordati, Corcept, Novo Nordisk, Lundbeck, and Crinetics; and serving as a speaker for Chiesi, Novo Nordisk, and Corcept.
A version of this article appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
New Tech Could Record Deep-Brain Activity From Surface
Modern technology for recording deep-brain activity involves sharp metal electrodes that penetrate the tissue, causing damage that can compromise the signal and limiting how often they can be used.
A rapidly growing area in materials science and engineering aims to fix the problem by designing electrodes that are softer, smaller, and flexible — safer for use inside the delicate tissues of the brain. On January 17, researchers from the University of California, San Diego, reported the development of a thin, flexible electrode that can be inserted deep within the brain and communicate with sensors on the surface.
But what if you could record detailed deep-brain activity without piercing the brain?
A team of researchers (as it happens, also from UC San Diego) has developed a thin, flexible implant that “resides on the brain’s surface” and “can infer neural activity from deeper layers,” said Duygu Kuzum, PhD, a professor of electrical and computer engineering, who led the research.
By combining electrical and optical imaging methods, and artificial intelligence, the researchers used the device — a polymer strip packed with graphene electrodes — to predict deep calcium activity from surface signals, according to a proof-of-concept study published this month in Nature Nanotechnology.
“Almost everything we know about how neurons behave in living brains comes from data collected with either electrophysiology or two-photon imaging,” said neuroscientist Joshua H. Siegle, PhD, of the Allen Institute for Neural Dynamics in Seattle, who was not involved in the study. “Until now, these two methods have rarely been used simultaneously.”
The technology, which has been tested in mice, could help advance our knowledge of how the brain works and may lead to new minimally invasive treatments for neurologic disorders.
Multimodal Neurotech: The Power of 2-in-1
Electrical and optical methods for recording brain activity have been crucial in advancing neurophysiologic science, but each technique has its limits. Electrical recordings provide high “temporal resolution”; they reveal when activation is happening, but not really where. Optical imaging, on the other hand, offers high “spatial resolution,” showing which area of the brain is lighting up, but its measurements may not correspond with the activity’s timing.
Research over the past decade has explored how to combine and harness the strengths of both methods. One potential solution is to use electrodes made of transparent materials such as graphene, allowing a clear field of view for a microscope during imaging. Recently, University of Pennsylvania scientists used graphene electrodes to illuminate the neural dynamics of seizures.
But there are challenges. If graphene electrodes are very small — in this case, 20 µm in diameter — they become more resistant to the flow of electricity. Dr. Kuzum and colleagues addressed this by adding tiny platinum particles to improve electrical conductivity. Long graphene wires connect electrodes to the circuit board, but defects in graphene can interrupt the signal, so they made each wire with two layers; any defects in one wire could be hidden by the other.
By combining the two methods (microelectrode arrays and two-photon imaging), the researchers could see both when brain activity was happening and where, including in deeper layers. They discovered a correlation between electrical responses on the surface and cellular calcium activity deeper down. The team used these data to create a neural network (a type of artificial intelligence that learns to recognize patterns) that predicts deep calcium activity from surface-level readings.
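The surface-to-depth prediction idea can be sketched in miniature. The example below is purely illustrative and is not the authors’ model or data: it simulates a deep “calcium-like” trace that leaks, with noise, into several surface channels, then fits a simple linear (ridge) regressor mapping the surface signals back to the deep trace. The actual study trained a neural network on recorded electrode and two-photon imaging data; all names and numbers here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): 500 time points,
# 16 surface-electrode channels, 1 deep calcium trace.
n_samples, n_channels = 500, 16
deep_calcium = np.cumsum(rng.normal(size=n_samples))  # slow, drifting signal
mixing = rng.normal(size=n_channels)                  # how the deep signal leaks to each channel
surface = np.outer(deep_calcium, mixing) + rng.normal(scale=0.5, size=(n_samples, n_channels))

# Fit a linear map from surface signals to the deep trace
# (ridge regression via the regularized normal equations).
lam = 1e-3
w = np.linalg.solve(surface.T @ surface + lam * np.eye(n_channels),
                    surface.T @ deep_calcium)
prediction = surface @ w

# Correlation between predicted and true deep activity.
r = np.corrcoef(prediction, deep_calcium)[0, 1]
print(f"correlation: {r:.2f}")
```

Because the simulated surface channels are linear mixtures of the deep signal, even this toy regressor recovers it almost perfectly; the hard part in practice is that real surface-depth relationships are nonlinear and noisy, which is why the researchers used a trained neural network rather than a linear map.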
The tech could help scientists study brain activity “in a way not possible with current single-function tools,” said Luyao Lu, PhD, professor of biomedical engineering at George Washington University in Washington, DC, who was not involved in the study. It could shed light on interactions between vascular and electrical activity, or explain how place cells (neurons in the hippocampus) are so efficient at creating spatial memory.
It could also pave the way for minimally invasive neural prosthetics or targeted treatments for neurologic disorders, the researchers say. Implanting the device would be a “straightforward process” similar to placing electrocorticography grids in patients with epilepsy, said Dr. Kuzum.
But first, the team plans to do more studies in animal models before testing the tech in clinical settings, Dr. Kuzum added.
A version of this article appeared on Medscape.com.
FROM NATURE NANOTECHNOLOGY
Experimental Therapy Restores Cognitive Function in Chronic TBI
In a first-in-humans trial, deep brain stimulation (DBS) of the central thalamus restored cognitive function in patients with moderate to severe traumatic brain injury (msTBI) and chronic sequelae.
Participants in this first-in-humans trial experienced brain injuries between 3-18 years before the study that left them with persistent neuropsychological impairment and a range of functional disabilities.
This is the first time a DBS device has been implanted in the central thalamus in humans, an area of the brain measuring only a few millimeters wide that helps regulate consciousness.
Placing the electrodes required a novel surgical technique developed by the investigators that included virtual models of each participant’s brain, microelectrode recording, and neuroimaging to identify neuronal circuits affected by the TBI.
After 3 months of 12-hour daily DBS treatments, participants’ performance on cognitive tests improved by an average of 32% from baseline. Participants were able to read books, watch TV shows, play video games, and complete schoolwork, and they felt significantly less fatigued during the day.
Although the small trial included only five patients, the work is already being hailed by other experts as significant. “We were looking for partial restoration of executive attention and expected [the treatment] would have an effect, but I wouldn’t have anticipated the effect size we saw,” co-lead investigator Nicholas Schiff, MD, professor of neuroscience at Weill Cornell Medical College, New York City, said in an interview.
The findings were published online Dec. 4 in Nature Medicine.
“No Trivial Feat”
An estimated 5.3 million children and adults are living with a permanent TBI-related disability in the US today. There currently is no effective therapy for impaired attention, executive function, working memory, or information-processing speed caused by the initial injury.
Previous research suggests that a loss of activity in key brain circuits in the thalamus may be associated with a loss of cognitive function.
The investigators recruited six adults (four men and two women) between the ages of 22 and 60 years with a history of msTBI and chronic neuropsychological impairment and functional disability. One participant was later withdrawn from the trial for protocol noncompliance.
Participants completed a range of questionnaires and tests to establish baseline cognitive, psychological, and quality-of-life status.
To restore lost executive functioning in the brain, investigators had to target not only the central lateral nucleus, but also the neuronal network connected to the region that reaches other parts of the brain.
“To do both of those things we had to develop a whole toolset in order to model both the target and trajectory, which had to be right to make it work properly,” co-lead investigator Jaimie Henderson, MD, professor of neurosurgery at Stanford University School of Medicine, Stanford, California, said in an interview. “That gave us a pretty narrow window in which to work and getting an electrode accurately to this target is not a trivial feat.”
“A Moving Target”
Each participant’s brain physiology was slightly different, meaning the path that worked for one individual might not work for another. The surgery was further complicated by shifting in the brain that occurred as individual electrodes were placed.
“It was a literal moving target,” Dr. Henderson said.
In the beginning, investigators used microelectrode recording to “listen” to individual neurons to see which ones weren’t firing correctly.
When that method failed to offer the precise information needed for electrode placement, the investigators switched to neuroimaging, which allowed them to complete the surgery more quickly and accurately.
Participants remained in the hospital 1-2 days after surgery. They returned for postoperative imaging 30 days after surgery and were randomly assigned to different schedules for a 14-day titration period to optimize DBS stimulation.
The primary outcome was a 10% improvement on part B of the trail-making test, a neuropsychological test that measures executive functioning.
After 90 days of 12-hour daily DBS treatments, participants’ scores increased 15%-52% (average, 32%) from baseline. Participants also reported an average 33% decline in fatigue, one of the most common side effects of msTBI, and an average 80% improvement in attention.
The main safety risk during the 3- to 4-hour procedure is bleeding, which didn’t affect any of the participants in this study. One participant developed a surgical site infection, but all other side effects were mild.
After the 90-day treatment period, the study plan called for patients to be randomly assigned to a blinded withdrawal of treatment, with the DBS turned off for 21 days. Two of the patients declined to be randomized. DBS was turned off in one participant while the other two continued as normal.
After 3 weeks, the patient whose DBS was turned off showed a 34% decline on cognitive tests. The device was reactivated after the study and that participant has since reported improvements.
The DBS devices continue to function in all participants. Although their performance is not being measured as part of the study, anecdotal reports indicate sustained improvement in executive functioning.
“The brain injury causes this global down-regulation of brain function and what we think that this is doing is turning that back up again,” Dr. Henderson said. “At a very simplistic level, what we’re trying to do is turn the lights back up after the dimmer switch is switched down from the injury.”
New Hope
TBI patients are usually treated aggressively during the first year, when significant improvements are most likely, but there are few therapeutic options beyond that time, said neurologist Javier Cardenas, MD, who commented on the findings for this article.
“Many providers throw their hands up after a year in terms of intervention and then we’re always looking at potential declines over time,” said Dr. Cardenas, director of the Concussion and Brain Injury Center at the Rockefeller Neuroscience Institute, West Virginia University, Morgantown. “Most people plateau and don’t decline but we’re always worried about a secondary decline in traumatic brain injury.”
Surgery is usually only employed immediately following the brain injury. The notion of surgery as a therapeutic option years after the initial assault on the brain is novel, said Jimmy Yang, MD, assistant professor of neurologic surgery at Ohio State University College of Medicine, Columbus, who commented on the findings for this article.
“While deep brain stimulation surgery in clinical practice is specifically tailored to each patient we treat, this study goes a step further by integrating research tools that have not yet made it to the clinical realm,” Dr. Yang said. “As a result, while these methods are not commonly used in clinical care, the overall strategy highlights how research advances are linked to clinical advances.”
Investigators are working to secure funding for a larger phase 2 trial.
“With millions of people affected by traumatic brain injury but without effective therapies, this study brings hope that options are on the horizon to help these patients,” Dr. Yang said.
The study was supported by funding from the National Institutes of Health BRAIN Initiative and a grant from the Translational Science Center at Weill Cornell Medical College. Surgical implants were provided by Medtronic. Dr. Henderson and Dr. Schiff are listed as inventors on several patent applications for the experimental DBS therapy described in the study. Dr. Cardenas and Dr. Yang report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
(msTBI) and chronic sequelae.
Participants in this first-in-humans trial experienced brain injuries between 3-18 years before the study that left them with persistent neuropsychological impairment and a range of functional disabilities.
This is the first time a DBS device has been implanted in the central thalamus in humans, an area of the brain measuring only a few millimeters wide that helps regulate consciousness.
Placing the electrodes required a novel surgical technique developed by the investigators that included virtual models of each participant’s brain, microelectrode recording, and neuroimaging to identify neuronal circuits affected by the TBI.
After 3 months of 12-hour daily DBS treatments, participants’ performance on cognitive tests improved by an average of 32% from baseline. Participants were able to read books, watch TV shows, play video games, complete schoolwork, and felt significantly less fatigued during the day.
Although the small trial only included five patients, the work is already being hailed by other experts as significant.“We were looking for partial restoration of executive attention and expected [the treatment] would have an effect, but I wouldn’t have anticipated the effect size we saw,” co-lead investigator Nicholas Schiff, MD, professor of neuroscience at Weill Cornell Medical College, New York City, said in an interview.
The findings were published online Dec. 4 in Nature Medicine.
“No Trivial Feat”
An estimated 5.3 million children and adults are living with a permanent TBI-related disability in the US today. There currently is no effective therapy for impaired attention, executive function, working memory or information-processing speed caused by the initial injury.
Previous research suggests that a loss of activity in key brain circuits in the thalamus may be associated with a loss of cognitive function.
The investigators recruited six adults (four men and two women) between the ages of 22 and 60 years with a history of msTBI and chronic neuropsychological impairment and functional disability. One participant was later withdrawn from the trial for protocol noncompliance.
Participants completed a range of questionnaires and tests to establish baseline cognitive, psychological, and quality-of-life status.
To restore lost executive functioning in the brain, investigators had to target not only the central lateral nucleus, but also the neuronal network connected to the region that reaches other parts of the brain.
“To do both of those things we had to develop a whole toolset in order to model both the target and trajectory, which had to be right to make it work properly,” co-lead investigator Jaimie Henderson, MD, professor of neurosurgery at Stanford University College of Medicine, Stanford, California, said in an interview. “That gave us a pretty narrow window in which to work and getting an electrode accurately to this target is not a trivial feat.”
“A Moving Target”
Each participant’s brain physiology was slightly different, meaning the path that worked for one individual might not work for another. The surgery was further complicated by shifting in the brain that occurred as individual electrodes were placed.
“It was a literal moving target,” Dr. Henderson said.
In the beginning, investigators used microelectrode recording to “listen” to individual neurons to see which ones weren’t firing correctly.
When that method failed to offer the precise information needed for electrode placement, the investigators switched to neuroimaging, which allowed them to complete the surgery more quickly and accurately.
Participants remained in the hospital 1-2 days after surgery. They returned for postoperative imaging 30 days after surgery and were randomly assigned to different schedules for a 14-day titration period to optimize DBS stimulation.
The primary outcome was a 10% improvement on part B of the trail-making test, a neuropsychological test that measures executive functioning.
After 90 days of 12-hour daily DBS treatments, participants’ scores increased 15%–52% (average 32%) from baseline. Participants also reported an average of 33% decline in fatigue, one of the most common side effects of msTBI, and an average 80% improvement in attention.
The main safety risk during the 3- to-4-hour procedure is bleeding, which didn’t affect any of the participants in this study. One participant developed a surgical site infection, but all other side effects were mild.
Deep brain stimulation (DBS) of the central thalamus improved cognitive function in a small trial of patients with moderate to severe traumatic brain injury (msTBI) and chronic sequelae.
Participants in this first-in-human trial experienced brain injuries 3-18 years before the study that left them with persistent neuropsychological impairment and a range of functional disabilities.
This is the first time a DBS device has been implanted in the human central thalamus, an area of the brain only a few millimeters wide that helps regulate consciousness.
Placing the electrodes required a novel surgical technique developed by the investigators that included virtual models of each participant’s brain, microelectrode recording, and neuroimaging to identify neuronal circuits affected by the TBI.
After 3 months of 12-hour daily DBS treatments, participants’ performance on cognitive tests improved by an average of 32% from baseline. Participants were able to read books, watch TV shows, play video games, and complete schoolwork, and they felt significantly less fatigued during the day.
Although the small trial included only five patients, the work is already being hailed by other experts as significant.
“We were looking for partial restoration of executive attention and expected [the treatment] would have an effect, but I wouldn’t have anticipated the effect size we saw,” co-lead investigator Nicholas Schiff, MD, professor of neuroscience at Weill Cornell Medical College, New York City, said in an interview.
The findings were published online Dec. 4 in Nature Medicine.
“No Trivial Feat”
An estimated 5.3 million children and adults are living with a permanent TBI-related disability in the US today. There currently is no effective therapy for impaired attention, executive function, working memory, or information-processing speed caused by the initial injury.
Previous research suggests that a loss of activity in key brain circuits in the thalamus may be associated with a loss of cognitive function.
The investigators recruited six adults (four men and two women) between the ages of 22 and 60 years with a history of msTBI and chronic neuropsychological impairment and functional disability. One participant was later withdrawn from the trial for protocol noncompliance.
Participants completed a range of questionnaires and tests to establish baseline cognitive, psychological, and quality-of-life status.
To restore lost executive functioning in the brain, investigators had to target not only the central lateral nucleus, but also the neuronal network connected to the region that reaches other parts of the brain.
“To do both of those things we had to develop a whole toolset in order to model both the target and trajectory, which had to be right to make it work properly,” co-lead investigator Jaimie Henderson, MD, professor of neurosurgery at Stanford University College of Medicine, Stanford, California, said in an interview. “That gave us a pretty narrow window in which to work and getting an electrode accurately to this target is not a trivial feat.”
“A Moving Target”
Each participant’s brain physiology was slightly different, meaning the path that worked for one individual might not work for another. The surgery was further complicated by shifting in the brain that occurred as individual electrodes were placed.
“It was a literal moving target,” Dr. Henderson said.
In the beginning, investigators used microelectrode recording to “listen” to individual neurons to see which ones weren’t firing correctly.
When that method failed to offer the precise information needed for electrode placement, the investigators switched to neuroimaging, which allowed them to complete the surgery more quickly and accurately.
Participants remained in the hospital 1-2 days after surgery. They returned for postoperative imaging 30 days after surgery and were randomly assigned to different schedules for a 14-day titration period to optimize DBS stimulation.
The primary outcome was a 10% improvement on part B of the Trail Making Test, a neuropsychological test that measures executive functioning.
After 90 days of 12-hour daily DBS treatments, participants’ scores increased 15%-52% (average 32%) from baseline. Participants also reported an average 33% reduction in fatigue, one of the most common symptoms of msTBI, and an average 80% improvement in attention.
The main safety risk during the 3- to 4-hour procedure is bleeding, which did not occur in any of the participants in this study. One participant developed a surgical site infection, but all other adverse events were mild.
After the 90-day treatment period, the study plan called for patients to be randomly assigned to a blinded withdrawal of treatment, with the DBS turned off for 21 days. Two of the patients declined to be randomized. DBS was turned off in one participant while the other two continued as normal.
After 3 weeks, the patient whose DBS was turned off showed a 34% decline on cognitive tests. The device was reactivated after the study and that participant has since reported improvements.
The DBS devices continue to function in all participants. Although their performance is not being measured as part of the study, anecdotal reports indicate sustained improvement in executive functioning.
“The brain injury causes this global down-regulation of brain function and what we think that this is doing is turning that back up again,” Dr. Henderson said. “At a very simplistic level, what we’re trying to do is turn the lights back up after the dimmer switch is switched down from the injury.”
New Hope
TBI patients are usually treated aggressively during the first year, when significant improvements are most likely, but there are few therapeutic options beyond that time, said neurologist Javier Cardenas, MD, who commented on the findings for this article.
“Many providers throw their hands up after a year in terms of intervention and then we’re always looking at potential declines over time,” said Dr. Cardenas, director of the Concussion and Brain Injury Center at the Rockefeller Neuroscience Institute, West Virginia University, Morgantown. “Most people plateau and don’t decline but we’re always worried about a secondary decline in traumatic brain injury.”
Surgery is usually employed only immediately following the brain injury. The notion of surgery as a therapeutic option years after the initial assault on the brain is novel, said Jimmy Yang, MD, assistant professor of neurologic surgery at Ohio State University College of Medicine, Columbus, who commented on the findings for this article.
“While deep brain stimulation surgery in clinical practice is specifically tailored to each patient we treat, this study goes a step further by integrating research tools that have not yet made it to the clinical realm,” Dr. Yang said. “As a result, while these methods are not commonly used in clinical care, the overall strategy highlights how research advances are linked to clinical advances.”
Investigators are working to secure funding for a larger phase 2 trial.
“With millions of people affected by traumatic brain injury but without effective therapies, this study brings hope that options are on the horizon to help these patients,” Dr. Yang said.
The study was supported by funding from the National Institutes of Health BRAIN Initiative and a grant from the Translational Science Center at Weill Cornell Medical College. Surgical implants were provided by Medtronic. Dr. Henderson and Dr. Schiff are listed as inventors on several patent applications for the experimental DBS therapy described in the study. Dr. Cardenas and Dr. Yang report no relevant financial relationships.
A version of this article first appeared on Medscape.com.