[music]
Herbert Y. Kressel, MD Hi. This is Herb Kressel and welcome to the September Radiology podcast.
This morning I'm joined by Dr. Jeffrey Weilburg who is Medical Director of the Physicians
Organization at Massachusetts General Hospital and Associate Professor of Radiology at Harvard
Medical School and Massachusetts General Hospital.
Dr. Weilburg and his colleagues wrote a very provocative article entitled "Outpatient
High-Cost Imaging Utilization Management in a Large, Stable Patient and Provider Cohort Studied
over 7 Years."
Welcome Dr. Weilburg and thanks so much for joining us.
Jeffrey B. Weilburg, MD Thank you.
Now Dr. Weilburg many of those who are watching or listening to the podcast are not that familiar
with how healthcare is organized in the U.S. and how efforts like utilization management
are actually addressed.
Can you tell us a little bit more about your institution, the payer mix and the requirement
that you attempted to address with developing the decision support tool that you use.
Yes, so I do need to correct your introduction.
I'm Associate Professor of Psychiatry.
Oh yes.
Everybody that I meet is a radiologist until proven otherwise.
Yeah, so I think I'm an honorary radiologist.
Okay, perfect.
So this is Mass General Hospital which is a large, urban academic tertiary (inaudible)
medical center located in Boston Massachusetts.
It is part of what we call the Partners Health Care System which includes the Brigham and
Women's Hospital and a bunch of other hospitals throughout the eastern Massachusetts area.
The way we've worked it out is that there's the general hospital, which is the hospital,
and then the physicians organization, which includes about 2500 doctors, both primary care and specialty.
Our payer mix is roughly 40% to 50% commercial payers, with the rest government
payers, Medicare and Medicaid.
To put it in a nutshell for your viewers, what we've done over the last 10 to 15 years
is move from a standard fee-for-service system gradually more and more towards
what we call a risk-based system, which is essentially capitation.
That's very relevant for what happened here, because as we entered risk
we were sharing with the insurance companies the problem that healthcare costs were going
up, and we knew that one of the reasons was that people were using more high-cost imaging.
So we joined them to figure out how you get the doctors on the same side as
the insurance companies and the government in controlling healthcare costs.
When you just devolve the cost control to the insurer or the government, they worry
most about cost.
When you include the doctors they start to worry about medical quality as well.
So that's why you want doctors involved in controlling costs in a collaborative way.
So what happened here was that as imaging costs were going up, the payers came to us
and said, 'Hey, we need to control these costs.
We'd like these radiology benefits management companies to get prior authorization every
time a doctor orders a scan.'
And that's where the psychiatry piece comes in.
I had gone through that with psychiatry years before and had some concerns about it.
So we decided, okay, we will take risk for imaging, for high-cost imaging, and we shared risk
with our payers; but to be able to do that we had to develop a system of utilization
management, which meant asking: the doctor wants a CT scan of the head.
Why do they want it?
Is it appropriate?
And then how do you manage that, to say yes you can do it, or we don't know that you
should do it, maybe do something else?
Does that answer the question?
Yeah.
Now just to drill down a little bit more, in terms of the way the institution is organized
to actively manage the care: at Beth Israel Deaconess I remember we had these primary
care pods.
So rather than having several hundred doctors sharing risk together, they were in smaller
groups that would meet, exchange ideas, and look over reports.
How is it done at Mass General?
Is it done centrally, or…?
It's a very good question.
It's evolved, because in the early days of capitation the individual primary care doctors
were themselves at risk, which meant they were worried about high-cost cases and spending
because it affected their personal income, and that was too much of an incentive to control
costs, because they also had to provide care for the patient no matter what.
During the period in which this study took place, the MGPO and the MGH, the physicians
organization and the hospital, were the risk-bearing agents.
So we diffused the risk across all of the primary care doctors, and the specialists shared
in the risk.
Right now the risk has moved more centrally, even to Partners, a little bit further
away from the institution, although we have what's called an internal performance framework
where we give imaging data to every single doctor in the institution, because unless
we do, utilization can keep going up and up, and when there are outliers we say, hey,
we're concerned about your performance, can you look at it?
So now Partners is holding the risk, but we still end up being very concerned about it.
And then where does imaging utilization fit in the context of overall utilization management?
Is it a major focus, a fraction of it, or…
The big-ticket items now for healthcare spending are inpatient admissions, the emergency room,
and often surgery.
So in our system high-cost imaging is maybe 6% to 7% of the total spend, but what
the paper showed, and what's been very valued at the institution, is that we brought it down from
9% to around 7%, and that has been very important in helping us perform well under a risk-based
contract.
Good.
So who at the institution developed the decision support tool and how were the recommendations
determined?
So the – what we started out in 2005 and…
Is that why they hired a psychiatrist to manage the physician pool?
That's actually how it happened.
The radiology order entry system, commonly known here as ROE, was developed by Dan Rosenthal,
Keith Dreyer, and Dr. Jim Thrall as a way to do better structured order entry and move away from fax,
but when the payers approached us and we came back to them and said, hey, we'd like to do it
ourselves, they reached out to me and I've been working very closely with them
on adding the decision support component to their already built system.
The content was led by Dan and the other radiologists, because I don't know
radiology content.
My job has been interacting with the doctors about this and building decision support.
We developed internally a set of rules based on our best guess of evidence.
Then, as you well know, the American College of Radiology used some of those rules, but
has developed its own products, ACR Select and other things like that, and done, I think,
a pretty good job of working at a national level, engaging experts in particular areas
as well as primary care, to develop a fairly open, evidence-based set of rules which now
covers a pretty substantial portion of all of the indications and orderables.
So there is a pretty good set, and there are other products competing with ACR that have gone through
a similar process nationally.
So you largely are using the ACR guidelines and then you modify them as appropriate for
the institution?
Actually the other way around.
What happened is that during the time course of this paper we were using our
internal system called ROE.
Since that time we've switched to Epic, and Epic uses ACR Select, which incorporated
some of the things from ROE.
So that brings us to the paper.
Why did you decide to do the retrospective review now?
I wanted to keep my job.
Hey, we're going to invite you back if you keep up the humor.
We put a lot of time and energy into imaging management and I wanted to know whether we were effective
in what we did.
That's really the straight and honest answer.
And, you know, it was an important question from a health policy point of view, because
we wanted to know: could we take this on ourselves?
Could we manage risk?
Could doctors and the healthcare system be the guardians of cost as well as the guardians
of quality and utilization, or did we need to have it done by some external agency?
So if you could show it for imaging then can you do it for other areas like labs and all
sorts of other resource use?
Because no offense to the radiologists, but from a medical director's point of view,
imaging is just one of the things the doctors do.
In my opinion it's absolutely a core piece of diagnosis and treatment and that's why
I didn't want them to carve it out, I wanted us to be directing it based on real medical
evidence.
Good.
Getting to the details of your study, you looked at the utilization of a group
that you called a loyalty cohort of primary care physicians.
Can you tell us what that means, what fraction of the total pool of primary care doctors
this represents, and do you get, like, a merit badge if you're loyal?
I'm afraid that's not how it works at MGH.
But the loyalty cohort was something developed by one of our lead primary care doctors, Dr.
Stephen Atlas.
He developed it because about 12 to 15 years ago we were trying to get our hands around utilization
overall, and we were using the primary care doctor designated by the payer, but
when you would show them their imaging they would say to me, 'What? This is not good data.
You're showing me imaging on patients I've never seen.
I didn't order that stuff.
How can you hold me responsible for that?'
So the PCPs got together and said, let's look at a way of finding the patients that
the PCPs themselves really do take care of: they order imaging, they order labs, they
prescribe drugs, they have visits.
He developed this incredibly reliable way to say, doctor, these are your patients, this
is your patient panel.
It's applied to essentially 100% of the PCPs at MGH.
Okay.
Well it seems like that's a very important group to identify appropriately for this type
of effort.
Going back to the methods, as a sort of figure of merit you looked at the utilization
rates of high-cost imaging compared to laboratory utilization.
What is the expected relationship between the two?
Did you have any information about that when you made that decision?
Well, no, and the humble and direct answer is that this was a retrospective study, and we
were trying to do our best to use retrospective analysis to establish as much of a causal
relationship as one can in such studies, because it's really mostly just correlational.
We knew that other research in the area had shown some correlation between imaging UM and
utilization.
But what we asked is – I joked with you before about imaging being a resource from
a medical director's point of view; labs are another resource.
The reason we picked labs is that the doctors at that point knew the unit cost of an image
tended to be higher than the unit cost of a lab, though that's not always true: if you order
a genetic panel it's many thousands of dollars.
So they tended to see both as diagnostic tests that they would order and manage for
patients.
Imaging was very intensely managed.
Laboratory at that time was not managed, so in some way there's a rough comparison.
And in the paper we discuss that it's at best a rough comparison.
The paper focuses on the probability that a doctor will order a scan.
We measure propensity, the probability to order, and intensity, the number they order
when they do order.
The probability that the doctor is going to order the imaging is the thing that is probably
most subject to utilization management, because that's under the doctor's control: do
I do it or not?
So look at the probability that the doctor is going to order imaging versus the
probability they're going to order a lab on the same panel of patients.
The labs went down, because we were trying to control costs; we had Choosing Wisely
and lots of other things, and people may know there was a recession in this country and
patients were more concerned about utilization.
But imaging went down nearly twice as much, and it was a statistically significant difference,
which suggested to us a differential impact of utilization management on doctor behavior.
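As an illustration of the propensity and intensity measures described above, the following is a minimal Python sketch under assumed, hypothetical data (the patient IDs, order counts, and data layout are invented for the example); it is not the study's actual analysis code.

```python
# Minimal sketch (hypothetical data layout, not the study's code):
# propensity = share of a panel with >= 1 high-cost imaging order in a period;
# intensity  = mean number of studies among patients who had at least one order.
from collections import defaultdict

def propensity_and_intensity(orders, panel):
    """orders: iterable of (patient_id, n_studies); panel: set of patient IDs."""
    counts = defaultdict(int)
    for patient_id, n_studies in orders:
        if patient_id in panel:
            counts[patient_id] += n_studies
    ordered = [n for n in counts.values() if n > 0]
    propensity = len(ordered) / len(panel) if panel else 0.0
    intensity = sum(ordered) / len(ordered) if ordered else 0.0
    return propensity, intensity

# Example: a 4-patient panel where 2 patients had imaging (1 and 3 studies)
panel = {"p1", "p2", "p3", "p4"}
orders = [("p1", 1), ("p3", 3)]
print(propensity_and_intensity(orders, panel))  # (0.5, 2.0)
```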
In fact the doctors tell us that.
Can you highlight some of your other key findings in the study, for people listening who may
not have read the article yet?
It's pretty straightforward.
We found that the total utilization of imaging across our system went down significantly.
I think it was 21-point-something percent.
And the imaging attributed to the primary care doctors, the stuff they ordered for their
patients, went down about 36%, whereas specialist ordering on those same patients also went
down, but a little bit less.
That made sense to us because the utilization management efforts were mostly focused on
the primary care doctors.
It's really much harder to tell the orthopedic surgeon, the neurosurgeon, or the endocrine
specialist what to do about imaging, because they do a narrower range of things and they
know more exactly what they want when they order an imaging study, whereas the primary care
doctors often value the help because they have to cover this wide range of disorders
and they're less expert at any particular thing.
So it made sense that imaging went down for PCPs more.
We found that there didn't seem to be any dumping; we didn't go from the PCPs
ordering it to the specialists ordering it.
And then, in addition to comparing it internally to the labs, we were fortunate enough
to get some local data, and we found that even in eastern Massachusetts imaging utilization
went down during the same time period, but imaging utilization here went down substantially
more.
We thought, hmm, they're using these radiology benefits management companies outside
of us, and we're using our own internal system; maybe that suggests that the internal system
is working pretty well.
I see.
Now you alluded to this, that as you shifted to Epic the decision support tool actually
changed, but is there a structure for managing this going forward?
There have been a lot of changes in imaging technology, so are you going to let Epic decide this,
or the college, or is this still going to be an "autonomous" effort?
So it's a very difficult question to answer.
I think that the understanding of clinical decision support and utilization management
for imaging and many other areas is evolving rapidly.
I think what we're seeing now is that Epic is using a company called National
Decision Support Company to provide a ROE-like experience, but it's very different
at different institutions, and work is ongoing to improve it.
I think the ACR is doing a good job in trying to provide a transparent set of clinical
decision support recommendations, so that anyone in the United States, doctor or patient, can
see what they're getting and what the evidence is.
I think that kind of thing will continue to grow.
I think we're learning, in answer to your question, that in imaging and other
areas it's not just the clinical decision support, it's not just Epic; it's about
how you engage the doctors, how you make the system useful for them, and how you give
them feedback about how they're doing so they can learn.
The problem with the RBMs is that it's an external group that says yes you can, no you can't.
What we did is say yes you can: even if you get a red score, meaning decision
support says the exam is not likely to be that useful, you can still go ahead, but we're
going to count up your red exams, feed that back to you, show it
to you publicly, and ask, can you explain to us the utility of this?
When you do that, doctors are pretty data driven.
They start to understand: maybe I'm a bit of an outlier if I'm much higher
than a like group of colleagues, why is that, and am I getting value?
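As a rough illustration of the red-exam feedback loop described above, here is a small hypothetical Python sketch; the field names, score values, and outlier threshold are assumptions made for the example, not MGH's actual reporting system.

```python
# Hypothetical sketch of red-exam feedback: count low-utility ("red") orders per
# physician and flag outliers relative to the peer-group average. All values are
# illustrative only.
from collections import Counter

def red_exam_report(orders, outlier_ratio=2.0):
    """orders: list of (physician_id, decision_support_score); 'red' = low utility."""
    red_counts = Counter(pid for pid, score in orders if score == "red")
    physicians = {pid for pid, _ in orders}
    mean_red = sum(red_counts.values()) / len(physicians) if physicians else 0.0
    outliers = [pid for pid in physicians
                if mean_red > 0 and red_counts[pid] / mean_red >= outlier_ratio]
    return red_counts, mean_red, outliers

orders = [("dr_a", "red"), ("dr_a", "green"), ("dr_a", "red"),
          ("dr_b", "green"), ("dr_c", "red"), ("dr_c", "green")]
print(red_exam_report(orders))  # dr_a has twice the average red count and is flagged
```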
The best use of clinical decision support is when it becomes a teaching experience for
the doctors.
I'll say one other thing: some of our patients have really liked it, because when
a doctor can say, there's a set of evidence that helped guide me about whether an image
is useful for you, and I'm not ordering this image for your low back pain because I think
it may not be helpful, and here, let me turn the screen around and show you what the system
says, the patients love that.
They understand the doctors are trying to do the best evidence-based thing for
them, rather than just skimping on care and not doing it.
I have one last question for you.
I'm sorry, but one thing that you said just struck a chord.
You know, we publish a lot of diagnostic accuracy studies, and one of the things that we are increasingly
requiring is multiple readers or multiple institutions, because the performance results
of an exam are not just driven by the machine, the protocols that we use, and the like…
Absolutely.
So the question is, where is the interface between generalized recommendations and your
own abilities at your local institution, in your population, with the prevalence of disease
that you're seeing?
I think at Mass General you have your arms around that locally, but for people looking at this,
do you have any suggestions as to how they deal with that issue?
Boy, that's going a little bit beyond the paper.
Let me see if this is on track with what you're saying.
It's really interesting, because our department of radiology has been looking at what we call
leakage: is there a difference in value between imaging done within our system, where we pay
very close attention to the protocols, the technologists, how they do the images, and the
quality of the imaging, versus something that's done in another system?
Sometimes it's just as good, but sometimes it's not, and then do you have to repeat the scan,
and is that a cost issue and a quality issue?
So we've tried to steer things toward our own internal system to control leakage, because
of the protocols and the interpretation.
The other thing is there's a big issue about radiologists' recommendations.
We looked at how much imaging is driven by recommendations made by the radiologists…
In the reports.
…there's a sort of urban myth among the primary care doctors that the radiologist
made me do it.
It turns out that when you look at it, it's not as big as you would think; it's less
than 10% and declining.
We also looked at, and other systems may want to do this, we had fun with this:
profiling our radiologists by their rate of recommending repeat or follow-up imaging, and we
found, not surprisingly, a direct correlation between the tendency to suggest repeat
or follow-up imaging and the experience level of the radiologist.
The very senior, very experienced radiologists were more comfortable with their ability to
interpret the image.
We have not completed work on how accurate they are, but that would be a next step.
Is it hubris, or is it true experience?
Dr. Weilburg I really want to thank you for spending the time and sharing your insights
on this fascinating paper and we look forward to more papers on your work.
Thank you very much.
It's a pleasure to talk with you.
Thanks for joining us.