By request (check the comments), here is the latest Pediatric E&M Distribution data from PCC. You can't get it much fresher than this - a few million E&M CPT codes, from May 2007 to May 2008. I did a good, in-depth analysis earlier this year.
To understand the chart, let me explain a few things. The first set of codes, in red, represents our "vanilla" E&M distribution - plain ol' 99213s, etc. The second set, labeled in blue, represents the distribution of -25 modified codes, which now make up about 1/6 the volume of the vanilla codes (i.e., enough to have an effect). The final set, in green, represents the combined E&M distribution using all the codes.
Enjoy. Click here or the pic for a better shot.
I have been a little remiss in my blog duties lately - "work" intervenes. As I type, I am attending a class on dashboard design with Igor. We've been fascinated by what we have learned during our development of our PCC dashboards - if you want to check out a sample, go for it (name: demo, password: pccsales). This is a shot of our new Pediatric Financial Pulse while we work on the clinical one.
After some discussion, Igor and I have agreed that we could make a cool spreadsheet tool for non-PCC customers to calculate their own Financial Pulses. Although you would have to calculate a number of the figures yourself, and we can't include every measurement, it would still come with PCC's magic pediatric benchmarks and more. And the price would be right. We'll see how that goes; I have to get started on it.
New topic - phreesia.com. It came up on the SOAPM list. The concept is interesting enough, but we have the constant challenge of things like this:
Phreesia's educational content is derived from two separate sources:
We have sponsored content provided by some of the most prominent companies in the health care industry.
I certainly don't blame Phreesia (someone has to pay the bills!), so I'm curious to hear from anyone out there who is using them. Cool? Not cool? Comments are anonymous if you wish, but you can always write me directly.
I am, again, in the back of the room at our Coding and Practice Management event and listening to Dr. Tuck. I have to steal a line or two from him, he’s got some good ones. Dr. Lander is next.
I’ve noticed some real changes in the audiences I speak to over the last few years. Today, when I ask, “Who uses RVUs to set prices?” most of the hands go up. Even 2-3 years ago, almost none went up. I like that.
Oh, I did get to make the first official announcement of our Disney C&PM event! Woohoo!
While Dr. Lander now explains how to collect money, let me share a few things we learned recently. Igor and I have been working on our “clinical benchmark” for PCC clients. Essentially, we’ve defined a series of pediatric-specific measurements that even non-PCC clients can generate to get some sense of their clinical effectiveness. To do this, we started by making some assumptions and then ran the data. Then, we reviewed the results to confirm or deny our suspicions. We turned up some interesting results:
A long time ago, PCC came up with the concept of an important pediatric clinical measurement, the Sick-to-Well Visit ratio. The concept was straightforward: how could we encapsulate, in a single number, a practice's focus on providing preventive care?
The ratio has another (intentional) use: it's a really good indicator of the financial health of a practice. There are exceptions - cue Lynn Cramer - but unless a practice has a strong chronic disease management program in place, I can tell a lot about a practice based on this simple number.
Originally, we calculated the ratio the easy way: we added up 99201s through 99215s and divided them by 99381s through 99395s. There are all kinds of problems with this, though the results were better than nothing.
Today, we're a little smarter. The big change we've made is recognizing the growth of -25 modified code usage. Obviously, when trying to measure the preventive care focus of a practice, we don't want to count well visits with attached sick visit codes as sick visits. So, now we count all visits that have a well visit in them as a well visit (even with additional, modified E&M codes). The results are interesting. Thanks to Igor, as usual.
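Both versions of the calculation can be sketched in a few lines. This is a minimal illustration, assuming each visit is simply the set of CPT codes billed on one encounter (modifiers stripped) - that data model is my invention for the sketch, not PCC's actual schema:

```python
# Well visit codes: 99381-99385 (new) and 99391-99395 (established).
# Sick visit codes: 99201-99205 (new) and 99211-99215 (established).
WELL_CODES = {str(c) for c in range(99381, 99386)} | {str(c) for c in range(99391, 99396)}
SICK_CODES = {str(c) for c in range(99201, 99206)} | {str(c) for c in range(99211, 99216)}

def sick_to_well_ratio(visits):
    """visits: iterable of sets of CPT codes, one set per encounter."""
    well = sick = 0
    for codes in visits:
        if codes & WELL_CODES:
            # Any well code present -> count the whole visit as a well visit,
            # even if a -25 modified sick E&M code was billed alongside it.
            well += 1
        elif codes & SICK_CODES:
            sick += 1
    return sick / well if well else float("inf")

# Example: two pure sick visits, one well visit with an attached sick code.
# The old "add up the codes" method would have called this 3 sick / 1 well.
visits = [{"99213"}, {"99214"}, {"99392", "99213"}]
print(sick_to_well_ratio(visits))  # 2 sick / 1 well -> 2.0
```

The `elif` is what encodes the refinement: a visit gets classified exactly once, and the well code wins.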
So, anyone want to tell me what happened in 2004?!
I love pediatric benchmarks. Knowing the real underlying behavior in my clients' offices helps me speak to them and understand them better. There's a particular joy and insight to finding data that challenges the status quo understanding that physicians have of their own offices.
Here's one, and this is very, very interesting data. Ask a pediatrician - ask yourself - how many of the kids you treat are up-to-date with their physicals. When I ask, I usually hear things like, "Well, just about all of them!" Or, "90% or higher!" Practices feel like they do a good job getting all those visits in, but we know that's really not true.
Not only is getting those physicals done good medicine, it's good for your bottom line. Most practices I work with have missed thousands of well visits this year. Yes, thousands. At ~$100-200 a pop, you can do the math.
Don't believe me? Don't think you're one of those offices?
Igor and I already broke down active physical rates for the internal benchmarking dashboard our clients enjoy, but we wanted to do something for everyone. So, here it is.
What "we" did (I say "we" because Igor did most of the work, I just do the heavy thinking, you know...) - we broke kids down into 5 different age groups and then took a look at how many of them were up-to-date with their physicals. For all the kids over the age of three, it's easy to calculate: all of the active kids in the group who have had a physical in the last year divided by the total active kids in that age group.
For example, if you count up all the active kids between the ages of 3 and 6 years old who have had a physical in the last 365 days, you might find 1500. If you then count up all the active kids between 3 and 6 years old, regardless of their physical statuses, you might find that there are 2000. 1500/2000 = 75%. Get it? Non-PCCers can do this easily, in theory. For kids in the 15m to 3yr category, we looked for a well visit within the last 6 months. Younger than that, it gets tricky.
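For the code-inclined, here's one way to sketch that per-age-group calculation. The record layout is hypothetical - I'm assuming each active patient reduces to a (birth date, date of last physical) pair, which is not how any particular system actually stores it:

```python
from datetime import date, timedelta

def up_to_date_rate(patients, min_age_yrs, max_age_yrs, window_days=365, today=None):
    """Fraction of active patients aged [min_age, max_age) whose last
    physical falls inside the lookback window (365 days for the 3+ groups,
    180 days for the 15m-3y group). patients: (birth_date, last_physical
    date or None) pairs."""
    today = today or date.today()
    window = timedelta(days=window_days)
    in_group = [
        (bd, phys) for bd, phys in patients
        if min_age_yrs <= (today - bd).days / 365.25 < max_age_yrs
    ]
    if not in_group:
        return 0.0
    current = sum(1 for _, phys in in_group
                  if phys is not None and today - phys <= window)
    return current / len(in_group)

# Matching the worked example in the text: if 1500 of 2000 active kids in
# the 3-6 group had a physical in the last 365 days, the rate is 75%.
```

A real implementation would also need a working definition of "active patient," which is its own can of worms.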
Let's take a look at the first part of the results (click on the image for a zoomed in view):
I hope this is easy enough to follow - the blue band represents the mean percentage of active patients in each age group who are up-to-date with their physicals. The surrounding green bands represent the 25/75th percentiles and the red bands represent the 10/90th percentiles.
So, which percentile are you in?
Are half of your kids aged 7-11 overdue for their physicals? What scares me the most is the big dropoff for the kids in the 15m-3y range. More next week.
Wow, I've got some awesome data today. So awesome, it even trumps a series of important pediatric RVU and H1N1 announcements (care of the helpful people at the AAP) and Jill Stoller's NBC debut.
What would trump that kind of information?
Igor and I (note how I take credit) have gone through all of the visits performed by PCC clients from 2003 through 2008 to help with some of our practice consulting, customer dashboards, etc. It's a couple billion dollars' worth, so a decent sample size. We've deliberately narrowed it down to pediatricians who have charged at least $300,000 in a year to try to examine the full-full-time providers. What did I learn?
Depending on whose benchmark you use (ours or MGMA's), the average PCC client makes between $153K and $172K based on non-immunization collections. Sure, we have folks making 5-8x that amount, and plenty who don't make nearly that. There's a $66K standard deviation.
OK, that's a start...there's plenty more in here. I even made a cool chart, click on it to zoom in!
One of my little birdies mentioned discussions on the SOAPM mailing list about new patient benchmarks - what share of visit volume should come from new patients?
Although it's difficult to determine, on a practice-by-practice basis, what these figures should be, having the benchmarks can help. Here they are:
Make sure that you read the graphs correctly. For example, 38% of all PCC clients had between 1-2% of their sick visits recorded as new patient visits. 23% had between 2-4%. Et cetera.
Note: the distributions are not equally divided - the bucket widths grow as the percentages rise, to capture the high outliers.
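If you wanted to reproduce that kind of tally yourself, uneven buckets are easy to handle. The bucket edges below are illustrative - chosen to match the 1-2% and 2-4% examples above - and are not PCC's actual edges:

```python
from bisect import bisect_right

# Illustrative, widening bucket edges: (0-1%], (1-2%], (2-4%], (4-8%], ...
EDGES = [0, 1, 2, 4, 8, 16, 100]  # bucket i covers [EDGES[i], EDGES[i+1])

def bucket_counts(new_visit_pcts):
    """Count how many practices fall into each uneven percentage bucket."""
    counts = [0] * (len(EDGES) - 1)
    for pct in new_visit_pcts:
        i = bisect_right(EDGES, pct) - 1       # rightmost edge <= pct
        i = min(max(i, 0), len(counts) - 1)    # clamp strays to the ends
        counts[i] += 1
    return counts

# Five hypothetical practices' new-patient percentages:
print(bucket_counts([0.5, 1.5, 3.0, 3.9, 12.0]))  # [1, 1, 2, 0, 1, 0]
```

Dividing each count by the total number of practices then gives the "38% had between 1-2%" style of figure.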
Before I give out details (means, medians, deviations, etc.), someone tell me if this is what SOAPM wanted.