Wednesday, April 2, 2014

Practice rescue: Steps to start fixing a failing practice

Today, we kick off a recurring series that we like to call “practice rescue.”  In this series, we discuss stories and lessons from practices that were struggling to the point of potentially shutting down or substantially downsizing, and the steps we have taken to turn things around.
We fully recognize that by the time we get to some practices, they have passed the point of rescue, or in some cases, the solutions and/or personalities are simply not a great fit.  What you get in those cases is an unsuccessful or laterally moving mess.  And in the coming posts, we are certainly planning to discuss some of those.
In the majority of cases, however, opportunities can be found and grabbed or weaknesses can be rectified.  Then one morning, you take a look at results and you have the makings of a successful, happy practice.
To start, we’d like to discuss some of the key principles employed in effecting a practice rescue.  I am not outright suggesting that anyone use this as a path to implementing their own practice rescue, but if you wish to do so, so be it.

Put together a set of reliable, meaningful, consistent numbers
If you’ve read My Practice Engine in the past, you know that we are passionate about reporting, quality numbers and using them properly.  So, it’s no surprise that we always start here.  No matter what practice management system you use, or even if you are still on paper, there is tremendous value in the data.  Our job is to gather it, organize it and understand it.  The gathering and organizing are relatively simple, and there are quite a number of organizations out there that can do this in a competent, automated and cost-efficient manner (we generally do not charge for this service, but I cannot speak for everyone out there).
The understanding part is where the value lies.  The numbers should tell you where to start looking for potential problems and which of those problems to tackle first.  In addition, they should tell you what kinds of goals for improvement to set.
Most importantly, make sure the numbers you review are meaningful.  We had one practice that tracked the number of payments per patient.  If a patient has a $500 charge and decides to pay with 10 checks of $50 each or one credit card swipe of $500, what does it matter?  The practice still gets the same $500.  
As we’ve discussed in the past in a number of posts, our focus is on the components of new patient activity and whether or not those new patients make payments.  Other values may help with further analysis, but when starting out, we stick to the fundamentally most meaningful numbers. 
Once we develop a rule for the values or counts included in a particular number, that rule needs to be consistently applied to each period so that numbers can be compared across periods.  If you compare apples to oranges, the comparison has no meaning.
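To make the idea concrete, here is a minimal sketch of applying one consistent counting rule to every period.  It is written in Python with made-up field names (every practice management system exports data differently), so treat it as an illustration of the principle rather than a recipe:

```python
from collections import defaultdict
from datetime import date

# Hypothetical extract from a practice management system: one record per
# new-patient consultation, with the consult date and whether any payment
# was ever recorded for that patient.
consults = [
    {"consult_date": date(2014, 1, 14), "made_payment": True},
    {"consult_date": date(2014, 1, 28), "made_payment": False},
    {"consult_date": date(2014, 2, 5),  "made_payment": True},
]

def monthly_summary(records):
    """Apply the same rule to every period: a consult counts in the month
    it occurred, and 'paid' means at least one payment was recorded."""
    summary = defaultdict(lambda: {"consults": 0, "paid": 0})
    for rec in records:
        period = rec["consult_date"].strftime("%Y-%m")
        summary[period]["consults"] += 1
        if rec["made_payment"]:
            summary[period]["paid"] += 1
    return dict(summary)

for period, stats in sorted(monthly_summary(consults).items()):
    rate = stats["paid"] / stats["consults"]
    print(f"{period}: {stats['consults']} consults, {rate:.0%} made a payment")
```

Because the same rule is applied to every month, the resulting numbers can be compared across periods without the apples-to-oranges problem.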

Look and listen
Numbers certainly don’t tell the whole story.  To complete the picture, we need to take time to talk to people in the office.  Here are some basic questions that have to be asked: 
What do you see as the problem with the practice?
Why don’t patients sign with you?
If patients don’t sign with you, with whom are they signing?  Why do you think that is the case?
What is your strategy for convincing a new patient to sign with you?  What are the key selling points of your practice?
We try to take the time to discuss this with as many people as possible, not just the doctor, office manager and clinical supervisor.  Everyone has a perspective, and more perspectives can help to create a picture of the critical weaknesses and top opportunities.  Now, the key here is to filter through the noise.  Anytime you ask someone “what’s the problem?” that conversation can turn into an airing of grievances.  And too many discordant stories can muddle a picture even more.  The task here is to sort through the self-serving complaints and criticisms and build a coherent practice story.

Materiality matters 
When dealing with weaker offices in our Japan subsidiary, we would regularly suggest changes and new policies to improve results.  And regularly, we would get responses along the lines of “you show 94 consultation appointments for the quarter and we show 92.  If we can’t count on the numbers, how can we take your suggestions?”  Would any suggestion or decision change based on this two-patient discrepancy?  Absolutely not.  Getting caught up in minutiae like this consumes valuable time and takes attention away from the important problems that need to be solved.  If we had shown 94 consults appointed and they had 54 internally, we would need to spend time reconciling numbers.  But a 2% difference like the one above is not material for these purposes.
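As a rough illustration of the same point, a simple materiality check might look like the sketch below.  The 10% threshold is purely an assumption for the example, not a rule we are prescribing; the point is to decide up front what size of discrepancy is worth chasing:

```python
def needs_reconciliation(reported, internal, threshold=0.10):
    """Flag a discrepancy for follow-up only when the relative difference
    exceeds a materiality threshold (10% here, purely illustrative)."""
    if reported == 0 and internal == 0:
        return False
    baseline = max(abs(reported), abs(internal))
    return abs(reported - internal) / baseline > threshold

print(needs_reconciliation(94, 92))  # ~2% difference -> False, not material
print(needs_reconciliation(94, 54))  # ~43% difference -> True, reconcile
```

With the numbers above, 94 versus 92 comes in around 2% and would not trigger a reconciliation, while 94 versus 54 clearly would.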
We also need to avoid applying anecdotal evidence to the whole population.  If a person tells you a story about a couple with 9 children divorcing because their youngest child got a job on a Disney Channel show, you have to ask yourself how often that would happen.  The answer is probably that time and that time only.  You don’t want to make any decisions based on stories that are unique or affect a very small percentage of the population.  Decisions need to be made based on the situation for the majority of patients.

As an example, don’t assume that a patient who pays a $5,000 contract in full up front is the model for your patient base.  That is an outlying case.  Only in a very rare set of circumstances would a patient like that be representative of the group as a whole.
Next up, some actual situations and solutions.  Stay tuned...
