A well-designed survey can help you improve your practice. The key? ‘Keep it simple,’ and act on what you learn.
Fam Pract Manag. 1999;6(1):40-44
Patient satisfaction surveys are a good idea — true or false? If you're inclined to answer “false,” you might choose from any number of objections. Perhaps you're not convinced the data are reliable. Perhaps you don't believe the results justify the costs. Or perhaps you don't want to be measured simply for the sake of being measured. All are legitimate concerns, but, as you'll see, they can be overcome.
The truth about patient satisfaction surveys is that they can help you identify ways of improving your practice. Ultimately, that translates into better care and happier patients. “Unless a physician is not interested at all in information, a patient satisfaction survey can be useful,” says John Rollet, MD, a family physician in Chatham, Ill., whose group practice recently conducted its first survey. What's more, he says, “it shows your staff and the community that you're interested in quality. It demonstrates that you are looking for ways to improve.”
If that's not enough of a reason to push you nearer to the point of surveying your patients, consider this: “Whether you think patient satisfaction surveys are good or bad, the fact of the matter is that the marketplace you work in is demanding that data on patient satisfaction be used to empower consumers,” says Leonard Fromer, MD, a family physician in group practice in Santa Monica, Calif., and member of the AAFP's Commission on Health Care Services. “If we physicians don't get on board and try to make the data as good as possible and get our scores as high as possible, we're going to be hurt in the marketplace. We'll be noncompetitive. That's the biggest reason of all to be doing this.”
KEY POINTS:
A patient satisfaction survey can demonstrate that a practice is interested in quality and in doing things better.
When choosing (or designing) a survey questionnaire, look for three things: brevity, clarity and consistency.
Even an in-house survey can be statistically correct if practices stick to some basic rules.
The setting
Before you conduct your own patient satisfaction survey, make sure your practice is ready. First, work at cultivating an environment that embraces quality improvement. “You have to put quality up front,” says Fromer. “It must be the core of your practice's vision, values and goals.”
Second, determine how much money you can afford to invest in a survey project and how expensive yours is likely to be. For a survey conducted through an outside vendor, “a rough rule of thumb, for a practice with a minimum of three physicians, would typically be in the range of $300 to $400 per physician,” says A.C. Myers, president of The Myers Group, an Atlanta-based firm specializing in health care surveys and data analysis. You can conduct an in-house survey for less than that, but it will require more of a time investment. Staff-related costs might result from the time spent designing the survey instrument, selecting a sample, preparing the survey for mailing, tabulating the responses and analyzing the data. The main physical costs of a survey include paper, printing and postage.
Additionally, keep your staff and colleagues well informed about the process, and let them know how you will interpret and act on the results. “Recognize that this is just a snapshot of how your patients view you right now,” says Myers. “Then take that feedback and organize improvement projects around those comments or scores.”
While your improvement projects will focus on areas of weakness, make sure you also plan to celebrate your practice's successes. When you conduct a patient satisfaction survey, chances are “you're going to get a lot of positive reinforcement about the many things that you are doing well,” says Myers.
In search of the right tool
Practices can solicit feedback from patients in a variety of ways: phone surveys, written surveys, focus groups or personal interviews. Most practices will want to use written surveys, which tend to be the most cost-effective and reliable approach, according to Myers. “Phone surveys yield similarly reliable results,” he says, “and have the added value of allowing you to probe for more specific information.”
With a written survey, practices have the option of creating a questionnaire from scratch or using a product that's already been developed by an outside vendor (see “Survey assistance”). Most experts recommend the latter because the product has likely been tested and validated. Doing it yourself is certainly possible, but it can be “time-consuming and taxing on the practice's internal staff,” says Myers.
Whether you choose to do it yourself or turn to the experts, “Keep it simple,” says Fromer, and keep these lessons in mind:
Ask about the top three issues. Practices have three general goals when they interact with patients: to provide quality health care, to make that care accessible, and to treat patients with courtesy and respect. Your survey questions, then, should cover each of the three areas: quality issues (i.e., is the patient satisfied with his or her medical care?), access issues (i.e., is it easy to make an appointment or get a referral?), and interpersonal issues (i.e., are the physicians and staff caring and compassionate?).
You may be tempted to think that access issues are less important than quality (after all, what does waiting time have to do with competent medical care?). But understand that your patients think otherwise. Fromer points out that data from the National Committee for Quality Assurance (NCQA) have shown that patients place access issues at the top of their list of what makes them satisfied. “Yes, it's important to make the right diagnosis and to prescribe the right treatment,” says Fromer. “But if your patients don't put that until number seven on their top-10 list of what makes them satisfied, you can't say one through six are irrelevant. Access issues matter to the customer, and if you ignore that, you're going to lose.”
Ask the essential question. “Sometimes people will put a questionnaire together, and they'll follow the flow of the patient and ask about everything under the sun,” says Myers. “But they don't come back and ask the key question: ‘Overall, how satisfied are you with your physician?’”
You'll need that score for two reasons. “When you're trying to report on your overall performance, or an HMO is asking for feedback on how your patients view you, you'll have a single representative indicator,” says Myers. Additionally, that score is an important part of deducing the key drivers of satisfaction in your practice. If, for example, your patients rate your receptionist as “excellent” but give you a “fair” rating overall, it may suggest that “the courtesy and friendliness of the receptionist, while it has some impact on overall satisfaction, can't overcome any kind of a bad physician or nurse relationship with the patient,” says Myers.
Word questions carefully. Survey questions should be brief and easy to understand. “You want to avoid asking biased, vague or double-barreled questions” (those that actually incorporate two or more questions), explains Myers. Instead, questions should be focused: Rather than asking, “How would you rate our staff?” or the double-barreled “How would you rate the courtesy and efficiency of our receptionist?” dig deeper with a more specific question, such as “How would you rate the helpfulness of our receptionist?”
Use consistent scales. The majority of questions on a patient satisfaction survey should be answered using a scale. Examples include 10-point scales, Likert scales (e.g., five points ranging from “strongly agree” to “strongly disagree”), four-point scales (which omit a neutral midpoint and force respondents to one side or the other) and many other variations. “The most generally used and accepted scale that you'll see quoted in the literature and utilized by the NCQA is the five-point scale,” says Myers. He advocates a five-point scale that ranges from “excellent” to “poor.” The most important thing, he says, is to “use a consistent scale. You don't want to use a four-point scale on some questions and a five-point scale on others because then you can't compare the results.”
Survey assistance
If you've decided to conduct a patient satisfaction survey but need assistance along the way, you may want to call in the experts. Health care research firms can provide your practice with tested survey questionnaires and can handle the entire survey process, including data analysis. The National Committee for Quality Assurance (NCQA) has compiled a list of “certified vendors” (those that are qualified to collect Health Plan Employer Data and Information Set, or HEDIS, survey results). NCQA's list is available at http://www.ncqa.org/tabid/170/Default.aspx and includes the following:
The Myers Group, 2429 A East Main St., Suite 304, Snellville, GA 30078; phone: 800-692-0041; http://www.themyersgroup.net/clinician.asp.
National Research Corporation, Gold's Galleria, 1033 O St., Lincoln, NE 68508; phone: 402-475-2525; http://www.nationalresearch.com.
Press, Ganey Associates Inc., 404 Columbia Place, South Bend, IN 46601; phone: 219-232-3387; http://www.pressganey.com.
Additionally, the AAFP has developed a patient satisfaction questionnaire that physicians may use in their own practices. The questionnaire was originally available as part of a 25-page monograph titled Patient Satisfaction Surveys (item 754), along with Survey Analysis Software, a computer program designed to simplify data analysis (items 761 and 762).
If you're looking for ideas of how other practices have gone about patient surveys, the Medical Group Management Association (MGMA) has published Patient Satisfaction Questionnaires (item 3472), a booklet that shares the experiences of 285 groups, telling how their patients received the questionnaires, how frequently the surveys were conducted and how they analyzed their results. It also offers 116 samples of patient satisfaction questionnaires. Order this item through MGMA, 104 Inverness Terrace East, Englewood, CO 80112-5306; 303-397-7888. Or visit http://www.mgma.com/.
Include an open-ended question. Another important survey element, says Myers, is the open-ended question. “Generally, you want to include one or, we suggest, two open-ended questions, one of which is essentially ‘What do you like best about our practice?’ or ‘What are we doing especially well?’” he says. “Another question should ask, ‘What can we do to improve?’”
While verbatim comments aren't easy to tabulate, they will bring meaning to some of your scores. “On your scaled questions, you're going to find out you're, say, a 4.2 out of 5. The verbatim responses will help you understand what is behind that score,” says Myers. “It's pretty powerful to see exactly what some of your patients are saying about you.”
Collect demographic data. At the end of your survey, you should also collect patients' demographic information, so you can identify how certain groups of patients responded to a particular question. You may, for example, want to include a question about the patient's health plan so you can track whether satisfaction scores vary from plan to plan.
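As a simple illustration of that kind of breakdown, the sketch below tallies mean overall-satisfaction scores by health plan. The plans, scores and field names are hypothetical; this is only one way a practice might tabulate its own responses.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: (health plan, overall satisfaction on a five-point scale).
responses = [
    ("Plan A", 5), ("Plan A", 4), ("Plan A", 4),
    ("Plan B", 3), ("Plan B", 2), ("Plan B", 4),
]

# Group the scores by plan, then report an average for each group.
scores_by_plan = defaultdict(list)
for plan, score in responses:
    scores_by_plan[plan].append(score)

for plan, scores in sorted(scores_by_plan.items()):
    print(f"{plan}: mean {mean(scores):.2f} from {len(scores)} responses")
```

The same grouping works for any demographic field you collect, such as age range or new versus established patients.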
Strive for anonymity. Generally, patients are more likely to answer survey questions honestly if they believe their identity is protected. Make every effort, then, to keep the entire survey process anonymous. Patients should be able to complete their surveys in private and return them without fear of being identified. Some practices have chosen to assign a unique patient identification number to each survey, which enables them to track which surveys have been returned. But this, of course, is not a license to check up on an individual's responses.
In some cases, patients may want to provide their names. Rollet's practice gave patients this option so they could ask to have a staff member contact them about their comments or concerns.
Statistical correctness
One of the main criticisms of patient satisfaction surveys is that their results are not reliable. It's true that not all surveys meet the standards for statistical reliability. But yours can, if you stick to these guidelines.
Sample size. When you distribute your questionnaire, try to survey the largest group possible. This will improve your chances of getting an adequate number of responses. Fromer's large group practice manages to survey nearly every patient after every encounter. If surveying every patient is simply out of your reach, you can survey a random sample of your patients — say, every fifth one. A minimum goal for a group of four physicians would be to distribute 670 surveys (this is based on an anticipated response rate of 30 percent, yielding 200 responses; see “Response rates” and “Number of responses,” below).
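If you want to check that arithmetic for your own practice, here is a minimal sketch (in Python; the function name and figures are illustrative assumptions, not part of any vendor's tool) that converts a target number of responses and an anticipated response rate into the number of surveys to distribute.

```python
import math

def surveys_to_distribute(target_responses: int, expected_response_rate: float) -> int:
    """Estimate how many surveys to send to collect a target number of responses."""
    if not 0 < expected_response_rate <= 1:
        raise ValueError("expected_response_rate must be a fraction between 0 and 1")
    return math.ceil(target_responses / expected_response_rate)

# 200 responses at an anticipated 30 percent response rate:
print(surveys_to_distribute(200, 0.30))  # 667 -- roughly the 670 cited above
```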
Distribution methods. Myers recommends mailing surveys rather than handing them out in the office or using a drop box. Drop boxes are too often ignored, he says, and physically handling the surveys and being able to influence which patients receive them can introduce error. If, for example, you set out to survey every fifth patient who comes to your office, your staff might be tempted to skip an irate patient and pick up again with the next one. That would be tampering with the process.
Mailed surveys do cost more than those handled in the office. If you opt to hand out surveys in the office, “do it consistently,” says Myers. Make sure the survey is handed to every patient the same way every time.
Response rates. Thirty percent to 35 percent is a typical response rate for a mailed survey, says Myers. To bring your response rate to that level, mail the survey with a postage-paid reply envelope and a cover letter from the physicians that explains the importance of patient feedback to the practice. Follow up on the survey five to seven days later with a thank you/reminder card, says Myers. “Where you might have otherwise gotten a 30-percent response rate, it may boost it to 36 or 38 percent.”
Number of responses. An adequate response rate is important, but what trumps that is the number of responses you receive. If you've managed to get a 40 percent response rate on your survey, but you've surveyed only 100 patients, don't kid yourself that you have enough data to draw meaningful conclusions. The more responses you can get, the more valid and reliable your results are likely to be. But what's the minimum? Different experts draw the line at different places, but “for aggregated result reporting, we suggest a minimum of 200 responses,” says Myers. If your responses are lower than that, he says, the margin of error becomes unacceptable. “If a practice wants physician-specific scores, we suggest it give out enough surveys to get back at least 50 per physician. For a small group of two or three physicians, we still suggest mailing out enough to get back 200.” For a group of four or more physicians, the minimum would be 50 times the number of physicians in the group.
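To make those minimums concrete, the small sketch below encodes Myers's guidance as he states it (the helper function is hypothetical, not a published formula); the earlier sketch can then translate the resulting target into a mailing count.

```python
def minimum_responses(n_physicians: int, physician_specific: bool = False) -> int:
    """Minimum completed surveys per Myers's guidance: 200 for aggregate reporting,
    or at least 50 per physician when physician-specific scores are wanted."""
    if physician_specific:
        return max(200, 50 * n_physicians)
    return 200

print(minimum_responses(3, physician_specific=True))  # 200 -- small groups still need 200 back
print(minimum_responses(6, physician_specific=True))  # 300 -- 50 responses per physician
```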
Analyzing the data. Analyzing the data may be the most complex part of the survey process. “Usually, you can get a survey put together in-house, you can get a database of people you want to send it out to, and you may happen to come up with a reasonable response rate,” says Myers. “The primary challenge emerges when the completed surveys are returned. If you don't have someone in-house with strong analytic and database-management skills, you are prone to end up with a stack of surveys that are never analyzed adequately. That's where you're going to get the meaning and the value out of it.”
When you analyze your results, particularly if you are near the minimum number of responses suggested above, avoid lumping responses together into broad categories; instead, calculate a score that takes all the individual responses into account. For example, says Myers, do not combine scores for “excellent” and “very good” into one category called “satisfied.” “That's called ‘top-box scoring,’” he says, and generalizing from this relatively low volume of responses leaves too much room for error. Instead, “you need to do a weighted score that looks at all 50 patients who sent back the questionnaire for Dr. Smith.” Calculate a weighted score based on five points for each person who said “excellent,” four points for “very good,” three points for “good” and so on. Total the weighted responses for each question and average them to get the score. Weighted calculations, by maximizing the value of the sample size, “allow you to get to a good indicator of satisfaction without having a huge volume of responses,” says Myers. Besides, he says, “What was the point of using a five-point scale if you're just planning to lump the results in the end?”
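As an illustration of the difference, the sketch below computes both the weighted score Myers recommends and the “top-box” percentage he cautions against. The response counts are made up for the example.

```python
# Point values for a five-point "excellent" to "poor" scale.
WEIGHTS = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}

def weighted_score(counts: dict[str, int]) -> float:
    """Average score across all respondents on the five-point scale."""
    total_responses = sum(counts.values())
    total_points = sum(WEIGHTS[answer] * n for answer, n in counts.items())
    return total_points / total_responses

def top_box_percentage(counts: dict[str, int]) -> float:
    """Share answering 'excellent' or 'very good' -- the lumping to avoid."""
    total_responses = sum(counts.values())
    top = counts.get("excellent", 0) + counts.get("very good", 0)
    return 100 * top / total_responses

# Hypothetical tally of 50 returned questionnaires for one physician:
dr_smith = {"excellent": 18, "very good": 15, "good": 10, "fair": 5, "poor": 2}

print(round(weighted_score(dr_smith), 2))      # 3.84 out of 5
print(round(top_box_percentage(dr_smith), 1))  # 66.0 percent in the top two boxes
```

Note how the weighted score uses all 50 responses, including the “fair” and “poor” answers that a top-box percentage would simply discard.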
If your practice does not have the time or resources to analyze your survey data, consider outsourcing this step to a firm that specializes in health care data analysis.
What do I do with the results?
While you don't have to act on every suggestion that your patients give you, you should take action on the key items that are causing dissatisfaction. Remember that your goal is to improve quality, not to place blame.
In Rollet's practice, a concern of patients was waiting time in the office. To improve in this area, the practice developed a “time-analysis worksheet,” which tracks each visit minute by minute: when the patient arrived at the office, entered the exam room, was greeted by the physician and so on. This information allows the physicians and staff to see how they're spending their time and identify possible sources of delays.
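A sketch of how such timing data might be tabulated follows; the timestamps and field names are invented for illustration and are not taken from Rollet's worksheet.

```python
from datetime import datetime

# Hypothetical worksheet entries: arrival, exam-room entry and physician
# greeting times for one morning's visits.
visits = [
    {"patient": "A", "arrived": "09:00", "roomed": "09:12", "seen": "09:25"},
    {"patient": "B", "arrived": "09:10", "roomed": "09:35", "seen": "09:50"},
]

def minutes_between(start: str, end: str) -> int:
    """Whole minutes elapsed between two HH:MM times on the same day."""
    fmt = "%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return int(delta.total_seconds() // 60)

for v in visits:
    wait_for_room = minutes_between(v["arrived"], v["roomed"])
    wait_for_doctor = minutes_between(v["roomed"], v["seen"])
    print(f"Patient {v['patient']}: {wait_for_room} min to exam room, "
          f"{wait_for_doctor} min to physician")
```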
Another item Rollet gleaned from his survey data was simply a pat on the back for his staff and colleagues. “Overall, our patients are happy. It's nice to know that there are many patients with a positive image and positive feelings about our office,” says Rollet.
Fromer's group uses its patient satisfaction data in another way. The group builds the results into its compensation structure. “We believe in the carrot, not the stick, so it's not a penalty, it's a bonus,” he says. “The higher you score on the satisfaction surveys, the more money you're going to make. That gets people to pay attention.”
Other industries have been paying attention to customer satisfaction for years. “Health care is the only industry — service or manufacturing — that for years has said, ‘Let's leave the customer out of it,’” says Fromer. “It's the physician mentality that health care is a special thing and the only people trained well enough to really understand what's supposed to happen are the doctors. But that is absolutely prehistoric thinking. To ignore the input from the patient, to ignore the customer, to say the customer's desires are irrelevant is not living with reality.”