One element of the IMPROVE study compared patient responses on the questionnaire with their actual experience of a consultation with a GP.
The researchers wanted to know whether the questionnaire can accurately assess the behaviours that patients comment on in the section about their last GP consultation. Underpinning this was a recognition that, while this section of the GP Patient Survey asks about the patient’s most recent consultation using six questions, the patient’s choice of answers can be influenced by many factors. These include any pre-existing relationship with the GP, the relationship with the wider practice, and the outcome of the consultation.
A study was conducted in two geographical areas of England, in practices with lower-than-average GP patient communication scores in the most recent GP Patient Survey. A sample of 529 patients agreed to have their consultation with a GP video-recorded. Immediately after the consultation, patients completed a short questionnaire evaluating aspects of the GP’s communication, using wording and rating scales similar to those in the GP consultation section of the national GP Patient Survey questionnaire.
The researchers also asked four experienced, trained clinical raters to each review 56 of these video-recorded consultations using an internationally recognised rating instrument that gives an overall score between 0 and 10 for the communication quality of the consultation. Each rater scored each consultation, and a mean score was calculated for each of the 56. Each of the 56 consultations was also rated by the patient and by the GP who had carried out the consultation (37 GPs in total).
There was weak evidence of an association between the patient questionnaire scores and the scores of the trained raters. When trained raters assessed communication in a consultation to be of a high standard, patients tended to do the same. But when the trained raters judged communication in a consultation to be of a poor standard, patients reported communication as anything from poor to very good. While trained raters and patients tended to agree on what good communication looks like in a consultation, clinical raters were more likely than patients to judge communication as poor.
The research team suggests that these differences may be due to a wide range of factors that inhibit some patients from assigning poor scores to consultations. They noted that earlier qualitative research had suggested that patients struggle to criticise doctors’ performance in surveys, and found that the ratings of the video-recorded consultations supported the view that patients may be inhibited from criticising doctors’ performance. They believe that patient surveys, as currently used, may be of limited usefulness for feeding back views about consultations. They caution that a high mean patient rating of communication with GPs should not necessarily be assumed to indicate that all is well.
“When it comes to GPs learning from the GP patient survey, the data may get picked up by a few individuals – perhaps a Continuing Professional Development programme director who might base an in-house training session on it, or sometimes the Royal College of General Practitioners local faculty may put on a workshop. If you want to make things happen at the grassroots you need an educational lead to put on an activity of interest and attractive to GPs to attend. With a survey like this and the feedback from it, the key thing is to make it possible for GPs to see it is worthwhile to spend time on it and that it can help improve their provision of care somehow.”
Dr Richard Weaver, Director of Primary Care & GP Education & Head of School, Health Education England, Wessex