The GP Patient Survey data are published online each year, and any visitor to the website can see both the national results and those for individual practices. Although a practice’s GP Patient Survey data form part of its CQC inspection record, there is no standard requirement for practices to review the data or act upon them. As a result, there are wide variations in how the data are used and in the value derived from them.
The IMPROVE research team conducted a qualitative study of the role that patient feedback is seen to play in both assessing and improving standards of care. They surveyed patients from 25 practices, asking about their experience of the practice using questions based on those in the GP Patient Survey, and fed the findings back to practice staff. Focus groups were then run in 14 of these practices, involving GPs, practice managers, nurses, receptionists, administrators/secretaries and other staff such as dispensers and healthcare assistants. The focus groups explored practice staff’s attitudes towards patient surveys (both the local survey conducted by the research team and the national GP Patient Survey) and their potential to improve patients’ experience of the practice generally, and of GP consultations in particular. During the group discussions, the research team noted that the organisational response to patient experience surveys was dominated by GPs and practice managers, with far less input from receptionists and administrative staff.
Overall, practice staff found it hard to trust many surveys to reflect ‘reality’, even though they expressed interest in, and engagement with, the findings. Practice teams mistrusted survey methods, particularly criticising the design and administration of the GP Patient Survey and raising concerns about its representativeness, reliability, sample size, bias and perceived political ends. Two major concerns about the GP Patient Survey were that, first, it samples all patients registered with a practice rather than its most recent or most frequent service users and, second, that it generates data only at practice level, not at the level of individual practitioners.
Nevertheless, practice teams noted that the GP Patient Survey process was well established and could be useful for comparing their practice with national averages, as well as for identifying potential changes to make in the practice. Even so, there were wide variations among practices in how patient survey data were shared, analysed and used.
“If you collect data, you’ve got to use it. I review the GPPS data to see how we are positioned in the local health economy, to spot areas where we are deficient and to see if we can analyse why the patients are reporting problems in specific areas.
“We try to identify the ‘hotspots’ to ensure we understand where the main problems are and either allocate more resources to that or try to encourage patient expectations to be more realistic – then the patient participation group plays a key role in communicating with our patients.”
Joseph Todd, Practice Manager, Westlands Medical Centre, Portchester
Practice staff discussed the part that patients themselves might play in the smooth running of the practice and the scope for their involvement through patient participation groups.
Several managers remarked that it was very hard to tackle individual doctors’ performance on the basis of survey data, even when individual doctor data were available. (For the focus group work, such data had been specially collected and shared confidentially; the GP Patient Survey does not collect data at individual practitioner level.)
“We see the GPPS data as being most informative in relation to practical points such as access times, prescription systems, online booking, that kind of thing.”
Dr Craig Kyte, General Practitioner, Salisbury
Overall, staff in many practices felt that there was little external support for making changes in response to patient feedback. The research team suggests that staff in English general practice broadly view the role of patient feedback as one of quality assurance, while its function in quality improvement seems much less certain.
The research team concludes that the overall value of patient feedback from surveys is undermined by a combination of variable attitudes to the credibility of the data and the challenges practice staff face in bringing about meaningful change. They suggest that the expectation that survey feedback alone will stimulate major changes in care is unrealistic; most commonly, when change does happen, survey findings are only one of the spurs to action on a problem that has already been acknowledged.
“The perpetual problem from the GP point of view is that the survey samples the whole adult practice population (though not the children) rather than those patients who most depend on the practice, and for whom we design our services. And yet we know that the GPPS does produce useful comparable data and I am aware that some practices make good use of it, especially where there are enthusiasts for change. On my patch, some patient groups have also discussed the data together and take important issues back to their practices for a plan of action.”
Dr Guy Watkins, Chief Executive, Cambridgeshire Local Medical Committee