Quality in Primary Care



Using quality improvement methods for evaluating health care

A Niroshan Siriwardena MMedSci PhD FRCGP*

Foundation Professor of Primary Care, School of Health and Social Care, University of Lincoln, UK

Corresponding Author:

A Niroshan Siriwardena
School of Health and Social Care
University of Lincoln
Lincoln LN6 7TS, UK
Tel: +44 (0)1522 886939
Fax: +44 (0)1522 837058
Email: nsiriwardena@lincoln.ac.uk



Quality improvement initiatives are a ubiquitous feature of modern healthcare systems because of actual and perceived gaps in the quality of healthcare delivery.[1,2] However, such initiatives are often not subject to evaluation or, when evaluation is conducted, it is done poorly.[3]

Quality improvement methods are increasingly being used to aid diffusion of innovations in health care and can be used as research tools to model and design complex healthcare interventions.[4] However, as well as being components of quality improvement programmes, they can sometimes be a useful adjunct to other, more traditional evaluation methods, thus serving a dual role.

Evaluation is often undertaken to determine the quality of care being provided by an individual, team or service, where quality is taken to mean the effectiveness, efficiency, safety or patient experience of that care.[1] Evaluation is also undertaken to ensure that the aims of care are being met, to provide information for service users, commissioners, healthcare providers or other stakeholders about the quality of services being provided, and finally to establish the basis for future improvements. Quality improvement research is applied research involving evaluation of quality improvement initiatives, aimed at informing policy and practice.[5] Current guidelines for reporting quality improvement include ‘descriptions of the instruments and procedures (qualitative, quantitative or mixed) used to assess the effectiveness of implementation, the contributions of intervention components and context to effectiveness of the intervention and the impact on primary and secondary outcomes’.[6]

A useful starting point for an evaluation is a logic model, which sets out the clinical population and problem at which the healthcare intervention is aimed; inputs, in terms of resources provided for planning, implementation and evaluation; outputs, in terms of healthcare processes implemented and the population actually reached; and longer-term outcomes, measured as health and wider benefits or harms, whether intended or incidental, in the short, medium or long term (see Figure 1).[7]

Figure 1: A logic model for evaluating health care
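As an illustrative sketch, the elements of a logic model can be laid out as a simple data structure before data collection starts. The field names and example content below are assumptions chosen for illustration, not a standard schema from the literature:

```python
# Sketch of a logic model as a plain data structure.
# Field names and content are invented for illustration.
logic_model = {
    "population": "adults with poorly controlled asthma",
    "inputs": ["nurse time", "staff training", "audit support"],  # resources
    "outputs": ["reviews completed", "patients reached"],         # processes
    "outcomes": {                                                 # benefits or harms
        "short_term": "improved inhaler technique",
        "medium_term": "fewer exacerbations",
        "long_term": "reduced emergency admissions",
    },
}

def describe(model):
    """Summarise the causal chain from inputs through outputs to outcomes."""
    return (f"{len(model['inputs'])} inputs -> {len(model['outputs'])} outputs "
            f"-> outcome: {model['outcomes']['long_term']}")

print(describe(logic_model))
```

Making the chain explicit in this way helps an evaluation team agree, in advance, which inputs, outputs and outcomes will actually be measured.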

A logic model can be expanded, either as a whole or in specific areas, to form a ‘cause and effect’ (sometimes called a fishbone or Ishikawa) diagram (see Figure 2). The central line, representing the patient pathway, is affected by patients themselves, but also by the other inputs and outputs (processes) as patients travel through the healthcare system being evaluated.[8]

Figure 2: Cause and effect (‘fishbone’) diagram

Traditional evaluation methods look at the structure, processes (outputs) or outcomes of care using various qualitative or quantitative methods (see Box 1).[9]

Box 1: Examples of traditional healthcare evaluation methods

However, a number of quality improvement methods can also be used for evaluation, and these overlap considerably with traditional evaluative techniques (Box 2). These methods have the potential to enable better understanding of the processes of care and, importantly, to shed light on how to improve these.

Box 2: Examples of quality improvement evaluation methods

Clinical audit, the ‘systematic, critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources and the resulting outcome for the patient’,[10] builds evaluation into the process. It involves measurement of care (‘how are we doing?’) against established criteria and standards (‘what should we be doing?’), through which performance and changes in performance can be measured (‘have the changes we have made led to improvement?’). Audit can be, and has been, used as an evaluation method, even in randomised studies.
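The measurement step of the audit cycle can be sketched as a simple calculation. The criterion records and the 80% standard below are invented for illustration:

```python
# Hedged sketch of the audit measurement step ('how are we doing?'):
# compare observed compliance with a criterion against a pre-agreed standard.
# The records and the 80% standard are invented for illustration.
records = [True, True, False, True, False, True, True, True, False, True]
standard = 0.80  # proportion of patients who should meet the criterion

def audit(criterion_met, target):
    """Return observed compliance and whether the standard was achieved."""
    compliance = sum(criterion_met) / len(criterion_met)
    return compliance, compliance >= target

compliance, achieved = audit(records, standard)
print(f"Compliance {compliance:.0%}; standard achieved: {achieved}")
```

Re-running the same measurement after a change answers the third audit question: have the changes we have made led to improvement?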

Significant event audit is another technique that is frequently used to evaluate care, particularly care that is considered to fall below standards or that is outstandingly good.[11] It is a powerful tool for evaluating healthcare processes by attempting to understand the detailed factors that led to care being outside the norm, but it can also help improve communication, team building and quality.[12]

Plan, do, study, act (PDSA) cycles are another means of investigating care processes while rapidly implementing evidence-based or common-sense changes to processes of care, enabling changes to be spread more easily and effectively.[13] The third stage of the PDSA cycle involves studying the effect of a change using numerical or qualitative data; even with small-scale changes, the effect over time on processes of care can be measured and analysed using statistical process control techniques. The PDSA model is a useful means of evaluating while introducing rapid change to healthcare processes.[14]
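The ‘study’ step can be sketched numerically. The weekly percentages and the simple three-sigma limits below are illustrative assumptions, not data from this article:

```python
import statistics

# Sketch of the 'study' step of a PDSA cycle: compare measurements taken
# after a change against control limits derived from baseline data.
# The weekly percentages are invented for illustration.
baseline = [62, 58, 64, 60, 61, 59, 63, 60]   # % seen within target, pre-change
post_change = [71, 74, 73, 76, 72, 75]        # same measure after the change

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

# Points beyond the baseline limits suggest a 'special cause' - here,
# consistent with the planned change having had an effect.
signals = [x for x in post_change if x > ucl or x < lcl]
print(f"Baseline mean {mean:.1f}; limits ({lcl:.1f}, {ucl:.1f}); signals: {signals}")
```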

Focus groups and individual interviews are important traditional techniques for gathering data about patients’ and staff members’ experiences of services. An important quality improvement tool developed from these is the ‘discovery interview’.[15] This narrative technique involves listening to patients’ and carers’ stories of the care they have received in order to understand experiences from a user perspective. Other narrative techniques for quality improvement research and evaluation include naturalistic story gathering during a project, collective sense-making of a completed project by a participant observer, and the organisational case study.[5]

Root cause analysis is a specific type of significant event analysis which aims to find explanations for adverse or untoward events through the systematic review of written and oral evidence to establish underlying causes.[16] The analysis involves defining the problem, gathering evidence, identifying possible root causes and the underlying reasons for these and then deciding which causes are amenable to change. This leads to recommendations, the effect of which can be further evaluated.[17]

The Pareto (or 80/20) principle (see Figure 3) describes how a relatively small number of key causes will lead to most of the important outcomes; for example, 80% of outputs, outcomes or harms may be due to 20% of inputs or causes. This can help to distinguish the most important causes.[18]

Figure 3: Pareto diagram for prescribing errors
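A minimal Pareto calculation might rank causes by frequency and pick out the ‘vital few’ accounting for roughly 80% of events. The prescribing-error cause names and counts below are invented for illustration:

```python
# Minimal Pareto analysis: rank causes by frequency and identify the
# 'vital few' that together explain ~80% of events.
# Cause names and counts are invented for illustration.
errors = {"wrong dose": 120, "wrong drug": 45, "dose omitted": 20,
          "wrong patient": 10, "wrong route": 5}

total = sum(errors.values())
cumulative, vital_few = 0, []
for cause, count in sorted(errors.items(), key=lambda kv: -kv[1]):
    if cumulative / total < 0.80:  # stop adding once 80% is already explained
        vital_few.append(cause)
    cumulative += count

print(vital_few)  # here, the two most frequent causes explain over 80% of errors
```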

Process mapping can describe the patient journey through the system of care, and even complex pathways can be visualised using spaghetti diagrams or ‘swim lane’ diagrams (see Figure 4) to separate processes into different job roles or team activities.

Figure 4: Swim lane diagram for asthma care

Components of a process which are critical to quality (CTQ) can be represented as a CTQ tree (see Figure 5). Such evaluations can determine whether the right treatment is given by the right person at the right time and place.[19]

Figure 5: Critical to quality (CTQ) tree

Another important aspect of evaluation is the human factors involved in change.[20] Ownership of change is particularly important for healthcare professionals, such as doctors and nurses, who, at the front line of care, have the power to promote or subvert change. This idea, the inverted pyramid of control,[21] has been applied to health care to emphasise the importance of clinical leadership.[22] An understanding of internal strengths and challenges (weaknesses), external opportunities and threats, and individual and group drivers of and barriers to change is critical to successful health services; this approach has its basis in Lewin’s ‘force-field theory’.[23]

Comparing and benchmarking individual or organisational performance using statistical process control can help identify differences or gaps in performance,[24] enabling ‘special causes’ to be highlighted and explanations sought, with a view to changing practice to improve performance (Figure 6).

Figure 6: Funnel plot showing institutional performance for aspirin administration to patients with ST-elevation myocardial infarction
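The funnel-plot comparison can be sketched as follows. The institutions, counts and 90% overall rate are invented, and simple three-sigma binomial limits stand in for the exact limits a published funnel plot would use:

```python
import math

# Sketch of funnel-plot logic: flag institutions whose observed proportion
# lies outside binomial control limits around the overall rate.
# Institution names, counts and the overall rate are invented.
overall = 0.90  # assumed overall aspirin administration rate
institutions = {"A": (180, 200), "B": (45, 60), "C": (95, 100)}  # (given, eligible)

def outside_limits(x, n, p=overall, z=3.0):
    """True if x/n falls outside z-sigma binomial limits for denominator n."""
    se = math.sqrt(p * (1 - p) / n)
    return abs(x / n - p) > z * se

flagged = [name for name, (x, n) in institutions.items() if outside_limits(x, n)]
print(flagged)  # institutions showing possible 'special cause' variation
```

Because the standard error shrinks as the denominator grows, the limits narrow for larger institutions, giving the plot its characteristic funnel shape.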

Statistical process control charts plotted against time can also show where improvements have occurred in response to planned interventions,[25] and feedback using this technique as part of ongoing evaluation can contribute to improvement.[26,27]

Larger-scale or more robust evaluations may require more complex techniques, such as quasi-experimental methods (including time-series or non-randomised control group designs) as well as cost analysis.[28,29]

Quality improvement methods, despite their increasing application to health services,[30] have not been widely considered or used as part of healthcare evaluation but could provide a useful addition to the evaluative techniques that are currently in use.

Conflicts of Interest



References

  1. Darzi AD. High Quality Care for All: NHS Next Stage Review final report. London: Stationery Office, 2008.
  2. Institute of Medicine. Crossing the Quality Chasm: a new health system for the 21st century. Washington DC: National Academy Press, 2001.
  3. Øvretveit J. Producing useful research about quality improvement. International Journal of Health Care Quality Assurance Incorporating Leadership in Health Services 2002;15:294–302.
  4. Siriwardena AN. The exceptional potential for quality improvement methods in the design and modelling of complex interventions. Quality in Primary Care 2008;16:387–9.
  5. Greenhalgh T, Russell J and Swinglehurst D. Narrative methods in quality improvement research. Quality and Safety in Health Care 2005;14:443–9.
  6. Davidoff F, Batalden P, Stevens D, Ogrinc G and Mooney S. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Quality and Safety in Health Care 2008;17(Suppl 1):i3–i9.
  7. Medeiros LC, Butkus SN, Chipman H et al. A logic model framework for community nutrition education. Journal of Nutrition Education and Behaviour 2005;37:197–202.
  8. Volden CM and Monnig R. Collaborative problem solving with a total quality model. American Journal of Medical Quality 1993;8:181–6.
  9. Marsh P and Glendenning R. The Primary Care Service Evaluation Toolkit. Leeds: National Coordinating Centre for Research Capacity Development, 2005.
  10. Secretaries of State for Health, Wales, Northern Ireland and Scotland. Working for Patients. The health service: working for the 1990s. Cm 555. London: HMSO, 1989.
  11. Pringle M. Significant event auditing. Scandinavian Journal of Primary Health Care 2000;18:200–2.
  12. Westcott R, Sweeney G and Stead J. Significant event audit in practice: a preliminary study. Family Practice 2000;17:173–9.
  13. Langley GJ. The Improvement Guide: a practical approach to enhancing organizational performance. San Francisco: Jossey-Bass, 1996.
  14. Plsek P. Innovative thinking for the improvement of medical systems. Annals of Internal Medicine 1999;131:438–44.
  15. NHS Modernisation Agency. A Guide to Using Discovery Interviews to Improve Care. Leicester: Department of Health, 2003.
  16. Burroughs TE, Cira JC, Chartock P, Davies AR and Dunagan WC. Using root cause analysis to address patient satisfaction and other improvement opportunities. The Joint Commission Journal on Quality Improvement 2000;26:439–49.
  17. Woloshynowych M, Rogers S, Taylor-Adams S and Vincent C. The investigation and analysis of critical incidents and adverse events in healthcare. Health Technology Assessment 2005;9:1–143, iii.
  18. Ziegenfuss JT Jr and McKenna CK. Ten tools of continuous quality improvement: a review and case example of hospital discharge. American Journal of Medical Quality 1995;10:213–20.
  19. NHS Modernisation Agency. Improvement Leaders’ Guide: process mapping, analysis and redesign. London: Department of Health, 2005.
  20. NHS Modernisation Agency. Improvement Leaders’ Guide: managing the human dimensions of change. London: Department of Health, 2005.
  21. Quinn JB. Intelligent Enterprise: a knowledge and service based paradigm for industry. New York: Free Press, 1992.
  22. Ham C. Improving the performance of health services: the role of clinical leadership. Lancet 2003;361:1978–80.
  23. Lewin K. Frontiers in group dynamics. Human Relations 1947;1:4–41.
  24. Mohammed MA, Worthington P and Woodall WH. Plotting basic control charts: tutorial notes for healthcare practitioners. Quality and Safety in Health Care 2008;17:137–45.
  25. Mohammed MA. Using statistical process control to improve the quality of health care. Quality and Safety in Health Care 2004;13:243–5.
  26. Thomson O’Brien MA, Oxman AD, Davis DA et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2000:CD000259.
  27. Thor J, Lundberg J, Ask J et al. Application of statistical process control in healthcare improvement: systematic review. Quality and Safety in Health Care 2007;16:387–99.
  28. Ukoumunne OC, Gulliford MC, Chinn S, Sterne JAC and Burney PGJ. Methods for evaluating area-wide and organisation-based interventions in health and healthcare: a systematic review. Health Technology Assessment 1999:3.
  29. Siriwardena AN. Experimental methods in health research. In: Saks M and Allsop J (eds). Researching Health: qualitative, quantitative, and mixed methods. Los Angeles: Sage, 2007.
  30. Plsek PE. Quality improvement methods in clinical medicine. Pediatrics 1999;103:203–14.

