Research | Volume 36, Article 79, 09 Jun 2020 | 10.11604/pamj.2020.36.79.23658

Quality assessment in undergraduate medical training: how to bridge the gap between what we do and what we should do

Hanneke Brits, Johan Bezuidenhout, Lynette Jean Van der Merwe

Corresponding author: Hanneke Brits, Department of Family Medicine, School of Clinical Medicine, University of the Free State, Bloemfontein, South Africa

Received: 21 May 2020 - Accepted: 27 May 2020 - Published: 09 Jun 2020

Domain: Health education

Keywords: Quality assessment, focus group interview, clinical competence

©Hanneke Brits et al. Pan African Medical Journal (ISSN: 1937-8688). This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article: Hanneke Brits et al. Quality assessment in undergraduate medical training: how to bridge the gap between what we do and what we should do. Pan African Medical Journal. 2020;36:79. [doi: 10.11604/pamj.2020.36.79.23658]

Available online at: https://www.panafrican-med-journal.com/content/article/36/79/full


Research

Quality assessment in undergraduate medical training: how to bridge the gap between what we do and what we should do

Hanneke Brits1,&, Johan Bezuidenhout2, Lynette Jean Van der Merwe3

 

1Department of Family Medicine, School of Clinical Medicine, University of the Free State, Bloemfontein, South Africa, 2Health Sciences Education, Faculty of Health Sciences, University of the Free State, Bloemfontein, South Africa, 3Undergraduate Programme Management, School of Clinical Medicine, University of the Free State, Bloemfontein, South Africa

 

 

&Corresponding author
Hanneke Brits, Department of Family Medicine, School of Clinical Medicine, University of the Free State, Bloemfontein, South Africa

 

 

Abstract

Introduction: the outcome of the undergraduate medical training programme in South Africa is to produce competent medical doctors who can integrate knowledge, skills and attitudes relevant to the South African context. Training facilities have a responsibility to ensure that they perform the assessment of competence effectively and can defend the results of high-stakes assessments. This study aimed to obtain qualitative data to suggest practical recommendations on best assessment practices to address the gaps between the theoretical principles that inform assessment and current assessment practices.

 

Methods: a focus group interview was used to gather the data. The teaching and learning coordinators for five of the six modules offered in the clinical phase of the undergraduate medical programme participated in the focus group interview, which proceeded as planned and took 95 minutes to complete. The responses were transcribed and recorded on a matrix.

 

Results: the lack of formal feedback to students was identified as an area of concern; feedback plays an important role in promoting student learning and improving patient care. The role of teaching and learning coordinators as drivers of quality assessment was recognised and supported. All participants agreed on the outcome of the programme and the central role of the outcome in all assessments.

 

Conclusion: the training of assessors and the implementation of workplace-based assessment and assessment portfolios were recommended; these steps can also address feasibility challenges. Participants recommended decreasing summative assessments and only performing these for borderline students.

 

 

Introduction

Quality assessment requires that the type and content of the assessment is aligned with the outcome of the training programme [1]. The outcome of the undergraduate medical training programme in South Africa is to produce competent medical doctors who can integrate knowledge, skills and attitudes relevant to the South African context [2]. Assessment of clinical competence is a complex process, due to a number of factors, which include the constant emergence of new best-practice medical evidence [3], the theory-practice gap between what is taught and what is observed in clinical practice [4-6], what is feasible [7], and the challenges of assessment in real-life situations that may compromise the reliability of the assessment [8]. Competence assessment must satisfy various stakeholders, which include patients and the general public, training providers, regulatory bodies and students.

Training facilities have a responsibility to ensure that they perform this assessment task effectively and can defend the results of high-stakes assessments [9]. A paper describing a framework to benchmark the quality of clinical assessment in a South African undergraduate medical programme provides context-specific theoretical principles for undergraduate medical assessment [10]. Assessment reports and quantitative studies (In press) on current assessment practices used for undergraduate medical students at the University of the Free State (UFS) showed that these principles are not always adhered to, which may compromise the defensibility of high-stakes assessments. This study aimed to obtain qualitative data to suggest practical recommendations on best assessment practices to address the gaps between the theoretical principles that inform assessment and current assessment practices. These recommendations will be combined with other research results to prepare a proposal to inform quality assessment at the UFS.

 

 

Methods

Research design: a focus group interview (FGI) was used to triangulate theory (i.e. theoretical principles that inform assessment) with current assessment practices, to compile recommendations that should assist with quality assessment in undergraduate medical training. An FGI can be used in a mixed-methods design to triangulate qualitative and quantitative data from different sources [11], as was done in this study. Various definitions exist for an FGI, and some researchers even use the terms FGI and focus group discussion (FGD) interchangeably [12]. The difference between an FGI and an FGD is that the main objective of an FGI is to obtain answers to specific questions while, in an FGD, the interaction between the group members and the group dynamics are as important as the information gathered [12, 13].

Merton and Kendall (in Cohen et al.) [14] first described the concept of an FGI in 1946 and concluded that: during an FGI, there is a greater degree of interviewer control; the people participating in the interview should share experiences; the interview questions are based on previous data analysis; and subjective experiences of people who have been exposed to the same experience are gathered. The strength of a focus group is that it stimulates new or forgotten ideas and that members can build on the input of others. Some of its limitations are that it can be difficult to get members together, the group may not be representative, and some group members may dominate others [14, 15].

Participants: in an FGI, between five and 12 members interact, debate and argue their opinions on a specific issue. The participants of the focus group should represent the target population. Members who participate should do so voluntarily, be knowledgeable on the subject and be able to communicate in a group [11]. The clinical phase at the UFS comprises six modules. The six teaching and learning (T&L) coordinators of these modules were invited to participate in the FGI; five of them did so.

Facilitator: the facilitator asks focused questions with the aim of obtaining answers to specific research questions [13]. It is important for the facilitator to monitor the group dynamics and ensure participation by all members. The facilitator must be in control of the situation and should avoid too much or too little personal participation [12]. A facilitator with experience in higher education and in conducting FGIs was used to facilitate the process.

Questions: an FGI is not merely a general discussion, but is focused on a specific topic. Usually, the discussion starts broadly and then spirals inwards to address the research question/s [16]. The questions asked during this FGI derived from an assessment framework for undergraduate medical programmes [10], as well as the results of current assessment practices (In press) and publications with recommendations for undergraduate medical assessment [1, 2, 9, 17]. The guidelines of Krueger and Casey [18] for developing good focus group questions, namely that questions must be short, clear, open-ended and directional, were followed. Questions were categorised and grouped. All the questions were available in the facilitator and participant guides, which the facilitator and participants received before the FGI.

Logistics: an FGI should last between 60 and 90 minutes [19]. To capture all the information, the facilitator needs to take notes of the discussions and non-verbal cues. It can be helpful to record or videotape the discussion, and to use a co-facilitator to take notes and write down observations too [12]. The researcher arranged a neutral venue, confirmed the availability of the facilitator and participants, and provided refreshments. The facilitator received all the necessary documents well in advance of the FGI. The researcher met with the facilitator in person to clarify uncertainties and agree on the process to be followed. All participants received a participant guide one week before the FGI, and a reminder to attend one day before the FGI, which was conducted on 29 January 2020.

Data collection: the aim of an FGI is not consensus, but rather the gathering of rich ideas [11]. The facilitator asked one question at a time and encouraged active participation by all participants. Discussions continued until all participants were satisfied with the answer to a particular question. If no answer, or more than one answer or suggestion, was offered, the facilitator encouraged participation until no new ideas were produced. Multiple answers and disagreement between opinions were allowed.

Pretesting of focus group and explorative interview: no test run of the FGI was done, as it is important to obtain the collaborative feedback of the whole group. The validity of the questions asked in the FGI was discussed in an explorative interview with the promotors and was informed by the researcher's previous experience.

Analysis of data and reporting: an audio recording of the FGI was transcribed by the researcher immediately after the FGI concluded. The researcher used a video recording to verify the accuracy of the transcription. A matrix, as suggested by Onwuegbuzie et al. [20], was used to record the answers to the specific questions. Data were reported under specific categories and questions. The audio recording was used again to verify the information on the matrix.

Ethical considerations: ethical approval for the study was obtained from the Health Sciences Research Ethics Committee, UFS (UFS-HSD 2019/0001/2304). UFS authorities approved the inclusion of personnel. Informed consent was obtained from participants for participation and for making the audio and video recordings. Participants were not identified and a participant number was allocated to each, which is also used for data reporting.

Quality and rigour of the data management: to ensure the credibility of the data collection, all the research questions were clarified with the promotors. The facilitator ensured active participation by all participants, and clarified concepts to improve the quality of the data. Local, national and international assessment guidelines were included to make the recommendations transferable to other institutions. The focus group participants and interview process were clearly described for the purpose of assessing the dependability of the results. Confirmability was ensured by audio and video recording of the process and verifying results after completion of the result template.

 

 

Results

The T&L coordinators for five of the six modules that are offered in the clinical phase of the undergraduate medical programme attended the FGI. The process proceeded as planned and took 95 minutes to complete. The audio recording was of good quality, with all conversations clearly audible and respondents identifiable. The participants provided answers to all the questions in the FGI; all contributed original suggestions and took part in the discussions. No participant dominated or withdrew during discussions. The results of the focus group interview are displayed in three tables according to the adjusted template suggested by Onwuegbuzie et al. [20]. Table 1 displays the results for the outcome of the programme, competence, validity and reliability; Table 2 addresses fairness, feasibility, educational effect and assessment methods; and Table 3 covers quality assurance, training and general comments.

 

 

Discussion

The FGI met the requirements for a good FGI regarding participants, the facilitator, the questions, logistics, the explorative interview, and data collection and analysis. The results are also representative of the study population, with five of the possible six participants included. The first question concerned the outcome of the undergraduate medical programme. All the participants agreed with the outcome as is, namely, to produce a competent medical doctor who can integrate knowledge, skills and attitudes relevant to the South African context. This clear outcome should be kept in mind during all assessments. This outcome is in line with the regulations stipulated in the Health Professions Act of South Africa, the South African Qualifications Authority and the assessment policy of the UFS [2, 17, 21].

The next questions focused on competence and the way it is assessed. Clinical competence must be assessed at the “Does” level of Miller's pyramid [22]. It was mentioned that the actual demonstration of this competence only occurs during internship, which is still part of training (students must complete internship and community service before registering as independent medical practitioners with the Health Professions Council of South Africa (HPCSA)). A suggestion to implement pass/fail stations, rather than relying only on an average of 50% or above to pass, was well accepted. A discussion on the difficulty of ensuring competence with a pass mark of 50% (the pass mark according to the UFS assessment policy) provided more questions than answers. It must be recognised that a mark of 50% indicates that the student is competent, not “half competent”. All assessors should be aware of how they allocate marks and the implications thereof. Further discussion in this regard was recommended to clarify the meaning of 50% in the context of competence.

During questions regarding validity, good practices were shared and recommendations made. It was agreed that T&L coordinators should take responsibility for assessments, to ensure their validity. Blueprinting of all assessments should be done: blueprinting will improve content validity, and using appropriate assessment methods will improve construct validity [10]. There is no need to add assessment methods, as most methods described for undergraduate clinical assessment [23] are already used at the UFS. It was recognised that a shortage in the workforce favours the use of less labour-intensive assessment methods, e.g. multiple-choice questions rather than longer written questions that can assess higher cognitive levels. The lack of trained assessors also limits the use of workplace-based assessment (WBA) and assessment/competency portfolios to assess competence. To address the workforce issue, all clinicians should be trained as assessors, and registrars can be included in the assessment process. Including registrars trains them in the important skill of assessment and may help to spread the workload. Regarding the assessment of professionalism and “soft skills”, the suggestions to implement a “professionalism portfolio” and to apply the graduate attributes policy of the university were supported and should be investigated.

The participants gave valuable input on aspects to improve the quality of assessment, including recommendations on reliability, fairness, educational effect and feasibility. Competency assessment cannot be 100% reliable, but the use of WBA and assessment/competency portfolios was recommended to increase the number of assessments. WBA and assessment portfolios are excellent ways to assess competence, but reliability may be compromised [24]. Although portfolios and WBA are labour intensive, these methods are more authentic, and the number and type of assessments can increase, thereby contributing to reliability [25]. The lack of formal feedback to students was identified as an area of concern - feedback plays an important role in promoting student learning and improving patient care [26]. Feedback is also a requirement stipulated in the assessment policy of the UFS [21]. Scheduling formal feedback sessions after assessments may assist with the implementation of formal feedback, a practice that is currently lacking in the undergraduate medical programme.

Participants in the FGI recommended decreasing summative assessments and only performing these for borderline students. This practice will also address some of the problems with the feasibility of summative assessments. Less emphasis on summative assessment is well supported in the literature; for example, assessment results should not depend on a single summative assessment, as competency in one case is a poor predictor of competency in another [27]. Performance stress during high-stakes assessments may also contribute to less reliable outcomes [28], and a single poor performance should not affect the outcome of years of training [23]. The lack of post-assessment moderation was identified as a risk for quality assessment. Although procedures and checklists for moderation are available, implementation is not standard practice in all departments. Quality assurance and moderation are important components of ensuring and maintaining the quality of assessment [21]. E-mail reminders to departments to do moderation, together with spot checks, may reinforce the implementation of this important practice.

During the FGI, clinical training was also discussed in relation to assessment. Biggs [29] describes constructive alignment as comprising outcomes, teaching and training activities, and assessment that are planned to complement and support each other. Students indicated in their feedback before the FGI that they want more on-site practical training in wards and clinics (In press). The increase in student numbers and decrease in teacher numbers also reduces supervised, hands-on practical training for students. A suggestion for countering the lack of clinical exposure is to stipulate clearly, and monitor, the available clinical training time. Another factor that affects clinical training negatively is overburdened clinicians, who may not necessarily be good role models and tend to give students time off so that the clinicians can get clinical work done, rather than spend time on training. This practice may be due to burnout, as evidenced by a study in this academic setting that showed that only 3.4% of the doctors included showed no signs of burnout [30]. The participants mentioned that the development of core competencies in undergraduate students, such as professionalism, leadership and scholarship [31], coping in difficult situations, and practising self-care, should be included in clinical training.

The training platform may be an opportunity for students to see how to behave professionally, but also how not to behave. It was discussed that students may not be aware that, although they are trained in tertiary facilities, they are not expected to perform as specialists, but that they should rather use the opportunity to identify clinical signs and develop an approach to a specific problem. Better communication on the outcome of specific training rotations may assist both students and clinicians and was recommended. The FGI concluded with a discussion on the effect of the introduction of T&L coordinators on student assessment and training. The excellent work of the T&L coordinators was recognised and appreciated. All agreed that the T&L coordinators should continue to play a leading role in student assessment and training.

Limitations and strengths: only the T&L coordinators of the major disciplines participated in the FGI, and the FGI may have failed to capture contributions from the excluded minor disciplines. However, these smaller disciplines were indirectly represented by the major disciplines. Strengths of the FGI were that it was conducted as planned and within the guidelines for an FGI described in the Methods, and that data management met the criteria for credibility.

 

 

Conclusion

The clear, agreed-upon outcome, namely, to produce a competent medical doctor who can integrate knowledge, skills and attitudes relevant to the South African context, should be kept in mind during all assessments. The difficulty of how to measure and allocate marks to competence was recognised. The lack of formal feedback to students, as well as the absence of blueprinting, should be addressed. The important place of WBA and assessment portfolios, with less emphasis on summative assessment, was an important recommendation from the FGI. A proposal to improve the quality of assessment in the clinical phase of the undergraduate medical programme will be compiled from this and other research information. This proposal will be submitted to the Executive Committee of the School of Clinical Medicine for implementation. Finally, an FGI can be recommended as an appropriate way to obtain rich data for practical solutions.

What is known about this topic

  • Assessment should be aligned with the outcome of the training programme;
  • Assessment of clinical competence is a complex process.

What this study adds

  • Workplace-based assessment should form part of competency assessment;
  • The difficulty of how to measure and allocate marks to competence was recognised;
  • Competency and professional portfolios should be implemented.

 

 

Competing interests

The authors declare no competing interests.

 

 

Authors' contributions

HB: conceptualisation of the study, protocol development, data collection and writing of the paper; JB and LJVdM: promotors who assisted with conceptualisation and planning of the study, as well as critical evaluation and final approval of the manuscript.

 

 

Acknowledgments

The authors thank Prof Mathys Labuschagne for facilitating the focus group interview and Mrs Hettie Human for language editing.

 

 

Tables

Table 1: results of the focus group interview displayed for outcome of programme, competence, validity and reliability adjusted according to the template by Onwuegbuzie et al.

Table 2: results of the focus group interview displayed for fairness, feasibility, educational effect and assessment methods adjusted according to the template by Onwuegbuzie et al.

Table 3: results of the focus group interview displayed for quality assurance, training and general comments adjusted according to the template by Onwuegbuzie et al.

 

 

References

  1. John Norcini, Brownell Anderson, Valdes Bollela, Vanessa Burch, Manuel João Costa, Robbert Duvivier et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33(3):206-14. PubMed | Google Scholar

  2. Published under Government Notice R139 in Government Gazette 31886 of 19 February 2009. Regulations Relating to the Registration of Students, Undergraduate Curricula and Professional Examinations in Medicine. Accessed 20th January 2020.

  3. Cooke SJ, Johansson S, Andersson K, Livoreil B, Post G, Richards R, Stewart R, Pullin AS. Better evidence, better decisions, better environment: emergent themes from the first environmental evidence conference. Environmental Evidence. 2017;6(1):15. Google Scholar

  4. Ajani K, Moez S. Gap between knowledge and practice in nursing. Procedia - Social and Behavioral Sciences. 2011;15:3927-31. Google Scholar

  5. Hussein MH, Osuji J. Bridging the theory-practice dichotomy in nursing: The role of nurse educators. J Nurs Educ Pract. 2017;7(3):20-2. Google Scholar

  6. Salah AA, Aljerjawy M, Salama A. Gap between theory and practice in the nursing education: The role of clinical setting. Emergency. 2018;24:17-18. Google Scholar

  7. Zlatkin-Troitschanskaia O, Pant HA. Measurement advances and challenges in competency assessment in higher education. Journal of Educational Measurement. 2016; 53(3):253-264. Google Scholar

  8. Clauser BE, Margolis MJ, Swanson DB. Issues of validity and reliability for assessments in medical education. In ES Holmboe, SJ Durning, RE Hawkins (Eds). Practical guide to the evaluation of clinical competence. Philadelphia, Elsevier. 2018;2nd ed.

  9. Richard B Hays, Gary Hamlin, Linda Crane. Twelve tips for increasing the defensibility of assessment decisions. Med Teach. 2015;37(5):433-436. PubMed | Google Scholar

  10. Hanneke Brits, Johan Bezuidenhout, Lynette J Van der Merwe. A framework to benchmark the quality of clinical assessment in a South African undergraduate medical programme. S Afr Fam Pract (2004). 2020 Feb 4;62(1):e1-e9. PubMed | Google Scholar

  11. Carey MA, Asbury JE. Focus group research. New York, Taylor and Francis Group, Routledge. 2016. Google Scholar

  12. Nyumba T, Wilson K, Derrick CJ, Mukherjee N. The use of focus group discussion methodology: Insights from two decades of application in conservation. Methods in Ecology and Evolution. 2018;9(1):20-32. Google Scholar

  13. Boddy C. A rose by another name may smell as sweet but “group discussion” is not another name for a “focus group” nor should it be. Qualitative Market Research: An International Journal. 2005;8(3):248-255. Google Scholar

  14. Cohen L, Manion K, Morrison K. Research methods in education. New York, Taylor and Francis Group, Routledge. 2002;6th ed:317-382. Google Scholar

  15. Michael D Fetters, Timothy C Guetterman, Debra Power, Donald E Nease Jr. Split-session focus group interviews in the naturalistic setting of family medicine offices. The Annals of Family Medicine. 2016;14(1):70-75. PubMed | Google Scholar

  16. Nieuwenhuis J. Qualitative research design and data gathering techniques. In Maree K (Ed). First steps in research. 7th impression. Pretoria: Van Schaik Publishers. 2016:70-97.

  17. South African Qualifications Authority (SAQA), Sabinet Online. National policy and criteria for designing and implementing assessment for NQF qualifications and part-qualifications and professional designations in South Africa. Accessed 20th January 2020.

  18. Krueger RA, Casey MA. Focus groups: a practical guide for applied research. 5th ed. Singapore: Sage Publications. 2015:39-76. Google Scholar

  19. Skinner D. Qualitative research methodology: An introduction. In Ehrlich, R & Joubert, G (Eds). Epidemiology - A research manual for South Africa, Cape Town, Oxford University Press. 2014;3rd ed:349-359.

  20. Onwuegbuzie AJ, Dickinson WB, Leech NL, Zoran AG. A qualitative framework for collecting and analyzing data in focus group research. International Journal of Qualitative Methods. 2009;8(3):1-21. PubMed | Google Scholar

  21. University of the Free State (UFS). Assessment policy on the UFS coursework learning programme. Accessed 20th January 2020.

  22. GE Miller. The assessment of clinical skills/competence/performance. Academic Medicine. 1990;65(Suppl 9): S63-S67. PubMed | Google Scholar

  23. Yudkowsky R, Park YS, Downing SM. Introduction to assessment in health professions education. In Yudkowsky R, Park YS, Downing SM (Eds). Assessment in health professions education. 2nd ed. New York: Routledge. 2019.

  24. Cees P M van der Vleuten. Revisiting Assessing professional competence: From methods to programmes. Med Educ. 2016 Sep;50(9):885-8. PubMed | Google Scholar

  25. Schumacher DJ, Tekian A, Yudkowsky R. Assessment portfolios. In Yudkowsky R, Park YS, Downing SM (Eds). Assessment in health professions education. 2nd ed. New York: Routledge. 2019.

  26. Marjan Govaerts. Workplace-based assessment and assessment for learning: Threats to validity. Journal of Graduate Medical Education. 2015;7(2):265-267. PubMed | Google Scholar

  27. Amin Z, Seng CY, Eng KH (Eds). Practical guide to medical student assessment. Singapore: World Scientific Publishing. 2006. Google Scholar

  28. Attali Y. Effort in low-stakes assessments: What does it take to perform as well as in a high-stakes setting? Educ Psychol Meas. 2016;76(6): 1045-1058. Google Scholar

  29. Biggs JB. Enhancing teaching through constructive alignment. Higher Education. 1996;32:347-364. Google Scholar

  30. Sirsawy U, Steinberg WJ, Raubenheimer JE. Levels of burnout among registrars and medical officers working at Bloemfontein public healthcare facilities in 2013. South African Family Practice. 2016;58(6):213-218. Google Scholar

  31. HPCSA (Health Professions Council of South Africa). Core competencies for undergraduate students in clinical associate, dentistry and medical teaching and learning programmes in South Africa. Health Professions Council of South Africa. 2014.
