Point of View: Competency-based assessment for the training of PhD students and early-career scientists
Abstract
The training of PhD students and early-career scientists is largely an apprenticeship in which the trainee associates with an expert to become an independent scientist. But when is a PhD student ready to graduate, a postdoctoral scholar ready for an independent position, or an early-career scientist ready for advanced responsibilities? Research training by apprenticeship does not uniformly include a framework to assess if the trainee is equipped with the complex knowledge, skills and attitudes required to be a successful scientist in the 21st century. To address this problem, we propose competency-based assessment throughout the continuum of training to evaluate more objectively the development of PhD students and early-career scientists.
Main text
The quality of formal training assessment received by PhD students and early-career scientists (a label that covers recent PhD graduates in a variety of positions, including postdoctoral trainees and research scientists in entry-level positions) is highly variable, and depends on a number of factors: the trainee’s supervisor or research adviser; the institution and/or graduate program; and the organization or agency funding the trainee. The European approach, for example, relies more on one final summative assessment (that is, a high stakes evaluation at the conclusion of training, e.g. the dissertation and defense), whereas US doctoral programs rely more on multiple formative assessments (regular formal and informal assessments to evaluate and provide feedback about performance) before the final dissertation defense (Barnett et al., 2017). Funding agencies in the US such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) have recently increased expectations for formal training plans for individuals supported by individual or institutional training grants (NIH, 2012); but these agencies support only a small fraction of PhD trainees via these funding mechanisms. This variation in the quality and substance of training assessment for PhD students and early-career scientists (Maki and Borkowski, 2006) underscores the need for an improved approach to such assessment.
The value of bringing more definition and structure to the training environment has been recognized by professional organizations such as the National Postdoctoral Association, the American Physiological Society/Association of Chairs of Departments of Physiology, and some educational institutions and individual training programs. In addition, a recent NIH Funding Opportunity Announcement places increased emphasis on the development of both research and career skills, with a specific charge that “Funded programs are expected to provide evidence of accomplishing the training objectives”. Lists of competencies and skills provide guidelines for training experiences but they are rarely integrated into training assessment plans.
Based on our experience as graduate and postdoctoral program leaders, we recognized the need both to identify core competencies and to develop a process to assess these competencies. To minimize potential confirmation bias we deliberately chose not to begin this project with a detailed comparison of previously described competencies. Each author independently developed a list of competencies based on individual experiences. Initial lists were wide-ranging, and included traditional fundamental research skills (e.g., critical thinking skills, computational and quantitative skills), skills needed for different career pathways (e.g., teaching skills), and business and management skills (e.g., entrepreneurial skills such as the ability to develop a business or marketing plan). Although we recognize that many of the competencies we initially defined are important in specific careers, from the combined list we defined 10 core competencies essential for every PhD scientist regardless of discipline or career pathway (Table 1).
Table 1—source data 1: Core competencies and subcompetencies. https://doi.org/10.7554/eLife.34801.005
Broad Conceptual Knowledge of a Scientific Discipline refers to the ability to engage in productive discussion and collaboration with colleagues across a discipline (such as biology, chemistry, or physics).
Deep Knowledge of a Specific Field encompasses the historical context, current state of the art, and relevant experimental approaches for a specific field, such as immunology or nanotechnology.
Critical Thinking Skills focuses on elements of the scientific method, such as designing experiments and interpreting data.
Experimental Skills includes identifying appropriate experimental protocols, designing and executing protocols, troubleshooting, lab safety, and data management.
Computational Skills encompasses relevant statistical analysis methods and informatics literacy.
Collaboration and Team Science Skills includes openness to collaboration, self- and disciplinary awareness, and the ability to integrate information across disciplines.
Responsible Conduct of Research (RCR) and Ethics includes knowledge about and adherence to RCR principles, ethical decision making, moral courage, and integrity.
Communication Skills includes oral and written communication skills as well as communication with different stakeholders.
Leadership and Management Skills includes the ability to formulate a research vision, manage group dynamics and communication, organize and plan, make decisions, solve problems, and manage conflicts.
Survival Skills includes a variety of personal characteristics that sustain science careers, such as motivation, perseverance, and adaptability, as well as participating in professional development activities and networking skills.
Because each core competency is multi-faceted, we defined subcompetencies. For example, we identified four subcompetencies of Critical Thinking Skills: (A) Recognize important questions; (B) Design a single experiment (answer questions, controls, etc.); (C) Interpret data; and (D) Design a research program. Each core competency has between two and seven subcompetencies, resulting in a total of 44 subcompetencies (Table 1—source data 1: Core Competencies Assessment Rubric).
Assessment milestones
Individual competencies could be assessed using a Likert-type scale (Likert, 1932), but such ratings can be very subjective (e.g., “poor” to “excellent”, or “never” to “always”) if they lack specific descriptive anchors. To maximize the usefulness of a competency-based assessment rubric for PhD student and early-career scientist training in any discipline, we instead defined observable behaviors corresponding to the core competencies that reflect the development of knowledge, skills and attitudes throughout the timeline of training.
We used the “Milestones” framework described by the Accreditation Council for Graduate Medical Education: “Simply defined, a milestone is a significant point in development. For accreditation purposes, the Milestones are competency-based developmental outcomes (e.g., knowledge, skills, attitudes, and performance) that can be demonstrated progressively by residents and fellows from the beginning of their education through graduation to the unsupervised practice of their specialties.”
Our overall approach to developing milestones was guided by the Dreyfus and Dreyfus model describing five levels of skill acquisition over time: novice, advanced beginner, competent, proficient and expert (Dreyfus and Dreyfus, 1986). As trainees progress through competent to proficient to expert, their perspective matures, their decision making becomes more analytical, and they become fully engaged in the scientific process (Dreyfus, 2004). These levels are easily mapped to the continuum of PhD scientist training: beginning PhD student as novice, advanced PhD student as advanced beginner, PhD graduate as competent, early-career scientist (which includes postdoctoral trainees) as proficient, and science professional as expert (see Table 2).
We therefore defined observable behaviors and outcomes for each subcompetency that would allow a qualified observer, such as a research adviser or job supervisor, to determine if a PhD student or early-career scientist had reached the milestone for their stage of training (Table 1—source data 1: Core Competencies Assessment Rubric). A sample for the Critical Thinking Skills core competency is shown in Table 3.
Recommendations for use
We suggest that such a competency-based assessment be used to guide periodic feedback between PhD students or early-career scientists and their mentors or supervisors. It is not meant to be a checklist. Rather than assessing all 44 subcompetencies at the same time, we recommend that subsets of related competencies (e.g., “Broad Conceptual Knowledge of a Scientific Discipline” and “Deep Knowledge of a Specific Field”) be considered during any given evaluation period (e.g., month or quarter). Assessors should read across the observable behaviors for each subcompetency from left to right, and score the subcompetency based on the last observable behavior they believe is consistently demonstrated by the person being assessed. Self-assessment and mentor or supervisor ratings may be compared to identify areas of strength and areas that need improvement. Discordant ratings between self-assessment and mentor or supervisor assessment provide opportunities for conversations about areas in which a trainee may be overconfident and need improvement, and areas of strength that the trainee may not recognize or may underestimate.
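To make the scoring procedure concrete, the left-to-right reading of a subcompetency row and the comparison of self- and mentor ratings could be sketched in code. This is purely an illustrative sketch with hypothetical function names and data, not part of the published rubric:

```python
# Illustrative sketch (hypothetical): score a subcompetency as the furthest
# milestone level whose observable behavior is consistently demonstrated,
# reading left to right, then flag self/mentor discordance for discussion.

MILESTONE_LEVELS = ["novice", "advanced beginner", "competent",
                    "proficient", "expert"]

def score_subcompetency(demonstrated):
    """demonstrated: booleans, left to right across the milestone columns.
    Returns the index of the last consecutively demonstrated level,
    or -1 if none is demonstrated."""
    level = -1
    for i, shown in enumerate(demonstrated):
        if not shown:
            break  # stop at the first behavior not consistently shown
        level = i
    return level

def discordant(self_rating, mentor_rating, threshold=1):
    """Flag ratings that differ by more than `threshold` levels,
    prompting a mentoring conversation."""
    return abs(self_rating - mentor_rating) > threshold

# Example: a trainee consistently shows the first three behaviors.
level = score_subcompetency([True, True, True, False, False])
print(MILESTONE_LEVELS[level])  # competent
```

The design choice mirrors the rubric's instruction: a gap in the row caps the score, so an isolated advanced behavior does not outweigh a missing earlier one.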
The competencies and accompanying milestones can also be used in a number of other critically important ways. Combined with curricular mapping and program enhancement plans, the competencies and milestones provide a framework for developing program learning objectives and outcomes assessments now commonly required by educational accrediting agencies. Furthermore, setting explicit expectations for research training may enhance the ability of institutions to recruit outstanding PhD students or postdoctoral scholars. Finally, funding agencies focused on the individual development of the trainee may use these competencies and assessments as guidelines for effective training programs.
Why should PhD training incorporate a competency-based approach?
Some training programs include formal assessments utilizing markers and standards defined by third parties. Medical students, for example, are expected to meet educational and professional objectives defined by national medical associations and societies.
By contrast, the requirements for completing the PhD are much less clear, defined by the “mastery of specific knowledge and skills” (Sullivan, 1995) as assessed by research advisers. The core of the science PhD remains the completion of an original research project, culminating in a dissertation and an oral defense (Barnett et al., 2017). PhD students are also generally expected to pass courses and master research skills that are often discipline-specific and not well delineated. Whereas regional accrediting bodies in the US require graduate institutions to have programmatic learning objectives and assessment plans, they do not specify standards for the PhD. Also, there are few – if any – formal requirements and no accrediting bodies for early-career scientist training.
We can and should do better. Our PhD students, postdoctoral scholars, early-career scientists and their supervisors deserve both a more clearly defined set of educational objectives and an approach to assess the completion of these objectives to maximize the potential for future success. A competency-based approach fits well with traditional PhD scientist training, which is not bound by a priori finish dates. It provides a framework to explore systematically and objectively the development of PhD students and early-career scientists, identifying areas of strength as well as areas that need improvement. The assessment rubric can be easily implemented for trainee self-assessment as well as constructive feedback from advisers or supervisors by selecting individual competencies for review at regular intervals. Furthermore, it can be easily extended to include general and specific career and professional training as well.
In its recent report “Graduate STEM Education for the 21st Century”, the National Academies of Sciences, Engineering, and Medicine (2018) briefly outlined core competencies for STEM PhDs. The first of its formal recommendations specifically for STEM PhD education is: “Universities should verify that every graduate program that they offer provides for these competencies and that students demonstrate that they have achieved them before receiving their doctoral degrees.” This assessment rubric provides one way for universities to verify that students have achieved the core competencies of a science PhD.
We look forward to implementing and testing this new approach for assessing doctoral training, as it provides an important avenue for effective communication and a supportive mentor–mentee relationship. This assessment approach can be used for any science discipline, and it has not escaped our notice that it is adaptable to non-science PhD training as well.
Data availability
There are no datasets associated with this work.
References
- Dreyfus HL, Dreyfus SE. 1986. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: The Free Press.
- Dreyfus SE. 2004. The five-stage model of adult skill acquisition. Bulletin of Science, Technology & Society 24:177–181. https://doi.org/10.1177/0270467604264992
- Maki PL, Borkowski NA. 2006. The Assessment of Doctoral Education: Emerging Criteria and New Models for Improving Outcomes. Sterling, VA: Stylus Publishing.
- Sullivan R. 1995. The Competency-Based Approach to Training. Baltimore, MD: JHPIEGO Corporation.
- The National Academies of Sciences, Engineering, and Medicine. 2018. Graduate STEM Education for the 21st Century. Leshner A, Scherer L (editors). The National Academies Press. https://doi.org/10.17226/25038
Decision letter
-
Emma PewseyReviewing Editor; eLife, United Kingdom
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
Thank you for submitting your article "Competency-Based Assessment for the Training of PhD Scientists" to eLife. I have assessed your article in my capacity as Associate Features Editor alongside my colleague Peter Rodgers (the eLife Features Editor) and a reviewer who has chosen to remain anonymous.
We all enjoyed reading your article and felt that the framework you have developed provides valuable guidance to PhD training that is currently lacking. Indeed, the reviewer stated that "I would look at introducing such a framework at my institution – this benefits both the student and supervisor".
I would therefore like to invite you to submit a revised version of the manuscript that addresses the following points.
Major revisions
1) There are comments about postdocs that do not always fit the narrative of the article. Therefore, it would be best to remove these to focus upon PhD students. Postdoc training is a lot harder to provide a framework for because it could be for anything between 1–5 years. I appreciate the need for such a postdoc framework but overall feel it is best to focus on the PhD aspects.
2) It is not clear from the article whether any PhD scientists have used your competency-based framework yet. Please could you include a section that discusses:
- How extensively the framework has been used so far
- Any assessments you might have performed of the effectiveness of competency-based training, or feedback you've received from PhD scientists and their advisors who have used the framework
- Any plans you have for implementing the approach more widely, or assessing its effectiveness
3) In paragraph four you state that you did not draw your list of competencies from previous lists, but to avoid confirmation bias developed lists based on your own experiences. Please could you discuss how this method avoided bias – could you not have been biased due to reading existing lists some time previously?
4) Mentioning the NSF and the NIH in the first sentence of the second paragraph makes the article very US focused – moving this sentence to later in this paragraph would make the article less US-centric. The article could also incorporate a deeper overview of other countries and practices. For example, many institutions across Europe require students to publish a number of articles before completion of their PhD, and such a goal does provide a "loose" framework for their development.
5) There are a few potential talking points that could be added to the competencies – there is flexibility as to which of the 10 competencies these points could fit. (1) Supervision of other students – this may develop from undergrad supervision through to MSc and lastly new PhD students. (2) An awareness of Innovation/Commercialisation – such as assay or technique development. (3) Public awareness/accountability – This is probably related to ethics but as most PhDs are public or charity funded then there should be a note relating to engagement with funders to understand their funding/operating mechanisms.
Author response
Major revisions
1) There are comments about postdocs that do not always fit the narrative of the article. Therefore, it would be best to remove these to focus upon PhD students. Postdoc training is a lot harder to provide a framework for because it could be for anything between 1–5 years. I appreciate the need for such a postdoc framework but overall feel it is best to focus on the PhD aspects.
We agree that a full framework for postdoctoral scholars would be complex; however, acquisition of skills continues after PhD completion, regardless of the first post-graduate position or the ultimate career path. Accordingly, we have renamed this category “Early Career Scientist” and added a footnote describing it.
2) It is not clear from the article whether any PhD scientists have used your competency-based framework yet. Please could you include a section that discusses:
- How extensively the framework has been used so far
- Any assessments you might have performed of the effectiveness of competency-based training, or feedback you've received from PhD scientists and their advisors who have used the framework
- Any plans you have for implementing the approach more widely, or assessing its effectiveness
We have not yet deployed this framework, and we strongly feel that deploying it without at least concomitant testing to assess its validity would be unfair to trainees, given the risk that it could be inappropriately used as a summative assessment tool (which is most definitely not our intent).
We have shared it with colleagues at several meetings of the AAMC’s Graduate Research Education and Training (GREAT) professional development group (this group consists of biomedical PhD program leaders, and associate deans of graduate education and postdoctoral training at US medical schools). We have also shared it with FASEB’s policy subcommittee on Training and Career Opportunities for Scientists. It has been presented at the NIH (NIGMS) and at a recent joint AAMC-FASEB conference. We have received extremely positive feedback. Based on this feedback, we are anxious to share it more widely, and feel that publication in eLife is an excellent vehicle to accomplish this first step.
From the beginning we have been committed to assessing the effectiveness of this framework. As we have presented it, we have had a number of discussions with a variety of organizations (scientific societies, private foundations, educational advocacy groups, and others) that have expressed an interest in funding a multi-institutional assessment project. We anticipate that publication would generate additional interest and move the assessment process forward quickly.
3) In paragraph four you state that you did not draw your list of competencies from previous lists, but to avoid confirmation bias developed lists based on your own experiences. Please could you discuss how this method avoided bias – could you not have been biased due to reading existing lists some time previously?
As graduate education leaders we are immersed in the issues of the day, and are certainly aware of and had previously read some of the existing lists of competencies; indeed our extensive experience across the many facets of graduate education and postdoctoral training is what prompted us to start this project in the first place. Thus, we agree with the reviewers that ‘avoiding’ confirmation bias is too strong. We have revised the text to more accurately reflect our meaning: our goal was to minimize confirmation bias, which we did by not contemporaneously reviewing the lists of which we were aware when we initiated this project. We have revised the text accordingly.
4) Mentioning the NSF and the NIH in the first sentence of the second paragraph makes the article very US focused – moving this sentence to later in this paragraph would make the article less US-centric. The article could also incorporate a deeper overview of other countries and practices. For example, many institutions across Europe require students to publish a number of articles before completion of their PhD, and such a goal does provide a "loose" framework for their development.
We have moved the sentences discussing US practices to later in the paragraph. A recent article by Barnett et al. (FEBS Open Bio 7:1444, 2017) discussed the lack of formative assessments in the predominant European PhD model in comparison to regular such assessments in the US. While it is true that the US system has regular formative and summative assessments throughout a student’s PhD program (regular dissertation committee meetings, and one or more benchmark examinations during the student’s program), we would argue that the US system is itself a ‘loose’ framework, to the detriment of our students – hence the importance of this work. We have incorporated this point into the text early in paragraph two.
5) There are a few potential talking points that could be added to the competencies – there is flexibility as to which of the 10 competencies these points could fit. (1) Supervision of other students – this may develop from undergrad supervision through to MSc and lastly new PhD students. (2) An awareness of Innovation/Commercialisation – such as assay or technique development. (3) Public awareness/accountability – This is probably related to ethics but as most PhDs are public or charity funded then there should be a note relating to engagement with funders to understand their funding/operating mechanisms.
1) Supervision of others is already built into many of the assessments; we list several here (out of more than 20 milestones):
a) Competency 2 – Deep Knowledge: “Educate others”, “Train others” (at the Early Career Scientist stage)
b) Competency 3 – Critical thinking skills: “Evaluate protocols of others” (at the PhD Graduate stage), “Critique experiments of others” (at the Early Career Scientist stage)
c) Competency 4 – Experimental skills: “Assist others” (at the Advanced PhD Student stage), “Help Others” (at the PhD Graduate Stage)
2) “Innovation… such as assay or technique development” is already incorporated into Sub-Competency 2B – Design and execute experimental protocols: “Build a new protocol” (at the Early Career Scientist stage). We determined that “Commercialisation” [sic] and other business and entrepreneurship skills are career specific skills that, while certainly important in some careers, are not a “core” PhD competency. We have slightly elaborated the text to bring this point out.
3) As we understand this comment, “Public awareness” falls under Sub-competency 8-F, “Communication with the public”. We strongly believe that “Public… accountability” is infused in every aspect of Competency 7, “Responsible Conduct of Research (RCR) and Ethics”.
Article and author information
Funding
The authors declare that there was no funding for this work.
Acknowledgements
We thank our many colleagues at the Association of American Medical Colleges Graduate Research, Education and Training (GREAT) Group for helpful discussions, and Drs. Istvan Albert, Joshua Crites, Valerie C Holt, and Rebecca Volpe for their insights about specific core competencies. We also thank Drs. Philip S Clifford, Linda Hyman, Alan Leshner, Ravindra Misra, Erik Snapp, and Margaret R Wallace for critical review of the manuscript.
Copyright
© 2018, Verderame et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.