Program Evaluation
Case Study
Relevance
This case study demonstrates the value of integrating formative and summative mixed methods program evaluation to encourage consistent, high-quality participant experiences. This example highlights a portion of the evaluation in which significant effort was invested to identify and address selection bias in a student application review process, resulting in dramatically increased participation by women and by US Black and Latinx students.
Context and Objective
The International High Performance Computing Summer School (IHPCSS) is an annual week-long advanced computing training program for graduate students and postdoctoral scholars from Canada, Europe, Japan, and the United States. IHPCSS would like to increase the diversity of its participants and presenters.
Role
Co-led and designed the program evaluation, experimentation, and data collection instruments. Secured Institutional Review Board approval, ensured GDPR compliance, collected quantitative and qualitative data, redesigned application forms, built selection tool, analyzed data, drafted reports, and presented findings to internal and external stakeholders.
Methodology
A mixed-methods evaluation was conducted to provide valid and useful information to program leadership, managers, and both domestic and international funders to guide program improvement, assess short- and long-term effectiveness and impact, and increase the likelihood of sustainability. As recommended in the values-engaged, educative (VEE) approach [1], the evaluation defined program quality as the effective incorporation of cutting-edge scientific content, strong pedagogy, and sensitivity to diversity and equity issues.
Guided by VEE, the following five key questions framed the evaluation:
- Implementation: Is the IHPCSS program being implemented on schedule and as planned?
- Effectiveness: Are the program's two goal areas (expanding HPC knowledge and fostering collaboration) operating effectively? How might they be improved?
- Impact: What outcomes are associated with participation in the program? What is the value-added of participation?
- Institutionalization: How and to what extent are elements of the program becoming institutionalized? What opportunities and barriers exist?
- Equity Aspects: In what ways and to what extent does the program serve to advance the interests of those least served in HPC?
Note that this case study is limited to select findings for the fifth evaluation question, Equity Aspects.
Study Participants
A population study of key IHPCSS stakeholders, specifically applicants, selected students, reviewers, mentors, presenters, and program coordinators.
Data Sources
On-site observations, focus groups, interviews, surveys, and a quiz.
Findings
This evaluation found evidence of gender-based selection bias in the program's review process, supported by the following findings:
- Reviewers consistently rated women significantly lower than men during the application review process and explicitly linked differences in applicant quality to gender.
- No gender differences were found during direct assessment of participant knowledge of concepts tied to the selection criteria (an illustrative version of this kind of comparison is sketched after this list).
- No gender differences were found in how often each group engaged in relevant coding or programming activity.
- Reviewers' suggestions for improving the process centered on strengthening applicant submissions through constructive feedback (54%) and on clarifying goals for increasing the participation of women (69%) and of members of underrepresented racial and ethnic groups (58%).
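The two "no gender differences" findings rested on direct statistical comparison of assessed knowledge and activity between groups. The report's actual procedure is not reproduced here; as an illustration only, a comparison of quiz scores might look like the following nonparametric two-sample test, with all data and names hypothetical.

```python
# Illustrative only: one way a "no gender differences" check could be run,
# using a nonparametric two-sample test on directly assessed quiz scores.
# The evaluation report's actual procedure is not reproduced here, and the
# data and variable names below are hypothetical.

from scipy import stats

def compare_groups(scores_a, scores_b, alpha=0.05):
    """Mann-Whitney U test: do the two score distributions differ?"""
    result = stats.mannwhitneyu(scores_a, scores_b, alternative="two-sided")
    return result.statistic, result.pvalue, result.pvalue < alpha

# Hypothetical quiz scores (0-100) for two applicant groups.
scores_women = [72, 85, 78, 90, 66, 81]
scores_men = [75, 80, 70, 88, 69, 79]

u_stat, p_value, significant = compare_groups(scores_women, scores_men)
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, difference detected: {significant}")
```

Under a test like this, a "no gender differences" finding corresponds to a p-value above the chosen threshold, meaning no detectable difference between the two distributions.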
Insights
Based on these findings, the following insights were generated.
- The application form favored a particular type of experience and communication style that did not accurately identify skills or achievement among diverse applicants, rendering it biased and inappropriate for this program.
- Suggestions to address equity were deficit-minded, centering on "fixing" the student rather than on systemic solutions.
- Reviewers overestimated their capacity to review candidates objectively and did not see the fundamental deficiencies in the review process they had created.
Opportunities
These insights pointed to the following strategies for improving equity in the review process.
- Create and implement a revised application form that centers on measurable skills.
- Build an applicant scoring tool that automatically conducts initial reviews based on measurable skills (a sketch of how such a tool might work appears after this list).
- Institute a blind review process that shields applicant demographics from reviewers.
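To make the scoring-tool and blind-review strategies concrete, here is a minimal sketch of how rubric-based automatic scoring and demographic blinding might fit together. The rubric items, weights, and field names are hypothetical illustrations, not the actual IHPCSS instrument or selection criteria.

```python
# Minimal sketch of rubric-based automatic scoring with demographic blinding.
# Rubric items, weights, and field names are hypothetical illustrations,
# not the actual IHPCSS application instrument.

# Demographic fields withheld to support blind review.
DEMOGRAPHIC_FIELDS = {"name", "gender", "race_ethnicity", "country"}

# Measurable skills and their relative weights in the initial score.
RUBRIC_WEIGHTS = {
    "parallel_programming": 0.40,  # e.g., self-rated MPI/OpenMP experience, 0-5
    "hpc_system_use": 0.35,        # e.g., normalized cluster usage, 0-5
    "domain_coursework": 0.25,     # e.g., relevant courses completed, capped at 5
}

def blind(application: dict) -> dict:
    """Return a copy of the application with demographic fields removed."""
    return {k: v for k, v in application.items() if k not in DEMOGRAPHIC_FIELDS}

def initial_score(application: dict) -> float:
    """Weighted sum over rubric items; missing items contribute zero."""
    return sum(
        weight * float(application.get(item, 0))
        for item, weight in RUBRIC_WEIGHTS.items()
    )

def rank_applicants(applications: list[dict]) -> list[tuple[str, float]]:
    """Blind each application, score it, and rank by descending score."""
    scored = [
        (app.get("applicant_id", "unknown"), initial_score(blind(app)))
        for app in applications
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    sample = [
        {"applicant_id": "A1", "parallel_programming": 4, "hpc_system_use": 3,
         "domain_coursework": 5, "gender": "F"},
        {"applicant_id": "A2", "parallel_programming": 2, "hpc_system_use": 5,
         "domain_coursework": 3, "gender": "M"},
    ]
    for applicant_id, score in rank_applicants(sample):
        print(f"{applicant_id}: {score:.2f}")
```

The key design choice in a setup like this is that blinding happens before scoring, so neither the automated pass nor the reviewers who receive the ranked list see demographic fields.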
Evaluation Impact
After gaining buy-in from the program, I constructed a new application form and built an automated scoring tool for the committee to integrate into its process. Immediate impacts included a 30% increase in participation by women and a 33% increase in participation by US Black and Latinx students following the introduction of these new products. Furthermore, gender-based selection bias fell to undetectable levels, and the review process itself was scaled up by redistributing effort from reviewers to automated processes.
Additional Resources
A copy of the full evaluation report from 2015 is available here (PDF). Note, however, that the evaluation is ongoing and this case study has been updated to reflect recent outcomes.
[1] Greene, J. C., DeStefano, L., Burgon, H., & Hall, J. (2006). An Educative, Values-Engaged Approach to Evaluating STEM Educational Programs. New Directions for Evaluation, 2006(109), 53–71. https://doi.org/10.1002/ev.178