Patterns of genAI bias in guiding prospective undergraduate students: a study of UK neuroscience programmes
Potter, H. G.
Generative artificial intelligence (genAI) tools are increasingly used by prospective higher education (HE) applicants seeking guidance on university and programme selection. Despite rapidly expanding use, little is known about how genAI systems may introduce or amplify bias in undergraduate admissions decision-making. Here, we systematically examined patterns of bias across three widely used genAI chatbots (ChatGPT, Copilot, Gemini), using neuroscience as a representative UK undergraduate programme. We constructed 216 prompts that varied by applicant characteristics (e.g. gender, study type, academic attainment). Each prompt was submitted to all three chatbots, generating 648 responses and 3240 individual programme recommendations. Responses underwent text analysis (e.g. n-grams, gender-coded language), and recommended programmes were assessed against national HE markers of esteem (REF21, TEF23, NSS24). Applicant grades and priorities produced the strongest effects on genAI outputs. Higher-grade applicants and those prioritising research received significantly more masculine-coded language, independent of applicant gender. N-gram patterns also diverged: high-grade prompts more frequently elicited terms relating to excellence and research intensity, whereas lower-grade prompts produced greater emphasis on widening access. Recommendations were systematically skewed, with higher grades, private schooling, and research-focused priorities increasing the likelihood of recommending elite institutions and programmes with higher entry requirements. Critically, the gender-coded language of outputs predicted institutional characteristics: masculine-coded responses were associated with recommendations featuring higher entry thresholds and stronger research performance, while feminine-coded responses favoured institutions with higher student satisfaction. These findings reveal clear, systematic biases in how genAI guides prospective HE applicants. Such biases risk reinforcing existing educational and socioeconomic inequalities, underscoring the need for transparency, regulation, and oversight in the use of genAI within HE decision-making.

Highlights
- GenAI is widely used by HE applicants despite little study of its biases.
- 216 prompts across 3 chatbots generated 3240 programme suggestions.
- Grades and priorities drove major shifts in language and recommendations.
- Gender-coded wording mapped onto research strength and entry standards.
- GenAI biases may reinforce inequalities in HE admissions decision-making.
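To make the scale of the design concrete, the sketch below reproduces the counting implied by the abstract (216 prompts submitted to 3 chatbots yields 648 responses; 5 recommendations per response yields 3240) alongside a toy gender-coded language score. The factor levels and wordlists are illustrative assumptions only: the abstract does not specify the prompt grid or the lexicons, though gender-coded analyses of this kind typically follow the masculine/feminine wordlists of Gaucher et al. (2011).

```python
# Minimal sketch of the study's counting logic and a gender-coded
# language score. Factor levels and wordlists below are assumptions
# for illustration, not the paper's actual materials.
import re
from itertools import product

# Hypothetical applicant factors (the paper varies e.g. gender, study
# type, attainment, schooling, and priorities; exact levels are assumed).
factors = {
    "gender":     ["female", "male", "unspecified"],
    "attainment": ["A*A*A*", "ABB", "CCD"],
    "schooling":  ["state", "private"],
    "priority":   ["research", "teaching", "satisfaction"],
    "study_type": ["full-time", "part-time"],
    "context":    ["first-in-family", "not-stated"],
}
prompts = list(product(*factors.values()))
assert len(prompts) == 216        # 3*3*2*3*2*2, matching the reported prompt count

chatbots = ["ChatGPT", "Copilot", "Gemini"]
n_responses = len(prompts) * len(chatbots)   # 648 responses
n_recommendations = n_responses * 5          # 3240 (5 programmes per response)

# Toy gender-coded score in the style of Gaucher et al. (2011):
# masculine-coded hits minus feminine-coded hits per response.
# These short wordlists are placeholders for the published lexicons.
MASCULINE = {"competitive", "leading", "ambitious", "driven", "independent"}
FEMININE  = {"supportive", "community", "collaborative", "nurturing", "together"}

def gender_code_score(text: str) -> int:
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(t in MASCULINE for t in tokens) - sum(t in FEMININE for t in tokens)

print(n_responses, n_recommendations,
      gender_code_score("A competitive, research-leading supportive community"))
```

A positive score marks a masculine-coded response and a negative score a feminine-coded one, which is the directionality the abstract's language findings imply.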