Socially Grounded Exemplars Improve Synthetic Conversations for Health-Related Social Needs Navigation
Hussain, S.-A.; Jackson, D. I.; Thotapalli, S.; McClellan, M. B.; Stanco, M.; Varney, G.; Gleeson, S.; Nugroho, F.; Leever, W.; Fosler-Lussier, E.; Sezgin, E.
Health-Related Social Needs (HRSNs) significantly impact health outcomes, yet traditional care often fails to address them effectively. While conversational agents offer scalable support, their deployment is hindered by privacy risks and a lack of specialized training data for clinical applications. Synthetic data generation offers a way to address this gap: standard pipelines prompt LLMs with structured user personas, comprising demographics, constraints, and goals, to emulate dialogues. However, current methods relying on coarse demographic attributes often yield generic or stereotyped personas that lack real-world nuance. To improve the realism of synthetic data, we introduce Socially Grounded Exemplars (SGEs), which translate abstract persona attributes into granular, conversational descriptors. We implemented a two-stage pipeline using GPT-4o to generate SGEs, which then grounded synthetic dialogue generation under various prompting strategies. We evaluated the approach using automatic diversity metrics (Vendi Score) and blinded pairwise preference ratings by community behavioral health specialists (CBHS). Validation confirmed the feasibility of input generation, with GPT-4o achieving an 85% term acceptability rate for SGEs. In conversation generation, dynamic SGEs significantly improved lexical diversity, achieving a Vendi Score of 289.41 compared to 252.36 for the control baseline. CBHS ranked the model combining dynamic SGEs with implicit name-based cueing highest (Bradley-Terry score: 0.753), surpassing both the SGE-only model (0.663) and the explicit demographics model (0.348). Raters favored the name-augmented model for "Specificity & Natural Authenticity" (30.0%), while explicit demographic labeling reduced perceived authenticity. We show that SGEs leverage LLM parametric knowledge to produce diverse synthetic data, surpassing the limitations of rigid demographic ontologies.
Our findings indicate that implicit cueing through names yields more authentic representations than explicit labeling, reducing the risk of stereotyped outputs. This framework supports the creation of privacy-preserving conversational datasets that inform downstream tasks (e.g., evaluation, agentic workflows, and model distillation) in sensitive healthcare contexts.
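The Vendi Score reported above measures diversity as the "effective number of distinct items" in a sample: the exponential of the Shannon entropy of the eigenvalues of a normalized similarity matrix. The sketch below is a minimal illustration of that computation; the paper's actual choice of text embedding or kernel for building the similarity matrix is not specified here, so the inputs shown are hypothetical.

```python
import numpy as np

def vendi_score(K: np.ndarray) -> float:
    """Vendi Score of an n x n positive semidefinite similarity
    matrix K with unit self-similarity (K[i, i] == 1).

    Computed as exp(Shannon entropy) of the eigenvalues of K / n,
    so it ranges from 1 (all items identical) to n (all distinct).
    """
    n = K.shape[0]
    eigvals = np.linalg.eigvalsh(K / n)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    entropy = -np.sum(eigvals * np.log(eigvals))
    return float(np.exp(entropy))

# Four fully distinct items (identity similarity) -> score ~4.0
print(vendi_score(np.eye(4)))
# Four identical items (all-ones similarity) -> score ~1.0
print(vendi_score(np.ones((4, 4))))
```

Under this reading, the reported jump from 252.36 to 289.41 means the dynamic-SGE corpus behaves like roughly 37 more "effectively distinct" conversations than the control baseline.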
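The Bradley-Terry scores used to rank models from the CBHS pairwise preferences can be fit with a simple iterative (Zermelo/MM) procedure. The sketch below is a generic illustration, not the paper's implementation; the win counts and the sum-to-one normalization are assumptions for the example (the reported scores 0.753 / 0.663 / 0.348 need not sum to one).

```python
def bradley_terry(wins, n_models, iters=200):
    """Fit Bradley-Terry strengths from pairwise preference counts.

    wins[i][j] = number of times model i was preferred over model j.
    Under the model, P(i preferred over j) = p[i] / (p[i] + p[j]).
    Strengths are normalized to sum to 1 here (an assumption).
    """
    p = [1.0] * n_models
    for _ in range(iters):
        new_p = []
        for i in range(n_models):
            w_i = sum(wins[i][j] for j in range(n_models) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_models) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]
    return p

# Hypothetical counts: model 0 preferred over model 1 in 8 of 10 trials
p = bradley_terry([[0, 8], [2, 0]], 2)
print(p)  # implied P(0 preferred over 1) = p[0] / (p[0] + p[1])
```

With more than two models, the same fit pools all pairwise comparisons into a single strength per model, which is what allows the three prompting strategies to be placed on one ranking scale.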