Original research into how AI systems choose what to cite. We study large language model source selection, citation behaviour, and the emerging science of Generative Engine Optimization.
Investigating how large language models select, rank, and present sources in generated responses across platforms including ChatGPT, Gemini, Perplexity, and Claude.
Mapping the emerging patterns in how AI-powered search engines retrieve, synthesise, and attribute information differently from traditional search.
Studying how content structure, semantic clarity, and technical implementation influence whether AI systems surface and cite a given source.
Developing reproducible methodologies for Generative Engine Optimization research that meet rigorous standards of evidence and transparency.
An analysis of how different LLM platforms select and present sources when answering identical queries, revealing significant variance in citation behaviour.
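One simple way to quantify the cross-platform variance described above is pairwise Jaccard overlap between the sets of sources each platform cites for the same query. The sketch below is a minimal illustration of that measure; the platform names and cited URLs are invented example data, not results from our studies.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two citation sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def pairwise_citation_overlap(citations: dict) -> dict:
    """Jaccard overlap for every pair of platforms.

    `citations` maps platform name -> set of source URLs that platform
    cited in its response to one identical query.
    """
    return {
        (p1, p2): jaccard(citations[p1], citations[p2])
        for p1, p2 in combinations(sorted(citations), 2)
    }

# Invented example data for a single query:
example = {
    "chatgpt": {"a.com", "b.com", "c.com"},
    "perplexity": {"b.com", "c.com", "d.com"},
    "gemini": {"e.com"},
}

overlaps = pairwise_citation_overlap(example)
```

Low overlap scores across many queries would indicate exactly the kind of platform-dependent citation behaviour the analysis examines.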
Examining which content architecture patterns most consistently correlate with inclusion in LLM-generated responses across commercial AI platforms.
Proposing standardised methodology for studying generative engine optimization, including query design protocols and cross-platform testing frameworks.
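A query design protocol of the kind proposed here might cross each topic with several phrasing templates, so that wording effects can later be separated from topic effects when responses are compared across platforms. The sketch below is a hypothetical illustration under that assumption; the topics and templates are invented placeholders.

```python
def build_query_battery(topics, templates):
    """Cross each topic with each phrasing template, tagging every query
    so responses can later be grouped by topic or by phrasing."""
    return [
        {"topic": topic, "template_id": i, "query": tpl.format(topic=topic)}
        for topic in topics
        for i, tpl in enumerate(templates)
    ]

# Invented examples; a real protocol would draw on a vetted topic list.
topics = ["solar panel efficiency", "intermittent fasting"]
templates = [
    "What does research say about {topic}?",
    "Summarise the current evidence on {topic}.",
]

battery = build_query_battery(topics, templates)
```

Running the same tagged battery against each platform gives a paired design: every (topic, phrasing) cell is answered by every platform, which is what makes the cross-platform comparison controlled.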
Our publications are freely available. Explore our research into LLM citation behaviour and generative engine optimization.