ChatGPT users have suspected the online tool has a left-wing bias since it was released in November – and now a scientific study by the University of East Anglia confirms this.
The chatbot is a large language model (LLM) that has been trained on a massive amount of text data, allowing it to generate eerily human-like text in response to a given prompt. One concern is that text generated by LLMs such as ChatGPT ‘can contain factual errors and biases that mislead users’, the research team say.