Large Language Models (LLMs) are powerful tools for summarizing data on well-studied topics. However, they struggle to analyze the new topics that constantly emerge in our societies, which are absent from their training data and for which real-time data may be difficult to collect.
Additionally, LLMs fall short when the data available online for a topic is biased, for example when certain viewpoints are under-represented.
LLMs also suffer from hallucination, meaning they occasionally produce plausible-sounding but factually incorrect output.
Despite these limitations, LLMs may generate a valuable set of initial arguments. Humans can use these arguments as a starting point, or study the data returned by LLMs to create better topics.