Large Language Models (LLMs) are strong summarization tools and can be used to summarize data on well-studied topics. However, they struggle to analyze new topics that constantly emerge in our societies, topics that are missing from their training data and for which real-time data may also be hard to collect.
Additionally, LLMs fall short if the data available for a topic on the internet is biased, for example, when certain viewpoints are poorly represented online.
A related point is that LLMs are known to have limited reasoning capabilities (see, for example, here). Some researchers attribute this limitation to the structure of neural networks, suggesting that they are not inherently suited to reasoning. As mentioned earlier, this implies that LLMs are not adept at developing new arguments from scratch.
Needless to say, LLMs also suffer from the hallucination problem: they occasionally produce confidently stated but incorrect results.
Despite these limitations, LLMs can, in some cases, generate a useful set of initial arguments, which is valuable. Humans can then refine these arguments, or reflect on the retrieved data to develop more interesting topics.
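
To make that last point concrete, here is a minimal sketch of the human-in-the-loop pattern described above: the model drafts candidate arguments, and a person prunes, fact-checks, and refines them. It assumes the `openai` Python package and an OpenAI-compatible chat endpoint; the model name and prompts are illustrative placeholders, not a prescription.

```python
# Minimal sketch: LLM drafts initial arguments, a human refines them.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name and prompts below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_arguments(topic: str, n: int = 5) -> str:
    """Ask the model for a first pass of distinct arguments on a topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a careful analyst."},
            {"role": "user", "content": f"List {n} distinct arguments about: {topic}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = draft_arguments("regulating a newly emerging technology")
    print(draft)
    # The human step happens here: read the draft, discard weak or
    # hallucinated points, and refine the rest before relying on them.
```

The design choice is deliberate: the model's output is treated only as raw material, and the filtering and refinement remain a human responsibility, precisely because of the bias, reasoning, and hallucination issues noted above.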