Rethinking The Bounds Of LLM Reasoning: Are Multi-agent Discussions The Key?

Qineng Wang, Zihao Wang, Ying Su, Hanghang Tong, Yangqiu Song · Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) · 2024

Recent progress in LLM discussion suggests that multi-agent discussion improves the reasoning abilities of LLMs. In this work, we reevaluate this claim through systematic experiments, proposing a novel group discussion framework to enrich the set of discussion mechanisms. Interestingly, our results show that a single-agent LLM with strong prompts can achieve almost the same performance as the best existing discussion approach across a wide range of reasoning tasks and backbone LLMs. We observe that multi-agent discussion outperforms a single agent only when there is no demonstration in the prompt. Further study reveals the common interaction mechanisms of LLMs during discussion.
