https://arxiv.org/pdf/2305.10601.pdf
Language models are increasingly used for general problem solving across a wide range of tasks, but their inference is confined to token-level, left-to-right decoding, which limits them on tasks that require exploration or strategic lookahead.
The "Tree of Thoughts" (ToT) framework is introduced to address these limitations by allowing models to explore coherent units of text ("thoughts") for deliberate decision-making, considering multiple reasoning paths, self-evaluating choices, and global choices.
Experiments with ToT demonstrate significant improvements in problem-solving abilities for tasks involving planning or search, such as Game of 24, Creative Writing, and Mini Crosswords.
In Game of 24, where four given numbers must be combined with basic arithmetic operations to reach 24, ToT achieved a 74% success rate, compared to 4% for GPT-4 with chain-of-thought prompting. The code repository with all prompts is available at https://github.com/ysymyth/tree-of-thought-llm.
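To make the task concrete (this checker is for reference only and is not part of the paper's method), a brute-force search over orderings and operators finds a valid expression when one exists; fractions avoid floating-point error in intermediate divisions.

```python
import itertools
from fractions import Fraction

def solve_24(nums: list[int]) -> str | None:
    """Try every pairing of the numbers and every +, -, *, / combination;
    return one expression that evaluates to 24, or None if none exists."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b if b != 0 else None,  # skip division by zero
    }

    def search(vals: list[tuple[Fraction, str]]) -> str | None:
        if len(vals) == 1:
            return vals[0][1] if vals[0][0] == 24 else None
        # Pick an ordered pair (covers non-commutative - and /), combine it,
        # and recurse on the reduced list of values.
        for i, j in itertools.permutations(range(len(vals)), 2):
            rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
            for sym, fn in ops.items():
                result = fn(vals[i][0], vals[j][0])
                if result is None:
                    continue
                expr = f"({vals[i][1]} {sym} {vals[j][1]})"
                found = search(rest + [(result, expr)])
                if found:
                    return found
        return None

    return search([(Fraction(n), str(n)) for n in nums])

print(solve_24([4, 9, 10, 13]))  # prints one valid expression, e.g. ((13 - 9) * (10 - 4))
```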