Mainstream LLMs do not take full advantage of memory, and their memory designs are heavily influenced by biological brains, whose approximate, error-accumulating recall cannot support complex reasoning.
To address these limitations, the paper draws inspiration from modern computer architectures and augments LLMs with symbolic memory for complex multi-hop reasoning: the memory is instantiated as a set of SQL databases, which the LLM manipulates by generating SQL instructions. The effectiveness of this framework is validated on a synthetic dataset requiring complex reasoning. More information is available on the project website: https://chatdatabase.github.io/.
ChatDB: Augmenting LLMs with databases as their symbolic memory
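The core loop described above, in which the LLM emits SQL that writes facts into the database and later queries them back, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the `generate_sql` helper and the toy `orders` schema are hypothetical stand-ins for the actual LLM prompting and schema design.

```python
import sqlite3

# Hypothetical stand-in for the LLM: in ChatDB the model itself translates a
# natural-language request into SQL; here the mapping is hard-coded to keep
# the sketch self-contained.
def generate_sql(request: str) -> str:
    canned = {
        "remember the order":
            "INSERT INTO orders (customer, item, qty) VALUES ('Alice', 'apples', 3)",
        "how many apples did Alice buy?":
            "SELECT SUM(qty) FROM orders WHERE customer = 'Alice' AND item = 'apples'",
    }
    return canned[request]

# The SQL database acts as the symbolic memory: writes store facts exactly,
# and reads retrieve them without the lossy recall of neural memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, item TEXT, qty INTEGER)")

# Memory-write step: execute the model-generated SQL against the database.
conn.execute(generate_sql("remember the order"))
conn.commit()

# Memory-read step: a later question is answered by querying the stored
# facts rather than relying on the model's context window.
result = conn.execute(generate_sql("how many apples did Alice buy?")).fetchone()
print(result[0])  # -> 3
```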