Published: Sep 15, 2023
Understanding the Algorithm of Thoughts
Microsoft researchers recently released their Algorithm of Thoughts method, a new prompting technique that evaluates each step (or "thought") the LLM takes from the instruction onward, helping it avoid an early, disastrous wrong turn.
1. Introduction
The “Algorithm of Thoughts” (AoT) is a groundbreaking approach that seeks to harness the power of LLMs in a unique way. Drawing inspiration from both human reasoning and algorithmic methodologies, AoT aims to fuse these elements to enhance the problem-solving capabilities of LLMs. Just as humans instinctively draw upon past experiences when navigating complex problems, AoT enables LLMs to reference prior steps, breaking through the barriers of human working memory. This approach signifies a new paradigm of in-context learning, where the LLM doesn’t just imitate an algorithm’s iterative thinking but infuses its own intuition to achieve remarkable efficiency.
2. The Basics of AoT
2.1. What is AoT?
AoT is a technique that allows LLMs to generate solutions by referencing their previous steps. It’s like a blend of human intuition and algorithmic precision. Imagine trying to solve a jigsaw puzzle. Instead of randomly placing pieces, you’d remember which pieces fit together and use that knowledge to guide your next move. That’s what AoT does, but for computational problems!
2.2. How Does It Work?
AoT operates in two phases:
- Exploration Phase: The model generates multiple potential solutions.
- Exploitation Phase: The model refines these solutions based on feedback.
This approach is inspired by how humans weigh the pros and cons before making a decision. AoT evaluates multiple paths before settling on an answer, ensuring broad consideration rather than a narrow focus.
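As a rough illustration of this explore-then-refine loop, here is a minimal Python sketch (my own, not code from the paper), assuming a hypothetical `llm(prompt)` helper that stands in for any completion API:

```python
def llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to an LLM completion endpoint."""
    raise NotImplementedError

def solve_with_aot(problem: str, n_candidates: int = 3) -> str:
    # Exploration phase: generate several candidate solution paths.
    candidates = [
        llm(f"Propose a step-by-step solution path for: {problem}")
        for _ in range(n_candidates)
    ]
    # Exploitation phase: have the model critique the candidates,
    # discard dead ends, and refine the most promising path.
    listing = "\n\n".join(
        f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    return llm(
        f"Problem: {problem}\n\n{listing}\n\n"
        "Evaluate these candidates, discard infeasible ones, and refine "
        "the most promising path into a final solution."
    )
```

Keep in mind that in the paper itself the exploration and refinement unfold within a single generation, guided by the in-context example, rather than through an external loop like this.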
3. AoT in Action: The Game of 24
To truly grasp the power of the Algorithm of Thoughts (AoT), let’s delve into a practical experiment: the Game of 24. This game is a classic arithmetic puzzle where players are given four numbers and must use each number exactly once, along with basic arithmetic operations (+, -, *, /), to arrive at the number 24.
3.1. The Experiment
Researchers used this game to test the effectiveness of AoT. They compared three different approaches:
- Standard Prompting: The model is directly asked to solve the problem. See the basics of Few-Shot and Zero-Shot Prompting.
- Chain-of-Thought (CoT): The model sketches out successive steps leading to the final solution. An extension is the Tree of Thoughts prompting technique.
- Algorithm of Thoughts (AoT): The model integrates the search process, using markers to guide subtree exploration.
For instance, given the numbers 8, 6, 4, 4:
- Standard Prompting might directly provide the answer: `(4 + (8 - 6)) * 4 = 24`.
- CoT would detail the steps: `8 - 6 = 2`, then `4 + 2 = 6`, and finally `6 * 4 = 24`.
- AoT would explore various paths, such as `4 - 4`, `8 - 6`, and then `4 + 2`, before arriving at the solution `6 * 4 = 24`.
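To make the underlying search concrete, here is a small brute-force Game of 24 solver (my own illustration, not code from the paper). It performs the exhaustive depth-first exploration over "first operations" that AoT asks the model to carry out in plain text:

```python
def solve24(items):
    """items: list of (value, expression) pairs. Depth-first search over
    every way to combine two numbers with +, -, *, / until a single
    remaining expression evaluates to 24."""
    if len(items) == 1:
        value, expr = items[0]
        return expr if abs(value - 24) < 1e-6 else None
    for i in range(len(items)):
        for j in range(len(items)):
            if i == j:
                continue
            (a, ea), (b, eb) = items[i], items[j]
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            # Each "first operation" opens a subtree to explore.
            branches = [(a + b, f"({ea} + {eb})"),
                        (a - b, f"({ea} - {eb})"),
                        (a * b, f"({ea} * {eb})")]
            if abs(b) > 1e-6:  # avoid division by zero
                branches.append((a / b, f"({ea} / {eb})"))
            for value, expr in branches:
                solution = solve24(rest + [(value, expr)])
                if solution:
                    return solution
    return None

print(solve24([(8, "8"), (6, "6"), (4, "4"), (4, "4")]))
# Prints one valid expression, e.g. (4 * (4 + (8 - 6)))
```

Where this solver mechanically enumerates every branch, AoT's premise is that the LLM can prune unpromising branches intuitively and reach a solution with far fewer expansions.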
3.2. Insights from the Experiment
The experiment showcased the unique approach of AoT. While standard prompting aims for a direct answer and CoT provides a step-by-step breakdown, AoT's in-context example integrates the search process itself. It uses "first operations" as markers to guide subtree exploration, evaluating multiple paths and referencing prior steps to arrive at a solution.
What’s fascinating is that AoT doesn’t just imitate an algorithm’s iterative thinking. It infuses its own “intuition” to achieve a search efficiency that even surpasses the algorithm itself. This means that AoT can explore various solutions, sieve out infeasible options, and refine its approach—all within its iterative generation cycle.
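As a hypothetical sketch of what such an in-context example might look like for the Game of 24 (paraphrasing the idea only; the actual prompts in the paper are longer and worded differently):

```python
# Hypothetical AoT-style in-context example; not the paper's exact prompt.
AOT_EXAMPLE = """\
Input: 8 6 4 4
Trying first operations:
1. 8 - 6 = 2 (left: 2 4 4)
   - 2 + 4 = 6 (left: 6 4)
     - 6 * 4 = 24 -> found it!
Answer: (4 + (8 - 6)) * 4 = 24
"""
```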
3.3. Why This Matters
This experiment underscores the potential of AoT. By capitalizing on the recursive capabilities of LLMs, AoT emulates a hybrid human-algorithmic approach. It signifies a new paradigm of in-context learning, moving beyond the traditional “supervised-learning” mold to a structure that covers the problem, the search process, and the solution.
4. Proposed Strengths of AoT
4.1. Systematic Reasoning
According to the authors, AoT helps LLMs tackle problems more systematically, much like a detective piecing together clues to solve a mystery.
4.2. Learning from Experience
With AoT, LLMs can learn from their mistakes and successes, similar to how we learn from experience.
4.3. Versatility
AoT is versatile and can be applied to various tasks, from math problems to code generation. It's like a Swiss Army knife for LLMs!
5. Notes From the Paper
- The paper emphasizes the potential of LLMs and their ability to generalize from limited examples. It's like having a sponge that soaks up knowledge and then squeezes out solutions when needed.
- AoT's approach is inspired by how humans think and reason. Just as we might ponder a decision by weighing pros and cons, AoT evaluates multiple paths before settling on an answer.
- While AoT has shown promise, there is room to refine its efficiency and to understand it better from a theoretical angle.
Conclusion
The Algorithm of Thoughts is an interesting new prompting approach. I think it's important to note that the experiments involved a lot of handholding. So, while the results are impressive, we should try it on a few more real-world examples and gather results from there.
1. Scope of Application
While AoT can simulate an algorithm's execution, this might be a challenge for models other than GPT-4. Earlier models have struggled with tasks that require maintaining state, such as simple counting. Moreover, while the Game of 24 and crossword puzzles are relatively simple, more complex problems like maximum bipartite matching might pose challenges. The simulation log might not fit into the context window as an in-context example, limiting the approach's scalability.
2. Practicality and Efficiency
Is AoT practical? If we’re encoding an entire algorithmic simulation, why not use the exact algorithms instead? Why employ GPTs to simulate it? While the exploration of whether neural networks can simulate structure-sensitive algorithms is undoubtedly fascinating, there’s a fine line between innovation and redundancy. Some argue that this approach might be “borderline cheating” since the step-by-step simulation needs to be provided explicitly.
3. Noisy Domains and Real-world Application
AoT’s application in noisy domains is another area of concern. Given that real-world data is often messy and unstructured, the precision required for algorithmic simulations might not be as effective in such environments.
4. Cost Efficiency vs. Optimized Prompting
A perspective to consider is that AoT might be more about cost efficiency than optimized generic prompting. It could be a method for generating cheaper synthetic data compared to using the Tree of Thoughts. If this is the primary advantage, it’s crucial to weigh the benefits against the potential limitations.
5. The Future of AoT
Despite the critiques, there’s a consensus that LLM-guided search, like AoT, can be a reasonable approach for tasks where progress can’t be easily evaluated numerically.