
System 2 Attention by Meta

What’s New

The research introduces System 2 Attention (S2A) for Large Language Models (LLMs) to address the weaknesses of soft attention in Transformers. S2A improves the handling of irrelevant or biased information by regenerating the input context so that the model attends only to its relevant parts.

Problem

Traditional Transformer-based LLMs, like LLaMA-2-70B-chat, often erroneously incorporate irrelevant details from their input context, leading to less factual outputs, especially in cases involving opinionated or extraneous information.

Solution

S2A works in two steps: it first prompts the LLM to regenerate the input context, filtering out irrelevant or biased parts, and then applies the LLM's reasoning to this refined context alone. The method draws inspiration from the deliberate 'System 2' mode of human cognition, allocating attention carefully in error-prone scenarios. For factual QA and long-form generation tasks, the S2A prompts explicitly emphasize factuality and objectivity. S2A is evaluated against a standard zero-shot baseline and an oracle prompt that contains only the relevant information.
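To make the two-step procedure concrete, here is a minimal Python sketch of an S2A-style pipeline. The `llm` callable and the prompt wording are placeholders for illustration, not the exact prompts or API from the paper; the point is the structure: regenerate the context first, then answer from the regenerated context only.

```python
# Minimal sketch of an S2A-style two-step pipeline (illustrative prompts, not the
# paper's exact wording). `llm` is any callable mapping a prompt string to a
# completion string, e.g. a wrapper around LLaMA-2-70B-chat.
from typing import Callable

REGENERATE_PROMPT = (
    "Given the following text by a user, extract only the part that is relevant "
    "and unbiased for answering the question, removing opinions and irrelevant "
    "details.\n\nText:\n{context}\n\nExtracted relevant context:"
)

ANSWER_PROMPT = (
    "Answer the question using only the context below. Be factual and objective.\n\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)


def system2_attention(llm: Callable[[str], str], context: str, question: str) -> str:
    # Step 1: regenerate the context, keeping only the relevant, unbiased parts.
    filtered_context = llm(REGENERATE_PROMPT.format(context=context))
    # Step 2: answer the question conditioned on the regenerated context alone.
    return llm(ANSWER_PROMPT.format(context=filtered_context, question=question))
```

In the paper's setup, both steps are carried out by the same instruction-tuned model; only the prompting differs between the regeneration and answering stages.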

Results

On tasks involving opinionated or irrelevant content, S2A outperforms the baseline and closely matches the oracle prompt. For factual QA with opinionated input, S2A reaches 80.3% accuracy, nearly matching the oracle's 82.0%. It also produces more objective long-form generations and improves accuracy on math word problems, showing that filtering the context before reasoning yields more accurate LLM responses.
