When augmented with retrieval, LMs sometimes overlook retrieved docs and hallucinate 🤖💭 To make LMs trust evidence more and hallucinate less, we introduce Context-Aware Decoding: a decoding algorithm that improves LMs' focus on their input contexts 📖 arxiv.org/pdf/2305.14739… #NAACL2024
1️⃣ Context-Aware Decoding simply contrasts output probabilities with and without the desired focus contexts and samples from this contrasted distribution 📊. 2️⃣ How well does it work? Without additional training, it improves pretrained LMs' faithfulness (14.3%📈 for LLaMA)
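The contrast step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the toy logit vectors, and the contrast-strength parameter `alpha` are all assumptions chosen for the sketch, which upweights tokens the context makes more likely and samples from the renormalized result.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def context_aware_decode(logits_with_ctx, logits_without_ctx, alpha=0.5):
    # Contrast next-token logits computed with vs. without the context.
    # alpha (hypothetical name) controls contrast strength; alpha=0
    # recovers ordinary decoding from logits_with_ctx.
    adjusted = (1 + alpha) * logits_with_ctx - alpha * logits_without_ctx
    return softmax(adjusted)

# Toy 3-token vocabulary where the context boosts token 1.
with_ctx = np.array([1.0, 3.0, 0.5])
without_ctx = np.array([1.0, 1.5, 0.5])
p = context_aware_decode(with_ctx, without_ctx, alpha=1.0)
print(p)  # token 1's probability is sharpened relative to plain softmax
```

Since the contrast only rescales logits, it composes with any sampling scheme (greedy, top-k, nucleus) applied to the adjusted distribution.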
@WeijiaShi2 Interesting work! All of Weijia's work is very inspiring to me, but I'm curious why it's only spreading now haha 😂