This was another interesting finding from @Connor_Kissane:
- **Base** models trained today refuse like chat models: "I'm sorry, but I cannot provide assistance..."
- This even happened before ChatGPT, though the phrasing differs, e.g. Llama-1 says "I'm sorry, I'm not familiar..."