SomaticPirate 7 hours ago

A useful reminder that just because an LLM "appears" to be thinking and reasoning doesn't mean it actually is. If it hasn't seen something similar, or "in distribution," before, then it typically doesn't perform well.