The philosopher Harry Frankfurt defined a bullshitter as someone who does not care whether what they say is true or false; they care only about persuasion and rhetorical effect. In Frankfurt's view, this indifference makes the bullshitter a greater threat to truth than the liar, who at least must track the truth in order to deny it.
LLMs can be seen as bullshitters in this sense: their only goal is to generate plausible text and, in the case of chatbots, they can easily be nudged toward false narratives. They have no intentions of their own, yet their use may have dire consequences when the companies that train and deploy them care chiefly about persuading readers.
- Keywords: language-models
- Related: @bender2020