Bad Actors are Grooming LLMs to Produce Falsehoods

GenAI-powered chatbots’ lack of reasoning can directly contribute to the nefarious effects of LLM grooming: the mass-production and duplication of false narratives online with the intent of manipulating LLM outputs. As we will see, a form of simple reasoning that might in principle throttle such dirty tricks is AWOL. Here’s an example of grooming. In February 2025, ASP’s original report on LLM grooming described the apparent attempts of the Pravda network – a centralized collection of websites that spread pro-Russia disinformation – to taint generative models with millions of bogus articles published yearly.
it's indirect grooming: the propaganda articles also exist on their own, and some people might read them directly, without any LLM in between. are the people who are susceptible to LLM misinformation (i.e. who take LLM output uncritically at face value) also the ones who have always been susceptible to misinformation through traditional channels?
people are often unable to identify satire, and they're often not even to blame, because some self-labelled satire websites are just creating fabrications[^1] that are not satire in the classical sense, then labelling them satire in fine print on a hidden about page.
what are the solutions? train independent models to recognize misinformation, propaganda, and satire, and deploy them as pre-training data filters or pre-output filters? a rough sketch of the pre-output idea is below.
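a minimal sketch of what such a pre-output filter could look like, assuming a separately trained detector is available as a Hugging Face text-classification pipeline; the model name, label set, and threshold are placeholders i made up, not real artifacts:

```python
# sketch of a pre-output filter: score the LLM's answer with an independent
# detector before returning it. the model name below is a placeholder.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="your-org/misinfo-propaganda-satire-detector",  # hypothetical checkpoint
)

# labels the hypothetical detector was trained to emit
BLOCK_LABELS = {"misinformation", "propaganda", "unlabelled-satire"}

def filter_output(generated_text: str, threshold: float = 0.8) -> str:
    """return the LLM output only if the detector does not flag it."""
    result = detector(generated_text, truncation=True)[0]
    if result["label"] in BLOCK_LABELS and result["score"] >= threshold:
        return "[withheld: output flagged as likely misinformation/propaganda]"
    return generated_text
```

the same classifier could in principle run over the pre-training corpus instead, dropping flagged documents before they ever reach the model, though at web scale that mostly shifts the problem to how good (and how gameable) the detector itself is.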
[^1]: usually not very funny, but that depends on your sense of humor i guess.