Very Convincing Large Language Models

I recently wrote about my thoughts on generative AI. The quality and capacity of Large Language Models (LLMs) continue to command our attention, and companies are trying to stick them just about everywhere. In apps (like Notion) and in little computers you can pin to your chest like the Apple Brooch in that episode of Big Mouth (like Humane).

Last month I came across this study on arXiv which looked at the ability of LLMs to generate persuasive texts for people. The full paper runs 25 pages (do skim the start and end, though). The authors had both humans and an LLM (Claude) write a couple hundred words of persuasive text on contentious issues (e.g. police always wearing body cameras when on duty).

The authors compared the human- and machine-written texts along a handful of metrics (readability, lexical complexity, sentiment, and morality).
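If you want a feel for what metrics like these can look like in practice, here's a toy Python sketch. To be clear, this is not the authors' pipeline: the crude Flesch reading-ease heuristic, the regexes, and the two example texts are all my own stand-ins for "readability" and "lexical complexity".

```python
import re


def flesch_reading_ease(text):
    """Very rough Flesch Reading Ease: higher scores mean easier reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def count_syllables(word):
        # Crude heuristic: count runs of vowels, minimum one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))


def type_token_ratio(text):
    """Lexical diversity: unique words over total words (0 to 1)."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return len(set(words)) / len(words) if words else 0.0


# Toy stand-ins for the study's stimuli, written by me for illustration.
human_text = ("Police should wear body cameras. It keeps them honest "
              "and it keeps the public honest too.")
llm_text = ("Mandatory body-worn cameras foster accountability, deter "
            "misconduct, and help rebuild fragile public trust.")

for label, text in [("human", human_text), ("llm", llm_text)]:
    print(f"{label}: reading ease {flesch_reading_ease(text):.1f}, "
          f"type-token ratio {type_token_ratio(text):.2f}")
```

The paper's actual measures (and the sentiment and morality coding) are more involved than this; the point is just that "more lexically complex" is something you can put a number on.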

They found the LLM-generated text to be more convincing, which they characterise as being equally emotional, but more lexically complex and morally grounded than the human text:

morally laden language significantly impacts attention and can be highly persuasive... [LLMs demonstrated] a strategic use of moral-emotional language that aligns with the persuasive strategy of negative bias, where negative information tends to influence judgments more than equivalent positive information (Robertson et al., 2023; Rozin & Royzman, 2001)... the mere emotional charge of the language may not be as pivotal as the moral framing of the content, aligning with the view that morality can be a stronger driver of persuasion than emotions alone (van Bavel et al., 2024).

I worry a lot about misinformation.

I worry about the companies that got corruptingly rich by, sort of accidentally, creating a miasma for disinformation and misinformation to travel through. I worry that they spend this food-poverty-ending wealth on, of all things, little computers to strap to your head so you can play with imaginary Lego.

I worry that don't-fuck-about magnitude problems like the climate crisis, radicalisation to violence, and emerging pandemics attract, breed, and circulate patently false information.

I do not think that as people get radicalised they are sitting with opposing arguments in each hand and making an informed choice between "human" and "lizard" for the question "what is the royal family?"

We do not need machines that are more convincing, especially when the little black box is just autocomplete. More so than any human, the LLM is entirely a product of its environment - train it on strong, false information and you'll get strong, false, very convincing arguments.

Need I cast your mind back five months to hbomberguy's explosive video on plagiarism, and how widespread, blatant, and unnoticed intellectual dishonesty was for several (not small) content creators?

Sure, YouTube videos definitely are fuck-about magnitude. But have an LLM reword some stolen text, or generate some convincing but false information, speak it into your Shure microphone at your desk with some LED backlighting, and what you've got is a semblance of authenticity, and enough distance from whatever generated the content for it to pass as human, and emotional.

I think videos can be a lot more attention-commanding than emotionally laden prose. And all the cool social medias are 80+% video.

I worry that LLMs are going to be another tool for the radicalisation and division of a lot of people, and that the platforms cannot realistically be expected to police this stuff. They have no incentive to do it.

I do not know how we get through this, but unlike when they tried to sell us 3D TVs or Segways or NFTs or face computers, this time the change is just sort of happening.

And once more, from what I can see, the nerds selling us the utopia do not have a strong set of anti-dystopia tools.
