Google Researching How to Use AI to Build Consensus Among Plebs

The problem with the robots is that they are not evil. They are literally just information, and information cannot be evil.

Turning the robots evil is a complicated process, because they get confused when you tell them to lie or otherwise deceive people.

The Guardian:

Artificial intelligence could help reduce some of the most contentious culture war divisions through a mediation process, researchers claim.

Experts say a system that can create group statements reflecting majority and minority views can help people find common ground.

We already know everyone’s views.

People go around saying their views constantly.

Prof Chris Summerfield, a co-author of the research from the University of Oxford, who worked at Google DeepMind at the time the study was conducted, said the AI tool could have multiple purposes.

Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the “Habermas Machine” – an AI system named after the German philosopher Jürgen Habermas.

The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected.

Participants can also feed critiques of this initial group statement back into the Habermas Machine, which then generates a second collection of AI-generated statements that can again be ranked, and a final revised text selected.
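Stripped of the branding, the loop being described is pretty simple. Here is a rough sketch of the two-round process, treating the language model call and the participant ratings as black boxes; the function names and signatures are invented for illustration and are not DeepMind's code:

```python
# Minimal sketch of the two-round mediation loop described above.
# Hypothetical stand-ins: the real Habermas Machine uses a fine-tuned LLM;
# draft_statements, collect_ratings, and collect_critiques are placeholders.

from typing import Callable, List

def mediate(
    opinions: List[str],
    draft_statements: Callable[[List[str]], List[str]],  # LLM: views -> candidate group statements
    collect_ratings: Callable[[List[str]], List[float]],  # participants rate each candidate
    collect_critiques: Callable[[str], List[str]],        # participants critique the winning draft
) -> str:
    # Round 1: generate candidate group statements from individual views,
    # then keep the one with the greatest endorsement.
    candidates = draft_statements(opinions)
    ratings = collect_ratings(candidates)
    first_draft = candidates[ratings.index(max(ratings))]

    # Round 2: fold participant critiques back in, regenerate, and re-rank.
    critiques = collect_critiques(first_draft)
    revised = draft_statements(opinions + critiques)
    ratings = collect_ratings(revised)
    return revised[ratings.index(max(ratings))]
```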

The team used the system in a series of experiments involving a total of more than 5,000 participants in the UK, many of whom were recruited through an online platform.

In each experiment, the researchers asked participants to respond to topics ranging from the role of monkeys in medical research to religious teaching in public education.

In one experiment, involving about 75 groups of six participants, the researchers found the initial group statement from the Habermas Machine was preferred by participants 56% of the time over a group statement produced by human mediators. The AI-based efforts were also rated as higher quality, clearer, and more informative, among other traits.

Another series of experiments found the full two-step process with the Habermas Machine boosted the level of group agreement relative to participants’ initial views before the AI-mediation began. Overall, the researchers found agreement increased by eight percentage points on average, equivalent to four people out of 100 switching their view on a topic where opinions were originally evenly split.
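One way to square those two figures, and this is my reading rather than anything spelled out in the article: if a 100-person group starts evenly split and four people switch sides, the majority goes from 50 to 54 and the minority from 50 to 46, so the gap between the two camps widens by eight percentage points.

$$(54 - 46) - (50 - 50) = 8$$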

All this seems to be saying is that AI has a better writing capacity than most humans. It’s parsing the information and highlighting the points of agreement while downplaying disagreements.

It’s not a hugely important thing, but it’s a sign of things to come. The plan is to use these robots to manipulate people’s thoughts and beliefs, constantly, in all areas of life.