I’ve spent most of my career studying how technology can amplify human abilities, from enhancing physical dexterity to boosting cognitive skills. In recent years I’ve focused on how technology can help make human groups smarter, from small teams to large populations. And what I’ve found is that social media platforms are inadvertently doing the opposite — actively damaging our collective intelligence.
No, I’m not talking about the prevalence of low-quality content that insults our intellect. I’m also not talking about the rampant spread of misinformation and disinformation that deliberately deceives us. After all, these are not new problems; flawed content has existed throughout history, from foolish misconceptions to outright lies and propaganda.
Instead, I am talking about something more fundamental — a feature of social media that is damaging our intelligence whether the content is factual or fraudulent. To explain this, I need to take a step back and address a few points about human cognition. So, here goes …
We humans are information-processing machines, spending our lives observing our world and using those observations to build detailed mental models. We start from the moment of birth, exploring and sensing our surroundings, testing and modeling our experiences, until we can accurately predict how our own actions, and the actions of others, will impact our future.
Consider this example: An infant drops a toy and watches it fall to the ground; after doing that many times with the same result, the infant’s brain generalizes the phenomenon, building a mental model of gravity. That mental model will allow the infant to navigate their world, predicting how objects will behave when they are toppled or dropped or tossed into the air.
This works well until the infant experiences a helium balloon for the first time. They are astonished as their model of gravity fails and their brain has to adjust, accounting for these rare objects. In this way, our mental models become more and more sophisticated over time. This is called intelligence.
And for intelligence to work properly, we humans need to perform three basic steps:
(1) Perceive our world,
(2) Generalize our experiences,
(3) Build mental models.
The problem is that social media platforms have inserted themselves into this critical process, distorting what it means to “perceive our world” and “generalize our experiences,” which drives each of us to make significant errors when we “build mental models” deep within our brains.
No, I’m not talking about how we model the physical world of gravity. I’m talking about how we model the social world of people, from our local communities to our global society. Political scientists refer to this social world as “the public sphere” and define it as the arena in which individuals come together to share issues of importance, exchanging opinions through discussion and deliberation. It’s within the public sphere that society collectively develops a mental model of itself. And by using this model, we the people are able to make good decisions about our shared future.
Now here’s the problem: Social media has distorted the public sphere beyond recognition, giving each of us a deeply flawed mental model of our own communities. This distorts our collective intelligence, making it difficult for society to make good decisions. But it’s NOT the content itself on social media that is causing this problem; it’s the machinery of distribution.
Let me explain.
We humans evolved over millions of years to trust that our daily experiences provide an accurate representation of our world. If most objects we encounter fall to the ground, we generalize and build a mental model of gravity. If a few objects float to the sky, we model those as exceptions — rare events that are important to understand but which represent a tiny slice of the world at large.
An effective mental model is one that allows us to predict our world accurately, anticipating common occurrences at a far more frequent rate than rare ones. But social media has derailed this cognitive process, algorithmically filtering the information we receive about our society. The platforms do this by individually feeding each of us curated news, messaging, ads, and posts that we assume are part of everyone’s experience but may in fact reach only narrow segments of the public.
As a result, we all believe we’re experiencing “the public sphere” when, really, we are each trapped in a distorted representation of society created by social media companies. This causes us to incorrectly generalize our world. And if we can’t generalize properly, we build flawed mental models. This degrades our collective intelligence and damages our ability to make good decisions about our future.
And because social media companies target us with content that we’re most likely to resonate with, we overestimate the prevalence of our own views and underestimate the prevalence of conflicting views. This distorts reality for all of us, but those targeted with fringe content may be fooled into believing that some very extreme notions are commonly accepted by society at large.
Please understand, I’m NOT saying we should all have the same views and values. I am saying we all need to be exposed to an accurate representation of how views and values are distributed across our communities. That is collective wisdom. But social media has shattered the public sphere into a patchwork of small echo chambers while obscuring the fact that the chambers even exist.
As a result, if I have a fringe perspective on a particular topic, I may not realize that the vast majority of people find my view to be extreme, offensive, or just plain absurd. This will drive me to build a flawed mental model of my world, incorrectly assessing how my views fit into the public sphere.
This would be like an evil scientist raising a group of infants in a fake world where most objects are filled with helium and only a few crash to the ground. Those infants would generalize their experiences and develop a profoundly flawed model of reality. That is what social media is doing to all of us right now.
This brings me back to my core assertion: The biggest problem with social media is not the content itself but the machinery of targeted distribution, as it damages our ability to build accurate mental models of our own society. And without good models, we can’t intelligently navigate our future.
This is why more and more people are buying into absurd conspiracy theories, doubting well-proven scientific and medical facts, losing trust in well-respected institutions, and losing faith in democracy. Social media is making it harder and harder for people to distinguish between a few rare helium balloons floating around and the world of solid objects that reflects our common reality.
So how can we fix social media?
Personally, I believe we need to push for “transparency in targeting” — requiring platforms to clearly disclose the targeting parameters of all social media content so users can easily distinguish between material that is broadly consumed and material that is algorithmically siloed. And the disclosure should be presented to users in real time when they engage with the content, allowing each of us to consider the context as we form our mental models of our world.
Currently, Twitter and Facebook do allow users to access a small amount of data about targeted ads. To get this information, you need to click multiple times, at which point you get an oddly sparse message such as “You might be seeing this ad because Company X wants to reach people who are located here: the United States.” That’s hardly enlightening. We need real transparency, and not just for ads but for news feeds and all other shared content deployed through targeting algorithms.
The goal should be clear visual information that highlights how large or narrow a slice of the public is currently receiving each piece of social media content that appears on our screens. And users should not have to click to get this information; it should appear automatically when they engage with the content in any way. It could be as simple as a pie chart showing what percentage of a random sample of the general public could potentially receive the content through the algorithms being used to deploy it.
If a piece of material that I receive is being deployed to a 2% slice of the general public, that should allow me to correctly generalize how it fits into society as compared to content that is being shared with a 60% slice. And if a user clicks on the graphic indicating 2% targeting, they should be presented with detailed demographics of how that 2% is defined. The goal is not to suppress content but to make the machinery of distribution as visible as possible, enabling each of us to recognize when we’re being deliberately siloed into a narrowly defined echo chamber and when we’re not.
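To make the idea concrete, here is a minimal sketch, in Python, of how such a reach estimate might be computed: draw a random panel standing in for the general public, apply the content’s targeting rule to each panelist, and report the eligible fraction. Everything here is hypothetical: the User record, the estimate_targeting_reach function, the example rule, and the panel data are invented for illustration, and no platform currently exposes an interface like this.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class User:
    """Hypothetical user record with a few demographic fields."""
    age: int
    country: str
    interests: Set[str]


def estimate_targeting_reach(
    targeting_rule: Callable[[User], bool],
    population_sample: List[User],
) -> float:
    """Return the fraction of a random public sample that is eligible
    to receive a piece of content under the given targeting rule."""
    if not population_sample:
        return 0.0
    eligible = sum(1 for user in population_sample if targeting_rule(user))
    return eligible / len(population_sample)


def fringe_rule(user: User) -> bool:
    """Targeting rule that matches only a narrow slice of users,
    analogous to the 2% example above."""
    return "conspiracy_adjacent" in user.interests


# A random panel standing in for the general public (fabricated for
# illustration: roughly 2% of panelists carry the niche interest).
panel = [
    User(
        age=random.randint(18, 80),
        country="US",
        interests={"news"} | ({"conspiracy_adjacent"} if random.random() < 0.02 else set()),
    )
    for _ in range(10_000)
]

reach = estimate_targeting_reach(fringe_rule, panel)
print(f"This content could reach roughly {reach:.0%} of the public.")
# A disclosure widget could render this number as the pie chart described above.
```

The key design choice in this sketch is that the estimate is computed against a random sample of the whole public rather than against the platform’s engagement-optimized audience, which is exactly what would let a reader see how narrowly a piece of content is being deployed.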
With transparency in targeting, each of us should be able to build a more accurate mental model of our society. Sure, I might still resonate with some fringe content on certain topics, but I will at least know that those particular sentiments are rare within the public sphere. And I won’t be fooled into thinking that the extreme idea that popped into my head last night about lizard people running my favorite fast food chain is a widely accepted sentiment being shared among the general public.
In other words, social media platforms could still send me large numbers of helium balloons, and I might appreciate getting those balloons, but with transparency in targeting, I won’t be misled into thinking that the whole world is filled with helium. Or lizard people.
Louis Rosenberg is a pioneer in the fields of VR, AR, and AI. Thirty years ago, he developed the first functional augmented reality system for the U.S. Air Force. He then founded early virtual reality company Immersion Corporation (1993) and early augmented reality company Outland Research (2004). He is currently CEO and Chief Scientist of Unanimous AI, a company that amplifies the intelligence of human groups. He earned his PhD from Stanford University, was a professor at California State University, and has been awarded over 300 patents for his work in VR, AR, and AI.