Fixing Censorship with Web3

It’s important to keep striving for better models of censorship that can benefit society without infringing on individual rights and freedoms.

Introduction

Censorship has been around as long as information has been recorded and shared. Different groups, including religious authorities, governments, and corporations, have used censorship as a way to control what people can see and hear, sometimes for legitimate reasons but often to influence public opinion in their favour. It’s important to keep striving for better models of censorship that can benefit society without infringing on individual rights and freedoms.

The emergence of Web3 and AI presents an opportunity to find new solutions to this problem. Where these two fields intersect, they could lead to innovative approaches that balance the need for censorship with respect for individual liberties. What follows is one such idea.

The Need for a Redesign

The shortcomings of the current censorship system have led to a demand for a new approach. To illustrate this, consider a tech company, like Twitter or Facebook, that runs a massive social media platform with over a billion users spread across many countries.

On this platform, take one real example: a video posted by a user that depicted the burning of an American flag. This is very likely to offend some users, particularly those who hail from the country the flag represents. The offended users have a few options at their disposal. They may reply to the post and ask the user to take it down because they are offended, or they may report the post to the platform as offensive. The platform then decides whether the post has to be taken down.

Some form of the following decision tree then plays out within the department of the social media platform in charge of handling such user reports:

  1. Is the post in line with the published set of standards related to posting content on the platform?
  2. Was the first user right to post the content?
  3. Was the second user right to be offended by the content?
  4. Is the offensiveness “sufficient” to warrant the removal of the post?
  5. How much of a political or economic backlash will the platform face if it chooses to leave the post online or take it down?

Consider another example that occurred more recently. The artist Lindsay Mills shared a photo of her baby on various social media platforms. The photo showed the mother and baby, with only the baby’s butt visible. While some platforms allowed the photo to remain, one platform not only removed the post but also banned Mills’ account. This decision was made by an algorithm without human involvement, and appeals to restore the account were unsuccessful. The incident gained attention because Mills is married to Edward Snowden.

These examples demonstrate the grey areas surrounding censorship, as society was divided on the appropriate course of action in each case. They highlight the problems with the mechanisms behind censorship: corporations hold significant influence over society, yet they are not elected to make these decisions.

Furthermore, the following questions arise:

  1. Will the platform make a decision based on standards that have been clearly published?
  2. What was considered during the setting of these standards?
  3. Who was involved in setting these standards? Were the people who weighed in representative of the user population the platform would eventually serve, or more representative of the corporation and the country in which it was founded?
  4. Should the standards be the same across geographies and cultures? Is it a lowest-common-denominator approach that is safe for all or were other factors taken into account for setting these standards?
  5. What is the basis for judging whether the content that clearly lies in the grey areas should be taken down or not? Is it simply how large a body of people it offends? How is the voice expressing the minority opinion protected?
  6. Do these standards change based on who the audience is, by age, by gender, by their own level of maturity? Can a user have control over what they want to be able to see or not?
  7. Can users make up their own minds after being exposed to the content in question?
  8. Do the standards evolve as a society’s views on issues change over time? And what should never change?
  9. How does the platform handle pressure from external entities such as influential people, large groups or even governments of countries? What are the concessions it will make in this regard?

The platform cannot avoid adopting some political viewpoint; there is no neutral position. But if it holds any political leaning too strongly, it may lose users to rival platforms that do the same thing and differ only in their political ideology. This is exactly what happened when Twitter decided to de-platform Trump. The problem is that this creates echo chambers, where users are surrounded only by people who agree with them and view anyone with an alternative viewpoint as the other. We lose the public town squares where ideas are debated, which are essential to building stable societies with moderate political leanings rather than societies that may erupt into civil war at any moment.

Furthermore, we are entrusting decisions that affect society as a whole to a corporation whose sole purpose is generating profits for its shareholders. This mechanism leaves little room for societal well-being: a corporation must prioritise profit and weigh how much loss it can sustain in a confrontation with political figures or governments pressuring it to decide a certain way. A single misstep can lead to lawsuits that could bankrupt the company.

When did it become a tech company’s responsibility to determine how censorship is implemented on a platform? And why do we accept this as the only approach, when tech companies have consistently shown how ill-suited they are to the responsibility of building good societies?

A Decentralised Solution

A more effective approach would be to begin with the recognition that free speech is a fundamental right for all individuals. From this foundation, the solution should not focus on prohibiting certain forms of expression, but rather on preventing individuals from encountering offensive content. Although the distinction may appear subtle, the solutions for each approach differ significantly. By focusing on the latter, several benefits arise, as outlined below:

  1. The fundamental right to freedom of speech is maintained with no one being de-platformed
  2. The user can control what they want to be exposed to or not be exposed to on an individual level and it can change over time on an individual basis
  3. No other individual or entity dictates the standards for determining what someone should or should not be exposed to
  4. This system cannot be corrupted by external forces putting pressure on any single body

While researching how a solution to this could work, I found that the movie industry has a model we could build upon. The film industry assigns suitability ratings like “PG-13,” indicating that a film is appropriate for viewers aged 13 or older, and streaming services like Netflix add information such as “Contains adult themes” or “Has crude humor” to help audiences make informed decisions. Even two demographically identical audiences may choose differently based on their preferences. This model has worked well for movies, so what if it were applied to content on the internet, with the necessary modifications?

One potential approach is to assign tags to all posts on a fictional social media platform that describe the content. Viewers could then apply filters based on these tags to allow or prevent the content from appearing in their timeline. This central idea serves as the starting point for outlining how such a system could operate.
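
To make this central idea concrete, here is a minimal sketch of the tag-and-filter model in Python. The class names, tags, and matching rule are assumptions made purely for illustration, not a description of any existing platform:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A piece of content plus the descriptive tags attached to it."""
    post_id: str
    text: str
    tags: set[str] = field(default_factory=set)

@dataclass
class ViewerFilter:
    """Tags a viewer has chosen (or learned) to keep out of their own timeline."""
    blocked_tags: set[str] = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        # A post is hidden for this viewer if it carries any tag they block.
        return not (post.tags & self.blocked_tags)

# Example: the flag-burning video from earlier, tagged during the tagging stage.
post = Post("p1", "Video of a national flag being burned", {"flag-burning", "political-protest"})
viewer = ViewerFilter(blocked_tags={"flag-burning"})
print(viewer.allows(post))  # False: hidden from this viewer, but never deleted
```

Note that nothing is ever taken down; the only thing that changes is whether a particular viewer’s own filter lets the post through.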

Step 1: Tagging the content

While AI models initially had to be trained to identify content reliably, they have become very good at this task today. A model can now be run against a piece of content to produce an appropriate set of tags, and this holds for text, audio and video content, across languages. So, in a first pass, any new content posted on a platform can be run through AI models to create a tag cloud for it.

After this initial stage, groups of people can improve the tags, adding a further layer of meaning seen through the lens of different individuals. This should also continue over time, so that the meaning attached to the content does not stagnate: the same content can take on a different relevance at a later point in time.
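
As a rough sketch of how the two passes could fit together, the snippet below assumes a generic zero-shot classifier (here the Hugging Face transformers pipeline) standing in for “the AI model”; the tag vocabulary, model choice, and score threshold are illustrative assumptions rather than a prescription:

```python
from transformers import pipeline

# A hypothetical tag vocabulary; a real system would curate and expand this over time.
TAG_VOCABULARY = ["violence", "nudity", "political-protest", "flag-burning", "crude-humor"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def auto_tag(text: str, threshold: float = 0.5) -> set[str]:
    """First pass: let an AI model propose a tag cloud for newly posted content."""
    result = classifier(text, candidate_labels=TAG_VOCABULARY, multi_label=True)
    return {label for label, score in zip(result["labels"], result["scores"])
            if score >= threshold}

def merge_community_tags(auto_tags: set[str], community_tags: set[str]) -> set[str]:
    """Second pass: people add the layers of meaning the model missed, over time."""
    return auto_tags | community_tags

tags = auto_tag("A video showing a crowd burning a national flag at a rally")
tags = merge_community_tags(tags, {"protest-symbolism"})
print(tags)
```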

Step 2: Setting up filters

On the receiving end of the content pipeline, users need to mark the content they find offensive. This can be done explicitly, with the user flagging a piece of content as offensive, but also implicitly, by observing viewing patterns, such as the user scrolling past the content faster than they do elsewhere or hitting the ‘skip’ or ‘stop’ button on a video. One may argue that platforms like YouTube already do some version of this, but there are two big differences. First, the data is kept in a silo on YouTube’s servers rather than held by the users themselves. Second, the goal of these platforms is self-serving, to increase viewership on the platform, not to do what the user wants.
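
One way those explicit and implicit signals could feed the filter is sketched below, where hypothetical events accumulate per-tag scores; the event names, weights, and threshold are invented for illustration:

```python
from collections import defaultdict

# Hypothetical weights: an explicit flag counts for much more than implicit behaviour.
SIGNAL_WEIGHTS = {"flagged_offensive": 0.8, "skipped_video": 0.4, "fast_scroll_past": 0.25}

def record_signal(tag_scores: dict[str, float], post_tags: set[str], signal: str) -> None:
    """Accumulate evidence that the user dislikes content carrying these tags."""
    weight = SIGNAL_WEIGHTS.get(signal, 0.0)
    for tag in post_tags:
        tag_scores[tag] += weight

def blocked_tags(tag_scores: dict[str, float], threshold: float = 1.0) -> set[str]:
    """Tags whose accumulated score crosses the (made-up) threshold get filtered out."""
    return {tag for tag, score in tag_scores.items() if score >= threshold}

scores: dict[str, float] = defaultdict(float)
record_signal(scores, {"flag-burning", "political-protest"}, "flagged_offensive")
record_signal(scores, {"flag-burning"}, "fast_scroll_past")
print(blocked_tags(scores))  # {'flag-burning'}
```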

While the above process improves over time, users could also adopt pre-built filter sets created by people they trust as starting points for their own filters, for immediate relief. That set is then refined by the user’s own interactions with various platforms, using the on-device AI engines that are becoming more and more prevalent.
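
A minimal sketch of this “trusted preset plus local refinement” idea follows, assuming a simple JSON format for shared filter sets; the format and names are hypothetical:

```python
import json

# A hypothetical filter set published by someone the user trusts.
TRUSTED_PRESET = json.dumps({
    "name": "family-safe-starter",
    "blocked_tags": ["nudity", "graphic-violence", "crude-humor"],
})

def load_preset(preset_json: str) -> set[str]:
    """Adopt a pre-built filter set as a starting point, for immediate relief."""
    return set(json.loads(preset_json)["blocked_tags"])

def refine(blocked: set[str], learned_blocks: set[str], learned_allows: set[str]) -> set[str]:
    """On-device refinement: the user's own behaviour adds and removes tags over time."""
    return (blocked | learned_blocks) - learned_allows

filters = load_preset(TRUSTED_PRESET)
filters = refine(filters, learned_blocks={"flag-burning"}, learned_allows={"crude-humor"})
print(sorted(filters))  # ['flag-burning', 'graphic-violence', 'nudity']
```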

Step 3: Viewing content

This is probably the simplest part. There can be a timeline titled “For You”, a view that applies the preference filters to the content being shown, and a second timeline titled “Unfiltered”, which shows everything if the user so wishes. This is not even a new mechanism; most content platforms already use such devices.
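
In code, the two timelines could be as little as applying, or not applying, the user’s filter to the same feed; the feed shape below is a toy assumption:

```python
# Each post here is just (post_id, tags); a real feed item would carry much more.
FEED = [
    ("p1", {"flag-burning", "political-protest"}),
    ("p2", {"cute-animals"}),
    ("p3", {"crude-humor"}),
]

def unfiltered_timeline(feed):
    """'Unfiltered': the full feed, shown only if the user explicitly asks for it."""
    return feed

def for_you_timeline(feed, blocked_tags):
    """'For You': the same feed with the user's own preference filter applied."""
    return [(post_id, tags) for post_id, tags in feed if not (tags & blocked_tags)]

print([post_id for post_id, _ in for_you_timeline(FEED, {"flag-burning", "crude-humor"})])  # ['p2']
```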

A Solution for the Age of AI and Web3

While thinking through the solution here, it became clear that solving the problem of censorship does not depend on imagining technologies that don’t yet exist, but on repurposing those that already do. So why hasn’t it been done yet? I believe there are two reasons. First, as anyone in software can tell you, the more input you require from a user, the lower your adoption rate will be. A self-learning system such as AI or ML had to exist for this to work; otherwise it would have been humanly impossible to build a system that does. Second, the filters and learnings that a company like YouTube develops about its users’ preferences have been guarded as the company’s intellectual property and not shared even with the user. So a decentralised system like Web3 needed to exist for such a system to work across the web.
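
To illustrate what user-owned filter data could look like, here is a sketch in which the preference set is a small document the user signs with their own key and can present to any platform. The Ed25519 keypair (via PyNaCl) merely stands in for a wallet key, and the document format is invented for illustration:

```python
import json
from nacl.signing import SigningKey  # pip install pynacl

# Hypothetical: in a Web3 setting this would be the user's wallet key, and the signed
# document would live in storage the user controls rather than on any platform's servers.
user_key = SigningKey.generate()

preferences = {"version": 1, "blocked_tags": ["flag-burning", "graphic-violence"]}
payload = json.dumps(preferences, sort_keys=True).encode()
signed_doc = user_key.sign(payload)

# Any platform the user visits can verify that the document really is the user's own
# preferences and apply the same filter, without ever owning the preference data itself.
verified_payload = user_key.verify_key.verify(signed_doc)
print(json.loads(verified_payload))
```

Because the document travels with the user rather than living inside any one platform, the same preferences can follow them across the web.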

We are living in genuinely interesting times, when ideas like these can suddenly come to life on the fertile ground created by AI and decentralisation, which together put the user at the centre of the technological universe.

Conclusion

There are obviously many more steps and nuances to consider when building this system: for example, how would it work for children and for those who are not as tech-savvy, and how would it handle content that is clearly illegal across all nations? But I’m sure there are smarter people who could propose better ideas on the foundation laid out here. I just didn’t want to write an article pointing out all the problems without at least proposing a potentially better solution. If you are interested in discussing any of the ideas proposed here a little further, please do reach out.