
Private Rating Systems Could Avoid Section 230 Conflict (Commentary)

Private rating systems evaluating the credibility of online content are an alternative to tech censorship, says tech entrepreneur Pat Condo.

In a rare moment of agreement in the ideologically fractured Supreme Court, Justices Elena Kagan and Brett Kavanaugh concurred in late February that it was the job of Congress – not SCOTUS – to decide on any changes to a landmark 1996 federal law that has shaped the modern internet.

While hearing arguments in the case of Gonzalez v. Google, the justices expressed not only wariness of the company’s claim that Section 230 affords tech companies immunity from lawsuits over content posted on their sites, but also a hesitancy to weigh in on the argument.

“Isn’t it better,” Kavanaugh asked, to keep things the way they are and “put the burden on Congress to change that?”

It is true that the Court cannot possibly anticipate the full range of unintended consequences on the Internet that could result from changes to Section 230. Yet, there is little to suggest that a Congress sharply divided along partisan lines is any better equipped to change the law, particularly given that conservatives and progressives can’t agree on what exactly is wrong with it.

Rather than waiting on the courts and Congress to overhaul Section 230, the private sector can and should take the lead. A solution – one that protects consumers from problematic content without running afoul of the First Amendment – is eminently achievable with the help of advanced AI, but it will require industry cooperation and buy-in around an objective, transparent, and uniform framework.

Instead of censoring or banning content, a rating system would evaluate the credibility of content using metrics such as quality of information, adherence to journalistic principles, and general reliability of the author and domain source. Content evaluation coupled with ratings would allow consumers to make informed decisions about the veracity of the information they encounter.
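
As a rough sketch of the idea only – the signal names, weights, and formula below are hypothetical illustrations, not Seekr’s actual methodology – such a rating could be a weighted combination of normalized credibility signals:

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    """Hypothetical per-article signals, each normalized to the 0-1 range."""
    information_quality: float      # e.g., sourcing and factual consistency
    journalistic_principles: float  # e.g., headline/body alignment, attribution
    source_reliability: float       # e.g., track record of the author and domain

# Illustrative weights; a real system would tune these empirically.
WEIGHTS = {
    "information_quality": 0.4,
    "journalistic_principles": 0.3,
    "source_reliability": 0.3,
}

def credibility_score(signals: ContentSignals) -> float:
    """Combine the signals into a single 0-100 credibility rating."""
    raw = (
        WEIGHTS["information_quality"] * signals.information_quality
        + WEIGHTS["journalistic_principles"] * signals.journalistic_principles
        + WEIGHTS["source_reliability"] * signals.source_reliability
    )
    return round(100 * raw, 1)

# Example: a well-sourced article from a historically reliable outlet
print(credibility_score(ContentSignals(0.9, 0.8, 0.85)))  # -> 85.5
```

The point of such a score is transparency: the content stays online, and readers see the rating and the signals behind it rather than a silent takedown decision.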

This approach has worked in the past. A century ago, when Hollywood movies were effectively regulated by a mishmash of community-specific decency standards, the film industry took steps to devise a uniform content rating system. The system underwent several overhauls, but by the late 1960s, the Motion Picture Association of America’s transparent rating system was nearly universally adopted on a voluntary basis.

Much like the movies once were, online content today is policed by a patchwork of subjective Trust and Safety standards. These guardrails are often created in an ad hoc manner, they are generally based on individual judgment calls arrived at through an opaque process, and they are unevenly enforced. Problematic content is frequently removed only after it has risen to the level of public discourse, which invariably brings more attention to it.

Not surprisingly, these conventional content moderation strategies have become highly controversial. Furthermore, they have failed to keep up, because online content is produced at a much faster rate than it is being policed – a problem that is already being exacerbated by a tsunami of content produced by AI-powered chatbots.

To keep up with the virtual cascade of new content, a rating system must be powered by AI. That technology is available now.

It is time for the technology industry to stop hiding behind Section 230 – which has become a shield for behemoths Google, Microsoft, and others – and instead rally behind a standard and transparent rating system that can act as a North Star. Only then can we hold companies accountable for the content on their platforms and help the American public regain some semblance of trust in the Internet.

This is a delicate matter: our collective efforts to combat the unrestrained spread of misinformation must occur within the confines of the First Amendment and freedom of the press. We cannot create a scenario where only a certain few, approved sources can populate the internet with content.

To rework Section 230 – and the broader Communications Decency Act – is a gargantuan task and one that is fraught with political perils. With the Supreme Court poised to punt to a gridlocked Congress, it is up to the private sector to voluntarily coalesce behind a proven solution that can protect consumers and preserve First Amendment rights that are fundamental to a free society.

Pat Condo is the CEO and founder of Seekr, an AI-powered content evaluation and rating platform.

Originally published by RealClearMarkets. Republished with permission.
