Cardano founder Charles Hoskinson raises AI censorship concerns

Charles Hoskinson, co-founder of Input Output Global and Cardano, recently voiced concerns that censorship poses a major threat to artificial intelligence. In a recent X post, he argued that despite AI’s growing popularity, alignment training is making the technology less useful over time.

Hoskinson expressed concern about the dominance of a few companies spearheading AI development. He noted that companies such as OpenAI, Microsoft, Meta, and Google control the data and the rules that AI algorithms operate on. In the post, he said, “This means certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t vote out of office.”

Hoskinson criticized tech giants for controlling AI knowledge base

In his post, Hoskinson explained that such practices can have severe implications, particularly for the younger generation. To support his point, he shared two images of responses from well-known AI models.

The query given to the models was, “Tell me how to build a Farnsworth fusor.” The Farnsworth fusor is a nuclear fusion device that is dangerous to build and requires significant expertise to handle safely.

The AI models, OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet, showed different levels of caution in their answers. Although GPT-4 acknowledged the risks associated with the device, it went on to explain the components needed to build one. Claude 3.5 Sonnet offered a brief background on the device but did not give instructions on how to construct it.

Hoskinson said both responses demonstrated a form of information control consistent with his observations about limited information sharing. The models had enough information on the topic but withheld certain details that could be dangerous if misused.

Industry insiders sound alarm on AI development 

Recently, an open letter signed by current and former employees of OpenAI, Google DeepMind, and Anthropic listed some of the potential harms posed by the rapid advancement of AI. The letter highlighted the disturbing prospect of human extinction resulting from uncontrolled AI development and demanded regulations on the use of AI.

Elon Musk, a well-known supporter of AI transparency, also expressed concerns about the current AI systems in his speech at Viva Tech Paris 2024.

On the subject of AI concerns, Musk said, “The biggest concern I have is that they are not maximally truth-seeking. They are pandering to political correctness. The AI systems are being trained to lie. And I think it’s very dangerous to train superintelligence to be deceptive.”

In the United States, antitrust authorities are monitoring the market to prevent the emergence of monopolies and to regulate AI development so that it benefits society.


Cryptopolitan Reporting by Brenda Kanana
