Is Centralised AI Unethical?

Centralised AI

For AI to reach its potential and for society to benefit from it, AI needs to be decentralised; that is, different stakeholders in the AI community should have equal access to resources such as datasets, compute power and the source code of different AI models. But that is not the case today.

Today, most of the breakthroughs in the field of AI come from big organisations. From text-to-image generators such as DALL-E 2 and Imagen to large language models (LLMs) such as GPT-3, the headline-grabbing models have all come from large organisations.

However, none of these AI models is open-sourced. AI today remains fairly centralised, which means the inner workings of such models are known to only a handful of people.

While large organisations such as Meta, Google and Microsoft are slowly embracing open-source culture and have open-sourced some of their models, many still believe that AI will remain centralised. Balaji Srinivasan, former CTO of Coinbase, goes further, arguing that centralised AI is itself unethical.

AI becomes centralised because the resources it demands, such as large datasets and computing power, lie mostly in the hands of large organisations. Many in the AI community see this concentration of resources among a few players as unethical, and argue that the lack of transparency, the lack of interoperability and the limited participation of other stakeholders in AI innovation under a centralised system only compound the problem.

One Twitter user even said that centralised AI couldn’t be fully ethical without being fully transparent. For AI to reach its full potential, the broader community must have a good understanding of the different AI models, how they function and how they can be improved. Unfortunately, that is not the case today.
