Government Advisory on AI Models in India

The Indian government issues a directive requiring tech companies to seek approval before launching AI models and label their reliability, emphasizing user consent and safety on the internet.
The Indian government has taken a significant step toward regulating the deployment of artificial intelligence (AI) models in the country. Under a recent advisory, tech companies must seek government approval before launching untested AI models and must clearly label their potential unreliability. The directive follows the controversy over responses generated by Google’s AI platform, Gemini, to queries about Prime Minister Modi.

According to IT minister Rajeev Chandrasekhar, platforms must obtain user consent and ensure safety on the Indian public internet or face penal consequences. He stressed that transparent user consent must be secured before deploying under-trial or error-prone platforms, and that accountability cannot be evaded by apologizing after the fact.

The government’s stance is clear: any use of under-testing or unreliable AI models, software, or algorithms on the Indian internet requires explicit permission from the Government of India, and such models may be deployed only after the possible fallibility or unreliability of their output has been appropriately labeled.

The permission requirement is also intended to combat deepfakes and misinformation. Social media companies have likewise been instructed to label under-trial AI models and to prevent the hosting of illegal content on their platforms.

The Ministry of Electronics and Information Technology issued the advisory to intermediaries and platforms, warning that non-compliance will invite criminal action and penal consequences. The responsibility lies with the platforms to ensure that their AI models do not enable users to engage in unlawful activities.

This directive underscores the government’s commitment to ensuring the safety and trustworthiness of AI technologies deployed in India. By requiring approval and labeling of AI models, the government aims to protect users and uphold the integrity of the Indian internet space.

As the tech industry continues to innovate and develop AI technologies, compliance with these regulations will be crucial for tech companies operating in India. The government’s proactive approach to regulating AI models sets a precedent for responsible and ethical AI deployment in the country.