AI Models in India to Require Government Approval: Implications and Analysis


India’s Ministry of Electronics and Information Technology (MeitY) has issued an advisory to technology platforms and intermediaries operating in the country, urging them to comply with the IT Rules 2021. The advisory specifically directs companies such as Google, OpenAI, and other technology firms to carry out thorough due diligence and ensure compliance within 15 days.

In a new development, the IT Ministry has directed technology companies to obtain explicit permission from the Government of India before deploying “untested” AI models, as well as software products built on such models, within the country.

As per the advisory, “the utilization of under-tested or unreliable Artificial Intelligence models, LLM, Generative AI software, or algorithms, along with their availability to users on the Indian Internet, necessitates explicit permission from the Government of India. Moreover, these technologies should only be deployed after appropriately labelling the potential and inherent fallibility or unreliability of the generated output. Additionally, a ‘consent popup’ mechanism is recommended to inform users about the explicit potential limitations of the output.”
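To make the labelling and consent requirement concrete, here is a minimal Python sketch of how a platform might gate model output behind such a notice. Everything here, from the function names to the disclaimer text, is illustrative; the advisory does not prescribe any particular implementation:

```python
# Hypothetical sketch of the advisory's labelling and "consent popup"
# requirements, reduced to a command-line prompt. All names and wording
# below are illustrative, not prescribed by MeitY.

UNRELIABILITY_NOTICE = (
    "This response was generated by an under-tested AI model and may be "
    "inaccurate or unreliable."
)

def require_consent() -> bool:
    """Show a one-time consent prompt before serving any model output."""
    answer = input(f"{UNRELIABILITY_NOTICE}\nContinue? [y/N]: ")
    return answer.strip().lower() == "y"

def serve_output(generated_text: str) -> str:
    """Attach a fallibility label to every response, per the advisory."""
    if not require_consent():
        raise PermissionError("User declined to view AI-generated output.")
    return f"[AI-generated; may be unreliable]\n{generated_text}"

if __name__ == "__main__":
    print(serve_output("The capital of India is New Delhi."))
```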

While the advisory is not legally binding on platforms and intermediaries, it has drawn criticism from tech figures globally, who worry it could impede AI innovation in India. Aravind Srinivas, CEO of Perplexity AI, called it a “regrettable decision by India.”

To provide clarity on the advisory, Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology, took to X to explain its key points. He clarified that the requirement for government permission applies only to large platforms such as Google, OpenAI, and Microsoft, and that startups are not bound by it. He further underscored that the advisory specifically targets “untested” AI platforms.

It is noteworthy that India’s homegrown company Ola recently introduced its Krutrim AI chatbot, promoting its “purported understanding of Indian cultural nuances and relevance”. However, the Indian Express reports that the Krutrim AI chatbot is highly susceptible to hallucinations.

Additionally, MeitY has instructed AI companies to “refrain from allowing bias, discrimination, or any actions that could compromise the integrity of the electoral process, including through the utilization of AI models, large language models (LLM), generative AI software, or algorithms.”

The latest advisory comes in the aftermath of a misstep by Google Gemini, whose response to a politically sensitive question drew criticism from authorities. Ashwini Vaishnaw, India’s IT Minister, cautioned Google “against the tolerance of racial and other biases”.

Google promptly acknowledged the issue, stating, “Gemini is designed as a tool for creativity and productivity, and may not consistently provide reliable responses, particularly regarding current events, political matters, or unfolding news. We are continuously striving to enhance its performance.”

In the United States, Google faced backlash when Gemini’s image generation model failed to produce images of white individuals, with users accusing Google of anti-white bias. Google subsequently disabled Gemini’s ability to generate images of people while it works to improve the model.

Furthermore, the advisory stipulates that failure by platforms or their users to adhere to these regulations could lead to “possible penal consequences.”

The advisory emphasizes that “failure to comply with the provisions of the IT Act and/or IT Rules may lead to potential penal repercussions for intermediaries, platforms, or their users upon identification. These repercussions could range from prosecution under the IT Act to violations of criminal code statutes.”

What Could Be the Implications?

Although the advisory lacks legal enforceability over tech companies, MeitY has urged intermediaries to furnish an Action Taken-cum-Status report to the Ministry within 15 days. This could have broader implications, hindering not only tech giants providing AI services in India but also the country’s AI adoption and overall technological advancement in the long run.

There’s growing concern that the advisory could add a layer of government bureaucracy, with large companies hesitating to introduce powerful new AI models in India for fear of regulatory overreach. So far, tech firms have launched their latest AI models in India in step with Western markets; Western governments, for their part, have been careful to avoid AI regulations that could impede progress.

Furthermore, experts argue that the advisory is “ambiguous” as it fails to clarify what constitutes “untested.” Companies such as Google and OpenAI conduct thorough testing before deploying a model. However, AI models, by their nature, are trained on extensive datasets scraped from the web, which may lead to hallucinations and incorrect responses.

The vast majority of AI chatbots disclose this information prominently on their homepage. The government’s method for determining which models are considered untested and the frameworks used for such assessments remain unclear.

Notably, the advisory requires tech firms to include a “permanent unique metadata or identifier” within AI-generated data (including text, audio, visual, or audio-visual content) to identify the source, creator, user, or intermediary. This raises important questions regarding traceability in AI.

This remains an evolving area of research within the AI field. To date, no credible method has been developed to reliably detect AI-generated text, let alone to identify the originator through embedded metadata.

Last year, OpenAI discontinued its AI Classifier tool, which aimed to differentiate between human-written and AI-generated text, because of its poor accuracy. In the battle against AI-generated misinformation, Adobe, Google, and OpenAI have recently implemented the C2PA (Coalition for Content Provenance and Authenticity) standard in their products, which adds a watermark and metadata to generated images. However, online tools and services can easily remove or alter this metadata and watermark, as illustrated below.
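To show just how fragile file-embedded provenance metadata is, here is a minimal Python sketch using the Pillow library. Re-encoding an image from its raw pixels silently discards ancillary metadata blocks such as EXIF and XMP, where provenance records are typically stored. The file names are placeholders:

```python
# Minimal demonstration of metadata fragility: rebuilding an image from
# its raw pixels produces a file with no embedded metadata at all.
# File names below are placeholders for illustration.

from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode the image from raw pixel data, discarding embedded metadata."""
    with Image.open(src_path) as img:
        pixels = list(img.getdata())           # copy only the pixel values
        clean = Image.new(img.mode, img.size)  # fresh image, no metadata attached
        clean.putdata(pixels)
        clean.save(dst_path)                   # saved file carries no EXIF/XMP

strip_metadata("ai_generated.jpg", "ai_generated_stripped.jpg")
```

A simple screenshot or re-upload through a service that recompresses images has the same effect, which is why metadata-based provenance alone cannot guarantee traceability.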

At present, there is no definitive method to ascertain the originator or user via embedded metadata. Therefore, MeitY’s call to embed a permanent identifier in synthetic data is not feasible at this juncture.

That concludes the discussion on MeitY’s latest advisory for tech companies providing AI models and services in India. What are your thoughts on this matter? Share them in the comments below.

