Frontline, Friday, 21 April 2023.
The Policy Director of the
Internet Freedom Foundation called the proposal to constitute a “fact-check
unit” an “unconstitutional exercise”.
On April 6, the Ministry of
Electronics and Information Technology (MeitY) notified the Information
Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment
Rules, 2023 (IT Amendment Rules, 2023). The amendments primarily
concern online real money gaming intermediaries. However, MeitY also added a
provision for the constitution of a “fact-check unit”, which would have
much broader ramifications for all intermediaries, including social media
platforms, OTT platforms, and digital news platforms.
This fact-checking unit,
once notified by the government, is tasked with monitoring the Internet for
“fake or false or misleading” content about the Union government, thus granting
the government the discretionary power to label content about itself as fake,
false, or misleading. This has led to criticism from digital rights defenders
and organisations, editors, non-profits, and others.
The Editors Guild of India
released a statement on April 7 saying that this amendment will have “deeply
adverse implications for press freedom in the country”. Urging MeitY to
withdraw the notification, the Guild said that the amendment was “akin to
censorship”.
The stand-up comedian Kunal
Kamra filed a writ petition in the Bombay High Court challenging the
constitutional validity of the amendment. The court sought from MeitY a
“factual background” to the amendment and asked why the notification should not
be stayed.
Internet Freedom Foundation
(IFF), an organisation working on digital liberties, has raised concerns about
the amendment since January, when it was first proposed. “Assigning any unit of
the government such arbitrary, overbroad powers to determine the authenticity
of online content bypasses the principles of natural justice, thus making it an
unconstitutional exercise,” it said in a statement on April 6. In an interview
with Frontline, Prateek Waghre, Policy
Director of IFF, discussed the issue at length. Excerpts:
Can you explain what
“intermediary” and “safe harbour” mean?
An intermediary is defined
as “any person who on behalf of another person receives, stores or transmits an
electronic record or provides any service with respect to that record…”. This
broad definition encompasses a range of services, from cyber-cafés to any service
on the Internet. So, your social media platform, the Internet service provider
you use, a service that resolves Domain Name System (DNS) requests, and many
websites where you transact, post reviews, and so on, are classified as
intermediaries. The idea is that an intermediary facilitates an exchange of
information.
As part of the IT Rules,
there exists a concept of intermediary liability and safe harbour. For a
website that allows user-generated content, allows people to post, and allows
other people to transact, there is a question of how liable they should be for
the actions of others on their platforms, websites, or services.
There is a balance here: as
long as intermediaries meet a set of due diligence requirements that have been
defined, they are not directly liable for the actions of their users. You still
have other compliance requirements to meet as a company, but you have some
shield where you aren’t prosecuted for the acts of others.
The current amendment is
not the only problematic one. There is a precedent of overreach in the IT Rules
amendment of 2021. What happened then?
The IT Rules 2021 are, in
many ways, where this problem started. The executive, through subordinate
legislation, was trying to give itself the power and ability to do things that
ideally need a parliamentary process.
The IT Rules 2021 also
broadened the ambit. Earlier, it was just under MeitY. The IT Rules 2021
brought in the Ministry of Information and Broadcasting to oversee parts of the
rule that dealt with online curated content platforms and digital news
platforms: Netflix, Amazon Prime and so on, digital news websites, and news
aggregators.
It required creating a
multi-tier, self-regulatory organisation, which would involve government
oversight at the highest tier. News publishers and journalists challenged it,
especially regarding the oversight of news publishers, and the court stayed
certain parts of these rules. These challenges are now waiting to be heard in
the Supreme Court. So, the 2021 amendments are where the issue started: the
executive’s overreach went beyond what the IT Act empowered it to do.
Clear conflict of
interest
What does the Central
Government/MeitY intend to do with the IT (Amendment) Rules, 2023?
There’s some history to the
amendment. Until October 2022, the due diligence requirements stated that
intermediaries had to inform their users not to share, transmit, or upload
certain categories of content. The October 2022 amendment changed that
obligation: intermediaries are now expected to make “reasonable efforts” to
cause the user not to do XYZ. In effect, the responsibility of an intermediary
was changed in October to actively preventing users from posting certain kinds
of content. It was a significant shift in the idea of what an intermediary
should or should not do.
Now, the 2023 amendments
have gone further. The condition that has been added is that if any
information or content about matters related to the Union government has been
flagged as fake, false, or misleading (I am paraphrasing here) by a
fact-checking unit (which would be notified by the Union government), then
there is an obligation on intermediaries to make “reasonable efforts to cause”
their users not to share it. Essentially, the intermediary is obliged to act
against this content.
It’s not the Press
Information Bureau; that’s one change from the draft amendments to the final
notification. It is some fact-checking unit that will be notified. But the
amendment allows such a unit to issue a de facto takedown order, a significant
change from the existing process.
In the existing process, such takedown orders were issued under Section 69A of
the IT Act, 2000, and the 2009 blocking rules. Those rules have drawn
criticism, for instance that they need to be more transparent. But they do
contain checks and balances for takedown orders. The amendment effectively
bypasses even those provisions.
The government is
incentivised to label as fake, false, or misleading anything that shows it in a
negative light or raises questions about its policies or their outcomes.
There is a clear conflict of interest here.
Element of interpretation
The rules use the words
“fake or false or misleading” to describe the content that will be fact-checked
by a fact-checking unit notified by the government. Do we have definitional
clarity on these words?
Often, what we’re dealing
with may not even be fake. It could be something taken out of context,
something that is not even new, or just an opinion. That’s the definitional
aspect. There is also a usage aspect: over time, the term “fake” has been used
to dismiss views that someone doesn’t agree with.
The amendment also says
“false or misleading”, terms that are potentially problematic. There are various
types of content you cannot necessarily distill down to true versus false.
Satire can be false. Opinions are not always easily or directly falsifiable
because they involve an element of interpretation; two people can interpret a
set of facts or the same information to draw different conclusions. This is
why, in many conversations about dealing with problems in the information
ecosystem, it is a more complex issue than looking at whether you can disprove
something.
In January 2023, several
digital rights and media groups criticised the manner in which MeitY proposed
the amendments. Could you detail the public consultation process and why it was
inadequate?
The public consultation for
the “fact-checking” amendment has been problematic in several ways from the
beginning. In January, a consultation process for amendments to the IT Rules
pertaining to online gaming intermediaries was ongoing. Mainly online gaming
intermediaries and adjacent stakeholders were paying attention to that process.
However, on the evening of
the last day of that consultation, MeitY introduced the new amendment on
fact-checking along with a deadline extension of seven to eight days. This
amendment, inserted into the due diligence requirements in the intermediaries
section, had significantly broader ramifications, affecting many more
intermediaries and people. And because the extension came on the final day, it
was probably of limited value: most stakeholders would already have scrambled
to submit by the initial deadline.
There was pushback and
criticism, with several calls to withdraw these proposed amendments. Since
consultation responses are no longer made public, and are even denied in
response to Right to Information requests citing procedural technicalities, it
is hard to say to
what extent the final notification reflects the inputs and concerns of civil
society organisations and the media. In response to the criticism, the Ministry
stated that further consultations and meetings would be held. However, as the
Editors Guild of India noted in its statement on April 7, even this does not
seem to have happened.
Could you comment on the
broader pattern of such overreach by the Union government in recent
times?
Look at the legislation
proposed in the past six or seven months: the Draft Indian Telecommunication
Bill, 2022, and the Draft Digital Personal Data Protection Bill, 2022. The way
they are worded, they increase the amount of control the Union government has.
For example, the way
telecommunication services have been defined in the Telecommunications Bill is
so broad that it could cover any service on the Internet. And the Bill also
states that you need a licence to provide telecommunication services,
potentially leading to a point where services on the Internet will need to
be licensed. It could start with WhatsApp or Signal.
What does it mean when
WhatsApp and Signal need a licence from the Department of Telecommunications to
function? It increases the regulatory burden on them, right? Will they be
denied licences if they don’t compromise end-to-end encryption?
In effect, the Union government has given itself the control and the
discretion to pick and choose which services it may want to license. It has
also allowed itself the flexibility to expand that scope as it chooses,
without any checks or balances.
That is the trend even with
the Digital Personal Data Protection Bill. The actual draft Bill is short; a
number of things were left to be prescribed or may be prescribed in the future.
Now, for some of those, you can argue that you need rulemaking and that not
everything can be in the parent Act. But the extent to which this has been
done in the Bill effectively means that any regulatory updates will happen
through rulemaking, which does not require a parliamentary process and does
not always need an extensive consultation process; it can just be done by
direct notification. That’s the broader environment.
What is happening on the
Digital India Act front?
The Digital India Act is
supposed to replace the IT Act of 2000, but its contents are unclear. From
what one can gather from the reporting on it, it covers a range of things:
AI, blockchain, the metaverse, doxxing, cyberbullying, algorithms, and so on.
Despite all this talk of the
Internet needing to be made open, safe, trusted, and accountable and keeping “digital
nagariks” safe, we don’t have a clear definition or articulation of harms as we
perceive them in the Indian context. It is a significantly challenging
endeavour; let’s not take away from that. What you then need is a
consultative process to determine these harms.
Yes, there are cybercrime,
fraud, and disinformation issues on the Internet. There are also services that
have enabled trade or helped people get their voices out. How are we balancing
these benefits and the harms? What criteria are we using to make trade-offs?
None of that has been articulated in any shape or form.
Regulating the Internet can
bring only a certain amount of change. Many of the problems and issues you’re
trying to address are existing, underlying social problems. If we have a lot
of hate speech on the Internet, it’s because there are existing social
divisions.
The way politics plays out
today, there is even, in some cases, a reward for engaging in that rhetoric.
Given that, to what extent will regulating the Internet alone solve the
underlying problem? It’s a challenging job, but we need to consider it at a
more fundamental level.