TechTicker Issue 60: November 2024

The Ticker turns 60!

Source: Pinterest

A round of applause for all of you, our amazing readers, who've stuck with us through every pun, wordplay, and quip! We're thrilled to celebrate the Ticker hitting its 60th edition. Last month marked a milestone for us as we officially launched on LinkedIn. As always, sharing is caring, so feel free to pass this along to anyone who might enjoy the read.

This month, we’ve got two deep dives. First, we dissect the courtroom tension in the Wikipedia-ANI case. Then, we dig into SEBI’s quietly impactful consultation paper on specified digital platforms, which is tightening the screws on financial influencers, or “finfluencers”. We've rounded up regulatory updates on new media engagement processes for the government, hoax bomb threats, and OTT regulation. We also cover a few other court decisions and share our team’s reading recommendations. Psst... this time, we are also leaving you with a sneak peek of next month’s deep-dive: an exploration of re-elected President Trump’s impact on technology law and policy.

Deep-dive

An eye for ANI? - Wikipedia versus ANI

The backstory    

ANI filed a defamation lawsuit against the Wikimedia Foundation (the non-profit behind Wikipedia), demanding INR 2 crores in damages and removal of certain content. The bone of contention? ANI says its Wikipedia page unfairly brands it as a "propaganda tool", accusing it of peddling fake news. After ANI’s attempts to edit this article were blocked, it fired back demanding the offending content be taken down.

Chapter 1: The defamation saga

With the Delhi High Court now involved, Wikimedia was ordered to disclose details of the three anonymous editors responsible for writing the disputed description of ANI. Wikimedia challenged the interim order before a Division Bench, arguing that it had legitimate interest in protecting the anonymity of its users.

Chapter 2: Parallel to-and-fro

While the main defamation matter is being deliberated, the court and ANI raised concerns relating to the non-disclosure of the editors’ names and the removal of the Wikipedia page about the case. Following this, in the latest order, the parties reached a compromise: Wikimedia agreed to serve summons on the three editors and provide redacted proof of service to ANI.

The primary defamation case continues before a single bench.

How does Wikipedia actually work?

Wikipedia is a free online collaborative encyclopedia, meaning you can edit pretty much any page. Wikimedia itself doesn't write the content; instead, it has volunteers (Wikipedians) who write and edit the articles on the website. They research, create summaries, add relevant sources and continue editing to update these articles. The collaborative process on Wikipedia lets you view the entire edit history and discussions behind every page.

Some pages are locked or protected to prevent chaos (after all, the internet can get wild!); these can only be edited by select volunteers enjoying administrator privileges. Administrators are Wikipedians with a history of contributing to Wikipedia through edits, discussions, and general maintenance. Their privileges include deleting pages, blocking users, and rolling back edits on a particular page, among others. Volunteers apply for nomination, and the community decides whether a user becomes an administrator. The foundation provides the tech infrastructure but is hands-off when it comes to editing or determining the duties, roles or responsibilities of the volunteer community. This is why the site argues that it is a platform, not a publisher.
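The protection model described above boils down to a simple permission check: each page carries a protection level, and a user's roles determine whether they may edit it. The sketch below is a hypothetical illustration of that concept only, not MediaWiki's actual code (the real permission system is far richer):

```python
# Hypothetical sketch of Wikipedia-style page protection.
# Level names and roles are illustrative, not MediaWiki's real configuration.

PROTECTION_LEVELS = {
    "none": set(),                       # anyone may edit
    "semi": {"autoconfirmed", "admin"},  # established users only
    "full": {"admin"},                   # administrators only
}

def can_edit(page_protection: str, user_roles: set) -> bool:
    """Return True if a user with the given roles may edit the page."""
    required = PROTECTION_LEVELS[page_protection]
    if not required:  # unprotected page: open to all
        return True
    # the user needs at least one of the required roles
    return bool(required & user_roles)

print(can_edit("full", {"reader"}))         # False: locked page, ordinary user
print(can_edit("full", {"admin"}))          # True: administrator privileges
print(can_edit("semi", {"autoconfirmed"}))  # True: established contributor
```

This mirrors why ANI's own attempts to edit the disputed article were blocked: without the roles the page's protection level demands, the edit simply isn't allowed.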

Wikipedia: an intermediary?

It was reported that the Ministry of Information and Broadcasting (MIB) issued a notice to Wikipedia — citing complaints about bias and inaccuracies on the platform and questioning whether Wikipedia should be classified as a publisher — making it liable for the content hosted on its site.

Currently, Wikipedia enjoys safe harbour immunity under the IT law — and can't be held responsible for user-generated content. The Intermediary Guidelines, 2021 (IT Rules, 2021) require platforms like Wikipedia to make a "reasonable effort" to stop illegal content.

Most recently, Wikipedia responded to media reports suggesting MIB’s questioning and intervention, saying that it hasn’t received any official government communication regarding its editorial practices so far.

In 2022, the Ministry reportedly emailed Wikipedia about edits to cricketer Arshdeep Singh’s page, which falsely linked him to a Sikh separatist group after a controversial cricket match. Interestingly, this email came seven hours after the story had already made headlines. Talk about a fastball!

What next?          

This case could be a stress test for content control on Wikipedia, with other media outlets, like Republic TV, already eyeing similar legal moves. Experts worry this could create a chilling effect on Wikipedia’s editors, leading to lower-quality content as volunteers hesitate to contribute, fearing lawsuits. The outcome could also have wider implications for the interpretation of the IT Rules, 2021, influencing the regulation of user-generated content platforms.

Article takedown

On October 21, 2024, Wikipedia suspended access to the article on the lawsuit — marking the first time an English Wikipedia page has been taken down by a court order. It is worth noting that since 2012, Wikipedia has received about 5,500 content takedown requests; it has complied with hardly a dozen. Founder Jimmy Wales is committed to battling this issue in court for the long haul.

Finfluencers: Fin.?

SEBI’s crusade

SEBI has been on a mission to clean up (what it views as) the Wild West of unregulated financial advice. After removing over 15,000 content sites run by unregulated finfluencers, it issued a resolution banning board-regulated entities from associating with unregistered individuals offering financial advice or making performance claims about securities. Fast forward to August, and SEBI amended regulations to bar associations of regulated entities, including depositories, stock exchanges, and intermediaries, with unregulated entities in any form.

The consultation paper

Last month, SEBI dropped a consultation paper proposing that platforms hosting financial and securities market content, aka specified digital platforms (SDPs), police content by adopting measures that are both preventive (stopping unqualified creators from posting) and curative (removing or disabling offending content). The catch? The proactive monitoring SEBI seems to envisage swims against the current notice-and-takedown model under India's IT law. This could result in much tighter controls on finfluencers’ content.

Here’s what SEBI is considering in its consultation paper:

  •    Ads have a new meaning: Now, even organic posts promoting financial services could count as ads, not just paid promotions.
  •    Platforms must be the new content policemen: Platforms will need technical tools, systems, and expertise to determine whether content relates to the securities market; whether it's kosher; and if not, whether the creator is SEBI-registered. This means platforms will have to play Sherlock to check if content includes a recommendation or advice, or if it redirects a user to an external medium with a ‘call to action’.
  •    Cap on creators: Only SEBI-registered entities can post securities-related content. Even educational content cannot link to any securities-related products or services, meaning no “click here for stock tips” calls.
  •    Reporting is a must: Platforms must share data with SEBI when requested, and act on feedback.
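Conceptually, the preventive checks above amount to a pre-screening filter: decide whether a post is securities-related, check the creator against a registry, and flag any ‘call to action’. The sketch below is purely illustrative — the keyword rules, registry, and decision labels are all made up for this example; a real compliance system would need far more sophisticated classification than substring matching:

```python
# Illustrative pre-screening filter in the spirit of SEBI's proposal.
# The keyword lists and the creator registry below are entirely hypothetical.

SECURITIES_KEYWORDS = {"stock", "ipo", "portfolio", "nifty", "futures"}
CALL_TO_ACTION = {"click here", "join my channel", "dm for tips"}
REGISTERED_CREATORS = {"RIA-000123"}  # stand-in for a SEBI registry lookup

def screen_post(text: str, creator_id: str) -> str:
    """Return 'allow', 'review', or 'block' for a draft post."""
    body = text.lower()
    is_securities = any(k in body for k in SECURITIES_KEYWORDS)
    has_cta = any(c in body for c in CALL_TO_ACTION)
    if not is_securities:
        return "allow"   # out of scope for securities-content rules
    if creator_id not in REGISTERED_CREATORS:
        return "block"   # preventive: unregistered creator, securities content
    if has_cta:
        return "review"  # registered creator, but the CTA needs a closer look
    return "allow"

print(screen_post("My weekend hike photos", "anon-1"))             # allow
print(screen_post("This stock will 10x, DM for tips!", "anon-1"))  # block
print(screen_post("How IPO allotment works", "RIA-000123"))        # allow
```

Even this toy version hints at the cost SEBI's proposal shifts onto platforms: every post must be classified before (preventive) or after (curative) publication, rather than only upon receiving a takedown notice.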

What does this mean?

If these proposed rules go through, the growing tribe of finfluencers will be impacted and platforms will be responsible for ensuring that financial advice and other securities-related content complies with SEBI regulations. Platforms will need to recalibrate their content moderation systems. The responsibility to ‘expertly’ judge content now shifts to the platform itself. This could mean automating pre-screening or hiring whole teams to vet posts—an expensive and labour-intensive shift.  This raises a crucial question: can platforms truly be expected to police every piece of financial advice or content on their sites?

The Supreme Court has answered this clearly, holding that platforms can’t be held accountable for every piece of content unless they have “actual knowledge” (a court or government order) that the content is unlawful. Platforms don’t have editorial control over the content users post, and they can’t be expected to read every post, check every comment, and vet every video. With millions of posts flooding these platforms every minute, expecting them to keep tabs on all of it is like asking a librarian to read every book before it hits the shelves. Not very realistic!

Connecting the dots

Govt's new media SOPs for slicker communication

The government’s set for a media makeover. Following PM Modi’s directive to ministers and secretaries to ensure the public is effectively informed about government decisions and achievements, a series of high-level meetings has paved the way for fine-tuning the government’s media engagement.

MIB Secretary Sanjay Jaju has been particularly vocal about the importance of speed, emphasizing the need to act quickly during the "golden hour" of social media — when misinformation spreads like wildfire. Following this, Union Cabinet Secretary T V Somanathan reportedly issued guidelines to Union Secretaries, setting clear dos and don’ts for government media interactions. The SOPs say: only finalized decisions should be shared; speculation on ongoing proposals should be avoided; and negative coverage should be met with factual, official statements, after coordinating with the relevant minister’s office. There’s also a push to limit personal credit for government decisions and keep sponsored media events in check (such events can now only happen with MIB’s prior approval).

Who’s responsible for pulling down hoax bomb threats?

The IT Ministry has pulled up X (formerly Twitter) for its alleged mishandling of hoax bomb threats that wreaked havoc on more than 150 flights — costing over INR 600 crores. As the investigation faltered, with Delhi Police struggling to get necessary user data from X, the government called out X’s inaction — accusing it of acting as an accessory to the threat. To address this, IT Ministry Joint Secretary Sanket S Bhondve chaired an urgent meeting with company representatives on October 22, 2024. Cybersecurity agencies blocked around 10 suspicious accounts. Civil Aviation Minister K. Rammohan Naidu hinted at the possibility of adding offenders to a “no-fly” list.

The questions remain: how would liability in these cases work? If threats aren’t pulled down by platforms and, heaven forbid, a bomb goes off, would liability still lie with the platforms? Will posts be ordered taken down only once the content is proven to be a hoax? These questions raise compliance issues, especially for responsible platforms navigating a tightening safe harbour regime.

Small win for content freedom (for now)

The Supreme Court dismissed a public interest litigation (PIL) seeking establishment of a regulatory body for OTT platforms. The petitioners argued that the lack of regulation for online content was leading to an increase in explicit scenes, violence, and harmful material, with no proper warning or age restrictions. They cited recent Netflix drama IC 814: The Kandahar Hijack, which faced backlash for alleged inaccuracies as an example of unregulated content. However, the three-judge bench dismissed the PIL, holding that regulatory matters should be left to the executive branch and handled through multi-stakeholder consultations, not judicial intervention. This shows the court’s reluctance to step into the policy-making arena, leaving the OTT regulation debate squarely in the hands of policymakers. The question remains: how will the balance between content oversight and creative freedom evolve?

Tech stories

Social media in line of fire?

Legal action is heating up against major platforms accused of fuelling addiction and mental health risks—especially for teens.

Addiction-related lawsuits: Last year, 41 US states sued Meta for allegedly addicting teens to Facebook and Instagram. They argued that Meta was deploying features to promote compulsive use of its apps, and misleading the public about its negative effects.

Last month, a federal judge in California sided with 34 states and allowed some of these claims to proceed to litigation. This ruling is likely to clear the way for more evidence-gathering and a trial. During the proceedings, the judge noted that Section 230 of the Communications Decency Act, 1996 provides immunity against some of these claims; for example, states cannot challenge certain platform features such as ‘infinite scroll’. What issues can be challenged under the umbrella of deceptive and unfair business practices remains unclear.

TikTok in similar turmoil: It really seems to be a tumultuous time for TikTok (we wrote about TikTok’s struggles in the US in our previous edition) — the company is facing a lawsuit for allegedly using an addictive algorithm harmful for children.

Global trend: Brazil’s consumer rights group, the Collective Defense Institute, has filed lawsuits against Meta and TikTok demanding compensation of approximately USD 500 million for failing to create mechanisms that prevent platform addiction. Earlier this year, the European Union opened an investigation into Meta over its addictive effects on children. The United Kingdom is considering a bill that could mandate social media apps to be less addictive for teenagers. Most recently, Australia has proposed a ban on social media for children under the age of 16. Prime Minister Anthony Albanese cited risks to children’s physical and mental health from excessive social media use, highlighting harmful depictions of body image aimed at girls and misogynistic content aimed at boys.

Perplexing times for Perplexity AI?

Perplexity, a rising star in the generative AI search space, is making waves — though not all of them are the kind hoped for.

Suing stories: WIRED has accused the company of scraping its paywalled content without permission — ignoring its ‘no scraping’ rules and plagiarizing content in its responses.

News Corp (owner of the New York Post and the Wall Street Journal) also filed a lawsuit against Perplexity, alleging the same. Per News Corp, Perplexity is siphoning readers and revenue, and tarnishing its brands by attributing false information to them; it is demanding a stop to Perplexity’s use of its content, and the destruction of existing databases containing it.

But, I’m not the only one: Generative AI models are facing copyright infringement lawsuits and scrutiny across the board. While courts are resolving the questions of copyright law and its impact on these models, some AI players, including Meta and OpenAI, are entering into licensing deals with news organizations and content providers to avoid similar legal showdowns. Learning from its experience, Perplexity has worked out a revenue sharing program for publications like Forbes and TIME.

From the courtrooms to your inbox

  •    Star Health and Allied Insurance, one of India’s largest health insurers, was hacked in September, exposing sensitive customer data. Personal details—including full names, phone numbers, tax info, and medical diagnoses—were reportedly circulating on Telegram through chatbots. The hacker, known as xenZen, claimed to have access to data on over 3.1 crore customers. In response, Star Health filed a case in the Madras High Court, seeking an injunction against Telegram from sharing the leaked data. In October, the court directed Telegram to block and delete any posts or chatbots linked to the breach.
Reading Reccos
  •    Kunal Talgeri in The Arc shares Sarvam AI’s Indic LLMs and how they’re building for India.
  •    For some nostalgia, The Verge published a series of articles about the importance of the year 2004 in tech history. Tune in for nuggets on YouTube, iPod, Facebook and music streaming.
  •    Internet Freedom Foundation’s detailed analysis of state-level digital media and influencer-centric policies.

Teaser


Source: Imgflip

Donald Trump has won the Presidential Elections in the United States of America. For the first time in 20 years, the Republicans won the popular vote and have taken control of the Senate too.

Promising to undo many of President Biden’s policies, Trump is likely to roll back the AI Executive Order, which created AI guardrails and built an emerging technology regulatory framework. Trump has picked Elon Musk to lead a newly created Department of Government Efficiency, aptly abbreviated to DOGE. (Interestingly, Dogecoin surged over 10 percent, right after this announcement!) Former Republican presidential candidate, Vivek Ramaswamy will work with Musk in this department to drive large scale structural reform in government bureaucracy.

This presidency is also likely to see a different approach to antitrust enforcement. Federal Trade Commission chair Lina Khan may be asked to step down, and a lighter approach to merger approvals may be adopted.

We will deep-dive into the potential implications of a Trump presidency on technology policy globally in the next edition of the Ticker!

Shout-outs!

Our Partner Nehaa Chaudhari was ranked among the Top 15 Rising Lawyers in the ALB South India Rankings 2024!

Our colleague, Riya Kothari was part of India's delegation to the G20 Digital Economy Ministers Meeting under Brazil's Presidency. Reach out to us to speak on TMT, emerging tech, data protection, and more.

That’s all for now!

We’d love to hear your feedback, concerns or issues you’d like us to cover. Or, you could just drop in to say hi.

We are available at contact@ikigailaw.com. Follow us on LinkedIn, Facebook and Twitter to catch up on updates.

Meet the Ticker team for this edition!

Isha Nirmal Nehaa Varunavi Vidushi

Challenge the status quo
