TechTicker Issue 61: December 2024

 

...and so are we, with this month’s Ticker! As we sip on our hot chocolate (even our Bangalore team is feeling the chills) — we’ve got some sizzling updates for you.

First up, we deep-dive into what Trump’s presidency means for the tech landscape. Spoiler: it involves tweets, tariffs, and Tesla. Next, we’ve got a quick parliamentary tracker (we’ll deliver the full version in the next edition). Plus, we bring you the latest tech headlines and track the surge of AI-related cases in our courts.

So, grab your favorite warm drink (tea, coffee, or kahwa), settle in, and let’s get into it.

Deep-dive

Trump-ing tech

What does Donald Trump’s comeback to the Oval Office mean for the tech-world?

A checkered past

In the good ol’ days of Trump’s first term, his relationship with Big Tech was, to put it mildly, contentious. So much so that he started his own social media platform — Truth Social. He openly accused tech giants of working against him and his administration, even threatening to shut down certain platforms over alleged anti-conservative bias. However, recent studies from Yale and NYU found little evidence to support these claims of systematic censorship. During his tenure, we also saw antitrust actions against some of these tech companies.

Changing tides?

However, we hear Trump 2.0 is a changed man. With his close ties to billionaire Elon Musk and his opposition to the growth of China’s tech companies, a more favorable environment for US tech companies and social media platforms is possible.

What’s on the table?

  •       Leading the AI race: The Trump administration seems keen to lead the AI race, focusing on competition with China. Outgoing President Joe Biden’s executive order on safe, secure, and trustworthy AI, which places guardrails on AI tech, may be repealed to prioritize industry.
  •       Social media platforms to keep a keen eye on policy developments: With the Republican Party’s concerns regarding censorship of right-wing views on social media platforms, Section 230 of the Communications Decency Act, 1996 — which offers safe harbor immunity to platforms hosting third-party content — may come under scrutiny. This will not be the first time that the Grand Old Party has attempted to limit this immunity. To top this, Trump has picked Brendan Carr to head the Federal Communications Commission. Carr is not averse to taking ‘broad-ranging actions’ against social media platforms. This could involve legislation that scraps Section 230 entirely or reinterprets it to impose greater obligations on social media platforms.
  •       Antitrust - it’s complicated?: The US consumer protection and antitrust agency, the Federal Trade Commission (FTC), under Lina Khan, was known for its firm antitrust enforcement against tech companies. However, with Trump picking Andrew Ferguson to replace her, it is unclear whether Ferguson will continue Khan’s unfinished probes against big tech — especially since he has expressed concerns about the FTC’s overreach of authority. Adding to this uncertainty, Trump also recently picked Gail Slater to lead the Justice Department’s antitrust efforts; she is expected to continue the crackdown on big tech.
  •       Chip and tariff wars: The tech industry will closely watch Trump’s approach to import tariffs (a key plank of his campaign). He plans to impose differential and hefty tariffs on imported goods: 60% on China, 25% on Canada and Mexico, and a blanket 10 or 20% on all other imports. These tariffs are clearly designed to make China sweat, yet they would also hit the wallets of everyday Americans — because who doesn’t love paying an extra $200 for a new phone? Tariffs could also drive up import costs for India in sectors like machinery and electronics.
  •       Crypto’s back?: Bitcoin holders are thrilled with two nominations — Paul Atkins as Chair of the Securities and Exchange Commission, and David Sacks, who would oversee tech policy in the US. Both are advocates for permissive regulation of cryptocurrencies.

What next?

The presidential inauguration on January 20, 2025, will officially kick off Trump 2.0’s tech revolution. Whether it’s more tariffs, more lawsuits, or just more X spats, the next four years are shaping up to be a bumpy ride for Silicon Valley. One thing’s for sure: the tech world will be paying close attention to every tweet, every policy, and every handshake with Elon Musk (whose work with Vivek Ramaswamy on the non-governmental taskforce Department of Government Efficiency will also be under scrutiny).

Connecting the dots

Safe harbor relook on the cards?

At the National Press Day celebrations on November 16, 2024, Union Minister for IT and Information & Broadcasting, Ashwini Vaishnaw, raised questions regarding the continued relevance of the safe harbor provision in India’s IT law. Reflecting on the provision’s origins in the 1990s, the Minister pointed out how dramatically the digital media landscape has evolved since then. He specifically called out platforms’ key role in spreading everything from memes to, well, fake news, terrorism, and misinformation. This re-thinking exercise dates back to the Digital India Bill days of last year (although the Bill has since been shelved).

India’s AI push

India is putting some serious muscle into AI. Science and Technology Minister, Dr. Jitendra Singh, recently unveiled India’s first AI Data Bank. This will give researchers, startups, and developers access to high-quality, diverse datasets to power scalable AI solutions across sectors like governance, healthcare, education, and even space exploration. The goal? To accelerate progress, fuel innovation, and boost national security. The IT Ministry is reportedly also working to develop a voluntary code of conduct for AI — to encompass everything from training to deployment — to ensure responsible AI use. This move aligns with India's broader efforts under the INR 10,732 crore-backed IndiaAI Mission, which seeks to ensure responsible and ethical AI development.

SEBI clarifies that platforms don’t have to apply to become SDPs

India’s market regulator, SEBI, has amped up its crackdown on unregulated financial advice this year. Bolstering these efforts, SEBI released a consultation paper suggesting that online platforms register as ‘specified digital platforms’ (SDPs) — platforms that would be responsible for keeping financial advice clean, clear, and free from the chaos of unqualified creators. Think of it as a digital bouncer for financial content: stopping the riffraff at the door (preventive) and tossing out the bad apples once they’ve snuck in (curative). SEBI also floated the idea of using AI and ML tools to curb unregulated financial advice and claims. Industry associations like NASSCOM and USISPF flagged concerns over the SDP proposal, suggesting that SEBI was overstepping its bounds and that making platforms responsible for policing content might be a bit much. Cue the dramatic twist: SEBI quickly clarified that being designated as an SDP isn’t mandatory for any platform. That said, stakeholders continue to raise serious concerns about these proposals, citing the regulatory approach’s larger impact on user activity and free speech.

Parliament tracker

Media ethics code: a step towards responsible content?

Responding to former Information and Broadcasting Minister Anurag Thakur’s questions on the implementation of the Digital Media Ethics Code (introduced under the IT Rules, 2021, to ensure online news publishers and OTT platforms transmit responsible content), Union Minister Ashwini Vaishnaw shared that the Ministry has appointed an authorized officer, formed an inter-departmental committee, and set up a self-regulatory body. He also mentioned that so far over 3,800 publishers have shared their entity details with the Ministry of Information and Broadcasting to help coordinate compliance efforts.

The mysterious case of the disappearing broadcasting bill

The Broadcasting Services (Regulation) Bill, initially released in November 2023, aimed to regulate content and distribution in the media sector. Fast forward to August 2024, when a second draft of the bill (watermarked and shared only with a select group of stakeholders) caused a stir. The opaque process and the broad scope of the draft raised concerns, eventually leading to its withdrawal.

Here’s where it gets even murkier: when questioned on December 4, 2024, about this secretive consultation process and the second draft of the bill, Mr. Vaishnaw ignored the 2024 version and only referenced the 2023 draft of the bill.

From the courtrooms to your inbox

  •       Copyright clash: It’s déjà vu with ANI. After suing Wikipedia for defamation (which we covered last time), the news agency is now taking on OpenAI for allegedly using its copyrighted content to train ChatGPT for commercial use — without asking. OpenAI, however, argues that its servers are based outside India, so the suit is not maintainable here. It has even removed ANI’s domain from its servers. But ANI’s not backing down, demanding INR 2 crores in damages and an order restraining OpenAI from storing or using its content. The court’s got four burning questions to untangle: whether OpenAI’s storage of ANI's content for training crosses the copyright line; whether using ANI’s content to generate responses on ChatGPT could be considered infringement; whether such use qualifies as fair use; and whether Indian courts can even touch this case (given OpenAI's US roots). OpenAI is facing copyright infringement suits in over 30 countries — the most prominent being the New York Times’ case against OpenAI and Microsoft. In an earlier edition, we discussed Perplexity (another generative AI tool) facing copyright infringement suits from news publishers.
  •       AI’s unauthorized use of artworks under scrutiny: A public interest litigation listed before the Delhi High Court raises concerns over AI’s unauthorized use of original artistic works. The petition calls for: amendments to copyright and IT laws to address AI and deepfake issues; restrictions on public access to AI image-generation platforms; a ban on the sale of AI-generated images created from artists' original works without consent; and the appointment of a nodal officer for AI-related copyright complaints. It has been tagged with two cases dealing with AI regulation and deepfakes, i.e. the Chaitanya Rohilla and Rajat Sharma cases, which we briefly discussed in our previous issue. (Psst: in the last hearing of the Rohilla and Sharma matter, we learnt that the IT Ministry has formed a committee to prepare a report on deepfakes.)
  •       Free speech v. influencers’ ‘honest reviews’: The Delhi High Court will examine the contours of freedom of speech and its reasonable restrictions in a case filed by nutritional supplement supplier San Nutrition to protect its trademark and reputation — in the context of disparaging comments by social media influencers about third-party goods and services. Other brands such as Bournvita and Physics Wallah have raised similar concerns. Notably, in both those cases, the court restricted such comments.
  •       CEO issued contempt notice for non-compliance with court order: In a case against YouTube, a Mumbai district court issued a contempt notice to Sundar Pichai, Google’s CEO, for failing to remove a defamatory video targeting Dhyan Foundation, a non-governmental organization.

Tech stories

Australia says no socials before 16

Australia just became the first country to legislate a social media ban for users under the age of 16, passing the Online Safety Amendment (Social Media Minimum Age) Bill 2024 on November 29, 2024.

Why the ban?: Prime Minister Anthony Albanese pointed to growing evidence of social media’s harms to young people, including mental health challenges, cyberbullying, and exposure to inappropriate content, as reasons for introducing this legislation. The government was also concerned about the constant notification pings’ negative impact on sleep, stress levels, and attention.

State governments backed the move, and a YouGov survey showed strong support for the bill among Australians. Parental groups have argued that the law is a necessity, since online environments have far-reaching effects on childhood development.

What’s in the bill?: It requires social media platforms (including TikTok, Facebook, Snapchat, Reddit, Instagram, and X) to enforce a minimum age of 16 to create or hold an account. These platforms are expected to take reasonable steps to ensure underage children do not use their services. Exempt categories include messaging services like WhatsApp, gaming platforms, and services providing educational content. Non-compliance could lead to fines of up to AUD 50 million. The law will kick in by late 2025.

Critics say: Opponents are worried about the feasibility and privacy risks of this law. Monitoring all users, including adults, just to keep kids off seems like a technical nightmare. And while ID checks aren’t mandatory, how platforms will verify users' ages remains a mystery. The law also restricts the spaces and information that younger children can access.

Social media companies have openly criticized the law. Meta, for example, notes that the process was “rushed” and that the bill hasn’t considered the measures already taken by platforms to protect young people.

A one-off?: While Australia’s move is groundbreaking, other countries are watching closely. Norway is considering a similar ban for kids under 15, and some U.S. states are debating age-verification laws. Australia’s bold step could inspire more countries, or it might serve as a cautionary tale.

Reading reccos

  •       Rest of World’s fascinating story on the rise of AI-generated images of deceased relatives during Mexico’s Day of the Dead festival.
  •       The Morning Context explores dark patterns used by new-age businesses and their impact on consumer choices.
  •       The Arc has a great read on Zepto’s unit economics and growth plans in quick commerce.

Shout-outs!

1) On November 14, 2024, along with UNESCO and the IT Ministry, we hosted the first consultation to document India’s AI readiness and provide policy recommendations. This initiative aligns with UNESCO's Recommendation on the Ethics of AI and the Safety and Trust pillar under the IndiaAI Mission.

2) On December 4, 2024, we convened a multi-stakeholder round table to discuss AI governance and a shared responsibility framework.

3) Nehaa Chaudhari (i) was in Goa for Digital Future Labs’ closed-door, invite-only workshop on AI sovereignty and implications for India; (ii) spoke at a session on ‘Emerging Global Trends in Data Protection and Privacy’ at SamvAAd 2024; and (iii) was part of Microsoft’s AI for Law Firms Summit in Singapore.

4) Sreenidhi Srinivasan was in Brussels for the IAPP Data Protection Congress.

5) Aman Taneja was quoted (i) in the Hindu Businessline for his take on SEBI’s proposals to regulate financial content; and (ii) in Scroll on the OpenAI v. ANI dispute.

6) Vidushi Sinha was part of a roundtable hosted by the Centre for Internet and Society (CIS) in partnership with ARTPARK and Trilegal, accompanying the launch of the report ‘AI for Healthcare: Understanding Data Supply Chain and Auditability in India’.

That’s all for now!

We’d love to hear your feedback, concerns or issues you’d like us to cover. Or, you could just drop in to say hi. We are available at contact@ikigailaw.com.

Follow us on LinkedIn, Facebook, and X to catch up on updates.

Signing off, the Ticker team for this edition: Isha, Nehaa, Nirmal, and Vidushi

Image credits: Medium
