Remarks by Anatoliy Gruzd: House of Commons’ Standing Committee on Access to Information, Privacy and Ethics (ETHI) on the “Use of Social Media Platforms for Data Harvesting and Unethical or Illicit Sharing of Personal Information with Foreign Entities”

Meeting No. 94 ETHI – Standing Committee on Access to Information, Privacy and Ethics Monday, Nov 27, 2023 15:34 – 17:52

Thank you, Mr. Chair and Committee Members, for the opportunity to discuss the potential threat of foreign interference and the risks associated with the misuse of social media data.

I’m Anatoliy Gruzd, a Canada Research Chair in Privacy-Preserving Digital Technologies and a Professor at Toronto Metropolitan University. I’m also a co-director of the Social Media Lab, where I study social media’s impact on society, information privacy, and the spread of misinformation around conflicts, such as the Russia-Ukraine war.

While my comments today are my own, they are grounded in research conducted at the Social Media Lab and are informed by 15 years of working with various types of social media data.

As previous witnesses have testified, there are concerns that TikTok could be vulnerable to foreign interference, with major implications for our national security and individual privacy. But I would like to point out that a loaded gun is different from a smoking gun. Although TikTok has been framed as a national security threat, to date there is still no public evidence that the Chinese government has spied on Canadians through a backdoor or privileged access to the TikTok app.

That is not to say that there is nothing to worry about; there are valid concerns regarding the potential for TikTok and other platforms to be exploited by malicious actors for propaganda and radicalization. For example, Osama bin Laden’s 2002 “Letter to America” recently resurfaced on TikTok and was seen by millions. However, these concerns are not limited to any one platform. Rather, they represent broader challenges to the integrity and security of our information environment. As such, we must take a comprehensive approach to address these issues by compelling platforms to commit to the following:

1)    Adopt the principles of privacy by design and by default,

2)    Invest in expanding their Trust and Safety teams, and

3)    Share data with researchers and journalists.

I will discuss each of these points in more detail next.

Privacy by Design and by Default

Teaching digital literacy is important, but it’s unfair to place all of the responsibility on individuals. Social media platforms are complex, and the algorithms that decide what users do and don’t see remain black boxes. The only true choice we have is to disconnect from social media, but that is neither realistic nor practical: as our research has shown, most Canadians (94%) have one or more social media accounts.

It’s important to shift the focus from individual responsibility to developing strategies that compel companies to implement privacy by design and by default. Currently, it’s all too common for platforms to collect more data by default than necessary.

Trust & Safety on Social Media

However, even with privacy protection settings enabled, Canadians may still be vulnerable to malicious and state actors. According to a national survey that our lab released last year, half of Canadians reported encountering pro-Kremlin narratives on social media. This highlights concerns about the reach of foreign propaganda and disinformation in Canada, extending beyond a single platform.

In another example, earlier this year, Meta reported a sophisticated influence operation from China that spanned multiple platforms, including Facebook, Twitter, Telegram, and YouTube. The operation tried to impersonate EU and US companies, public figures, and institutions, posting content consistent with those identities before shifting to negative comments about Uyghur activists and critics of China.

To fight disinformation, platforms should be expanding their Trust and Safety teams, partnering with fact-checking organizations, and providing access to credible news content. Unfortunately, some platforms, like Meta and X, are doing the exact opposite.

To evaluate how well platforms are combating disinformation, Canada should create an EU-style Code of Practice on Disinformation and a Transparency Repository that requires large platforms to report regularly on their Trust and Safety activities in Canada.

Data Access for Researchers and Journalists

To further increase transparency and oversight, Canada should mandate data access for researchers and journalists, which is essential to independently detect harmful trends. In the EU, this is achieved through the new Digital Services Act.

TikTok currently doesn’t provide data access to Canadian researchers but does offer it to those in the US and Europe. Sadly, TikTok is not alone in this regard. X has recently shut down its free data access for researchers.

Conclusions

In summary, while it’s important to acknowledge the impact of foreign interference on social media, banning a single app may not be effective. It could also undermine trust in government, legitimize censorship, and create an environment for misinformation to thrive.

A more nuanced approach should consider the various forms of interference and develop strategies to address them directly, whether on TikTok or on other platforms. This may involve a wider adoption of “privacy by design and by default,” expanding Trust and Safety teams in Canada, and compelling platforms to share data with researchers and journalists for greater transparency and independent audits.