
WHEN: Wednesday, February 1 – Saturday, March 25, 2023. See the timeline below for important due dates.
WHERE: Online (Zoom and GitHub)
WHO: Teams of 2-4 students currently enrolled in a degree-granting institution in Canada.
Winning Prototypes
- First Place: ReadProbe by Dake Zhang & Ronak Pradeep, University of Waterloo (demo, source code)
- Second Place: Google-Map-Review-Reliability-Checker by Yongxing Zhang, Jasmine Xu, Yundi Duan, Xunchao Zhang, University of Waterloo (source code)
- Third Place: FactBot for Discord by Emilie Chen, McGill University; Chun Ye, University of Waterloo & Wilfrid Laurier University; Marie Ezra Marin, McGill University; Lawrence Gu, McMaster University, University of Waterloo (source code)
About the Event
Chatbots like ChatGPT have recently gained attention for their ability to produce text that sounds like it was written by a human. For instance, this page includes text generated by ChatGPT. At the same time, these advancements in AI have raised both hopes and concerns about how the technology may be used and misused in society.
Our lab is interested in investigating whether it is possible to use this technology to fight the spread of misinformation and disinformation on social media. One approach is to train ChatGPT or similar language models to detect patterns or features of false information on social media. The models could also be used for fact-checking, by training them on a dataset of accurate information and having them verify the information in a given text. Another potential use is to deploy ChatGPT as a honeypot to attract and track “bad” bots that spread disinformation on social media.
The challenge of identifying and addressing false information online is complex and requires multiple approaches: there is no single, out-of-the-box solution that works in every case. This is where your ideas can help. Can you suggest other innovative ways to use ChatGPT or other AI tools to fight misinformation and disinformation online or on social media platforms?
The Challenge
We are inviting all Canadian undergraduate and graduate students in computer science, computer engineering, information science, data science, UI or related fields to propose and implement a prototype solution that will use AI technology in an innovative way to fight the spread of mis- and disinformation. Students who participate in the hackathon will have the opportunity to work with cutting-edge AI technology like the OpenAI API and make a difference in the fight against misinformation.
Specifically, your submission/solution must:
Requirement 1
Address one or more of the following challenges:
- Identifying and flagging misinformation, fake accounts or coordinated/inauthentic link sharing
- Providing real-time fact-checking of claims made by politicians and public figures, or claims made in breaking news stories
- Identifying and blocking propaganda and disinformation websites
- Analyzing and detecting deep fake videos or other types of manipulated media
- Monitoring and providing updates on the spread of misinformation on social media to the public
- Implementing one or more of the intervention strategies proven to be effective against online misinformation and manipulation (i.e., accuracy prompts, debunking, friction, inoculation, media literacy, rebuttals of science denialism, self-reflection tools, social norms, and warning and fact-checking labels).
Requirement 2
Incorporate one or more of the models available in the OpenAI API, in accordance with the API’s Usage Policies (FYI: OpenAI is the company behind ChatGPT). A minimal usage sketch follows the list below.
- Language Translation: The API allows developers to translate text from one language to another using GPT-3.
- Text Completion: The API can complete a given text by generating text that is contextually consistent with it.
- Text Summarization: The API can automatically summarize a given text and extract key points from it.
- Language Understanding: The API provides several functionalities to understand text, such as sentiment analysis, named entity recognition, and part-of-speech tagging.
- Text Generation: The API can generate text based on a given prompt or context, which can be useful for generating creative writing, chatbot responses, or other text-based applications.
- Question-Answering: The API can also be used to answer questions by extracting relevant information from a text.
- Image recognition and captioning: The API also allows developers to generate captions for images and identify objects in images.
- Video recognition and captioning: The API also allows developers to generate captions for videos and identify objects in videos.
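For teams new to the API, here is a minimal sketch of what a call might look like from Python. It assumes the `openai` Python package as available in early 2023 and an API key supplied via the `OPENAI_API_KEY` environment variable; the model name, prompt, and parameters are illustrative choices, not hackathon requirements.

```python
# Minimal sketch: summarizing the key claims in a social media post with the OpenAI API.
# Assumes the `openai` Python package (early-2023 interface) and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_claims(text: str) -> str:
    """Ask a chat model to list the key factual claims made in a piece of text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative choice; use any model your credits cover
        messages=[
            {"role": "system",
             "content": "Summarize the key factual claims in the user's text as a short bulleted list."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # near-deterministic output is easier to evaluate
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize_claims("Example post: A new study proves that coffee cures the common cold."))
```

The same pattern extends to the other capabilities listed above (summarization, question-answering, classification, and so on) simply by changing the system prompt and the input you pass in.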
Deliverables
At the end of the hackathon, your team is asked to submit the following:
- A live prototype solution, and
  - Hint: depending on the nature of your app, you may be able to deploy it for free at https://vercel.com/
- An open-source program with installation instructions uploaded to GitHub and licensed under GNU GPLv3.
- The source code should include a readme document that will (a) outline your solution, (b) describe all planned and implemented functionalities, (c) explain how your solution can be deployed to fight misinformation online and by whom, and (d) state whether and how you evaluated its effectiveness.
Your solution could take one of the following forms (among others):
- Web or mobile application
- Social media bot
- Browser extension
- Stand-alone script
- Analytical dashboard
- Interactive visualization
OpenAI Demo App: Fact Check Assistant
Check out our OpenAI-powered bot for simple fact checking. It was created as a proof of concept for the Social Media Lab’s 2023 Canadian #AI Misinformation Hackathon.
Please be aware that OpenAI’s models have limited knowledge of the world and of events after 2021, and that, while OpenAI has put safeguards in place, the system may occasionally generate incorrect or misleading information or produce offensive or biased content.
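For illustration only, here is a rough sketch of how a simple fact-check prompt of this kind could be wired up. This is not the Lab’s actual implementation: the prompt wording, model name, and helper function are our assumptions, and the output inherits all the limitations noted above.

```python
# Illustrative sketch of a simple fact-checking prompt, in the spirit of the demo bot above.
# Not the Lab's actual implementation; prompt text and model choice are assumptions.
# Requires the `openai` Python package (early-2023 interface) and OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

FACT_CHECK_INSTRUCTIONS = (
    "You are a cautious fact-checking assistant. Given a claim, say whether it appears "
    "true, false, or unverifiable, explain your reasoning briefly, and remind the user "
    "that your knowledge of events after 2021 is limited."
)

def check_claim(claim: str) -> str:
    """Return the model's assessment of a single claim."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption; any suitable OpenAI model works
        messages=[
            {"role": "system", "content": FACT_CHECK_INSTRUCTIONS},
            {"role": "user", "content": claim},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(check_claim("The first Moon landing took place in 1969."))
```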
Rules and Conditions
The first 25 teams that meet the following 3 conditions will automatically qualify to compete in the hackathon.
- Your team must have at least 2 members and no more than 4. (Team members do not have to be from the same university or even the same city.)
- At least one member of your team must possess an intermediate knowledge of machine learning and have experience working with APIs.
- All team members must be currently enrolled in a degree-granting institution in Canada (either undergraduate or graduate) in computer science, computer engineering, information science, data science, UI or related fields.
Resources
Since the OpenAI API is a paid service (usage limits and costs vary depending on the specific model being used), each team will receive a CAD$100 credit in OpenAI tokens to support your development work as part of this competition. More tokens can be requested from the organizers on a case-by-case basis.
Your team is welcome to use other free APIs and libraries to develop your prototype; however, the main requirement is that ANY external APIs and libraries you use must be publicly available and able to be integrated into an open-source program licensed under GNU GPLv3.
Timeline and Key Milestones
Judging Criteria
- Technical merit: The judges will evaluate the technical complexity and difficulty of the proposed solution, as well as the quality of the code, design and overall execution.
- Originality: The judges will also look for solutions that bring something new and innovative to the table, rather than simply rehashing existing ideas.
- Functionality: The proposed solution should be functional and demonstrate all of its intended features and capabilities.
- User experience: The judges will evaluate how well the proposed solution addresses the needs of its intended users and how easy it is for them to use.
- Compliance with rules and guidelines: The team must adhere to the rules and guidelines provided by the event organizers, such as using the OpenAI API and following its Usage Policies.
- Adherence to theme: The team should demonstrate that their proposed solution is directly related to the theme of the hackathon; that is, creating a technical solution to reduce the spread and/or impact of online mis- and disinformation.
Hackathon Funders, Judges, and Organizers
The hackathon is funded in part by the Government of Canada via the Digital Citizen Contribution Program and the Social Media Lab at Ted Rogers School of Management.
Judges
- Anatoliy Gruzd, PhD – Canada Research Chair | Royal Society of Canada College Member | Director of Research, Social Media Lab | Professor – School of Information Technology Management, Toronto Metropolitan University, Canada
- Philip Mai, MA, JD – Director of Business and Communications – Social Media Lab, Ted Rogers School of Management, Toronto Metropolitan University, Canada
- Felipe Soares, PhD – Research Collaborator | Senior Lecturer in Communications and Media – Social Analytics, London College of Communication, UK
Organizing Team
- Tiago Ribeiro – Research Professional and System Administrator | BSc, Computer Science, Toronto Metropolitan University
- Omar Taleb – Research Assistant | Creative School, Toronto Metropolitan University
- Mehwish Mujahid – Research Assistant | Business Technology Management, Toronto Metropolitan University
- Max Jingwei Zhang – Research Assistant | Bachelor of Science, Engineering Science, University of Toronto