This College Communication Platform, ‘InSpace’, Uses TensorFlow.js For Toxicity Filters In Chat

InSpace is a virtual communication and learning platform. It helps people interact, collaborate, and teach in familiar, physical ways, but in a virtual world. It is designed to recreate the fluid, personal, and interactive nature of a real classroom, helping participants break free of the “Brady Bunch” boxes of existing conferencing tools and creating a fun, natural, and engaging environment for interaction and collaboration.

  • Each person appears as a video circle that can move freely around the space. People next to each other can hear and engage in conversation; as distance increases, the audio fades, letting users drift into new conversations.
  • Visual social cues: when participants zoom out, they can see the entire space, and people can smoothly switch from class discussion to private conversations or group/team-based work, much like in a lab or classroom.
  • Teachers can address everyone at once, move between individual students and groups for more private discussions, and place groups of students in audio-isolated rooms for collaboration while everyone still belongs to one virtual space.

Collaboration with TensorFlow.js

InSpace integrated TensorFlow.js to provide a mechanism that warns users before they send toxic messages or inappropriate spam.

A simple approach to identifying toxic comments would be to check the message against a list of words, including profanity. But the goal is to identify toxic messages not just by the words they contain but also by their context. TensorFlow.js offers a pre-trained ML model for toxicity detection that could be easily integrated into InSpace’s platform. The model runs entirely in the browser, so users can be warned against sending toxic comments without their messages being stored or processed on a server.
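As a rough illustration, the pre-trained `@tensorflow-models/toxicity` package exposes a `load`/`classify` API along these lines; the threshold value and the `isToxic` helper below are assumptions for the sketch, not InSpace's actual code:

```javascript
// Minimal sketch of in-browser toxicity detection with the pre-trained
// @tensorflow-models/toxicity model. The 0.9 threshold is an assumption.

// Minimum confidence for the model to report a definite match per label;
// below this, a label's `match` field is null.
const TOXICITY_THRESHOLD = 0.9;

// `predictions` has the shape returned by model.classify():
// [{ label, results: [{ probabilities, match }] }, ...]
// Treat the message as toxic if any label reports a definite match.
function isToxic(predictions) {
  return predictions.some((p) => p.results.some((r) => r.match === true));
}

async function checkMessage(text) {
  // Loaded lazily so the (large) model download happens on first use.
  const toxicity = require('@tensorflow-models/toxicity');
  const model = await toxicity.load(TOXICITY_THRESHOLD);
  const predictions = await model.classify([text]);
  return isToxic(predictions);
}
```

Because classification runs client-side, the message text never has to leave the browser for the check itself.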


Performance-wise, the team found that running toxicity detection on the browser’s main thread would be detrimental to the user experience. So they decided to use the Web Workers API to separate message toxicity detection from the main application, making the two processes independent and non-blocking.

Web Workers communicate with the main application by sending and receiving messages, in which arbitrary data can be wrapped. When a user sends a chat message, it is automatically added to a queue and passed from the main app to the web worker. The worker classifies the message and, when the output is ready, sends the result back to the main application. Based on that result, the main application either delivers the message to all participants or warns the user that the message is toxic.
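The flow above can be sketched as follows; the file name `toxicity-worker.js`, the `{ id, text }` / `{ id, toxic }` message shapes, and the callback names are all assumptions made for illustration:

```javascript
// Main-application side of the worker flow: queue outgoing messages,
// hand them to the worker, and act on the classification result.
function createToxicityGate(worker, { broadcast, warnUser }) {
  const pending = new Map(); // id -> message text awaiting a verdict
  let nextId = 0;

  // The worker replies with { id, toxic }; route the queued text
  // either to all participants or back to the sender as a warning.
  worker.onmessage = (event) => {
    const { id, toxic } = event.data;
    const text = pending.get(id);
    pending.delete(id);
    if (toxic) warnUser(text);
    else broadcast(text);
  };

  // Called when the user hits "send": queue and hand off to the worker.
  return function send(text) {
    const id = nextId++;
    pending.set(id, text);
    worker.postMessage({ id, text });
  };
}

// In the browser, the worker side (toxicity-worker.js) would look
// roughly like this (shown as comments since it runs in worker scope):
//   let modelPromise = toxicity.load(0.9);
//   self.onmessage = async (event) => {
//     const { id, text } = event.data;
//     const model = await modelPromise;
//     const predictions = await model.classify([text]);
//     const toxic = predictions.some((p) =>
//       p.results.some((r) => r.match === true));
//     self.postMessage({ id, toxic });
//   };
```

In the app itself, the gate would be created once with `new Worker('toxicity-worker.js')` and the chat UI's broadcast/warning callbacks, and every outgoing message would pass through `send`.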

As a result, the toxicity detector is straightforward to integrate into an app and does not require significant changes to the existing architecture: the main app only needs a small “connector,” while the filter logic lives in a separate file.

[Figure: InSpace chat]
Shilpi is a contributor currently pursuing her third year of B.Tech in computer science and engineering at IIT Bhubaneswar. She has a keen interest in exploring the latest technologies, and she likes to write about different domains and learn about their real-life applications.