Product Spotlight: Toxic Language Identifier

As we communicate in an increasingly virtual world, many of us worry about our messages being misinterpreted. We keep in touch with friends and family throughout the day, and we don't want our words to be taken the wrong way!

Our Toxic Language Identifier can help.

It evaluates the impact of what you're saying and predicts how others will react to your words. From the obvious to the subtle, it decodes your meaning and returns the probability of each relevant label for your input.
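The product's internals aren't published, so the following is only a minimal sketch of the kind of output described above: one probability per label for a given piece of text. The function name, the label set (patterned after common toxicity labels), and the trivial keyword scorer are all invented placeholders, not the real model.

```python
# Hypothetical sketch only: the real classifier is a trained model,
# not a keyword lookup. This illustrates the output format described
# in the text: a probability for each toxicity label.

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def classify(text: str) -> dict[str, float]:
    """Return an (invented) probability for each label on the input text."""
    # Trivial stand-in "model": score by presence of flagged words.
    flagged = {"idiot": 0.9, "hate": 0.7}
    hit = max((w for w in flagged if w in text.lower()), default=None)
    score = flagged[hit] if hit else 0.05
    # Give the secondary labels a fraction of the primary score.
    return {label: round(score if label == "toxic" else score * 0.3, 2)
            for label in LABELS}

print(classify("You are an idiot"))
```

A real system would replace the keyword lookup with a trained multi-label classifier, but the caller-facing shape, text in and label probabilities out, would be the same.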

Our inspiration

We were driven not only by the need we see today, but also by a recent Kaggle competition.

Kaggle hosts challenges aimed at advancing predictive modelling and analytics. Teams from various backgrounds and fields compete to present the best approach to solving machine learning problems.

Real world application

In today's fast-paced world, we are constantly on the move. Most of us stay in the loop with our phones by reading articles during our commute. We don't want to disconnect.

News organisations welcome engagement, but are quickly overwhelmed by the sheer volume of comments. Moderating these contributions, especially on controversial topics, isn't easily done. Shutting down comment sections entirely can seem like the only solution, but it certainly isn't ideal.

Instead, an automatic moderator that could intelligently separate the "good" comments from the "bad" would be a huge benefit. It could pass constructive comments through and flag only the inappropriate ones.
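Such a moderator reduces to thresholding a toxicity score. The sketch below assumes each comment has already been scored with a toxicity probability; the function name and the 0.8 threshold are illustrative choices, not values from the product.

```python
# Hypothetical sketch of threshold-based moderation. Input: (comment,
# toxicity probability) pairs, as a classifier might produce them.

def split_comments(scored, threshold=0.8):
    """Separate comments into (flagged for review, auto-approved)."""
    flagged = [text for text, prob in scored if prob >= threshold]
    approved = [text for text, prob in scored if prob < threshold]
    return flagged, approved

scored = [("Great article, thanks!", 0.02),
          ("You people are idiots", 0.94)]
flagged, approved = split_comments(scored)
```

In practice the threshold trades off moderator workload against the risk of toxic comments slipping through, so a site would tune it per community rather than fix it globally.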

The bigger picture

Our algorithms can go beyond comments and can handle entire datasets. We process unstructured content and intelligently organise information, so you can focus on doing what you do best.

We are continuously improving our technology so that we can keep delivering leading products to our customers.

Want to learn more? Let's talk about how we can help you and your organisation.

Samuel Witherspoon