MIT’s TextFooler can trick NLP models

This article was first published on our sister site, Whats New On The Net.

Of late, fake news, online racist rants, and incitements to violence or sexual misconduct, especially on large, busy social media sites, have forced networks like Facebook, Twitter, and even Google to build algorithms that detect the sentiment of sentences in order to control and eliminate such content.

Natural Language Processing (NLP) algorithms are used to train Machine Learning (ML) models to ‘understand’ the meaning of sentences the way a human would. BERT (Bidirectional Encoder Representations from Transformers), developed by Google, is one such model.
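To make this concrete, here is a minimal sketch of querying a pretrained sentiment classifier. It assumes the Hugging Face transformers library, which the article does not name; the pipeline helper simply downloads a default model fine-tuned for sentiment analysis.

```python
# A minimal sketch of querying a pretrained sentiment classifier, assuming
# the Hugging Face `transformers` library (the article names no toolkit).
from transformers import pipeline

# Downloads a default model fine-tuned for sentiment analysis.
classifier = pipeline("sentiment-analysis")

print(classifier("This film was a complete waste of two hours."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.9998}]
```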

Rolled out to Google Search in October 2019, BERT is set to transform the search engine for the better, say SEO experts, because it can extract the meaning of a sentence rather than just matching keywords.

Now, a team of scientists from MIT believes it has developed an algorithm, TextFooler, that generates adversarial text capable of fooling some of the most well-developed NLP models currently in use, including Google’s BERT.
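TextFooler works by ranking the words that matter most to a model’s prediction and then swapping them for semantically similar words until the predicted label flips. The sketch below illustrates that idea with a toy greedy search; it substitutes WordNet synonyms via nltk, whereas the actual TextFooler uses counter-fitted word embeddings and sentence-similarity checks, so treat this as an illustration rather than the authors’ implementation.

```python
# A toy word-substitution attack in the spirit of TextFooler: not the
# authors' implementation. Assumes Hugging Face `transformers` and `nltk`
# are installed (TextFooler itself uses counter-fitted embeddings, not WordNet).
import nltk
from nltk.corpus import wordnet
from transformers import pipeline

nltk.download("wordnet", quiet=True)
classifier = pipeline("sentiment-analysis")

def label_score(text, label):
    """Probability the classifier assigns to `label` for `text`."""
    out = classifier(text)[0]
    return out["score"] if out["label"] == label else 1.0 - out["score"]

def attack(text):
    words = text.split()
    orig_label = classifier(text)[0]["label"]
    base = label_score(text, orig_label)
    # 1. Rank each word by how much deleting it hurts the original prediction.
    importance = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        importance.append((base - label_score(reduced, orig_label), i))
    # 2. Greedily replace the most important words with WordNet synonyms
    #    until the predicted label flips.
    for _, i in sorted(importance, reverse=True):
        for synset in wordnet.synsets(words[i]):
            for lemma in synset.lemma_names():
                candidate = lemma.replace("_", " ")
                if candidate.lower() == words[i].lower():
                    continue
                trial = " ".join(words[:i] + [candidate] + words[i + 1:])
                if classifier(trial)[0]["label"] != orig_label:
                    return trial  # adversarial example found
    return None  # this simple search failed to flip the label

print(attack("The service at this restaurant was terrible."))
```

In the published attack, candidate replacements are additionally filtered for part-of-speech agreement and sentence-level semantic similarity, which is what keeps the adversarial text reading naturally to humans while still misleading the model.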


