An article published yesterday by Google explains how its two latest algorithms, MUM and BERT, work together to remove offensive and spam results from the SERPs. The influence of artificial intelligence on search results is likely to grow even further in the coming months…
Google published an article yesterday explaining how artificial intelligence, and specifically the MUM and BERT models, is used in its systems to fight spam and remove potentially offensive or dangerous content from search results.
Google takes the example of people in personal crisis and explains how the engine handles this case: "Thanks to our latest AI model, MUM, we can now automatically and more accurately detect a wider range of personal crisis searches. MUM can better understand the intent behind people's questions to identify when someone is in need, which helps us show more reliable and actionable information at the right time. We will begin rolling out MUM in the coming weeks to make these improvements."
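Google does not publish MUM's implementation, so as a purely illustrative sketch of the *task* being described — classifying a search query as a personal-crisis query and routing it to trusted resources — here is a toy Python version. All function names and the keyword list are hypothetical; a real system would use a learned model, not keyword matching:

```python
# Toy sketch of crisis-query detection. Google's actual MUM model is a large
# multimodal transformer; this keyword heuristic only illustrates the task
# (classifying a query's intent, then routing it differently).

# Hypothetical phrase list; a learned classifier would replace this entirely.
CRISIS_PHRASES = {
    "want to end it all",
    "nobody would miss me",
    "hotline",
}

def is_crisis_query(query: str) -> bool:
    """Return True if the query looks like a personal-crisis search."""
    q = query.lower()
    return any(phrase in q for phrase in CRISIS_PHRASES)

def search_response(query: str) -> str:
    """Route crisis queries to trusted resources instead of ordinary results."""
    if is_crisis_query(query):
        return "show_crisis_resources"  # e.g. helplines, trusted local partners
    return "show_regular_results"
```

The point of the quote above is precisely that a model like MUM can replace the brittle keyword list with a semantic understanding of intent, catching phrasings no static list would cover.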
It also aims to combat sites with violent, shocking or inappropriate content: "One way we achieve this is SafeSearch mode, which gives users the ability to filter out explicit results. This setting is enabled by default for Google Accounts belonging to people under the age of 18. And even if users choose to turn off SafeSearch, our systems still reduce unwanted explicit results for searches that aren't looking for them. In fact, our safety algorithms improve hundreds of millions of searches worldwide across web, image and video results every day. But there is still room for improvement, and we're using advanced AI technologies like BERT to better understand what you're looking for. BERT has given us a better understanding of whether searches are actually looking for explicit content, which helps us significantly reduce the likelihood of encountering surprising search results. This is a complex challenge that we've been addressing for some time, but in the last year alone this BERT improvement has resulted in a 30 percent reduction in unexpected shocking results. It has proven particularly effective in reducing explicit content for searches related to race, sexual orientation and gender, which can disproportionately affect women, especially Black women."
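The filtering logic Google describes has two inputs: is the result explicit, and does the query actually seek explicit content? As a minimal sketch of that decision, assuming hypothetical names throughout (the query classifier here is a stand-in keyword check, where Google uses BERT):

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    explicit: bool  # label assumed to come from an upstream content classifier

# Hypothetical stand-in for a BERT-style intent classifier: a keyword check
# keeps the sketch self-contained, but the quote makes clear Google uses a
# learned model to judge whether a query is really seeking explicit content.
EXPLICIT_INTENT_TERMS = {"explicit", "nsfw"}

def query_seeks_explicit(query: str) -> bool:
    return any(term in query.lower() for term in EXPLICIT_INTENT_TERMS)

def filter_results(query, results, safesearch_on):
    """Drop explicit results when SafeSearch is on, or when it is off but
    the query does not appear to be looking for explicit content."""
    if safesearch_on or not query_seeks_explicit(query):
        return [r for r in results if not r.explicit]
    return list(results)
```

This mirrors the behaviour in the quote: with SafeSearch off, explicit results are still suppressed unless the query itself signals explicit intent.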
And the article continues on MUM: "MUM can transfer its knowledge across the 75 languages it has been trained in, which can help us scale safety protections around the world much more efficiently. When we train a MUM model to perform a task, such as classifying the nature of a query, it learns to do so in all the languages it knows."
The article also details the use of MUM to combat spam and black-hat practices: "For example, we use AI to reduce useless and sometimes dangerous spam pages in your search results. In the coming months we will use MUM to improve the quality of our spam protection and extend it to languages for which we have very little training data. We will also be able to better identify personal crisis searches around the world, working with trusted local partners to display actionable information in several additional countries."
Given that Google detects 40 billion spam pages every day, it is safe to say we have not finished watching MUM and BERT weave their respective webs through Google's search engine algorithms…
Different levels of spam detection by Google. Source: Google