This week, a Google blog post highlighted the company's role in AI advancements over the past decade, emphasizing their commitment to "protect you from online scams where malicious actors deceive users to gain access to money, personal information, or both."
Google has invested in AI-powered scam-detection systems and enhanced their classifiers across Search. This enables them to analyze vast amounts of text on the web, identify coordinated scam campaigns, and detect emerging threats.
As a result, Google reports catching 20 times the number of scammy pages, helping keep search results legitimate and protecting users from harmful sites attempting to steal sensitive data.
Additionally, Google Chrome now employs Gemini Nano, an on-device large language model (LLM) on desktop, providing users with an extra layer of defense against online scams and making them "twice as safe from phishing and other scams compared to Standard Protection mode." This on-device approach offers immediate insights on risky websites and safeguards against unfamiliar scams.
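Google has not published implementation details for this feature, but the general shape of an on-device risk check can be sketched. In the toy example below, every name (`score_page_risk`, `should_warn`, `RISK_THRESHOLD`, the keyword table) is a hypothetical illustration, and a trivial phrase-matching scorer stands in for the local Gemini Nano model; the point is only the pattern: score page content locally and warn above a threshold, with nothing sent off-device.

```python
# Hypothetical sketch of an on-device scam check. A local model scores
# page text and the browser warns above a threshold. The keyword scorer
# below is a stand-in for real local LLM inference.

RISK_THRESHOLD = 0.7  # hypothetical cutoff for showing a warning

# Illustrative phrases with made-up weights; a real model would not
# rely on a fixed phrase list.
SCAM_SIGNALS = {
    "verify your account": 0.4,
    "call this number immediately": 0.5,
    "your computer is infected": 0.6,
}

def score_page_risk(page_text: str) -> float:
    """Return a risk score in [0, 1]; stands in for local model inference."""
    text = page_text.lower()
    score = sum(w for phrase, w in SCAM_SIGNALS.items() if phrase in text)
    return min(score, 1.0)

def should_warn(page_text: str) -> bool:
    # Runs entirely on-device: page content never leaves the machine.
    return score_page_risk(page_text) >= RISK_THRESHOLD

print(should_warn("WARNING: your computer is infected, call this number immediately"))  # True
print(should_warn("Welcome to our recipe blog"))  # False
```

The on-device design is what makes the "immediate insight" claim plausible: the check adds no network round trip and exposes no browsing data to a server.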
Google notes that an LLM suits this purpose because it can adapt to the varied and complex nature of websites, helping respond to scam tactics it has not seen before. They are already using this AI-driven method to protect users from remote tech-support scams.
Furthermore, Google is rolling out AI-powered warnings for Chrome on Android. When the on-device machine learning model flags a notification, users receive a warning with options to unsubscribe or view the blocked content. If the warning seems incorrect, users can choose to allow future notifications from that website.
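The warning flow described above (flag, warn, then let the user unsubscribe, view the content, or override the model) can be sketched as a small state machine. This is not Android's actual API; the class and value names below are hypothetical, and the sketch only illustrates how an override ("this warning seems incorrect") would let future notifications from that site bypass the warning.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class UserChoice(Enum):
    """Options offered when a notification is flagged (hypothetical names)."""
    UNSUBSCRIBE = auto()
    SHOW_CONTENT = auto()
    ALLOW_SITE = auto()  # user says the warning was wrong

@dataclass
class NotificationPolicy:
    # Sites the user has explicitly allowed after a false-positive warning.
    allowed_sites: set = field(default_factory=set)

    def handle_flagged(self, site: str, choice: UserChoice) -> str:
        """Resolve a model-flagged notification and return the outcome."""
        if site in self.allowed_sites:
            return "delivered"  # prior override: no warning is shown
        if choice is UserChoice.UNSUBSCRIBE:
            return "unsubscribed"
        if choice is UserChoice.SHOW_CONTENT:
            return "shown"
        # ALLOW_SITE: remember the override for future notifications.
        self.allowed_sites.add(site)
        return "allowed"

policy = NotificationPolicy()
print(policy.handle_flagged("example.com", UserChoice.ALLOW_SITE))    # allowed
print(policy.handle_flagged("example.com", UserChoice.UNSUBSCRIBE))  # delivered
```

Note how the second call delivers the notification without a warning: once the user marks a site as wrongly flagged, the override persists.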
Google added, "Scams are often initiated through phone calls and text messages that seem harmless at first but can lead to dangerous situations." To combat this, they have launched on-device AI-powered Scam Detection in Google Messages and Phone to protect Android users from such sophisticated scams.