How Booking.com Uses AI to Combat Online Fraud
When you book your vacation online, you place a lot of trust in the website you are using. For a giant company like Booking.com, maintaining this trust for millions of people daily is a massive task. So how do they achieve it? The answer lies in artificial intelligence.
Tackling Modern Cyber Fraud with AI Assistance
Handling vast amounts of data is a significant challenge for Booking.com. It’s not just about preventing the use of stolen credit cards; it also involves detecting fake hotel reviews, marketing scams, account takeovers, and other fraudulent attacks.
According to Siddhartha Choudhury, Product Manager at Booking.com, AI is used in a wide range of scenarios to mitigate security risks and fraud. The team handles massive volumes of data, including events generated by applications, infrastructure, messaging, and email.
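To make that concrete, here is a minimal, purely illustrative sketch of how signals from different event sources might be combined into a single risk score for a booking attempt. Nothing here describes Booking.com’s actual system; the event fields, weights, and thresholds are assumptions invented for this example.

```python
# Illustrative only: a toy risk scorer that combines signals from different
# event sources (application, infrastructure, email) into one fraud-risk
# score. All field names, weights, and thresholds are assumptions, not
# Booking.com's actual rules.

from dataclasses import dataclass, field


@dataclass
class SecurityEvent:
    source: str    # e.g. "application", "infrastructure", "email"
    signal: str    # e.g. "new_device", "ip_reputation_low"
    weight: float  # how strongly this signal suggests fraud


@dataclass
class BookingAttempt:
    booking_id: str
    events: list = field(default_factory=list)

    def risk_score(self) -> float:
        # Sum the weights of all suspicious signals attached to this attempt.
        return sum(e.weight for e in self.events)


def triage(attempt: BookingAttempt, threshold: float = 1.0) -> str:
    """Route an attempt to automatic approval, manual review, or block."""
    score = attempt.risk_score()
    if score >= 2 * threshold:
        return "block"
    if score >= threshold:
        return "manual_review"
    return "approve"


if __name__ == "__main__":
    attempt = BookingAttempt("bk-123", [
        SecurityEvent("application", "new_device", 0.6),
        SecurityEvent("infrastructure", "ip_reputation_low", 0.7),
        SecurityEvent("email", "disposable_address", 0.4),
    ])
    print(triage(attempt))  # -> "manual_review"
```

In a real system the weights would come from a trained model rather than hand-tuned constants, but the approve/review/block triage pattern is a common way to keep humans in the loop for borderline cases.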
The Challenge: Performance vs. Cost
Managing a security system of this scale is not easy. One of the biggest challenges is making different internal and external tools work together seamlessly. However, the greater challenge is balancing performance with cost.
Cyber attacks are constantly evolving, so defenses need to improve continuously. But better technology costs more money, and every decision becomes a trade-off between cost efficiency and performance.
AI in the Face of Cyber Threats
Instead of merely responding to problems after they occur, Booking.com uses AI to detect issues before they start. A significant part of this shift is moving their systems to the cloud, allowing for the use of smarter and faster tools.
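One common way to catch issues before they become incidents is to baseline what “normal” looks like and flag sharp deviations. The sketch below is a hypothetical example of that idea, using a rolling average over failed login counts; the metric, window size, and threshold are illustrative assumptions, not details of Booking.com’s pipeline.

```python
# Illustrative only: flag anomalous spikes in a metric (e.g. failed logins
# per minute) by comparing each new value to a rolling mean and standard
# deviation. The window size and z-score threshold are assumed values.

from collections import deque
from statistics import mean, stdev


def detect_spikes(values, window=30, z_threshold=3.0):
    """Yield (index, value) for points that deviate strongly from the recent baseline."""
    history = deque(maxlen=window)
    for i, v in enumerate(values):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (v - mu) / sigma > z_threshold:
                yield i, v
        history.append(v)


if __name__ == "__main__":
    failed_logins = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 90]  # sudden spike at the end
    print(list(detect_spikes(failed_logins)))  # -> [(10, 90)]
```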
According to Siddhartha, human security experts now have a team of digital assistants that enhance their efficiency and reduce operational stress.
Ensuring Fairness in AI
When granting AI the power to make crucial security decisions, it is essential to ensure it is not biased. Fairness, human oversight, explainability, and privacy form the foundations of their strategy.
AI is actively checked for bias to ensure it does not discriminate against individuals or groups. There is sufficient human oversight to identify false positives. AI decisions must be understandable to ensure accountability.
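As a simple illustration of what a bias check can look like, the hypothetical sketch below compares the false-positive rate of a fraud model across user groups and flags a large gap for human review. The group names, data, and tolerance are assumptions, not Booking.com’s actual audit process.

```python
# Illustrative only: compare false-positive rates of a fraud classifier
# across user groups. A large gap between groups is a signal to escalate
# the model to human review. Group names and the tolerance are assumed.

from collections import defaultdict


def false_positive_rates(records):
    """records: iterable of (group, predicted_fraud, actually_fraud) tuples."""
    fp = defaultdict(int)   # flagged as fraud but actually legitimate
    neg = defaultdict(int)  # all legitimate cases per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}


def flag_bias(rates, tolerance=0.05):
    """Return True if the gap in false-positive rates exceeds the tolerance."""
    return max(rates.values()) - min(rates.values()) > tolerance


if __name__ == "__main__":
    data = [
        ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]
    rates = false_positive_rates(data)
    print(rates)             # {'group_a': 0.33..., 'group_b': 0.66...}
    print(flag_bias(rates))  # -> True: the gap is large, escalate to a human reviewer
```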
Conclusion
Siddhartha believes that the next big step is not in finding new tasks for AI but in making all current tools work together efficiently. The goal is to build a system where all security components communicate and collaborate intelligently. For many users, AI provides greater reassurance that they will not fall victim to online fraud when they click the ‘book’ button for their vacation.