- A penetration tester found flaws in Eurostar's chatbot, including weak input validation and an HTML injection issue.
- Eurostar says customer data was never at risk. The vulnerability has since been fixed.
- Palo Alto Networks warns that rapid AI adoption will expand the cloud attack surface through misconfigurations and non-human identities.
Eurostar’s recently introduced artificial intelligence-based customer support software may contain cybersecurity vulnerabilities, posing a number of potential risks, experts have warned.
Penetration testing researchers found that the chatbot only properly validated the most recent message in a conversation. This means earlier messages could be modified to carry malicious content: anything from disclosure of system information to (potentially) leakage of confidential customer data.
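The flaw described above can be illustrated with a minimal sketch. The function names and the toy safety check below are hypothetical; the chatbot's real internals have not been published.

```python
def is_safe(message: str) -> bool:
    """Toy check: flag messages containing obvious script fragments."""
    lowered = message.lower()
    return "<script" not in lowered and "javascript:" not in lowered

# Flawed pattern: only the newest message is inspected, so a tampered
# earlier message in the resubmitted history reaches the model unchecked.
def validate_last_only(conversation: list[str]) -> bool:
    return is_safe(conversation[-1])

# Safer pattern: re-validate the entire history on every request,
# since the client controls (and can modify) old messages.
def validate_full_history(conversation: list[str]) -> bool:
    return all(is_safe(m) for m in conversation)

chat = ["Hello", "<script>alert('x')</script>", "What time is my train?"]
print(validate_last_only(chat))     # True  - tampered history missed
print(validate_full_history(chat))  # False - injection caught
```

The key design point is that a chat history echoed back by the client is untrusted input on every request, not just on the message that was most recently typed.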
Fortunately, Eurostar had not linked its customer information database to the chatbot, so there was no immediate risk of a data breach when the flaw was discovered.
Eurostar said: “Our customers were never at risk.”
Experts have discovered other vulnerabilities in the system. These include chat IDs and message IDs that are not properly validated, as well as HTML injection flaws that allow JavaScript to be executed directly in the chat window.
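The HTML injection flaw mentioned above typically arises when user-supplied text is inserted into the chat window as raw markup. A minimal sketch of the standard mitigation, using Python's standard-library `html.escape` (the rendering function here is hypothetical, not Eurostar's code):

```python
import html

def render_chat_message(user_text: str) -> str:
    """Escape user input so any markup is displayed as text, not executed."""
    return f"<div class='msg'>{html.escape(user_text)}</div>"

payload = "<img src=x onerror=alert(1)>"
print(render_chat_message(payload))
# <div class='msg'>&lt;img src=x onerror=alert(1)&gt;</div>
```

Because the angle brackets are converted to `&lt;` and `&gt;`, the browser renders the payload as inert text rather than creating an element whose `onerror` handler would run JavaScript in the chat window.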
The vulnerability appears to have been first discovered during a penetration test, with the researchers noting that “no attempt was made to gain access to other users’ conversations or personal data,” but warning that “as chatbots expand their capabilities, these same design flaws may become more serious.”
Eurostar confirmed that no customer data was compromised, stating: “The chatbot had no access to other systems, and most importantly, no sensitive customer data was at risk. All data is protected by customer login.”
Many organizations are rushing to adopt AI tools, and that rapid adoption is dramatically expanding attack vectors in the cloud, leaving organizations at greater risk than ever before.
