Alice, Lovable partner to test AI coding systems for security flaws

As AI tools increasingly write code, build apps and act autonomously online, Alice will red-team Lovable’s infrastructure to identify vulnerabilities, improve safeguards and help protect systems before they can be exploited in the real world

AI security company Alice announced this week that it is partnering with AI development platform Lovable to test the resilience of systems that generate code and act autonomously, as companies race to address a growing set of risks tied to the spread of artificial intelligence across the internet. The collaboration will have Alice conduct advanced red-team exercises on Lovable’s AI infrastructure to identify vulnerabilities before they can be exploited in real-world settings.
The partnership reflects mounting concern across the technology industry that AI systems able to write code, build applications and publish content are creating new security challenges alongside their speed and convenience. Unlike traditional software, AI systems interpret language, infer intent and generate probabilistic outputs, creating openings for attacks that manipulate systems into producing unintended outputs or taking actions beyond their intended limits.
1 View gallery
Alice
Alice
Alice
(Photo: Courtesy)
Alice says it is building tools to help companies confront that shift. The company emerged from ActiveFence, a trust and safety firm founded in 2018 that became known for helping major technology platforms identify and disrupt harmful online activity, including extremist networks, disinformation campaigns, coordinated harassment and child exploitation networks. As generative AI systems became embedded in the same digital environments ActiveFence had spent years monitoring, the company broadened its focus from moderating user-generated content to understanding how AI systems themselves could be manipulated or misused, eventually rebranding as Alice.
Today, Alice says it works to safeguard what it describes as communicative technologies, the digital systems people use to create, collaborate and interact with one another and with machines. The company says its services span the full life cycle of AI systems, from adversarial red-team testing before deployment to runtime guardrails meant to detect manipulation attempts after systems are already in use. It says it now helps protect communicative technologies used by more than 3 billion people and works with seven of the 10 leading AI model companies.
Security researchers have increasingly warned that carefully designed prompts or pieces of content can be used to steer AI systems into unintended behavior. Techniques including prompt injection and indirect manipulation attacks have drawn particular attention as AI agents gain the ability to browse the web, write code and publish information. Alice says it addresses such risks in part through a research infrastructure known as Rabbit Hole, which aggregates billions of examples of harmful or manipulative online behavior to help analysts study evolving adversarial tactics and simulate how they might affect modern AI systems.
Those questions have become especially relevant for developer tools such as Lovable, which enables users to build full-stack applications and websites through natural-language interaction with AI. The company says users created more than 25 million projects on the platform in its first year, a sign of the rapid adoption of AI-powered development tools. Under the new partnership, Alice will carry out structured adversarial testing designed to simulate realistic misuse scenarios, including ambiguous instructions, indirect prompts and harmful intent embedded in otherwise legitimate interactions.
The goal, the companies said, is to study how Lovable’s systems behave under adversarial pressure and use the findings to strengthen safeguards, refine product policies and improve system resilience over time. Alejandra Arreola Ruiz, Lovable’s trust, safety and policy lead, said in a company blog post that as AI capabilities advance, so do the risks that accompany them, and that working with Alice would allow Lovable to proactively simulate real-world misuse scenarios and reinforce user protections.
Alice CEO Noam Schwartz said the rise of AI systems as tools through which people create, publish and interact online is shifting how companies think about safety. Rather than focusing only on moderating content after it appears, he said, companies increasingly must ensure that the systems generating and operating online services are resilient from the outset. The partnership with Lovable underscores a broader industry shift as AI systems move beyond generating content and take on a more active role in browsing, summarizing, recommending and in some cases publishing directly to the web.
Comments
The commenter agrees to the privacy policy of Ynet News and agrees not to submit comments that violate the terms of use, including incitement, libel and expressions that exceed the accepted norms of freedom of speech.
""