Locking It Down


Photo by Jason E. Kaplan
David Beveridge of HiddenLayer

Security platform HiddenLayer offers protection for AI systems.

Share this article!

For almost two and a half gripping hours, the new Netflix thriller Leave the World Behind, starring Julia Roberts and Mahershala Ali, portrays a terrifyingly dystopian reality — one where an ambiguous cyberattack has destabilized the United States. No phones, no internet, no GPS. Communication is dismantled and information is sparse and disorienting. 

Perhaps what is most unsettling about the film, however, is its eerily close proximity to a reality we’re teetering toward. With targeted hacks and attacks, the cyber framework we’ve so blindly hung our survival upon will crumble, taking our civil society with it.

Yet one way in which companies can protect themselves against a cyberattack — of even a much lesser magnitude than depicted in sci-fi thrillers — is to secure their artificial intelligence and machine- learning (ML) models from being compromised. 

AI is a broad concept, referring to a computer or system that mimics the “thought” processes and tasks of a human, whereas ML is a subset of AI — the very technologies and methods that train a computer or system to analyze data, learn from it and improve it. And if you didn’t realize it, ML/AI are essential components of the applications we use in our daily lives, from Google Maps and social media to smart home devices and medical diagnostics. 

So it’s no surprise that companies are increasingly relying on ML/AI for their growth. Yet with innovation in the industry comes the need for security to avoid chaos and compromise. 

“Complexity is the biggest enemy of security,” says Bruce Schneier, a security technologist and lecturer of public policy at the Harvard Kennedy School. “AI systems have their own special vulnerabilities, in addition to all of the vulnerabilities that have to do with them being computers.”

To fill in the gaps in security, an innovative new platform called HiddenLayer uses a noninvasive software approach to observing and securing ML/AI.

The company offers a suite of security products that specialize in protecting ML/AI algorithms, models and their underlying data from adversarial attacks, vulnerabilities, and malicious code injections or data poisoning — which means giving the AI an opportunity to “learn” something incorrect to devastating consequences. 

For example, if an AI is being trained on distinguishing tanks from cars, a hacker could “trick” the AI into thinking that a tank is a car, using a set of images of tanks with fake bumper stickers on them. It’s these more unusual data poisoning scenarios that are not included in the initial tests and validation of AI systems. 

“AI is a powerful tool that you don’t want to just leave unhinged,” says HiddenLayer’s VP of engineering, David Beveridge. 

HiddenLayer’s products are mostly cloud-based but also allow the customer to self-host. “So we can set the systems up in your company’s cloud environments,” explains Beveridge. Additionally, the platform offers consulting services in cybersecurity, AI, reverse engineering and threat research.

According to Beveridge, “the vast majority [of companies] are completely unguarded as far as we can tell. It’s kind of like the internet in the ’90s.” 

The reason for this, says Schneier, is that “the market rewards features, scalability, speed, low cost; everything except security. For AI in particular, it’s a vast race to grab market share and profitability and monopoly status, and security is a minor afterthought. It’s also pretty new, and we’re just learning about those unique ML/AI vulnerabilities.”

As far as who is behind the attacks, Beveridge explains that neutral hackers are typically academic researchers, individuals or security organizations — all of whom will break into a system to expose its weaknesses and then publish their findings online, for reasons that might include building a reputation in the industry or brand credibility.

Overtly malicious attacks on ML/AI, however, tend to be at the hands of organized crime, state actors and even terrorist organizations. 

But Beveridge warns companies that any weaknesses they uncover themselves in their system have likely already been found by a third party. “So it’s foolish for a vendor to think hiding [the vulnerability] is going to work; it just means that only the bad guys have it … and the key motivators will quickly become money.”

In addition to financial gain, reasons for adversarial attacks on ML/AI might also include damaging competitors, cracking security and spreading misinformation to sway public opinion. 

HiddenLayer was born out of an actual cyberattack in 2019 on the platform’s first incarnation, an AI company called Cylance, founded in Austin, Texas. The antivirus company worked to prevent malware attacks by employing machine learning, but the hackers were able to bypass the company’s antivirus model. With that attack, Cylance’s team discovered vulnerabilities in its services.

“At that time, we didn’t really consider the attack on the AI itself, and that was a huge eye opener for us,” says Beveridge, who worked in cybersecurity at Cylance. 

Well before ChatGPT, many companies were picking up on AI. So it soon became obvious to Beveridge and his colleagues that, while they hadn’t secured their AI at Cylance, neither had other companies in the field. 

A Forrester consulting study, commissioned by HiddenLayer, found that 40% to 52% of participating companies were either still in the discussion phase regarding threats to their AI or they were using a manual process, meaning humans were tasked with keeping the assets secure. Meanwhile, the study reported that 86% of these companies were “extremely concerned or concerned” about the security of their ML/AI models.

According to consulting service Gartner, two in five organizations have had an AI security or privacy breach, where one in five were malicious attacks. For the year 2021, researcher and publisher Cybersecurity Ventures found that cyberattacks cost an estimated $6 trillion globally. 

“And so we saw, as soon as this takes over, there’s going to be a massive need in the market for being able to secure AI itself,” Beveridge says. 

A few years following the Cylance cyberattack, HiddenLayer was founded with the vast majority of its engineering based in Portland. To date, its clients include mostly larger enterprise companies — including finance, government and defense, and cybersecurity, with Microsoft having climbed aboard as an investor.

The platform is also launching a new product called SafeLLM — named after the “Large Language Model” used by ChatGPT — and will be aimed at protecting hosted models; basically, where a business is making use of an AI system that is off-site, like companies Anthropic and OpenAI. 

HiddenLayer can be viewed as adjacent to standard cybersecurity practices; in basic terms, a security framework observes the overall behavior of a system and throws up alarms if data appears fishy or misused.

Yet still, HiddenLayer operates in largely uncharted waters, as Beveridge calls their platform “painfully innovative.” And the pain is likely coming from explaining to customers what, exactly, HiddenLayer offers when there isn’t an established industry to compare it to. Beveridge uses the analogy of the stop-motion animated series “Wallace & Gromit,” where the dog Gromit lays down his own tracks while simultaneously driving the train forward. 

At a time when news headlines are screaming for the need to regulate AI — often from those who are behind its advancements — Beveridge and HiddenLayer are coming at it a little differently. 

“As a company, we’re not seeking to rein in AI. What we’re interested in doing is keeping malicious parties from using your AI against you,” says Beveridge. “In a way, we are protecting you from AI as well, because we’re protecting AI that’s being hacked and used incorrectly.”


Click here to subscribe to Oregon Business.