Cyber Security

AI’s High Energy Consumption: A Hacking Vulnerability

By James Hingley

A new type of cyber attack could increase the energy consumption of AI systems and cripple their efficiency.

Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures. These architectures enable faster inference and could bring DNNs to the low-power devices of the increasingly popular Internet of Things.

Faltering in the face of adversity?

However, it is unknown whether the computational savings provided by this approach are robust against adversarial pressure. In particular, an adversary may aim to slow down adaptive DNNs by increasing their average inference time, a threat analogous to denial-of-service attacks on the internet.

In the same way a denial-of-service attack on the internet can block a network and make it unusable, the new attack forces a deep neural network to tie up more computational resources than necessary and slow down its “thinking” process.

Increased concern over progress

In recent years, growing concern over the costly energy consumption of large AI models has led researchers to design more efficient neural networks. One category, known as input-adaptive multi-exit architectures, works by splitting up tasks according to how hard they are to solve, then spending only the minimum amount of computation needed to solve each one.
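To make the idea concrete, the following is a minimal sketch of an early-exit classifier in PyTorch. The class name MultiExitMLP, the layer sizes and the confidence threshold are illustrative assumptions, not the specific architectures studied in the paper: each block of the network is followed by a lightweight classifier, and inference stops at the first exit whose prediction is confident enough.

    # A minimal sketch of an input-adaptive multi-exit classifier.
    # Layer sizes and the 0.9 confidence threshold are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiExitMLP(nn.Module):
        def __init__(self, in_dim=784, hidden=256, num_classes=10, threshold=0.9):
            super().__init__()
            self.threshold = threshold  # confidence needed to stop early
            self.blocks = nn.ModuleList([
                nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
                nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
                nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
            ])
            # One lightweight classifier ("exit") after each block.
            self.exits = nn.ModuleList([
                nn.Linear(hidden, num_classes) for _ in self.blocks
            ])

        def forward(self, x):
            # Assumes a single input (batch size 1) so the confidence is a scalar.
            # Returns (logits, number of blocks actually evaluated).
            for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits), 1):
                x = block(x)
                logits = exit_head(x)
                confidence = F.softmax(logits, dim=-1).max()
                if confidence >= self.threshold:
                    return logits, i  # easy input: stop early, save compute
            return logits, len(self.blocks)  # hard input: ran the full network

    model = MultiExitMLP()
    sample = torch.randn(1, 784)
    logits, blocks_used = model(sample)
    print(f"exited after {blocks_used} block(s)")

Easy inputs trip an early exit after one or two blocks; hard inputs pay for the full network. The savings, and therefore the vulnerability, come from how often those early exits fire.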

However, this type of neural network opens up a vulnerability that hackers could exploit, as the researchers from the Maryland Cybersecurity Center outlined in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation.
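The sketch below, continuing from the toy model above, illustrates that idea in the simplest possible form: a PGD-style loop, chosen here purely for illustration and not a reproduction of the researchers' attack, nudges every exit's prediction toward the uniform distribution so that no exit becomes confident enough to stop early. The noise bound, step size and iteration count are arbitrary assumptions.

    # Conceptual illustration only: a bounded perturbation that lowers the
    # confidence of every early exit, pushing the input through the full network.
    # Reuses the MultiExitMLP model and the sample input from the sketch above.
    import torch
    import torch.nn.functional as F

    def all_exit_logits(model, x):
        # Run every block and collect logits from every exit head, ignoring
        # the early-exit check so gradients reach all exits.
        logits = []
        for block, exit_head in zip(model.blocks, model.exits):
            x = block(x)
            logits.append(exit_head(x))
        return logits

    def slowdown_perturbation(model, x, epsilon=0.03, step=0.005, iters=50):
        delta = torch.zeros_like(x, requires_grad=True)
        num_classes = model.exits[0].out_features
        uniform = torch.full((x.size(0), num_classes), 1.0 / num_classes)
        for _ in range(iters):
            loss = 0.0
            for logits in all_exit_logits(model, x + delta):
                # Pull every exit's prediction toward the uniform distribution,
                # so no exit is confident enough to stop early.
                loss = loss + F.kl_div(F.log_softmax(logits, dim=-1), uniform,
                                       reduction="batchmean")
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()  # descend on the loss
                delta.clamp_(-epsilon, epsilon)    # keep the noise small
            delta.grad.zero_()
        return delta.detach()

    noisy = sample + slowdown_perturbation(model, sample)
    _, blocks_used = model(noisy)
    print(f"after perturbation the model used {blocks_used} block(s)")

In this sketch the loss rewards perturbations that keep every exit undecided; when that succeeds, the network pays the full computational cost for an input it would otherwise have handled cheaply.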

When they assumed the attacker had full information about the neural network, they were able to ‘max out its energy draw’. When they assumed the attacker had limited or no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper co-author.

Theory vs. Actuality?

This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will quickly change, given the pressure within the industry to deploy lighter-weight neural networks, for example in smart home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could cause damage. But, he adds, this paper is a first step toward raising awareness:

‘What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.’