Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss
Journal
Proceedings of Machine Learning Research
Journal Volume
267
Start Page
32246
End Page
32277
ISSN
2640-3498
Date Issued
2025-07
Author(s)
Abstract
Lipschitz neural networks are well known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points, enabling Lipschitz models to provide better certified robustness. Using our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available on GitHub.
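As background on the construction the BRO layer builds on: a block (Householder) reflector Q = I − 2V(VᵀV)⁻¹Vᵀ is orthogonal for any full-rank block V, which is what makes such parametrizations 1-Lipschitz. The sketch below is an illustrative numerical check of that property only; the function name and shapes are our own, and the paper's actual layer parametrization may differ.

```python
import numpy as np

def block_reflector(V):
    """Build Q = I - 2 V (V^T V)^{-1} V^T from a full-rank n x k block V.

    Q is a block Householder reflector: symmetric and orthogonal,
    since P = V (V^T V)^{-1} V^T is a symmetric projection (P @ P == P),
    so Q @ Q.T = I - 4P + 4P^2 = I.
    """
    n = V.shape[0]
    # Projection onto the column space of V.
    P = V @ np.linalg.solve(V.T @ V, V.T)
    return np.eye(n) - 2.0 * P

rng = np.random.default_rng(0)
V = rng.standard_normal((8, 3))   # illustrative shapes
Q = block_reflector(V)
print(np.allclose(Q @ Q.T, np.eye(8)))  # orthogonality holds
```

Because Q is orthogonal, it preserves Euclidean norms, the key ingredient for certified-robustness guarantees in Lipschitz architectures.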
Event(s)
42nd International Conference on Machine Learning, ICML 2025
Publisher
ML Research Press
Type
conference paper
