Marc Rivinius, Pascal Reisert, Sebastian Hasler, and Ralf Küsters, “Convolutions in Overdrive: Maliciously Secure Convolutions for MPC,” in Privacy Enhancing Technologies Symposium (PETS 2023), 2023, vol. 2023, no. 3.
Abstract
Machine learning (ML) has seen a strong rise in popularity in recent years and has become an essential tool for research and industrial applications. Given the large amount of high-quality data needed and the often sensitive nature of ML data, privacy-preserving collaborative ML is of increasing importance. In this paper, we introduce new actively secure multiparty computation (MPC) protocols that are specially optimized for privacy-preserving machine learning applications. We concentrate on optimizing (tensor) convolutions, which are among the most commonly used components in ML architectures, especially in convolutional neural networks but also in recurrent neural networks and transformers, and therefore have a major impact on overall performance. Our approach is based on a generalized form of structured randomness that speeds up convolutions in a fast online phase. The structured randomness is generated with homomorphic encryption using adapted and newly constructed packing methods for convolutions, which might be of independent interest. Overall, our protocols extend the state-of-the-art Overdrive family of protocols (Keller et al., EUROCRYPT 2018). We implemented our protocols on top of MP-SPDZ (Keller, CCS 2020), resulting in a full-featured implementation with support for faster convolutions. Our evaluation shows that our protocols outperform state-of-the-art actively secure MPC protocols on ML tasks such as evaluating ResNet50 by a factor of 3 or more. Benchmarks for depthwise convolutions show order-of-magnitude speed-ups compared to existing approaches.
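For intuition, the role of convolution-shaped correlated randomness in the online phase can be sketched with a simplified one-dimensional, passively secure analogue in Python. A "convolution triple" (A, B, C = conv(A, B)) plays the role of a Beaver triple; because convolution is bilinear, the usual Beaver identity carries over. This is an illustrative sketch only, not the paper's protocol: the actual construction handles tensor convolutions, achieves active security, and generates the triples with homomorphic encryption rather than a trusted dealer. All names here are hypothetical.

```python
import random

P = 2**31 - 1  # illustrative prime modulus

def conv(u, v):
    """Plain 1-D convolution of integer sequences mod P."""
    out = [0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] = (out[i + j] + ui * vj) % P
    return out

def share_vec(v, n=2):
    """Additively secret-share each entry of v among n parties."""
    rnd = [[random.randrange(P) for _ in v] for _ in range(n - 1)]
    last = [(vi - sum(col)) % P for vi, col in zip(v, zip(*rnd))]
    return rnd + [last]

def reveal_vec(shares):
    return [sum(col) % P for col in zip(*shares)]

def vec_add(u, v):
    return [(ui + vi) % P for ui, vi in zip(u, v)]

def conv_online(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Online phase: convolve shared X and Y using a convolution
    triple (A, B, C = conv(A, B)).  By bilinearity,
        conv(X, Y) = C + conv(E, B) + conv(A, D) + conv(E, D),
    where E = X - A and D = Y - B are opened (they look uniformly random).
    """
    E = reveal_vec([[(xi - ai) % P for xi, ai in zip(xs, az)]
                    for xs, az in zip(x_sh, a_sh)])
    D = reveal_vec([[(yi - bi) % P for yi, bi in zip(ys, bz)]
                    for ys, bz in zip(y_sh, b_sh)])
    z_sh = [vec_add(cs, vec_add(conv(E, bs), conv(az, D)))
            for az, bs, cs in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = vec_add(z_sh[0], conv(E, D))  # public term, added by one party
    return z_sh

# A trusted dealer stands in for the HE-based offline phase.
X, Y = [1, 2, 3], [4, 5]
A = [random.randrange(P) for _ in X]
B = [random.randrange(P) for _ in Y]
z_sh = conv_online(share_vec(X), share_vec(Y),
                   share_vec(A), share_vec(B), share_vec(conv(A, B)))
assert reveal_vec(z_sh) == conv(X, Y)
```

The online phase only opens the masked values E and D and does local linear work, which is why shaping the offline randomness like the convolution itself pays off.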
Marc Rivinius, Pascal Reisert, Sebastian Hasler, and Ralf Küsters, “Convolutions in Overdrive: Maliciously Secure Convolutions for MPC,” Cryptology ePrint Archive, Technical Report 2023/359, 2023.
Sebastian Hasler, Toomas Krips, Ralf Küsters, Pascal Reisert, and Marc Rivinius, “Overdrive LowGear 2.0: Reduced-Bandwidth MPC without Sacrifice,” Cryptology ePrint Archive, Technical Report 2023/462, 2023.
Abstract
Some of the most efficient protocols for multi-party computation (MPC) follow a two-phase approach where correlated randomness, in particular Beaver triples, is generated in the offline phase and then used to speed up the online phase. Recently, more complex correlations have been introduced to optimize certain operations even further, such as matrix triples for matrix multiplications. In this paper, our goal is to improve the efficiency of triple generation in general, and in particular for classical field values as well as matrix operations. To this end, we modify the Overdrive LowGear protocol to remove the costly sacrificing step and thereby reduce the round complexity and the bandwidth. We extend the state-of-the-art MP-SPDZ implementation with our new protocols and show that the new offline phase outperforms state-of-the-art protocols for the generation of Beaver triples and matrix triples. For example, we save 33% in bandwidth compared to Overdrive LowGear.
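The offline/online split described above can be illustrated with a minimal, passively secure two-party sketch in Python. Here a trusted dealer hands out the Beaver triple, standing in for the LowGear offline phase (which generates triples with homomorphic encryption); the names and modulus are illustrative, not taken from the paper.

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def share(x, n=2):
    """Additively secret-share x mod P among n parties."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reveal(shares):
    """Reconstruct a secret from all additive shares."""
    return sum(shares) % P

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Online multiplication of shared x and y with a Beaver
    triple (a, b, c = a*b).  The parties open eps = x - a and
    delta = y - b, then each computes its share of
        x*y = c + eps*b + delta*a + eps*delta.
    """
    eps = reveal([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    delta = reveal([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    z_sh = [(ci + eps * bi + delta * ai) % P
            for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + eps * delta) % P  # public term, added by one party
    return z_sh

# Trusted-dealer triple generation stands in for the offline phase.
a, b = random.randrange(P), random.randrange(P)
x, y = 12, 34
z_sh = beaver_mul(share(x), share(y), share(a), share(b), share(a * b % P))
assert reveal(z_sh) == (x * y) % P
```

The expensive part of such protocols is producing the triples; the paper's contribution is a cheaper offline phase, in particular dropping the sacrificing step SPDZ-style protocols use to check triple correctness.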
Pascal Reisert, Marc Rivinius, Toomas Krips, and Ralf Küsters, “Overdrive LowGear 2.0: Reduced-Bandwidth MPC without Sacrifice,” in ACM ASIA Conference on Computer and Communications Security (ASIA CCS ’23), Melbourne, VIC, Australia, July 10–14, 2023.