With the rapid development of cloud computing, outsourcing massive data and complex deep learning models to cloud servers (CSs) has become a popular trend, but it also introduces security problems. One is that a model stored on the CSs may be corrupted, leading to incorrect inference and training results. The other is that the privacy of the outsourced data and model may be compromised. Moreover, existing privacy-preserving and verifiable inference schemes suffer from low detection probability, high communication overhead, and substantial computation time. To address these problems, we propose a privacy-preserving and verifiable scheme for convolutional neural network inference and training in cloud computing. In our scheme, the model owner generates authenticators for the model parameters before uploading the model to the CSs. In the model integrity verification phase, the model owner and users can use these authenticators to check model integrity with high detection probability. Furthermore, we design a set of privacy-preserving protocols based on replicated secret sharing for both the inference and training phases, significantly reducing communication overhead and computation time. A security analysis demonstrates that our scheme is secure, and experimental evaluations show that it outperforms existing schemes in privacy-preserving inference and model integrity verification.
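The abstract does not specify the concrete construction, but as background, protocols based on replicated secret sharing are commonly built on a 2-out-of-3 additive scheme: a secret is split into three additive shares over a ring, and each of three servers holds two of them, so linear operations (such as the additions inside a convolution) need no communication. A minimal sketch of that standard building block, assuming the ring Z_{2^32} (the paper's actual ring and protocols are not stated here):

```python
import secrets

MOD = 2**32  # assumed ring Z_{2^32}; the paper's modulus is not specified


def share(x: int) -> list[tuple[int, int]]:
    """Split x into additive shares s1+s2+s3 = x (mod MOD).

    Party i receives the pair (s_i, s_{i+1 mod 3}) — the replicated layout."""
    s1 = secrets.randbelow(MOD)
    s2 = secrets.randbelow(MOD)
    s3 = (x - s1 - s2) % MOD
    s = [s1, s2, s3]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Any two parties together hold all three additive shares."""
    p0, p1 = shares[0], shares[1]  # p0 = (s1, s2), p1 = (s2, s3)
    return (p0[0] + p0[1] + p1[1]) % MOD


def add_local(sh_x, sh_y):
    """Secure addition: each party adds its two shares locally, no messages."""
    return [((a + c) % MOD, (b + d) % MOD)
            for (a, b), (c, d) in zip(sh_x, sh_y)]
```

Multiplication in such schemes additionally requires one round of communication between the servers; the communication and computation savings claimed above would come from how those interactive steps are organized.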