Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach

MSRDG International Journal of Computer Scientific Technology & Electronics Engineering

© 2026 by MSRDG IJCSTEE Journal

Volume 2, Issue 2

Year of Publication: 2026



Authors: R. Dinesh, A. Sathwika
Article ID: MSRDG-IJCSTEE-V2I2P102
DOI: https://doi.org/10.66037/MSRDG-IJCSTEE/V2I2P102

Abstract:

The proliferation of large-scale machine learning systems has intensified concerns regarding the inadvertent exposure of sensitive information embedded in learned representations. Existing privacy-preserving approaches commonly sacrifice predictive utility to achieve formal privacy guarantees, creating a fundamental tension between model performance and data confidentiality. This paper presents the Multi-Objective Autoencoder (MOAE), a principled framework that simultaneously optimizes reconstruction fidelity, downstream utility, and differential privacy constraints within a unified latent representation learning objective. The MOAE integrates an adversarial privacy discriminator with a task-oriented utility classifier and couples both components to a DP-SGD noise injection layer, enabling fine-grained control over the privacy-utility trade-off. We formally characterize the resulting optimization landscape and derive convergence guarantees under standard regularity assumptions. Extensive experiments conducted on MNIST, CIFAR-10, Adult Income, and Medical MNIST benchmarks demonstrate that MOAE consistently outperforms state-of-the-art baselines (DP-SGD, PATE, and RDP-VAE), achieving up to 3.1% higher accuracy and a 14.7% reduction in membership inference attack success rate at comparable privacy budgets (ε ≤ 3.0). These results establish MOAE as an effective and theoretically grounded solution for privacy-conscious representation learning.

Keywords: Differential privacy, autoencoder, representation learning, adversarial training, membership inference attack, privacy-utility trade-off, federated learning
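The abstract describes a composite objective combining reconstruction fidelity, a task-oriented utility classifier, an adversarial privacy discriminator, and DP-SGD gradient sanitization. The paper's exact loss formulation and weights are not given here, so the sketch below is purely illustrative: the term forms, the weights `lambda_util` and `lambda_priv`, the adversarial sign flip on the privacy term, and the DP-SGD hyperparameters (`clip_norm`, `noise_mult`) are all assumptions, not the authors' published method.

```python
import numpy as np

# Hedged sketch of a multi-objective loss in the spirit of MOAE.
# All term forms and weight values below are ASSUMPTIONS for illustration.

def mse(x, x_hat):
    """Mean squared error, standing in for the reconstruction term."""
    return float(np.mean((x - x_hat) ** 2))

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels.

    probs: (n, k) array of class probabilities; labels: (n,) int array.
    Used for both the utility classifier and the privacy discriminator."""
    eps = 1e-12
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + eps)))

def moae_objective(x, x_hat, util_probs, util_labels,
                   priv_probs, priv_labels,
                   lambda_util=1.0, lambda_priv=0.5):
    """Illustrative composite objective: minimize reconstruction error and
    utility loss while MAXIMIZING the privacy discriminator's loss (the
    adversarial sign flip), so the encoder hides the sensitive attribute."""
    l_rec = mse(x, x_hat)
    l_util = cross_entropy(util_probs, util_labels)
    l_priv = cross_entropy(priv_probs, priv_labels)
    return l_rec + lambda_util * l_util - lambda_priv * l_priv

def dp_sgd_step(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """DP-SGD-style gradient sanitization: rescale the gradient so its L2
    norm is at most clip_norm, then add Gaussian noise whose standard
    deviation scales with the clipping bound."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
```

In a full training loop the sanitized gradient of `moae_objective` would update the encoder, while the privacy discriminator is trained separately to minimize `l_priv`, giving the minimax dynamic the abstract alludes to.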