We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground truth ones. Our experiments show how adversarial training allows reaching state-of-the-art performance across different metrics when combined with a widely-used loss function like BCE.
https://imatge-upc.github.io/saliency-salgan-2017/
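To make the training scheme above concrete, here is a minimal PyTorch-style sketch of the two-stage setup the abstract describes: a generator is fitted with a BCE loss against ground-truth saliency maps, while a discriminator classifies generated maps versus real ones and supplies an adversarial term for the generator. The toy architectures, the loss weight alpha, and the omission of the saliency-map downsampling step are illustrative assumptions, not the configuration actually used in SalGAN.

# Hedged sketch of adversarial saliency training (not the official SalGAN code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy fully convolutional net mapping an RGB image to a 1-channel saliency map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy classifier over (image, saliency map) pairs: real vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )
    def forward(self, image, saliency):
        return self.net(torch.cat([image, saliency], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()
alpha = 0.05  # relative weight of the content (BCE) term; an illustrative value

def train_step(image, gt_map):
    """One alternating update of the discriminator and the generator."""
    # Discriminator: push real maps toward label 1, generated maps toward 0.
    fake = G(image).detach()
    opt_d.zero_grad()
    d_loss = bce(D(image, gt_map), torch.ones(image.size(0), 1)) + \
             bce(D(image, fake), torch.zeros(image.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator: BCE against the ground truth plus an adversarial term
    # that rewards fooling the discriminator.
    pred = G(image)
    opt_g.zero_grad()
    g_loss = alpha * bce(pred, gt_map) + \
             bce(D(image, pred), torch.ones(image.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for a batch of images and saliency maps.
img = torch.rand(2, 3, 64, 64)
sal = torch.rand(2, 1, 64, 64)
print(train_step(img, sal))

The alternating updates mirror standard GAN training; the key design choice highlighted by the abstract is that the generator's objective mixes a conventional pixel-wise BCE loss with the adversarial signal rather than relying on either one alone.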