Dissecting GANs for Better Understanding and Visualization

5 December 2018

GANs can be taught to create (or generate) worlds similar to our own in any domain: images, music, speech, etc. Since 2014, a large number of improvements to GANs have been proposed, and GANs have achieved impressive results. Researchers from the MIT-IBM Watson AI Lab have presented GAN Paint, an interactive tool based on GAN dissection – a method that tests whether an explicit representation of an object is present in the feature maps of a hidden layer:

The GAN Paint interactive tool

State-of-the-art Idea

A question raised very often in machine learning concerns the lack of understanding of the methods being developed and applied. Despite the success of GANs, their visualization and understanding remain little-explored areas of research.

A group of researchers led by David Bau has conducted the first systematic study aimed at understanding the internal representations of GANs. In their paper, they present an analytic framework to visualize and understand GANs at the unit, object, and scene level.

Their work resulted in a general method for visualizing and understanding GANs at different levels of abstraction, several practical applications enabled by their analytic framework, and open-source interpretation tools for better understanding Generative Adversarial Network models.

Inserting a door by setting 20 causal units to a fixed high value at one pixel in the representation.

Method

From what we have seen so far, especially in the image domain, Generative Adversarial Networks can generate highly realistic images in many domains. From this perspective, one might say that GANs have learned facts at a higher level of abstraction – objects, for example. However, there are cases where GANs fail badly and produce very unrealistic images. So, is there a way to explain at least these two cases? David Bau and his team tried to answer this question, among a few others, in their paper. They studied the internal representations of GANs and tried to understand how a GAN represents structures and relationships between objects (from the point of view of a human observer).

As the researchers mention in their paper, there has been previous work on visualizing and understanding deep neural networks, but mostly for image classification tasks. Much less work has been done on the visualization and understanding of generative models.

The main goal of the systematic analysis is to understand how objects such as trees are encoded by the internal representations of a GAN generator network. To do this, the researchers study the structure of a hidden representation given as a feature map. Their study is divided into two phases, which they call dissection and intervention.

Characterizing units by Dissection

The goal of the first phase, dissection, is to test whether an explicit representation of an object is present in the feature maps of a hidden layer, and to identify which classes from a dictionary of concepts have such an explicit representation.

To search for explicit representations of objects, they quantify the spatial agreement between a unit's thresholded feature map and a concept's segmentation mask using the intersection-over-union (IoU) measure. The result is called agreement, and it allows individual units to be characterized: the concepts related to each unit can be ranked, and each unit can be labeled with the concept that matches it best.
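
To make the agreement measure concrete, here is a minimal sketch (not the authors' code) of how the IoU between a thresholded unit and a concept's segmentation mask could be computed. The function names, the fixed threshold, and the concept_masks dictionary are illustrative assumptions; in the paper, the feature map is upsampled to the mask's resolution and the threshold is derived from a per-unit activation quantile.

```python
import numpy as np

def unit_concept_iou(feature_map: np.ndarray,
                     seg_mask: np.ndarray,
                     threshold: float) -> float:
    """Spatial agreement (IoU) between a thresholded unit and a concept mask."""
    unit_on = feature_map > threshold                      # binarize activations
    intersection = np.logical_and(unit_on, seg_mask).sum()
    union = np.logical_or(unit_on, seg_mask).sum()
    return intersection / union if union > 0 else 0.0

def label_unit(unit_map, concept_masks, threshold):
    """Rank concepts by IoU and label the unit with the best match."""
    scores = {name: unit_concept_iou(unit_map, mask, threshold)
              for name, mask in concept_masks.items()}
    return max(scores, key=scores.get), scores
```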

Phase 1: Dissection.

Measuring causal relationships using Intervention

The second important question mentioned before is causality. Intervention, the second phase, seeks to estimate the causal effect of a set of units on a particular concept.

To measure this effect, the intervention phase measures the impact of forcing units on (unit insertion) and off (unit ablation), again using segmentation masks. More precisely, a set of units in a feature map is forced on and off, the two resulting images are segmented, and the resulting segmentation masks are compared to measure the causal effect.
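
A minimal sketch of this step is shown below, under some assumptions: generator_tail is a hypothetical helper that runs the remaining generator layers from the probed layer to the output image, segment returns a binary mask for a concept, and high_value is the constant the units are clamped to when inserted.

```python
import numpy as np

def intervene(features, units, generator_tail, segment, concept,
              high_value=10.0):
    """Estimate a unit set's causal effect on a concept via ablation/insertion."""
    f_off = features.copy()
    f_off[:, units, :, :] = 0.0          # ablation: force the units off
    f_on = features.copy()
    f_on[:, units, :, :] = high_value    # insertion: force the units on

    mask_off = segment(generator_tail(f_off), concept)
    mask_on = segment(generator_tail(f_on), concept)

    # Average causal effect: how much more of the concept appears
    # in the image when the units are on than when they are off.
    return (mask_on.astype(float) - mask_off.astype(float)).mean()
```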

Phase 2: Intervention.

Results

For the whole study, the researchers use three variants of Progressive GANs (Karras et al., 2018) trained on LSUN scene datasets. For the segmentation task, they use a recent image segmentation model (Xiao et al., 2018) trained on the ADE20K scene dataset.

An extensive analysis was done using the proposed framework for understanding and visualizing GANs. The first part, dissection, was used by the researchers for analyzing and comparing units across datasets, layers, and models, and for locating artifact units.

Comparing representations learned by progressive GANs trained on different scene types. The units that emerge match objects that commonly appear in the scene type: seats in conference rooms and stoves in kitchens.
Removing successively larger sets of tree-causal units from a GAN.

A set of dominant object classes and the second part of the framework, intervention, were used to locate causal units that can remove and insert objects in different images. The results are presented in the paper and the supplementary material, and a video demonstrating the interactive tool was released. Some of the results are shown in the figures below.

Visualizing the activations of individual units in two GANs.

Conclusion

This is one of the first extensive studies targeting the understanding and visualization of generative models. Focusing on the most popular generative model, the Generative Adversarial Network, this work reveals significant insights about generative models. One of the main findings is that a large part of GAN representations can be interpreted. The study shows that a GAN's internal representation encodes variables that have a causal effect on the generation of objects and realistic images.

Many researchers will potentially benefit from the insights that came out of this work, and the proposed framework provides a basis for the analysis, debugging, and understanding of Generative Adversarial Network models.

Adobe Creates Neural Network to Reveal Image Manipulations

11 July 2018

Image editing is not a challenging task anymore. User-friendly editing software makes the process of tampering with and manipulating images very straightforward, and unfortunately, tampered images are more and more often used for unscrupulous business or political purposes. What makes things even worse is that humans usually find it difficult to recognize tampered regions, even with careful inspection.

So, let’s discover how neural networks may assist people with this kind of task.

Suggested Approach

Before we dive deep into the capabilities of neural networks with regard to detecting image manipulations, let’s have a short refresher on the most common tampering techniques (a toy code sketch follows the list):

  • splicing copies regions from an authentic image and pastes them into other images;
  • copy-move copies and pastes regions within the same image;
  • removal eliminates regions from an authentic image followed by inpainting.
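
For intuition, here is a toy numpy sketch (not from the paper) that simulates the three tampering types on arrays standing in for images. It is purely illustrative; real tampering involves editing software and post-processing that this sketch does not model.

```python
import numpy as np

def splice(target, source, region, dest):
    """Splicing: paste a region from another (authentic) image."""
    (y, x, h, w), (ty, tx) = region, dest
    out = target.copy()
    out[ty:ty + h, tx:tx + w] = source[y:y + h, x:x + w]
    return out

def copy_move(image, region, dest):
    """Copy-move: paste a region within the same image."""
    return splice(image, image, region, dest)

def removal(image, region, fill_value=0.0):
    """Removal: erase a region; real pipelines follow up with inpainting."""
    y, x, h, w = region
    out = image.copy()
    out[y:y + h, x:x + w] = fill_value    # placeholder for inpainting
    return out
```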

A group of researchers headed by Peng Zhou investigated the possibility of adapting object detection networks to the problem of image manipulation detection, in a way that allows efficient detection of all three types of common manipulations.

As a result, they propose a novel two-stream manipulation detection framework that explores both the RGB image content and image noise features. More specifically, they adopt Faster R-CNN within a two-stream network and perform end-to-end training. The first stream utilizes features from the RGB channels to capture clues like visual inconsistencies at tampered boundaries and contrast effects between tampered and authentic regions. The second stream analyzes the local noise features in an image. These two streams are, in fact, complementary for detecting different tampering techniques.

Network Architecture

If you are interested in the technical details of the suggested approach, this section is here for exactly that purpose. So, let’s take a high-level view of the network architecture.

Network architecture

The network consists of three main parts:

1. The RGB stream takes care of both bounding box regression and manipulation classification. Features from the input RGB image are learned with a ResNet-101 network and then used for manipulation classification. The region proposal network (RPN) in the RGB stream also utilizes these features to propose regions of interest (RoI) for bounding box regression. The experiments show that RGB features perform better than noise features for the RPN. However, this stream alone is not sufficient for some manipulation cases, where tampered images were post-processed to conceal the splicing boundary and reduce contrast differences. That’s why the second stream was introduced.

2. The noise stream is designed to pay more attention to noise than to the semantic image content. Here the researchers leverage the steganalysis rich model (SRM) and use SRM filter kernels to produce noise features for their two-stream network. The resulting noise feature maps are shown in the third column of the figure below.

Illustration of tampering artifacts

Noise in this setting is modeled as the residual between a pixel’s value and an estimate of that pixel’s value produced by interpolating only the values of the neighboring pixels. The noise stream shares the same RoI pooling layer as the RGB stream.

The three SRM filter kernels used to extract noise features
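
As an illustration, here is a sketch of how a fixed high-pass kernel of this kind can be applied as a convolution to extract a noise residual. The 5×5 kernel below is one commonly cited SRM residual kernel; treat it as an assumption, since the exact three kernels used in the paper are not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

# A commonly cited SRM high-pass kernel (second-order residual), assumed here.
SRM_KERNEL = np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=np.float32) / 12.0

def noise_residual(gray_image: np.ndarray) -> np.ndarray:
    """Residual between each pixel and its neighborhood-based estimate."""
    return convolve2d(gray_image, SRM_KERNEL, mode="same", boundary="symm")
```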

3. Bilinear pooling combines the RGB and noise streams in a two-stream CNN while preserving spatial information, which improves detection confidence. The output of the bilinear pooling layer is the product of the RGB stream’s RoI feature and the noise stream’s RoI feature. The researchers then apply a signed square root and L₂ normalization before forwarding to the fully connected layer. They use a cross-entropy loss for manipulation classification and a smooth L₁ loss for bounding box regression.
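
A minimal numpy sketch of this fusion step (an illustrative reconstruction, not the authors' code) might look as follows; the RoI feature shapes are assumed to be (channels, height, width):

```python
import numpy as np

def bilinear_pool(rgb_feat: np.ndarray, noise_feat: np.ndarray) -> np.ndarray:
    """Bilinear pooling of two RoI features with signed sqrt and L2 norm."""
    c1, h, w = rgb_feat.shape
    c2 = noise_feat.shape[0]
    # Outer product of the two streams summed over spatial locations;
    # this fuses the streams while respecting spatial correspondence.
    x = rgb_feat.reshape(c1, h * w) @ noise_feat.reshape(c2, h * w).T
    x = x.flatten() / (h * w)
    x = np.sign(x) * np.sqrt(np.abs(x))   # signed square root
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x    # L2 normalization
```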

Comparisons with Existing Methods

The method presented in this article was compared to other state-of-the-art methods on four different datasets: NIST16, Columbia, COVER, and CASIA. The comparison was carried out using two pixel-level evaluation metrics: the F1 score and the Area Under the receiver operating characteristic Curve (AUC).
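
For reference, both pixel-level metrics can be computed from a predicted tamper probability map and a binary ground-truth mask roughly as follows (a scikit-learn sketch, not the authors' evaluation code; the 0.5 binarization threshold is an assumption):

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def pixel_metrics(pred_probs: np.ndarray, gt_mask: np.ndarray,
                  threshold: float = 0.5):
    """Pixel-level F1 (at a fixed threshold) and AUC over all pixels."""
    y_true = gt_mask.astype(int).ravel()
    y_score = pred_probs.ravel()
    f1 = f1_score(y_true, (y_score > threshold).astype(int))
    auc = roc_auc_score(y_true, y_score)
    return f1, auc
```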

The performance of the suggested model (RGB-N) was compared against several other methods (ELA, NOI1, CFA1, MFCN, and J-LSTM), as well as against the RGB stream alone (RGB Net), the noise stream alone (Noise Net), and a late-fusion model that directly combines all detected bounding boxes from both RGB Net and Noise Net (Late fusion). See the results of this comparison in the tables below.

Table 1. F1 score comparison against other methods


Table 2. Pixel-level AUC comparison against other methods

As evident from the provided tables, the RGB-N model outperforms conventional methods such as ELA, NOI1, and CFA1. That could be because they all focus on specific tampering artifacts that contain only partial information for localization. MFCN was outperformed by the suggested approach on the NIST16 and Columbia datasets, but not on the CASIA dataset. Notably, the noise stream on its own performed better (based on the F1 score) than the full two-stream model on the Columbia dataset. That is because Columbia contains only uncompressed spliced regions and hence preserves noise differences very well.


Below you can also observe some qualitative results comparing RGB Net, Noise Net, and the RGB-N network in two-class image manipulation detection. As evident from these examples, the two-stream network yields good performance even if one of the streams fails (the RGB stream in the first row and the noise stream in the second row).

Qualitative visualization of results

Furthermore, the network introduced here is good at identifying the exact manipulation technique used. Utilizing the information provided by both the RGB image and the noise map, it can distinguish between the splicing, copy-move, and removal tampering techniques. Some examples are provided below.

Qualitative results for the multi-class image manipulation detection

Bottom Line

This novel approach to image manipulation detection outperforms all conventional methods. Such high performance is achieved by combining two different streams (RGB and noise) to learn rich features for image manipulation detection. The two streams make complementary contributions to finding tampered regions. Noise features, extracted by SRM filters, enable the model to capture the noise inconsistency between tampered and authentic regions, which is extremely important when dealing with the splicing and removal tampering techniques.

In addition, the model is good at distinguishing between the various tampering techniques. So it tells you not only which region was manipulated, but also how it was manipulated: was an object inserted, removed, or copy-moved? You’ll get the answer.