
Generalized Mean Pooling

Generalized Mean Pooling (GeM) computes the generalized mean of each channel in a tensor. Formally:

$$ \textbf{e} = \left[\left(\frac{1}{|\Omega|}\sum_{u\in{\Omega}}x^{p}_{cu}\right)^{\frac{1}{p}}\right]_{c=1,\cdots,C} $$

where $\Omega$ is the set of spatial locations, $C$ is the number of channels, and $p > 0$ is a (possibly learnable) parameter. Setting $p > 1$ increases the contrast of the pooled feature map and focuses on the salient features of the image. GeM is a generalization of the average pooling commonly used in classification networks ($p = 1$) and of the spatial max-pooling layer ($p \to \infty$).
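The formula above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation; the function name, the default $p = 3$, and the epsilon clamp (used in practice to keep values positive before the power) are assumptions, not taken from the source.

```python
import numpy as np

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized Mean (GeM) pooling over spatial locations.

    x: feature map of shape (C, H, W).
    p: pooling exponent; p=1 recovers average pooling, and a large p
       approaches max pooling. Default p=3 is an illustrative choice.
    Returns a vector of shape (C,), one pooled value per channel.
    """
    x = np.clip(x, eps, None)  # clamp to keep x**p well defined
    # mean of x^p over all spatial positions, then the 1/p-th root
    return (x ** p).reshape(x.shape[0], -1).mean(axis=1) ** (1.0 / p)

feat = np.array([[[1.0, 2.0], [3.0, 4.0]]])  # one channel, 2x2 map
avg = gem_pool(feat, p=1.0)    # equals the channel mean, 2.5
near_max = gem_pool(feat, p=100.0)  # approaches the channel max, 4.0
```

In retrieval networks, $p$ is typically treated as a trainable scalar (shared across channels or per-channel) and learned by backpropagation along with the rest of the model.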

Source: MultiGrain

Image Source: Eva Mohedano


Tasks


| Task | Papers | Share |
|---|---|---|
| Image Retrieval | 3 | 16.67% |
| Retrieval | 3 | 16.67% |
| Physical Simulations | 1 | 5.56% |
| Decoder | 1 | 5.56% |
| Philosophy | 1 | 5.56% |
| Gait Recognition | 1 | 5.56% |
| Multiview Gait Recognition | 1 | 5.56% |
| Content-Based Image Retrieval | 1 | 5.56% |
| Deep Learning | 1 | 5.56% |
