The HDR feature is one of the most important functions of a camera. As the series of articles at https://cloud.tencent.com/developer/article/1167072 points out, HDR technology has advanced rapidly in recent years, with new algorithms appearing constantly. To lower the entry barrier for readers, I will walk through, with code, a simple but very classic single-image HDR technique based on multi-scale contrast recombination. As discussed previously, the weighted least squares (WLS) filter can be used for image abstraction, multi-scale HDR decomposition, and so on; this article can in fact be read as a commentary on the source code accompanying the paper "Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation".

Let us first briefly review the basic principles of edge-preserving single-image HDR. First, because an HDR image has a very large dynamic range, the range compression must be carried out in the log domain (and mapped back through the exponential); this has essentially become the consensus in HDR processing. Second, to preserve sharpness and avoid halo artifacts, the final transform is applied point by point, with no filtering at that stage. Third, at least at some scale the relatively dark regions/pixels must be boosted while the relatively bright ones are suppressed. Fourth, the multi-scale decomposition must be edge-preserving, so that contrast at a given scale can be stretched or compressed selectively without bleeding across edges and producing unwanted artifacts. Only when these four conditions are met can the dynamic range be compressed while detail is preserved. "Multi-scale contrast recombination" simply means modulating the contrast at different scales and then recombining them; you can tailor the modulation to your own needs, which is very useful in photographic processing. Without further ado, below is a compact sketch of the pipeline, followed by the modified source code that we will walk through (the test images used in this article can be downloaded from http://www.hdrlib.com/; the walkthrough only covers the "softening" part, i.e. the code I modified. Note: because of the limited display range, luminance values above 1 all look the same on screen even though their actual values differ).
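The sketch below is only meant to make these principles concrete; it assumes an HDR image hdr already loaded in memory and the wlsFilter function defined at the end of this article, and the variable names and parameter values are illustrative rather than the ones used in the actual script.

% Minimal sketch of log-domain multi-scale tone mapping (illustrative only)
lum  = 0.299*hdr(:,:,1) + 0.587*hdr(:,:,2) + 0.114*hdr(:,:,3) + 1e-6;
logL = log(lum);                            % work in the log domain (principle 1)
base = wlsFilter(logL, 8.0, 1.2, logL);     % edge-preserving coarse layer (principle 4)
detail = logL - base;                       % contrast (detail) layer
wDetail = 1.2; wBase = 0.4;                 % boost detail, compress the base (principle 3)
lumOut = exp(wDetail*detail + wBase*base);  % point-wise recombination (principle 2)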

%   Multi-scale HDR tonemapping using the WLS edge-preserving filter.
%
%   This script computes a multi-scale edge-preserving decomposition
%   of an HDR image, using the weighted least squares (WLS) optimization
%   framework, as described in Farbman, Fattal, Lischinski, and Szeliski,
%   "Edge-Preserving Decompositions for Multi-Scale Tone and Detail
%   Manipulation", ACM Transactions on Graphics, 27(3), August 2008.
%
%   The decomposition is then used to produce two different tonemapped
%   looks by recombining the detail layers in different manners.

close all;clc;
%% Load
filename = 'HDR images/Beach_02.hdr';
hdr = double(hdrread(filename));
I = 0.299*hdr(:,:,1) + 0.587*hdr(:,:,2) + 0.114*hdr(:,:,3) + 1e-6;
R = hdr(:,:,1) ./ I;  % normalize R/G/B by the luminance
G = hdr(:,:,2) ./ I;
B = hdr(:,:,3) ./ I;

LL = log(I);

% figure,imshow(hdr);
% figure,imshow(I./max(I(:)));
%% Filter
u0 = LL;
u1 = wlsFilter(LL, 0.125, 1.2, LL );
u2 = wlsFilter(LL, 1.0,   1.2, LL );
u3 = wlsFilter(LL, 8.0,   1.2, LL );

% figure,imshow(exp(u1));
% figure,imshow(exp(u2));
% figure,imshow(exp(u3));
%% Compute detail layers
d0 = u0-u1;
d1 = u1-u2;
d2 = u2-u3;

%% Recombine the layers together while moderately boosting multiple scale detail
% Demonstration of the HDR technique with a softened look; the code has been adjusted
% look = 'Balanced';
w0 = 0.2; % suppress fine- and coarse-scale contrast, leave the medium scale unchanged, for a soft look
w1 = 1.0;
w2 = 0.8;
w3 = 0.4; % power-law transform of the base layer, compressing its dynamic range to form the base image
sat = 1.0; % keep at 1.0: do not compress the original per-channel color ratios
exposure = 1.0; % keep at 1.0: overexposure is limited below by the percentile-based normalization instead
gamma = 1.0/1.6;

cLL = w0*d0 + w1*d1 + w2*d2 + w3*u3;

figure,imshow(exp(d0)-0.5); % show the contrast (detail) image at each scale
figure,imshow(exp(d1)-0.5);
figure,imshow(exp(d2)-0.5);
figure,imshow(exp(w3*u3)); % the dynamic-range-compressed base image, on top of which the per-scale contrast adjustments are made
%% Convert back to RGB
Inew = exposure*exp(cLL);
figure,imshow(Inew);
sI = sort(Inew(:));
mx = sI(round(length(sI) * (98/100))); % ensure that 98% of the pixels are not overexposed after processing
Inew = Inew/mx;

figure,imshow(Inew);
rgb = composeIRGB(Inew, R, G, B, sat, gamma);
figure,imshow(rgb);

%% Recombine the layers together with stronger emphasis on fine detail
% Demonstration of the HDR technique with enhanced fine-scale contrast; the core source code is unchanged
% look = 'StrongFine'; 
w0 = 2.0;
w1 = 0.8;
w2 = 0.7;
w3 = 0.2;
sat = 0.6;
exposure = 0.9;
gamma = 1.0;
%w0 = 3.0; w1 = 1.0; w2 = 1.0; w3 = 0.45;

cLL = w0*d0 + w1*d1 + w2*d2 + w3*u3; % multi-scale recombination

%% Convert back to RGB
Inew = exp(cLL);
sI = sort(Inew(:));
mx = sI(round(length(sI) * (99.95/100))); % allow at most 0.05% of the pixels to overexpose
Inew = Inew/mx;

rgb = composeIRGB(exposure*Inew, R, G, B, sat, gamma);
figure,imshow(rgb);
function rgb = composeIRGB(Inew, r, g, b, sat, gamma)
% Recombine the processed luminance with the (saturation-adjusted) color ratios
rgb(:,:,1) = Inew .* (r .^ sat);
rgb(:,:,2) = Inew .* (g .^ sat);
rgb(:,:,3) = Inew .* (b .^ sat);
if gamma ~= 1.0    
    rgb = rgb .^ gamma;
end

end

 

function OUT = wlsFilter(IN, lambda, alpha, L)
%WLSFILTER Edge-preserving smoothing based on the weighted least squares (WLS)
%   optimization framework, as described in Farbman, Fattal, Lischinski, and
%   Szeliski, "Edge-Preserving Decompositions for Multi-Scale Tone and Detail
%   Manipulation", ACM Transactions on Graphics, 27(3), August 2008.
%
%   Given an input image IN, we seek a new image OUT, which, on the one hand,
%   is as close as possible to IN, and, at the same time, is as smooth as
%   possible everywhere, except across significant gradients in L.
%
%
%   Input arguments:
%   ----------------
%     IN              Input image (2-D, double, N-by-M matrix). 
%       
%     lambda          Balances between the data term and the smoothness
%                     term. Increasing lambda will produce smoother images.
%                     Default value is 1.0
%       
%     alpha           Gives a degree of control over the affinities by
%                     non-linearly scaling the gradients. Increasing alpha
%                     will result in sharper preserved edges. Default value: 1.2
%       
%     L               Source image for the affinity matrix. Same dimensions
%                     as the input image IN. Default: log(IN)
% 
%
%   Example 
%   -------
%     RGB = imread('peppers.png'); 
%     I = double(rgb2gray(RGB));
%     I = I./max(I(:));
%     res = wlsFilter(I, 0.5);
%     figure, imshow(I), figure, imshow(res)
%     res = wlsFilter(I, 2, 2);
%     figure, imshow(res)

if(~exist('L', 'var'))
    L = log(IN+eps);
end

if(~exist('alpha', 'var'))
    alpha = 1.2;
end

if(~exist('lambda', 'var'))
    lambda = 1;
end

smallNum = 0.0001;

[r,c] = size(IN);
k = r*c;

% Compute affinities between adjacent pixels based on gradients of L
dy = diff(L, 1, 1);
dy = -lambda./(abs(dy).^alpha + smallNum);
dy = padarray(dy, [1 0], 'post');
dy = dy(:);

dx = diff(L, 1, 2); 
dx = -lambda./(abs(dx).^alpha + smallNum);
dx = padarray(dx, [0 1], 'post');
dx = dx(:);


% Construct a five-point spatially inhomogeneous Laplacian matrix
B(:,1) = dx;
B(:,2) = dy;
d = [-r,-1];
A = spdiags(B,d,k,k);

e = dx;
w = padarray(dx, r, 'pre'); w = w(1:end-r);
s = dy;
n = padarray(dy, 1, 'pre'); n = n(1:end-1);

D = 1-(e+w+s+n);
A = A + A' + spdiags(D, 0, k, k);

% Solve the sparse linear system and reshape the solution back into an image
OUT = A\IN(:);
OUT = reshape(OUT, r, c);
end

The original image (after gamma correction):

[image]

The luminance channel after power-law compression of its dynamic range, used as the base image for the contrast adjustments:

[image]

The contrast (detail) images at the three scales:

[image]

[image]

[image]

Because addition in the log domain corresponds to multiplication in the original domain, and scaling in the log domain corresponds to a power transform in the original domain, w0, w1 and w2 control how much the contrast at the corresponding scale is stretched or compressed: a value greater than 1 stretches contrast at that scale, a value less than 1 compresses it, and a value of exactly 1 leaves it unchanged. In the script w0 and w2 are both less than 1, so fine- and coarse-scale contrast are compressed at the same time, while w1 = 1 leaves the medium-scale contrast untouched. The luminance image after this multi-scale contrast adjustment is shown below:

[image]
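That identity can be verified numerically; the snippet below is purely illustrative (d and w are toy values, not taken from the script).

% Scaling a detail layer in the log domain is a per-pixel power (contrast) change
d = [-0.4 0 0.7];   % a toy log-domain detail layer
w = 0.2;            % a weight < 1 compresses contrast at this scale
disp(exp(w*d))      % the recombined contribution ...
disp(exp(d).^w)     % ... equals the original contrast raised to the power w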

As can be seen, the overall dynamic range of the image has been compressed and the brightness is more balanced. The next step is to limit the proportion of overexposed pixels, which is done by choosing an allowed overexposure percentage; here we keep the overexposed fraction below 2%. The luminance image after this overexposure control is shown below:

[image]
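The same normalization can be packaged as a small helper; this is only a sketch, and the function name normalizeToPercentile and its interface are my own, not part of the original code.

function out = normalizeToPercentile(img, pct)
% Scale img so that pct percent of its pixels end up at or below 1.0,
% i.e. at most (100 - pct) percent of the pixels are allowed to overexpose.
s   = sort(img(:));
mx  = s(round(numel(s) * pct / 100));
out = img / mx;
end

With pct = 98 this reproduces the sort-based normalization used in the script above.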

The sat parameter in the script is normally kept at or below 1; it compresses the relative values of the R/G/B channels and thus suppresses any color-balance problems in the original HDR image. Here I set it to 1, which amounts to assuming that the colors of the original HDR image are not out of balance.
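To see why sat < 1 tames strong colors, recall that composeIRGB raises the channel ratios to the power sat; the quick check below uses made-up ratio values.

% A pixel whose red ratio is 1.5x the luminance and whose blue ratio is 0.5x
r = 1.5; b = 0.5; sat = 0.6;
[r^sat, b^sat]   % about [1.28, 0.66]: both ratios are pulled toward 1, reducing saturation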

The final result after processing:

[image]

As you can see, the dynamic range has been compressed and the image as a whole has indeed become softer. Changing w0 to 1.5 gives the following result:

[image]

So this simple multi-scale contrast recombination technique makes it easy to obtain exactly the HDR look you want. In fact, many images we used to see everywhere (the waterfalls on wall calendars, for example) are smooth and soft-looking throughout yet show no edge bleeding; that kind of processing can be achieved with the WLS filter. This article has demonstrated the WLS filter, an excellent edge-preserving filter, applied to HDR processing. Of course, similar results can be obtained with other edge-preserving filters, but compared with them the WLS filter still performs very well.
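If you want to try a different edge-preserving filter, only the decomposition step needs to change. The sketch below swaps wlsFilter for MATLAB's built-in guided filter (imguidedfilter, Image Processing Toolbox), purely as an illustration; the neighborhood sizes and smoothing degrees are hand-picked placeholders, not tuned equivalents of the WLS lambda values.

% Replace the three WLS calls with guided-filter smoothing at increasing scales
u1 = imguidedfilter(LL, 'NeighborhoodSize',  5, 'DegreeOfSmoothing', 0.01);
u2 = imguidedfilter(LL, 'NeighborhoodSize', 15, 'DegreeOfSmoothing', 0.05);
u3 = imguidedfilter(LL, 'NeighborhoodSize', 41, 'DegreeOfSmoothing', 0.20);
% The detail layers d0, d1, d2 and the recombination code stay exactly the same.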