- Open Access
- Authors : Arathy Reghukumar, Divya Haridas, Sreekala S, K. P. Soman
- Paper ID : IJERTV1IS8479
- Volume & Issue : Volume 01, Issue 08 (October 2012)
- Published (First Online): 29-10-2012
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Insight Into the Bregman Algorithm
Arathy Reghukumar, Divya Haridas, Sreekala S, K. P. Soman
Center for Excellence in Computational Engineering and Networking, Amrita Vishwa Vidyapeetham, Amrita School of Engineering, Coimbatore 641112
ABSTRACT
Split Bregman is a recently proposed algorithm with good convergence behaviour in a small number of iterations. Because of this convergence property it can be applied with ease to denoising, deblurring, segmentation, inpainting and related problems. In this paper we explore the fundamental theory behind the Bregman and Split Bregman methods with the help of convex functions and constrained and unconstrained optimisation models.
Keywords: Convex function, constrained and unconstrained optimisation, Bregman, Split Bregman.
INTRODUCTION
Optimization plays an important role in image processing. In a denoising application, for example, the error between the original image and the denoised image should be reduced, and this minimisation is naturally handled in an optimisation framework. Casting a filtering or denoising algorithm in this framework therefore gives an advantage over conventional methods: the objective of operations such as denoising and reconstruction is to recover the appropriate image by minimising an error, and this is largely achieved through a convex optimization framework.
Consider a function F that represents an image processing problem such as denoising or restoration, for which a set of feasible solutions can be found through optimization. Among the many classes of functions, convex and concave functions are the most widely used in optimisation because of the simplicity of locating their optima. A function is convex if every line segment joining two points on its graph lies on or above the graph. To find the solution of a convex or concave function, constrained or unconstrained optimization can be used. In unconstrained optimisation the search extends over the whole space, from −∞ to +∞, whereas in constrained optimisation the search is restricted according to the given constraints.
CONVEX FUNCTION
A function is a relation between a set of inputs (the domain) and a set of outputs (the codomain). A cube is a convenient example of a convex set: all the points inside the cube constitute a convex set S.
Figure (1): A convex set. For any two points P and Q in the set, the line segment PQ lies entirely inside the set.

In a non-convex set, by contrast, some points of the segment joining P and Q lie outside the region.

Figure (2): A non-convex set. Part of the segment PQ lies outside the set.
Consider a point R that divides the line segment joining the points $x_1$ and $x_2$ of the convex set in the ratio $\lambda : (1-\lambda)$, as shown in Figure (3).

Figure (3): The point R dividing the segment between $x_1$ and $x_2$ in the ratio $\lambda : (1-\lambda)$.

The point

$$R = \lambda x_1 + (1-\lambda) x_2, \qquad \lambda \in (0,1) \tag{1}$$

lies on the segment, and the convex combination

$$R = \lambda x_1 + (1-\lambda) x_2 \tag{2}$$

always lies in the convex set; to each such point there corresponds a function value. A function $F : X \to \mathbb{R}$, with $X$ a convex set in a vector space, is said to be convex if and only if, for any two points $x_1$ and $x_2$ in $X$ and any $\lambda \in (0,1)$,

$$f(\lambda x_1 + (1-\lambda) x_2) \le \lambda f(x_1) + (1-\lambda) f(x_2) \tag{3}$$
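As a quick numerical illustration of inequality (3) (this snippet is my own, not part of the paper), the following Python code checks the convexity condition at random points and mixing parameters; $x^2$ satisfies it, while $\sin(x)$ does not.

```python
import numpy as np

def is_convex_on_samples(f, trials=1000, seed=0):
    """Check f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2) on random samples."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2 = rng.uniform(-10, 10, size=2)
        lam = rng.uniform(0, 1)
        lhs = f(lam * x1 + (1 - lam) * x2)      # f at the convex combination
        rhs = lam * f(x1) + (1 - lam) * f(x2)   # value on the chord
        if lhs > rhs + 1e-12:
            return False
    return True

print(is_convex_on_samples(lambda x: x ** 2))     # True: x^2 is convex
print(is_convex_on_samples(np.sin))               # False: sin(x) is not convex
```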
Since a convex function is differentiable at every point, it has a unique gradient everywhere except at the minimum point:

$$\nabla f(x) = \left( \frac{\partial f(x)}{\partial x_1}, \frac{\partial f(x)}{\partial x_2}, \frac{\partial f(x)}{\partial x_3}, \dots, \frac{\partial f(x)}{\partial x_n} \right) \tag{4}$$

Using a Taylor series expansion, the mathematical interpretation of a convex function can be depicted as

$$f(x) \ge f(x_0) + (x - x_0) f'(x_0) \tag{5}$$

and, for higher dimensions,

$$f(x) \ge f(x_0) + \nabla f(x_0)^T (x - x_0) \tag{6}$$

The function $f(x)$ is therefore convex if it satisfies the condition

$$f(x) \ge f(x_0) + \nabla f(x_0)^T (x - x_0) \tag{7}$$
CONSTRAINED AND UNCONSTRAINED OPTIMIZATION

Optimization problems are of two types, constrained and unconstrained. Generally, optimization is the method of finding the maximum or minimum of a function. Consider an $n$-variable function $f(x_1, x_2, \dots, x_n) = F(X)$. To find the solution of the function $f$ we have to find a point $X_0$ such that

$$f(x_0) \le f(x) \quad \text{or} \quad f(x_0) \ge f(x) \tag{8}$$

The first inequality corresponds to minimizing the function and the second to maximizing it. For solving unconstrained problems we can use root-finding algorithms. In unconstrained problems the search space is not restricted by any constraints, whereas in constrained optimization the search space is restricted by equality and inequality constraints. An unconstrained minimization problem is

$$\text{Minimize } f(x) \tag{9}$$

where the minimization is over all $x \in \mathbb{R}^n$. A constrained optimization problem is defined as follows:

$$\text{Minimize } f(x) \tag{10}$$

$$\text{subject to } g(x) \le 0, \qquad h(x) = 0 \tag{11}$$

where $f(x)$ is the objective function to be minimized, $g(x)$ is the set of inequality constraints and $h(x)$ is the set of equality constraints. Typical constrained optimization problems are the basis pursuit problem and the TV denoising problem; to solve this kind of problem efficiently we use the Bregman iterative algorithm.
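To make the distinction between (9) and (10)-(11) concrete, here is a small illustrative sketch (not from the paper) using scipy.optimize. Note that SciPy expects inequality constraints in the form fun(x) ≥ 0, the opposite sign convention to (11); the objective and constraint chosen here are arbitrary examples.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2   # objective f(x)

# Unconstrained problem (9): minimise over all x in R^n.
unconstrained = minimize(f, x0=np.zeros(2))

# Constrained problem (10)-(11): add g(x) <= 0 with g(x) = x0 + x1 - 1.
# SciPy's 'ineq' constraints require fun(x) >= 0, so we pass -g(x).
cons = [{"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 1.0)}]
constrained = minimize(f, x0=np.zeros(2), constraints=cons)

print(unconstrained.x)  # close to [3, -1], the unconstrained minimiser
print(constrained.x)    # pushed onto the boundary x0 + x1 = 1
```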
The basis pursuit problem deals with finding the solution of a linear system of equations of the form $Pu = q$. Consider the linear system

$$\begin{aligned}
P_{11}u_1 + P_{12}u_2 + \dots + P_{1n}u_n &= q_1\\
P_{21}u_1 + P_{22}u_2 + \dots + P_{2n}u_n &= q_2\\
&\;\;\vdots\\
P_{m1}u_1 + P_{m2}u_2 + \dots + P_{mn}u_n &= q_m
\end{aligned} \tag{12}$$

which in matrix form reads

$$\begin{pmatrix} P_{11} & P_{12} & \cdots & P_{1n}\\ P_{21} & P_{22} & \cdots & P_{2n}\\ \vdots & & & \vdots\\ P_{m1} & P_{m2} & \cdots & P_{mn} \end{pmatrix} \begin{pmatrix} u_1\\ u_2\\ \vdots\\ u_n \end{pmatrix} = \begin{pmatrix} q_1\\ q_2\\ \vdots\\ q_m \end{pmatrix}$$

The above system can be written in the form $Pu = q$ with $P \in \mathbb{R}^{m \times n}$, $u \in \mathbb{R}^n$ and $q \in \mathbb{R}^m$. Here the rows of $P$ are assumed to be linearly independent, so the underdetermined system has infinitely many solutions. Using the minimal $L_1$ norm we can select a solution of the linear system, represented as

$$\min_u \|u\|_1 \quad \text{subject to } Pu = q \tag{13}$$

where $\|u\|_1 = \sum_{i=1}^{n} |u_i|$.
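Problem (13) can be solved as a linear program by splitting $u$ into non-negative parts. The sketch below is purely illustrative — the specific matrix, sparsity level and the use of scipy.optimize.linprog are my own choices, not the paper's — and shows a sparse $u$ being recovered from an underdetermined system $Pu = q$.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 15, 30
P = rng.standard_normal((m, n))          # underdetermined: more unknowns than equations
u_true = np.zeros(n)
u_true[[2, 11, 25]] = [1.5, -2.0, 0.7]   # sparse ground truth
q = P @ u_true

# min ||u||_1 s.t. P u = q, written as an LP with u = a - b, a, b >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([P, -P])
res = linprog(c, A_eq=A_eq, b_eq=q, bounds=(0, None))
u_l1 = res.x[:n] - res.x[n:]

print(np.round(u_l1[[2, 11, 25]], 3))    # close to the true nonzero entries
print(np.linalg.norm(P @ u_l1 - q))      # constraint residual ~ 0
```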
There are many difficulties in solving the constrained problem (13), because the constraints restrict the search direction, so we go for the unconstrained basis pursuit problem, i.e.

$$\min_{u \in \mathbb{R}^n} \|u\|_1 + \frac{\mu}{2}\|Pu - q\|_2^2 \tag{14}$$

where $\mu$ is a positive constant. Problems of this type are used in compressive sensing.
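One standard way to attack the unconstrained form (14) is iterative soft thresholding; the sketch below is my own illustration (step size, iteration count and test data are arbitrary choices), since the paper itself moves on to Bregman iterations instead.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (soft shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1(P, q, mu, iters=500):
    """Approximately minimise ||u||_1 + (mu/2)*||P u - q||_2^2 by iterative soft thresholding."""
    u = np.zeros(P.shape[1])
    step = 1.0 / (mu * np.linalg.norm(P, 2) ** 2)   # 1/L, L = Lipschitz constant of the smooth part
    for _ in range(iters):
        grad = mu * P.T @ (P @ u - q)                # gradient of the quadratic fidelity term
        u = soft_threshold(u - step * grad, step)    # shrink with threshold step * 1
    return u

# Tiny usage example on a random underdetermined system (values are illustrative only).
rng = np.random.default_rng(0)
P = rng.standard_normal((15, 30))
q = P @ np.eye(30)[0]                                # data generated from a 1-sparse signal
print(np.round(ista_l1(P, q, mu=20.0)[:5], 3))
```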
The second type of problem is total variation denoising. The removal of noise from an image is called denoising, and the problem is of the form

$$\min_u \|u\|_{BV(\Omega)} + X(u) \tag{15}$$

where $\|u\|_{BV}$ is not a smooth convex function and is therefore combined with the strictly convex function $X(u)$; $BV(\Omega)$ denotes the functions of bounded variation on $\Omega \subset \mathbb{R}^n$. Computing the BV norm of a function has a high computational cost, so $\|u\|_{BV}$ can be replaced by the $L_1$ norm of the gradient, $\|\nabla u\|_1$, and $X(u)$ by the quadratic fidelity term, giving

$$\min_{u \in BV(\Omega)} \|\nabla u\|_1 + \frac{\mu}{2}\|u - q\|_2^2 \tag{16}$$

Here $\mu$ should be greater than zero.

In earlier work interior point methods were used for such problems. For a medium-sized problem an interior point algorithm requires up to 100 iterations to compute a solution with a relative tolerance of about 0.01, so we go for the Bregman iterative algorithm instead.
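For intuition about the term $\|\nabla u\|_1$ appearing in (16), the anisotropic discrete total variation of an image can be computed with forward differences, as in this small illustrative snippet (the boundary handling and test images are my own choices):

```python
import numpy as np

def tv_norm(u):
    """Anisotropic total variation: sum of |horizontal| + |vertical| forward differences."""
    dx = np.diff(u, axis=1)   # horizontal differences
    dy = np.diff(u, axis=0)   # vertical differences
    return np.abs(dx).sum() + np.abs(dy).sum()

flat = np.ones((8, 8))                         # constant image: TV = 0
step = np.zeros((8, 8)); step[:, 4:] = 1.0     # vertical step edge: TV = 8
print(tv_norm(flat), tv_norm(step))
```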
BREGMAN METHOD

Bregman iteration is a highly specific and efficient algorithm for deblurring, denoising, segmentation and similar tasks. To solve the general $L_1$-regularised problem we use the equation

$$\arg\min_u |d(u)| + \frac{\mu}{2}\|Pu - q\|_2^2 \tag{17}$$

To split the $L_1$ and $L_2$ components, introduce a new variable $d$, standing for $d(u)$, and consider the constrained problem

$$\min_{u,d} \|d\|_1 + X(u) \tag{18}$$

where, for the basis pursuit case, the term $X(u)$ takes the quadratic form of

$$\min_{u \in \mathbb{R}^n} \|u\|_1 + \frac{\mu}{2}\|Pu - q\|_2^2 \tag{19}$$

Adding a penalty term turns the constrained problem into an unconstrained one; combining (18) and (19) we get

$$\arg\min_{u,d} \|d\|_1 + X(u) + \frac{\lambda}{2}\|d - d(u)\|_2^2 \tag{20}$$

Let $\|d\|_1 + X(u)$ be $E(u,d)$; then the equation becomes

$$\arg\min_{u,d} E(u,d) + \frac{\lambda}{2}\|d - d(u)\|_2^2 \tag{21}$$

Now we can define the Bregman distance.

Bregman Distances

For a smooth, differentiable convex function the gradient is available everywhere, but for a non-differentiable function the concept of the sub-gradient comes into play. At a point there can be many sub-gradients $p$, and the set of all sub-gradients forms the sub-differential,

$$p \in \partial d(u^k) \tag{22}$$

which forms a convex set, $\partial d(u^k) = \{\, p : d(u) \ge d(u^k) + \langle p, u - u^k\rangle \ \text{for all } u \,\}$.

Figure (4): A non-differentiable convex function $d(u)$ and a supporting line $d(u^k) + \langle p, u - u^k\rangle$ defined by a sub-gradient $p$ at $u^k$.
The Bregman distance is defined as

$$D_d^{p^k}(u, u^k) = d(u) - d(u^k) - \langle p^k, u - u^k\rangle \tag{23}$$

where $p^k$ is the sub-gradient at $u^k$ and $d(u^k) + \langle p^k, u - u^k\rangle$ is the equation of the supporting (sub-gradient) line at $u^k$. In the multi-dimensional case, with $p$ any sub-gradient at $u^k$,

$$D_d^{p}(u, u^k) = d(u) - d(u^k) - \langle p, u - u^k\rangle \tag{24}$$

The Bregman distance function is always non-negative: at $u = u^k$, $D_d^{p^k} = 0$, as seen from the graph, and similarly for other values of $u$, $D_d^{p^k} \ge 0$.
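As a concrete check of definitions (23)-(24) (my own illustration, not code from the paper), the snippet below evaluates the Bregman distance of $d(u) = \|u\|_1$ using the sign vector as one valid sub-gradient, and confirms that the distance vanishes at $u = u^k$ and is non-negative elsewhere.

```python
import numpy as np

def bregman_distance_l1(u, uk):
    """D_d^{p^k}(u, u^k) = d(u) - d(u^k) - <p^k, u - u^k> with d = ||.||_1, p^k = sign(u^k)."""
    pk = np.sign(uk)                      # one valid sub-gradient of ||.||_1 at u^k
    return np.abs(u).sum() - np.abs(uk).sum() - pk @ (u - uk)

uk = np.array([1.0, -2.0, 0.0])
print(bregman_distance_l1(uk, uk))                          # 0.0 at u = u^k
print(bregman_distance_l1(np.array([0.5, 1.0, 3.0]), uk))   # non-negative elsewhere
```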
Our aim is to minimize

$$\min_{u,d} \|d\|_1 + X(u) \tag{25}$$

by iterating on

$$B^k(u) = D_d^{p^{k-1}}(u, u^{k-1}) + X(u) \tag{26}$$

that is, the (semi-)convex function $d(u)$ is replaced by the Bregman distance $D_d^{p^k}(u, u^k)$. The function and its sub-gradient $p$ can be plotted as in the figure, and we make the argument that $B^k(u)$ is greater than zero.

Figure (5): The Bregman distance function built from the sub-gradient $p^k$ at $u^k$.
$D_d^{p^{k-1}}(u, u^{k-1})$ is a distance function with minimum zero, and $X(u)$ is strictly convex; therefore the update

$$u^{k+1} = \arg\min_u D_d^{p^k}(u, u^k) + X(u) \tag{27}$$

is well defined. Writing $B^k(u)$ out in full,

$$B^k(u) = d(u) - d(u^{k-1}) - \langle p^{k-1}, u - u^{k-1}\rangle + X(u)$$

At the minimiser $u^k$ one of the sub-gradients of $B^k$ is zero, so differentiating $B^k(u)$ with respect to $u$ and setting the result to zero,

$$0 \in \partial B^k(u^k) \tag{28}$$

$$0 \in \partial d(u^k) - p^{k-1} + \nabla X(u^k) \tag{29}$$

Condition (29) on its own is not a regular (single-valued) rule, and to make it regular we work with the distance $D(u, u^k)$:

$$0 \in \partial d(u^k) - p^{k-1} + \nabla X(u^k) \tag{30}$$

Now it is easy to identify the minimum: a valid sub-gradient at $u^k$ is

$$p^{k-1} - \nabla X(u^k) \in \partial d(u^k) \tag{31}$$

Figure (6): The distance $D(u, u^k)$.

The gradient at iteration $k$ is therefore

$$p^k = p^{k-1} - \nabla X(u^k) \tag{32}$$

that is, the gradient at $k+1$ is obtained by subtracting the derivative of the strictly convex function at $k+1$ from the previous gradient $p$ at $k$. Applying a Lagrangian multiplier argument to the above equation we can write

$$p^{k+1} = p^k - \nabla X(u^{k+1}) \tag{33}$$
Bregman Algorithm

Initialize $k = 0$, $u^0 = 0$, $p^0 = 0$
while $u^k$ has not converged do
    $u^{k+1} = \arg\min_u D_d^{p^k}(u, u^k) + X(u)$
    $p^{k+1} = p^k - \nabla X(u^{k+1}) \in \partial d(u^{k+1})$
    $k = k + 1$
end while

Here $u$ and $p$ are the variables used in the Bregman algorithm and $k$ is the iteration number; the successive iterates are $u^{k+1}$ and $p^{k+1}$. The algorithm runs in a while loop with $k$ incremented at every pass, and it ends when $u^k$ converges.
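A minimal sketch of the loop above for the basis pursuit case, assuming $X(u) = (\mu/2)\|Pu - q\|_2^2$ as earlier in the paper. It uses the equivalent "add back the residual" form of the Bregman iteration and a simple inner soft-thresholding solver; all parameter choices are mine and purely illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inner_l1_ls(P, q, mu, iters=200):
    """Approximately minimise ||u||_1 + (mu/2)||P u - q||_2^2 (iterative soft thresholding)."""
    u = np.zeros(P.shape[1])
    step = 1.0 / (mu * np.linalg.norm(P, 2) ** 2)
    for _ in range(iters):
        u = soft_threshold(u - step * mu * P.T @ (P @ u - q), step)
    return u

def bregman_basis_pursuit(P, q, mu=5.0, outer=20):
    """Bregman iteration for min ||u||_1 s.t. P u = q, in the 'add back the residual' form."""
    qk = np.zeros_like(q)
    u = np.zeros(P.shape[1])
    for _ in range(outer):
        qk = qk + (q - P @ u)             # fold the current residual back into the data
        u = inner_l1_ls(P, qk, mu)        # u^{k+1} = argmin ||u||_1 + (mu/2)||P u - q^k||^2
    return u
```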
SPLIT-BREGMAN ALGORITHM

The Split Bregman algorithm is a suitable technique for solving convex minimisation problems which are non-differentiable in nature. Optimisation problems of the following form, with convex $d(\cdot) : \mathbb{R}^n \to \mathbb{R}$ and $X(\cdot) : \mathbb{R}^n \to \mathbb{R}$, can be solved by the Split Bregman algorithm. The problem can be defined as

$$\min_u |d(u)| + X(u) \tag{34}$$

Replacing $d(u)$ by $Pu$, the equation can be written as

$$\min_u \|Pu\|_1 + X(u)$$

Now take

$$d = Pu \tag{35}$$

Therefore the objective function becomes

$$\min_{u,d} \|d\|_1 + X(u) \quad \text{subject to } d = Pu$$

where $u$ and $d$ are the optimisation variables of the split formulation.
Adding a penalty term converts the constrained problem into an unconstrained one:

$$\min_{u,d} \|d\|_1 + X(u) + \frac{\lambda}{2}\|d - Pu\|_2^2 \tag{36}$$

Now take $J(u,d) = \|d\|_1 + X(u)$, so that (36) can be written as

$$\min_{u,d} J(u,d) + \frac{\lambda}{2}\|d - Pu\|_2^2 \tag{37}$$

As in the iterative Bregman method, replace $J(u,d)$ by the Bregman distance $D_J^{p^k}(u, u^k, d, d^k)$; equation (37) can then be rewritten as

$$\min_{u,d} D_J^{p^k}(u, u^k, d, d^k) + \frac{\lambda}{2}\|d - Pu\|_2^2 \tag{38}$$

Updating to the next iterate,

$$(u^{k+1}, d^{k+1}) = \arg\min_{u,d} D_J^{p^k}(u, u^k, d, d^k) + \frac{\lambda}{2}\|d - Pu\|_2^2 \tag{39}$$

From the Bregman iterative algorithm, the sub-gradients are updated as

$$p_u^{k+1} = p_u^k - \lambda P^T(Pu^{k+1} - d^{k+1}) \tag{40}$$

where $p_u^{k+1}$ is the gradient at iteration $k+1$ with respect to the variable $u$. Similarly,

$$p_d^{k+1} = p_d^k - \frac{\partial}{\partial d}\left(\frac{\lambda}{2}\|d - Pu\|_2^2\right) \tag{41}$$

$$p_d^{k+1} = p_d^k - \lambda(d^{k+1} - Pu^{k+1}) \tag{42}$$

$$p_d^{k+1} = p_d^k + \lambda(Pu^{k+1} - d^{k+1}) \tag{43}$$

where $p_d^{k+1}$ is the gradient at $k+1$ with respect to the variable $d$. Taking each term separately, $p_u^1 = -\lambda P^T(Pu^1 - d^1)$, $p_u^2 = -\lambda P^T\left[(Pu^1 - d^1) + (Pu^2 - d^2)\right]$, and so on, so that

$$p_u^{k+1} = -\lambda P^T \sum_{i=1}^{k+1}(Pu^i - d^i) = -\lambda P^T b^{k+1} \tag{44}$$

since $\sum_{i=1}^{k+1}(Pu^i - d^i) = b^{k+1}$ is called the residual vector. Similarly, $p_d^{k+1} = \lambda \sum_{i=1}^{k+1}(Pu^i - d^i) = \lambda b^{k+1}$.

Writing the Bregman distance in (39) out explicitly,

$$(u^{k+1}, d^{k+1}) = \arg\min_{u,d} J(u,d) - J(u^k,d^k) - \langle p_u^k, u - u^k\rangle - \langle p_d^k, d - d^k\rangle + \frac{\lambda}{2}\|d - Pu\|_2^2 \tag{45}$$

Substituting $p_u^k = -\lambda P^T b^k$ and $p_d^k = \lambda b^k$ in (45),

$$(u^{k+1}, d^{k+1}) = \arg\min_{u,d} J(u,d) - J(u^k,d^k) + \lambda\langle b^k, Pu - Pu^k\rangle - \lambda\langle b^k, d - d^k\rangle + \frac{\lambda}{2}\|d - Pu\|_2^2 \tag{46}$$

since

$$\langle \lambda P^T b^k, u - u^k\rangle = (\lambda P^T b^k)^T(u - u^k) = \lambda (b^k)^T P(u - u^k) = \lambda\langle b^k, Pu - Pu^k\rangle \tag{47}$$

Collecting the terms that depend on $u$ and $d$,

$$\min_{u,d} J(u,d) - \lambda\langle b^k, d - Pu\rangle + \frac{\lambda}{2}\|d - Pu\|_2^2 + c \tag{48}$$

where $c = -J(u^k,d^k) + \lambda\langle b^k, d^k - Pu^k\rangle$ is constant with respect to $u$ and $d$. Completing the square, the equation can be rewritten as

$$\min_{u,d} J(u,d) + \frac{\lambda}{2}\|d - Pu - b^k\|_2^2 + c_2 \tag{49}$$

where $c_2$ is again a constant. Solving this minimisation at every step and accumulating the residual gives the update

$$b^{k+1} = b^k + (Pu^{k+1} - d^{k+1}) \tag{50}$$
Split-Bregman Algorithm

Initialize $k = 0$, $u^0 = 0$, $d^0 = 0$, $b^0 = 0$
while $\|u^k - u^{k-1}\|_2^2 > tol$ do
    $u^{k+1} = \arg\min_u X(u) + \frac{\lambda}{2}\|d^k - Pu - b^k\|_2^2$
    $d^{k+1} = \arg\min_d |d| + \frac{\lambda}{2}\|d - Pu^{k+1} - b^k\|_2^2$
    $b^{k+1} = b^k + (Pu^{k+1} - d^{k+1})$
    $k = k + 1$
end while

Here $u$, $d$ and $b$ are the variables of the Split Bregman algorithm. In addition to the variables used in the Bregman algorithm, the extra variable $b$ is introduced to reduce the computational complexity. $k$ is the present iteration and the successive iterate is $k+1$. The difference between the present and previous iterates is compared with the tolerance value: as long as it is greater than the tolerance the while loop continues, and otherwise the loop ends.
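Below is a minimal NumPy sketch of the Split Bregman loop above, specialised to anisotropic TV denoising with $d(u) = (\nabla_x u, \nabla_y u)$. It is not the authors' MATLAB implementation: the periodic-boundary FFT solve used for the $u$-subproblem, the parameter defaults and the stopping rule are my own choices.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding (shrinkage) operator used for the d-subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dx(u):  return np.roll(u, -1, axis=1) - u   # forward difference, periodic, x-direction
def dy(u):  return np.roll(u, -1, axis=0) - u   # forward difference, periodic, y-direction
def dxT(v): return np.roll(v, 1, axis=1) - v    # adjoint of dx
def dyT(v): return np.roll(v, 1, axis=0) - v    # adjoint of dy

def split_bregman_tv(f, mu=0.05, lam=0.1, tol=1e-3, max_iter=100):
    """Anisotropic TV denoising:  min_u |dx(u)| + |dy(u)| + (mu/2)||u - f||_2^2."""
    H, W = f.shape
    # Eigenvalues of dxT(dx(.)) + dyT(dy(.)) under periodic boundaries (for the FFT solve).
    Lx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(W))
    Ly = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(H))
    denom = mu + lam * (Ly[:, None] + Lx[None, :])

    u = f.copy()
    d_x = np.zeros_like(f); d_y = np.zeros_like(f)
    b_x = np.zeros_like(f); b_y = np.zeros_like(f)
    for _ in range(max_iter):
        u_prev = u
        # u-subproblem: (mu I + lam (dxT dx + dyT dy)) u = mu f + lam dxT(d_x - b_x) + lam dyT(d_y - b_y)
        rhs = mu * f + lam * (dxT(d_x - b_x) + dyT(d_y - b_y))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # d-subproblem: elementwise shrinkage of the shifted gradients
        d_x = shrink(dx(u) + b_x, 1.0 / lam)
        d_y = shrink(dy(u) + b_y, 1.0 / lam)
        # Bregman (residual) update
        b_x = b_x + dx(u) - d_x
        b_y = b_y + dy(u) - d_y
        if np.linalg.norm(u - u_prev) ** 2 <= tol * np.linalg.norm(u_prev) ** 2:
            break
    return u
```

In this sketch the fidelity weight mu plays the role of the weighting parameter reported in the experiments below, and lam is the penalty weight λ introduced by the split.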
EXPERIMENT AND RESULTS
The experimental part of Split Bregman denoising is implemented in MATLAB. The denoised output of a cameraman image corrupted with random noise is given below. A PSNR of 28.3971 dB and an MSE of 94.0525 are obtained using a weighting parameter mu = 0.050 for the fidelity term and a tolerance of 0.001.

Figure (6): Original image
Figure (7): Noisy image
Figure (8): Denoised image
Figure (9): Difference image
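The reported values are mutually consistent: for 8-bit images PSNR = 10·log10(255²/MSE), and an MSE of 94.0525 indeed gives roughly 28.4 dB. A minimal sketch of how these quantities are typically computed (my own illustration, not the paper's code):

```python
import numpy as np

def mse(original, processed):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    return 10.0 * np.log10(peak ** 2 / mse(original, processed))
```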
CONCLUSION
In this paper we explored the fundamental theory and the mathematical formulation of the Bregman and Split Bregman methods with the help of convex functions and constrained optimisation. We then described the denoising behaviour of Split Bregman using the corresponding algorithms; the results are shown in Figures (6)-(9) above.
REFERENCES
- Jacqueline Bush (supervisor: Dr. Carlos García-Cervera), "Bregman Algorithms", June 10, 2011.
- Pascal Getreuer, "Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman", Yale University, 2012.
- Tom Goldstein, "The Split Bregman Method for L1 Regularized Problems", May 22, 2008.
- José M. Bioucas-Dias and Mário A. T. Figueiredo, "Total Variation Restoration of Speckled Images Using a Split-Bregman Algorithm", Instituto de Telecomunicações, Instituto Superior Técnico, Lisboa, Portugal, March 24, 2009.
- Rong-Qing Jia, Hanqing Zhao and Wei Zhao, "Convergence Analysis of the Bregman Method for the Variational Model of Image Denoising".
- C. Y. Lee, "Unconstrained Optimization".
- "Basics of Unconstrained Optimization", http://www.pages.drexel.edu/
- Xiyin Li, "Fundamentals of Constrained Optimization".
- Richard de Neufville, Joel Clark and Frank R. Field, "Constrained Optimization", Massachusetts Institute of Technology.