- Authors : Dr. Nidhi Gupta
- Paper ID : IJERTCONV5IS01060
- Volume & Issue : ICIATE – 2017 (Volume 5 – Issue 01)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Estimation of Software Reliability by Sequential Testing with Simulated Annealing of Mean Field Approximation
Dr. Nidhi Gupta
Atharva College of Engineering
Abstract: Various problems of combinatorial optimization and permutation can be solved with neural network optimization. The problem of estimating software reliability can be solved by optimizing the number of failed components to its minimum value. Various solutions to the problem of estimating software reliability have been given; these solutions are exact and heuristic, but all the exact approaches are mainly of theoretical interest. In this paper, we propose a simulated annealing technique based on the mean field approximation for finding the possible minimum number of failed components in sequential testing. This minimum number of failed components depends upon the selection of time intervals, or slots, and the selection of slots satisfies all the necessary constraints of the problem. A new energy function with the mean field approximation is also proposed. The constraint parameter for the annealing schedule is defined dynamically and changes with the selected time interval or slot on each iteration of the processing. The algorithm of the whole process shows that this approach can generate the optimal solution.
Keywords: Software Reliability, Mean field approximation, Simulated Annealing, Optimization.
I. INTRODUCTION
Most of the traditional problems of combinatorial permutation can be solved with the help of an Artificial Neural Network (ANN) [1]. There are various applications and uses of neural networks [2]. One of the most successful applications of the neural network is solving optimization problems [3]. There are many situations where a problem may be formulated as the minimization or maximization of some cost or objective function subject to constraints. It is possible to map such problems onto a feedback network, where the units and connection strengths are identified by comparing the cost function of the problem with the energy function of the network, expressed in terms of the state values of the units and the connection strengths. Software reliability can be defined in statistical terms as the probability of fault-free operation of a system or component to perform its required functions under stated conditions for a specified period of time [4]. Software reliability represents a user (or customer) oriented view of software quality. It relates directly to the operation rather than the design of a program, and hence it is dynamic rather than static. For this reason software reliability is concerned with failures occurring, not with faults in a program; a failure means a function of the software that does not meet user requirements. The reliability of any software can be estimated using the failure data of the system, but the prediction of exact failures in software is complicated. We have various statistical methods [5] for predicting the failures, but investigation indicates that these models are not very appropriate for the prediction of failures. It may be a better idea to use the optimization techniques of ANNs for determining the failures and to use them for estimating the software reliability.

Various solutions to this type of problem have been proposed in which the problem is formulated as the optimization of some cost function subject to certain constraints. The same method can also be applied to optimize the reliability estimate, where the objective is to minimize the number of failed components in sequential testing [6, 7]. The traditional methods for solving such problems were the gradient-descent approach of hill climbing and stochastic simulated annealing [8, 9]. In this paper, we propose a simulated annealing technique based on the mean field approximation for optimizing the possible minimum number of failed components in a given time duration. The selection of time intervals, or slots, satisfies all the necessary constraints. The energy function of a Hopfield-type feedback neural network represents the stored input pattern in the form of the number of failed components; this energy function also satisfies all the necessary constraints imposed in the problem. A global constraint in the form of the number of failed components in a particular slot or time interval (which is selected randomly) can be chosen for the annealing schedule. The constraint is reduced as per the annealing schedule, and the corresponding new energy function is estimated. It can be seen that the possible minimum energy function will represent the minimum number of failed components in the sequential testing.
II. SIMULATED ANNEALING OF MEAN FIELD APPROXIMATION FOR OPTIMIZING THE NUMBER OF FAILED COMPONENTS OF THE SOFTWARE

To optimize the maximum or minimum number of failed components in the testing period, we can consider the optimization method of simulated annealing. Optimization is a process in which a function is formulated as the minimization or maximization of some cost or objective function subject to certain constraints. The solution of the problem is generally defined in two terms: a globally optimal solution and a locally optimal solution. A globally optimal solution is one for which there is no other feasible solution with a better objective function value; a locally optimal solution is one for which there is no other feasible solution in its vicinity with a better objective function value. In the testing of the different software components, the objective is to minimize the number of failed components in the given time duration (T) under certain conditions. We divide the total time duration into different small time intervals and note the number of failed components at the end of each slot. We can therefore choose the time interval for the execution of the software in different ways: for example, we may note the number of failed components after every hour, or after every 2 hours, and so on. The question, then, is which time interval should be selected so that the number of failed components is minimized, subject to the constraint that each slot should come only once in the testing period.

The Hopfield memory can be used to solve this problem. In this process the characteristic of interest is the rapid minimization of the energy function. To use the Hopfield memory for this application, we map the problem onto a Hopfield-type network architecture. The first step is to develop a representation of the solution that fits an architecture having a single array of processing elements (PEs). We do this by allowing a set of N PEs to represent the N possible positions for a given slot in the testing sequence. The weight matrix format can be found from the number of failed components in each particular slot. The output is labeled T_{Xi}, where the subscript X refers to the slot and the subscript i refers to the position in the testing. To formulate the connection weight matrix, an energy function must be constructed that satisfies the following criteria (a feasibility check for the first two criteria is sketched after the list):

- Energy minima must favor states that have each slot only once in the sequential testing.
- Energy minima must favor states that have each position in the testing only once.
- Energy minima must favor states with the minimum total number of failed components.
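The first two criteria simply require the assignment matrix T to be a permutation matrix over slots and positions. A minimal sketch of such a feasibility check is given below, assuming T is held as a 0/1 NumPy array indexed as T[X, i]; the function name and representation are illustrative, not taken from the paper.

```python
import numpy as np

def is_feasible(T: np.ndarray) -> bool:
    """Each slot X occupies exactly one position i, and each position i holds
    exactly one slot (the first two energy criteria listed above)."""
    T = np.asarray(T)
    binary = np.isin(T, (0, 1)).all()
    one_position_per_slot = (T.sum(axis=1) == 1).all()
    one_slot_per_position = (T.sum(axis=0) == 1).all()
    return bool(binary and one_position_per_slot and one_slot_per_position)

# Example: a valid assignment of 3 slots to 3 testing positions
T = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])
print(is_feasible(T))  # True
```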
Denoting the state of a processing unit of the Hopfield network by T_{Xi} = 1 to indicate that slot X is to be completed at the ith stage of the testing, the energy function [14] can be written as

E = \frac{A}{2}\sum_{X=1}^{N}\sum_{i=1}^{N}\sum_{j=1,\, j\neq i}^{N} T_{Xi}T_{Xj} + \frac{B}{2}\sum_{i=1}^{N}\sum_{X=1}^{N}\sum_{Y=1,\, Y\neq X}^{N} T_{Xi}T_{Yi} + \frac{C}{2}\sum_{X=1}^{N}\sum_{Y=1,\, Y\neq X}^{N}\sum_{i=1}^{N} f_{XY}\,T_{Xi}\,(T_{Y,i+1}+T_{Y,i-1})    (2.1)

where A, B and C are positive constants that denote the relative importance given to the constraints.

The mean field approximation algorithm [10, 11] has also been proposed as a solution for this type of optimization problem. The following energy function with the mean field approximation can be proposed [11]:

E_{ST} = \frac{f_{max}}{2}\sum_{i=1}^{N}\sum_{j=1,\, j\neq i}^{N}\sum_{X=1}^{N} T_{Xi}T_{Xj} + \frac{1}{2}\sum_{X=1}^{N}\sum_{Y=1,\, Y\neq X}^{N}\sum_{i=1}^{N} f_{XY}\,T_{Xi}\,(T_{Y,i+1}+T_{Y,i-1})    (2.2)

where f_{max} is a real constant that is slightly larger than the largest number of failed components between the time intervals in the given sequential testing. In equation (2.2) there are only two terms: the first term concerns feasibility and inhibits two slots from being in the same testing time, while the second summation term is used for the minimization of the number of failed components. The term f_{max} is used for balancing the two summation terms.

The output T_{Xi} of a neuron (X, i) is interpreted as the probability of finding slot X in the testing duration at position i. The mean field for a neuron (X, i) is defined according to the energy function given in equation (2.2) as

E_{Xi} = f_{max}\,T_{Xi} + \sum_{Y=1,\, Y\neq X}^{N} f_{XY}\,(T_{Y,i+1}+T_{Y,i-1})    (2.3)
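For illustration, the sketch below evaluates the mean-field energy of equation (2.2) and the field E_{Xi} of equation (2.3) for a candidate assignment. It assumes T[X, i] is a NumPy array of unit outputs and f[X, Y] holds the failure counts f_{XY}; out-of-range neighbour terms T_{Y,0} and T_{Y,N+1} are treated as zero, which the paper does not specify. All names are illustrative.

```python
import numpy as np

def mfa_energy(T: np.ndarray, f: np.ndarray, f_max: float) -> float:
    """Energy E_ST of eq. (2.2): a feasibility term that penalises one slot
    occupying two positions, plus a cost term built from failure counts."""
    N = T.shape[0]
    feasibility = 0.0
    for X in range(N):
        for i in range(N):
            for j in range(N):
                if j != i:
                    feasibility += T[X, i] * T[X, j]
    cost = 0.0
    for X in range(N):
        for Y in range(N):
            if Y == X:
                continue
            for i in range(N):
                # neighbours T[Y, i+1] and T[Y, i-1]; missing neighbours count as 0
                right = T[Y, i + 1] if i + 1 < N else 0.0
                left = T[Y, i - 1] if i - 1 >= 0 else 0.0
                cost += f[X, Y] * T[X, i] * (right + left)
    return 0.5 * f_max * feasibility + 0.5 * cost

def mean_field(T: np.ndarray, f: np.ndarray, f_max: float, X: int, i: int) -> float:
    """Field E_Xi of eq. (2.3) for neuron (X, i)."""
    N = T.shape[0]
    right = T[:, i + 1] if i + 1 < N else np.zeros(N)
    left = T[:, i - 1] if i - 1 >= 0 else np.zeros(N)
    neighbours = sum(f[X, Y] * (right[Y] + left[Y]) for Y in range(N) if Y != X)
    return f_max * T[X, i] + neighbours
```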
Initially all the neurons are set to the average value and the constraint parameter F is set to f_{max}. The weights of the interconnections are initialized with small random numbers. As per the Hopfield model, the stable state of any ith neuron can be defined as

T_i(t+1) = F'\Big[\sum_{j\neq i} W_{ij}\,T_j(t)\Big]    (2.4)

This stable state may lead to a state corresponding to a local minimum of the energy function. In order to reach the global minimum, i.e. the possible minimum number of failed components in the sequential testing, by passing the local minima, we use the concept of stochastic updating of the units in the activation dynamics of the network. In stochastic updating, the state of a unit is updated probabilistically, controlled by the annealing-schedule constraint parameter (F = f_{max}), and the probability distribution of the states at thermal equilibrium [12] is given as

P(T_{Xi}) = \frac{1}{Z}\, e^{-E_{Xi}/F}    (2.5)

where Z is the partition function.

Initially, an arbitrary time slot is selected and the energy function of the network is constructed as given in equation (2.2). The annealing schedule is assigned the maximum number of failed components, i.e. its higher value. At a higher F, many states are likely to be visited, irrespective of the energies of those states. Now, as the value of the constraint parameter F is gradually reduced as per the annealing schedule, the output values of the states perturb. This perturbation continues until the network settles into a stable or equilibrium state. The network then estimates the energy function for this state and compares it with the previous energy function by computing ΔE. If ΔE ≤ 0, we accept the solution with the highest probability, i.e. 1; otherwise we accept it with the probability given in equation (2.5). On each iteration of this process in which the solution is accepted with probability 1, the constraint parameter F is assigned the f_{max} of the newly constructed energy function. Every time an acceptable solution is found with higher probability, the constraint parameter changes to the newly found maximum number of failures. This newly found maximum number of failed components will be less than the previously found one, so the simulated annealing process continues with the new value of the constraint parameter. This process continues till the final value of the schedule is obtained. At this state the units of the network represent the state of equilibrium, which represents the minimum energy function for the network. Thus the minimum energy function represents the possible minimum number of failed components in the sequential testing.
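A hedged sketch of the acceptance rule described above: moves that do not increase the energy are always kept, other moves are kept with probability exp(-ΔE/F) as in equation (2.5), and on acceptance the constraint parameter is re-assigned from the newly found maximum failure count. Function names are illustrative.

```python
import math
import random

def accept(delta_E: float, F: float) -> bool:
    """Metropolis-style rule: accept if the energy does not increase,
    otherwise accept with probability exp(-delta_E / F), cf. eq. (2.5)."""
    if delta_E <= 0:
        return True
    return random.random() < math.exp(-delta_E / F)

def next_constraint_parameter(accepted: bool, F: float, f_max_new: float) -> float:
    """On an accepted move the schedule re-assigns F to the newly found
    maximum number of failed components; otherwise F is kept."""
    return f_max_new if accepted else F
```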
In order to speed up the process of simulated annealing, the mean field approximation is used [13], in which the stochastic updating of the binary units is replaced by deterministic analog states [14]. Thus the fluctuating activation value of each unit is replaced with its average value. Equation (2.4) can be expressed with this method as

\langle T_i(t+1)\rangle = F'\Big[\Big\langle \sum_{j\neq i} W_{ij}\,T_j(t)\Big\rangle\Big] = F'\Big[\sum_{j\neq i} W_{ij}\,\langle T_j(t)\rangle\Big]    (2.6)

and from the stochastic updating with thermal equilibrium we have

\langle T_i(t+1)\rangle = \tanh\Big[\frac{1}{F}\sum_{j\neq i} W_{ij}\,\langle T_j(t)\rangle\Big]    (2.7)

This equation is solved iteratively, starting with some arbitrary values \langle T_i(0)\rangle. The stable state in the mean field approximation is a result of the minimization of an effective energy and is defined as a function of F [17]:

\langle T_i\rangle = \tanh\Big[-\frac{1}{F}\,\frac{\partial E(\langle T_i\rangle)}{\partial \langle T_i\rangle}\Big]    (2.8)

where the effective energy E(\langle T_i\rangle) is the expression for the energy function of the Hopfield model using averages for the state variables.

Now, as the constraint parameter F decreases as per the annealing schedule, the state of the network perturbs with the stochastic asynchronous updating of the processing elements. As the output values perturb, the neurons produce updated states. The state updating continues until equation (2.6) is satisfied. Hence, for the stability condition, the energy function of the annealing schedule can be expressed as

E_{ST} = \frac{f_{max}^{new}}{2}\sum_{i=1}^{N}\sum_{j=1,\, j\neq i}^{N}\sum_{X=1}^{N} T_{Xi}T_{Xj} + \frac{1}{2}\sum_{X=1}^{N}\sum_{Y=1,\, Y\neq X}^{N}\sum_{i=1}^{N} f_{XY}\,T_{Xi}\,(T_{Y,i+1}+T_{Y,i-1})    (2.9)

The energy difference ΔE is computed as

\Delta E = E_{ST}^{new} - E_{ST}^{old} = \frac{f_{max}^{new}}{2}\sum_{i}\sum_{j\neq i} T_{Xi}^{new}T_{Xj}^{new} - \frac{f_{max}^{old}}{2}\sum_{i}\sum_{j\neq i} T_{Xi}^{old}T_{Xj}^{old} + \frac{1}{2}\sum_{X}\sum_{Y\neq X}\sum_{i}\big(f_{XY}^{new}-f_{XY}^{old}\big)\big(T_{Xi}^{new}-T_{Xi}^{old}\big)\big[(T_{Y,i+1}+T_{Y,i-1})^{new}-(T_{Y,i+1}+T_{Y,i-1})^{old}\big]    (2.10)

If the energy difference ΔE ≤ 0, the probability of accepting the solution becomes 1; otherwise it is P = e^{-\Delta E/F}. On each iteration with the highest probability of accepting the solution, the constraint parameter of the annealing schedule is defined as

F = f_{max}^{new}    (2.11)

On each iteration the constraint parameter is thus changed to its minimum value with respect to the previous value. Hence, each time, the network determines the stable condition with the new value of the energy function E and the constraint parameter F (when ΔE ≤ 0). The neurons produce the stable state that represents the position of the slot with the minimum number of failed components with respect to the previous position. So, at stability with ΔE ≤ 0, the state of the neurons can be defined from equation (2.5) as

\langle T_{Xi}\rangle = \tanh\Big[\frac{1}{f_{max}^{new}}\Big(\frac{f_{max}^{new}}{2}\sum_{i}\sum_{j\neq i} T_{Xi}T_{Xj} + \frac{1}{2}\sum_{X}\sum_{Y\neq X}\sum_{i} f_{XY}\,T_{Xi}\,(T_{Y,i+1}+T_{Y,i-1})\Big)\Big]    (2.12)

The mean field for the neuron (X, i), where the output \langle T_{Xi}\rangle can be interpreted as the probability of finding slot X in the ith testing position, is defined from equation (2.3) as

E_{Xi} = f_{max}^{new}\,\langle T_{Xi}\rangle + \sum_{Y\neq X} f_{XY}\,\big(\langle T_{Y,i+1}\rangle + \langle T_{Y,i-1}\rangle\big)    (2.13)

Thus, the entire process continues until a fixed point is found for every value of the constraint parameter F. In the process of selecting a fixed point, the change in the energy is computed, and the probability of accepting the point in the minimum number of failed components can be found. The field for the selected points is also computed. This entire process continues for every schedule of the constraint parameter F, until F reaches its final value.
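A minimal sketch of the deterministic mean-field relaxation of equation (2.7): the average states are updated with the tanh rule until they stop changing. The initial values, tolerance and iteration cap are assumptions; names are illustrative.

```python
import numpy as np

def mean_field_relaxation(W: np.ndarray, F: float, tol: float = 1e-6,
                          max_iter: int = 1000) -> np.ndarray:
    """Iterate <T_i(t+1)> = tanh( sum_{j != i} W_ij <T_j(t)> / F ) to a fixed
    point, cf. eq. (2.7); the averages start from small random values."""
    n = W.shape[0]
    T = 0.01 * np.random.randn(n)            # arbitrary initial averages <T_i(0)>
    W_off = W - np.diag(np.diag(W))          # exclude the j == i term
    for _ in range(max_iter):
        T_new = np.tanh(W_off @ T / F)
        if np.max(np.abs(T_new - T)) < tol:  # fixed point reached
            return T_new
        T = T_new
    return T
```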
The algorithm of the entire process can be proposed as:

1. Initialize the weights and bias with small random numbers. The value of F is initialized with any large random number.
2. Store the input pattern in the form of the number of failed components (f_1, f_2, ..., f_n) that are observed during the different time slots, i.e. (t_1, t_2, ..., t_n), of the sequential testing of the different modules of the software. The number of failures (f_1, f_2, ..., f_n) is stored with the stable state of the network as

   T_i(t+1) = F'\Big[\sum_{j\neq i} W_{ij}\,T_j(t)\Big]

3. Do until a fixed point is found:
   3.1 Randomly select a slot, say X.
   3.2 Compute the energy function as

       E_{ST} = \frac{f_{max}}{2}\sum_{i}\sum_{j\neq i}\sum_{X} T_{Xi}T_{Xj} + \frac{1}{2}\sum_{X}\sum_{Y\neq X}\sum_{i} f_{XY}\,T_{Xi}\,(T_{Y,i+1}+T_{Y,i-1})

       and compute the state of the unit at equilibrium,

       T_{Xi} = \tanh\Big[\frac{1}{f_{max}}\,\frac{\partial E_{ST}}{\partial T_{Xi}}\Big]

       Also calculate ΔE using equation (2.10).
   3.3 If ΔE ≤ 0, accept with P(\langle T_{Xi}\rangle) = 1 and set F = f_{max}; otherwise accept with P(\langle T_{Xi}\rangle) = exp(-ΔE/F).
   3.4 Compute the mean field as

       E_{Xi} = f_{max}\,T_{Xi} + \sum_{Y\neq X} f_{XY}\,(T_{Y,i+1}+T_{Y,i-1})

       (the average of the output values of the accepted fixed points, i.e. the neurons).
4. If F reaches the final value, stop the processing; otherwise decrease F according to the annealing schedule and repeat from step 2.
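Putting the steps together, the following generic sketch mirrors the outer loop of the algorithm above; it is not the authors' code. The problem-specific energy and proposal routines are passed in as callables, and a geometric factor alpha stands in for the paper's re-assignment of F from f_max^new in equation (2.11).

```python
import math
import random

def simulated_annealing(energy, propose, state, F, F_final, alpha=0.95):
    """Generic outer loop of the algorithm above: propose a change (step 3.1),
    evaluate the energy (step 3.2), accept or reject it (step 3.3), lower F
    (step 4).  `energy` and `propose` are problem-specific callables; `alpha`
    is an assumed geometric schedule standing in for F = f_max^new."""
    E_old = energy(state)
    while F > F_final:
        candidate = propose(state)           # e.g. relax one randomly chosen slot
        E_new = energy(candidate)
        dE = E_new - E_old                   # cf. eq. (2.10)
        if dE <= 0 or random.random() < math.exp(-dE / F):
            state, E_old = candidate, E_new  # accept with probability 1 or exp(-dE/F)
        F *= alpha                           # decrease the constraint parameter
    return state
```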
Table [1] Number of failures in the different time intervals for the sequential testing
Time(hr) |
Failed components |
Time(hr) |
Failed components |
Time (hr) |
Failed compone nts |
Time(hr) |
Failed components |
Time(hr) |
Failed components |
0-1 |
130 |
0-2 |
170 |
0-3 |
150 |
0-4 |
240 |
0-5 |
300 |
1-2 |
83 |
2-4 |
158 |
3-6 |
163 |
4-8 |
248 |
5-10 |
350 |
2-3 |
75 |
4-6 |
132 |
6-9 |
148 |
8-12 |
230 |
10-15 |
200 |
3-4 |
68 |
6-8 |
145 |
9-12 |
130 |
12-16 |
172 |
15-20 |
150 |
4-5 |
62 |
8-10 |
100 |
12-15 |
121 |
16-20 |
110 |
||
5-6 |
56 |
10-12 |
73 |
15-18 |
146 |
||||
6-7 |
51 |
12-14 |
60 |
18-20 |
140 |
||||
7-8 |
46 |
14-16 |
16 |
||||||
8-9 |
41 |
16-18 |
18 |
||||||
9-10 |
37 |
18-20 |
20 |
||||||
10-11 |
34 |
||||||||
11-12 |
31 |
||||||||
12-13 |
28 |
||||||||
13-14 |
64 |
||||||||
14-15 |
76 |
||||||||
15-16 |
62 |
||||||||
16-17 |
40 |
||||||||
17-18 |
12 |
||||||||
18-19 |
3 |
||||||||
19-20 |
1 |
III. SIMULATION RESULTS

In this paper, for estimating the software reliability, we have applied the technique of sequential testing with simulated annealing of the mean field approximation. The pattern of failure can be obtained from life-test results, i.e. by testing a fairly large number of components until failure occurs and observing the failure-rate characteristics as a function of time. For the optimization of software reliability we consider a series of tests conducted under certain stipulated conditions on 1000 software components. The total time duration of the tests is 20 hours. The number of components that fail during the different defined time intervals is noted, and the results obtained are tabulated in table [1]. This table lists the total number of failed components at the end of 1 hr, 2 hr, 3 hr, and so on, in the second column. In the same manner it lists the total number of failed components at the end of 2 hr, 4 hr, and so on, in the next column, and so on. Now, after some iterations, these numbers of failures are stored in the different energy minima with the time interval (as figure (1) shows) according to the energy function given in equation (2.2). Initially an arbitrary slot is selected and the energy function is constructed as given in equation (2.2). The annealing schedule is assigned the maximum number of failures, i.e. F. At a higher F, many states are likely to be visited. Then, as the value of the constraint parameter F is gradually reduced as per the annealing schedule, the output values of the states perturb. This perturbation continues until the network settles into a stable or equilibrium state. At the stable state, after minimizing the constraint parameter, table [2] can be constructed.
Table [2]: Selection of time interval with the minimum number of failures

| Time interval | Number of failures |
|---|---|
| 0-1 | 130 |
| 0-2 | |
| 0-3 | |
| 0-4 | |
| 0-5 | |
| 1-2 | 83 |
| 2-3 | 75 |
| 2-4 | |
| 3-4 | 68 |
| 3-6 | |
| 4-5 | 62 |
| 4-6 | |
| 4-8 | |
| 5-6 | 56 |
| 5-10 | |
| 6-7 | 51 |
| 6-8 | |
| 6-9 | |
| 7-8 | 46 |
| 8-9 | 41 |
| 8-10 | |
| 8-12 | |
| 9-10 | 37 |
| 9-12 | |
| 10-11 | 34 |
| 10-12 | |
| 10-15 | |
| 11-12 | 31 |
| 12-13 | 28 |
| 12-14 | |
| 12-15 | |
| 12-16 | |
| 13-14 | 64 |
| 14-15 | |
| 14-16 | 65 |
| 16-17 | 40 |
| 16-18 | |
| 16-20 | |
| 17-18 | 12 |
| 18-19 | 3 |
| 18-20 | |
| 19-20 | 1 |
Table [2] shows that the number of failures gradually reduces from its higher value of 350 down to 130, and this trend continues in the other rows for the selection of the next slots. Now the energy function changes as given in equation (2.10), and the state of the neurons can be defined by equation (2.5). This entire process continues for every schedule of the constraint parameter F, until F reaches its final state. After completing all the iterations down to the allowable minimum limit of F, table [3] can be constructed. Table [3] shows that the number of failed components has been reduced: initially 1000 components are under test and, after applying the annealing schedule in the sequential testing, the cumulative number of failures is 927. The results of the 4th column show that this optimization technique reduced the number of failed components by 73. Figure (2) shows the optimized number of failed components for the different time intervals in the energy landscape. Table [3] also shows the failure density, the failure rate (intensity) and the reliability after the failures in each time interval. The table shows that the reliability of the software components is increasing.

Table [3]: Optimized values of the number of failures, cumulative number of failures, failure rate, failure density, and reliability
| Time (hr) | Failed components f (in preceding interval) | Cumulative failures F | No. of survivors | Failure density | Failure rate | Reliability |
|---|---|---|---|---|---|---|
| 0 | | 0 | 1000 | | | 1 |
| 1 | 130 | 130 | 870 | 0.13 | 0.139 | 0.87 |
| 2 | 83 | 213 | 787 | 0.083 | 0.101 | 0.787 |
| 3 | 75 | 288 | 712 | 0.075 | 0.1 | 0.712 |
| 4 | 68 | 356 | 644 | 0.068 | 0.1 | 0.644 |
| 5 | 62 | 418 | 582 | 0.062 | 0.101 | 0.582 |
| 6 | 56 | 474 | 526 | 0.056 | 0.101 | 0.526 |
| 7 | 51 | 525 | 475 | 0.051 | 0.101 | 0.475 |
| 8 | 46 | 571 | 429 | 0.046 | 0.101 | 0.429 |
| 9 | 41 | 612 | 388 | 0.041 | 0.1 | 0.388 |
| 10 | 37 | 649 | 351 | 0.037 | 0.1 | 0.351 |
| 11 | 34 | 683 | 317 | 0.034 | 0.101 | 0.317 |
| 12 | 31 | 714 | 286 | 0.031 | 0.103 | 0.286 |
| 13 | 28 | 742 | 258 | 0.028 | 0.103 | 0.258 |
| 14 | 64 | 806 | 194 | 0.064 | 0.283 | 0.194 |
| 16 | 65 | 871 | 129 | 0.065 | 0.04 | 0.129 |
| 17 | 40 | 911 | 89 | 0.04 | 0.368 | 0.089 |
| 18 | 12 | 923 | 77 | 0.012 | 0.144 | 0.077 |
| 19 | 3 | 926 | 74 | 0.003 | 0.039 | 0.074 |
| 20 | 1 | 927 | 73 | 0.001 | 0.013 | 0.073 |
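The derived columns of table [3] can be reproduced from the per-interval failure counts alone. The sketch below assumes the conventional definitions (failure density = f/N0, failure rate = f divided by the average number of survivors over the interval, reliability = remaining survivors/N0); these reproduce most of the tabulated values to rounding. It is an illustration, not the authors' code.

```python
failures = [130, 83, 75, 68, 62, 56, 51, 46, 41, 37,
            34, 31, 28, 64, 65, 40, 12, 3, 1]   # optimized per-interval failures (Table 3)
N0 = 1000                                       # components under test

survivors = N0
cumulative = 0
for f in failures:
    density = f / N0                            # failure density for the interval
    rate = f / (survivors - f / 2)              # f over average survivors in the interval
    cumulative += f
    survivors -= f
    reliability = survivors / N0                # fraction still surviving
    print(f"f={f:4d}  F={cumulative:4d}  density={density:.3f}  "
          f"rate={rate:.3f}  R={reliability:.3f}")
```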
[Figure (2): Energy states of the optimized number of failed components, plotted against the time interval (x-axis: Time Interval, 0 to 25; y-axis scale: 0 to 140).]
IV. CONCLUSION

In the process of estimating the software reliability, we optimize the failed software components to their minimum value. Using the simulated annealing technique of the MFA, we reduce the failures in a very systematic way. For this we divide the total time duration into small time intervals, or slots, and note the failed components at the end of each slot. The selection of a time interval or slot depends upon certain constraints. The main contribution of our work is the design and formulation of the necessary constraints under which a slot is selected, and the determination of the optimized number of failed components of the software. To accomplish this task, we select the slots in the sequential testing, and the selection of slots satisfies all the necessary constraints imposed in the problem. The Hopfield energy function represents the stored failed components in the different time intervals of the sequential testing. The constraint parameter F of the annealing schedule is changed on each iteration of the process with f_max (the number of failures), which is slightly larger than the number of failures between the slots that have been selected for the sequence. This process continues till the final value of the schedule is obtained. At this state the units of the network represent the state of equilibrium, which represents the minimum energy state for the network. Thus the minimum energy function represents the possible minimum number of failed components in the sequential testing, and each energy iteration is scheduled with the optimized value of F. The reliability of the software is easily estimated with the optimized minimum number of failures determined by the network. The results show that after applying sequential testing with the MFA we can optimize the number of failures down to its minimum value. This minimum number of failures increases the software reliability. More experiments and analytical investigation are still required to increase the efficiency and speed of the solution.