Anand Singh Rajawat 1, Sangita Tomar 2, Upendra Dwivedi 3 and Dr. Akhilesh R. Upadhyay 4
1 JJT University, Jhunjhunu, India
2 TRUBA College of Science & Technology, CSE Department, Indore, India
3 JJT University, Jhunjhunu, India
4 Professor and Head, Dept. of Communication Engg., SIRT Bhopal, India
E-mail: 1 rajawat_iet@yahoo.in, 2 sangitatomar31@gmail.com, 3 ud1985@gmail.com, 4 akhileshupadhyay@yahoo.com

Abstract
A great deal of research has addressed the reliability of web-based projects. Reliability is the probability of failure-free web project operation for a specified period of time in a specified environment. It plays a key role in planning and controlling the development of projects, and it enables an organization to deliver a quality product on time and within budget. A large number of models are available for estimating reliability, but most of them are complex, expensive, and require considerable effort to estimate their parameters. In this paper, a comparative study of metrics parameters (defect density, planned effort, planned time, and resources) and defect statistical data (measured during testing) is carried out to measure and predict reliability and to calculate testing effort. The Moranda model is used to predict web project reliability. Through our extended model we can determine whether a project is reliable and, at the same time, predict its reliability, unlike traditional models that only measure the reliability of a project. The proposed model also makes it easier to plan resource requirements. The model is computationally simple and produces fairly accurate results.

Keywords: Web Project Reliability, Moranda Model, Web Project Reliability Estimation, Defect Statistical Data.

Introduction
In today's technological world, many types of projects are developed for many purposes, and we need to know how reliable these projects are. To assess reliability, many web project reliability models have been developed over time. Work on software reliability models started in the 1970s, with the first model published in 1972. This paper describes a system that can measure the reliability of, and calculate the testing effort for, any web project. To this end, a comparative study of metrics parameters and defect statistical data is carried out. Defect statistical data, measured during testing, is compared with the metrics and used to predict web project reliability via the Moranda model equation. The outcome of this research is presented graphically, which facilitates the estimation of web project reliability. Our system is based on a comparison between baseline data and actual data. Baseline data is a collection of standard information from various projects; in this research we use baseline data provided by a reputed software company and treat it as the standard for all our work. Actual data is the collection of testing results for the projects whose reliability we are estimating. Our research compares baseline data with actual data to estimate software reliability. We also define baseline defect density as the number of defects per unit size of code. The baseline defect density sets the standard defect density we follow across projects: a project's defect density must meet the standard defined by the baseline defect density to pass our reliability estimation criterion.
When the baseline effort estimates, revised effort estimates, and actual effort are plotted together for all phases of the SDLC, effort variances can be estimated. Schedule variances are calculated at the end of every milestone to find out how well the project is doing with respect to its schedule. In our research, the Moranda model is used to predict reliability and to estimate effort. This model is credited with being one of the first software reliability models [10]. It belongs to the class of exponential order statistic models, which assume that fault detection and correction begin when the program contains a fixed number of faults, each contributing equally to the failure rate. The basic assumptions of the model are: the rate of fault detection is proportional to the current fault content of the web project; the fault detection rate remains constant over the intervals between fault occurrences; a fault is corrected instantaneously without introducing new faults into the software; every fault within a severity class has the same chance of being encountered as any other fault in that class; and the failures observed when faults are detected are independent.

Web Project Reliability Estimation
Estimating the reliability of web projects is becoming increasingly important. As noted above, a large number of models are available for estimating reliability, but most of them are complex, expensive, and require considerable effort to estimate their parameters. In such a situation, there is a need for a model that suits the user's needs when estimating the reliability of a given web project. Our objective is to derive an approach that produces analytical pictorial reports for actual vs. planned effort variances, for resource, time, and cost variances, and that predicts web project reliability through the Moranda model.
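As a rough sketch of the kind of report this approach targets (the phase names and person-day effort figures below are hypothetical, not the paper's data), the following Python fragment computes per-phase effort variances from baseline, revised, and actual effort, using the variance formula given in the Research Methodology section:

```python
# Hypothetical per-phase effort figures in person-days (illustrative only):
phases = {
    "Requirements": {"baseline": 38.0, "revised": 40.0, "actual": 46.0},
    "Design":       {"baseline": 55.0, "revised": 60.0, "actual": 57.0},
    "Coding":       {"baseline": 85.0, "revised": 90.0, "actual": 104.0},
    "Testing":      {"baseline": 45.0, "revised": 50.0, "actual": 55.0},
}

for name, e in phases.items():
    # Effort variance as a percentage of the revised estimate:
    variance = (e["actual"] - e["revised"]) / e["revised"] * 100.0
    print(f"{name:12s} baseline={e['baseline']:6.1f}  revised={e['revised']:6.1f}  "
          f"actual={e['actual']:6.1f}  variance={variance:+6.1f}%")
```

Plotting the three series per phase, as described above, turns this tabular output into the pictorial report the system produces.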
This system is very easy to use, produces fairly accurate results, and is useful for monitoring the reliability of many web projects.

Reliability Models
Web project reliability models are used to predict web project reliability, which cannot otherwise be estimated until the development of the software is complete. Since a software reliability model specifies the general form of the dependence of the failure process on the principal factors that affect it, namely fault introduction, fault removal, and the environment [2], we have analyzed four well-known reliability models: the Jelinski-Moranda (JM) model, the Littlewood-Verrall (LV) model, the Goel-Okumoto NHPP (GO) model, and the Musa-Okumoto logarithmic execution time (MO) model. Our research focuses on the Jelinski-Moranda model.

Jelinski-Moranda (JM) Model
The model proposed by Jelinski and Moranda [10] is one of the earliest and simplest software reliability models. The JM model assumes that the times between failures are independent random variables T_1, T_2, ... following exponential distributions, that there is a finite number of faults at the beginning of the test phase, and that the failure rate is constant between successive failures and proportional to the current error content of the program under test. It also assumes that a detected fault is fixed immediately and completely. From these assumptions, the failure rate during the i-th failure interval is

λ_i = φ(N − i + 1)

where N is the total number of faults in the software at the beginning of the test, i − 1 is the number of faults detected and removed so far, and φ is the reduction in failure intensity per corrected fault. The reliability function is R_i(t) = e^(−λ_i t) and the current mean time to failure is MTTF = 1/λ_i. The advantage of this model is that it is very simple to use, and it is fairly accurate for some data sets.

Littlewood-Verrall (LV) Model
The Littlewood-Verrall model [12] also assumes an exponential distribution for the random variable T_i representing the failure interval time, but the failure intensity is regarded as a stochastically decreasing function with a gamma distribution, implying that the fault-fixing process is not perfect and that faults are of different sizes. A function Ψ(i), which is under the control of the user, determines the nature of the reliability growth; in this model it is taken as Ψ(i, β) = β_1 + β_2 i. The current reliability estimate is

R_i(t) = [Ψ(i, β)/(t + Ψ(i, β))]^α

and the mean time to failure, which does not exist for α ≤ 1, is MTTF = Ψ(i)/(α − 1) for α > 1. Predictions are made by maximum likelihood estimation of the parameters α, β_1, and β_2 and use of the plug-in rule. The problem with this model lies in the complexity involved in determining the parameters, and for some estimated parameters the MTTF may not be finite.

Goel-Okumoto NHPP (GO) Model
The Goel-Okumoto model [13] treats the software failure process as a non-homogeneous Poisson process (NHPP) with mean value function µ(t), and it treats the initial error content as a random variable; the time between the (k−1)-th and k-th failures depends on the time to the (k−1)-th failure. For the NHPP we have

Pr{n(t) = y} = ([µ(t)]^y / y!) e^(−µ(t)), y = 0, 1, 2, ...

with µ(t) = a(1 − e^(−bt)), where a is the expected total number of failures and b is the fault detection rate. Hence the failure rate can be expressed as λ(t) = ab e^(−bt), or, as a function of the mean value, λ(µ) = b(a − µ). From these quantities the cumulative number of errors n(t) can easily be calculated. The parameters a and b are estimated by the maximum likelihood method.
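Since the JM model is the one this research builds on, a minimal Python sketch of its quantities may help. It assumes the parameters N and φ have already been estimated (in practice by maximum likelihood from observed inter-failure times; see the sketch later in the paper); the example values are hypothetical, not the paper's data:

```python
import math

def jm_failure_rate(N: int, phi: float, i: int) -> float:
    """JM failure rate during the i-th failure interval: lambda_i = phi * (N - i + 1)."""
    return phi * (N - i + 1)

def jm_reliability(N: int, phi: float, i: int, t: float) -> float:
    """Probability of surviving a further time t in the i-th interval: exp(-lambda_i * t)."""
    return math.exp(-jm_failure_rate(N, phi, i) * t)

def jm_mttf(N: int, phi: float, i: int) -> float:
    """Current mean time to failure: 1 / lambda_i."""
    return 1.0 / jm_failure_rate(N, phi, i)

# Hypothetical parameter values for illustration:
N, phi = 30, 0.02
for i in (1, 15, 29):
    print(f"i={i:2d}  R(t=10)={jm_reliability(N, phi, i, 10.0):.3f}  "
          f"MTTF={jm_mttf(N, phi, i):.1f}")
```

As faults are removed (i grows), the failure rate falls, so both the reliability over a fixed mission time and the MTTF increase, which is the reliability growth the model captures.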
Musa-Okumoto Logarithmic Execution Time (MO) Model
The model proposed by Musa and Okumoto [14] views the failure process as an NHPP, like the GO model, but unlike the GO model it assumes that the reduction in failure rate is greater for the earlier fixes. The MO model takes the failure rate to be an exponential function of the expected number of failures:

λ(t) = λ_0 e^(−θµ(t))

where λ_0 and θ are the initial failure rate and the reduction in the normalized failure intensity per failure, respectively. Input to the model is of the form t_1, t_2, ..., where each t_i represents an execution time. Musa has established the superiority of execution time over calendar time for software reliability models; however, this model works for calendar time as well. The conditional reliability is

R(t | t_{i−1}) = [(λ_0 θ t_{i−1} + 1)/(λ_0 θ (t + t_{i−1}) + 1)]^(1/θ)

The parameters λ_0 and θ are estimated by the maximum likelihood method, and by substituting the estimated values the reliability and other quantities are determined. Execution time is related to calendar time through suitable assumptions and further computation.

Research Methodology
This research uses the Moranda model to estimate and predict reliability. For this we have used baseline data, i.e., the metrics parameters: defect density, planned effort, planned time, and resources. We also collected defect statistical data through testing. A comparative study of the metrics parameters and the defect statistical data has been carried out to measure reliability and calculate testing effort.

Finding the Metrics
Metrics analysis relates several kinds of data; consolidating the results in charts and pictures simplifies the analysis and facilitates the use of metrics for decision making.

Baseline defect density: Defect density is the number of defects per unit size of code. We determine the defect density using metrics and measurements in our environment. It is computed as defect density = (number of defects)/(size of code in KLOC), i.e., defects per thousand lines of code.
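For example, here is a minimal sketch of this computation together with the baseline comparison described in the Introduction; the baseline density and the project figures are hypothetical stand-ins for real organizational data:

```python
def defect_density(num_defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return num_defects / (lines_of_code / 1000.0)

# Assumed baseline defect density in defects/KLOC; a real value would come
# from the organization's baseline data:
BASELINE_DENSITY = 2.0

actual = defect_density(num_defects=18, lines_of_code=12000)  # illustrative figures
verdict = "within baseline" if actual <= BASELINE_DENSITY else "exceeds baseline"
print(f"actual density = {actual:.2f} defects/KLOC ({verdict})")
```

A project passes the reliability estimation criterion only when its actual defect density does not exceed the baseline value, which is exactly the comparison the system automates.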
Effort variances: When the baseline effort estimates, revised effort estimates, and actual effort are plotted together for all phases of the SDLC, they provide many insights into the estimation process. Effort variance is calculated as:

Effort variance % = [(actual effort − revised estimate)/revised estimate] × 100

Schedule variances: Schedule variances are calculated at the end of every milestone to find out how well the project is doing with respect to its schedule. To get a realistic picture of the schedule in the middle of project execution, it is important to calculate the variance and plot it together with the actual schedule spent.

Defect Statistical Data
This statistical data is obtained from the testing process. A comparative study of the metrics parameters and the defect statistical data is carried out to measure reliability and calculate testing effort. For reliability prediction we use the Moranda model [10], whose equation is

λ_i = φ(N − i + 1)

where N is the total number of faults in the software at the beginning of the test, i − 1 is the number of faults detected and removed so far, and φ is the reduction in failure intensity per corrected fault. The reliability function is R_i(t) = e^(−λ_i t) and the current mean time to failure is MTTF = 1/λ_i. The advantage of this model is that it is very simple to use, and it is fairly accurate for some data sets. (A parameter-estimation sketch for this model is given at the end of this section.)

Implementation of the System
To implement the web-based application we used the MVC architecture; in our system, the control elements are implemented using servlets and JSP. For measuring and predicting software reliability, graphical results are obtained that show the comparative study of actual vs. planned effort variances. The system has been tested and fully implemented, and snapshots of the running system are included here.

Fig 1. Testing effort estimation tool
Fig 2. Baseline statistical data list
Fig 3. Import statistical data list
Fig 4. Actual statistical data list

Positive factors of the proposed system: it is very simple to use; it produces fairly accurate results; it plays a key role in the planning and controlling of software development projects; and it is useful for monitoring the reliability of many types of software. The model still needs extensive comparison with other existing models and with reliability statistics from past projects, which will help us enhance it.
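As promised above, here is a sketch of how the defect statistical data gathered during testing can feed the Moranda prediction. It estimates N and φ by maximum likelihood using a simple profile-likelihood grid search over N (for a fixed N, the maximizing φ has the closed form n/Σ(N−i+1)x_i); the search strategy and the inter-failure times are our own illustrative choices, not the paper's:

```python
import math

def jm_mle(x, max_extra=200):
    """Profile-likelihood estimates of (N, phi) for the Jelinski-Moranda model,
    given observed inter-failure times x[0..n-1]; grid search over N >= n."""
    n = len(x)
    best = None
    for N in range(n, n + max_extra + 1):
        s = sum((N - i + 1) * xi for i, xi in enumerate(x, start=1))
        phi = n / s                                    # phi-hat for this N
        loglik = (n * math.log(phi)
                  + sum(math.log(N - i + 1) for i in range(1, n + 1))
                  - n)                                 # since phi * s == n
        if best is None or loglik > best[0]:
            best = (loglik, N, phi)
    return best[1], best[2]

# Hypothetical observed inter-failure times (hours), not the paper's data:
x = [2.1, 1.8, 3.5, 2.9, 5.2, 4.1, 7.8, 6.5, 9.9, 12.3]
N_hat, phi_hat = jm_mle(x)
remaining = N_hat - len(x)                             # estimated residual faults
print(f"N-hat={N_hat}, phi-hat={phi_hat:.4f}, residual faults={remaining}")
if remaining > 0:
    print(f"predicted MTTF for the next interval: {1.0 / (phi_hat * remaining):.1f} h")
else:
    print("model estimates no residual faults")
```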
Result of the System
In this research we have analyzed and designed a system based on Moranda model reliability prediction. We have also used statistical data such as baseline data, actual data, and the baseline defect density. The final outcome of the system is a set of analytical pictorial reports, which include a comparison between actual and planned effort variances and between resource, time, and cost variances. The Moranda model is a standard model for predicting reliability and produces fairly accurate results.

Conclusion
Through our extended model we can determine whether a web project is reliable and, at the same time, predict its reliability, unlike traditional models that only measure the reliability of a web project. The model is computationally simple and produces fairly accurate results, and it allows resources to be planned effectively and efficiently. It is also a very cost-effective system that can be implemented easily by various organizations. This research extends naturally to any similar situation where reliability prediction for a web project is needed, and there is scope to extend the model further: the more we learn about past mistakes, the better our chances of avoiding them in the future and building better web projects. In the future, more diversity can be added to this model to help organizations maximize their quality efforts.

Acknowledgment
We would like to express our gratitude to all those who made it possible for us to complete this paper. We thank the Department of Engineering of JJT University for permission to undertake this paper, to carry out the necessary research work, and to use departmental data. We are deeply indebted to our supervisor, Prof. Dr. Akhilesh R. Upadhyay of JJT University, whose help, stimulating suggestions, and encouragement supported us throughout the research for our Ph.D. and the writing of this paper.

Fig 5. Baseline data graph representation
Fig 6. Actual data graph representation and reliability prediction through the Moranda model

References
[1] Ahmad, N., Khan, M. G. M., Quadri, S. M. K., and Kumar, M., "Modeling and Analysis of Software Reliability with Burr Type X Testing-Effort and Release-Time Determination," Journal of Modeling in Management, Vol. 4, No. 1, pp. 28-54, 2009.
[2] Huang, C. Y., "Performance Analysis of Software Reliability Growth Models with Testing-Effort and Change-Point," Journal of Systems and Software, Vol. 76, pp. 181-194, 2005.
[3] Huang, C. Y., "Cost-Reliability-Optimal-Release Policy for Software Reliability Models Incorporating Improvements in Testing Efficiency," Journal of Systems and Software, Vol. 77, No. 2, pp. 139-155, 2005.
[4] Stringfellow, C., and Andrews, A., "Integrating Defect Estimation Methods to Make Release Decisions," Proc. IASTED Software Engineering Applications, Marina Del Rey, CA, November 2003, pp. 447-452.
[5] Musa, J. D., "Introduction to Software Reliability Engineering and Testing," Proc. 8th International Symposium on Software Reliability Engineering: Case Studies, Albuquerque, NM, November 1997, pp. 3-12.
[6] Tohma, Y., Tokunaga, K., Nagase, S., and Murata, Y., "Structural Approach to the Estimation of the Number of Residual Software Faults Based on the Hypergeometric Distribution," IEEE Trans. Software Engineering, Vol. 15, No. 3, pp. 345-355, Mar. 1989.
[7] McMinn, P., "Search-Based Software Test Data Generation: A Survey," Software Testing, Verification and Reliability, Vol. 14, pp. 105-156, 2004.
[8] Berndt, D. J., and Watkins, A., "High Volume Software Testing Using Genetic Algorithms," Proc. 38th IEEE Hawaii International Conference on System Sciences, Waikoloa, Hawaii, January 2005.
[9] Nagappan, N., Williams, L., Osborne, J., Vouk, M., and Abrahamsson, P., "Providing Test Quality Feedback Using Static Source Code and Automatic Test Suite Metrics," Proc. 16th IEEE International Symposium on Software Reliability Engineering, Chicago, 2005, pp. 85-94.
[10] Jelinski, Z., and Moranda, P. B., "Software Reliability Research," in Statistical Computer Performance Evaluation, W. Freiberger (Ed.), Academic Press, New York, 1972.
[11] Lyu, M., and Nikora, A., "Applying Reliability Models More Effectively," IEEE Software, Vol. 9, No. 4, July 1992.
[12] Littlewood, B., and Verrall, J. L., "A Bayesian Reliability Growth Model for Computer Software," J. Royal Statist. Soc., Series C (Applied Statistics), Vol. 22, pp. 332-346, 1973.
[13] Goel, A. L., and Okumoto, K., "Time-Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures," IEEE Trans. Reliability, Vol. R-28, No. 3, pp. 206-211, 1979.
[14] Musa, J. D., and Okumoto, K., "A Logarithmic Execution Time Model for Software Reliability Measurement," Proc. 7th International Conference on Software Engineering, Orlando, Florida, March 26-29, 1984, pp. 230-238.