Test Plan 1.0
For the Project: Credit Assessment System (CAS), Version 1.0
Submitted by Karl Remarais
CIS 895 MSE Project
Department of Computing and Information Sciences, Kansas State University
Table of Contents
1. Test Plan Identifier
2. Introduction
3. Test Items
4. Features to Be Tested
5. Features Not to Be Tested
6. Approach
6.1 Functional Testing
6.2 Unit Testing
6.3 Security Testing
6.4 Performance Testing
7. Item Pass/Fail Criteria
8. Suspension Criteria and Resumption Requirements
8.1 Suspension Criteria
8.2 Resumption Requirements
9. Test Deliverables
10. Environmental Needs
1. Test Plan Identifier

CAS-V1.0

2. Introduction

The purpose of this document is to outline the plan for testing the critical use cases and the performance of the Credit Assessment System (CAS) project. The document describes the activities related to the testing task and the environment and tools that will be used to test the software. The following documents will be used for reference:
Vision Document 2 (sections 3.2 and 6, describing the Software Requirements Specification).
Software Quality Assurance Plan (SQAP).

3. Test Items

The following items or aspects of the CAS software will be tested:
The functional requirements as described by the use cases (see section 3.2 of Vision Document 2) and the scenario descriptions (see Architecture Design 1.0).
Unit testing of the EJB components.
The performance of the system.
The security of the system with regard to authentication, authorization, and confidentiality.

4. Features to Be Tested

The features listed below will be tested:

Use Case 1: A data furnisher transfers credit records to the CAS.
SR1.2 [Critical] The CAS must accept only signed posts from the data furnishers.
SR1.9 [Critical] The CAS must perform consistency checks on data provided by data furnishers before permanently storing the data.
SR1.12 [Critical] If information in a file is inconsistent, the CAS must send a clear and relevant message to the data furnisher letting him know why part of or the entire post has been rejected.
SR1.2.1 [Critical] A data furnisher can only update loans that have been granted by the creditor he is working for.
SR1.2.2 [Critical] Modifications performed by a data furnisher on customer, loan, or loan application information should be logged.

Use Case 2: A credit officer requests the credit report of a customer.
SR2.1 [Critical] The credit officer must have the authorization of the customer to view his credit report.

Use Case 3: A customer requests his credit report.

Use Case 4: A customer gives his consent (authorization) for a creditor to request his credit report.
SR4.1 The authorization has a start and end date.

Use Case 5: A customer registers in the system.
SR3.1 [Critical] A customer has access to only his own credit report.

Use Case 6: An administrator registers a credit officer.
SR4.2 [Critical] A credit officer must be associated with one and only one creditor.
SR1.10 [Critical] Only an administrator registers credit officers.

Use Case 7: An administrator registers a data furnisher.
SR1.10 [Critical] Only an administrator registers data furnishers.
SR4.2 [Critical] A data furnisher must be associated with one and only one creditor.

5. Features Not to Be Tested

Database testing will not be performed.

6. Approach
The approach selected to conduct the tests varies with the type of testing.

6.1 Functional Testing

For functional testing we will test the scenarios that are described in the sequence diagrams of the architecture document (see section 2). For each scenario we will test the typical course of action as well as the alternate courses. The scenarios will be evaluated by the visual result of the action performed and the input values. The acceptable outcome for each scenario must conform to the sequence diagrams (architecture document, section 2).

Tool: to help us in this task we plan to use a web debugging proxy (Fiddler: http://www.fiddler2.com) to inspect and modify incoming and outgoing traffic between the browser and the web application.

6.2 Unit Testing

We will use the JUnit 4.5 API and the GlassFish Embedded Container to individually test the EJB session and persistence components. For each EJB class we will write a corresponding test class containing the tests that verify each method implemented in the EJB. For instance, for the EJB DataFurnisher, the test methods of the class DataFurnisherTest will be used to verify the method transferpost. Similarly, the class CreditOfficerTest contains the test methods for the EJB CreditOfficer. The package test includes the test classes CreditOfficerTest, DataFurnisherTest, CustomerTest, and PostTest. The test methods will be developed in conjunction with the implementation of the components tested during the implementation phase.

6.3 Security Testing

For security testing we will cover three security aspects: confidentiality, authentication, and authorization. Our approach is to use a spider tool to crawl the application and automatically collect all accessible URLs. We then determine from the catalogue of URLs whether each URL pattern defined in the deployment descriptor is accessible by only the right group of users. The following groups are defined: DataFurnisher, CreditOfficer, Customer, and Admin.
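The check this produces can be sketched as a small helper that maps each crawled URL to the role its pattern requires. The patterns and role names below follow the deployment descriptor table; the class and method names are assumptions for illustration only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper mirroring the url-pattern -> role table from the
// deployment descriptor; used to decide which role a crawled URL requires.
public class UrlRoleMapper {
    private static final Map<String, String> PATTERN_TO_ROLE = new LinkedHashMap<>();
    static {
        PATTERN_TO_ROLE.put("/admin/", "administrator");
        PATTERN_TO_ROLE.put("/datafurnisher/", "datafurnisher");
        PATTERN_TO_ROLE.put("/creditofficer/", "creditofficer");
        PATTERN_TO_ROLE.put("/customer/", "customer");
    }

    // Returns the role required for a URL path, or null for unrestricted paths.
    public static String requiredRole(String path) {
        for (Map.Entry<String, String> e : PATTERN_TO_ROLE.entrySet()) {
            if (path.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }
}
```

Each URL collected by the spider would then be checked against `requiredRole`, and the test fails if the application served a page to a user outside that role.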
The URL patterns and the roles allowed to access them are:

URL pattern          Role
/admin/*             administrator
/datafurnisher/*     datafurnisher
/creditofficer/*     creditofficer
/customer/*          customer

Tool: to help us in this task we plan to use a web security testing tool (Burp Suite: http://portswigger.net/burp/).

6.4 Performance Testing

For performance testing we will create a load test for each use case. We will consider only the typical course of action, not the alternate courses. We will run each load test with an average of 50 simulated concurrent users. The output of the tool (the number of threads, the throughput, and the average response time) will be added to the test log document.

Tool: to help us in this task we plan to use a performance testing tool (JMeter: http://jakarta.apache.org/jmeter/).

7. Item Pass/Fail Criteria

For functional testing, a test scenario will be considered successful if its result is the same as described in Vision Document 2.0 and Architecture Document 1.0; the test suite will fail otherwise.

As stated before, the unit tests will be developed in conjunction with the methods of each class. They will be executed automatically after each change at the component level. The criteria for success or failure are specific to each test method.

For security testing, a test scenario will be considered successful if the URL pattern under test is accessible only by a user with the proper authorization. Second, on the server side, the user role should match the annotation of the component (servlet or EJB) that provides the services realizing the specific scenario.

For performance testing we did not define rigorous pass/fail criteria. However, we will accept as a success an average response time of 200 ms for 50 threads (simulated users).
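A load measurement of this shape can be sketched in plain Java. The class name, the stand-in request (a `Runnable` in place of a real HTTP call), and the thread count are assumptions for illustration; the 200 ms / 50 users figures come from the criterion above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal load-test sketch: run N simulated users concurrently and
// report the average per-request response time in milliseconds.
public class LoadTestSketch {
    public static double averageResponseMillis(int users, Runnable request) {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        try {
            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < users; i++) {
                results.add(pool.submit((Callable<Long>) () -> {
                    long start = System.nanoTime();
                    request.run();                 // stand-in for one HTTP request
                    return System.nanoTime() - start;
                }));
            }
            long totalNanos = 0;
            for (Future<Long> f : results) {
                totalNanos += f.get();             // wait for every simulated user
            }
            return totalNanos / 1_000_000.0 / users;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

With a real scenario plugged in as `request`, the pass criterion above would read `averageResponseMillis(50, request) <= 200`.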
We will not hold this criterion for use case 2.1 (the transfer of credit records to the CAS). However, if a performance test for a scenario introduces an unacceptable delay, further investigation will be conducted to determine the cause.

8. Suspension Criteria and Resumption Requirements

8.1 Suspension Criteria

If a functional scenario fails, all further testing will be suspended. The failure will be logged in the Testing Log.

8.2 Resumption Requirements

Testing of the CAS application will resume once the failure has been logged and a solution has been found for the failed functional test scenario. Testing will continue from the beginning of the testing procedure. This is done to ensure that any changes made after a failure have not negatively affected previous scenarios (typical or alternate courses).

9. Test Deliverables

The following artifacts will be produced after the tests are conducted on the CAS software:
Test Plan.
Test Log: the Test Log will be maintained during testing and provided at the conclusion of testing. All tests will be recorded as either passed or failed. Each failed test will contain a description, date, and time of the failure, along with a recommended solution.

10. Environmental Needs

Functional testing and security testing will be conducted in the development environment. To be more conclusive, performance testing will be conducted in an environment similar to the deployment environment (see Architecture document, section 6).