National Information Systems And Network Security Standards & Guidelines
National Information Systems And Network Security Standards & Guidelines
Version 3.0
Published by National Information Technology Development Agency (NITDA)
January 2013
Table of Contents

Section One
- Preamble
- Authority
- Scope
- Application

Section Two
Part 1: Standards for the Categorization of Information for Security Management
- Purpose
- Information Security Categorization Standards
  - Security Objectives
  - Data Categorization Tasks
  - Data Security Measures
Part 2: Guidelines for the Categorization of Information for Security Management
- Security Categorization Guidelines
- Classification of Potential Impact of Security Breach on Organizations and Individuals
- Security Categorization Applied to Information Types
- Security Categorization Applied to Information Systems

Section Three
Part 1: Minimum Security Requirements for National Information and Information Systems
- Purpose
- Information System Impact Levels
- Minimum Security Requirements
- Security Control Selection
- Actionable Tasks and Policies for the MDAs
  - Server Security
  - General Server Configuration Guidelines
  - Monitoring
  - The Acceptable System Use Policy
  - The Password Policy
Part 2: Guidelines for Minimum Security Requirements for National Information and Information Systems
- Specifications for Minimum Security Requirements (Metrics of Security)
- General Password Construction Guidelines
  - Password Protection Standards

Section Four
Part 1: Standards for Intrusion Detection and Prevention Systems (IDPS)
- Purpose
Part 2: Guidelines for Intrusion Detection and Prevention
- Types of Intrusion Detection and Prevention System (IDPS)
- General Incident Reporting Guideline/Policy

Section Five
Part 1: Standard for Protecting the Confidentiality of Object Identifiable Information (OII)
- Purpose
- Introduction and Identification of OII
- The Potential Impact of Inappropriate Access to OII
- Methods for Protecting the Confidentiality of OII and Factors for Determining OII Confidentiality Impact Levels
  - Overview
  - Distinguishability
  - Aggregation and Data Field Sensitivity
  - Obligation to Protect Confidentiality
  - Access to and Location of the OII
  - General Protection Measures
Part 2: Guidelines for Protecting the Confidentiality of Object Identifiable Information (OII)
- Introduction and Identification of OII
  - Examples of OII Data
  - OII and Fair Information Practices
- The Potential Impact of Inappropriate Access to OII
  - Impact Level Definitions
- Methods for Protecting the Confidentiality of OII and Factors for Determining OII Confidentiality Impact Levels
  - Overview
  - Distinguishability
  - Aggregation and Data Field Sensitivity
  - Context of Use
  - Obligation to Protect Confidentiality
  - Access to and Location of the OII
- OII Confidentiality Impact Level Examples
- Education, Training, and Awareness
- De-Identifying Information
  - Anonymous Information
- Security Controls
- Recommendations for Developing an Incident Response Plan for Breaches Involving OII
  - Preparation
  - Detection and Analysis
  - Containment, Eradication, and Recovery
  - Post-Incident Activity

Section Six
Part 1: Standards on Securing Public Web Server
- Purpose
- Web Server Policy
- Web Server Risk
- General Configuration Standard
Part 2: Guidelines on Securing Public Web Server
- Guidelines
- Deployment of Public Web Server
- Web Application Implementation Guidelines
- Application Service Provider Guidelines
- The Internet DMZ Equipment Guidelines
- General Security Concept

Section Seven
Part 1: Standards on Firewalls and Firewall Policy
- Purpose
- The Placement of the Firewalls within the Network
- Architecture with Multiple Layers of Firewalls
- Policies Based on IP Addresses and Protocols
  - IP Addresses and Other IP Characteristics
  - TCP and UDP
  - IPsec Protocols
- Policies Based on Applications
- Virtual Private Network (VPN) Policy
- Malicious Application and Virus Policy and Guidelines
Part 2: Guidelines on Firewalls and Firewall Policy
- General Guidelines and Introduction on Firewalls and Firewall Policy

Section Eight
Part 1: Cyber Forensic Standards
- Purpose
- Overall Action Plan for Implementation of Cyber Forensic
Part 2: Cyber Forensic Guidelines
- General Guidelines and Overview of Cyber Forensic
- The Tool Capabilities and Features
- Handling of Retained Data
- The Data Handover Interface
- The Security Framework
- Data Exchange Techniques
- Backward and Update Compatibility
- Guidelines and Policy for Acceptable Encryption
- Definition of Terms
Section One

1.1 Preamble

The National Information Technology Development Agency (NITDA) is mandated by the NITDA Act of 2007 to develop Information Technology in Nigeria through regulatory policies, guidelines, standards, and incentives. Part of that mandate is to ensure the safety of the Nigerian cyberspace and the successful implementation of an electronic government program.

Many establishments have migrated their businesses to the online environment. Information networks in both the private and public sectors now drive service delivery in the country. These networks have thus become critical information infrastructure which must be safeguarded.

This document provides government-wide Standards and Guidelines on National Information Systems and Network Security. It contains eight sections; sections two to eight are each in two parts. Part 1 contains the Standards while Part 2 contains the Guidelines. Several international standards documents were reviewed during the development of these Standards and Guidelines, including:

1. ISO/IEC 27001, 27002 and 27005
2. OFCOM Guidance on Network Security
3. EU Network Security Framework
4. Information Technology Security Guidelines (ITSG-38), Canada
5. NIST Guidelines

What has been put together in this document is what stakeholders consider suitable for the Nigerian environment.

1.2 Authority

The National Information Systems and Network Security Standards and Guidelines are issued by the National Information Technology Development Agency (NITDA) in accordance with the NITDA Act. They are specifically issued pursuant to sections 6 and 17 of the National Information Technology Development Agency Act 2007 and are subject to periodic review by NITDA. A breach of the guidelines shall be deemed to be a breach of the Act.
These standards are mandatory for Federal, State and Local Government Agencies and institutions, as well as private sector organizations which own, use or deploy critical information infrastructure of the Federal Republic of Nigeria. They serve as a reference for systems auditors, network administrators and security personnel, among others. Additional security guidelines may be developed and used at Agency discretion in accordance with these standards. MDAs are mandated to use the reporting documents in the appendix to report compliance to NITDA on a quarterly basis.

1.3 Scope

This document prescribes minimum standards in seven primary areas of network security and cyber forensics:
1. Categorization of information
2. Minimum security requirements
3. Intrusion detection and prevention
4. Protection of OII
5. Securing public web servers
6. System firewalls
7. Cyber forensics

1.4 Application

The standards contained herein shall apply to:

Public Sector organizations, including:
- Federal and State Ministries
- Federal and State Departments
- Federal and State Agencies
- Local Governments

Private Sector Organizations and Companies

Non-Governmental Organizations (NGOs)
Section Two

Part 1: Standards for the Categorization of Information for Security Management

2.1 Purpose

This section of the document:

A. Sets minimum standards for the categorization of all information collected, processed and stored using ICT systems, based on the objectives of providing required levels of information security according to risk levels, threat thresholds, and impact, in order to guarantee:
- Confidentiality
- Integrity
- Availability, and
- Survivability and continuity of business processes and information systems in Nigeria.

B. Provides guidelines on information security control areas within each category.

C. Prescribes minimum information security requirements for the management, operational, and technical controls for information in each category.

2.2 Information Security Categorization Standards

This document establishes security categories for both information and information systems. The security categories are based on the potential impact on an organization should certain events occur which jeopardize the information and information systems needed by the organization to accomplish its assigned mission, protect its assets, fulfill its legal responsibilities, maintain its day-to-day functions, and protect individuals.

2.2.1 Security Objectives

The five security objectives of information and information systems specified in these standards are:

1) Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. A loss of confidentiality is the unauthorized disclosure of information.
2) Integrity: Guarding against improper information modification or destruction, including ensuring information non-repudiation and authenticity. A loss of integrity is the unauthorized modification or destruction of information.

3) Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the disruption of access to or use of information or an information system.

4) Survivability: Ensuring that services continue and that business operations survive a security breach. Survivability is lost in the case of a complete disruption of operations and discontinuation of services.

5) Authenticity: Ensuring that the data (source), security level, user, time and location are authenticated.

2.2.2 Data Categorization Tasks

These standards and guidelines apply to all MDA data categories and to all user-developed data sets and systems that may access these data, regardless of the environment where the data reside (including cloud systems, servers, personal computers, mobile devices, etc.). The standards apply regardless of the media on which data reside (including electronic, microfiche, printouts, CD, etc.) or the form they may take (text, graphics, video, voice, etc.).

All MDAs are required to maintain data in a secure, accurate, and reliable manner and to ensure they are readily available for authorized use. Data security measures must be implemented commensurate with data sensitivity and risk.

I. Data should be classified into one of the following categories:
a. Restricted: data the disclosure of which to any unauthorized persons would be unlawful.
b. Public: data to which the general public may be granted access in accordance with the applicable laws.

II. Data in both categories require security measures commensurate with the degree to which the loss or corruption of the data would impair the business or service functions of the MDA, result in financial loss, or violate law, standards and guidelines.
Security measures for data must be set by the data custodian, working in cooperation with the data stewards, as defined below. The following roles and responsibilities must be established for enforcing data standards and guidelines:

a. Data Trustee: Data trustees are senior MDA officials (or their designees) who have planning responsibility for data within their functional areas and management responsibility for defined segments of institutional data. Responsibilities include assigning data stewards, participating in establishing policies, and promoting data resource management for the good of the entire MDA. (Director/CIO)
b. Data Steward: Data stewards must be MDA officials having direct operational-level responsibility for information management, usually Deputy Directors or Assistant Directors. Data stewards are responsible for data access and policy implementation issues.

c. Data Custodian: The Information Technology Services Department (ITS) is the data custodian. The custodian is responsible for providing a secure infrastructure in support of the data, including, but not limited to, providing physical security, backup and recovery processes, granting access privileges to system users as authorized by data trustees or their designees (usually the data stewards), and implementing and administering controls over the information. (Chiefs)

d. Data User: Data users are individuals who need and use MDA data as part of their assigned duties or in fulfillment of assigned roles or functions within the organization. Individuals who are given access to sensitive data occupy a position of special trust and as such are responsible for protecting the security and integrity of those data.

2.2.3 Data Security Measures

All MDAs are required to adopt measures for data security as dictated by the data-classification level. Required measures include the following:

I. Encryption requirements
II. Data protection and access control
III. Documented backup and recovery procedures
IV. Change control and process review
V. Data-retention requirements
VI. Data disposal
VII. Audit controls
VIII. Storage locations
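As an illustration only (the structure and names below are assumptions, not prescribed by this standard), the two data-classification levels and the required security measures above can be captured as a simple lookup that an MDA might tailor to its own risk assessment:

```python
# Illustrative sketch: mapping data-classification levels to required
# security measures. The exact measures per level are an assumption for
# illustration; each MDA sets them via its data custodian and stewards.

REQUIRED_MEASURES = {
    "Restricted": [
        "encryption",
        "data protection and access control",
        "documented backup and recovery procedures",
        "change control and process review",
        "data-retention requirements",
        "data disposal",
        "audit controls",
        "approved storage locations",
    ],
    # Public data still needs integrity and availability safeguards,
    # even though confidentiality controls may be relaxed.
    "Public": [
        "documented backup and recovery procedures",
        "change control and process review",
        "audit controls",
    ],
}

def measures_for(classification):
    """Return the required security measures for a classification level."""
    return REQUIRED_MEASURES[classification]

print(len(measures_for("Restricted")))  # -> 8
```

A data custodian could extend such a table with per-measure parameters (e.g., retention periods) without changing the classification scheme itself.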
Part 2: Guidelines for the Categorization of Information for Security Management

2.3 Security Categorization Guidelines

The following are the various levels and types of information categorization as envisioned in the Standards part for the categorization of information for security management:

i. Information shall be categorized according to its information type. An information type is a specific category of information (e.g., private, confidential, secret) as defined by an organization or, in some instances, by a specific law, Executive Order, directive, policy, or regulation.

ii. Information shall also be categorized according to value, owner, types of access, custodian, retention, user, etc.

iii. Information must also be classified according to the level of impact of adverse effects should the threats materialize (High, Moderate, Low, Not Applicable).

iv. System information (e.g., network routing tables, password files, and cryptographic key management information) must be protected at a level commensurate with the most critical or sensitive user information being processed, stored, or transmitted by the information system, to ensure confidentiality, integrity, availability and survivability.

v. The potential impact value of not applicable only applies to the security objective of confidentiality.

vi. System processing functions (i.e., programs in execution within an information system, such as system processes that facilitate the processing, storage, and transmission of information and are necessary for the organization to conduct its essential mission-related functions and operations) shall be subjected to security categorization.

vii. Storage locations shall be subjected to classification.

viii. In general, the security matrix has at least five dimensions:
a. Information Type
b. Sensitivity: Public, Secret, Confidential
c. Action: Creation, Modification, Keep, Transfer, Purge, Duplicate, Read, Process
d. User: the categories of users of the information system
e. Time: when the data is accessible (e.g., nights, holidays)
f. Location: where the information may be kept or acted upon
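The matrix dimensions above can be modeled as a simple record. The sketch below is purely illustrative (the class, field names, and user-level codes are assumptions, not part of the standard); it shows how per-user-category action rights, time and location constraints might sit together in one matrix entry:

```python
# Illustrative sketch of one row of the security matrix: information type,
# sensitivity, permitted actions per user category, time and location.
from dataclasses import dataclass, field

@dataclass
class MatrixEntry:
    info_type: str
    sensitivity: str                             # e.g. Public, Confidential, Secret
    actions: dict = field(default_factory=dict)  # user category -> set of allowed actions
    time_window: str = "working hours"
    location: str = "governmental premises"

    def may(self, user_level, action):
        """Check whether a user category may perform an action on this information."""
        return action in self.actions.get(user_level, set())

# Hypothetical entry echoing the worked example in the text:
# level H staff may create; level G staff may carry, keep and purge but not read.
entry = MatrixEntry(
    info_type="top secret record",
    sensitivity="Secret",
    actions={"H": {"create"}, "G": {"carry", "keep", "purge"}},
)
print(entry.may("G", "read"))    # -> False
print(entry.may("H", "create"))  # -> True
```

In practice such entries would be derived from the MDA's categorization exercise, not hard-coded.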
Note, for example: a chunk of information at level A is top secret. It can be created only by level H staff, carried by level G staff, never duplicated by anybody, and purged only by level G staff. Level G staff cannot read it. Level G staff can keep it for at most 5 hours during the working hours of working days. A web site at this level could only be accessed within governmental premises, etc.

2.4 Classification of Potential Impact of Security Breach on Organizations and Individuals

This publication defines three levels of potential impact on organizations or individuals should there be a breach of security (i.e., a loss of confidentiality, integrity, or availability). The application of these definitions must take place within the context of each organization and the overall national interest.

2.4.1 The potential impact shall be classified as LOW if:

The loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals. A limited adverse effect means, for instance, that the loss of confidentiality, integrity, or availability might:
(i) cause a degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is noticeably reduced;
(ii) result in minor damage to organizational assets;
(iii) result in minor financial loss; or
(iv) result in minor harm to individuals.

2.4.2 The potential impact shall be classified as MODERATE if:

The loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.
A serious adverse effect means, for instance, that the loss of confidentiality, integrity, or availability might:
(i) cause a significant degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is significantly reduced;
(ii) result in significant damage to organizational assets;
(iii) result in significant financial loss; or
(iv) result in significant harm to individuals that does not involve loss of life or serious life-threatening injuries.
2.4.3 The potential impact shall be classified as HIGH if:

The loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic effect on organizational operations, organizational assets, or individuals. A severe or catastrophic effect means, for instance, that the loss of confidentiality, integrity, or availability might:
(i) cause a severe degradation in or loss of mission capability to an extent and duration that the organization is not able to perform one or more of its primary functions;
(ii) result in major damage to organizational assets;
(iii) result in major financial loss; or
(iv) result in severe or catastrophic harm to individuals involving loss of life or serious life-threatening injuries.

2.5 Security Categorization Applied to Information Types

The security category of an information type may be associated with both user information and system information, and can be applicable to information in either electronic or non-electronic form. It must also be used as input in considering the required security category (SC) of an information system (see the description of security categories for information systems below). Establishing a required security category of an information type shall be based on the potential impact for each security objective associated with the particular information type.

The format for expressing the security category, SC, of an information type is:

SC information type = {(confidentiality, impact), (integrity, impact), (availability, impact)}

where the acceptable values for potential impact are LOW, MODERATE, HIGH, or NOT APPLICABLE.

EXAMPLE 2.1: Suppose an organization managing public information on its web server determines that there is no potential impact from a loss of confidentiality (i.e., confidentiality requirements are not applicable), a moderate potential impact from a loss of integrity, and a moderate potential impact from a loss of availability.
The resulting security category, SC, of this information type is expressed as:

SC public information = {(confidentiality, NOT APPLICABLE), (integrity, MODERATE), (availability, MODERATE)}.

EXAMPLE 2.2: Suppose a law enforcement organization managing extremely sensitive investigative information determines that the potential impact from a loss of confidentiality is high, the potential impact from a loss of integrity is moderate, and the
potential impact from a loss of availability is moderate. The resulting security category, SC, of this information type is expressed as:

SC investigative information = {(confidentiality, HIGH), (integrity, MODERATE), (availability, MODERATE)}.

EXAMPLE 2.3: If a financial organization managing routine administrative information (not privacy-related information) determines that the potential impact from a loss of confidentiality is low, the potential impact from a loss of integrity is low, and the potential impact from a loss of availability is low, the resulting security category, SC, of this information type shall be expressed as:

SC administrative information = {(confidentiality, LOW), (integrity, LOW), (availability, LOW)}.

2.6 Security Categorization Applied to Information Systems

In determining the security category of an information system, consideration must be given to the security categories of all information types resident on the information system. For an information system, the potential impact values assigned to the respective security objectives (confidentiality, integrity, availability) shall be the highest values (i.e., the high water mark) from among those security categories that have been determined for each type of information resident on the information system.

The format for expressing the security category, SC, of an information system is:

SC information system = {(confidentiality, impact), (integrity, impact), (availability, impact)}

where the acceptable values for potential impact are LOW, MODERATE, or HIGH. Under this section, the value of not applicable cannot be assigned to any security objective in the context of establishing a security category for an information system.
This is in recognition that there is a low minimum potential impact (i.e., a low water mark) on the loss of confidentiality, integrity, and availability for an information system, due to the fundamental requirement to protect the system-level processing functions and information critical to the operation of the information system.

EXAMPLE 2.4: An information system used for large acquisitions in a contracting organization contains both sensitive, pre-solicitation-phase contract information and routine administrative information. Suppose the management within the contracting organization determines that:

(i) for the sensitive contract information, the potential impact from a loss of confidentiality is moderate, the potential impact from a loss of integrity is moderate, and the potential impact from a loss of availability is low; and
(ii) for the routine administrative information (non-privacy-related information), the potential impact from a loss of confidentiality is low, the potential impact from a loss of integrity is low, and the potential impact from a loss of availability is low.

The resulting security categories, SC, of these information types shall be expressed as:

SC contract information = {(confidentiality, MODERATE), (integrity, MODERATE), (availability, LOW)}, and
SC administrative information = {(confidentiality, LOW), (integrity, LOW), (availability, LOW)}.

The resulting security category of the information system is expressed as:

SC acquisition system = {(confidentiality, MODERATE), (integrity, MODERATE), (availability, LOW)},

representing the high water mark, or maximum potential impact values, for each security objective from the information types resident on the acquisition system.

EXAMPLE 2.5: Suppose a power plant contains a SCADA (supervisory control and data acquisition) system controlling the distribution of electric power for a large military installation, the SCADA system contains both real-time sensor data and routine administrative information, and the management at the power plant determines that:

(i) for the sensor data being acquired by the SCADA system, there is no potential impact from a loss of confidentiality, a high potential impact from a loss of integrity, and a high potential impact from a loss of availability; and

(ii) for the administrative information being processed by the system, there is a low potential impact from a loss of confidentiality, a low potential impact from a loss of integrity, and a low potential impact from a loss of availability.

The resulting security categories, SC, of these information types shall be expressed as:

SC sensor data = {(confidentiality, NA), (integrity, HIGH), (availability, HIGH)}, and
SC administrative information = {(confidentiality, LOW), (integrity, LOW), (availability, LOW)}.
The resulting security category of the information system is initially expressed as:

SC SCADA system = {(confidentiality, LOW), (integrity, HIGH), (availability, HIGH)},

representing the high water mark, or maximum potential impact values, for each security objective from the information types resident on the SCADA system. The management at the power plant then chooses to increase the potential impact from a loss of confidentiality from low to moderate, reflecting a more realistic view of the potential
impact on the information system should there be a security breach due to the unauthorized disclosure of system-level information or processing functions. The final security category of the information system is expressed as:

SC SCADA system = {(confidentiality, MODERATE), (integrity, HIGH), (availability, HIGH)}.
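The high-water-mark rule in 2.6 is mechanical enough to sketch in code. The following is an illustrative sketch only (function and variable names are assumptions, not part of the standard); it combines per-information-type security categories into a system security category, flooring each objective at LOW since NOT APPLICABLE cannot be assigned at the system level:

```python
# Illustrative sketch: computing a system security category (SC) as the
# high water mark of the SCs of the information types it hosts.

# Impact values ordered from lowest to highest. NOT APPLICABLE is allowed
# for individual information types (confidentiality only), but the
# system-level value is floored at LOW per the standard.
IMPACT_ORDER = ["NOT APPLICABLE", "LOW", "MODERATE", "HIGH"]

def high_water_mark(*impacts):
    """Return the highest impact value among those given."""
    return max(impacts, key=IMPACT_ORDER.index)

def system_sc(*type_scs):
    """Combine per-information-type SCs (dicts of objective -> impact)
    into a system SC, taking the high water mark per objective and
    flooring the result at LOW."""
    objectives = ("confidentiality", "integrity", "availability")
    return {
        obj: high_water_mark("LOW", *(sc[obj] for sc in type_scs))
        for obj in objectives
    }

# EXAMPLE 2.5 from the text: SCADA sensor data plus administrative information.
sensor = {"confidentiality": "NOT APPLICABLE", "integrity": "HIGH", "availability": "HIGH"}
admin = {"confidentiality": "LOW", "integrity": "LOW", "availability": "LOW"}

print(system_sc(sensor, admin))
# -> {'confidentiality': 'LOW', 'integrity': 'HIGH', 'availability': 'HIGH'}
```

This reproduces the initial SC of the SCADA system; the subsequent management decision to raise confidentiality to MODERATE is a judgment call outside the mechanical rule.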
Section Three

Part 1: Minimum Security Requirements for National Information And Information Systems

3.1 Purpose

This section of the document:
a) Prescribes standards on information system impact levels;
b) Provides a list of minimum information security requirements for the management, operational, and technical controls for information in each category;
c) Prescribes actionable and tasked standards on security measures for all MDAs covering networks, servers, acceptable system use, password guidelines, physical location and security policy.

3.2 Information System Impact Levels

Organizations are required to categorize their information systems as low-impact, moderate-impact, or high-impact for the security objectives of confidentiality, integrity, and availability. The potential impact values assigned to the respective security objectives are the highest values (i.e., the high water mark) from among the security categories that have been determined for each type of information resident on those information systems.

The generalized format for expressing the security category (SC) of an information system shall be:

SC information system = {(confidentiality, impact), (integrity, impact), (availability, impact)},

where the acceptable values for potential impact are low, moderate, or high.

Explanatory Notes:
a) For the purpose of this document, an information system is a discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information. Information resources include information and related resources, such as personnel, equipment, funds, and information technology.
b) The high water mark concept is employed in these Standards owing to the significant dependencies among the security objectives of confidentiality, integrity, and availability. In most cases, a compromise in one security objective ultimately affects the other security objectives.
c) Since the potential impact values for confidentiality, integrity, and availability may not always be the same for a particular information system, the high water mark concept must be used to determine the overall impact level of the information system. Thus, a low-impact system is an information system in which all three of the security objectives are low. A moderate-impact system is an information system in which at least one of the security objectives is moderate and no security objective is greater than moderate. Finally, a high-impact system is an information system in which at least one security objective is high. The determination of
information system impact levels must be accomplished prior to the consideration of minimum security requirements and the selection of required security controls for those information systems.

3.3 Minimum Security Requirements

The minimum security requirements cover the following security-related areas with regard to protecting the confidentiality, integrity, availability and survivability of information systems and of the information processed, stored, and transmitted by those systems. The security-related areas are:

i. access control, identification and authentication;
ii. awareness and training;
iii. audit and accountability;
iv. certification, accreditation, and security assessments;
v. configuration management;
vi. contingency planning;
vii. incident response;
viii. maintenance;
ix. media protection;
x. physical and environmental protection;
xi. planning;
xii. personnel security;
xiii. risk assessment;
xiv. systems and services acquisition;
xv. system and communications protection; and
xvi. system and information integrity.

These areas represent a broad-based, balanced information security program that addresses the management, operational, and technical aspects of protecting national information and information systems. Policies and procedures play an important role in the effective implementation of enterprise-wide information security programs within government and private systems, and in the success of the resulting security measures employed to protect national information and information systems. Thus, organizations are required to develop and promulgate formal, documented policies and procedures governing the minimum security requirements set forth in this standard and must ensure their effective implementation.
3.4 Security Control Selection

Organizations are required to meet the minimum security requirements in this standard by selecting the required security controls and assurance requirements as described in this document. The process of selecting the required security controls and assurance requirements for organizational information systems to achieve adequate security is a multifaceted, risk-based activity involving management and operational personnel within the organization. Security categorization of information and information systems, as required by this publication, is the first step in the risk management process. Subsequent to the security categorization process, organizations must select the required set of security controls for their information systems that satisfies the minimum security requirements set forth in this standard.

The selected set of security controls must include one of the three tailored security control baselines in this document that are associated with the designated impact levels of the organizational information systems, as determined during the security categorization process:

- For low-impact information systems, organizations must, as a minimum, employ tailored security controls from the low baseline of security controls and must ensure that the minimum assurance requirements associated with the low baseline are satisfied.
- For moderate-impact information systems, organizations must, as a minimum, employ tailored security controls from the moderate baseline of security controls and must ensure that the minimum assurance requirements associated with the moderate baseline are satisfied.
- For high-impact information systems, organizations must, as a minimum, employ tailored security controls from the high baseline of security controls and must ensure that the minimum assurance requirements associated with the high baseline are satisfied.
Organizations must employ all security controls in the respective security control baselines. Security categorization must be accomplished as an enterprise-wide activity with the involvement of senior-level organizational officials including, but not limited to, chief information officers, senior organizational information security officers, authorizing officials (a.k.a. accreditation authorities), information system owners, and information owners. To ensure a cost-effective, risk-based approach to achieving adequate security across the organization, security control baseline tailoring activities must be coordinated with and approved by the required organizational officials (e.g., chief information officers, senior organizational information security officers, authorizing officials, or authorizing officials' designated representatives). The resulting set of security controls must be documented in the security plan for the information system.
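The two-step rule described in 3.2 and 3.4 (derive the overall impact level from the system's security category, then select the matching baseline) can be sketched as follows. This is an illustrative sketch under stated assumptions: the function names and the string form of the baseline are invented for illustration and are not prescribed by the standard.

```python
# Illustrative sketch: overall impact level via the high water mark rule
# (3.2, explanatory note c), then baseline selection (3.4).

LEVELS = ["low", "moderate", "high"]

def overall_impact(sc):
    """Overall impact level = highest impact among the three objectives."""
    return max(sc.values(), key=LEVELS.index)

def required_baseline(sc):
    """Name the tailored security control baseline required for a system."""
    return f"{overall_impact(sc)} baseline"

# EXAMPLE 2.4's acquisition system: moderate/moderate/low.
sc = {"confidentiality": "moderate", "integrity": "moderate", "availability": "low"}
print(required_baseline(sc))  # -> moderate baseline
```

The baseline choice is a floor, not a ceiling: tailoring activities may add controls, subject to the coordination and approval requirements above.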
3.5 Actionable Tasks and Policies for the MDAs

3.5.1 Server Security

This policy applies to server equipment owned and/or operated by the MDA and to servers registered under any MDA-owned internal network domain. It applies specifically to equipment on the internal MDA network.

All internal servers deployed at an MDA must be owned by an operational group that is responsible for system administration. Approved server configuration guides must be established and maintained by each operational group, based on business needs, and approved by management. Operational groups should monitor configuration compliance and implement an exception policy tailored to their environment. Each operational group must establish a process, including reviews, for changing the configuration guides.

Servers must be registered within the corporate enterprise management system. At a minimum, the following information is required to positively identify the point of contact:
- Server contact(s) and location, and a backup contact
- Hardware and operating system/version
- Main functions and applications, if applicable

Information in the corporate enterprise management system must be kept up to date. Configuration changes for production servers must follow the required change management procedures of the organization.

3.5.2 General Server Configuration Guidelines

- Operating system configuration should be in accordance with approved guidelines.
- Services and applications that will not be used must be disabled where practicable.
- Access to services should be logged and/or protected through access-control methods such as TCP Wrappers.
- The most recent security patches must be installed on the system as soon as practicable; the only exception is when immediate application would interfere with business requirements.
- Trust relationships between systems are a security risk, and their use should be avoided. Do not use a trust relationship when some other method of communication will do.
- Always apply the standard security principle of least required access to perform a function. Do not use root when a non-privileged account will do.
- If a methodology for secure channel connection is available (i.e., technically feasible), privileged access must be performed over secure channels (e.g., encrypted network connections using SSH or IPSec).
- Servers should be physically located in an access-controlled environment. Servers are specifically prohibited from operating from uncontrolled cubicle areas.
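The server registration record required above could be modeled as a simple structure. This is an illustrative sketch: the field names and the completeness check are assumptions, since the actual enterprise management system schema is defined by each MDA.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the minimum server registration record described
# above. Field names are assumptions, not a mandated schema.

@dataclass
class ServerRecord:
    primary_contact: str
    backup_contact: str
    location: str
    hardware: str
    os_version: str
    functions: list = field(default_factory=list)  # optional per the policy

    def is_complete(self) -> bool:
        """A record positively identifies the point of contact only when
        all mandatory fields are filled in."""
        return all([self.primary_contact, self.backup_contact,
                    self.location, self.hardware, self.os_version])
```

A registry audit could then flag any record where `is_complete()` is false before a server goes into production.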
3.5.3 Monitoring

All security-related events on critical or sensitive systems must be logged and audit trails saved as follows:
i. All security-related logs will be kept online for a minimum of 1 week.
ii. Daily incremental tape backups will be retained for at least 1 month.
iii. Weekly full backups of logs will be retained for at least 1 month.
iv. Monthly full backups will be retained for a minimum of 2 years.

Security-related events will be reported to the operational group, which will review logs and report incidents to IT management. Corrective measures will be prescribed as needed. Security-related events include, but are not limited to:
i. Port-scan attacks
ii. Evidence of unauthorized access to privileged accounts
iii. Anomalous occurrences that are not related to specific applications on the host

3.5.4 The Acceptable System Use Policy

The purpose of this standard is to outline the acceptable use of computer equipment at MDAs. These rules must be put in place to protect both the employee and the MDA. Inappropriate use exposes the MDA to risks including virus attacks, compromise of network systems and services, and legal issues. This policy applies to employees, contractors, consultants, temporary staff, and other workers at the MDA, including all personnel affiliated with third parties, and to all MDA equipment.

General Use and Ownership
i. While the MDA's network administration desires to provide a reasonable level of privacy, users must be aware that the data they create on the corporate systems remains the property of the MDA. Because of the need to protect the MDA's network, management cannot guarantee the confidentiality of information stored on any network device belonging to the MDA.
ii. Employees are responsible for exercising good judgment regarding the reasonableness of personal use. Individual departments must be responsible for creating guidelines concerning personal use of Internet/Intranet/Extranet systems.
In the absence of such guidelines, employees should be guided by departmental policies on personal use, and if there is any uncertainty, employees should consult their supervisor or manager.
iii. NITDA recommends that any information that users consider sensitive or vulnerable be encrypted.
iv. For security and network maintenance purposes, authorized individuals within MDAs may monitor equipment, systems and network traffic at any time.
v. The MDA reserves the right to audit networks and systems on a periodic basis to ensure compliance with this policy.

Security and Proprietary Information
a) The user interface for information contained on Internet/Intranet/Extranet-related systems should be classified as either confidential or not confidential, as defined by corporate confidentiality guidelines. Examples of confidential information include, but are not limited to: private organizational details, corporate strategies, competitor-sensitive details, trade secrets, specifications, customer lists, and research data. Employees should take all necessary steps to prevent unauthorized access to this information.
b) Passwords must be kept secure and not shared. Authorized users are responsible for the security of their passwords and accounts. System-level passwords should be changed quarterly; user-level passwords should be changed every six months.
c) All PCs, laptops and workstations must be secured with a password-protected screensaver with the automatic activation feature set at 10 minutes or less, or by logging off (Ctrl-Alt-Delete for Win2K users) when the host will be unattended.
d) Encryption of information must be used in compliance with the international-standard Acceptable Encryption Use policy.
e) Because information contained on portable computers is especially vulnerable, special care must be exercised. Protect laptops in accordance with the MDA's Laptop Security policy.
f) Postings by employees from an MDA email address to newsgroups must contain a disclaimer stating that the opinions expressed are strictly their own and not necessarily those of the MDA, unless the posting is in the course of business duties.
g) All hosts used by employees for official business that are connected to the MDA's Internet/Intranet/Extranet, whether owned by the employee or the MDA, shall continually execute approved virus-scanning software with a current virus database.
h) Employees must use extreme caution when opening email attachments received from unknown senders, which may contain viruses, email bombs, or Trojan horse code.
i) To deter spam and spoofing, MDAs must use any of the following three email authentication systems:
1. Sender Policy Framework (SPF)
2. Sender ID
3. DomainKeys Identified Mail (DKIM)

Unacceptable Use
The following activities are, in general, prohibited. Employees may be exempted from these restrictions during the course of their legitimate job responsibilities (e.g., systems administration staff may need to disable the network access of a host if that host is disrupting production services). Under no circumstances is an employee of an MDA authorized to engage in any activity that is illegal under local, state, national or international law while utilizing MDA-owned resources.

3.5.5 The Password Policy

Passwords are an important aspect of computer security. They are the front line of protection for user accounts. A poorly chosen password may result in the compromise of the MDA's entire corporate network. As such, all MDA employees (including contractors and vendors with access to MDA systems) are responsible for taking the required steps, as outlined below, to select and secure their passwords. The purpose of this policy is to establish a standard for the creation of strong passwords, the protection of those passwords, and the frequency of change. The scope of this policy includes all personnel who have or are responsible for an account (or any form of access that supports or requires a password) on any system that resides at any MDA facility, has access to the MDA network, or stores any non-public MDA information.
Operational Practice
a. All system-level passwords (e.g., root, enable, OS admin, application administration accounts, etc.) must be changed on at least a quarterly basis.
b. All production system-level passwords must be part of the enterprise-administered global password management database.
c. All user-level passwords (e.g., email, web, desktop computer, etc.) must be changed at least every six months. The recommended change interval is every four months.
d. User accounts that have system-level privileges granted through group memberships or programs must have a password that is unique from all other accounts held by that user.
e. Passwords must not be inserted into email messages or other forms of electronic communication.
f. Where SNMP is used, the community strings must be defined as something other than the standard defaults of "public," "private" and "system" and must be different from the passwords used to log in interactively.
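Rule (f) above lends itself to an automated check. The following is a minimal sketch; the function name and inputs are illustrative assumptions, not part of this standard.

```python
# Minimal sketch of the SNMP community-string rule: the string must not be a
# well-known default and must differ from any interactive login password.

FORBIDDEN_COMMUNITIES = {"public", "private", "system"}

def snmp_community_ok(community: str, login_passwords: set) -> bool:
    """Return True only if the community string is neither a standard
    default nor reused as an interactive login password."""
    if community.lower() in FORBIDDEN_COMMUNITIES:
        return False
    return community not in login_passwords
```

A configuration audit script could run this check against every device's SNMP settings before the device is accepted onto the production network.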
Part 2: Guidelines for Minimum Security Requirements for National Information and Information Systems

This section of the document:
a. Provides guidelines on information system impact levels;
b. Prescribes a list of minimum information security requirements for the management, operational, and technical controls for information in each category;
c. Provides actionable, tasked policies on security measures for all MDAs covering network, server, acceptable system use, password guidelines, and physical location and security.

3.6 Specifications for Minimum Security Requirements (Metrics of Security)

Access Control (AC): Organizations must limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems), and to the types of transactions and functions that authorized users are permitted to exercise. Access control implementations must include administrative safeguards (i.e., Personnel Security (PS): due care in delegating authority), technical safeguards (i.e., Identification and Authentication (IA)), and physical safeguards (i.e., Physical and Environmental Protection (PE)):

1. Personnel Security (PS): Organizations must: (i) ensure that individuals occupying positions of responsibility within organizations (including third-party service providers) are trustworthy and meet established security criteria for those positions; (ii) ensure that organizational information and information systems are protected during and after personnel actions such as terminations and transfers; and (iii) employ formal sanctions for personnel failing to comply with organizational security policies and procedures.

2. Identification and Authentication (IA): Organizations must identify information system users, processes acting on behalf of users, or devices, and authenticate (or verify) the identities of those users, processes, or devices as a prerequisite to allowing access to organizational information systems.

3. Physical and Environmental Protection (PE): Organizations must: (i) limit physical access to information systems, equipment, and the respective operating environments to authorized individuals; (ii) protect the physical plant and support infrastructure for information systems; (iii) provide supporting utilities for information systems; (iv) protect information systems against environmental hazards; and (v) provide required environmental controls in facilities containing information systems.
4. Awareness and Training (AT): Organizations must: (i) ensure that managers and users of organizational information systems are made aware of the security risks associated with their activities and of the applicable laws, executive orders, directives, policies, standards, instructions, regulations, and procedures related to the security of organizational information systems; and (ii) ensure that organizational personnel are adequately trained to carry out their assigned information security-related duties and responsibilities.

5. Audit and Accountability (AU): Organizations must: (i) store, protect, and retain both content and non-content information system audit records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate information system activity; (ii) ensure that the actions of individual information system users can be uniquely traced to those users so they can be held accountable for their actions; (iii) store and keep such audit records in accordance with subsisting legislation; and (iv) ensure that such stored logs are secured and are retrieved for analysis only when compelled to do so in the course of investigations.

6. Certification, Accreditation, and Security Assessments (CA): Organizations must: (i) periodically assess the security controls in organizational information systems to determine if the controls are effective in their application; (ii) develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational information systems; (iii) authorize the operation of organizational information systems and any associated information system connections; and (iv) monitor information system security controls on an ongoing basis to ensure the continued effectiveness of the controls.

7. Configuration Management (CM): Organizations must: (i) establish and maintain baseline configurations and inventories of organizational information systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles; and (ii) establish and enforce security configuration settings for information technology products employed in organizational information systems.

8. Contingency Planning (CP): Organizations must establish, maintain, and effectively implement plans for emergency response, backup operations, and post-disaster recovery for organizational information systems to ensure the availability of critical information resources and continuity of operations in emergency situations.

9. Incident Response (IR): Organizations must: (i) establish an operational incident handling capability for organizational information systems that includes adequate preparation, detection, analysis, containment, recovery, and user response activities; and (ii) track, document, and report incidents to the required organizational officials and/or authorities.

10. Maintenance (MA): Organizations must: (i) perform periodic and timely maintenance on organizational information systems; and (ii) provide effective controls on the tools, techniques, mechanisms, and personnel used to conduct information system maintenance.
11. Media Protection (MP): Organizations must: (i) protect information system media, both paper and digital; (ii) limit access to information on information system media to authorized users; and (iii) sanitize or destroy information system media before disposal or release for reuse.

12. Planning (PL): Organizations must develop, document, periodically update, and implement security plans for organizational information systems that describe the security controls in place or planned for the information systems and the rules of behavior for individuals accessing the information systems.

13. Risk Assessment (RA): Organizations must periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals resulting from the operation of organizational information systems and the associated processing, storage, or transmission of organizational information.

14. System and Services Acquisition (SA): Organizations must: (i) allocate sufficient resources to adequately protect organizational information systems; (ii) employ system development life cycle processes that incorporate information security considerations; (iii) employ software usage and installation restrictions; and (iv) ensure that third-party providers employ adequate security measures to protect information, applications, and/or services outsourced from the organization.

15. System and Communications Protection (SC): Organizations must: (i) monitor, control, and protect organizational communications (i.e., information transmitted or received by organizational information systems) at the external boundaries and key internal boundaries of the information systems; and (ii) employ architectural designs, software development techniques, and systems engineering principles that promote effective information security within organizational information systems.

16. System and Information Integrity (SI): Organizations must: (i) identify, report, and correct information and information system flaws in a timely manner; (ii) provide protection from malicious code at required locations within organizational information systems; and (iii) monitor information system security alerts and advisories and take the required actions in response.

3.7 General Password Construction Guidelines

Passwords are used for various purposes at MDAs. Some of the more common uses include: user-level accounts, web accounts, email accounts, screen-saver protection, voicemail passwords, and local router logins. Since very few systems support one-time tokens (i.e., dynamic passwords which are only used once), everyone must be aware of how to select strong passwords.

Poor, weak passwords have the following characteristics:
- The password contains fewer than eight characters
- The password is a word found in a dictionary (English or foreign)
- The password is a common-usage word such as:
  - Names of family, pets, friends, co-workers, fantasy characters, etc.
  - Computer terms and names, commands, sites, companies, hardware, software
  - Birthdays and other personal information such as addresses and phone numbers
  - Word or number patterns like aaabbb, qwerty, zyxwvuts, etc.
  - Any of the above spelled backwards
  - Any of the above preceded or followed by a digit (e.g., secret1, 1secret)

Strong passwords must have the following characteristics:
- Contain both upper- and lower-case characters (e.g., a-z, A-Z)
- Have digits and punctuation characters as well as letters (e.g., 0-9, !@#$%^&*()_+~-=\`{}[]:";<>?,./)
- Are at least eight characters long
- Are not a word in any language, slang, dialect, jargon, etc.
- Are not based on personal information, names of family, etc.

Passwords should never be written down or stored on-line. Try to create passwords that can be easily remembered. One way to do this is to create a password based on a song title, affirmation, or other phrase. For example, the phrase might be "This May Be One Way To Remember", and the password could be "TmB1w2R!" or "Tmb1W>r~" or some other variation.

3.7.1 Password Protection Standards

Do not use the same password for MDA accounts as for other non-MDA access (e.g., personal ISP account, option trading, benefits, etc.). Where possible, don't use the same password for various MDA access needs. For example, select one password for the Engineering systems and a separate password for IT systems. Also, select a separate password for a Win2K account and a UNIX account. Do not share MDA passwords with anyone, including administrative assistants or secretaries. All passwords must be treated as sensitive, confidential MDA information.

Here is a list of "don'ts":
- Don't reveal a password over the phone to ANYONE
- Don't reveal a password in an email message
- Don't reveal a password to the boss
- Don't talk about a password in front of others
- Don't hint at the format of a password
- Don't reveal a password on questionnaires or security forms
- Don't share a password with family members
- Don't reveal a password to co-workers while on vacation
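The password characteristics in 3.7 can be partially automated. Below is a hedged sketch covering only the mechanical checks (length, case mix, digits/punctuation); dictionary and personal-information checks are out of scope for this illustration, and the function name is an assumption.

```python
# Sketch of the minimum "strong password" traits in 3.7: at least eight
# characters, mixed case, and at least one digit or punctuation character.
# Dictionary and personal-information checks are intentionally omitted.

PUNCT = set("!@#$%^&*()_+~-=\\`{}[]:\";'<>?,./|")

def is_strong_password(pw: str) -> bool:
    """Return True when pw meets the mechanical strong-password traits."""
    return (
        len(pw) >= 8
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() or c in PUNCT for c in pw)
    )
```

Note that the phrase-based example password from 3.7 ("TmB1w2R!") passes these checks, while a typical weak password such as "secret1" fails the mixed-case requirement.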
Section Four: Part 1: Standards for Intrusion Detection and Prevention Systems (IDPS)

4.1 Purpose

This section of the document:
- Prescribes standards on intrusion detection and prevention systems (IDPS), which are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators;
- Prescribes the minimum set and types of IDPS technology that must be considered for implementation;
- Provides actionable policy on IDPS guidelines and incident response.

This policy describes the characteristics of IDPS technologies that MDAs must adopt and provides recommendations for designing, implementing, configuring, securing, monitoring, and maintaining them. The types of IDPS technologies are differentiated primarily by the types of events that they monitor and the ways in which they are deployed. This publication prescribes the following types of IDPS technologies:
- Network-based, which monitors network traffic for particular network segments or devices and analyzes the network and application protocol activity to identify suspicious activity;
- Host-based, which monitors the characteristics of a single host and the events occurring within that host for suspicious activity.

Implementing the following recommendations should facilitate more efficient and effective intrusion detection and prevention system use by MDAs:
1. Organizations must use multiple types of IDPS technologies (subject to the adequacy of the organization's business requirement definition) to achieve more comprehensive and accurate detection and prevention of malicious activity.
2. Organizations planning to use multiple types of IDPS technologies, or multiple products of the same IDPS technology type, must consider whether or not the IDPSs should be integrated.
3. Before evaluating IDPS products, organizations must define the requirements that the products must meet.
4. When evaluating IDPS products, organizations should consider using a combination of several sources of data on the products' characteristics and capabilities.
Part 2: Guidelines for Intrusion Detection and Prevention

4.2 Types of Intrusion Detection and Prevention System (IDPS)

The two primary types of IDPS technologies (network-based and host-based) each offer fundamentally different information-gathering, logging, detection, and prevention capabilities. Each technology type offers benefits over the other, such as detecting some events that the other cannot, or detecting some events with significantly greater accuracy. In many environments, a robust IDPS solution cannot be achieved without using multiple types of IDPS technologies. For most environments, a combination of network-based and host-based IDPS technologies is needed for an effective IDPS solution. Wireless IDPS technologies may also be needed if the organization determines that its wireless networks need additional monitoring, or if the organization wants to ensure that rogue wireless networks are not in use in its facilities. Network Behavior Analysis (NBA) technologies can also be deployed if organizations desire additional detection capabilities for denial-of-service attacks, worms, and other threats that NBA systems are particularly well suited to detecting. Organizations must consider the different capabilities of each technology type, along with other cost-benefit information, when selecting IDPS technologies.

a) Organizations planning to use multiple types of IDPS technologies, or multiple products of the same IDPS technology type, must consider whether or not the IDPSs should be integrated.
b) Before evaluating IDPS products, organizations must define the requirements that the products must meet. Evaluators also need to define specialized sets of requirements for the following:
1. Security capabilities, including information gathering, logging, detection, and prevention
2. Performance, including maximum capacity and performance features
3. Management, including design and implementation (e.g., reliability, interoperability, scalability, product security), operation and maintenance (including software updates), and training, documentation, and technical support
4. Life cycle costs, both initial and maintenance costs
c) When evaluating IDPS products, organizations shall consider using a combination of several sources of data on the products' characteristics and capabilities.
d) General guidelines for implementing IDPS. The scope of this policy includes computer and telecommunications systems and the employees, contractors, temporary personnel and other agents of the MDA who use and administer such systems.
1. The MDA must be committed to intrusion detection as well as intrusion prevention capabilities as part of an overall, multi-layered information technology security design to prevent, monitor and identify system intrusion or misuse. MDAs must develop a strategy for intrusion detection and prevention within the resource constraints for these activities. The goal is to deploy systems that provide robust and effective intrusion detection, raise awareness of actions that may cause intrusions, and prepare plans for effective response when intrusions occur.
2. MDA IT departments must adopt and deploy intrusion detection and prevention guidelines, systems and procedures for assets identified as critical to the mission of the agency. Such assessments can be enhanced or developed using vulnerability tools such as discovery scanning or vulnerability scanning.
e) Implementation of intrusion prevention and detection capabilities. MDA IT department(s) must evaluate, select and deploy intrusion detection and prevention capabilities compatible with the network infrastructure, policies and resources available for these activities. Intrusion detection and prevention capabilities shall address the following:
- Personnel: Personnel must be identified and properly trained to operate, interpret and maintain intrusion detection and prevention capabilities.
- Assets: Intrusion detection capabilities must be in place to provide information related to unauthorized or irregular behavior on an agency computer, network or telecommunications system.
In addition, intrusion prevention capabilities must be implemented to prevent unauthorized use, anomalies or attacks on computer, network or telecommunications systems. Intrusion detection and prevention capabilities shall be implemented that encompass basic security procedures, such as reviewing activity logs, and, depending on the results of the assessment, may also include special-purpose intrusion
prevention and detection features on network-based, host-based, wireless, or network behavior analysis intrusion detection and prevention systems.
i. Prevention controls: Intrusion prevention systems must have controls set to respond to a perceived attack. Controls must be set from the perspective of continuing service to meet business needs and objectives.
ii. Monitoring, review and detection: Intrusion detection and prevention capabilities must include guidelines for monitoring and analyzing system logs, notifications, warnings, alerts and audit logs. Agencies shall maintain and review information technology security audit logs and intrusion detection and prevention system alerts on a daily basis to determine whether an intrusion or other type of security incident has occurred or has been prevented.
iii. Security audit strategies: Agencies must develop information security audit strategies and processes relevant to each system. The strategy shall include the definition of monitored assets; the types and techniques of intrusion detection or prevention systems to be used; where each system will be deployed; the resources responsible for monitoring; the types of attacks the systems will be configured to detect or prevent; and the methods that will be used for responses or alerts.
iv. Alarms and alerts: Thresholds for alarms and alerts shall be configured to identify possible intrusion detection or prevention events or violations of agency policy. Agency procedures shall address the disposition, retention and criticality of alerts.
f) Incident report and response guidelines. The purpose of this policy is to establish a protocol to guide the response to a computer incident or event impacting MDA computing equipment, data, or networks.
This policy applies to all MDA employees, contractors, and others who process, store, transmit, or have access to any MDA information and computing equipment. Incidents are prioritized based on the following:
- Criticality of the affected resources (e.g., public web server, user workstation)
- Current and potential technical effect of the incident (e.g., root compromise, data destruction)

Combining the criticality of the affected resources and the current and potential technical effect of the incident determines the business impact of the incident. For example, data destruction on a user workstation might result in a minor loss of productivity, whereas root compromise of a public web server might result in a major loss of revenue, productivity, access to services, and reputation, as well as the release of confidential data (e.g., credit card numbers, National Identity Numbers).

4.3 General Incident Reporting Guideline/Policy

All computer security incidents, including suspicious events, must be reported immediately (orally or via email) to the agency/department IT manager and/or department supervisor by the employee who witnessed or identified the breach.

A) Escalation: The agency/department IT manager and/or department supervisor must determine the criticality of the incident. If the incident will have a serious impact, the department Commissioner and the CIO of the MDA must be notified and briefed on the incident. The CIO or his/her designee will determine if other agencies, departments, or personnel need to become involved in resolution of the incident. Only the CIO or his/her designee or department Commissioners will speak to the press about an incident.

B) Mitigation and Containment: Any system, network, or security administrator who observes an intruder on the MDA network or system must take the required action to terminate the intruder's access. (An intruder can be a hacker, botnet, malware, etc.) Affected systems, such as those infected with malicious code or accessed by an intruder, must be isolated from the network until the extent of the damage can be assessed. Any discovered vulnerabilities in the network or system will be rectified by the required means as soon as possible.

C) Eradication and Restoration: The extent of the damage must be determined and a course of action planned and communicated to the required parties.

D) Information Dissemination: Any public release of information concerning a computer security incident must be coordinated through the office of the MDA's CIO. The CIO and/or his/her designee must manage the dissemination of incident information to other participants, such as law enforcement or other incident response agencies.
After consulting with any available National Agency Response Team (NART), he/she shall coordinate the dissemination of information that could affect the public, such as web page defacement or situations that disrupt systems or applications.

E) Ongoing Reporting: After the initial oral or email report is filed, and if the incident has been determined to be a significant event (such as multiple workstations affected, root compromise, data breach, etc.), subsequent reports shall be provided to the CIO and the required managers and Commissioners. Incidents such as individual workstations infected with malware are considered minor events and need not be followed up with a written report.
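The prioritization scheme described earlier (combining resource criticality with the incident's technical effect to estimate business impact) might be sketched as follows. The example resources, effects, and scoring values are assumptions for illustration only.

```python
# Illustrative sketch: business impact derived from resource criticality and
# the incident's technical effect. Scores are placeholder assumptions.

CRITICALITY = {"user workstation": 1, "internal server": 2, "public web server": 3}
EFFECT = {"minor loss of productivity": 1, "data destruction": 2, "root compromise": 3}

def business_impact(resource: str, effect: str) -> int:
    """Combine criticality and technical effect into a 1-9 impact score."""
    return CRITICALITY[resource] * EFFECT[effect]
```

Under this sketch, root compromise of a public web server scores far higher than data destruction on a user workstation, matching the examples given in the policy text.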
The incident reports must be submitted within 24 hours of the incident. An agency/department may be required to provide reports sooner in accordance with more stringent regulations (if any). A general report to the CIO and Security Director of the MDA must contain the following:

- Point of contact
- Affected systems and locations
- System description, including hardware, operating system, and application software
- Type of information processed
- Incident description
- Incident resolution status
- Damage assessment, including any data loss or corruption
- Organizations contacted
- Corrective actions taken
- Lessons learned

A follow-up report shall be submitted upon resolution by those directly involved in addressing the incident.
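The mandatory report contents listed above lend themselves to a simple structured template. The sketch below (in Python, with illustrative field names; nothing in it is mandated by this standard) shows one way an agency could capture and completeness-check a general incident report:

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    """Illustrative template mirroring the required general report contents."""
    point_of_contact: str
    affected_systems_and_locations: list
    system_description: str          # hardware, operating system, application software
    information_type: str            # type of information processed
    incident_description: str
    resolution_status: str
    damage_assessment: str           # any data loss or corruption
    organizations_contacted: list = field(default_factory=list)
    corrective_actions: list = field(default_factory=list)
    lessons_learned: str = ""

    def is_complete(self) -> bool:
        # A general report must contain every mandatory field before submission.
        mandatory = [self.point_of_contact, self.system_description,
                     self.incident_description, self.resolution_status,
                     self.damage_assessment]
        return all(bool(v) for v in mandatory) and bool(self.affected_systems_and_locations)

report = IncidentReport(
    point_of_contact="IT Manager, Agency X",
    affected_systems_and_locations=["File server, Abuja HQ"],
    system_description="Dell R740, Windows Server 2019, shared drive",
    information_type="Personnel records",
    incident_description="Malware detected on shared drive",
    resolution_status="Contained; eradication in progress",
    damage_assessment="No confirmed data loss",
)
print(report.is_complete())  # True
```

A template of this kind makes the 24-hour submission easier to meet, since the reporter only fills fields rather than composing the report from memory.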
Section Five:

Part 1: Standard for Protecting the Confidentiality of Object Identifiable Information (OII)

5.1 Purpose

This section of the document:

1) Sets minimum standards to be adopted by all organizations that do business in Nigeria for protecting the confidentiality of a specific category of data commonly known as Object Identifiable Information (OII). OII must be protected from inappropriate access, use, and disclosure. This document provides practical, context-based guidance for identifying OII and determining what level of protection is required for each instance of OII.

2) Provides policy on OII information in four parts:
a) Part 1 provides an introduction to OII and lists some basic requirements involving the collection and handling of OII.
b) Part 2 describes factors for determining the potential impact of inappropriate access, use, and disclosure of OII.
c) Part 3 presents several methods for protecting the confidentiality of OII that can be implemented to reduce OII exposure and risk.
d) Part 4 provides recommendations for developing an incident response plan for breaches involving OII and integrating the plan into an organization's existing incident response plan.

3) Prescribes minimum information security requirements for the management, operational, and technical controls for information in each category.

4) Provides actionable policy to facilitate the implementation of OII:
a) Backup policy,
b) Change Control, and
c) Hardware Disposal policy.
5.2 Introduction and Identification of OII

In this standard, OII is defined as information which can be used to distinguish or trace an individual's identity, such as name, national ID number, or biometric records, either alone or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth or mother's maiden name.

Organizations should use a variety of methods to identify all OII residing within their organization or under the control of their organization through a third party (e.g., a system being developed and tested by a contractor). Privacy threshold analyses (PTAs), also referred to as initial privacy assessments (IPAs), are often used to identify OII. Organizations must complete a PTA before the development or acquisition of a new information system and when a substantial change is made to an existing information system. PTAs are used to determine whether a system contains OII, whether a Privacy Impact Assessment is required, whether a System of Records Notice (SORN) is required, and whether any other privacy requirements apply to the information system. PTAs should be submitted to an organization's privacy office for review and approval. PTAs often consist of simple questionnaires completed by the system owner.

5.3 The Potential Impact of Inappropriate Access to OII

This standard focuses on protecting OII from losses of confidentiality. The security objective of confidentiality is defined as preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. The security objectives of integrity and availability are also important for OII, and organizations should use the Risk Management Framework to determine the required integrity and availability impact levels. The confidentiality of OII should be protected based on its risk level.
This section outlines factors for determining the OII confidentiality impact level for a particular instance of OII. The OII confidentiality impact level takes into account additional OII considerations and should be used to determine if additional protections should be implemented. The OII confidentiality impact level (low, moderate, or high) indicates the potential harm that could result to the subject individuals and/or the organization if the OII were inappropriately accessed, used, or disclosed. Once the OII confidentiality impact level is selected, it should be used to supplement the provisional confidentiality impact level, which is determined from the information and system categorization processes outlined in Section Two of these standards and guidelines.

Some OII does not need to have its confidentiality protected, such as information that the organization has permission or authority to release publicly (e.g., an organization publishing a phone directory of
employees' names and work phone numbers so that members of the public can contact them directly). In this case, the OII confidentiality impact level would be not applicable and would not be used to supplement a system's provisional confidentiality impact level. OII that does not require confidentiality protection may still require other security controls to protect the integrity and the availability of the information, and the organization should provide the required security controls based on the assigned impact levels.

5.4 Methods for Protecting the Confidentiality of OII and Factors for Determining OII Confidentiality Impact Levels

5.4.1 Overview

Determining the OII confidentiality impact level should take the relevant factors into account. Several important factors that organizations should consider are described below. It is important to note that all relevant factors should be considered together; one factor by itself might indicate a low impact level, but another factor might indicate a high impact level, and thus override the first factor. Also, the impact levels suggested for these factors are for illustrative purposes; each instance of OII is different, and each organization has a unique set of requirements and a different mission. Therefore, organizations should determine which factors, including organization-specific factors, they should use for determining OII confidentiality impact levels and should create and implement policy and procedures that support these determinations.

5.4.2 Distinguishability

Organizations should evaluate how easily the OII can be used to distinguish particular individuals. OII data composed of only individuals' location and gender would not allow any unique individuals to be identified.
OII that is easily distinguishable may merit a higher impact level than OII that cannot be used to distinguish individuals without unusually extensive efforts.

5.4.3 Aggregation and Data Field Sensitivity

Organizations should evaluate the sensitivity of each individual OII data field, as well as the sensitivity of the OII data fields together. For example, an individual's National ID Number or financial account number is generally more sensitive than an individual's phone number or zip code, and the combination of an individual's name and National ID Number is less sensitive than the combination of an individual's name, National ID Number, date of birth, mother's maiden name, and credit card number. The OII confidentiality impact level must be set to at least moderate if a certain sensitive data field, such as a National ID Number, is present. Organizations may also consider certain combinations of OII data fields, such as name and credit card number, to be more sensitive than each data field would be considered without the existence of the others.
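As a non-normative illustration of the aggregation rule above, the following Python sketch derives an OII confidentiality impact level from the data fields present, forcing at least moderate when a National ID Number is included and raising combinations that are more sensitive than their parts. The per-field sensitivity table and combination rules are assumptions for the example; each organization must define its own based on its mission and obligations.

```python
# Illustrative only: deriving an OII confidentiality impact level from data fields.
LEVELS = {"low": 0, "moderate": 1, "high": 2}

FIELD_SENSITIVITY = {
    "phone_number": "low",
    "zip_code": "low",
    "name": "low",
    "date_of_birth": "moderate",
    "national_id_number": "moderate",   # at least moderate whenever present
    "credit_card_number": "high",
}

# Certain combinations are more sensitive than their parts (aggregation).
COMBINATION_RULES = [
    ({"name", "credit_card_number"}, "high"),
    ({"name", "national_id_number", "date_of_birth"}, "high"),
]

def impact_level(fields):
    """Return the highest level indicated by any field or combination rule."""
    level = max((LEVELS[FIELD_SENSITIVITY.get(f, "low")] for f in fields), default=0)
    for combo, combo_level in COMBINATION_RULES:
        if combo <= set(fields):
            level = max(level, LEVELS[combo_level])
    return [name for name, value in LEVELS.items() if value == level][0]

print(impact_level(["name", "phone_number"]))                         # low
print(impact_level(["name", "national_id_number"]))                   # moderate
print(impact_level(["name", "national_id_number", "date_of_birth"]))  # high
```

Note the design choice: the overall level is the maximum across all factors, so a single sensitive field or combination overrides any number of low-sensitivity fields, matching the "one factor may override another" guidance in 5.4.1.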
5.4.4 Obligation to Protect Confidentiality

An organization that is subject to any obligations to protect OII should consider such obligations when determining the OII confidentiality impact level. Many organizations are subject to laws, regulations, or other mandates governing the obligation to protect personal information, such as the Privacy Act and the Health Insurance Act. Additionally, some Federal agencies, such as the National Population Commission and the Federal Internal Revenue Service (FIRS), are subject to additional specific legal obligations to protect certain types of OII. Some organizations are also subject to specific legal requirements based on their role. For example, organizations acting as financial institutions by engaging in financial activities are subject to the public laws guiding privacy. Also, some agencies that collect OII for statistical purposes are subject to the strict confidentiality requirements of the Confidential Information Protection and Statistical Efficiency rules. Violations of many of these laws can result in civil or criminal penalties. Organizations may also be obliged to protect OII by their own policies, standards, or management directives. For example, a database with OII for beneficiaries of government services that retrieves information by National ID Number would be considered a System of Records under the Privacy Act, and the organization would be required to provide administrative, technical, and physical safeguards for the database. Decisions regarding the applicability of a particular law, regulation, or other mandate should be made in consultation with an organization's legal counsel and privacy officer, because relevant laws, regulations, and other mandates are often complex and change over time.

5.4.5 Access to and Location of the OII

Organizations shall take into consideration the nature of authorized access to the OII.
When OII is accessed more often or by more people and systems, there are more opportunities for the OII's confidentiality to be compromised. Another element is the scope of access to the OII, such as whether the OII needs to be accessed from teleworkers' systems and other systems outside the direct control of the organization. These considerations could cause an organization to assign a higher impact level to widely accessed OII than would otherwise be assigned, to help mitigate the increased risk caused by the nature of the access. Additionally, organizations shall consider whether OII that is stored or regularly transported off-site by employees should be assigned a higher OII confidentiality impact level. For example, surveyors, researchers, and other field employees often need to store OII on laptops or removable media as part of their jobs. OII located off-site is more vulnerable to unauthorized access or disclosure because it is more likely to be lost or stolen than OII stored within the physical boundaries of the organization.
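The access and location considerations above can be pictured as an escalation of a provisional impact level. The sketch below is illustrative only; the one-step-per-factor escalation is an assumption made for the example, not a requirement of this standard.

```python
# Illustrative sketch: escalating a provisional OII confidentiality impact
# level based on the nature of access (breadth of access, off-site storage).
ORDER = ["low", "moderate", "high"]

def adjust_for_access(base_level, accessed_offsite=False, broad_access=False):
    """Raise the impact level one step for each access-related risk factor,
    capped at 'high'."""
    idx = ORDER.index(base_level)
    if accessed_offsite:   # e.g., OII on field laptops or removable media
        idx = min(idx + 1, len(ORDER) - 1)
    if broad_access:       # e.g., many users/systems, telework access
        idx = min(idx + 1, len(ORDER) - 1)
    return ORDER[idx]

print(adjust_for_access("low", accessed_offsite=True))                          # moderate
print(adjust_for_access("moderate", accessed_offsite=True, broad_access=True))  # high
```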
5.4.6 General Protection Measures

This section describes two types of general OII protection: policy and procedure creation; and education, training, and awareness.

Policy and Procedure Creation

Organizations shall develop comprehensive policies and procedures for handling OII at the organization level, the program or component level, and occasionally the system level. Some types of policies include foundational privacy principles, privacy rules of behavior, policies that implement laws and other mandates, and system-level policies. The organizational privacy principles act as the foundation upon which the overall privacy program is built and reflect the organization's privacy objectives. Foundational privacy principles shall also be used as a guide against which to develop additional policies and procedures. Privacy rules of behavior policies provide guidance on the proper handling of OII, as well as the consequences for failure to comply with the policy. Some policies provide guidance on implementing laws and other mandates in an organization's environment, based upon the organization's authorized business purposes and mission. Organizations should develop privacy policies and associated procedures for the following topics:

a. Development of Privacy Impact Assessments (PIAs) and coordination with System of Records Notices (SORNs)
b. Access rules for OII within a system
c. OII retention schedules and procedures
d. Redress
e. Individual consent
f. Data sharing agreements
g. OII incident response and data breach notification
h. Privacy in the System Development Life Cycle Process
i. Limitation of collection, disclosure, sharing, and use of OII
j. Consequences for failure to follow privacy rules of behavior.
If the organization permits access to or transfer of OII through interconnected systems external to the organization, or shares OII through other means, the organization should implement the required documented agreements covering roles and responsibilities, restrictions on further sharing of the information, requirements for notification to each party in the case of a breach, minimum security controls, and other relevant factors.
Interconnection Security Agreements (ISAs) should be used for technical requirements, as necessary. These agreements ensure that the partner organizations abide by rules for handling, disclosing, sharing, transmitting, retaining, and using the organization's OII. OII maintained by the organization should also be reflected in the organization's incident response policies and procedures. A well-defined incident response capability helps the organization detect incidents rapidly, minimize loss and destruction, identify weaknesses, and restore IT operations rapidly.

Education, Training, and Awareness

Education, training, and awareness are distinct activities, each critical to the success of privacy and security programs. Their roles related to protecting OII are briefly described below. An organization should have a training plan and implementation approach, and an organization's leadership should communicate the seriousness of protecting OII to its staff. Organizational policy should define roles and responsibilities for training; training prerequisites for receiving access to OII; and training periodicity and refresher training requirements. To reduce the possibility that OII will be accessed, used, or disclosed inappropriately, all individuals who have been granted access to OII should receive the required training and, where applicable, specific role-based training.

Privacy-Specific Protection Measures

Privacy-specific protection measures are controls for protecting the confidentiality of OII. These controls provide types of protections not usually needed for other types of data. Privacy-specific protection measures provide additional protections that help organizations collect, maintain, use, and disseminate data in ways that protect the confidentiality of the data.

Minimizing Collection and Retention of OII

The practice of minimizing the collection and retention of OII is a basic privacy principle.
By limiting OII collections to the least amount necessary to conduct its mission, the organization may limit potential negative consequences in the event of a data breach involving OII. Organizations should consider the total amount of OII collected and maintained, as well as the types and categories of OII collected and maintained. This general concept is often referred to as the minimum necessary principle. OII collections should only be made where such collections are essential to meet the authorized business purpose and mission of the organization. If the OII serves no current business purpose, then it should no longer be collected. Also, an organization should regularly review its holdings of previously collected OII to determine whether the OII is still relevant and necessary for meeting the organization's business purpose and mission. If the OII is no longer relevant and necessary, then it should be properly destroyed. The destruction or disposal of OII must be conducted in accordance with the applicable rules and records control schedules. The effective management and prompt disposal of OII, in accordance with approved disposition schedules, will minimize the risks of unauthorized disclosure.
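The periodic review of holdings described above can be sketched as a simple filter that flags previously collected OII for disposal when it no longer serves a current business purpose or has passed its retention date. The record fields and schedule below are illustrative assumptions; actual disposal must follow the approved disposition schedules.

```python
from datetime import date

def review_holdings(holdings, today):
    """Split OII holdings into records to retain and records to flag for
    disposal (no current business purpose, or past the retention date)."""
    retain, destroy = [], []
    for record in holdings:
        past_retention = record["retain_until"] < today
        if record["business_purpose"] is None or past_retention:
            destroy.append(record["id"])   # dispose per approved schedules
        else:
            retain.append(record["id"])
    return retain, destroy

holdings = [
    {"id": "A1", "business_purpose": "benefits administration",
     "retain_until": date(2030, 1, 1)},
    {"id": "B2", "business_purpose": None,        # no current purpose
     "retain_until": date(2030, 1, 1)},
    {"id": "C3", "business_purpose": "benefits administration",
     "retain_until": date(2010, 1, 1)},           # retention period expired
]
print(review_holdings(holdings, date(2013, 1, 1)))  # (['A1'], ['B2', 'C3'])
```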
De-Identifying Information

Full data records are not always necessary, such as for some forms of research, resource planning, and examinations of correlations and trends. The term de-identified information describes records that have had enough OII removed or obscured (also referred to as masked or obfuscated) that the remaining information does not identify an individual and there is no reasonable basis to believe that the information can be used to identify an individual. De-identified information can be re-identified (rendered distinguishable) by using a code, algorithm, or pseudonym that is assigned to individual records. The code, algorithm, or pseudonym should not be derived from other related information about the individual, and the means of re-identification should only be known by authorized parties and not disclosed to anyone without the authority to re-identify records.
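As a hedged illustration of de-identification with controlled re-identification, the sketch below masks direct identifiers and assigns each record a pseudonym computed with a secret key held only by authorized parties. A keyed hash is one common approach (an assumption of this example); an organization could equally assign random codes kept in a protected lookup table.

```python
import hashlib
import secrets

# Secret key known only to parties authorized to re-identify records.
SECRET_KEY = secrets.token_hex(16)

def de_identify(record, oii_fields):
    """Return a copy of the record with OII fields removed and a pseudonym
    added. Without SECRET_KEY, the pseudonym cannot be recomputed from the
    individual's data, so holders of the masked record cannot re-identify it."""
    pseudonym = hashlib.sha256(
        (SECRET_KEY + record["national_id_number"]).encode()).hexdigest()[:12]
    masked = {k: v for k, v in record.items() if k not in oii_fields}
    masked["pseudonym"] = pseudonym
    return masked

record = {"name": "A. Bello", "national_id_number": "12345678901",
          "region": "North Central", "credit_score": 710}
masked = de_identify(record, oii_fields={"name", "national_id_number"})
print(sorted(masked))  # ['credit_score', 'pseudonym', 'region']
```

The masked record still supports analysis of correlations and trends (region, credit score), while re-identification remains possible only for parties holding the key.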
Part 2: Guidelines for Protecting the Confidentiality of Object Identifiable Information (OII)

5.5 Introduction and Identification of OII

In this guideline, OII is defined as information which can be used to distinguish or trace an individual's identity, such as name, national ID number, or biometric records, either alone or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth or mother's maiden name.

To distinguish an individual is to identify an individual. Some examples of information that could distinguish an individual include, but are not limited to, name, passport number, national ID number, or biometric image and template. In contrast, a list containing only credit scores does not have sufficient information to distinguish a specific individual.

Information elements that are linked or linkable are not sufficient to distinguish an individual when considered separately, but could distinguish individuals when combined with a secondary information source. For example, suppose that two databases contain different OII elements and also share some common OII elements. An individual with access to both databases may be able to link together information from the two databases and distinguish individuals. If the secondary information source is present on the same system or a closely related system, then the data is considered linked. If the secondary source is available to the general public or can otherwise be obtained, such as from an unrelated system within the organization, then the data is considered linkable. Linked data is often de-identified in some way, and information that makes re-identification possible is available to some system users. Linkable data is also often de-identified, but the remaining data can be analyzed against other data sources, such as telephone directories and other sources available to large communities of people, to distinguish individuals.
Organizations should use a variety of methods to identify all OII residing within their organization or under the control of their organization through a third party (e.g., a system being developed and tested by a contractor). Privacy threshold analyses (PTAs), also referred to as initial privacy assessments (IPAs), are often used to identify OII. PTAs are useful in initiating the communication and collaboration for each system between the privacy officer, the information security officer, and the information officer. Other examples of methods to identify OII include reviewing system documentation, conducting interviews, conducting data calls, or checking with system owners.
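The linkage risk described in this section can be demonstrated with a small sketch: a de-identified dataset is re-identified by joining it to a publicly available secondary source on a shared field. The datasets, field names, and values below are entirely illustrative.

```python
# Illustrative linkage attack: "de-identified" survey responses are joined to
# a public phone directory on the shared phone-number field.
survey = [  # de-identified survey responses (no names)
    {"phone": "0801-000-1111", "diagnosis": "diabetes"},
    {"phone": "0801-000-2222", "diagnosis": "asthma"},
]
directory = [  # publicly available secondary source
    {"name": "A. Bello", "phone": "0801-000-1111"},
    {"name": "C. Okafor", "phone": "0801-000-3333"},
]

def link(records, secondary, key):
    """Join records to the secondary source on a shared field, re-attaching
    names to any record whose key value appears in the secondary source."""
    lookup = {row[key]: row["name"] for row in secondary}
    return [{**r, "name": lookup[r[key]]} for r in records if r[key] in lookup]

linked = link(survey, directory, key="phone")
print(linked)
# [{'phone': '0801-000-1111', 'diagnosis': 'diabetes', 'name': 'A. Bello'}]
```

This is why a phone number, harmless on its own, makes a dataset linkable: any shared field that also appears in a widely available source can serve as the join key.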
5.6 Examples of OII Data

The following list contains examples of information that may be considered OII.

a. Name, such as full name, maiden name, mother's maiden name, or alias
b. Personal identification number, such as NIDN, passport number, driver's license number, tax identification number, patient identification number, and financial account or credit card number
c. Address information, such as street address or email address
d. Asset information, such as Internet Protocol (IP) or Media Access Control (MAC) address or other host-specific persistent static identifier that consistently links to a particular person or to a small, well-defined group of people
e. Telephone numbers, including mobile, business, and personal numbers
f. Personal characteristics, including photographic image (especially of the face or another distinguishing characteristic), x-rays, fingerprints, or other biometric image or template data (e.g., retina scans, voice signature, facial geometry)
g. Information identifying personally owned property, such as vehicle registration or identification number, and title numbers and related information
h. Information about an individual that is linked or linkable to one of the above (e.g., date of birth, place of birth, race, religion, weight, activities, or employment, medical, education, or financial information)

5.7 OII and Fair Information Practices

The protection of OII and the overall privacy of records are concerns both for individuals whose personal records are at stake and for organizations that may be liable, or may have their reputations damaged, should such OII be inappropriately accessed, used, or disclosed. The protection also has national security implications in some cases involving government and security agency personnel. Treatment of OII is distinct from other types of data because it needs to be not only protected, but also collected, maintained, and disseminated in accordance with Federal law.
The Privacy Act, as well as other privacy laws, is based on the widely recognized Fair Information Practices, also called Privacy Principles. There are five core Fair Information Practices that are based on the common elements, or privacy principles, of several international reports and guidelines. These core practices are as follows:

a) Notice/Awareness: Individuals should be given notice of an organization's information practices before any personal information is collected from them.
b) Choice/Consent: Individuals should be given a choice about how information about them is used.
c) Access/Participation: Individuals should have the right to access information about
them and request correction to ensure the information is accurate and complete.
d) Integrity/Security: Data collectors should ensure that information is protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification, or disclosure of data.
e) Enforcement/Redress: Data collectors should be held accountable for complying with measures that give effect to the practices stated above.

5.8 The potential impact of inappropriate access to OII

This standard focuses on protecting OII from losses of confidentiality. The security objective of confidentiality is defined by law as preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. The security objectives of integrity and availability are also important for OII, and organizations should determine the required integrity and availability impact levels. The confidentiality of OII should be protected based on its risk level.

This section outlines factors for determining the OII confidentiality impact level for a particular instance of OII. The OII confidentiality impact level takes into account additional OII considerations and should be used to determine if additional protections should be implemented. The OII confidentiality impact level (low, moderate, or high) indicates the potential harm that could result to the subject individuals and/or the organization if the OII were inappropriately accessed, used, or disclosed. Once the OII confidentiality impact level is selected, it should be used to supplement the provisional confidentiality impact level, which is determined from the information and system categorization processes outlined in the Section Two guidelines.
Some OII does not need to have its confidentiality protected, such as information that the organization has permission or authority to release publicly (e.g., an organization publishing a phone directory of employees' names and work phone numbers so that members of the public can contact them directly). In this case, the OII confidentiality impact level would be not applicable and would not be used to supplement a system's provisional confidentiality impact level. OII that does not require confidentiality protection may still require other security controls to protect the integrity and the availability of the information, and the organization should provide the required security controls based on the assigned impact levels.
5.8.1 Impact Level Definitions

The harm caused by a loss of confidentiality should be considered when attempting to determine which OII confidentiality impact level corresponds to a specific set of OII data. Harm, for the purposes of this document, includes any adverse effects that would be experienced by an individual whose OII was the subject of a loss of confidentiality, as well as any adverse effects experienced by the organization that maintains the OII. Harm to an individual includes any negative or unwanted effects (i.e., effects that may be socially, physically, or financially damaging). Examples of types of harm to individuals include, but are not limited to, the potential for blackmail, identity theft, physical harm, discrimination, or emotional distress. Organizations may also experience harm as a result of a loss of confidentiality of OII maintained by the organization, including but not limited to administrative burden, financial losses, loss of public reputation and public confidence, and civil liability.

The following describe the three impact levels (low, moderate, and high) defined earlier in this document, which are based on the potential impact of a security breach involving a particular system.

The potential impact is LOW if the loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals. A limited adverse effect means that, for example, the loss of confidentiality, integrity, or availability might (i) cause a degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is noticeably reduced; (ii) result in minor damage to organizational assets; (iii) result in minor financial loss; or (iv) result in minor harm to individuals.
The potential impact is MODERATE if the loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals. A serious adverse effect means that, for example, the loss of confidentiality, integrity, or availability might (i) cause a significant degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the
functions is significantly reduced; (ii) result in significant damage to organizational assets; (iii) result in significant financial loss; or (iv) result in significant harm to individuals that does not involve loss of life or serious life-threatening injuries.

The potential impact is HIGH if the loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals. A severe or catastrophic adverse effect means that, for example, the loss of confidentiality, integrity, or availability might (i) cause a severe degradation in or loss of mission capability to an extent and duration that the organization is not able to perform one or more of its primary functions; (ii) result in major damage to organizational assets; (iii) result in major financial loss; or (iv) result in severe or catastrophic harm to individuals involving loss of life or serious life-threatening injuries.

Harm to individuals as described in these impact levels is easier to understand with examples. A breach of the confidentiality of OII at the low impact level would not cause harm greater than inconvenience, such as changing a telephone number. The types of harm that could be caused by a breach of OII at the moderate impact level include financial loss due to identity theft or denial of benefits, public humiliation, discrimination, and the potential for blackmail. Harm at the high impact level involves serious physical, social, or financial harm, resulting in potential loss of life or inappropriate physical detention.

5.9 Methods for protecting the confidentiality of OII and Factors for Determining OII Confidentiality Impact Levels

5.9.1 Overview

Determining the OII confidentiality impact level should take into account all relevant factors. Several important factors that organizations should consider are described below.
It is important to note that relevant factors should be considered together; one factor by itself
might indicate a low impact level, but another factor might indicate a high impact level, and thus override the first factor. Also, the impact levels suggested for these factors are for illustrative purposes; each instance of OII is different, and each organization has a unique set of requirements and a different mission. Therefore, organizations should determine which factors, including organization-specific factors, they should use for determining OII confidentiality impact levels and should create and implement policy and procedures that support these determinations.

5.9.2 Distinguishability

Organizations should evaluate how easily the OII can be used to distinguish particular individuals. For example, OII data composed of individuals' names, fingerprints, and NIDN uniquely identifies individuals, whereas OII data composed of individuals' phone numbers only would require the use of additional data sources, such as phone directories, and would only allow some unique individuals to be identified (for example, unique identification might not be possible if multiple individuals share a phone or if a phone number is unlisted). OII data composed of only individuals' area codes and gender would not allow any unique individuals to be identified. OII that is easily distinguishable may merit a higher impact level than OII that cannot be used to distinguish individuals without unusually extensive efforts.

Organizations may also choose to consider how many individuals can be distinguished from the OII data. Breaches of 25 records and of 25 million records may have different impacts, not only in terms of the collective harm to individuals but also in terms of harm to the organization's reputation and the cost to the organization in addressing the breach. For this reason, organizations may choose to set a higher impact level for particularly large OII data sets than would otherwise be set.
However, organizations should not set a lower impact level for an OII data set simply because it contains a small number of records.

5.9.3 Aggregation and Data Field Sensitivity

Organizations should evaluate the sensitivity of each individual OII data field, as well as the sensitivity of the OII data fields together. For example, an individual's National ID Number or financial account number is generally more sensitive than an individual's phone number or zip code, and the combination of an individual's name and National ID Number is less sensitive than the combination of an individual's name, National ID Number, date of birth, mother's maiden name, and credit card number. Organizations often require the OII confidentiality impact level to be set to at least moderate if a certain sensitive data field, such as a National ID Number, is present. Organizations may also consider certain combinations of OII data fields, such as name and credit card number, to be more sensitive than each data field would be considered without the existence of the others.

5.9.4 Context of Use

Context of use is defined as the purpose for which the OII is collected, stored, used, processed, disclosed, or disseminated, as well as how that OII is used or could potentially be used. Examples of context include, but are not limited to, statistical analysis, determining eligibility for
benefits, administration of benefits, research, tax administration, or law enforcement. Organizations should assess the context of use because it is important to understand how the disclosure of data elements can potentially harm individuals and the organization. Organizations should consider what harm is likely to be caused if the OII is disclosed (either intentionally or accidentally), and whether the mere fact that the OII is being collected or used could, if disclosed, cause harm to the organization or an individual. For example, law enforcement investigations could be compromised if the mere fact that information is being collected about a particular individual is disclosed. The context of use may cause multiple instances of the same types of OII data to be assigned different OII confidentiality impact levels. For example, suppose that an organization has three lists that contain the same OII data fields (e.g., name, address, phone number). The first list is people who subscribe to a general-interest newsletter produced by the organization. The second list is people who have filed for retirement benefits, and the third list is individuals who work undercover in law enforcement. The potential impacts to the affected individuals and to the organization are significantly different for each of the three lists. Based on context of use alone, the three lists are likely to merit impact levels of low, moderate, and high, respectively. Examples of topics that are relevant to context of use as a factor for determining the OII confidentiality impact level are abortion; alcohol, drug, or other addictive products; illegal conduct; illegal immigration status; information damaging to financial standing, employability, or reputation; information leading to social stigmatization or discrimination; politics; psychological well-being or mental health; religion; same-sex partners; sexual behavior; sexual orientation; taxes; and other information sensitive due to specific cultural or other factors.
6.0 Obligation to Protect Confidentiality

An organization that is subject to any obligations to protect OII should consider such obligations when determining the OII confidentiality impact level. Many organizations are subject to laws, regulations, or other mandates governing the obligation to protect personal information, such as the Privacy Act and the Health Insurance Act. Additionally, some Federal agencies, such as the Census Bureau and the Federal Internal Revenue Service (FIRS), are subject to additional specific legal obligations to protect certain types of OII. Some organizations are also subject to specific legal requirements based on their role. For example, organizations acting as financial institutions by engaging in financial activities are subject to the public laws guiding privacy. Also, some agencies that collect OII for statistical purposes are subject to the strict confidentiality requirements of the Confidential Information Protection and Statistical Efficiency
rules. Violations of many of these laws can result in civil or criminal penalties. Organizations may also be obliged to protect OII by their own policies, standards, or management directives. For example, a database with OII for beneficiaries of government services that retrieves information by NATIONAL ID NUMBER would be considered a System of Records under the Privacy Act, and the organization would be required to provide administrative, technical, and physical safeguards for the database. Decisions regarding the applicability of a particular law, regulation, or other mandate should be made in consultation with an organization's legal counsel and privacy officer because relevant laws, regulations, and other mandates are often complex and change over time.

6.1 Access to and Location of the OII

Organizations may choose to take into consideration the nature of authorized access to the OII. When OII is accessed more often or by more people and systems, there are more opportunities for the OII's confidentiality to be compromised. Another element is the scope of access to the OII, such as whether the OII needs to be accessed from teleworkers' systems and other systems outside the direct control of the organization. These considerations could cause an organization to assign a higher impact level to widely accessed OII than would otherwise be assigned, to help mitigate the increased risk caused by the nature of the access. Additionally, organizations may choose to consider whether OII that is stored or regularly transported off-site by employees should be assigned a higher OII confidentiality impact level. For example, surveyors, researchers, and other field employees often need to store OII on laptops or removable media as part of their jobs.
OII located off-site is more vulnerable to unauthorized access or disclosure because it is more likely to be lost or stolen than OII stored within the physical boundaries of the organization.

OII Confidentiality Impact Level Examples

The following are examples of how an organization might assign OII confidentiality impact levels to specific instances of OII. The examples are intended to help organizations better understand the process of considering the various impact level factors; they are not a substitute for organizations analyzing their own situations. Certain circumstances within any organization or specific system, such as the context of use or the obligation to protect, may cause different outcomes. The obligation to protect is a particularly important factor that should be determined early in the categorization process. Since determinations about the obligation to protect confidentiality should always be made in consultation with an organization's legal counsel and privacy officer, it is not addressed in the following examples.
Example 5.1: Incident Response Roster

An organization maintains a roster (in both electronic and paper formats) of its computer incident response team members. In the event that an IT staff member detects any kind of security breach, standard practice requires that the staff member contact the appropriate people listed on the roster. Because this team may need to coordinate closely in the event of an incident, the contact information includes names, professional titles, office and work cell phone numbers, and work e-mail addresses. The organization makes the same types of contact information available to the public for all of its employees on its main Web site.

Distinguishability: The information directly identifies a small number of individuals (fewer than 20).

Aggregation and data field sensitivity: Although the roster is intended to be made available only to the team members, the individuals' information included in the roster is already available to the public on the organization's Web site.

Context of use: The release of the individuals' names and contact information would not likely cause harm to the individuals, and disclosure of the fact that the organization has collected or used this information is also unlikely to cause harm.

Access to and location of the OII: The information is accessed by IT staff members who detect security breaches, as well as by the team members themselves. The OII needs to be readily available to teleworkers and to on-call IT staff members so that incident responses can be initiated quickly.

Taking into account these factors, the organization determines that unauthorized access to the roster would likely cause little or no harm, and it chooses to assign the OII confidentiality impact level of low.

Example 5.2: Intranet Activity Tracking

An organization maintains a Web use audit log for an intranet Web site accessed by employees.
The Web use audit log contains the following:
- The user's IP address
- The Uniform Resource Locator (URL) of the Web site the user was viewing immediately before coming to this Web site (i.e., the referring URL)
- The date and time the user accessed the Web site
- The amount of time the user spent at the Web site
- The Web pages or topics accessed within the organization's Web site (e.g., organization security policy).

Distinguishability: By itself, the log does not contain any distinguishable data. However, the
organization has another system with a log that contains domain login information records, which include user IDs and corresponding IP addresses. Administrators who can access both systems and their logs, and who take the time to correlate information between the logs, could distinguish individuals. Potentially, information could be gathered on the actions of most of the organization's users involving Web access to intranet resources. The organization has a small number of administrators who have access to both systems and their logs.

Aggregation and data field sensitivity: The information on which internal Web pages and topics were accessed could potentially cause some embarrassment if the pages involved certain human resources-related subjects, such as a user searching for information on substance abuse programs. However, since the logging is limited to use of intranet-housed information, the amount of potentially embarrassing information is minimal.

Context of use: The release of the information would be unlikely to cause harm, other than potentially embarrassing a small number of users if their identities could be distinguished. The fact that the logging is occurring is generally known and assumed and would not cause harm.

Access to and location of the OII: The log is accessed by a small number of system administrators when troubleshooting operational problems, and also occasionally by a small number of incident response personnel when investigating internal incidents. All access to the log occurs only from the organization's own systems.

Taking into account these factors, the organization determines that a breach of the log's confidentiality would likely cause little or no harm, and it chooses to assign the OII confidentiality impact level of low.

Example 5.3: Fraud, Waste, and Abuse Reporting Application

A database contains Web form submissions by individuals claiming possible fraud, waste, or abuse of organizational resources and authority.
Some of the submissions include serious allegations, such as accusing individuals of accepting bribes or of not enforcing safety regulations. The submission of contact information is not prohibited, and individuals sometimes enter their personal information in the form's narrative text field. The Web site is hosted by a server that logs the IP address, referring Web site information, and time spent on the Web site.

Distinguishability: By default, the database does not request distinguishable data, but a significant percentage of users choose to provide distinguishable information. A recent estimate indicated that the database has approximately 30 records with distinguishable information out of nearly 1000 total records. The Web log does not contain any distinguishable information, nor could it be readily linked with the database or other sources to identify specific individuals.

Aggregation and data field sensitivity: The database's narrative text field contains user-supplied text and frequently includes information such as name, mailing address, e-mail address, and phone numbers. The organization does not know how sensitive this information might be to
the individuals, such as unlisted phone numbers or addresses used for limited private communications.

Context of use: Because the submissions report claims of fraud, waste, or abuse, the disclosure of individuals' identities would likely cause some of the individuals making the claims to fear retribution by management and peers. The ensuing harm could include blackmail, severe emotional distress, loss of employment, and physical harm. A breach would also undermine trust in the organization by both the individuals making the claims and the public.

Access to and location of the OII: The database is accessed only by a few people who investigate fraud, waste, and abuse claims. All access to the database occurs only from the organization's own systems.

Taking into account these factors, the organization determines that a breach of the database's confidentiality would likely cause catastrophic harm to some of the individuals, and it chooses to assign the OII confidentiality impact level of high.

OII should be protected through a combination of measures, including general protection measures, privacy-specific protection measures, and security controls. Organizations should use a risk-based approach for protecting the confidentiality of OII. The OII protection measures provided in this section are complementary to other general protection measures for data and may be used as one part of an organization's comprehensive approach to protecting the confidentiality of OII.

6.2 Education, Training, and Awareness

Education, training, and awareness are distinct activities, each critical to the success of privacy and security programs. Their roles in protecting OII are briefly described below. Awareness efforts are designed to change behavior or reinforce desired OII practices. The purpose of awareness is to focus attention on the protection of OII.
Awareness relies on using attention-grabbing techniques to reach all different types of staff across an organization. For OII protection, awareness methods include informing staff of new scams that are being used to steal identities, providing updates on privacy items in the news (such as government data breaches and their effect on individuals and the organization), providing examples of how staff members have been held accountable for inappropriate actions, and providing examples of recommended privacy practices. The goal of training is to build the knowledge and skills that will enable staff to protect OII. Laws and regulations may specifically require training for staff, managers, and contractors. An organization should have a training plan and implementation approach, and an organization's leadership should communicate the seriousness of protecting OII to its staff. Organizational policy should define roles and responsibilities for training; training prerequisites for receiving access to OII; and training periodicity and refresher training requirements.
To reduce the possibility that OII will be accessed, used, or disclosed inappropriately, all individuals who have been granted access to OII should receive the required training and, where applicable, specific role-based training. Depending on the roles and functions involving OII, important topics to address may include:
a) The definition of OII
b) The basic privacy laws, regulations, and policies that apply to a staff member's organization
c) Restrictions on data collection, storage, and use of OII
d) Roles and responsibilities for using and protecting OII
e) Having the organization's legal counsel or privacy officer determine legal obligations to protect OII
f) Required disposal of OII
g) Sanctions for misuse of OII
h) Recognizing a security or privacy incident involving OII
i) Retention schedules for OII
j) Roles and responsibilities in responding to OII-related incidents.

Education develops a common body of knowledge that reflects all of the various specialties and aspects of OII protection. It is used to develop privacy professionals who are able to implement privacy programs that enable their organizations to proactively respond to privacy challenges.

6.3 De-Identifying Information

Full data records are not always necessary, such as for some forms of research, resource planning, and examinations of correlations and trends. The term de-identified information is used to describe records that have had enough OII removed or obscured (also referred to as masked or obfuscated) such that the remaining information does not identify an individual and there is no reasonable basis to believe that the information can be used to identify an individual. De-identified information can be re-identified (rendered distinguishable) by using a code, algorithm, or pseudonym that is assigned to individual records.
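As an illustration only, one way to implement such an assigned pseudonym is a keyed one-way function, with the key held in a separate, access-controlled system. The sketch below is a minimal, non-mandated example; the key, field names, and record layout are hypothetical.

```python
import hmac
import hashlib

# Hypothetical re-identification key; in practice it must be generated,
# stored, and access-controlled in a SEPARATE system.
REID_KEY = b"example-key-held-in-a-separate-system"

def pseudonymize(national_id: str) -> str:
    """Replace a direct identifier with a keyed one-way pseudonym.

    Without REID_KEY, the pseudonym cannot be linked back to the
    original NATIONAL ID NUMBER; authorized parties holding the key
    can regenerate the pseudonym to re-identify a record.
    """
    return hmac.new(REID_KEY, national_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"national_id": "12345678901", "balance": 9500}
# The de-identified record carries no direct identifier; the same input
# always yields the same pseudonym, so records stay linkable only to
# parties with authorized access to the key.
deidentified = {"pseudonym": pseudonymize(record["national_id"]),
                "balance": record["balance"]}
```

A plain (unkeyed) hash of an identifier can be vulnerable to guessing attacks over the identifier space, which is why this sketch uses a keyed function.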
The code, algorithm, or pseudonym should not be derived from other related information about the individual, and the means of re-identification should be known only by authorized parties and not disclosed to anyone without the authority to re-identify records. A common de-identification technique for obscuring OII is to apply a one-way cryptographic function, also known as a hash function, to the OII. De-identified information can be assigned an OII confidentiality impact level of low, as long as both of the following are true:
- The re-identification algorithm, code, or pseudonym is maintained in a separate system, with the required controls in place to prevent unauthorized access to the re-identification information.
- The data elements are not linkable, via public records or other reasonably available external records, in order to re-identify the data.

For example, de-identification could be accomplished by removing account numbers, names,
NATIONAL ID NUMBERs, and any other identifiable information from a set of financial records. By de-identifying the information, a trend analysis team could perform an unbiased review of those records in the system without compromising the OII or providing the team with the ability to identify any individual. Another example is using health care test results in research analysis. All of the distinguishable OII fields can be removed, and the patient ID numbers can be obscured using pseudo-random data that is linked to a cross-reference table located in a separate system. The only means to reconstruct the original (complete) OII records is through authorized access to the cross-reference table. Additionally, de-identified information can be aggregated for the purposes of statistical analysis, such as making comparisons, analyzing trends, or identifying patterns. An example is the aggregation and use of multiple sets of de-identified data for evaluating several different types of education loan programs. The data describes characteristics of loan holders, such as age, gender, region, and outstanding loan balances. With this dataset, an analyst could draw statistics showing that 18,000 women in the age group have outstanding loan balances greater than $10,000. Although the original data sets contained distinguishable identities for each person and are considered to be OII, the de-identified and aggregated dataset would not contain linked or readily distinguishable data for any individual.

6.4 Anonymous Information

Anonymous is defined as something that cannot be named or identified; the word derives from a Greek word meaning "without a name". Similarly, anonymized information is defined as previously identifiable information that has been de-identified and for which a code or other link no longer exists. Anonymized information differs from de-identified information in that anonymized information cannot be re-identified.
A re-identification algorithm, code, or pseudonym does not exist or has been removed and is not available. Anonymizing information usually involves the application of statistical disclosure limitation techniques to ensure the data cannot be re-identified, such as:
- Generalizing the data: making information less precise, such as grouping continuous values
- Suppressing the data: deleting an entire record or certain parts of records
- Introducing noise into the data: adding small amounts of variation to selected data
- Swapping the data: exchanging certain data fields of one record with the same data fields of another similar record (e.g., swapping the zip codes of two records)
- Replacing data with the average value: replacing a selected value of data with the average value for the entire group of data.
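The disclosure limitation techniques listed above can be illustrated with a short sketch. The field names, noise scale, and age bands below are hypothetical examples, and real anonymization should be validated by a statistical disclosure specialist.

```python
import random

def generalize_age(age: int) -> str:
    """Generalizing: replace an exact age with a 10-year band."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def suppress(record: dict, fields: list) -> dict:
    """Suppressing: delete selected fields from a record."""
    return {k: v for k, v in record.items() if k not in fields}

def add_noise(value: float, scale: float = 100.0) -> float:
    """Introducing noise: add a small random variation to a value."""
    return value + random.uniform(-scale, scale)

def swap_field(rec_a: dict, rec_b: dict, field: str) -> None:
    """Swapping: exchange one field between two similar records."""
    rec_a[field], rec_b[field] = rec_b[field], rec_a[field]

record = {"name": "A. Person", "age": 34, "zip": "900211",
          "balance": 10450.0}
anonymized = suppress(record, ["name"])          # drop the identifier
anonymized["age"] = generalize_age(record["age"])  # 34 -> "30-39"
anonymized["balance"] = add_noise(record["balance"])
```

The anonymized record keeps realistic, analytically useful values while no longer naming the individual, which is the property the guideline relies on for system testing and trend analysis.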
Using these techniques, the information is no longer OII, but it can retain its useful and realistic properties. Anonymized information is useful for system testing. Most systems that are newly developed, newly purchased, or upgraded require testing before being introduced to their intended production environment. Testing generally should simulate real conditions as closely as possible to ensure that the new or upgraded system runs correctly and handles the projected system capacity effectively. If OII is used in the test environment, it is required to be protected at the same level that it is protected in the production environment, which can add significantly to the time and expense of testing the system. Randomly generating fake data in place of OII to test systems is often ineffective because certain properties and statistical distributions of the OII may need to be retained to effectively test the system. Tools are available that substitute OII with synthetic data generated by anonymizing the OII. The anonymized information retains the useful properties of the original OII, but the anonymized information is not considered to be OII. Anonymized data substitution is a privacy-specific protection measure that enables system testing while reducing the expense and added time of protecting OII. However, not all data can be readily anonymized (e.g., biometric data).

6.5 Security Controls

In addition to the OII-specific protection measures described earlier in this section, many types of technical and operational security controls are available to safeguard the confidentiality of OII. These controls are often already available on a system to protect other types of data processed, stored, or transmitted by the system. The security controls listed below address general protections of data and systems. The items listed are some of the controls that are required to be used to help safeguard the confidentiality of OII.
However, organizations may choose to provide greater protections than what is recommended.

1. Access Enforcement. Organizations are required to control access to OII through access control policies and access enforcement mechanisms (e.g., access control lists). This can be done in many ways. One example is implementing role-based access control and configuring it so that each user can access only the pieces of data necessary for the user's role. Another example is permitting users to access OII only through an application that tightly restricts their access to the OII, instead of permitting users to directly access the databases or files containing OII. Encrypting stored information is also an option for implementing access enforcement.
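A minimal sketch of the role-based approach just described is shown below. The roles, field names, and records are hypothetical examples, not a prescribed scheme.

```python
# Map each role to the OII fields that role is permitted to see.
ROLE_PERMITTED_FIELDS = {
    "benefits_clerk": {"name", "benefit_status"},
    "fraud_investigator": {"name", "national_id", "benefit_status"},
    "trend_analyst": {"benefit_status"},  # no direct identifiers
}

def filter_record(role: str, record: dict) -> dict:
    """Return only the fields this role may access; deny unknown roles."""
    permitted = ROLE_PERMITTED_FIELDS.get(role)
    if permitted is None:
        raise PermissionError(f"unknown role: {role}")
    return {k: v for k, v in record.items() if k in permitted}

record = {"name": "A. Person", "national_id": "12345678901",
          "benefit_status": "active"}
# A trend analyst sees only non-identifying data.
view = filter_record("trend_analyst", record)
```

Enforcing the filter inside the application layer, rather than granting direct database access, matches the guideline's second example and also supports the least-privilege control described below.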
2. Separation of Duties. Organizations are required to enforce separation of duties for duties involving access to OII. For example, the users of de-identified OII data should not also be in roles that permit them to access the information needed to re-identify the records.

3. Least Privilege. Organizations are required to enforce the most restrictive set of rights/privileges or accesses needed by users (or processes acting on behalf of users) for the performance of specified tasks. Concerning OII, the organization is required to ensure that users who must access records containing OII have access only to the minimum amount of OII data, along with only those privileges (e.g., read, write, execute) that are necessary to perform their job duties.

4. Remote Access. Organizations are required to prohibit or strictly limit remote access to OII. If remote access is permitted, the organization is required to ensure that the communications are encrypted.

5. Access Control for Mobile Devices. Organizations are required to prohibit or strictly limit access to OII from portable and mobile devices, such as laptops, cell phones, and personal digital assistants (PDAs), which are generally higher-risk than non-portable devices (e.g., desktop computers at the organization's facilities). Some organizations choose to forbid all telework and remote access involving higher-impact instances of OII so that the information will not leave the organization's physical boundaries. If access is permitted, the organization is required to ensure that the devices are properly secured and to regularly scan the devices to verify their security status (e.g., antivirus software enabled and up to date, operating system fully patched).

6. Auditable Events. Organizations are required to monitor events that affect the confidentiality of OII, such as unauthorized access to OII.

7. Audit Monitoring, Analysis, and Reporting.
Organizations are required to regularly review and analyze information system audit records for indications of inappropriate or unusual activity affecting OII, investigate suspicious activity or suspected violations, report findings to the appropriate officials, and take necessary actions.

8. User Identification and Authentication. Users are required to be uniquely identified and authenticated before accessing OII. The strength requirement for the authentication mechanism depends on the impact level of the OII and of the system as a whole. Organizations must allow remote access only with two-factor authentication, where one of the factors is provided by a device separate from the computer gaining access, and must also use a time-out function for remote access and mobile devices, requiring user re-authentication after thirty minutes of inactivity.

9. Media Access. Organizations are required to restrict access to information system media containing OII, including digital media (e.g., CDs, USB flash drives, backup tapes) and non-digital media (e.g., paper, microfilm). This could also include portable and mobile devices with a storage capability.

10. Media Marking. Organizations are required to label information system media and output containing OII to indicate how it should be distributed and handled. Examples of labeling are cover sheets on printouts and paper labels on digital media.
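The thirty-minute inactivity time-out required under User Identification and Authentication (item 8 above) can be sketched as follows. The session structure is a hypothetical illustration; a real deployment would hook this into its remote access or mobile device management software.

```python
import time

TIMEOUT_SECONDS = 30 * 60  # thirty minutes of inactivity, per item 8

class Session:
    """Tracks last activity and forces re-authentication on time-out."""

    def __init__(self):
        self.last_activity = time.monotonic()
        self.authenticated = True

    def touch(self):
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = time.monotonic()

    def check(self, now=None):
        """Invalidate the session if it has been idle too long."""
        now = time.monotonic() if now is None else now
        if now - self.last_activity >= TIMEOUT_SECONDS:
            self.authenticated = False  # user must re-authenticate
        return self.authenticated
```

Using a monotonic clock (rather than wall-clock time) avoids the time-out being defeated by changing the system date.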
11. Media Storage. Organizations are required to securely store OII, in both paper and digital forms, until the media are destroyed or sanitized using approved equipment, techniques, and procedures. One example is the use of storage encryption technologies to protect OII stored on removable media.

12. Media Transport. Organizations are required to protect digital and non-digital media and mobile devices containing OII that are transported outside the organization's controlled areas. Examples of protective measures are encrypting stored information and locking the media in a container.

13. Media Sanitization. Organizations are required to sanitize digital and non-digital media containing OII before the media are disposed of or released for reuse. An example is degaussing a hard drive, that is, applying a magnetic field to the drive to render it unusable.

14. Transmission Confidentiality. Organizations are required to protect the confidentiality of transmitted OII. This is most often accomplished by encrypting the communications or by encrypting the information before it is transmitted.

6.6 Recommendations for Developing an Incident Response Plan for Breaches Involving OII

Handling breaches involving OII is different from regular incident handling and may require additional actions by an organization. Breaches involving OII can receive considerable media attention, which can greatly harm an organization's reputation and reduce the public's trust in the organization. Moreover, affected individuals can be subject to embarrassment, identity theft, or blackmail as the result of a breach of OII. Due to these particular risks of harm, organizations are required to develop additional policies, such as determining when and how individuals should be notified, when and if a breach should be reported publicly, and whether to provide remedial services, such as credit monitoring, to affected individuals.
Organizations are required to integrate these additional policies into their existing incident response policies. Incident response plans are required to be modified to handle breaches involving OII. Incident response plans should also address how to minimize the amount of OII necessary to adequately report and respond to a breach. This standard guideline describes four phases of handling security incidents. Specific policies and procedures for handling breaches involving OII can be added to each of the following phases:
- Preparation
- Detection and analysis
- Containment, eradication, and recovery
- Post-incident activity.

This section provides additional details on OII-specific considerations for each of these four
phases.

Preparation

Preparation requires the most effort because it sets the stage for ensuring that an OII breach is handled appropriately. Organizations are required to build their OII breach response plans into their existing incident response plans. The development of OII breach response plans requires organizations to make many decisions about how to handle OII breaches, and those decisions should be used to develop policies and procedures. The policies and procedures should be communicated to the organization's entire staff through training and awareness programs. Training programs should inform employees of the consequences of inappropriate use and handling of OII. The organization should determine whether existing processes are adequate and, if not, establish a new incident reporting method for employees to report suspected or known breaches of OII. The method could be a telephone hotline, e-mail, or a management reporting structure in which employees know to contact a specific person within the management chain. Employees should be able to report any OII breach immediately, on any day and at any time. Additionally, employees should be provided with a clear definition of what constitutes an OII breach and what information needs to be reported. The following information is helpful to obtain from employees who are reporting a known or suspected OII breach:
- Person reporting the incident
- Person who discovered the incident
- Date and time the incident was discovered
- Nature of the incident
- Description of the information lost or compromised
- Storage medium from which information was lost or compromised
- Controls in place to prevent unauthorized use of the lost or compromised information
- Number of individuals potentially affected
- Whether law enforcement was contacted.

Federal agencies and organizations are required to report all known or suspected breaches of OII, in any format. To meet this obligation, organizations should proactively plan their breach notification response.
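The reporting elements listed above could be captured in a simple structured form so that no required detail is omitted. The sketch below is illustrative only; the field names and sample values are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OIIBreachReport:
    """Captures the information a reporting employee should provide."""
    reported_by: str                 # person reporting the incident
    discovered_by: str               # person who discovered the incident
    discovered_at: datetime          # date and time of discovery
    nature: str                      # nature of the incident
    description: str                 # information lost or compromised
    storage_medium: str              # medium from which data was lost
    controls_in_place: str           # controls preventing unauthorized use
    individuals_affected: int        # number potentially affected
    law_enforcement_contacted: bool  # whether law enforcement was contacted

report = OIIBreachReport(
    reported_by="Help desk",
    discovered_by="Field surveyor",
    discovered_at=datetime(2013, 1, 15, 9, 30),
    nature="Lost laptop",
    description="Survey records containing names and phone numbers",
    storage_medium="Laptop hard drive",
    controls_in_place="Full-disk encryption enabled",
    individuals_affected=250,
    law_enforcement_contacted=True,
)
```

A form like this, whether implemented on a hotline script, an intranet page, or a management reporting template, helps ensure every report contains the same minimum facts.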
An OII breach may require notification to persons external to the organization, such as law enforcement, financial institutions, affected individuals, the media, and the public. Organizations should plan in advance how, when, and to whom notifications should be made. Organizations should conduct training sessions on interacting with the media regarding incidents.
Breach notification policies should address the following elements:
- Whether breach notification is required
- Timeliness of the notification
- Source of the notification
- Contents of the notification
- Means of providing the notification
- Who receives the notification, including the public outreach response.

Additionally, organizations are required to establish a committee or person responsible for using the breach notification policy to coordinate the organization's response. The organization should also determine what circumstances require the organization to provide remedial assistance to affected individuals, such as credit monitoring services. The OII confidentiality impact level should be considered for this determination because it provides an analysis of the likelihood of harm from the loss of confidentiality for each instance of OII.

Detection and Analysis

Organizations may continue to use their current detection and analysis technologies and techniques for handling incidents involving OII. However, adjustments to incident handling processes may be needed, such as ensuring that the analysis process includes an evaluation of whether an incident involves OII. Detection and analysis should focus on both known and suspected breaches of OII.

Containment, Eradication, and Recovery

Existing technologies and techniques for containment, eradication, and recovery are required to be used for breaches involving OII. However, changes to incident handling processes may be needed, such as performing additional media sanitization steps when OII needs to be deleted from media during recovery. Particular attention should be paid to using proper forensics techniques to ensure the preservation of evidence of intentional criminal acts. Additionally, it is important to determine whether OII was accessed and how many records or individuals were affected.
6.7 Post-Incident Activity

As with other security incidents, information learned through detection, analysis, containment, and recovery is required to be collected for sharing within the organization and with NITDA or any available Emergency Response center to help protect against future incidents. The OII breach response plan should be continually updated and improved based on the lessons learned during each incident. Additionally, the organization is required to use its OII breach response policy to determine whether it should provide affected individuals with remedial assistance, such as credit monitoring.
Exercises involving OII scenarios within an organization provide an inexpensive and effective way to build the skills necessary to identify potential issues with how the organization identifies and safeguards OII. Individuals who participate in these exercises are presented with a brief OII scenario and a list of general and specific questions related to the scenario. After reading the scenario, the group then discusses each question and determines the most appropriate response for their organization. The goal is to determine what the participants would really do and to compare that with policies, procedures, and generally recommended practices to identify any discrepancies or deficiencies and decide upon required mitigation techniques.
Section Six

Part 1: Standards on Securing Public Web Server

6.1 Purpose

This section of the document:
1) Sets minimum standards to be adopted by all organizations that do business in Nigeria for securing World Wide Web (WWW) infrastructure as a system for exchanging information over the Internet. At the most basic level, the Web can be divided into two principal components:
a) Web Server: the application/machine that makes information available over the Internet (in essence, publishes information)
b) Web Browser (client): the system used to access and display the information stored on Web servers
2) Prescribes minimum information security requirements for the management, operation, and technical controls for information in each category.
3) Provides actionable policy on Web server components:
a) Web application
b) Application Service Provider
c) DMZ issues

6.2 Web Server Policy

Web applications must be subjected to security assessments based on the following criteria:
- New or Major Application Release: shall be subjected to a full assessment prior to approval of the change control documentation and/or release into the live environment.
- Third Party or Acquired Web Application: shall be subjected to a full assessment, after which it will be bound to policy requirements.
- Point Releases: shall be subjected to the required assessment level based on the risk of the changes to the application functionality and/or architecture.
- Patch Releases: shall be subjected to the required assessment level based on the risk of the changes to the application functionality and/or architecture.
- Emergency Releases: an emergency release will be allowed to forgo security assessments and will carry the assumed risk until such time as a proper assessment can be carried out.

6.3 Web Server Risk

Security issues that are discovered during assessments will be mitigated based upon the following risk levels:
- High: any high-risk issue must be fixed immediately, or other mitigation strategies must be put in place to limit exposure before deployment. Applications with high-risk issues are subject to being taken off-line or denied release into the live environment.
- Medium: medium-risk issues should be reviewed to determine what is required to mitigate them, and scheduled accordingly. Applications with medium-risk issues may be taken off-line or denied release into the live environment based on the number of issues, and whether multiple issues increase the risk to an unacceptable level. Issues should be fixed in a patch/point release unless other mitigation strategies will limit exposure.
- Low: each issue should be reviewed to determine what is required to correct it, and scheduled accordingly.

Remediation validation testing will be required to validate the fix and/or mitigation strategies for any discovered issues of Medium risk level or greater.

6.4 General Configuration Standard

All equipment must comply with the following configuration standards:
- Hardware, operating systems, services and applications must be approved by the proper or designated authority as part of the pre-deployment review phase.
- Operating system configuration must be done according to the secure host and router installation and configuration standards.
- All patches/hot-fixes recommended by the equipment vendor must be installed. This applies to all services installed, even though those services may be temporarily or permanently disabled. Administrative owner groups must have processes in place to stay current on required patches/hot-fixes.
- Services and applications not serving business requirements must be disabled.
- Trust relationships between systems may only be introduced according to business requirements, must be documented, and must be approved.
- Services and applications not for general access must be restricted by access control lists.
- Insecure services or protocols must be replaced with more secure equivalents whenever such exist.
- Remote administration must be performed over secure channels (e.g., encrypted network connections using SSH or IPsec) or console access independent from the DMZ networks. Where a methodology for secure channel connections is not available, one-time passwords (DES/SofToken) must be used for all access levels.
- All host content updates must occur over secure channels.
- Security-related events must be logged and audit trails saved to approved logs. Security-related events include (but are not limited to) the following:
  - User login failures
  - Failure to obtain privileged access
  - Access policy violations
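As an illustration of the event-logging requirement above, the sketch below shows how the three named event classes might be recorded to an audit trail using Python's standard logging module. The event names, log format, and in-memory store are hypothetical, not mandated by this standard; a real deployment would write to an approved, protected log destination.

```python
import logging

# Audit logger; the in-memory list stands in for the approved log store.
audit_log = logging.getLogger("security.audit")
audit_log.setLevel(logging.INFO)

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(self.format(record))

handler = ListHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit_log.addHandler(handler)

def log_event(event_type, user, detail):
    """Record a security-related event to the audit trail."""
    audit_log.warning("event=%s user=%s detail=%s", event_type, user, detail)

# The three event classes named in the standard:
log_event("LOGIN_FAILURE", "jdoe", "bad password")
log_event("PRIV_ESCALATION_FAILURE", "jdoe", "sudo denied")
log_event("ACCESS_POLICY_VIOLATION", "jdoe", "read attempt on restricted file")

assert len(records) == 3
```

In practice the handler would forward events to a remote, append-only log server so that an intruder on the host cannot erase the trail.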
Part 2: Guidelines on Securing Public Web Server

6.5 Guidelines

Unfortunately, Web servers are often the most targeted and attacked hosts on organizations' networks. As a result, it is essential to secure Web servers and the network infrastructure that supports them. The following key guidelines are prescribed to ALL STAKEHOLDERS for maintaining a secure Web presence.

Deployment of Public Web Server

Because it is much more difficult to address security once deployment and implementation have occurred, security should be considered from the initial planning stage. Organizations are more likely to make decisions about configuring computers appropriately and consistently when they develop and use a detailed, well-designed deployment plan. Developing such a plan will support Web server administrators in making the inevitable tradeoff decisions between usability, performance, and risk. Organizations must:
1. Consider the human resource requirements for both the deployment and operational phases of the Web server and supporting infrastructure.
2. Implement required security management practices and controls when maintaining and operating a secure Web server.
3. Ensure that Web server operating systems are deployed, configured, and managed to meet the security requirements of the organization.
4. Ensure that the Web server application is deployed, configured, and managed to meet the security requirements of the organization.
5. Take steps to ensure that only required content is published on a Web site. Some generally accepted examples of what should not be published, or must be carefully examined and reviewed before publication on a public Web site, are:
a) Classified or proprietary information
b) Information on the composition or preparation of hazardous materials, toxins, Improvised Explosive Devices, etc.
c) Sensitive information relating to national security.
d) Medical records
e) An organization's detailed physical and information security safeguards
f) Details about an organization's network and information system infrastructure (e.g., address ranges, naming conventions, access numbers)
g) Information that specifies or implies physical security vulnerabilities
h) Detailed plans, maps, diagrams, aerial photographs, and architectural drawings of organizational buildings, properties, or installations
i) Any sensitive information about individuals, such as object identifiable information (OII), that might be subject to either National, state or, in some instances, international privacy laws.
6. Ensure that required steps are taken to protect Web content from unauthorized access or modification. Examples of resource control practices include:
a) Install or enable only necessary services.
b) Install Web content on a dedicated hard drive or logical partition.
c) Limit uploads to directories that are not readable by the Web server.
d) Define a single directory for all external scripts or programs executed as part of Web content.
e) Disable the use of hard or symbolic links.
f) Define a complete Web content access matrix that identifies which folders and files within the Web server document directory are restricted and which are accessible (and by whom).
g) Disable directory listings.
h) Use user authentication, digital signatures, and other cryptographic mechanisms as required.
i) Use host-based intrusion detection systems (IDS), intrusion prevention systems (IPS), and/or file integrity checkers to detect intrusions and verify Web content.
j) Protect each backend server (e.g., database server, directory server) from command injection attacks at both the Web server and the backend server.
7. Use active content judiciously after balancing the benefits gained against the associated risks.
8. Use authentication and cryptographic technologies as required to protect any sensitive data.
9. Employ their network infrastructure to help protect their public Web servers.
10. Commit to the continuous maintenance of the security of public Web servers to ensure continued security.
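The Web content access matrix of practice 6(f) above can be as simple as a table mapping each folder under the document root to the roles allowed to access it, with everything else denied. The sketch below is illustrative only: the folder names and roles are invented, and a real matrix would be enforced by the Web server's own access controls.

```python
# Hypothetical access matrix: folder under the document root -> roles allowed.
# An empty set means the folder is never served directly by the Web server.
ACCESS_MATRIX = {
    "/public": {"anonymous", "staff", "admin"},
    "/staff-only": {"staff", "admin"},
    "/admin": {"admin"},
    "/scripts": set(),  # executed server-side only, never served
}

def is_allowed(folder, role):
    """Deny by default: unknown folders and unlisted roles are refused."""
    return role in ACCESS_MATRIX.get(folder, set())

assert is_allowed("/public", "anonymous")
assert not is_allowed("/admin", "staff")
assert not is_allowed("/unknown", "admin")  # unknown folder -> denied
```

Keeping the matrix explicit makes it auditable: a reviewer can compare the table against the actual file system permissions and flag any folder that is reachable but not listed.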
Maintaining the security of a Web server will usually involve the following steps:
- Configuring, protecting, and analyzing log files
- Backing up critical information frequently
- Maintaining a protected authoritative copy of the organization's Web content
- Establishing and following procedures for recovering from compromise
- Testing and applying patches in a timely manner
- Testing security periodically

6.6 Web Application Implementation Guidelines

This policy covers all web application security assessments requested by any individual, group or department for the purposes of maintaining the security posture, compliance, risk management, and change control of technologies in use at the MDA. All web application security assessments must be performed by delegated security personnel either employed or contracted by the MDA. All findings must be considered confidential and must be distributed to persons on a need-to-know basis. Distribution of any findings outside of the MDA is strictly prohibited unless approved by the Chief Information Officer. Any relationships within multi-tiered applications found during the scoping phase will be included in the assessment unless explicitly limited. Limitations and the subsequent justification will be documented prior to the start of the assessment.

Application Service Provider Guidelines

This section prescribes Information Security's requirements of Application Service Providers (ASPs) that engage with the MDA. This policy applies to any use of Application Service Providers by the MDA, regardless of where they are hosted.

Policy Requirements of Project Sponsoring Organization

The ASP Sponsoring Organization must first establish that its project is an acceptable one for the ASP model, prior to engaging any additional infrastructure teams within the MDA or ASPs external to the organization. The person/team wanting to use the ASP service must confirm that the ASP chosen to host the application or project complies with this policy. The Business Function to be outsourced must be evaluated against the following:
1. The requester must go through the ASP engagement process with the ASP designated team to ensure affected parties are properly engaged.
2.
In the event that MDA data or applications are to be manipulated by, or hosted at, an ASP's service, the ASP sponsoring organization must have written, explicit permission from the data/application owners. A copy of this permission must be provided to the required authority at the MDA (CIO).
3. The information to be hosted by an ASP must fall under the "Minimal" or "More Sensitive" categories. Information that falls under the "Most Sensitive" category may not be outsourced to an ASP.
4. If the ASP provides confidential information to the MDA, the ASP sponsoring organization is responsible for ensuring that any obligations of confidentiality are satisfied. This includes information contained in the ASP's application. The MDA's legal services department must be contacted for further guidance if questions about third-party data arise.

Projects that do not meet these criteria may not be deployed to an ASP.

The Internet DMZ Equipment Guidelines

The purpose of this policy is to define guidelines for the MDAs in making operational decisions on all equipment owned and/or operated by MDAs located outside the MDAs' corporate Internet firewalls. These guidelines are designed to minimize the potential exposure to the MDA from the loss of sensitive or confidential data, intellectual property, damage to public image, etc., which may follow from unauthorized use of MDA resources.

Devices that are Internet-facing and outside the MDA firewall are considered part of the "demilitarized zone" (DMZ) and are subject to this policy. These devices (network and host) are particularly vulnerable to attack from the Internet since they reside outside the corporate firewalls. The policy defines the following standards:
- Ownership responsibility
- Secure configuration requirements
- Operational requirements
- Change control requirements

All equipment or devices deployed in a DMZ owned and/or operated by an MDA (including hosts, routers, switches, etc.) and/or registered in any Domain Name System (DNS) domain owned by an MDA must follow this policy. This policy also covers any host device outsourced to or hosted by external/third-party service providers, if that equipment resides in the "MDA.gov.ng" domain or appears to be owned by an MDA.
All new equipment which falls under the scope of this policy must be configured according to the referenced configuration documents.

6.7 General Security Concepts

The practices prescribed in this document are designed to help mitigate the risks associated with public Web servers. When addressing Web server security issues, it is an excellent idea to keep in mind the following general information security principles:
Simplicity: Security mechanisms (and information systems in general) should be as simple as possible. Complexity is at the root of many security issues.

Fail-Safe: If a failure occurs, the system should fail in a secure manner, i.e., security controls and settings remain in effect and are enforced. It is usually better to lose functionality rather than security.

Complete Mediation: Rather than providing direct access to information, mediators that enforce access policy should be employed. Common examples of mediators include file system permissions, proxies, firewalls, and mail gateways.

Open Design: System security should not depend on the secrecy of the implementation or its components. Security through obscurity is not reliable.

Separation of Privilege: Functions, to the degree possible, should be separate and provide as much granularity as possible. The concept can apply to both systems and to operators and users. In the case of systems, functions such as read, edit, write, and execute should be separate. In the case of system operators and users, roles should be as separate as possible. For example, if resources allow, the role of system administrator should be separate from that of the security administrator.

Least Privilege: This principle dictates that each task, process, or user is granted the minimum rights required to perform its job. By applying this principle consistently, if a task, process, or user is compromised, the scope of damage is constrained to the limited resources available to the compromised entity.

Psychological Acceptability: Users should understand the necessity of security. This can be provided through training and education. In addition, the security mechanisms in place should present users with sensible options that give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they may devise ways to work around or compromise them.
The objective is not to weaken security so it is understandable and acceptable, but to train and educate users and to design security mechanisms and policies that are usable and effective.

Least Common Mechanism: When providing a feature for the system, it is best to have a single process or service gain some function without granting that same function to other parts of the system. The ability for the Web server process to access a back-end database, for instance, should not also enable other applications on the system to access the back-end database.

Defense-in-Depth: Organizations should understand that a single security mechanism is generally insufficient. Security mechanisms (defenses) need to be layered so that compromise of a single security mechanism is insufficient to compromise a host or network. No silver bullet exists for information system security.

Work Factor: Organizations should understand what it would take to break the system or network's security features. The amount of work necessary for an attacker to break the system or network should exceed the value that the attacker would gain from a successful compromise.

Compromise Recording: Records and logs should be maintained so that if a compromise does occur, evidence of the attack is available to the organization. This information can assist in securing the network and host after the compromise and aid in identifying the methods and exploits used by the attacker. This information can be used to better secure the host or network in the future. In addition, these records and logs can assist organizations in identifying and prosecuting attackers.
Section Seven

Part 1: Standards on Firewalls and Firewall Policy

7.1 Purpose

This section of the document:
A. Sets minimum standards to be adopted by all organizations that do business in Nigeria for the implementation of firewall solutions and firewall policy to safeguard their network infrastructure, and for organizations to understand the capabilities of firewall technologies and firewall policies. It provides practical, real-world guidance on developing firewall policies and selecting, configuring, testing, deploying, and managing firewalls.
B. Provides actionable policy on firewall guidelines:
- Policy based on IP address and protocol
- Policy based on application
- VPN policy
- Malicious software protection and antivirus policy

Note: This document has been created primarily for technical information technology (IT) personnel such as network, security, and system engineers and administrators who are responsible for firewall design, selection, deployment, and management. Other IT personnel with network and system security responsibilities may also find this document to be useful. The content assumes some basic knowledge of networking and network security.

7.2 The Placement of Firewalls within the Network

Although firewalls at a network's perimeter provide some measure of protection for internal hosts, in many cases additional network protection is required. Network firewalls are not able to recognize all instances and forms of attack, allowing some attacks to penetrate and reach internal hosts, and attacks sent from one internal host to another may not even pass through a network firewall. Because of these and other factors, network designers must include firewall functionality at places other than the network perimeter to provide an additional layer of security where needed.

7.3 Architecture with Multiple Layers of Firewalls

There is no limitation on where a firewall can be placed in a network.
While firewalls should be at the edge of a logical network boundary, creating an inside and outside on either side of the firewall, a network administrator may wish to have additional boundaries within the network and deploy additional firewalls to establish such boundaries.

Firewall policy should be maintained and updated frequently as classes of new attacks or vulnerabilities arise, or as the organization's needs regarding network applications change. This should make the process of creating a firewall ruleset less error-prone and more verifiable, since the ruleset can be compared to the applications matrix. The policy should also include specific guidance on how to address changes to the ruleset.

Generally, firewalls should block all inbound and outbound traffic that has not been expressly permitted by the firewall policy, that is, traffic that is not needed by the organization. This practice, known as deny by default, decreases the risk of attack and can also reduce the volume of traffic carried on the organization's networks. Because of the dynamic nature of hosts, networks, protocols, and applications, deny by default is a more secure approach than permitting all traffic that is not explicitly forbidden.

7.4 Policies Based on IP Addresses and Protocols

Firewall policies should only allow necessary IP protocols through. Examples of commonly used IP protocols, with their IP protocol numbers, are ICMP (1), TCP (6), and UDP (17). Other IP protocols, such as the IPsec components Encapsulating Security Payload (ESP) (50) and Authentication Header (AH) (51), and routing protocols, may also need to pass through firewalls. These necessary protocols should be restricted whenever possible to the specific hosts and networks within the organization with a need to use them. By permitting only necessary protocols, all unnecessary IP protocols are denied by default.

7.5 IP Addresses and Other IP Characteristics

Firewall policies should only permit required source and destination IP addresses to be used. Specific recommendations for IP addresses include:
a. Traffic with invalid source or destination addresses should always be blocked, regardless of the firewall location. Examples of relatively common invalid IPv4 addresses are 127.0.0.0 to 127.255.255.255 (also known as the localhost addresses) and 0.0.0.0 (interpreted by some operating systems as a localhost or a broadcast address). These have no legitimate use on a network.
b.
Traffic with an invalid source address for incoming traffic or destination address for outgoing traffic (an external address) should be blocked at the network perimeter.
c. Traffic with a private destination address for incoming traffic or source address for outgoing traffic (an internal address) should be blocked at the network perimeter. Perimeter devices can perform address translation services to permit internal hosts with private addresses to communicate through the perimeter, but private addresses should not be passed through the network perimeter.

Incoming traffic with a destination address of the firewall itself should be blocked unless the firewall is offering services for incoming traffic that require direct connections, for example, if the firewall is acting as an application proxy.

Organizations should also block the following types of traffic at the perimeter:
I. Traffic containing IP source routing information, which allows a system to specify the routes that packets will employ while traveling from source to destination. This could potentially permit an attacker to construct a packet that bypasses network security controls.
II. Traffic containing directed broadcast addresses, which are broadcast addresses that are not in the same subnet as the originator.
III. Firewalls at the network perimeter should block all incoming traffic to networks and hosts that should not be accessible from external networks.
IV. These firewalls should also block all outgoing traffic from the organization's networks and hosts that should not be permitted to access external networks.

IPv6 is a new version of IP that is increasingly being deployed. Although IPv6's internal format and address length differ from those of IPv4, many other features remain the same, and some of these are relevant to firewalls. For the features that are the same between IPv4 and IPv6, firewalls should work the same. For example, blocking all inbound and outbound traffic that has not been expressly permitted by the firewall policy should be done regardless of whether the traffic has an IPv4 or IPv6 address. Every organization that has any IPv6 traffic coming into its internal network needs a firewall that is capable of filtering this kind of traffic. These firewalls should have the following capabilities:
a. The firewall should be able to use IPv6 addresses in all filtering rules that use IPv4 addresses.
b. The administrative interface should allow administrators to clone IPv4 rules to IPv6 addresses to make administration easier.
c. If the firewall can filter based on DNS lookup of domain names, it needs to use AAAA records (IPv6 address records) in the same way as A records (those used for IPv4 addresses).
d. The firewall needs to be able to filter ICMPv6, as specified in IETF RFC 4890, Recommendations for Filtering ICMPv6 Messages in Firewalls.

7.6 TCP and UDP

TCP and UDP are used by applications.
An application server typically listens on a fixed TCP or UDP port, while application clients typically use any of a wide range of ports. As with other aspects of firewall rulesets, deny-by-default policies should be used for incoming TCP and UDP traffic.

To prevent malicious activity, firewalls at the network perimeter should deny all incoming and outgoing ICMP traffic except for those types and codes specifically permitted by the organization. For ICMP in IPv4, ICMP type 3 messages ("destination unreachable") should not be filtered because they are used for important network diagnostics. For ICMP in IPv6, many types of messages must be allowed in specific circumstances to enable various IPv6 features.

7.7 IPsec Protocols

The ESP and AH protocols are used for IPsec VPNs, and a firewall that blocks these protocols will not allow IPsec VPNs to pass. While blocking ESP can hinder the use of encryption to protect sensitive data, it can also force users who would normally encrypt their data with ESP to allow it to be inspected, for example, by a stateful inspection firewall or an application-layer gateway.
Organizations should block ESP and AH except to and from specific addresses on the internal network, namely those of IPsec gateways that are allowed to be VPN endpoints. Enforcing this policy will require people inside the organization to obtain the required approval to open ESP and/or AH access to their IPsec routers. This will also reduce the amount of encrypted traffic coming from inside the network that cannot be examined by network security controls.

7.8 Policies Based on Applications

Most early firewall work involved simply blocking unwanted or suspicious traffic at the network boundary. Inbound application proxies take a different approach: they let traffic destined for a particular server into the network, but capture that traffic in a server that processes it like a port-based firewall. The application proxy approach provides an additional layer of security for incoming traffic by validating some of the traffic before it reaches the desired server. An application proxy prevents the server from having direct access to the outside network. Inbound application proxies should be used in front of any server that does not have sufficient security features to protect it from application-specific attacks. The main considerations when deciding whether or not to use an inbound application proxy are:
a. Is a suitable application proxy available?
b. Is the server already sufficiently protected by existing firewalls?
c. Can the main server remove malicious content as effectively as the application proxy?
d. Is the latency caused by the proxy acceptable for the application?
e. How easy is it to update the filtering rules on the main server and the application proxy to handle newly developed threats?

7.9 Virtual Private Network (VPN) Policy

The purpose of this policy is to provide guidelines for Remote Access IPSec or L2TP Virtual Private Network (VPN) connections to the MDA corporate network.
This policy applies to all MDA employees, contractors, consultants, temporary staff, and other workers, including all personnel affiliated with third parties utilizing VPNs to access the MDA network. This policy applies to implementations of VPN that are directed through an IPSec Concentrator.

Policy

Approved MDA employees and authorized third parties (customers, vendors, etc.) must utilize the benefits of VPNs, which are a "user managed" service. This means that the user is responsible for selecting an Internet Service Provider (ISP), coordinating installation, installing any required software, and paying associated fees. Additionally:
1. It is the responsibility of employees with VPN privileges to ensure that unauthorized users are not allowed access to MDA internal networks.
2. VPN use must be controlled using either one-time password authentication, such as a token device, or a public/private key system with a strong passphrase.
3. When actively connected to the corporate network, VPNs must force all traffic to and from the PC over the VPN tunnel: all other traffic must be dropped.
4. Dual (split) tunneling must NOT be permitted; only one network connection must be allowed.
5. VPN gateways must be set up and managed by MDA network operational groups.
6. All computers connected to MDA internal networks via VPN or any other technology must use the most up-to-date anti-virus software that is the corporate standard; this includes personal computers.
7. VPN users must be automatically disconnected from the MDA's network after thirty minutes of inactivity. The user must then log on again to reconnect to the network. Pings or other artificial network processes are not to be used to keep the connection open.
8. The VPN concentrator must be limited to an absolute connection time of 24 hours.
9. Users of computers that are not MDA-owned equipment must configure the equipment to comply with the MDA's VPN and network policies.
10. Only approved VPN clients must be used.
11. By using VPN technology with personal equipment, users must understand that their machines are a de facto extension of the MDA's network, and as such are subject to the same rules and regulations that apply to MDA-owned equipment, i.e., their machines must be configured to comply with any Security Policies.

Malicious Application and Virus Policy and Guidelines

Agencies must not develop an internal policy with requirements lower than the minimum requirements listed in this policy. For the purpose of this policy, MDA refers to any government entity, including ministries, agencies, departments, boards and councils, or other entities in all branches of government.
Anti-virus, anti-spyware and firewall software must be deployed on all workstations, portable computers, servers and other computing devices that attach to the MDAs' networks. This policy applies to all MDAs and other entities, including third-party business relationships that require access to non-public State resources. This includes, but is not limited to, desktop computers, laptop computers, proxy servers, mobile devices and any file and print servers. In addition, all gateway providers must provide malware checking and protection for messages processed by the gateway.

Routine monitoring of networks can reveal patterns of non-standard traffic that are indicative of many types of malware. If atypical traffic is detected, the IT staff of the Department of Information must notify the agency/department CIO. Departmental IT staff must have the authority to remove or disable any device producing suspicious traffic or with an apparent virus infection, and to retain the equipment for investigation and/or forensic review as needed. This includes third-party devices.
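The routine traffic monitoring described above can, at its simplest, compare per-host connection counts against an observed baseline. The sketch below is illustrative only: the host names and threshold are invented, and real baselines would be derived from the organization's own traffic history rather than a fixed number.

```python
from collections import Counter

def flag_suspicious(connection_log, baseline=100):
    """Return hosts whose outbound connection count exceeds the baseline.

    connection_log: iterable of (source_host, destination) pairs for one
                    reporting interval.
    baseline: maximum connections considered normal per interval (an
              illustrative figure, not a mandated value).
    """
    counts = Counter(src for src, _dst in connection_log)
    return sorted(host for host, n in counts.items() if n > baseline)

# A workstation making hundreds of connections in one interval stands out
# against peers making a handful each, a common symptom of worm or bot activity.
log = [("ws-17", f"10.0.0.{i % 250}") for i in range(500)]
log += [("ws-03", "10.0.0.5"), ("ws-04", "10.0.0.6")]
assert flag_suspicious(log) == ["ws-17"]
```

A flagged host would then trigger the notification and quarantine steps the policy describes.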
At their discretion, IT staff must be empowered and encouraged to conduct audits of any PC systems that fall under this policy. Such an audit may be triggered when patterns of problems indicate that the associated standards of this policy are likely not being met.
a. Anti-malware applications must be used to protect MDA networks from malware infections and attacks.
b. All desktop and laptop computers, servers and applicable devices must have current versions of software applications designed to detect malicious software.
c. Non-MDA equipment used in the conduct of MDA business through contractual or other agreements must be certified by the relevant agency/department IT manager as having up-to-date anti-virus protection before the device is allowed to access the organization's networks.
d. Individuals accessing MDA networks must not disable or disrupt the operation of anti-virus protection on any device, nor in any way engage in practices that would introduce malicious software into the MDA's computing environment, either directly or through data exchanges and transfers.
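The certification step in item (c) above can be sketched as a simple admission check: a non-MDA device is admitted only if anti-virus protection is installed, enabled, and its signatures are recent. The seven-day freshness window and the function name are assumptions for illustration; the policy itself only requires "up-to-date" protection.

```python
from datetime import date, timedelta

# Assumed freshness window for "up-to-date" signatures (not specified
# by the policy itself).
MAX_SIGNATURE_AGE = timedelta(days=7)

def device_admissible(av_installed, av_enabled, signatures_updated_on, today):
    """Certify a non-MDA device for network access per item (c)."""
    # Anti-virus must be present and running (item d forbids disabling it).
    if not (av_installed and av_enabled):
        return False
    # Signatures must have been updated within the assumed window.
    return today - signatures_updated_on <= MAX_SIGNATURE_AGE
```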
Part 2: Guidelines on Firewalls and Firewall Policy
7.11 General Guidelines and Introduction on Firewalls and Firewall Policy
To improve the effectiveness and security of their firewalls, organizations should implement the following recommendations:
1. Create a firewall policy that specifies how firewalls should handle network traffic.
2. Conduct a risk analysis to develop a list of the types of traffic needed by the organization and how they must be secured, including which types of traffic may traverse a firewall and under what circumstances.
3. Identify all requirements that should be considered when determining which firewall to implement, e.g.:
a. Which network areas need to be protected.
b. Which types of firewall technology will be most effective for the types of traffic that require protection.
c. Possible future needs, such as plans to adopt new IPv6 technologies or virtual private networks (VPNs).
4. Create rulesets that implement the organization's firewall policy while supporting firewall performance.
5. Manage firewall architectures, policies, software, and other components throughout the life of the firewall solutions.
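Recommendation 4 can be illustrated with a tiny first-match ruleset evaluator: rules are checked in order, and any traffic no rule matches is dropped (default deny), which is the conventional posture for a policy-driven firewall. The rule fields and example ports below are hypothetical, for illustration only.

```python
# Hypothetical ruleset expressing a firewall policy: first matching rule
# wins, and unmatched traffic is denied by default.
RULES = [
    {"proto": "tcp", "port": 443, "action": "allow"},  # permit HTTPS
    {"proto": "tcp", "port": 80,  "action": "allow"},  # permit HTTP
    {"proto": "tcp", "port": 23,  "action": "deny"},   # explicitly block telnet
]

def evaluate(proto, port, rules=RULES):
    """Return the action for a packet: first match wins, else default deny."""
    for rule in rules:
        if rule["proto"] == proto and rule["port"] == port:
            return rule["action"]
    return "deny"
```

Ordering rules from most to least specific, with the implicit deny last, keeps the ruleset both correct and fast, which is the performance concern recommendation 4 raises.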
Section Eight
Part 1: Cyber Forensic Standards
8.1 Purpose
This section of the document:
a. Prescribes minimum information security requirements for the management, operational, and technical controls for information in each category.
b. Provides actionable policy on cyber forensics.
8.2 Overall Action Plan for Implementation of Cyber Forensics
Overall objectives and their associated goals:
I. Increased stakeholder awareness and transfer of knowledge.
1. High levels of awareness of information security and cybercrime issues amongst users at home, in government and educational institutions, in the private sector, and amongst legal officers.
2. Increased exchange of information on information security and cybercrime at the regional and national levels.
II. Improved policy, legal and regulatory frameworks for promoting information security and addressing cybercrime.
3. Policy, legal and regulatory frameworks at the national level that are consistent with existing or developing international legal instruments, and that provide for proportionate and dissuasive sanctions, including deprivation of liberty.
III. Increased protection against cybercrime.
4. Secure information systems, networks and transactions in the public and private sectors.
5. Safe and secure environments for users, especially children and young persons.
IV. Improved detection of, and responses to, cybercrime.
6. Effective mechanisms for detection of, and responses to, cybercrime at the national and state levels, including the creation of environments that are conducive to the reporting of cybercrime.
7. Widespread adoption of, and compliance with, relevant codes of conduct and best practices at the national level.
8. Increased capacity to conduct domestic electronic investigations and to assist with transnational investigations.
Part 2: Cyber Forensic Guidelines
8.3 General Guidelines and Overview of Cyber Forensics
Cyber forensics is the process of extracting information and data from computer storage and communication media and guaranteeing its accuracy and reliability. The challenge, of course, is actually finding this data, collecting it, preserving it, and presenting it in a manner acceptable in a court of law. The collected data (cyber evidence) must be transformed (encrypted) so that it is not exposed to unauthorized entities on its way to the court. A cyber-forensics tool must also keep track of all activities of operators and all the data they inspect; this is to manage the risk of private data being misused by an operator of the cyber-forensics device. Traffic logs of a cyber-forensics network must be authenticated and checked for integrity. Such technology must be purchased and managed by each MDA in a way that ensures national security officials and the necessary stakeholders have access to the source code for later verification.

The Tool Capabilities and Features:
The forensics devices and tools must be capable of storing all the traffic of a node irrespective of its protocol, even if it is not e-mail, chat, etc.; criminals may use non-standard protocols. The hand-over of the retained data, along with searching instructions, to distributed forensics devices MUST be encrypted with the established and defined standards. Electronic evidence is fragile, and it must be encrypted to ensure the integrity of the data. Additionally, cyber thieves, criminals, and dishonest (and even honest) employees hide, wipe, disguise, cloak, encrypt and destroy evidence on storage media using a variety of freeware, shareware and commercially available utility programs.
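Two of the requirements above, that collected evidence must be verifiable for accuracy and that logs must be authenticated and integrity-checked, can be sketched with standard hashing primitives. The function names and the shared-key HMAC scheme are assumptions for illustration; a production tool would follow the established handover and encryption standards this section mandates.

```python
import hashlib
import hmac

def evidence_digest(data: bytes) -> str:
    """SHA-256 digest recorded at collection time and re-checked before
    presentation, so any alteration of the evidence is detectable."""
    return hashlib.sha256(data).hexdigest()

def sign_log_record(key: bytes, record: bytes) -> str:
    """HMAC tag over an operator-activity log record, so the record's
    origin and integrity can both be verified later."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify_log_record(key: bytes, record: bytes, tag: str) -> bool:
    """Constant-time check that a log record matches its recorded tag."""
    return hmac.compare_digest(sign_log_record(key, record), tag)
```

The HMAC key would be held by the MDA separately from the records themselves; otherwise an operator who can edit a log could also re-sign it.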
The issues below are among the strong points for adopting and implementing cyber forensic tools:
With the rapid growth of interest in the Internet and the intranet, network security has become a major concern to organizations throughout the world. The fact that the information and tools needed to penetrate the security of corporate networks are widely available has only increased that concern. Because of this increased focus on network security, network administrators often spend more effort protecting their networks than on actual network setup and administration. In response to the increasing threats, network administrators must constantly keep abreast of the wide range of security issues confronting today's world.
Network security and traffic logging must be a continuous process built around a security policy. A continuous security policy is most effective because it promotes retesting and reapplying updated security measures on a continuous basis. Most MDA and private networks contain some information that should not be shared with outside users on the Internet, so it must be protected, and there must be tools to investigate cybercrime and attacks. Some MDA, private/corporate and public networks need to censor or control the internet content accessible by both employees and the public. Due to the nature of cybercrime and the probability of successful prosecution, MDAs are required to log all employee/customer/user communications, including e-mail (and webmail) and instant-message conversations, carried over the MDA's network.

Handling of Retained Data
Due to the sensitive nature of the recorded network data, MDAs MUST treat Retained Data as Highly Confidential, with the necessary encryption protection. MDAs must create internal processes and requirements for the handling, delivery and associated issues of retained data of telecommunications traffic and users/subscribers. A set of requirements relating to handover interfaces for retained traffic and subscriber data must be in place and backed by a legal request.

The Data Handover Interface
There MUST be an agreement on a handover interface for the request and delivery of Retained Data between the requesting government authorities and the organization. Handover requirements and a handover specification must be specified for Retained Data. Considerations for both the requesting of retained data and the delivery of the results must be defined.

The Security Framework
A security framework for the Lawful Interception and Retained Data environment MUST be laid out to ensure the integrity of the retained data. This includes defining a security framework for securing the Lawful Interception and Retained Data environment of the communication service provider and the
handover of the information. The Lawful Interception and Retained Data security reporting process MUST be defined clearly, covering the various kinds of administrative, request and response information exchanged with the Issuing Authority and the responsible organization at the communication service provider and internet service provider on Retained Data matters. This process must cover the following steps:
o Retained data information from the communication service provider and internet service provider to the Receiving Authority
o Cross-border transfers and cooperation with other countries must be defined, subject to the corresponding national laws and/or international agreements.

Data Exchange Techniques
Data exchange techniques and equipment must be agreed upon by all stakeholders and preferably pre-approved to ensure standardization and conformity. Also, the cyber forensic tools MUST support all of the techniques below, on top of the standard TCP/IP stack, to ensure format compatibility among the stakeholders:
o Direct TCP with BER encoding derived from ASN.1
o HTTP with XML encoding

Backward and Update Compatibility
Lawful Interception tools must maintain the Retained Data standards and a mechanism for annual review and update of the set standards, to be sure that all new threats are accounted for under the security guidelines:
o Add synchronous multimedia services
o Add new internet services as technology progresses
o Add new parameters in line with national requirements
o Annually organize an interoperability test, including, if required, a plug-test for checking the specifications
The use of the handover standard and process should be promoted in local, state, and national conferences and workshops.

8.4 Guidelines and Policy for Acceptable Encryption
The purpose of this policy is to provide guidance that limits the use of encryption to those algorithms that have received substantial public review and have been proven to work effectively. Additionally, this policy provides direction to ensure that national regulations are followed and that legal authority is granted for the dissemination and use of encryption technologies. This policy applies to all MDA employees and affiliates. All MDA encryption must be done using approved cryptographic modules. Recommended ciphers include AES-256, Triple DES and RSA. Symmetric cryptosystem key lengths must be at least 128 bits. Asymmetric cryptosystem keys must be of a length that yields equivalent strength.
MDA key-length requirements shall be reviewed annually as part of the yearly security review and upgraded as technology allows. The use of proprietary encryption algorithms is not allowed for any purpose unless they have been reviewed by qualified experts outside the vendor in question and approved by NITDA.
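The key-length rules above can be expressed as a simple compliance check. The policy states the 128-bit symmetric minimum explicitly; the 3072-bit RSA figure below is an assumption based on the common equivalence to 128-bit symmetric strength, since the policy itself says only "equivalent strength".

```python
# Minimums from the policy; MIN_RSA_BITS is an assumed equivalence figure
# for 128-bit symmetric strength, not a number stated in the policy.
MIN_SYMMETRIC_BITS = 128
MIN_RSA_BITS = 3072

def key_length_ok(kind: str, bits: int) -> bool:
    """Check a key length against the (assumed) policy minimums."""
    if kind == "symmetric":
        return bits >= MIN_SYMMETRIC_BITS
    if kind == "rsa":
        return bits >= MIN_RSA_BITS
    return False  # unknown key types fail the check by default
```

A check like this belongs in the annual review the policy mandates, so the constants can be raised as technology allows.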
9.0 Definition of Terms
Networks: A computer network is a group of computer systems and other computing hardware devices that are linked together through communication channels to facilitate communication and resource-sharing among a wide range of users. Networks are commonly categorized based on their characteristics.
Information Infrastructure: All people, processes, procedures, tools, facilities and technology that support the creation, use, transport, storage and destruction of information.
Object Identifiable Information: OII is defined as information which can be used to distinguish or trace an individual's identity, such as name, national ID number, biometric records, etc.
Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. A loss of confidentiality is the unauthorized disclosure of information.
Integrity: Guarding against improper information modification or destruction; this includes ensuring information non-repudiation and authenticity. A loss of integrity is the unauthorized modification or destruction of information.
Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the disruption of access to or use of information or an information system.
Survivability: Ensuring that services continue and that business operations survive a security breach. Survivability is lost in the case of complete disruption of operations and discontinuation of services.
Authenticity: The data (source), security level, user, time and location are required to be authenticated.
Information System: An information system is a discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information. Information resources include information and related resources, such as personnel, equipment, funds, and information technology.
Anonymous: Something that cannot be named or identified; the term derives from a Greek word meaning "without a name". Similarly, anonymized information is defined as previously identifiable information that has been de-identified and for which a code or other link no longer exists.
Information Technology Acceptable Use Policy
Information Technology Acceptable Use Policy Overview The information technology resources of Providence College are owned and maintained by Providence College. Use of this technology is a privilege, not
DATA SECURITY AGREEMENT. Addendum # to Contract #
DATA SECURITY AGREEMENT Addendum # to Contract # This Data Security Agreement (Agreement) is incorporated in and attached to that certain Agreement titled/numbered and dated (Contract) by and between the
BEFORE THE BOARD OF COUNTY COMMISSIONERS FOR MULTNOMAH COUNTY, OREGON RESOLUTION NO. 05-050
BEFORE THE BOARD OF COUNTY COMMISSIONERS FOR MULTNOMAH COUNTY, OREGON RESOLUTION NO. 05-050 Adopting Multnomah County HIPAA Security Policies and Directing the Appointment of Information System Security
VIRGINIA DEPARTMENT OF MOTOR VEHICLES IT SECURITY POLICY. Version 2.
VIRGINIA DEPARTMENT OF MOTOR VEHICLES IT SECURITY POLICY Version 2., 2012 Revision History Version Date Purpose of Revision 2.0 Base Document 2.1 07/23/2012 Draft 1 Given to ISO for Review 2.2 08/15/2012
POSTAL REGULATORY COMMISSION
POSTAL REGULATORY COMMISSION OFFICE OF INSPECTOR GENERAL FINAL REPORT INFORMATION SECURITY MANAGEMENT AND ACCESS CONTROL POLICIES Audit Report December 17, 2010 Table of Contents INTRODUCTION... 1 Background...1
Legislative Language
Legislative Language SEC. 1. COORDINATION OF FEDERAL INFORMATION SECURITY POLICY. (a) IN GENERAL. Chapter 35 of title 44, United States Code, is amended by striking subchapters II and III and inserting
INFORMATION SECURITY SPECIFIC VENDOR COMPLIANCE PROGRAM (VCP) ACME Consulting Services, Inc.
INFORMATION SECURITY SPECIFIC VENDOR COMPLIANCE PROGRAM (VCP) ACME Consulting Services, Inc. Copyright 2016 Table of Contents INSTRUCTIONS TO VENDORS 3 VENDOR COMPLIANCE PROGRAM OVERVIEW 4 VENDOR COMPLIANCE
PCI DSS Requirements - Security Controls and Processes
1. Build and maintain a secure network 1.1 Establish firewall and router configuration standards that formalize testing whenever configurations change; that identify all connections to cardholder data
Summary of CIP Version 5 Standards
Summary of CIP Version 5 Standards In Version 5 of the Critical Infrastructure Protection ( CIP ) Reliability Standards ( CIP Version 5 Standards ), the existing versions of CIP-002 through CIP-009 have
Information Technology Branch Access Control Technical Standard
Information Technology Branch Access Control Technical Standard Information Management, Administrative Directive A1461 Cyber Security Technical Standard # 5 November 20, 2014 Approved: Date: November 20,
R345, Information Technology Resource Security 1
R345, Information Technology Resource Security 1 R345-1. Purpose: To provide policy to secure the private sensitive information of faculty, staff, patients, students, and others affiliated with USHE institutions,
Supplier Security Assessment Questionnaire
HALKYN CONSULTING LTD Supplier Security Assessment Questionnaire Security Self-Assessment and Reporting This questionnaire is provided to assist organisations in conducting supplier security assessments.
INCIDENT RESPONSE CHECKLIST
INCIDENT RESPONSE CHECKLIST The purpose of this checklist is to provide clients of Kivu Consulting, Inc. with guidance in the initial stages of an actual or possible data breach. Clients are encouraged
NIST Special Publication 800-60 Version 2.0 Volume I: Guide for Mapping Types of Information and Information Systems to Security Categories
NIST Special Publication 800-60 Version 2.0 Volume I: Guide for Mapping Types of Information and Information Systems to Security Categories William C. Barker I N F O R M A T I O N S E C U R I T Y Computer
NETWORK AND CERTIFICATE SYSTEM SECURITY REQUIREMENTS
NETWORK AND CERTIFICATE SYSTEM SECURITY REQUIREMENTS Scope and Applicability: These Network and Certificate System Security Requirements (Requirements) apply to all publicly trusted Certification Authorities
Odessa College Use of Computer Resources Policy Policy Date: November 2010
Odessa College Use of Computer Resources Policy Policy Date: November 2010 1.0 Overview Odessa College acquires, develops, and utilizes computer resources as an important part of its physical and educational
DIVISION OF INFORMATION SECURITY (DIS) Information Security Policy IT Risk Strategy V0.1 April 21, 2014
DIVISION OF INFORMATION SECURITY (DIS) Information Security Policy IT Risk Strategy V0.1 April 21, 2014 Revision History Update this table every time a new edition of the document is published Date Authored
Regulations on Information Systems Security. I. General Provisions
Riga, 7 July 2015 Regulations No 112 (Meeting of the Board of the Financial and Capital Market Commission Min. No 25; paragraph 2) Regulations on Information Systems Security Issued in accordance with
Policies and Compliance Guide
Brooklyn Community Services Policies and Compliance Guide relating to the HIPAA Security Rule June 2013 Table of Contents INTRODUCTION... 3 GUIDE TO BCS COMPLIANCE WITH THE HIPAA SECURITY REGULATION...
micros MICROS Systems, Inc. Enterprise Information Security Policy (MEIP) August, 2013 Revision 8.0 MICROS Systems, Inc. Version 8.
micros MICROS Systems, Inc. Enterprise Information Security Policy (MEIP) Revision 8.0 August, 2013 1 Table of Contents Overview /Standards: I. Information Security Policy/Standards Preface...5 I.1 Purpose....5
External Supplier Control Requirements
External Supplier Control Requirements Cyber Security For Suppliers Categorised as High Cyber Risk Cyber Security Requirement Description Why this is important 1. Asset Protection and System Configuration
TEMPLE UNIVERSITY POLICIES AND PROCEDURES MANUAL
TEMPLE UNIVERSITY POLICIES AND PROCEDURES MANUAL Title: Computer and Network Security Policy Policy Number: 04.72.12 Effective Date: November 4, 2003 Issuing Authority: Office of the Vice President for
