EVALUATION COPY GL275 ENTERPRISE LINUX NETWORK SERVICES RHEL6 The contents of this course and all its modules and related materials, including handouts to audience members, are copyright 2012 Guru Labs L.C. No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of Guru Labs. This curriculum contains proprietary information which is for the exclusive use of customers of Guru Labs L.C., and is not to be shared with personnel other than those in attendance at this course. This instructional program, including all material provided herein, is supplied without any guarantees from Guru Labs L.C. Guru Labs L.C. assumes no liability for damages or legal action arising from the use or misuse of contents or details contained herein. Photocopying any part of this manual without prior written consent of Guru Labs L.C. is a violation of federal law. This manual should not appear to be a photocopy. If you believe that Guru Labs training materials are being photocopied without permission, please email Alert@gurulabs.com or call 1-801-298-5227. Guru Labs L.C. accepts no liability for any claims, demands, losses, damages, costs or expenses suffered or incurred howsoever arising from or in connection with the use of this courseware. All trademarks are the property of their respective owners. Version: GL275S-R6-H00
Table of Contents

Chapter 1 SECURING SERVICES 1
Xinetd 2 Xinetd Connection Limiting and Access Control 4 Xinetd: Resource limits, redirection, logging 5 TCP Wrappers 6 The /etc/hosts.allow & /etc/hosts.deny Files 7 /etc/hosts.{allow,deny} Shortcuts 8 Advanced TCP Wrappers 9 Basic Firewall Activation 10 Netfilter: Stateful Packet Filter Firewall 11 Netfilter Concepts 12 Using the iptables Command 13 Netfilter Rule Syntax 14 Targets 15 Common match_specs 16 Connection Tracking 17 SELinux Security Framework 18 Choosing an SELinux Policy 20 SELinux Commands 21 SELinux Booleans 22 Graphical SELinux Policy Tools 23 Lab Tasks 25 1. Securing xinetd Services 26 2. Enforcing Security Policy with xinetd 29 3. Securing Services with TCP Wrappers 32 4. Securing Services with Netfilter 36 5. Troubleshooting Practice 41 6. SELinux File Contexts 45

Chapter 2 DNS CONCEPTS 1
Naming Services 2 DNS A Better Way 3 The Domain Name Space 4 Delegation and Zones 6 Server Roles 7 Resolving Names 8 Resolving IP Addresses 10 Basic BIND Administration 12 Configuring the Resolver 13 Testing Resolution 14 Lab Tasks 16 1. Configuring a Slave Name Server 17

Chapter 3 CONFIGURING BIND 1
BIND Configuration Files 2 named.conf Syntax 3 named.conf Options Block 5 Creating a Site-Wide Cache 7 rndc Key Configuration 8 Zones In named.conf 10 Zone Database File Syntax 12 SOA Start of Authority 13 A & PTR Address & Pointer Records 15 NS Name Server 16 CNAME & MX Alias & Mail Host 17 Abbreviations and Gotchas 18 $ORIGIN and $GENERATE 20 Lab Tasks 21 1. Use rndc to Control named 22 2. Configuring BIND Zone Files 24

Chapter 4 CREATING DNS HIERARCHIES 1
Subdomains and Delegation 2 Subdomains 3 Delegating Zones 4 in-addr.arpa. Delegation 5 Issues with in-addr.arpa. 6 RFC2317 & in-addr.arpa. 7 Lab Tasks 8 1. Create a Subdomain in an Existing Domain 9 2. Subdomain Delegation 13

Chapter 5 ADVANCED BIND DNS FEATURES 1
Address Match Lists & ACLs 2 Split Namespace with Views 3 Restricting Queries 4 Restricting Zone Transfers 5 Running BIND in a chroot jail 6 Dynamic DNS Concepts 7 Allowing Dynamic DNS Updates 8 DDNS Administration with nsupdate 9 Common Problems 10 Common Problems 11 Securing DNS With TSIG 12 Lab Tasks 14 1. Configuring Dynamic DNS 15 2. Securing BIND DNS 18

Chapter 6 LDAP CONCEPTS AND CLIENTS 1
LDAP: History and Uses 2 LDAP: Data Model Basics 3 LDAP: Protocol Basics 5 LDAP: Applications 6 LDAP: Search Filters 7 LDIF: LDAP Data Interchange Format 8 OpenLDAP Client Tools 10 Alternative LDAP Tools 12 Lab Tasks 13 1. Querying LDAP 14

Chapter 7 OPENLDAP SERVERS 1
Popular LDAP Server Implementations 2 OpenLDAP: Server Architecture 3 OpenLDAP: Backends 4 OpenLDAP: Replication 5 OpenLDAP: Configuration Options 6 OpenLDAP: Configuration Sections 7 OpenLDAP: Global Parameters 8 OpenLDAP: Database Parameters 10 OpenLDAP Server Tools 12 Enabling LDAP-based Login 13 System Security Services Daemon (SSSD) 15 Lab Tasks 17 1. Building An OpenLDAP Server 18 2. Enabling TLS For An OpenLDAP Server 23 3. Enabling LDAP-based Logins 26

Chapter 8 USING APACHE 1
HTTP Operation 2 Apache Architecture 4 Dynamic Shared Objects 6 Adding Modules to Apache 7 Apache Configuration Files 8 httpd.conf Server Settings 9 httpd.conf Main Configuration 10 HTTP Virtual Servers 11 Virtual Hosting DNS Implications 12 httpd.conf VirtualHost Configuration 13 Port and IP based Virtual Hosts 14 Name-based Virtual Host 15 Apache Logging 16 Log Analysis 17 The Webalizer 18 Lab Tasks 19 1. Apache Architecture 20 2. Apache Content 23 3. Configuring Virtual Hosts 25

Chapter 9 APACHE SECURITY 1
Virtual Hosting Security Implications 2 Delegating Administration 3 Directory Protection 4 Directory Protection with AllowOverride 6 Common Uses for .htaccess 7 Symmetric Encryption Algorithms 8 Asymmetric Encryption Algorithms 9 Digital Certificates 10 SSL Using mod_ssl.so 11 Lab Tasks 12 1. Using .htaccess Files 13 2. Using SSL Certificates with Apache 17

Chapter 10 APACHE SERVER-SIDE SCRIPTING ADMINISTRATION 1
Dynamic HTTP Content 2 PHP: Hypertext Preprocessor 3 Developer Tools for PHP 4 Installing PHP 5 Configuring PHP 6 Securing PHP 7 Security Related php.ini Configuration 8 Java Servlets and JSP 9 Apache's Tomcat 10 Installing Java SDK 11 Installing Tomcat Manually 12 Using Tomcat with Apache 13 Lab Tasks 14 1. CGI Scripts in Apache 15 2. Apache's Tomcat 20 3. Using Tomcat with Apache 22 4. Installing Applications with Apache and Tomcat 26

Chapter 11 IMPLEMENTING AN FTP SERVER 1
The FTP Protocol 2 Active Mode FTP 3 Passive Mode FTP 4 ProFTPD 6 Pure-FTPd 7 vsftpd 8 Configuring vsftpd 9 Anonymous FTP with vsftpd 11 Lab Tasks 12 1. Configuring vsftpd 13

Chapter 12 THE SQUID PROXY SERVER 1
Squid Overview 2 Squid File Layout 3 Squid Access Control Lists 4 Applying Squid ACLs 5 Tuning Squid & Configuring Cache Hierarchies 7 Bandwidth Metering 8 Monitoring Squid 9 Proxy Client Configuration 10 Lab Tasks 12 1. Installing and Configuring Squid 13 2. Squid Cache Manager CGI 16 3. Proxy Auto Configuration 18 4. Configure a Squid Proxy Cluster 20

Chapter 13 SAMBA CONCEPTS AND CONFIGURATION 1
Introducing Samba 2 Samba Daemons 3 NetBIOS and NetBEUI 4 Accessing Windows/Samba Shares from Linux 5 Samba Utilities 6 Samba Configuration Files 8 The smb.conf File 9 Mapping Permissions and ACLs 10 Mapping Linux Concepts 11 Mapping Case Sensitivity 12 Mapping Users 13 Sharing Home Directories 14 Sharing Printers 15 Share Authentication 16 Share-Level Access 17 User-Level Access 18 Samba Account Database 19 User Share Restrictions 21 Lab Tasks 22 1. Samba Share-Level Access 23 2. Samba User-Level Access 28 3. Samba Group Shares 32 4. Configuring Samba 36 5. Samba Home Directory Shares 39

Chapter 14 SMTP THEORY 1
SMTP 2 SMTP Terminology 3 SMTP Architecture 4 SMTP Commands 5 SMTP Extensions 6 SMTP AUTH 7 SMTP STARTTLS 8 SMTP Session 9

Chapter 15 POSTFIX 1
Postfix Features 2 Postfix Architecture 3 Postfix Components 4 Postfix Configuration 5 master.cf 7 main.cf 8 Postfix Map Types 9 Postfix Pattern Matching 10 Advanced Postfix Options 11 Virtual Domains 12 Postfix Mail Filtering 13 Configuration Commands 14 Management Commands 15 Postfix Logging 16 Logfile Analysis 17 chrooting Postfix 18 Postfix, Relaying and SMTP AUTH 19 SMTP AUTH Server and Relay Control 21 SMTP AUTH Clients 22 Postfix / TLS 23 TLS Server Configuration 24 Postfix Client Configuration for TLS 25 Other TLS Clients 26 Ensuring TLS Security 27 Lab Tasks 28 1. Configuring Postfix 29 2. Postfix Network Configuration 34 3. Postfix Virtual Host Configuration 38 4. Postfix SMTP AUTH Configuration 42 5. Postfix STARTTLS Configuration 47

Chapter 16 MAIL SERVICES AND RETRIEVAL 1
Filtering Email 2 Procmail 3 SpamAssassin 5 Bogofilter 7 Accessing Email 9 The IMAP4 Protocol 10 Dovecot POP3/IMAP Server 11 Cyrus IMAP/POP3 Server 13 Cyrus IMAP MTA Integration 14 Cyrus Mailbox Administration 16 Fetchmail 17 SquirrelMail 18 Mailing Lists 19 GNU Mailman 20 Mailman Configuration 21 Lab Tasks 23 1. Configuring Procmail & SpamAssassin 24 2. Configuring Cyrus IMAP 31 3. Dovecot TLS Configuration 38 4. Configuring SquirrelMail 43 5. Base Mailman Configuration 47 6. Basic Mailing List 50 7. Private Mailing List 56

Appendix A SENDMAIL 1
Sendmail Architecture 2 Sendmail Components 3 Sendmail Configuration 4 Sendmail Remote Configuration 6 Controlling Access 8 Sendmail Mail Filter (milter) 9 Configuring Sendmail SMTP AUTH 10 Configuring SMTP STARTTLS 11 Lab Tasks 12 1. Configuring Sendmail 13 2. Sendmail Network Configuration 18 3. Sendmail Virtual Host Configuration 20 4. Sendmail SMTP AUTH Configuration 24 5. Sendmail STARTTLS Configuration 28

Appendix B NIS 1
NIS Overview 2 NIS Limitations and Advantages 3 NIS Client Configuration 4 NIS Server Configuration 5 NIS Troubleshooting Aids 7 Lab Tasks 8 1. Configuring NIS 9 2. NIS Slave Server 13
Typographic Conventions

The fonts, layout, and typographic conventions of this book have been carefully chosen to increase readability. Please take a moment to familiarize yourself with them.

A Warning and Solution

A common problem with computer training and reference materials is the confusion of the numbers "zero" and "one" with the letters "oh" and "ell". To avoid this confusion, this book uses a fixed-width font that makes each letter and number distinct:

0 The number "zero".
1 The number "one".
O The letter "oh".
l The letter "ell".

Typefaces Used and Their Meanings

The following typeface conventions have been followed in this book:

fixed-width normal: Used to denote file names and directories. For example, the /etc/passwd file or the /etc/sysconfig/ directory. Also used for computer text, particularly command line output.

fixed-width italic: Indicates that a substitution is required. For example, the string stationX is commonly used to indicate that the student is expected to replace X with his or her own station number, such as station3.

fixed-width bold: Used to set apart commands. For example, the sed command. Also used to indicate input a user might type on the command line. For example, ssh -X station3.

fixed-width bold italic: Used when a substitution is required within a command or user input. For example, ssh -X stationX.

fixed-width underlined: Used to denote URLs. For example, http://www.gurulabs.com/.

variable-width bold: Used within labs to indicate a required student action that is not typed on the command line.

Occasional variations from these conventions occur to increase clarity. This is most apparent in the labs, where bold text is used only to indicate commands the student must enter or actions the student must perform.
Typographic Conventions

Terms and Definitions

The following format is used to introduce and define a series of terms:

deprecate: To indicate that something is considered obsolete, with the intent of future removal.
frob: To manipulate or adjust, typically for fun, as opposed to tweak.
grok: To understand. Connotes intimate and exhaustive knowledge.
hork: To break, generally beyond hope of repair.
hosed: A metaphor referring to a Cray that crashed after the disconnection of coolant hoses. Upon correction, users were assured the system was rehosed.
mung (or munge): Mash Until No Good; to modify a file, often irreversibly.
troll: To bait or provoke an argument, often targeted towards the newbie. Also used to refer to a person that regularly trolls.
twiddle: To make small, often aimless, changes. Similar to frob.

When discussing a command, this same format is also used to show and describe a list of common or important command options. For example, the following ssh options:

-X: Enables X11 forwarding. In older versions of OpenSSH that do not include -Y, this enables trusted X11 forwarding. In newer versions of OpenSSH, this enables a more secure, limited type of forwarding.
-Y: Enables trusted X11 forwarding. Although less secure, trusted forwarding may be required for compatibility with certain programs.

Representing Keyboard Keystrokes

When it is necessary to press a series of keys, the series of keystrokes will be represented without a space between each key. For example, the following means to press the "j" key three times: jjj

When it is necessary to press keys at the same time, the combination will be represented with a plus between each key. For example, the following means to press the "Ctrl," "Alt," and "Backspace" keys at the same time: Ctrl+Alt+Backspace. Uppercase letters are treated the same: Shift+A

Line Wrapping

Occasionally content that should be on a single line, such as command line input or URLs, must be broken across multiple lines in order to fit on the page. When this is the case, a special symbol is used to indicate to the reader what has happened. When copying the content, the line breaks should not be included. For example, the following hypothetical PAM configuration should only take two actual lines:

password required /lib/security/pam_cracklib.so retry=3 \
    type= minlen=12 dcredit=2 ucredit=2 lcredit=0 ocredit=2
password required /lib/security/pam_unix.so use_authtok

Representing File Edits

File edits are represented using a consistent layout similar to the unified diff format. When a line should be added, it is shown in bold with a plus sign to the left. When a line should be deleted, it is shown struck out with a minus sign to the left. When a line should be modified, it is shown twice: the old version of the line is shown struck out with a minus sign to the left, and the new version of the line is shown below the old version, bold and with a plus sign to the left. Unmodified lines are often included to provide context for the edit. For example, the following describes modification of an existing line and addition of a new line to the OpenSSH server configuration file:

File: /etc/ssh/sshd_config
  #LoginGraceTime 2m
- #PermitRootLogin yes
+ PermitRootLogin no
+ AllowUsers sjansen
  #StrictModes yes

Note that the standard file edit representation may not be used when it is important that the edit be performed using a specific editor or method. In these rare cases, the editor-specific actions will be given instead.
Lab Conventions

Lab Task Headers

Every lab task begins with three standard informational headers: "Objectives," "Requirements," and "Relevance". Some tasks also include a "Notices" section. Each section has a distinct purpose.

Objectives: An outline of what will be accomplished in the lab task.
Requirements: A list of requirements for the task. For example, whether it must be performed in the graphical environment, or whether multiple computers are needed for the lab task.
Relevance: A brief example of how concepts presented in the lab task might be applied in the real world.
Notices: Special information or warnings needed to successfully complete the lab task. For example, unusual prerequisites or common sources of difficulty.

Command Prompts

Though different distributions and shells have different prompts, all examples will use a $ prompt for commands to be run as an unprivileged user (guru or visitor), and commands with a # prompt should be run as the root user. For example:

$ whoami
guru
$ su -
Password: makeitso
# whoami
root

Occasionally the prompt will contain additional information. For example, when portions of a lab task should be performed on two different stations (always of the same distribution), the prompt will be expanded to:

stationx$ whoami
guru
stationx$ ssh visitor@stationy
visitor@stationy's password: work
stationy$ whoami
visitor

Variable Data Substitutions

In some lab tasks, students are required to replace portions of commands with variable data. Variable substitutions are represented using italic fonts. For example, X and Y. Substitutions are used most often in lab tasks requiring more than one computer. For example, if a student on station4 were working with a student on station2, the lab task would refer to stationx and stationy:

stationx$ ssh root@stationy

and each would be responsible for interpreting the X and Y as 4 and 2:

station4$ ssh root@station2

Truncated Command Examples

Command output is occasionally omitted or truncated in examples. There are two types of omissions: complete or partial. Sometimes the existence of a command's output, and not its content, is all that matters. Other times, a command's output is too variable to reliably represent. In both cases, when a command should produce output, but an example of that output is not provided, the following format is used:

$ cat /etc/passwd
... output omitted ...

In general, at least a partial output example is included after commands. When example output has been trimmed to include only certain lines, the following format is used:

$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
... snip ...
clints:x:500:500:Clint Savage:/home/clints:/bin/zsh
... snip ...
Lab Conventions

Distribution Specific Information

This courseware is designed to support multiple Linux distributions. When there are differences between supported distributions, each version is labeled with the appropriate base strings:

R: Red Hat Enterprise Linux (RHEL)
S: SUSE Linux Enterprise Server (SLES)
U: Ubuntu

The specific supported version is appended to the base distribution strings, so for Red Hat Enterprise Linux version 6 the complete string is: R6.

Certain lab tasks are designed to be completed on only a subset of the supported Linux distributions. If the distribution you are using is not shown in the list of supported distributions for the lab task, then you should skip that task.

Certain lab steps are only to be performed on a subset of the supported Linux distributions. In this case, the step will start with a standardized string that indicates which distributions the step should be performed on. When completing lab tasks, skip any steps that do not list your chosen distribution. For example:

1) [R4] This step should only be performed on RHEL4. Because of a bug in RHEL4's Japanese fonts...

Sometimes commands or command output is distribution specific. In these cases, the matching distribution string will be shown to the left of the command or output. For example:

$ grep -i linux /etc/*-release | cut -d: -f2
[R6] Red Hat Enterprise Linux Server release 6.0 (Santiago)
[S11] SUSE Linux Enterprise Server 11 (i586)

Action Lists

Some lab steps consist of a list of conceptually related actions. A description of each action and its effect is shown to the right or under the action. Alternating actions are shaded to aid readability. For example, the following action list describes one possible way to launch and use xkill to kill a graphical application:

Alt+F2: Open the "Run Application" dialog.
xkill, then Enter: Launch xkill. The cursor should change, usually to a skull and crossbones.
Click on a window of the application to kill: Indicate which process to kill by clicking on it. All of the application's windows should disappear.

Callouts

Occasionally lab steps will feature a shaded line that extends to a note in the right margin. This note, referred to as a "callout," is used to provide additional commentary. This commentary is never necessary to complete the lab successfully and could in theory be ignored. However, callouts do provide valuable information such as insight into why a particular command or option is being used, the meaning of less obvious command output, and tips or tricks such as alternate ways of accomplishing the task at hand. For example:

[S10] $ sux -
Password: password
# xclock

Callout: On SLES10, the sux command copies the MIT-MAGIC-COOKIE-1 so that graphical applications can be run after switching to another user account. The SLES10 su command did not do this.
Chapter 1
SECURING SERVICES

Content
Xinetd ................................................ 2
Xinetd Connection Limiting and Access Control ......... 4
Xinetd: Resource limits, redirection, logging ......... 5
TCP Wrappers .......................................... 6
The /etc/hosts.allow & /etc/hosts.deny Files .......... 7
/etc/hosts.{allow,deny} Shortcuts ..................... 8
Advanced TCP Wrappers ................................. 9
Basic Firewall Activation ............................ 10
Netfilter: Stateful Packet Filter Firewall ........... 11
Netfilter Concepts ................................... 12
Using the iptables Command ........................... 13
Netfilter Rule Syntax ................................ 14
Targets .............................................. 15
Common match_specs ................................... 16
Connection Tracking .................................. 17
SELinux Security Framework ........................... 18
Choosing an SELinux Policy ........................... 20
SELinux Commands ..................................... 21
SELinux Booleans ..................................... 22
Graphical SELinux Policy Tools ....................... 23
Lab Tasks ............................................ 25
1. Securing xinetd Services .......................... 26
2. Enforcing Security Policy with xinetd ............. 29
3. Securing Services with TCP Wrappers ............... 32
4. Securing Services with Netfilter .................. 36
5. Troubleshooting Practice .......................... 41
6. SELinux File Contexts ............................. 45
Xinetd

The Super Daemon
- Small, efficient transient-daemon launcher
- Listens on ports, launches transient daemons as needed
- Built-in TCP Wrappers for daemons not linked against libwrap
- IPv6 support
Modern Replacement for inetd
- /etc/xinetd.conf
- /etc/xinetd.d/

Efficient Resource Utilization

Originally, inetd was created to be a small, low memory usage network daemon. It would launch other daemons on demand, thereby not requiring transient daemons to be always running, consuming CPU time and memory. This was especially important in the early days of Unix, when memory and CPU cycles were in short supply and extremely expensive.

[Figure: three panels. (1) inetd listens on ports 23, 69, 79, and 110. (2) A client connects from source port 3120 to port 23, and inetd launches in.telnetd to handle the session. (3) in.telnetd services the connection while inetd continues listening on the remaining ports.]

The xinetd Daemon

A positive side effect of the inetd setup is that it allows for centralized enforcement of policy. Policy decisions include the number of simultaneous connections, rate limiting, and logging.

While the original inetd is (still) widely used on most Unix systems, most Linux distributions typically use xinetd in place of inetd. Some of the benefits of xinetd over the older inetd include:

- Greater configuration control & flexibility
- TCP Wrappers built into the super-daemon
- Security and access control features
Xinetd Configuration

The main configuration file for Xinetd is /etc/xinetd.conf, which references the /etc/xinetd.d/ directory. In this directory, the configuration for each transient service is typically isolated in an individual configuration file.

Example Service Configuration Files in /etc/xinetd.d

The following table includes several of the service files:

Filename      Description
amanda        AMANDA backup system server
gssftp        Kerberized FTP server
krb5-telnet   Kerberized telnet server
rsync         rsync file synchronization server
talk          Internet Talk Protocol server
telnet        Standard telnet server
tftp          Trivial File Transfer Protocol (TFTP) server

Example Service Configuration File: telnet

File: /etc/xinetd.d/telnet
# default: on
# description: The telnet server serves telnet sessions;
#              it uses unencrypted authentication.
service telnet
{
        flags           = REUSE
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        log_on_failure  += USERID
        disable         = no
}
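Editing a file in /etc/xinetd.d/ does not take effect until xinetd re-reads its configuration. As a sketch using the telnet service above and the standard file edit representation, a service could be enabled like this:

```
File: /etc/xinetd.d/telnet
-         disable = yes
+         disable = no
```

On RHEL6, running service xinetd reload (as root) then signals the daemon to reconfigure itself; xinetd also accepts SIGHUP for the same soft reconfiguration.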
Xinetd Connection Limiting and Access Control

The extended Internet services daemon
- Built-in access control: only_from, no_access
- Sophisticated connection limiting: instances, cps, per_source, max_load
- Time of day restrictions: access_times

Remote Access Control

Xinetd provides access control through two attributes: only_from and no_access. Both attributes take a value of an IP address in CIDR form: 1.2.3.4/32. Multiple values can be supplied, space separated. If both attributes are defined, then the most specific match wins. Generally these attributes are not used; centralized, system-wide TCP Wrappers (built in to Xinetd) and packet-filter firewalling are more common for access control.

Connection Limiting to Mitigate DoS Attacks

One of the major reasons for using Xinetd in place of Inetd is its connection limiting features. With legacy inetd, there are no limits, besides exhausting system resources, on the number of connections to a service. This creates the possibility of a DoS attack.

Xinetd Connection Limiting Features

The main configuration file /etc/xinetd.conf specifies a default of 50 connections per service using the instances attribute. Such defaults can be overridden in per-service configuration files.

File: /etc/xinetd.conf
# max_load = 0
cps = 50 10
instances = 50
per_source = 10

Additional Connection Limiting Attributes

cps: Limits the rate of incoming connections. It takes two values. The first defines the number of connections per second. If this value is exceeded, then the service is temporarily disabled for the number of seconds defined in the second value.
per_source: Limits the number of connections per source IP address. This is useful to keep a single client from monopolizing server resources.
max_load: Monitors the one minute load average (an indication of how busy a server is), and only allows connections when the load average is below the defined value.
Limiting New Connections to a Certain Time of Day

Using the access_times attribute, the administrator can limit access to Xinetd controlled services based on the time of day when the connection is established. The value is an interval specified in the hour:min-hour:min format. For example:

File: /etc/xinetd.d/service_config
+ access_times = 2:00-8:59 12:00-23:59
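The only_from and no_access attributes use the same per-service syntax. As a hypothetical sketch (the addresses are examples, not from the course), the following would restrict telnet to a local subnet while blocking one workstation on it; because 192.168.1.100 is a more specific match than 192.168.1.0/24, the no_access entry wins for that host:

```
File: /etc/xinetd.d/telnet
+         only_from = 192.168.1.0/24
+         no_access = 192.168.1.100
```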
Xinetd: Resource limits, redirection, logging

The extended Internet services daemon
- System resource limiting: rlimit_as, rlimit_cpu, nice
- Extensive logging features: log_type, log_on_success, log_on_failure
- Redirection of TCP connections: redirect

Configuration Attributes for Enforcing Resource Limits

Xinetd can set limits on the amount of memory and CPU that launched services can consume. This can be used to protect against potentially malicious clients, or buggy launched daemons. rlimit_as sets the maximum amount of address space that a launched service can use. rlimit_cpu sets the maximum number of CPU seconds that a service can use. For example:

File: /etc/xinetd.d/telnet
service telnet
{
        socket_type = stream
        wait        = no
        user        = root
        server      = /usr/sbin/in.telnetd
+       rlimit_as   = 8M
+       rlimit_cpu  = 20
}

Additionally, services can be launched with a different priority to give higher or lower multi-tasking preference. This is configured using the nice attribute.

Logging Features of xinetd

The Xinetd system has fine-grained logging abilities. You can specify exactly what you would like logged, and you can log to SYSLOG or bypass SYSLOG and log directly to files. The log_type attribute is used to specify where to log. Each service can log one or more of these items: the PID, the remote HOST address, the remote USERID (obtained via the ident protocol), the EXIT event (and status), and/or the DURATION. These values are used with the log_on_success and log_on_failure attributes.

Redirection of TCP Connections

An often unused feature of Xinetd is the ability to redirect a TCP connection to another host. This is sometimes used when Xinetd is run on a multi-homed firewall. For example, within the telnet service configuration file, telnet connections can be redirected to another host with the following line:

File: /etc/xinetd.d/service_config
+ redirect = 172.23.52.18 23

With Xinetd running on Linux, this is not often used because Netfilter has more powerful redirection.
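Putting the logging attributes together, a per-service configuration might bypass SYSLOG and log directly to a file. A sketch (the log file path is an example, not a course-mandated location):

```
File: /etc/xinetd.d/telnet
+         log_type       = FILE /var/log/xinetd-telnet.log
+         log_on_success += PID HOST EXIT DURATION
+         log_on_failure += HOST USERID
```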
TCP Wrappers

Generic Application Level Network ACL framework
- /etc/hosts.allow
- /etc/hosts.deny
Originally used to secure inetd launched services via the /usr/sbin/tcpd binary
Many modern network daemons link against libwrap
- sshd, vsftpd, xinetd, rpcbind
Some large network daemons forgo TCP Wrappers and implement their own network ACL method
- Apache
- Samba

TCP Wrappers

TCP Wrappers provides another tool useful for limiting access to services by remote IP address or addresses, hostname, or domain. Using TCP Wrappers allows services to be accessible to trusted hosts and denied to un-trusted hosts. This gives the system administrator some flexibility for providing services, instead of only being able to provide them to either everyone or no one.

Centralized Administration

A nice feature of TCP Wrappers is that it provides a centralized administrative point where access to all services on the system can be managed; TCP Wrappers works by "wrapping around" existing applications. Before incoming network connections get passed to the appropriate daemon, they first get processed by TCP Wrappers, which checks two files, /etc/hosts.allow and /etc/hosts.deny, and then either drops the connection or passes it on to the appropriate service. Rules in the /etc/hosts.allow file specify hosts which can connect to specific services, while /etc/hosts.deny specifies hosts to deny access to for listed services. /etc/hosts.allow is read first, if it exists, then /etc/hosts.deny.

Securing Inetd

Originally TCP Wrappers was primarily used to provide network ACLs for services launched from Inetd. To change a standard Inetd telnet service entry in /etc/inetd.conf to use TCP Wrappers, use tcpd to wrap the service:

File: /etc/inetd.conf
- telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd
+ telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd

libwrap

The network ACL abilities of TCP Wrappers can be compiled into a program via the libwrap library.
This allows a service to get all the benefits of TCP Wrappers without performing a chained launch via tcpd (which may not even be possible).
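Whether a particular daemon was built with libwrap support can be checked by looking for the library among its dynamic linker dependencies. A minimal sketch; /usr/sbin/sshd is an example path, and the daemon may not be installed on every system:

```shell
#!/bin/sh
# Check a daemon binary for libwrap (TCP Wrappers) linkage.
# /usr/sbin/sshd is an example path; substitute the daemon to inspect.
daemon=/usr/sbin/sshd
if ldd "$daemon" 2>/dev/null | grep -q libwrap; then
    echo "$daemon is linked against libwrap"
else
    echo "$daemon is not linked against libwrap (or is not installed)"
fi
```

A daemon that is not linked against libwrap ignores /etc/hosts.allow and /etc/hosts.deny unless it is launched via tcpd or through xinetd.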
EVALUATION COPY The /etc/hosts.allow & /etc/hosts.deny Files /etc/hosts.allow If a rule matches: access is granted to the requested service rule checking ends /etc/hosts.deny If a rule matches: access is denied to the requested service rule checking ends If there are no matches in either file, access is granted Basic Syntax daemon_list : client_list Access Control Rules When incoming connections are wrapped, the hosts.allow and hosts.deny files are checked for a match. A match means that both the daemon_list and client_list pair matches. It is important to note that both files are checked top down, with hosts.allow checked before hosts.deny. If no matches are found then access is granted. Within each file, more specific rules should come first, then general rules. If this isn't done, the more specific rules will never be matched against when a more general rule is matched before ever checking the more specific one. The daemon_list Syntax The daemon_list should be one (typical) or more daemon names. When wrapping Xinetd services, the daemon_list should be the name of the binary used in the server attribute. If the service doesn't have a server attribute, usually an internal or redirect service, then the daemon_list should contain the service name. For libwrap linked binaries, such as sshd, the daemon_list is usually the binary name. The client_list Syntax The client_list is used to match the client's source IP address. This can be specified in a variety of ways. y An IPv4 address in the form n.n.n.n. For example, 192.0.2.4. y An IPv4 network in the form n.n.n.n/m.m.m.m. For example 192.168.32.0/255.255.255.0. Note that CIDR notation is not supported. y An IPv6 address in the form [n:n:n:n:n:n:n:n]. For example [3ffe:505:2:1::]. y An IPv6 network in the form [n:n:n:n:n:n:n:n]/m. For example [3ffe:505:2:1::]/64. y A partial IPv4 address terminated with a period. For example, 192.168. will match all IP addresses starting with 192.168. 
Note that a single complete IP can also be used.
- A domain starting with a period. For example, .gurulabs.com. This will match any IP that reverse resolves to a host in the gurulabs.com domain.
- The wildcards * and ? can be used to match hostnames or IP addresses.

Examples:

sshd: 10.100.0.5
in.telnetd: 192.168. 10.5.2.0/255.255.255.0
in.ftpd sshd: .gurulabs.com
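As a worked sketch of the daemon_list : client_list syntax and the allow-then-deny checking order, the following writes a pair of sample files to /tmp so the format can be inspected without touching the live /etc files. The daemon names, subnet, and domain below are hypothetical examples, not values from this course.

```shell
# Write a sample hosts.allow: more specific rules first, general ones after.
cat > /tmp/hosts.allow <<'EOF'
# sshd reachable only from a (hypothetical) management subnet
sshd: 192.168.10.0/255.255.255.0
# ftp daemon reachable from any host in the (hypothetical) example.com domain
in.ftpd: .example.com
EOF

# Write a sample hosts.deny that denies everything not matched above,
# giving a default-closed policy for wrapped services.
cat > /tmp/hosts.deny <<'EOF'
ALL: ALL
EOF

# Each non-comment line follows the daemon_list : client_list syntax.
grep -v '^#' /tmp/hosts.allow
```

Because hosts.allow is consulted first and checking stops at the first match, the ALL: ALL line in hosts.deny only affects connections that no allow rule matched.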
/etc/hosts.{allow,deny} Shortcuts

Wildcard shortcuts: ALL, LOCAL, KNOWN, UNKNOWN, PARANOID. Operators: EXCEPT.

Shortcuts in the /etc/hosts.allow & /etc/hosts.deny Files
To ease and shorten rule writing, TCP Wrappers defines several wildcard shortcuts that can be used. The most commonly used wildcard is the ALL wildcard. It always matches. For example:

File: /etc/hosts.{allow,deny}
ALL: 127. [::1]
sshd: ALL

The LOCAL wildcard matches any host whose name doesn't contain a dot. This means that hosts within your same domain would match.

Name and IP Address Matching
The KNOWN, UNKNOWN, and PARANOID wildcards are used to match based on whether or not the IP address reverse resolves to a host name. Be careful using these keywords, as DNS problems can cause what would normally match to not match.
- The KNOWN wildcard will match an IP address for which a reverse DNS lookup successfully resolves to a host name.
- The UNKNOWN wildcard will match an IP address that doesn't reverse resolve to a host name.
- The PARANOID wildcard matches any host whose reverse and forward name lookups are not consistent.

The EXCEPT Operator
The EXCEPT operator is useful when you want to match nearly all hosts in a client_list. For example, you have a kiosk machine in your lobby and you don't want people using the kiosk machine to make connections to wrapped services on your corporate hosts. One possible solution would be to use TCP Wrappers and have:

File: /etc/hosts.{allow,deny}
ALL: .gurulabs.com EXCEPT kiosk.gurulabs.com

Note that this rule would have to be added to each system you wanted to protect. Although the EXCEPT operator can be nested, this is strongly discouraged. The problem with nesting the EXCEPT operator is that such practices very quickly lead to the creation of rules that even the original author will have difficulty understanding.
A configuration that causes this kind of confusion usually ends up doing more or less than what was expected, instead of what the author (or editor) of the configuration intended.
Advanced TCP Wrappers

Printing a banner: banners. Running an alternative daemon: twist. Running a command on connection: spawn. Default deny configuration: ALL: ALL in /etc/hosts.deny; put allowed hosts in /etc/hosts.allow (at least ALL: 127. [::1]).

Creating Banners with TCP Wrappers
A commonly sought-after technique is to not only block connections, but to display a message at the same time. This is done by creating a banner file, for example:

File: /etc/deny-banner
+ Attention, this is a private host!
+ To request access, email <root@example.com>

With appropriate allowed hosts already listed in the hosts.allow file, add the following to the hosts.deny:

File: /etc/hosts.deny
+ ALL: ALL : banners /etc/deny-banner

Note that you can use ANSI or VT100 screen control characters as well in banner files.

Using the twist Option
Sometimes, you may want to redirect connections into a fake server. This can be used as part of a honeypot to catch and monitor network attackers. For example, to redirect telnet connections to a fake server:

File: /etc/hosts.allow
+ in.telnetd: .badguys.com : twist exec /bin/ft.pl

Using the spawn Option
TCP Wrappers supports tokens that are replaced before a spawn is performed. These tokens are supported:

Token  Function
%a     The client (server) host address
%c     Client info: user@host, user@address, host name, or an address
%d     The daemon process name
%h     The client host name or address
%n     The client host name, or unknown or paranoid
%p     The daemon PID
%s     Server info: daemon@host, daemon@address
%u     Client username obtained via the ident protocol

This line would finger the connecting host and email the output with a subject line containing the client info:

File: /etc/hosts.deny
+ in.telnetd: ALL : spawn (/usr/sbin/safe_finger -l @%h | /bin/mail -s "Connect from %c" admin@example.com) &
EVALUATION COPY Basic Firewall Activation Enabled during install Builds a stateful packet-filter firewall using Netfilter http://netfilter.kernelnotes.org/ Red Hat Enterprise Linux firewall configuration tool system-config-firewall (GUI and text-mode front-ends) Firewalls on Red Hat Enterprise Linux Systems The firewall creation code built into the Anaconda installer has two modes, Enabled (the default) and Disabled. It uses stateful rules. All network traffic that is part of (or related to) some established connection initiated by the host is automatically allowed along with inbound ICMP and IPSec connections. All other, externally initiated, inbound connections are rejected. After the system is installed, the firewall can be configured by using the system-config-firewall tool. The activation of the firewall is handled by Sys-V init scripts. The iptables and ip6tables service scripts consult /etc/sysconfig/iptables-config and /etc/sysconfig/ip6tables-config for configuration options. These options include loading and unloading Netfilter kernel modules and saving the current firewall configuration when stopping or restarting. The iptables and ip6tables service scripts load the contents of /etc/sysconfig/iptables and /etc/sysconfig/ip6tables, respectively. These files can be edited by hand, but depending on configuration, may be overwritten. 1-10
EVALUATION COPY Netfilter: Stateful Packet Filter Firewall Part of Linux 2.4/2.6 kernel Netfilter firewalling subsystem Flexible rules system Filter by IP, MAC, protocol, port, user, frequency etc. Connection tracking Packets may be filtered based on connection state: NEW, ESTABLISHED, RELATED, INVALID Userspace tools iptables: used to manipulate Netfilter configuration iptables-save: Save configuration to STDOUT iptables-restore: Load saved configuration from a file The iptables Command iptables works by having chains of rules specifying how packets should be filtered. Packets flow down the chains, and at every rule they can be blocked, dropped, redirected to other chains or passed on to the next rule. Netfilter can be configured in a stateful fashion making it possible to filter packets by connection state, which can especially be useful when configuring a firewall safely to permit baroque protocols like FTP. In addition, maintaining state makes it possible for Netfilter to prevent certain kinds of denial-of-service attacks. It also makes it possible for Netfilter to deal with fragments more safely than earlier Linux firewalls such as ipchains. In addition, Netfilter has very powerful selection criteria. It can match packets at any point in their travel through the Linux network stack by source or destination interface, TCP source or destination ports, TCP flags, TCP options, UDP source or destination ports, ICMP types, state, TOS bits or locally generated UID, GID, PID or SID. It even has the ability to "mark" packets in a way that is invisible which can be used to manipulate them later, or to impose rate limits on packets matching any of the criteria available to Netfilter. Similarly, once Netfilter has matched a packet, it has a wide variety of possible behaviors it can apply to the packet, including logging, blocking, dropping, rejecting (with arbitrary ICMP error messages), mirroring or redirecting the packet to local ports. 
It can also modify the packet in various ways, such as by source or destination NAT, source or destination PAT, and changing TOS bits or TTLs, to name a few. It can even hand the packets to arbitrary programs running in userspace, where they can be further analyzed or manipulated. Once the iptables command has been used to configure a firewall, this can be saved using the iptables-save command: # iptables-save > iptables The iptables-restore command can then be used to load this configuration. On Red Hat Enterprise Linux, the /etc/init.d/iptables and /etc/init.d/ip6tables init scripts can be used to automate the use of these tools for persistence on boot, loading the ipv4 /etc/sysconfig/iptables and ipv6 /etc/sysconfig/ip6tables firewall rules. Instead of using the iptables-save command to save an existing firewall rule, use the init script: # service iptables save... output omitted... 1-11
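To show what the saved format looks like, the sketch below writes a minimal, hypothetical rule set in iptables-save format to /tmp so it can be inspected; it is an illustration, not a complete production firewall. On Red Hat Enterprise Linux this content would live in /etc/sysconfig/iptables and be loaded at boot by the init script.

```shell
# A minimal rule set in the format emitted by iptables-save and read by
# iptables-restore: a table header (*filter), built-in chain policies
# with packet/byte counters, the rules, and a closing COMMIT.
cat > /tmp/iptables.rules <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
COMMIT
EOF

# On a live system (as root): iptables-restore < /tmp/iptables.rules
grep -c '^-A INPUT' /tmp/iptables.rules
```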
Netfilter Concepts

Tables are maintained of actions to be performed on a packet at various stages in its travels through the network stack. Tables contain set(s) of chains; chains contain actual rules. General syntax: if a packet matches X, then do Y.

By default, three chains exist: INPUT, FORWARD, and OUTPUT. When a packet comes into the box, the kernel looks at the packet's destination. If the destination is local, the packet gets handed off to the INPUT chain. If the destination is not local and the machine has IP forwarding enabled, the packet gets handed to the FORWARD chain. If the destination is not local and the machine does not have IP forwarding enabled, or has forwarding enabled but cannot forward the packet (due to routing table configurations, for example), the packet gets dropped. All local packets originated by programs running on the system leave via the OUTPUT chain. If network address translation (NAT) is being used, two additional chains will exist: PREROUTING and POSTROUTING. If the PREROUTING chain exists, all packets will pass through it as they come into the box, before the kernel even decides whether to pass them through the INPUT or FORWARD chains. This chain is where destination network address translation can be implemented. Similarly, if the POSTROUTING chain exists, all packets will pass through it after they pass through the OUTPUT chain. This chain is where source network address translation can be implemented. In addition to the default chains, user-defined chains can be created. Rules to apply to packets can be created within user-defined chains the same as built-in chains. These rules can even hand off packets to other chains, allowing Netfilter great flexibility when handling packets.
[Diagram: packet flow through the Netfilter chains. Inbound packets (ppp0, eth0, eth1) enter PREROUTING (nat, mangle), then a routing decision sends them either to INPUT (filter, mangle) and on to local services and processes in user space, or to FORWARD (filter, mangle). Locally generated packets pass through OUTPUT (filter, mangle, nat) after a routing decision. All outbound packets pass through POSTROUTING (nat, mangle) before leaving.]

Three tables exist to organize chains: filter, nat, and mangle. This chart lists which chains are available in each table:

Chain        filter  nat  mangle
PREROUTING           X    X
INPUT        X            X
FORWARD      X            X
OUTPUT       X      X     X
POSTROUTING          X    X
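As the chain/table chart indicates, destination NAT belongs in the nat table's PREROUTING chain and source NAT in its POSTROUTING chain. A minimal sketch follows; the addresses and interfaces are hypothetical, and the commands require root on a live system.

```shell
# Destination NAT in PREROUTING: redirect inbound port 80 to an internal
# web server (hypothetical address 192.168.1.10).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.1.10

# Source NAT in POSTROUTING: masquerade traffic leaving eth0, the typical
# setup for a router in front of a private network.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```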
Using the iptables Command

Adding new rules. Add a rule to block all traffic from host 10.1.2.3:
# iptables -A INPUT -s 10.1.2.3 -j DROP
Deleting existing rules. Delete the rule that blocks all traffic from 10.1.2.3:
# iptables -D INPUT -s 10.1.2.3 -j DROP
Listing existing rules. List all rules in all chains (or in just one specified chain):
# iptables -L chain_name
Setting chain policy. To set the FORWARD chain's policy to DROP:
# iptables -P FORWARD DROP

Manipulating Rules
The core functions of any firewalling software revolve around creation of the rule sets to specify behaviors to apply to specific packets. Rules are added by using the -A option with the name of a chain to specify that a rule is being added to that chain, and then supplying the rule to be added. This iptables command would add a rule to the INPUT chain which limits ICMP ping requests coming in via eth0 to one per second:
# iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s -i eth0 -j ACCEPT
Rules can be deleted in two different ways, either by specifying the exact rule to delete or by specifying the number of the rule, which can be obtained by analysis of the output from iptables -L. For example, if rule 37 limited incoming traffic to one ping request per second, it could be deleted by either of these methods:
# iptables -D INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s -i eth0 -j ACCEPT
# iptables -D INPUT 37

Flushing all the rules
To delete all the rules in a chain, use the -F (flush) chain operator. If a CHAIN is not specified, all rules in all chains in the specified table (the filter table if the -t option is not used) are flushed:
# iptables -F CHAIN

Listing Rules
To list rules, the basic command is the following (rules in a specific chain can be listed by specifying the chain of interest):
# iptables -L CHAIN
... output omitted...
Several options modify the listing output:

Option          Description
-n              Do not do DNS lookups
-v              Provide verbose output
-x              Provide exact values (do not round)
--line-numbers  Show line numbers for each rule

Built-in Chain Policies
If no rules have matched by the time Netfilter reaches the bottom of a built-in chain, that chain's policy takes effect. The policy can be set using the -P option to either ACCEPT or DROP:
# iptables -P CHAIN DROP
Netfilter Rule Syntax

Zero or more match specifications: specify certain property/value pairs of a packet to match; all or nothing. Zero or one target: what should be done when traffic matches this rule? Policy considerations: default-open approach vs. default-closed approach.

Match Specification
There are many options that can be used to match traffic. When multiple switches are used on the command line, all of the conditions listed must be satisfied in order for the rule to match. If any part does not match, Netfilter moves on to the next rule in the chain, without taking the action specified by the rule's target.

Targets
Only when every item in the match_spec has successfully matched the packet being examined by Netfilter will the target come into play. Most of the time, rule processing will cease after Netfilter has dealt with the packet as specified by the target.

Policy Considerations
There are two different ways in which policy-based systems can be built. They are referred to as the "default-open" and the "default-closed" models. In the default-open model, the system will examine the policy to see if there is a reason to deny access and, if none can be found, will grant access. Rules must be written to explicitly deny anything that should not be allowed. In the default-closed model, the system examines the policy to see if there is a reason to permit access. Otherwise, access will be denied. From a security perspective, default-open is (almost) always the wrong choice. This is especially true when the system involved is complex (like networking). A default-closed approach will yield better results, is easier to build, maintain & update, and is more secure. To build default-closed Netfilter firewalls, the filter table's built-in chains, INPUT, FORWARD and OUTPUT, need their policy set to DROP.
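A default-closed filter table could be sketched like this; the specific exceptions permitted (loopback and SSH) are illustrative assumptions, not a complete policy, and the commands require root.

```shell
# Set default-closed policies on the filter table's built-in chains.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

# Then explicitly permit only wanted traffic, e.g. loopback and inbound SSH.
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
```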
EVALUATION COPY Targets Most targets cause rule checking to cease when followed DROP REJECT ACCEPT Some targets cause rule checking to continue after processing LOG The DROP Target When a rule whose target is DROP matches a packet, Netfilter throws the packet away without further rule checking. There is no other action taken. No response is sent. No user-space daemon or other program on the system will ever know the packet even existed. One major security advantage of the DROP target's behavior is that it minimizes the potential load that unwanted traffic can have on the system and/or network behind the firewall. The REJECT Target If the DROP target were referred to as the "rude" way to block traffic, then the REJECT target might be called the "kind" way. When a rule whose target is REJECT matches a packet, Netfilter throws the packet away with no further rule checking, just like with the DROP target, but also sends an ICMP response to the REJECT'ed packet's sender. Just as with DROP'ed packets, user-space programs will never know of the packet's existence. There are some potential disadvantages of using the REJECT target, from a security perspective. The machine the firewall is running on can become visible when traffic comes in that is REJECT'ed but was not destined for the firewall, depending on the chosen ICMP message(s) to return. Also, if an attacker discovers that your firewall will generate traffic in response to particular traffic that hits the system, they could abuse that knowledge to create extra load on your network or on another network. The ACCEPT Target When a rule whose target is ACCEPT matches a packet, Netfilter will let that packet through with no further rule checking. Remember, any traffic that matches the match_spec portion of a rule with the ACCEPT target will not be filtered further by Netfilter. Make sure that such rules are built so as to not allow unwanted traffic through. 
The LOG Target
When a rule whose target is LOG matches a packet, Netfilter will send information about that packet to Syslog. Since Netfilter is a part of the kernel, those log messages will be sent from the kernel using the kern facility to syslogd. Rule processing will continue with the next rule following a rule with a matched LOG target. To log traffic before dropping it, create two rules with the same match_spec; the first with the LOG target and the second with the DROP target. The only drawback to using the LOG target from a security perspective is that an attacker who knows or discovers that a log message is generated when certain traffic hits the machine could use that knowledge to cause the system to flood its syslog process, possibly causing a DoS. Because of this, rules with the LOG target should include rate limiting.
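The log-then-drop pattern with rate limiting can be sketched as two rules sharing one match_spec; the port and limit values below are illustrative choices, and the commands require root.

```shell
# First rule: log matching packets, at most five log entries per minute,
# so an attacker cannot flood syslog. LOG does not stop rule processing,
# so the packet falls through to the next rule.
iptables -A INPUT -p tcp --dport 23 -m limit --limit 5/minute \
    -j LOG --log-prefix "telnet attempt: "

# Second rule, same match_spec: silently discard the packet.
iptables -A INPUT -p tcp --dport 23 -j DROP
```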
Common match_specs

IP address: source [-s] or destination [-d] address. Interface: input [-i] or output [-o] interface. Protocol: -p. For TCP, UDP & ICMP, iptables can match on most header values.

Matching IP Addresses
There are two match_spec options which can be used to match IP addresses in the IP packet header. For the packet's source address, use the -s option. The -d option will match against the IP packet's destination IP address. The address can be specified as a single address or as an address/netmask pair using CIDR notation.
# iptables -A INPUT -s 10.100.0.3 -j DROP
# iptables -A OUTPUT -d 10.100.0.0/24 -j ACCEPT

Matching the Interface
It is good practice to only allow certain traffic through based on which network it came from. With multi-homed firewalls (including routers) this can be accomplished using the -i and the -o options to match against the input & output interfaces, respectively, associated with the packet being tested.
# iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

Matching the Protocol
The IP packet's protocol field can be tested using the -p option. Protocols specified by name must be found in the /etc/protocols file, as the iptables command needs to give Netfilter a number to work with.
# iptables -A INPUT -p 57 -j DROP
# iptables -A INPUT -p egp -j DROP
For the TCP, UDP & ICMP protocols, there are several additional options available for narrowing down exactly what traffic to match.

Matching TCP or UDP Ports
If either tcp or udp has been specified as the protocol to the -p option, you can then use the --sport and --dport options to match against the source or destination TCP or UDP port, respectively. The value specified to these options can either be a single port or a range of ports.
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# iptables -A INPUT -p udp --sport 1024:65535 --dport 53 -j ACCEPT

Matching ICMP
If icmp is the value given to the -p option, then the --icmp-type option can be used.
Despite the name of this option, the values specified cover both the ICMP type and code.
# iptables -A INPUT -p icmp --icmp-type port-unreachable -j ACCEPT
To get a list of all parameters the --icmp-type option accepts, run:
# iptables -p icmp -h
Connection Tracking

NEW: no match in Netfilter's connection tracking state engine. ESTABLISHED: the packet matches an ongoing communication. RELATED: the packet is not a direct part of a tracked connection. INVALID: the packet is bogus.

Stateless vs. Stateful
Stateful packet-filtering adds the ability to consider a packet within the context of an ongoing communication. In order to be able to provide this capability, Netfilter must track a connection. This is implemented in Netfilter's state module.

The state Module
In order to use the state module, it must be loaded. This has to be done on each iptables command line that you want to use stateful rules with. The -m option is used to load a module. Once the state module is loaded, you can then use the --state option to match a packet according to its Netfilter connection-tracking state.

The ESTABLISHED State
A packet will match the ESTABLISHED state when it is part of a communication that is being tracked by Netfilter's state module. This is not limited to just TCP, but also works with UDP and ICMP.
# iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT

The NEW State
A packet will match the NEW state when it is not part of a communication already being tracked by Netfilter's state module.
# iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT

The RELATED State
A packet will match the RELATED state when it was generated in relation to some other communication (which is being tracked), but is not actually part of that communication. The best example is an ICMP packet generated in response to TCP or UDP traffic.
# iptables -A INPUT -m state --state RELATED -j ACCEPT

The INVALID State
Packets that do not match another state match the INVALID state.
# iptables -A INPUT -m state --state INVALID -j REJECT
EVALUATION COPY SELinux Security Framework Allows administrator to specify security policy Policy defines the "correct operation" of the system interaction of processes and files use of POSIX capabilities use of system resources (shared memory, etc.) network sockets interaction of processes (signals, pipes, IPC, etc.) Uses labels called "Security Context"s identity:role:type:security_level Occasional relabeling of the filesystem may be required Security Administrator Specifies Policy The SELinux extensions allow an administrator to define a security policy. This policy can be very simple providing only basic limitation, or it can be very detailed defining which of the many complex interactions of system components are to be allowed. One of the advantages of the SELinux security framework is its flexibility in allowing security administrators to create a policy that meets their TCB goals. The policy is then loaded and enabled so that the Linux kernel can enforce compliance with the policy. Policy Defines the "Correct Operation" of the System When a security administrator creates a policy, he or she is essentially detailing the interactions that are expected for correct operation of the system and its services. For example, a program running under a specific security context should either be allowed to interact with a given file, or it should not. The policy defines the allowed level of interaction between the program and then the kernel enforces it. If the program later tries to interact with the file in some non-permitted way, or perhaps interact with some completely different file, then the kernel will deny the attempt. Security Context The security policy determines the permissible interactions between objects on the system. To determine if a specific interaction is permitted, the system compares the security context of the interacting objects with the security policy. 
The security context (sometimes called the label) is a string and consists of the following components:

identity        Name of the 'owner' of the object. System objects have special identities. Unix user accounts can be mapped to specific identities. A default identity (user_u) exists for Unix accounts not explicitly mapped to an identity. Identity controls the available roles.
role            For processes, lists the domain of the process. For files, has a placeholder value (object_r).
type            Classifies the object as to its specific security needs; that is, each object that has unique (with respect to the other objects) security needs will be assigned a unique type. Conversely, objects with common security needs can share the same type.
security_level  Compound value in the form sx-sx:cy-cy, where sx is a sensitivity level (valid values s0 to s15) and cy is a category (c0 to c255). Used by the MCS/MLS policies.
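To make the identity:role:type:security_level layout concrete, the colon-separated components of a sample context string can be split apart in the shell. The context below is an assumed example value, not read from a live system.

```shell
# A sample security context as it might be printed by ls -Z on a
# targeted-policy system (example value only).
ctx='system_u:object_r:httpd_sys_content_t:s0'

# Split on ':' into the four components described above.
IFS=: read -r identity role type level <<EOF
$ctx
EOF

echo "identity=$identity role=$role type=$type level=$level"
```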
Relabeling Files
If files on the filesystem have the wrong (or no) security context label, then applications can fail. The most common reasons that security labels become incorrect are copying files, or running the system with SELinux disabled. You can relabel files or directories using the setfiles, restorecon, or chcon commands. You can relabel the entire filesystem using the fixfiles relabel command.
EVALUATION COPY Choosing an SELinux Policy Targeted separate types for most commands and services everything else runs under the unconfined_t, kernel_t, or initrc_t domains MLS (Multi-Level Security) implements sensitivity and category security labels primarily used in military and government Minimum Everything runs unconfined_t All targeted modules are available if desired Selected via /etc/selinux/config SELinux Policies Three different SELinux policies are included with the SELinux reference policy: targeted, MLS, and minimum. targeted policy Originally focused on the network services most likely to be the source of a security breach (e.g. Apache, BIND). With the targeted policy, any applications that do not have a policy defined will run under the unconfined_t, kernel_t, or initrc_t domain. MLS policy Multi Level Security provides a policy based around security levels and categories. The goal is to get LSPP, RBAC, and CAPP certification at EAL 4+. The security model provided by this policy is most commonly used in military or government deployments and not appropriate for typical corporate systems, with the possible exception of high-profile, sensitive servers. Minimum Policy The minimum policy was introduced in Fedora 10 as a variant of the targeted policy, preserving the unconfined_t target as the default: all services run as unconfined_t, kernel_t, or initrc_t, unless configured to be confined by the administrator. Initial development versions of SELinux contained a single policy that was similar to the original strict policy. Due to conflicts generated by the nascent policy code, SELinux was disabled by default in distributions that first started shipping SELinux. In subsequent versions of Linux, SELinux was enabled and a targeted policy was implemented. Switching Policies It is simple to switch between the targeted and MLS policies (or minimum): 1. Verify that the policy files for the desired policy exist. 
They are contained in the following RPMs: selinux-policy (configuration), selinux-policy-minimum, selinux-policy-targeted, and selinux-policy-mls.
2. Set the active policy in /etc/selinux/config to SELINUXTYPE=mls, SELINUXTYPE=minimum, or SELINUXTYPE=targeted.
3. Reboot the machine for the new policy to take effect.

SELinux Kernel Options
SELinux functionality can also be controlled by passing parameters to the kernel on boot. To disable SELinux a single time at boot, use the interactive GRUB interface to add one of two options to the kernel's arguments. To boot into permissive mode, add enforcing=0. To boot without loading SELinux at all, add selinux=0.
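The policy selection in /etc/selinux/config amounts to two directives. This sketch writes a copy to /tmp simply so the format is visible; on a real system you would edit /etc/selinux/config itself and reboot for a SELINUXTYPE change to take effect.

```shell
# Sketch of the two directives that matter in /etc/selinux/config.
cat > /tmp/selinux-config <<'EOF'
# SELINUX= can be enforcing, permissive, or disabled
SELINUX=enforcing
# SELINUXTYPE= can be targeted, minimum, or mls
SELINUXTYPE=targeted
EOF

grep '^SELINUXTYPE=' /tmp/selinux-config
```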
EVALUATION COPY SELinux Commands sestatus display current SELinux settings -v displays contexts for files and processes listed in /etc/sestatus.conf chcon set the security context of a file or files Many core commands have new options to support Security Context labels New Commands Included with SELinux Several new commands were created to provide an interface to the new SELinux functionality. These commands are provided in the policycoreutils package. Descriptions and example invocations of these new commands follow. The sestatus Command The sestatus command can be used to display the current state of an SELinux enabled system. If the -v option is used then the report will include the security contexts of all the files and processes listed in the /etc/sestatus.conf file. This allows an administrator to quickly verify the context of key objects. This can be helpful in spotting incorrect context on critical files that may commonly have their context clobbered due to improper handling. # sestatus SELinux status: enabled SELinuxfs mount: /selinux Current mode: enforcing Mode from config file: enforcing Policy version: 21 Policy from config file: targeted The chcon Command Security contexts on system objects can be changed with the chcon command. It has similar functionality and syntax to the chmod command, but changes contexts instead of traditional Unix permissions. The three options that are used for changing an object's context are -u for user, -r for role, and -t for type. Like many Unix commands, the -R option performs recursive file modification. # ls -Z -rw-r--r-- guru guru system_u:object_r:user_home_t file.txt # chcon -t staff_home_t file.txt # ls -Z -rw-r--r-- guru guru system_u:object_r:staff_home_t file.txt Supporting Security Context Labels Many of the core commands that work with files have had options added to support the security context labels for files. 
When possible, these commands have used the -Z option, with the meaning of the option varying greatly from command to command. Examples of commands that have been patched to support SELinux in some way include: login, su, id, ls, ps, cp, mv, stat, and find. For example:
$ ps -eZ
... output omitted...
EVALUATION COPY SELinux Booleans Easy way of activating certain policy rules Immediate changes made with: setsebool [-P] togglesebool "Batch" changes made via /selinux filesystem echo X > /selinux/booleans/boolean_name "Batch" must be committed to take effect echo 1 > /selinux/commit_pending_bools Booleans To make SELinux more flexible and easy-to-use, categories of policy have been added that can be turned on or off. These boolean values can be toggled in real time and immediately become active. Most are intuitively named such as spamassassin_can_network, user_rw_usb, and named_write_master_zones. A complete list of possible boolean values for the current running policy can be found with the sestatus -b or the getsebool -a commands: # sestatus -b... snip... Policy booleans: allow_httpd_anon_write off allow_httpd_apcupsd_cgi_script_anon_write off allow_httpd_bugzilla_script_anon_write off allow_httpd_mod_auth_pam off allow_httpd_squid_script_anon_write off allow_httpd_sys_script_anon_write off allow_java_execstack off... snip... Toggling Booleans The most common tool for activating or disabling a boolean value is the setsebool command. Basic usage is as follows: # setsebool [-P] boolean_name value The value field may be true or 1 to enable the boolean; false or 0 to disable it. The togglesebool command can be used to toggle a boolean value on or off. Both the setsebool and togglesebool commands immediately make changes active in the running policy. It is also possible to enable boolean policy values in the selinuxfs. This virtual filesystem is similar to /proc, and is usually mounted on /selinux. Like /proc, values can be echoed into /selinux to change SELinux's current configuration. An example could be: # echo 1 > /selinux/booleans/allow_ypbind Changes to files in /selinux/booleans do not take effect immediately, instead SELinux must be alerted to the change. 
The following command atomically commits all changes made to boolean files:
# echo 1 > /selinux/commit_pending_bools
Booleans set using the -P (persistent) option will be written to the policy file on disk and become permanent (i.e. survive a reboot).
Graphical SELinux Policy Tools

system-config-selinux: status, booleans, file labeling, SELinux user creation and mapping, translations for MLS, network ports, and control of loaded policy modules

The system-config-selinux Command
Newly introduced in RHEL6, this tool provides a unified graphical interface for most of the system administration tasks associated with SELinux:
- selection of SELinux mode
- force relabel of filesystem on next boot
- view and modify SELinux booleans
- view and modify file context labeling expressions
- create SELinux users and map them to Linux user accounts and roles
- define translations for MLS security-levels/categories
- define network ports accessible by SELinux types
- add and remove policy modules
Lab 1
Estimated Time: 70 minutes

Task 1: Securing xinetd Services
Page: 1-26   Time: 10 minutes   Requirements: b (1 station)

Task 2: Enforcing Security Policy with xinetd
Page: 1-29   Time: 10 minutes   Requirements: b (1 station)

Task 3: Securing Services with TCP Wrappers
Page: 1-32   Time: 10 minutes   Requirements: bb (2 stations)

Task 4: Securing Services with Netfilter
Page: 1-36   Time: 15 minutes   Requirements: bb (2 stations), c (classroom server)

Task 5: Troubleshooting Practice
Page: 1-41   Time: 20 minutes   Requirements: b (1 station)

Task 6: SELinux File Contexts
Page: 1-45   Time: 5 minutes   Requirements: b (1 station)
1-25
Lab 1 Task 1: Securing xinetd Services
Estimated Time: 10 minutes

Objectives
- Discover what services are running and listening for connections.
- Configure xinetd to provide a variety of limits for connecting to services.

Requirements
b (1 station)

Relevance
Identifying and securing running services is crucial in today's hostile network environments. xinetd provides several powerful features for securing the services it provides.

1) The following actions require administrative privileges. Switch to a root login shell:

$ su -l
Password: makeitso Õ

2) Determine which services are currently configured to run:

# chkconfig --list | grep on
... output omitted ...

When examining a list of services such as this one, there are some questions that should always be asked about each one: What is the purpose of the service? Is it really necessary for this machine?

3) List the services bound to UDP ports using one of the following commands:

# netstat -ulp
# lsof -i UDP
# ss -uap
... output omitted ...

Note: not all three commands may be installed on your system.

4) List the services listening on TCP ports using one of the following commands:

# netstat -tlp
1-26
# lsof -i TCP
# ss -tlp

5) Install the in.telnetd server and client for use in the remaining steps:

# yum install -y telnet-server telnet
... output omitted ...

6) Enable telnet for use in testing:

# chkconfig telnet on

7) To more tightly secure the telnet service, modify the /etc/xinetd.d/telnet service configuration file:

File: /etc/xinetd.d/telnet
  service telnet
  {
  ... snip ...
      server = /usr/sbin/in.telnetd
      log_on_failure += USERID
+     log_on_success += DURATION
+     instances = 5
+     per_source = 1
+     banner = /etc/go-away.banner
  }

8) Create a new banner file with this content:

File: /etc/go-away.banner
+ This is a secured system. Unauthorized access is prohibited.
+ All access is logged.

9) Cause xinetd to re-read its configuration files and check the log for indications of errors:

# service xinetd reload
1-27
... snip ...
# tail /var/log/messages | grep xinetd
xinetd[1714]: Starting reconfiguration
xinetd[1714]: Swapping defaults
xinetd[1714]: Reconfigured: new=1 old=0 dropped=0 (services)

10) Test the new telnet configuration by attempting to open two simultaneous telnet connections:

# telnet localhost
Trying 127.0.0.1...
Connected to localhost (127.0.0.1).
Escape character is 'ˆ]'.
This is a secured system. Unauthorized access is prohibited.
All access is logged.
... snip ...
login: guru
Password: work Õ
Last login: Tue May 31 13:59:06 from localhost.localdomain
$ Ó ]Õ
telnet> z
[1]+  Stopped    telnet localhost
# telnet localhost
Trying 127.0.0.1...
Connected to localhost (127.0.0.1).
Escape character is 'ˆ]'.
This is a secured system. Unauthorized access is prohibited.
All access is logged.
Connection closed by foreign host.
# fg
telnet localhost
Ó d
Connection closed by foreign host.

xinetd blocks the second connection due to the configured per_source limit.

11) Examine the log for evidence of the telnet connection being denied:

# tail /var/log/messages
... snip ...
xinetd[5497]: START: telnet pid=28392 from=127.0.0.1
xinetd[5497]: FAIL: telnet per_source_limit from=127.0.0.1
xinetd[5497]: EXIT: telnet status=0 pid=28392 duration=47(sec)
1-28
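Log entries like these are easy to summarize with standard tools. A small sketch, using a hypothetical excerpt in the same format as the log shown above, counts xinetd denials per source address:

```shell
# Hypothetical excerpt in the format /var/log/messages uses for xinetd
log='xinetd[5497]: START: telnet pid=28392 from=127.0.0.1
xinetd[5497]: FAIL: telnet per_source_limit from=127.0.0.1
xinetd[5497]: EXIT: telnet status=0 pid=28392 duration=47(sec)'

# For each FAIL line, pull out the from= field and tally per address
printf '%s\n' "$log" |
  awk '/FAIL:/ { for (i = 1; i <= NF; i++) if ($i ~ /^from=/) n[substr($i, 6)]++ }
       END { for (a in n) print a, n[a] }'
```

Against the real log you would feed `grep xinetd /var/log/messages` into the same awk program.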
Lab 1 Task 2: Enforcing Security Policy with xinetd
Estimated Time: 10 minutes

Objectives
- Configure a sensor (using xinetd) to log connection attempts.

Requirements
b (1 station)

Relevance
Once a security policy, such as "no telnet", has been established, it is important to have some mechanism to detect violations of the policy and report them to the administrator.

1) Create a new xinetd service configuration file with this content:

File: /etc/xinetd.d/notelnet
+ service notelnet
+ {
+     disable = no
+     type = INTERNAL
+     flags = SENSOR
+     protocol = tcp
+     port = 23
+     socket_type = stream
+     wait = no
+     user = root
+     log_type = FILE /var/log/violators
+     banner = /etc/notelnet-violation.banner
+ }

2) Create a new banner file with this content:

File: /etc/notelnet-violation.banner
+ Remember the new security policy!
+ *telnet* is no longer acceptable.
+ Your IP address has been logged.

3) Modify the /etc/services file so that the line for the telnet protocol contains an alias for the new service name. The new line should read:
1-29
File: /etc/services
- telnet  23/tcp
+ telnet  23/tcp  notelnet  # Telnet

4) Configure xinetd to use the new notelnet service and check the logs for signs of problems (correct as necessary):

# chkconfig telnet off
# grep deactivated /var/log/messages
Jul 19 11:13:59 stationx xinetd[5535]: service telnet deactivated

Examine the log output carefully for any errors, and correct your service definition if needed.

5) Test the new service by trying to connect via telnet:

# telnet localhost
Trying 127.0.0.1...
Connected to localhost.
Escape character is 'ˆ]'.
Remember the new security policy!
*telnet* is no longer acceptable.
Your IP address has been logged.
Connection closed by foreign host.

The banner is displayed, the attempt is logged, and that address is locked out for 10 minutes.

6) Look at the log for information about the connection activity:

# tail /var/log/violators
FAIL: notelnet address from=::1

Cleanup

7) Clean up the xinetd services so that future lab exercises will operate correctly:

# chkconfig notelnet off
# chkconfig telnet off
1-30
8) Modify the /etc/services file so that the line for the telnet protocol no longer contains the alias for the notelnet service:

File: /etc/services
- telnet  23/tcp  notelnet  # Telnet
+ telnet  23/tcp
1-31
Lab 1 Task 3: Securing Services with TCP Wrappers
Estimated Time: 10 minutes

Objectives
- Use TCP Wrappers to secure various services.

Requirements
bb (2 stations)

Relevance
A frequent security need is the ability to prevent connections to services from specific IP addresses. The TCP Wrappers framework provides a centralized place to enforce these IP address based connection restrictions.

1) Install the vsftpd package:

# yum install -y ftp vsftpd
... output omitted ...

2) Configure TCP Wrappers to explicitly permit certain service/IP pairs:

File: /etc/hosts.allow
+ ALL: 127. [::1] 10.100.0.254
+ sshd: 10.100.0.Y
+ vsftpd: 10.100.0.0/255.255.255.0 EXCEPT 10.100.0.Y

Normally you would also permit the local IP address of the system for all services (adding it to the first line). It is left off only to facilitate testing within the lab.

3) Configure TCP Wrappers to deny all connections not matching a rule in the hosts.allow file (any existing comments can be left in place):

File: /etc/hosts.deny
+ ALL: ALL

4) Since vsftpd is not started by default, start it now:

# service vsftpd start
1-32
5) Enable telnet, which is wrapped via xinetd:

# chkconfig telnet on

6) Verify that you can connect to the SSH and telnet services via loopback, but not via your assigned IP:

# ssh guru@localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 36:fe:89:5a:d4:57:da:3d:29:9d:6b:d4:27:65:fd:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
guru@localhost's password: work Õ
... snip ...
# exit
logout
Connection to localhost closed.

Connection is established.

# telnet localhost
Trying 127.0.0.1...
Connected to localhost.
Escape character is 'ˆ]'.
This is a secured system, unauthorized access is prohibited.
All access is logged.
login: Ó ]
telnet> close
Connection closed.

Connection is established.

# ssh guru@10.100.0.X
ssh_exchange_identification: Connection closed by remote host

Connection is denied.

# telnet 10.100.0.X
Trying 10.100.0.X...
Connected to 10.100.0.X.
Escape character is 'ˆ]'.
This is a secured system, unauthorized access is prohibited.
All access is logged.
Connection closed by foreign host.

Connection is denied.

7) Verify that you can connect to the FTP service via both loopback and your assigned IP:

# ftp localhost
1-33
Connected to localhost (127.0.0.1).
220 (vsftpd 2.2.2)
Name (localhost:guru): Ó c

Connection is established.

# ftp 10.100.0.X
Connected to 10.100.0.X (10.100.0.X).
220 (vsftpd 2.2.2)
Name (10.100.0.X:guru): Ó c

Connection is established.

8) If you are working with a lab partner, wait for them to reach this point before continuing.

9) Examine the log files for evidence of connection attempts:

# tail -n 20 /var/log/messages
... snip ...
xinetd[25390]: libwrap refused connection to telnet (libwrap=in.telnetd) from ::ffff:10.100.0.X
xinetd[25390]: FAIL: telnet libwrap from=10.100.0.X
xinetd[25311]: START: telnet pid=25390 from=10.100.0.X
xinetd[25311]: EXIT: telnet status=0 pid=25390 duration=0(sec)

10) If you are working with a lab partner, wait for them to reach this point before continuing.

11) Verify that you can connect to the SSH service of your designated lab partner's system:

# ssh guru@10.100.0.Y
The authenticity of host 'stationy (10.100.0.Y)' can't be established.
RSA key fingerprint is 1f:63:ea:ed:07:bf:82:09:79:99:08:57:e9:79:d0:28.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stationy,10.100.0.Y' (RSA) to the list of known hosts.
guru@stationy's password: work Õ
$ exit
logout
Connection to 10.100.0.Y closed.

Connection is established.
1-34
12) Attempt to connect to your lab partner's FTP and telnet services (expecting to be denied):

# ftp 10.100.0.Y
Connected to 10.100.0.Y (10.100.0.Y).
421 Service not available.
ftp> quit

Connection is denied: your stationx IP address is denied, while others on the subnet are permitted.

# telnet 10.100.0.Y
Trying 10.100.0.Y...
Connected to stationy.
Escape character is 'ˆ]'.
This is a secured system, unauthorized access is prohibited.
All access is logged.
Connection closed by foreign host.

The xinetd banner is printed, but the connection is then denied.

13) Optional: Examine the log files again, looking for evidence of the denial messages associated with your lab partner's attempts to connect to your services.

Cleanup

14) Clean up the TCP Wrappers rules to prevent conflicts with future lab exercises:

# >/etc/hosts.deny
# >/etc/hosts.allow

15) Clean up the telnet and ftp services so that future lab exercises will operate correctly:

# chkconfig telnet off
# chkconfig vsftpd off
# service vsftpd stop
1-35
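The access-control logic exercised in this lab (hosts.allow is consulted first, then hosts.deny; the first matching rule wins; no match at all means access is granted) can be illustrated with a toy shell function. This is an illustration only, not the real libwrap parser: it understands just two pattern forms, ALL and a trailing-dot IP prefix, and omits operators like EXCEPT.

```shell
# Toy rule sets, loosely modeled on the files built in this lab
allow_rules='sshd:10.100.0.
vsftpd:10.100.0.'
deny_rules='ALL:ALL'

# match RULE: does a daemon:pattern rule match the current request?
match() {
  rule_daemon=${1%%:*} pattern=${1#*:}
  { [ "$rule_daemon" = ALL ] || [ "$rule_daemon" = "$daemon" ]; } &&
  { [ "$pattern" = ALL ] || case $client in "$pattern"*) true ;; *) false ;; esac; }
}

# check_access DAEMON CLIENT-IP: print allow or deny
check_access() {
  daemon=$1 client=$2
  for r in $allow_rules; do match "$r" && { echo allow; return; }; done
  for r in $deny_rules;  do match "$r" && { echo deny;  return; }; done
  echo allow   # matched neither file: TCP Wrappers grants access
}

check_access sshd 10.100.0.2       # matches the sshd allow rule
check_access telnetd 10.100.0.2    # no allow rule; caught by ALL: ALL
```

The final `echo allow` is the detail that makes an explicit `ALL: ALL` in hosts.deny so important: without it, any service not mentioned in either file is reachable.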
Lab 1 Task 4: Securing Services with Netfilter
Estimated Time: 15 minutes

Objectives
- Use Netfilter stateful packet filtering to protect the system.

Requirements
bb (2 stations), c (classroom server)

Relevance
Netfilter provides a powerful packet manipulation framework that can be used to protect networked services. Today's hostile network environments practically necessitate packet filtering, and Netfilter's stateful rules allow even simple configurations to provide impressive security.

1) Remove any pre-configured firewall:

# service iptables stop

2) Change the chain policies for all chains in the filter table to DROP:

# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT DROP

3) Clear out any rules that may already be present in Netfilter:

# iptables -F
# iptables -t nat -F
# iptables -t mangle -F

4) Try connecting to server1 via telnet:

# telnet 10.100.0.254
Trying 10.100.0.254 Ó c

It hangs because the outbound packet is discarded due to the OUTPUT chain's DROP policy.

5) Try pinging the loopback address in an attempt to identify the networking problem:

# ping -c3 localhost
1-36
PING localhost.localdomain (127.0.0.1) from 10.100.0.X : 56(84) bytes of data.
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
--- localhost.localdomain ping statistics ---
3 packets transmitted, 0 received, 100% loss, time 11999ms

6) Always allow all traffic on the loopback interface:

# iptables -A INPUT -i lo -j ACCEPT
# iptables -A OUTPUT -o lo -j ACCEPT

Many local services use loopback to communicate, and break without these rules.

7) Add rules to allow connection-tracked traffic (i.e. stateful rules) to be accepted:

# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

8) Add a rule to allow this host to send ICMP echo-request messages (so that the ping command will work):

# iptables -A OUTPUT -p icmp --icmp-type echo-request -m state --state NEW -j ACCEPT

9) Ping server1 to test the new rules:

# ping -c1 10.100.0.254
PING 10.100.0.254 (10.100.0.254) from 10.100.0.X : 56(84) bytes of data.
64 bytes from 10.100.0.254: icmp_seq=1 ttl=64 time=0.211 ms
--- 10.100.0.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms

10) Add a rule to allow this system to initiate telnet sessions:

# iptables -A OUTPUT -p tcp --sport 1024:65535 --dport telnet -m state --state NEW -j ACCEPT

11) Try to use telnet to connect to server1 by IP address:

# telnet 10.100.0.254
1-37
Trying 10.100.0.254...
Connected to 10.100.0.254.
Escape character is 'ˆ]'.
Red Hat Enterprise Linux Server release 6.0 (Santiago)
Kernel 2.6.32-71.el6.i686 on an i686
login: Ó ]
telnet> close
Connection closed.

12) Try to use telnet to connect to server1 by hostname:

# telnet server1
telnet: server1: Name or service not known
server1: Host name lookup failure

13) Insert rules to allow this system to perform DNS lookups:

# iptables -I OUTPUT 4 -p udp --sport 1024:65535 --dport 53 -m state --state NEW -j ACCEPT
# iptables -I OUTPUT 5 -p tcp --sport 1024:65535 --dport 53 -m state --state NEW -j ACCEPT

14) Try to use telnet to connect to server1 by hostname:

# telnet server1
Trying 10.100.0.254...
Connected to 10.100.0.254.
Escape character is 'ˆ]'.
Red Hat Enterprise Linux Server release 6.0 (Santiago)
Kernel 2.6.32-71.el6.i686 on an i686
login: Ó ]
telnet> close
Connection closed.

15) Wait for your lab partner to reach this point before continuing with this step. Try to ping your lab partner's system:

# ping stationy
PING stationy.example.com (10.100.0.Y) from 10.100.0.X : 56(84) bytes of data.
1-38
... snip ... Ó c

This fails because there is no rule in the target's Netfilter configuration to allow incoming echo-request packets.

16) Allow echo requests of a reasonable size (the default is 56 bytes), but don't allow large flood pings:

# iptables -A INPUT -p icmp --icmp-type echo-request -m length --length :128 -j ACCEPT

17) Wait for your lab partner to reach this point before continuing with this step. Ping your lab partner's system, then again with a large packet size:

# ping -c1 stationy
PING stationy.example.com (10.100.0.Y) 56(84) bytes of data.
64 bytes from stationy.example.com (10.100.0.Y): icmp_seq=1 ttl=64 time=0.444 ms
--- stationy.example.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.085/0.212/0.224/0.045 ms
# ping -s 500 -c1 stationy
PING stationy.example.com (10.100.0.Y) 508(528) bytes of data.
--- stationy.example.com ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 10000ms

The second ping fails because the packet is greater than the allowed incoming echo packet size.

18) Add a rule to allow incoming SSH connections:

# iptables -A INPUT -s 10.100.0.0/24 -p tcp --dport 22 -m state --state NEW -j ACCEPT

19) Find the rule which allows this system to initiate telnet connections:

# iptables -nv -L OUTPUT --line-numbers | tail -n1
6  18  5728 ACCEPT  tcp -- * * 0.0.0.0/0  0.0.0.0/0  tcp spt:1024:65535 dpt:23

20) Replace the rule that allows this system to connect using telnet with one that permits SSH:

# iptables -D OUTPUT 6
# iptables -I OUTPUT 6 -p tcp --sport 1024: --dport 22 -m state --state NEW -j ACCEPT

The rule number must match the one found in Step 19.
1-39
21) Make these firewalling rules persistent across reboots:

# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]

22) Drop down to single-user mode and then back to run level 5 to demonstrate that the rules are restored:

# init 1
... snip ...
# init 5
... snip ...

23) Log in as root and check the Netfilter rules:

# iptables -nvL
... output omitted ...

Cleanup

24) Clean up to prevent Netfilter from interfering with future labs:

# for i in INPUT OUTPUT FORWARD; do iptables -P $i ACCEPT; done
# iptables -F
# service iptables save
1-40
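Collected in one place, the rules built up step by step in this lab form a compact stateful firewall. The sketch below mirrors the lab's final rule set (it assumes the same classroom network, requires root, and would be run instead of typing the rules one at a time):

```shell
#!/bin/sh
# Default-deny policies and a clean slate, as in Steps 1-3
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
iptables -F

# Loopback and connection-tracked traffic (Steps 6-7)
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# New outbound traffic this host may initiate (Steps 8, 13, 20)
iptables -A OUTPUT -p icmp --icmp-type echo-request -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p udp --sport 1024: --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp --sport 1024: --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp --sport 1024: --dport 22 -m state --state NEW -j ACCEPT

# New inbound traffic this host accepts (Steps 16, 18)
iptables -A INPUT -p icmp --icmp-type echo-request -m length --length :128 -j ACCEPT
iptables -A INPUT -s 10.100.0.0/24 -p tcp --dport 22 -m state --state NEW -j ACCEPT

# Persist across reboots (Step 21)
service iptables save
```

Because every rule is appended in order, the two ESTABLISHED,RELATED rules do most of the work: only the first packet of each session needs an explicit NEW rule.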
Lab 1 Task 5: Troubleshooting Practice
Estimated Time: 20 minutes

Objectives
- Practice troubleshooting common system errors.

Requirements
b (1 station)

Relevance
Troubleshooting scenario scripts were installed on your system as part of the classroom setup process. You use these scripts to break your system in controlled ways, and then you troubleshoot the problem and fix the system.

1) As the root user, invoke the tsmenu program:

# tsmenu

2) The first time the troubleshooting framework is started, you are required to confirm some information about your system: confirm that the distribution of Linux is correct by selecting Yes and pressing Õ, then select OK and press Õ. You are presented with the 'Select Troubleshooting Group' screen.

3) This first break-system scenario is a simple HOW-TO for the tsmenu program itself. Its function is to familiarize you with the usage of the program. Use the UP and DOWN arrow keys to select 'Troubleshooting Group #0'. Use the LEFT and RIGHT arrow keys to select OK and press Õ to continue. You are taken to the 'Select Scenario Category' screen.

4) Pick the category of problem that you want: select the 'Learn: Example learning scenarios' category, select OK, and press Õ to continue. You are taken to the 'Select Scenario Script' screen.
1-41
5) Pick the specific troubleshooting script that you want to run: select the 'learn-01.sh' script, select OK, and press Õ. You are taken to the 'Break system?' screen.

6) The 'Break system?' screen provides you with a more detailed description of the scenario and asks whether you want to proceed and "break the system now?". Read the description of the problem, then select Yes and press Õ. You are taken to the 'SYSTEM IS BROKEN!' screen. Select OK and press Õ. The system is now locked on the selected scenario and will not permit you to run another scenario until the current scenario is solved. Depending on the scenario, a reboot may be required before the problem is noticed. In these cases, the system will reboot automatically when you select OK.

7) You can re-read the description of the scenario two different ways. First, the description is written into a text file. Display the contents of this file:

# cat /problem.txt
... output omitted ...

8) You can also get information about the currently locked problem by re-running the tsmenu program. As the root user, launch the tsmenu program:

# tsmenu

You are taken to the 'You're in learn-01.sh (Example scenario)' screen.

9) Use the UP and DOWN arrow keys to select the 'Description' menu item, then select OK and press Õ to continue. The scenario description text is displayed.

10) Select the 'Hints' menu item, then select OK and press Õ. The 'Hints' screen is displayed. Notice that the total number of hints available is indicated, and that past hints (if already displayed) are also shown in the list.
1-42
11) Select the 'Hints' menu item several times, until you have seen all of the available hints for the learn-01.sh scenario.

12) The 'Check' menu item is used to check whether the currently locked troubleshooting scenario has been correctly solved. Select 'Check', then select OK and press Õ. You are presented with the message 'ERROR: Scenario not completed'. This indicates that the conditions required by the script have not yet been met. If you feel that you have solved the problem, you may need to carefully review the requirements as listed in the 'Description'. If you are still unsure about how to proceed, consult the instructor. Select Cancel and press Õ to exit the tsmenu program.

13) Solve this problem by creating the required file:

# touch /root/solved

14) Launch the tsmenu program again:

# tsmenu

Each time the tsmenu program launches, it checks to see if you have solved the current problem. If you leave the tsmenu program open, you can check a problem at any time by following these steps: select the 'Check' menu item, select OK, and press Õ. You are presented with a 'SUCCESS: Scenario completed' message.

15) Select OK and press Õ. You are taken back to the main 'Select Troubleshooting Group' screen. Select Cancel and press Õ to exit the tsmenu program.

16) You should now proceed to complete the troubleshooting scenarios shown in the following table, using the same basic procedure as shown in the previous steps.
1-43
Troubleshooting Group    Scenario Category    Scenario Name
Group 2                  TCP Wrappers         tcpwrappers-01.sh
Group 2                  IP Tables            iptables-01.sh
Group 2                  Xinetd               xinetd-01.sh
1-44
Lab 1 Task 6: SELinux File Contexts
Estimated Time: 5 minutes

Objectives
- Examine the effect of the cp and mv commands on SELinux file contexts.

Requirements
b (1 station)

Relevance
When copying and moving files, the SELinux security label (file context) attached to a file can be modified. Understanding the effect of commands on these file context labels will allow you to fix and avoid file context problems.

1) Create a new file and view the default SELinux file context assigned:

# echo "data" > /etc/testfile
# ls -Z /etc/testfile
-rw-r--r--. root root unconfined_u:object_r:etc_t:s0 /etc/testfile

2) Copy and move the file into the /tmp directory and view the file contexts assigned to the new files:

# cp /etc/testfile /tmp/testfile-cp
# mv /etc/testfile /tmp/testfile-mv
# ls -Z /tmp/testfile-*
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 /tmp/testfile-cp
-rw-r--r--. root root unconfined_u:object_r:etc_t:s0 /tmp/testfile-mv

3) Set the file context on the moved file to the correct value for the new location:

# chcon -t user_tmp_t /tmp/testfile-mv
# ls -Z /tmp/testfile-mv
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 /tmp/testfile-mv
1-45
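Fixing a context with chcon works, but you must know the correct type yourself. A common alternative on RHEL6 is restorecon, which resets a file's label to whatever the policy's file-context rules specify for its path. A sketch (assumes an SELinux-enabled system with the standard libselinux utilities):

```shell
# Reset the moved file's label to the type the policy specifies for its path
# (-v reports any change made)
restorecon -v /tmp/testfile-mv

# Preview what the policy expects for a path without changing anything
matchpathcon /tmp/testfile-mv
```

Note that the type restorecon applies comes from the file-context configuration for the path, which may differ from the type a process would assign at creation time (for example, a generic tmp type rather than user_tmp_t).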
Chapter 2
DNS CONCEPTS

Content
Naming Services ................................... 2
DNS A Better Way .................................. 3
The Domain Name Space ............................. 4
Delegation and Zones .............................. 6
Server Roles ...................................... 7
Resolving Names ................................... 8
Resolving IP Addresses ............................ 10
Basic BIND Administration ......................... 12
Configuring the Resolver .......................... 13
Testing Resolution ................................ 14
Lab Tasks ......................................... 16
1. Configuring a Slave Name Server ................ 17
Naming Services

- Computers and humans
  - Computers fundamentally work at a binary level
  - Objects have numerical labels
  - Humans work best with names rather than numbers
  - Naming services map names to native numerical labels
- The original Internet naming service
  - HOSTS.TXT and SRI-NIC
  - Scalability problems as ARPAnet grew
  - Excessive traffic/load on SRI-NIC
  - Name collisions within a flat name-space
  - Consistency problems
  - High administrative burden

Before DNS
In the 1970s, all name-to-address mappings for hosts on ARPAnet, the precursor to today's Internet, were maintained in a single master file, HOSTS.TXT. Starting in March of 1973, this file was updated by submitting changes via email to a central repository at Stanford Research Institute's Network Information Center (SRI-NIC). New releases were posted a couple of times a week for retrieval via anonymous FTP. This system worked well while ARPAnet was small, but as ARPAnet grew, researchers quickly realized that the HOSTS.TXT system was not going to scale. Among other problems, SRI was being overwhelmed trying to maintain and distribute the file; people were often requesting the same host name for different sites (a conflict SRI did not even have the authority to resolve, other than by informally adopting a first-come, first-served policy); and system administrators often found that their systems' copies of the file were inconsistent with other systems' copies. By the time updated copies made their way across ARPAnet, they were already out of date.

Obtaining a copy of the HOSTS.TXT file
The SRI-NIC server was a DEC-2065 running the TOPS20 operating system. It was located in Menlo Park, California, and contained over 90 network-related documents available via FTP. An example of downloading the HOSTS.TXT file:

% ftp sri-nic
Connected to sri-nic
220 SRI-NIC.ARPA FTP Server Process 5Z(47)-6 at Fri 25-Dec-87 20:28-PST
Name (sri-nic.arpa:dkelson): anonymous
331 ANONYMOUS user ok, send real ident as password.
Password: Õ
230 User ANONYMOUS logged in at Fri 25-Dec-87 20:28-PST.
ftp> get NETINFO:HOSTS.TXT
200 Port 4.158 at host 128.114.130.3 accepted.
150 ASCII retrieve of <NETINFO>HOSTS.TXT.5 started.
226 Transfer completed. 652121 bytes transferred.
local: NETINFO:HOSTS.TXT remote: NETINFO:HOSTS.TXT
652121 bytes received in 22.74 seconds (28 Kbytes/s)
ftp> quit
221 QUIT command received. Goodbye.

After obtaining the HOSTS.TXT file, a script was run to convert it to the /etc/hosts format.
2-2
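A minimal sketch of such a conversion, using one made-up RFC 952-style HOST entry (the real HOSTS.TXT had additional record types and fields, and the address shown here is illustrative):

```shell
# One hypothetical HOST record: type : address : name,aliases : hw : os : protocols :
entry='HOST : 10.0.0.51 : SRI-NIC.ARPA,SRI-NIC : DEC-2065 : TOPS20 : TCP/FTP :'

# Emit "address name alias..." in lowercase, the /etc/hosts layout
printf '%s\n' "$entry" |
  awk -F' : ' '$1 == "HOST" {
    n = split($3, names, ",")
    line = $2
    for (i = 1; i <= n; i++) line = line " " tolower(names[i])
    print line
  }'
```

Running the same program over a whole HOSTS.TXT would produce one /etc/hosts line per HOST record, which is essentially what the historical conversion scripts did.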
DNS A Better Way

- The Domain Name System (DNS)
  - Distributed database
  - Client/server
  - Hierarchical namespace
  - Distributed administration
- DNS server implementations
  - BIND versions 4, 8, 9
  - djbdns, MaraDNS, PowerDNS, NSD & Unbound

Replacing HOSTS.TXT
ARPAnet's governing bodies set out to design a replacement for the HOSTS.TXT system, eventually leading to Paul Mockapetris' release of RFC 882 and RFC 883 in 1983. These RFCs described the technical details of a new Domain Name System (DNS), which Mockapetris had implemented a couple of years earlier in a software package he called JEEVES. DNS was intended to solve all the problems inherent in the HOSTS.TXT scheme, and as such it was designed as a distributed database with a client/server architecture and decentralized, hierarchical, distributed administration. After the release of JEEVES, a team of graduate students at UC Berkeley wrote a suite of DNS software for the upcoming 4.3 BSD. Their package, BIND, is still by far the most popular DNS software in use today.

BIND, The Berkeley Internet Name Domain
The BIND project was started as a graduate student project at the University of California, Berkeley under a grant from the US Defense Advanced Research Projects Agency (DARPA). The code produced at Berkeley is the core of BIND version 4. BIND version 8 was created by the Internet Software Consortium. It has many improvements over BIND version 4, including negative answer caching, incremental zone transfers, and split DNS. BIND version 9 is a complete re-write of version 8. The feature set is nearly the same as version 8, with the addition of IPv6 support. A primary goal of the re-write was making version 9 more robust and secure.

BIND Security Troubles
All versions of BIND have had poor security track records, although BIND 9 is much better. In light of this track record, it is important that BIND updates be closely tracked and applied.
This is normally done through the operating system vendor, as BIND is the standard DNS server shipped with Unix systems. Alternative DNS Server Implementations Popular free alternatives to BIND include djbdns, MaraDNS, PowerDNS and NSD/Unbound. Daniel J. Bernstein's djbdns hasn't been updated for years but is still very popular because it has a very good security history. MaraDNS was also created with security as a primary goal and while its history hasn't been perfect, it has been good. PowerDNS was designed for performance and flexibility. It can provide unique responses based on the geographic location of the client. It also supports multiple backends including RDBMS and LDAP. NSD was created by NLnet Labs of Amsterdam "with the purpose of creating more diversity in the DNS landscape." It holds the distinction of being the only software other than BIND used on root DNS servers. Because NSD can only be used as an authoritative name server, Unbound was created as a companion recursive and caching name server. 2-3
The Domain Name Space

- The nameless root "."
- Top-Level Domains (TLDs)
- Second-Level Domains (SLDs)
- Hosts are organized into domains
  - Logical relationship independent of physical location or network address
- Host name / domain name / FQDN
- Stores more than just host name to IP mappings

DNS Structure
As a hierarchical database, DNS is structured as a large inverted tree. At the top of the tree is the nameless root node. Under that node are the Top-Level Domains (TLDs). The original list of TLDs included:

TLD     Description
.arpa   DNS structure (like reverse lookups) and other meta info
.com    commercial
.edu    educational institutions
.gov    the U.S. government
.int    international organizations, like NATO, WHO, the World Bank, etc.
.mil    U.S. military
.net    networks, meant for internetworking providers
.org    organizations, meant for non-profits that don't fit in one of the other TLDs

The original list also included all of the 2-letter ISO country codes. This table shows a few examples:

TLD     Country
.de     Germany
.it     Italy
.jp     Japan
.ru     Russia, originally including all of the U.S.S.R.
.uk     the United Kingdom
.us     United States

All of these TLDs are still in use today, along with several newer TLDs which have been officially adopted over the years. Under each TLD are Second-Level Domains (SLDs); these include entries such as .google.com, .whitehouse.gov, and .utah.edu. Below each SLD there can be additional subdomains, until eventually the level of the individual host is reached, such as sloane.section31.example.com.

This inverted database offers several advantages over the flat map provided by the earlier HOSTS.TXT scheme. Individual hosts are grouped into logical subdomains, which are further organized into logical domains, allowing management responsibilities for each domain and subdomain to be delegated, removing the need for a single administrator like SRI-NIC. Consider the .gurulabs.com subdomain. ICANN administers the .com
2-4
domain, so it has delegated authority to Guru Labs for the gurulabs.com. zone, which contains the .gurulabs.com domain. Guru Labs then administers the gurulabs.com. zone independently of ICANN.

Logical relationships between hosts are easily identified by their fully qualified domain name (FQDN). All machines at the University of California, Berkeley will have a domain name ending in .berkeley.edu, just as all hosts at Fuzzy Wuzzy will have a domain name ending in .fuzzywuzzy.com.

The inverted tree structure of DNS also makes possible a logical, tree-walking approach to determining IPs from FQDNs. For example, to resolve www.fuzzywuzzy.com, a resolver will first ask a root name server for the answer. That server will tell the resolver where to find the name server for the com. zone, which will tell the resolver where to find the name server for the fuzzywuzzy.com. zone, which will return the IP address for www.fuzzywuzzy.com.

[Figure: the DNS tree, from the nameless root ("") down through TLDs (.com, .edu, .gov, .info) to example second-level domains (cnn, GuruLabs, utah, stanford, usps, whitehouse, linuxtraining)]
2-5
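The tree walk just described always proceeds from the root toward the host, one zone per step. A small sketch (an illustration only, using a made-up name rather than live queries) prints the chain of zones a resolver consults for a given FQDN:

```shell
# resolution_chain FQDN: print the zones queried, most general first
resolution_chain() {
  rest=$1 suffix=
  echo "."                         # every iterative lookup starts at the root
  while [ -n "$rest" ]; do
    label=${rest##*.}              # peel off the right-most remaining label
    case $rest in *.*) rest=${rest%.*} ;; *) rest= ;; esac
    suffix=$label.$suffix
    echo "$suffix"
  done
}

resolution_chain www.fuzzywuzzy.com
```

Each printed name corresponds to a set of name servers the resolver may have to ask in turn: the root servers, then the com. servers, then the fuzzywuzzy.com. servers, whose answer for www.fuzzywuzzy.com. is authoritative.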
Delegation and Zones

- Decentralized administration through delegation
- Domains vs. zones
  - Domains define logical groupings of hosts
  - Zones contain the same information as the corresponding domain, except for delegated subdomains
- A DNS server is authoritative for the domains delegated to it
- Delegations are defined in the DNS tree

Delegation
Delegation of administrative duties is one of the key advantages of DNS over its predecessor, HOSTS.TXT. This is done by assigning responsibility for subdomains to other organizations; for example, ICANN assigned responsibility for the .gurulabs.com subdomain to Guru Labs, LC. One subtle distinction to keep in mind is the difference between "zones" and "domains." Domains represent logical groupings of hosts; all commercial hosts could belong to the .com domain, all educational organizations could belong to the .edu domain, etc. Zones, however, represent the administrative duties associated with their corresponding domain or subdomain. For example, in the United States, NeuStar administers the .us domain and the corresponding us. zone. Under the .us domain exist subdomains for all 50 states, such as .ca.us (California) and .ut.us (Utah). The concept of delegation holds true throughout the Internet, starting from the root (".") servers. Delegated zones can be managed separately from their parent zones. A couple of good examples are www.l.google.com and mail.yahoo.com.
2-6
Server Roles Types of name servers Authoritative servers Primary masters (masters) Secondary masters (slaves) Caching only Zone transfers keep slaves in sync with masters A DNS server can be authoritative for many different domains Limited only by resources (CPU, memory, bandwidth, etc.) A single name server can serve different roles BIND DNS Server Types It is important to understand that from the DNS namespace perspective, all servers that are authoritative for a domain are treated equally. To help manage and keep all the servers synchronized, the BIND name server uses the concepts of primary master and secondary master (sometimes called slave) servers. Primary master servers are those which read the data for a zone from a local file. All edits and administration take place on the primary master. Secondary master servers, while still being authoritative, obtain the zone data from another authoritative server for the zone. The other server may be either the primary master server or another secondary master server. The transfer of zone data from the master server to a secondary master server is known as a zone transfer. Roles Are Defined at the Zone Level It is important to keep in mind that most name servers fill a variety of roles. They may be primary master servers for some zones, while simultaneously being secondary master servers for other zones and caching servers for all other zones. DNS Record Caching In addition to being authoritative for domains, name servers also cache DNS queries. Resolving host names is a lengthy iterative process; the first time a user tries to load http://www.yahoo.com/, at least three name servers will be queried (root, .com, and .yahoo.com). To avoid having to repeat all those queries every single time a user wants to go to Yahoo!, servers typically cache the results of DNS lookups. The second time a user goes to http://www.yahoo.com/, the DNS information for www.yahoo.com will be retrieved from the cache.
If the user then wants to go to http://mail.yahoo.com/, the cache will already know where to find the name server responsible for the .yahoo.com domain, so the root and .com name servers will not have to be queried. Because of the speed benefits, even sites which have no other need for a name server will commonly set up caching-only name servers. [Figure: authoritative name servers for example.com — a primary master replicating zone data to secondary masters (slaves) via zone transfer]
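The decision to pull a zone transfer is driven by the SOA serial number: a slave polls the master's SOA record and transfers the zone when the master's copy is newer. As a minimal sketch (the function name is illustrative, not a BIND internal, and real servers compare serials using RFC 1982 serial-number arithmetic, which tolerates 32-bit wraparound; a plain comparison is shown for clarity):

```python
def zone_transfer_needed(master_serial, slave_serial):
    """Return True when the slave's copy of the zone is out of date.
    Simplified: BIND uses RFC 1982 serial arithmetic; plain '>' is
    enough to illustrate the polling decision."""
    return master_serial > slave_serial

# Typical YYYYMMDDnn-style serials:
zone_transfer_needed(2012050402, 2012050401)  # -> True: pull the zone
zone_transfer_needed(2012050402, 2012050402)  # -> False: already in sync
```

This is why bumping the serial after every zone-file edit matters: slaves that see an unchanged serial will never transfer the updated zone.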
Resolving Names Mapping name to IP address Recursion (recursive resolution) Iteration (iterative resolution) Mapping IP address to name Caching DNS record TTL trade-off Types of DNS Queries Resolution of an FQDN to an IP address can be either recursive or iterative. Normally, queries that a host makes to a DNS server are recursive, and queries between DNS servers are iterative. In recursive resolution, a client queries a name server for the IP of an FQDN, and the name server can respond in two ways: it can either send the IP address, which it will have to determine itself, or it can send an error saying that it cannot resolve that FQDN. With an iterative query, the name server can respond in one of three ways: it can send the IP if it is known; it can send an error saying that the FQDN cannot be resolved; or it can refer the client to a different name server (at which point the process starts over). The reverse process, that of resolving IP addresses to FQDNs, is not as straightforward, but is handled by a rather clever method. Every IP address is also part of another domain, the .in-addr.arpa domain, which can be queried just like any other domain. Caching Records and TTL Values Resolution of names relies heavily on caching of previous results. One important detail to keep in mind when managing DNS records is that data is cached based on its Time To Live, or TTL. The TTL determines how long the data can be cached before the caching name server must discard it. The TTLs for both positive and negative caching are adjustable by the zone administrator. For negative results, the TTL is configured in the SOA record for the zone. The TTL for positive caching is adjustable in the zone file using the $TTL variable and can be set on individual records. The TTL for each record is one of the parameters returned in answer to every DNS query. When setting the TTL, keep in mind that doing so always involves a trade-off between accuracy and performance.
Long TTLs improve network performance but mean that out-of-date DNS records will be cached for a long time, while short TTLs mean that cached DNS records are more likely to be accurate, but that DNS queries will have to be made more frequently. Negative Record Caching (RFC 2308) In addition, newer name servers can also do negative caching, meaning that they cache not only successful DNS queries (such as the fact that www.yahoo.com has, amongst other addresses, an IP of 66.94.230.50), but also unsuccessful DNS queries (such as the fact that fake.example.com does not exist, at least at the time this book was written). The benefits in terms of increased speed and reduced network traffic across the whole Internet from caching negative DNS queries are enormous.
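The TTL mechanism described above can be sketched as a small cache whose entries expire. This is an illustrative model only (class and method names are hypothetical, not BIND's internals): an answer is stored together with an absolute expiry time, and once that time passes the record must be discarded and re-queried upstream.

```python
import time

class DNSCache:
    """Minimal sketch of TTL-based positive caching, as a caching
    name server might do it. Illustrative only."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock            # injectable clock, useful for testing
        self._store = {}               # name -> (address, absolute expiry time)

    def put(self, name, address, ttl):
        # Record the answer along with the moment it must be discarded.
        self._store[name] = (address, self._clock() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None                # cache miss: must query upstream
        address, expires = entry
        if self._clock() >= expires:
            del self._store[name]      # TTL elapsed: discard the stale record
            return None
        return address
```

With a 3600-second TTL, repeat lookups within the hour are served from the cache; afterwards the record is gone and the upstream query cost is paid again — exactly the accuracy-versus-performance trade-off discussed above.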
DNS Resolution for www.gurulabs.com
1. What's the IP for www.gurulabs.com? (client to Server A)
2. What's the IP for www.gurulabs.com? (Server A to a root name server)
3. We don't know. Go talk to the com. name servers. Here is a list of them. (Server A caches this information.)
4. What's the IP for www.gurulabs.com? (Server A to a com. name server)
5. We don't know. Go talk to the gurulabs.com. name servers. Here is a list of them. (Server A caches this information.)
6. What's the IP for www.gurulabs.com? (Server A to a gurulabs.com. name server)
7. The IP address for www.gurulabs.com is 67.137.148.8. (Server A caches the answer.)
8. Server A returns the answer to the requesting client.
[Figure: the query flow between the PC, Server A, and server groups B, C, and D. A: client-configured DNS server; B: root name servers; C: .com name servers; D: GuruLabs.com name servers]
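The referral chain in steps 2 through 7 can be sketched as a loop over a toy data set. This is a simplification for illustration (the server names and the SERVERS table are invented; real servers return NS record sets, not single names), but the control flow — answer if authoritative, otherwise refer to a more specific zone's servers — is the iterative process shown above.

```python
# Hypothetical referral data: each server either answers authoritatively
# or refers the resolver to the name server for a more specific zone.
SERVERS = {
    "root":        {"refer": {"com.": "com-ns"}},
    "com-ns":      {"refer": {"gurulabs.com.": "gurulabs-ns"}},
    "gurulabs-ns": {"answer": {"www.gurulabs.com.": "67.137.148.8"}},
}

def resolve(name, server="root"):
    """Follow referrals from the root until some server answers."""
    while True:
        data = SERVERS[server]
        if name in data.get("answer", {}):
            return data["answer"][name]        # step 7: authoritative answer
        for zone, next_server in data.get("refer", {}).items():
            if name.endswith(zone):
                server = next_server           # steps 3 and 5: follow referral
                break
        else:
            raise LookupError(name)            # no referral and no answer
```

Server A would additionally cache each referral and the final answer, which is why only the first lookup walks the whole chain.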
Resolving IP Addresses Why resolve IP addresses? Authorization based on hostname or domain name Analyzing log files Sorting nodes by their IP address in-addr.arpa. subdomain Provides for logical delegation on class boundaries The .in-addr.arpa Domain In addition to mapping names to addresses, it is often desirable to map addresses to names. Perhaps you're an administrator looking through server logs, and you want to know that the cracker attempting to break into your system from 192.168.0.254 was coming from server1.example.com; or perhaps you're running TCP Wrappers or other software which does an IP address to hostname resolution before permitting authentication. Resolution of IP addresses into names is typically referred to as reverse lookup or reverse resolution. It also uses DNS, by querying the special .in-addr.arpa domain. All IPv4 addresses on the Internet have a corresponding node in the .in-addr.arpa domain. The corresponding node for a given address is obtained by reversing the order of the octets in the IP address and using the result as a hostname within the .in-addr.arpa domain. For the previous example of 192.168.0.254, the corresponding node in the .in-addr.arpa domain is 254.0.168.192.in-addr.arpa. Resolution Is Performed in the Normal Fashion Addresses in the .in-addr.arpa domain are resolved in the standard fashion. When querying DNS for the address 254.0.168.192.in-addr.arpa, the resolver will first ask the root server for the address of the .in-addr.arpa name server. It will then ask the .in-addr.arpa server for the address of the name server which is authoritative for the 192.in-addr.arpa. zone (in other words, the name server which knows about all hosts in the 192.0.0.0/8 netblock). It will then ask the name server for the 192.in-addr.arpa. domain for the address of the authoritative server for the 168.192.in-addr.arpa.
zone. 168.192.in-addr.arpa. is the domain for all addresses in the 192.168.0.0/16 netblock; that particular netblock has been allocated to example.com in the classroom, so the authoritative name server for the 168.192.in-addr.arpa. zone will be the classroom name server. The resolver will connect to that name server and query it for 254.0.168.192.in-addr.arpa, and the name server will respond that 254.0.168.192.in-addr.arpa has the hostname server1.example.com. IPv6 DNS Namespace While IPv4 uses the .in-addr.arpa domain, IPv6 uses the ip6.arpa domain, with one label per nibble. For example: 6.6.6.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.8.1.c.0.0.0.b.0.a.e.f.f.3.ip6.arpa
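The octet-reversal rule described above is mechanical enough that Python's standard ipaddress module implements it directly; a quick sketch for checking a reverse-lookup name (not part of the course tooling, just a convenient way to verify the construction by hand):

```python
import ipaddress

def reverse_name(addr):
    """Return the reverse-lookup node for an address: octets reversed
    under in-addr.arpa for IPv4, nibbles reversed under ip6.arpa for IPv6."""
    return ipaddress.ip_address(addr).reverse_pointer

reverse_name("192.168.0.254")  # -> '254.0.168.192.in-addr.arpa'
reverse_name("192.0.2.53")     # -> '53.2.0.192.in-addr.arpa'
```

For IPv6 addresses the same call expands every address to 32 reversed nibble labels under ip6.arpa, matching the form shown above.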
[Figure: the in-addr.arpa node for the 192.0.2.53 IP address — the path from the root (" ") through arpa, in-addr, 192, 0, 2, and 53 (each label one of 0–255) down to the hostname station53.example.com]
Basic BIND Administration Primary daemon: named start/stop/restart/reload/status via SysV init script Fine-grained control of running daemon via rndc Logs to /var/log/messages debug level set with named -d level debug level changed with rndc trace [level] toggle logging of queries with rndc querylog SELinux named_selinux(8) named_{conf,zone,cache,var_run}_t Managing the named Process The daemon which implements BIND is called named. Basic administration of named is managed by its SysV init script, which supports the standard commands to start and stop the daemon as well as a few other management commands: # service named start... output omitted... # service named status version: 9.7.0-P2-RedHat-9.7.0-5.P2.el6 CPUs found: 2 worker threads: 2 number of zones: 18 debug level: 0 xfers running: 0 xfers deferred: 0... snip... In addition, named can be managed while running by using one of the name daemon controller utilities. For BIND 8, this utility is called ndc, while on BIND 9 this utility is called rndc. One advantage of rndc is its use of keys for authentication. The rndc utility uses this feature to allow secure remote administration of a BIND 9 server. It connects to a running named process, using either a Unix domain socket to control named running on the local machine, or a TCP socket to control named running on remote systems. In BIND 9, this control channel is authenticated, greatly decreasing the security risks posed by it. The rndc command can be used for finer-grained control of the running named process. For example, service named reload will reload all zones, but rndc reload zone_name will reload just the specified zone.
The following table shows examples of other commands (a complete list is produced by running rndc without any arguments):

Option            Description
reload zone       reload the specified zone
freeze|thaw zone  suspend or enable updates for a dynamic zone
reconfig          reload configuration file and any new zones
stats             write server statistics to the log file
querylog          toggle query logging on or off
dumpdb            dump the cache to a file
stop              save any pending updates and stop the server
halt              stop the server immediately
trace level       increase debugging by the specified level
notrace           turn off debugging
flush             flush the server's caches
status            display the server status
Configuring the Resolver Client needs the IP address of at least one name server, and optionally one or more search suffixes /etc/resolv.conf search domain.tld nameserver w.x.y.z Static vs. DHCP configuration Client Configuration DNS operates via a client-server architecture. The name server supplies DNS information in response to queries from a client, the resolver. On Unix machines, the resolver is configured in the file /etc/resolv.conf, which tells the client which DNS servers to use and which domains to search when provided with bare host names rather than with fully qualified domain names. The search Directive When given a hostname, the resolver will, by default, append the local host's domain name to the hostname and then query the name servers for that fully qualified domain name. This behavior can be overridden by specifying a search path, listing domain names which should be appended to bare host names. This is done with statements like: File: /etc/resolv.conf search domain1.com domain2.com This tells the resolver, when asked to resolve a bare hostname, to first search for host.domain1.com and then to search for host.domain2.com if host.domain1.com cannot be found. The nameserver Directive Up to three name servers can be defined in the /etc/resolv.conf file. Name servers should be selected which are well-connected via separate networks and which are geographically dispersed. Name servers are defined with statements like: File: /etc/resolv.conf nameserver 10.100.0.254 nameserver 82.165.40.134 nameserver 192.128.167.77 When given a fully qualified domain name, the resolver will connect to the name servers listed, in the order in which they are listed, and query them for the IP address of that fully qualified domain name until it gets a positive response or an authoritative negative response. Static vs. DHCP Configuration Most DHCP clients configure /etc/resolv.conf automatically, providing both a useful search path and nameserver(s). When using static IP addresses, the file must be manually configured for the system resolver to be able to find any nameservers.
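The effect of the search directive can be sketched as generating the list of candidate FQDNs the resolver will try for a given input. This is a simplified model (the function name is invented, and real resolvers also apply an "ndots" heuristic for names that already contain dots):

```python
def candidate_names(hostname, search_list):
    """Sketch of search-list expansion: a bare hostname has each search
    suffix appended in order; a fully qualified name (trailing dot) is
    tried verbatim. Simplified relative to a real stub resolver."""
    if hostname.endswith("."):
        return [hostname]
    return [f"{hostname}.{suffix}" for suffix in search_list]

candidate_names("host", ["domain1.com", "domain2.com"])
# -> ['host.domain1.com', 'host.domain2.com']
```

So with "search domain1.com domain2.com" in effect, a lookup of the bare name host queries host.domain1.com first and falls back to host.domain2.com, exactly as described above.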
Testing Resolution ping, telnet host dig Querying a specific server nslookup Interactive vs. command line Testing Configurations Testing DNS resolution can involve several different tools. Simple applications like ping or telnet can be used just to see whether the system resolver can resolve anything at all: $ ping foo.example.com $ telnet foo.example.com host and dig can also be used to query specific servers. If no nameserver is specified on the command line, they both will use the nameserver lines in /etc/resolv.conf. Both of these forms query the name server ns1.gurulabs.com about the host www.gurulabs.com: $ host www.gurulabs.com ns1.gurulabs.com... output omitted... $ dig @ns1.gurulabs.com www.gurulabs.com... output omitted... The following examples show queries with the host and dig commands: $ host www.gurulabs.com www.gurulabs.com has address 67.137.148.8 $ host gurulabs.com gurulabs.com has address 67.137.148.8 gurulabs.com mail is handled by 5 mail.gurulabs.com. Notice that the dig command outputs results in "zone file" format by default. $ dig www.gurulabs.com ; <<>> DiG 9.7.0-P2 <<>> www.gurulabs.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55825 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0 ;; QUESTION SECTION: ;www.gurulabs.com. IN A ;; ANSWER SECTION: www.gurulabs.com. 3600 IN A 64.245.157.8 ;; AUTHORITY SECTION: gurulabs.com. 172800 IN NS ns2.p02.dynect.net. gurulabs.com. 172800 IN NS ns3.p02.dynect.net. gurulabs.com. 172800 IN NS ns1.p02.dynect.net. gurulabs.com. 172800 IN NS ns4.p02.dynect.net. 
;; Query time: 640 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Mar 24 12:04:58 2011 ;; MSG SIZE rcvd: 136 One important difference between dig and host is that host will automatically use the search list specified in resolv.conf, and dig will not (although both commands have options to control this behavior).
BIND 4, BIND 8, and BIND 9 provide the nslookup command, which can be used either interactively or non-interactively, such as: $ nslookup > www.kersplat.com Server: server1.example.com Address: 10.100.0.254#53 Non-authoritative answer: Name: www.kersplat.com Address: 205.178.145.166 > exit This example shows a non-interactive invocation of nslookup: $ nslookup www.kersplat.com Server: server1.example.com Address: 10.100.0.254#53 Non-authoritative answer: Name: www.kersplat.com Address: 205.178.145.166 This example shows the result of one of the more common errors seen in BIND DNS zone files. The data portion of the CNAME record for the zulu host is missing the terminating dot (period) and has consequently had the origin string appended, causing the domain portion (example.com) to be repeated: # dig zulu.example.com... snip... ;; QUESTION SECTION: ;zulu.example.com. IN A ;; ANSWER SECTION: zulu.example.com. 86400 IN CNAME station1.example.com.example.com. ;; AUTHORITY SECTION: example.com. 600 IN SOA localhost. root.localhost. 2007050401 28800 14400 3600000 600 ;; Query time: 1 msec ;; SERVER: 10.100.0.254#53(10.100.0.254) ;; WHEN: Tue Apr 20 13:34:05 2010 ;; MSG SIZE rcvd: 119
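The zone-file rule that produces the duplicated-domain error above can be sketched in a few lines (the function name is hypothetical; the rule itself — names without a trailing dot are relative and get the origin appended — is the standard zone-file behavior):

```python
def canonicalize(rdata, origin="example.com."):
    """Sketch of zone-file name handling: absolute names (trailing dot)
    are used verbatim; relative names have the origin appended."""
    if rdata.endswith("."):
        return rdata                   # absolute: used as written
    return f"{rdata}.{origin}"         # relative: origin appended

# The buggy CNAME data lacked the trailing dot:
canonicalize("station1.example.com")   # -> 'station1.example.com.example.com.'
# The corrected record, with the dot, is left alone:
canonicalize("station1.example.com.")  # -> 'station1.example.com.'
```

Whenever a dig answer shows a doubled domain like station1.example.com.example.com., a missing terminating dot in the zone file is the first thing to check.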
Lab 2 Estimated Time: 30 minutes Task 1: Configuring a Slave Name Server Page: 2-17 Time: 30 minutes Requirements: b (1 station) c (classroom server)
Objectives Install the BIND name server and configure it to act as a slave for the example.com and the 0.100.10.in-addr.arpa classroom domains. Requirements b (1 station) c (classroom server) Relevance Slave name servers provide redundancy and load balancing for DNS record resolution. Lab 2 Task 1 Configuring a Slave Name Server Estimated Time: 30 minutes 1) The following actions require administrative privileges. Switch to a root login shell: $ su -l Password: makeitso 2) Install the BIND packages onto your system: # yum install -y bind bind-chroot... output omitted... 3) Examine what files are included in the bind package: # rpm -qil bind | less... output omitted... In addition to the core components, RHEL6 includes many DNSSEC related binaries, and extensive documentation (ARM, RFCs, sample configs, man pages, etc.) 4) Examine the changes made by the bind-chroot package: # rpm -ql bind-chroot | less /var/named/chroot /var/named/chroot/dev /var/named/chroot/dev/null... snip... # rpm -q --scripts bind-chroot | sed -n '/postinstall/,/:;/p'... output omitted... Basic directory structure provided by package. /etc/sysconfig/named modified by postinstall script.
The BIND SysV init script (/etc/init.d/named) sources /etc/sysconfig/named and reacts to the $ROOTDIR variable. 5) Configure the system to act as a slave server for the example.com domain by adding the following to the end of the existing configuration file: File: /etc/named.conf + zone "example.com" { + type slave; + file "slaves/example.com.zone"; + masters { 10.100.0.254; }; + }; 6) In a dedicated terminal window (designated [X2]) monitor the system log: [X2]# tail -f /var/log/messages Leave this terminal window open (and running tail) for the duration of the lab task. BIND is good at reporting configuration and other problems, and this can help you catch problems quickly. 7) Validate the basic syntax and structure of your new configuration file using the supplied BIND program: # named-checkconf && echo OK OK # chkconfig named on If errors are reported, re-open the file and repair it as needed. You may get notified of newly changed defaults, but should not see any error messages. An example of a notification would be the following: the default for the 'auth-nxdomain' option is now 'no' 8) In a separate terminal window, start the name server: # service named start Starting named: [ OK ]
9) Verify that the needed files and directories are mounted into the chroot: # mount | grep named /etc/named on /var/named/chroot/etc/named type none (rw,bind) /var/named on /var/named/chroot/var/named type none (rw,bind) /etc/named.conf on /var/named/chroot/etc/named.conf type none (rw,bind) /etc/named.rfc1912.zones on /var/named/chroot/etc/named.rfc1912.zones type none (rw,bind) /etc/rndc.key on /var/named/chroot/etc/rndc.key type none (rw,bind) /usr/lib/bind on /var/named/chroot/usr/lib/bind type none (rw,bind) /etc/named.iscdlv.key on /var/named/chroot/etc/named.iscdlv.key type none (rw,bind) 10) See what files are visible to the BIND process: # ls -R /proc/$(pgrep named)/root/... output omitted... 11) Examine the contents of the system log by looking at the running tail command in terminal [X2]. There should be a line similar to: named[27114]: transfer of 'example.com/IN' from 10.100.0.254#53: Transfer completed: 1 messages, 509 records, 11994 bytes, 0.045 secs (266533 bytes/sec) This indicates that the transfer of the zone file from 10.100.0.254 (server1) was successful. 12) Examine the contents of the zone database file that was created from the zone transfer from the master server: # less /var/named/slaves/example.com.zone... output omitted... Looking at the contents and syntax of this file is a great prelude to the upcoming DNS lab. 13) Run a simple query to see if name resolution is working: # host station1 station1.example.com has address 10.100.0.1 The lookup returned the correct answer, but which name server is providing it?
14) Try another lookup using the verbose option to see (among other things) the server that gave the answer: # host -t A -v station1 Trying "station1.example.com." ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31519 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1 ;; QUESTION SECTION: ;station1.example.com. IN A ;; ANSWER SECTION: station1.example.com. 86400 IN A 10.100.0.1 ;; AUTHORITY SECTION: example.com. 86400 IN NS server1.example.com. ;; ADDITIONAL SECTION: server1.example.com. 86400 IN A 10.100.0.254 Received 92 bytes from 10.100.0.254#53 in 0 ms... output omitted... The -t A option limits the query to just an address record (otherwise it attempts several queries by default). Notice the IP address of the server that returned the answer. 15) Run the query again, this time specifying the slave name server running on your local system for name resolution: # host station1 localhost Using domain server: Name: localhost Address: ::1#53 Aliases: station1.example.com has address 10.100.0.1 16) Use a text editor to modify the /etc/resolv.conf file so that it defaults to using your own server for name resolution: File: /etc/resolv.conf - nameserver 10.100.0.254 + nameserver 127.0.0.1
17) Verify that only the correct entries are present. If not, modify the /etc/resolv.conf file again. # tail -n2 /etc/resolv.conf search example.com nameserver 127.0.0.1 18) If your station is currently configured to use DHCP, any changes made to /etc/resolv.conf will be lost when the DHCP lease is renewed. To prevent this, disable DNS configuration updating from DHCP, then restart your network interface: # echo 'PEERDNS="no"' >> /etc/sysconfig/network-scripts/ifcfg-eth0 # echo 'DNS1="127.0.0.1"' >> /etc/sysconfig/network-scripts/ifcfg-eth0 # service network restart... output omitted... Some classrooms may have more than one NIC. If this is the case on your system and you are uncertain which interface is being used, ask the Instructor for help with this step. 19) Verify that names in the example.com domain are now resolved by this system's name server by default: # host -v station1 | tail -n 1 Received 77 bytes from 127.0.0.1#53 in 3 ms Notice the IP address of the server that returned the answer. 20) Attempt a reverse lookup for this station's IP address: # host 10.100.0.1 Host 1.0.100.10.in-addr.arpa not found: 3(NXDOMAIN) This command fails because the name server is not currently loading a zone file for this domain. 21) Configure the name server as a slave server for the appropriate reverse lookup zone by adding these lines to the end of the named.conf configuration file:
File: /etc/named.conf zone "example.com" { type slave; file "slaves/example.com.zone"; masters { 10.100.0.254; }; }; + zone "0.100.10.in-addr.arpa" { + type slave; + file "slaves/0.100.10.in-addr.arpa.zone"; + masters { 10.100.0.254; }; + }; 22) Verify that the configuration file does not contain typographical errors by running it through the BIND syntax checker: # named-checkconf If errors are reported, then double-check the configuration file and correct the errors. 23) Restart the named daemon so that the new configuration takes effect: # service named restart Stopping named: [ OK ] Starting named: [ OK ] 24) Try the reverse lookup of this station's IP address again. This time it should succeed: # host 10.100.0.1 1.0.100.10.in-addr.arpa. domain name pointer station1.example.com.
Chapter 8 USING APACHE
Content HTTP Operation.................................... 2 Apache Architecture................................ 4 Dynamic Shared Objects............................ 6 Adding Modules to Apache.......................... 7 Apache Configuration Files.......................... 8 httpd.conf Server Settings......................... 9 httpd.conf Main Configuration..................... 10 HTTP Virtual Servers............................... 11 Virtual Hosting DNS Implications.................... 12 httpd.conf VirtualHost Configuration................ 13 Port and IP based Virtual Hosts...................... 14 Name-based Virtual Host........................... 15 Apache Logging................................... 16 Log Analysis...................................... 17 The Webalizer..................................... 18 Lab Tasks 19 1. Apache Architecture............................. 20 2. Apache Content................................. 23 3. Configuring Virtual Hosts......................... 25
HTTP Operation HTTP Version 1.1 Stateless Requires the Host: header Client Request Structure Request Method (GET, POST, etc.) Headers Body (optional) Server Response Structure Response code (200, 300, 400, etc.) Headers Body HTTP v1.1 HTTP is the protocol that powers the World Wide Web. It is a simple, stateless file transfer protocol consisting of a client-side request and a server-side response. Both the request and the response are structured in three parts: a request method/response code, a header section followed by a blank line, and a body section. Use the netstat command to see connections initiated by your web browser. For example, this command shows an established connection from a host to http://www.google.com/: $ netstat -tp tcp 0 0 entmoot:32931 www.google.com:80 ESTABLISHED You can view the server headers using the HTTP HEAD request method. You can connect to web servers and manually issue GET requests using telnet... $ telnet www.google.com 80 Trying 216.239.53.100... Connected to www.google.com (216.239.53.100). Escape character is '^]'. GET / HTTP/1.1... snip......or using the wget retrieval tool with the --server-response and --spider options: $ wget --server-response --spider http://www.google.com... output omitted... You can also use the w3m text-mode web browser, which allows you to do HEAD requests using the -dump_head option: $ w3m -dump_head http://www.google.com/... output omitted... Finally, the Lynx text-mode web browser allows for HEAD requests using the -head option: $ lynx -head http://www.google.com/... output omitted... 
Client Request GET / HTTP/1.1 Host: www.linuxtraining.com User-Agent: Mozilla/5.0 (X11; U; Linux i686) Accept: text/xml, image/png, image/jpeg, image/gif Accept-Language: en-us, en; Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66 Keep-Alive: 300 Connection: keep-alive Server Response HTTP/1.1 200 OK Date: Fri, 12 Jan 2007 01:24:27 GMT Server: Apache Set-Cookie: Apache=66.62.77.11.279431042161867442; 8-2
path=/; expires=Sun, 17-Jan-2038 01:24:27 GMT Last-Modified: Thu, 11 Jan 2007 22:44:48 GMT ETag: "bea6a-46fd-3e1dfb60" Accept-Ranges: bytes Content-Length: 18173 Connection: close Content-Type: text/html
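The three-part request structure shown above (request line, headers terminated by a blank line, optional body) can be sketched as a small request builder. This is an illustrative helper, not a real HTTP library; it assembles exactly the kind of bytes one would type into the telnet session shown earlier.

```python
def build_request(host, path="/", method="GET", headers=None):
    """Sketch of assembling an HTTP/1.1 request: request line, then
    headers (Host: is mandatory in HTTP/1.1), then a blank line.
    A simple GET carries no body, so the message ends there."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    lines.append("")          # blank line terminates the header section
    lines.append("")          # no body for a simple GET/HEAD
    return "\r\n".join(lines)

build_request("www.linuxtraining.com")
# -> 'GET / HTTP/1.1\r\nHost: www.linuxtraining.com\r\n\r\n'
```

Sending the returned string over a TCP connection to port 80 would elicit a response structured the same way: status line, headers, blank line, body.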
Apache Architecture Maintained by the Apache Software Foundation Membership by invitation only, based on skills and contributions More Apache servers than all other HTTP servers combined Over 60% market share Apache runs as a stand-alone network service daemon Multiple process/thread models (MPMs) available Extending Apache via Modules Apache core modules http://httpd.apache.org/docs/2.2/mod/ Third-party modules http://modules.apache.org/ Apache History In 1995, most people running web servers were using NCSA's httpd 1.3 software, which NCSA had stopped actively maintaining. A group of web server administrators led by Brian Behlendorf began collecting all the various patches they were individually using to make NCSA's httpd more usable, and "A PAtCHy server" was born. Shortly thereafter the Apache Group re-wrote Apache from the ground up. Less than a year after it was created, Apache had become the most commonly used web server, and it has continued to pull ahead of other competitors. Apache holds more than 60% market share on the Internet, according to the NetCraft survey, which can be found online at http://www.netcraft.com/archives/web_server_survey.html. Apache Availability Apache remains one of the most actively developed open source projects. Its success has spawned a number of important web projects, including the Java-based Jakarta projects (Ant, Tomcat, Struts, Taglib), XML parsers and libraries (Xerces, Xalan, FOP), as well as the popular web scripting language, PHP. In addition to its Unix roots, which account for the vast majority of web servers, Apache runs on several non-Unix platforms. The Apache web server can be downloaded freely from the Apache Software Foundation (ASF) at http://www.apache.org/. Comprehensive documentation is also available at the site. General Architecture Apache's success is in large part due to its architecture. Two key concepts went into the development of Apache: modularity and efficiency.
Modularity: Dynamic Shared Objects Apache was designed to be a small, fast web server. At the same time, the server needed to be extensible, able to run the latest features in the fast-changing landscape of the Internet. To accommodate these conflicting requirements, the developers of Apache built a modular server. Only the most basic web server functions exist in the core Apache server. Other functionality can be added by loading dynamic modules. There are now hundreds of modules available, implementing support for a vast range of functions: from server-side scripting languages like PHP, JSP, and ASP, to authentication against LDAP and SQL servers, audio streaming, and even intrusion detection. Efficiency: Multi-process/Multi-thread To better support native operating system multi-threading features, Apache developers have modularized Apache's multi-processing engine. A number of Multi-Processing Modules (MPMs) exist to implement native multi-threading features for specific operating systems and architectures. Historically Apache had a single method of operation. A child process
was assigned to exclusively serve a single connection. This turned out to be a highly reliable method of operation. This is due to the standard memory protection of Linux, which isolates each child process from other processes, coupled with the fact that child processes are replaced after a defined number of connections, which means that memory leaks never get too severe. This method is known as the prefork MPM. A more scalable (yet less robust and potentially incompatible; see below) method known as the worker MPM creates multiple child processes which each spawn multiple threads. Each thread handles a single connection. There is an experimental MPM variant of worker known as event that increases scalability even more by handling network sockets more efficiently. However, it sacrifices compatibility with mod_ssl and other input filters. Non-threadsafe Modules If you use the threaded worker MPM, you must make sure that all of the modules that you load into Apache are also threadsafe (meaning the module's source code has been synchronized to work with multiple threads). Otherwise a condition called "deadlock" can occur, in which a thread waits for a signal that the program has no mechanism to send, and the program hangs. Currently many of the third-party modules, and even some standard Apache modules, are not thread-safe, and deadlocks are common. Work is being done to fix the problems, but the Apache Software Foundation recommends using the prefork MPM when third-party modules are required. Enabling the worker MPM Currently the MPM is compiled into the httpd binary, so an entirely different binary must be used to switch MPMs. Fortunately, modern Linux distributions provide alternative binaries. (This is due to change in Apache 2.4, where a single httpd binary can load an MPM at runtime.) Use the following procedure to switch to the worker MPM: The httpd package provides multiple httpd binaries.
To switch to the worker MPM, make the following edit:
File: /etc/sysconfig/httpd
- #HTTPD=/usr/sbin/httpd.worker
+ HTTPD=/usr/sbin/httpd.worker
Note that the /usr/sbin/httpd.event binary is available as well.
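Since the init script launches whichever binary the HTTPD= line names, it can help to read that setting back before restarting. A minimal sketch; the helper function name is our own invention, not part of the course or the init script:

```shell
# Sketch: report which httpd binary the init script will launch,
# based on the HTTPD= setting in /etc/sysconfig/httpd.
active_httpd() {
    conf=${1:-/etc/sysconfig/httpd}
    # the last uncommented HTTPD= assignment wins; default to /usr/sbin/httpd
    bin=$([ -r "$conf" ] && sed -n 's/^HTTPD=//p' "$conf" | tail -n 1)
    echo "${bin:-/usr/sbin/httpd}"
}
```

After confirming the binary, `httpd.worker -l` (as shown later with `httpd -l`) lists which MPM was compiled into it.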
Dynamic Shared Objects
Dynamically Load Modules at Run Time: update or add modules without recompiling Apache; prototype and test features and functionality. DSO support is provided by mod_so.c; mod_so must be statically compiled into the Apache core. List the modules statically compiled into the Apache core:
# /usr/sbin/httpd -l
Third-party Modules: modules maintained outside of the Apache Software Foundation: http://modules.apache.org/
Core and MPM Modules
core - Core Apache HTTP Server features that are always available (must be compiled in).
mpm_common - A collection of directives that are implemented by more than one multi-processing module (MPM).
perchild - MPM allowing daemon processes serving requests to be assigned a variety of different UIDs.
prefork - Implements a non-threaded, pre-forking web server. This is how Apache 1.3 works.
threadpool - Yet another experimental variant of the standard worker MPM.
mpm_winnt - MPM optimized for WinNT (NT/2000/XP/2003).
worker - MPM implementing a hybrid multi-threaded, multi-process web server.
Important Modules Included in the Apache Distribution
mod_access - Provides access control based on client hostname, IP address, or other characteristics of the client request.
mod_alias - Provides for mapping different parts of the host filesystem in the document tree and for URL redirection.
mod_auth - User authentication using text files.
mod_auth_anon - Allows "anonymous" user access to authenticated areas.
mod_cgi - Permits execution of CGI scripts.
mod_dav - Distributed Authoring and Versioning (WebDAV) functionality.
mod_dir - Provides for "trailing slash" redirects and serving directory index files.
mod_imap - Server-side imagemap processing.
mod_include - Server-parsed HTML documents (also called Server Side Includes, or SSI for short).
mod_mime_magic - Determines the MIME type of a file by looking at the first few bytes of its content.
mod_rewrite - Provides a rule-based rewriting engine to rewrite requested URLs on the fly.
mod_so - Loading of executable code and modules into the server at start-up or restart time (must be compiled in).
mod_speling - Attempts to correct mistaken URLs that users might have entered by ignoring capitalization and by allowing up to one character of misspelling.
mod_ssl - Strong cryptography using the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols.
mod_status - Provides information on server activity and performance.
mod_suexec - Allows CGI scripts to run as a specified user and group.
mod_unique_id - Provides an environment variable with a unique identifier for each request.
mod_userdir - User-specific directories (i.e. http://www.example.com/~guru/).
mod_usertrack - Clickstream logging of user activity on a site.
Adding Modules to Apache
The APache extension (apxs) utility:
- Build modules outside of the Apache source tree
- Apache C header files; compiler and linker flags
- Matches the currently running core httpd binary
- SELinux Context: httpd_modules_t
- Edit the httpd.conf LoadModule, or optionally use -a or -A
- Module-specific configuration directives: conf.d/mod_modulename.conf
The APache extension Utility Apache modules are standard shared libraries (.so on Unix, .dll on Windows). In order for a library to be loaded and integrated into Apache's already running binary, the module has to be precisely compatible with the internal binary structures of the Apache program. Those structures can vary depending on how Apache was compiled, even including what flags were used with the compiler and linker. The APache extension utility (part of the Apache package) is compiled with Apache, and therefore the apxs command can be used to compile modules after-the-fact. To use this extension mechanism, the platform must support DSO module loading and the Apache binary must be built with the mod_so module. Check if the mod_so module has been statically built into the Apache binary with this command:
$ httpd -l
... output omitted ...
Using the APache extension Utility The APache extension utility can build the module and install the shared object file into the Apache library directory. The -a option causes the command to insert the required LoadModule directive into the httpd.conf file. If the -A option is used, the command will insert the directive commented out. An example of using this utility is:
# apxs -i -a -c mod_modulename.c
... output omitted ...
DSO Configuration File Any module-specific directives should be placed into a configuration file located in the conf.d/ directory. For example, for a module named mod_modulename, create a file named mod_modulename.conf (or, perhaps, modulename.conf). The filename is not important, but it must end with the .conf filename extension.
RHEL6 Apache Module RPM Packaging A number of pre-built modules that extend Apache can be installed via the yum command. These modules are included in these RPM packages:
- php, mod_perl
- mod_ssl, mod_python
- mod_auth_mysql, mod_auth_pgsql, mod_auth_kerb
- mod_dav_svn, mod_authz_ldap
Apache Configuration Files
The httpd.conf file is divided into three main sections:
Section 1: Global environment
Section 2: "Main" server configuration
Section 3: Virtual hosts
Apache configuration on RHEL6: /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/
SELinux Context: httpd_config_t
Apache Configuration Files The main configuration file for Apache is httpd.conf. This file allows for configuring global options, options that affect the main server, and virtual hosting options. Options include the port the server listens on, the location of the server's document root directory, and the MPM configuration. In general, the default Apache configuration provides a basic web server for the host, providing a single place to put files that can be shared publicly, though requiring an index file to be created. In some cases, a $HOME/public_html directory is enabled for users to share files. The configuration file on RHEL6 is located at /etc/httpd/conf/httpd.conf. Red Hat Enterprise Linux disables public_html directories by default. Section 1: Global Environment The global environment for the Apache configuration has options that affect the server itself. Directives such as MinSpareServers and KeepAliveTimeout tune the server's operation, and directives such as ServerName, Listen, and LoadModule configure the server's environment. Module-specific configuration files are loaded with the line:
Include conf.d/*.conf
Section 2: "Main" Server Configuration Directives in this section affect the main, or default, server environment as opposed to individual virtual host configurations. For example: specifying multiple <Directory> block directives to limit access to documents within the default server's shared document root. The default document root is /var/www/html/. No index file is provided. Web browsers requesting the document root result in an error, and a custom error page (titled "Test page") is displayed that explains the situation.
Section 3: Virtual Hosts Each configured virtual host may have its own DocumentRoot environment. The documents available within this environment can be controlled within the virtual host context. For example, you can specify multiple <Directory> block directives to limit access to documents within the virtual host's shared document root. 8-8
httpd.conf Server Settings
Important configuration directives include: ServerRoot; StartServers, MinSpareServers, MaxSpareServers; MaxClients; LoadModule; Listen; User/Group
SELinux: when protected, httpd processes run as the httpd_t type. Many related SELinux booleans affect the restrictions imposed on the process.
httpd.conf Section 1
ServerName The name and port that the server uses to identify itself. Example: www.example.com:80
ServerRoot The absolute path specifying the directory structure under which the Apache configuration and log files are kept. On Red Hat, this is usually set to /etc/httpd/.
StartServers Accepts a numeric argument specifying how many servers to start initially.
MinSpareServers/MaxSpareServers These directives take numerical arguments regulating the size of the idle Apache server pool. At all times, at least MinSpareServers will be held in reserve, and no more than MaxSpareServers will be running idle.
MaxClients Specifies the maximum number of simultaneous client connections that Apache will handle. Clients that connect when the server is "full" get a 503 (Service Unavailable) error message instead.
ServerLimit Specifies the maximum number of Apache processes which may run. This setting will prevent the MinSpareServers directive from causing Apache to launch more spares.
LoadModule Used to load modules into the server and activate them.
Listen Specifies the port(s) to which Apache should bind. By default, this is 80 for HTTP and 443 for HTTPS.
User/Group Specify the user and group (either by name or ID) the Apache process should run as. For security reasons, it's a good idea to create a special user and group for this.
Syntax for the LoadModule directive is:
LoadModule module-name filename
Apache module RPMs place their configuration files in the /etc/httpd/conf.d/ directory. Apache will include all files in the /etc/httpd/conf.d/ directory with names that end in .conf.
For example, to configure the SSL options, edit the /etc/httpd/conf.d/ssl.conf file installed by the mod_ssl package.
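Pulling the Section 1 directives above together, a prefork tuning block might look like the following. The numbers are illustrative starting points only, not recommendations; size them to available RAM and expected traffic:

```apache
# Illustrative values only -- tune for your hardware and load
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       256
Listen            80
User          apache
Group         apache
```

With prefork, MaxClients may not exceed ServerLimit, which is why the two are usually raised together.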
httpd.conf Main Configuration
Important configuration directives include: DocumentRoot (SELinux context for HTML files: httpd_sys_content_t); Alias url-path /path/to/{file,dir}; <Directory /path/to/dir></Directory>; <Location url-path></Location>; AccessFileName; MIME type definitions and handlers (/etc/mime.types); log file locations and formats; module-specific configuration blocks.
Shared Directives
DocumentRoot Specifies what filesystem path the root URL path maps to (the root of the site).
Alias url-path /path/to/{file,dir} Maps a URL into the filesystem namespace. Allows Apache to serve documents that exist outside of the DocumentRoot.
<Directory /path/to/dir> ... </Directory> Limits the scope of the configuration options contained within the block to just the filesystem path(s) specified.
<Location url-path> ... </Location> Limits the scope of the configuration options contained within the block to just the URL path(s) specified.
AccessFileName Specifies the name of the file in each directory which can set access control information. If omitted, the default name .htaccess is used.
TypesConfig MIME type definitions can extend Apache's support for additional file types, such as new compression algorithms, by defining handlers (external programs which can be launched to handle files for which Apache doesn't have built-in support). The definitions of available MIME types are found in the /etc/mime.types file.
AddType Can override the types defined in the TypesConfig specified file or add additional types.
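As an example of the shared directives above, the following fragment (the paths are hypothetical) maps a directory outside the document root into the URL namespace and scopes access options to it, using Apache 2.2 access-control syntax:

```apache
# Serve /srv/docs at http://server/docs/ even though it is outside DocumentRoot
Alias /docs/ "/srv/docs/"
<Directory "/srv/docs">
    Options Indexes
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```

On an SELinux-enabled host the directory would also need the httpd_sys_content_t context mentioned above before Apache could read it.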
EVALUATION COPY HTTP Virtual Servers Virtual Web Hosting Serving multiple domains from a single server HTTP 1.0: IP-based hosting HTTP 1.1: IP-based and name-based hosting Dynamic Mass Virtual Hosting Add directory to your filesystem No config changes No server restart Virtual Servers For a single machine to host several domains, it must be able to differentiate between inbound requests and route them to the appropriate virtual server. The two most common mechanisms used to differentiate the inbound requests are IP-based and Name-based Virtual Hosts. IP-based virtual hosts handle requests inbound to a specific IP address, whereas Name-based virtual hosts handle requests based on the Host: header specified in the inbound request by the web client. A third mechanism, Port-based virtual hosting, is less commonly used because it often requires manual manipulation of the URL by the client. The older HTTP 1.0 protocol does not provide the Host: header and is therefore limited to IP-based solutions for virtual hosting. On the other hand, the HTTP 1.1 protocol introduces, and requires, the Host: header, providing a way for the client application to specify the fully qualified domain name of the server to which it is connecting. This allows multiple virtual servers to all listen on the same IP and port and route requests to the appropriate virtual server by name only. See http://httpd.apache.org/docs-2.2/vhosts/ for detailed documentation on configuration of virtual hosts under Apache. Dynamic Mass Virtual Hosting An Internet Service Provider (ISP) or web hosting service will often use virtual servers to host multiple domains. Using the Apache web server, these service providers can add new virtual hosts by adding a directory to their filesystems. No configuration file changes are needed and therefore the web server does not have to be restarted. 
This technique involves using the mod_vhost_alias Apache module and then specifying the VirtualDocumentRoot and VirtualScriptAlias directives in the httpd.conf configuration file with variable placeholders specifying the domain name being served. For example, these configuration directives enable mass virtual hosting under the /srv/www/hosts/ directory:
File: httpd.conf
# Obtain the server name from the Host: header
UseCanonicalName Off
# Prefix log entries with virtual host name
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
CustomLog logs/access_log vcommon
# Use /srv/www/hosts/servername/html as documentroot
VirtualDocumentRoot /srv/www/hosts/%0/html
You can find complete details at http://httpd.apache.org/docs-2.2/vhosts/mass.html.
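With VirtualDocumentRoot in effect, "adding a directory" really is the whole provisioning step. A sketch of that step in shell; the helper function and the VHOST_BASE variable are our own, not part of mod_vhost_alias:

```shell
# Sketch: bring a new mass-virtual-host domain online by creating its
# per-domain document root. VHOST_BASE mirrors the /srv/www/hosts/
# prefix used in the VirtualDocumentRoot example above.
VHOST_BASE=${VHOST_BASE:-/srv/www/hosts}

newvhost() {
    # $1 is the fully qualified name from the Host: header (%0 above)
    dir="$VHOST_BASE/$1/html"
    mkdir -p "$dir" && echo "$dir"
}
```

No httpd.conf edit and no restart follows; the next request with a matching Host: header is served from the new directory (DNS for the name must still exist).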
EVALUATION COPY Virtual Hosting DNS Implications IP-based Virtual Hosting Multiple IP addresses bound to NIC Each DNS record maps to own unique IP address www.dom1.com -> 192.0.2.1 www.dom2.com -> 192.0.2.2 Name-based Virtual Hosting Single IP address Multiple DNS records map to the single IP address www.dom1.com -> 192.0.2.10 www.dom2.com -> 192.0.2.10 IP-based vs. Name-based Virtual Hosts Although virtual hosts can share a single server, each virtual host still requires its own DNS record. For IP-based hosting, each DNS record must point to one or more unique IP addresses. To conserve IP addresses, HTTP version 1.1 added the Host: header, which made name-based virtual hosting possible. Although virtual hosts may share a single IP when using name-based virtual hosting, clients are required to include the server's fully qualified domain name in the Host: header. This makes it possible to distinguish the intended virtual host for each request. Binding IP Addresses for IP-based Hosting Linux supports assigning multiple IP addresses to a single physical network card. One way to accomplish this is using interface aliases. For example, eth0 could have one address while the alias eth0:0 could have another. Under RHEL6 you can create interface aliases by populating files in the /etc/sysconfig/network-scripts/ directory. The configuration files are named by placing a colon and the alias number after the network device name of the physical interface you are aliasing. For example, if the device eth0 had four IP addresses bound to it, you would create these files: ifcfg-eth0 ifcfg-eth0:1 ifcfg-eth0:2 ifcfg-eth0:3 8-12 The alias configuration files contain the same information needed to configure a physical network interface: IP address, subnet mask, etc. HTTPS and Virtual Hosting The HTTPS protocol requires that an SSL handshake occur as the very first step. During the SSL handshake the web server sends the client its certificate. 
The certificate contains the FQDN of the web server. For name-based virtual HTTPS hosting to work, the web server would have to know which certificate to send to the client during the SSL handshake. The server doesn't know this until the client sends the Host: header transmitted after a successful SSL handshake. Therefore, virtual hosting SSL sites requires the use of unique IP addresses for each site.
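The interface alias files mentioned earlier contain the same fields as a normal interface configuration file. For example, a second address for IP-based hosting might be bound with a file like this (the addresses and values are illustrative):

```
# File: /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
IPADDR=192.0.2.2
NETMASK=255.255.255.0
ONBOOT=yes
```

Each additional IP-based virtual host gets its own ifcfg-eth0:N file of the same shape.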
httpd.conf VirtualHost Configuration
NameVirtualHost *:80
<VirtualHost *:80>
    ServerAdmin sinuhe@linuxtraining.com
    DocumentRoot "/srv/lt"
    ServerName www.linuxtraining.com
    ServerAlias linuxtraining.com
</VirtualHost>
VirtualHost Directives
The first VirtualHost entry overrides the Main configuration (and inherits what it does not override). It is the default fallback for requests that do not match any configured ServerName.
NameVirtualHost Specifies which IP address (or address:port pairs) you want to configure to serve multiple domains by name. This will usually be the address to which your name-based virtual hostnames resolve. You can enter the IP address or use the * wildcard.
<VirtualHost *:80> ... </VirtualHost> This block directive specifies the options for the virtual namespace. The IP address specified as an option inside the directive must exactly match the IP address or wildcard used in the NameVirtualHost directive. Within the <VirtualHost> directive's block you can specify any directives that can be used in section 2 of the httpd.conf file for configuring the main server's webspace.
ServerName Use this directive within the <VirtualHost> block to tell Apache which domain is being served by this <VirtualHost>. If the address:port pair specified in the <VirtualHost> block wrapped around this ServerName directive matches one of the NameVirtualHost values, then the value of ServerName will be used by Apache to identify which <VirtualHost> to use.
ServerAlias Specifies additional names the containing <VirtualHost> block applies to. ServerAlias can only be used within <VirtualHost> blocks.
DocumentRoot When used within the <VirtualHost> block, specifies which directory is being served by a specific virtual host.
Port and IP based Virtual Hosts
Port-based: multiple sites, single host; single IP address and host name; no additional DNS entries; client specifies port in URL; Listen and NameVirtualHost directives.
IP-based: multiple IP addresses, single host; multiple network interfaces or network interface aliases; one-to-one domain to IP address; VirtualHost directive for each virtual host.
Port-based Virtual Host Port-based virtual servers allow multiple web sites to run on a single host without having to update DNS records. The drawback is that the URL must specify the port number. Port-based virtual hosting can be used in conjunction with IP-based or Name-based virtual hosting, or standalone. To configure Apache for Port-based virtual hosting, specify Listen directives for each port number, and IP address if needed. This lends flexibility, allowing multiple IP addresses to be served on separate ports. This example illustrates a web server using a single IP address to serve two virtual hosts on different ports.
File: httpd.conf
Listen 80
Listen 3080
<VirtualHost 192.0.34.166:80>
    ServerName www.foo.com
    DocumentRoot /srv/www/virtual/foo.com-port80/docroot
    ... snip ...
</VirtualHost>
<VirtualHost 192.0.34.166:3080>
    ServerName www.foo.com
    DocumentRoot /srv/www/virtual/foo.com-port3080/docroot
    ... snip ...
</VirtualHost>
IP-based Virtual Host The best practice is to use IP-based virtual hosting from port 80. You can host multiple domains from the same server by binding the IP addresses of the domains to the network interface of the single host. These IP addresses must then exist in the DNS for the domains they serve. For example, the IP addresses 192.0.2.166 and 209.140.64.2 are either bound to multiple network cards on the host, or the host has a single network card bound with multiple IP address aliases. The following example illustrates the Apache httpd.conf file virtual host configuration for two virtual hosts with IP addresses 192.0.2.166 and 209.140.64.2.
File: httpd.conf
<VirtualHost 192.0.2.166>
    ServerName www.foo.com
    DocumentRoot /srv/www/virtual/foo.com/docroot
</VirtualHost>
<VirtualHost 209.140.64.2>
    ServerName www.bar.com
    DocumentRoot /srv/www/virtual/bar.com/docroot
</VirtualHost>
Name-based Virtual Host
Multiple host names, single host, single IP address; NameVirtualHost directive; VirtualHost directive for each virtual host.
Name-based Virtual Host With the HTTP 1.1 specification's Host: header, you can now specify virtual hosting based on the fully-qualified domain name of the host. The benefit is that you can now host multiple virtual domains from a single host bound to a single IP address. Each virtual host has a DNS entry pointing to the same IP address. This IP address must be specified in the NameVirtualHost directive in the Apache httpd.conf configuration file. Alternatively, you can specify all inbound IP addresses by using the wildcard character, *, with the NameVirtualHost directive. Each virtual host needs a VirtualHost directive entry in the Apache httpd.conf file. The IP address in the VirtualHost directive must match the IP address specified in the NameVirtualHost directive. Therefore, if the wildcard is used with NameVirtualHost, the same wildcard must be used with the VirtualHost directive.
File: httpd.conf
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName www.foo.com
    DocumentRoot /srv/www/virtual/foo.com/docroot
    ... snip ...
</VirtualHost>
<VirtualHost *:80>
    ServerName www.bar.com
    DocumentRoot /srv/www/virtual/bar.com/docroot
    ... snip ...
</VirtualHost>
Apache Logging
Log formats: Common Log Format, combined logs. Log customization: LogFormat, CustomLog access_log, ErrorLog error_log. SELinux Context: httpd_log_t
Apache Logging Apache supports a very flexible logging mechanism that lets you define exactly what you want to log and where to log it. Traditionally web servers have used Common Log Format (CLF), and depending on what you plan on doing with your Apache logs, you may need to use CLF; for example, if you plan on using a log analysis package which only supports CLF. CLF logs the requesting host ("host"), the username of the client as reported by identd ("ident"), the user ID used if the request was for a password-protected document ("authuser"), the date and time of the request ("date"), the request line ("request"), the response returned to the client ("status"), and the number of bytes returned to the client ("bytes"). Another standard log format is the "combined" log, which extends CLF by adding the previous URL the client was at ("referer") and what software the client is using ("user agent").
Log Customization Configuration of logging in Apache is extremely flexible and can be based on the response returned. You can also specify separate log files for each virtual server. Logging styles are first defined with the LogFormat directive and then used with a CustomLog statement. For example:
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/httpd/access_log common
Additional logging formats are pre-defined and can be used by uncommenting the appropriate line in the stock httpd.conf file shipped with Linux.
Custom LogFormat This example illustrates a more specific logging task in which a separate log is created for broken referrals: sites elsewhere that link to pages on your server which don't exist. You might then have a log analysis tool or script that reads this file and sends an email report to a web developer who can notify the owners of the linking web sites.
You can create the new format by creating a new LogFormat directive and then using the new format in a CustomLog directive. For example:
LogFormat "%!200,304,302{Referer}i" broken
CustomLog /var/log/httpd/broken_referral_log broken
For a detailed description of the LogFormat variable formatters, see http://httpd.apache.org/docs-2.2/mod/mod_log_config.html#formats
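Because the CLF fields described earlier are whitespace-separated, standard text tools can slice them without any special parser. A sketch (the function names are our own), assuming plain CLF where status and bytes are the last two fields on the line:

```shell
# Sketch: extract the status and byte-count fields from Common Log
# Format entries on stdin. In plain CLF these are the last two
# whitespace-separated fields; the combined format appends more fields,
# so these helpers apply to CLF only.
clf_status() { awk '{print $(NF-1)}' ; }
clf_bytes()  { awk '{print $NF}' ; }
```

For example, `clf_status < access_log | sort | uniq -c` gives a quick breakdown of response codes.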
Log Analysis
Apache Log Analysis: HostnameLookups; log analysis tools (summaries of page access, Kbytes downloaded, etc.): Analog, Webalizer, AWStats; log maintenance and rotation: Apache's rotatelogs program.
Apache Log Analysis Browsing through your Apache log files can give you isolated information about pages accessed, kbytes downloaded, etc., but to be useful the information needs to be summarized into reports. This analysis can give you information about parts of the site that are not being visited or let you know when your system needs to be upgraded.
Disable Hostname Lookups Apache can log requests either by IP address or by hostname. For performance and network traffic reasons you should configure Apache to log requests by IP address. Logging by hostname requires at least one additional DNS lookup for every client request that the server receives. This behavior is controlled in the httpd.conf file by the HostnameLookups directive. Apache provides the logresolve program which will look up hostnames for the IP addresses within the log file and rewrite the output. This way you can resolve the hostnames as a batch during analysis rather than take the performance hit in real time. This example processes the access_log file and redirects the output to a new log that can now be used for analysis:
# logresolve < access_log > output_log
Note that most log analysis tools provide hostname lookups themselves.
Log Analysis Tools Multiple utilities exist that can process raw logs and produce more meaningful data. These tools provide services like access summaries, graphing, charts, etc. on a daily, weekly, or monthly basis. The reports are often rendered in HTML, making them immediately accessible. Analog is the oldest log analysis tool. The reports it produces have a dated look and lack the detail of newer tools. The Webalizer is a robust analysis tool written in C for speed.
AWStats is the newest utility and produces the nicest looking reports with very good detail; it also offers the ability to drill down to specific time periods. The package can be obtained from http://awstats.sourceforge.net/.
Log Maintenance and Rotation Keep in mind that the web logs may need periodic rotating. Every 10,000 hits will use approximately one megabyte of drive space, so as your server becomes popular, the web logs could overload your storage unless they are periodically trimmed. Apache ships with the rotatelogs program, which can be used to automatically rotate log files by doing logging through a pipe. This example will rotate the access log every 24 hours (that's every 86,400 seconds):
TransferLog "|rotatelogs access_log 86400"
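As a taste of what the analysis tools automate, a per-day hit count can be produced from raw CLF entries with a short pipeline. A sketch; the helper name is our own:

```shell
# Sketch: count hits per day from Common Log Format entries on stdin.
# Field 4 looks like "[10/Oct/2012:13:55:36"; characters 2-12 of it are
# the dd/Mon/yyyy date, which we tally with sort | uniq -c.
hits_per_day() {
    awk '{print substr($4, 2, 11)}' | sort | uniq -c
}
```

Usage would be `hits_per_day < /var/log/httpd/access_log`; tools like Webalizer produce the same summary with graphs and monthly rollups.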
The Webalizer
Webalizer can process your web server logs and create graphs and statistics such as: number of hits per hour, day, month, etc.; entry and exit pages; top referrers; bandwidth usage by hour, day, month, etc. By default, the report is available at http://server/usage/
The Webalizer In addition to analysis of web logs, the Webalizer can also analyze logs from many other common services, such as squid and wu-ftpd. It also has support for analysis of incremental logs such as those created by rotatelogs, an increasingly important feature as your site becomes popular and your web logs become huge. A great resource to check out if you encounter problems with the Webalizer is http://www.mrunix.net/webalizer/, which has documentation and example configuration files.
Webalizer Configuration Details The configuration file is /etc/webalizer.conf. The produced report is available from http://server/usage/. Webalizer is included in RHEL6; when the RPM is installed, it places a script in the /etc/cron.daily/ directory. The script checks for the existence of the /var/log/httpd/access_log file, and if it exists, runs /usr/bin/webalizer.
Lab 8 Estimated Time: 75 minutes
Task 1: Apache Architecture
  Page: 8-20   Time: 15 minutes   Requirements: b (1 station) c (classroom server)
Task 2: Apache Content
  Page: 8-23   Time: 15 minutes   Requirements: b (1 station)
Task 3: Configuring Virtual Hosts
  Page: 8-25   Time: 45 minutes   Requirements: b (1 station) d (graphical environment)
Objectives:
- Explore the Apache Configuration
- Determine which Apache modules are configured to load
Requirements: b (1 station) c (classroom server)
Lab 8 Task 1: Apache Architecture
Estimated Time: 15 minutes
Relevance: Apache has many components and can be installed onto a system in a wide variety of ways by a Linux distributor. Knowing how Apache is installed on the system will enable you to quickly administer it.
1) Install the Apache web server and manual packages:
# yum install -y httpd httpd-manual
... output omitted ...
2) Get an idea of where the main Apache files are located:
# rpm -ql httpd | less
/etc/httpd
/etc/httpd/conf
/etc/httpd/conf.d
/etc/httpd/conf.d/README
... snip ...
3) Examine the main Apache configuration directory structure:
# ls -l /etc/httpd
total 28
drwxr-xr-x 2 root root 4096 Aug 18 15:19 conf
drwxr-xr-x 2 root root 4096 Aug 18 17:41 conf.d
lrwxrwxrwx 1 root root 19 Aug 18 14:50 logs -> ../../var/log/httpd
lrwxrwxrwx 1 root root 27 Aug 18 14:50 modules -> ../../usr/lib/httpd/modules
lrwxrwxrwx 1 root root 13 Aug 18 14:50 run -> ../../var/run
Notice the symbolic links.
4) Examine the main configuration file:
# cd /etc/httpd/conf
# less httpd.conf
No need to read every line; just skim through to get a feel for the structure and how well commented it is.
5) Examine the main configuration file stripping out all comments to see just the actual configuration directives:
# egrep -v "^ *(#|$)" httpd.conf | less
6) Apache configuration supports the Include directive, which enables configuration to be loaded from alternative files. This is used by default. Find the exact configuration line in order to determine the directory and file pattern for configuration files that are automatically loaded:
# grep ^Include httpd.conf
Include conf.d/*.conf
7) Determine which Apache configuration files are in that directory, examine a configuration file, and then read the README contained therein:
# cd /etc/httpd/conf.d/
# ls -l
... output omitted ...
# cat manual.conf
# cat README
8) List which Apache modules are available from the repositories configured on your system (most Apache module RPMs, including 3rd-party modules, follow the naming convention mod_module_name):
# yum list 'mod_*'
... output omitted ...
9) The Apache web server comes with many modules for specific features and functionality. Most modules are loaded by default. For example, the mod_proxy module enables Apache to function as a caching proxy server. Determine which Apache modules are enabled:
# grep LoadModule /etc/httpd/conf/httpd.conf
... output omitted ...
# grep LoadModule /etc/httpd/conf.d/*.conf
... output omitted ...
# httpd -M
Loaded Modules:
 core_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 auth_basic_module (shared)
 auth_digest_module (shared)
 authn_file_module (shared)
... snip ...
Syntax OK
10) Start Apache and configure it for persistence:
# service httpd start
Starting httpd: [ OK ]
# chkconfig httpd on
11) Determine where Apache is configured to log to by the values of ErrorLog and CustomLog:
# grep -E '^(Error|Custom)Log' /etc/httpd/conf/httpd.conf
ErrorLog logs/error_log
CustomLog logs/access_log combined
# ls -dl /etc/httpd/logs
lrwxrwxrwx 1 root root 19 Aug 18 14:50 /etc/httpd/logs -> ../../var/log/httpd
# ls -l /var/log/httpd
total 12
-rw-r--r-- 1 root root 0 Aug 18 18:29 access_log
-rw-r--r-- 1 root root 440 Aug 18 18:29 error_log
# cat /var/log/httpd/error_log
... output omitted ...
Objectives:
- Create an index.html file
Requirements: b (1 station)
Relevance: Being able to create minimal content for Apache is a basic task for System Administrators. By doing so, an administrator can confirm that Apache is serving up the correct files.
Lab 8 Task 2: Apache Content
Estimated Time: 15 minutes
1) By default on RHEL the Apache document root directory is /var/www/html/. If a web browser connects to the web server without specifying a file (e.g. http://stationx.example.com/), Apache will search the DirectoryIndex directive for a list of resources. These resources are usually the name of a file within the directory. Apache will return the first one that it finds. Determine what file(s) Apache looks for by seeing what parameter the DirectoryIndex directive has by default:
# grep ^DirectoryIndex /etc/httpd/conf/httpd.conf
DirectoryIndex index.html index.html.var
2) Determine if either file exists:
# ls -al /var/www/html/
total 4
drwxr-xr-x 2 root root 4096 Jul 15 2009 .
drwxr-xr-x 9 root root 4096 Aug 18 14:50 ..
The files are listed in order of preference from left to right. The index.html.var file can be used for automatic language negotiation between Apache and the web browser.
3) As you can see, there is no /var/www/html/index.html file. The configuration directives in the file /etc/httpd/conf.d/welcome.conf specify the error page to load if there is not an index file present:
# cat /etc/httpd/conf.d/welcome.conf
#
# This configuration file enables the default "Welcome"
# page if there is no default index page present for
# the root URL. To disable the Welcome page, comment
# out all the lines below.
#
<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /error/noindex.html
</LocationMatch>

4) Connect to http://stationx.example.com/ using an HTTP user agent and verify that the Test Page (/var/www/error/noindex.html) is displayed:

# wget --server-response --spider http://stationx.example.com
--13:53:55--  http://stationx.example.com/
Resolving stationx.example.com... 10.100.0.X
Connecting to stationx.example.com|10.100.0.X|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 403 Forbidden
... snip ...

5) Create the new /var/www/html/index.html file with content that identifies your workstation:

File: /var/www/html/index.html
+ <!DOCTYPE html>
+ <html lang="en-us">
+ <head>
+   <meta charset="utf-8">
+   <title>Your Name's Web Server</title>
+ </head>
+ <body>
+   <h1>stationx</h1>
+   <p>Hello, World!</p>
+ </body>
+ </html>

6) Connect to http://stationx.example.com/ with a web browser and verify that the new web page is displayed.
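The DirectoryIndex behavior from Task 2, step 1 can be sketched as a simple first-match search: Apache tries each listed name in order and serves the first that exists, falling back to the welcome/403 page otherwise. This self-contained sketch uses a throwaway document root rather than /var/www/html:

```shell
#!/bin/sh
# Sketch of the lookup Apache performs for a bare "/" request: try each
# DirectoryIndex candidate in order, keep the first that exists.
DOCROOT=$(mktemp -d)
touch "$DOCROOT/index.html.var"          # only the second candidate exists
FOUND=""
for candidate in index.html index.html.var; do
    if [ -f "$DOCROOT/$candidate" ]; then
        FOUND=$candidate
        break                            # first match wins, as in Apache
    fi
done
echo "Apache would serve: ${FOUND:-<welcome/403 page>}"
rm -rf "$DOCROOT"
```

Here the search falls through to index.html.var; once you create index.html in step 5, it would win because it is listed first.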
Lab 8
Task 3
Configuring Virtual Hosts
Estimated Time: 45 minutes

Objectives
y Configure Apache Virtual Hosts
y Use the "Main" server for global settings
y Create virtual servers for www.superthingy.org and www.random-nonsense.com
Requirements
b (1 station) d (graphical environment)
Relevance
Virtual hosts allow a web server to host more than one domain on the same machine. Using virtual hosts allows an administrator to save resources by combining multiple domains onto a single server, avoiding the overhead of maintaining a separate server for each site being served.

1) The following actions require administrative privileges. Switch to a root login shell:

$ su -l
Password: makeitso Õ

2) Before you can bring up any virtual hosts, name resolution must be functioning properly. To avoid configuring BIND with a new zone for your virtual hosts, configure the /etc/hosts file for name resolution. Add your virtual hosts to the 127.0.0.1 line as follows:

File: /etc/hosts
- 127.0.0.1 localhost.localdomain localhost
+ 127.0.0.1 localhost.localdomain localhost www.superthingy.org www.random-nonsense.com

3) Create directories to serve as your virtual hosts' document root directories:

# mkdir -p /srv/www/{html,superthingy,random-nonsense}
# restorecon -R /srv/www
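Step 2 works because the resolver scans each /etc/hosts line for a matching name and returns the address in the first column (on a live system, `getent hosts www.superthingy.org` confirms this). A minimal sketch of that lookup, using a temporary hosts-format file so the example is self-contained:

```shell
#!/bin/sh
# Sketch: look a name up in a hosts(5)-format file, as the resolver does
# with /etc/hosts. A temp file stands in for /etc/hosts.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
127.0.0.1 localhost.localdomain localhost www.superthingy.org www.random-nonsense.com
EOF
lookup() {
    # Print the address (field 1) of any line listing the given name.
    awk -v name="$1" '{ for (i = 2; i <= NF; i++) if ($i == name) print $1 }' "$HOSTS"
}
ADDR=$(lookup www.superthingy.org)
echo "www.superthingy.org -> $ADDR"
rm -f "$HOSTS"
```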
4) Create an index.html for each virtual host:

# echo "Super Thingy Website" > /srv/www/superthingy/index.html
# echo "Random Nonsense Website" > /srv/www/random-nonsense/index.html

5) Configure Apache to handle requests for your virtual hosts by creating a virtual host configuration file snippet that will be automatically included in the Apache configuration, adding a NameVirtualHost directive as the first line in the file:

File: /etc/httpd/conf.d/vhosts-127.0.0.1.conf
+ NameVirtualHost 127.0.0.1:80

6) Append the superthingy.org and random-nonsense.com virtual host entries to the file just created:
File: /etc/httpd/conf.d/vhosts-127.0.0.1.conf
  NameVirtualHost 127.0.0.1:80
+
+ <VirtualHost 127.0.0.1:80>
+     ServerName www.superthingy.org
+     ServerAdmin webmaster@superthingy.org
+     DocumentRoot /srv/www/superthingy
+     ErrorLog logs/superthingy-error_log
+     TransferLog logs/superthingy-access_log
+     <Directory /srv/www/superthingy>
+         Options Indexes FollowSymLinks ExecCGI Includes
+     </Directory>
+ </VirtualHost>
+
+ <VirtualHost 127.0.0.1:80>
+     ServerName www.random-nonsense.com
+     ServerAdmin webmaster@random-nonsense.com
+     DocumentRoot /srv/www/random-nonsense
+     ErrorLog logs/random-nonsense-error_log
+     TransferLog logs/random-nonsense-access_log
+     <Directory /srv/www/random-nonsense>
+         Options Indexes FollowSymLinks ExecCGI Includes
+     </Directory>
+ </VirtualHost>

7) Open a second terminal window, switch to a root login shell, and monitor the Apache error log:

$ su -l
Password: makeitso Õ
# tail -f /var/log/httpd/error_log

8) Run a syntax check on the changes you have made to the Apache configuration:

# apachectl configtest
Syntax OK

If you get an error message, note the line number reported and fix the indicated configuration file.
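The two stanzas above differ only in a few fields, so in practice they can be generated from a template. A sketch of that idea (make_vhost is a hypothetical helper, not part of the course material):

```shell
#!/bin/sh
# Sketch: emit one <VirtualHost> stanza from a name, docroot, and log tag,
# matching the layout used in the lab's vhosts-127.0.0.1.conf.
make_vhost() {
    name=$1; docroot=$2; tag=$3
    cat <<EOF
<VirtualHost 127.0.0.1:80>
    ServerName $name
    ServerAdmin webmaster@${name#www.}
    DocumentRoot $docroot
    ErrorLog logs/$tag-error_log
    TransferLog logs/$tag-access_log
</VirtualHost>
EOF
}
SNIPPET=$(make_vhost www.superthingy.org /srv/www/superthingy superthingy)
echo "$SNIPPET"
```

Appending such output to the conf.d snippet (and re-running apachectl configtest) would add the vhost; on a live system, `httpd -S` lists the vhosts Apache actually parsed.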
9) Restart the Apache web server:

# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]

10) Use a web browser to connect to the http://www.superthingy.org/ URL. You should see "Super Thingy Website." Also try connecting to the http://www.random-nonsense.com/ URL. If your results are not what you expected, check the logs and verify your virtual host configuration in the vhosts-127.0.0.1.conf file.

11) Use a web browser to connect to the http://localhost/ URL. Did you get the expected results? Can you explain what happened?

When you specify an IP address in a NameVirtualHost directive, requests to that IP address will only be served by matching <VirtualHost> sections. The "Main" server context no longer services requests to the IP address listed in the NameVirtualHost directive. Therefore, when you use name-based virtual hosts you no longer use the "Main" server as an independent web server context. The "Main" server becomes merely a place for configuring directives that are common to your virtual hosts, which each virtual host can override within its <VirtualHost> section.

12) Configure your localhost to be its own virtual web server by setting up another name-based virtual host, appending the following to the configuration file previously created:
File: /etc/httpd/conf.d/vhosts-127.0.0.1.conf
+ <VirtualHost 127.0.0.1:80>
+     ServerName localhost
+     ServerAlias localhost.localdomain
+     ServerAdmin webmaster@localhost.localdomain
+     DocumentRoot /var/www/html
+     ErrorLog logs/localhost-error_log
+     TransferLog logs/localhost-access_log
+     <Directory /var/www/html>
+         Options Indexes FollowSymLinks ExecCGI Includes
+     </Directory>
+ </VirtualHost>

13) Run a syntax check on the changes you have made to the Apache configuration:

# apachectl configtest
Syntax OK

If you get an error message, note the line number reported and fix the indicated configuration file.

14) Restart the Apache web server:

# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]

15) Use a web browser to connect to the http://localhost/ URL. You should now see the index.html of your main server document root, or the test page.

Cleanup

16) Restore the pre-lab configuration by renaming the vhosts-127.0.0.1.conf file so that it no longer ends in .conf and therefore won't be automatically included in Apache's configuration:

# cd /etc/httpd/conf.d
# mv vhosts-127.0.0.1.conf vhosts-127.0.0.1.conf.disabled
# service httpd restart
... output omitted ...
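The name-based selection rule explained in step 11 can be sketched as a simple search: for a NameVirtualHost IP, the request's Host: header is compared against each vhost's ServerName in definition order, and an unmatched name falls through to the first vhost defined for that IP, never to the "Main" server. (Real Apache also checks ServerAlias values; this sketch keeps only ServerName.)

```shell
#!/bin/sh
# Sketch of name-based vhost selection before step 12's localhost vhost
# exists: only the two Task 3 vhosts are defined for 127.0.0.1:80.
VHOSTS="www.superthingy.org www.random-nonsense.com"   # definition order
pick_vhost() {
    host=$1
    for v in $VHOSTS; do
        [ "$v" = "$host" ] && { echo "$v"; return; }
    done
    set -- $VHOSTS
    echo "$1"    # no match: first-defined vhost is the default
}
MATCHED=$(pick_vhost www.random-nonsense.com)
DEFAULT=$(pick_vhost localhost)    # localhost has no vhost of its own yet
echo "random-nonsense -> $MATCHED ; localhost -> $DEFAULT"
```

This is why step 11 surprises: a request for localhost lands on the first-defined vhost (superthingy) until step 12 gives localhost its own <VirtualHost> section.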
17) Exit from the tail command, and exit from the second terminal.

18) Administrative privileges are no longer required; exit the root shell to return to an unprivileged account:

# exit