Securing Web Apps with NGINX


1 Securing Web Apps with NGINX Stephan Ilyin,

2 How many of you have had your websites hacked?

3 Each application probably has vulnerabilities

4 and someday it can be hacked

5 How to harden/secure your application?

6 How to deal with attacks on your application? Chapter 1.

7 Tip #1. mod_security can be a good choice

8 Mod_security rocks! Open-source. Finally available for NGINX. It works! It can be quite efficient at detecting attacks. Supports virtual patching. It is incredibly customisable.

9 server {
    listen 80;
    server_name localhost;
    location / {
        ModSecurityEnabled on;
        ModSecurityConfig modsecurity.conf;
        proxy_pass http://backend;  # hypothetical upstream; the address was elided in the source
        proxy_read_timeout 180s;
    }
}

10 But mod_security is not so good: it relies on regexes! It is expensive from a performance perspective. If you use the default rulesets, you will get a huge number of false positives. Rule tuning is a hard job (difficult to maintain). Signatures never cover all attacks. Regexes can be bypassed.

11 What rules look like
# ShellShock virtual patch (Bash attack)
SecRule REQUEST_HEADERS "^\(\s*\)\s+{" \
    "phase:1,deny,id: ,t:urlDecode,status:400,log,msg:'CVE Bash Attack'"

12 Good practice (imho): use a public ruleset in monitoring mode; craft rules from scratch, specifically for your application, for blocking mode.

13 More rules = More overhead!

14 Using phases is a good idea
1. Request headers (REQUEST_HEADERS)
2. Request body (REQUEST_BODY)
3. Response headers (RESPONSE_HEADERS)
4. Response body (RESPONSE_BODY)
5. Logging (LOGGING)

15 SecRule phase 2
SecRule REQUEST_BODY "/+etc/+passwd" \
    "t:none,ctl:responseBodyAccess=on,msg:'- IN- PASSWD path detected',phase:2,pass,log,auditlog,id:'10001',t:urlDecode,t:lowercase,severity:1"

16 SecRule phase 4
SecRule RESPONSE_BODY "root\:x\:0\:0" \
    "id:'20001',ctl:auditLogParts=+E,msg:'- OUT- Content of PASSWD detected!',phase:4,allow,log,auditlog,t:lowercase,severity:0"

17 ModSecurity Handbook by Ivan Ristic. Must read!

18 Tip #2. Give naxsi a chance (another WAF for NGINX)

19 Why naxsi? NAXSI means Nginx Anti XSS & SQL Injection (but it does more). Naxsi doesn't rely on a signature base (regex)!

20 naxsi rules Reads a small subset of simple scoring rules (naxsi_core.rules) containing 99% of known patterns involved in website vulnerabilities. For example, '<', '|' or 'drop' are not supposed to be part of a URI.

21 This rule triggers on select or other SQL operators
MainRule "rx:select|union|update|delete|insert|table|from|ascii|hex|unhex|drop" "msg:sql keywords" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:4" id:1000;

22 naxsi setup
http {
    include /etc/nginx/naxsi_core.rules;
    include /etc/nginx/mime.types;
}
[...]

23 But! A ruleset is not enough! Those patterns may match legitimate queries! Therefore, naxsi relies on whitelists to avoid false positives. The nxutil tool helps the administrator create the appropriate whitelist; there are pre-generated whitelists for some CMSes (e.g. WordPress).

24 LearningMode; #Enables learning mode
SecRulesEnabled;
#SecRulesDisabled;
DeniedUrl "/RequestDenied";
## check rules
CheckRule "$SQL >= 8" BLOCK;
CheckRule "$RFI >= 8" BLOCK;
CheckRule "$TRAVERSAL >= 4" BLOCK;
CheckRule "$EVADE >= 4" BLOCK;
CheckRule "$XSS >= 8" BLOCK;

25 naxsi ruleset

26 naxsi whitelist
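A representative whitelist entry, sketched with naxsi's BasicRule whitelist syntax (the rule id refers to the sql-keywords rule above; the argument name "search" is illustrative):

```nginx
# Hypothetical whitelist: let rule 1000 (sql keywords) match in the
# GET argument "search" without raising the $SQL score
BasicRule wl:1000 "mz:$ARGS_VAR:search";
```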

27 Naxsi pros and cons Pros: Pretty fast! Update-independent. Resistant to many WAF-bypass techniques. Cons: You need to use LearningMode again after each significant code deployment.

28 Tip #3. Try repsheet (behaviour-based security)

29 Watch Aaron Bedra's talk

30 Tip #4. And there is also Wallarm, a WAF based on NGINX


32 How to deal with DDoS? Chapter 2.

33 How to deal with DDoS? The traditional technique for self-defense is to read the HTTP server's log file, write a pattern for grep (to catch bot requests), and ban anyone who falls under it. That's not easy! The following are tips on where to place pillows in advance so it won't hurt so much when you fall.

34 Tip #5. Use test_cookie module

35 Use test_cookie module Usually HTTP-flooding bots are pretty stupid: they lack HTTP cookie and redirect mechanisms. Testcookie-nginx works as a quick filter between the bots and the backend during L7 DDoS attacks, allowing you to screen out junk requests.

36 Use test_cookie module Straightforward checks: whether the client can perform an HTTP redirect, whether it supports JavaScript, whether it supports Flash.

37 Use test_cookie module In addition to its merits, testcookie also has drawbacks: it cuts out all bots (including Googlebot), creates problems for users with Links and w3m browsers, and does not protect against bots with a full browser stack.
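A minimal testcookie-nginx sketch (directive names are from the kyprizel/testcookie-nginx-module; the secret, fallback page, and upstream are placeholders):

```nginx
http {
    testcookie_name         BPC;            # cookie name
    testcookie_secret       keepmesecret;   # placeholder secret
    testcookie_session      $remote_addr;   # bind the cookie to the client IP
    testcookie_max_attempts 3;              # redirects before giving up
    testcookie_fallback     /cookies.html;  # where clients that fail are sent

    server {
        listen 80;
        location / {
            testcookie on;                  # enable the filter here
            proxy_pass http://backend;      # hypothetical upstream
        }
    }
}
```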

38 Tip #6. Code 444

39 Code 444 The goal of DDoSers is often the most resource-intensive part of the site. A typical example is a search engine. Naturally, it can be exploited by firing off tens of thousands of queries. So what can we do?

40 Code 444 Temporarily disable the search function. Nginx supports the custom code 444, which allows you to simply close the connection and return nothing in response.

41 Code 444
location /search {
    return 444;
}

42 Tip #7. Use ipset

43 Ban bot IPs with ipset If you're sure that location /search requests are coming only from bots, ban bots (those getting 444) with a simple shell script:
ipset -N ban iphash
tail -f access.log | while read LINE; do echo "$LINE" | cut -d '"' -f3 | cut -d ' ' -f2 | grep -q 444 && ipset -A ban "${LINE%% *}"; done

44 Tip #8. Banning based on geographic indicators

45 Tip #8. Banning based on geographic indicators You can strictly limit certain countries that make you feel uneasy. But it is a bad practice! GeoIP data isn't completely accurate!

46 Tip #8. Banning based on geographic indicators Connect the nginx GeoIP module. Write the geographic indicator into the access log. Grep the nginx access log and add clients to the ban list by geographic indicator.
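A sketch of the logging step (directive and variable names are from the stock ngx_http_geoip_module; the database path and log format are illustrative):

```nginx
# http{} context: load the country database and log the country code
geoip_country /usr/share/GeoIP/GeoIP.dat;   # path is an assumption

log_format geo '$remote_addr [$geoip_country_code] "$request" $status';
access_log /var/log/nginx/access.log geo;
```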

47 Tip #9. You can use neural network!

48 Tip #9. You can use neural network
Bad request:
[20/Dec/2011:20:00: ] "POST /forum/index.php HTTP/1.1" " -"
Good request:
[20/Dec/2011:15:00: ] "GET /forum/rss.php?topic= HTTP/1.0" "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9) Gecko/ Firefox/3.0"

49 Tip #9. You can use neural network Use machine learning (ML) to detect bots: use a neural network (e.g. PyBrain), feed the logs into it, and analyse the requests to classify clients as "bad" or "good" under DDoS. A good proof-of-concept: neural_networks_vs_ddos

50 Tip #9. You can use neural network It is useful to have the access.log from before a DDoS attack, because it lists virtually 100% of your legitimate clients. It is an excellent dataset for neural network training.
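The idea can be sketched without any ML library: extract a few crude features from each log line and train a tiny perceptron on hand-labelled examples. This is a toy stand-in for the PyBrain neural network mentioned above; the features, sample lines, and labels are all illustrative.

```python
import re

def features(line):
    """Extract a few crude features from a combined-format log line."""
    return [
        1.0,                                                     # bias term
        1.0 if '"POST ' in line else 0.0,                        # POST (flood-typical)
        1.0 if re.search(r'"Mozilla[^"]*"\s*$', line) else 0.0,  # real-looking UA
        1.0 if '"-" "-"' in line else 0.0,                       # empty referrer + UA
    ]

def train(samples, labels, epochs=50, lr=0.1):
    """Classic perceptron update: nudge weights on every misclassification."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

def classify(w, line):
    return "bad" if sum(wi * xi for wi, xi in zip(w, features(line))) > 0 else "good"

# One "bad" (bot-like) and one "good" line, modelled on the slide's examples
bad = '1.2.3.4 - - [20/Dec/2011:20:00:00] "POST /forum/index.php HTTP/1.1" 503 0 "-" "-"'
good = ('5.6.7.8 - - [20/Dec/2011:15:00:00] "GET /forum/rss.php HTTP/1.0" 200 150 '
        '"-" "Mozilla/5.0 (Windows NT 5.1) Firefox/3.0"')
w = train([features(bad), features(good)], [1, 0])
```

With real traffic you would label whole pre-attack logs as "good" and attack-time outliers as "bad", and use far richer features (request rate per IP, URI entropy, header order).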

51 Tip #10. Keep track of the number of requests per second

52 Tip #10. Keep track of the number of requests per second You can estimate this value with the following shell command, which counts the access-log lines stamped with the previous minute and divides by 60:
echo $(($(fgrep -c "$(env LC_ALL=C date --date=@$(($(date +%s)-60)) +%d/%b/%Y:%H:%M)" $ACCESS_LOG)/60))

53 Tuning the web server Of course, you install nginx quietly and hope that everything will be OK. However, things are not always OK. So the administrator of any server should devote a lot of time to tweaking and tuning nginx.

54 Tip #11. Limit buffer sizes and timeouts in NGINX

55 Every resource has a limit. In particular, this applies to memory: the size of the header and all buffers needs to be limited to adequate values, both per client and for the server as a whole.

56 Limit buffers
client_header_buffer_size
large_client_header_buffers
client_body_buffer_size
client_max_body_size

57 And timeouts
reset_timedout_connection
client_header_timeout
client_body_timeout
keepalive_timeout
send_timeout
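A sketch of how these directives might be set; the values are illustrative starting points, not recommendations, and should be tuned per the iterative procedure described in this chapter:

```nginx
# Buffers: cap per-client memory use
client_header_buffer_size   1k;
large_client_header_buffers 4 8k;
client_body_buffer_size     16k;
client_max_body_size        1m;

# Timeouts: drop slow or stalled clients early
reset_timedout_connection   on;
client_header_timeout       10s;
client_body_timeout         10s;
keepalive_timeout           30s;
send_timeout                10s;
```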

58 Question: what are the correct parameters for the buffers and timeouts?

59 There's no universal recipe here, but there is a proven approach you can try

60 How to limit buffers and timeouts?
1. Start from the minimum plausible parameter value.
2. Launch test runs of the site.
3. If the site's full functionality works without a problem, the parameter is set.
4. If not, increase the parameter value and go to step 2.

61 Tip #12. Limit connections in NGINX (limit_conn and limit_req)

62 Ideally you need to test the application to see how many requests it can handle and set that value in the NGINX configuration

63 http {
    limit_conn_zone $binary_remote_addr zone=download_c:10m;
    limit_req_zone $binary_remote_addr zone=search_r:10m rate=1r/s;
    server {
        location /download/ {
            limit_conn download_c 1;
            ..
        }
        location /search/ {
            limit_req zone=search_r burst=5;
            ..
        }
    }
}

64 What to limit? It makes sense to set limit_conn and limit_req for locations where the scripts behind them are costly to run. You can also use the fail2ban utility here.

65 Bad practices / How not to configure NGINX Chapter 3.

66 Bad practices NGINX has secure-enough defaults, but sometimes administrators make mistakes when configuring it

67 Tip #13. Be careful with rewrite with $uri

68 rewrite with $uri Everyone knows $uri (the "normalized" URI of the request). Normalization means decoding the text encoded in the '%XX' form, resolving references to the relative path components '.' and '..', and possibly compressing two or more adjacent slashes into a single slash.

69 rewrite with $uri
Typical HTTP -> HTTPS redirect snippets (reconstructed with $uri, as the next slides imply):
location / {
    rewrite ^ https://$host$uri;
}
location / {
    return 301 https://$host$uri;
}
What can go wrong? CRLF (%0d%0a) comes into play

70 rewrite with $uri
Request:
GET /test%0d%0aset-cookie:%20malicious%3d1 HTTP/1.0
Host: yourserver.com
Response:
HTTP/1.1 302 Moved Temporarily
Server: nginx
Date: Mon, 02 Jun :08:09 GMT
Content-Type: text/html
Content-Length: 154
Connection: close
Location:
Set-Cookie: malicious=1

71 Use $request_uri instead of $uri
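The safe variant: $request_uri is the full original request URI, kept exactly as the client sent it, so %0d%0a stays percent-encoded and cannot be injected into response headers.

```nginx
# Safe HTTP -> HTTPS redirect: nginx does not decode $request_uri
location / {
    return 301 https://$host$request_uri;
}
```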

72 Tip #14. Pay attention to try_files

73 try_files try_files checks for the existence of files in the specified order and uses the first found file for request processing; if none of the files are found, an internal redirect to the URI specified in the last parameter is made.

74 try_files
There is a Django project:
$ tree /your/django/project/root
+-- media
|   +-- some_static.css
+-- djangoproject
|   +-- __init__.py
|   +-- settings.py
|   +-- urls.py
|   +-- wsgi.py
+-- manage.py

75 try_files
Administrators decide to serve static files with nginx and use this configuration:
root /your/django/project/root;
location / {
    try_files $uri @django;
}
location @django {
    proxy_pass http://backend;  # hypothetical upstream; the address was elided in the source
}

76 try_files: what's wrong? NGINX will first try to serve a static file from root, and only if it does not exist pass the request to the backend location. Therefore, anyone can access manage.py and all of the project sources (including djangoproject/settings.py)

77 Tip #15. Use disable_symlinks if_not_owner

78 Hosters usually do this
location /static/ {
    root /home/someuser/www_root/static;
}

79 What's the problem? A user can create a symlink to any file available to the nginx worker (including other users' files)!
[root@server4 www]# ls -alh
total 144K
drwxr-x--- 6 usertest nobody 4.0K Apr 10 20:09 .
drwx--x--x 13 usertest usertest 4.0K Apr 7 02:16 ..
-rw-r--r-- 1 usertest usertest 184 Apr 6 21:29 .htaccess
lrwxrwxrwx 1 usertest usertest 38 Apr 6 22:48 im1.txt -> /home/another_user/public_html/config.php
-rw-r--r-- 1 usertest usertest 3 May index.html

80 What you can do 1. Turn off symlinks entirely (and users will suffer) 2. Use the option disable_symlinks if_not_owner (the best choice)
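A minimal sketch of option 2 (disable_symlinks is a stock nginx directive; the paths reuse the hosting example above):

```nginx
# Refuse the request if any component of the path is a symlink
# whose owner differs from the owner of the file it points to
location /static/ {
    root /home/someuser/www_root/static;
    disable_symlinks if_not_owner;
}
```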

81 Slides: bit.ly/nginx_secure_webapps Stephan Ilyin,