Why should you look at your logs? Why ELK (Elasticsearch, Logstash, and Kibana)?




Introduction

This guide is designed to help developers, DevOps engineers, and operations teams that run and manage applications on AWS to analyze their log data effectively and gain visibility into the application layer, the operating system layer, and the various AWS services. The booklet is a step-by-step guide to retrieving log data from all cloud layers and then visualizing and correlating those events to give a clear picture of one's entire AWS infrastructure.

Authors
Asaf Yigal, VP Product and co-founder of Logz.io
Tomer Levy, CEO and co-founder of Logz.io

Why should you look at your logs?

Cloud applications are inherently more distributed and built out of a series of components that need to operate together to deliver a service to the end user successfully. Analyzing logs is imperative in cloud environments because the practice allows the relevant teams to see how all of the building blocks of a cloud application behave, both independently and in correlation with the rest of the components.

Why ELK (Elasticsearch, Logstash, and Kibana)?

ELK is the most common log analytics platform in the world, used by companies including Netflix, LinkedIn, Facebook, Google, Microsoft, and Cisco. ELK is an open-source stack of three tools (Elasticsearch, Logstash, and Kibana) that parse, index, and visualize log data (and, yes, it's free).
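For a concrete sense of what "indexing" means here: Elasticsearch accepts batches of documents as newline-delimited JSON through its _bulk endpoint. The following is a minimal sketch of building such a payload; the index name and documents are illustrative.

```python
import json

def bulk_payload(index, docs):
    """Build an Elasticsearch _bulk request body: one action line followed by
    one source line per document, newline-delimited, with a trailing newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# The payload would then be POSTed to http://<es-host>:9200/_bulk
# with Content-Type: application/x-ndjson.
print(bulk_payload("app-logs", [{"level": "ERROR", "message": "timeout"}]))
```

Kibana then queries the indexed documents through the same HTTP API to render its visualizations.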

Analyzing Application Logs

Why should I analyze my application logs?

Application logs are fundamental to any troubleshooting process. This has always been true -- even for mainframe applications and those that are not cloud-based. With the pace at which instances are spawned and decommissioned in the cloud, the only way to troubleshoot an issue is to first aggregate all of the application logs from all of the layers of an application. This enables you to follow transactions across all layers within an application's code.

Logz.io enables companies to get ELK as a service in the cloud. Instead of going through the challenging task of building a production-ready ELK Stack internally, users can sign up and start working in a matter of minutes. In addition, Logz.io's ELK as a service includes alerts, multi-user support, role-based access, and unlimited scalability. On top of providing an enterprise-grade ELK platform as a service, Logz.io employs unique machine-learning algorithms to automatically surface critical log events before they impact operations, providing users with unprecedented operational visibility into their systems.

How do I ship application logs?

There are dozens of ways to ship application logs, and the best method depends on the type of application, the format of the logs, and the operating system. For example, Java applications running on Linux servers can use Logstash or logstash-forwarder (a lightweight version that includes encryption), or ship logs directly from the application layer using a log4j appender via HTTP/HTTPS. You can read more in our essay on modern log management: http://logz.io/blog/modernlog-management/
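As a sketch of what shipping from the application layer can look like, here is a minimal Python example that formats a log event as a JSON line suitable for an HTTP(S) listener. The endpoint and token in the comment are hypothetical placeholders, not a real Logz.io address.

```python
import json
import time

def format_log_event(message, level="INFO", **fields):
    """Build a JSON log line for shipment to a hosted ELK listener."""
    event = {
        "@timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "level": level,
        "message": message,
    }
    event.update(fields)  # arbitrary structured context, e.g. user_id
    return json.dumps(event)

# Shipping could then be a plain HTTPS POST (endpoint and token hypothetical):
# urllib.request.urlopen("https://listener.example.com/?token=YOUR-TOKEN",
#                        data=format_log_event("user login failed").encode())

print(format_log_event("user login failed", level="WARN", user_id=42))
```

Structured JSON like this saves Logstash a parsing step, since Elasticsearch can index the fields directly.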

Analyzing Infrastructure Logs

What are infrastructure logs?

We consider everything that is not the proprietary application code itself to be an infrastructure log: system logs, database logs, web server logs, network device logs, security device logs, and countless others.

Why should I analyze infrastructure logs?

Infrastructure logs can shed light on problems in the code that is running or supporting your application. Performance issues can be caused by overutilized or broken databases or web servers, so it is crucial to analyze those log files, especially when correlated with the application logs. While troubleshooting performance issues, we've seen many cases in which the root cause was a Linux kernel issue. Overlooking such low-level logs can make forensics processes long and fruitless. Read more about why it's important to ship OS logs in our essay at http://logz.io/blog/elasticsearch-cluster-disconnects/

How do I ship infrastructure logs?

Shipping infrastructure logs is usually done with open-source agents such as rsyslog, Logstash, logstash-forwarder, or NXLog that read the relevant operating system files such as access logs, kern.log, and database events. You can read about more methods to ship logs here: https://app.logz.io/#/dashboard/data-sources/

Monitoring System Performance with ELK

One of the challenges organizations face when troubleshooting performance issues is that they look at one dashboard that shows performance metrics and at a separate system to analyze logs. In many cases, it's possible to use a single dashboard that shows both the performance metrics and the visualized log data generated by all of the components of your system. Performance issues are often related to events in application stacks that are recorded in log files, so collecting system performance metrics and shipping them as log entries enables quick correlations between performance issues and their respective events in the logs.

How do I ship performance metrics?

To use ELK to monitor your platform's performance, run probes on each host to collect system performance metrics. Operations teams can then visualize the data with Kibana and use the resulting charts to present their results. For example, we encapsulated Collectl in a Docker container to get a Docker image that covered all of our data collecting and shipping needs. Read more and get a download on our site: http://logz.io/blog/elk-monitor-platform-performance/
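A probe can be as simple as a script that samples a few host metrics and emits them as a JSON log entry. Below is a stdlib-only sketch; a real probe such as Collectl reports far more detail, and the field names here are illustrative.

```python
import json
import os
import shutil
import time

def collect_metrics():
    """Sample basic host metrics and shape them as a shippable log entry."""
    load1, load5, load15 = os.getloadavg()   # Unix-only load averages
    disk = shutil.disk_usage("/")
    return {
        "@timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "type": "perf-metrics",
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
        "disk_used_pct": round(100.0 * disk.used / disk.total, 1),
    }

# Run on an interval (e.g. cron or a loop) and ship each entry like any log line.
print(json.dumps(collect_metrics()))
```

Because the entries share the timestamp format of the rest of your logs, Kibana can plot load next to error rates on one dashboard.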

Monitoring ELB Logs

What are ELB log files?

ELB is Amazon Web Services' EC2 load balancer. The ELB logs are a record of all of the traffic running through the ELB. This data includes where the ELB was accessed from, which internal machines were accessed, the identity of the requester (e.g., the operating system and browser), and additional metrics such as processing time and traffic volume.

How can I use ELB log files?

There are many uses for ELB logs, but the main ones are checking the operational health of the ELB and verifying its efficient operation. In the context of operational health, you might want to determine whether your traffic is being distributed equally amongst all internal servers. For operational efficiency, you might want to identify the volume of access that you are getting from different locations in the world. You can visit ELK Labs at https://app.logz.io/#/labs and search for "ELB" to find different visualizations, dashboards, and alerts.

How can I ship ELB log files?

ELB logs can be saved into an S3 bucket with a very simple configuration in your EC2 console. Once the files are in the S3 bucket, you can configure read-only access to that bucket by visiting: https://app.logz.io/#/dashboard/data-sources/elb

Security - AWS CloudTrail Logs

What are CloudTrail log files?

CloudTrail is the logging mechanism of Amazon Web Services that records the changes made in an environment. It is a very powerful and robust tool that provides a different set of events for each EC2 object, which can be leveraged according to the desired use. EC2 log events include, among other things, access to the EC2 account, changes to security groups, and activation and termination of machines and services.

How can I use CloudTrail log files?

CloudTrail logs are very powerful and have many uses. One of the main uses revolves around auditing and security. For example, we monitor access and receive internal alerts on suspicious activity in our environment.
Two important things to remember: keep track of any changes being made to security groups and VPC access levels, and monitor your machines and services to ensure that they are being used properly and by the proper people. You can visit ELK Labs at https://app.logz.io/#/labs and search for "CloudTrail" to find different visualizations, dashboards, and alerts.

How can I ship CloudTrail log files?

CloudTrail logs are easy to configure because they ship to S3 buckets. As opposed to some EC2 services, CloudTrail logs can be collected from all of the different regions and availability zones into a single S3 bucket. Once the files are in the S3 bucket, you can configure read-only access to that bucket by visiting: https://app.logz.io/#/dashboard/data-sources/cloudtrail
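As a toy illustration of the auditing described above: CloudTrail delivers events as JSON records, so watching for sensitive actions can be sketched as a filter over those records. The sample records and watch list below are illustrative (the event names are real CloudTrail event names, but the users and values are made up).

```python
# Sample CloudTrail-style records, trimmed to a few fields for illustration.
SAMPLE_RECORDS = [
    {"eventName": "AuthorizeSecurityGroupIngress",
     "userIdentity": {"userName": "alice"}, "awsRegion": "us-east-1"},
    {"eventName": "DescribeInstances",
     "userIdentity": {"userName": "bob"}, "awsRegion": "us-east-1"},
    {"eventName": "TerminateInstances",
     "userIdentity": {"userName": "mallory"}, "awsRegion": "eu-west-1"},
]

# Actions worth alerting on, per the security-group and machine monitoring above.
WATCHED_EVENTS = {"AuthorizeSecurityGroupIngress", "RevokeSecurityGroupIngress",
                  "TerminateInstances", "ConsoleLogin"}

def suspicious(records):
    """Return (user, event) pairs for records whose eventName is on the watch list."""
    return [(r["userIdentity"].get("userName", "unknown"), r["eventName"])
            for r in records if r["eventName"] in WATCHED_EVENTS]

print(suspicious(SAMPLE_RECORDS))
# → [('alice', 'AuthorizeSecurityGroupIngress'), ('mallory', 'TerminateInstances')]
```

In an ELK setup, the same filtering is typically expressed as a saved search or alert rather than custom code.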

AWS VPC Flow Logs

What are VPC flow logs?

VPC flow logs provide the ability to log all of the traffic transmitted within an AWS VPC (Virtual Private Cloud). The information captured includes details about allowed and denied traffic (based on security group and network ACL rules). It also includes source and destination IP addresses, ports, the IANA protocol number, packet and byte counts, the time interval during which the flow was observed, and the action (ACCEPT or REJECT).

How can I use VPC flow logs?

VPC flow logs can be turned on for a specific VPC, a VPC subnet, or an Elastic Network Interface (ENI). The most common uses are around the operability of the VPC. You can visualize rejection rates to identify configuration issues or misuse of the system, you can correlate increases in traffic flow to load in other parts of the system, and you can verify that only a specific set of servers is being accessed and belongs to the VPC. You can also make sure that the right ports are being accessed from the right servers and receive alerts whenever certain ports are accessed. You can visit ELK Labs at https://app.logz.io/#/labs and search for "VPC" to find different visualizations, dashboards, and alerts.

How can I ship VPC flow logs?

Once enabled, VPC flow logs are stored in CloudWatch Logs, from which you can extract them to a third-party log analytics service via several methods. The two most common methods are redirecting the logs to a Kinesis stream or dumping them to S3 using a Lambda function. We recommend using a third-party open-source tool to dump CloudWatch logs to S3. You can read more about the different methods here: https://app.logz.io/#/dashboard/data-sources/vpc

CloudFront Logs

What are CloudFront access logs?

CloudFront is AWS's CDN. The CloudFront logs are written in the W3C extended log file format (http://www.w3.org/tr/wd-logfile.html) and report all access to all objects served by the CDN.

How can I use the CloudFront logs?
The CloudFront logs are used mainly for analysis and verification of the operational efficiency of the CDN. You can see the error rates through the CDN, where the CDN is being accessed from, and what percentage of traffic is being served by the CDN. These logs, though very verbose, can reveal a lot about the responsiveness of your website as customers navigate it. You can visit ELK Labs at https://app.logz.io/#/labs and search for "CloudFront" to find different visualizations, dashboards, and alerts.

How can I ship CloudFront logs?

Once enabled, CloudFront will write data to your S3 bucket every hour or so. You can then pull the CloudFront logs into Logz.io by pointing to the relevant S3 bucket. Go to https://app.logz.io/#/dashboard/data-sources/cloudfront for additional assistance and to see examples of how to configure access.
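Returning to the VPC flow log fields enumerated above: each record is a single space-separated line, so parsing one is straightforward. A minimal sketch (the sample record uses illustrative values):

```python
# Field order of a default-format VPC flow log record.
FLOW_FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
               "srcport", "dstport", "protocol", "packets", "bytes",
               "start", "end", "action", "log_status"]

def parse_flow_record(line):
    """Split a space-separated VPC flow log record into a dict,
    converting the numeric fields to integers."""
    record = dict(zip(FLOW_FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes"):
        record[key] = int(record[key])
    return record

sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 REJECT OK")
rec = parse_flow_record(sample)
print(rec["action"], rec["dstport"])   # → REJECT 22
```

Filtering parsed records on action == "REJECT" is the basis for the rejection-rate visualizations mentioned earlier; in an ELK pipeline, a Logstash grok or dissect filter does this same split.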

S3 Access Logs

What are S3 access logs?

S3 access logs record an event for every access made to an S3 bucket. The access data includes the identities of the entities accessing the bucket, the identities of the buckets and their owners, and metrics around access time and turnaround time as well as the response codes that are returned.

How can I use the S3 access logs?

Monitoring S3 access logs is a key part of securing your AWS environments. You can determine from where in the world, and how, your buckets are being accessed and receive alerts on illegal access to your buckets. You can also leverage the information to derive performance metrics and analyses of such access to ensure that your overall application response times are being properly monitored.

How can I ship S3 access logs?

Once enabled, S3 access logs are written to an S3 bucket of your choice. You can then pull the S3 access logs into Logz.io by pointing to the relevant S3 bucket. Go to https://app.logz.io/#/dashboard/data-sources/s3access for additional assistance and to see examples of how to configure access.

Conclusion

ELK is a very powerful platform and can provide tremendous value when you invest the effort to generate a holistic view of your environment. When running on AWS, the majority of the infrastructure logs can be added with a single click to Logz.io's ELK Cloud platform, and in a matter of minutes you'll be able to leverage the auto-generated dashboards and alerts. There are many uses for AWS logs that range from performing audits to maintaining security -- and all of those uses can be supported with S3 access and CloudTrail logs and then monitored with CloudFront and VPC flow logs. Make sure to check out ELK Labs, the marketplace for auto-generated dashboards and alerts: https://app.logz.io/#/labs
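As a final example, S3 server access log lines can be mined for rejected requests of the kind the alerting above is meant to catch. This is a minimal sketch assuming the standard space-separated layout with a bracketed timestamp and a quoted request field; the sample line is fabricated and trimmed.

```python
import re

# A trimmed S3 server access log line (trailing fields abbreviated with "-").
SAMPLE = ('79a5 mybucket [06/Feb/2016:00:00:38 +0000] 192.0.2.3 79a5 '
          '3E57427F3EXAMPLE REST.GET.OBJECT photos/cat.jpg '
          '"GET /mybucket/photos/cat.jpg HTTP/1.1" 403 AccessDenied 243 - - -')

# Leading fields of the access log layout: owner, bucket, [time], ip,
# requester, request id, operation, key, "request", status, error code.
PATTERN = re.compile(
    r'^(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<ip>\S+) '
    r'(?P<requester>\S+) (?P<request_id>\S+) (?P<operation>\S+) (?P<key>\S+) '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<error>\S+)')

def access_denied(lines):
    """Yield (ip, key) for requests that S3 rejected with HTTP 403."""
    for line in lines:
        m = PATTERN.match(line)
        if m and m.group("status") == "403":
            yield m.group("ip"), m.group("key")

print(list(access_denied([SAMPLE])))   # → [('192.0.2.3', 'photos/cat.jpg')]
```

In practice, a Logstash grok pattern performs this extraction so that status codes and client IPs become searchable fields in Kibana.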
Connect with us

E-mail: info@logz.io
Website: www.logz.io
Twitter: www.twitter.com/logzio
LinkedIn: www.linkedin.com/company/4831888
Facebook: www.facebook.com/logz.io
Google+: https://plus.google.com/+logzio/