Designing and Building an Open IT Operations Analytics (ITOA) Architecture
- Colin Copeland
WHITE PAPER

Designing and Building an Open IT Operations Analytics (ITOA) Architecture

Abstract

This white paper provides a roadmap for designing and building an open IT Operations Analytics (ITOA) architecture. You will learn about a new IT data taxonomy defined by the four data sources of IT visibility: wire, machine, agent, and synthetic data sets. After weighing the role of each IT data source for your organization, you can learn how to combine them in an open ITOA architecture that avoids vendor lock-in, scales out cost-effectively, and unlocks new and unanticipated IT and business insights.
Table of Contents

- Executive Summary
- The Need for an Open ITOA Architecture
- The Four Sources of IT Visibility: A Taxonomy
  - Machine Data
  - Wire Data
  - Agent Data
  - Synthetic Data
  - Other Sources
- Tying Everything Together: Open Architecture for ITOA
  - Selecting Sources of IT Visibility
  - Selecting a Data Store for the ITOA Platform
  - Selecting a Visualization Solution
- Conclusion
Executive Summary

This paper explains the need for and benefits of designing an Open IT Operations Analytics (ITOA) architecture based on the principle of streaming vastly different IT data sets into a scalable, non-proprietary data store for exploration and multi-dimensional analysis. Many IT organizations do not realize how they can use rich data sets that they already have to make better operational and business decisions. The objective of Open ITOA is to enable the discovery of valuable new relationships and insights derived from these combined data sets. This will drive improved IT operations, add business value, prevent vendor or data lock-in to proprietary systems, and provide a roadmap for the cost-effective expansion of the analytics architecture.

With the aim of providing practical and prescriptive guidance, this paper introduces a taxonomy that covers the primary data sets that serve as the foundation of Open ITOA. This taxonomy not only defines machine data, wire data, agent data, and synthetic data, but also explains each data source's strengths, limitations, and potential uses in an organization. By classifying data sets according to their source, you can select best-of-breed technology on an objective basis. The goal is to put you in control of your ITOA platform by providing vendor choice and minimizing switching costs if you decide to replace any particular data source technology.

Using the four-sources taxonomy, you can assess your current toolset to identify and eliminate redundancies, address technology gaps, and evaluate new solutions while building toward a unified and Open ITOA practice. Organizations that have adopted this practice have reduced the number of tools and products in use from an average of 22 down to fewer than 8, while increasing operational visibility and IT productivity, improving MTTR, and strengthening cross-team collaboration (through fewer and dramatically shorter war-room sessions).
Open ITOA architectures can also augment traditional business intelligence architectures. The final section of this paper provides prescriptive guidance for designing an open ITOA architecture, including concrete actions to take when evaluating your existing toolsets, selecting non-proprietary data stores, and choosing visualization and analysis tools.

PLAN NOW FOR ITOA
Gartner estimates that by 2017, 15% of enterprises will actively use ITOA solutions, up from just 5%. IT executives should plan an open architecture for their ITOA solution, which will minimize costs and ensure the best results for their organization.

The Need for an Open Architecture for ITOA

IT organizations purposefully design and build architectures for their applications, networks, storage, cloud, and security systems, but what about IT Operations? If you asked a CIO or VP of IT, "Do you have a network or security architecture?" they would of course answer, "Yes." In most cases, they would be able to describe their strategy, the principles driving that strategy, and the components of those architectures. But if you ask, "Do you have an IT Operations architecture?" most would not be able to describe a strategy, let alone the principles behind their tool selection. It is ironic that production IT Operations, the intersection where technology and business meet, is the one place in the IT organization that lacks a well-defined and unified architecture.

Lacking a planned architecture for gaining visibility, IT organizations purchase monitoring and management products according to the specific needs of specialist groups. The network team will have one set of tools; the application support team
another; DBAs, storage teams, and system administrators will all have others; and so forth. What results is tool bloat: a number of disparate views and information locked in separate data stores, with no easy way to correlate data sets and gain holistic insight.

However, the lack of a unified architecture is not entirely IT organizations' fault; IT management vendors are primarily to blame because they rarely, if ever, pursue meaningful integration between siloed products that would enable a unified architecture. Meaningful integration does not mean sharing SNMP traps, alerts, or writing plug-ins. While these can be useful, the only way to achieve ITOA insights from expected relationships while discovering new ones is through unrestricted data integration using a common data store. This data store must be open and non-proprietary. It cannot be controlled by any one vendor because that represents too big a business risk for the others. The data store serves as neutral ground where different monitoring products can be fairly compared based on the data they provide. This approach also negates vendors' attempts to control their customers by locking their data in a proprietary data store and then discouraging meaningful integration with products that could be perceived as competing.

TRADITIONAL IT MONITORING AND ANALYTICS
- Context: IT teams purchase their own monitoring tools, each with a data set that cannot be easily correlated with other data sets. IT teams lack context when answering questions.
- Flexibility: Traditional proprietary data stores and analytics solutions rely on rigid schemas that require weeks or months to change. Organizations' data is locked in a proprietary data store and cannot be ported.
- Scalability: Relational database technology does not scale well.
- Cost: Discrete proprietary tools for each IT team and limited scalability increase licensing and maintenance costs.

NEXT-GENERATION IT MONITORING AND ANALYTICS
- Context: Open data streams and generous APIs enrich data with context, enable collaboration across multiple IT teams, and offer value to business stakeholders.
- Flexibility: Modern, non-proprietary data stores enable organizations to easily perform ad hoc queries on unstructured data. Data can be moved from one data store to another without hindrance.
- Scalability: NoSQL data stores are built for scalability.
- Cost: A rationalized toolset offering the four sources of IT visibility and a non-proprietary data store reduces licensing and maintenance costs.

It is time for IT organizations to take an architectural approach to ITOA, intentionally designing an open system to deliver consistent, high-quality results for performance, availability, security, and business insight. An Open ITOA platform will provide both real-time analysis of what is happening within the environment at any given time and the ability to query indexed data on an ad hoc basis for forensic analysis and long-term trending. This platform will primarily provide value for IT Operations, Security, and Application teams, but will also benefit business
stakeholders through business-value dashboards. With an open architecture for ITOA, IT organizations can:

- Proactively identify and resolve performance and security issues
- Correlate transactions and events across every tier of the infrastructure
- Intelligently plan IT investments and optimize infrastructure
- Facilitate collaboration across various IT teams
- Improve IT staff productivity and workflow efficiency
- Reduce costs by rationalizing their IT operations management toolset
- Provide line-of-business stakeholders with real-time insights

As stated previously, no single IT management or monitoring vendor will provide this ideal solution, but with a planned and purposefully designed open architecture that incorporates the four sources of IT visibility, you can effectively evaluate different solutions and how they fit into a unified ITOA architecture that is appropriate for your organization.

The Four Sources of IT Visibility: A Taxonomy

In order to break down the silos in the IT organization, IT teams need to stop thinking in terms of product categories such as NPM, APM, and EUM and instead think in terms of the data output of those products. Thinking in terms of product categories only leads to greater confusion: go to any of these vendors' websites and you will see that they all claim to provide application, network, cloud, virtualization, infrastructure, system, database, and storage visibility. As organizations prepare to adopt ITOA architectures, a new taxonomy can help to clarify what components are required. Borrowing from Big Data principles, this new IT data taxonomy classifies data sets by their source; the benefit is that it accurately sets expectations of what types of outputs you can expect from each product. An IT Operations Analytics (ITOA) architecture must combine the following four IT data sources for complete visibility.
Each data source provides important information that makes it well suited for specific objectives and complementary to the others.

1. Machine data is system self-reported data, including logging provided by vendors, SNMP, and WMI. This information about system internals and application activity helps IT teams identify overburdened machines, plan capacity, and perform forensic analysis of past events.
2. Wire data is all L2-L7 communications between all systems. Analysis of this data source has traditionally included deep packet inspection and header sampling, but recent advancements allow for far deeper, real-time wire data analysis.
3. Agent data is derived from bytecode instrumentation and call-stack sampling. Code diagnostic tools have traditionally been the purview of Development and QA teams, helping to identify hotspots or errors in software code.
4. Synthetic data comes from synthetic transactions and service checks. This data enables IT teams to test common transactions from locations around the globe or within the datacenter.
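To make the taxonomy concrete, the sketch below normalizes events from all four sources into one tagged, time-series shape. The field names and sample metrics are hypothetical, chosen only to illustrate how records from very different tools become directly comparable once they share a schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical common schema: every record, regardless of origin,
# carries a timestamp, a source tag, and a metric/value pair.
@dataclass
class ITOAEvent:
    source: str   # "machine" | "wire" | "agent" | "synthetic"
    metric: str   # e.g. "cpu.utilization", "http.response_time_ms"
    value: float
    host: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Events from four different product categories, one common shape.
events = [
    ITOAEvent("machine", "cpu.utilization", 87.5, "web-01"),
    ITOAEvent("wire", "http.response_time_ms", 412.0, "web-01"),
    ITOAEvent("agent", "jvm.gc_pause_ms", 95.0, "web-01"),
    ITOAEvent("synthetic", "checkout.check_ok", 1.0, "probe-eu"),
]

# A trivial cross-source correlation: every signal touching one host.
web01 = [e for e in events if e.host == "web-01"]
print(sorted(e.source for e in web01))  # ['agent', 'machine', 'wire']
```

Classifying by source rather than by product category is what makes this kind of cross-tool query possible at all.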
The following sections describe each source of data in detail. It is important to understand the capabilities and limitations of each data source so that you can determine its applicability in your organization. This paper does not cover the capabilities of products that consume these data sources; it discusses only the data sources themselves and what each source can and cannot provide to assist end users.

Machine Data

Definition: Machine data is system self-reported information produced by clients, servers, network devices, security appliances, applications, remote sensors, or any other machine. Machine data outputs tend to be log files or SNMP and WMI polling and traps. Machine data can be classified as time-series, event-driven data.

Examples: SNMP and WMI are common methods for capturing point-in-time metrics, representing active and passive methods of acquiring machine data. When polling a system, you are asking for information; when using traps, you are waiting for the system to push out information. Log files are ubiquitous across systems and devices and include application logs, call records, file system audit logs, and syslog.

Strengths: Machine data is ubiquitous and can contain a wealth and variety of information, including application counters, exceptions, methods, and service states; CPU, memory, and disk utilization and I/O; NIC performance; Windows and Linux events; firewall and security events; configuration changes; audit logs; and sensor data such as temperature and pressure readings. The variety of information to be mined from machine data is nearly endless, which makes it a powerful source of data not only for IT Operations but for the business as well. The information about system internals and self-reported application activity helps IT teams identify overburdened machines, plan capacity, and perform forensic analysis of past events.
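To illustrate the time-series, event-driven nature of machine data, here is a sketch that turns one classic BSD-style syslog line into a structured record. The regex is illustrative only, not a full RFC 3164/5424 parser:

```python
import re

# Illustrative pattern for a BSD-style syslog line, e.g.
#   "Oct 11 22:14:15 web-01 sshd[4721]: Failed password for root"
SYSLOG_RE = re.compile(
    r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<app>[\w./-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<msg>.*)"
)

def parse_syslog(line):
    """Turn one self-reported log line into a structured event dict."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None

event = parse_syslog("Oct 11 22:14:15 web-01 sshd[4721]: Failed password for root")
print(event["host"], event["app"], event["pid"])  # web-01 sshd 4721
```

The quality of what you can extract here is entirely up to whatever the machine chose to write, which is exactly the dependency the Limitations discussion below describes.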
Limitations: As the term implies, you are dependent upon the machine to produce timely, thorough, usable, and accurate information. Because of the wide variety of machines, systems, and applications in existence from many different vendors, the quality, usability, and thoroughness of machine data can vary greatly depending upon the vendor, and can even vary widely among product teams working for the same company. Another limitation is that if a machine or application is under duress, the system may not log information accurately, or at all. Finally, there is valuable information that is either impractical or impossible for machines to log, including information found in transaction payloads, database stored procedures, storage transaction performance, or any timed measurements between multiple systems.

Key Questions:
- Are your systems administrators using tools such as PowerShell to manage the types of machine data collected? The answer to this question will help determine how well your organization is using machine data.
- Do you have a log indexing and analysis solution in place? If so, how much data do you send to it for indexing each day?
- What percentage of your server infrastructure is virtualized? In highly virtualized environments, application performance is less correlated with resource utilization metrics than when applications run on dedicated infrastructure.

Wire Data

Definition: Wire data comprises the communication that passes between networked systems. At the most fundamental level, wire data is the raw bits flowing between hosts, but it includes the entire OSI stack from L2 to L7, inclusive of TCP state machines and application information passed in L7 payloads. Wire data can be classified as time-series, globally observed, transaction-level data.

Example: Traditional network monitoring tools scan traffic to capture basic L2-L4 metrics such as TCP flags, Type of Service (ToS), and bytes by port, while packet analyzers take a deeper look at point-in-time captures using deep packet inspection (DPI), which enables IT teams to view individual or contiguous packet payloads.
New high-speed packet processing capabilities make it possible to fully analyze all L2-L7 communications passing over the wire in real time.

Strengths: Considering that a single 10Gbps link can carry over 100TB in a single day, the communication between servers is easily the most voluminous source of data for an ITOA architecture. This data represents a tremendously rich source of IT and business intelligence, provided it can be efficiently extracted and analyzed. Wire data can be analyzed passively using a copy of network traffic mirrored off a physical or virtual switch; with this approach, there is no need to deploy agents. Wire data is deterministic, observed behavior, not self-reported data: if a transaction failed because a server crashed, that failure will be observed on the wire but will not be reported by the server. Wire data touches each element in the application delivery chain, making it an excellent source of correlated, cross-tier visibility. Wire data is also real-time and always on. Unlike other data sources that may require reconfiguration as the environment changes, wire data provides an uninterrupted view of dynamic environments.
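The deterministic, observed quality of wire data can be sketched with a few hypothetical pre-parsed L7 records: pairing requests with responses yields per-transaction latency, and an unanswered request is a failure observed on the wire even when the server itself logged nothing:

```python
# Hypothetical pre-parsed L7 records, as a wire data platform might
# emit after reassembling TCP streams (timestamps in seconds).
observed = [
    {"txn": "a1", "kind": "request",  "ts": 10.00, "uri": "/login"},
    {"txn": "a1", "kind": "response", "ts": 10.24, "status": 200},
    {"txn": "b2", "kind": "request",  "ts": 10.10, "uri": "/checkout"},
    # No response record for b2: the server crashed mid-transaction.
]

def transaction_latencies(records):
    """Pair each request with its response; unmatched requests are
    failures observed on the wire even if the server logged nothing."""
    pending, latencies = {}, {}
    for r in records:
        if r["kind"] == "request":
            pending[r["txn"]] = r["ts"]
        else:
            latencies[r["txn"]] = r["ts"] - pending.pop(r["txn"])
    return latencies, list(pending)  # pending = requests never answered

lat, failed = transaction_latencies(observed)
print(round(lat["a1"], 2), failed)  # 0.24 ['b2']
```

No host had to self-report anything here; both the latency and the failure fall out of what was observed between systems.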
Limitations: The primary limitation of wire data is that not all information important for performance, security, or business analysis passes over the wire. Resource utilization metrics, policy and configuration changes, and internal application execution details are some examples of important information that does not pass between systems on the network. Additionally, wire data is dependent on the quality of the network feed. Overloaded network infrastructure will prioritize production traffic over mirrored traffic, which can lead to lossy wire data feeds.

Key Questions:
- What is the volume and speed of the communications carried over the network in your datacenter?
- Do you have methods in place to analyze all of your wire data? Do those methods of analysis require engineering expertise?
- How many applications is your IT Operations team responsible for?

SFLOW, NETFLOW, AND PACKET CAPTURE
sFlow and NetFlow collectors produce basic network data that is a precursor to wire data, but it is more accurately self-reported machine data derived from network routers and switches. These systems do not transform raw packets into structured data that can be measured, visualized, alerted upon, or trended. However, because they provide L2-L4 visibility similar to wire data, sFlow and NetFlow are comparable to other packet-based tools. Packet-based tools are not useful in Open ITOA architectures because packet captures of greater than 10GB become inefficient for ad hoc IT or business analytics. It is much more effective to use a wire data analytics platform to stream pre-processed, structured data to the ITOA data store.

Agent Data

Definition: Agent data is derived from bytecode instrumentation and call-stack sampling. In custom applications, agents hook into the underlying application runtime environment, such as .NET or Java, inserting code at the beginning and end of method calls.
Agents can then generate method performance profiles and, using tags or keys, trace transactions through the various tiers of an application.

Examples: Code diagnostic tools have traditionally been the purview of Development and QA teams, helping to identify hotspots or errors in software code. As agents have become more lightweight, they have increasingly been deployed in production environments to better characterize how application code affects performance.

Strengths: Agent-based monitoring is the microscope of the ITOA architecture. If you have identified a specific application or device that requires code-level monitoring and management, agent-based monitoring tools can be incredibly useful. Agent data is vital to DevOps, where IT teams must integrate seamlessly with the development process. For custom-developed applications, agents provide an unprecedented ability to do root-cause
analysis at a code level. By dumping the call stack, developers can see exactly where the code failed; you can capture method entries and exits, memory allocation and free events, and build method traces. By identifying hotspots and bottlenecks, you can optimize the overall performance of your application.

Limitations: Agents require trade-offs in terms of performance. Because they exist in the execution path, agents, by design, increase overhead and must also support safe reentrance and multithreading. During runtime, these agents share the same process and address space as your application, which means they have direct access to the stack and heap and can perform stack traces and memory dumps. You can capture every class and method call, but your application will then be too burdened to run well in a production environment. Tailoring agents for specific applications requires an understanding of both the agents and the application, and optimizing agent monitoring must be considered as part of the development and QA process for each application rollout. Deploying agents requires access to application code, so they provide little value for off-the-shelf applications like Microsoft SharePoint, SAP, or Epic EMR. For those types of applications, do not expect any greater insight from agents than could be gathered from SNMP or WMI machine data. In addition, systems that offer no agent hooks (e.g., ADCs/load balancers, DNS servers, firewalls, routers/switches, and storage) cannot be instrumented with agents.

Key Questions:
- Does your organization develop mission-critical applications, does it primarily rely on off-the-shelf applications, or does it use both types?
- Is your IT Operations team able to interpret the application code?
- Do you have staff resources available to configure agents for production and maintain compatibility?
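The mechanism agents rely on, inserting code at method entry and exit, can be sketched in pure Python with a decorator. Real bytecode agents work at a much lower level, but the trade-off is the same: every instrumented call pays some overhead in exchange for a performance profile. The function being profiled here is a made-up example:

```python
import time
from functools import wraps

PROFILE = {}  # method name -> (call_count, total_seconds)

def instrument(fn):
    """Insert timing code at method entry and exit, as an agent would."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            count, total = PROFILE.get(fn.__qualname__, (0, 0.0))
            PROFILE[fn.__qualname__] = (count + 1, total + elapsed)
    return wrapper

@instrument
def lookup_user(user_id):
    time.sleep(0.01)  # stand-in for a slow database call
    return {"id": user_id}

for uid in range(3):
    lookup_user(uid)

count, total = PROFILE["lookup_user"]
print(f"{count} calls, {total:.3f}s total")  # the hotspot, quantified
```

Note that the wrapper runs on every call, inside the application's own process, which is precisely why capturing everything is too heavy for production.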
Synthetic Data

Definition: Synthetic data originates outside of your application delivery chain, through hosted monitoring tools or as part of active service checks. Firing on a predefined schedule, service checks (pingers) can be anything from simple ICMP pings to fully scripted checks that work through the application flow. To more accurately mimic customer geolocation, probes can be fired from around the globe, representing many points of presence. Synthetic data can be characterized as scheduled and scripted time-series, transaction-level data.

Example: Typically, pingers answer the question "Can a connection be built?" Most pingers are lightweight and easy to implement. Many vendors offer hosted ping services, and many free options are available for on-premises deployment. Using services that offer many points of presence, IT teams can understand how geography affects the end-user experience.

Strengths: Synthetic data enables IT teams to test common transactions from locations around the globe or within the datacenter, and can quickly identify hard failures. Synthetic monitors are easy to implement for both custom and off-the-shelf applications, and can be deployed broadly without significant impact on the application as long as care is taken in setting up the checks.
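A minimal service check of the "can a connection be built?" variety can be sketched as below. The demo probes a throwaway local listener so it is self-contained; a real pinger would target the production service on a schedule, from many points of presence:

```python
import socket
from datetime import datetime, timezone

def tcp_check(host, port, timeout=2.0):
    """One synthetic sample: can a connection be built?"""
    sample = {"ts": datetime.now(timezone.utc).isoformat(),
              "target": f"{host}:{port}", "ok": False}
    try:
        # Success means only that something accepted the connection,
        # not that the service behind it is functioning correctly.
        with socket.create_connection((host, port), timeout=timeout):
            sample["ok"] = True
    except OSError:
        pass  # hard failure: record ok=False rather than raise
    return sample

# Demo against a throwaway local listener on an ephemeral port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

up = tcp_check("127.0.0.1", port)
listener.close()
down = tcp_check("127.0.0.1", port)
print(up["ok"], down["ok"])  # True False
```

The comment inside the `try` block hints at the limitation discussed next: a connection that succeeds says nothing about whether the transaction behind it actually works.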
Limitations: Because synthetic data is generated outside of the application delivery chain, it does not offer insight into why an application is slow or failing. Additionally, synthetic data is generated through sampling rather than continuous monitoring of real-user transactions; it can therefore miss intermittent problems or problems that affect only a segment of users. Likewise, without an understanding of the success or failure of a transaction, it is easy to mark a service green that is accessible but not functioning correctly.

Key Questions:
- Does your application have a geographically distributed user base?
- Do you need to monitor the success or failure of specific, well-defined transactions, such as an online checkout process?

Other Sources

Human-Generated Data: Human-generated data includes text documentation, engineering diagrams, Facebook and Twitter posts, and posts and messages from internal social collaboration software. Analyzing human data can provide end-user contextual information previously unavailable to IT Operations teams. While Big Data solutions can store and manage this type of unstructured data, solutions that convert this data into actionable intelligence are still in their infancy, so this remains a largely experimental data source for ITOA architectures. Many applications and use cases for leveraging human-generated data will surface with greater market adoption, which requires tools that take advantage of processing gains to provide real insight.

Tying It All Together: Open Architecture for ITOA

Once you determine the relative importance of each of the four sources of IT visibility to your organization, you are ready to design an Open ITOA architecture. There are three elements of any Open ITOA platform: extraction of the data, storage and indexing of the data, and presentation or visualization of the data. This section addresses the issues that IT organizations must consider for each element.
Selecting Sources of IT Visibility

When evaluating your current toolset, you may determine that some platforms will work for your Open ITOA architecture by providing one or more of the four sources, or you may decide that it is time for a refresh. When considering whether an existing or new platform should provide machine data, wire data, agent data, or synthetic data, keep these requirements in mind:

- Openness: Integration options, including robust APIs and the possibility of real-time data feeds into a backend data store. Batch data integration will not suffice.
- Flexibility: Do not depend on vendors to define new metrics or extend the solution. You should be able to easily adapt the data collected and analyzed to meet new IT and business requirements.
- Scalability: Solutions that have onerous licensing or fees for data usage will hinder the growth and utility of your Open ITOA architecture. Select solutions where cost does not constrain the number of data points or metrics stored.

Selecting a Data Store for the Open ITOA Platform

The real power of your Open ITOA platform is unlocked when your data sources are combined in
an open, non-proprietary data store that enables contextual search and ad hoc queries, such as ElasticSearch or MongoDB. This is a case where the sum becomes greater than the parts. When selecting a data store, consider the characteristics of Big Data in general, namely the ever-increasing volume, variety, and velocity of data. With that in mind, the considerations for an Open ITOA data store include:

- Cost: Plan for twice as much capacity as you initially think adequate. Your data usage will grow with broader adoption and as your organization adds use cases. Selecting a non-proprietary data store will help avoid licensing costs.
- Vendor lock-in: Your data is extremely valuable and should be portable should you decide to change your ITOA data store. Be wary of data stores that are closed or limit how you can use your data. Choosing an open data store will ensure that you have the flexibility to adopt the best technologies in the future.
- Ecosystem: Data stores that have broad ecosystems will enable you to take advantage of community resources.
- Scalability: The data store should be able to scale out storage and processing capacity easily and require minimal ongoing maintenance as the environment grows.

Selecting a Visualization Solution

Many organizations already have visualization solutions in place, such as Tableau, Chartio, Domo, Pentaho, or JSON Studio. It may make sense to examine whether these existing visualization solutions can be applied to your ITOA platform, especially if one of the primary goals is to provide increased value for sales, marketing, customer support, and other non-IT stakeholders who already use those solutions. A key selection criterion in choosing the non-proprietary data store is the type and variety of visualization solutions currently available for that data store.
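The "sum greater than the parts" effect of a shared store can be sketched in a few lines. SQLite stands in here purely for illustration (a real deployment would use a scale-out store such as Elasticsearch or MongoDB), and the feeds and metric names are hypothetical; the point is the ad hoc cross-source query at the end, which neither monitoring product could answer alone:

```python
import sqlite3

# One shared, queryable store for events from every source.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE events
              (ts REAL, source TEXT, host TEXT, metric TEXT, value REAL)""")

# Streams from two different monitoring products land in one place.
machine_feed = [(100.0, "machine", "web-01", "cpu.utilization", 95.0),
                (100.0, "machine", "web-02", "cpu.utilization", 20.0)]
wire_feed    = [(100.2, "wire", "web-01", "http.response_time_ms", 900.0),
                (100.3, "wire", "web-02", "http.response_time_ms", 80.0)]
db.executemany("INSERT INTO events VALUES (?, ?, ?, ?, ?)",
               machine_feed + wire_feed)

# Ad hoc cross-source question: which hosts were slow on the wire
# AND CPU-bound at the same time?
rows = db.execute("""
    SELECT w.host FROM events w JOIN events m ON w.host = m.host
    WHERE w.source = 'wire'    AND w.value > 500
      AND m.source = 'machine' AND m.metric = 'cpu.utilization'
      AND m.value > 90
""").fetchall()
print(rows)  # [('web-01',)]
```

Locked in two proprietary stores, the same two feeds would each show half the picture; combined, one query isolates the host that is both slow and overloaded.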
Conclusion

Every IT executive today must grapple with building an ITOA platform. It is a strategic issue with significant ramifications, not only for cost but also for the competitiveness of the business. According to Gartner, global spending on ITOA products and services is projected to reach $1.6 billion by the end of 2014, representing a 100 percent increase over 2013 levels.[1] Starting out with a purposefully designed architecture for ITOA will ensure cost-effectiveness, scalability, and the best results for your organization.

The first step in planning an open ITOA architecture is understanding the relative importance of the four sources of IT visibility for your organization and then selecting the right-fit solutions for each data source. The second step is to choose an open data store that can consume real-time streams from each data source and does not limit the amount of data you store or how you use it. Finally, you will want to select the visualization solution that best provides insights to both IT and non-IT stakeholders.

ExtraHop believes that an Open ITOA architecture is the best future direction for IT organizations and where the IT operations management industry is headed. To learn more about how ExtraHop is leading the industry in supporting an Open ITOA approach with its open and scalable wire data analytics platform, visit the ExtraHop website.

ExtraHop is the global leader in real-time wire data analytics. The ExtraHop Operational Intelligence platform analyzes all L2-L7 communications, including full bidirectional transactional payloads. This innovative approach provides the correlated, cross-tier visibility essential for application performance, availability, and security in today's complex and dynamic IT environments. The winner of numerous awards from Interop and others, the ExtraHop platform scales up to 20Gbps, deploys without agents, and delivers tangible value in less than 15 minutes.
[1] Gartner, June 2014: "ITOA at Mid-Year 2014: Five Key Trends Driving Growth"

ExtraHop Networks, Inc. All rights reserved.
Riverbed SteelCentral Product Family Brochure Application performance from the perspective that matters most: Yours Applications are now the center of the business world. We rely on them to reach customers,
Network Performance Management Solutions Architecture
Network Performance Management Solutions Architecture agility made possible Network Performance Management solutions from CA Technologies compliment your services to deliver easily implemented and maintained
Comprehensive Monitoring of VMware vsphere ESX & ESXi Environments
Comprehensive Monitoring of VMware vsphere ESX & ESXi Environments Table of Contents Overview...3 Monitoring VMware vsphere ESX & ESXi Virtual Environment...4 Monitoring using Hypervisor Integration...5
Network Management and Monitoring Software
Page 1 of 7 Network Management and Monitoring Software Many products on the market today provide analytical information to those who are responsible for the management of networked systems or what the
Modern IT Operations Management. Why a New Approach is Required, and How Boundary Delivers
Modern IT Operations Management Why a New Approach is Required, and How Boundary Delivers TABLE OF CONTENTS EXECUTIVE SUMMARY 3 INTRODUCTION: CHANGING NATURE OF IT 3 WHY TRADITIONAL APPROACHES ARE FAILING
ExtraHop and AppDynamics Deployment Guide
ExtraHop and AppDynamics Deployment Guide This guide describes how to use ExtraHop and AppDynamics to provide real-time, per-user transaction tracing across the entire application delivery chain. ExtraHop
IBM QRadar Security Intelligence Platform appliances
IBM QRadar Security Intelligence Platform Comprehensive, state-of-the-art solutions providing next-generation security intelligence Highlights Get integrated log management, security information and event
Server & Application Monitor
Server & Application Monitor agentless application & server monitoring SolarWinds Server & Application Monitor provides predictive insight to pinpoint app performance issues. This product contains a rich
PANDORA FMS NETWORK DEVICE MONITORING
NETWORK DEVICE MONITORING pag. 2 INTRODUCTION This document aims to explain how Pandora FMS is able to monitor all network devices available on the marke such as Routers, Switches, Modems, Access points,
PANDORA FMS NETWORK DEVICES MONITORING
NETWORK DEVICES MONITORING pag. 2 INTRODUCTION This document aims to explain how Pandora FMS can monitor all the network devices available in the market, like Routers, Switches, Modems, Access points,
MIMIC Simulator helps testing of Business Service Management Products
Technical white paper from FireScope and Gambit Communications MIMIC Simulator helps testing of Business Service Management Products By Ryan Counts, FireScope, Inc. & Pankaj Shah, Gambit Communications
whitepaper SolarWinds Integration with 3rd Party Products Overview
SolarWinds Integration with 3rd Party Products Overview This document is intended to provide a technical overview of the integration capabilities of SolarWinds products that are based on the Orion infrastructure.
How It Works and Real-World Results
WHITE PAPER The ExtraHop IT Operational Intelligence Platform: By Tyson Supasatit Technical Marketing Manager Abstract ExtraHop accelerates IT transformation with real-time IT operations analytics. The
QRadar Security Intelligence Platform Appliances
DATASHEET Total Security Intelligence An IBM Company QRadar Security Intelligence Platform Appliances QRadar Security Intelligence Platform appliances combine typically disparate network and security management
Network Performance + Security Monitoring
Network Performance + Security Monitoring Gain actionable insight through flow-based security and network performance monitoring across physical and virtual environments. Uncover the root cause of performance
End Your Data Center Logging Chaos with VMware vcenter Log Insight
End Your Data Center Logging Chaos with VMware vcenter Log Insight By David Davis, vexpert WHITE PAPER Table of Contents Deploying vcenter Log Insight... 4 vcenter Log Insight Usage Model.... 5 How vcenter
Application Performance Monitoring (APM) Technical Whitepaper
Application Performance Monitoring (APM) Technical Whitepaper Table of Contents Introduction... 3 Detect Application Performance Issues Before Your Customer Does... 3 Challenge of IT Manager... 3 Best
Monitoring and Diagnosing Production Applications Using Oracle Application Diagnostics for Java. An Oracle White Paper December 2007
Monitoring and Diagnosing Production Applications Using Oracle Application Diagnostics for Java An Oracle White Paper December 2007 Monitoring and Diagnosing Production Applications Using Oracle Application
When application performance is better, business works better.
Solution brief When application performance is better, business works better. How APM improves IT operational efficiency and customer satisfaction. Table of contents 3 Monitor. Manage. Perform. 3 What
STEELCENTRAL APPINTERNALS
STEELCENTRAL APPINTERNALS BIG DATA-DRIVEN APPLICATION PERFORMANCE MANAGEMENT BUSINESS CHALLENGE See application performance through your users eyes Modern applications often span dozens of virtual and
WAIT-TIME ANALYSIS METHOD: NEW BEST PRACTICE FOR APPLICATION PERFORMANCE MANAGEMENT
WAIT-TIME ANALYSIS METHOD: NEW BEST PRACTICE FOR APPLICATION PERFORMANCE MANAGEMENT INTRODUCTION TO WAIT-TIME METHODS Until very recently, tuning of IT application performance has been largely a guessing
Scalable Extraction, Aggregation, and Response to Network Intelligence
Scalable Extraction, Aggregation, and Response to Network Intelligence Agenda Explain the two major limitations of using Netflow for Network Monitoring Scalability and Visibility How to resolve these issues
IBM Security. 2013 IBM Corporation. 2013 IBM Corporation
IBM Security Security Intelligence What is Security Intelligence? Security Intelligence --noun 1.the real-time collection, normalization and analytics of the data generated by users, applications and infrastructure
ForeScout CounterACT. Device Host and Detection Methods. Technology Brief
ForeScout CounterACT Device Host and Detection Methods Technology Brief Contents Introduction... 3 The ForeScout Approach... 3 Discovery Methodologies... 4 Passive Monitoring... 4 Passive Authentication...
White Paper. The Ten Features Your Web Application Monitoring Software Must Have. Executive Summary
White Paper The Ten Features Your Web Application Monitoring Software Must Have Executive Summary It s hard to find an important business application that doesn t have a web-based version available and
Application Performance Management (APM) Inspire Your Users With Every App Transaction. Anand Akela CA Technologies @aakela
Application Performance Management (APM) Inspire Your Users With Every App Transaction Anand Akela CA Technologies @aakela Agenda 1 2 3 The App Economy Business Reputation Relies on App Experience APM
Measuring end-to-end application performance in an on-demand world. Shajeer Mohammed Enterprise Architect
Measuring end-to-end application performance in an on-demand world Shajeer Mohammed Enterprise Architect Agenda 1 Introduction to CA 2 Application Performance Management and its Need 3 How CA Solutions
Pentaho High-Performance Big Data Reference Configurations using Cisco Unified Computing System
Pentaho High-Performance Big Data Reference Configurations using Cisco Unified Computing System By Jake Cornelius Senior Vice President of Products Pentaho June 1, 2012 Pentaho Delivers High-Performance
HP Application Performance Management
HP Application Performance Management Improving IT operational efficiency and customer satisfaction Solution brief IT organizations are under pressure to reduce downtime and improve the quality of user
WHITE PAPER SPLUNK SOFTWARE AS A SIEM
SPLUNK SOFTWARE AS A SIEM Improve your security posture by using Splunk as your SIEM HIGHLIGHTS Splunk software can be used to operate security operations centers (SOC) of any size (large, med, small)
NOT ALL END USER EXPERIENCE MONITORING SOLUTIONS ARE CREATED EQUAL COMPARING ATERNITY WORKFORCE APM TO FOUR OTHER MONITORING APPROACHES
NOT ALL END USER EXPERIENCE MONITORING SOLUTIONS ARE CREATED EQUAL COMPARING ATERNITY WORKFORCE APM TO FOUR OTHER MONITORING APPROACHES COMPREHENSIVE VISIBILITY INTO END USER EXPERIENCE MONITORING REQUIRES
Monitoring Best Practices for
Monitoring Best Practices for OVERVIEW Providing the right level and depth of monitoring is key to ensuring the effective operation of IT systems. This is especially true for ecommerce systems like Magento,
Elevating Data Center Performance Management
Elevating Data Center Performance Management Data Center innovation reduces operating expense, maximizes employee productivity, and generates new sources of revenue. However, many I&O teams lack proper
Big data: Unlocking strategic dimensions
Big data: Unlocking strategic dimensions By Teresa de Onis and Lisa Waddell Dell Inc. New technologies help decision makers gain insights from all types of data from traditional databases to high-visibility
Performance Management for Enterprise Applications
performance MANAGEMENT a white paper Performance Management for Enterprise Applications Improving Performance, Compliance and Cost Savings Teleran Technologies, Inc. 333A Route 46 West Fairfield, NJ 07004
Beyond Monitoring Root-Cause Analysis
WHITE PAPER With the introduction of NetFlow and similar flow-based technologies, solutions based on flow-based data have become the most popular methods of network monitoring. While effective, flow-based
FIVE WAYS WIRE DATA ANALYTICS ENABLES REAL-TIME HEALTHCARE SYSTEMS
WHITE PAPER FIVE WAYS WIRE DATA ANALYTICS ENABLES REAL-TIME HEALTHCARE SYSTEMS Abstract Healthcare organizations face a transformational shift with the rise of what Gartner has dubbed the realtime healthcare
LOG INTELLIGENCE FOR SECURITY AND COMPLIANCE
PRODUCT BRIEF uugiven today s environment of sophisticated security threats, big data security intelligence solutions and regulatory compliance demands, the need for a log intelligence solution has become
agility made possible
SOLUTION BRIEF Flexibility and Choices in Infrastructure Management can IT live up to business expectations with soaring infrastructure complexity and challenging resource constraints? agility made possible
Splunk for VMware Virtualization. Marco Bizzantino [email protected] Vmug - 05/10/2011
Splunk for VMware Virtualization Marco Bizzantino [email protected] Vmug - 05/10/2011 Collect, index, organize, correlate to gain visibility to all IT data Using Splunk you can identify problems,
CA Application Performance Management Cloud Monitor
PRODUCT SHEET: CA APM Cloud Monitor CA Application Performance Management Cloud Monitor agility made possible CA Application Performance Management Cloud Monitor (CA APM Cloud Monitor) provides end-to-end
NetCrunch 6. AdRem. Network Monitoring Server. Document. Monitor. Manage
AdRem NetCrunch 6 Network Monitoring Server With NetCrunch, you always know exactly what is happening with your critical applications, servers, and devices. Document Explore physical and logical network
Monitor all of your critical infrastructure from a single, integrated system.
Monitor all of your critical infrastructure from a single, integrated system. Do you know what s happening on your network right now? Take control of your network with real-time insight! When you know
User-Centric Proactive IT Management
User-Centric Proactive IT Management The Powered Rise by Frontline of the Performance Mobile Intelligence Workforce Aternity for SAP Gain Control of End User Experience with Aternity 1 P a g e Prepared
Monitoring Best Practices for COMMERCE
Monitoring Best Practices for COMMERCE OVERVIEW Providing the right level and depth of monitoring is key to ensuring the effective operation of IT systems. This is especially true for ecommerce systems
Network Management Deployment Guide
Smart Business Architecture Borderless Networks for Midsized organizations Network Management Deployment Guide Revision: H1CY10 Cisco Smart Business Architecture Borderless Networks for Midsized organizations
Virtual Desktop Infrastructure Optimization with SysTrack Monitoring Tools and Login VSI Testing Tools
A Software White Paper December 2013 Virtual Desktop Infrastructure Optimization with SysTrack Monitoring Tools and Login VSI Testing Tools A Joint White Paper from Login VSI and Software 2 Virtual Desktop
HP Intelligent Management Center v7.1 Network Traffic Analyzer Administrator Guide
HP Intelligent Management Center v7.1 Network Traffic Analyzer Administrator Guide Abstract This guide contains comprehensive information for network administrators, engineers, and operators working with
EMC Data Protection Advisor 6.0
White Paper EMC Data Protection Advisor 6.0 Abstract EMC Data Protection Advisor provides a comprehensive set of features to reduce the complexity of managing data protection environments, improve compliance
BIG DATA THE NEW OPPORTUNITY
Feature Biswajit Mohapatra is an IBM Certified Consultant and a global integrated delivery leader for IBM s AMS business application modernization (BAM) practice. He is IBM India s competency head for
Implement a unified approach to service quality management.
Service quality management solutions To support your business objectives Implement a unified approach to service quality management. Highlights Deliver high-quality software applications that meet functional
CA Unified Infrastructure Management: The IT Monitoring Solution for the Digital Enterprise
The Power & Payback Of Unified Monitoring CA Unified Infrastructure Management: The IT Monitoring Solution for the Digital Enterprise (Formerly known as CA Nimsoft Monitor) The Power & Payback Of Unified
SolarWinds Network Performance Monitor powerful network fault & availabilty management
SolarWinds Network Performance Monitor powerful network fault & availabilty management Fully Functional for 30 Days SolarWinds Network Performance Monitor (NPM) is powerful and affordable network monitoring
Extreme Networks: A SOLUTION WHITE PAPER
Extreme Networks: The Purview Solution Integration with SIEM Integrating Application Management and Business Analytics into other IT management systems A SOLUTION WHITE PAPER WHITE PAPER Introduction Purview
Frequently Asked Questions Plus What s New for CA Application Performance Management 9.7
Frequently Asked Questions Plus What s New for CA Application Performance Management 9.7 CA Technologies is announcing the General Availability (GA) of CA Application Performance Management (CA APM) 9.7
How To Get Started With Whatsup Gold
WhatsUp Gold v16.2 Getting Started Guide Co Welcome Welcome to WhatsUp Gold... 1 About WhatsUp Gold... 1 WhatsUp Gold Editions... 2 Deploying Deploying WhatsUp Gold... 4 STEP 1: Prepare the network...
HDP Hadoop From concept to deployment.
HDP Hadoop From concept to deployment. Ankur Gupta Senior Solutions Engineer Rackspace: Page 41 27 th Jan 2015 Where are you in your Hadoop Journey? A. Researching our options B. Currently evaluating some
A TECHNICAL WHITE PAPER ATTUNITY VISIBILITY
A TECHNICAL WHITE PAPER ATTUNITY VISIBILITY Analytics for Enterprise Data Warehouse Management and Optimization Executive Summary Successful enterprise data management is an important initiative for growing
Solution Brief TrueSight App Visibility Manager
Solution Brief TrueSight App Visibility Manager Go beyond mere monitoring. Table of Contents 1 EXECUTIVE SUMMARY 1 IT LANDSCAPE TRENDS AFFECTING APPLICATION PERFORMANCE 1 THE MOBILE CONSUMER MINDSET DRIVES
Wait-Time Analysis Method: New Best Practice for Performance Management
WHITE PAPER Wait-Time Analysis Method: New Best Practice for Performance Management September 2006 Confio Software www.confio.com +1-303-938-8282 SUMMARY: Wait-Time analysis allows IT to ALWAYS find the
Detecting Anomalous Behavior with the Business Data Lake. Reference Architecture and Enterprise Approaches.
Detecting Anomalous Behavior with the Business Data Lake Reference Architecture and Enterprise Approaches. 2 Detecting Anomalous Behavior with the Business Data Lake Pivotal the way we see it Reference
Introducing Microsoft SharePoint Foundation 2010 Executive Summary This paper describes how Microsoft SharePoint Foundation 2010 is the next step forward for the Microsoft fundamental collaboration technology
MICROSOFT DYNAMICS CRM Roadmap. Release Preview Guide. Q4 2011 Service Update. Updated: August, 2011
MICROSOFT DYNAMICS CRM Roadmap Release Preview Guide Q4 2011 Service Update Updated: August, 2011 EXECUTIVE SUMMARY Microsoft has delivered significant innovation and value in customer relationship management
Application Performance Management in a Virtualized Environment
WHITE PAPER: APPLICATION PERFORMANCE MANAGEMENT Application Performance Management in a Virtualized Environment Growing business demands for new IT functionality (web sites, self-service portals, ecommerce,
Best Practices for Managing Virtualized Environments
WHITE PAPER Introduction... 2 Reduce Tool and Process Sprawl... 2 Control Virtual Server Sprawl... 3 Effectively Manage Network Stress... 4 Reliably Deliver Application Services... 5 Comprehensively Manage
End-User Experience. Critical for Your Business: Managing Quality of Experience. www.manageengine.com/apm appmanager-support@manageengine.
End-User Experience Measurement ManageEngine is Powering IT ahead Critical for Your Business: Managing Quality of Experience [email protected] Table of Contents 1. The need for end-user
XpoLog Center Suite Data Sheet
XpoLog Center Suite Data Sheet General XpoLog is a data analysis and management platform for Applications IT data. Business applications rely on a dynamic heterogeneous applications infrastructure, such
SapphireIMS 4.0 BSM Feature Specification
SapphireIMS 4.0 BSM Feature Specification v1.4 All rights reserved. COPYRIGHT NOTICE AND DISCLAIMER No parts of this document may be reproduced in any form without the express written permission of Tecknodreams
Remote Network Monitoring Software for Managed Services Providers
http://www.packettrap.com Remote Network Monitoring Software for Managed Services Providers PacketTrap MSP provides a cost-effective way for you to offer enterprise-class server, application, and network
THE CONVERGENCE OF NETWORK PERFORMANCE MONITORING AND APPLICATION PERFORMANCE MANAGEMENT
WHITE PAPER: CONVERGED NPM/APM THE CONVERGENCE OF NETWORK PERFORMANCE MONITORING AND APPLICATION PERFORMANCE MANAGEMENT Today, enterprises rely heavily on applications for nearly all business-critical
The Advantages of Enterprise Historians vs. Relational Databases
GE Intelligent Platforms The Advantages of Enterprise Historians vs. Relational Databases Comparing Two Approaches for Data Collection and Optimized Process Operations The Advantages of Enterprise Historians
HP Business Service Management 9.2 and
HP Business Service Management 9.2 and Operations Analytics Mark Pinskey Product Marketing Network Management 2011Hewlett-Packard 2013 Development.The information Company, contained L.P. herein is subject
Unified network traffic monitoring for physical and VMware environments
Unified network traffic monitoring for physical and VMware environments Applications and servers hosted in a virtual environment have the same network monitoring requirements as applications and servers
QRadar Security Management Appliances
QRadar Security Management Appliances Q1 Labs QRadar network security management appliances and related software provide enterprises with an integrated framework that combines typically disparate network
Getting the Most Out of Your Existing Network A Practical Guide to Traffic Shaping
Getting the Most Out of Your Existing Network A Practical Guide to Traffic Shaping Getting the Most Out of Your Existing Network A Practical Guide to Traffic Shaping Executive Summary As organizations
An Oracle White Paper June, 2013. Enterprise Manager 12c Cloud Control Application Performance Management
An Oracle White Paper June, 2013 Enterprise Manager 12c Cloud Control Executive Overview... 2 Introduction... 2 Business Application Performance Monitoring... 3 Business Application... 4 User Experience
pt360 FREE Tool Suite Networks are complicated. Network management doesn t have to be.
pt360 FREE Tool Suite Networks are complicated. Network management doesn t have to be. pt360 FREE Tool Suite - At a Glance PacketTrap Networks November, 2009 PacketTrap's pt360 FREE Tool Suite consolidates
The SIEM Evaluator s Guide
Using SIEM for Compliance, Threat Management, & Incident Response Security information and event management (SIEM) tools are designed to collect, store, analyze, and report on log data for threat detection,
Site24x7: Powerful, Agile, Cost-Effective IT Management from the Cloud. Ensuring Optimal Performance and Quality Web Experiences
Site24x7: Powerful, Agile, Cost-Effective IT Management from the Cloud Ensuring Optimal Performance and Quality Web Experiences Must-know facts about Site24x7: We bring expertise gained from ManageEngine
