Does function point analysis change with new approaches to software development?
January 2013

Scope of this Report
The information technology world is constantly changing, with newer products, process models and technologies arriving at a seemingly rapid pace. Every evolutionary step of technology brings with it not only marvelous capabilities but a new set of challenges. One of the major challenges for the function point analysis (FPA) community is adapting the use of function points to be as effective in today's environments as it has been for the last 25 years. Is the function point method still viable as is in newer technologies? The answer is yes, but the function point counter must know how to appropriately apply the FPA guidelines. This article examines three types of architectures used in today's business sectors, client-server/cloud computing; real-time, process control and embedded systems; and Service Oriented Architecture (SOA), and what to consider when counting in those scenarios.

Process Review
Let's begin by reviewing the FPA process. When performing a count, the counter has to (1) gather the requirements; (2) determine the purpose of the count; (3) determine the scope of the count; (4) determine the application boundaries; and (5) perform the count. The requirements may be in any format (e.g. Word, Excel, use cases, business rules) or in multiple types of documentation (e.g. functional or system specifications), but they are usually available and are not an issue in newer technologies. Likewise, the purpose of the count is easily determined regardless of technology or process (e.g. to measure productivity or perform estimation). What becomes a challenge for counters in newer technologies is determining the scope and the boundaries. The scope defines which applications are included in the count; the boundaries separate the applications involved. When performing the count, there are five components of FPA.
There are two types of data functions (internal and external) and three types of transaction functions (inputs, outputs and queries). An internal data function (called an Internal Logical File, or ILF) is data maintained by the application, while an external data function (External Interface File, or EIF) is data that an application uses but does not maintain. An external input (EI) is received from outside the system and generally maintains the internal data or triggers an action; a simple example of an EI is an add or update transaction. An external output (EO) or external query (EQ) exits the system to be used or seen by a user, and the user can be a person or another application. An EQ or EO could be an output file, a report or a query on a screen. In newer technologies it can be difficult, especially for an inexperienced counter, to identify the five components because they may have a different look and feel than the more traditional functions.
2013 David Consulting Group Page 1 of 6 v1
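To make the five components concrete, here is a minimal sketch of how an inventory of data and transaction functions rolls up into an unadjusted function point total, using the standard IFPUG complexity weights. The function name and the sample inventory are illustrative, not taken from any particular count.

```python
# Minimal sketch: unadjusted function point (UFP) total from a
# component inventory. Weights are the standard IFPUG values by
# complexity; the sample inventory below is purely illustrative.

IFPUG_WEIGHTS = {
    "ILF": {"low": 7, "average": 10, "high": 15},   # internal logical files
    "EIF": {"low": 5, "average": 7, "high": 10},    # external interface files
    "EI":  {"low": 3, "average": 4, "high": 6},     # external inputs
    "EO":  {"low": 4, "average": 5, "high": 7},     # external outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},     # external inquiries
}

def unadjusted_fp(inventory):
    """inventory: list of (component_type, complexity) tuples."""
    return sum(IFPUG_WEIGHTS[ctype][cplx] for ctype, cplx in inventory)

# Hypothetical count: one maintained file, one reference file,
# an add/update input, and a report.
count = [("ILF", "average"), ("EIF", "low"), ("EI", "low"), ("EO", "average")]
print(unadjusted_fp(count))  # 10 + 5 + 3 + 5 = 23
```

The point of the sketch is that the arithmetic is the easy part; the counter's real work is deciding which candidate objects and processes qualify as which component.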
Client-Server/Cloud Computing
Client-server is a technical architecture in which an application consists of processing on two or more processors: one processor is responsible for requesting information (the client) and the other is responsible for providing the requested information (the server). The client is generally a user interface (usually a workstation or application) whose purpose is to provide a platform from which to perform business operations. The server(s) provide a centralized repository for the handling of data processing, data storage, application version control, and security. Although the two are sometimes referred to as the front end and the back end, it is important to realize that from a logical, user perspective they are not separate applications but two equal parts of one whole application. Cloud computing is a modified version of the client-server architecture: according to NIST, it means using multiple server computers via a digital network as though they were one computer, rather than a dedicated server cluster as in traditional client-server. This means that if a particular website or application is experiencing high volumes of traffic, other server clusters within the cloud can be utilized for their processing power, if available. A cloud is designed to send traffic to whichever of the available servers can provide the lowest latency, instead of sending it to the same group of dedicated servers. Generally a pay-per-use type of service, the cloud can provide not only processing services but also data storage. Cloud storage is a model of networked data storage where data is stored on multiple virtual servers, generally hosted by third parties in large data centers, rather than on dedicated servers. According to NIST, there are several types of cloud computing (see Table 1); however, the common characteristic is that from a user perspective it is no different than a client-server architecture.
The use of multiple servers is a technical consideration and has no bearing on the counting process.

Table 1. Types of Clouds

Public Cloud: Resources are dynamically provisioned by a third-party provider on a fine-grained, self-service basis over the internet.
Community Cloud: Several organizations have similar requirements and can share an infrastructure. Costs are spread across more users than a public cloud, though access is not exclusive.
Hybrid Cloud: A combination of cloud hosting and client-server style dedicated hosting; works well for data centers that need to both maintain data and also make it available for replication.
Combined Cloud: Two clouds that have been joined together, which allows an easier transition from a client-server structure to the public cloud structure.
Private Cloud: A version of the hybrid cloud which is exclusive and privately available only to the corporation which maintains it.
A client-server system is composed of the client(s) sending messages to and from the server, which allows all clients in the system to access and update data in a centralized location. The client recognizes the need to begin communication and contacts the server; the server verifies the identity of the client and responds with the information requested by the client; and the transaction is processed using typical input/validation techniques on both sides of the transaction. Since the client and server are in the same application from a boundary perspective, the communication between client and server is generally not countable by itself but is rather a step in a larger process which began with client data entry or receipt of an external file or request. It is especially important to look at each elementary process to determine whether it leaves the business of the application in a consistent state. Message connections and instance connections contain processing on both the sending and receiving sides between client and server, but are rarely countable alone. Multiple transmission steps are typically within the same process and are not unique under current IFPUG rules: IFPUG function points count complete elementary processes regardless of the number of steps it takes to complete each process. For the data functions, identify the external objects used by the application, which generally qualify as EIFs. When identifying internal objects and other types of data storage maintained by the application, remember that the data could actually reside on both the client and the server and still be one logical data store. If your organization uses the optional General System Characteristics (GSCs) to produce a Value Adjustment Factor for your counts, there are some considerations for client-server applications. GSC #1, Data Communications, is key to these types of applications.
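For readers who use the optional adjustment, the Value Adjustment Factor arithmetic can be sketched as follows: each of the 14 GSCs is scored from 0 to 5 (its degree of influence) and VAF = 0.65 + 0.01 times the total. The client-server scores below are invented for illustration only.

```python
# Sketch of the optional Value Adjustment Factor (VAF):
# each of the 14 General System Characteristics is scored 0-5
# (degree of influence); VAF = 0.65 + 0.01 * total degrees.

def value_adjustment_factor(gsc_scores):
    assert len(gsc_scores) == 14, "IFPUG defines 14 GSCs"
    assert all(0 <= s <= 5 for s in gsc_scores)
    return 0.65 + 0.01 * sum(gsc_scores)

# Illustrative client-server scoring: GSC #1 Data Communications = 5,
# moderate influence elsewhere (all values are hypothetical).
scores = [5, 4, 3, 3, 4, 4, 1, 2, 3, 1, 1, 2, 1, 2]
vaf = value_adjustment_factor(scores)
print(round(vaf, 2))     # 0.65 + 0.36 = 1.01
print(round(100 * vaf))  # adjusted size for a hypothetical 100-UFP count
```

The VAF therefore scales an unadjusted count by at most plus or minus 35 percent, which is why scoring the degrees of influence against the whole application matters.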
You need to determine what the protocols are for the application and generally score this GSC at 4 (application is more than a front end and supports only one TP communications protocol) or 5 (application is more than a front end and supports more than one TP communications protocol). Other GSCs which may require additional attention for client-server are #2 Distributed Data Processing, #3 Performance, #4 Heavily Used Configuration, #5 Transaction Rate, #6 On-line Data Entry, and #9 Complex Processing. When scoring client-server/cloud applications, ensure that the degrees of influence reflect the entire application, not just a few components.

Real-Time, Process Control and Embedded Systems
As technology advances, industry and manufacturing demand for more efficient applications also increases. The major categories of applications used in industry, the military and communications are real-time, process control and embedded systems. They are very similar, and applications frequently are hybrids of one or more types. The principal operating characteristic of a real-time system is that the software must execute almost instantly, at the boundary of the processing limits of the central processing unit. It is often an on-line, continuously available, critically timed system which generates event-driven outputs almost simultaneously with their corresponding inputs. Inputs can occur at either fixed or variable intervals, selectively or continuously, and could require interruption to process an input of higher priority, often utilizing a buffer area or queue to determine processing priorities. In some cases the queues are not considered temporary storage (like the buffer) and could be the only ILFs for this type of system.
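The buffered, priority-driven input handling described above can be sketched with a priority queue. The event names and priority values here are invented for illustration; the point is that a higher-priority arrival is serviced ahead of earlier, lower-priority ones.

```python
import heapq

# Sketch: a real-time input buffer where higher-priority events are
# serviced before lower-priority ones, regardless of arrival order.
# Event names and priority numbers are illustrative.

buffer = []  # heapq pops the smallest tuple first, so priority 0 = most urgent

def receive(priority, event):
    heapq.heappush(buffer, (priority, event))

def service_next():
    _, event = heapq.heappop(buffer)
    return event

receive(2, "routine sensor scan")
receive(0, "over-temperature alarm")   # arrives later, processed first
receive(1, "operator command")

print([service_next() for _ in range(3)])
# ['over-temperature alarm', 'operator command', 'routine sensor scan']
```

Whether such a queue is an ILF depends, as the text notes, on whether it is user-recognizable data maintained by the application rather than purely transient technical storage.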
Process control is a statistics and engineering discipline that deals with architectures, mechanisms, and algorithms for maintaining the output of a specific process within a desired range. It is process and event driven, rather than data driven, and typically these processes and/or events are unique transactions (EIs or EOs). An example is a PLC (programmable logic controller), in which a set of digital and analog inputs is read, a set of logic is applied, and analog and digital outputs are generated. Process control systems are known for their ability to alter the behavior of the system (e.g. open/close a hot water valve based upon a thermostat reading), which every counter should recognize as external inputs or external outputs. Larger, more complex systems use a Distributed Control System (DCS) or SCADA (supervisory control and data acquisition) system, but these are typically considered real-time systems and some are even embedded. An embedded system is an application designed to do one or more dedicated and/or specific functions, often with real-time constraints. It is called embedded because it is part of a complete device, often including hardware and mechanical parts. Embedded systems range from portable devices such as digital watches and MP3 players to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Operating with microcontrollers or digital signal processors (DSPs), most systems have some capability for programmability and many allow peripherals to be attached (e.g. handheld computers). Examples of these types of applications are radar systems, GPS systems, telephone switches and other telecommunications systems, signal processors, satellite communications, missile guidance systems, manufacturing systems, weapons systems, plant monitoring systems, navigation systems, safety systems, operating systems, flight recorders, and analyzers.
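The read-inputs, apply-logic, write-outputs cycle of a PLC, and the thermostat/hot-water-valve example above, can be sketched as follows. The setpoint and deadband values are invented; from an FPA view, the temperature reading is the external input and the valve command the external output.

```python
# Sketch of one PLC scan cycle for the thermostat example:
# read an analog input, apply logic, drive a digital output.
# The temperature reading is an EI; the valve command is an EO.

SETPOINT_C = 40.0   # desired water temperature (hypothetical value)
DEADBAND_C = 2.0    # hysteresis band to avoid rapid valve cycling

def scan_cycle(temperature_c, valve_open):
    """Return the new hot-water valve state for one scan."""
    if temperature_c < SETPOINT_C - DEADBAND_C:
        return True              # too cold: open the valve
    if temperature_c > SETPOINT_C + DEADBAND_C:
        return False             # too hot: close the valve
    return valve_open            # within deadband: hold current state

print(scan_cycle(35.0, valve_open=False))  # True  (open valve)
print(scan_cycle(45.0, valve_open=True))   # False (close valve)
print(scan_cycle(40.0, valve_open=True))   # True  (hold)
```

Each distinct input-to-output behavior like this is a candidate unique transaction, which is why the counter looks for the events and logic paths rather than for screens or files.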
One of the chief benefits of real-time, process control and embedded systems is that they enable a small staff to operate a complex process through a central control facility. They often include complex mathematical and/or logical algorithms (consider GSC #7), memory constraints, timing constraints, interruptions, execution speed requirements, communication constraints, and continuous availability. The constraints are usually documented and tested using performance tools, which can justify higher degrees of influence on many GSCs. In addition, these constraints and algorithms could drive the uniqueness of the events and transactions. When considering data functions, data must be user recognizable and maintained to be considered an ILF. However, data is not always maintained in a database; it could be in memory storage or some other technical mechanism. File structures that are used by an elementary process to analyze transaction data for the duration of the transaction could be considered an ILF, provided the structure is maintained by the application. Examples include "state" data, variable processing/control data, location/tracking data, parameter values, maps/topography data, system tables/directories, system set-up data, usage data, industrial production data, provisioning data, switch data, alarms, and scans/faxes/voice data. EIFs could be values read from other platforms/devices, but each must be an ILF in another application. Examples of control EIs include detection devices, sensors, satellite messages, and alarms, which might contain processing instructions, set-up or shut-down commands, or volume controls. Other inputs could arrive via data packets/files, voice, tones, wireless, teleprocessing signals, calls or radar. EOs could include system status, outgoing messages, commands, alarms, processing instructions, routed calls, faxes, or outgoing e-mails.
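The paragraph above can be tied together with one elementary process: a control EI maintains in-memory "state" data (a candidate ILF even though it is not in a database) and, past a threshold, emits an alarm EO. The sensor names and threshold are illustrative.

```python
# Sketch: one elementary process in a monitoring system. A sensor
# message (an external input, EI) maintains in-memory "state" data
# (a candidate ILF, despite not being a database) and, past a
# threshold, emits an alarm (an external output, EO).
# Sensor names and the threshold are hypothetical.

state = {}             # last known reading per sensor: candidate ILF
ALARM_THRESHOLD = 100  # illustrative limit

def process_sensor_message(sensor_id, value):
    """A complete elementary process: update state, maybe alarm."""
    state[sensor_id] = value                  # data is 'maintained'
    if value > ALARM_THRESHOLD:
        return f"ALARM {sensor_id}: {value}"  # EO leaving the boundary
    return None

print(process_sensor_message("pump-1", 42))   # None (state updated only)
print(process_sensor_message("pump-1", 180))  # ALARM pump-1: 180
```

Note that the process is counted once as a whole: the state update and the alarm are steps of one elementary process, not separate transactions.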
In Table 2, various types of applications in this category are listed with their characteristics and business use.

Table 2. Real-Time Systems Characteristics and Business Sectors

Robotics
  Characteristic: Enables automation
  Business sectors: Commonly used in the auto industry, manufacturing, motion and packaging
Discrete (Process Control)
  Characteristic: The production of discrete pieces of product
  Business sectors: Manufacturing
Batch (Process Control)
  Characteristic: Production of raw materials combined in specific ways to produce an intermediate or end result
  Business sectors: Adhesives and glues, food, beverages and medicines
Continuous (Process Control)
  Characteristic: Variables that are smooth and uninterrupted in time (example: control of water temperature)
  Business sectors: Production of fuels, chemicals and plastics
Hybrid
  Characteristic: Applications having elements of discrete, batch, and continuous process control
  Business sectors: Oil refining, paper manufacturing, chemicals, power plants and many other industries
Embedded
  Characteristic: Enables mass production of continuous processes

Service Oriented Architecture (SOA)
The definition of SOA, according to the Organization for the Advancement of Structured Information Standards (OASIS), is "A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations." SOA is a modified version of object-oriented architecture which relies on using multiple services to produce the needed results through a larger API or interface. This allows users to reuse existing functionality to create new applications, built primarily from already existing services or applications. For SOA to be effective, the components chosen need to make use of two main concepts: Components need to be able to operate amongst different systems and programming languages through the use of a common communication protocol.
Components also may share or create a large set of resources to be used, which allows new functionality to reference a common format for each element.
Like many other current technologies, multiple layers of data or services may be employed (e.g. an enterprise services layer, a domain services layer and an application services layer). These layers are a technical consideration and are all considered part of one application. The communication and transfer of data between these layers are typically steps in larger processes and are not countable, since they are internal to the application. The data used in all layers may be one or more ILFs, depending on the logical view and whether the data is user recognizable. Unlike object orientation, which uses embedded calls to other objects to pass information, SOA uses communication protocols to parse and encapsulate metadata. This is done through a process called orchestration, allowing developers to arrange applications in a nonhierarchical list containing all services, their characteristics and the ability to use those services. An interface is used as a sort of middleware handling the communication protocols between the services. Some of the protocols or mechanisms used may be web services or APIs, connecting the separate applications into one service identifiable by the user. While each application will have its own boundary, the web services and APIs should be counted with each application that uses them. When data is requested of an application and the application takes control and then responds via a web service or API, that is generally counted as an EQ or EO. The requesting application, however, is using the data to complete a higher-level transaction, and it is the higher-level transaction that is counted, rather than the API or web service call. In some cases, to support productivity and/or support metrics, web services and APIs that are maintained by a team for use in other systems are countable as individual transactions.

Conclusion
Function points are versatile and adaptable to current technologies and processes.
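The SOA counting rule above can be sketched with two hypothetical applications: one exposes a query service (counted as an EQ within its own boundary), and the other calls it as a step inside a larger elementary process, and only that higher-level transaction is counted in the caller. All service, data and function names here are invented.

```python
# Sketch: two applications, each with its own boundary. Application B
# exposes a query service (an EQ in B); Application A calls it as one
# step in a larger elementary process, and only that higher-level
# transaction is counted in A. All names are hypothetical.

# --- Application B: owns the customer data (its ILF) ---
customers = {"C1": {"name": "Acme", "credit_limit": 500}}

def customer_lookup_service(customer_id):
    """B's web service: data out, no derived data -> an EQ in B."""
    return customers.get(customer_id)

# --- Application A: places orders (its elementary process) ---
def place_order(customer_id, amount):
    """A's countable transaction; the service call is just a step."""
    customer = customer_lookup_service(customer_id)  # uses B's data
    if customer is None or amount > customer["credit_limit"]:
        return "rejected"
    return "accepted"

print(place_order("C1", 100))  # accepted
print(place_order("C1", 900))  # rejected (over credit limit)
```

From B's perspective the customer data is an ILF and the lookup an EQ; from A's perspective the same data is an EIF and `place_order` is the elementary process, which is exactly the boundary reasoning the text describes.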
In fact, one of the more useful characteristics of function points is that they are technology independent. The key to counting in newer environments is understanding the terminology, knowing where to place the application boundaries, and recognizing the myriad types of functions and transactions associated with each environment.

Sources
"The NIST Definition of Cloud Computing," Special Publication 800-145, Peter Mell and Timothy Grance, National Institute of Standards and Technology, September 2011.
FUNCTION POINT ANALYSIS: Sizing The Software Deliverable; Applying Function Points to Emerging Technologies (class materials), David Consulting Group, 2012.
"Embedded System," Wikipedia: http://en.wikipedia.org/wiki/embedded_system, 2011.
Bieberstein et al., Service-Oriented Architecture (SOA) Compass: Business Value, Planning, and Enterprise Roadmap, IBM Press, 2005.