Application Response Measurement

Application Response Measurement (ARM) is an open standard published by the Open Group for monitoring and diagnosing performance bottlenecks within complex enterprise applications that use loosely coupled designs or service-oriented architectures.

It includes an API for C and Java that allows timing information associated with each step in processing a transaction to be logged to a remote server for later analysis.

History

Version 1 of ARM was developed jointly by Tivoli Software and Hewlett-Packard in 1996. Version 2 was developed by an industry partnership (the ARM Working Group) and became available in December 1997 as an open standard approved by the Open Group. ARM 4.0 was released in 2003 and revised in 2004.

As of 2007, ARM 4.1 version 1 is the latest version of the ARM standard.

Introduction

Modern applications tend to be complex and distributed across networks. This creates new challenges for development and monitoring tools, which must provide application developers and system and application administrators with the information they need.

Within a distributed application it is not easy to determine whether the application performs well. The following questions help in evaluating distributed applications:

  • Are business transactions succeeding and, if not, what is the cause of failure?
  • What is the response time of a transaction?
  • Where are the bottlenecks, and which sub-transaction causes them?
  • Which transactions are executed in an application, and how many?
  • How can an application or its environment be tuned to perform better?

ARM helps answer these questions. It is worth noting that the benefits ARM provides, as described here, now form just a subset of the broader Application Performance Management space.

Approach

The typical approach to using ARM is:

  1. Define the business and technical transactions that are of interest.
  2. Insert calls to the ARM interface into the application to measure these defined transactions (a sketch follows this list).
  3. Deploy the instrumented application in its normal environment with an installed ARM agent.
  4. The ARM implementation then provides the transaction measurements of interest.
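The following is a minimal, hedged sketch of step 2 using the C binding. Function and constant names follow the Open Group ARM 4.0 C binding, but exact signatures should be verified against the SDK's arm4.h header; the application, group, and transaction names used here are hypothetical.

    /* Minimal ARM 4.0 instrumentation sketch (C binding).
       Names follow the Open Group ARM 4.0 C binding; verify exact
       signatures against your SDK's arm4.h header. */
    #include <arm4.h>

    int main(void)
    {
        arm_id_t app_id, tran_id;              /* registered identities */
        arm_app_start_handle_t app_handle;     /* running app instance  */
        arm_tran_start_handle_t tran_handle;   /* one measurement       */

        /* Step 2a: register the application and a transaction
           definition ("OrderService" and "PlaceOrder" are
           hypothetical names). */
        arm_register_application("OrderService", NULL, 0, NULL, &app_id);
        arm_register_transaction(&app_id, "PlaceOrder", NULL, 0, NULL,
                                 &tran_id);

        /* Step 2b: mark an application instance as started. */
        arm_start_application(&app_id, "OrderGroup", "instance-1",
                              0, NULL, &app_handle);

        /* Step 2c: bracket the work to be measured. */
        arm_start_transaction(app_handle, &tran_id, NULL, 0, NULL,
                              &tran_handle, NULL);
        /* ... the business transaction executes here ... */
        arm_stop_transaction(tran_handle, ARM_STATUS_GOOD, 0, NULL);

        /* Step 2d: shut down; the ARM agent (step 3) records the
           measurements. */
        arm_stop_application(app_handle, 0, NULL);
        arm_destroy_application(&app_id, 0, NULL);
        return 0;
    }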

Concepts

ARM defines the following concepts to provide the described functionality.

ARM Application

Complex distributed applications usually consist of many individual applications (processes). To capture the relationships between these individual applications, the concept of an ARM application was introduced with version 4.0 of the ARM standard. Each ARM transaction executes within exactly one ARM application.

ARM Transaction

Transactions are the central concept of the ARM standard; each transaction represents a single performance measurement. A transaction definition specifies the type (name) and additional attributes of an ARM transaction. A transaction can be executed (started and stopped) several times, which results in multiple measurements. Each measurement has basic attributes such as completion status (good, failed, or aborted), start and stop timestamps, the resulting duration, and the system address (host) it was executed on. Additionally, special metrics or context properties can be associated with a transaction measurement.
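As an illustrative model only (not the ARM API), the following self-contained C program shows the attributes a single measurement carries; all names are hypothetical.

    /* Illustrative model of one transaction measurement; this is a
       conceptual sketch, not the ARM API. */
    #include <stdio.h>
    #include <time.h>

    typedef enum { TX_GOOD, TX_FAILED, TX_ABORTED } tx_status;

    typedef struct {
        const char *name;      /* from the transaction definition */
        const char *host;      /* the ARM system address          */
        struct timespec start; /* start timestamp                 */
        struct timespec stop;  /* stop timestamp                  */
        tx_status status;      /* good / failed / aborted         */
    } tx_measurement;

    static double duration_ms(const tx_measurement *m)
    {
        return (m->stop.tv_sec - m->start.tv_sec) * 1e3 +
               (m->stop.tv_nsec - m->start.tv_nsec) / 1e6;
    }

    int main(void)
    {
        tx_measurement m = { "PlaceOrder", "app-host-01",
                             {0, 0}, {0, 0}, TX_GOOD };

        clock_gettime(CLOCK_MONOTONIC, &m.start);
        /* ... the measured work would run here ... */
        clock_gettime(CLOCK_MONOTONIC, &m.stop);

        printf("%s on %s: %.3f ms, status=%d\n",
               m.name, m.host, duration_ms(&m), (int)m.status);
        return 0;
    }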

ARM System Address

Uniquely identifies a host by its name, IP address, or other unique information.

ARM Correlator

ARM correlators are used to express a correlation between two ARM transactions. This is a synchronous relationship, also known as a parent-child relationship. Commonly, a parent transaction triggers a child transaction and only continues its execution when the child transaction has finished. Using correlators, it is possible to split a complex transaction into several nested child transactions, where each child transaction can have child transactions of its own. This results in a tree of transactions, with the topmost parent transaction being the root of the tree.
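Continuing the hedged ARM 4.0 C binding sketch from the Approach section (signatures should again be checked against arm4.h; app_handle and the parent and child transaction IDs are assumed to be registered as shown there), correlator flow looks roughly like this:

    arm_correlator_t parent_corr;
    arm_tran_start_handle_t parent_h, child_h;

    /* Parent: passing a non-NULL output-correlator pointer asks the
       ARM implementation to generate a correlator for this instance. */
    arm_start_transaction(app_handle, &parent_tran_id, NULL, 0, NULL,
                          &parent_h, &parent_corr);

    /* Child (possibly in another process, with the correlator shipped
       inside the request): names its parent via the correlator. */
    arm_start_transaction(app_handle, &child_tran_id, &parent_corr,
                          0, NULL, &child_h, NULL);
    /* ... child work ... */
    arm_stop_transaction(child_h, ARM_STATUS_GOOD, 0, NULL);

    /* The parent resumes only after the child has finished
       (the synchronous, parent-child model described above). */
    arm_stop_transaction(parent_h, ARM_STATUS_GOOD, 0, NULL);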

ARM 4.1 defines asynchronous relationships to support data flow driven architectures.

ARM Metric

ARM metrics can be used to obtain more information about the execution of a transaction. ARM defines a set of metric types for different purposes, such as a counter, a gauge, or a plain numeric value.
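As a conceptual sketch only (not the ARM API), the metric types named above can be pictured as tagged values attached to a measurement; all names and values here are hypothetical.

    /* Conceptual sketch of ARM metric types; not the ARM API. */
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { METRIC_COUNTER, METRIC_GAUGE, METRIC_NUMERIC } metric_kind;

    typedef struct {
        const char *name;   /* hypothetical metric name                */
        metric_kind kind;   /* counter, gauge, or plain numeric value  */
        int64_t     value;
    } metric;

    int main(void)
    {
        metric rows  = { "rows_processed", METRIC_COUNTER, 1042 };
        metric queue = { "queue_depth",    METRIC_GAUGE,   7 };

        printf("%s=%lld %s=%lld\n",
               rows.name,  (long long)rows.value,
               queue.name, (long long)queue.value);
        return 0;
    }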

ARM Properties

Properties are a set of name/value string pairs that qualify an ARM transaction or an ARM application beyond the basic definition of these entities, allowing additional context information to be associated with each transaction measurement.
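A conceptual sketch of such name/value pairs (again not the ARM API; names and values are hypothetical):

    /* Conceptual sketch of context properties; not the ARM API. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        const char *value;
    } context_property;

    int main(void)
    {
        context_property props[] = {
            { "customer-tier", "gold" },
            { "datacenter",    "eu-west" },
        };
        for (unsigned i = 0; i < sizeof props / sizeof props[0]; i++)
            printf("%s=%s\n", props[i].name, props[i].value);
        return 0;
    }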

ARM User

Defines the name of the user on whose behalf a transaction measurement was executed.

ARM Instrumented Applications

The following applications are already instrumented with ARM calls:

  • Apache HTTP Server, via an ARM 4.0 module[1][2]
  • Mozilla Firefox, via the npARM XPCOM extension[3]
  • IBM WebSphere Application Server[4]
  • IBM HTTP Server[5]
  • IBM DB2 (with eWLM)[6]
  • SAS 9.2[7]

References

  1. ^ "Apache ARM 4.0 Module". Httpd.apache.org. Retrieved 2012-05-20.
  2. ^ "For productive environments modified Apache ARM 4.0 Module". Myarm.com. 2011-06-13. Retrieved 2012-05-20.
  3. ^ "npARM xpcom extension for Mozilla Firefox". Myarm.com. 2011-11-26. Retrieved 2012-05-20.
  4. ^ "WAS v6.1 ARM Transactions". Publib.boulder.ibm.com. 2012-04-04. Retrieved 2012-05-20.
  5. ^ "Enabling ARM on HTTP Server". Publib.boulder.ibm.com. Retrieved 2012-05-20.
  6. ^ http://publib.boulder.ibm.com/infocenter/eserver/v1r2/topic/ewlminfo/eicaaarmdb2.html
  7. ^ "Using SAS 9.2 ARM Interface with Existing ARM Applications: SAS 9.2 ARM Interface with Existing SAS Applications Overview". Support.sas.com. 2010-05-27. Retrieved 2012-05-20.

Related topics

Application performance management

In the fields of information technology and systems management, application performance management (APM) is the monitoring and management of performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service. APM is "the translation of IT metrics into business meaning ([i.e.] value)."

Computer Measurement Group

The Computer Measurement Group (CMG), founded in 1974, is a worldwide non-profit organization of data processing professionals whose work involves measuring and managing the performance of computing systems. In this context, performance is understood to mean the response time of software applications of interest, and the overall capacity (or throughput) characteristics of the system, or of some part of the system.

CMG members are primarily concerned with evaluating and maximizing the performance of existing computer systems and networks, and with capacity management, in which planned enhancements to existing systems or the designs of new systems are evaluated to find the necessary resources required to provide adequate performance at a reasonable cost.

Instrumentation (computer programming)

In the context of computer programming, instrumentation refers to the ability to monitor or measure a product's level of performance, to diagnose errors, and to write trace information. Programmers implement instrumentation in the form of code instructions that monitor specific components in a system (for example, instructions may output logging information to appear on the screen). When an application contains instrumentation code, it can be managed using a management tool. Instrumentation is necessary to review the performance of the application. Instrumentation approaches can be of two types: source instrumentation and binary instrumentation.
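As a minimal sketch of source instrumentation (the function name and trace format are hypothetical), timing and trace output can be added directly around a call site:

    /* Minimal source-instrumentation sketch: timing wrapped around
       a call site, with the result emitted as a trace line. */
    #include <stdio.h>
    #include <time.h>

    static void do_work(void)   /* hypothetical monitored component */
    {
        for (volatile int i = 0; i < 1000000; i++) { /* busy work */ }
    }

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        do_work();              /* the instrumented call site */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        fprintf(stderr, "TRACE do_work took %.3f ms\n",
                (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6);
        return 0;
    }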

Multitier architecture

In software engineering, multitier architecture (often referred to as n-tier architecture) or multilayered architecture is a client–server architecture in which presentation, application processing, and data management functions are physically separated. The most widespread use of multitier architecture is the three-tier architecture.

N-tier application architecture provides a model by which developers can create flexible and reusable applications. By segregating an application into tiers, developers acquire the option of modifying or adding a specific layer, instead of reworking the entire application. A three-tier architecture is typically composed of a presentation tier, a domain logic tier, and a data storage tier.

While the concepts of layer and tier are often used interchangeably, one fairly common point of view is that there is indeed a difference. This view holds that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure. For example, a three-layer solution could easily be deployed on a single tier, such as a personal workstation.

Open standard

An open standard is a standard that is publicly available and has various rights to use associated with it, and may also have various properties of how it was designed (e.g. open process). There is no single definition and interpretations vary with usage.

The terms open and standard have a wide range of meanings associated with their usage. There are a number of definitions of open standards which emphasize different aspects of openness, including the openness of the resulting specification, the openness of the drafting process, and the ownership of rights in the standard. The term "standard" is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis.

The definitions of the term open standard used by academics, the European Union and some of its member governments or parliaments such as Denmark, France, and Spain preclude open standards requiring fees for use, as do the New Zealand, South African and the Venezuelan governments. On the standard organisation side, the World Wide Web Consortium (W3C) ensures that its specifications can be implemented on a royalty-free basis.

Many definitions of the term standard permit patent holders to impose "reasonable and non-discriminatory licensing" royalty fees and other licensing terms on implementers or users of the standard. For example, the rules for standards published by the major internationally recognized standards bodies such as the Internet Engineering Task Force (IETF), International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), and ITU-T permit their standards to contain specifications whose implementation will require payment of patent licensing fees. Among these organizations, only the IETF and ITU-T explicitly refer to their standards as "open standards", while the others refer only to producing "standards". The IETF and ITU-T use definitions of "open standard" that allow "reasonable and non-discriminatory" patent licensing fee requirements.

There are those in the open-source software community who hold that an "open standard" is only open if it can be freely adopted, implemented and extended. While open standards or architectures are considered non-proprietary in the sense that the standard is either unowned or owned by a collective body, they can still be publicly shared and not tightly guarded. The typical example of "open source" that has become a standard is the personal computer originated by IBM and now referred to as Wintel, the combination of the Microsoft operating system and Intel microprocessor. Three others that are most widely accepted as "open" are GSM phones (adopted as a government standard), the Open Group, which promotes UNIX and the like, and the Internet Engineering Task Force (IETF), which created the first standards of SMTP and TCP/IP. Buyers tend to prefer open standards, which they believe offer them cheaper products and more choice for access due to network effects and increased competition between vendors. Open standards which specify formats are sometimes referred to as open formats.

Many specifications that are sometimes referred to as standards are proprietary and only available under restrictive contract terms (if they can be obtained at all) from the organization that owns the copyright on the specification. As such these specifications are not considered to be fully open. Joel West has argued that "open" standards are not black and white but have many different levels of "openness". Ultimately a standard needs to be open enough that it will become adopted and accepted in the market, but still closed enough that firms can get a return on their investment in developing the technology around the standard. A more open standard tends to occur when the knowledge of the technology becomes dispersed enough that competition is increased and others are able to start copying the technology as they implement it. This occurred with the Wintel architecture as others were able to start imitating the software. Less open standards exist when a particular firm has much power (not ownership) over the standard, which can occur when a firm’s platform “wins” in standard setting or the market makes one platform most popular.

Outline of computing

The following outline is provided as an overview of and topical guide to computing:

Computing – the activity of using and improving computer hardware and software.

Performance engineering

Performance engineering encompasses the techniques applied during a systems development life cycle to ensure the non-functional requirements for performance (such as throughput, latency, or memory usage) will be met. It may be alternatively referred to as systems performance engineering within systems engineering, and software performance engineering or application performance engineering within software engineering.

As the connection between application success and business success continues to gain recognition, particularly in the mobile space, application performance engineering has taken on a preventative and perfective role within the software development life cycle. As such, the term is typically used to describe the processes, people and technologies required to effectively test non-functional requirements, ensure adherence to service levels and optimize application performance prior to deployment.

Performance engineering encompasses more than just the software and supporting infrastructure, so the term is preferable from a macro view. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production systems. This is part of IT service management (see also ITIL).

Performance engineering has become a separate discipline at a number of large corporations, with tasking separate but parallel to systems engineering. It is pervasive, involving people from multiple organizational units; but predominantly within the information technology organization.

Profiling (computer programming)

In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization.

Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods.

Response time (technology)

In technology, response time is the time a system or functional unit takes to react to a given input.

Software performance testing

In software quality assurance, performance testing is, in general, a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

Performance testing, a subset of performance engineering, is a computer science practice which strives to build performance standards into the implementation, design and architecture of a system.

The Open Group

The Open Group is an industry consortium that seeks to "enable the achievement of business objectives" by developing "open, vendor-neutral technology standards and certifications". It has over 625 members and provides a number of services, including strategy, management, innovation and research, standards, certification, and test development. It was established in 1996 when X/Open merged with the Open Software Foundation.

The Open Group is the certifying body for the UNIX trademark, and publishes the Single UNIX Specification technical standard, which extends the POSIX standards. The Open Group also develops and manages the TOGAF standard, an industry-standard enterprise architecture framework. Its members include a range of IT buyers and vendors as well as government agencies, for example Capgemini, Fujitsu, Oracle, HPE, Orbus Software, IBM, Huawei, Philips, the U.S. Department of Defense, and NASA.

Website monitoring

Website monitoring is the process of testing and verifying that end-users can interact with a website or web application as expected. It is often used by businesses to ensure that website uptime, performance, and functionality are as expected.

Website monitoring companies provide organizations the ability to consistently monitor a website or server function and observe how it responds. The monitoring is often conducted from several locations around the world to a specific website or server, in order to detect issues related to general Internet latency and network hops, and to prevent false positives caused by local or inter-connect problems. Monitoring companies generally report on these tests in a variety of reports, charts and graphs. When an error is detected, monitoring services send out alerts via email, SMS, phone, SNMP trap or pager, which may include diagnostic information such as a network trace route, a code capture of a web page's HTML file, a screen shot of a webpage, and even a video of the website failing. These diagnostics allow network administrators and webmasters to correct issues faster.

Monitoring gathers extensive data on website performance, such as load times, server response times, and page element performance, which is often analyzed and used to further optimize website performance.
