The title, "Industrial automation systems and integration—Integration of life-cycle data for process plants including oil and gas production facilities", is regarded as too narrow by the current ISO 15926 developers. Having developed a generic data model and Reference Data Library for process plants, they found the subject so broad that virtually any state information can be modelled with it.
In 1991 a European Union ESPRIT project, named ProcessBase, started. The focus of this research project was to develop a data model for lifecycle information of a facility that would suit the requirements of the process industries. By the time the project ended, a consortium of companies involved in the process industries had been established: EPISTLE (European Process Industries STEP Technical Liaison Executive). Initially individual companies were members, but later three national consortia became the only members: PISTEP (UK), POSC/Caesar (Norway), and USPI-NL (Netherlands). (Later PISTEP merged into POSC/Caesar, and USPI-NL was renamed USPI.)
EPISTLE took over the work of the ProcessBase project. Initially this work involved a standard called ISO 10303-221 (referred to as "STEP AP221"). AP221 introduced, for the first time, an Annex M with a list of standard instances of the AP221 data model, including types of objects. These standard instances were intended as reference data, acting as a knowledge base about the types of objects. In the early nineties EPISTLE started an activity to extend Annex M into a library of such object classes and their relationships: STEPlib. In the STEPlib activities a group of approximately 100 domain experts from all three member consortia, covering the various disciplines (e.g. electrical, piping, rotating equipment), worked together to define the "core classes".
The development of STEPlib was extended with many additional classes and relationships between classes, and the result was published as open-source data. Furthermore, the concepts and relation types from the AP221 and ISO 15926-2 data models were also added to the STEPlib dictionary. This resulted in the development of Gellish English, with STEPlib becoming the Gellish English dictionary. Gellish English is a structured subset of natural English and is a modeling language suitable for knowledge modeling, product modeling and data exchange. It differs from conventional modeling languages (meta languages) as used in information technology in that it not only defines generic concepts, but also includes an English dictionary. The semantic expression capability of Gellish English was significantly increased by extending the number of relation types that can be used to express knowledge and information.
For technical modelling reasons, POSC/Caesar proposed a standard separate from ISO 10303, called ISO 15926. EPISTLE (and ISO) supported that proposal and continued the modelling work, writing Part 2 of ISO 15926. Part 2 has had official ISO IS (International Standard) status since 2003.
POSC/Caesar started to put together its own RDL (Reference Data Library), adding many specialized classes, for example for ANSI (American National Standards Institute) pipes and pipe fittings. Meanwhile, STEPlib continued its existence, mainly driven by some members of USPI. Since it was clearly not in the interest of the industry to have two libraries for, in essence, the same set of classes, the Management Board of EPISTLE decided that the core classes of the two libraries should be merged into Part 4 of ISO 15926. This merging process has been finished. Part 4 acts as reference data for Part 2 of ISO 15926 as well as for ISO 10303-221, replacing its Annex M. On June 5, 2007 ISO 15926-4 was signed off as a TS (Technical Specification).
In 1999 the work on an earlier version of Part 7 started. Initially this was based on XML Schema (the only useful W3C Recommendation available at the time), but when the Web Ontology Language (OWL) became available it was clear that it provided a far more suitable environment for Part 7. Part 7 passed the first ISO ballot by the end of 2005, and an implementation project started. A formal ballot for TS (Technical Specification) status was planned for December 2007. However, it was then decided to split Part 7 into more than one part, because the scope was too wide.
ISO 15926 has eleven parts (as of June 2009):
The model and the library are suitable for representing lifecycle information about technical installations and their components.
They can also be used for defining the terms used in product catalogs in e-commerce. Another, more limited, use of the standard is as a reference classification for harmonization purposes between shared databases and product catalogues that are not based on ISO 15926.
The purpose of ISO 15926 is to provide a lingua franca for computer systems, thereby integrating the information they produce. Although set up for the process industries, with large projects involving many parties and with plant operations and maintenance lasting decades, the technology can be used by anyone willing to set up a proper vocabulary of reference data in line with Part 4.
In Part 7 the concept of Templates is introduced. These are semantic constructs, using Part 2 entities, that represent a small piece of information. These constructs are then mapped to more efficient classes of n-ary relations that interlink the Nodes involved in the represented information.
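As an illustrative sketch of this mapping (the template name, role names and identifiers below are invented, not actual Part 7 definitions), a template instance can be seen as one n-ary relation over Nodes that expands into binary statements:

```python
# Minimal sketch of the Template idea (all names invented, not actual
# Part 7 template definitions): a template instance bundles one small
# piece of information as an n-ary relation over Nodes, and can be
# expanded into binary (relation, role, node) statements.

class TemplateInstance:
    def __init__(self, template, roles):
        self.template = template   # name of the template definition
        self.roles = roles         # role name -> Node identifier

    def expand(self):
        """Expand the n-ary relation into binary (relation, role, node) triples."""
        rel = f"{self.template}-instance"
        return [(rel, role, node) for role, node in sorted(self.roles.items())]

# "Pump P-101 has a design capacity of 100 m3/h" as one template instance:
t = TemplateInstance("HasPropertyWithValue",
                     {"possessor": "P-101", "property": "design capacity",
                      "value": "100", "unit": "m3/h"})
for statement in t.expand():
    print(statement)
```

The n-ary form keeps the information together as one unit, while the expansion shows how it decomposes into the underlying binary relationships.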
In Part 8 the data model of Part 2 is mapped to OWL, and so are, in concept, the Reference Data of Part 4 and the templates of Part 7. For validation and reasoning purposes all are represented in First-Order Logic as well.
In Part 9 these Node and Template instances are stored in Façades. A Façade is an RDF quad store, set up with a standard schema and an API. Any Façade only stores the data for which the Façade owner is responsible.
Each participating computer system maps its data from its internal format to such ISO-standard Node and Template instances. These are stored in a System Façade, each system having its own Façade.
Data can be "handed over" from one Façade to another in cases where data custodianship is handed over (e.g. from a contractor to a plant owner, or from a manufacturer to the owners of the manufactured goods). Hand-over can cover part or all of the data, whilst maintaining full referential integrity.
Façades can also be set up for the consolidation of data, by handing over data produced by various participating computer systems and stored in their System Façades. Examples are a Façade for a project discipline, a project, or a plant.
Documents are user-definable. They are defined in XML Schema and they are, in essence, only a structure containing cells that make reference to instances of Templates. This represents a view on all lifecycle data: since the data model is a 4D (space-time) model, it is possible to present the data that was valid at any given point in time, thus providing a true historical record. It is expected that this will be used for Knowledge Mining.
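The 4D idea can be sketched as follows (the identifiers, dates and values are invented): each statement carries the interval during which it was valid, so the view at any past moment can be reconstructed:

```python
# Sketch of querying a 4D (space-time) record set (invented data): each
# statement carries the interval during which it was valid, so the state
# at any point in time can be reconstructed as a historical record.
from datetime import date

# (subject, property, value, valid_from, valid_until)  None = still valid
history = [
    ("P-101", "design capacity", "80 m3/h",  date(2001, 3, 1), date(2005, 6, 30)),
    ("P-101", "design capacity", "100 m3/h", date(2005, 7, 1), None),
]

def valid_at(records, when):
    """Return the statements that were valid on the given date."""
    return [r for r in records
            if r[3] <= when and (r[4] is None or when <= r[4])]

print(valid_at(history, date(2003, 1, 1)))   # the original capacity
print(valid_at(history, date(2010, 1, 1)))   # the revised capacity
```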
Data can be queried by means of SPARQL. In any implementation a restricted number of Façades can be involved, with different access rights. This is done by creating a CPF (Confederation of Participating Façades) server. An Ontology Browser allows access to one or more Façades in a given CPF, depending on the access rights.
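A query against a Façade's SPARQL endpoint might look like the following sketch (the prefix IRI and property names are hypothetical, not the official Part 8 vocabulary):

```sparql
# Hypothetical sketch: retrieve template instances that describe pump
# P-101 from a Façade (the ex: namespace and property names are
# illustrative placeholders, not standardized ISO 15926 identifiers).
PREFIX ex: <http://example.org/rdl#>
SELECT ?template ?property ?value
WHERE {
  ?template ex:possessor ex:P-101 ;
            ex:property  ?property ;
            ex:value     ?value .
}
```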
There are a number of projects working on the extension of the ISO 15926 standard in different application areas.
Within the area of capital-intensive projects, several cooperating implementation projects are running:
Finalised projects include:
The Norwegian Oil Industry Association (OLF) has decided to use ISO 15926 (also known as the Oil and Gas Ontology) as the instrument for integrating data across disciplines and business domains for the Upstream Oil and Gas industry. It is seen as one of the enablers of what has been called the next (or second) generation of Integrated operations, where a better integration across companies is the goal.
The following projects are currently running (May 2009):
Finalised projects include:
One of the main requirements was (and still is) that the scope of the data model covers the entire lifecycle of a facility (e.g. an oil refinery) and its components (e.g. pipes, pumps and their parts). Since such a facility, over such a long lifetime, entails many different types of activities on a myriad of different objects, it became clear that a generic and data-driven data model would be required.
A simple example will illustrate this. There are thousands of different types of physical objects in a facility (pumps, compressors, pipes, instruments, fluids, etc.). Each of these has many properties. If all combinations were modelled in a "hard-coded" fashion, the number of combinations would be staggering and unmanageable.
The solution is a "template" that represents the semantics of "this object has a property of X yyyy" (where X is the value and yyyy the unit of measure). Any instance of that template refers to the applicable reference data:
Without being able to refer to those classes, via the Internet, it would be impossible to express this information.
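A sketch of such a template instance (the URIs and class names below are placeholders, not actual Part 4 reference data identifiers): every slot of the one generic template points at a reference-data class rather than at a hard-coded attribute:

```python
# Illustrative sketch (URIs and class names are hypothetical, not actual
# Part 4 reference data): one generic "object has property X yyyy"
# statement, where each slot refers to a reference data library class.
RDL = "http://example.org/rdl/"   # placeholder for a reference data library

statement = {
    "object":   RDL + "CentrifugalPump#P-101",
    "property": RDL + "DesignCapacity",
    "value":    100.0,
    "unit":     RDL + "CubicMetrePerHour",
}

def describe(stmt):
    """Render the statement using the local names of the referenced classes."""
    return (f"{stmt['object'].rsplit('#')[-1]} has "
            f"{stmt['property'].rsplit('/')[-1]} = {stmt['value']} "
            f"{stmt['unit'].rsplit('/')[-1]}")

# The same generic structure covers any object/property/unit combination,
# so no pump-specific or capacity-specific schema is needed.
print(describe(statement))
```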
OptiPlant is a computer-aided engineering (CAE) software application for 3D conceptual design. OptiPlant is manufactured and sold by ASD Global. It is a 3D knowledge-based automation tool with 3D parametric modelling of equipment and structures, an interference-free pipe router and tray router, and engineering analytics. OptiPlant runs solely on the Microsoft Windows operating system.

COMOS
COMOS is a plant engineering software solution from Siemens AG. Its applications lie in particular in the process industries, for the engineering, operation, and maintenance of process plants as well as their asset management.

Data exchange
Data exchange is the process of taking data structured under a source schema and transforming it into data structured under a target schema, so that the target data is an accurate representation of the source data. Data exchange allows data to be shared between different computer programs.
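A toy sketch of this restructuring (all field names are invented): a record under a source schema is transformed into a record under a target schema, with a possible loss of content:

```python
# Toy illustration of data exchange (schemas and field names invented):
# a record structured under a source schema is restructured under a
# target schema; content the target cannot represent is lost.

source_record = {            # source schema: one flat contact record
    "full_name": "Ada Lovelace",
    "phone_home": "555-0100",
    "phone_work": "555-0199",
    "notes": "prefers email",      # no counterpart in the target schema
}

def to_target(rec):
    """Map a source record to the target schema (split name, single phone)."""
    first, _, last = rec["full_name"].partition(" ")
    return {
        "first_name": first,
        "last_name": last,
        # the target keeps only one phone number: a "best" choice must be made
        "phone": rec.get("phone_work") or rec.get("phone_home"),
    }

print(to_target(source_record))
```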
Data exchange is similar to the related concept of data integration, except that in data exchange the data is actually restructured (with possible loss of content). There may be no way to transform an instance, given all of the constraints. Conversely, there may be numerous ways to transform the instance (possibly infinitely many), in which case a "best" choice of solutions has to be identified and justified.

Gellish
Gellish is a formal language that is natural language independent, although its concepts have 'names' and definitions in various natural languages. Each natural language variant, such as Gellish Formal English, is a controlled natural language: a structured subset of that natural language, suitable for information modeling and knowledge representation in that particular language. Information and knowledge can be expressed in such a way that it is computer-interpretable, as well as system-independent and natural language independent. All expressions, concepts and individual things are represented in Gellish by numeric unique identifiers (Gellish UIDs). This enables software to translate expressions from one formal natural language to any other.
Gellish is a universal and extendable conceptual data modeling language. Because it includes domain-specific terminology and definitions, it is also a semantic data modelling language and the Gellish modeling methodology is a member of the family of semantic modeling methodologies.
Gellish started out as an engineering modeling language ("Generic Engineering Language", hence the name "Gellish") and was subsequently developed into a language with general applications.

Generic data model
Generic data models are generalizations of conventional data models. They define standardised general relation types, together with the kinds of things that may be related by such a relation type.

ISO/TC 184/SC 4
ISO/TC 184/SC 4 is an international standards organization responsible for industrial data. ISO/TC 184/SC 4 develops and maintains ISO standards that describe and manage industrial product data throughout the life of the product. ISO/TC 184/SC 4, Industrial data, is Subcommittee 4 of ISO/TC 184, Automation systems and integration, which is Technical Committee 184 of the International Organization for Standardization (ISO).

ISO 10303
ISO 10303 is an ISO standard for the computer-interpretable representation and exchange of product manufacturing information. Its official title is: Automation systems and integration — Product data representation and exchange. It is known informally as "STEP", which stands for "Standard for the Exchange of Product model data". ISO 10303 can represent 3D objects in computer-aided design (CAD) and related information.

ISO 15926 WIP
ISO 15926 is an interoperability standard for the process industry. ISO 15926 includes the Work In Progress (WIP) database. The WIP is available online and includes technical class descriptions of all the main equipment items, pipes, instruments, buildings, activities and anything else used in engineering, constructing, procuring, operating and maintaining process facilities.

Integrated Operations in the High North
Integrated Operations in the High North (IOHN, IO High North or IO in the High North) is a collaboration project that, during a four-year period starting in May 2008, is designing, implementing and testing a digital platform for what the Upstream Oil and Gas Industry calls the next (or second) generation of Integrated Operations.
The work on the digital platform is focused on the capture, transfer and integration of real-time data from remote production installations to the decision makers. A risk evaluation across the whole chain is also included. The platform is based on open standards and enables a higher degree of interoperability. Requirements for the digital platform come from use cases defined within the Drilling and Completion, Reservoir and Production, and Operations and Maintenance domains. The platform will subsequently be demonstrated through pilots within these three domains. This new platform is considered an important enabler for safe and sustainable operations in remote, vulnerable and hazardous areas such as the High North, but the technology is clearly also applicable in more general settings.
The IOHN project consortium consists of 23 participants, including operators, service providers, software vendors, technology providers, research institutions and universities. In addition, the Norwegian Defence Force is working with the project to resolve common infrastructural and interoperability challenges. The project is managed by Det Norske Veritas (DNV). Nils Sandsmark was the project manager during the initiation and start-up phase; Frédéric Verhelst took over as project manager at the beginning of 2009. Financing comes from the participants and, for parts of the project (GOICT and AutoConRig), from the Research Council of Norway (RCN).

Integrated operations
In the petroleum industry, integrated operations (IO) refers to the integration of people, disciplines, organizations, work processes, and information and communication technology to make smarter decisions. In short, IO is collaboration with a focus on production.

Ontology components
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes, and relations.

POSC Caesar
POSC Caesar Association (PCA) is an international, open, not-for-profit, member organization that promotes the development of open specifications to be used as standards for enabling the interoperability of data, software and related matters.
PCA is the initiator of ISO 15926 "Integration of life-cycle data for process plants including oil and gas production facilities" and is committed to its maintenance and enhancement.
Nils Sandsmark has been the General Manager of POSC Caesar Association since 1999 and Thore Langeland, Norwegian Oil Industry Association (Norwegian: Oljeindustriens Landsforening, OLF), is the Chairman of the Board.

Plant lifecycle management
Plant lifecycle management (PLM) is the process of managing an industrial facility's data and information throughout its lifetime. Plant lifecycle management differs from product lifecycle management by its primary focus on the integration of logical, physical and technical plant data in a combined plant model.
A PLM model can be used throughout a plant's whole lifecycle, covering:
Land rehabilitation.

Semantic data model
A semantic data model is a high-level, semantics-based database description and structuring formalism (database model). It is designed to capture more of the meaning of an application environment than is possible with conventional database models. An SDM specification describes a database in terms of the kinds of entities that exist in the application environment, the classifications and groupings of those entities, and the structural interconnections among them. SDM provides a collection of high-level modeling primitives to capture the semantics of an application environment. By accommodating derived information in a database structural specification, SDM allows the same information to be viewed in several ways; this makes it possible to directly accommodate the variety of needs and processing requirements typically present in database applications. The design of SDM is based on experience with a preliminary version of it.

SDM is designed to enhance the effectiveness and usability of database systems. An SDM database description can serve as a formal specification and documentation tool for a database; it can provide a basis for supporting a variety of powerful user interface facilities; it can serve as a conceptual database model in the database design process; and it can be used as the database model for a new kind of database management system.
A semantic data model (SDM) in software engineering has various meanings:
It is a conceptual data model in which semantic information is included. This means that the model describes the meaning of its instances. Such a semantic data model is an abstraction that defines how the stored symbols (the instance data) relate to the real world.
It is a conceptual data model that includes the capability to express information in a way that enables the parties to an information exchange to interpret its meaning (semantics) from the instances, without needing to know the meta-model. Such semantic models are fact-oriented (as opposed to object-oriented). Facts are typically expressed by binary relations between data elements, whereas higher-order relations are expressed as collections of binary relations. Typically, binary relations have the form of triples: Object-RelationType-Object. For example: the Eiffel Tower <is located in> Paris.
The second kind of semantic data model is usually meant for creating semantic databases. The ability to include meaning in semantic databases facilitates building distributed databases that enable applications to interpret meaning from the content. This implies that semantic databases can be integrated when they use the same (standard) relation types. It also implies that, in general, they have a wider applicability than relational or object-oriented databases.

Semantic reasoner
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including Pei Wang's non-axiomatic reasoning system, and probabilistic logic networks.

Standard data model
A standard data model or industry standard data model (ISDM) is a data model that is widely applied in some industry, and shared amongst competitors to some degree. They are often defined by standards bodies, database vendors or operating system vendors.
When in use, they enable easier and faster information sharing because heterogeneous organizations have a standard vocabulary and pre-negotiated semantics, format, and quality standards for exchanged data. The standardization affects software architecture as solutions that vary from the standard may cause data sharing issues and problems if data is out of compliance with the standard.
The more effective standard models have developed in the banking, insurance, pharmaceutical and automotive industries, reflecting the stringent standards applied to customer information gathering, customer privacy, consumer safety, or just-in-time manufacturing.
Typically these use the popular relational model of database management, but some use the hierarchical model, especially those used in manufacturing or mandated by governments, e.g., the DIN codes specified by Germany. While the format of the standard may have implementation trade-offs, the underlying goal of these standards is to make sharing of data easier.
The most complex data models known are in military use, and consortia such as NATO tend to require strict standards of their members' equipment and supply databases. However, they typically do not share these with non-NATO competitors, and so calling these 'standard' in the same sense as commercial software is probably not very appropriate.

Upper ontology
In information science, an upper ontology (also known as a top-level ontology or foundation ontology) is an ontology (in the sense used in information science) which consists of very general terms (such as "object", "property", "relation") that are common across all domains. An important function of an upper ontology is to support broad semantic interoperability among a large number of domain-specific ontologies by providing a common starting point for the formulation of definitions. Terms in the domain ontology are ranked "under" the terms in the upper ontology, and the former stand to the latter in subclass relations.
A number of upper ontologies have been proposed, each with its own proponents. Each upper ontology can be considered as a computational implementation of natural philosophy, which itself is a more empirical method for investigating the topics within the philosophical discipline of physical ontology.
Library classification systems predate upper ontology systems. Though library classifications organize and categorize knowledge using general concepts that are the same across all knowledge domains, neither system is a replacement for the other.

Valve
A valve is a device that regulates, directs or controls the flow of a fluid (gases, liquids, fluidized solids, or slurries) by opening, closing, or partially obstructing various passageways. Valves are technically fittings, but are usually discussed as a separate category. In an open valve, fluid flows in a direction from higher pressure to lower pressure. The word is derived from the Latin valva, the moving part of a door, in turn from volvere, to turn, roll.
The simplest, and very ancient, valve is simply a freely hinged flap which drops to obstruct fluid (gas or liquid) flow in one direction, but is pushed open by flow in the opposite direction. This is called a check valve, as it prevents or "checks" the flow in one direction. Modern control valves may regulate pressure or flow downstream and operate on sophisticated automation systems.
Valves have many uses, including controlling water for irrigation, industrial uses for controlling processes, and residential uses such as on/off and pressure control for dishwashers, clothes washers and taps in the home. Even aerosol cans have a tiny valve built in. Valves are also used in the military and transport sectors.