Breaking Barriers: Interoperability


During our research activities within the DT Hub, several barriers relating to the use of digital twins were identified.  This blog post is one of a series which reflects on each barrier and considers related issues so that we can discuss how they may be addressed.

As our members, and indeed other organisations active in the built environment, develop data and information about their assets, the ability to ensure that this data can be used within other tools is a priority.  To do so, the data needs to be interoperable. One definition of interoperability is:

interoperability: capability of two or more functional units to process data cooperatively
[SOURCE: ISO/IEC 2382:2015, 2120585 – Information technology – Vocabulary]

In brief, if data can be shared between systems it is considered interoperable.  Typically, this can be achieved in one of two ways:

  1. Both systems use the same formal description (schema) to structure the data; or
  2. One system transforms its data using an intermediate formal description (schema) to structure the data (a sketch of this approach follows below).
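
As a rough illustration of approach (2), the sketch below shows one system translating a record from its native structure into an agreed intermediate schema before exchange. The field names, the date convention and the "open schema" itself are all invented here purely for the purpose of the example:

```python
# A minimal sketch of approach (2): one system translating a record from
# its native structure into an agreed intermediate (open) schema before
# exchange. All field names and the target schema are hypothetical.

# Record as exported by "System A" in its native structure
native_record = {
    "AssetRef": "BR-0042",
    "AssetDesc": "Footbridge, 18m span",
    "InstallDate": "12/03/1998",  # System A uses day/month/year
}

# Explicit mapping from System A's field names to the shared schema's
FIELD_MAP = {
    "AssetRef": "asset_id",
    "AssetDesc": "description",
    "InstallDate": "installation_date",
}

def to_open_schema(record: dict) -> dict:
    """Translate a native record into the shared intermediate schema."""
    out = {FIELD_MAP[key]: value for key, value in record.items() if key in FIELD_MAP}
    # Normalise the date to ISO 8601 so every receiving system reads it alike
    day, month, year = out["installation_date"].split("/")
    out["installation_date"] = f"{year}-{month}-{day}"
    return out

print(to_open_schema(native_record))
# -> {'asset_id': 'BR-0042', 'description': 'Footbridge, 18m span',
#     'installation_date': '1998-03-12'}
```

The explicit mapping is the point: because the target structure is agreed and visible, any system can implement the same translation and be confident the result will be read correctly at the other end.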

The simplest solution appears to be (1): have all systems create, use and maintain information using the same schema.  This would mean that information could be used in its default (native) format, with no risk of data being lost or corrupted during transformation.  However, this isn't practicable: from a technical perspective, it is unlikely that the broad range of information needed to support every possible purpose could be captured against a single schema.  In addition, public procurement directives require performance-based technical specifications rather than the naming of specific software. This means that an organisation may be challenged if it requires its supply chain to use a particular piece of software, as doing so would circumvent directives around competition and value for money.

As it is not possible to guarantee that the same schema will be used throughout, it is far more practicable to follow (2) and identify which established industry schema is most suitable for receiving the data, depending on the purpose the information will serve.  In doing so, there is an added benefit: the information you receive may be open data.

Although often misused as a synonym for interoperability, open data is a distinct concept, and it matters for sharing for a specific reason.

open data: data available/visible to others and that can be freely used, re-used, re-published and redistributed by anyone
[SOURCE: ISO 5127:2017, 3.1.10.13 – Information and documentation – Foundation and vocabulary]

Open data, in brief, is unrestricted data.  When you use proprietary software and systems, the schema used to structure your data is hidden: as a user of that software, you are effectively given permission by the vendor to use that structure to view your own information.  For built environment assets this can be a problem, as the physical asset can outlast the software used to design and manage it, meaning that in 50 years a tool that allows access to this information may no longer exist – or sooner, given the cannibalistic nature of the software industry.  Consider SketchUp, for example.  Since its release in 2000, it has been owned by three different organisations: @Last Software, Google and Trimble.  The permission to use the SKP schema has changed hands several times.  Who will produce software to view these files in 30 years' time?

To ensure enduring access to asset information, either bespoke schemas need to be developed and maintained internally, or an established open schema needs to be used.  However, while several open schemas and formats are readily available (such as IFC, PDF, PNG and MQTT), they can raise concerns about access to, control of, and abuse of the data within.

These concerns, thankfully, can be offset through control.  Using open data structures, it is possible to ensure that only the information you wish to exchange is delivered.  With proprietary structures, hidden information can also be exchanged, outside your knowledge or control, potentially posing a larger risk than their open counterparts.  Ironically, when producing a "need-to-know" dataset, an open data approach is easier to control.
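
To make that concrete, here is a minimal sketch of assembling a "need-to-know" dataset from an openly structured record. Again, all the field names are hypothetical; the point is that when every field is visible, an explicit allow-list guarantees nothing else leaves:

```python
# A minimal sketch of a "need-to-know" exchange over an open data
# structure. Because every field in the record is visible, an explicit
# allow-list guarantees that nothing else is delivered. All field names
# are hypothetical.

asset_record = {
    "asset_id": "BR-0042",
    "description": "Footbridge, 18m span",
    "condition_grade": "B",
    "security_notes": "CCTV blind spot at north stair",  # not for sharing
    "owner_contact": "ops@example.org",                  # not for sharing
}

# Fields approved for this particular exchange
APPROVED_FIELDS = {"asset_id", "description", "condition_grade"}

def need_to_know(record: dict, approved: set) -> dict:
    """Return a copy of the record containing only the approved fields."""
    return {key: value for key, value in record.items() if key in approved}

print(need_to_know(asset_record, APPROVED_FIELDS))
# -> {'asset_id': 'BR-0042', 'description': 'Footbridge, 18m span',
#     'condition_grade': 'B'}
```

With a hidden, proprietary structure the equivalent filter cannot be written with confidence, because you cannot enumerate what the file actually contains.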

When considering which methodologies to use, the benefits of open data typically outweigh its risks.  The use of these open data structures will not only unlock interoperability between digital twins within an organisation but will also be the mechanism that enables a secure national digital twin.

Much of the appropriate data about our national infrastructure is currently held behind proprietary schemas.  Let's make Britain's data open again!

 

We hope you enjoyed this short piece on breaking the barriers related to interoperability.  What specific challenges have you faced when implementing interoperability?  Do you consider open data in this context an opportunity or a threat?  Would you prefer the National Digital Twin to be based on an open or a proprietary schema?

