I recently posed a question in this forum to clarify thoughts on the need for a digital twin ‘test’... a way of determining whether a proposed digital twin is something everyone can agree upon, and one that matches expectations.
A test will serve as an invaluable tool for educating and up-skilling, avoiding confusion and setting a direction for implementation. This is something particularly close to my heart as we’re currently (still) experiencing this in global BIM discussions. Whilst on the topic of BIM, the test could be a great way of identifying what a typical BIM process deliverable is and how a digital twin might differ. This is particularly pertinent as we’re currently observing digital twin negativity and the misconception that digital twins are ‘just BIM’.
Take a look at the attached image, a snapshot of a Twitter poll... this may be just a small sample, but of the 113 people on Twitter who responded to this tweet by a Canadian colleague, just over HALF think digital twins are software vendors marketing vaporware - a product that never comes to fruition. The other half are of the impression that digital twins are a ‘technology’. Clearly there’s work to be done...
Personally, I think we need a mutually agreed distinction to engage and involve a wider group of professionals from within our sector and outside of it to really progress and deliver the benefits outlined in The Gemini Principles.
Comments you’ve provided so far suggest that a test could be helpful, although some of you share the concern that the time taken to form a test may be better spent developing a digital twin. Other comments have highlighted the need to avoid being short-sighted in the ‘boundaries’ of a test. If we are to develop a test, it will need to be flexible enough to cater for edge cases and to evolve over time as technologies and possibilities become more easily achievable - i.e. when the goal posts move!
Do we need to define a baseline case, so that all proposed digital twins are measured against it? If so, what are the fundamentals?
For example, which of the following might be considered a digital twin:
- LightningMaps, which plots lightning strikes using (near) real-time weather station data; or
- TideTimes, which predicts tides using a database of pre-established peaks and troughs?
Each of these is similar, but they rest on different fundamentals. Is the collection of (near) real-time data fundamental, or something that applies only to specific use cases?
Once we have the fundamentals, which digital twins need to be tested? If we are ultimately aiming for a national digital twin, surely we need to test all of them to ensure compatibility and value before each is included in, or connected to, it? If this is the case, then I’m talking myself into the notion that a simple yes/no or pass/fail will never be enough... We need to find a way to identify and celebrate the (positive) extremes, to encourage the development of borderline cases to become true digital twins and to seek new directions and measures of ‘what looks good’ as the sector integrates digital twins into its decision-making.
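To make the sliding-scale idea concrete, here is a minimal sketch of a weighted scoring assessment. The criteria names and weights are purely hypothetical placeholders for illustration; the real fundamentals are exactly what a test (and this workshop) would need to agree.

```python
# Hypothetical criteria and weights, for illustration only.
# The actual fundamentals would be agreed by the community.
CRITERIA = {
    "represents_physical_asset": 3,
    "near_real_time_data": 2,
    "supports_decision_making": 3,
    "two_way_connection": 2,
}

def assess(candidate: dict) -> float:
    """Score a candidate twin on a 0-1 sliding scale instead of pass/fail."""
    earned = sum(weight for name, weight in CRITERIA.items() if candidate.get(name))
    return earned / sum(CRITERIA.values())

# A borderline case earns a partial score rather than a flat "fail",
# which makes visible how far it is from a "true" digital twin.
tide_tables = {"represents_physical_asset": True, "supports_decision_making": True}
print(f"Maturity score: {assess(tide_tables):.2f}")  # prints "Maturity score: 0.60"
```

A score like this naturally supports celebrating the positive extremes and showing borderline cases the gap they need to close, in a way a binary verdict cannot.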
It looks like we have a LOT to discuss in the proposed workshop on the 17th November to explore why, what and how we should be measuring.
Outline agenda below, to be informed by the ongoing forum discussions.
- The Why - Discussing the pros and cons of a digital twin test.
- Objectives and activities for developing intuitive tests for digital twins.
- Summary of initial industry feedback.
- A Yes/No, Pass/Fail or a Sliding Scale?
- Existing 'test' examples that could be leveraged from other industries.
- Discussing what elements make up a digital twin.
I hope you will continue the discussion on this thread, which will give us time to prepare the workshop materials and key discussion points. To help with that, I have some questions to keep the conversation going...
The workshop will take place on the 17th November from 14:00 – 16:00. Register on Eventbrite to receive joining instructions. See you there!