News about senior leaders’ training in digital twin capabilities.
An important part of our work at the DT Hub, through our governance and operations teams, is to get involved with the community, find out its needs and drive forward projects like the Gemini skills programme that will help make a difference. The Gemini activity is a good example of how we link with DT Hub working groups to realise change.
As a quick recap, the Gemini initiative is being developed by the DT Hub and the community to address the skills gap as a barrier to digital twin adoption.
But how do we know there is a skills gap?
We hear that it is a challenge faced by all industries and at different levels, with the strongest evidence noted in the government response to its Cyber-Physical Infrastructure Consultation in 2022. It reads:
‘Skills was highlighted as a critical enabler across the breadth of technical and non-technical. From data engineers, software and hardware developers, systems architects and security experts, to organisational change, legal, procurement, and cross-domain skills, there was recognition of both the existing and growing needs.’
Not only do we have this, but we also have a directive. The response highlights the convening power of the DT Hub to help advance the cross-sector vision for connected digital twins in its section on Frameworks, guidance and standardisation, and identifies Skills as a critical enabler, using the Cranfield Digital and Technology Solutions MSc as its case study.
This summer, the DT Hub formalised a partnership with Cranfield University, and, building out from Cranfield’s MSc, began a series of discussions with asset owners and infrastructure providers.
The team also went back to the policymakers and consulted with those in the wider industries where digital twins can provide solutions.
In working with these groups, we have been able to create a model that will deliver focused training at different levels.
You can see from this diagram how the DT Hub is creating a skills development framework, built on existing skills and competency frameworks from the Centre for Digital Built Britain and others, to address different depths and scales of learning.
Our recently launched Data and Digital Twins elearning course, and the Gemini skills training course, called Digital Twins for Senior Leaders, sit in the Awareness and Working areas of knowledge development.
Our recent focus has been on leaders and future leaders in roles that will influence decision-making and organisational change.
The senior leaders’ course addresses the digital twin skills gap by creating understanding and consensus in the market about digital twins, by aligning demand and supply for digital twin competency, and by encouraging systems thinking as a solution.
Of interest to the DT Hub community is how we can take the course and tailor it to specific sectors. We are creating a pilot programme and a first cohort for a transport-focused course.
The course will deliver the What, Why and How of digital twins, as required by senior leaders in the transport sector as it builds towards key Outputs in the TRIB Vision and Roadmap to 2035.
The training will be a starting element on the route to the Roadmap’s Skills and Capabilities Outcomes, to enable DfT’s vision. It will help leaders understand the critical importance of digital twins, considering areas where they give benefits, for example, infrastructure decarbonisation, autonomous vehicle safety, digital local roads, and how they will help to keep pace with the rate of digital change.
The support of DfT and the approach we are taking for this pilot will mean that we can focus on sector-specific use cases, and also bring in universal examples of good practice across other sectors. It paves the way for further bespoke cohorts, based on a core curriculum of learning.
The core course works across the Gemini Principles, Gemini Papers and outputs from CDBB, the National Digital Twin Programme, knowledge from the DT Hub community, including case studies and insight, plus research outputs from Cranfield, and recent elearning on Data and Digital Twins.
It brings in knowledge and expertise from academic and industry experts across the international field. Our intention is that this training becomes a baseline for competency in digital twin skills.
A few more words on the pilot and what attendees will gain from it, which is far more than the Cranfield and Gemini certificate and digital badge.
The pilot course is bringing together groups of delegates from DfT, Network Rail, Arup and Anglian Water, among others. Each week it answers important questions and gives senior leaders skills and knowledge founded on the Gemini Principles and recognised frameworks. At the same time, it will allow them to build a business canvas, a real strategy and a value framework to take back to their businesses and accelerate change.
The course runs online for an hour and a half each week across eight weeks. It will launch fully on 1 February 2024.
We are continuing to explore needs so that we can structure further training across our skills model.
Please contact me if you would like further information, or register now on the Cranfield website.
Cyber-Physical Infrastructure Review
Update from Simon Hart, Head of Digital Twins and Cyber-Physical Infrastructure at Innovate UK
Cyber-physical infrastructure connects data, infrastructure, robotics and services. It is the vast interconnected web of intelligent systems across the UK. It is not a single thing, a single data source or asset, but an interconnected system of systems that has the potential to improve our lives and tackle some of the greatest challenges to humanity.
Cyber-physical infrastructure means digital systems interacting with physical assets. The concept is simple, but the implementation is complex. To truly decarbonise our energy system, we must accept that new dynamic sources of energy, storage and consumption will rapidly become connected, but how will we ensure that these work together for the public good?
We want to identify and manage traffic on our rail networks. How can we do this without access to information and data?
Security of data is paramount, so how can we make real-time data about the movement of people through a city available in a safe and secure way to develop better healthcare services for that population?
These are some of the challenges for the development of cyber-physical infrastructure.
Last year, Innovate UK funded collaborative projects at six different Catapult centres across the UK. We ran workshops with UK experts in digital research infrastructure and engaged with start-ups, experts and government stakeholders to capture their needs. See my earlier blog: How we created the cyber-physical infrastructure.
Faster commercialisation, cheaper development, and more effective collaboration of innovative ideas
Innovate UK has been proud to work with Henry Fenby-Taylor to review the progress made so far. This review showcases project examples, demonstrating how cyber-physical infrastructure can be applied in various contexts and domains.
Project examples are:
● DOME and 5G Marine Portal, providing offshore wind developers and operators with a unique platform to demonstrate and test virtual solutions
● Transport Research Innovation Board (TRIB) Vision and Roadmap to 2035, a vision for an ecosystem of connected digital twins to support decarbonisation and improved performance of UK transport systems
● Apollo Protocol, a collaborative framework to aid the development of digital twins across different sectors
● National Digital Twin Programme (NDTP), an R&D programme to develop standards, processes, and tools to enable a functioning market in digital twins.
Over the coming months we will work to gain a greater understanding of business needs and opportunities for the National Cyber-Physical Infrastructure ecosystem. It is an exciting time for the innovation community and I look forward to working with you.
The Fenby-Taylor Cyber-Physical Infrastructure Review, published on 16 November 2023, is accessible to everyone and I would encourage innovators, leaders and collaborators to read it.
A standard approach to implementing ESG strategies and accelerating sustainability goals
Fabrizio Cannizzo, Chief Architect, IOTICS
Organisations worldwide are embracing Environmental, Social, and Governance (ESG) strategies, recognising their importance for sustainable business operations and societal progress. Technology is essential for accelerating these strategies, but traditional solutions have their limits. A standard approach, focused on data centricity and data interoperability, is preferable to any single-vendor turn-key solution. This approach offers numerous benefits, including improved visibility and insights, enhanced collaboration, and reduced costs. It can also help organisations achieve their sustainability goals faster and more sustainably.
Organisational challenges
Organisations face several challenges that impact the choice of technologies used to implement ESG strategies:
● Siloed data: Data sourced from different parts of an organisation may not be integrated, making it challenging to obtain a complete picture of an organisation's ESG performance.
● Proprietary data: Data may not be available for sharing, or organisations may not want to relinquish control to access their data.
● Data scalability: Organisations may possess extensive data that hinders cost-effective sharing. What organisations might prefer is to share information or insights derived from processing some data in-situ.
● Process alignment: Merely sharing and exchanging data may not be sufficient to align organisational processes to comprehensively tackle ESG goals.
Data silos, proprietary constraints, and scalability issues often compound the challenges faced by organisations. As the ESG landscape continues to evolve, organisations need to seek more adaptable, accessible, and comprehensive technological solutions.
Limitations of vendor-specific solutions
Private initiatives, such as Microsoft's Sustainability Cloud and Common Data Model, have emerged as popular solutions for driving ESG strategies. They provide an integrated framework for capturing and reporting ESG metrics, making it easier for organisations to track their sustainability efforts. Yet while these tools offer considerable benefits, they also come with some limitations.
They are not universally adopted
These private initiatives, while comprehensive, are not universally adopted. Their use is often limited to organisations that already use the specific vendor’s products. As a result, a significant number of organisations that use different tech stacks are unable to leverage these solutions. This lack of universal adoption leads to fragmented and inconsistent ESG reporting and tracking, hindering holistic progress towards sustainability goals.
They are not always comprehensive
These solutions typically focus on a specific subset of ESG metrics, which means organisations might not be able to measure their performance across all aspects of ESG. This leads to a potential gap in understanding and addressing the full spectrum of ESG issues.
They can be expensive
The cost of implementing and maintaining these systems can be prohibitive, especially for smaller organisations or those with limited resources. The expense can act as a barrier to the adoption of these initiatives, thereby limiting their scope and effectiveness.
They can require vendor lock-in
These technologies often require vendor lock-in, which can limit an organisation’s flexibility and choice in selecting and using other tools. This is particularly problematic for organisations that need to interoperate with other systems or vendors.
A standard approach to technology
To overcome these challenges, we need to shift our approach to a data-centric model where data is decoupled from the tools and enriched to make it interoperable across any data source. Adopting an approach rooted in making data available and interoperable offers several key benefits.
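To make the idea of data centricity concrete, here is a minimal sketch of what decoupling data from the tools can look like in practice: an ESG reading wrapped in self-describing metadata and serialised to plain JSON so that any platform, not only the originating vendor's, can consume it. The field names and the metric are illustrative assumptions, not drawn from any particular standard.

```python
# Minimal sketch (hypothetical field names and metric): decoupling an ESG
# reading from the tool that produced it by wrapping the raw value in
# self-describing metadata.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ESGObservation:
    """A tool-agnostic, self-describing ESG data point."""
    metric: str                 # e.g. "scope2_electricity_consumption"
    value: float
    unit: str                   # an explicit unit makes the value interoperable
    source_system: str          # where the raw reading came from
    observed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def to_interchange_json(obs: ESGObservation) -> str:
    """Serialise to plain JSON so any platform can ingest, aggregate and
    report on the observation, not just the source vendor's tooling."""
    return json.dumps(asdict(obs), indent=2)

if __name__ == "__main__":
    reading = ESGObservation(
        metric="scope2_electricity_consumption",
        value=1250.0,
        unit="kWh",
        source_system="building_management_system")
    print(to_interchange_json(reading))
```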
Avoid vendor lock-in
This approach allows organisations to sidestep the confines of vendor lock-in, promoting interoperability and facilitating the seamless integration and analysis of ESG data across different platforms and organisational boundaries. This freedom enables organisations to select the best tools for their needs to create a tailored end-to-end solution, rather than being constrained by a single vendor’s offerings.
Data-information-insights-actions loop
Beyond sharing and exchanging raw data, the technology should also enable the sharing and exchange of information, insights, and actionable directives. This enriches the alignment of organisational processes towards ESG goals, promoting a culture of sustainable operations across the ecosystem.
Take, for example, an organisation working to reduce its carbon emissions. This organisation can share data, insights, and actionable directives with its suppliers, helping them adapt their internal processes to also lower emissions. This collaboration benefits both entities and their shared goal, creating a ripple effect that extends beyond individual organisational boundaries, all working towards shaping a better future.
Growth and scale
The technology adopted should facilitate a start-small and scale organically approach. Quick wins are instrumental in proving the immediate success of ESG initiatives, and the technology chosen should support the scaling of these wins as part of a business’s regular operations.
The choice of technology plays a pivotal role in the evolution and scale of ESG initiatives. Ideally, technologies adopted should provide mechanisms to 'start small' and scale organically. This strategy is key because ‘time-to-market’ and ‘right-first-time’ are crucial metrics to get started and prove immediate success in achieving ESG goals.
Starting small can refer to piloting a project within a single department before scaling it to the whole organisation or targeting a specific ESG goal before tackling more. This approach enables quick wins, delivers immediate value, and provides momentum to build on, bolstering the confidence of stakeholders and making the case for further investment.
As these initiatives prove successful, technology should facilitate their scaling as part of business as usual. Scaling, in this context, could mean expanding the initiative across the organisation, replicating the project in different geographical locations, or broadening the scope to include more ESG goals.
By allowing for growth at a comfortable pace, technology enables companies to maintain momentum in their ESG efforts without overwhelming their resources or processes. Thus, technology that enables quick wins and facilitates seamless scalability should be favoured in the pursuit of ESG goals.
Benefits of a standardised approach
This standard approach to sharing data, information, and actionable insights can profoundly impact an organisation's ESG efforts.
Improved decision-making
With access to a broader range of data and insights, organisations can make more informed decisions regarding their ESG initiatives. This improved decision-making capacity enables companies to optimise their strategies, leading to better performance and a more significant ESG impact.
Increased transparency
Transparency is fundamental in the ESG landscape. By adopting a standard approach to sharing data and insights, organisations can foster a greater degree of openness. This transparency can boost stakeholder trust, improve reputational strength, and solidify long-term relationships with partners, customers, and the public.
Reduced risk
Identifying and managing ESG risks is an integral part of any sustainability strategy. With access to a wealth of shared data and insights, organisations can better predict, evaluate, and mitigate potential risks, reducing their exposure to financial and reputational damage.
Scalability
A standardised approach allows for a scalable model, letting organisations adapt and grow their ESG initiatives at their own pace, in tandem with their partners. This flexibility avoids the pressure of an all-or-nothing approach, enabling incremental growth and facilitating an ecosystem-wide progression towards ESG goals.
Looking ahead
Technology is essential for accelerating global ESG efforts. To truly achieve this, we need technology that promotes trusted technical interoperability, allows the sharing of raw data, information, and actionable insights, and supports an agile approach to scaling.
To accelerate ESG strategies globally, organisations must avoid limiting themselves to private, vendor-specific solutions. Instead, they need to build on open standards that encourage sharing, collaboration, and adaptability, providing the necessary infrastructure for the collective pursuit of sustainable futures.
Fabrizio Cannizzo is Chief Architect at IOTICS. He is the steward of both conceptual and concrete architecture, playing a pivotal role in shaping the foundation of the organisation's technology and meeting the evolving needs of clients and partners.
Potential expansions to the CReDo model (the Climate Resilience Demonstrator), including further geographical, use case and climate scenarios, were unveiled at The Future of CReDo event at Connected Places Catapult. Participants also mapped out commercial options for how the model could be owned and operated.
Helping infrastructure owners to better understand the threats posed to their assets by extreme weather, and to take appropriate action, is the broad aim of three new reports published by Connected Places Catapult on behalf of CReDo, the Climate Resilience Demonstrator. The reports were discussed at a workshop held in London on 29 September.
CReDo is a project that shows how connected digital twins can be key tools in helping to protect infrastructure assets from threats posed by climate change. It seeks to understand the ways in which different pieces of public infrastructure closely depend on one another, pointing out that if one asset fails it can have a dramatic impact on several others.
Advocates of the project make the case that overall infrastructure resilience in the face of challenging climate events can be improved if a greater focus is placed on making investments in the most appropriate manner and adopting new technologies.
Until now, the focus of CReDo has centred around flooding and how using a digital twin and sharing data among infrastructure owners can help to predict how a major flood event could have an impact on the supply of power, communications and water.
Great progress has been made so far by Connected Places Catapult alongside project partners BT, Anglian Water and UK Power Networks to demonstrate the impact a connected digital twin could have on improving infrastructure resilience in a market town in Norfolk.
But the demonstrator is entering a new phase and beginning to expand in scope. Over the next year, the CReDo project will continue its expansion geographically as well as into new climate scenarios, namely extreme heat.
Delegates welcomed to the workshop
At the workshop in London, hosted by Connected Places Catapult’s Chris Jones, stakeholders representing owners of several infrastructure asset types including water, telecoms, energy and transport – as well as government, regulators and academics – came together to be introduced to CReDo and the next steps for the project.
Participants were then presented with several possible ownership models for CReDo, options for commercialising digital twins, and the likely barriers to overcome in order to create an economically sustainable future for the development and use of such platforms. In an interactive part of the session, they evaluated the options, workshopped the barriers and discussed ways to overcome those barriers.
CReDo Engagement Lead Sarah Hayes said the digital twin will help to create so-called ‘failure models’ to help improve the understanding of how assets may behave in difficult weather conditions. “We hope to realise the value from sharing data,” she said. “CReDo promises to increase resilience, provide better and improved decision making and reduce the cost of repairs when responding to extreme weather events in future.”
Sarah added that better understanding of how different infrastructure assets depend on one another should mean that network resilience is managed in a more co-ordinated way; building trust among stakeholders and creating more of a “willingness to share data across organisations”.
Delegates were told that sharing data and improving resilience through simply expanding CReDo nationally could provide an economic benefit to the country of as much as £432 million – 55 times the cost of implementation – although estimates of benefit vary depending on how much data is captured. The economic benefits from further expansions into different climate scenarios and to different asset sectors could greatly increase this number. The next step for the demonstrator is to make it ready for market.
Understanding infrastructure interdependencies
Connected Places Catapult’s Senior Product Manager, Holly Hensler, told the event that CReDo will map infrastructure interdependencies in a manner that was not previously possible, to see when and how likely it is that assets will fail, allowing owners to know where to increase resilience and investment. “As climate change continues, a platform like CReDo will become even more important,” she said.
“If a flood knocks out two power assets, it could also cause failure with interconnected assets creating a ‘cascade fail’. It is important to understand these interdependencies, which is where CReDo can help.”
The platform also promises to demonstrate the power of different asset stakeholders working together, bringing previously siloed data to the table to improve strategic resilience planning. Such an effort could also benefit society by ensuring that the public has continued access to clean drinking water and heating during extreme weather events, and can use the telephone to reach the emergency services.
Holly added that the demonstrator will now look to expand its geographic coverage from its focus on a market town in Norfolk, look at more infrastructure sectors and asset types, and consider a wider range of climate events such as extreme heat, wind, cold, drought and snow.
Anglian Water’s asset performance specialist Tom Burgoyne said of CReDo: “We welcome the focus on exploring how we encourage cross sector investment in the platform, increase the level of data sharing and pass useful information back to asset owners.
“CReDo promises to help us to explore the challenges of increasing resilience, and make sure our investments are directed to the right places.”
BT Group’s service specialist Justine Webster added: “We expect CReDo will help to improve understanding of how multiple asset owners must work together to focus on resilience and reduce knock on effects for other sectors. It also promises to provide better real-time data about weather conditions to help improve decision making.”
Operating models considered
Connected Places Catapult’s Senior Business Analyst, Friso Buker presented the Strategic Business Case report that explores five different operating models and structures for CReDo – where the platform is managed by either government, SMEs, the private sector, a public-private partnership or an ‘open source’ model.
Important points to consider, he explained, include whether the model can hold onto the ethos of the public good, how it will manage industry rivalries and who gets to make decisions. In a show of hands, participants at the event appeared to favour a public-private model, allowing both market understanding and government control. But Friso asked: “Will the private sector want to get involved if there is little indication if CReDo will become a successful service?”
He also outlined several revenue models being considered for the platform: a ‘freemium’ service where users pay a flat fee for access for a set period of time; a pay-as-you-go model; and arrangements set around licensed or ‘value based’ models similar to an insurance arrangement. “But how we negotiate that, and agree on the value of the service, represent an extremely difficult proposition,” he remarked.
Data Strategist Cara Navarro spoke of the methodology behind how CReDo can expand into different use cases, taking the transport sector as a case study example. She explained that professionals responsible for protecting road and rail assets will need to better understand how their assets including drainage, structures and telecoms equipment rely on – and are vulnerable to – other networks.
Cara showed a ‘knowledge graph’ of the connections of different technologies within the road and rail sectors, showcasing the interconnectivity and potential disruption brought on by cascading failures in those sectors. Detail of the knowledge graph can be seen in the infographic CReDo Transport Use Case Dependencies, published on the DT Hub.
Workshop focus
An interactive section at the event was opened by Systems Engineer Tom Marsh, who unveiled a roadmap for how CReDo could be brought to market and considered some of the gaps in understanding, along with the activities needed to fill those gaps.
He went on to showcase three different cross-sector funding models. More detail can be found in the Developing CReDo from a Demonstrator to a Market-Ready Tool report on the DT Hub.
Participants addressed barriers to the different ownership roadmap models, identifying which barriers could and could not be overcome for each model. They then focused on the roadmap of their choice in focus groups, discussing the actions needed.
Chairing the session, Holly Hensler said the most unexpected outcome was that even though the groups were focused on different roadmaps, they converged on a similar roadmap model, namely one where government is the main sponsor and owner of CReDo while making use of the agility and speed of SMEs to deliver parts of the work.
Infrastructure asset owners, regulators and government bodies are encouraged to sign up to the CReDo Network on the DT Hub to take part in the discussion forum.
For further details, or to get involved, contact credo@cp.catapult.org.uk
Access CReDo reports
Learn more about CReDo
Climate resilience via distributed data sharing and connected understanding
Amit Bhave, CEO and Co-Founder of CMCL
Critical infrastructure and climate change resilience
Critical infrastructure networks, for example water, energy, transport and telecommunications, are interdependent. The Joint Committee on the National Security Strategy report in 2022 highlighted the potential impact of climate change on cascading risks (failure or malfunctioning in one network causing a knock-on effect on other networks) affecting these sectors [1]. The Climate Change Committee, for instance, has warned that flooding is set to become more frequent and severe, affecting infrastructure including energy, water, transport, waste and digital communication. A lack of a formal mechanism for information sharing between critical infrastructure providers and the siloed regulation of each sector pose significant obstacles to climate adaptation and the development of climate resilience across the combined networks.
Substantial action is needed to develop anticipatory measures to help communities avoid climate disasters. Climate resilience is an intrinsically cross-sector challenge and a rapidly growing concern. The preparations to address it have started, but wider contributions are needed to match the severity of the challenge.
Cross-sector digitalisation and the CReDo approach
Consider a weather event such as a flood that compromises assets in the energy network; this can in turn have a knock-on impact on other sectors. The other networks, however, may not be aware that they are vulnerable to this cascading risk (Figure 1).
Connecting digital twins to develop a shared understanding and actionable insights across sectors is a key means to address this cross-sector resilience challenge. The Climate Resilience Demonstrator (CReDo) is a climate change adaptation digital twin project that demonstrates how cross-sector data sharing can improve climate adaptation and resilience across networks. The underlying principle is based on a connected system-of-systems approach, which a) offers more functionality, scale and insights than the sum of the constituent networks alone, and b) accounts for the interconnected nature of networks. This is essential to mimic the real-life cascading risks. The technical approach is powered by a knowledge graph-based connected digital twin, described elsewhere [2].
Figure 1: Network of interconnected critical infrastructure assets – failure cascading risk
Beyond the legal barriers, data silos and a lack of interoperability represent major technical challenges to data sharing. To understand the merits of developing a common language to enable data interoperability, I encourage readers to read the blog posts by Jonathan Eyre [3] and Sarah Hayes [4], as well as the many previous insightful blogs and posts published on the DT Hub. It is equally important to embrace the heterogeneity of data from different sources, and to cater for the varying levels of digital maturity across stakeholders. In CReDo, we have adopted a distributed data sharing architecture (Figure 2) that hosts asset owner data on separate servers, which could be hosted by asset owners in their own IT systems. This approach supports the extensibility of CReDo, enabling the connection of data across more domains (sectors, expertise, know-how, etc.).
The distributed data sharing architecture enables the individual asset owners to retain control of their own data assets, with security and access controls safeguarding access to the data. It facilitates different views of the insights and data tailored to each individual user/asset owner, whilst retaining a connected understanding based on shared data.
Figure 2: Technical approach adopting a distributed shared data architecture
The internal data structure used by CReDo is based on a hierarchy of simple ontologies. The ontologies are used to represent the assets from the infrastructure networks, the connectivity and properties (such as owner, location, operational state, etc.) of each asset, and flood data for different climate scenarios. The approach enables the straightforward mapping of data from asset owners to the CReDo data structure, ensuring outward compatibility. It is easily extensible, offering the possibility to broaden the scope of CReDo to include additional asset properties, new asset owners and new sectors. The assets and the connectivity between them result in a directed graph, shown using synthetic data in Figure 3. This is analogous to the knowledge graph that CReDo creates by using ontologies to represent the data and the relationships between data items.
Figure 3: CReDo assets and connectivity - a directed information graph (synthetic data)
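As a rough illustration of the kind of structure described above, the sketch below builds a toy directed graph of assets with a few properties and their connectivity, in the spirit of Figure 3. The asset classes, property names and network are hypothetical synthetic examples, not CReDo's actual ontologies or code.

```python
# Illustrative sketch only: a toy directed graph of infrastructure assets with
# a few properties, loosely analogous to the structure described above.
# The asset names, classes and properties are hypothetical, not CReDo's ontology.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    asset_class: str                 # e.g. "Substation", "WaterPumpingStation"
    owner: str
    operational_state: str = "operational"
    feeds: list[str] = field(default_factory=list)  # directed edges to dependents

# A small synthetic network: power feeds water pumping and a telecoms exchange.
network = {
    "substation_1": Asset("substation_1", "Substation", "PowerCo",
                          feeds=["pumping_station_1", "exchange_1"]),
    "pumping_station_1": Asset("pumping_station_1", "WaterPumpingStation", "WaterCo"),
    "exchange_1": Asset("exchange_1", "TelephoneExchange", "TelecomsCo"),
}

# The directed edges of the graph, analogous to the connectivity in Figure 3.
edges = [(a.asset_id, dependent) for a in network.values() for dependent in a.feeds]
print(edges)  # [('substation_1', 'pumping_station_1'), ('substation_1', 'exchange_1')]
```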
Use case and insights
CReDo supports scenarios comprising assets across infrastructure networks and different types of floods for different climate scenarios. The sensitivity of the combined network to cascading failures caused by different types of flooding (coastal, fluvial and pluvial) provides insights to support strategic planning decisions to improve the climate resilience of the combined infrastructure network.
Among the various use cases studied, just one (also using synthetic data) is considered here for the sake of brevity. The primary substation that supplies power to two NHS hospitals in the region is depicted in Figure 4. One of the hospitals (marked by a red circle) has failed directly, losing power and telephone service. The second hospital has not flooded but is experiencing indirect failure, illustrating the consequences of the flood cascading through the asset networks. An alternative scenario, in which investment improves the resilience of the power substation, shows an option to avoid the loss of power for both hospitals.
Figure 4: NHS hospital infrastructure - direct and indirect asset failure (synthetic data)
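To illustrate the cascade idea in miniature, the following sketch propagates a direct failure through a toy dependency graph to find the assets that fail indirectly. It is a simplified analogue using assumed synthetic data, not CReDo's actual failure models.

```python
# Illustrative sketch only: propagating a direct failure (e.g. a flooded
# substation) through a directed dependency graph to find assets that fail
# indirectly. A toy analogue of the cascade idea, not CReDo's failure models.
from collections import deque

def cascade(feeds: dict[str, list[str]], initially_failed: set[str]) -> set[str]:
    """Return every asset that fails, directly or through the cascade.

    `feeds` maps an asset to the assets that depend on it (directed edges).
    """
    failed = set(initially_failed)
    queue = deque(initially_failed)
    while queue:
        asset = queue.popleft()
        for dependent in feeds.get(asset, []):
            if dependent not in failed:
                failed.add(dependent)      # indirect failure
                queue.append(dependent)
    return failed

# Synthetic example: the substation supplies a pumping station and a telephone
# exchange; the exchange in turn serves a hospital's phone lines.
feeds = {
    "substation_1": ["pumping_station_1", "exchange_1"],
    "exchange_1": ["hospital_phones_1"],
}
print(cascade(feeds, {"substation_1"}))  # all four assets are reported as failed
```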
CReDo also offers an analytics dashboard (Figure 5) that derives and summarises information across multiple scenarios, providing a simple view of complex information to support decision making. Examples include a list of the assets with the highest vulnerability and economic information regarding potential interventions.
A publicly accessible version of the CReDo demonstrator that uses synthetic data is available via the DT Hub, while a version that uses real asset data has been deployed in a secure environment hosted by the Science and Technology Facilities Council (STFC) within the Data and Analytics Facility for National Infrastructure (DAFNI).
Figure 5: CReDo analytics dashboard (synthetic data)
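As a simple illustration of the kind of summary such a dashboard provides, the sketch below counts how often each asset fails across a handful of synthetic scenarios and ranks the most vulnerable. The scenarios, assets and the failure-count metric are assumptions for illustration only.

```python
# Illustrative sketch only: summarising results across scenarios to rank asset
# vulnerability, in the spirit of the dashboard described above. The scenario
# results are synthetic and the metric (failure count) is a simplification.
from collections import Counter

scenario_failures = {
    "coastal_flood_2050": {"substation_1", "pumping_station_1"},
    "fluvial_flood_2050": {"substation_1", "exchange_1"},
    "pluvial_flood_2050": {"substation_1"},
}

vulnerability = Counter()
for failed_assets in scenario_failures.values():
    vulnerability.update(failed_assets)

# Assets that fail in the most scenarios appear first.
for asset, count in vulnerability.most_common():
    print(f"{asset}: fails in {count} of {len(scenario_failures)} scenarios")
```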
Engagement
CReDo continues to be a rewarding journey for CMCL, as part of a pioneering collaborative project between industry, academia and government to deliver a connected climate adaptation digital twin. A technical showcase describing how the CReDo demonstrator works is now available via a video on the DT Hub.
As we embark on improving the capabilities of CReDo by considering additional weather events such as extreme heat, it would be great to leverage the extensibility of the technical approach and the distributed data sharing by engaging with other infrastructure asset owners, regulators and potential collaborators. You can help shape this engagement via the Digital Twin Hub: https://digitaltwinhub.co.uk/credo/taking-action/.
Links
1. https://publications.parliament.uk/pa/jt5803/jtselect/jtnatsec/132/report.html
2. J. Akroyd, S. Mosbach, A. Bhave and M. Kraft (2021), Universal Digital Twin – A Dynamic Knowledge Graph, Data Centric Engineering, 2, e14
3. https://digitaltwinhub.co.uk/articles/blogs/guest-blog-we-all-speak-the-same-language-don%E2%80%99t-we-by-jonathan-eyre-high-value-manufacturing-catapult-r239/
4. https://digitaltwinhub.co.uk/articles/blogs/guest-blog-data-sharing-between-digital-twins-%E2%80%93-can-we-show-this-in-a-simple-way-by-sarah-hayes-credo-r231/
Amit Bhave is the CEO and Co-Founder of CMCL, a digital engineering company offering products and technical services to the industry. He is also a By-Fellow of Hughes Hall and an Affiliated Research Fellow at the CoMo Group, Department of Chemical Engineering and Biotechnology at the University of Cambridge. His research interests include cross-sector digitalisation, smart infrastructure, negative emissions technologies, carbon abatement, low-emission energy conversion and nanomaterials.
Combining asset data into a connected digital twin can give asset owners across energy, water and telecoms networks a better understanding of the risk of extreme weather events caused by climate change, allowing them to take action.
“We cannot plan for a more resilient future in silos” heard delegates to a showcase event this month exploring progress with CReDo – the Climate Resilience Demonstrator – whose second phase is led by Connected Places Catapult and the Digital Twin Hub.
Instead, a “whole system” approach is needed which considers the complex connections and interdependencies between different types of infrastructure essential to society’s functioning.
Over 450 people joined the hybrid event and heard Elliot Christou, CReDo Technical Lead at the Catapult, and Sarah Snelson, a Director specialising in public policy practice with Frontier Economics, explain the threats posed by a changing climate and the need to take action using sophisticated tools such as CReDo to limit disruption from future flooding.
CReDo is a climate change adaptation digital twin which brings together data across energy, water and telecoms networks to create a bird’s eye view of the infrastructure system. Connected Places Catapult has been working with Anglian Water, BT and Openreach, and UK Power Networks, who have brought their people and their data to the project to investigate how it is possible to share data across sectors, and the benefit of doing so through increased climate resilience.
We heard how simulations can be run and data interrogated using the CReDo digital twin to allow users to understand more fully the vulnerabilities of their infrastructure networks to flooding. With the correct information to hand, asset owners can make more informed decisions to protect their assets in advance of these extreme weather events impacting and causing failure across the system.
Scenarios can be created that demonstrate the impact of a range of different future flooding risks, and show how the loss of one piece of the infrastructure jigsaw puzzle can disrupt other services. CReDo can then be used to coordinate and support decision making to allow the infrastructure system to be better protected and made more resilient.
Using data to create “actionable insights” could therefore allow decisions to be made that “keep the lights on at a lower cost for the benefit of network operators and society” it was said.
Recent months have been focused on CReDo as a decision-making tool for asset operators. Going forward, the benefits for customers and wider society are set to be explored further. A report on progress with phase two of the project is to be published shortly.
Flooding threats made clear
The showcase event began with a powerful video featuring Baroness Brown of Cambridge (Professor Dame Julia King), Chair of the Adaptation Committee of the Climate Change Committee, outlining the rising occurrence of extreme weather, the need for infrastructure to be resilient to such events and how the impact on society can be more serious if authorities are not more prepared.
“The climate change resilience of infrastructure networks is a challenge that is not yet well understood and is one that we need to address urgently,” she said. “Asset owners really need to know, who are they dependent on”, she added, pointing out that if one is impacted by flooding and that problem was to affect an energy substation for instance, that problem could cascade further. “Understanding risks in advance and how we can mitigate them is key.”
Speakers at the event included Sarah Hayes, Strategic Engagement Lead for CReDo, who explained that one vision for the digital twin is for asset owners to be able to assess the impact of future investment decisions, such as relocating or improving defences for a power substation.
While phase one of the CReDo project used a centralised database, phase two explores how to develop a distributed architecture to enable scalability across sectors, regions and organisations, she explained. “We are on a journey towards connected digital twins”.
Jethro Akroyd, Principal Engineer at CMCL Innovations, ran through the approach to developing the distributed architecture and explained how CReDo uses a common data structure to enable interoperability between the data sets from the asset owners. He walked the audience through a technical demonstration of the CReDo visualisation showing how the assets are connected and then impacted by flooding scenarios as failure cascades throughout the system.
Industry panel shares its insights
A panel discussion concluded proceedings, involving representatives from the asset owners involved in CReDo (Anglian Water, BT and UK Power Networks), together with representatives from across infrastructure and climate resilience, and moderated by Arup’s global digital leader Simon Evans.
“I am hugely impressed by what I have heard,” remarked one of the panel. “What we are talking about is getting access to data.” Another said: “We are increasingly seeing the impact of climate change, so energy and water networks definitely need to work more closely together.”
One utility provider remarked that more frequent severe weather events caused by climate change were having a big impact on its fault rates. “We cannot protect everything all of the time, so the better we understand how systems are inter-related, the more we can help customers and create insights into the most sensible way to protect our network.”
National Infrastructure Commissioner Jim Hall commented that it was great to see the use of digital tools to help with the planning of resilient infrastructure. “This is a really exciting space,” he noted, “let’s not stop experimenting”.
Connected Places Catapult’s Ecosystem Director for Integrated Infrastructure, Chris Jones described CReDo as a “great example of the Catapult ethos of bringing together infrastructure sectors, generating a conversation, identifying common ground and sparking innovation”.
“We have got ambitious plans to scale CReDo”, he added, “and we want you all to work with us to take this project forward.”
Learn more about CReDo
Get involved in our next phase
Contact us: credo@cp.catapult.org.uk
We all speak the same language, don't we?
Jonathan Eyre, Senior Technical Fellow in Digital Twins, AMRC and Digital Twin Lead, HVMC
And what does industrial common language mean for data interoperability?
In conversation we use language to express thoughts and our point of view, but are the same words being used and understood by everyone in the same way? At the moment, this is most certainly not the case for a term like digital twin, especially with use cases all being so different from each other. This misalignment can be manageable for a small group discussion, but for larger, world-wide collaborations the question becomes: how can you trust, and perhaps even validate, that everyone is using the same language in the same way?
These issues are already deeply embedded in current information systems; information is actively consumed across organisations, supply chains and even across human languages, all trying to exchange information without any loss of quality. Common acronyms can have different meanings: SME stands for both "small and medium-sized enterprise" and "subject matter expert", and even a small misunderstanding like this can cause major downstream issues. Ensuring language is being used in the same way by every end user is difficult, but not impossible, as we’ll discuss.
So how do we create an industrial common language?
We live in a complex world where manufactured goods are produced for other sectors (like the built environment), and are then transported around to let other sectors, like healthcare, provide services to society. A sprawling system of systems.
The difficulty of creating consistency across this interconnectedness is not to be understated. As with most things, there is prior work, such as "The pathway Towards an Information Management Framework" [1], a report which, with other supporting outputs, details key principles and captures common language formally as "Industry Data Models & Reference Data". The scale of the overall challenge is overwhelming for any individual; however, individually we don’t need to solve everything. Critically, the framework enables experts in their fields to create consistent language to support everyone in managing information quality all the way to the top.
This is what the Apollo Protocol [2] is empowering: a method for convening forums to solve problems, establishing a consistent language for them, and justifying 'why?' with evidence along the way. Language is an ever-evolving process, and creating an industrial language is no different, requiring ongoing effort.
I'm convinced, but what does it really give us?
Chiefly, data interoperability. This often gets mentioned superficially as being crucial to enable digital twins and cyber-physical systems; however, a common language is a key step towards achieving it.
The next thing is producing reference data libraries (RDLs), which are a “particular common set of classes and the properties we will want to use to describe our digital twins” [1]. These define the underpinning common-language structures that will enable a click-of-a-button exchange of information between information systems without data loss.
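A minimal sketch of the idea, with hypothetical classes and properties rather than any published RDL: a shared set of classes and the properties used to describe them, plus a check that an exchanged record conforms to the shared library, which is what makes loss-free exchange between systems possible.

```python
# Illustrative sketch only: a toy 'reference data library' expressed as common
# classes and their properties, plus a conformance check for an exchanged
# record. The classes and properties are hypothetical, not an actual RDL.
REFERENCE_DATA_LIBRARY = {
    "Pump": {"manufacturer", "rated_flow_litres_per_second", "install_date"},
    "Bridge": {"span_metres", "load_rating_tonnes", "owner"},
}

def conforms(record: dict) -> bool:
    """True if the record uses a known class and only properties defined for
    that class in the shared library - the basis for loss-free exchange."""
    rdl_class = record.get("class")
    allowed = REFERENCE_DATA_LIBRARY.get(rdl_class)
    if allowed is None:
        return False
    properties = set(record) - {"class"}
    return properties <= allowed

incoming = {"class": "Pump", "manufacturer": "Acme",
            "rated_flow_litres_per_second": 12.5}
print(conforms(incoming))  # True: both systems interpret the record the same way
```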
Data interoperability and RDLs together provide a new layer to build upon for managing quality information and ensuring overall consistency. Importantly, nothing is technology (or vendor) dependent; it is simply a methodology for analysing the world, backed with evidence. This in itself has many advantages, but above all it allows much greater agility, enabling development in distributed environments and avoiding the silos of information that are typically controlled by a dictated single source. Critically, this consistent but distributed approach enables continual open extensions to improve and innovate the structuring of the language we build up, even in different data management systems in different ecosystems.
So, what next?
Consistent language is critical for data interoperability and requires input from everyone. Agreeing on language doesn’t mean that everyone needs to agree unanimously; by creating understanding where we can in our respective areas, we will enable the success of the transformations we are all making. With this approach, other areas of opportunity naturally open up, such as mapping reference data libraries, ultimately enabling us to solve the wicked problems we face together worldwide.
The Apollo Protocol and its approach enable the convening needed to develop a unified common language for industrial data. If you are involved in initiatives and events also trying to enable interoperability and data sharing, then perhaps consider what you can do to enable consistent language as a starting point.
Jonathan Eyre is a member of the DT Hub Advisory Board, Senior Technical Fellow for Digital Twins for the Advanced Manufacturing Research Centre and Digital Twin Lead for High Value Manufacturing Catapult. Contact Jonathan via the DT Hub.
Links:
[1] The pathway Towards an Information Management Framework:
https://www.cdbb.cam.ac.uk/what-we-did/national-digital-twin-programme/pathway-towards-information-management-framework
[2] The Apollo Protocol: https://theiet.org/apollo-protocol
Join the Apollo Protocol network discussions:
https://digitaltwinhub.co.uk/networks/29-the-apollo-protocol/
Data sharing between digital twins – can we show this in a simple way?
Sarah Hayes, Strategic Engagement Lead for CReDo
It’s great to hear about our digital twin projects because they’re exciting and innovative. Our use cases are different and varied because there are so many problems that digital twins can help us solve. In a recent radio interview [1], Lord Deben, Chair of the Committee on Climate Change, suggested that it should be legally required that every single government decision is made with climate change and sustainability in mind, and that each decision should be made quicker than it currently is. We, the DT Hub community, know that connected digital twins are part of the answer, enabling quicker decisions that take account of more information so that every decision can be made with climate change and sustainability in mind, and it’s part of our duty to communicate this.
But it’s also part of our job to explain well what digital twins are and how we develop them, not just what they can do for us. When I sat and listened to other presentations at the Utility Week conference last May in Birmingham, I started to wonder how others might become confused by the variety of ways we choose to describe our digital twin projects. Each project has a different diagram to represent how the data is brought together, what the controls over the data are and what the governance looks like. If we had a common diagram to describe these areas we could start to properly compare and contrast our approaches and better understand where bespoke approaches trump a common approach and vice versa.
A group of data and digital twin experts have come together since the summer to talk about how our own projects tackle the thorny problems of data integration and access. How do we bring together data from different sources? Where do we put that data? And how do we ensure as much of it is as open as possible and data that needs to be secure stays so? We found that we use different names for the same things but after some discussion we can come to consensus on which names seem most appropriate. It’s not an exact science, but through discussion and working through examples together, we’re all making progress.
We developed the data architecture wheel (with thanks to the Virtual Energy System team at National Grid ESO for sharing the original basis for this diagram) to show how data can be shared. Organisations have digital twins of their assets, may want to share some of their data and almost certainly need data from outside their organisation. We have different ways to share this data. We can share data on a point-to-point basis, as below: I email you my file. But that won’t scale as multiple parties send multiple emails to each other (much like today?).
Or we can develop a central database for open or shared data. We’ll need some access, security and quality protocols (the padlocks) to govern the database, and we’ll need a way to agree those. And whilst central databases do have their place, one central database cannot become the national digital twin. And many databases will continue to silo our information, causing duplication, inefficiency and friction.
So we can develop a distributed data sharing architecture with agreed common access, security and quality protocols (the padlocks). This allows organisations to retain control over their data, who accesses it and its quality.
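As a toy sketch of the distributed pattern (the organisation names, datasets and policy are invented for illustration), each organisation keeps its data on its own node and answers queries itself, applying the agreed common access protocol (the padlocks) before anything leaves.

```python
# Illustrative sketch only: a distributed arrangement in which each
# organisation keeps its own data store and answers queries itself, applying
# a shared access protocol (the 'padlocks'). Names and policy are hypothetical.
SHARED_ACCESS_POLICY = {
    # dataset -> roles allowed to read it (the agreed common protocol)
    "flood_defence_assets": {"planner", "regulator"},
    "outage_history": {"planner"},
}

class OrganisationNode:
    """One organisation's data stays on its own node; only query results leave."""
    def __init__(self, name: str, datasets: dict[str, list]):
        self.name = name
        self._datasets = datasets   # retained and controlled locally

    def query(self, dataset: str, requester_role: str):
        allowed_roles = SHARED_ACCESS_POLICY.get(dataset, set())
        if requester_role not in allowed_roles:
            raise PermissionError(f"{requester_role} may not read {dataset}")
        return list(self._datasets.get(dataset, []))

water_co = OrganisationNode("WaterCo", {"flood_defence_assets": ["embankment_7"]})
print(water_co.query("flood_defence_assets", requester_role="planner"))  # ['embankment_7']
```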
In reality, we know we’re going to get a bit of each; can it be represented like this?
We want your feedback, so let us know! Of course, these diagrams will best come to life when presented in the context of real projects, and that’s why we’re presenting them in the context of CReDo and the Virtual Energy System. Stay tuned to the Gemini Call: the CReDo team will be talking more about the distributed architecture being developed on 21 February.
In order to ingest data into particular use cases or digital twin projects, it is necessary to use 1) a high-level data structure or model and 2) a more bespoke data structure tailored to the use case. A foundational or top-level ontology would lay the foundations for 1) and 2), which is the thinking behind the development of the Information Management Framework.
Without an agreed top-level ontology at this stage in our journey, we can still make progress by sharing our common high-level data structures at the industry level and sharing our bespoke data structures at the use case and project level (which can be copied and adapted for similar use cases). We just need to make sure we’re talking about the same thing and that we can share our learnings as we go.
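For illustration only, here is a sketch of the two-level idea with hypothetical class and field names: a shared high-level structure that any project can reuse, extended by a bespoke structure for one use case.

```python
# Illustrative sketch only: a shared high-level structure extended by a bespoke
# structure for one use case. The class and field names are hypothetical, not
# an agreed industry model.
from dataclasses import dataclass

@dataclass
class AssetRecord:
    """High-level, industry-wide structure: the fields everyone shares."""
    asset_id: str
    asset_type: str
    location: tuple[float, float]   # latitude, longitude
    owner: str

@dataclass
class FloodResilienceRecord(AssetRecord):
    """Bespoke extension for a flood-resilience use case; other projects can
    copy and adapt this layer while keeping the shared base unchanged."""
    flood_defence_height_m: float
    criticality: str                # e.g. "high", "medium", "low"

record = FloodResilienceRecord(
    asset_id="substation_1", asset_type="Substation",
    location=(52.63, 1.30), owner="PowerCo",
    flood_defence_height_m=1.2, criticality="high")
print(record.asset_type, record.flood_defence_height_m)
```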
Using the same diagrams to point out differences of approach can help. I’m talking through these diagrams at the Gemini Call today and putting out a call to action to help us improve them and to join in our discussion. Can you use this diagram to represent your digital twin or data sharing project? It would be fantastic to see others using these diagrams to talk about their projects at the Gemini Calls. And can we start to develop shared rules that will enable distributed architectures to work across industries? Join the Data sharing architectures network on the DT Hub and share your feedback: Data sharing architectures - DT Hub Community (digitaltwinhub.co.uk). If you’d like to get more involved then please get in touch.
Sarah Hayes is the Strategic Engagement Lead for CReDo, author of Data for the public good.
sarah@digitaltwinnercouk.onmicrosoft.com
Join the Gemini Call Tuesdays at 10:30-11:00 Gemini Call - DT Hub Community (digitaltwinhub.co.uk)
[1] BBC Sounds - Rethink, Rethink Climate, Leadership
A digital twin approach to embodied carbon calculations
Glen Worrall, Bentley Systems
This article considers how to use the DT Toolkit roadmap to deliver a digital twin suitable for embodied carbon reporting.
The requirement to reduce everyone’s carbon footprint casts a wide-ranging net when you consider the many “carbon” interactions we have in any given day.
However, the latency of any transaction tends to limit the ability of carbon teams to influence the design or the materials used, which can materially impact an asset’s carbon footprint.
The digital twin mindset means that we can ensure our digital model is as close to its physical counterpart as possible, and that it can be interrogated quickly and easily.
Going digital is a common theme, but its effectiveness must be judged by progress against our targeted goals. The following process is aligned with embodied carbon workflows, but it can just as easily be applied to any process that utilises the digital twin framework.
Why... and what is it for?
We require a digital twin to ensure the standardisation of embodied carbon reporting, which will be effective if we can reduce the embodied carbon calculation cycle from two weeks to instant. We also aim to make the report accessible to all project team members, to ensure they are aware of how their decisions can impact the asset’s embodied carbon.
The digital twin must be fed from current working practices and will remove the duplication of any data. This is an interesting problem that surfaces many times and generally conflicts with ISO 19650 processes: while we want to access SHARED / PUBLISHED information, we do not want to access WIP information, and we do not want to access source information using a variety of tools.
There are many ways we can use standards to enable access to the information, and while visualising the result should be seen as part of the enhancements, the federation of external sources is not as straightforward as one would imagine.
Carbon Calculators enable smart material selection
What information do we need and what data do we have?
A simple question such as ‘how much concrete is in my model?’ unfortunately involves a very complex process to obtain an answer, and it is one which costing, construction and carbon teams must work through on a regular basis. The definition of the term ‘concrete’ is not as simple as it should be, with different grades, but standards such as Uniclass assist and help us locate those elements that will materially affect our embodied carbon total. Even the standardisation of units between the teams that report environmental product declarations and the teams that build engineering models is challenging. However, there are many unit conversion libraries which allow us to work in tonnes, metres squared or cubic yards.
Standards such as Uniclass assist in developing industry processes
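As a minimal illustration of this workflow, the sketch below filters a synthetic list of classified model elements, sums the concrete volume and converts it into an embodied carbon estimate. The classification codes, density and carbon factor are placeholders, not real Uniclass entries or EPD values.

```python
# Illustrative sketch only: answering 'how much concrete is in my model?' from
# a classified element list, then converting volume to an embodied carbon
# estimate. The classification codes, density and carbon factor are
# placeholders, not real EPD values.
CONCRETE_DENSITY_T_PER_M3 = 2.4        # placeholder density, tonnes per m3
CARBON_FACTOR_KGCO2E_PER_T = 100.0     # placeholder factor, kgCO2e per tonne

model_elements = [
    # (element id, classification code, volume in m3) - synthetic data
    ("slab_01", "Ss_20_05", 35.0),
    ("column_03", "Ss_20_05", 4.2),
    ("beam_07", "Ss_30_10", 1.1),      # different classification, excluded below
]

def embodied_carbon_for(classification_prefix: str) -> float:
    """Sum volumes for matching elements and convert to kgCO2e."""
    volume_m3 = sum(v for _, code, v in model_elements
                    if code.startswith(classification_prefix))
    mass_t = volume_m3 * CONCRETE_DENSITY_T_PER_M3
    return mass_t * CARBON_FACTOR_KGCO2E_PER_T

print(f"Concrete embodied carbon: {embodied_carbon_for('Ss_20'):.0f} kgCO2e")
```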
Who will do what?
This is interesting, as most teams think that digital twins will remove the task. However, there is still a trade-off that must be made by carbon teams: are we locally sourcing or cost constrained? This is part of the project planning that does impact the ability to be carbon zero. As long as we make the task the sole focus and remove the requirement for finding data and presenting results for the project team, there should be a positive impact on the desired outcomes. We need the carbon teams to focus on effective material selection, not on being data wizards.
What does the data tell us?
The key here is that the results should be accurate, timely and effective. Improving the latency of the carbon reporting needs to have an impact that sees the reduction of embodied carbon for all infrastructure. The data should show the reduction, but also how complete the calculation is. An ISO 19650 workflow may release data which is not suitable, i.e. is the volume of a steel vessel really what we want to track, or is it the volume of the shell of the vessel? This information must be transparent, especially when content is missing that will materially impact the global warming potential, e.g. what is the factor for the MEP systems in a building which may not be modelled for the next six months?
Information Accessibility should be a key value proposition
How are we doing?
Like all infrastructure projects, the project is an evolving twin. Further, as we move into construction, what processes are in place to ensure the as-built embodied carbon matches the as-designed embodied carbon? There are plenty of processes in place to ensure the as-built asset matches the engineering requirements, but where was a different material used? How did the construction process impact the actual embodied carbon of an asset? What happens in 12 years’ time, or after the first maintenance window? This is information we need to ensure that carbon and whole-life costs align with our expectations.
Conclusion
Whatever the starting point, a framework for our process and data requirements should lead us to a positive outcome. Further, the ability to identify the preferred outcomes allows us to identify those parts of the process that cannot meet requirements or that can be improved.
Glen Worrall is a member of the DT Hub Community Council and Director of Digital Integration at Bentley Systems. Contact Glen via the DT Hub.
The Digital Twin Journeys workstream has taken world-leading research and turned it into accessible and useful information to enable those who are just starting out on their digital twin journeys to get ahead. We have learnt about more than just innovative technologies and their implementation; we have learnt about the type of thinking that makes this research ground-breaking. To take this research forwards and discover what your Minimum Viable Twin is, check out the infographic, the final summary of our workstream.
Join Desmond and Mara as they embark on a journey of their own to develop a digital twin. As you follow them, you will learn about an approach to design thinking and iterative development that paves the way for effective digital twin prototyping.
Read the full infographic here.
We have taken our journey through assessing the needs of users as they utilise our services. This enables the interventions that we make to be tailored to their needs, considering the ecosystem of services they rely on and the differing levels of access to these services.
We have learnt that care needs to be taken when deciding whether to create your own solution from scratch, buy something pre-existing or work with partners. The DeepDish project used well-established code to handle computer vision, and the sensors used in the Staffordshire Bridges project were not custom-made for it. In short, there is no need to reinvent the wheel.
Just as digital twins were first conceived by NASA as a way of managing assets in the most inaccessible place of all, space, so too have we learnt how we can manage inaccessible assets from space with the help of satellite telemetry. But we also discovered how important skilled data scientists are to making this technique accessible to industry.
We learned that digital twin prototypes can be used as a tool for their own continuous cycle of improvement, as each iteration teaches us how to better classify, refine and optimise the data we use in our decision-making.
The key to it all is the decisions that we make, the way that we change the world around us based upon the information that we have in front of us. We have learnt that working with decision makers is central to creating digital twins that improve outcomes for people and nature as part of a complex system of systems. We can provide these stakeholders with the information that they need to realise our collective vision for a digital built Britain.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
Check out the rest of the outputs on the CDBB Digital Twin Journeys page.
To asset owners and managers, understanding how people move through and use the built environment is a high priority, enabling better, more user-focused decisions. However, many of the methods for getting these insights can feel invasive to users. The latest output from Digital Twin Journeys looks at how a researcher at the University of Cambridge has solved this problem by teaching a computer to see. Watch the video to learn more.
Working from the University of Cambridge Computer Laboratory, Matthew Danish is developing an innovative, low-cost sensor that tracks the movement of people through the built environment. DeepDish is based on open-source software and low-cost hardware, including a webcam and a Raspberry Pi. Using machine learning, Matthew first taught DeepDish to recognise pedestrians and track their journeys through a space, and then began training it to distinguish pedestrians from Cambridge’s many cyclists.
One of the key innovations in Matthew’s technique is that no images of people are actually stored or processed outside of the camera. Instead, it is programmed to count and track people without capturing any identifying information or images. This means that DeepDish can map the paths of individuals using different mobility modes through space, without violating anyone’s privacy.
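As a rough, hypothetical sketch of the privacy-preserving pattern described here (not the actual DeepDish implementation), the device would run detection locally, keep only anonymous positions and counts, and discard every frame; detect_people below is a stand-in for whatever detector is used:

```python
# Illustrative sketch only, not the DeepDish codebase: process each frame on the device,
# keep anonymous centroids and zone counts, and never store or transmit images.

from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x, y, width, height)
Point = Tuple[int, int]           # anonymous centroid coordinates

def detect_people(frame) -> List[Box]:
    """Stand-in for an on-device object detector; assumed, not part of this sketch."""
    raise NotImplementedError

def anonymous_centroids(frame) -> List[Point]:
    """Reduce a frame to centroid coordinates; the pixels themselves are discarded."""
    return [(x + w // 2, y + h // 2) for (x, y, w, h) in detect_people(frame)]

def count_in_zone(centroids: List[Point], zone: Box) -> int:
    """Count tracked people inside a rectangular zone of interest."""
    x, y, w, h = zone
    return sum(1 for (cx, cy) in centroids if x <= cx <= x + w and y <= cy <= y + h)
```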
Matthew’s digital twin journey teaches us that technological solutions need not be expensive to tick multiple boxes, and a security- and privacy-minded approach to asset sensing can still deliver useful insights.
To find out more about DeepDish, read about it here.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
We all want the built environment to be safe and to last. However, minor movements over time from forces such as subsidence can impact how well our assets perform. They can also make connecting and modifying assets harder if those assets have shifted from the position in which they were built. If the assets are remote or hard to access, tracking these small movements is even more difficult.
The latest instalment from the Digital Twin Journeys series is a video showing the construction and built environment sectors what they need to know about remote sensing and using satellite data, featuring the Construction Innovation Hub-funded research by the Satellites group based at the Universities of Cambridge and Leeds.
Using satellite imaging, we may be able to detect some of the tell-tale signs of infrastructure failure before they happen, keeping services running smoothly and our built environment performing as it was designed over its whole life.
You can read more from the Satellites project by visiting their research profile.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
Interviews with DT Hub Community Co-Chairs Ali Nicholl, IOTICS, Melissa Zanocco, Infrastructure Client Group and Mark Enzer, Director, CDBB and Head of the National Digital Twin programme
As a new phase opens up for the Digital Twin Hub (DT Hub), we have relaunched our Hub Insights 'Live' series.
The introduction of the Digital Twin Hub Community Council brings with it opportunities for DT Hub members to share experiences and get involved in shaping the direction of the Hub. Community Council Co-Chairs @Ali Nicholl and @Melissa Zanocco outline their thoughts on a changing attitude to data sharing, the development of key projects such as the Climate Resilience Demonstrator (CReDo) which have grown out of DT Hub relationships, and how the Council will use its voice to help enable socio-technical change.
'Learning by doing, progressing by sharing'
In the third interview of this mini-series, @Mark Enzer, Director of CDBB and Head of the National Digital Twin programme, speaks about his passion for digital twins and connected digital twins, where it began for him, plus a look at the digital twin landscape and how co-ordination and collaboration will be key to taking the work forward.
Mark talks about the exciting opportunities that will result from the transition of the DT Hub to an industry/Catapult partnership hosted at the Connected Places Catapult, the influence of the Centre for Digital Built Britain and the National Digital Twin programme, and the importance of a future strategy for the DT Hub focused on its membership - bringing the industry together to develop the roadmap for an ecosystem of connected digital twins.
Watch the Hub Insights - New horizons videos here:
Sam Chorlton interviews Ali Nicholl
Tom Hughes interviews Melissa Zanocco
Tom Hughes interviews Mark Enzer
Motion sensors, CO₂ sensors and the like are considered to be benign forms of monitoring, since they don’t capture images or personal data about us as we move through the buildings we visit. Or at least, that’s what we want to believe. Guest blogger Professor Matthew Chalmers (University of Glasgow) helped develop a mobile game called About Us as part of the CDBB-funded Project OAK. The game takes players through a mission in which they use information from building sensors to achieve their aims, with a twist at the end. He writes about why we all need to engage with the ethics of data collection in smart built environments.
Mobile games are more than just entertainment. They can also teach powerful lessons by giving the player the ability to make decisions, and then showing them the consequences of those decisions. About Us features a simulated twin of a building in Cambridge, with strategically placed CO₂ sensors in public spaces (such as corridors), and raises ethical questions about the Internet of Things (IoT) in buildings.
The premise of the game is simple. While you complete a series of tasks around the building, you must avoid the characters who you don’t want to interact with (as they will lower your game score), and you should contact your helpers — characters who will boost your score. You can view a map of the building, and plan your avatar’s route to accomplish your tasks, based on which route you think is safest. On the map, you can watch the building’s sensors being triggered. By combining this anonymous sensor data with map details of which offices are located where, you can gather intelligence about the movements of particular characters. In this way, you can find your helpers and avoid annoying interactions. If you’ve avoided the bad characters and interacted with the good characters while completing your tasks, you win the game.
However, a twist comes after you have finished: the game shows you how much could be inferred about your game character, from the exact same sensors that you had been using to make inferences about other characters. Every task in the game exposes some sensitive data about the player’s avatar, and reinforces the player’s uncomfortable realisation that they have exploited apparently neutral data to find and avoid others.
What does this tell us about the ethics of digital twins? Our journeys through the built environment can reveal more than we intend them to, e.g. our movements, our routines, where we congregate, and where we go to avoid others. All this information could inadvertently be revealed by a building digital twin, even though the data used seems (at first glance) to be anonymous and impersonal. The game used CO₂ levels as an example of apparently impersonal data that, when combined with other information (local knowledge in this case), becomes more personal. More generally, data might be low risk when isolated within its originating context, but risk levels are higher given that data can be combined with other systems and other (possibly non-digital) forms of information.
The Gemini Principles set out the need for digital twins to be ethical and secure, but About Us demonstrates that this can be surprisingly difficult to ensure. Collecting data through digital twins provides aggregate insights — that’s why they’re so useful — but it also creates risks that need ongoing governance. It’s vitally important that citizens understand the double-edged problem of digital twins, so that citizens are more able to advocate for how they want the technology to be used, and not used, and for how governance should be implemented.
Gamification is now a well-established technique for understanding and changing user attitudes toward digital technology. About Us was designed to create a safe but challenging environment, in which players can explore an example of data that could be collected in distributed computing environments, the uses to which such data can be put, and the intelligence that can be gathered from resulting inferences. The ultimate purpose of Project OAK is to enable anyone concerned with how data is managed (e.g., data processors, data subjects, governance bodies) to build appropriate levels of trust in the data and in its processing. Only if we recognise the ethical and legal issues represented by digital twins can we start to give meaningful answers to questions about what good system design and good system governance look like in this domain.
Information about this project is available on their GitHub page.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
To join the conversation with others who are on their own digital twin journeys, join the Digital Twin Hub.
Described in the Pathway to the Information Management Framework, the Integration Architecture is one of the three key technical components of the Information Management Framework (IMF), along with the Reference Data Library and the Foundation Data Model. It consists of the technology and protocols that will enable the managed sharing of data across the National Digital Twin (NDT).
The IMF Integration Architecture (IA) team began designing and building the IA in April 2021. This blog gives an insight on its progress to date.
Principles
First, it is worth covering some of the key principles being used by the team to guide the design and build of the IA:
Open Source: It is vital that the software and technology that drives the IA are not held in proprietary systems that raise barriers to entry and prevent community engagement and growth. The IA will be open source, allowing everyone to utilise the capability and drive it forward.
Federated: The IA does not create a single monolithic twin. When Data Owners establish their NDT Node, the IA will allow them to publish details of the data they want to share to an NDT data catalogue; other users can then browse, select and subscribe to the data they need to build a twin that is relevant to their needs. This subscription is on a node-to-node basis, not via a central twin or data hub, and owners can specify the access, use or time constraints that they wish to apply to a subscriber. Once subscribed, the IA takes care of authenticating users and of updating and synchronising data between nodes.
Data-driven access control: To build trust in the IA, Data Owners must be completely comfortable that they retain full control over who can access the data they share to the NDT. The IA will use an attribute-based access control (ABAC) security model to allow owners to specify in fine-grained detail who can access their data, and permissions can be added or revoked simply and transparently. This is implemented as data labels which accompany the data, providing instructions to receiving systems on how to protect it (a minimal illustration follows this list of principles).
IMF Ontology Driven: NDT Information needs to be accessed seamlessly. The NDT needs a common language so that data can be shared consistently, and this language is being described in the IMF Ontology and Foundation Data Model being developed by another element of the IMF team. The IA team are working with them closely to create capabilities that will automate conversion of incoming data to the ontology and transact it across the architecture without requiring further “data wrangling” by users.
Simple Integration: To minimise the risk of implementation failure or poor engagement due to architectural incompatibility or a high cost of implementation, the IA needs to be simple to integrate into client environments. The IA will use well-understood architectural patterns and technologies (for example REST, GraphQL) to minimise local disruption when data owners create an NDT node, and to ensure that, once implemented, the ongoing focus of owner activity is on where the value is – the data – rather than on maintenance of the systems that support it.
Cloud and On-Prem: An increasing number of organisations are moving operations to the cloud, but the IA team recognises that this may not be an option for everyone. Even when cloud strategies are adopted, the journey can be long and difficult, with hybridised options potentially being used in the medium to long term. The IA will support all these operating modes, ensuring the membership of the NDT does not negatively impact existing or emerging environment strategies.
Open Standards: For reasons similar to those behind making the IA open source, the IA team is committed to ensuring that data in the NDT IA is never locked in or held in inaccessible proprietary formats.
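To make the Federated and Data-driven access control principles above a little more tangible, here is a minimal, hypothetical sketch of node-to-node subscription with data labels; none of the names reflect the real Integration Architecture or Telicent CORE interfaces.

```python
# Hypothetical sketch of node-to-node sharing with attribute-based access labels.
# Class and field names are illustrative; they are not the real IA APIs.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataLabel:
    """Travels with the data, instructing receiving systems how to protect it."""
    allowed_orgs: set
    allowed_roles: set
    expires: Optional[str] = None  # e.g. an ISO 8601 timestamp, or None for no time constraint

@dataclass
class CatalogueEntry:
    dataset_id: str
    owner_node: str
    description: str
    label: DataLabel

@dataclass
class Node:
    name: str
    catalogue: dict = field(default_factory=dict)  # this node's view of the NDT data catalogue

    def publish(self, entry: CatalogueEntry) -> None:
        """Publish details of a dataset the owner wants to share, together with its access label."""
        self.catalogue[entry.dataset_id] = entry

    def request_subscription(self, dataset_id: str, org: str, role: str) -> bool:
        """Node-to-node check: the owner's label decides whether a subscriber gets access."""
        entry = self.catalogue[dataset_id]
        return org in entry.label.allowed_orgs and role in entry.label.allowed_roles

# Usage: a water utility shares flow data that only a named partner's engineers may consume.
owner = Node("water-utility-node")
owner.publish(CatalogueEntry(
    dataset_id="river-flow",
    owner_node="water-utility-node",
    description="15-minute river flow readings",
    label=DataLabel(allowed_orgs={"energy-network"}, allowed_roles={"engineer"}),
))
print(owner.request_subscription("river-flow", org="energy-network", role="engineer"))  # True
print(owner.request_subscription("river-flow", org="another-org", role="engineer"))     # False
```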
What has the IA team been up to this year?
The IMF chose to adopt the existing open-source Telicent CORE platform to handle the ingest, transformation and publishing of data to the IMF ontology within NDT nodes, and the focus has been on beginning to build and prove some of the additional technical elements required to make the cross-node transactional and security elements of the IA function. Key focus areas were:
Creation of a federation capability to allow Asset Owners to publish, share and consume data across nodes
Adding ABAC security to allow Asset Owners to specify fine-grain access to data
Building a ‘Model Railway’ to create an end-to-end test bed for the NDT Integration Architecture, and to prove out deployment in containers
Sensor technology has come a long way over the last 30 years, from the world’s first, bulky webcam at the University of Cambridge Computer Science Department to near ubiquitous networks of sleek sensors that can provide data at an unprecedented volume, velocity and quality. Today, sensors can even talk to each other to combine single points of data into useful insights about complex events. The new webcomic ‘Coffee Time’ by Dave Sheppard, part of the Digital Twin Journeys series, tells the story of this evolution and what it means for what we can learn about our built environment through smart sensors.
Starting with a simple problem – is there coffee in the lab’s kitchen? – researchers in the early 1990s set up the world’s first webcam to get the information they wanted. Today, people in the Computer Lab still want to know when the coffee is ready, but there are more ways to solve the problem, and new problems that can be solved, using smart sensors. Older sensors simply sent information from point A to point B, providing one type of data about one factor, and that data then had to be collated and analysed to yield insights. Now sensors can share data with each other and generate insights almost instantaneously.
The West Cambridge Digital Twin team at the computer lab have looked at how specific sequences of sensor events can be combined into an insight that translates actions in the physical world into carefully defined digital events. When someone makes coffee, for example, they might turn on a machine to grind the coffee beans, triggering a smart sensor in the grinder. Then they’d lift the pot to fill it with water, triggering a weight sensor pad beneath to record a change in weight. Then they would switch the coffee machine on, triggering a sensor between the plug and the outlet that senses that the machine is drawing power. Those events in close succession, in that order, would tell the smart sensor network when the coffee is ready.
These sequences of sensor triggers are known as complex events. Using this technique, smart sensors in the built environment can detect and react to events like changes in building occupancy, fires and security threats. One advantage of this approach is that expensive, specialist sensors may not be needed to detect rarer occurrences if existing sensors can be programmed to detect them. Another is that simple, off-the-shelf sensors can detect events they were never designed to. As the comic points out, however, it is important to programme the correct sequence, timing and location of sensor triggers, or you may draw the wrong conclusion from the data that’s available.
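A minimal sketch of such a complex-event rule, using the coffee example with made-up sensor names and an assumed five-minute window, might look like this:

```python
# Illustrative complex-event detection: a 'coffee ready' event is inferred only when
# the expected sensor triggers occur in the right order within a time window.
# Sensor names, ordering and the five-minute window are assumptions for the example.

from datetime import datetime, timedelta

EXPECTED_SEQUENCE = ["grinder_power", "pot_weight_change", "machine_power"]
WINDOW = timedelta(minutes=5)

def coffee_ready(events):
    """events: list of (timestamp, sensor_name) tuples, already sorted by time."""
    idx = 0       # position in the expected sequence
    start = None  # time of the first matching trigger
    for ts, sensor in events:
        if start is not None and ts - start > WINDOW:
            idx, start = 0, None  # window expired: start matching from scratch
        if sensor == EXPECTED_SEQUENCE[idx]:
            if idx == 0:
                start = ts
            idx += 1
            if idx == len(EXPECTED_SEQUENCE):
                return True
    return False

events = [
    (datetime(2022, 1, 10, 9, 0, 0), "grinder_power"),
    (datetime(2022, 1, 10, 9, 1, 30), "pot_weight_change"),
    (datetime(2022, 1, 10, 9, 2, 10), "machine_power"),
]
print(coffee_ready(events))  # True: triggers occurred in order within the window
```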
Something as simple as wanting to know if the coffee is ready led to the first implementation of the webcam. Digital twin journeys can have simple beginnings, with solving a simple problem with a solution that’s accessible to you, sparking off an evolution that can scale up to solve a wide range of problems in the future.
You can read and download the full webcomic here.
You can read more from the West Cambridge Digital Twin project by visiting their research profile.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
By 2050, an estimated 4.1 million people will be affected by sight loss in the UK, making up a portion of the 14.1 million disabled people in the UK. How might digital twins create opportunities for better accessibility and navigability of the built environment for blind and partially sighted people? A new infographic presents a conception of how this might work in the future.
In their work with the Moorfields Eye Hospital in London, the Smart Hospitals of the Future research team have explored how user-focused services based on connected digital twins might work. Starting from a user perspective, the team have investigated ways in which digital technology can support better services, and their ideas for a more accessible, seamless experience are captured in a new infographic.
In the infographic, service user Suhani accesses assistive technology for blind people on her mobile phone to navigate her journey to an appointment at an eye hospital. On the way, she is aided by interoperable, live data from various digital twins that seamlessly respond to changing circumstances. The digital twins are undetectable to Suhani, but nevertheless they help her meet her goal of safely and comfortably getting to her appointment. They also help her doctors meet their goals of giving Suhani the best care possible. The doctors at the eye hospital are relying on a wider ecosystem of digital twins beyond their own building digital twin to make sure this happens, as Suhani’s successful journey to the hospital is vital to ensuring they can provide her with care.
Physical assets, such as buildings and transport networks, are not the only things represented in this hypothetical ecosystem of connected digital twins. A vital component pictured here is the digital twin of a patient based on their medical data, and the team raises questions about the social acceptability and security of digital twins of people, particularly vulnerable people.
No community is a monolith, and disabled communities are no exception. The research team acknowledges that more research is needed with the user community of Moorfields to understand the variety of needs across the service pathway that digital twins could support. As such, developers need to consider the range of users with different abilities and work with those users to design a truly inclusive ecosystem of digital twins. The work by the Smart Hospitals research team raises wider questions about the role of digital technology both in creating more physical accessibility in the built environment and in potentially creating new barriers to digital accessibility. It is not enough to create assistive technologies if not everyone can – or wants to – have access to those technologies.
‘The role of digital technologies in exacerbating potentially digital inequalities is something that needs to be looked at from a policy perspective, both at the hospital level, but also more generally, from a government Department of Health perspective,’ says Dr Michael Barrett, the project’s principal investigator.
Dr Karl Prince, co-investigator, reflects that, ‘The traditional questions when it comes to this type of technology are raised as to: do they have access to equipment, and do they have the technical ability?’ The lesson is that you can build digital twins that create a better experience for people if you design digital systems from the perspective of an ecosystem of services, with input from users of that ecosystem.
Through exciting case studies, the project raises vital questions about digital ethics and the potentially transformative effects of digital twins on the physical built environment.
To read the infographic in detail, click here.
You can read more from the Smart Hospitals project by visiting their research profile page.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
To join the conversation with others who are on their own digital twin journeys, join the Digital Twin Hub.
Digital twins are not just a useful resource for understanding the here-and-now of built assets. If an asset changes condition or position over its lifecycle, historical data from remote sensors can make this change visible to asset managers through a digital twin. However, this means retaining and managing a potentially much larger data set in order to capture value across the whole life of an asset. In this blog post, Dr Sakthy Selvakumaran, an expert in remote sensing and monitoring, tells us about the importance of curation in the processing of high-volume built environment data.
There are many sources of data in the built environment, in increasing volumes and with increasing accessibility. They include sensors added to existing structures – such as wireless fatigue sensors mounted on ageing steel bridges – or sensors attached to vehicles that use the assets. Sources also include sensing systems such as fibre optics embedded in new structures to understand their capacity over the whole life of the asset. Even data not intended for the built environment can provide useful information; social media posts, geo-tagged photos and GPS from mobile phones can tell us about dynamic behaviours of assets in use.
Remote sensing: a high-volume data resource
My research group works with another data source – remote sensing – which includes satellite acquisitions, drone surveys and laser monitoring. There have been dramatic improvements in the spatial, spectral, temporal and radiometric resolution of the data gathered by satellites, which is providing an increasing volume of data to study structures at a global scale. While these techniques have historically been prohibitively expensive, the cost of remote sensing is dropping. For example, we have been able to access optical, radar and other forms of satellite data to track the dynamic behaviour of assets for free through the open access policy of the European Space Agency (ESA).
The ESA Sentinel programme’s constellation of satellites fly over assets, bouncing radar off them and generating precise geospatial measurements every six days as they orbit the Earth. This growing data resource – not only of current data but of historical data – can help asset owners track changes in the position of their asset over its whole life. This process can even catch subsidence and other small positional shifts that may point to the need for maintenance, risk of structural instability, and other vital information, without the expense of embedding sensors in assets, particularly where they are difficult to access.
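As a simple, hedged illustration of the kind of trend analysis this data supports (not the researchers’ actual processing chain, and with invented numbers), one could fit a movement rate to a displacement time series and flag assets that exceed a threshold:

```python
# Illustrative only: flag possible subsidence from a satellite-derived displacement series.
# The six-day revisit, threshold and synthetic measurements are invented for the example.

import numpy as np

days = np.arange(0, 120, 6)  # one measurement per six-day revisit
displacement_mm = -0.05 * days + np.random.normal(0, 0.3, days.size)  # synthetic readings

rate_mm_per_day, offset = np.polyfit(days, displacement_mm, 1)
rate_mm_per_year = rate_mm_per_day * 365.25

THRESHOLD_MM_PER_YEAR = 10.0  # hypothetical trigger for a closer look
if abs(rate_mm_per_year) > THRESHOLD_MM_PER_YEAR:
    print(f"Flag for inspection: {rate_mm_per_year:.1f} mm/year movement trend")
else:
    print(f"Within threshold: {rate_mm_per_year:.1f} mm/year")
```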
Data curation
One of the key insights I have gained in my work with the University of Cambridge’s Centre for Smart Infrastructure and Construction (CSIC) is that data curation is essential to capture the value from remote sensing and other data collection methods. High volumes of data are generated during the construction and operational management of assets. However, this data is often looked at only once before being deleted or archived, where it often becomes obsolete or inaccessible. This means that we are not getting the optimal financial return on our investment on that data, nor are we capturing its value in the broader sense.
Combining data from different sources or compiling historical data can generate a lot of value, but the value is dependent on how it is stored and managed. Correct descriptions, security protocols and interoperability are important technical enablers. Social enablers include a culture of interdisciplinary collaboration, a common vision, and an understanding of the whole lifecycle of data. The crucial element that ensures we secure value from data is the consideration of how we store, structure and clean the data. We should be asking ourselves key questions as we develop data management processes, such as: ‘How will it stay up to date?’ ‘How will we ensure its quality?’ and ‘Who is responsible for managing it?’
Interoperability and standardisation
The more high-volume data sources are used to monitor the built environment, the more important it is that we curate our data to common standards – without these, we won’t even be able to compare apples with apples. For example, sometimes when I have compared data from different satellite providers, the same assets have different co-ordinates depending on the source of the data. As with manual ground surveying, remote measurements can be made relative to different points, many of which are assumed (rightly or wrongly) to be non-moving, stationary points. Aligning our standards, especially for geospatial and time data, would enable researchers and practitioners to cross-check the accuracy of data from different sources, and give asset managers access to a broader picture of the performance of their assets.
Automated processing
The ever increasing quantity of data prohibits manual analysis by human operators beyond the most basic tasks. Therefore, the only way to enable data processing at this large scale is automation, fusing together remote sensing data analysis with domain-specific contextual understanding. This is especially true when monitoring dynamic urban environments, and the potential risks and hazards in these contexts. Failure to react quickly is tantamount to not reacting at all, so automated processing enables asset owners to make timely changes to improve the resilience of their assets. Much more research and development is needed to increase the availability and reliability of automated data curation in this space.
If we fail to curate and manage data about our assets, then we fail to recognise and extract value from it. Without good data curation, we won’t be able to develop digital twins that provide the added value of insights across the whole life of assets. Data management forms the basis for connected digital twins, big data analysis, models, data mining and other activities, which then provide the opportunity for further insights and better decisions, creating value for researchers, asset owners and the public alike.
You can read more from the Satellites project by visiting their research profile.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
For more on the Digital Twin Journeys projects, visit the project's homepage on the CDBB website.
Our latest output from the Digital Twin Journeys series is a webcomic by David Sheppard. 'Now We Know' tells the story of a fictional building manager, Hank, who isn't sure how a building digital twin can help him in his work when the existing building management system tells him what he thinks he needs to know.
This same tension plays out around real-world digital twin development, as advocates point to the promise of perfect, right-time information to make better decisions, while others remain unconvinced of the value that digital twins can add. As the West Cambridge Digital Twin research team developed a prototype digital twin, they encountered this barrier, and found that working with the building-manager-as-expert to co-develop digital twin capability is the way to go. While they grounded iterations of the prototype in the building managers' present needs, they were also able to present the potential capability of the digital twin in ways that demonstrated its value. This is mirrored in the fictional narrative of the comic in the consultation between the Cambridge Digital Twin Team expert and the building manager, Hank.
Involving end users, like building occupants and managers, in the design and development of digital twins will ensure that they meet real-world information needs. Both people and data bring value to the whole-life management of assets. Many uncertainties exist in the built environment, and in many cases when pure data-driven solutions get into trouble (e.g. through poor data curation or low data quality), expertise from asset managers can bolster automated and data-driven solutions. Therefore, incorporating the knowledge and expertise of the frontline managers is crucial to good decision-making in building operations.
The benefits of this hybrid approach work in the other direction as well. While the knowledge developed by building managers is often lost when people move on from the role, the digital twin enables the curation of data over time, making it possible to operate buildings beyond the tenure of individual staff members based on quality data.
At present, the knowledge of experienced asset managers, in combination with existing building information, is greater than the insights that early-stage digital twins can offer. But that does not mean that the promise of digital twins is a false one. It simply means that there is still a long way to go to realise the vision of right-time, predictive information portrayed in the comic. Digital twin prototypes should be developed in partnership with these experienced stakeholders.
You can read more from the West Cambridge Digital Twin project by visiting their research profile, and find out about more Digital Twin Journeys on the project's homepage.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
When we travel by train, we expect that we will arrive at our destination safely and on time. The safety and performance of its service network are therefore key priorities for Network Rail. Our latest video in the Digital Twin Journeys series tells the story of how researchers have inherited two intensively instrumented bridges and are transforming that high volume and velocity of data into a digital twin showing the wear and pressures on the bridges, as well as other information that can help the asset owners predict when maintenance will be required and meet their key priorities.
Remote monitoring has several benefits over using human inspectors alone. Sensors reduce the subjectivity of monitoring: factors such as light levels, weather and variations in alertness can change the subjective assessments made by human inspectors. By monitoring the stresses on the bridge, sensors may also identify issues arising before visual inspection can detect them. A human inspector will still be sent to site to follow up on what the remote sensing has indicated, and engineers will of course still need to perform maintenance, but remote monitoring allows the asset owners to be smarter about how these human resources are deployed.
One important insight for Network Rail is based on more accurate data about the loads the bridges are experiencing, and the research team has developed a combination of sensors to create a Bridge Weigh-In-Motion (B-WIM) technology. As shown in the video, a combination of tilt sensors, bridge deformation sensors and axle location sensors is used to calculate the weight of passing trains. Because the accuracy of weight prediction is affected by changes in ambient humidity and temperature, sensors were added to detect these factors as well. Accelerometers were added to calculate rotational restraints at the boundary conditions to improve the accuracy of weight predictions, and cameras were installed so that passing trains can be categorised by analysing the video footage.
The digital twin of the Staffordshire Bridges centres on a physics-based model for conducting structural analysis and load-carrying capacity assessments. The site-specific information, such as realistic loading conditions obtained by the sensors, will be fed into the physics-based model to simulate the real structure and provide the outputs of interest. A digital twin replica of the structure will be able to provide bridge engineers with any parameter of interest anywhere on the structure, including in non-instrumented locations.
All of the sensors on these bridges produce a high volume of data at a high velocity. Without data curation, we could easily be overwhelmed by the volume of data they produce, but the research team is learning to narrow down to managing the right data in ways that provide the right insights at the right time. Working with Network Rail, this project will demonstrate the use of real-time data analytics integrated with digital twins to provide useful information to support engineers and asset managers to schedule proactive maintenance programmes and optimise future designs, increasing safety and reliability across their whole portfolio of assets.
You can read more from the Staffordshire Bridges project by visiting their research profile.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
To see more from the Digital Twin Journeys series, see the homepage on the CDBB website.
A new infographic, enabled by the Construction Innovation Hub, is published today to bring to life a prototype digital twin of the Institute for Manufacturing (IfM) on the West Cambridge campus. Xiang Xie and Henry Fenby-Taylor discuss the infographic and lessons learned from the project.
The research team for the West Cambridge Digital Twin project has developed a digital twin that allows various formats of building data to function interoperably, enabling better insights and optimisation for asset managers and better value per whole-life pound.
The graphic places the asset manager, as a decision-maker, at the centre of this process, and illustrates that each iteration improves the classification and refinement of the data. It also highlights challenges and areas for future development, showing that digital twin development is an ongoing journey, not a finite destination.
The process of drawing data from a variety of sources into a digital twin and transforming it into insights goes through an iterative cycle of:
Sense/Ingest - use sensor arrays to collect data, or draw on pre-existing static data, e.g. a geometric model of the building
Classify - label, aggregate, sort and describe data
Refine - select what data is useful to the decision-maker at what times and filter it into an interface designed to provide insights
Decide – use insights to weigh up options and decide on further actions
Act/Optimise - feed changes and developments to the physical and digital twins to optimise both building performance and the effectiveness of the digital twin at supporting organisational goals.
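As a very simplified sketch of the cycle above (illustrative only, not the project’s software), the iteration can be read as a loop of small functions:

```python
# Simplified sketch of the Sense -> Classify -> Refine -> Decide -> Act/Optimise cycle.
# The function bodies are placeholders; what matters is the shape of the iteration.

def sense() -> list:
    """Collect sensor readings and draw on static data such as the geometric model."""
    return [{"zone": "lab-1", "type": "temperature", "value": 24.5}]

def classify(raw: list) -> list:
    """Label, aggregate and sort incoming data."""
    return [dict(r, band="high" if r["value"] > 23 else "normal") for r in raw]

def refine(classified: list) -> list:
    """Keep only what the decision-maker needs right now."""
    return [r for r in classified if r["band"] == "high"]

def decide(insights: list) -> list:
    """Weigh up options and choose actions; a trivial rule stands in for human judgement."""
    return [f"reduce heating in {r['zone']}" for r in insights]

def act(actions: list) -> None:
    """Feed changes back to the building and to the twin, improving both over time."""
    for a in actions:
        print("Action:", a)

for _ in range(3):  # each pass through the loop is one iteration of improvement
    act(decide(refine(classify(sense()))))
```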
Buildings can draw data from static building models, quasi-dynamic building management systems and smart sensors, all with different data types, frequencies and formats. This means that a significant amount of time and resources are needed to manually search, query, verify and analyse building data that is scattered across different databases, and this process can lead to errors.
The aim of the West Cambridge Digital Twin research facility project is to integrate data from these various sources and automate the classification and refinement for easier, more timely decision-making. In their case study, the team has created a digital twin based on a common data environment (CDE) that is able to integrate data from a variety of sources. The Industry Foundation Classes (IFC) schema is used to capture the building geometry information, categorising building zones and the components they contain. Meanwhile, a domain vocabulary and taxonomy describe how the components function together as a system to provide building services.
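As a small, hedged illustration of what reading zone information from an IFC model can look like (this is not the project’s own code, and the file name and sensor mapping below are invented), the open-source ifcopenshell library can be used to list the building’s spaces:

```python
# Illustrative sketch: list building zones from an IFC model and attach hypothetical sensor feeds.
# "west_cambridge.ifc" and the sensor-to-zone mapping are invented for the example.

import ifcopenshell

model = ifcopenshell.open("west_cambridge.ifc")

# IfcSpace entities capture the building zones defined in the IFC geometry model.
zone_names = [space.LongName or space.Name for space in model.by_type("IfcSpace")]

# A separate domain taxonomy (here just a dict) maps live sensor streams onto those zones.
sensor_to_zone = {"co2-sensor-07": "Meeting Room 1.08", "power-sensor-12": "Open Plan Office 2.01"}

for sensor, zone in sensor_to_zone.items():
    status = "matched" if zone in zone_names else "not found in model"
    print(f"{sensor} -> {zone} ({status})")
```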
The key to achieving this aim was understanding the need behind the building management processes already in place. This meant using the expertise and experience of the building manager to inform the design of a digital twin that was useful and usable within those processes. This points to digital twin development as a socio-technical project, involving culture change, collaboration and alignment with strategic aims, as well as technical problem solving.
In the future, the team wants to develop twins that can enhance the environmental and economic performance of buildings. Further research is also needed to improve the automation at the Classify and Refine stages so they continue to get better at recognising what information is needed to achieve organisational goals.
You can read more from the West Cambridge Digital Twin project by visiting their research profile.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
To see more from the Digital Twin Journeys series, see the homepage on the CDBB website.
Digital twins enable asset owners to use better information at the right time to make better decisions. Exploring the early stages of a digital twin journey – understanding the information need – are Staffordshire Bridges researcher Dr Farhad Huseynov and Head of Information Management Henry Fenby-Taylor.
Network Rail manages over 28,000 bridges, many of which are more than 150 years old. The primary means of evaluating the condition of the bridges is through two assessment programmes: visual examination and Strength Capability Assessment. Every conceivable form of bridge construction is represented across Network Rail’s portfolio of assets, from simple stone slabs to large estuary crossings, such as the Forth Bridge. Managing a portfolio of this diversity with frequent and extensive assessments is a considerable challenge.
Condition monitoring
The current process for condition monitoring involves visual examination by engineers and takes place every year, along with a more detailed examination every six years. The visual inspection provides a qualitative outcome and does not directly predict the bridge strength; it is conducted to keep a detailed record of visible changes that may indicate deterioration. The load-carrying capacity of bridges is evaluated every five years through a Strength Capability Assessment, conducted in three levels of detail:
Level 1 is the simplest, using safety assumptions known to be conservatively over-cautious (i.e. 1-dimensional structural idealisation).
Level 2 involves refined analysis and better structural idealisation (i.e. grillage model). This level may also include the use of data on material strength based on recent material tests, etc.
Level 3 is the most sophisticated level of assessment, requiring bridge-specific traffic loading information based on a statistical model of the known traffic.
Understanding the information and insights that asset owners require helps shape what data is needed and how frequently it should be collected – two essential factors in creating infrastructure that is genuinely smart. During the discussions with Network Rail, the research team found that Level 3 assessment is only used in exceptional circumstances. This is because there is no active live train load monitoring system on the network; hence there is no site-specific traffic loading information available for the majority of bridges. Instead, bridges failing Level 2 assessment are typically put under weight and/or speed restrictions, reducing their ability to contribute to the network. This means that there is potentially huge value in providing Level 3 assessment at key sites with greater frequency.
Digital twins for condition assessment
The Stafford Area Improvement Programme was set up to remove a bottleneck in the West Coast Main Line that resulted in high-speed trains being impeded by slower local passenger and goods trains. To increase network capacity and efficiency, a major upgrade of the line was undertaken, including the construction of 10 new bridges. Working with Atkins, Laing O’Rourke, Volker Rail and Network Rail, a research team including the Centre for Smart Infrastructure and Construction (CSIC), the Centre for Digital Built Britain (CDBB) and the Laing O’Rourke (LOR) Centre for Construction Engineering and Technology at the University of Cambridge is collaborating to find a digital twin solution for effective condition monitoring.
Two bridges in the scheme were built with a variety of different sensors to create a prototype that would enable the team to understand their condition, performance and utilisation. Both bridges were densely instrumented with fibre optic sensors during construction, enabling the creation of a digital twin of the bridges in use. The digital twin’s objective is to provide an effective condition monitoring tool for asset and route managers, using the sensor array to generate data and derive insights.
Identifying challenges and solutions
Meetings were held with key stakeholders including route managers and infrastructure engineers at Network Rail to learn the main challenges they face in maintaining their bridge stock, and to discover what information they would ideally like to obtain from an effective condition monitoring tool. The team liaised closely with the key stakeholders throughout to make sure that they were developing valuable insights.
Through discussions with Network Rail about the team’s work on the two instrumented bridges in the Staffordshire Bridges project, the following fundamental issues and expected outcomes were identified:
A better understanding of asset risks: How can these be predicted? What precursors can be measured and detected?
A better understanding of individual asset behaviour
Development of sensor technology with a lifespan and maintenance requirement congruent with the assets that they are monitoring
How structural capability can be calculated instantly on receipt of new data from the field
Development of a holistic system for the overall health monitoring and prognosis of structural assets
Realistic traffic population data in the UK railway network. (Can this be predicted with sufficient accuracy for freight control and monitoring purposes?)
To address these issues, the team instrumented one of the bridges with the following additional sensors, which, combined, produce a rich dataset:
Rangefinder sensors to obtain the axle locations.
A humidity and temperature sensor to improve the accuracy of weight predictions against variations in ambient temperature.
Accelerometers to calculate rotational restraints at the boundary conditions and therefore improve the accuracy of weight predictions.
Cameras to categorise passing trains.
Data from these sensors feeds into a finite element model structural analysis digital twin that interprets the data and provides a range of insights about the performance of the bridge and the actual strain it has been put under.
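Purely as a hedged, conceptual illustration of how a reading might be turned into an insight (not the team’s physics-based model, and with invented correction and capacity figures), the flow could look like this:

```python
# Conceptual sketch of turning a bridge sensor reading into a load-capacity insight.
# The correction model, capacity figure and readings are invented for illustration.

def corrected_strain(raw_microstrain: float, temperature_c: float) -> float:
    """Apply a hypothetical linear temperature correction to a raw strain reading."""
    return raw_microstrain - 0.8 * (temperature_c - 20.0)

def utilisation(strain: float, capacity_microstrain: float = 650.0) -> float:
    """Express demand as a fraction of an assumed strain capacity."""
    return strain / capacity_microstrain

reading = {"microstrain": 410.0, "temperature_c": 6.0}
strain = corrected_strain(reading["microstrain"], reading["temperature_c"])
ratio = utilisation(strain)

print(f"Corrected strain: {strain:.0f} microstrain, utilisation: {ratio:.0%}")
if ratio > 0.9:
    print("Flag: schedule a closer assessment of this load case")
```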
Applying insights to other bridges
Significantly, information from the instrumented bridge sites is relevant to adjacent bridges on the same line. Having one bridge instrumented on a specific route would enable Level 3 assessment for other structures in their portfolio and those of other asset owners, including retaining walls, culverts, and other associated structures. Just as the new bridges relieved a service bottleneck, digital twins can resolve procedural and resource bottlenecks by enabling insights to be drawn about the condition of other assets that weren’t instrumented.
This is a valuable insight for those developing their own digital twins: where trains cannot divert course, any other bridge along the same stretch of track will experience the same strain from the same trains as the instrumented bridge. This will enable teams to implement a sensor network efficiently across their own assets.
One of the outcomes of the Staffordshire Bridges project is development towards a holistic approach for the overall health monitoring and prognosis of bridge stocks. Such changes improve workforce safety by reducing the requirement for costly site visits while maintaining a healthy bridge network.
You can read more from the Staffordshire Bridges project by visiting their research profile.
This research forms part of the Centre for Digital Built Britain’s (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF).
To keep up with the Digital Twin Journeys project, check out the Digital Twin Journeys home page.
Digital twins can help organisations achieve various goals. In some cases, the end goal is for buildings and infrastructure to last longer, use less energy, and be safer. In others, it is enhancing the lives of people who interact with the built environment and its services. As highlighted by the Gemini Principles, these are not mutually exclusive aims, so wherever you are on your digital twin journey, it is important to consider other perspectives on the hybrid digital and physical systems you create. How will your digital twin fit into a wider ecosystem that provides services to all kinds of people? How will your asset’s performance impact the wider built environment and those who need to navigate it? Whose lives will be better if you share data securely and purposefully?
In the first output from the Digital Twin Journeys series, the team working on the Smart Hospital of the Future research project, enabled by the Construction Innovation Hub, shares case studies from two smart hospitals and reflects on the innovations they saw during the COVID-19 pandemic. In this two-video mini-series, the research team shares insights about how existing digital maturity enabled these hospitals to respond to the pandemic in agile ways, transforming to a hybrid physical and digital model of care distributed across multiple sites. The team also explores how individual asset digital twins fit into a wider landscape of ecosystem services, guiding how we approach interoperability to achieve better outcomes.
These insights inform the way we think about the role of digital twins in the smart built environments of the future. Dr Nirit Pilosof reflects that, ‘Digital twin as a concept can promote the design of the new system, the design process of the built environment and the technologies, but also really help operate… the hybrid models looking at the physical and virtual environments together.’ If health care is enabled by connected digital twins, how could the design of hospitals – and whole cities – change?
In the videos, the team also discusses the limitations and ethics of services enabled by digital data and the use of digital technologies to improve staff safety, from isolated COVID wards to telemedicine. They frame service innovation as an iterative and collaborative process, informed by the needs of digital twin users, whether those are the asset owners and operators, or the people benefitting from the services they provide.
According to project co-lead Dr Michael Barrett, ‘The people who need to drive the change are the people who are providing the service.' After the COVID crisis, we can better recognise what we have learned from implementing digital services at scale, as more people than ever have relied on them. The team reflect that having the right people in the right roles enabled the smart hospitals in these cases to transform their services rapidly in response to the need. The same human and organisational infrastructure that is creating the smart hospital of the future is also needed to create the flexible, responsive built environments of the future.
Digital Twin Journeys can start from the perspective of available technology, from a problem-solving perspective, or from the perspective of users experiencing a service ecosystem. The smart hospitals project demonstrates the value of the latter two approaches. Hospital staff were instrumental in shaping the digitally-enabled service innovation to keep them safe and offer better services on and offsite, but project co-lead Dr Karl Prince points out how people accessing those services have to navigate a variety of different services in the built environment to get there. As we begin to connect digital twins together, we need to consider not just our own needs but the needs of others that digital twins can address.
For more on this project, including links to their publications, see the team’s research profile on the CDBB website. Keep up with the Digital Twin Journeys series on the CDBB website or here on the Digital Twin Hub blog.
This week marks the one-year anniversary of the National Digital Twin programme’s (NDTp) weekly Gemini Call – an online progress update from the NDTp with a feature spot for members of the Digital Twin Hub to showcase projects and share digital twin knowledge and experiences. DT Hub Chair, Sam Chorlton, tells us about the call, its beginnings and the latest move to the DT Hub.
There’s no doubt that the Gemini Call has been a game-changing addition to the NDTp. Brought about by CDBB CReDo Lead Sarah Hayes, it launched in September 2020 as part of the Gemini programme to inform our friends and followers about programme developments.
In its early days, the call also played a major part in opening the dialogue for the creation and delivery of NDTp projects, notably the Digital Twin Toolkit project, which resulted in a report and template package to help organisations build a business case for a digital twin. (We’re excited that the template has since been downloaded nearly 1,000 times.)
We could not have achieved the Toolkit project without the input of supporters across 17 DT Hub member organisations, and it was the members’ pro bono contributions and willingness to collaborate in this venture that enabled us to open up opportunities for knowledge sharing and discussions about digital twin journeys.
By the community, for the community
Today, the half-hour Gemini Call brings in around 60 participants each week, and over the year nearly 300 members have attended at least once. This year we have changed the agenda to allow for a feature focus by DT Hub member organisations to present digital twin projects or research, followed by a forum for Q&A. To date, there have been 16 digital twin presentations given by organisations worldwide. It is this free exchange of knowledge and open discussion between members of the community that is pushing progress on an international scale.
Sarah Hayes gives her take on the year, “We’re thrilled with what has happened with the call and we are telling everyone to come and get involved. We have over 2,000 members from government, public and private industry sectors and academia, and there is so much we can all learn from one another. Right now, there is a ground swell of connected digital twin development, and the DT Hub community can access this first hand.”
Gemini Call chair and Digital Energy leader at Arup, Simon Evans, said, “The call has been an excellent forum to bring industry together, whatever the experience or involvement with digital twins, and provide that regular knowledge-share and update on leading international digital twin developments.”
The Gemini Call sits centre stage within the DT Hub community as a member-focused exchange to help organisations increase their digital twin know-how - it is a focal point for the community as we experience and drive digital transformation. Come and join the conversation!
Progressing by sharing: a challenge
One year on, we set this challenge to our members: invite a guest from your network to the next Gemini Call so we can expand the discussion and break down the barriers to sharing data.
Become a DT Hub member
Sign up to join the Gemini Call
An update from the Information Management Framework Team of the National Digital Twin programme
The mission of the National Digital Twin programme (NDTp) is to enable the National Digital Twin (NDT), an ecosystem of connected digital twins, where high quality data is shared securely and effectively between organisations and across sectors. By connecting digital twins, we can reap the additional value that comes from shared data as opposed to isolated data: better information, leading to better decisions through a systems thinking approach, which in turn enable better outcomes for society, the economy and our environment.
The NDTp’s approach to data sharing is ambitious: we are aiming for a step change in data integration, one where the meaning is captured sufficiently accurately that it can be shared unambiguously. Conscious that “data integration” may justifiably mean different things to different people, we would like to shed some light on our current thinking and present one of the tools we are currently developing to help us articulate the need for this step change. It is a scheme for assessing the level of digitalisation of data items based upon four classifiers: the extent of what is known, media, form, and semantics. The scheme entails the eight levels below, which are likely to be fine-tuned as we continue to apply it to assess real data sets (a short illustrative sketch of assessing a data item against these classifiers follows their descriptions below):
Levels of digitalisation: towards grounded semantics
We trust that the first levels will resonate with your own experience of the subject:
Extent: as it is not possible to represent what is unknown, the scheme starts by differentiating the known from the unknown. By looking into the information requirements of an organisation, “uncharted territories” may be uncovered, which will need to be mapped as part of the digitalisation journey.
Media: information stored on paper (or only in brains) must be documented and stored in computer systems.
Form: information held in electronic documents such as PDFs, Word documents, and most spreadsheets, needs to be made computer readable, i.e. held as structured data, for example in databases and knowledge graphs.
Semantics: the progression towards “grounded semantics” and in particular the step from the “explicit” level to the “grounded” level is where, we believe, the fundamental change of paradigm must occur. To set the context for this step, it is worth going back to some fundamental considerations about the foundational model for the Integration Architecture of the NDT.
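By way of illustration, the minimal Python sketch below records an assessment of a single data item against the four classifiers. The enumerated values, the DataItemAssessment structure and the example data item are our own illustrative assumptions; they do not reproduce the scheme’s eight level definitions, which remain under development.

```python
# Illustrative sketch only: the classifier values below are assumptions for
# demonstration and are not the NDTp's official level definitions.
from dataclasses import dataclass
from enum import Enum


class Extent(Enum):
    UNKNOWN = "unknown"          # information need not yet identified ("uncharted")
    KNOWN = "known"              # information need identified and mapped


class Media(Enum):
    ANALOGUE = "paper or memory"  # held on paper or only in people's heads
    DIGITAL = "computer system"   # documented and stored electronically


class Form(Enum):
    DOCUMENT = "electronic document"  # PDF, Word document, most spreadsheets
    DATA = "structured data"          # database, knowledge graph


class Semantics(Enum):
    IMPLICIT = "implicit"   # meaning lives in conventions and people's heads
    EXPLICIT = "explicit"   # meaning captured in a data model or schema
    GROUNDED = "grounded"   # meaning anchored in a foundational ontology


@dataclass
class DataItemAssessment:
    """Assessment of one data item against the four classifiers."""
    name: str
    extent: Extent
    media: Media
    form: Form
    semantics: Semantics

    def summary(self) -> str:
        return (f"{self.name}: extent={self.extent.name}, media={self.media.name}, "
                f"form={self.form.name}, semantics={self.semantics.name}")


# Example: an asset register kept in a spreadsheet is known and digital,
# but still document-form and, at best, implicitly semantic.
asset_register = DataItemAssessment(
    name="asset register",
    extent=Extent.KNOWN,
    media=Media.DIGITAL,
    form=Form.DOCUMENT,
    semantics=Semantics.IMPLICIT,
)
print(asset_register.summary())
```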
From a Point-to-Point model to a Hub and Spoke model empowered by grounded semantics
A key challenge at present is how to share data effectively and efficiently. What tends to happen organically is that point-to-point interfaces are developed as requirements are identified between systems with different data models and perhaps different reference data. The problem is that this does not scale well. As more systems need to be connected, new interfaces are developed which share the same data with different systems, using different data models and reference data. Further, there are maintenance problems: when a system is updated, its interfaces are likely to need updating as well. This burden has been known to limit the effective sharing of data as well as imposing high costs.
The alternative is a hub and spoke architecture. In this approach, each system has just one interface to the hub, which is defined by a single data model and reference data that all systems translate into and out of. It is important to note that although the hub could be a central system, it does not need to be: it can be virtual, with data shared over a messaging system according to the hub data model and reference data. This reduces costs significantly and means that data sharing can be achieved more efficiently and effectively. Nor is this novel. The existing Industry Standard Data Models were developed to achieve exactly this model. What is new is the requirement to share data across sectors, not just within a single sector, and to meet more demanding requirements.
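To make the difference in scaling concrete, the sketch below shows one way a hub and spoke exchange might look in code: each system contributes a translator into a shared hub record, so N systems need N translator pairs rather than the N × (N − 1) interfaces of a point-to-point approach. The system names, record fields and HubRecord shape are hypothetical and are not drawn from any actual hub data model.

```python
# Illustrative hub and spoke sketch: every system translates into one shared
# hub data model instead of into every other system's model.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class HubRecord:
    """Hypothetical shared hub data model that all systems translate into and out of."""
    asset_id: str
    asset_type: str   # drawn from shared reference data
    location: str


# One inbound translator per system (the outbound direction would mirror this):
# N systems need N translator pairs, not N x (N - 1) point-to-point interfaces.
Translator = Callable[[dict], HubRecord]


def gis_to_hub(record: dict) -> HubRecord:
    return HubRecord(asset_id=record["gisRef"],
                     asset_type=record["featureClass"],
                     location=record["gridRef"])


def maintenance_to_hub(record: dict) -> HubRecord:
    return HubRecord(asset_id=record["equipment_no"],
                     asset_type=record["category"],
                     location=record["site_code"])


TO_HUB: Dict[str, Translator] = {
    "gis": gis_to_hub,
    "maintenance": maintenance_to_hub,
}

# Sharing data is then a matter of translating once into the hub model;
# any consumer reads the same HubRecord regardless of the source system.
shared = TO_HUB["gis"]({"gisRef": "BR-0042",
                        "featureClass": "bridge",
                        "gridRef": "TL 4458 5812"})
print(shared)
```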
Thus, the National Digital Twin programme is developing a Foundation Data Model (a pan-industry, extensible data model), enabling information to be taken from any source and amendments to be made on a single node basis.
But what would differentiate the NDT's common language - the Foundation Data Model - from existing industry data models?
Our claim is that the missing piece in most existing industry data models, which have “explicit semantics”, is an ontological foundation, i.e. “grounded semantics”.
Experience has shown us that although there is just one real world to model, there is more than one way to look at it, which gives rise to a variety of data models representing the same “things” differently and, eventually, to challenges for data integration. To tackle these challenges, we recommend clarifying ontological commitments (see our first conclusions on the choice of a Top Level Ontology for the NDT’s Foundation Data Model) so that a clear, accurate and consistent view of “the things that exist and the rules that govern them” can be established. We believe that analysing datasets through this lens and semantically enriching them is a key step towards better data integration.
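To show what such semantic enrichment might look like in practice, the short sketch below records the same simple fact three ways: with implicit semantics, with explicit semantics local to one data model, and with grounded semantics anchored to a top-level ontology. The categories and property names (tlo:PhysicalObject, tlo:stateOf and so on) are placeholders of our own and are not taken from the Foundation Data Model or the chosen Top Level Ontology.

```python
# Illustrative sketch of semantic enrichment: the same fact recorded three ways.
# The top-level categories and property names are placeholders, not the FDM.

# 1. Implicit semantics: meaning lives only in the column conventions of a spreadsheet.
row = {"ID": "P-101", "Type": "pump", "Status": "in service"}

# 2. Explicit semantics: the data model spells out what each field means,
#    but the meaning is still local to this one model.
explicit = {
    "entity_id": "P-101",
    "entity_class": "Pump",          # defined only in this model's own schema
    "lifecycle_state": "InService",
}

# 3. Grounded semantics: each element is additionally anchored to a category in a
#    top-level ontology, so two models describing the same thing can be reconciled
#    through their shared ontological commitments.
grounded = [
    # (subject, predicate, object) statements
    ("ex:P-101", "rdf:type", "ex:Pump"),
    ("ex:Pump", "tlo:subClassOf", "tlo:PhysicalObject"),          # grounding in the TLO
    ("ex:P-101_in_service", "rdf:type", "tlo:StateOfPhysicalObject"),
    ("ex:P-101_in_service", "tlo:stateOf", "ex:P-101"),
]

print(row)
print(explicit)
for triple in grounded:
    print(triple)
```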
As we begin to accompany organisations on their journey towards “grounded semantics”, we look forward to sharing more details of the learnings and emerging methodologies on the DT Hub. We hope this window into our current thinking, which is by no means definitive, has given you a good sense of where the positive disruption will come from. We are happy to see our claims challenged, so please do share your thoughts and ask questions.