Showing results for tags 'Integration Architecture'.

Found 8 results

  1. Marek Suchocki

    Flexible Digital Twins

    A digital twin is a digital representation of something that exists in the physical world (be it a building, a factory, a power plant, or a city) that can also be dynamically linked to the real thing through sensors collecting real-time data. This dynamic link differentiates digital twins from the digital models created by BIM software, enhancing those models with live operational data. Because a digital twin is a dynamic digital reflection of its physical counterpart, it possesses operational and behavioural awareness. This enables the digital twin to be used in countless ways, such as tracking construction progress, monitoring operations, diagnosing problems, simulating performance, and optimising processes.

    Structured data requirements from the investor are crucial for the development of a digital twin. Currently, project teams spend a lot of time putting data into files that is not useful during project development or, ultimately, to the owner: sometimes it is wrong, sometimes too sparse, and sometimes an overload of unnecessary data. At the handover phase, unstructured data can leave owner/operators with siloed data and systems, inaccurate information, and poor insight into the performance of a facility. Data standards such as ISO 19650 directly target this problem: at a simple level, they require an appreciation of the asset data lifecycle, which starts with defining the need so that the right data can be prepared. Implementing a project CDE helps ensure that the prepared data and information is managed and flows easily between teams and project phases, through to completion and handover. An integrated, connected data environment can subsequently leverage this approved project data alongside other asset information sources to deliver the foundation of a valuable, usable digital twin.

    To develop this connected digital twin, investors and their supply chains can appear to be presented with two choices: an off-the-shelf proprietary solution tied to one vendor, or building a one-off solution with the risk of long-term support and maintenance challenges. This binary perspective does not hold, however, if industry platforms and readily available existing integrations are leveraged to create a flexible custom digital twin. Autodesk has provided its customer base with the solutions to develop custom data integrations over many years, commencing with a reliable common data environment solution. Many of these project CDEs have subsequently matured into functional and beneficial digital twins because of their structured data foundation. Using industry standards, open APIs and a plethora of partner integrations, Autodesk's Forge Platform, Construction Cloud and, more recently, Tandem enable customers to build the digital twin they need without fear of near-term obsolescence or over-commitment to one technology approach. Furthermore, partnerships with key technology providers such as ESRI and Archibus extend solution options as well as enhancing long-term confidence in any developed digital twin.

    The promises of digital twins are certainly alluring. Data-rich digital twins have the potential to transform asset management and operations, giving owners new insights to inform their decision-making and planning. Although digital twin technologies and industry practice are still in their infancy, it is clear that the ultimate success of digital twins relies on connected, common and structured data sources based on current information management standards, coupled with the adoption of flexible technology platforms that permit modification, enhancement or component exchange as the digital twin evolves, rather than committing up front to one data standard or solution strategy.
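    To make the "define the need first" point concrete, here is a minimal sketch in Python of checking a handover record against owner-defined data requirements. The field names and rules are illustrative assumptions for this sketch only, not an ISO 19650 schema or any Autodesk API.

```python
# Hypothetical sketch: checking a handover record against owner-defined data
# requirements ("defining the need" before data preparation). Field names and
# rules are illustrative assumptions, not a standard schema.

REQUIRED_FIELDS = {
    "asset_id": str,
    "classification": str,   # e.g. the classification code the owner asked for
    "location": str,
    "install_date": str,     # expected as an ISO 8601 date string
    "manufacturer": str,
}

def validate_handover_record(record: dict) -> list[str]:
    """Return a list of issues; an empty list means the record meets the stated need."""
    issues = []
    for field, expected_type in REQUIRED_FIELDS.items():
        value = record.get(field)
        if value in (None, ""):
            issues.append(f"missing required field: {field}")
        elif not isinstance(value, expected_type):
            issues.append(f"wrong type for {field}: expected {expected_type.__name__}")
    # Flag data the owner never asked for - the "overload of unnecessary data"
    extras = set(record) - set(REQUIRED_FIELDS)
    if extras:
        issues.append(f"unrequested fields supplied: {sorted(extras)}")
    return issues

# Example: a record handed over at project completion
print(validate_handover_record({
    "asset_id": "AHU-01",
    "classification": "Ss_60_40",
    "location": "Plant Room 2",
    "install_date": "2021-06-30",
    "manufacturer": "",               # required but left empty
    "van_colour": "white",            # noise the owner did not request
}))
```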
  2. As described in the Pathway to the Information Management Framework, the Integration Architecture is one of the three key technical components of the Information Management Framework (IMF), along with the Reference Data Library and the Foundation Data Model. It consists of the technology and protocols that will enable the managed sharing of data across the National Digital Twin (NDT). The IMF Integration Architecture (IA) team began designing and building the IA in April 2021. This blog gives an insight into its progress to date.

    Principles

    First, it is worth covering some of the key principles guiding the design and build of the IA:

    Open Source: It is vital that the software and technology that drive the IA are not held in proprietary systems that raise barriers to entry and prevent community engagement and growth. The IA will be open source, allowing everyone to utilise the capability and drive it forward.

    Federated: The IA does not create a single monolithic twin. When Data Owners establish their NDT Node, the IA will allow them to publish details of the data they want to share to an NDT data catalogue; other users can then browse, select and subscribe to the data they need to build a twin that is relevant to their needs. This subscription is on a node-to-node basis, not via a central twin or data hub, and Owners can specify the access, use or time constraints they wish to apply to each subscriber. Once subscribed, the IA takes care of authenticating users and updating and synchronising data between nodes.

    Data-driven access control: To build trust in the IA, Data Owners must be completely comfortable that they retain full control over who can access the data they share to the NDT. The IA will use an ABAC (attribute-based access control) security model to allow owners to specify in fine-grained detail who can access their data, and permissions can be added or revoked simply and transparently. This is implemented as data labels that accompany the data, providing instructions to receiving systems on how to protect it (a minimal sketch of this idea follows at the end of this post).

    IMF Ontology Driven: NDT information needs to be accessed seamlessly. The NDT needs a common language so that data can be shared consistently, and this language is being described in the IMF Ontology and Foundation Data Model being developed by another element of the IMF team. The IA team is working closely with them to create capabilities that will automate conversion of incoming data to the ontology and transact it across the architecture without requiring further "data wrangling" by users.

    Simple Integration: To minimise the risk of implementation failure or poor engagement due to architectural incompatibility or a high cost of implementation, the IA needs to be simple to integrate into client environments. The IA will use well-understood architectural patterns and technologies (for example REST and GraphQL) to minimise local disruption when data owners create an NDT Node, and to ensure that, once implemented, the ongoing focus of owner activity is on where the value is, the data, rather than on maintaining the systems that support it.

    Cloud and On-Prem: An increasing number of organisations are moving operations to the cloud, but the IA team recognises that this may not be an option for everyone. Even when cloud strategies are adopted, the journey can be long and difficult, with hybridised options potentially used in the medium to long term. The IA will support all these operating modes, ensuring that membership of the NDT does not negatively impact existing or emerging environment strategies.

    Open Standards: For similar reasons to making the IA open source, the IA team is committed to ensuring that data in the NDT IA are never locked in or held in inaccessible proprietary formats.

    What has the IA team been up to this year?

    The IMF chose to adopt the existing open-source Telicent CORE platform to handle the ingest, transformation and publishing of data to the IMF ontology within NDT Nodes, and the focus has been on beginning to build and prove some of the additional technical elements required to make the cross-node transactional and security elements of the IA function. Key focus areas were:

    • Creation of a federation capability to allow Asset Owners to publish, share and consume data across nodes
    • Adding ABAC security to allow Asset Owners to specify fine-grained access to data
    • Building a 'Model Railway' to create an end-to-end test bed for the NDT Integration Architecture, and to prove out deployment in containers
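    As a minimal illustration of the "data labels accompany the data" principle above, here is a short Python sketch of an attribute-based (ABAC) check that a receiving system might apply before releasing shared data. The attribute names and policy structure are assumptions made for this sketch, not the IA's actual label schema.

```python
# Hypothetical sketch of ABAC-style data labels travelling with shared data.
# Attribute names and the policy structure are illustrative assumptions,
# not the Integration Architecture's actual schema.

from dataclasses import dataclass

@dataclass
class DataLabel:
    owner: str
    allowed_organisations: set[str]     # who the owner permits to receive the data
    allowed_purposes: set[str]          # what the data may be used for
    expires: str | None = None          # optional time constraint (ISO 8601 date)

@dataclass
class Subscriber:
    organisation: str
    purpose: str

def access_permitted(label: DataLabel, subscriber: Subscriber, today: str) -> bool:
    """Receiving systems enforce the label's instructions before releasing the data."""
    if label.expires is not None and today > label.expires:   # ISO dates compare lexically
        return False
    return (subscriber.organisation in label.allowed_organisations
            and subscriber.purpose in label.allowed_purposes)

label = DataLabel(owner="asset-owner-a",
                  allowed_organisations={"org-b"},
                  allowed_purposes={"flood-planning"},
                  expires="2022-12-31")

print(access_permitted(label, Subscriber("org-b", "flood-planning"), "2022-06-01"))  # True
print(access_permitted(label, Subscriber("org-c", "flood-planning"), "2022-06-01"))  # False: not permitted
```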
  3. You're invited to a webinar on 2nd March to find out how collaboration through connected digital twins can help plan resilient cities and infrastructure. The National Digital Twin programme has developed the Climate Resilience Demonstrator (CReDo), a pioneering climate change adaptation digital twin project that provides a practical example of how connected data can improve climate adaptation and resilience across a system of systems. Watch the film Tomorrow Today, and try the interactive app to see what CReDo has been working towards.

    The CReDo team will use synthetic data developed through the project to show how it is possible to better understand infrastructure interdependencies and increase resilience. Join the webinar to hear from the CReDo team about the work that has happened behind the scenes of developing a connected digital twin.

    CReDo is the result of a first-of-its-kind collaboration between Anglian Water, BT and UK Power Networks, in partnership with several academic institutions. The project has been funded by Connected Places Catapult (CPC) and the University of Cambridge, and technical development was led by CMCL and the Hartree Centre. This collaboration produced a demonstrator that looks at the impact of flooding on energy, water and telecoms networks. CReDo demonstrates how owners and operators of these networks can use secure, resilient information sharing across sector boundaries to adapt to and mitigate the effect of flooding on network performance and service delivery. It also provides an important template to build on and apply to other challenges, such as climate change mitigation and Net Zero.

    Hear from members of the CReDo team, including the asset owners, CPC and the technical development team, about the demonstrator they have delivered and the lessons they learned. If you're interested in using connected digital twins to forge the path to Net Zero, then this event is for you. Register for our end-of-project webinar on 2nd March, 10:30-12:00: https://www.eventbrite.co.uk/e/credo-collaborating-and-resilience-through-connected-digital-twins-tickets-228349628887
  4. A new infographic, enabled by the Construction Innovation Hub, is published today to bring to life a prototype digital twin of the Institute for Manufacturing (IfM) on the West Cambridge campus. Xiang Xie and Henry Fenby-Taylor discuss the infographic and lessons learned from the project.

    The research team for the West Cambridge Digital Twin project has developed a digital twin that allows various formats of building data to function interoperably, enabling better insights and optimisation for asset managers and better value per whole-life pound. The graphic places the asset manager, as decision-maker, at the heart of this process, and illustrates that each iteration improves the classification and refinement of the data. It also highlights challenges and areas for future development, showing that digital twin development is an ongoing journey, not a finite destination.

    The process of drawing data from a variety of sources into a digital twin and transforming it into insights goes through an iterative cycle (sketched in code at the end of this item):

    • Sense/Ingest - use sensor arrays to collect data, or draw on pre-existing static data, e.g. a geometric model of the building
    • Classify - label, aggregate, sort and describe data
    • Refine - select what data is useful to the decision-maker at what times and filter it into an interface designed to provide insights
    • Decide - use insights to weigh up options and decide on further actions
    • Act/Optimise - feed changes and developments back to the physical and digital twins to optimise both building performance and the effectiveness of the digital twin at supporting organisational goals.

    Buildings can draw data from static building models, quasi-dynamic building management systems and smart sensors, all with different data types, frequencies and formats. This means that a significant amount of time and resource is needed to manually search, query, verify and analyse building data scattered across different databases, and this process can lead to errors. The aim of the West Cambridge Digital Twin research facility project is to integrate data from these various sources and automate the classification and refinement for easier, more timely decision-making.

    In their case study, the team has created a digital twin based on a common data environment (CDE) that is able to integrate data from a variety of sources. The Industry Foundation Classes (IFC) schema is used to capture the building geometry information, categorising building zones and the components they contain. Meanwhile, a domain vocabulary and taxonomy describe how the components function together as a system to provide building services. The key to achieving this aim was understanding the need behind the building management processes already in place. This meant using the expertise and experience of the building manager to inform the design of a digital twin that was useful and usable within those processes. This points to digital twin development as a socio-technical project, involving culture change, collaboration and alignment with strategic aims, as well as technical problem solving.

    In the future, the team wants to develop twins that can enhance the environmental and economic performance of buildings. Further research is also needed to improve the automation at the Classify and Refine stages so they continue to get better at recognising what information is needed to achieve organisational goals. You can read more from the West Cambridge Digital Twin project by visiting their research profile.

    This research forms part of the Centre for Digital Built Britain's (CDBB) work at the University of Cambridge. It was enabled by the Construction Innovation Hub, of which CDBB is a core partner, and funded by UK Research and Innovation (UKRI) through the Industrial Strategy Challenge Fund (ISCF). To see more from the Digital Twin Journeys series, see the homepage on the CDBB website.
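    The cycle above lends itself to a small code illustration. The following Python sketch is a toy, single pass through Sense/Ingest, Classify, Refine, Decide and Act/Optimise; the readings, threshold and actions are invented placeholders, not the project's actual pipeline, which integrates IFC geometry, building management system feeds and sensor arrays.

```python
# Toy sketch of the Sense/Ingest -> Classify -> Refine -> Decide -> Act/Optimise
# cycle. Readings, thresholds and actions are invented placeholders.

def sense() -> list[dict]:
    # Ingest live sensor readings (or draw on pre-existing static data)
    return [{"zone": "IfM-first-floor-office", "type": "temperature", "value": 27.4},
            {"zone": "IfM-workshop", "type": "temperature", "value": 21.0}]

def classify(readings: list[dict]) -> dict[str, list[dict]]:
    # Label, aggregate and sort the raw data, here simply grouped by building zone
    by_zone: dict[str, list[dict]] = {}
    for r in readings:
        by_zone.setdefault(r["zone"], []).append(r)
    return by_zone

def refine(by_zone: dict[str, list[dict]]) -> list[str]:
    # Keep only what is useful to the decision-maker right now: overheating zones
    return [zone for zone, rs in by_zone.items()
            if any(r["type"] == "temperature" and r["value"] > 26.0 for r in rs)]

def decide(overheating_zones: list[str]) -> list[str]:
    # Use the insight to weigh up options and choose further actions
    return [f"increase ventilation in {zone}" for zone in overheating_zones]

def act(actions: list[str]) -> None:
    # Feed changes back to the physical building and to the digital twin itself
    for action in actions:
        print("action:", action)

# One pass through the loop; in practice the cycle runs continuously
act(decide(refine(classify(sense()))))
```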
  5. I came across an EU-funded project, "xr4all", which provides a development environment (among other things) for XR projects. The details are here: https://dev.xr4all.eu Would it be possible for the NDT programme to provide a similar platform for the DT community in the UK? It would help foster rapid collaboration and development of the DT ecosystem. Thanks and kind regards, Ajeeth
  6. 122 downloads

    As set out in the Pathway to the Information Management Framework, the Integration Architecture is one of the three key technical components of the Information Management Framework, along with the Reference Data Library and the Foundation Data Model. It consists of the protocols that will enable the managed sharing of data across the National Digital Twin. In the Integration Architecture Pattern and Principles paper, the National Digital Twin programme's (NDTp) technical team sets out key architectural principles and functional components for the creation of this critical technical component. The team defines a redeployable architectural pattern that allows the publication, protection, discovery, query and retrieval of data that conforms to the NDT's ecosystem of Reference Data Libraries and the NDT's Foundation Data Model.

    The paper will take you through:

    A requirement overview: a series of use cases that the Integration Architecture needs to enable, including:
    • routine operational use cases, where data from a diverse set of organisations can be shared and analysed for a single purpose (e.g. to support legal and regulatory requirements)
    • the ability to respond to an emergency, pulling together data from across different communities in a way that was not foreseen before the incident that caused the requirement
    • 'business as usual' NDT maintenance use cases, such as publishing a Digital Twin or adding a user to the NDT ecosystem.

    Architectural principles: key architectural principles that must be adhered to, regardless of the type of architecture that is implemented, including:
    • Data quality: data quality needs to be measurable and published with the data itself
    • Privacy of the published data: the Integration Architecture shall ensure that data is shared and used only according to the conditions under which it was published
    • Security: ensuring that all data and functions are secure from bad actors. Encryption will be a particularly key aspect of the security features in the Integration Architecture.

    Recommended integration architecture pattern: three general architectural pattern options are explored in the paper (centralised, distributed, and federated). The benefits and concerns of each pattern are discussed with respect to the requirements. The recommended architectural pattern is a hybrid of these three approaches, centralising certain functions whilst distributing and federating others. The recommended pattern is intended to allow datasets to be shared locally (i.e. within an NDT Node) while also allowing inter-node discovery, authorisation and data sharing to take place. NDT Nodes may be established by individual organisations, regulators and industry associations, or service providers, and will be able to handle Digital Twins on behalf of their constituent organisations and provide a secure sharing boundary. In the recommended architecture, datasets are published by the data owner (1) and made available to the organisations within the community of interest; in addition, an event is issued to register the publication with the Core (2). When queries are submitted (A), the dataset can then be discovered by organisations in other communities of interest (B) and retrieved where appropriate (C). The release, discovery and retrieval are carried out according to the authorisation service, so that access is controlled as specified by the data owner. (A minimal sketch of this flow follows at the end of this item.)

    Detail of the functional components: the Core Services are likely to be quite thin, comprising mainly:
    • a master NDT Catalogue that holds the location of available NDT Datasets across the ecosystem
    • the master FDM/RDL, which will synchronise with the subset that is relevant for each NDT Node
    • a publish/subscribe model to propagate data changes to parties that have an interest and an appropriate contract in place.
    The Core and each NDT Node will interact through a microservice layer with which they must be compliant.

    Next steps: the paper concludes with a list of ten key tasks to further develop the Integration Architecture components. We will keep you informed of progress and, in the meantime, we look forward to hearing your questions and comments.
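    For readers who find a code sketch easier to follow than prose, here is a minimal Python illustration of the recommended pattern's data flow, using the paper's step labels: publish (1), register with the Core (2), query (A), discover (B), retrieve (C). The class and method names are assumptions made for this sketch, not the NDT's actual interfaces.

```python
# Minimal sketch of the recommended pattern's data flow: publish (1), register
# with the Core (2), query (A), discover (B), retrieve (C). Class and method
# names are illustrative assumptions, not actual NDT interfaces.

class CoreCatalogue:
    """Thin Core service: holds only the location of datasets, never the data."""
    def __init__(self):
        self.entries: dict[str, str] = {}            # dataset id -> owning node name

    def register(self, dataset_id: str, node_name: str) -> None:     # step (2)
        self.entries[dataset_id] = node_name

    def discover(self, keyword: str) -> list[str]:                   # step (B)
        return [d for d in self.entries if keyword in d]

class NDTNode:
    def __init__(self, name: str, catalogue: CoreCatalogue):
        self.name, self.catalogue = name, catalogue
        self.datasets: dict[str, dict] = {}
        self.authorised: dict[str, set[str]] = {}    # dataset id -> nodes allowed access

    def publish(self, dataset_id: str, data: dict, share_with: set[str]) -> None:   # step (1)
        self.datasets[dataset_id] = data
        self.authorised[dataset_id] = share_with
        self.catalogue.register(dataset_id, self.name)

    def retrieve(self, dataset_id: str, requester: str) -> dict | None:             # step (C)
        # Release is governed by the authorisation the owner specified at publication
        if requester in self.authorised.get(dataset_id, set()):
            return self.datasets[dataset_id]
        return None

core = CoreCatalogue()
owner_node = NDTNode("water-utility-node", core)
consumer = "power-network-node"

owner_node.publish("flood-asset-risk", {"substations_at_risk": 3}, share_with={consumer})
for dataset_id in core.discover("flood"):                            # steps (A) and (B)
    print(owner_node.retrieve(dataset_id, requester=consumer))       # {'substations_at_risk': 3}
```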
  7. As set out in the Pathway to the Information Management Framework, the Integration Architecture is one of the key technical components of the Information Management Framework. It consists of the protocols that will enable the managed sharing of data across the National Digital Twin. In the recently released Integration Architecture Pattern and Principles paper, the NDTp's technical team sets out key architectural principles and functional components for the creation of this critical technical component. The team defines a redeployable architectural pattern that allows the publication, protection, discovery, query and retrieval of data that conforms to the NDT's ecosystem of Reference Data Libraries and the NDT's Foundation Data Model.

    Download the Integration Architecture Pattern and Principles paper

    The Integration Architecture Pattern and Principles paper will take you through:

    A requirement overview: a series of use cases that the Integration Architecture needs to enable, including:
    • routine operational use cases, where data from a diverse set of organisations can be shared and analysed for a single purpose (e.g. to support legal and regulatory requirements)
    • the ability to respond to an emergency, pulling together data from across different communities in a way that was not foreseen before the incident that caused the requirement
    • 'business as usual' NDT maintenance use cases, such as publishing a Digital Twin or adding a user to the NDT ecosystem.

    Architectural principles: key architectural principles that must be adhered to, regardless of the type of architecture that is implemented, including:
    • Data quality: data quality needs to be measurable and published with the data itself
    • Privacy of the published data: the Integration Architecture shall ensure that data is shared and used only according to the conditions under which it was published
    • Security: ensuring that all data and functions are secure from bad actors. Encryption will be a particularly key aspect of the security features in the Integration Architecture.

    Recommended integration architecture pattern: three general architectural pattern options are explored in the paper (centralised, distributed, and federated). The benefits and concerns of each pattern are discussed with respect to the requirements. The recommended architectural pattern is a hybrid of these three approaches, centralising certain functions whilst distributing and federating others. The recommended pattern is intended to allow datasets to be shared locally (i.e. within an NDT Node) while also allowing inter-node discovery, authorisation and data sharing to take place. NDT Nodes may be established by individual organisations, regulators and industry associations, or service providers, and will be able to handle Digital Twins on behalf of their constituent organisations and provide a secure sharing boundary. In the recommended architecture, datasets are published by the data owner (1) and made available to the organisations within the community of interest; in addition, an event is issued to register the publication with the Core (2). When queries are submitted (A), the dataset can then be discovered by organisations in other communities of interest (B) and retrieved where appropriate (C). The release, discovery and retrieval are carried out according to the authorisation service, so that access is controlled as specified by the data owner.

    Detail of the functional components: the Core Services are likely to be quite thin, comprising mainly:
    • a master NDT Catalogue that holds the location of available NDT Datasets across the ecosystem
    • the master FDM/RDL, which will synchronise with the subset that is relevant for each NDT Node
    • a publish/subscribe model to propagate data changes to parties that have an interest and an appropriate contract in place (sketched briefly at the end of this item).
    The Core and each NDT Node will interact through a microservice layer with which they must be compliant.

    Next steps: the paper concludes with a list of 10 key tasks to further develop the Integration Architecture components. We will keep you informed of progress and, in the meantime, we look forward to hearing your questions and comments on the paper!
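    The publish/subscribe element mentioned under the functional components can also be sketched briefly. The following Python fragment, with invented names, shows the general shape of propagating a dataset change only to parties that have registered an interest; it is an assumption-laden illustration, not the Core's actual implementation.

```python
# Hypothetical sketch of the Core's publish/subscribe model: dataset changes
# are propagated only to parties with a registered interest. Names are
# illustrative assumptions, not the actual Core Services.

from collections import defaultdict
from typing import Callable

class ChangeBroker:
    def __init__(self):
        # dataset id -> change handlers belonging to subscribed NDT Nodes
        self.subscribers: defaultdict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, dataset_id: str, on_change: Callable[[dict], None]) -> None:
        self.subscribers[dataset_id].append(on_change)

    def publish_change(self, dataset_id: str, change: dict) -> None:
        # Only parties with an interest (and an appropriate contract) receive the update
        for handler in self.subscribers[dataset_id]:
            handler(change)

broker = ChangeBroker()
broker.subscribe("flood-asset-risk",
                 lambda change: print("power-network-node syncing change:", change))

# The owning node announces a change; subscribed nodes bring their copies up to date
broker.publish_change("flood-asset-risk", {"substations_at_risk": 4})
```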