Project Coordination | ICCS
Ensure the achievement of project objectives with high quality and within the predefined time-schedule and budget constraints; Perform continuous monitoring of the project progress, assure the quality and innovation of all project deliverables and outputs.
Handle the procedures for receiving the EC financial contribution and distributing it among the consortium; Provide administrative management, including signature of the Grant Agreement with the EC, collection of certificates on financial statements from partners, keeping records on expenditure, and distributing EC payments.
Maintain the communication channels among the participants and the EC regarding the regular and updated technical and administrative reporting of the project progress and efforts.
Coordinate the technical activities of the project, ensuring coordination among technical WP leaders and early identification and resolution of implementation issues.
Define and monitor the project risk and quality plan and procedures.
Compile and consolidate the project data management plan in at least three phases and on demand when needed; Guarantee that all contractual, legal, ethical, security, societal and gender equality issues related to the project research are properly considered and that any relevant conventions are respected.
The project coordination will develop management processes and implement them for an efficient and successful project execution, following the PMBOK principles and best practices. The following tasks will be performed:
- Preparation of technical and financial interim reports on a six-month basis (see below), so that deliverables and milestones of each period are closely followed and technical activities remain in line with resource consumption.
- Preparation of the external EC reports (intermediate in M18 and final report in M36) according to the EC guidelines, monitoring technical and financial activities as well as audit reports where applicable; Distribution of the EC financial contribution and coordination and control of costs.
- Organisation of consortium meetings (project board, technical).
- Communication with EC and submission of deliverables to the EC.
- Ensuring the communication between partners at all levels and the proper information exchange.
- Support of the evolution of the work plan (time plan consistency, critical tasks highlighted), including in collaboration with T1.2 for technical matters.
This task will monitor and coordinate the technical progress of each WP in order to ensure that the respective outcomes are in line with the needs of the other WPs and consistent with the project objectives. This procedure will involve the Project Technical Coordinator (TC) and the various WP Leaders (WPLs), where the latter are responsible for the corresponding deliverables and milestones. WPLs will provide periodic technical reports on a quarterly basis, identifying, on a task-by-task basis, progress, deviations, challenges and issues. Technical Management will work to ensure that technical objectives and deadlines are respected, while coordinating the resolution of critical technical issues. Physical or virtual meetings between the TC and the WPLs will provide oversight of work progress, monitor work completion, and check conformance to planned deadlines. Particular attention will be paid to WP resource utilization and the WPLs’ commitment to the project's technical goals.
This task includes Quality Management, aiming to ensure that all deliverables and project procedures comply with the quality objectives before they are issued. The quality assurance process will be defined, and monitoring of milestones as well as KPIs (key performance indicators) will be performed. Deliverables, including progress reports, will be controlled so that they are submitted to the EC on time and with maximum quality. A Quality Management Plan (QMP) will be produced, establishing a quality control and criteria strategy and identifying the quality objectives. This will include a peer review system consisting of representatives of each partner, and quality criteria in the form of a methodology that will be followed throughout the project duration. A Risk Management Plan (RMP) will also be developed, specifying in detail how risks are classified and handled. The RMP will classify all risks and present in detail the methods used to manage risks, as well as relevant contingency measures. The resolution of risks will be based upon evaluation of technical progress, schedule and cost impact. The RMP will be kept under review during the project duration, updated in plenary meetings and at the end of each reporting period. The RMP and QMP will be included in the EC reports (intermediate and final).
In this task, the data handled throughout the project will be analysed to ensure their correct treatment according to legal frameworks. It will ensure that the data collected during the project are accessed, saved, shared and reused following best practices and specific standards that will be identified and periodically included in a Data Management Plan (see Sections 2.2.4, 5). Any personal data collected during the project’s participatory activities (WP2, WP7, WP8) will be compliant with the GDPR and, if applicable, national data privacy regulations, and will be obtained through informed consent. Moreover, regarding secondary use of data from GEOSS or other sources, if and when such data comprise personal information (e.g. potentially socio-economic or VHR data), appropriate anonymization and aggregation methods will be applied so that the information required for each CC application is obtained, while person-specific details and associated information are not retained. Additionally, ethical and privacy protection will play an important role in ensuring compliance with the GDPR, especially within the scope of the project's pilots, where work will in some cases involve more sensitive information.
Co-Design of Climate Change applications based on GEOSS | DRAXIS
Create the personas, user stories and scenarios for the CC applications using co-design principles.
Consolidate user needs into user requirements that are related to both EIFFEL tools and applications.
Prepare and appropriately classify the specifications for the EIFFEL tools and applications, and document them in a traceability matrix that will be updated until the end of the project.
Create a document that describes the system architecture and the interfaces among tools and applications.
Devise an integration plan early on in the project, which is to be reviewed once the alpha versions of EIFFEL technical components are in place and prior to the integration process starting.
In this task, the EIFFEL user stories and scenarios shall be compiled. The user stories will be inspired by the set of EIFFEL personas, created by the task leader with the respective CoP/pilot leader, and will be crafted in a two-step approach. The first step will be a co-design process implemented during and after the focus groups taking place until M3. The focus groups shall consist of small groups of 6-10 participants, corresponding, in terms of profile, expertise and interest, to the stakeholders of each CC application. The focus groups shall include coordinated brainstorming sessions and hands-on sessions for the creation of user stories, facilitated by modern collaboration tools such as Trello or similar. The use of these tools shall allow for shared shaping and discussion of a user story, giving the flexibility for off-line improvements and additions after the focus groups conclude. The process will be facilitated by a focus group moderator, and a backlog of exemplary user stories will be circulated to participants prior to the focus groups. In the second step, the user stories shall be further elaborated into high-level scenarios. The user stories and high-level scenarios shall be made publicly available, shared on the project wiki or website. This way, the project outputs shall be made visible from an early stage, engaging CoPs and external stakeholders and allowing for maximum visibility. When this is completed, this task will collect needs and requirements based on the derived scenarios; an online survey circulated to the project network of stakeholders will be utilized in order to expand the user base. With the support of the CC applications’ task leaders (IHE, i-BEC, PRO, NOA, SYKE), the needs will be transformed into requirements using a more formal language and taxonomised appropriately (e.g. with unique id, type, category, persona/scenario they relate to, associated EIFFEL tool or application, functional/non-functional, etc.).
The formulated list of requirements will be properly framed to assist the design of the EIFFEL tools as well as the applications.
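Purely as an illustration of the taxonomy above, a requirement record could carry the listed attributes (unique id, type, category, persona, associated component); every field name and value below is hypothetical and would be fixed by the actual T2.1 taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    # Illustrative fields only; the real taxonomy will be agreed in T2.1.
    req_id: str       # unique id, e.g. "REQ-023"
    description: str
    req_type: str     # "functional" | "non-functional"
    category: str     # e.g. "data-access", "visualisation", "performance"
    persona: str      # persona/scenario the requirement derives from
    component: str    # associated EIFFEL tool or application

reqs = [
    Requirement("REQ-001", "Map-based spatial search", "functional",
                "search", "Urban planner", "cognitive search tool"),
    Requirement("REQ-002", "Results returned within 2 s", "non-functional",
                "performance", "Urban planner", "cognitive search tool"),
]

# Filtering by attribute is then trivial, e.g. all functional requirements:
functional = [r.req_id for r in reqs if r.req_type == "functional"]
```

Records structured this way make the later mapping into a traceability matrix straightforward.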
Based on the definition of the CC scenarios and project user requirements done in T2.1, the EIFFEL vision will be translated into a first set of functional and non-functional system requirements and specifications, and an elaborate set of features for the functions and operation of both the EIFFEL system/tools and the five pilot-specific CC applications. Determining specifications will be coordinated by DRAXIS, the EIFFEL Technical Manager, and will mainly involve the technical partners, who will review scenarios and user requirements and formulate them in a technical context, considering also the technical limitations of the solutions and their business value. The mapping will be reflected in a user requirements-specifications traceability matrix that will clearly demonstrate the priority level (must, should, could) for each specification and the application(s) involved; this matrix will be a live document throughout the project development, updated at each critical milestone of the workplan. These specifications will form the backlog throughout the development phase. Some of them may be amended or become obsolete, and some new ones may be created during the development and/or pilot phase. A set of wireframes or an equivalent tool will be used to help the CoP stakeholders better understand the main functionalities of the applications, and will also enable the co-design process to continue into this task. Non-functional specifications will include hardware required, performance, resilience, user friendliness, scalability and security/data privacy.
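A minimal sketch of how such a traceability matrix might be queried, with invented identifiers; the must/should/could triage mirrors the priority levels described above:

```python
# Hypothetical traceability rows: (spec_id, linked requirement ids, priority, applications).
matrix = [
    ("SPEC-01", ["REQ-001"], "must", ["P1", "P3"]),
    ("SPEC-02", ["REQ-002"], "should", ["P3"]),
    ("SPEC-03", ["REQ-001", "REQ-004"], "could", ["P2"]),
]

def specs_for(priority):
    """Return the specification ids at a given priority level."""
    return [spec_id for spec_id, _, p, _ in matrix if p == priority]

def specs_tracing(req_id):
    """Return the specifications that trace back to a given requirement."""
    return [spec_id for spec_id, linked, _, _ in matrix if req_id in linked]
```

Keeping the matrix machine-readable is what makes it workable as a live document: priority triage and requirement coverage checks stay one query away at every milestone.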
This task will produce the EIFFEL system architecture. The consortium will follow a well-established methodology that leads to a comprehensive architecture description, based on the architectural decomposition into views. Views are representations of the structural aspects of the architecture that address issues raised by stakeholders. In particular, the functional, information as well as deployment views are going to be addressed. The work to be conducted includes the specification of the logical structure of the end-to-end-system, giving special focus on the integration framework defining the programming interfaces that will enable the interaction and communication among the individual components (tools and applications). Representative high-level message sequence charts (MSCs) will be created (at least one for each pilot), so that the detailed information flows among EIFFEL tools and the respective application are clarified; this will greatly facilitate the development WPs that follow. In the final activity of the task, the integration plan will be produced as an internal, live document that will start crafting the scenarios to be used for the technical and operational integration testing of the components.
Augmenting GEOSS data exploration | LIBRA
Develop the cloud-based front-end component and visualisation engine of the EIFFEL cognitive search tool.
Set up the data handling and management system for real-time metadata retrieval and querying to and from the EIFFEL augmented metadata database.
Create the EIFFEL CC-focused ontology.
Design and develop the EIFFEL’s NLP-based cognitive search engine.
Design and develop a diverse toolset for metadata curation, enrichment, augmentation and extension.
Operationally provide Sentinel data (both raw data and related products) to the pilot applications.
Explore the possibility of making composite search queries for Copernicus data based on conditions of interest.
The visualisation (or UI/UX) engine of the EIFFEL data exploration tools will offer map-based search or, equivalently, spatial search based on given coordinates, and a standard set of filtering criteria that legacy engines already provide, e.g. dataset year/timespan, data provider, format, scale, resolution. Beyond this, an important functionality is that the engine will dynamically show an advanced tree-view checkbox filtering structure, driven by both the EIFFEL ontology structure and the concept relations and keywords revealed via NLP. The engine will prioritise the returned results according to their ‘relevance percentage’. More importantly, a list of the most frequent similar searches performed by other EIFFEL cognitive data exploration users shall be presented, as well as a user-friendly and navigable list of relevant datasets external to GEOSS, returned from T3.4. The tool explores the information without knowing the structure of the datasets, taking advantage of the semantic relations among the defined terms. Technically speaking, the UI/UX will be implemented as a web-based tool permitting interoperability among digital systems and universal use, using a common data format (most probably JSON-LD). Moreover, the back-end of the engine will be cloud-based, supporting data storage, querying and retrieval. This will support full-text search capabilities for metadata search, as well as cosine-similarity-based vector matching for cognitive search queries, in line with T3.3 and T3.4 requirements.
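The cosine-similarity matching mentioned above can be sketched in a few lines; the embedding vectors here are toy values standing in for the model-produced embeddings of T3.3:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy embeddings for dataset metadata (illustrative names and values).
datasets = {
    "sea-surface-temperature": [0.9, 0.1, 0.3],
    "land-cover":              [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.25]  # embedding of the user's free-text query

# Rank datasets by similarity to the query vector, best match first.
ranked = sorted(datasets, key=lambda d: cosine(query, datasets[d]), reverse=True)
```

In production, this ranking would run over a vector index in the cloud back-end rather than an in-memory dictionary, but the scoring principle is the same.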
The result of this task will be the EIFF-O ontology and the associated semantic annotation tools and converters to be used in the EIFFEL pilots. This task will, first, analyse the current ontologies available in the GEOSS and CC fields, especially the latest OGC EO JSON-LD standard issued. Then, the different ECVs will be analysed and a “climate tagger” will be developed, allowing dynamic annotation based on clear rules. Third, the different syntactic and semantic characteristics of the different GEOSS datasets and other data sources present in the EIFFEL pilots will be analysed, in order to explore significant overlaps in terminology and establish components of a common ontology. This will be extrapolated in order to achieve an as-agnostic and as-scalable-as-possible ontology within the project. Afterwards, advanced semantic techniques will be used for semantic representation by clustering knowledge, ensuring linked-data compliance. EIFF-O will then be developed following the guidelines established since the proposal stage, and the bridges needed for the pilots will be developed by the technical team. Finally, a set of guidelines and complete documentation on how to use the ontology will be created. This documentation will be made public and the associated API will be created.
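A deliberately minimal sketch of the rule-based “climate tagger” idea: trigger terms map to ECV tags. The terms and tag names below are invented; the real rules would derive from the ECV analysis and EIFF-O:

```python
# Illustrative rules mapping trigger terms to hypothetical ECV tags.
RULES = {
    "precipitation": "ECV:Precipitation",
    "sea level": "ECV:SeaLevel",
    "soil moisture": "ECV:SoilMoisture",
}

def tag(text):
    """Return the sorted set of ECV tags triggered by terms found in the text."""
    text = text.lower()
    return sorted({ecv for term, ecv in RULES.items() if term in text})

tags = tag("Gridded precipitation and soil moisture estimates for 2019")
```

A real tagger would match lemmas and synonyms from the ontology rather than raw substrings, but the rule-to-tag structure is the core of "dynamic annotation based on clear rules".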
This task will develop the EIFFEL cognitive search framework that will enable users to make multi-fashion queries and effectively discover rich GEOSS data content, with emphasis on CC-related topics. A major aspect of this task is to endow the developed search engine with advanced semantic comprehension and human language understanding abilities. To achieve this, the steps to be followed are: A) We will systematically scrape, collect and characterize vast amounts of relevant unstructured document corpora, including GEOSS metadata short descriptions, CC- and geo-related open-access scientific journals, relevant EC project deliverables, Wikipedia and websites linked to GEOSS data assets. B) We will employ unsupervised NLP tools and AI-enabled language models, such as deep bidirectional transformers, to produce data-driven, context-aware word embeddings specialised on geo-related and CC topics. C) We will explicitly incorporate domain-knowledge expertise via EIFF-O (T3.2) and also enhance data-driven word vectors with hard-to-detect semantic dependencies. Once the steps above are successfully fulfilled, the EIFFEL search engine will be built following hybrid intelligent data search and retrieval schemes, exploiting both ontology-based and AI/data-driven search. For this task, document ranking and AI-based information retrieval schemes will be developed and applied on the augmented GEOSS metadata of T3.4.
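The hybrid ontology-plus-embeddings ranking could, in its simplest form, blend the two signals with a weight; the weight and the saturation cap below are illustrative choices, not project parameters:

```python
def hybrid_score(embed_sim, concept_hits, alpha=0.7):
    """Blend embedding similarity with ontology-concept overlap.

    embed_sim: data-driven similarity in [0, 1] (e.g. cosine on word vectors)
    concept_hits: number of EIFF-O concepts shared between query and asset
    alpha: weight of the data-driven term (0.7 is an arbitrary example value)
    """
    onto_term = min(concept_hits / 3.0, 1.0)  # saturate the ontology contribution
    return alpha * embed_sim + (1 - alpha) * onto_term

# Toy candidates: (embedding similarity, shared ontology concepts).
candidates = {
    "dataset-A": (0.62, 3),
    "dataset-B": (0.80, 0),
}
ranked = sorted(candidates, key=lambda d: hybrid_score(*candidates[d]), reverse=True)
```

Note how dataset-A, less similar in pure embedding terms, outranks dataset-B thanks to its ontology match: this is the point of combining the ontology-based and data-driven channels.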
The aim of this task is to develop a range of metadata enrichment and curation mechanisms that will automatically augment the metadata of existing GEOSS datasets, as well as of datasets that will be produced (by any source or actor) during the project duration and beyond. In particular, within this task we will develop: A) A metadata keyword generator and augmenter that, for each relevant GEOSS data asset, will consume information from its metadata, short description and content in linked sources, and will generate keyword lists exploiting the semantic and concept associations revealed via the AI-enabled GEOSS-aware language model developed in T3.3. Moreover, existing keyword lists will be enriched with new terms and relevant concepts by linking each asset to the domain-specific concepts and hierarchies underlying EIFF-O, produced in T3.2. Finally, existing keywords will be paired with novel ones by means of automatic classification of the asset metadata into concepts, key terms and synonyms carrying domain-knowledge information whenever possible. B) Automatic information retrieval schemes that will extend metadata with relevant external information from a diverse set of sources, such as local geospatial data, Corine, LUCAS, information about research works and projects using the data assets, and relevant data from other disciplines (e.g. socio-economic) and sources (e.g. statistical authorities, custodian agencies, national/international registries), which are sparsely available in the portal. C) Dedicated ML models and data analytics tools to extract valuable insights from the raw data themselves, such as statistics, analytics, extreme values, data quality assessments, trends, anomalies, etc. The metadata will then be augmented with such impactful information, rendering it searchable by the EIFFEL cognitive search engine.
Metadata enrichment, augmentation and extension will be realized dynamically in a metadata database that will not be physically linked with the raw data themselves. This renders the generated metadata “active”, allowing progressive augmentation in regular phases as new algorithmic developments and new relevant information become available.
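As a sketch of point C above, simple analytics can be computed from the raw values and attached to a copy of the metadata record, leaving the original untouched; all field names are illustrative:

```python
import statistics

def augment_metadata(meta, values):
    """Return a copy of the metadata record enriched with basic analytics.

    The original record is not modified, mirroring the design in which the
    augmented metadata database is decoupled from the raw data.
    """
    enriched = dict(meta)
    enriched["stats"] = {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "n_obs": len(values),
    }
    return enriched

meta = {"id": "geoss:example-asset", "keywords": ["temperature"]}
augmented = augment_metadata(meta, [14.2, 15.1, 13.8, 16.0])
```

Because the derived statistics live in the metadata record itself, they become directly searchable by the cognitive search engine without touching the raw data store.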
This task is responsible for providing not only the raw Sentinel data to the different pilots but also pre-processing products fine-tuned to the pilots’ requirements. This action will be closely supported by the pilot leaders, who will provide both requirements and feedback. The application builds on top of the Sentinel Hub application developed by NOA and provides an added-value service to pilot leaders, so that they are not limited by, or concerned with, where to acquire, how to process and how to extract the needed information from raw satellite data, but simply receive the required information at the agreed frequency, area, resolution, etc. On top of this operational service, the task will also conduct research on how to combine/blend Sentinel data with datasets from the GEOSS portal. The GEOSS portal does not itself provide access to data but acts as a broker to data providers. Therefore, NOA will try to showcase how users can extract the needed information from Sentinel hubs (e.g. Sentinel 1/2/3/5p images) based on a condition related to other data sources from the GEOSS portal. An example could be a blend of weather and/or land use data with Sentinel 2 images, responding to the following request: “Retrieve all Sentinel 2 images from 2019 for a specific area in which weekly accumulated precipitation was below 50 mm”. This could result in providing only the needed batch of datasets rather than the whole archive of 2019. The operational service is assigned to EDGE, while the research task will be NOA’s responsibility. The service itself will be hosted either on a DIAS provider or on a cloud provider (e.g. GRNET).
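The precipitation-conditioned retrieval example can be sketched as follows, with toy in-memory data standing in for the GEOSS-brokered weather source and the Sentinel hub catalogue:

```python
from datetime import date, timedelta

# Toy weekly accumulated precipitation (mm), keyed by week-start date; in
# practice this would come from a GEOSS-brokered weather provider.
weekly_precip = {
    date(2019, 6, 3): 12.0,
    date(2019, 6, 10): 75.0,
    date(2019, 6, 17): 40.0,
}
# Toy Sentinel 2 acquisition dates for the area of interest.
acquisitions = [date(2019, 6, 5), date(2019, 6, 12), date(2019, 6, 20)]

def week_start(d):
    """Monday of the ISO week containing d."""
    return d - timedelta(days=d.weekday())

# "Retrieve all images for weeks where accumulated precipitation was below 50 mm."
selected = [d for d in acquisitions
            if weekly_precip.get(week_start(d), 0.0) < 50.0]
```

Filtering the catalogue by an external condition before download is exactly what turns a full-archive request into the "needed batch" described above.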
Improving temporal, spatial resolution & data quality of CC related datasets | ICCS
Improve the quality and completeness of time series datasets by providing a stochastic toolbox facilitating temporal data augmentation tasks for GEOSS datasets.
Collect the in-situ data of local and/or regional nature that are relevant for each CC application.
Implement a novel processing cycle, capitalising on ML/super resolution and data fusion techniques of Sentinel data with the aforementioned in-situ data, aiming to enhance the spatial resolution of CC datasets.
Ensure quality assessment of GEOSS data with particular emphasis on CC applications while also incorporating users’ perspective in the metadata of the resources in a standardized way.
In this task, we will design and develop theoretically justified methods and tools based on statistical and probabilistic notions, such as time series analysis, stochastic processes and copulas, to address three key challenges of CC-related datasets: 1) lower-scale extrapolation (e.g., temporal downscaling), 2) infilling of time series missing values, and 3) generation of statistically consistent stochastic realizations. The solutions delivered within this task will build upon the concept of Nataf’s joint distribution, a notion closely related to copulas, which has recently and successfully been used to address problems related to the stochastic simulation of non-Gaussian random variables, processes and fields. Here, we exploit the flexibility and generality of this concept, as well as its expandability, to remedy the above three challenges, delivering high practical value. The developed methods and tools will be applied both to physical processes, such as meteorological ones, e.g., precipitation, temperature (data required by almost all pilots - e.g., P2, P3, P5), wind direction, relative humidity (P3), surface solar radiation (P4), surface soil moisture (P5), and to non-physical processes, such as emissions indices (P2) and air quality indices (P3). The power and utility of the tools will be demonstrated through proof-of-concept applications in several cases (see above), while the final output will be a general-purpose, easy-to-use and validated toolbox for engineers and researchers working with time series data (e.g., meteorological ones) - substantially simplifying laborious tasks related to data (pre-)processing.
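A bare-bones sketch of the Nataf-style construction: correlated standard-normal draws are mapped through the Gaussian CDF to uniforms and then through a target inverse CDF. The exponential margin and the correlation value below are chosen purely for illustration, not taken from the project methodology:

```python
import math
import random
from statistics import NormalDist

rng = random.Random(42)   # fixed seed for reproducibility
rho = 0.8                 # illustrative Gaussian correlation
phi = NormalDist()        # standard normal

def correlated_normals():
    """One pair of standard-normal draws with correlation rho."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    return z1, z2

def to_exponential(z, lam=0.5):
    """Map a standard-normal value to an exponential(lam) margin."""
    u = phi.cdf(z)                  # Gaussian -> uniform
    return -math.log(1 - u) / lam   # uniform -> exponential quantile

# Pairs of correlated, non-Gaussian (exponential-margin) realizations.
pairs = [tuple(to_exponential(z) for z in correlated_normals())
         for _ in range(1000)]
```

Swapping in a fitted marginal distribution (e.g. for precipitation) in place of the exponential quantile is what makes this construction useful for generating statistically consistent realizations of non-Gaussian processes.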
The goal of this task is to establish an ML-based framework and algorithms that aim at enhancing the spatial resolution of Copernicus data beyond the capabilities of the original imaging systems. Some of the envisaged methods are described in the relevant methodology section. The framework will ensure Copernicus cross-platform exploitation, leveraging the inherent characteristics of the Sentinel data as well as super-resolution modelling, including Deep Neural Network based architectures and techniques, towards enhancing the spatial characteristics of the observed properties in the context of CC applications. In particular, the spatial resolution of the coarser bands of Sentinel 2 (used by all pilots, P1-P5) will be increased to the resolution of the finer bands (i.e. from 60 m and 20 m to 10 m), while at the same time preserving the integrity of the spectral resolution. Super-resolution techniques will also be applied to Sentinel 3 data (used by P1, P3, P5), aiming to sharpen low-resolution observations (i.e. from the Sea and Land Surface Temperature Radiometer) using high-resolution observations from Sentinel 2 satellites, towards improved monitoring of crucial parameters focusing on the interaction between the ground and the atmosphere. Additionally, another facet of this task involves the application of data fusion techniques enabling the combination of data from different sources (e.g. Sentinel with in-situ datasets) in the same processing cycle. Multi-sensor fusion shall predominantly involve the utilisation of in-situ data, which are usually point-based but of high accuracy, aiming to improve the spatial characteristics of the observed parameters while enabling large-scale mapping. ML models will be employed in order to upscale CC-related properties and transform them into regional or national maps.
The employed ML models will be formulated as multiple-layer models to learn representations of input/output data with abstractions from lower to higher levels. The overall process will include the semantic combination and assignment of in-situ measurements to EO data and products (i.e. Sentinel 5p, CAMS), as well as the characterization of areas for which related in-situ information may not be available.
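As a trivial baseline against which such super-resolution models are judged, a 20 m band can be naively resampled onto the 10 m grid by pixel duplication (synthetic data below); the DL architectures described above aim to recover genuinely finer detail instead of merely replicating coarse pixels:

```python
def upsample2x(band):
    """Naive 2x nearest-neighbour upsampling of a 2-D band (list of rows)."""
    out = []
    for row in band:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

# Synthetic 2x2 "20 m" band resampled to a 4x4 "10 m" grid.
coarse = [[1, 2],
          [3, 4]]
fine = upsample2x(coarse)
```

Any learned super-resolution model should outperform this baseline on held-out fine-resolution reference data while, as stated above, preserving spectral integrity.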
Two approaches to quality metadata have been in use for many years (e.g. in the QA4EO, GeoViQua and CharMe projects), pursuing producer and user metadata. Both facets are needed to fully describe a dataset, and this task will work on both to increase quality awareness of GEOSS data. On one side, data quality assessment is based on defining quality measures for the GEOSS datasets. In order to do so in a standard and comparable way, metadata producers need to know the details of quality documentation, to properly encode the quality metadata they can produce. To achieve this, a first loop on the tool implementation is needed, introducing modifications that allow quality measures according to the latest version of the standards (e.g. QualityML and the new revision of ISO 19157, in which UAB is taking part). Once the generic tools are ready, specific requirements to include quality assessment in metadata for climate datasets (e.g. Climate-ADAPT uncertainty guidance related to adaptation decision-making) will be debated in a co-design virtual workshop (held around M10). Results will be summarized as best practices on the use of QualityML. On the other side, GUF complements quality assessment, as it can include comments, ratings, usage reports, etc. that improve metadata with the users’ perspective. On top of the existing GUF standard, knowledge elements will be further developed, such as climate-specific quality indications (positional, temporal and attribute accuracy, completeness) based on ISO 19157 and QualityML, additional lineage steps or usage reports.
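Illustratively, a producer-side quality report combined with GUF-style user feedback might be encoded as a structure like the following; the element and measure names loosely echo ISO 19157 vocabulary but are invented for this sketch, not taken verbatim from the standard:

```python
# Hypothetical combined producer/user quality record for one dataset.
quality_report = {
    "dataset": "example-climate-grid",
    "producer_quality": [  # producer-side measures (ISO 19157-like elements)
        {"element": "completeness_omission",
         "measure": "rate_of_missing_items", "value": 0.02, "unit": "fraction"},
        {"element": "temporal_accuracy",
         "measure": "time_reporting_error", "value": 1.0, "unit": "day"},
    ],
    "user_feedback": [     # GUF-style user-side elements
        {"rating": 4, "comment": "Suitable for seasonal drought indices"},
    ],
}

elements = [q["element"] for q in quality_report["producer_quality"]]
```

Keeping the producer and user facets in one record is what lets a search interface surface both formal quality measures and community usage experience side by side.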
Development of the EIFFEL Climate Change applications based on GEOSS | IHE
Implement a DSS for assessing the impact of CC adaptation measures on the water and soil carbon conditions in the Aa river basin in NL.
Estimate carbon stock changes in agriculture, which may be used by farmers to manage microclimate effects, and also by the Lithuanian paying agency for monitoring NDCs, as per the PA-set targets.
Implement an analytical tool and DSS to assess the climate impact of the activity of the Balearic Port Authority, monitoring pollution episodes, correlating them with vessel routes and schedules, and assisting decision makers in optimising vessel traffic so as to lower the port's environmental footprint.
Implement a DSS to assess the impact of urban GHG mitigation scenarios, with respect to buildings energy efficiency, photovoltaic penetration and electromobility adoption in the Attica region.
Develop a risk assessment and decision support framework for droughts, forest fires and forest pests in Finland.
For all the applications, datasets from the GEOSS-centred EIFFEL tools for exploration and augmentation (WP3, WP4) will be obtained in an incremental way (alpha version M14, final version M18), allowing for the development of the WP5 applications in iterative development cycles.
This task will focus on developing the modelling and DSS for assessing the impact of adaptation measures on the water and soil carbon conditions in the Aa river basin in the Netherlands. The hydrological/water management models will be developed at a detailed scale for testing measures on (groups of) individual properties/fields, while the soil carbon model will operate at the level of major land use changes. ML emulators of these models will be set up, in close collaboration with T6.1, as needed for the DS components that will be used by identified stakeholders. Separate models will be developed for the Belgian part of the Aa river basin, using GEOSS data to assess the potential of such applications in cross-boundary river basins. Detailed requirements for the models will be obtained from the work of WP2 (especially Tasks 2.1, 2.2). The DSS will be web-based, allowing presentation and testing of the adaptation measures by different stakeholders in a user-friendly way.
This task will develop the data framework and mining tools for a consistent land representation system and for efficient estimation of carbon stock changes. The latter will be used by farmers and policymakers to manage local and transnational microclimate effects more efficiently. The pilot application involves a set of novel ensemble models, i.e. multi-input CNNs and XAI models from T6.1, to provide accurate, field-level (downscaled) information for operational DS and GHG reporting procedures. Carbon monitoring in agro-ecosystems shall be highly enhanced by integrating existing physical process models with in-situ datasets (e.g. soil spectral libraries, sensors) and the multi-resolution data flows from Sentinel 1 and 2 of T4.2. The application is a solid demonstrator of the value that can be achieved with the EIFFEL tools. Its importance and impact on the farming sector are substantial, as it supports optimisation of the practices of climate-smart agriculture (e.g. crop diversification), utilizing big data sources (crop metrics), future climatic scenarios (RCP scenarios) and land suitability analysis tools.
This task will implement a CC advanced data analytic tool and DSS, that capitalizes on data mining and AI techniques for the analysis, monitoring and prediction of pollution episodes and, consequently, for the planning and optimization of port activities. The latter include cargo, load/unload operations, cruise ships and land traffic. The overall aim is to quantitatively assess the CC impact of port activity, taking into account the requirements and specifications of T2.1-2.2. This implementation considers the following steps: (i) full integration with EIFFEL tools to acquire Sentinel data at regional level and other in-situ and external datasets at local levels (T3.5 and T4.2); (ii) building predictive models of pollution episodes and traffic congestion based on different AI techniques, e.g. data mining, pattern and anomaly detection, calibration, extrapolation etc.; (iii) using XAI models to improve the interpretability of the application (adapting and applying XAI models from T6.1 to the pollution and traffic models); (iv) quantification and presentation of results in a user-friendly DSS, comprising dashboards with KPIs.
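As a stand-in for the AI-based episode detectors described in step (ii), the simplest possible detector flags values far above the series mean; the threshold factor and readings below are invented for illustration:

```python
import statistics

def detect_episodes(series, k=2.0):
    """Return indices of values more than k population stdevs above the mean."""
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series)
    return [i for i, v in enumerate(series) if v > mu + k * sigma]

# Toy pollutant readings; index 5 is an obvious episode.
no2 = [21, 19, 23, 20, 22, 95, 21, 20]
episodes = detect_episodes(no2)
```

The production models in this task would replace the global-mean threshold with pattern- and anomaly-detection techniques that account for seasonality, traffic and meteorology, but the output, a set of flagged episode timestamps, feeds the DSS dashboards in the same way.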
Driven by WP2 (incl. geographic upscaling), four urban CC applications will be developed. Compilation of the necessary data, with explicit requests for particular socio-economic data, will be implemented using the relevant EIFFEL tools of WP3, WP4 and the respective datasets. The work will include: 1) Creation of an enhanced Building Stock Model (BSM) utilizing EO and socio-economic data. Assessment of GHG mitigation scenarios using the BSM in the selected LAU; 2) Exploitation of SENSE to derive the energy potential of the Attica region and calculation of solar energy availability for the selected LAU at building level. Identification of potential areas for PV penetration and estimation of the GHG abatement equivalent; 3) Utilization of the COPERT methodology to simulate GHG mitigation scenarios, exploring vehicle fleet composition and alternatively fuelled vehicles. Assessment of securing e-mobility energy from buildings; 4) Air quality, city-scale modelling for quantifying the effects of the above scenarios on pollutant concentration and citizens’ exposure, bundled in a user-friendly DSA.
In this task, we will design and develop a CC-oriented application implementing a framework for multi-hazard risk assessment, as described in the Pilot 5 part of Section 188.8.131.52. The framework will assess and quantify several biotic and abiotic forest disturbance factors, capitalising on augmented Sentinel and local datasets (WP3, WP4), and will produce hydrological, fire risk and pest risk models based on remotely sensed data, seasonal weather forecasts and long-term climate projections. The analysis will provide novel uses of GEOSS data in a modelling framework at regional and national scale, by building predictive models of forest pest occurrence and by quantifying present-day and future risks of drought and forest fire based on impact response surfaces (see 184.108.40.206). These will be the basis for upscaling the regional analysis of P5 to a European-scale assessment in Task 6.3.
Interpretability, integration & upscaling of EIFFEL Climate Change applications | i-BEC
Ensure seamless collaboration, predominantly with the WP5 tasks, to render the developed CC applications interpretable and, thus, transparent and credible.
Identify and demonstrate capabilities of Explainable Artificial Intelligence (XAI) components for CC applications working with GEOSS data and implemented in the EIFFEL pilots.
Update, elaborate and execute the integration plan developed in T2.3, so that the WP7 pilots’ activities may commence at the defined time instances.
Ensure interaction and appropriate interfacing of the WP5 DSS applications with the WP3/WP4 developments (for the final version of the latter tools), in order to enable appropriate pilot testing and the efficient future creation of other CC applications by third parties; this will follow incremental integration principles.
Demonstrate the potential for upscaling EIFFEL applications to a pan-European level, and for replication to other European pilots, in close interaction with Pilots 1 and 5.
This horizontal task will add interpretability capacity to the data-driven models (ML, DL) and applications of WP5 and will enable the CC applications to attain the merits of eXplainable AI (XAI), as detailed in Section 220.127.116.11.3, including understandability and transferability. The task will collaborate with T5.1-5.5 to identify where XAI would be of interest to researchers and stakeholders. The specific activities are: 1) Assist the WP5 applications in the design and conceptualization phase of the ML-based applications, e.g. by helping determine the best methodology/workflow for the task at hand; 2) Apply an interpretability analysis to the derived models to understand their inner workings and to identify whether a model has captured a causal connection between input and output; 3) Where applicable, perform robustness analysis and what-if scenarios to ascertain the models’ transferability and to test how they would perform under different scenarios (i.e. different inputs); 4) Where applicable and feasible, develop visual representations (e.g. knowledge graphs, trend analysis) of the interpretations derived from the models, depicting the captured knowledge in a way comprehensible to end-users and researchers.
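One widely used, model-agnostic technique that fits activity 2 is permutation feature importance: shuffling one input column and measuring how much prediction error grows reveals how strongly the model relies on that feature. The sketch below is our illustration, not the project's XAI toolbox; the toy model and data are hypothetical.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Increase in MSE when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    base = np.mean((model(X) - y) ** 2)          # baseline error
    importances = []
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                # break feature j's link to y
            losses.append(np.mean((model(Xp) - y) ** 2))
        importances.append(np.mean(losses) - base)
    return importances

# Toy data: y depends only on feature 0, not on feature 1
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]                  # stand-in "trained" model
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 importance is large; feature 1 importance is ~0
```

The same logic applies to the WP5 ML/DL models: a near-zero importance indicates an input the model ignores, which helps distinguish genuine input-output links from spurious ones.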
This task will start from the integration plan derived in Task 2.3 and update it as the technical components evolve and are finalised. The task involves combining the individually tested software modules into an integrated whole, leading to end-to-end interoperability of each EIFFEL application with the EIFFEL horizontal tools and the resulting datasets. A continuous integration approach will be used instead of a waterfall or “big-bang” model. Continuous integration is a much less risky approach, wherein modules and subsystems are integrated as they are developed into multiple working mini-versions of the final system. Integration is bound up with the concepts of interaction and communication as defined in T2.3 (system architecture and integration plan). Based on lessons learned during the continuous integration process and the resulting iterations, the system and its components will be corrected whenever reasonably possible, so as to optimise the outcome of the final system. The task will produce two versions of the EIFFEL integrated framework, the first in M27 and the final in M30, after the gradual incorporation of feedback from the pilots.
This task will provide a testbed for upscaling the results developed in Pilots 1 and 5. The methodology will make use of the pan-European setting of the models and utilize Impact Response Surfaces (IRS) to assess the sensitivity of the models in different regions. The LISFLOOD model will be employed with a focus on water resources and low-flow conditions (Pilot 1), and the GEFF model will be used to assess fire danger (Pilot 5). This task will also assess the feasibility of upscaling the other pilots (P2, P3 and P4) to the European scale in the form of a “roadmap”; the experience and knowledge gained from upscaling P1 and P5 will serve as the blueprint for this study. The task will first assess the added information from GEOSS data at the pan-European scale by clearly mapping where data availability is sufficient for inclusion in the methodology and where the methodology can be fully or partially transferred to the European domain. It will also define a benchmark using a “climatological run”, taken from the available EFAS (LISFLOOD) and EFFIS (GEFF) datasets on the CDS. The added value of the methodology and data will be evaluated stepwise at pre-selected verification points across Europe, namely the points in EFAS and EFFIS where observations are available to perform the analysis. The task will provide a climate sensitivity analysis summarized across the verification points; more in-depth analysis will be done over the pilot study areas, as well as over different climate regions across Europe. The IRS analysis will use the “climatological run” as the benchmark and compare it with the optimal model setup using the GEOSS data used and augmented in the project. The developed methods will be evaluated in a seasonal hindcast mode to assess how well the hazards and risks can be predicted in the current climate. The IRS will also be used to assess how they might change under future climate conditions.
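The IRS idea can be sketched in miniature: run a hazard model over a grid of climate perturbations and record the response relative to an unperturbed "climatological run". The trivially simple stand-in model below is ours for illustration; it is not LISFLOOD or GEFF, and its coefficients are arbitrary.

```python
import numpy as np

def toy_fire_danger(temp_anomaly, precip_factor):
    """Stand-in hazard index: rises with warming, falls with more rain."""
    return max(0.0, 10.0 + 2.0 * temp_anomaly - 5.0 * (precip_factor - 1.0))

temp_anoms = np.linspace(0.0, 4.0, 9)       # +0 .. +4 degC warming
precip_factors = np.linspace(0.8, 1.2, 9)   # -20% .. +20% precipitation

baseline = toy_fire_danger(0.0, 1.0)        # the "climatological run"
irs = np.array([[toy_fire_danger(t, p) - baseline
                 for p in precip_factors] for t in temp_anoms])
# Each cell of `irs` is the change in hazard for one (warming, precipitation)
# pair; contouring this surface yields the impact response surface for a site.
print(irs.shape)
```

In the task itself, each grid cell corresponds to a full model run at a verification point, and the surface is evaluated both in hindcast mode and under future climate conditions.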
EIFFEL Pilot demonstrations & impact assessment | SYKE
List and elaborate evaluation KPIs for the technical, operational, environmental and societal impact assessment of the pilot demonstrations; perform and report the assessment against those KPIs.
Prepare and execute the pilot demonstrations in each of the selected GEO SBAs;
Improve CC application developments based on real-world testing and implementation as well as stakeholders’ feedback.
Identify benefits and gaps of using GEOSS data in Climate Change applications.
Assess the environmental impact and contributions to policy goals (PA, SDGs, SFDRR), and provide recommendations and guidance for best practice.
This task will coordinate the assessment of the pilot applications, including stakeholder engagement (together with T8.2). The results of the intermediate and final assessments of the pilot demonstrations will be captured in a structured way and jointly interpreted against objectives (KPIs, starting from Table 1 and elaborated together with pilot partners and CoP stakeholders within this task) and user needs (T2.1). The contributions to policy goals and the recommendations from the pilots, including operational aspects, benefits and gaps in the integration of GEOSS data and exploration tools, best practices and lessons learned (T7.2-7.6) from a national or sub-national perspective, will be compiled and synthesized in a systematic way.
In Pilot 1, we will run a co-design project on planning and implementing nature-based solutions (NbS) for ‘climate proofing’ in the Aa of Weerijs catchment, using the DS components developed in T5.1. The pilot will address the main challenges in evaluating and proposing new NbS. The impact of potential measures focusing on water management, land use and soil carbon changes (see 18.104.22.168) will be assessed. As a first step, we will re-engage the CoP stakeholders who have been on board since the T2.1 focus groups, to join the testing of the applications (T5.1). Together with these stakeholders, we will organize demonstration and testing workshops. User experiences, suggestions for improvement and specification requests will be channelled back to T5.1, and to T6.3 for the upscaling part. Special emphasis will be put on the part of the application supporting cross-border cooperation (NL-BE) in the catchment, and on monitoring and modelling both climate effects and the impacts of measures.
This pilot demonstration will evaluate the overall performance of the Pilot 2 components through the deployment of complete end-to-end scenarios within a co-design procedure led by NPA. The following activities will take place: (i) identification of representative pilot studies within Lithuania based on stratification, with the active involvement of up to 400 farmers; (ii) evaluation of the Agricultural Carbon Hybrid Model in support of national GHG inventories, by moving to higher Tier methodologies; (iii) full-scale deployment of the future land-suitability mapping tool, along with the recording of local farmers’ first perceptions of its performance, usefulness, and immediate or potential benefits. The demonstration will address the main challenges identified by the paying agency, offering a novel solution for the monitoring, reporting and verification of LULUCF, and offering farmers a set of climate-smart agricultural services for farming sustainability.
The pilot will focus on the monitoring and prediction of pollution episodes at BPA and on the planning and optimization of port activities (cargo, load/unload operations, cruise ships and land traffic). It will provide a real test-bed for better decision making, minimizing the carbon footprint of port activity and its impact on the city. The task will test and evaluate the advanced analytical tool and DSS developed in T5.3 in absolute terms (effectiveness towards technical KPIs), in terms of operational KPIs, and also in terms of environmental and societal KPIs for the port, city and citizens as a whole, including contributions to policy goals (PA, SDGs), recommendations for best practice and overall exploitation opportunities (input to WP8). The pilot will be evaluated in two iterations: the first (M28-M30) will validate the complete version of the advanced analytical tool (D5.3) and will provide feedback for its final optimization and changes in T5.3, up to M30, while the second (M31-M34) will provide final conclusions and assessment.
In Pilot 4, the added value of the application for inspecting urban GHG mitigation scenarios for building energy efficiency, photovoltaic penetration and vehicle fleet emissions (Task 5.4) will be demonstrated at two levels: in the selected LAU and across the whole Region of Attica. In the LAU, the DSA, as well as its components, will be tested in full force and deployed at high spatio-temporal resolution, and higher-tier tools for mapping and linking the different sectorial layers will be utilized. The upscaling from the municipal to the regional level will be implemented via (i) products available at the regional scale and (ii) projection of the pilot outcomes to the whole region, the latter taking into account the characteristics of the sub-areas and local knowledge. The results from the DSA, with respect to the abatement of GHG per sector and overall, will be incorporated into the Regional Plan for Adaptation to CC of Attica and taken into account in its implementation. Transferability to other LAUs will be explored, and the impact on achieving the National Plan for Energy and Climate of the Greek State will be quantified.
The framework for multi-hazard risk assessment of several biotic and abiotic forest disturbances, developed in T5.5, will be tested and verified with stakeholders. The data exploration (WP3) and augmentation (WP4) tools will be tested, the benefits of integrating GEOSS data in the modelling framework will be evaluated, and feedback on development needs and gaps will be provided to those WPs. Hydrological, fire risk and pest risk models will be applied to identify areas prone to drought, forest fires and pests in Finland, and future risks of drought, forest fires and pests under climate scenarios will be mapped. The intermediate and final mapping results will be presented to and co-interpreted with stakeholders, and alternative adaptation measures will be discussed. Final maps of present-day and future risks of forest disturbances will be presented in the Finnish climate guide, and seasonal/long-term drought forecasts will be integrated into the national portal for water resource information. Recommendations will be given for priority research on forest disturbances and for actions in adaptation plans in Finland.
Impact creation & EIFFEL sustainability | NOA
The overall aim of WP8 is to maximise project impact through an effective campaign of communication, dissemination and engagement activities. This will be achieved by: (i) raising awareness of and encouraging engagement with the project for targeted audiences and (ii) disseminating project results.
This task will increase uptake by raising awareness of the developed solutions through tailored and well-targeted communication, dissemination and outreach activities, as well as synergies with other projects, making EIFFEL known in the GEO and EO communities and user groups.
This task will coordinate stakeholder engagement activities across WPs, including the Communities of Practice in the pilots, by developing guidance, organising webinars and facilitating the collective evaluation of EIFFEL stakeholder engagement activities.
This task will promote two main actions:
- The dissemination of important results and their transferability to the scientific community.
- The re-shaping of results for a wider audience (decision makers etc.).
This task will ensure and efficiently implement the bidirectional interaction between the project and the GEO ecosystem as a whole:
Technical interaction of WP2, WP3 and WP4 with the GEOSS infrastructure teams and alignment with GEO WP activities, the GEO Knowledge Hub and linkages with the EuroGEO Action Groups.
This task will identify the EIFFEL Key Exploitable Results, assist each partner in their exploitation activities and shape the EIFFEL toolbox as a market product by implementing appropriate exploitation and IPR management strategies and plans.