Active Projects

  • HEREDITARY

    2024 - 2027

    HetERogeneous sEmantic Data integratIon for the guT-bRain interplaY

    HEREDITARY aims to significantly transform the way we approach disease detection, treatment-response prediction, and the exploration of medical knowledge by building a robust, interoperable, trustworthy, and secure framework that integrates multimodal health data (including genetic data) while ensuring compliance with cross-national privacy-preserving policies. The HEREDITARY framework comprises five interconnected layers, ranging from federated data processing and semantic data integration to visual interaction.

    By utilizing advanced federated analytics and learning workflows, we aim to identify new risk factors and treatment responses, focusing on neurodegenerative and gut microbiome-related disorders as exploratory use cases. HEREDITARY is harmonizing and linking various sources of clinical, genomic, and environmental data on a large scale. This enables clinicians, researchers, and policymakers to understand these diseases better and develop more effective treatment strategies. HEREDITARY adheres to the citizen science paradigm to ensure that patients and the public have a primary role in guiding scientific and medical research while maintaining full control of their data. Our goal is to change the way we approach healthcare by unlocking insights that were previously impossible to obtain.
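
    For illustration only, the minimal Python sketch below shows the federated principle behind such workflows: each site learns from its own data, and only model parameters, weighted by local sample counts, are aggregated; the health records themselves never leave the site. This is not HEREDITARY's actual pipeline, and all names in it are hypothetical.

      # Illustrative sketch only: FedAvg-style weighted aggregation of per-site models.
      import numpy as np

      def federated_average(site_params, site_sizes):
          """Aggregate per-site parameter vectors, weighted by local dataset size.
          Only parameters and sample counts are shared; raw data stays at each site."""
          weights = np.array(site_sizes, dtype=float)
          weights /= weights.sum()
          stacked = np.stack(site_params)            # shape: (n_sites, n_params)
          return (weights[:, None] * stacked).sum(axis=0)

      # Hypothetical example: three sites with different amounts of local data.
      site_models = [np.array([0.2, 1.0, -0.5]),
                     np.array([0.4, 0.8, -0.3]),
                     np.array([0.1, 1.2, -0.6])]
      print(federated_average(site_models, site_sizes=[120, 80, 200]))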

    Role: Project Coordinator

    Project No: 101137074
    Call: HORIZON-HLTH-2023-TOOL-05
    Topic: Tools and technologies for a healthy society
    Funding (UNIPD): 1.138.046€
    Website: https://hereditary-project.eu/

  • BRAINTEASER

    2021 - 2024

    Brainteaser: BRinging Artificial INTelligencE home for a better cAre of amyotrophic lateral sclerosis and multiple SclERosis

    Amyotrophic Lateral Sclerosis (ALS) and Multiple Sclerosis (MS) are chronic diseases characterized by progressive or alternating impairment of neurological functions (motor, sensory, visual, cognitive). Artificial Intelligence is key to addressing the needs of these patients: i) better describing disease mechanisms; ii) stratifying patients according to their phenotype, assessed over the whole disease evolution; iii) predicting disease progression in a probabilistic, time-dependent fashion; iv) investigating the role of the environment; v) suggesting interventions that can delay the progression of the disease. BRAINTEASER will integrate large clinical datasets with novel personal and environmental data collected using low-cost sensors and apps.

    We lead the "Open Science and FAIR Data" work package (WP). The main goals of the WP are:

    • Design open ontologies to represent the project data and create knowledge bases that enrich and augment the value of the data.
    • Design and implement methods to evaluate the FAIRification of the produced data and metadata, applying and reviewing the FAIR principles of the European Open Science Cloud (EOSC), and integrate and share research data with EOSC services.
    • Design and implement methods to expose the data as Linked Open Data and services that favour their exploration and re-use (a minimal, illustrative sketch follows this list).
    • Organise three annual open evaluation challenges and share the produced experimental data as open data.
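
    As a rough sketch of what exposing data as Linked Open Data involves in practice, the snippet below publishes a hypothetical clinical observation as RDF triples with rdflib; the namespace and property names are invented for illustration and are not the BRAINTEASER ontologies.

      # Illustrative sketch only: describing a (hypothetical) clinical visit as RDF
      # triples so that it can be explored and re-used as Linked Open Data.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS, XSD

      EX = Namespace("https://example.org/brainteaser/")   # hypothetical namespace

      g = Graph()
      g.bind("ex", EX)

      patient = EX["patient/001"]
      visit = EX["visit/001-2022-03"]

      g.add((patient, RDF.type, EX.Patient))
      g.add((visit, RDF.type, EX.ClinicalVisit))
      g.add((visit, EX.ofPatient, patient))
      g.add((visit, EX.alsfrsScore, Literal(38, datatype=XSD.integer)))  # hypothetical property
      g.add((visit, RDFS.label, Literal("Follow-up visit, March 2022")))

      print(g.serialize(format="turtle"))
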
    Role: Participant

    Project No: 101017598
    Call: H2020-SC1-DTH-2020-1
    Topic: Personalised early risk prediction, prevention and intervention based on Artificial Intelligence and Big Data technologies
    Funding (UNIPD): 732.250€
    Website: https://brainteaser.health/

Past Projects

  • EXAMODE

    2019 - 2023

    ExaMode: Extreme-scale Analytics via Multimodal Ontology Discovery & Enhancement

    Exascale volumes of diverse data from distributed sources are continuously produced. Healthcare data stand out for the volume produced (expected to exceed 2000 exabytes in 2020), their heterogeneity (many media and acquisition methods), the knowledge they embed (e.g., diagnoses) and their commercial value. The supervised nature of deep learning models requires large amounts of labelled, annotated data, which precludes models from extracting knowledge and value. ExaMode addresses this by enabling easy and fast, weakly supervised knowledge discovery from exascale heterogeneous data, limiting human interaction.

    We led the "Semantic knowledge discovery and visualisation" WP. The main goals of the WP were:

    • Develop relation extraction methods to automatically extract semantic relationships between authoritative concepts within unstructured and semi-structured text.
    • Leverage entity linking methods, in conjunction with the developed relation extraction techniques, to create report-level semantic networks out of the extracted concepts and relationships (a toy sketch follows this list).
    • Model report-level semantic networks through conceptual descriptive frameworks to empower data management and exploitation.
    • Develop information retrieval methods to semantically connect and discover the semantic networks associated with relevant medical reports.
    • Develop information visualization and visual analytics methods for interacting with deep learning algorithms and improving their understandability.
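
    To make the pipeline concrete, the toy sketch below assembles a report-level semantic network from concepts and relations that have already been extracted and linked; the ontology identifiers and relation names are hypothetical, and this is not the ExaMode tooling itself.

      # Illustrative sketch only: building a report-level semantic network from
      # (already extracted and linked) concepts and relations.
      from collections import defaultdict

      # Output of hypothetical entity linking: surface mention -> ontology concept ID.
      linked_concepts = {
          "colon biopsy": "EX:0001",
          "adenocarcinoma": "EX:0002",
          "moderately differentiated": "EX:0003",
      }

      # Output of hypothetical relation extraction: (subject, relation, object) mentions.
      extracted_relations = [
          ("colon biopsy", "shows", "adenocarcinoma"),
          ("adenocarcinoma", "has_grade", "moderately differentiated"),
      ]

      # The semantic network as an adjacency map over linked concept IDs.
      network = defaultdict(list)
      for subj, rel, obj in extracted_relations:
          network[linked_concepts[subj]].append((rel, linked_concepts[obj]))

      for source, edges in network.items():
          for relation, target in edges:
              print(f"{source} --{relation}--> {target}")
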
    Role: WP leader and team unit leader

    Project No: 825292
    Call: H2020-ICT-2018-2
    Topic: Big Data technologies and extreme-scale analytics
    Funding (UNIPD): 516.000€
  • CDC

    2018 - 2020

    Computational Data Citation

    CDC is a Supporting TAlent in ReSearch@University of Padova (STARS) grant.
    The computational problem targeted by CDC is to automatically generate complete citations for general queries over evolving data sources represented by diverse data models. The aim of this research program is to design the first well-founded model of data citation and to develop efficient algorithms and a solid citation system for citing data.
    This research program is timely because the paradigm shift towards data-intensive science is happening now and scientific communication must adapt as quickly as possible to the new ways in which science progresses; it is ambitious because it shapes a new field in computer science and tackles, with a uniform approach, a range of computational issues, query languages and data models that have never before been treated with a shared vision.
    The broader impact of this research will be on scientists and data centers that curate, elaborate and publish data, on government agencies that direct research investments, and on research performance measures (e.g., the h-index) that will then be based not only on text-based contributions but also on data.
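
    As a toy illustration of the problem (not the CDC model or system itself), the sketch below derives a citation for a query answer from the provenance of exactly those records that contribute to it; the dataset, its fields and the curator names are all hypothetical.

      # Illustrative sketch only: citing the records that contribute to a query answer.
      from datetime import date

      dataset = {
          "name": "Example Gene Annotation Archive",   # hypothetical data source
          "version": "2020-04",
          "records": [
              {"id": "r1", "gene": "BRCA1", "curators": ["A. Rossi"]},
              {"id": "r2", "gene": "TP53", "curators": ["B. Bianchi", "C. Verdi"]},
              {"id": "r3", "gene": "BRCA1", "curators": ["C. Verdi"]},
          ],
      }

      def answer_and_cite(query_gene):
          """Answer a simple selection query and build a citation from the curators
          and identifiers of the contributing records plus the dataset version."""
          contributing = [r for r in dataset["records"] if r["gene"] == query_gene]
          curators = sorted({c for r in contributing for c in r["curators"]})
          citation = (f"{', '.join(curators)}. {dataset['name']} "
                      f"(version {dataset['version']}), records "
                      f"{', '.join(r['id'] for r in contributing)}. "
                      f"Accessed {date.today().isoformat()}.")
          return contributing, citation

      records, citation = answer_and_cite("BRCA1")
      print(citation)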

    Role: Principal Investigator

    Funding: 130.000€

  • PREFORMA

    2014 - 2017

    PREservation FORMAts for culture information/e-archives

    PREFORMA was a Pre-Commercial Procurement (PCP) project (Contract n. 258191) co-funded by the European Commission under its FP7-ICT Programme.
    The main goal of the project was to address the challenge of implementing good-quality standardised file formats and to give memory institutions full control over the conformance testing of files to be ingested into archives.
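
    As a minimal illustration of what conformance testing starts from (the actual PREFORMA conformance checkers go far beyond this and validate files against full format specifications), the sketch below only verifies a file's signature bytes against a couple of well-known preservation formats; the file name in the usage example is hypothetical.

      # Illustrative sketch only: a first, signature-level conformance check.
      SIGNATURES = {
          "PDF": [b"%PDF"],                     # PDF files begin with "%PDF"
          "TIFF": [b"II*\x00", b"MM\x00*"],     # little- and big-endian TIFF headers
      }

      def signature_check(path, expected_format):
          """Return True if the file starts with a known signature of the expected format."""
          with open(path, "rb") as handle:
              header = handle.read(8)
          return any(header.startswith(sig) for sig in SIGNATURES[expected_format])

      # Hypothetical usage:
      # ok = signature_check("scan_0001.tif", "TIFF")
      # print("accepted for ingestion" if ok else "rejected: signature mismatch")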

    Role: I collaborated in the activities of WP7 "Validation and testing" and WP8 "Competitive Evaluation and Monitoring of the RTD work". Leader of Task 7.1 and Task 8.1.

  • SIAR Veneto

    2005 - 2016

    Sistema Informativo Archivistico Regionale del Veneto.

    Sistema Informativo Archivistico Regionale (SIAR), the Regional Archival Information System of the Veneto Region.
    It was a project aimed at developing a distributed Digital Library System (DLS) for describing, managing, accessing and sharing archival resources. SIAR was a joint project with the Veneto Region and the "Soprintendenza Archivistica per il Veneto" (the Archival Regional Board of the Italian Ministry of Cultural Heritage).

    Role: Participant in the Department unit; I worked on the design and development of the infrastructure of the SIAR system.

  • CULTURA

    2011 - 2014

    Cultivating Understanding and Research through Adaptivity

    CULTURA was a STREP project co-financed by the European Commission whose goal was to pioneer the development of the next generation of adaptive systems, providing new forms of multi-dimensional adaptivity. The main challenge it faced was to instigate, increase and enhance engagement with digital humanities collections. To achieve this, it aimed at changing the way cultural artifacts are experienced and contributed to by communities.

    Role: Within CULTURA, I collaborated in the user-requirements analysis activities for developing models and systems able to manage digital archives of illuminated manuscripts of interest to different domains, such as history of art, history of science, botany, astronomy and medicine.

  • PROMISE

    2010 - 2013

    Participative Research labOratory for Multimedia and Multilingual Information Systems Evaluation

    PROMISE aimed at providing a virtual laboratory for conducting participative research and experimentation to carry out, advance and bring automation into the evaluation and benchmarking of complex multilingual and multimedia information systems. It did so by facilitating the management of the collected experimental data and by offering access, curation, preservation, re-use, analysis, visualization and mining of those data.

    Role: Participant in the Department unit; I worked on the design and development of the PROMISE evaluation infrastructure.