Dr Luca Longo

Nominated Award: AI Person of the Year
LinkedIn profile: https://www.linkedin.com/in/drlucalongo/
Technological University Dublin (TU Dublin) is Ireland’s first technological university. It is the second-largest third-level institution in Ireland, with a student population of 28,500. The university asserts an entrepreneurial ethos and an industry-focused approach, with extensive collaboration with industry in research and teaching. The flagship campus is located in Grangegorman, Dublin, with two other long-term campuses in Tallaght and Blanchardstown. The university is overseen by a Governing Body appointed under the Technological Universities Act, with representation for staff, undergraduate and postgraduate students and the local Education and Training Boards, along with the President of the university, an external chairperson and other external members appointed by the Governing Body and by the Minister. TU Dublin holds an Athena SWAN Bronze Award for its commitment to advancing gender equality in STEM and is one of the eight members of the European University of Technology (EUt+), a “transnational alliance” of universities.
Reason for Nomination
Dr Luca Longo is again the general chair of this year’s Irish Conference on Artificial Intelligence and Cognitive Science (https://aics2022.mtu.ie, 2022, now in its 30th edition), demonstrating his commitment to engaging and serving the wider Irish research community and to expanding its reputation within Europe. An active researcher in Artificial Intelligence, founder of the Artificial Intelligence and Cognitive Load (AICL) Research Lab at Technological University Dublin (https://lucalongo.eu/LongoLab.php) and mentor of doctoral and postdoctoral scholars, he has recently achieved international recognition in the field of Explainable Artificial Intelligence (XAI) [1-4, among others].
XAI is a field at the intersection of neural paradigms for learning and symbolic paradigms for inference and reasoning. Machine learning in general, and deep learning in particular, are sub-fields of AI aimed at learning models from high-dimensional data for classification, categorisation, prediction and recommendation. However, the resulting models are frequently considered black boxes because they are hard to interpret and their predictions difficult to explain. Dr Longo’s active research is devoted to complementing the great capacity of deep learning methods to learn from data with explainability techniques that resemble human reasoning, enhancing the interpretation of the resulting models and supporting the explanation of their predictions. His disruptive and original research seeks to equip computational models induced by deep learning methods with argumentative capabilities that non-experts can use to explain and better justify their predictions. Integrating these two worlds is a great challenge for theoretical and applied AI research, often referred to as Neuro-Symbolic integration [12]. In detail, his research on argumentation theory (AT) and defeasible reasoning [5-11, among others] has gained interest in Artificial Intelligence because it provides the basis for computational models inspired by the way humans reason. AT focuses on how pieces of evidence, seen as arguments, can be represented, supported or discarded in a reasoning process, and it investigates formal models for assessing the validity of the conclusions reached. This paradigm can then provide an automatically generated interpretative layer over black-box models, supporting the explainability of their inferences and predictions [13-14].
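To make the argumentation-theoretic machinery concrete, below is a minimal, hypothetical sketch in Python (not Dr Longo’s own implementation) of a Dung-style abstract argumentation framework: arguments attack one another, and the set of collectively defended arguments, the grounded extension, is obtained by iterating the framework’s characteristic function from the empty set. The argument names and the toy attack relation are invented purely for illustration.

```python
from typing import Dict, Set


def grounded_extension(attacks: Dict[str, Set[str]]) -> Set[str]:
    """Compute the grounded extension of an abstract argumentation framework.

    `attacks` maps each argument to the set of arguments it attacks. An argument
    is accepted when each of its attackers is counter-attacked by an argument
    that is already accepted (i.e. the argument is defended).
    """
    # Collect every argument mentioned as an attacker or a target.
    arguments = set(attacks) | {b for targets in attacks.values() for b in targets}
    # Invert the relation: who attacks each argument?
    attackers = {a: {b for b in arguments if a in attacks.get(b, set())}
                 for a in arguments}

    accepted: Set[str] = set()
    while True:
        # Characteristic function: arguments defended by the current set.
        defended = {a for a in arguments
                    if all(attackers[b] & accepted for b in attackers[a])}
        if defended == accepted:  # Least fixed point reached.
            return accepted
        accepted = defended


if __name__ == "__main__":
    # Hypothetical example: A = "the model's prediction is reliable",
    # B = "the input looks out of distribution" (attacks A),
    # C = "the input passed the distribution check" (attacks B).
    framework = {"A": set(), "B": {"A"}, "C": {"B"}}
    print(grounded_extension(framework))  # -> {'A', 'C'}: A is reinstated by C.
```

In the research described above, argumentative structures of this kind act as the interpretative layer placed over black-box models, so that a prediction can be traced back to arguments that a non-expert can inspect and contest [13-14].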
The impact of his research on Explainable AI is highly relevant to today’s increasingly AI-dependent world and, in particular, to giving users insight into the “why” behind model predictions. This offers users the potential to better understand and trust a black-box model, recognise the correctness and validity of its predictions and, in turn, better inform and support human decision-making. It also helps characterise the accuracy, fairness and transparency of black-box models and uncover potential bias in their inferences, for example bias based on race, gender, age or location. XAI is essential for organisations building confidence in automatically generated models, especially when putting them into production. It supports a responsible approach to AI development, helps developers ensure that a model works as expected and meets regulatory standards, and allows people, especially those affected by a model’s predictions, to challenge them. XAI enables model monitoring and auditability, supports the accountability of AI, discourages blind trust in models and promotes the productive use of AI. This includes the mitigation of compliance, legal, security and reputational risks. XAI is one of the key requirements for implementing responsible AI at scale, affecting a large share of the world’s population, especially in highly developed countries, and supporting private and public organisations in embedding ethical principles into live applications and processes. In summary, Dr Longo’s research can help solve the very-large-scale problem of opaque AI by making AI operational with trust and confidence, speeding time to results for the optimisation of business outcomes, and mitigating the risk and cost of model governance through transparency and compliance with the law.
Unfortunately, the above problem is not trivial, and the likelihood that someone else has already solved it is very low. Solving it requires skills at several levels, including knowledge of the ‘learning’ components of Artificial Intelligence, knowledge of its ‘reasoning’ components, and skills in Human-Computer Interaction, since the ultimate consumers of explanations are humans. The research currently undertaken by Dr Longo and his inter- and multidisciplinary team of scholars, whose skills span these disciplines of AI and HCI [https://lucalongo.eu/publications/publications.php], is positioned to expand the range of solutions to this problem, advancing the state of the art with tangible outcomes.
Additional Information:
[1] Longo, L., Goebel, R., Lecue, F., Kieseberg, P., & Holzinger, A. (2020, August). Explainable artificial intelligence: Concepts, applications, research challenges and visions. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction (pp. 1-16). Springer, Cham.
[2] Vilone, G., & Longo, L. (2020). Explainable artificial intelligence: a systematic review. arXiv preprint arXiv:2006.00093.
[3] Vilone, G., & Longo, L. (2021). Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76, 89-106. [Impact factor 17]
[4] Vilone, G., & Longo, L. (2021). Classification of explainable artificial intelligence methods through their output formats. Machine Learning and Knowledge Extraction, 3(3), 615-661.
[5] Longo, L. (2016). Argumentation for knowledge representation, conflict resolution, defeasible inference and its integration with machine learning. In Machine Learning for Health Informatics (pp. 183-208). Springer, Cham.
[6] Longo, L. (2015). A defeasible reasoning framework for human mental workload representation and assessment. Behaviour & Information Technology, 34(8), 758-786.
[7] Longo, L. (2014). Formalising Human Mental Workload as a Defeasible Computational Concept (Doctoral dissertation, Trinity College).
[8] Longo, L., & Hederman, L. (2013, October). Argumentation theory for decision support in health-care: a comparison with machine learning. In International conference on brain and health informatics (pp. 168-180). Springer, Cham.
[9] Longo, L., & Dondio, P. (2014). Defeasible reasoning and argument-based systems in medical fields: An informal overview. In 2014 IEEE 27th International Symposium on Computer-Based Medical Systems (pp. 376-381). IEEE.
[10] Rizzo, L., & Longo, L. (2018, November). Inferential Models of Mental Workload with Defeasible Argumentation and Non-monotonic Fuzzy Reasoning: a Comparative Study. In AI3@ AI* IA (pp. 11-26).
[11] Rizzo, L., & Longo, L. (2020). An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems. Expert Systems with Applications, 147, 113220.
[12] Hamilton, K., Nayak, A., Bozic, B., & Longo, L. Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing? A Structured Review. Semantic Web Journal, IOS Press.
[13] Vilone, G., & Longo, L. (2022). A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence. In Artificial Intelligence Applications and Innovations: 18th IFIP WG 12.5 International Conference.
[14] Rizzo, L., & Longo, L. (2018). A Qualitative Investigation of the Explainability of Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning. In 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science.
For additional information on his published material, see: https://scholar.google.com/citations?user=oBqRuY8AAAAJ&hl=en