Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
Voice-assistant-powered dialogue engines have previously been deployed in a number of commercial and governmental technology pipelines, with widely varying levels of complexity. In our view, this complexity can be understood as a problem of analysing unstructured dialogues. ELOQUENCE’s key objective is to better comprehend such unstructured dialogues and translate that understanding into explainable, safe, knowledge-grounded, trustworthy, and bias-controlled language models.
We envision developing a technology capable of learning on its own: adapting from very data-limited corpora to efficiently support most EU languages, and moving from a sustainable computational framework to efficient, green-powered architectures. In essence, we hope it serves as a guide for all European citizens while being respectful and reflecting the best of our European values, specifically supporting safety-critical applications by keeping humans in the loop.
Overall, the ELOQUENCE project builds on, and aims to improve upon, prior achievements in the domain of conversational agents, e.g. recently launched and publicly available Large Language Models (LLMs) such as ChatGPT (including its more recent versions) or LaMDA, most of which have been developed in non-EU countries. Together with key industrial enterprises from Europe (i.e. Omilia, Telefonica, Synelixis), ELOQUENCE will validate the developed technology through:
(i) safety-critical scenarios with a human in the loop for security-critical applications (i.e. emergency services in call centres); and
(ii) lower-risk autonomous scenarios, in which smart home assistants perform information retrieval and fact-checking against an online knowledge base.
ELOQUENCE will target the R&D of these novel conversational AI technologies in multilingual and multimodal environments, and will demonstrate them in several pilots.