Hybrid systems for explainable natural language understanding
The goal of the research is to build a high-quality hybrid system for explainable natural language understanding and to identify areas of question answering where this approach outperforms both pure machine learning and classical knowledge representation/reasoning approaches. The bulk of the research consists of building and integrating machine learning, natural language semantics, and symbolic reasoning components. The student will work as part of a team already actively engaged in this work.
Information and communication technology
Availability: This position is available.
School of Information Technologies
Department of Software Science
Application deadline: Applications are accepted between June 01, 2022 00:00 and June 30, 2022 23:59 (Europe/Zurich)
Commonsense reasoning has long been considered one of the holy grails of artificial intelligence. Despite its criticality for most language-oriented AI applications, it has remained elusive even as machine learning techniques have advanced. Our goal is to develop a leading hybrid – machine learning plus logic – system for explainable commonsense question answering (see [1, 2]) and to demonstrate that (a) high-performance explainable commonsense reasoning is achievable using hybrid systems, and (b) on natural language understanding and question answering tasks, hybrid systems composed of both symbolic reasoning and machine learning exceed the performance of either taken separately. Our group has developed advanced methods and systems [3–7] for handling confidences and exceptions – the soft aspects of symbolic reasoning – and is currently working on integrating existing large knowledge bases and building a suitable semantic parser for natural language.
Main supervisor: Prof. Dr. Tanel Tammet
Co-supervisor: Dr. Ago Luberg
Some of the possible research questions are as follows:
- What are the most advantageous ways of integrating machine learning and symbolic reasoning?
- How can additional statistical, uncertain rules be derived from existing knowledge bases?
- Can we advance the state of the art on natural language question answering benchmarks?
- How can machine learning techniques best guide the search for symbolic solutions?
- How can a suitably detailed knowledge base about the similarities and differences of words be built?
The PhD candidate is expected to contribute to theoretical aspects as well as to practical aspects such as developing and improving system components, planning and running experiments, and supervising students. More detailed tasks can be agreed upon based on the knowledge and experience of the potential PhD candidate.
Applicants should fulfil the following requirements:
- a master’s degree in computer science or a related field
- a clear interest in the topic of the position
- good programming skills
- excellent communication skills in oral and written English
- strong and demonstrable writing and analytical skills
- capacity to work both as an independent researcher and as part of an international team
- capacity and willingness to provide assistance in organizational tasks relevant to the project
[1] Marcus, G., Davis, E.: Rebooting AI: Building artificial intelligence we can trust. 2019.
[2] Kalyanpur, A., Breloff, T., Ferrucci, D.A., Lally, A., Jantos, J.: Braid: Weaving symbolic and statistical knowledge into coherent logical explanations. CoRR abs/2011.13354 (2020), https://arxiv.org/abs/2011.13354
[3] Tammet, T., Järv, P., Draheim, D.: GK: Implementing Full First Order Default Logic for Commonsense Reasoning (System Description). Accepted to IJCAR 2022, part of FLoC 2022.
[4] Tammet, T., Järv, P., Draheim, D.: Confidences for commonsense reasoning. In: Platzer, A., Sutcliffe, G. (eds.) Automated Deduction – CADE 28. CADE 2021. LNCS, vol. 12699, pp. 507–524. Springer (2021).
[5] Tammet, T., Sutcliffe, G.: Combining JSON-LD with First Order Logic. In: 2021 IEEE 15th International Conference on Semantic Computing (ICSC), pp. 256–261 (2021).
[6] Tammet, T., Draheim, D.: From sensors to Dempster-Shafer theory and back: The axiom of ambiguous sensor correctness and its applications. In: International Conference on Database and Expert Systems Applications, pp. 3–19. Springer (2020).
[7] Tammet, T.: GKC: A reasoning system for large knowledge bases. In: Fontaine, P. (ed.) Proc. of CADE 2019 – the 27th Intl. Conf. on Automated Deduction. LNCS, vol. 11716, pp. 538–549. Springer (2019).