Human-Environment Interaction in Cognitive Buildings through eXplainable AI
Abstract
Cognitive buildings represent a new era of structures that can react to their surroundings and adapt to the needs of their occupants. This paradigm is driven by a mixture of enabling technologies, such as AI, ML, IoT, and data science, combined with a human-centric approach. The latter ensures that buildings enhance the quality of life, marking a new era in the relationship between inhabitants and their physical environment. The amount and complexity of the data such buildings generate make them ideal candidates for sophisticated ML models. However, such models frequently lack interpretability: we do not understand how they reach their decisions. Establishing trust between occupants and cognitive buildings is therefore essential, as a trusted environment can lead to greater acceptance and appreciation of these advanced capabilities. eXplainable Artificial Intelligence (XAI) is a concept that aims to close this gap.
Research field: Information and communication technology
Supervisor: Prof. Dr. Juri Belikov
Availability: This position is available.
Offered by: School of Information Technologies, Department of Software Science
Application deadline: Applications are accepted between October 01, 2024 00:00 and October 25, 2024 23:59 (Europe/Zurich)
Description
We are seeking a prospective PhD candidate who will work on developing novel human-centric, data-driven, and XAI-based methods for cognitive building applications. The successful candidate will focus on exploring how AI, machine learning, and data science can be leveraged to create intelligent control systems that adapt building environments to the needs and preferences of occupants, with the following general objectives:
- Develop data-driven algorithms that analyze data from sensors and user interactions to personalize various building parameters for improved comfort, productivity, and well-being (an illustrative sketch follows this list).
- Create interactive and responsive environments within cognitive buildings.
- Propose new evaluation metrics for the quality of explanations.
- Develop new standardization approaches and clear definitions suitable for different focus groups.
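
To make the first objective concrete, the following minimal sketch (not part of the position requirements) illustrates how a data-driven comfort model trained on building sensor data might be paired with a post-hoc XAI method such as SHAP, in the spirit of [2] and [4]. The data, feature names, and library choices (scikit-learn, shap) are illustrative assumptions, not a prescribed toolchain.

```python
# Illustrative sketch only: a toy comfort model over synthetic sensor data,
# explained with SHAP. All features and values here are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)
n = 500

# Synthetic readings: indoor temperature (deg C), CO2 (ppm), illuminance (lux)
X = np.column_stack([
    rng.normal(22.0, 2.0, n),     # temperature
    rng.normal(600.0, 150.0, n),  # co2
    rng.normal(400.0, 100.0, n),  # illuminance
])

# Toy occupant comfort score: penalizes deviation from 22 deg C and high CO2
y = -np.abs(X[:, 0] - 22.0) - 0.002 * X[:, 1] + rng.normal(0.0, 0.1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc explanation: attribute a prediction to the individual sensors,
# the kind of transparency occupants and facility managers would need.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
for name, contribution in zip(["temperature", "co2", "illuminance"],
                              shap_values[0]):
    print(f"{name}: {contribution:+.4f}")
```

In actual research, the synthetic data would be replaced by real IoT sensor streams and occupant feedback, and the quality of the resulting explanations would itself be assessed, connecting to the evaluation-metrics objective above.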
Main responsibilities of the prospective PhD candidate:
- Publish and present scientific articles in top-tier journals and at international conferences.
- Assist in relevant teaching activities and co-supervise students.
- Contribute to the goals of the Centre of Excellence in Energy Efficiency (ENER grant TK230) funded by the Estonian Ministry of Education and Research.
Requirements:
- M.Sc. degree or equivalent in Computer Science, Mathematics, or a related field.
- Clear interest in the research topic, demonstrated through a motivation letter and supported by a research plan.
- Proficiency in Python and MATLAB programming.
- Excellent English communication skills, both written and verbal.
- Strong analytical and research skills.
- Capacity to work independently and collaboratively in an international team.
- Optional: Experience with ML and AI, showcased through GitHub projects.
References:
[1] L. Heistrene, R. Machlev, M. Perl, J. Belikov, D. Baimel, K. Levy, S. Mannor, and Y. Levron. Explainability-based Trust Algorithm for electricity price forecasting models. Energy and AI, 14, p. 100259, 2023. DOI: 10.1016/j.egyai.2023.100259.
[2] R. Machlev, L. Heistrene, M. Perl, K. Y. Levy, J. Belikov, S. Mannor, and Y. Levron. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: review, challenges and opportunities. Energy and AI, 9, p. 100169, 2022. DOI: 10.1016/j.egyai.2022.100169.
[3] R. Machlev, M. Perl, J. Belikov, K. Y. Levy, and Y. Levron. Measuring explainability and trustworthiness of power quality disturbances classifiers using XAI—Explainable Artificial Intelligence. IEEE Transactions on Industrial Informatics, 18(8), pp. 5127-5137, 2022. DOI: 10.1109/TII.2021.3126111.
[4] M. Meas, R. Machlev, A. Kose, A. Tepljakov, L. Loo, Y. Levron, E. Petlenkov, and J. Belikov. Explainability and transparency of classifiers for air-handling unit faults using explainable artificial intelligence (XAI). Sensors, 22(17), p. 6338, 2022. DOI: 10.3390/s22176338.