A new year is approaching and a new project appears: Council of Coaches! In this European Horizon 2020 project we will look at virtual coaching. We will create an autonomous council of multiple virtual coaches that assist people in achieving their health goals. These coaches will discuss the user's health situation with each other and with the user. In this way we hope to positively influence the user's lifestyle in a fun and natural manner.
In the project we will advance the state of the art in embodied conversational agents by enabling fluent multi-party interaction between multiple coaches and our users. The best thing, in my opinion, is that we will be re-using some of the work we have been doing in the ARIA VALUSPA project. In that project, I worked on a dialogue management system for virtual agents that are experts on a topic. These agents can use their expertise to explain their topic to novice users. In the conversation, the agent takes into account the user's emotional and cognitive state (for example, their interest in the topic under discussion). Re-using technologies from other projects is a great way to ensure continuity of work and knowledge.
Reflecting on the technology that we are developing is crucial, especially when developing technology for sensitive users such as users with health issues. Technology does not appear to be ready to take the place of care professionals, and to me it seems undesirable that this should ever happen. Being comforted by another human, who can empathically imagine the situation you are in, is irreplaceable. Should one of your loved ones need care and attention, it is surely comforting to know they get it from empathic, kind and real human professionals.

So why get involved in a project like Council of Coaches? Technology such as virtual humans can help by supporting the (health) care professional and the patient. Unlike humans, technology doesn't need sleep. A virtual human will stick by the user's side, unlike a human who might have to stop replying or leave the conversation for whatever reason. A virtual agent will tirelessly work to direct the user to the right information, or repeat information until the user understands. In this way, technology can free up overworked care professionals and offer patients nearly unlimited time and attention. A final consideration is that perhaps technology should be 'aware' of its limitations. Maybe, when a system is confronted with a distressed user, it should not try to comfort the user on its own. It might be best to call for a human care professional who can offer the user 'real human' comfort. In the Council of Coaches project I will find (and hopefully also address) such ethical considerations. Excited!