Coffee Love

Coffee Love single art (created by

I love coffee, music and science. Despite my best efforts, I am not very proficient at making music, and not for lack of trying, nor of support from my musically gifted father, who even gave me a top-quality guitar. I am, however, slightly better at ‘playing around’ (a euphemism for science-ing) with AI. So here is my attempt to have AI create a song about coffee.

The initial version of the lyrics was created by whatever LLM the song generator uses, prompted to write lyrics for a song titled “Coffee Love.” These initial lyrics were further refined by ChatGPT 3.5, with the prompt “We’re going to rewrite some lyrics. the structure of the lyrics needs to remain the same: 2 verses (8 lines) and a chorus. I want you to add references to Utrecht University and to the life of a researcher.” The resulting lyrics, listed below, were then fed back into the song generator, prompted to create a “Bouncy Funk” song.

The result is pretty catchy! So for the foreseeable future, I have a theme song for coffee runs at the university.

Coffee Love

[Verse 1]
Every morning, I rise with a scholarly yearning
For that scholarly cup, my intellectual awakening (yeah)
The aroma permeates, a scientific revelation
Gotta fuel my brain, Utrecht University’s invitation (oh-yeah)

[Verse 2]
I indulge, and it’s like unlocking wisdom’s gate
Stimulating my thoughts, a cognitive state (ooh yeah)
The bittersweet flavor, it’s my academic predilection
Can’t escape the allure of this research-driven addiction (oh-yeah)

[Chorus]
Coffee love, it’s a journey through scholarly pursuits (journey through scholarly pursuits)
An intellectual symphony, a drive like no other (oh-oh)
With every sip, I delve deeper into investigation (delve deeper into investigation)
Coffee love, my academic devotion (oh-yeah yeah)



A Feline Space Opera – A Short Comic by BingChat

Text and images generated by BingChat, initial creativity, prompt engineering and selection of results by Merijn.

Meet our hero, Captain Squeak. He is a brave and adventurous mouse who loves exploring the galaxy in his spaceship. He has a mission: to find the legendary planet of cheese, where he hopes to find his true happiness.

But Captain Squeak is not alone in his quest. He has enemies: the Furry Feline Friends. They are a cunning group of space spies who want to catch their arch nemesis, the mouse who always escapes their traps. They have a secret weapon: a giant laser cannon that can destroy anything in its path. They will stop at nothing to get their revenge.

The chase is on. Captain Squeak and the Furry Feline Friends are engaged in a fierce space battle, high above a red planet. The cat spaceship is bigger and stronger, but the mouse spaceship is faster and smarter. They fire lasers and missiles at each other, dodging and weaving through the asteroids. Who will win this epic showdown?

Disaster! The Red Claw, the lead cat spaceship, is hit by a feline-seeking missile from Captain Squeak. The villain, Commander Whiskers, quickly jumps through the main viewing screen to escape the blast. His life partner, a beautiful white-nosed patches cat named Fluffy, flashes before his mind’s eye. He remembers their first kiss, their wedding day, their last goodbye. Is this the end for him? Will he ever see her again?

Saved, for now. Commander Whiskers awaits rescue in his escape cat basket, pride hurt but feline fine. He had a close call, but he managed to grab an escape cat basket from the storage room before jumping out of the exploding spaceship. He hopes his crewmates are safe too. He watches as Captain Squeak flies away in his mouse spaceship, victorious. He clenches his paws and hisses: “I’ll get you next time, space mouse! You haven’t seen the last of me!”


I asked BingChat to take a look at this website and this is the resulting exchange. I’m amazed at the power and realism of the interactions with these LLMs and generative AI!




ChatGPT: No-all vs know-all

ChatGPT and similar new models like Stable Diffusion have absolutely amazing capabilities and potential. However, how useful are they in daily life? Clearly, for ideation, completion and playing around they are great, and many examples exist. But with Microsoft integrating ChatGPT into Bing, and Google scrambling to finish Bard, the question arises: “Why?”

To me, that does not make sense, as these models were not created with that goal in mind. Many Google searches are for information, or for a source that can give you the information you need. I argue that these models do not know enough about the world to be useful replacements for search engines. Enough has been said about the dangers of ChatGPT’s lack of citations and its certainty of being right when it is wrong. I argue that we should not use such models in this literal manner. Arguing is one thing, demonstrating is better, so… let me demonstrate in a manner that hopefully shows you why asking literal or real-world things is not the right way of using these cool new tools.

Imagine I want to navigate Utrecht (a moderate-sized city in the Netherlands). I want to go from Utrecht Central Station to the Dom tower in the historic centre. First, let me ask one of the largest image generation models for a map, using the prompt:

“A realistic map that is suitable for navigation of the city center of Utrecht in the Netherlands”

The result is somewhat similar to how I know Utrecht looks, with canals, concentric rings of roads and highways connecting to the outside. It might be my age, or having grown unaccustomed to map reading, but it would be a challenge to find my way with this map. And let’s pretend we are not further confused by the “Ciuh Cleck” title of the map and assume that it is AI-ish for Utrecht.

Let’s try again, this time asking for some sort of walking route indicated on the map:

“A realistic map that is suitable for navigation of the innercity center of Utrecht in the Netherlands, highlight the route from Utrecht Central Station to De Dom tower”

Again, not really helpful. But at least I now somewhat recognize the Dom tower on the left one… wait no, the Dom looks different.

OK, let’s move to ChatGPT (finally). Perhaps it can give me some walking instructions. Prompting:

“Give me walking directions from Utrecht Central Station to De Dom tower in Utrecht” does give a very reasonable-sounding list of directions.

I am impressed! At first glance, the directions seem reasonable, but the proof is in the pudding. Let’s try walking these instructions.

  1. Exit the main entrance of Utrecht Central Station and turn right onto the Stationsplein.
  2. Cross the street and continue straight ahead on the Stationshal.
  3. Follow the signs for the Hoog Catharijne shopping center and take the escalator up to the shopping center.

Until step three it is going well, although I was a bit confused about how to exit the train station and… there is no escalator up to the shopping centre, but hey… you don’t see that on a map:

Keep going, we are doing well!

  4. Once inside the shopping center, follow the signs for the Vredenburg passage and exit the shopping center through the Vredenburgplein exit.
  5. Turn left onto Vredenburg and continue straight ahead.
  6. At the end of Vredenburg, cross the street and turn right onto Lange Viestraat.

OK, knowing my way around Utrecht I would have turned right on Vredenburg… and most tourists would too as you can pretty much see the Dom tower from there. But perhaps ChatGPT is on to something (a nice tour of the city perhaps)… or maybe not (as you can already see we will be here again). Carrying on:

  7. Continue straight ahead on Lange Viestraat, which will turn into Voorstraat.
  8. Continue on Voorstraat until you reach the Oudegracht canal.
  9. Turn left onto Oudegracht and continue straight ahead, keeping the canal on your left.
  10. You will soon see the Dom tower on your left, at the end of the Oudegracht.
  11. To reach the entrance of the Dom tower, cross the street when you see the tower and turn left onto Domstraat. The entrance is a few steps ahead on your right.

Wow! Eventually, you do get to the Dom tower (with some very liberal interpretation of step 9’s “keep the canal on your left”). The walk will take you considerably longer than the advertised 20-25 minutes though. I am impressed that it got me to my destination, but for now, I will stick to Google or paper maps.
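As a quick sanity check of that time estimate, here is a minimal sketch computing the straight-line distance between the station and the Dom Tower with the haversine formula. The coordinates are approximate values I am assuming, not taken from this post:

```python
# Straight-line (great-circle) distance between Utrecht Centraal and the
# Dom Tower, via the haversine formula.
# NOTE: the coordinates below are approximate assumptions of mine.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

dist_km = haversine_km(52.0894, 5.1100, 52.0907, 5.1214)  # station -> Dom
print(f"Straight-line distance: {dist_km:.2f} km")
```

The direct distance comes out well under a kilometre, so even the advertised 20-25 minutes already allows for quite a detour, let alone the scenic loop ChatGPT suggested.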

This demonstration hopefully shows that amazing models such as ChatGPT are created with a specific task in mind, and that outside of this task they cannot perform well. I think the fact that ChatGPT can still perform tasks outside the domain of “guess the next word” is absolutely amazing, and I can see the appeal of trying what it can and cannot do, as evidenced by this blog post. However, it is something else to put it in the place of a search engine, where people have grown accustomed to finding (mostly) correct information. Google and other search engines do not know all, but clicking on a link in the search results brings you to a website where you can somewhat establish the credibility of the information. With generative models like ChatGPT, this is considerably harder. Does that mean we should not use this technology at all? Let’s ask ChatGPT and Google… I leave the trial walking up to you:




In the Media – We\Visit

We\Visit makes virtual hospital visits possible for families and patients when hospital access is limited. In this ad-hoc corona project, TU Delft engineers, designers and psychologists developed an online communication platform that connects families with their loved ones in difficult times. With We\Visit, remote hospital visits are made possible by scheduling video calls when patients or family members are in isolation, or when visits to the care facility are limited or even cancelled.

The media picked up on this story where a very human problem is solved with technology…

TU Delft stories:

Reinier de Graaf gasthuis website (updated to brightcare, Apr 2021):

ICT & Health:


Algemeen Dagblad (AD):

Reformatorisch Dagblad:


We\Visit – Hospital visits made possible

“Visiting a loved one who’s critically ill: normally that’s possible, but due to the coronavirus, visiting ICUs is no longer permitted. The result is a lonely fight behind closed doors. In an attempt to combat the loneliness, researchers at Delft University of Technology have come up with We\Visit: a tool that facilitates video-calling by appointment. And it was developed at lightning speed after the Reinier de Graaf hospital sounded the alarm.”

Find out more about our contribution to the corona crisis, and the TU Delft story, in the article here.


Delft University of Technology

Wow, I got a position at the Delft University of Technology in the Interactive Intelligence group. I will work on the 4TU Pride & Prejudice project. We aim to use Artificial Social Agents (ASAs), such as chatbots, virtual agents or social robots, to support people who struggle to maintain a healthy lifestyle and who may suffer from welfare diseases, such as diabetes. We support these people and their social group with ASAs to reduce prejudice and increase pride. For example, a diabetic is upset with a friend who did not accommodate their specific diet at a party, while the friend is embarrassed because they forgot the diet. An ASA can obtain information from a patient and redistribute it strategically in a social group, taking into account what information can and should (not) be shared. In the example, the friend could, without shame, ask the ASA all about the diet and then proudly accommodate the patient’s dietary needs. The patient would be pleased that their diet is accommodated at the party without having to publicly ask for it.

Cool to start a new project at a new university, but I will miss Twente and the people there…


Visiting researcher – Western Sydney University

Sydney, Australia is about as far away from home as possible. The trip to attend the IVA2018 conference in Sydney was long. So why not stay for a little while to make some new (science) friends? “I was here anyway,” I told Anton Bogdanovych, who was kind enough to host me at Western Sydney University.


Conference Intelligent Virtual Agents – Flipper 2.0

Flipper 2.0 aims to help developers of embodied conversational agents (ECAs) quickly and flexibly create dialogues. Jelte van Waterschoot presented our paper on Flipper at the International Conference on Intelligent Virtual Agents in Sydney, Australia.

During the conference, I organised a workshop on methodology. We also had great keynotes by, amongst others, Hiroshi Ishiguro and Mark Sagar. Mark’s talk in particular was fascinating, with a live demo of his virtual baby. Mind-blowing to see the detail and possibilities of the neural network controlling the ‘baby’. To finish the conference on an even more positive note for us: we were awarded a best paper award during the conference dinner.


The Council of Coaches in Twente

Visiting European research project partners is great: seeing the places, trying the food, meeting the people. It is only fair to return the favour from time to time. In the Council of Coaches project, many partners are from Twente, so it makes a lot of sense (fewer people travelling) to have meetings in Twente. We were the hosts for two technical integration meetings, April 23-26 and October 8-12. We had guests from the University of Dundee in the UK, Sorbonne University in France, the Polytechnic University of Valencia in Spain, and the Danish Board of Technology Foundation.

Continue reading


“Robots but social ones, right…?”

Working in science is awesome, but sometimes, often at a party, it can be challenging to explain what it is that I do. I studied psychology and now work with, you know… “robots but social ones”, as someone put it. Roughly two groups of people ask about my work and job: 1) those who are very interested in novel technology and psychology, and 2) those who are scared this new technology is dangerous and “might steal their job”… or worse. I love talking about my job, so I always 1) explain as much as people find interesting and 2) reassure them that robotic overlords are unlikely to arrive soon (and that they will probably have an off-switch). In this post, you can read what I might say if you ask me at a party, “so you work with robots, right…?”

Continue reading