Bots should identify themselves as bots
A few days ago I shared the weird and disappointing conversation I had with an AI bot at a local car dealer, and then something I heard on a podcast raised another interesting angle on it.
On that call, the car dealer worked very hard to make the bot sound like a human and gave no clear indication that it was a bot (though it was still fairly easy to tell). Is that the proper way to handle things? A recent episode of the “Stuff You Should Know” podcast says that it’s not.
On the show, Josh Clark shares some of Elizabeth Stokoe’s work and offers this simple advice:
“Bots should be recognizable and volunteer themselves as bots. Humans are humans. Keep the two separate.”
I like that idea. There is certainly a place where bots are acceptable, but they should be labeled as such. Trying to trick your customers into believing that a bot is a human is a weird thing to do, and starting a relationship with a potential client through deception just seems like a bad idea.
This kind of concept feels like something that the FCC might tackle someday, forcing companies to disclose when consumers are interacting with a robot versus a human, but we’ll see. In the meantime, why not do it anyhow?
