Emotional Computing, Captology And The Social Web Of Things, Part 1
I attended FutureEverything Festival 2011 and was very engaged by several of the talks. There was a solid exploration of Open Data and enough speculative forays to sate my appetite. One regrettable cancellation was Toby Barnes's talk on Emotional Computing (which was also due to feature Ben Bashford), which I had been looking forward to. (Toby had to withdraw after suffering an injury, and I wish him a very speedy recovery.) Toby's talk would have been the most explicit engagement with emotional computing on offer at FutureEverything, but Philter Phactory's Weavrs and Chris Speed's 'Tales Of Things' can also be bracketed as asymptotic meditations on the subject.
Even though the talk never took place, there is an element of that area upon which I still wish to wax here. First I'll quote the description of the talk:
Smart products, interactive screens and digital ‘things’ are everywhere. They are becoming more connected, more civilised, more polite. Objects are being programmed to display human characteristics, personalities, emotions and almost lifelike behaviour.
This conversation will look at some of the ideas around designing for networked objects and human/computer relationships. How can the uncanny valley of human mimicry be avoided? How can products display human traits in engaging ways? Is it possible?
In other words is this where we’re headed?
One of the main questions raised by this description is: do we want a networked series of objects to display emotions? Would it not be better to focus on the affinities we already have with objects and networked existence, and augment them? Case in point: the affinity between human and horse (and I hope I am not treading on any animal lovers' toes here). There is a reciprocal transfer of something between both: for us, we can say we feel emotion towards the horse, and the horse displays something back to us which may or may not be emotion insofar as the horse is concerned. The horse doesn't need to display anger or affection in human emotive terms (which is to say the traits we recognise in another human when they display said emotions) for there to be a strong affinity between it and its owner, so why do we need to program emotions into the objects with which we will share our digital existence? [for more on phenomenological considerations of horse:human coexistence see Ann Game's Riding]
The general arc developed within emotional engagements with software agents forms the basis for a very engaging recent hour of 'RadioLab'. It's a dizzying hour of entertainment which begins with Eliza, the first instance proper of a software 'bot', and concludes with Bina, the creation of Hanson Robotics and Martine Rothblatt (who incidentally has led such a fascinating life: click here for more). Along the way Freedom Baird from MIT weighs in with the concept of an 'emotional Turing test': the litmus test of whether an object can appear sentient to us. The discussion becomes a little mired when discussing simulation versus reality (the developer of the Furby basically trots out the intractable argument of the philosophical zombie: if we can program something to appear human, who are we to say it isn't human/emotionally intelligent). Credit to Jad Abumrad and Robert Krulwich for escaping that particular tar baby, and especially to Jad for his concise summary:
“Maybe they don’t have to go all the way. Eliza was just a hundred lines of code and people poured their hearts out to it… these things don’t have to be very good, because they’ve got us… and our programming, which is that we’ll stare anything in the eyes and say ‘let’s connect’… so they’re going to cross the line (between real/simulated) because we’ll help them across”
This basically notes how easy it is for interfaces to parasitise the expectations we humans have of whatever we interact with. The question yet to be answered is whether a multi-dimensional approach to understanding the nuances of conversational exchange can be implemented computationally.
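It is worth seeing just how little machinery Jad's point requires. The sketch below is not Weizenbaum's original script, just a minimal, illustrative Eliza-style bot in a few lines of Python: ranked regex patterns, a first/second-person "reflection" swap, and a content-free fallback. All pattern and response choices here are my own assumptions.

```python
import re

# Ranked pattern -> response templates (illustrative, not Weizenbaum's script).
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first and second person so the echo reads as a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # Default: a content-free prompt that keeps the user talking.
    return "Please go on."
```

The striking thing is that `respond("I feel lonely")` already feels like attention. None of this is understanding; it is our own readiness to "help them across the line" doing all the work.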
Just don’t imagine robotic orgasms while watching this video of the Geminoid.
This is where the area of captology (a backronym of sorts, standing for Computers As Persuasive Technology + “ology”) is interesting, predisposed as it is towards using computers to influence behaviour through quantifiable metrics of human attention and behaviour. It gets close (but not close enough, in my opinion) to understanding that what might be required are mind hacks or body language hacks which utilise what we communicate infra-verbally and subliminally to one another. And this would not be a case of designing a computer to mirror human body language, but one which can recognise it and subsequently respond in a manner befitting how a computer (and its particular set of interface affordances) should respond to human body language. We don’t have to lock emotionally engaging interaction into the wetware interface which currently holds the monopoly.

For all the astonishing work accomplished within these automata (see above), I think they’re going down a dead end. The uncanny valley is too big a factor here. Why do we need to enflesh our software companions? The Ericsson video at least had software personalities inhabiting otherwise aesthetically unaltered quotidian objects (but even at that, such anthropomorphisation makes me uneasy). The rationale behind Geminoids is to explore how people respond emotionally to computers: the beauty of the universal machine is that such Data-like androids are not impossible. But that direction of research seems a little like the apocryphal story detailing NASA’s solution of how to write in space.
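The distinction I am gesturing at — recognising human body language but answering in the machine's own register rather than mimicking ours — can be made concrete with a toy sketch. Every cue name and response below is hypothetical, invented for illustration; there is no real captology API being used here.

```python
# Hypothetical mapping from detected body-language cues to responses drawn
# from a device's own affordances (timing, light, information density) --
# deliberately NOT simulated human emotion.
RESPONSES = {
    "gaze_away": "pause_playback",       # user looked away: hold, don't plead
    "leaning_in": "raise_detail_level",  # engagement: surface more information
    "fidgeting": "shorten_session",      # restlessness: wind the interaction down
}

def respond_to_cue(cue):
    # Unknown cue: do nothing, rather than fake an emotional display.
    return RESPONSES.get(cue, "no_op")
```

The design point is in the fallback: where an anthropomorphic agent would perform a feeling, a machine respecting its own affordances can simply, politely, do nothing.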