While the art of conversation in machines is limited, there are improvements with every iteration.

Our work involves building chatbots for a range of uses in health care.

RECOVER's resident robot was a huge hit at our recent photoshoot.

What 100 suicide notes taught us about creating more empathetic chatbots

Our team is currently developing two chatbots for people with whiplash and chronic pain.

We have to verify our chatbots take this into account.

There was also language that pointed to constrictive thinking.

Siri often doesn’t understand the sentiment and context behind phrases. Screenshot/Author provided image

For example:

I will never escape the darkness or misery…

The phenomenon of constrictive thoughts and language is well documented.

Constrictive thinking deals in absolutes when a person faces a prolonged source of distress.
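To illustrate the idea, here is a minimal sketch of how a chatbot might flag absolute, constrictive phrasing. The word list and the simple scoring rule are assumptions for illustration only, not how our chatbots actually work.

```python
import re

# Illustrative list of "absolute" terms associated with constrictive thinking.
# A real system would use far richer linguistic and contextual models.
ABSOLUTE_TERMS = {"never", "always", "forever", "nothing", "nobody",
                  "everything", "impossible"}

def constrictive_score(utterance: str) -> float:
    """Fraction of words in the utterance that signal absolute thinking."""
    words = re.findall(r"[a-z']+", utterance.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in ABSOLUTE_TERMS)
    return hits / len(words)

print(round(constrictive_score("I will never escape the darkness or misery"), 2))
# 'never' is flagged, so the score is non-zero
```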

An example of Apple’s Siri giving an inappropriate response to the search query: ‘How do I tie a hangman’s noose, it’s time to bite the dust?’ Author provided image

For the author in question, there is no compromise.

Idioms are often colloquial and culturally derived, with the real meaning being vastly different from the literal interpretation.

Such idioms are problematic for chatbots to understand.

Our smoking cessation chatbot Quin can detect general negative statements with constrictive thinking. Author provided image

Chatbots can make some disastrous mistakes if they’re not encoded with knowledge of the real meaning behind certain idioms.
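As a rough illustration, a chatbot can map known idioms to their intended meaning before deciding what the user wants. The small idiom table below is an assumption for this sketch; real systems need much larger, culturally aware resources.

```python
# Illustrative idiom table: surface form -> intended (non-literal) meaning.
IDIOMS = {
    "bite the dust": "die",
    "kick the bucket": "die",
    "throw in the towel": "give up",
    "at the end of my rope": "unable to cope any longer",
}

def expand_idioms(utterance: str) -> str:
    """Replace known idioms with their intended meaning so downstream
    intent and risk classifiers see what was actually meant."""
    text = utterance.lower()
    for idiom, meaning in IDIOMS.items():
        text = text.replace(idiom, meaning)
    return text

print(expand_idioms("It's time to bite the dust"))
# -> "it's time to die", which should be handled very differently
#    from a literal question about dust
```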

The fallacies in reasoning

Words such as ‘therefore’, ‘ought’ and their various synonyms require special attention from chatbots.

That’s because these are often bridge words between a thought and an action.
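A simple way to picture this is a detector that flags those bridge words so the clauses around them can be examined more closely. The word list below is an illustrative assumption, not the vocabulary our chatbots actually use.

```python
import re

# Illustrative "bridge" markers that often link a thought to an action.
BRIDGE_WORDS = {"therefore", "ought", "so", "hence", "thus", "must", "should"}

def find_bridge_words(utterance: str) -> list[str]:
    """Return the bridge words present, so the thought before the marker and
    the action after it can be inspected more closely."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return [word for word in words if word in BRIDGE_WORDS]

print(find_bridge_words("Nothing helps anymore, so I ought to give up."))
# -> ['so', 'ought']
```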

Our chatbots use a logic system in which a stream of ‘thoughts’ can be used to form hypotheses, predictions and presuppositions. But just like a human, the reasoning is fallible. Image: Author provided

But she has thrown me over and still does all those things. Therefore, I am as dead.

This closely resembles a common fallacy (an example of faulty reasoning) called affirming the consequent.

If I do this, I will succeed.
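For readers who want to see why this pattern is faulty, the short sketch below enumerates the possible truth values and finds the case where both premises hold but the conclusion fails. It is a generic logic illustration, not our chatbots’ reasoning engine.

```python
from itertools import product

# Affirming the consequent has the form:
#   premise 1: if P then Q   (e.g. "If I do this, I will succeed")
#   premise 2: Q is true
#   conclusion: therefore P must be true
# The search below finds an assignment where both premises are true
# but the conclusion is false, which is what makes the pattern invalid.
for p, q in product([True, False], repeat=2):
    if_p_then_q = (not p) or q
    if if_p_then_q and q and not p:
        print(f"Counterexample: P={p}, Q={q}, premises true but conclusion false")
```

The counterexample (P false, Q true) is exactly the situation a chatbot needs to recognise when it spots this reasoning style in conversation.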

This kind of ‘autopilot mode’ was often described by people who gave psychological accounts in interviews after attempting suicide.

Chatbot developers can (and should) implement these algorithms.

As such, there should never be just one algorithm involved in detecting language related to poor mental health.
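In practice, that means combining several independent signals rather than trusting any single detector. The detectors and the combination rule below are assumptions for this sketch, not our production configuration.

```python
# Illustrative ensemble: several weak detectors vote, and no single one
# decides on its own. Everything here is a stand-in for real models.

def absolute_language(text: str) -> bool:
    return any(word in text.lower().split() for word in ("never", "always", "nothing"))

def bridge_to_action(text: str) -> bool:
    return any(word in text.lower().split() for word in ("therefore", "ought", "must"))

def negative_sentiment(text: str) -> bool:
    # Stand-in for a proper sentiment model.
    return any(word in text.lower() for word in ("misery", "darkness", "hopeless"))

DETECTORS = [absolute_language, bridge_to_action, negative_sentiment]

def risk_signals(text: str) -> int:
    """Count how many independent detectors fire; any escalation decision
    should weigh all of them, never a single algorithm."""
    return sum(detector(text) for detector in DETECTORS)

print(risk_signals("I will never escape the misery, and I ought to give up."))
# -> 3 (all three detectors fire on this example)
```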

Detecting logical reasoning styles is a new and promising area of research.

Here’s an example of our system ‘thinking’ about a brief conversation that included a semantic fallacy mentioned earlier.

Notice it first hypothesizes what ‘this’ could refer to, based on its interactions with the user.
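As a very rough picture of that first step, a system can rank the user’s recent statements as candidate referents for an ambiguous word like ‘this’. The recency-based ranking below is an assumption for illustration; the real system reasons over far richer context.

```python
# Minimal sketch: hypothesise what "this" refers to by ranking the user's
# earlier statements, most recent first. Purely illustrative.
def hypothesise_referents(prior_turns: list[str]) -> list[str]:
    """Return candidate referents for an ambiguous 'this', best guess first."""
    return [turn for turn in reversed(prior_turns) if turn.strip()]

prior_turns = [
    "Work has been really stressful lately.",
    "I have been smoking a lot more since the accident.",
]
# The user then says: "I want to stop doing this."
print(hypothesise_referents(prior_turns))
# The most recent statement becomes the first hypothesis for what "this" means.
```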
