
Software Support and Artificial Intelligence - Part 2: The helpful co-worker model

CEO, Co-Founder
Aug 02, 2016

This post is part 2 of our Software Support and Artificial Intelligence series. You can find an overview of all 5 posts in our introduction post.
If you are reading this before all the posts have been published, make sure to sign up for our AI & Docs mailing list, where we send notifications when we publish these posts and similar content.

Documentation is under pressure because of a number of trends (funding issues, shorter release schedules,…). In my last post I argued that embedded documentation could be a solution for most of these challenges. In this post I want to take a step back and look at the root interaction pattern that our support models are derived from, and see how we could use this pattern to improve our documentation systems.

Shared intentionality: a deep-rooted teaching model

Humans love teaching: we intuitively play the point-and-name game with toddlers. Even super busy entrepreneurs make time to “pay it forward”, helping youngsters at the start of their professional journey. Teaching is a hugely satisfying activity that seems to be hardwired into human nature.

According to the shared intentionality hypothesis in cognitive science, learning through teaching - rather than simple mimicking - is one of the few behaviors that is (nearly) exclusively human.

We don’t always accept advice

But there is a catch: we don’t always want to be taught. While young children usually accept instruction from a teacher or parent without question, adults will often actively resist instructional advice from people who appear to be seeking too much favor or intentionally trying to change their behavior. Taking too much advice from the same person can even become unsettling. There are many reasons why we refuse advice - and consequently occasionally sabotage ourselves - but I feel that one of them is indebtedness: the feeling that we have to do something in return. But the question is: would we also feel that awkwardness when we ask a non-human for help?

Support from a helpful co-worker

I started thinking about this post after Write the Docs Portland, where Rob Ashby shared a key insight about (well-meaning) interruptive teaching patterns. Paraphrasing Rob: “People like having a friendly colleague around who can help them learn things. Someone in the office who you can ask for help whenever you get stuck. Someone who might occasionally pass by, even if you didn’t ask for it, look over your shoulder and give a tip. But we don’t like it when this becomes too pervasive, when that person starts paying so much attention to us that it becomes creepy / disturbing…”

AI in conversational UIs and the uncanny valley

The robotics professor Masahiro Mori identified the concept of the “Uncanny Valley” in the ‘70s. This phenomenon, later also identified in animated movies, describes a sudden drop in the empathy that some observers experience for a human-like agent as it becomes nearly human.

The Wikipedia article on the uncanny valley starts as follows:
“In aesthetics the uncanny valley is the hypothesis that human replicas that appear almost, but not exactly, like real human beings elicit feelings of eeriness and revulsion among some observers. Valley denotes a dip in the human observer’s affinity for the replica, a relation that otherwise increases with the replica’s human likeness. Examples can be found in robotics and 3D computer animation, among others.”

Now that chatbots are getting really close to passing the Turing Test, I think we might find that there is an equivalent of the uncanny valley in conversational UIs. When robots pretend to be human in a conversation, it can backfire massively once it is found out.

In fact I believe that Laura, my co-founder - wife - life-partner, might have experienced this with Adobe’s help service. We are not sure, but we think she spent about an hour talking with a chatbot, or a hybrid system of people and chatbots, that passed her around Kafka-style until the support chat was finally dropped. A similar experience followed when I posted about the episode on Twitter.

Example of our experience with Adobe's help service, as posted on Twitter

Avoiding irritation and creepiness in Artificial Intelligence

I think there are a few rules we will need to uphold to make sure AI doesn’t become irritating, deceiving, or outright creepy. The helpful colleague model can help us understand part of that puzzle. We will need to watch out for the following (anti-)patterns:

  • Don’t pretend to be human - just don’t...
  • Users are different - Build a model of each user’s appetite for interruptions
  • Make every interruption count - Be respectful of your user’s time
  • Make sure you serve useful content
  • Modulate the intrusiveness of the interruption based on the expected usefulness of a message (see the sketch after this list)
  • Empower your users - Wherever possible, let your user decide when they want to engage with your content
  • Finish the job - If an AI can’t solve the problem in a timely manner, involve a human

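To make the “modulate the intrusiveness” idea a bit more concrete, here is a minimal sketch in Python of how such a decision could work. Everything in it - the channel levels, the user model, and the thresholds - is a hypothetical illustration under my own assumptions, not a description of any existing product:

```python
from dataclasses import dataclass
from enum import Enum


class Channel(Enum):
    """Delivery channels, ordered from least to most intrusive."""
    NONE = 0        # don't show the message at all
    BADGE = 1       # passive indicator the user can click when ready
    INLINE_TIP = 2  # embedded hint next to the relevant UI element
    POPUP = 3       # interrupts the user's current task


@dataclass
class UserModel:
    """Very rough per-user 'appetite for interruptions' (0.0 - 1.0).

    In practice this could be learned from how often the user dismisses
    or follows up on earlier tips; here it is just a stored number.
    """
    interruption_appetite: float


def pick_channel(expected_usefulness: float, user: UserModel) -> Channel:
    """Choose how intrusively to deliver a tip.

    expected_usefulness: estimate (0.0 - 1.0) that the tip solves the
    user's current problem. The thresholds below are arbitrary examples.
    """
    score = expected_usefulness * user.interruption_appetite
    if expected_usefulness < 0.2:
        return Channel.NONE       # not worth the user's time at all
    if score > 0.6:
        return Channel.POPUP      # very likely useful, user tolerates interruptions
    if score > 0.3:
        return Channel.INLINE_TIP
    return Channel.BADGE          # let the user decide when to engage


# Example: a useful tip for a user who dislikes interruptions stays passive.
print(pick_channel(0.8, UserModel(interruption_appetite=0.2)))  # Channel.BADGE
```

The important point is not the exact formula, but that the same message can be delivered anywhere between “silent” and “interruptive”, and that the user’s own history of accepting or dismissing tips should feed back into the model.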
These points might sound obvious, but there are plenty of examples of organizations that embraced computer-mediated support only to have it backfire on them. One such example is Clippit, a.k.a. Clippy, the support feature Microsoft introduced in its Office suite. Contrary to popular opinion, I think Clippy was a great idea. I think it failed because it was too early: a number of technical limitations meant it couldn’t be done properly.

In the next post I will explain why I think we should bring back Clippy…




Other posts in this series:

Part 1: Documentation at a crossroads

Part 3: Clippy: misunderstood brilliance before its time

Part 4: Repeating Clippy’s mistakes with Walkthroughs

Part 5: Automated proactive support as embedded messages




Interested in AI for Documentation? Sign up below and we’ll email you updates on our research.


Kristof Van Tomme is an open source strategist and architect. He is the CEO and co-founder of Pronovix. He’s got a degree in bioengineering and is a regular speaker at conferences in the API, developer relations, and technical writing communities. He is the host of the Developer Success & the Business of APIs and the API Resilience podcasts.
