Communication in an Algorithmic World

People like communicating with people. People don’t like communicating with machines.

There – I’ve said it. In a technology-obsessed world, those statements seem almost heretical. Now does that mean the current debate around Enterprise Social versus #chatops has been convincingly won? Not quite.

1. Start Simple

Let me start off with a simple example of communication between a person and a machine. Before you write this off as ‘too simplistic’, stick with me while we work up to the real-world implications:

  • The person can press a button.
  • The machine shows the person one of two colored lights.

In this trivial example, the person wants to change the color of the light. People have learnt that pressing the button, perhaps after a short time delay, changes the light. People know they are communicating with a machine, and have created a simplistic yet effective model for it: “press button, color changes”.

Traffic lights have functioned this way for decades. Yet the number of people you see pressing the button furiously, in the vain hope that it will ‘recognize’ their urgency, is surprisingly high. People press the button when they’ve seen someone else press it, just in case there’s a ‘democratic’ effect.
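To make that mental model concrete, here’s a minimal sketch of the button-and-light machine as a two-state toggle. The names (SimpleLight, press_button, tick) are my own illustration, not any real traffic controller’s API:

```python
class SimpleLight:
    def __init__(self):
        self.color = "red"       # the machine shows one of two colored lights
        self.requested = False   # whether a button press is pending

    def press_button(self):
        # Every press simply sets the same flag: the machine has no
        # notion of urgency, so furious pressing changes nothing.
        self.requested = True

    def tick(self):
        # Called by the controller after the short time delay.
        if self.requested:
            self.color = "green" if self.color == "red" else "red"
            self.requested = False

light = SimpleLight()
for _ in range(10):       # press the button furiously...
    light.press_button()
light.tick()              # ...and the outcome is the same as one press
print(light.color)        # -> green
```

However many times the button is pressed, the machine sees a single boolean. The nuance people pour into their pressing simply has nowhere to go.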

People take what they know to be a very simple machine, and want to communicate with it as if it has empathy.

Now what if I told you that the machine isn’t so simple? This one traffic light is actually part of a network of traffic lights, all connected by a complex algorithm. Maybe even an intelligent algorithm. Will knowing that it’s complex stop people repeatedly pressing the button? No.

Even with the smartest of algorithms, unless you could see that the traffic lights were controlled by a real person, you would continue to approach this as an interaction with a machine – an interaction which, despite its predictable certainty, we consider the poor cousin of human-to-human engagement.

2. Turing Test

So how smart do I think traffic lights are? In an ideal world, I would want them to be ‘intelligent’ and recognize all those nuanced button-pushing behaviors. Computer scientists have been grappling with this for years. Alan Turing (of The Imitation Game) came up with the acid test – have one system which is controlled by a human, another which is a machine. If the user can’t tell which is which, you’ve solved the puzzle!

Early implementations of #chatops haven’t even bothered attempting the Turing Test. The onus is on the user to learn a complicated set of instructions to achieve automated outcomes. As things progress, the temptation is to make the machine agents respond to natural human language, much like Siri, Google Now, or Cortana.

I still stand by the statement that people don’t want to be communicating with machines. Efforts made to pass the Turing Test are an odd workaround – trying to convince people that they aren’t dealing with a machine (even though they are).

If you need any more proof, interactive voice response (IVR) is, in theory, by far the most efficient system for call centre queues. Yet insurance companies are telling us “Sometimes only a real person will understand. So no matter when you call, you’ll speak to one.”

3. Mental Models

Bringing these issues back towards #chatops, there is a conceptual path to choose. One path is a simple machine, triggered by a structured language (“sort new mail where sender = manager”). The other path is a near-intelligent machine that responds to more natural communication (“put stuff from my boss in the right folder”).

Either way, I have to create a mental model of how that machine understands me. I know I have to be very exact with the simple machine, otherwise it might not pick up on the syntax (“sort mail when sender is manager” would fail). For the intelligent machine, I have to be careful not to say anything that might be interpreted as an unwanted command (I don’t want “FYI, my boss always puts files in the wrong folder” to re-organize someone’s files).
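To make the contrast concrete, here is a minimal sketch of the ‘simple machine’ path, assuming a made-up grammar based on the example above. The pattern and function name are illustrative, not from any real chatops tool:

```python
import re

# Made-up grammar for the structured command used as the example above.
COMMAND = re.compile(r"^sort new mail where (\w+) = (\w+)$")

def handle(message: str) -> str:
    match = COMMAND.match(message)
    if match is None:
        # The simple machine rejects anything outside its exact syntax.
        return "error: unrecognized command"
    field, value = match.groups()
    return f"sorting mail where {field} = {value}"

print(handle("sort new mail where sender = manager"))  # executes
print(handle("sort mail when sender is manager"))      # fails, as predicted
```

The simple machine draws a bright line: match the grammar or be rejected. The intelligent machine has no such line between command and conversation, which is exactly why the ‘FYI’ message above is a risk.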

The first rule of communication is to understand your audience. We all have mental models of how other humans interpret our verbal, written, and non-verbal communications; that is the starting point for understanding a human audience. Mental models of machines, particularly those backed by complex algorithms, aren’t so easy to build. When it comes to machines, there is a huge temptation to second-guess the algorithm and communicate accordingly – just look at the SEO industry.

Mixing communications intended for humans with those intended for machines, as #chatops does, adds an additional layer of complexity to the mental models that people need to construct in order to use technology effectively.
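One way that mixing is commonly kept manageable is an explicit addressing convention, so the mental model stays simple: anything prefixed for the bot is a command, everything else is human conversation. A rough sketch – the ‘@bot’ prefix is an assumed convention here, not a feature of any particular platform:

```python
def route(message: str) -> str:
    # Messages explicitly addressed to the bot are treated as commands;
    # everything else is left for the humans in the channel.
    if message.startswith("@bot "):
        return f"machine: interpreting {message[5:]!r} as a command"
    return "human: the machine stays out of it"

print(route("@bot sort new mail where sender = manager"))
print(route("FYI, my boss always puts files in the wrong folder"))
```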

Where To?

Like everything else technology-driven, it’s never really about the technology. Both Enterprise Social and #chatops have barriers to adoption – Enterprise Social hasn’t proven its ‘value’, while #chatops hasn’t overcome ‘communication’. Territory in the adoption war will be won by overcoming the relevant barrier.

My guess is that #chatops is currently facing the more difficult adoption problem. That isn’t to say Enterprise Social will end up the winner – just that more effort needs to be put into #chatops if it is to emerge victorious. Creating a platform where human and machine communication coexists is a problem that can be solved. After all, it doesn’t matter how big the problem used to be – all that matters is whether it’s still currently a barrier to adoption.