Rules of Trust in Humane App Design
There was a time when, if you didn’t take the necessary precautions, you expected to be maimed by the industrial machinery you operated, or badly scalded or shocked by your home appliances. Since then, the balance between caution and trust has shifted and our expectations have changed. We don’t worry as much when we use technology nowadays, secure in the belief that somebody has done the work to make the experience satisfying and harmless. We’ve come to accept the small frustrations caused by design flaws and we mostly trust our tools now, especially our ubiquitous digital ones.
However, I contend that we’ve been short-changed. The trust we put in our digital tools is far too great and undeserved. We’ve just gotten used to a very low standard and can’t imagine a world with more trustworthy tools.
Trust is the most important value in humane design. Without trust, users of a technology can't have satisfying or meaningful interactions with it. If we want to create tools that humanely augment communities, we must create tools that users truly trust.
Yet trust is not a feature you can add; it’s a complex set of qualities that must be baked into every aspect of a product. Trust is something you earn over time, and something that is very easy to lose.
Trust comes in many flavors. Users usually perceive them only subconsciously, because years of interacting with good products and with other people have trained them to recognize these qualities; but they do notice when the qualities are missing, even if they can’t always articulate what’s wrong.
This post lists the most important of these flavors of trust and presents them as design rules. Most of them may seem obvious – after all, we’re used to technology that follows them well enough – but treating them explicitly as tools to manage the user’s trust is a valuable shift in perspective when designing a technology.
Note: I’m only talking here about trust in a technology. Trust between people whose interactions are mediated by a technology is a whole other topic that deserves its own posts.
Trust and Learning to Use the Technology
Accessibility and transparency: the user must trust that they’ll be able to input the commands that correspond to their chosen actions
Unless one of the technology’s challenges is specifically about mastering physical skills, input devices should be easy to use
Trust that there are no hidden commands or commands that produce effects without feedback (see the sketch after this list)
Trust in the system to be consistent
Trust in the User Interface’s metaphor: establish the rules, symbols, and meanings, and stick to them
Clear understanding of the technology’s modes and of how they are activated
Trust in the persistence of the boundaries of the simulation – if new commands allow interactions that were not possible in the early moments of discovery, the technology must state this explicitly
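To make the feedback rule above concrete, here is a minimal sketch, in TypeScript and with hypothetical names, of a command dispatcher whose type signature forces every command to report a user-visible result, so no effect can happen silently:

```typescript
// Hypothetical sketch: every command must return feedback that the UI displays.
interface CommandResult {
  message: string;           // user-visible confirmation of what just happened
  changedSomething: boolean; // lets the UI decide how prominently to show it
}

type Command = (args: string[]) => CommandResult;

const commands: Record<string, Command> = {
  // Even a "quiet" housekeeping operation reports what it did.
  "mark-read": (ids) => ({
    message: `Marked ${ids.length} message(s) as read.`,
    changedSomething: ids.length > 0,
  }),
};

function run(name: string, args: string[], show: (text: string) => void): void {
  const command = commands[name];
  if (!command) {
    show(`There is no command named "${name}".`);
    return;
  }
  // Because every Command returns a CommandResult, nothing happens without feedback.
  show(command(args).message);
}

run("mark-read", ["42", "43"], console.log); // → Marked 2 message(s) as read.
```

The point is not this particular API but the constraint it encodes: a command that cannot describe its own effect simply cannot be added to the system.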
Trust and Learning How the Technology Works
Trust in the system to remain simple / understandable
No hidden information that directly affects the user’s options
Trust in the rules’ logic: the technology can be mastered by the user. Most technologies fail here, and users who must interact with them just replay, by rote, commands that produced a satisfactory result in the past. They act as if they were practicing magic rituals, hoping for the best. If you don’t generate this trust, users will feel mental exhaustion at the mere thought of exploring a technology’s features – that’s why your older relatives don’t want you to change anything on their devices, or are reluctant to learn new ways of doing common tasks even when their current methods are painful and sub-optimal.
Consistency of the model: rules are persistent and form a coherent model – even when dealing with situations that the user has no previous experience with.
Persistence of the learned rules, even in contexts where there is no outward sign that they still apply.
Trust in the safety of experimentation: mistakes made while exploring will have minimal consequences. This is a core requirement for constructivist self-learning. Mistakes should produce clear outputs that let the user deduce why the results differ from their expectations (see the sketch after this list).
Trust that experimentation can reveal the rules of the system
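One way to make experimentation safe is to make every exploratory action reversible and clearly described. Here is a minimal sketch, assuming hypothetical names, of a command history with undo:

```typescript
// Hypothetical sketch: reversible actions make experimentation cheap.
interface ReversibleAction {
  description: string; // shown to the user, so the outcome of each step is explicit
  apply: () => void;
  revert: () => void;
}

class History {
  private done: ReversibleAction[] = [];

  perform(action: ReversibleAction): string {
    action.apply();
    this.done.push(action);
    return `Done: ${action.description} (you can undo this).`;
  }

  undo(): string {
    const last = this.done.pop();
    if (!last) return "Nothing to undo.";
    last.revert();
    return `Undone: ${last.description}.`;
  }
}

// Usage: an exploratory rename that can always be taken back.
const history = new History();
let title = "Untitled";
console.log(history.perform({
  description: "rename the note to “Groceries”",
  apply: () => { title = "Groceries"; },
  revert: () => { title = "Untitled"; },
}));
console.log(history.undo()); // → Undone: rename the note to “Groceries”.
```

When every step announces what it did and can be taken back, trying things out becomes the cheapest way to learn the system’s rules.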
Trust and Safety
Trust that the technology will not cause harm to the user or others
Trust that the system will warn the user if actions would lead to harm and that these actions can be cancelled.
Trust in the system to protect private information, account information, etc.
Trust that the technology will not work against the user’s goals or wishes: entrusting a technology with private information, goals and plans makes the user vulnerable to actors who would exploit such data.
Trust that the technology won’t try to deceive the user with dark patterns, like exploiting mental biases or ingrained behaviors (e.g. hiding important settings behind trees of menus, switching the meaning of checking a box, etc.).
Trust that the technology will warn the user in case of a security breach: this one is becoming part of the social contract we have with services and is often implied, but designers should consider signaling this if users are fearful of such occurrences.
Reliability: the technology should fail gracefully – e.g. a bug or an error in a plan should not compromise the user’s data (see the first sketch after this list).
Accountability: the technology should make itself accountable for errors as often as possible. It is the one constraining the user’s behavior, so it should act accordingly. There’s a big difference between an error message that says “Wrong File Name” and one that says “I wasn’t able to find a file named ‘dairy.txt’.”
Recovery: because the technology knows the most about an error at the moment it occurs, it should do its best to fix the problem for the user, e.g. “Did you mean ‘diary.txt’ instead of ‘dairy.txt’? Yes / No, create and use a file named ‘dairy.txt’ / Cancel” (see the second sketch after this list).
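The first sketch addresses the reliability rule: one common way to fail gracefully is to save atomically, so that a crash or bug mid-save can never corrupt the file the user already had. A minimal Node.js sketch, with hypothetical names:

```typescript
import { promises as fs } from "fs";

// Hypothetical sketch: a save that either fully succeeds or leaves the old file untouched.
async function saveAtomically(path: string, contents: string): Promise<void> {
  const temporary = `${path}.tmp`;
  try {
    await fs.writeFile(temporary, contents, "utf8");
    await fs.rename(temporary, path); // the rename is atomic on most file systems
  } catch (error) {
    await fs.rm(temporary, { force: true }); // clean up, keep the original intact
    throw error;
  }
}
```

The same principle applies to any persistent state: commit everything or nothing, so a failure never leaves the user’s data half-written.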
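The second sketch combines the accountability and recovery rules from the file-name example above: when a lookup fails, the system names exactly what it looked for and proposes the closest match it does know about. The similarity measure here is deliberately crude and the names are hypothetical:

```typescript
// Hypothetical sketch: an accountable error that also proposes a recovery.
function openFile(requested: string, knownFiles: string[]): string {
  if (knownFiles.includes(requested)) {
    return `Opening "${requested}".`;
  }
  const suggestion = closestMatch(requested, knownFiles);
  return suggestion
    ? `I wasn't able to find a file named "${requested}". Did you mean "${suggestion}"?`
    : `I wasn't able to find a file named "${requested}", and nothing similar exists yet.`;
}

// Crude similarity: count shared characters, penalize length differences.
function closestMatch(target: string, candidates: string[]): string | undefined {
  const score = (candidate: string) =>
    [...target].filter((ch) => candidate.includes(ch)).length -
    Math.abs(candidate.length - target.length);
  return [...candidates].sort((a, b) => score(b) - score(a))[0];
}

console.log(openFile("dairy.txt", ["diary.txt", "notes.txt"]));
// → I wasn't able to find a file named "dairy.txt". Did you mean "diary.txt"?
```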
Trust and Purpose
What is the purpose of the technology, its subject matter, and how can the user apply the knowledge they have learned in the previous sections to form meaningful plans and fulfill goals?
Trust that the system fairly represents its purpose
Trust that the technology is working as efficiently as possible toward its purpose
Trust that the experience will be meaningful or satisfying
Trust that the technology is telling the truth: it mustn’t lie about its internal state, about its knowledge or, when the technology draws data from the real world (e.g. maps, a physical product), about facts.
Trust in the authority of the teacher (contextual help, tutorials, examples, etc.) and in the veracity of the taught knowledge (or, at least, of its internal logic)
Trust in the system to be fair
Trust in the system to provide solutions to problems related to its purpose
Trust that mistakes due to lack of information or knowledge will not be punished
Trust and Agency
Make sure that the user is in charge, in control of the experience
Make sure that the user will be able to express their intentions with the provided commands
Make sure that the user’s actions will have meaningful consequences
Trust that the technology will help the user accomplish their goal within the scope of its purpose. This may take many forms:
A natural flow of actions that the user is familiar with
A system that guides the user when choosing goals or making plans
A system that contextually anticipates the user’s needs related to its purpose
A system that adapts to the user’s usage patterns (see the sketch below)
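As a small illustration of the last two points, here is a hypothetical sketch of a command palette that orders its suggestions by how often the user has actually picked each command, so frequent goals surface first without any configuration:

```typescript
// Hypothetical sketch: suggestions ordered by how often the user actually uses them.
class AdaptivePalette {
  private useCounts = new Map<string, number>();

  constructor(private readonly allCommands: string[]) {}

  recordUse(command: string): void {
    this.useCounts.set(command, (this.useCounts.get(command) ?? 0) + 1);
  }

  suggestions(): string[] {
    // Stable sort: frequently used commands first, untouched ones keep their order.
    return [...this.allCommands].sort(
      (a, b) => (this.useCounts.get(b) ?? 0) - (this.useCounts.get(a) ?? 0),
    );
  }
}

const palette = new AdaptivePalette(["New note", "Export as PDF", "Share", "Archive"]);
palette.recordUse("Share");
palette.recordUse("Share");
palette.recordUse("New note");
console.log(palette.suggestions()); // → ["Share", "New note", "Export as PDF", "Archive"]
```

Adaptation of this kind only builds trust if it stays predictable: the full list is always reachable, and nothing disappears just because it is used rarely.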
* * *
Humane technology must do its best to convey that it is trustworthy in all the above domains. This requires a shift in design thinking: we should no longer expect users to accept the limitations of opaque technologies. Instead, we should think of technologies as helpful partners that safely guide their users in their journey of understanding and mastery.