I’ve been wondering lately about change and how to better manage it. In the computer world especially, change is frequently greeted with howls of rage from some people and tacit approval from others. In trying to understand this, I realized that people are frustrated by change when it violates an understood contract. This contract may not be written down anywhere, but your users still hold to it. These are not contracts in the legal sense, but in the sense of expectations.
Different people establish tacit contracts with applications differently. For example, when Microsoft redesigned Office with the ribbon UI, it didn’t bother me. My contract with them was that they enable me to create a document; I just did so differently. The contract others held was “when I click here in the UI, X happens,” and rearranging the toolbar invalidated it.
One way to manage this change would be to focus the marketing on the document: “You can now create better documents.” “You can now create documents more easily.” By framing it this way, the user is willing to exchange the established contract for a new one.
This may explain why systems like Google Now on my phone or a dynamic start menu never seem to gain wide acceptance. When I use an application, I expect it to reliably accomplish the task I’m using it for. If it stops accomplishing that task, or fails at it, I can no longer trust the application and will find something else. A start menu has one task: let me open the application I want. If the application was in the first menu, I expect to go to the first window and find it there. Suppose a dynamic start menu detects that I’m opening a different application more often and moves that one to the first menu, bumping the old one off in the process. Even though the result is theoretically more useful, the contract of “to open X, I click the start menu and it is there” has been invalidated. One way to manage this is to ask the user each time the system wants to change: “You seem to be using X a lot. Would you like it on this menu?” Another is to expose how the decisions are made, such as a click count next to each menu item. Either way, the user now expects the menu to show the most-used applications. They may still reject it, because they don’t want a list of most-used applications, but you stand a fighting chance.
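The ask-before-changing idea can be sketched in code. This is a minimal illustration, not any real start-menu implementation; the names (`MenuModel`, `PROMOTION_THRESHOLD`) and the threshold value are my own assumptions. The key property is that the model never reorders the menu on its own: the existing contract holds until the user explicitly agrees to a new one.

```python
from collections import Counter

# Hypothetical: how many launches before we even suggest a change.
PROMOTION_THRESHOLD = 10

class MenuModel:
    """Sketch of a dynamic menu that asks before promoting anything."""

    def __init__(self, pinned_items):
        self.pinned = list(pinned_items)  # what the user currently expects to see
        self.clicks = Counter()           # launch counts, could be shown in the UI

    def record_launch(self, app):
        self.clicks[app] += 1

    def suggestion(self):
        """Return an app worth suggesting to the user, or None.

        The menu itself is never reordered here; the tacit contract
        ("X is always in this spot") stays intact until the user opts in.
        """
        for app, count in self.clicks.most_common():
            if app not in self.pinned and count >= PROMOTION_THRESHOLD:
                return app
        return None

    def accept(self, app):
        # The user said yes, so the new contract is now explicit.
        self.pinned.insert(0, app)

menu = MenuModel(["Mail", "Browser"])
for _ in range(12):
    menu.record_launch("Editor")

print(menu.suggestion())  # "Editor" has crossed the threshold, so suggest it
menu.accept("Editor")
print(menu.pinned)        # user opted in; "Editor" is now pinned first
```

The design choice worth noticing is that `suggestion()` and `accept()` are separate steps: the system proposes, the user disposes.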
Google Now is similar. It does some very cool things, like helping you with mass transit status, reminding you of things, telling you about game scores, et cetera. But I only use it for reminders. When I set up a reminder, I tell Google Now to remind me to do something at a certain time or place, and thus far it has always reminded me. I explicitly set up a contract with the application to remind me, and the contract is fulfilled. With directions, Google Now tries to guess where I want to go, but it is often wrong, giving me unneeded information while omitting what I actually need. The implicit contract is that Google Now will know, without being told, where I want to go and help me get there. That contract is not fulfilled, so I don’t use the feature.
Facebook’s main stream also has a contract problem. When I first started with Facebook, I had an implicit contract: when I friend someone, I will see their stuff in my stream in chronological order. When they started leaving stuff out, they broke this contract. When they started reordering based on perceived importance, they broke this contract. The contract they are trying to establish is: “trust us to tell you everything you want to know.”
This is the root issue with these systems. I don’t have a single system that I trust to just tell me what I want to know. I want to give a system a set of parameters and have it always fulfill this contract by returning the information I asked for. These systems expect me to do something I don’t even do with people. Imagine asking a person to keep me informed of everything I want to know about my friends; I’d constantly miss things because of their bad guesses. Why would I trust a computer application to do something a human can’t even do reasonably well? One way of accomplishing this goal is to help the user set filters: “You hid this item; would you like to hide everything by this person?” Or display a score with every item and let users decide the threshold they want. There is a paradox of choice here, but if users don’t understand the choices being made for them, they will be frustrated by the lack of control and won’t use the system anyway.
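The score-plus-threshold idea above can be sketched as a deterministic filter. Everything here is hypothetical for illustration: the `Item` type, the 0–100 score scale, and the rule set. The point is that the rules are ones the user set and can inspect, so the contract is explicit: show everything at or above my threshold, minus the authors I chose to hide.

```python
from dataclasses import dataclass

@dataclass
class Item:
    author: str
    text: str
    score: int  # relevance score the system computed, displayed to the user

def visible_items(items, user_threshold, hidden_authors):
    """Filter a stream by rules the user chose, not by hidden guesses.

    Deterministic and inspectable: the same inputs always produce the
    same visible stream, so the user can predict what they will see.
    """
    return [
        item for item in items
        if item.score >= user_threshold and item.author not in hidden_authors
    ]

stream = [
    Item("alice", "vacation photos", 80),
    Item("bob", "game score", 40),
    Item("carol", "status update", 90),
]

shown = visible_items(stream, user_threshold=50, hidden_authors={"carol"})
print([item.author for item in shown])  # only alice passes both rules
```

Because the score is displayed next to each item, a user who misses something can see why it was hidden and lower the threshold themselves, which is exactly the contract renegotiation the essay argues for.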
This concept of tacit contracts has helped me understand and manage change. If you can identify the tacit contracts your users hold and manage changes to them, you can help dramatically with product releases. It also helps me understand why features don’t gain adoption: if a user cannot establish a contract with a feature, it will not be used.