O wad some Power the giftie gie us To see oursels as ithers see us!

Originally published on Medium on July 27, 2018.


I’m going to start and end with a caveat: what follows is tactical advice regarding evaluating how things we say might be perceived.

Communicating effectively is hard. Even with the best of intentions and the most careful of phrasings, some conversations get mired in misunderstanding.

Sometimes, what we thought was a perfectly reasonable — or even obvious — statement becomes a point of contention. A particularly fraught source of misunderstanding is statements about members of a group. How can we avoid saying things that might be perceived as unfair generalizations? To some degree, we can’t. If someone is looking to be offended, they will find ways to be offended (and this is true regardless of their or your particular beliefs).

Still, we can look for things that are likely to be interpreted negatively. One technique: every time you refer to a group, even with modifiers limiting the scope of the statement, mentally substitute another group, preferably one whose treatment you are more acutely aware of when you are on the receiving end of the communication. For example, if a statement about white men would feel like stereotyping if it were about black women (regardless of the inaccuracy of the post-substitution statement), then it is likely to be perceived as potentially problematic. If, on the other hand, it would sound inoffensive no matter what group it is about, then it is probably fine.

When possible, I try to avoid using group labels and instead focus on behavior patterns. Whether or not that works depends, to some degree, on the topic under discussion. Even when a group is directly the topic of discussion, focusing on behavior is still useful. It helps to avoid the problem where a group label becomes shorthand for a set of properties that may or may not be what is intended by the writer. “We all know how white men are.” No, no we do not. And even if we could, we would not all know what facet of this bag of properties is relevant to the discussion at hand.

One way to make this more concrete is to use the Situation-Behavior-Impact model developed by the Center for Creative Leadership and described in this article. In this model, feedback is grounded in a description of a situation where a problem occurs, the behavior that is problematic, and the impact of the behavior. This need not be belabored: “When two people are discussing a topic [situation] and one starts giving an explanation before verifying whether or not it is needed [behavior], then the explainer sets up expectations about a hierarchy of expertise which may not match reality [impact].”

In practice, not every statement you make will be precisely communicated. However, by testing how statements sound when we swap out group labels and by concentrating on behavior over labels, we can avoid some common sources of misunderstanding.

And once again, this is tactical advice regarding evaluating how things we say might be perceived. It is in no way meant to draw equivalences between the different groups we might make statements about.

The title comes from the Robert Burns poem, To A Louse.

I think, perhaps, this might be useful

Originally published on Medium on July 27, 2018.

Common advice says that if you want to be a strong communicator, don’t use caveats. The reality is more complicated.

As a technical lead and a people manager, I often end up intentionally using the sort of caveated[1] language that, earlier in my career, I tried hard to expunge from my speech. Standard advice for those early in their careers says that you should purge from your speaking and writing phrases like “I think” or “It might be useful to” and other phrases that make you sound less confident. This doesn’t mean communicating as if you think you are always right. Rather, the idea is to take as a given that people know you are giving your view and will have no problem letting you know if they disagree[2].

Yet as a leader, especially as a manager, it is useful to pull out those phrases again. Even in a culture where individual contributors have a pretty large amount of discretion over what and how they do things, it is easy for a leader’s suggestion to be taken more seriously than intended. I don’t think that folks on my team take their manager’s word as command — thank goodness. However, it is still received differently than a suggestion from a peer.

Thus, now that I am a lead, when I make a suggestion, I use those caveated phrases. Not to indicate that I am less confident — although my distance from the details often means I am less confident in my suggestions — but primarily to communicate that ownership of the decision still belongs to the person I am talking to. My use of language isn’t exactly the same as in my days as a new software engineer. There are important differences between communicating a lack of confidence and delegating authority. Still, it is interesting to reflect that our communication style needs are always changing as our role and context changes.

Building on that, I have spent some time thinking about when caveating is useful and when it is not.

Never caveat

Never caveat your state of mind unless you are truly uncertain. For example, don’t say, “I think I agree”. Unless you really are still thinking about it, this type of caveating just makes it sound like you do not know your own mind. This is the classic case of self-undermining and, in my experience, what people are warning against when they say not to caveat. (Important exception: “I think I understand what you’re saying” because that is really a statement about both your state of mind and someone else’s.)

You also should never caveat statements of fact. But we have to be careful here, because in these days of ideologically based “facts”, it can be easy for opinions to masquerade as facts. More on that shortly.

Optionally caveat

Clear statements of opinion do not always need an explicit caveat when given in a neutral or positive context. If I say, “This oatmeal is delicious” then, most of the time, people will understand that I am stating an opinion. Alternatively, a caveat here would also do no harm.

Always caveat

It is difficult to over-caveat statements of opinion given in a negative context. This could be a normally neutral comment in the context of a disagreement. For example, my love of the oatmeal can become a point of contention if we are discussing how to reduce the breakfast options in the cafe to one.

Another type of opinion in a negative context is when the opinion is likely to create a negative context even if there wasn’t one before. For example, in this age of high tensions on political topics, throwing in some extra caveats is useful even if the discussion isn’t an argument… yet.

Another type of statement you should always caveat is opinions which could be taken as facts. Often, these opinions are well informed enough that it seems almost inappropriate to call them opinions. But when you dig beyond the surface, you start to see that they are not incontrovertible facts. The role of saturated fats in heart disease falls into this category. Not that long ago, it was so widely believed that it might seem to be a fact, but the underlying evidence was not as strong as believed and has come into question more recently.

Thus, while I said above that you should never caveat facts, that is with the caveat that the set of facts should be pretty strictly limited to cases where there is no controversy or the controversy that exists is implausible — e.g., the Earth is round.

Variable caveating

Advice and recommendations are an interesting type of statement because they involve saying that someone else should change their behavior. This makes them automatically more sensitive. Like with descriptive statements, advice and recommendations given in a negative context or likely to produce a negative context generally benefit from caveats.

In a positive or neutral context, my advice is more nuanced, as the opening noted. When you are in a position of power, you should caveat if the recommendation is optional and not caveat if it is not optional. If you are making a recommendation to someone in a position of power, you generally should not caveat in positive or neutral contexts because — fairly or not — it will tend to be seen as self-undermining. When there isn’t a power differential, then it will vary based on context and caveats are often discretionary.

And, 900 words later, we see why this is usually summarized with the less nuanced but more memorable, “Don’t caveat unless it reflects true uncertainty.” 🙂

[1] Yes, I’m using caveat as a verb and will do so heavily throughout this post.

[2] As an aside, I don’t tend to change my communication style much for personal and work communication. At work, I am considered a fairly considerate communicator and in my family, I am considered quite blunt. This amuses me.

Summary: A Model of Reference-Dependent Preferences

Originally published on Medium on July 13, 2018.

The paper “A Model of Reference-Dependent Preferences” by Botond Koszegi and Matthew Rabin discusses the role of expectations in determining economic utility. By explicitly modeling expectations, various “irrational” human behaviors can be explained. I read the freely available draft; this is the purchase-requiring published version. What follows is a detailed but non-technical summary.

Traditional economic models evaluate outcomes based on consumption utility. In this paper, the authors model utility as a function of both the consumption utility of an outcome and the utility of that outcome relative to the expectations of the actor. The authors assume the expectations are rational: they are an accurate probabilistic representation of both the possible outcomes and the utility the actor would derive from each outcome. Even under these strict assumptions, the model predicts various experimental outcomes better than models which take just consumption utility into account and better than models which assume that reference-based utility is relative to the status quo rather than expectations.

More concretely, the utility of an outcome is defined as the sum of the actual consumption utility of that outcome and the gain-loss utility of that outcome. The gain-loss utility is the probability-weighted sum over all expectations of a utility function applied to the difference between the consumption utilities of the actual outcome and the expectation.

For example, if I expected

  • $0 with 25% probability
  • $50 with 25% probability
  • $100 with 50% probability

and I won $50, my utility, translated into dollars, would be

  • $50 [consumption] + (0.25($50-$0) + 0.25($50-$50) + 0.50($50-$100)) [gain-loss] 
    = $50 + ($12.50 + $0 - $25) 
    = $37.50.

In other words, the utility of the $50 would be decreased because I had a strong expectation of winning $100. If, on the other hand, I had more strongly expected to win nothing, then the utility of the $50 would be increased by my sense of gain relative to my expectation.
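The computation above can be sketched in a few lines of Python. This is my own illustration of the decomposition, assuming the linear gain-loss function used in the example; the function and variable names are mine, not the authors’.

```python
def total_utility(outcome, expectations):
    """Consumption utility plus probability-weighted gain-loss utility.

    outcome: realized consumption utility (here, in dollars).
    expectations: list of (probability, reference_outcome) pairs
    representing the actor's rational expectations.
    """
    # Linear gain-loss: each expected outcome contributes its probability
    # times the difference between what happened and what was expected.
    gain_loss = sum(p * (outcome - reference) for p, reference in expectations)
    return outcome + gain_loss

expectations = [(0.25, 0), (0.25, 50), (0.50, 100)]
print(total_utility(50, expectations))  # 37.5
```

With certainty (a single expectation matching the outcome), the gain-loss term is zero and total utility equals consumption utility, as the deterministic case described later requires.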

My example above assumed a linear gain-loss utility function: the utility of a gain relative to expectations is equal in magnitude to the utility of an equally sized loss relative to expectations. In practice, the authors build loss aversion into their model. They assume that the utility function assigns losses a negative utility greater in magnitude than the positive utility of equally sized gains. They also assume, however, that this effect is more pronounced for smaller losses/gains relative to expectations and that for larger losses and gains, the relative consumption utilities dominate any loss aversion. Thus, this model does not explain loss aversion; it assumes it.

Although the authors do not discuss this, I believe this model puts loss aversion on a more solid footing than it traditionally sits on. This model does not say that a loss is worse than a gain. It says that a loss relative to your expectations is worse than a gain relative to those expectations. In other words, it says that not having your expectations met is worse than having your expectations exceeded. This seems more psychologically defensible than saying that losses are, axiomatically, worse than gains.

Much of the paper is devoted to discussing the consequences of this model. In deterministic environments, the consumption utility and actual utility will always match because the gain-loss factor will always be (1.0*0) — complete certainty that there is no difference between the outcome and the expectation. However, in non-deterministic environments, consumption utility can vary from actual utility.

Another interesting consequence is that having expectations can leave us worse off than having no expectations. Someone with no expectations will always see any gain as positive. Someone with expectations (that they might have gained more) may see a gain as a loss. (Of course, they may also see a loss as a gain, so it’s all relative.)

Although the authors do not discuss it, this is applicable to situations like the ultimatum game. In this experiment, the proposer gets some amount of money and they offer some to a responder. If the responder takes the offer, they both get the money. If the responder doesn’t accept, neither gets the money. Since the alternative is nothing, a traditional economic model would predict that the responder would take any amount greater than zero. In practice, responders want much fairer amounts. Why they expect this has been discussed at length, but the relevance to this paper is that the responder has an expectation that they will get more and evaluates the outcomes based on this expectation.

This model also predicts a status quo bias. In the face of certainty, the model predicts that people are always willing to abandon their current reference point for an alternative that has a probabilistic expectation that is even marginally better. However, in the face of uncertainty, the expectation of the status quo is that you will continue having what you had already and the chance of ending up either worse off or better will be weighed against that expectation. For example, compare having $50 with betting it on a 75% chance of getting nothing and a 25% chance of getting $204. According to a traditional expected-value calculation, (0.75*$0 + 0.25*$204) = $51, and so the bet is worth taking with a gain of $1. However, in a world where outcomes are evaluated relative to expectations and losses weigh more than gains (say, gains are worth only 90% of an equally sized loss), the expected gain comes out to

  • 0.75($0-$50) + 0.25($204-$50)(0.90)
  • = -$37.50 + $34.65
  • = -$2.85

and so the bet is not worth taking.
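This status quo calculation can also be sketched in Python. As before, this is my own illustration, not the authors’ notation; the 0.90 gain weight is just the assumption from the example above.

```python
def gain_loss(diff, gain_weight=0.90):
    """Asymmetric gain-loss value: gains count for only 90% of an
    equally sized loss (the illustrative assumption from the example)."""
    return diff * gain_weight if diff > 0 else diff

def bet_value(outcomes, reference):
    """Probability-weighted gain-loss utility of a gamble, evaluated
    against a certain status quo reference point.

    outcomes: list of (probability, payoff) pairs.
    reference: the payoff of keeping the status quo.
    """
    return sum(p * gain_loss(payoff - reference) for p, payoff in outcomes)

# Keeping $50 vs. a 75% chance of $0 and a 25% chance of $204.
print(round(bet_value([(0.75, 0), (0.25, 204)], reference=50), 2))  # -2.85
```

Because the negative value means the gamble falls short of the expectation of simply keeping the $50, the model predicts sticking with the status quo even though the gamble’s raw expected value is a dollar higher.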

The endowment effect is when someone seems to ascribe more value to something because they own it. The canonical experimental setup is that someone is asked how much they would pay for something, like a mug, then given a mug, then asked how much they would sell it for. Often, people set a higher price for selling something they have been given than they had set for buying. However, this effect is not consistent; some experimental setups do not show this effect. The authors argue that when someone is given an object, they start forming expectations which are based on their continued possession of that object. If they are given the object with the expectation that they’ll be selling it, their expectations will shift and their valuation will not show the endowment effect.

This model also explains how people may come to spend more on an item than the consumption value they will get from it. Essentially, once you expect to get something, that feeds back into how much you are willing to pay for it because the value of the item to you is not just the consumption utility of acquiring the item, it is also how much you are willing to pay to avoid the missed expectation of not getting the item. The more you expect to get the item — the greater the loss from not getting it — the more you are willing to pay.

An odd variant of this is that a consumer’s demand for an item is not, as classically assumed, a function of the price alone. Changing the price changes rational expectations about the price at which the object can be acquired, which shifts the demand. Price and demand form a feedback loop, not a static function. Concretely, if a consumer is willing to buy shoes at price $X and then sees that they are on sale for 50% off, they may no longer be willing to buy the shoes at price $X. Alternately, a consumer may not have been willing to buy shoes at price $Y, but if they see that $Y is actually 50% off the full price, their expectations shift and they may see buying the shoes at $Y as a gain… as anyone who has ever bought something “because it was on sale” knows.

Another example the authors work through is whether or not increased wages decrease willingness to work. Their model predicts that increased wages will decrease someone’s willingness to work if those increased wages meet the worker’s wage expectations more quickly but not if it causes them to change their expectations. To put it another way, someone who expected to earn $200 in a day but instead earned it in half a day will not be inclined to work more. However, if they expected the day to be busy and adjusted their expectations accordingly — say, to $400 for the day — then they will be willing to work more. Thus, it is not higher wages that affect willingness to work. It is actual wages relative to expected wages.

Although this is a simplified model of how expectations affect utility, building expectations into the economic model still does a lot to make the modeled outcomes better reflect reality. In my view, this further supports the idea that when humans act “irrationally” relative to behavioral models, it is much more likely that the models are missing critical factors than that humans are truly as lacking in rationality as is sometimes implied.

Uncomfortable Parallels

Originally published on Medium on July 1, 2018.


First, the caveat. I’m a fan of thoughtful gun control proposals. I want them to be effective and consistent with legal understanding of the second amendment — otherwise, they are unlikely to withstand scrutiny. I accept the second amendment as the law of the land, although I would not object if it could be removed without tearing the country violently apart. However, I do not get the second amendment; arguments that guns are needed for liberty leave me unmoved. So when I make the analogy I’m about to make, know that I’m not making it as a gun rights advocate.

Abortion rights, pretty broad ones, are the law of the land. Yet there are women who get abortions for problematic reasons. Too many women get abortions because they feel that their economic or family situation does not allow them to keep a pregnancy they would otherwise choose to keep. We should all be able to agree that this is a problem. Pro-life legislation often uses these problems as a starting point. These laws are often passed with the stated intent of wanting to help women be medically safer or help them avoid making a decision they will regret.

However, those of us who are pro-choice see these laws as attempts to chip away at abortion rights. We see these as a way to make it so that the right to an abortion exists on paper but not in practice. We see them as an attempt at an end run around the law of the land. We know that, even if the stated goal is sincere, the end goal of pro-life activists is to ban abortion. With such an end goal, intermediate proposals are suspect even when linked to ultimate ends that we can widely agree need to be improved.

Gun rights and abortion rights have enough substantial differences that one can logically feel differently about them — that one, your choice, is a monstrosity and the other a fundamental right. However, from a perspective of how proposed regulations interact with the law of the land, they are very similar. So for those folks who bristle at attempts to nibble away at abortion rights, consider that gun rights advocates have a similar feeling about gun control. This does not mean that there isn’t room for improvement in both. It’s just that it’s hard to take at face value proposals from someone whose underlying goal is to ban rather than improve.

We have a living system of laws, so we should not look at the law of the land as a static ending point. Yet we should not completely ignore it when it goes against our desires either. Such an approach is impractical, especially for proposed laws with a constitutional argument against them. More fundamentally, such an approach dilutes the legal standing of rights, which should worry us all. We should feel empowered to challenge laws — even those with a Constitutional basis — but we should not try to chip away at laws in ways that we would consider deceitfully undermining if used against a right we support.