Our brains are wired all wrong for investing, says Stephen Spurdon. It’s a wonder we survive at all
Many years ago at university, a lecturer told me how a team of anthropologists showed a film to a tribe they were studying. It was about life in the UK. When they asked the tribespeople what they thought the film was about, they all said “the chickens”.
Astounded, the anthropologists reviewed the film – and there, in a few short frames, was a flock of chickens scuttling across a farmyard.
It was mainly down to the perception of meaning. Clearly, to those tribespeople, the chickens were the most important thing in the film. But it was also about the preconceptions of the scientists, who had expected just about any answer other than the one they received.
So, in a way, it was about probability – in that a group of scientists (no less!) had assumed a range of answers which excluded the possibility of the one they actually received. If these supposedly cool, detached observers of empirical reality can be caught out, what chance is there for the rest of us? None – although brains may vary in ability, we all share the same structure of mind.
And that brings us to the point: all this has serious implications for our ability properly to assess likely outcomes, such as the odds on a bet or an investment.
The Monty Hall Absurdity
Perhaps the most famous illustration of this brain deficiency is the so-called Monty Hall Test, based on an old US TV game show, in which the contestant can choose between three doors – with goats behind two, and a sports car behind the third. Once the contestant has made his choice, the host opens one of the two remaining doors, behind which he knows is a goat. He then asks whether the contestant wants to change his original choice. Most contestants don’t, figuring that the odds are now 50-50. (And to change their minds would be a climbdown.)
But they’re wrong! I was just as astounded to find it mathematically proven that you always have a greater chance of winning the car by switching – two in three, against one in three for sticking.
Now, I refused to agree with that – and it was only by talking it through with occupational psychologist (and former IFA) Kim Stephenson that I calmed down.
Stephenson explains that the counter-intuitive answer – that by switching from your original choice you are far more likely to win the car – is mathematically correct. But, he adds, “The Monty Hall Test does not fit in with what we feel the world is like. You have your starting point and then the odds change. You know this is logical, but it does not seem right because humans did not evolve to think in such abstract terms. Everyone thinks like that, and you are not an idiot for thinking that way.”
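If the mathematics still feels wrong, a quick simulation settles the argument by brute force. This is a minimal sketch (not from the article, and every name in it is our own) of the game played many thousands of times:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a remaining door he knows hides a goat
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Move to the one door that is neither picked nor opened
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stick:  {monty_hall(switch=False):.2f}")   # ≈ 0.33
print(f"switch: {monty_hall(switch=True):.2f}")    # ≈ 0.67
```

Sticking only wins when the first pick was the car (one time in three); switching wins in the other two cases – exactly the two-in-three edge the mathematics promises.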
Meanwhile, Back In The Swamp …
Phew! So how did human thinking evolve? Stephenson puts things in perspective, stating that it is only in very recent times that many people have lived in urban areas, and that the most important formative period was when we were hunter-gatherers for 250,000 years. We retain elements from the reptilian creatures from which mammals evolved.
“There are three parts to the brain,” says Stephenson. “First is the reptilian part which works in the unconscious. It does what it does (mainly control things like blood pressure, heart rate, breathing etc.) Then there is the limbic system, the emotional part of the brain which again works in an unconscious way, mainly.
“Finally there is the cortex, which is where consciousness takes place – although a lot of it still happens in the subconscious. Now, the cortex is a bit more logical than the other two, but all of its inputs are filtered through our senses, which are part of our subconscious.
“The reality is that we are often not really conscious of what drives our decision-making.”
So, To the Money Question…
Now on to how our brains work with money. Stephenson explains the fundamental problem:
“Economists say we should make logical decisions. But the hunter-gatherer brain says that you should not defer consumption. If you see food, you eat it. If the right mating opportunity arises, you mate.”
How much all this applies to our financial dealings was revealed in 1979 with the publication of Kahneman and Tversky’s Prospect Theory, which shows how people choose between probabilistic alternatives that involve risk. It was this ground-breaking paper that originally started the development of behavioural finance as a field of study – now an accepted consideration in many financial institutions. And you can read it yourself at http://tinyurl.com/ppty2t9 if your brain is up to it.
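The theory’s core idea can be sketched in a few lines of code: losses loom larger than equivalent gains, and both are felt with diminishing sensitivity. The function below uses the parameter estimates Kahneman and Tversky published in their later (1992) work, and is illustrative only:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A £100 loss hurts far more than a £100 gain pleases:
print(prospect_value(100))    # ≈ 57.5
print(prospect_value(-100))   # ≈ -129.5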
But how do fund management groups use this awareness? Quiet there at the back – not everyone is busy using our cognitive illusions against us!
Well, one example is provided by Vanguard Asset Management, which published a guide to behavioural finance (http://tinyurl.com/nprtlm5) to introduce the theme of cognitive illusions to clients. “Yes, we are saying to people that they are fallible,” says Nick Blake, Vanguard’s head of retail, Europe. “And this is how to start thinking about it.”
So far so good. But of course fund managers themselves are only human, so how do they deal with such illogical issues?
You might suppose that Vanguard, as a passive manager, would be out of the loop on this one: its founding premise is that not only do most active managers fail to beat the market, but that investor behaviour patterns will often lead to them doing even worse.
“Vanguard’s programme is built into our products,” Blake says. “We think passive investment should be just that – and should not include such things as ‘Smart Beta’ considerations in which an active choice is made from a passive size-weighted index.”
The Trouble With Crowds
So what you see is what you get. But what do you ‘see’ when you’re looking at markets? They are the combined result of the decisions of thousands or perhaps millions of people. So you might simply be observing ‘the wisdom of crowds’ or ‘the herd instinct’ – with all its negative implications.
Colin McLean, managing director of active manager SVM Asset Management, agrees: “Another thing I have observed is that large teams do not always make good decisions, and that consensus is not always helpful.”
Martin Reeves, L&G Investment Management’s head of high yield and manager of L&G High Income Trust, explains how his organisation tackles human fallibility:
“Internally, of course, you can never really know when such cognitive illusions are in place. Here we see the need to have processes that continually challenge them. So we have a macro team in place, as well as individual analysts and a separate portfolio management function. This means that functions in investment management are separated, and that they are challenged appropriately. We have a process of talking through what we see going on to help in eliminating optimistic or pessimistic predispositions, which can be set off by such human things as getting out of bed on the wrong side… We have set up a structure based on challenge and discussion.”
Sequential Logic Fails Us
McLean says that some funds use behavioural finance principles – for example, those that rely on social media such as patterns observed in Twitter. But he says these funds have not done too well, and that it is better to add this awareness to existing fund management: “It can help internally, in that we are aware of issues that tend to affect technical patterns of markets.”
“In general, however, people tend to have a problem with conditional probability, as is shown in the Monty Hall Test. For instance, if a fund manager says that the market has a 60% chance of rising over the next year but that it will fall over the next month, then we will tend to add up probabilities rather than seeing them sequentially – and it is only human to do so.”
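McLean’s point about sequential probabilities can be made concrete. The year-ahead view has to be built by chaining the branches that follow the expected monthly fall, not by adding or averaging the two figures. A sketch, with every number invented purely for illustration:

```python
# Hypothetical, purely illustrative figures.
p_down_month = 0.7        # manager expects a fall over the next month
p_up_year_if_down = 0.6   # chance of a full-year rise *given* that fall
p_up_year_if_up = 0.7     # chance of a rise if the month goes up instead

# Law of total probability: weight each branch by its likelihood.
p_up_year = (p_down_month * p_up_year_if_down
             + (1 - p_down_month) * p_up_year_if_up)
print(f"{p_up_year:.2f}")   # ≈ 0.63
```

The two statements – likely fall this month, 60% chance of a rise over the year – are perfectly consistent once chained this way; the intuition that they contradict each other is exactly the error McLean describes.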
What’s to be done?
Keeping It From The Clients
Of course, for IFAs with clients in tow, it’s not sufficient just to describe our inability to deal with probability. You want to know what can be done to counter it – both in yourself and in your clients.
Blake says his organisation pays special attention to ‘The Checklist Manifesto’, a book by Atul Gawande, a surgeon who found high levels of surgical mortality around the world linked to the fallibility of surgeons – for instance, administering antibiotics at the wrong point. (http://tinyurl.com/yk43lva)
Blake explains: “This showed that even highly trained professionals, such as surgeons, pilots and architects, can make mistakes. It highlighted the need for such people to have a checklist.”
“In the financial planning context, the best planners use them, and they are usually referred to as ‘a statement of investment policy’, in which they will set out what they will do in certain situations. It brings discipline.”
McLean concurs: “Frequently, it is the things that people omit that are very telling. Also, you may see experts citing many reasons for their position, but not including counter-arguments. Using a checklist would help eliminate such obvious omissions. Keeping a journal detailing activity and the reasons for it would also be useful in highlighting systematic errors.”
The Dangerous Lure of Old Models
Reeves expresses his appreciation of the problem advisers face in the current situation, noting Keynes’ observation that clients may prefer their investment decisions to fail conventionally rather than succeed unconventionally. Hence, it might be expected that an adviser would utilise a pre-existing investment model. But, he adds: “Any model is based on a set of assumptions; it will be static and based on a series of historic relationships between variables. There is the implicit assumption that the future will repeat the past.”
“Currently, in the bond and equity world, we are facing the challenge of how to handle the situation when quantitative easing ends. Most of us have lived in a world where yields have been declining during the past 20 years. Along with the QE we have had suppression of interest rates by central banks. So we are in a new situation. What this tells us is there is a danger in using an old model in this new situation. Here I think it is best to go back to a priori – the very basic characteristics of what we are dealing with, which will include the individual nature of bonds.”
Kim Stephenson shows what you are battling against, fundamentally:
“There is a need to make decision trees to get near to the financial outcomes that we want. (There isn’t really a need, it’s just that people think that’s what you should do). But this need conflicts with the hunter-gatherer mind where you had to act. For instance, in the case of a large animal being nearby: an instant decision has to be made to kill it or run from it. If you sat there making a decision-tree it might have killed you while you thought about it.”
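For what it’s worth, the kind of decision tree Stephenson describes boils down to a simple expected-value calculation – the sort of deliberate arithmetic the hunter-gatherer brain never had time for. A toy sketch, with entirely hypothetical probabilities and payoffs:

```python
# A toy decision tree: weigh each branch's payoff by its probability.
def expected_value(branches):
    """branches: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in branches)

# Hypothetical figures: invest £1,000, with a 75% chance it grows
# to £1,200 and a 25% chance it falls to £800 – or hold cash.
invest = expected_value([(0.75, 1200), (0.25, 800)])
cash = expected_value([(1.0, 1000)])
print(invest, cash)   # 1100.0 1000.0
```

On these made-up numbers the tree says invest – but, as Stephenson notes, the instinct to act first and calculate never is precisely what the exercise has to overcome.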
Of course, any decisions you arrive at will take into account an appreciation of the client’s risk profile. But Stephenson explains that you may be let down by the available proprietary models:
“Now, when it comes to utilising psychometric tests to assess risk perception amongst clients there are around eight or so available (there might be more, I’ve looked at about 8). The plain fact is that not all are valid. But even if they are valid, there is the interpretation of the answers. So a client can answer what is the same proposition in two different ways and one will give them an ‘adventurous’ tag, while another will give them a ‘cautious’ tag. Tell me how that works?”
Well, Kim, it could have something to do with our human need to be seen to be doing something even if in reality it is not the answer. But then our minds are moulded to protect us from reality. Aren’t they?