The Limits Of Quantitative Tools

“It can do anything” is rarely something knowledgeable people say about a tool. This is especially true of quantitative tools. The people who best understand a tool also understand its limits. That doesn’t mean they are bad at using it. Often, they know what other skills complement the limits of the tool, and they can then master that complementary skill.

For example, Bill Gurley dissects the Discounted Cash Flow (DCF) model used in company valuation here. It might be surprising to see an investor speak critically of the workhorse tool of company valuation. In my opinion, that’s what makes him a good investor. He understands that a DCF is a mathematical framework for connecting inputs to outputs. The really tricky part is figuring out the inputs and the huge amount of uncertainty around them.
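To make that concrete, here is a toy DCF sketch. The mechanics are just a discounted sum; every number below (cash flow, growth rate, discount rate, horizon) is an illustrative assumption of mine, not something from Gurley’s post:

```python
def dcf_value(cash_flow, growth, discount, years=10):
    """Present value of `years` of cash flows growing at `growth`,
    each discounted back at `discount`."""
    return sum(
        cash_flow * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

# The math is trivial; the inputs are where the uncertainty lives.
# Nudging two assumptions by a few points swings the answer a lot.
base = dcf_value(100, growth=0.05, discount=0.10)
optimistic = dcf_value(100, growth=0.08, discount=0.08)
```

The spreadsheet part is the easy part; defending a growth or discount assumption is the hard part, which is exactly the point.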

Another great Bill Gurley example is this post on the Lifetime Value (LTV) model. He explains how to calculate LTV, but warns readers to be very careful about the assumptions that go into it. The model can easily lead companies to spend heavily on acquisition costs that never earn a return.
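A common simplified version of the calculation shows how fragile it is. The formula below (per-period margin divided by churn) and the churn figures are illustrative assumptions, not numbers from Gurley’s post:

```python
def simple_ltv(monthly_margin, monthly_churn):
    """Common simplified LTV: margin per period divided by churn rate.
    Assumes constant margin and constant churn, which is rarely true."""
    return monthly_margin / monthly_churn

# The same customer, under two churn assumptions:
ltv_pessimistic = simple_ltv(20, 0.05)  # 5% monthly churn
ltv_optimistic = simple_ltv(20, 0.02)   # 2% monthly churn
```

A small error in the churn assumption multiplies through the whole model. If acquisition spend is budgeted against the optimistic number and reality looks like the pessimistic one, the ROI never arrives.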

An example from a different person is this transcript from Charlie Munger. He says that everyone in business should understand accounting. He goes on to say, “But you have to know enough about it to understand its limitations—because although accounting is the starting place, it’s only a crude approximation.” He also says everyone should understand the limits and quirks of human cognition. He recommends studying this in order to understand motivation and marketing, and to lower the chance of being manipulated.

Every Data Scientist has read some study or paper that grossly abuses Linear Regression. If you don’t know the assumptions the model makes, you can reach ridiculous conclusions. In business, it is actually very rare for a simple linear model to meet all of its assumptions while both explaining and predicting the underlying system. Of course, it can still be a useful tool as part of an ensemble or for subtasks.
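Here is a minimal sketch of one such abuse: fitting a line to data generated by a quadratic process. The data and scenario are invented for illustration. The fit looks tolerable inside the observed range, then fails badly the moment you extrapolate, because the linearity assumption was never true:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

xs = list(range(10))
ys = [x ** 2 for x in xs]  # the true process is quadratic, not linear
a, b = fit_line(xs, ys)

# Prediction error inside the observed range vs. far outside it:
in_sample_error = abs((a + b * 5) - 5 ** 2)
extrapolation_error = abs((a + b * 50) - 50 ** 2)
```

Nothing in the fitting procedure complains; the model happily returns coefficients. The abuse happens when someone reads those coefficients as if the assumptions held.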

Whenever something is quantified, some nuance and context is lost. Of course, that doesn’t mean that quantifying the unquantifiable is bad. In fact, it’s usually necessary to reduce the complexity of the problem. However, it is important to understand what information was lost. Those who respect what was lost usually also know what was retained. The people who push the border of a tool forward usually know where the border was before they moved it.
