Thursday, October 4, 2012

What is Apple Worth? Part 2: Paradox Resolved?

In part 1 of this series I talked about Daniel Bernoulli's brilliant idea of using a concave utility function to solve the St. Petersburg paradox, a problem originally posed 300 years ago (and a couple of decades before Daniel's solution) by his cousin Nicolas. I explicitly suggested a strong parallel with the way Apple is being undervalued as it gets bigger and bigger, and ended with a discussion of how the overused modern phrase "law of large numbers" (aka lol numbers), as misused by current mainstream financial media talking heads when referring to the "limits of growth" concept, is ironically in contradiction with the actual fundamental probabilistic concept called the Law of Large Numbers (LLN), first proved by Daniel's and Nicolas' uncle Jacob Bernoulli. If Jacob's LLN applies, the likelihood that Apple will continue to grow would seem higher, given the historical empirical record, not lower (as the lol numbers cliché implies). On the other hand, if limits to growth really are being reached, then the actual LLN doesn't apply (its assumption of independent trials is violated). In short, the semantic confusion and logical contradiction of invoking a misnamed "law" to justify persistent multiple compression, despite continued strong growth in Apple's earnings, might be resolved by reinterpreting it not as limits to growth but as an expression of diminishing marginal utility (although these two concepts might be related after all).

For this part 2, my goal was to explain why and how such a simple solution as a concave utility function works in assessing the pragmatic value most people place on such big yet unlikely potential returns, why and how the specific logarithmic utility proposed by Bernoulli might not work in every case, and how the theory eventually got refined over the years, decades, and centuries. This is no simple task: it means wading through 300 years of academic literature on economic theory, and most recently financial theory, from the incipient probability research to modern portfolio theory and CAPM. For an example of the nature of the literature, check out this 850-page book (many pages omitted in that Google Books preview) focused on only a single, very specific thread (the Kelly criterion) within the broader subject. Even if I could digest and condense all of it into a blog post (which I definitely can't), I wouldn't want to subject my readers to such a dry academic dissertation. Besides, it's mostly way over my head. Hence, after falling behind on my promise to post this within a couple of weeks of part 1, I've finally decided to go with a very simplified, intuitive exposition, leaning heavily on the multiple referenced links provided (with the caveat to rely on Wikipedia only for a superficial glance, and to conduct your own research through more reliable sources if a deeper understanding is needed). Apologies if this ends up with significant theoretical holes.

Concave utility of money implies that increases to already very large wealth levels are less useful (utile) than similar increases at smaller levels of wealth. For example, a gain or loss of a million dollars is nothing to a company already worth hundreds of billions, like Apple, compared to how an individual investor or a small company worth a few million might feel about it. And even for the same individual or company, starting from the same level of wealth, an equal-sized loss hurts more than the gain pleases. This is especially relevant under uncertainty:
Suppose I gave you $1,000 and then offered you a choice: flip a coin to either double it to $2,000 or lose it all, or simply keep the $1,000. Most people would choose to keep the $1,000 (even though both choices have the same expected value), particularly if $1,000 is significant relative to your income or net worth (say, more than 10%). The popular idiom "a bird in the hand" expresses this natural and almost universal risk-averse behavior in people.
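Here's a minimal numerical sketch of that choice under Bernoulli's log utility. The starting wealth of $9,000 is a hypothetical number I've picked purely for illustration; any wealth level where the stake is significant gives the same qualitative result.

```python
# A toy illustration of the $1,000 coin-flip choice: both options have the
# same expected value, but under log utility the sure $1,000 wins whenever
# the stake matters relative to existing wealth (here, hypothetically, $9,000).
import math

wealth = 9_000   # hypothetical existing net worth
stake = 1_000

keep = wealth + stake                                # take the sure $1,000
gamble_ev = 0.5 * (wealth + 2_000) + 0.5 * wealth    # coin flip, same EV

keep_u = math.log(keep)
gamble_u = 0.5 * math.log(wealth + 2_000) + 0.5 * math.log(wealth)

print(f"expected value:   keep = {keep}, gamble = {gamble_ev}")      # both 10,000
print(f"expected utility: keep = {keep_u:.4f}, gamble = {gamble_u:.4f}")  # keep wins
```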

A stock's P/E multiple can be interpreted as the price investors are willing to pay for one dollar of current earnings (more or less known, but still somewhat uncertain). This is a simplification of the DCF valuation model in which the cash flows are assumed to grow at a constant rate in perpetuity (see also the Gordon growth model). So the big uncertainty actually lies in the estimated growth of current earnings into perpetuity. However, these multiple-based valuation models don't take into account the contextual differences introduced by the concave utility curve required for risk aversion. For very high levels of earnings, toward the right of the utility curve, the additional utility of one extra dollar would be less than at lower earnings levels. This hypothesis might explain continued multiple compression for companies with very large levels of earnings (AAPL, XOM, MSFT), and perhaps justify the high-flying multiples of companies with low or even negative earnings (CRM, AMZN, LNKD).
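To make the P/E-as-simplified-DCF point concrete, here's a small sketch of the constant-growth (Gordon) model. All the inputs (the $44 EPS, the 10% discount rate, the growth rates) are hypothetical numbers chosen only to show how the implied multiple moves with assumed perpetual growth.

```python
# A minimal sketch of the constant-growth (Gordon) model behind the P/E
# shortcut: price = E * payout * (1 + g) / (r - g). All inputs hypothetical.

def gordon_price(eps, growth, discount_rate, payout=1.0):
    """Present value of earnings growing at a constant rate in perpetuity."""
    assert discount_rate > growth, "model only converges when r > g"
    return eps * payout * (1 + growth) / (discount_rate - growth)

eps = 44.0   # trailing earnings per share (hypothetical)
r = 0.10     # required rate of return (hypothetical)
for g in (0.02, 0.05, 0.08):
    p = gordon_price(eps, g, r)
    print(f"g = {g:.0%}: price = {p:8.2f}, implied P/E = {p / eps:5.1f}")
```

Notice how sensitive the implied multiple is to the assumed perpetual growth rate, which is exactly where the big uncertainty lies.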

From the investor point of view this explanation seems wrong, or at least quite unfair. The diminishing marginal utility of money argument shouldn't apply to small investors with average wealth who hold a relatively small number of shares of some mega-cap company. Their utility curve can still be quite steep (they would place a high multiple on earnings), while that of an institutional investor holding hundreds of thousands of shares might be relatively flat (more indifferent to earnings changes on such a large position). But the reality is that big institutional investors make up the majority of the investor base of low-multiple mega-caps, simply because small individual investors can't support those huge market caps. And by the way, a stock split wouldn't help either: the argument is about total market cap vs. total net income vs. total cash held by the company as a whole, and whoever may want to own any piece of it; the number of pieces is irrelevant. Thus, looked at from the company's point of view, the argument seems more reasonable. An extra billion earned by Apple seems much less critical (again, utile) to its success or survival than it would be for, say, Facebook. So the relative multiple of market cap placed on each dollar earned or held as cash would reflect this lower marginal utility.

Notice that log utility of money as suggested by Bernoulli (also implied in performance metrics such as relative earnings percent growth or the usual rate of return) is not enough to resolve the paradox in general: for any unbounded utility function, a variant of the game with sufficiently faster-growing payoffs still yields infinite expected utility (as Karl Menger later showed). What's required is bounded utility, i.e. a utility function with a maximum. Once this maximum is reached (perhaps asymptotically), then no matter how much earnings or wealth increase, utility doesn't increase. This is mathematically similar to the limits-of-growth idea, but conceptually it's quite different: it's not that earnings will inevitably stop growing due to a physical limit on resources or consumers (although that might be the case for most mega-caps), but that even if earnings could somehow keep growing indefinitely, such increasing levels of wealth would not increase the utility at all for the company or its investors.
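A quick numerical sketch of that distinction, with my own choice of bounded utility (u(w) = 1 - 1/w is just one hypothetical function with a maximum; any bounded one behaves similarly). Payoffs are passed around in log form so the super-exponential variant doesn't overflow a float:

```python
# Why log utility resolves the original game but not Menger's faster-growing
# variant, while a bounded utility handles both.
import math

def expected_utility(utility, log_payoff, terms=40):
    # Partial sum of sum_k (1/2**k) * u(payoff_k), payoffs given as log(payoff_k)
    return sum(utility(log_payoff(k)) / 2**k for k in range(1, terms + 1))

log_u   = lambda lw: lw                  # Bernoulli: u(w) = log(w)
bound_u = lambda lw: 1 - math.exp(-lw)   # bounded:   u(w) = 1 - 1/w  (max of 1)

std    = lambda k: k * math.log(2)       # log(2**k): original payoffs
menger = lambda k: 2.0**k                # log(exp(2**k)): super-exponential variant

print(expected_utility(log_u, std))       # converges (to 2*ln 2, about 1.386)
print(expected_utility(log_u, menger))    # grows without bound as `terms` grows
print(expected_utility(bound_u, menger))  # converges: utility is capped at 1
```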

These theoretical ideas do still seem a bit hazy to me. Multiple compression is a widely recognized phenomenon for mega-cap companies, but it's not yet clear to me whether it's due to the lower rates of earnings growth most of them suffer (in which case Apple, with its exceptional growth record, is being singled out unjustly), or truly due to the diminishing marginal utility hypothesis proposed here. An empirical test of valuation against wealth levels, e.g. comparing market cap against balance-sheet assets and cash flow or earnings metrics for different companies at different points in time, might provide significant evidence either way; a rough sketch of the shape such a test could take follows.
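This is my own construction of what such a test might look like, not something from the post's data: regress log market cap on log earnings across companies (or years) and check whether the elasticity comes out below 1, which is what concavity would predict. The data here is synthetic, generated from an assumed concave relationship, purely to show the mechanics.

```python
# A sketch of the suggested empirical test on synthetic data: an estimated
# elasticity b < 1 in log(cap) = a + b*log(earnings) would be consistent
# with multiples compressing as earnings scale grows.
import numpy as np

rng = np.random.default_rng(0)
log_earnings = rng.uniform(-1, 4, size=200)    # roughly $0.4B to $55B, synthetic
true_elasticity = 0.8                          # assumed concavity, for illustration
log_cap = 3.0 + true_elasticity * log_earnings + rng.normal(0, 0.3, size=200)

b, a = np.polyfit(log_earnings, log_cap, 1)    # slope first, then intercept
print(f"estimated elasticity b = {b:.2f} (b < 1 supports the hypothesis)")
```

A real test would of course pull actual market caps and reported earnings from filings instead of simulated numbers, and would need to control for growth rates to separate the two competing explanations.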

In the specific case of Apple, I've run several non-linear regression models of stock price against cash per share (contrast with Asymco's linear regression model of this same balance sheet relationship) as well as EPS, and have found that multi-stage (partitioning the data at a few different orders of magnitude in size), non-linear (concave) relationships fit the historical data much better than a linear regression. I'll leave the detailed exposition of these regression models for Apple (and some projections) for the final installment of this series of posts on valuation, but the flavor of the linear-vs-concave comparison is sketched below.
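This sketch is not the actual multi-stage models (those wait for part 3); it only shows the kind of fit comparison involved, on synthetic (cash per share, price) data generated to be concave by construction:

```python
# Comparing a linear fit against a concave (log) fit by in-sample R^2,
# on synthetic data standing in for (cash per share, price) observations.
import numpy as np

rng = np.random.default_rng(1)
cash = np.sort(rng.uniform(5, 130, size=80))            # cash per share, synthetic
price = 150 * np.log1p(cash) + rng.normal(0, 25, 80)    # concave by construction

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

lin = np.polyfit(cash, price, 1)             # price ~ a + b * cash
con = np.polyfit(np.log1p(cash), price, 1)   # price ~ a + b * log(1 + cash)

print("linear  R^2:", round(r_squared(price, np.polyval(lin, cash)), 3))
print("concave R^2:", round(r_squared(price, np.polyval(con, np.log1p(cash))), 3))
```

On data with a genuinely concave relationship, the log specification should win this comparison, which is the pattern I found in Apple's historical numbers.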

I'll finish this one with a couple more philosophical musings, and a potentially invaluable educational reference. I find it fascinating how theoretical economists take a simple idea or bit of intuition and evolve it into some of the most deterministic general precepts for all of society. They start by imagining some extremely simplistic, idealized, pretty silly wonderland-like world ("suppose the whole economy is made of only apple trees that produce apples every year...") and somehow, after trustingly pulling at that "suspension of disbelief" thread and falling deeper and deeper into the rabbit hole, they end up revealing truths that remarkably do explain actual behavior in a very broad and rough general sense: markets and insurance and interest rates and a whole bunch of other stuff.

But sometimes this highly stretched-out hyper-rational logic blows up in our irrational faces, and we end up trying to build fences and barriers to keep them crazy goats under control. Despite the rise of behavioral economics and finance, it's quite astounding how much of our modern world's most critical support systems rely on these seemingly silly "rational agent" models working perfectly. It's amusing to learn about all the different utility functions economists have ascribed to this weird homo economicus guy, the most useful of them being concave and bounded (they call those well-behaved): a myriad of them, all different, all arbitrarily yet conveniently chosen to explain or derive a particular theory or result ("assuming people have hyperbolic absolute risk aversion (a kind of utility function), then such and such holds"). All this despite most now recognizing that utility is not directly measurable! They've argued over such things for the last several centuries and still do today. Yet ordinary, even primitive people since ancient times have somehow come up with incredibly efficient mechanisms to precisely reveal all of their wants and preferences, such as trading in public (or private) markets.

As for the educational bit, Yale Professor John Geanakoplos gives a bewildering course on Financial Theory which is freely available online (Open Yale Courses, iTunes U, or YouTube). If you're brave, go ahead and watch lectures 22 and 23, which I was happily surprised to see start off along this very same plot I've been telling here. If you can somehow guess which equation on the blackboard he's aiming to throw the chalk at (you'll see what I mean from the videos) and manage to comprehend just 10% of his crazy maths, you'll surely find it worthwhile. Also, you're probably a genius! Ha, just kidding (kind of). Even after watching all the previous 21 lectures, you might get just 50% of what he's doing, if you're really smart.

Anyway, in lecture 22 mad Professor Geanakoplos states that this is the highlight of the course, and begins telling the Bernoullis' story, but mixes up Nicolas (I) (the cousin who originally posed the paradox in 1713 in private correspondence with Montmort) with a brother of Daniel also named Nicolas (II), who did in fact work with Daniel years later, very likely including work on the paradox, and who indeed died of a fever at the young age of 31 in St. Petersburg, just a few years before Daniel sent his solution to his cousin Nicolas in 1731 (see letters 17 and 18 in the link above). Then the professor jumps right into CAPM and solving for general equilibrium with two risk-averse agents and two or three assets under uncertainty. Very cool.

So, be warned about Geanakoplos: there's a sort of chaotic clutter to his delivery, and he often comes across as quite erratic. But don't get me wrong, he's actually quite impressive, and his genius shines through despite all the mad-professor quirkiness: he randomly throws out invaluable nuggets of insight, haphazardly works through almost every single mathematical derivation of every theory (if you do pay attention, you'll come to appreciate his struggle), and the whole course is well worth a shot. Again, if you're brave. Or very inquisitive about these fundamental things (my case). You must be comfortable doing math and algebra quickly in your head if you want to follow along. I didn't go through the problem sets, exams, and other course materials, because I felt I got the gist of what I needed from the videos (maybe 50% of it, as I said above), but I'm sure doing so would help a lot toward truly mastering the material.

Again, please excuse any academic shortcuts I've surely taken in this quite speculative theory about Apple's (and possibly other companies') apparent valuation discrepancies. I'll post part 3 with all the regressions and projections specific to Apple after I update them with whatever actual numbers they report on the 25th. And don't forget to check out my most recent estimates here.

1 comment:

vk said...

Daniel: Here is one funny anecdote. As I was reading part 2, a lot of what you wrote made sense to me. That is strange, because my background is not in this area at all. But then when you referred to Professor John Geanakoplos, it all made sense: I watched that course a couple of months back! Excellent course indeed. I probably got 20% of all of it, and at the time I promised myself to go back and watch it one more time. Your recommendation sure adds to that resolve.

Thanks.