The Paradox of Osteoporosis Irreversibility

Over 30 years ago I gave a paper at an osteoporosis meeting in Jerusalem titled “The Paradox of Irreversibility of Age-Related Bone Loss.” By “irreversibility” I meant that once bone was lost, not much could be done to restore it.

Perhaps “puzzle” would have been a better word than “paradox.” From our experience with other, similar deficiency states, we would have expected the lost bone to be restored. The underlying facts are these: during the postmenopausal period, bone loss occurs rapidly as estrogen levels drop to low values. Estrogen replacement therapy started at menopause prevents that loss, showing clearly that estrogen deficiency is responsible. Similarly, severe calcium deficiency also leads to bone loss, and maintaining a high calcium intake slows that loss, and perhaps even prevents it. And, as is generally recognized, low calcium intake and low estrogen status are common in contemporary women during the postmenopausal years. These two factors are the principal causes of age-related bone loss in women.

But neither estrogen nor calcium, nor the combination of the two, will restore bone once it has been lost. This seemed puzzling because, with many other nutritional and hormonal deficiencies, restoring the missing hormone or nutrient generally does return the body to its pre-deficiency state. Examples are hypothyroidism, which responds to thyroid hormone replacement, and iron-deficiency anemia, which responds fully to replacement of the lost iron.

The explanations I offered at the time for this seeming irreversibility were twofold: 1) bone building (or rebuilding) requires weight-bearing or impact exercise, and physical activity generally declines after midlife, so a condition necessary for rebuilding was missing; and 2) much of the lost bone is trabecular in character, i.e., the spongy latticework in the center of bones such as the vertebral bodies of the spine; once that lattice is lost, there is no longer a scaffolding or framework on which to rebuild.

I believe that both reasons are at least partially correct, but today they seem to me far from satisfactory explanations for this puzzling irreversibility. There is, I think, a better, more complete explanation, one that can be tested (and thus proved or disproved), and one that, if correct, could revolutionize the treatment and prevention of osteoporosis.

Bone is not just calcium. It consists, first of all, of a protein matrix within which the calcium salts are embedded. Soak a bone in acid and you remove the calcium, but what’s left still looks like the bone you started with, except that it is now rubbery rather than hard: all protein and no mineral. The key point is that, while bone is the body’s calcium reservoir, that calcium is tied up as part of a structure whose largest component is protein. When the body needs calcium and must make withdrawals from the skeletal reserves, it does so not by leaching calcium from this protein-mineral complex, but by physically tearing down microscopic units of bone and scavenging the calcium released in the process. Inevitably, therefore, the protein matrix, the structure itself, goes as well.

In order to profit fully from a high calcium intake, a patient who has lost bone needs to consume enough protein to allow the body to rebuild the lost structure. Otherwise all that a high calcium intake can do is to prevent the body’s further tearing down of bone to meet the calcium needs of other body systems and tissues. That’s a good thing to do, but it is not enough. Nevertheless, it is precisely to prevent that draining of the body’s calcium reserves that a high calcium intake (whether from food or supplements) is today a vital part of the standard of care for patients with osteoporosis. Even so, the failure of nutritional replacement to rebuild lost bone is what originally set the stage for the entry of pharmaceutical agents, some of which can produce substantial bone rebuilding.

That landscape began to change a few years ago when an insightful investigator at the Human Nutrition Research Center on Aging at Tufts University in Boston noticed that a high calcium intake did, in fact, lead to bone gain if the patient’s protein intake was also high. Bess Dawson-Hughes had previously published the results of a calcium and vitamin D supplementation trial that produced a better than 50 percent reduction in fracture risk in healthy elderly Bostonians with those two nutrients alone. But, like others before her, she noted that, while high calcium intakes reduced or stopped bone loss in her treated subjects, the two nutrients did not lead to bone gain. They didn’t, that is, in individuals consuming usual protein intakes. However, in a subset of her treated patients who, it turned out, had protein intakes above 1.5 times the RDA, bone gain was dramatic, while it was zero in those with more usual (and usually thought “adequate”) protein intakes.

For me, it was an “Aha!” moment.  Why hadn’t we thought of that?  It was known that bone is 50 percent protein by volume (but only about 20 percent calcium by weight).  And it was known that when bone is torn down (as with estrogen or calcium deficiency), its protein is degraded in the process. So it made sense that, to rebuild the lost bone, you would need not just calcium but fresh protein as well.

When I first heard of this result, I immediately went to our own Creighton database on calcium metabolism in midlife women (the “Omaha Nuns Project”) and looked to see whether protein intake (which we had recorded and measured) made a difference in the bone metabolism of our nuns. There it was, just as the Tufts investigator had shown. Our nuns with protein intakes below the median for the group could not retain calcium, no matter what the intake (i.e., they couldn’t build bone). By contrast, those with protein intakes above the median retained extra calcium reasonably well.
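For readers who want to see what such a comparison looks like in practice, here is a minimal sketch of a median-split analysis of the kind just described. The file name and column names are hypothetical; this is an illustration, not our actual analysis code.

```python
# Minimal sketch of a median-split analysis like the one described above.
# File name and column names are hypothetical illustrations.
import pandas as pd
from scipy import stats

# Each row: one subject's measured protein intake (g/kg/day) and
# calcium balance (mg/day; positive = net retention).
df = pd.read_csv("calcium_balance_study.csv")  # hypothetical file

median_protein = df["protein_g_per_kg"].median()
low = df[df["protein_g_per_kg"] < median_protein]["calcium_balance_mg"]
high = df[df["protein_g_per_kg"] >= median_protein]["calcium_balance_mg"]

# Compare mean calcium retention between the two protein strata.
t, p = stats.ttest_ind(high, low, equal_var=False)
print(f"median protein intake: {median_protein:.2f} g/kg/day")
print(f"mean balance, low-protein group:  {low.mean():+.0f} mg/day")
print(f"mean balance, high-protein group: {high.mean():+.0f} mg/day")
print(f"Welch t = {t:.2f}, p = {p:.3g}")
```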

So here were two distinct data sets, from two quite different investigations, exhibiting the same interdependence of calcium and protein. What we, and probably most clinical nutritionists, had failed to recognize was that the adult RDA for protein is just barely enough to prevent muscle loss; it is not enough to support tissue building or rebuilding. But, as already noted, when calcium deficiency leads to bone loss, the bone protein is lost as well, and that protein has to be rebuilt to restore the lost bone.

This mutual dependence of calcium and protein illustrates two key (and often underappreciated) features of nutrition. The first is that nutrients almost always act together with other nutrients. The second is what Bruce Ames of the University of California, Berkeley, has called a “triage” system within nutrition: the body ensures that its most vital functions receive scarce nutrients first, leaving the other tissues and systems to get by on what is left over. That triage mechanism seems to be at work in adult bone rebuilding. With a limited protein intake, bone, in effect, gets the leftovers. We need a high protein intake precisely to ensure that there will be something left for bone.

Two unplanned observations such as those of Dawson-Hughes and our own Creighton group, even if they make perfect sense, would not generally be considered enough to change public policy, particularly when it comes to nutrient intake recommendations.  So, if we are to be certain that supplementing both protein and calcium will permit rebuilding of lost bone, it will be necessary to mount one or more clinical trials testing that hypothesis.

Such a trial would likely start with a group of probably several hundred postmenopausal women who had already lost bone and whose protein intakes were in the range of the current RDA, that is, about 0.8 g/kg/day. All would be supplemented with sufficient calcium to permit maximal bone building if the individuals concerned could, in fact, use the calcium efficiently. All would also receive sufficient vitamin D to ensure a serum 25(OH)D concentration of 40 ng/mL or higher. Then half would be given a diet, probably involving a protein supplement, that raised their protein intakes above 1.2 g/kg/day. [Some might argue for a third group, with protein intakes at the RDA (0.8 g/kg/day) but without the substantial calcium and vitamin D supplementation envisioned above.] In either case, trial duration would be about three to four years, and the endpoint would be the observed change in bone mineral density (BMD) over that treatment period. The predicted outcome is that the lower-protein group receiving calcium and vitamin D would show no appreciable change in BMD, while the higher-protein group, also receiving extra calcium and vitamin D, would exhibit clinically significant bone gain. As outlined here, such a trial could not be blinded, mainly because the diets would be perceptibly different.
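As a concrete illustration of the dosing arithmetic in that design, here is a minimal sketch. The function name and the example body weight are mine; the 0.8 and 1.2 g/kg/day targets are the ones proposed above.

```python
# Sketch of the per-subject protein arithmetic for the two trial arms.
# Function and variable names are illustrative; the g/kg/day targets
# are the ones proposed in the trial design above.

RDA_PROTEIN = 0.8   # g/kg/day, control arm (current adult RDA)
HIGH_PROTEIN = 1.2  # g/kg/day, intervention arm (lower bound)

def protein_targets(weight_kg: float) -> dict:
    """Daily protein targets (grams) for a subject of the given weight."""
    control = RDA_PROTEIN * weight_kg
    intervention = HIGH_PROTEIN * weight_kg
    return {
        "control_g_per_day": control,
        "intervention_g_per_day": intervention,
        # The supplement must cover at least the gap between the two arms.
        "supplement_g_per_day": intervention - control,
    }

# Example: a 70 kg subject needs 56 g/day at the RDA and at least
# 84 g/day in the intervention arm, i.e., a ~28 g/day protein supplement.
print(protein_targets(70.0))
```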

Even if such a trial were to start today, it would probably be at least five years before the results would be clear and actions could be taken to change official recommendations and influence individual dietary behaviors.  What should one do in the meanwhile?

This is a matter for individual decision, but it is helpful to know that high protein intakes are safe. Their principal negative impact is on the wallet, not the body (rich protein foods tend to cost somewhat more than foods high in added sugars, for example, or other sources of empty calories). For me the decision is easy; I’d opt for the high protein intake without a second thought (with, of course, adequate calcium and vitamin D as well). As a dividend, I should note that there is one food group, dairy, that is both a very rich source of protein and the principal source of calcium in the diets of first-world populations. Moreover, if consumed as milk, its cost is less than the average cost of the other foods in your grocery cart.

Even if the trial were successful, it would be naïve to think that would be the end of the story. Recall that the pharmaceutical industry stepped into this field 25 years ago, when it appeared that nutritional therapy was not up to the task (at least as it was conceived at the time). If this protein hypothesis is correct, then better nutrition could be a far better form of prevention than pharmacotherapy. However, I suspect that the pharmaceutical industry will not back out of the field as readily as it got into it.

To be fair, resistance from Big Pharma should not be surprising. After all, they have invested billions of dollars in helping us solve a critical health problem for an aging population. Naturally, they (and the pension funds that are their stockholders) want to protect that investment. Still, if diet can do the job, few would choose a lifetime of pill-taking or injections over better eating.


Vitamin D and the nursing mother

Do infants receive enough vitamin D?

Everyone seems to agree that vitamin D is important throughout life, and that is certainly as true in the first year of life as it is later on. It is during the first year that, in addition to its role in calcium metabolism, this critical nutrient reduces both the risk of current infections and the late-life development of such autoimmune diseases as multiple sclerosis and type 1 diabetes. Both the Institute of Medicine (IOM) and the American Academy of Pediatrics (AAP) agree that vitamin D intake during the first year of life should be 400 IU/d. My own estimate of the requirement (across different ages and body sizes) is 65–75 IU per kg of body weight per day; for average infant body weights during the first year, that rule of thumb works out to somewhere between 300 and 500 IU/d. So, while there is still contention about the optimal intake for adults, there is really no disagreement about how much infants need, either among the various authoritative sources or among the different approaches to the evidence. For infants, 400 IU/d seems to be just about right.
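For readers who like to see the arithmetic spelled out, here is a minimal sketch of that rule of thumb. The function name and example weights are illustrative; the 65–75 IU/kg figures are the estimate given above.

```python
# Sketch of the 65-75 IU/kg/day rule of thumb discussed above.
# Function name and example weights are illustrative.

IU_PER_KG_LOW, IU_PER_KG_HIGH = 65, 75  # author's estimated range

def daily_vitamin_d_range(weight_kg: float) -> tuple[float, float]:
    """Estimated daily vitamin D requirement (IU/d) for a given body weight."""
    return IU_PER_KG_LOW * weight_kg, IU_PER_KG_HIGH * weight_kg

# Typical infant weights across the first year (roughly 4.5-7 kg here)
for weight in (4.5, 5.5, 6.5, 7.0):
    low, high = daily_vitamin_d_range(weight)
    print(f"{weight:4.1f} kg -> {low:4.0f}-{high:4.0f} IU/d")
# The output spans roughly 300-500 IU/d, bracketing the 400 IU/d recommendation.
```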

The question is: how is the infant to get that vitamin D? Human milk, in most nursing mothers, contains very little vitamin D. Infant formulas, from various manufacturers, all contain added vitamin D in amounts calculated to meet an infant’s needs. But extensive studies during the first year of life reveal that less than one-fifth of all infants ever get as much as the recommended 400 IU/d from any source, and fewer than one in 10 breast-fed infants meets the requirement. Accordingly, the AAP urges that all infants, whether breast-fed or formula-fed, receive their 400 IU/d as pediatric drops. Unfortunately, this recommendation, while appropriate, is not often followed. Most babies are simply not getting the vitamin D they need, and the late-life consequences of this shortfall could be enormous.

It must seem strange that, on the one hand, we stress that human milk is the best source of nourishment for our babies and, on the other, seem to ignore the fact that human milk doesn’t contain the vitamin D those babies need. The explanation, very simply, is that the disconnect is an artifact of modern living. Nursing mothers today have so little vitamin D in their own bodies that there is little or none left over to put into their milk. It has not always been this way. We know that the vitamin D blood concentrations regularly found today in Africans living ancestral lifestyles are high enough to support putting into breast milk all the vitamin D an infant needs. But the bulk of the world’s population today is not living on the high equatorial plains of East Africa, nor exposing much of its skin for most of the day.

Fortunately, we don’t have to return to East Africa. It turns out that, if we give nursing mothers enough vitamin D to bring their blood levels up to the likely ancestral levels, then they automatically put all of the vitamin D their baby needs into their own milk, thereby ensuring that the infant gets total nutrition without the need to resort to vitamin D drops.

How much vitamin D does the mother need to ensure an adequate amount in her milk? As with everything else related to vitamin D, there is a lot of individual variation, but it appears that the daily intake must be in the range of 5,000–6,000 IU. Not surprisingly, that is just about the amount needed to reproduce the vitamin D blood levels found in persons living ancestral lifestyles today. And while 5,000–6,000 IU may at first seem high, it is important to remember how much the sun produces for us: a single 15-minute, whole-body, mid-day sun exposure in summer produces well over 10,000 IU.

There is one important proviso for nursing mothers concerning the needed intake. Those who live in North America and have to rely on supplements should be certain that they take their supplements every day. While for other purposes it is possible to take vitamin D intermittently (e.g., once a week), that doesn’t work for putting vitamin D into human milk. The residence time of vitamin D in the blood is so short that, if the mother stops taking her vitamin D supplement for a day or two, vitamin D in her milk will be low (or absent altogether) on the days she skips.
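To see why skipped days matter, here is a back-of-the-envelope sketch. It assumes, purely for illustration, a roughly one-day circulating half-life for parent vitamin D, standing in for the short residence time just described; the true value varies from person to person.

```python
# Back-of-the-envelope model of circulating parent vitamin D under
# daily vs. skipped dosing. The ~24 h half-life is an illustrative
# assumption standing in for the "short residence time" noted above.
import math

HALF_LIFE_DAYS = 1.0                  # assumed, for illustration only
DECAY = math.log(2) / HALF_LIFE_DAYS  # first-order elimination rate
DOSE = 1.0                            # one daily dose, arbitrary units

def level_after(days_dosed: list[bool]) -> float:
    """Circulating level at the end of a run of days (True = dose taken)."""
    level = 0.0
    for took_dose in days_dosed:
        if took_dose:
            level += DOSE
        level *= math.exp(-DECAY)     # one day of elimination
    return level

steady = level_after([True] * 30)                  # a month of daily dosing
skipped = level_after([True] * 28 + [False] * 2)   # then skip two days

print(f"daily dosing:         {steady:.2f}")
print(f"after 2 skipped days: {skipped:.2f} "
      f"({skipped / steady:.0%} of the daily-dosing level)")
```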

There is a glaring disconnect here between these well-attested physiological facts and the official IOM recommendation for nursing mothers, which is only 400 IU/d, the same intake the IOM recommends for her baby (whose body weight is less than 10% of her own). If the IOM were explicit about its current recommendations, it would be telling nursing mothers something like this:

“The evidence we analyzed indicates that your own body needs only 400 IU of vitamin D each day. Unfortunately, that won’t allow you to put any vitamin D into your breast milk. Sorry about that . . . So, if you want to ensure that your baby is adequately nourished, you are going to have to resort to giving your infant vitamin D drops.”

That would be a hard message to sell, and clearly it makes little sense on its face. As I have already noted, women living ancestral lifestyles (whether or not they are nursing an infant) have far higher blood levels of vitamin D than contemporary urban Americans. Milk production, and its optimal composition, are simply two of the many functions that vitamin D supports in a healthy adult. This breast-feeding example is not a special case; it is just one of the many pieces of evidence indicating that current vitamin D recommendations for adults are too low. Way too low.

Vitamin D supplements, and in this case vitamin D drops, are literal lifesavers for infants today. But what about two or three generations back, before nutritional supplements came into existence, yet long after the migration out of Africa? Ninety years ago vitamin D had not yet been discovered, and there certainly were no vitamin D supplements that could have been used. How did we get by through all those millennia? There are two answers. Most of us, living in temperate latitudes, got far more sun exposure than we do today, and of course there were no sunscreens blocking the solar radiation that produces vitamin D in our skin. Those of us living in far northern latitudes survived mostly on diets very high in seafood, which is naturally a rich source of vitamin D. And those who got vitamin D by neither route were at increased risk of a whole host of vitamin D-related disorders, the most obvious and most easily recognized being rickets.

The bony deformities of rickets were common a century ago in Europe, North America, and East Asia, and were largely eradicated in growing children by the use of cod liver oil and, in the US, by the introduction of vitamin D fortification of milk in the 1930s. Fortunately, growing children can repair some of the bone deformities of rickets if they are given vitamin D soon enough. But repairing rickets, while good and necessary, is not sufficient. By the time we recognize the deformities of rickets, it is already too late to ensure maximal protection against, for example, the autoimmune diseases, for which susceptibility is mainly determined in the first year of life.

To sum up, we now better recognize the importance of vitamin D in the earliest stages of life, and there is, as noted earlier, quite good agreement on how much an infant needs. Where we lack consensus is on how to make certain that all of our babies get that amount. Why not rely on giving nursing infants vitamin D drops, as the AAP recommends? Two reasons: 1) it has been tried and has largely failed; and 2) even when it works for an individual infant, it provides no benefit for the mother. By contrast, ensuring an adequate vitamin D intake for the mother during pregnancy and lactation is almost certainly the best way to meet the needs of both.

An “adequate” intake for nursing mothers, as noted earlier, is not the 400 IU/d the IOM recommends, but is instead in the range of 5,000–6,000 IU, taken every day. If they get that much, mothers will meet not only their own needs but their infant’s as well, and they will have the satisfaction of knowing that they are supplying all their baby’s needs the natural way.

Links for more exploration:

  1. Perrine CG, Sharma AJ, Jefferds MED, Serdula MK, Scanlon KS. Adherence to vitamin D recommendations among US infants. Pediatrics 2010;125:627-632.
  2. Hollis BW, Wagner CL. The role of the parent compound vitamin D with respect to metabolism and function: why clinical dose intervals can affect clinical outcomes. J Clin Endocrinol Metab 2013;98:4619-4628.
  3. Luxwolda MF, Kuipers RS, Kema IP, van der Veer E, Dijck-Brouwer DAJ, Muskiet FAJ. Vitamin D status indicators in indigenous populations in East Africa. Eur J Nutr 2013;52:1115-1125.

Antagonists or Partners? Part 2

In my last post I argued that capitalism and socialism need not be opposed to one another, since capitalism is focused mainly on increasing resources or wealth, while socialism is focused more on their distribution. However, as encountered in the real world, capitalism usually involves private ownership of resources, while socialism often involves group or state ownership and management. Thus, if the frame of reference is ownership or management, the two could well be seen as opposites, if not necessarily antagonists. In any case, rather than focusing on labels (“capitalism”, “socialism”), it will be more helpful to concentrate on how the two kinds of economies operate.

Capitalism, it was noted, increases resources or wealth by feeding back some of the output of a productive process to improve that process’s yield; built-in incentives seem to be needed to ensure that this happens. In theory, there is no reason why such improvement-oriented feedback could not work when resources are collectively owned. However, abundant experience with such approaches (the U.S.S.R., Albania, Communist China, North Korea, to cite just a few modern examples) shows that collective ownership does not increase the size of the pie. There are several likely reasons:

  1. Innovation is a “micro” activity and tends to be stifled at a “macro” level;
  2. While the common good should motivate the members of a collectivity to improve its processes, opinions about what is worth doing or what is of value are not uniformly shared; so the enthusiasm, the extra work, and the consensus needed to implement or test a change in process are weakened; and
  3. As collective ownership often leads to the aggrandizement of power and wealth in the hands of a ruling class, little or none of the output of the economy is fed back in, so as to improve yields and hence increase the size of the pie.

Thus private ownership and management would seem to be an essential feature of an economic system that ensures the growing of the pie. That, however, is a pragmatic rather than a deductive conclusion. One recalls Thomas Aquinas’s conclusion that he could find no philosophical basis for private ownership, but that he nevertheless supported it because resources are better managed when held by individuals than when held in common. One is reminded also of what Garrett Hardin more recently called the “tragedy of the commons”. With that in mind, and taking our cue from Aquinas, control of resources needs to be seen not so much as absolute ownership but as stewardship.

I noted in the earlier post that the way resources are distributed expresses, de facto, the values of a society. Whether based in economic pragmatism or in religion, these values boil down to the degree of responsibility the members of a society feel toward one another. At one extreme, we have the shared care and concern of members of a closely knit family; at the other, an every-person-for-himself mentality that inexorably results in greater and greater disparities between the “haves” and the “have-nots”. This latter is often referred to as laissez-faire capitalism. “Laissez-faire” is French for “let them do” (more or less as they please). It is important to stress that laissez-faire is not an integral part of capitalism; its philosophy boils down to the reduction or elimination of all external constraints on economic activity, i.e., little or no regulation.

The current debate in the United States, in which the terms “capitalism” and “socialism” are bandied about as epithets, is really about regulation, not about abstract economic principles. Regulation is not anti-capitalist; it does not in itself kill the process improvement that is the strength of a capitalist system, nor does it necessarily stifle the incentives needed for the system to work. To label regulation as “socialist” is therefore nonsense. Perfectly good examples of constructive regulation exist in the contemporary world. Germany is a highly successful capitalist economy (and one we obviously admire, judging from the tenor of television commercials), and yet the disparity between the haves and the have-nots in the German economy is vastly smaller than in the United States. This is partly because the interests of the employees of big companies are required to be represented on the boards of directors of the corporations concerned. With that arrangement, both management and labor have a chance to see that what is good for both is good for the company. The necessity of regulating such arrangements is virtually self-evident: all the companies in an economy have to play by the same rules, and the only feasible way to ensure that is to impose those rules from the top. George Tyler, in his book What Went Wrong . . ., describes two variants of capitalism, which he calls family capitalism and shareholder capitalism. Both are equally capitalist; they differ mainly in the sets of regulations under which each operates.

I mentioned above that the choice of systems of distribution might be based in economic pragmatism or in religious principles.  It may be worth expanding a bit on each.

An economy must have both producers and consumers, and under ideal circumstances most members of a society play both roles. That was the insight of Henry Ford who, as noted in the earlier post, understood that his workers needed to be able to afford (and to buy) his automobiles. When wealth is so concentrated at the top of the economic ladder that those lower down have little or no purchasing power, the economy will sooner or later, but inevitably, grind to a halt. It is a self-defeating situation and hence one that cannot be sustained. Economists and political scientists have noted also that democracy itself depends upon a society’s having a large middle class. That is almost surely part of the reason so many underdeveloped nations have had trouble adopting and integrating the political system we in the U.S. consider ideal. Political power inevitably attaches to concentrations of wealth, and when that happens the lower-class majority no longer has an effective voice or influence. What remains is at best an oligarchy, i.e., rule by the few. The ancient Greeks, who created the first true democracy, coined the term “oligarch”, and Aristotle himself spoke of its disadvantages for a society over 2,300 years ago.

Religion does not speak with a single voice on these issues, but perhaps the most pertinent insights are found in the Jewish and Christian traditions, in which ownership of all resources ultimately rests with God, and the human role is that of manager or steward. (The obvious questions are: management for what purpose? Stewardship toward what end?) A steward can, of course, abuse his or her control, as the gospel parables clearly illustrate. Still, the chemistry of stewardship, the attitude it fosters toward resources, is vastly different from that of absolute ownership.

Christianity adds another dimension to the Jewish understanding of stewardship. Very simply, baptism is seen as entry into a new family, with God as Father, a family not only with privileges but with responsibilities as well. Early Christians, like Christians today, pray “Our Father . . .”, not “My Father”. Jesus said, “You are all brothers [and sisters]” (Matt 23:8). Everyone within this kinship (those to whom we refer when we say “our”) is brother or sister to me. I am called, and expected, to use my resources not only for myself but for them as well; for them first, actually. That was immediately evident in New Testament times, in a culture where kinship established one’s identity and kinship obligations were paramount. Not everyone in a pluralistic society today buys into this understanding. Still, for those who call themselves Christian, it is a contradiction to ignore their responsibility to brothers and sisters.
