Are Paleo Diets More Natural Than GMOs?

What do we mean by “natural”?

In 1980 I subjected myself to a weeklong cleansing diet of water, cayenne pepper, lemon and honey, topped off with a 150-mile bicycle ride that left me puking on the side of the road. Neither this nor any of the other fad diets I tried in my bike-racing days to enhance performance seemed to work as well as the “see-food” diet one of my fellow cyclists was on: you see it, you eat it.

In its essence, the see-food diet was the first so-called Paleo diet, not today’s popular fad, premised on the false idea that there is a single set of natural foods—and a correct ratio of them—that our Paleolithic ancestors ate. Anthropologists have documented a wide variety of foods consumed by traditional peoples, from the Masai diet of mostly meat, milk and blood to New Guineans’ fare of yams, taro and sago. As for food ratios, according to a 2000 study entitled “Plant-Animal Subsistence Ratios and Macronutrient Energy Estimations in Worldwide Hunter-Gatherer Diets,” published in the American Journal of Clinical Nutrition, the range for carbohydrates is 22 to 40 percent, for protein 19 to 56 percent, and for fat 23 to 58 percent.

And what constitutes “natural” anyway? Humans have been genetically modifying foods through selective breeding for more than 10,000 years. Were it not for these original genetically modified organisms—and today’s more engineered GMOs designed for resistance to pathogens and herbicides and for better nutrient profiles—the planet could sustain only a tiny fraction of its current population. Golden rice, for example, was modified to enhance vitamin A levels, in part, to help Third World children with nutritional deficiencies that have caused millions to go blind. As for health and safety concerns, according to A Decade of EU-Funded GMO Research, a 2010 report published by the European Commission:

The main conclusion to be drawn from the efforts of more than 130 research projects, covering a period of more than 25 years of research, and involving more than 500 independent research groups, is that biotechnology, and in particular GMOs, are not per se more risky than e.g. conventional plant breeding technologies.

So why are so many people in a near moral panic over GMOs? One explanation may be found in University of California, Los Angeles, anthropologist Alan Fiske’s four-factor relational model theory of how people and objects interact: (1) communal sharing (equality among people); (2) authority ranking (between superiors and subordinates); (3) equality matching (one-to-one exchange); and (4) market pricing (from barter to money). Our Paleolithic ancestors lived in egalitarian bands in which food was mostly shared equally among members (communal sharing). As these bands and tribes coalesced into chiefdoms and states, unequal distribution of food and other resources became common (authority ranking) until the system shifted to market pricing.

Violations of these relations help to show how GMOs have come to be treated more like moral categories than biological entities. Roommates, for example, are expected to eat only their own food or to replace one another’s consumed items (equality matching), whereas spouses share without keeping tabs (communal sharing). If you invite friends to dinner, it would be disconcerting if they offered to pay for the meal, but if you dine at a restaurant, you are expected to pay the bill rather than reciprocate by inviting the owner to your home for a comparable meal. All four relational models are grounded in our natural desire for fairness and reciprocity, and when there is a perceived violation, it creates a sense of injustice.

Given the importance of food for survival and flourishing, I suspect GMOs—especially in light of their association with large corporations such as Monsanto that operate on the market-pricing model—feel like an infringement of communal sharing and equality matching. Moreover, the elevation of “natural foods” to near-mythic status, coupled with the taboo many genetic-modification technologies are burdened with—remember when in vitro fertilization was considered unnatural?—makes GMOs feel like a desecration. It need not be so. GMOs are scientifically sound, nutritionally valuable and morally noble in helping humanity during a period of rising population. In the meantime, eat, drink and be merry.


The Difference between Science and Pseudoscience

The Electric Universe Acid Test

Discerning science from pseudoscience | By Michael Shermer | Sep 15, 2015

 

Newton was wrong. Einstein was wrong. Black holes do not exist. The big bang never happened. Dark energy and dark matter are unsubstantiated conjectures. Stars are electrically charged plasma masses. Venus was once a comet. The massive Valles Marineris canyon on Mars was carved out in a few minutes by a giant electric arc sweeping across the Red Planet. The “thunderbolt” icons found in ancient art and petroglyphs are not the iconography of imagined gods but realistic representations of spectacular electrical activity in space.

These are just a few of the things I learned at the Electric Universe conference (EU2015) in June in Phoenix. The Electric Universe community is a loose confederation of people who, according to the host organization’s Web site (thunderbolts.info), believe that “a new way of seeing the physical universe is emerging. The new vantage point emphasizes the role of electricity in space and shows the negligible contribution of gravity in cosmic events.” This includes everything from comets, moons and planets to stars, galaxies and galactic clusters.

I was invited to speak on the difference between science and pseudoscience. The most common theme I gleaned from the conference is that one should be skeptical of all things mainstream: cosmology, physics, history, psychology and even government (I was told that World Trade Center Building 7 was brought down by controlled demolition on 9/11 and that “chemtrails”—the contrails in the sky trailing jets—are evidence of a government climate-engineering experiment).

The acid test of a scientific claim, I explained, is prediction and falsification. My friends at the NASA Jet Propulsion Laboratory, for example, tell me they use both Newtonian mechanics and Einstein’s relativity theory in computing highly accurate spacecraft trajectories to the planets. If Newton and Einstein are wrong, I inquired of EU proponent Wallace Thornhill, can you generate spacecraft flight paths that are more accurate than those based on gravitational theory? No, he replied. GPS satellites in orbit around Earth are also dependent on relativity theory, so I asked the conference host David Talbott if EU theory offers anything like the practical applications that theoretical physics has given us. No. Then what does EU theory add? A deeper understanding of nature, I was told. Oh.
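To make the GPS point concrete, here is a back-of-the-envelope sketch—my own illustration, not anything presented at the conference—of the standard relativistic clock corrections for a GPS satellite. The orbital figures are approximate textbook values; the point is simply that without Einstein the system’s clocks would drift by tens of microseconds a day.

```python
# Rough estimate of the daily relativistic clock drift of a GPS satellite.
# Constants are standard published values; orbital figures are approximate.

C = 2.998e8           # speed of light, m/s
GM = 3.986e14         # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
R_SAT = 2.656e7       # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86_400

# Special relativity: orbital speed slows the satellite clock.
v = (GM / R_SAT) ** 0.5                        # circular orbital speed, ~3.9 km/s
sr_rate = -v**2 / (2 * C**2)                   # fractional slowdown

# General relativity: weaker gravity at altitude speeds the clock up.
gr_rate = (GM / C**2) * (1 / R_EARTH - 1 / R_SAT)

net_us_per_day = (sr_rate + gr_rate) * SECONDS_PER_DAY * 1e6
print(f"special relativity: {sr_rate * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"general relativity: {gr_rate * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"net clock drift:    {net_us_per_day:+.1f} microseconds/day")  # roughly +38
```

Left uncorrected, a drift of roughly 38 microseconds a day would accumulate into positioning errors on the order of kilometers, which is why the satellites’ clocks are adjusted for relativity by design.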

Conventional psychology was challenged by Gary Schwartz of the University of Arizona, who, in keeping with the electrical themes of the day, explained that the brain is like a television set and consciousness is like the signals coming into the brain. You need a brain to be conscious, but consciousness exists elsewhere. But TV studios generate and broadcast signals. Where, I inquired, is the consciousness equivalent to such production facilities? No answer.

The EU folks I met were unfailingly polite, unquestionably smart and steadfastly unwavering in their belief that they have made one of the most important discoveries in the history of science. Have they? Probably not. The problem was articulated in a comment Thornhill made when I asked for their peer-reviewed papers: “In an interdisciplinary science like the Electric Universe, you could say we have no peers, so peer review is not available.” Without peer review or the requisite training in each discipline, how are we to know the difference between mainstream and alternative theories, of which there are many?

In his book The Electric Kool-Aid Acid Test, Tom Wolfe quotes Merry Prankster Ken Kesey: “You’re either on the bus or off the bus.” It’s not that EUers are wrong; they’re not even on the bus.

 

Did Humans Evolve to See Things as They Really Are?

One of the deepest problems in epistemology is how we know the nature of reality. Over the millennia philosophers have offered many theories, from solipsism (only one’s mind is known to exist) to the theory that natural selection shaped our senses to give us an accurate, or veridical, model of the world. Now a new theory by University of California, Irvine, cognitive scientist Donald Hoffman is garnering attention. (Google his scholarly papers and TED talk with more than 1.4 million views.) Grounded in evolutionary psychology, it is called the interface theory of perception (ITP) and argues that percepts act as a species-specific user interface that directs behavior toward survival and reproduction, not truth.
Hoffman’s computer analogy is that physical space is like the desktop and that objects in it are like desktop icons, which are produced by the graphical user interface (GUI). Our senses, he says, form a biological user interface—a gooey GUI—between our brain and the outside world, transducing physical stimuli such as photons of light into neural impulses processed by the visual cortex as things in the environment. GUIs are useful because you don’t need to know what is inside computers and brains. You just need to know how to interact with the interface well enough to accomplish your task. Adaptive function, not veridical perception, is what is important.

Hoffman’s holotype is the Australian jewel beetle Julodimorpha bakewelli. Females are large, shiny, brown and dimpled. So, too, are discarded beer bottles dubbed “stubbies,” and males will mount them until they die by heat, starvation or ants. The species was on the brink of extinction because its senses and brain were designed by natural selection not to perceive reality (it’s a beer bottle, you idiot!) but to mate with anything big, brown, shiny and dimply.

To test his theory, Hoffman ran thousands of evolutionary computer simulations in which digital organisms whose perceptual systems were tuned exclusively for truth were outcompeted by those tuned solely for fitness. Because natural selection depends only on expected fitness, evolution shaped our sensory systems toward fitter behavior, not truthful representation.
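To give a flavor of such simulations, here is a toy sketch—my own cartoon under simplifying assumptions, not Hoffman’s actual models: when the payoff of a resource is a non-monotonic function of its true quantity (some is good, too much is bad), a forager that perceives only payoffs outcompetes one that perceives the quantities truthfully and simply prefers more.

```python
# Toy "fitness beats truth" illustration (an assumption-laden cartoon, not Hoffman's code).
# Payoff is non-monotonic in the true resource quantity: about 50 units is ideal.
import random

def payoff(quantity: float) -> float:
    """Fitness payoff peaks at 50 units and falls off toward 0 and 100."""
    return max(0.0, 1.0 - abs(quantity - 50.0) / 50.0)

def truth_strategy(options):
    """Perceives true quantities and, valuing 'more', picks the largest."""
    return max(options)

def fitness_strategy(options):
    """Perceives only the payoff of each option and picks the best one."""
    return max(options, key=payoff)

def tournament(trials: int = 100_000, seed: int = 0):
    rng = random.Random(seed)
    truth_total = fitness_total = 0.0
    for _ in range(trials):
        options = [rng.uniform(0.0, 100.0) for _ in range(3)]
        truth_total += payoff(truth_strategy(options))
        fitness_total += payoff(fitness_strategy(options))
    return truth_total / trials, fitness_total / trials

truth_avg, fitness_avg = tournament()
print(f"truth-tuned forager:   {truth_avg:.3f} average payoff")
print(f"fitness-tuned forager: {fitness_avg:.3f} average payoff")
```

In repeated runs the fitness-tuned forager consistently collects more payoff, which is the intuition behind Hoffman’s claim, though his published work uses genuine evolutionary game dynamics rather than this one-shot comparison.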

ITP is well worth serious consideration and testing, but I have my doubts. First, how could a more accurate perception of reality not be adaptive? Hoffman’s answer is that evolution gave us an interface to hide the underlying reality because, for example, you don’t need to know how neurons create images of snakes; you just need to jump out of the way of the snake icon. But how did the icon come to look like a snake in the first place? Natural selection. And why did some nonpoisonous snakes evolve to mimic poisonous species? Because predators avoid real poisonous snakes. Mimicry works only if there is an objective reality to mimic.

Hoffman has claimed that “a rock is an interface icon, not a constituent of objective reality.” But a real rock chipped into an arrow point and thrown at a four-legged meal works even if you don’t know physics and calculus. Is that not veridical perception with adaptive significance?

As for jewel beetles, stubbies are what ethologists call supernormal stimuli, which mimic objects that organisms evolved to respond to and elicit a stronger response in doing so, such as (for some people) silicone breast implants in women and testosterone-enhanced bodybuilding in men. Supernormal stimuli operate only because evolution designed us to respond to normal stimuli, which must be accurately portrayed by our senses to our brain to work.

Hoffman says that perception is species-specific and that we should take predators seriously but not literally. Yes, a dolphin’s icon for “shark” no doubt looks different than a human’s, but there really are sharks, and they really do have powerful tails on one end and a mouthful of teeth on the other end, and that is true no matter how your sensory system works.

Also, computer simulations are useful for modeling how evolution might have happened, but a real-world test of ITP would be to determine if most biological sensory interfaces create icons that resemble reality or distort it. I’m betting on reality. Data will tell.

Finally, why present this problem as an either-or choice between fitness and truth? Adaptations depend in large part on a relatively accurate model of reality. The fact that science progresses toward, say, eradicating diseases and landing spacecraft on Mars must mean that our perceptions of reality are growing ever closer to the truth, even if it is with a small “t.”

In memory of a brave man

To remember and to forget

These days Israelis are commemorating the 20th anniversary of Yitzhak Rabin’s assassination. While writing this opening line I check myself again: maybe I’m wrong, has it really been that long?

For the occasion, Dr. Dana Arieli of the Holon Institute of Technology (HIT) invited me to take part in an exhibition marking this sad anniversary.

I debated with myself at length on this one while iconic clichés seduced my mind by the score. I ended up marrying two metaphors, synthesizing them into one: a memorial candle, which in Israel is commonly lit on death anniversaries, and a goldfish, (wrongly) reputed to forget whatever occupied its mind by the time it completes a circle around its bowl. As the flame is extinguished under the water, the goldfish cannot remember what it was for in the first place. I know, I sometimes tend to complicate things, but I just couldn’t let go of this image. For me it tells the story of the amnesia Israeli society has prescribed itself in order to avoid confronting the life-saving necessity of peace.

The Meaning of Life in a Formula

Can science help us overcome the terror of existence?

Harvard University paleontologist Stephen Jay Gould, who died in 2002, was a tough-minded skeptic who did not suffer fools gladly when it came to pseudoscience and superstition. Gould was a secular Jew who did not believe in God, but he had a soft spot for religion, expressed most famously in his principle of NOMA—nonoverlapping magisteria. The magisterium (domain of authority) of science “covers the empirical realm: what is the universe made of (fact) and why does it work this way (theory),” he wrote in his 1999 book Rocks of Ages: Science and Religion in the Fullness of Life. “The magisterium of religion extends over questions of ultimate meaning and moral value.” 

In part, Gould’s motivations were personal (he told me on many occasions how much respect he had for religion and for his many religious friends and colleagues). But in his book, he claimed that “NOMA represents a principled position on moral and intellectual grounds, not a merely diplomatic solution.” For NOMA to work, however, Gould insisted that just as “religion can no longer dictate the nature of factual conclusions residing properly within the magisterium of science, then scientists cannot claim higher insight into moral truth from any superior knowledge of the world’s empirical constitution.”

Initially I embraced NOMA because a peaceful concordat is usually more desirable than a bitter conflict (plus, Gould was a friend), but as I engaged in debates with theists over the years, I saw that they were continually trespassing onto our turf with truth claims on everything from the ages of rocks and miraculous healings to the reality of the afterlife and the revivification of a certain Jewish carpenter. Most believers hold the tenets of their religion to be literally (not metaphorically) true, and they reject NOMA in practice if not in theory—for the same reason many scientists do. In his penetrating 2015 analysis, Faith vs. Fact: Why Science and Religion Are Incompatible, University of Chicago evolutionary biologist Jerry A. Coyne eviscerates NOMA as “simply an unsatisfying quarrel about labels that, unless you profess a watery deism, cannot reconcile science and religion.”

Curiously, however, Coyne then argues that NOMA holds for scientists when it comes to meaning and morals and that “by and large, scientists now avoid the ‘naturalistic fallacy’—the error of drawing moral lessons from observations of nature.” But if we are not going to use science to determine meaning and morals, then what should we use? If NOMA fails, then it must fail in both directions, thereby opening the door for us to experiment in finding scientific solutions for both morals and meaning. 

In The Moral Arc, I give examples of how morality can be a branch of science, and in his 2014 book Waking Up: A Guide to Spirituality without Religion, neuroscientist Sam Harris makes a compelling case that meaning can be found through the scientific study of how the mind works (particularly during meditation and other mindful tasks), noting that “nothing in this book needs to be accepted on faith.” And Martin Seligman’s pioneering efforts to develop a science of positive psychology have had as their aim a fuller understanding of the conditions and actions that make people happy and their lives meaningful. 

Yet what if science shows that there is no meaning to our lives beyond the purposes we create, however lofty and noble? What if death is the end and there is no soul to continue after life? According to psychologists Sheldon Solomon, Jeff Greenberg and Tom Pyszczynski, in their 2015 book The Worm at the Core: On the Role of Death in Life, the knowledge that we are going to die has been a major driver of human affairs and social institutions. Religion, for example, is at least partially explained by what the authors call terror management theory, which posits that the conflict between our desire to live and our knowledge of our inevitable death creates terror, quelled by the promise of an afterlife. If science takes away humanity’s primary source of terror management, will existential anguish bring civilization to a halt? 

I think not. We do live on—through our genes, our loves, our friends and our contributions (however modest) to making the world a little bit better today than it was yesterday. Progress is real and meaningful, and we can all participate.

Trains of thoughts

Earlier this year, Shlomo Artzi mentioned in his column a poem by the late Israeli poet Yehuda Amichai. The poem, Poem No. 10 from The Math Book, refers to a genre of math-textbook questions once considered “classical.” These questions usually revolve around two trains that leave their respective stations at a given time and travel along a track of a certain length at a certain speed; the student is then asked to determine when and where the trains will meet. Amichai, with his poetic magic wand, diverted the focus of the question and, with his humanistic instinct, concentrated on the opportunities seized and missed between would-be lovers traveling on those trains. The poem ends with an air of acceptance and the word “Amen.” He’s my favorite Israeli poet.
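For anyone who never met this genre of problem, here is the arithmetic the poem plays on, in a minimal sketch with invented numbers: two trains leave their stations toward each other at the same moment, and the question is when and where they meet.

```python
def trains_meet(distance_km: float, speed_a_kmh: float, speed_b_kmh: float):
    """Two trains depart simultaneously toward each other; return (hours, km from A)."""
    hours = distance_km / (speed_a_kmh + speed_b_kmh)  # they close the gap at the sum of their speeds
    return hours, speed_a_kmh * hours

# Example with made-up numbers: stations 300 km apart, trains at 80 km/h and 70 km/h.
t, x = trains_meet(300, 80, 70)
print(f"They meet after {t:.1f} hours, {x:.0f} km from the first station.")
```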

Biohacking

Going back through my files, I stumbled upon this one, commissioned by G magazine earlier this year.

Here’s what Biohacking is all about, from Wikipedia:

Biohacking or wetware hacking is the practice of engaging biology with the hacker ethic. Biohacking encompasses a wide spectrum of practices and movements ranging from “grinders” who design and install do-it-yourself body-enhancements such as magnetic implants to do-it-yourself biologists who conduct at-home gene sequencing. Biohacking emerged in a growing trend of non-institutional science and technology development. Many biohacking activists, or biohackers, identify with the biopunk movement as well as transhumanism and techno-progressivism.

“Biohacking” can also refer to managing one’s own biology using a combination of medical, nutritional and electronic techniques. This may include the use of nootropics, non-toxic substances, and/or cybernetic devices for recording biometric data.

Are you a Zombie?

What the living dead can teach us about ancient prejudices

Scientific American, Volume 312, Issue 1 | By Michael Shermer | Dec 16, 2014 | Skeptic

Imagine one day you find yourself surrounded by a group of zombie hunters who are about to blow your head off in the belief that you are a member of the evil walking dead. Showing the human traits of mercy and justice, however, they give you a chance to convince them that you are not a zombie. What would you say?
This thought experiment is featured in the British sociologist Jeremy Stangroom’s delightful little book Is Your Neighbor a Zombie? (Bloomsbury, 2014) as a puzzle in the philosophical problem of “other minds.” Since we cannot get inside other people’s heads, we have only their actions by which to assess their mental states. If a robot with advanced artificial intelligence could perfectly mimic human actions, how could you tell the difference between it and a real human? Thus, if you tell the zombie hunters that you have a family and a job, that you have hobbies and play sports, and that if stabbed you will cry out in pain, they can counter that zombies exhibit all of these behaviors even though they are not human on the inside. Zombies appear to be sentient beings, they explain, but are actually unconscious automata, merely exhibiting human-like features.
In philosophy of mind studies this is what is known as a philosophical zombie, or p-zombie, and according to Stangroom, there is no way out of this dilemma. Whether such actions are generated by human brains, zombie brains, robotic computer brains, or even animating souls, we can never truly know someone’s (or something’s) mental state. Although, he notes, if we’re really zombies “we simply won’t care.”
Therein, perhaps, is a solution to the dilemma: self-awareness means you’re not a zombie, and if others claim that they are also self-aware and exhibit actions indicating such, it is reasonable to conclude that they are not zombies either. Distinguishing between zombie and non-zombie brains captures a deeper problem of how we discriminate between various human groups, and Shakespeare worked out the logic of why we shouldn’t in The Merchant of Venice when he has Shylock ask:

Hath not a Jew eyes? Hath not a Jew hands, organs, dimensions, senses, affections, passions; fed with the same food, hurt with the same weapons, subject to the same diseases, heal’d by the same means, warm’d and cool’d by the same winter and summer, as a Christian is? If you prick us, do we not bleed? If you tickle us, do we not laugh? If you poison us, do we not die? And if you wrong us, do we not revenge? If we are like you in the rest, we will resemble you in that.

P-zombies differ from the zombies of pop culture, such as those explored by neuroscientists Timothy Verstynen and Bradley Voytek in Do Zombies Dream of Undead Sheep? (Princeton University Press, 2014). These zombies, they write, suffer from “Consciousness Deficit Hypoactivity Disorder” characterized by “a lack of intentional control over their actions, lethargic and fatigued movements (akinesthesia), loss of a sense of pleasure (anhedonia), general language dysfunction (aphasia), memory impairments (amnesias), and an inability to suppress appetitive actions such as eating or aggressive ‘fight-or-flight’ behaviors.” These very real disorders (and others, such as a neural virus hijacking a brain), if combined into one person, would create the zombie found in such films as Dawn of the Dead and 28 Days Later, a staple of this now wildly popular genre.
The odds of such medical maladies taking over millions of bodies at once are remote, so a zombie apocalypse is not something we need concern ourselves with. Why, then, are zombies so popular? Perhaps they touch an instinctive xenophobia that evolved as part of our human nature to be suspicious of outsiders who, in our ancestral environment, were potentially dangerous. In the Western world we have learned to curb such chauvinisms, and as a result the moral sphere has expanded to include all racial and ethnic groups as worthy of moral respect and equality. So maybe zombies and other fictional beings found in the science-fiction, fantasy and horror genres stimulate those neural regions of our non-zombie brains and provide a healthy, nonviolent outlet for such ancient callings.

 

And on a personal note: Yesterday my computer came back from the dead after having been left on the conveyor belt of Fiumicino airport’s x-ray machine ten days ago.

With the help of some very good friends across countries and languages, good will prevailed and the computer was found. Hallelujah!