Over at Ethics Alarms, Jack Marshall is blogging about a recent British Medical Journal study of TV medical talk shows which found that, basically, Dr. Oz is talking out of his ass. Jack makes a good point, but this throw-away line caught my eye:

For some reason medical experts have waited over a decade to actually check out the snake oil Dr. Oz has been selling to credulous viewers…

That’s an interesting phenomenon that I’ve observed before. Many experts seem to be so thoroughly immersed in the framework of their fields that related quackery doesn’t seem relevant to what they do. I imagine it just seems so obviously wrong that they don’t even think of it as part of their field, which is why astronomers don’t spend much time debunking astrology and lawyers don’t feel the need to address the problems with redemption theory. I guess doctors don’t give much thought to TV medical advice because they don’t see it as relevant to the practice of medicine.

It probably says something about me that I’m thinking of buying a Geiger counter.

It’s not that I really have a need to detect radioactivity, exactly. But you know all those CSI shows where they pull out the UV light and shine it around a crime scene to find suspicious stains? If you tried that at home, the ultraviolet light would probably find all kinds of messes — where the dog peed on the rug, or a child spilled food, or granddad didn’t quite make it to the bathroom that one time — things you’d probably rather not know are there. And you’re really better off not knowing what would show up if you tried that on the sheets the next time you’re staying at a cheap hotel.

Well, I have this theory that there’s a little bit of the same thing going on with radioactivity. That is, I think if I poked around with a Geiger counter in some likely places — the alleys behind hospitals, university science buildings, junkyards that are not supposed to be used for radioactive waste — I’ll bet I’d find some radioactivity that’s hanging out where it’s not supposed to be.

Also, all kinds of radioactive stuff got into the wild in the early days of the nuclear age, and some of that stuff is still knocking about — radioactive samples from educational kits, mid-1900’s red-glazed earthenware, radioactive materials that were not properly disposed of and were reworked into everything from elevator buttons to jewelry — and it would be fun to try to find some of it.

Truthfully, I need to research this a lot more. I need to have a much better plan for finding stuff than just “looking around.” I’m not even sure what kind of Geiger counter I’d need. I’ll have to come up with theories backed by historical evidence for how radioactive materials could get into the environment in quantities I could detect with equipment I can afford. Then I’d have to come up with a plan for searching for it, preferably without attracting the kind of attention you might get when someone points out the strange guy with a clicking box to the police.

It probably says something about my marriage that when I told my wife I was thinking of buying a Geiger counter, she was genuinely surprised that I didn’t already have one.

When word got around that Bill Nye (“The Science Guy”) was going to debate Ken Ham, the founder of the Creation Museum, I was skeptical: just understanding the science of evolution is not good enough. You also have to understand and be prepared for all the creationist criticisms of evolution, and most scientists have no reason to learn about those.

Among other things, I wrote:

In other words, in order to debate the subject of evolution, it’s not enough to learn all about evolution. You also have to learn all about creationism, and how creationists think about evolution. You have to be familiar with things like Jonathan Wells’s anti-evolution Icons of Evolution and Alan Gishlick’s explanation of why it’s wrong. You have to absorb the creationists’ way of thinking about evolution in order to explain your point to them in a way they will understand.

And that’s just not something I’m willing to spend a lot of my time on. And unless Bill Nye has been secretly setting up Ken Ham for this debate for months, it’s not something he’s spent a lot of time on either.

As it turns out, Nye didn’t quite set Ham up, but as he explains in a Skeptical Inquirer article about his view of the debate, he did do a lot of prep work:

I consulted the world’s foremost authorities on arguing or debating with creationists. I flew to Oakland, California, and consulted with the famed, venerable, and formidable Genie Scott, along with Josh Roseneau, and the staff at the National Center for Science Education (NCSE). They schooled me on what to do in great detail. Later that week, I managed to arrange a lunch with Don Prothero and Michael Shermer, two hardcore skeptics. Don even debated the notorious Duane Gish back in the 1980s. All of these people were wonderfully helpful. They were very patient with me and helped me figure out what to say and, especially, what not to say. They said to prefer the word “explanation” to the word “theory,” for example. I just can’t thank them enough.

I haven’t seen the debate, but it sounds like he also managed to pull off a strategy I’ve seen work before, which is to steer the debate so it isn’t constantly about the attacks on the theory of evolution, but rather a discussion of what the alternative might be. I’ve seen forum discussions where creationists were hammered by repeated requests to “state the theory of creationism.” After all, if they think they’ve got a better theory, they should be able to explain what it is and why it does a better job of explaining the physical evidence of the real world.

Ken Ham’s theories are well known, so Nye didn’t have to challenge him to describe them, which allowed Nye to address them directly and point out why they were deficient:

If you take the time to watch, Mr. Ham repeatedly mentioned or droned on about the less-than-a-handful of scientists who subscribe to the weird idea that the Earth is crazy (or crazily) young. When my turn came, I talked about geology and the Grand Canyon. Creationists from the United States, or in Australian-born Ham’s case, in the United States, just can’t get enough of the Grand Canyon. I pointed out that not a single fossil form had tried to swim from one rock layer to another during his purported worldwide flood, only 4,000 years ago. Were we to find such a fossil, it would utterly change geology and our scientific worldview. I did a bit of engineering, pointing out that no wooden boat has ever been built as big as Ham’s imagined ark. In fact, the big ones that were built were smaller and generally twisted apart— and sank (for this I used a chart from Ham’s website). I made it personal where possible. The Nyes are an old New England family, many of whom sailed wooden ships. I also spoke of decades in the Pacific Northwest, where I observed the enormous boulders washed westward by ancient collapsing ice dams in what is now Montana.

If kangaroos got off the ark in Mesopotamia, why aren’t there kangaroos in Laos? (Again, I used a map from Ham’s very website.) Then, from geology: If I find ice that has evidence of 680,000 layers of summer-winter cycles, how could the Earth be any younger? Thanks to Don for that. How can there be 9,500-year-old trees if the Earth is only 6,000 years old? And so on.

Apparently, the whole debate went like that.

Those of you familiar with creationism and its followers are familiar with the remarkable Duane Gish (no longer living—at least as far as we know). His debating technique came to be known as the “Gish Gallop.” He was infamous for jumping from one topic to another, introducing one spurious or specious fact or line of reasoning after another. A scientist debating Gish often got bogged down in details and, by all accounts, came across looking like the loser.

It quickly occurred to me that I could do the same thing. If you make the time to watch the debate (let’s say for free at—wink, wink), I hope you’ll pick up on this idea. I did my best to slam Ken Ham with a great many scientific and common sense arguments. I believed he wouldn’t have the time or the focus to address many of them.

So I guess I was mistaken to worry about how this would turn out. From the accounts I’ve heard, it sounds like Nye came across very well and pretty much won the debate.

I stumbled across an amusing bit of scientific confusion at Addicting Info (“The Knowledge You Crave”) in an article titled “The U.S. Navy Just Announced The End Of Big Oil And No One Noticed.” The author, Justin “Filthy Liberal Scum” Rosario, says the U.S. Navy has “achieved the Holy Grail of energy independence – turning seawater into fuel.”

He’s talking about an International Business Times article by Christopher Harress describing a process developed by the U.S. Navy:

After decades of experiments, U.S. Navy scientists believe they may have solved one of the world’s great challenges: how to turn seawater into fuel.

The development of a liquid hydrocarbon fuel could one day relieve the military’s dependence on oil-based fuels and is being heralded as a “game changer” because it could allow military ships to develop their own fuel and stay operational 100 percent of the time, rather than having to refuel at sea.

The new fuel is initially expected to cost around $3 to $6 per gallon, according to the U.S. Naval Research Laboratory, which has already flown a model aircraft on it.

There have been rumors and conspiracy theories about methods for getting power from seawater for decades. I’ve heard it’s a good story for con men who claim to be looking for investors, because it has a built-in explanation for why they’re approaching individuals instead of Wall Street — the oil companies are suppressing it, you see.

However, this is not that rumor. It’s a real thing, although it’s not as good as it sounds, which I’ll explain in a minute. But it sure excites Rosario, who is eager for the demise of Big Oil:

This technology is in its infancy and it’s already this cheap? What happens when it’s refined and perfected? Oil is only getting more expensive as the easy-to-reach deposits are tapped so this truly is, as it’s being called, a “game changer.”

I expect the GOP to go ballistic over this and try to legislate it out of existence. It’s a threat to their fossil fuel masters because it will cost them trillions in profits. It’s also “green” technology and Republicans will despise it on those grounds alone.

Okay, first of all, the $3 to $6 per gallon price is the expected price once the process is industrialized. We’re not there yet.

Second, this won’t lead to energy independence for the United States because this is not a new energy source. What the IBT article is describing is a process for extracting hydrogen and carbon dioxide from the ocean and “un-burning” them to create a hydrocarbon fuel. However, the principle of conservation of energy tells us that if a fuel produces energy when burned, then the process of creating the fuel must consume energy. Ultimately you can’t get any more energy out of a fuel than you put into creating it, and in practice you’ll get somewhat less, due to inefficiencies in the process.

I’m guessing the U.S. Navy is interested in using this process to fuel aircraft and support ships associated with aircraft carrier groups. The U.S. carrier fleet is nuclear powered, but the aircraft and support ships all operate on hydrocarbon fuels. This is a major logistics problem because that fuel has to be replenished periodically from land-based stockpiles while the fleet is operating at sea, which is complex even in peacetime, and during a war, the Navy would have to be prepared to defend the refueling ships from enemy attacks along their entire route.

If this new fuel synthesis technology can be scaled up to industrial proportions, however, the nuclear power plants on board the aircraft carriers could provide the energy to synthesize fuel for the rest of the fleet right from seawater. Alternately, the Navy could deploy special purpose-built nuclear fuel synthesis ships. This would eliminate the need for refueling ships, thus solving a big logistics problem for the Navy.

The bad news for Rosario is that this will not overthrow big oil. That’s because if you have to put energy in to get energy out, then what you’re describing is really an energy storage system, not an energy source. The energy that you put into the storage system still has to come from somewhere else. We could use electrical power to synthesize fuel, but that electrical power still has to be generated, and here in the U.S., over 80% of our energy comes from fossil fuels, and almost half of that is from oil.

If we look only at electric power generation, almost half of it is from coal, with another quarter from natural gas. So we’d end up burning coal and natural gas to get the energy to make the synthetic fuel, and the transformation to electricity and then back to fuel would make it less efficient than just burning fossil fuels directly. There’s no free lunch.
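To put rough numbers on that round trip, here’s a toy calculation. Both efficiency figures below are assumptions I’m using for illustration, not measured numbers for the Navy’s process:

```python
# Toy round-trip calculation. Both efficiencies are assumed values
# for illustration, not measured figures for the Navy's process.
PLANT_EFFICIENCY = 0.40      # assumed: fossil fuel heat -> electricity
SYNTHESIS_EFFICIENCY = 0.60  # assumed: electricity -> chemical energy in fuel

# Fraction of the original fossil energy that ends up in the synthetic fuel
round_trip = PLANT_EFFICIENCY * SYNTHESIS_EFFICIENCY
print(f"Round-trip efficiency: {round_trip:.0%}")  # 24%
```

Under those assumed numbers, three-quarters of the energy is gone before the synthetic fuel is ever burned, which is why it can never beat burning the fossil fuel directly.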

(Oil and other fossil fuels are subject to conservation of energy as well, but we consider them to be energy sources because we didn’t have to provide the energy to make them. The energy content of fossil fuels was captured from sunlight by ancient organisms millions of years ago.)

You could argue that we could switch to cleaner energy to power the oil synthesis, but if it were economically feasible to shut down coal and gas powered electric power generators and replace them with cleaner energy sources, we could have already done so. Our choices of energy source are driven by availability, economics, and our existing investment in power generation infrastructure.

That’s not to say the Navy’s fuel synthesis wouldn’t be useful once we do eventually switch our electric power generation system to cleaner sources, such as solar, wind, next-generation nuclear power, or maybe even fusion (a.k.a. “The energy source of the future”). Because even if we switched our electric power generation to clean energy, and switched our industrial power and residential heating to work off the electric grid instead of burning fossil fuels, we’d still have to power our transportation system, which uses about 30% of our energy, and which is almost entirely powered by fossil fuels.

Switching our transportation system to use electrical energy would be difficult, because the elements of our transportation system — cars, trucks, trains, planes, ships — all have to carry their energy sources around with them, which means they need an energy source that is portable. (Trains travel fixed routes, so they could conceivably be powered electrically from catenary lines or the “third rail,” but that would require more infrastructure investment.) More to the point, most modes of transportation require an energy source that is lightweight, which means they must use a storage medium that has a high energy density — that stores a lot of energy per pound of added weight.

Our love of portable electronic devices has driven a revolution in battery power density, and yet with our current technology, we can just barely build battery storage units suitable for powering a small vehicle. The extended-life battery for a Tesla S model holds 85 kilowatt-hours of energy and as near as I can tell from a bit of Googling, the batteries weigh about 800 pounds. By comparison, the amount of gasoline needed to store 85 kWh worth of energy only weighs about 15 pounds. The lithium ion battery technology works okay for small, lightweight vehicles designed for relatively short trips, but it hasn’t proven feasible for larger vehicles or those that routinely travel longer distances.

The weight problem is even worse for aircraft. A Boeing 737-200 flies with 4780 gallons of fuel, which weighs just over 32,000 pounds, or just over 1/4 of the aircraft’s 115,500 pound maximum takeoff weight. That much fuel contains 187,000 kilowatt-hours of energy, and storing that much energy in lithium ion batteries would require a battery pack weighing 1.7 million pounds, or about 15 times the maximum takeoff weight of the aircraft. So unless we invent a whole new battery technology with unprecedented energy density, we will never be able to fly commercial aircraft on cleanly generated electric power.
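For what it’s worth, the arithmetic behind those comparisons checks out. Here’s a quick sketch; the 800-pound battery weight is my Googled estimate and the energy densities are approximate published values, so treat every result as ballpark:

```python
# Rough check of the battery-vs-fuel weight comparison. The battery
# pack weight is a Googled estimate and the energy densities are
# approximate published values, so all results are ballpark.

BATTERY_KWH = 85.0           # Tesla Model S extended-life pack
BATTERY_LB = 800.0           # rough estimate of pack weight
GASOLINE_KWH_PER_LB = 5.5    # ~33.4 kWh/gal at ~6.1 lb/gal

battery_kwh_per_lb = BATTERY_KWH / BATTERY_LB
gasoline_lb = BATTERY_KWH / GASOLINE_KWH_PER_LB   # gasoline storing 85 kWh

# Boeing 737-200 figures from the text
FUEL_GAL = 4780
JET_FUEL_LB_PER_GAL = 6.7
FUEL_KWH = 187_000.0
MAX_TAKEOFF_LB = 115_500.0

fuel_lb = FUEL_GAL * JET_FUEL_LB_PER_GAL
battery_lb_needed = FUEL_KWH / battery_kwh_per_lb

print(f"85 kWh as gasoline: ~{gasoline_lb:.0f} lb (vs. {BATTERY_LB:.0f} lb of battery)")
print(f"737-200 fuel load: ~{fuel_lb:.0f} lb")
print(f"Equivalent battery: ~{battery_lb_needed / MAX_TAKEOFF_LB:.0f}x max takeoff weight")
```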

However, as I said earlier, the Navy’s new fuel synthesis technology is really an energy storage system, and so it could well be the new “battery” technology for transportation. If it is as successful as predicted and it’s an energy efficient process and it can be scaled up to supply more than just a few carrier groups (that’s a lot of ifs), then we could generate the electric power cleanly and then use it to synthesize fuel for airplanes and road vehicles and anything else that can’t be wired into the electric grid. The synthesized fuel is not pollution free — it’s still hydrocarbons and burning it still produces carbon dioxide — but because the fuel is made by extracting carbon dioxide from the ocean instead of creating new carbon dioxide from fossil fuels, it can only produce as much carbon dioxide as was used to make it, so there won’t be a net increase. The entire cycle is carbon neutral.

However, the Navy’s new technology is not enough by itself. Energy independence and the end of big oil will have to wait until we get an energy source that is better and cheaper than fossil fuels.

My sometimes co-blogger Ken and I often discuss the whole evolution-vs.-creationism issue, and I’ve tried a few times to explain to him why I don’t write about it much here. I occasionally discuss or speculate about some basic evolutionary science, and I’ve slammed some really idiotic creationist nonsense, but I just don’t want to get into a debate about it.

I got to thinking about it again when I heard that Bill Nye (“The Science Guy”) was apparently going to debate Ken Ham, who is the founder of something called the Creation Museum (now with Zip Lines!).

This strikes me as a bad idea. Bill Nye is famous for explaining science, but not for debating it, and debating evolution vs. creation (or intelligent design) is hard.

To give you an idea why, let’s start with a simple challenge that might be offered by an opponent of evolution: The second law of thermodynamics says that entropy must always increase, that is, things become disordered. Yet evolution implies an increase in order as things evolve into more complex forms. Therefore evolution would violate the second law of thermodynamics.

This is a simple misunderstanding of the basic science. The second law of thermodynamics applies only to closed systems, and the Earth is not a closed system because it receives energy from the Sun. That energy powers the biological processes of evolution. Considering the combined system of the Sun and Earth, entropy still increases because the decrease in entropy implied by evolutionary processes on Earth is more than made up for by the increasing entropy in the furiously churning heart of the Sun.

As creation vs. evolution questions go, that was a fairly easy one for me to answer. It doesn’t really even require any knowledge of evolution, just a basic understanding of thermodynamics.

To handle more difficult challenges, some understanding of evolution is necessary. My own knowledge is strictly amateur level, but I think I can respond to a slightly harder challenge, such as If species evolve to survive, how come we still kill cattle in slaughterhouses? Shouldn’t cattle have evolved to prevent us from doing that?

I can think of three answers to that question, depending on your point of view:

  1. Cattle are at an evolutionary dead end. Evolution happens through small changes. But can you imagine any property of cattle as a species — height, weight, speed, intelligence, digestion — where a small change would allow them to escape the ranch or the slaughterhouse? If not, then evolution won’t help them get away.
  2. It’s already happened. That is, the question is misleading because it focuses on a single species. But if you look at the larger Bovidae family, which contains cattle (Bos primigenius), Wikipedia tells us it includes 145 distinct species, including yak, several types of antelope and oryx, bison, anoa, several types of buffalo, zebu, nyalas, elands, many types of duiker and gazelle, a variety of goats, several types of reedbucks, impala, wildebeest, several hartebeest, and muskox. Humans may eat some of those creatures now and again, but how many of them have you seen at the butcher shop? Cattle are an unusual case in a part of the animal kingdom where most species have followed evolutionary paths that escaped humanity’s hunger.
  3. The cattle species is actually very successful. Cattle’s survival “strategy” is to make themselves valuable to humans. As individuals, they may die by the hundreds of millions, but as a species, they are thriving: By becoming a tasty food source, they’ve given humans an incentive to protect and nurture a worldwide herd of a billion cattle. Measured as a fraction of the mass of all living things on Earth, the cattle species has managed to garner a larger portion of the organic matter of this planet than any other single land-dwelling species. (Humans are a close second.)

That was a lot more evolution-specific, and I had to look a few things up on the web to get the details about the Bovidae family and biomass proportions, but it’s still basic science. And while I find it a convincing response, I can’t be sure that it would convince anyone else.

Now let’s move on to a harder challenge: If evolution does move in small steps, how could something as complex as an eye evolve? It seems to be a structure of irreducible complexity — until you’ve got the whole thing, you’ve got nothing useful — so what good would evolving 1% of an eye’s structure do for a species’ survival?

This is at the limits of my knowledge, and I only know the basic outline of an answer: To a blind organism swimming in the ocean, even a slight ability to sense light will give it useful information, such as how close it is to the surface. Then, once any sort of photosensitive patch has formed, any ability to detect the direction of incident light is an improvement, and one of the easiest ways to increase directionality is to recess the photosensitive patch into the body a bit, so light has to strike it from within a narrow angle. The more the photosensitive patch recesses, the more directional it becomes, until it reaches the point of being a photosensitive pit with a tiny viewing hole. Then it works like a pinhole camera and images form on the photosensitive surface, so it becomes advantageous to evolve brain structures for interpreting those images — edge detection, motion detection, and so on.

And that’s about all I know. I can only describe the basic idea. To respond effectively in a debate, I’d have to be able to offer evidence that anything like this actually happens. It’s my understanding that you can find examples of all major stages of eye evolution in nature if you know where to look, but I haven’t got a clue.

I’ll bet that many biologists can’t give a high-quality answer to this challenge either, unless they just happen to have researched eye formation. Creationists also assert that other things have irreducible complexity, such as certain internal cellular structures and the mechanism by which blood clots. To survive in a debate, you’d have to have researched the answers ahead of time, which means knowing the creationist challenges ahead of time.

There are still harder kinds of challenges. They tend to sound something like this: If the theory of evolution is correct, how do you explain Professor Robert Schuster’s 1992 paper in which he reports finding Cathayornis yandica fossils in the sedimentary deposits of the Chusovaya River Basin at levels beneath where he found Castorocauda lutrasimilis?

First of all, you’d have to know that Cathayornis yandica is a Cretaceous bird and Castorocauda lutrasimilis is a Jurassic mammal, which means you’d expect to find the Castorocauda fossils deposited in the older sedimentary layers beneath the Cathayornis fossils. Finding them higher up seems to imply a problem with our understanding of the fossil record or with the technology used to date fossils, possibly allowing for a younger earth in which all the supposedly ancient extinct species actually lived quite recently, justifying creationists’ depictions of dinosaurs and humans living together.

To respond to this, you’d have to know if Professor Schuster really exists, and if so, did he publish his findings in a peer-reviewed journal or is he some kind of crackpot? Then you’d want to know if his paper actually says what your debating opponent claims it says. And if so, does it mean what it seems to mean? Or would Professor Schuster be shocked to discover that anyone thought his paper disproved the theory of evolution? Is the explanation for the layer inversion actually a geological phenomenon that is well understood by anyone who studies the Chusovaya River zone? Could it be that scientists study this area precisely because the well-known inversion gives them easy access to older fossils without digging so much?

For the record, I completely made up this question, so no one could possibly answer it. But when you go up against a sophisticated creationist debater without anticipating his questions and researching your responses, every challenge is going to feel made-up.

Finally, there are the questions my mother would ask, such as How could the random changes of evolution result in human beings?

The basic answer is that they couldn’t and they don’t and that’s not how evolution works. It’s true that mutations can cause organisms to develop random traits that are different from the parent organism, but whether the organism passes that trait to its own offspring is dependent on whether the organism survives and reproduces. That too is a matter of some random chance, but what’s not random is the differential survival rates between organisms with and without the mutation.

It works a bit like the house odds in a casino. Games like roulette, craps, and keno are all games of random chance, which means individual players could randomly come out ahead or behind after a short playing session. But on average, over time, the house is guaranteed to make money. In the long run — and evolution is always about the long run — that house edge will always wear down the players’ bankrolls until the casino has all the money. There’s nothing random about that result.
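To make the casino analogy concrete, here’s the standard house-edge arithmetic for a single-number roulette bet (these are textbook numbers for an American double-zero wheel, not anything specific to this argument):

```python
# House edge for a single-number bet on American (double-zero) roulette.
P_WIN = 1 / 38       # 38 pockets: 1-36, 0, and 00
PAYOUT = 35          # a win pays 35 to 1

# Average result per $1 wagered: win $35 with probability 1/38,
# lose the $1 with probability 37/38.
expected_value = P_WIN * PAYOUT - (1 - P_WIN)
print(f"Expected value per $1 bet: {expected_value:.4f}")  # about -0.0526
```

Any single spin can go either way, but an average loss of about 5.3 cents per dollar compounds over thousands of spins, which is the sense in which the long-run result isn’t random at all.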

Similarly, if a new trait improves the chance of an organism’s survival by even a small amount compared with other members of its species, then the number of organisms carrying that trait will increase as a percentage of the species in every generation until all organisms exhibit the trait.

(I spent a few minutes trying to simulate this, and if I did it right, then if one organism in a colony of a million develops a trait that improves its chance of survival into the next generation by 1%, then that trait will spread to the entire population in less than 3000 generations. That’s tens of thousands of years for a large mammal species, a few decades for houseflies, and about two months for a colony of E. coli bacteria.)
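A minimal version of that simulation might look like this. It’s deterministic, tracking the expected frequency of the trait rather than rolling dice for individual organisms, and it assumes carriers out-reproduce non-carriers by 1% per generation:

```python
# Deterministic sketch of a beneficial trait spreading through a
# population. Carriers are assumed to out-reproduce non-carriers by
# `advantage` each generation; random drift is ignored.

def generations_to_fixation(p0=1e-6, advantage=0.01, threshold=0.999):
    """Generations until the trait's frequency exceeds `threshold`."""
    p = p0
    generations = 0
    while p < threshold:
        # Re-weight the two sub-populations by their relative fitness.
        p = p * (1 + advantage) / (p * (1 + advantage) + (1 - p))
        generations += 1
    return generations

# One carrier in a million with a 1% advantage takes roughly 2,100
# generations to sweep the population, comfortably under 3,000.
print(generations_to_fixation())
```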

The thing is, no matter how hard I tried to explain this, I could never find a way to convince my mother. She just couldn’t seem to wrap her mind around the concept. Maybe she was simply unwilling to accept evolution, or maybe I just never found the right way to explain it to her. I had answers that convinced me, but they didn’t convince an evolution skeptic.

This is why I don’t debate the theory of evolution here, and why I think it’s a bad mistake for Bill Nye to try to debate it at the Creation Museum. In order to debate the theory of evolution with a creationist like Ken Ham, it’s not good enough to just learn about the theory of evolution. You also have to learn answers to the kinds of questions creationists raise about evolution, such as my cattle example, or the more specific question about the evolution of the structures of the eye.

That stuff’s not actually so bad, especially if you’re fascinated by evolution, as I am, but then you also have to learn about all the papers and studies and conjectures that creationists have used to attack evolution over the years. For each one of those, you have to learn what creationists say about it, what it actually said, what it meant, whether it was legitimate research, and what legitimate evolutionary scientists say about the issue.

Finally, once you’ve learned about all these challenges you’re going to get from creationists, you have to figure out how to respond to them. That’s harder than it sounds because, as my last example illustrates, it’s not good enough that your responses convince you. A really good response has to convince your opponents, or at least it has to convince listeners who are skeptical about your position.

In other words, in order to debate the subject of evolution, it’s not enough to learn all about evolution. You also have to learn all about creationism, and how creationists think about evolution. You have to be familiar with things like Jonathan Wells’s anti-evolution Icons of Evolution and Alan Gishlick’s explanation of why it’s wrong. You have to absorb the creationists’ way of thinking about evolution in order to explain your point to them in a way they will understand.

And that’s just not something I’m willing to spend a lot of my time on. And unless Bill Nye has been secretly setting up Ken Ham for this debate for months, it’s not something he’s spent a lot of time on either. Which is why I expect it to go something like Greg Laden’s parody of a debate:

Scientist: “If there’s one thing you should take away from this discussion, it’s…

Denialist [interrupting]: Thing one, thing two, thing three, thing four, thing five.

Scientist: “Actually, that thing four you said, that’s not really true ..

Denialist [interrupting]: Thing six, thing seven, thing eight, thing nine, thing ten.

Scientist: We can’t be sure of everything but one thing we are pretty sure of is…

Denialist [interrupting]: I’m sure of thing eleven, thing twelve, thing thirteen thing fourteen.

Greg Laden also discusses some other reasons this debate is a bad idea. I don’t agree with everything he says, but like him, I don’t expect it to go well for the science guy.

It’s my understanding that DNA matching is arguably the best and most reliable of the forensic sciences. One reason is that it’s based on scientific knowledge and ideas that were developed independently of its forensic uses. Scientists spent decades studying DNA: How it’s structured, what it’s made of, how it replicates, how it combines in sexual reproduction, and how it is that we each come to have the DNA that we do.

In the 1800’s, when Charles Darwin published his theory describing how organisms evolve over many generations, he didn’t have a good explanation of how the traits of organisms combine with each other: If children are a mix of their parents’ traits, then after a few hundred generations, why aren’t we all medium height, medium build, and beige?

At about the same time (and apparently unknown to Darwin) Gregor Mendel published research showing that some traits of organisms seem to pass from generation to generation in discrete chunks — you’re either albino or not, you either can smell hydrogen cyanide or you can’t. (Mendel worked with plants, but those are human traits that are now believed to follow strict Mendelian rules.) This implied that however traits passed from parents to children, they passed in discrete chunks — perhaps with many chunks combining to determine fuzzy traits such as height or skin color — although Mendel had no idea what those chunks were.

By the end of the 1800s, scientists were studying a peculiar microscopic substance found in living cells, and over the course of the 20th century, they began to suspect that Mendel’s hereditary chunks were pieces of this molecule, which came to be called deoxyribonucleic acid, or DNA. Eventually they were able to prove that DNA was the mechanism of heredity, and to determine its biological behavior at the molecular level. It explained everything that Darwin and Mendel had observed about heredity.

Biologists began using DNA to study the relationships between species of animals, and doctors began hunting for bits of DNA — genes — that correlated with (and possibly caused) diseases and conditions with known hereditary components. Scientists and engineers developed methodologies and tools for studying DNA, and their data went into publications and databases. They accumulated a lot of knowledge about DNA and how to work with it.

By the time DNA began to be used forensically to match biological samples to individual people, it had about a century of history and billions of dollars of research behind it.

Except…not every aspect of forensic DNA analysis is based on that science. Technicians can now recover DNA from tiny amounts of foreign skin cells found on a victim. But even if that identifies an individual, what exactly does it mean?

I wrote last year about a case here in Cook County where a guy was acquitted of sexual assault even after his DNA was found on the victim. Part of the problem with the case was that the DNA was found on the victim’s lips, and the lab could not tell what kind of cells it came from — it could have been from saliva, skin, or even hair.

(A bigger problem was that the DNA was so degraded as to be very nearly meaningless. Instead of the usual one-in-millions odds, the DNA expert said the odds were better than one-in-a-dozen. This is little better than saying he was tall and had dark hair. Hundreds of thousands of people in the area would also have matched.)

Now PDgirl offers two astounding examples of misleading DNA transfer:

…millionaire guy is murdered in his home. They are able to extract unknown DNA from on the guy’s fingernails. The assumption is that the DNA must be that of the killer. They run the DNA through a database and get a hit to a local man named Lukis Anderson. Mr. Anderson is arrested and charged w/ murder and faces the death penalty. He spends 5 months in jail, awaiting a resolution on the case.

However, it is impossible for Mr. Anderson to have been involved in the crime because on the night the man was killed, Mr. Anderson was in a hospital due to severe intoxication. He had been brought to the hospital by paramedics earlier that evening.  Airtight alibi if there ever was one. So how did his DNA get on the dead guy’s fingernails?

Well, according to the theory that the prosecutors have put forth, after finally conceding it wasn’t possible that Mr. Anderson was the killer, is that the most likely explanation is that the DNA was unintentionally transferred by the paramedics.  You see, the paramedics that had taken Mr. Anderson to the hospital earlier in the evening were the same paramedics that responded to the crime scene and that handled the millionaire’s body.

Imagine if the paramedics had decided not to transport Anderson, so he hadn’t been in the hospital at the time of the killing. He might be in prison now because the forensic techs were so good at recovering DNA.

The second example is even stranger, but at least no one got arrested:

Not concerning enough for you? Well, what about Germany’s “Phantom of Heilbronn,” a notorious female serial killer who linked by her DNA to 40 different crimes, and yet somehow continued to manage to evade police during her crime spree that left police baffled for 2 years? She was also linked by DNA to several cold cases. Except for the devious serial killer turned out to be an innocent woman who worked at the factory where the cotton swabs used to collect DNA evidence were made. The Phantom of Heilbronn never actually existed

A similar problem in Austria a few months earlier was also traced to contaminated swabs, although in that case the police were using swabs that hadn’t been certified for DNA testing.

(DNA traces are fairly durable, and DNA itself is not a pathogen, so disinfection protocols that protect human health are not intended to eliminate all traces of DNA, which is probably why sterile swabs and trained paramedics have both been implicated in accidentally transferring DNA traces.)

Modern laboratory techniques can pull usable DNA samples from fewer than a dozen cells. Since your body has about one and a half trillion skin cells and sheds them by the millions every day, it seems likely that you are leaving theoretically detectable traces of yourself everywhere you go, and someday your freedom may depend on forensic experts having a real understanding of just how easy it is to transfer traces of your DNA around the environment.

Jamison Koehler put up a post on the fraction of your breath that a breathalyzer uses to estimate your blood alcohol ratio. He’s had expert training in DUI defense and therefore DUI-related technology, whereas I’ve just had some basic math and science education. But it’s an interesting subject, and I thought I’d try to explain what I think is happening from a science-ish point of view. DUI experts are invited to explain where I go wrong.

Jamison’s post begins with a charming anecdote about his daughter (which makes me imagine the whole post as a rehearsal for a closing argument in a DUI trial), and then he gets into some of the math behind breathalyzer operation:

To be more precise:  The amount of ethanol present in a DUI breath sample is measured in terms of grams per 210 liters.

The Intoximeter EC/IR II – the breath test machine used in both DC and most jurisdictions in Virginia — requires a breath sample of at least 1500 cubic centimeters (or 1.5 liters) before it can provide a result.  Of this amount, McGarry says, the machine measures only 2 cubic centimeters (or 0.002 liters).

You need to multiply 0.002 by 105,000 to equal 210.  This means that any error in the measurement of ethanol in the 2 cubic centimeter breath sample will be magnified 105,000 times.

That’s true if we’re talking about the absolute amount of alcohol in the sample. For example, if the 2 cubic centimeter sample chamber contains 1 milligram (1/1000 of a gram) of alcohol (an unrealistically large amount that makes the math simple), then the full 210 liters would contain 105 grams of alcohol. Now if the breathalyzer measurement was off by 10% on the high side, then it would measure 1.1 milligrams of alcohol which would appear to mean the 210 liters of breath would have 115.5 grams of alcohol. So a measurement error of 0.1 milligrams in the sample results in a reported error of 10.5 grams.
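The arithmetic above can be sketched in a few lines (remember that the 1 milligram figure is an unrealistically large amount chosen just to make the math easy):

```python
# Scale factor from the 2 cc sample chamber to the 210 L reporting volume.
SAMPLE_LITERS = 0.002
REPORT_LITERS = 210
scale = REPORT_LITERS / SAMPLE_LITERS      # 105,000

true_mg = 1.0                 # alcohol actually in the sample chamber (mg)
measured_mg = true_mg * 1.1   # a hypothetical reading that's 10% too high

true_g = (true_mg / 1000) * scale          # 105 g per 210 L
measured_g = (measured_mg / 1000) * scale  # 115.5 g per 210 L

print(round(scale))                        # 105000
print(round(true_g, 1), round(measured_g, 1))
print(round(measured_g - true_g, 1))       # the "magnified" 10.5 g error
print(round(measured_g / true_g - 1, 3))   # still just a 0.1 (10%) relative error
```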

I’m puzzled by that line of reasoning, however, because the 210 liters is not the size of a breath sample, and it’s not the volume of a pair of lungs. I’m pretty sure it’s not anything real. It’s just an artifact of the way the calculations are set up.

The legal limit for Blood Alcohol Content (BAC) while driving is 0.08 grams of alcohol per 100 milliliters of blood. Since alcohol is measured by weight and blood by volume, you can’t technically express this as a percentage because the units aren’t the same. Nevertheless, we implicitly assume that blood has the same density as water (it’s close), so a milliliter weighs one gram. That works out to 0.08 grams of alcohol/100 grams of blood, and we can cancel out the units to get our familiar 0.08% that we’re always hearing about.

Breathalyzers, as the name suggests, do not measure alcohol in the blood directly. Instead, they measure alcohol in the breath, and use that to produce an estimate of alcohol in the blood. For reasons too complicated for me to figure out, breathalyzers perform their calculations on the assumption (ripe for attack by DUI defense lawyers) that the ratio of alcohol to blood volume is 2100 times the ratio of alcohol to breath volume. So, if we take the ratio of 0.08 grams of alcohol/100 milliliters of blood and divide by 2100 to convert to breath concentration, we get 0.0000380952 grams of alcohol/100 milliliters of breath.

That’s a really annoying number to work with, so instead of dividing the 0.08 grams of alcohol by 2100, let’s multiply the 100 milliliters of breath by 2100 to get 0.08 grams of alcohol/210000 milliliters of breath. Now divide the bottom number by 1000 to convert to liters, and we get 0.08 grams/210 liters. That’s a handy way to express it because there’s our familiar 0.08 legal limit again. And that’s where the 210 liters shows up in the calculations.

But that’s just because of the convention we’ve chosen to express the ratio. It would have been just as accurate to express it as 0.000380952 grams of alcohol/liter of breath. Or we could go in the opposite direction and measure the alcohol in kilograms, which works out to 0.08 kilograms of alcohol/210,000 liters of breath. If the absolute measurement error grows by 105,000 when converting to 210 liters, then converting to 210,000 liters implies a multiplier of 105 million, which sounds even worse.

But it’s not, because however you choose to describe it, you’re still just measuring a ratio between the amounts of two substances, and therefore an error of 10% is an error of 10% regardless of the units you choose. Multiplying it out to 210 liters doesn’t really mean anything.  In other words, I think Jamison is engaging in the time-honored defense strategy known as muddying the waters.
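Here’s a quick sketch of that unit-invariance, using the 0.08 g/210 L limit from above and a hypothetical reading that is 10% too high:

```python
true_ratio = 0.08 / 210            # grams of alcohol per liter of breath at the limit
measured_ratio = true_ratio * 1.1  # a hypothetical reading 10% too high

# Express the same measurement over several different reference volumes.
for liters in (1, 210, 210_000):
    absolute_error = (measured_ratio - true_ratio) * liters
    relative_error = measured_ratio / true_ratio - 1
    print(liters, absolute_error, round(relative_error, 3))

# The "magnified" absolute error grows with whatever volume you choose,
# but the relative error (the thing that matters) is 10% every time.
```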

McGarry also uses the analogy of a railroad car full of grain.  Can you take a small sample of that grain, test it and then be confident you know the contents of that railroad car?

Now this is a much more interesting question. Let me see if I can figure out an answer…

Let’s assume for the sake of simplicity that the grain car contains 1 billion grains. Jamison says the breathalyzer uses a sample that is 1/105,000 of the theoretical 210 liters. Rounding a bit, we can calculate that 1/100,000 of a billion grains is a sample of 10,000 grains. If we’re trying to estimate, say, how much of the grain is too small (or rotting, deformed, whatever), is testing a sample of 10,000 grains good enough to estimate how much of the grain in the car is defective?

My answer is the favorite answer of lawyers everywhere: It depends.

In particular, it depends on how you select the sample. The ideal would be to choose the sample completely at random: Start by numbering each of the grains from 1 to 1,000,000,000. Then use a random number generator to generate a set of 10,000 random numbers between 1 and 1,000,000,000 inclusive. Now for each random number, find the grain corresponding to that number and inspect it. (A simpler way to do that would be to let the grain out of the car through a tiny hole and count the grains as they fall out. Whenever your count is equal to one of the 10,000 random numbers on your list, inspect that grain.) Tally which ones are acceptable and which ones are defective.

When you’re done, calculate the fraction that’s defective in the sample, and use it as the estimate of the defective fraction in the entire grain car. With a random sample of 10,000 grains, you can have very high confidence in your result. The chance of a significant difference between the sample defective fraction and the defective fraction for the entire grain car is vanishingly small. The tiny sample will tell you a lot.
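Here’s a small simulation of that claim, assuming (hypothetically) that 5% of the grain in the car is defective:

```python
import math
import random

random.seed(1)              # fixed seed so the sketch is repeatable
DEFECTIVE_FRACTION = 0.05   # hypothetical true fraction in the car
SAMPLE_SIZE = 10_000

# Picking a grain uniformly at random from the car is statistically the same
# as flipping a coin that comes up "defective" 5% of the time, so we don't
# need to model all billion grains individually.
defects = sum(random.random() < DEFECTIVE_FRACTION for _ in range(SAMPLE_SIZE))
estimate = defects / SAMPLE_SIZE

# Standard error of the estimate: sqrt(p(1-p)/n), about 0.2 percentage points.
std_error = math.sqrt(DEFECTIVE_FRACTION * (1 - DEFECTIVE_FRACTION) / SAMPLE_SIZE)

print(estimate)             # very close to 0.05
print(round(std_error, 4))  # about 0.0022
```

With a truly random sample of 10,000, the estimate almost never strays more than a percentage point from the true fraction.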

That kind of random sampling would be a lot of work, so you may be tempted to try something simpler. Perhaps you could take a scoop that holds 10,000 grains and dip it into the grain in the car to take a sample. That’s a lot easier, but it’s no longer a truly random sample. It’s a mechanical sample, which might not be as random as we’d like: Perhaps all the small grains have settled to the bottom, so a scoop off the top will be biased in favor of large grains, and it will therefore cause you to underestimate the number of under-sized grains; or maybe the grain car was filled by dumping in bushels of grain, and one of the bushels was filled with small grains. That batch of small grain will probably still be clumped together. If your sample misses it, you’ll underestimate the fraction of defective grain in the car, but if you happen to dig your scoop right into it, you’ll overestimate the fraction of defective grain in the car.

You can alleviate the problem somewhat by scooping smaller samples from several different places in the car and combining them into one 10,000-grain sample, but it’s still a mechanical sample rather than a truly random one, which means there’s still a pretty good chance of significant error.

So is there any way you can use a scoop and still get the benefits of random sampling? Maybe. You could try stirring the grain thoroughly before taking a scoop. You’d have to be very careful and thorough about this, so that every grain from anywhere in the car before stirring has exactly the same chance as any other grain of ending up in the scoop. The scoop still takes a mechanical sample, but it’s a sample of grains that have themselves been thoroughly randomized, which is just as good.

But randomizing an entire car full of grain is a difficult task. How do you make sure that grains that start in the bottom have an equal chance at ending up in the sample as grains that start on top? How do you make sure grains don’t get stuck in the corners? It requires a carefully designed stirring mechanism. Laboratories often buy expensive stirring and shaking equipment to get a sufficient randomization, and even those are usually intended for liquids. (Hmm…maybe we could fill the grain car with water, stir the floating mass of grain thoroughly, and then drain the water out. That would make for pretty good randomization, but wetting the grain brings its own problems.)

The breath analysis situation is a little simpler, however, because nature lends a hand in the form of turbulence and Brownian motion (the jiggling motion of gas molecules as they bounce around against each other and the sides of their container), which are about as random as any natural processes can get. Add a little alcohol vapor to a container of air, and in just a little while it will be diffused evenly throughout the entire container — faster if it is stirred or agitated to cause turbulence — so that even a tiny sample will contain enough randomized molecules to be an accurate representation of the alcohol concentration of the entire container.
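Here’s a toy random-walk sketch of that mixing (a crude stand-in for real molecular motion): 1,000 hypothetical “molecules” start at one end of a tube divided into 10 cells and hop randomly left or right at each step.

```python
import random

random.seed(42)   # fixed seed so the sketch is repeatable
CELLS, MOLECULES, STEPS = 10, 1000, 2000

positions = [0] * MOLECULES   # all the "alcohol vapor" starts in cell 0
for _ in range(STEPS):
    for i in range(MOLECULES):
        p = positions[i] + random.choice((-1, 1))
        positions[i] = min(max(p, 0), CELLS - 1)  # bounce off the tube's ends

counts = [positions.count(c) for c in range(CELLS)]
print(counts)   # roughly 100 molecules per cell: the vapor has spread out evenly
```

After enough random hops, the count in every cell hovers around the same value, which is why even a tiny sample drawn from anywhere in the container is representative.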

However, the diffusion of the alcohol vapor through the air isn’t instantaneous. It takes a bit of time, and happens quickest near the highest concentration. In addition, if fresh air is circulating, it continues to dilute the alcohol vapor. This is why when you open a bottle of liquor, you can easily smell it at the bottle opening — the air inside is already saturated with vapors — but the farther away you get, the weaker the smell, as the alcohol is diffusing outward and mixing with fresh air and being carried away.

(In a perfectly sealed room with nowhere for the alcohol to go, the entire room would eventually become saturated to the same extent, and the smell wouldn’t depend on how close you were to the bottle. Also, I am neglecting the effects of gravity: If you’re mixing gases that have significantly different densities and there isn’t much turbulence, the heavier gases will eventually sink to the bottom of the container or room instead of mixing evenly.)

Now let’s think about what happens in your body, in your lungs, when there’s alcohol in your blood. As the blood flows through your lungs, alcohol diffuses across thin membranes in the alveolar sacs deep within the branching airways of your lungs, and it is therefore in these sacs that the alcohol concentration is the highest.

When you take a breath of fresh air, your lungs fill with alcohol-free air. This begins to mix with the alcohol-laden alveolar air due to Brownian motion and turbulence within your lungs. The longer you hold the air in your lungs, the more thorough the mixing. As you exhale, you first push out the air in your mouth and throat. This air was the farthest from the alveolar sacs, and therefore has the lowest concentration. As you continue to exhale, you are forcing out air that was closer and closer to the alveolar sacs, and therefore higher in alcohol content. This is why police administering a breathalyzer test want you to give a long, slow exhale. It gives the alcohol more time to saturate your breath, and it brings up air from deep inside your lungs where the alcohol concentration is highest.

I’m almost done, but I should probably mention one of the possible sources of error in a breathalyzer test, which is that alcohol on your breath can come from other sources besides your blood.

Remember that in a person near the legal limit for DUI, the fraction of alcohol in the blood is 0.08%, and it is the diffusion of that tiny amount of alcohol into the lungs that is ultimately measured by the breathalyzer. An alcoholic beverage like beer, on the other hand, typically has about 5% alcohol. That’s not very much as alcoholic beverages go, but it’s more than 60 times the alcohol concentration in the blood at the legal limit.

So if there’s any beer in your mouth — because you just took a drink (or just threw up) — the alcohol from the beer will diffuse into your breath the same way the alcohol from blood diffused into the air in your lungs. There’s a lot more air in your lungs than in your mouth, and the convoluted surface area of your lungs is hundreds of square feet, meaning that diffusion happens much faster than with beer evaporating off the interior of your mouth. Nevertheless, the equilibrium point for air directly exposed to beer is still 60 times the legal limit, so any beer in your mouth will push the alcohol content of your breath in that direction. And even a little of that could put you over the limit.

Then there’s the beer in your stomach. Compared to your mouth, that’s a sealed system, with little fresh air getting in, which means your stomach gases should get pretty close to their saturation level of 60 times the legal limit. One burp before or during the breath test, and you’re in a lot of trouble.

It’s even worse if you drink hard liquor, because that could have an alcohol concentration hundreds of times the legal blood limit, which means an even greater chance of corrupting the DUI results.
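The concentration ratios above are simple arithmetic (assuming a typical 5% beer and, say, a hypothetical 80-proof spirit at 40% alcohol):

```python
BLOOD_LIMIT = 0.08 / 100   # 0.08% alcohol by weight, the legal BAC limit
BEER = 5 / 100             # a typical 5% beer
LIQUOR = 40 / 100          # hypothetical 80-proof spirits

print(round(BEER / BLOOD_LIMIT, 1))   # 62.5, i.e. more than 60 times the limit
print(round(LIQUOR / BLOOD_LIMIT))    # 500, i.e. hundreds of times the limit
```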

I suppose you could always try arguing that the measured BAC is so high that it couldn’t possibly be right. But it would have to be pretty damned high.

Yesterday, Roger Ebert tweeted:

Bad luck. The asteroid that came so close to Earth is coming baaaaak.

Well, of course. It’s a known near-Earth object. They do that by definition. But the linked article by Andrew Malcolm at Investor’s Business Daily was a little more alarming than that, at least until I realized he was making stuff up:

Now, about that other bad news. According to the same computer calculations, in 2080 the orbit of 2012 AD 14, if unaltered in these next 67 years by some super-natural force like Bruce Willis, will slam into Earth at almost 18,000 miles an hour.

That explosive encounter, NASA says, will release about 2.5 megatons of energy into the atmosphere, causing “regional devastation.”

Um. No.

First of all, there’s no asteroid called “2012 AD 14.” The proper designation of the asteroid that just flew past the Earth is “2012 DA14” indicating that it was the 351st object logged with the Minor Planet Center in the second half of February 2012. (Whole ugly numbering system explained here.)
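If you’re curious how “DA14” encodes “351st object in the second half of February,” here’s a sketch of the Minor Planet Center’s scheme as I understand it from the page linked above: the first letter picks the half-month of discovery, and the second letter plus the trailing number give the order of discovery within that half-month.

```python
LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"   # 25 letters: "I" is never used

def half_month(first_letter):
    # A = Jan 1-15, B = Jan 16-31, C = Feb 1-15, D = Feb 16-29, ...
    return LETTERS.index(first_letter) + 1

def order_in_half_month(second_letter, cycle):
    # The second letter cycles through the alphabet repeatedly; the trailing
    # number counts how many full 25-letter cycles were already completed.
    return cycle * 25 + LETTERS.index(second_letter) + 1

# 2012 DA14: discovered in half-month "D" (February 16-29)...
print(half_month("D"))                # 4
# ...and it was the 351st object logged in that half-month.
print(order_in_half_month("A", 14))  # 351
```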

Second, the day after it passed the Earth — and two days before the publication date on Andrew Malcolm’s article — it was removed from the Sentry Risks Table. That’s the up-to-date listing of all potential collisions by known asteroids for the next 100 years.

Newly discovered asteroids get added to this list if the margin of error for their projected orbital track could possibly allow them to hit the Earth on one or more dates in the next 100 years. As the asteroids are repeatedly observed over the years, scientists refine their estimate of the orbit, and the shrinking margin of error reduces the number of possible dates for an impact. For example, the top item currently has a 1 in 59,000 chance of hitting the earth some time after 2078. This means that the orbital track is good enough to eliminate the possibility of an impact at any earlier date. Eventually, when no possible impact dates remain in the next 100 years, the object is removed from the table.

Because 2012 DA14 was removed the day after its closest approach, I’m pretty sure what happened is that it came in range of so many telescopes and radar systems that its orbit was pinned down so thoroughly that there was no longer any doubt that it will keep missing the Earth for the next 100 years. Indeed, the Minor Planet Center records 297 observations of the orbit of 2012 DA14 on February 16th alone.

There are still many as-yet-undiscovered near-Earth asteroids out there, some of them probably quite large. It’s possible — arguably inevitable — that one of them will hit us some day. But not 2012 DA14. At least not soon. We know it far too well.

Could we be seeing the start of the Weyland-Yutani Corporation?

A new space startup company, Planetary Resources, claims they “will overlay two critical sectors — space exploration and natural resources“. That sounds like space mining! And it’s not just a bunch of nuts I’ve never heard of backing this idea. The investors include Ross Perot Jr., Google co-founder and CEO Larry Page and Google chairman Eric Schmidt, James Cameron and Microsoft billionaire Charles Simonyi.
One of the classic memes in science fiction is the exploitation of resources beyond Earth, and in particular asteroid mining. We know there are valuable minerals to be mined just sitting around on rocks with orbits not too distant from Earth.

There are platinum, cobalt, gold, iron, manganese, molybdenum, nickel, osmium, palladium, rhenium, rhodium, ruthenium, tungsten, and more, just waiting to be picked up and flung back towards Earth.

And let’s not forget hydrogen and oxygen, which are cheap on Earth, but expensive to put up into space. It would be much easier to fling those elements down into Earth orbit than to haul them up from the surface because of the deep gravity well we sit at the bottom of. Those two elements are very valuable as propellant, and already having them up in orbit would reduce the cost of rocket travel beyond Earth orbit enormously.

And I do mean “fling”. Asteroids don’t have a huge mass like a planet the size of Earth does, so it’s easy to get some of that mass away from them. In other words, the gravity well they sit at the bottom of isn’t very deep. In fact, it’s barely more than a rim. We would have more trouble keeping things on the surface of an asteroid than getting them off.

Since we are just talking about minerals or elements, and nothing that is living, a gentle change in velocity, called delta-v, will start any container slowly on its way down towards Earth, which sits at the bottom of a much larger gravity well. With a very precise push, you can expect the containers either to park themselves in Earth orbit or to enter a trajectory that would drop them down onto Earth for recovery, all with that initial push.

This is some very exciting news for space buffs and old kids like me who read all about such operations in science fiction novels. As a kid I just assumed that, by now, I would be working and living in space, yet commercialization of space has been nothing more than a pipe dream until recently.

But dream no more. SpaceX is scheduled to launch the first commercial resupply mission to the International Space Station on April 30th on a rocket they are designing to be man-certified. Spaceport America is a facility in New Mexico that is specifically designed for commercial space operations, including facilities for the tourists Virgin Galactic will be flying into space (although not into orbit yet). Bigelow Aerospace is working on the old NASA inflatable space habitat concept, and expects to use the services of SpaceX not only to launch the stations, but to supply crew and supplies. They plan on renting them out to nations or companies that can’t afford to build and launch their own stations.

Asteroid mining, however, is one of the great dreams of space commercialization. The potential for profit is huge, and so are the risks, but it represents a major milestone in man reaching for the stars. The reach this time is not just for exploration and knowledge, but for profit.

In Robert Heinlein’s classic story The Man Who Sold the Moon, the main character recognized that space travel would never become common until people could make money from the venture. He hid some diamonds on a flight to the moon so he could convince people it would be worth going back. In the case of asteroids, we already know the valuable materials to be harvested. It’s just a matter of having the technology to go out there so they can be tossed back to Earth.

If any space miners go along to repair the equipment, I just hope they remember to never, under any circumstances, look into a slimy alien egg as it is opening up. Even with a helmet on, that just never goes well in the end.

My Nobody’s Business co-blogger Rogier has a pretty good article up about divine delusions vs. observable reality. It’s a plea for rationality, even if faith and mysticism seem like more fun. As is often my way, I have a small quibble.

Rogier and his opponent are discussing a Facebook poster’s insistence that a bit of lens flare in a photo of a pyramid is actually a sign that the “goddess era has arrived.” Rogier’s opponent is arguing that her subjective interpretation has meaning.

So here’s perhaps how she making the connection between her beliefs and aspirations and this photo. This photo for her is a symbol of her convictions: To bring the masculine energy (which she perceives is out of whack) into balance with the feminine energy.

He goes on to conclude:

So this image is a visual confirmation and symbol of her beliefs, and makes perfect sense.

Rogier had a problem with that:

I don’t see how he arrived there. At all. Unless he means that it makes perfect sense for some poor guy in an asylum to believe that he is Napoleon Bonaparte, or for the cat lady down the street to worship her scraggly charges as multiple reincarnations of Nefertiti. Yes, it makes sense to those two people, I’m sure. But almost everyone else easily recognizes the outsized fallacies involved.

There is no equivalence between the unprovable views of Cat Lady and Fake Bonaparte on the one hand, and the provable ones of Richard Feynman, Neil DeGrasse Tyson, and all the rest of science on the other.

This is where I feel the need to add a small clarification. I think “provable” is the wrong word. The key difference between the theories of scientists and the pronouncements of mystics is not that they can be proven, but rather that they can be disproven. In the terminology of Karl Popper, the theories of scientists are falsifiable.

What distinguishes a scientific theory from other kinds of ideas — personal beliefs, religious faith — is that scientific theories allow you to make predictions about the world that can be tested and that might be found false. (Note that I’m not saying that a theory has to be disproven to be scientific — that would make it a false theory — only that it has to be conceivable that it could be disproven.) Conversely, if there’s no way that an idea can be disproven, then it’s not really a scientific theory. If the theory can’t be tested against the real world, that means it doesn’t say anything useful about the real world.

Rogier’s opponent implicitly agrees that the goddess theory is not falsifiable:

For her [the Facebook poster] it’s a sign that the goddess era or whatever has arrived. Who’s going to prove she’s wrong?

If no one could ever prove her wrong, then she’s not saying anything interesting about the world.

A few days ago, at the conservative Illinois Review, an unnamed author who I assume is editor Fran Eaton got excited about some basic science in a post titled “Biology Textbook Author Asserts Life Begins at Conception”:

When does life begin?  At conception?  When the fertilized egg begins to multiply cells?  When the zygote embeds itself into its source of nutrition?

A growing number of scientists are beginning to assert that life can begin nowhere else but at conception, because at the moment when an egg is fertilized, it is either a human, a squirrel, an elephant or a dog. At that moment on, then, is when human life should be protected from planned destruction.

Actually, this is not some new trend that is getting support from “a growing number of scientists.” I’m pretty sure that biologists have never disputed the fact that fertilized eggs are alive — at least not since 1651, when William Harvey figured out that all animals, including humans, come from eggs — nor is there any doubt that a fertilized egg is of the same species as its parents. Fertilized human eggs have been human life since as long as scientists have known where babies come from.

In referring to an article by biologist Gerard Nadal, Eaton describes it as reporting Professor Scott Gilbert’s “findings.” But the quote is from the 9th edition of Gilbert’s Developmental Biology, which is one of the standard textbooks in the field. I doubt that Gilbert is reporting any novel findings.

Here is the quote:

Traditional ways of classifying catalog animals according to their adult structure. But, as J. T. Bonner (1965) pointed out, this is a very artificial method, because what we consider an individual is usually just a brief slice of its life cycle. When we consider a dog, for instance, we usually picture an adult. But the dog is a “dog” from the moment of fertilization of a dog egg by a dog sperm. It remains a dog even as a senescent dying hound. Therefore, the dog is actually the entire life cycle of the animal, from fertilization through death.

I don’t have a copy of the book handy, but that doesn’t sound like a scientific conclusion. Rather, it sounds like a scientific definition. It sounds like Gilbert is describing what his book is about, and why it is an important field of study. He’s making the point that a thorough scientific study of life isn’t only about what an organism is, it’s also about the changes that organism underwent to become what it is.

Eaton finishes with this conclusion:

Gilbert says a dog’s life begins at fertilization and ends at that dog’s death. How soon can we expect him and other scientists to define a human’s life cycle the same?

I think that’s backwards. Dr. Nadal wasn’t quoting Gilbert’s book as evidence that scientists have changed their minds; he was using the quoted passage to show that his own pro-life position is based on science that is so widely accepted it’s in a textbook. Here’s part of Nadal’s conclusion:

We are human for our entire life cycle. We are whole and complete in form and function at every stage of our development, for that given developmental stage. The prepubescent child is fully human, even though they lack the capacity to execute all human functions, such as abstract reasoning, or reproduction.

In the same way, the early embryo is alive and fully human, though it has not yet executed all human organismal functions.

Except for the overloaded use of the word “fully,” that’s certainly how I’d expect a biologist to see it, especially a developmental biologist who studies organisms’ entire life cycles. I really don’t think it’s a controversial idea. Eaton is missing the point if she thinks this is some new breakthrough. No one seriously doubts that fertilized eggs are human life.

Or so I thought. You see, just to be sure, I decided to do a little Googling, which led to the National Abortion Rights Action League’s answer to the question:


That’s a question each person must decide for him- or herself. These issues involve matters of personal, moral, religious, and scientific beliefs. This is an area where politicians should have no role.

Here NARAL is using the word “life” to mean something more than just biological life. That’s not exactly unjustified — there’s plenty of etymological support — but it seems to me they’re evading the question.

The Pro-Choice Action Network also has an evasive answer to the same question:

There is no scientific consensus as to when human life begins. It is a matter of philosophic opinion or religious belief. Human life is a continuum—sperm and eggs are also alive, and represent potential human beings, but virtually all sperm and eggs are wasted.

This is technically true, and I think it’s the same point Nadal was making in his article. Human life doesn’t begin at birth. It doesn’t even begin at conception. The unfertilized human egg was alive, and it came from a woman who was alive, and she grew from a living egg, which came from a living woman…and so on, going back maybe 100,000 generations until you reach the predecessor species from which humans evolved. Human life extends back continuously over millions of years.

But that’s not what people mean when they ask, in the context of the abortion debate, “Does life begin at conception?” That’s because they’re not really asking the right question.

Professor Scott Gilbert has been out of the office, but he found the time to dash off a quick note when I asked him to comment:

Thanks for sending this on. One can’t help people taking quotations out of context. Creationists do it all the time. We also call a human a human when that person is dead, even if they are not a person anymore. We don’t eat humans, we bury them. But the dead can’t vote or inherit. So calling a dog a dog even as a zygote is kind of obvious. Even a dog sperm is a dog sperm and not a human sperm. But (unless your a Monty Python fan), that don’t make the sperm a person.

(Professor Gilbert also suggests reading an op-ed he wrote for the Philadelphia Inquirer.)

Nadal stumbles into this when he argues that we consider both prepubescent children and embryos to be human life, even if neither is capable of performing all human functions. He’s right on the biology of course, fertilized human eggs are human life, but he’s not properly addressing the moral issue, because when it comes to morality, function matters.

Here in the United States, the legal and clinical definitions of death are specified in terms of brain activity. A person’s body can be kept alive by machines, and that’s certainly human life — blood is still flowing, the metabolism is still processing nutrients — but if the brain has irreversibly ceased to function, we pronounce the person dead.

Or consider that having consensual sex with an adult is not generally considered an immoral act, but having consensual sex with a child is a crime. We make this moral distinction because even though a child is fully human, we don’t believe children have the mental function to make decisions about their sexuality.

Similarly, a person’s rights depend on their behavior, which is another aspect of how they function. Obey the law, and you remain free. Rob a bank, and you go to jail. Try to kill someone, and you can be killed in self-defense, or executed after a trial.

The rights we grant people, and the respect we show to them, do not depend solely on the scientific fact that they are human life. We usually make the distinction by discussing not when a fertilized egg develops into human life, but when it becomes a person. That’s a harder question, and one that science can inform but not fully answer. 

Yesterday NASA awarded grants to four corporations for the development of human-rated space transportation systems (spaceships). Here are the big winners:

$22 million went to Blue Origin, best known for its intricately detailed corporate logo (as well as its founder, Jeff Bezos of Amazon fame). Their very science-fictiony New Shepard vehicle uses a creative vertical take-off and landing system, which they plan on ramping up from a sub-orbital launch vehicle into a full-scale orbital system.

$80 million goes to Sierra Nevada Corporation for their Dream Chaser vehicle, which is kind of a small space shuttle that doesn’t need a custom launch system.

$92.3 million is slated for Boeing, the company that a few short years ago was claiming that space transportation systems could never be privatized and could only work on a cost-plus government contract. (To be fair, they blew a lot of money a decade or so ago on R&D for a system that never got off the ground, so management was understandably gun-shy.) They changed their minds when they found out they could get grants for developing a new system and saw that other companies were already taking the lead. They have an impressive 7-man crew capsule based on the concept of scaling up older, proven designs.

$75 million for SpaceX, which has been in the news a lot lately for the very cool and successful launches of their Falcon series of vehicles. Unlike the other firms, SpaceX is keeping their efforts very much in the public view, which is kind of gutsy. Brand new rocket systems fail on their debut launch 40% of the time, but the Falcon 9 had two successful launches in a row. That’s pretty exciting in itself. They plan on mating that to their Dragon 7-man capsule for a complete system. The other designs mentioned here will rely upon an existing launch system (such as a human-flight certified version of the Atlas booster), but SpaceX is counting on having a totally new system engineered with efficiency and safety in mind from the start.
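That 40% figure makes the Falcon 9 streak easy to quantify. Here’s a back-of-the-envelope sketch, assuming (my assumption, not anything from NASA or SpaceX) that each launch succeeds independently at the historical ~60% rate for brand-new rockets:

```python
# Per-launch success rate implied by the ~40% debut-failure figure cited above.
success_rate = 0.60

# Assuming independent launches, the chance that a new rocket opens its
# career with two consecutive successes is the per-launch rate squared.
p_two_in_a_row = success_rate ** 2
print(f"P(two successes in a row): {p_two_in_a_row:.0%}")  # 36%
```

So a new vehicle of typical reliability would open with two straight successes only about a third of the time, which is why that streak is noteworthy.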

In September they plan on launching another Falcon 9 with test satellites that will approach the International Space Station, followed a month later by their first actual cargo delivery to the station.

Notable in its absence is any money for the joint Liberty project from ATK (which makes the Space Shuttle solid rocket boosters) and Arianespace, which would have placed the European Ariane 5 booster on top of an extended Shuttle SRB. The basic idea there was to take two very proven technologies and marry them into a vehicle that could launch humans into orbit. I had figured them as a shoo-in for some of this second round of NASA financing because of that. Maybe they can still get some private financing to keep this interesting project going; they plan on proceeding with development even without NASA money.

Overall I’m pleased that this part of the Augustine Commission’s plan is coming along. When the Space Transportation System was conceived it was pitched as a “space truck” idea. The Shuttle was meant to have a fast turn-around and be cheap to operate. In reality it was just too complex to accomplish such goals. The reason NASA had to try was that no one else in the world was capable of attempting such a system. Much has been learned operating the system, and that knowledge has been passed into the marketplace.

The comparison used to support privatization of launch-to-orbit systems is that of the early days of aviation. To help spur the commercial aircraft industry, the US government guaranteed contracts in the form of air mail so that companies knew they would have a customer. In the same way, NASA is now guaranteeing future contracts to deliver supplies and crews to low Earth orbit.

I honestly think that private companies can now take up the reins of operating a space trucking company. NASA can get back to focusing on what it is best at, which is doing things that have never been done before, like figuring out how to make CB radios work across interplanetary distances.

The Wall Street Journal ran an op-ed by Roger Scruton, an English philosopher, titled “Memo to Hawking: There’s Still Room for God”. (Sorry, it’s behind a paywall.) He attempts to refute Hawking’s premise that no God is needed to create a universe from nothing.

Immanuel Kant, who believed that Newton’s laws of gravity are not merely true but necessarily true, argued that we humans lack the ability to comprehend the universe as a whole, and thus that we can never construct a valid argument for a designer. Our thinking can take us from one point to another along the chain of events. But it cannot take us to a point outside the chain, from which we can pose the question of an original cause.

Scruton’s premise is that nothing has changed and Kant is still right. It’s the old argument that there must be a “first cause”: if you accept the idea that the Big Bang created the universe, you must accept that something or some being initiated the bang.

Hawking said that the creation of the universe from nothing was an inevitable consequence of how physics works, and therefore a first cause is no longer required. Scruton then deftly moves the goalposts:

If Mr. Hawking is right, the answer to the question “What created the universe?” is “The laws of physics.” But what created the laws of physics? How is it that these strange and powerful laws, and these laws alone, apply to the world?

The laws of physics are not physical objects that need to be created. They are a set of explanations for how the universe works. Perhaps Scruton is confused by the word “law”. The common usage for the word is that laws are man-made rules. (I’m sure the lawyers reading this have a much more precise definition…) Physicists use the word as a way of describing limitations they place on how the universe can work. In effect, the physicists are the “creators” of the laws, but only insomuch as they were the ones to write them down after figuring them out.

Perhaps a better phrase is “description of physical properties of the universe”. That’s a bit more cumbersome, though. No being is required to describe how the universe works. Now that we have a good idea about how a universe is inevitably created from nothing, no being is required for first cause either.

If you want a great description of just how universes can be created from nothing, watch ‘A Universe From Nothing’ by Lawrence Krauss (a real physicist). Krauss and Hawking seem to have a better grip on how the universe works than Scruton and Kant did.

I first saw the Powers of Ten short on Carl Sagan’s Cosmos. (If you don’t know who Carl Sagan was, please don’t tell me. It will just make me feel terribly old and sad.) Way back in the ancient mists of time the Museum of Science and Industry set up a kiosk looping the video, and I stood watching it over and over for as long as I could. That video had a big impact on how I viewed the universe and science.

There’s even an official website for the video and they claim that this year, on 10/10/10, they will be having special events. Nothing has been updated on the site since July, so I’m not sure if the plans are going forward.

The opening scene is a couple having a picnic on Chicago’s lake front, west of the Adler Planetarium and east of the Field Museum. As the “camera” zooms away you see an aerial mosaic photo of Chicago. The Adler had a giant copy of that photo on a wall. I spent even more time staring at that picture than I did watching the video over at MSI.

I’ll have to do something this October 10th to commemorate this bit of my daydreaming youth. If there’s anyone out there with similar fond memories of this short film, please feel free to give me some ideas. Maybe I can place a geocache at the site where the video begins.

In the meantime, check out this great interactive feature (using Flash) demonstrating the scale of the universe as we understand it now.
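The film’s trick is simple: each successive frame shows a field of view ten times wider than the last, so a handful of steps takes you from a picnic blanket to the whole observable neighborhood of our galaxy. A minimal sketch of that progression (the landmark labels are my own rough approximations, not taken from the film):

```python
# Rough landmarks for a few field-of-view widths, in the spirit of the film.
# These labels are approximate, for illustration only.
landmarks = {
    0: "1 m -- the picnic blanket on Chicago's lakefront",
    5: "100 km -- the Chicago metropolitan area",
    7: "10,000 km -- the whole Earth",
    13: "tens of AU -- the outer solar system",
    21: "~100,000 light-years -- the scale of the Milky Way",
}

# Each step of the zoom widens the view by a factor of ten.
for exponent in range(0, 22):
    width = 10.0 ** exponent  # field of view in meters
    label = landmarks.get(exponent, "")
    print(f"10^{exponent:>2} m  ({width:.0e} m)  {label}")
```

Twenty-two steps of multiplying by ten is all it takes, which is what made the short so mind-bending.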

Or, How Eating Habanero Peppers Proves I’m Smarter Than Other Mammals.

It’s chili pepper harvesting time again! While most Chicagoans seem enamored with growing tomato plants, I think habanero peppers should be the crop of choice. OK, to be honest, I’m actually too lazy to grow my own, but I have a couple of friends and a neighbor who go through the effort and I reap the rewards. I added the first batch of habaneros to my home-made enchiladas a couple of days ago and am still savoring the thought.

The New York Times science section has a fascinating article about why so many humans love hot peppers. Theories about why we like certain foods often involve evolutionary motivations for good health. In the case of hot peppers, it has been suggested that because they reduce blood pressure and even provide some level of pain relief, we evolved not just a tolerance for them but a liking. The problem with this theory is that humans are the only mammals who seek out hot peppers to eat. Birds eat them, but they don’t have the same neurological receptors to feel the heat, so to them hot peppers are just another fruit.

If eating hot peppers gave us an evolutionary advantage, other mammals would also have developed a yen for them, perhaps well before Homo sapiens split from our common ancestors. Yet even our closest relatives shun the noble jalapeño or habanero.

My son was quite impressed with an in-law who grew up in Mexico and ate habanero peppers whole, so my wife suggested a father-son gardening project. The first year only one plant survived the woodchucks and deer. But what a plant — it produced a bumper crop of killer orange habaneros. Nothing ate them. In my mind I still see that plant dangling its little orange heat grenades in front of the deer and growling, “Bite me, Bambi.”

Dr. Paul Bloom, a psychologist from Yale, sees our love of hot peppers as a unique outgrowth of our abnormally large human brains. He thinks that, perhaps, it’s a form of dietary thrill seeking.

The fact that capsaicin causes pain to mammals seems to be accidental. There’s no evolutionary percentage in preventing animals from eating the peppers, which fall off the plant when ripe. Birds, which also eat fruits, don’t have the same biochemical pain pathway, so they don’t suffer at all from capsaicin. But in mammals it stimulates the very same pain receptors that respond to actual heat. Chili pungency is not technically a taste; it is the sensation of burning, mediated by the same mechanism that would let you know that someone had set your tongue on fire.

The lizard portion of our brain gets the signals that our mouth is on fire and tells us to stop eating right now! The more evolved, logic-and-reasoning part of our brain tells us that it’s alright to continue. Logic and reasoning prevail, and we take another bite, thrilled that we survived the first. Being a good scientist, Dr. Bloom has been experimenting to test his theory, and so far the results have been encouraging.

It’s amazing that a love of hot spicy food is one of the indicators of higher intelligence in a species.