Tag Archives: Evolution

To bit, or not to bit

A dualistic assessment pulses at the heart of cyber-sprawl. Our computer universe revolves around a persistent question asked in rapid-fire succession. We call the tiny, indivisible parcels of information bits. Bits answer one question, again and again: Does it exist? If it does not, the bit takes its identity from a powerful idea often taken for granted in the rudimentary arithmetical grind.

The concept of zero arose in the wake of the agricultural revolution, most likely in South Asia. Nothingness, as a distinct mathematical entity, logged into history about the same time our ancestors first built cities. Nonexistence equates mathematically to the absence of value—zero.

Humans speciated about 300,000 years before present (ybp). We abandoned hunting and gathering approximately 10,000 ybp. The concept of zero didn’t manifest until more like 5000 ybp. It most likely snuck into existence under the cover of flashier history: Egyptian pyramid construction, Babylonian hanging gardens, and the preponderance of pottery in human settlements across the Fertile Crescent.

We may never know zero’s precise origin in space and time. Most likely, the numerical value of nothingness arose again and again among geographically dispersed cultures, throughout separate eras. Without more reliable and precise evidence, 5000 ybp serves as a sufficient estimate. Humanity has reaped the fruit of zero for roughly 2% of its time as a distinct species.

If a bit exists, we assign it the least magnitude, just enough to establish its presence. The symbol “1” represents existence.

A true or false answer also resolves the bit’s existential query, but this is just a more complicated restatement of the answer rendered above. Using a string of characters, a word like “true” or “false,” adds contextual meaning and alleviates the anxiety of the mathematically averse. True or false successfully answers the existence question. As simple as answering true or false may seem, imagine repeating the process hundreds, thousands, millions, or even millions of millions of times.

Many people would instinctively switch to “T” and “F” in place of writing four or five letters for each query. A single letter suffices to state the existence of something. Continue along this line of logic and some people—probably the mathematically inclined—will substitute a 1 for T, and a 0 for F. This transcends language barriers, removes ambiguity, and adds quantitative value for more complicated manipulations of our nascent data.
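The substitution chain above can be sketched in a few lines of Python (a hypothetical illustration; the list of answers is made up for the example):

```python
# A run of answers to the existence question, spelled out as words.
answers = ["true", "false", "false", "true", "true"]

# Step one: abbreviate each word to a single letter.
letters = [a[0].upper() for a in answers]

# Step two: substitute digits, 1 for T and 0 for F.
bits = [1 if a == "true" else 0 for a in answers]

print(letters)  # ['T', 'F', 'F', 'T', 'T']
print(bits)     # [1, 0, 0, 1, 1]
```

The digit form is the one that adds quantitative value: once the answers are 1s and 0s, they can be counted, added, and manipulated like any other numbers.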


photo courtesy of Mark Ordonez

The concept of zero wields power in ways that often escape even the most mathematically gifted. Humans prefer ten digits (0-9) that cycle endlessly up and down the number line. A zero initiates the process: the origin on a number line has no value. Moving up, we count to nine; further progress requires a reset of the ones place to zero and the addition of a one to the tens place (1 ten plus 0 ones is 10). Repeat the process until we reach nineteen (1 ten plus 9 ones is 19); now replace the one in the tens position with a two and reset the ones to zero: 20. Without the concept of zero, we have no natural origin, nor a means to recycle our counting system at a convenient interval.

Computers operate with only two digits. Remember: the most basic element of a computer is a bit with two possible states, existence or not. Humans prefer ten digits because the modal human has four fingers and a thumb on each of two hands. Base ten is natural for us because we typically inaugurate our mathematical experience tallying small quantities on our hands. Computers don’t have appendages. Instead, computers recognize existence or lack thereof; on or off; 1 or 0.


photo courtesy of Chris McClanahan

Math isn’t partial to any particular number of digits. As long as the absence of value is expressible, just two fundamental numerals can represent any quantity. Two-digit numeral systems employ binary code.

Here’s a string of binaries beginning with the absence of value: 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010. You don’t need masterful spatial perception to discern the pattern at the end of the previous sentence. Let me translate into more familiar base ten notation: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
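That string of binaries can be reproduced with Python’s built-in `bin()` function, which prefixes its output with `0b`:

```python
# Print the first eleven integers side by side in base ten and binary.
for n in range(11):
    print(f"{n:>2}  ->  {bin(n)[2:]}")  # slice off the '0b' prefix
# prints 0 -> 0, 1 -> 1, 2 -> 10, 3 -> 11, ... 10 -> 1010
```

The right-hand column is exactly the sequence in the paragraph above, starting from the absence of value.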

The same principle governs all positional numeral systems. In binary, we recycle the process using only two digits; base ten counts nine distinct quantities before an abrupt return to the concept of zero. The first position has no value, then it does. Since binary has only two possible values, we must then reset the first position to zero and progress by adding value to the next position; then we add value to the initial position again. This process can repeat forever, creating an infinite series of magnitudes.
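The reset-and-carry process works the same way in any base. Here is a minimal sketch (the `increment` function and digit-list representation are my own illustration, not a standard library feature):

```python
def increment(digits, base):
    """Add one to a number stored as a list of digits, most significant first."""
    i = len(digits) - 1
    while i >= 0:
        digits[i] += 1
        if digits[i] < base:       # this position can still grow: done
            return digits
        digits[i] = 0              # reset this position to zero...
        i -= 1                     # ...and carry value into the next position
    return [1] + digits            # carried off the left end: the number grows

# Counting in binary: 0 -> 1 -> 10 -> 11 -> 100
n = [0]
for _ in range(4):
    n = increment(n, base=2)
print(n)  # [1, 0, 0], i.e. binary 100, which is four
```

Swap `base=2` for `base=10` and the same function reproduces the nine-then-reset rhythm of ordinary counting.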

Tedious repetitions, like writing and manipulating binary code, strain human attention spans and raise the probability of error. Also, we’re slow when it comes to mundane tasks. Fortunately, humans designed computers to flawlessly iterate simple instructions. According to a loose interpretation of Moore’s Law, processing speed doubles about every two years. My processor executes 2.2 billion (the same as 2.2 thousand x 1 million, or 2.2 thousand million) instructions every second.
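The doubling arithmetic behind that loose reading of Moore’s Law compounds quickly; a small sketch, starting from the 2.2 billion instructions-per-second figure above:

```python
# Project instructions per second, assuming a doubling every two years.
rate = 2.2e9                          # 2.2 billion instructions per second today
years = 10
projected = rate * 2 ** (years / 2)   # five doublings in ten years
print(f"{projected:.2e}")             # 7.04e+10, about 70 billion per second
```

Whether real hardware keeps that pace is another question; the point is only how fast a repeated doubling outruns human patience for mundane repetition.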

By the way, base ten 2,200,000,000 is the same as 10000011001000010101011000000000 in binary. I think. Just transcribing a 32-digit quantity, regardless of base, carries a high probability of a simple mistake.
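A transcription like that can be double-checked mechanically; Python’s `bin()` and `int()` confirm the 32-digit string:

```python
n = 2_200_000_000
b = bin(n)[2:]     # drop the '0b' prefix
print(b)           # 10000011001000010101011000000000
print(len(b))      # 32 binary digits
assert int(b, 2) == n  # converting back to base ten round-trips exactly
```

Letting the machine do the tedious part is, of course, the whole argument of this post.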

If you enjoyed reading this piece, check out An Electron Story or Hunter Gatherers in the Quantum Age.

An Evolving Universe III—The Insurgent


This may seem unorthodox coming from a seasoned physics teacher with an engineering background: Charles Darwin is the greatest scientist!

Unfortunately, scant evidence exists for my claim and I’ve already stated and supported my position that Isaac Newton is The Greatest in the first part of this stream of posts. A link to part one is in the previous sentence and this link will take you to part two.

Before venturing further into my argument for Darwin’s scientific supremacy, let’s clarify the difference between evidence and proof; the distinction is subtle but important. Scientists never prove anything. Evidence accumulates in support of a hypothesis until it becomes a theory. A theory is as close to proven as it gets, but theories are forever under assault from young upstarts and old veterans. Perhaps the greatest scientific accomplishment is to discover evidence that overturns a theory or compels its modification.

To say, “Darwin is the greatest scientist” is not a theory; it’s not even a hypothesis. It’s an opinion. Perhaps we should call it a belief: there’s a rationale for the claim, but I would fail to assemble scientific evidence to support it. Basically, you would have to settle for this is how I see it, and so should you. I would call it a hypothesis, but there’s no apparent means of testing it, so we keep circling back to the statement, Charles Darwin is the greatest scientist, being an opinion or belief.

I should have said, “I believe Charles Darwin will one day be seen as the greatest scientist.” While this is still a long shot, it’s far from impossible.

The reach of physics is vast: energy, atomic structure, exotic particles, gravitation, and the study of the entirety of space and time. It’s no surprise that the greatest scientists are almost exclusively physicists: Newton, Bohr, Einstein, Faraday, Maxwell…

It’s rare for biologists to make the greatest-scientist list, not because they lack scientific prowess; the biologist’s domain is simply too tiny to compete with the physicist’s. Although we hope life exists in other parts of the universe, as of now every biological principle we have is confined to a narrow shell of water, land, and air in the crustal region of the third planet from an ordinary star in a large, but common, spiral galaxy. It’s probable that one day aspiring scientists will flock to a burgeoning field most likely to be called exobiology, but currently all biologists must focus their studies near the surface of Earth.

Lisa Randall’s new book Dark Matter and the Dinosaurs frequently uses the words (and various derivatives) “evolution” and “universe” in the same sentence. She writes things like:

“Improved technology combined with theories rooted in general relativity and particle physics have provided a detailed picture of the Universe’s earlier stages, and of how it evolved into the Universe we currently see.”


“…part of the beauty of the Universe’s early evolution is that in many respects it is surprisingly simple.”

While I’m sure Darwin would be thrilled to learn that 21st century particle physicists use language that evolved from his work, it’s doubtful he foresaw such an outcome. Quantum mechanics in general, and particle physics in particular, didn’t hit full stride until more than half a century after Darwin died in 1882.

Modern cosmology kicked in shortly after Einstein theorized general relativity in 1915. The idea that the universe could change on a macro level offended Einstein: he manufactured a cosmological constant to maintain a static universe, contrary to general relativity’s mathematical prediction of an expanding one. Ironically, and a bit humorously, Einstein eventually called his cosmological constant the “biggest blunder” of his life. We now recognize the cosmological constant as the first clue that our universe is growing. Einstein was so brilliant that even his misunderstandings qualify as good science.

In 1927, the Catholic priest and physicist Georges Lemaître hypothesized that the universe arose from a primeval atom. After several decades of accumulating evidence, Lemaître’s educated guess would become the Big Bang Theory. The theory of an expanding universe appears to have cleared the way for an evolving universe: if the universe can expand, might it also go through changes that are life-like?

I do realize it’s a huge leap from The Theory of Natural Selection to an evolving universe, but it’s not impossible that Darwin stumbled onto something more universal than he knew. Darwin published On the Origin of Species in 1859. Perhaps a 157-year-old scientific insurgency is about to discover a higher gear.

Go back to Part I or Part II.

If you enjoyed The Evolving Universe, try these too: An Electron Story, Hunter Gatherers in the Quantum Age, De-frag Brain, Binary Fraud, or Linear, Circular Politics.



An Evolving Universe I—The Greatest

Isaac Newton was the greatest, most influential scientist.


This is a fact, but not really a scientific fact. There aren’t any facts, even in science, because the scientific method (question, hypothesis, experiment, analysis, conclusion, evaluation) dictates that all ideas must carry some degree of uncertainty. The scientific method never rests, though it does grow tired after many iterations. If exhaustive repetitions fail to uncover evidence against a prediction (scientists attempt to falsify, not support, their predictions), a hypothesis becomes a theory: a scientific fact is born. Keep in mind that all facts (theory is probably the better moniker; a fact and a theory are essentially the same) remain subject to ongoing review.

Any evidence against a theory compels at least a modification, or even abandonment of that theory. The idea that facts don’t exist confuses the general public; it often confounds people with advanced degrees. Most realize the universe is continuously changing, evolving. Facts are part of the universe. Assuming ideas are manifestations of the physical universe, facts should be subject to evolution too.

Why was Newton the greatest scientist? His influential accomplishments were many. In estimated order of decreasing importance, here is what Newton revealed: the nature of light (he even hypothesized that light came in particles called corpuscles, a precursor of photons, though he conducted no experiments on this belief), the universal nature of gravitation, and the laws of motion. He invented (should we say discovered?) calculus too.

Calculus would be a more significant achievement, but another bright chap, Gottfried Leibniz, created the same branch of mathematics at about the same time as Newton. Had Newton died in the plague (he fled London when it ravaged the British Isles), calculus would have been Leibniz’s baby, so to speak.


It’s unlikely another scientist would have discovered (should we say invented?) the other three ideas within a few decades. Newton’s color theory of light might have taken a century or more before another scientist discovered it.


If Newton were alive today, he wouldn’t claim to be history’s first scientist; he would most likely defer to Galileo. Galileo seems to be the first person we know of to test his ideas. Newton didn’t do anything distinctly different from Galileo; he just took Galileo’s practices to another level.

I’ve never heard an argument that any scientist surpasses Newton’s greatness. Albert Einstein is often considered Newton’s closest competition. Einstein was the first scientist to compel a modification to Newton’s law of gravitation; it was a cosmetic adjustment really, and it only applied under extreme conditions. But Einstein’s General Theory of Relativity did do something Newton couldn’t: it explained the true nature of gravity, a distortion of space-time caused by the presence of matter.


It’s appropriate that we distinguish between laws and theories. It’s likely most people believe laws are superior to theories. Unfortunately, the word theory is often mistakenly applied when the word hypothesis should be employed. A hypothesis is an educated guess; a theory is a system of ideas backed by a vast and complicated reservoir of experiments. In short, and once again, a theory is what we commonly call a scientific fact.

A law is a mathematical system that allows us to make predictions. Laws are powerful scientific tools, but they have a profound weakness: they don’t explain what’s actually happening, physically. We just know that, as long as we respect the necessary constraints, laws yield reliable predictions.

Niels Bohr is a dark horse candidate to compete with Newton for greatest scientist. What did he do? Bohr was a father of quantum theory. Why not the father of quantum theory? There are many fathers of quantum theory: Max Planck and Einstein, to name two more (there are others we should consider), but neither really accepted the fundamental weirdness that goes with quantum theory.


Bohr was the first scientist to embrace the weirdness, the probabilistic nature of the universe, at the root of quantum theory. Once Bohr convinced the scientific community (not all scientists were on board with Bohr; Einstein was stubborn and never accepted the dicey nature of quantum theory), a vast array of successive quantum theorists continued to build the most explicative theoretical system in the history of science: quantum mechanics.


Bohr is not the father of quantum theory, but he’s the first on the list of potential fathers. Since quantum theory is the most successful scientific system of ideas, it makes sense that the first on the list of fathers is one of the greatest scientists.

It will be nearly impossible to knock Newton off his lofty perch. He had the advantage of getting in at the start of the game: science didn’t really exist in an organized way when he was born.

The whole discipline rests on a foundation he constructed. Thanks to Newton, the base of science is strong. The only way to supersede Newton may be to discover a new characteristic of that foundation, or something we had not yet considered about how the foundation rests on whatever supports it. In my opinion, there is one possibility for another scientist to take the title of greatest scientist from Isaac Newton.

Click here to go to Part II. Here’s Part III.