Consider something vastly more complex than, say, the latest Intel super chip – multiple cores, many megabytes of on-chip RAM, on-board superspeed communications via laser transceivers, yada, and yada, and – – – –
We already make things this complex – and the engineers who made this one have the skill to make the next one better, faster, stronger, yada, and yada, and – – – –
We’re up in the multiple billions of “devices” on each silicon chip, where a device is a transistor, resistor, or capacitor – mostly transistors. Each RAM bit is a very specialized transistor. Now, suppose you are a brilliant engineer who has learned to navigate the entire design of this chip. Your boss comes up, smiles at you, pats you on the back, and says, “Valued contributor and fave employee, you are going to head up a new design team to make a completely new chip, using the same circuit technology but executing a brand new internal machine micro-language and a brand new external compiler-generated machine language, to do something that the chip you’re familiar with was never considered useful for.”
The above fanciful construction is apt, but the chip project is orders of magnitude smaller than the job of “writing” the DNA for a whole new species, much less a “new life form.”
Science has been digging into the DNA on hand, and at present has sequenced at least dozens (? – I need to ask my daughter Angela, who works on the user-interface software of the Genome Project out of UC Santa Cruz) of major animal species, not to mention viruses, plants, single-cell bacteria, molds, fungi, and more. Much more.
It’s like someone looking at an acre of combed lint, trying to deduce what this bit of silicon does. “Acre of combed lint?” – that is one way to conceptualize expanding the image of the latest CPU super chip to a point where the naked eye can distinguish each connecting trace, each transistor, etc. An acre of combed lint.
Fortunately DNA is linear, and has a lot more structure we can parse out than just diving into an acre of combed lint and trying to find a path through the maze. But we have designed CPU chips for so long that it only looks like combed lint to the naked eye, when in fact it is elegant and purposeful to the chip designer.
DNA-based life has, no matter your opinion on a Creator etc., more-than-full equivalents of elegance and purposefulness. But since humans didn’t develop the engineering behind how DNA works, to us it really did start out as combed lint. We struggle to understand things like how a protein (a specific chain of (mostly) amino acids) folds in 3-D. The DNA is legible, but part of the reason for that DNA sequence is that the protein it encodes happens to fold right, forming a particular 3-D structure which acts in a particular way. [[ Note to the curious – analogies to wrenches, screwdrivers, and so forth do appear to explain some proteins’ functions. ]]
Computers currently have TWO layers of “machine” language. Define ‘clock’: a five-gigahertz chip uses a frequency generator that pulses five billion times a second, and thus has a five-gigahertz clock. Deep inside, microcode directs each of the tiny single-clock steps necessary to perform the very powerful, very complex sequence of an ‘exterior’ machine instruction such as ‘multiply a times b and store the result in c’: “Read one word either from a machine register or from the RAM location addressed by the contents of a machine register, possibly offset by the contents of the Program Counter register plus a given 32-bit offset; then multiply that by one of [literal value, contents of some register, some other location in RAM, …]; and then place the result in one of [some register or some other location in RAM, …].”
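To make the two layers concrete, here is a toy sketch in Python – not any real CPU’s microcode, every micro-op name is invented for illustration – of how one exterior ‘multiply’ instruction decomposes into single-clock micro-steps:

```python
# A toy model: one exterior instruction ("multiply a times b, store in c")
# executed as a list of micro-steps, each driven by one clock pulse.
# All names here are made up for illustration, not a real instruction set.

registers = {"a": 6, "b": 7, "c": 0}
ram = {0x100: 0}

def multiply_a_b_into_c(registers, ram, dest_addr):
    """One exterior instruction, broken down into microcode steps."""
    microcode = [
        ("FETCH_OPERAND", "a"),    # clock 1: latch the first operand
        ("FETCH_OPERAND", "b"),    # clock 2: latch the second operand
        ("ALU_MULTIPLY", None),    # clock 3..n: drive the multiplier array
        ("WRITE_BACK", "c"),       # store the product in register c
        ("WRITE_RAM", dest_addr),  # optionally also store it to RAM
    ]
    latch = []
    result = None
    for micro_op, arg in microcode:
        if micro_op == "FETCH_OPERAND":
            latch.append(registers[arg])
        elif micro_op == "ALU_MULTIPLY":
            result = latch[0] * latch[1]
        elif micro_op == "WRITE_BACK":
            registers[arg] = result
        elif micro_op == "WRITE_RAM":
            ram[arg] = result

multiply_a_b_into_c(registers, ram, 0x100)
print(registers["c"], ram[0x100])  # 42 42
```

The point of the sketch: the programmer-visible instruction is one line, but the hardware walks through a fixed microcode sequence, one tiny step per clock tick.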
A compiler is a program which takes FORTRAN or C or C# etc. as input and derives an output consisting of pre-defined RAM contents (starting raw data) and exterior machine instructions such as the ‘multiply’ above. If you’ve never had a look inside a computer before, the foregoing is probably dry as dirt and somewhat less important. It doesn’t move you. Sorry about that. But DNA is equivalent to a microcode orders of magnitude more subtle, and the entire DNA/microcode sequence obeys, as if it were exterior machine code, the environment containing the nucleus.
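A toy Python sketch of that compiler idea – the source statement, the “machine” mnemonics, and the register names are all invented for illustration; a real compiler is enormously more involved:

```python
# A toy "compiler": it takes one statement of a tiny made-up source
# language, "c = a * b", and emits (1) pre-defined RAM contents
# (starting raw data) and (2) a list of exterior machine instructions.
# LOAD / MUL / STORE and R1 / R2 are illustrative names only.

def compile_statement(stmt, initial_values):
    """Compile 'dest = x * y' into a data segment plus a code segment."""
    dest, expr = [s.strip() for s in stmt.split("=")]
    x, y = [s.strip() for s in expr.split("*")]
    # Starting raw data: the compiler lays out initial RAM contents.
    data_segment = dict(initial_values)
    # Exterior machine code: one instruction tuple per line.
    code_segment = [
        ("LOAD", "R1", x),      # read x from RAM into register R1
        ("LOAD", "R2", y),      # read y from RAM into register R2
        ("MUL", "R1", "R2"),    # R1 <- R1 * R2
        ("STORE", "R1", dest),  # write the product back to RAM
    ]
    return data_segment, code_segment

data, code = compile_statement("c = a * b", {"a": 6, "b": 7})
print(data)
print(code)
```

Each instruction in the output list would then, on real hardware, expand into its own microcode sequence – the second, inner layer of machine language.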
DNA is ALL microcode. A computer has from a fraction of a megabyte to a megabyte or more of microcode. DNA is, for a human, about one billion six-bit codons(*), or about 750 million bytes.
Anyone care to start writing in DNA microcode? Even granting that we do understand some of what we see there, unraveling the full mystery is at best decades in the future, even with the help of supercomputers. A million bytes to run our “acre of combed lint” CPU chip vs. 500 or more times that much to operate one copy of Homo sapiens – that is some kind of complexity!
(*) – each DNA letter is one of four bases – G, A, T, C – so it carries two binary bits of information. Each codon is three DNA letters, so six bits, or three-quarters of a byte.
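The arithmetic in the footnote, and the chip-vs-genome ratio above, check out in a few lines of Python (the one-megabyte microcode figure is the essay’s own rough estimate):

```python
# Checking the codon arithmetic: four bases = 2 bits per letter,
# three letters per codon, about a billion codons in a human genome.
BITS_PER_LETTER = 2           # G, A, T, or C: four choices = 2 bits
LETTERS_PER_CODON = 3
bits_per_codon = BITS_PER_LETTER * LETTERS_PER_CODON  # 6 bits
codons = 1_000_000_000

dna_bytes = codons * bits_per_codon // 8
print(dna_bytes)              # 750_000_000 -- about 750 million bytes

microcode_bytes = 1_000_000   # a generous megabyte of CPU microcode
print(dna_bytes // microcode_bytes)  # 750 -- "500 or more times", indeed
```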