Innate Knowledge?

by Peter Chapman


It seems that some in the field of A.I. wish to start with the nerve cell as the basic building block. However, building floating-point arithmetic out of nerve cells seems to me like quite a challenge (I have written these functions in assembly language). I decided to jump ahead several million years in evolution. A nerve cell is too low-level for me. That’s not to say we will forgo some of the underlying principles, but the nerve cell doesn’t need to be our basic building block.

The first microprocessors did not support floating-point instructions. Instead, floating-point functions were built out of more primitive integer instructions. At some point early in the history of microprocessors, floating-point instructions were added to the base instruction set.
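To make this concrete, here is a minimal sketch (in Python, purely for illustration) of the idea: a toy software float represented as a (mantissa, exponent) pair, with multiply and add implemented using only integer operations, much as early processors without floating-point hardware had to do. The format and helper names are my own invention, not anything from a real instruction set.

```python
# Toy software floating point: a value is (m, e), meaning m * 2**e.
# Everything below uses only integer arithmetic and shifts.

def normalize(m, e):
    """Shift the mantissa into the 23..24-bit range, adjusting the exponent."""
    if m == 0:
        return 0, 0
    while abs(m) >= 1 << 24:   # too many mantissa bits: shift right
        m >>= 1
        e += 1
    while abs(m) < 1 << 23:    # too few mantissa bits: shift left
        m <<= 1
        e -= 1
    return m, e

def from_int(n):
    """Convert a plain integer into the (mantissa, exponent) format."""
    return normalize(n, 0)

def to_float(x):
    """For inspection only: convert back to a hardware float."""
    m, e = x
    return m * 2.0 ** e

def fmul(a, b):
    """Multiply: multiply mantissas, add exponents, renormalize."""
    (ma, ea), (mb, eb) = a, b
    return normalize(ma * mb, ea + eb)

def fadd(a, b):
    """Add: align the smaller operand's exponent, then add mantissas."""
    (ma, ea), (mb, eb) = a, b
    if ea < eb:                       # make a the operand with the
        (ma, ea), (mb, eb) = (mb, eb), (ma, ea)  # larger exponent
    mb >>= (ea - eb)                  # align b to a's exponent
    return normalize(ma + mb, ea)
```

For example, `fmul(from_int(3), from_int(5))` yields a pair whose value is 15.0. It is slow and ignores rounding and special cases, which is exactly why hardware support was eventually folded into the base instruction set.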

This leads to a most important question: What is the base instruction set for Strong A.I.? Or, another way of thinking about it: what is Strong A.I.’s innate knowledge? Should integer instructions be included? How about floating point? What other concepts? How about time? How about things it can never directly experience, such as color? Wouldn’t it be better to have a sighted person define a concept of color as part of the base instruction set, rather than trying to teach a blind program the concept? Is the base instruction set immutable? Over time, can the A.I. engine come to its own meaning, replacing or adding to the meaning given to it by the original programmers?

And it is clear that not all learning can come from the base instruction set. Somewhere you have to bite the bullet and get to the A.I. piece. Maybe, in this sense, starting with nerve cells lets you focus on that question without all the clutter.

I don’t have all the answers yet, but these are very interesting questions, indeed!