Random thoughts about strong A.I.

The Singularity is Near

IBM Invests $1B in Watson

IBM says it will invest $1 billion in the computer system that won on Jeopardy! With Google, Microsoft, Apple, and Nuance all spending significant resources as well, the race is on to see who can produce the world’s first Strong A.I. program.

As a species, I don’t think we are ready for the implications. Just one example: if robots can do the work of humans, why do humans need to work? What does that mean for the concept of money? What happens if one corporation gains control of this technology? Will it be used for the good of all, or only for the shareholders of that one corporation?

Lots of interesting questions we have postponed because it seemed too early to have these discussions, but clearly some very smart people think Strong A.I. is nearly here…

Progress: Of Mental Lexicons

[Figure: hierarchical model of the mental lexicon]

A mental lexicon is defined as a mental dictionary that contains information regarding a word’s meaning, pronunciation, syntactic characteristics, and so on (similar to the picture above).

I coded this up today for the purposes of a knowledge representation model.  Interestingly, I had not thought about it as a model for the words themselves. I had assumed a WordNet-like database would be needed, and this clearly fits the bill. I will think on this more.
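
To make this concrete, here is a minimal sketch of how a lexicon entry and its hierarchy might be represented in Python. The field names (meaning, pronunciation, pos, hypernym) and the sample words are illustrative choices for this sketch, not the actual schema of my model.

    # Sketch of a mental-lexicon entry: each word carries its meaning,
    # pronunciation, and syntactic category, plus a link to a more general
    # concept, which gives the lexicon its hierarchical, WordNet-like shape.
    class LexiconEntry:
        def __init__(self, word, meaning, pronunciation, pos, hypernym=None):
            self.word = word                    # surface form, e.g. "canary"
            self.meaning = meaning              # gloss / definition
            self.pronunciation = pronunciation  # rough phonetic spelling
            self.pos = pos                      # part of speech
            self.hypernym = hypernym            # parent concept, e.g. "bird"

    lexicon = {}

    def add_entry(entry):
        lexicon[entry.word] = entry

    def ancestors(word):
        """Walk up the hierarchy: canary -> bird -> animal."""
        chain = []
        entry = lexicon.get(word)
        while entry and entry.hypernym:
            chain.append(entry.hypernym)
            entry = lexicon.get(entry.hypernym)
        return chain

    add_entry(LexiconEntry("animal", "a living organism", "AN-ih-mul", "noun"))
    add_entry(LexiconEntry("bird", "a feathered animal", "burd", "noun", "animal"))
    add_entry(LexiconEntry("canary", "a small yellow songbird", "kuh-NAIR-ee", "noun", "bird"))

    print(ancestors("canary"))  # ['bird', 'animal']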

Methods of Inference or Reasoning

I have created several video games over the years, and they all shared a common architecture: the game is implemented as one big while loop.  At the top of the loop it checks for input from the gamer, then performs its own internal logic to move game pieces, and so on.  This is where the A.I. in video games runs.  Even when the gamer isn’t doing anything, the game itself is working on how to beat them.
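
In rough Python terms, that architecture looks something like the sketch below; the function names are placeholders standing in for a real engine's input, update, and rendering code, not an actual API.

    import time

    def get_player_input():
        """Placeholder: poll the keyboard/controller; None means the gamer is idle."""
        return None

    def apply_input(state, command):
        """Placeholder: move the gamer's pieces according to the command."""
        return state

    def run_ai(state):
        """The game's own cycles: even while the gamer is idle, the engine
        spends this time working out how to beat the gamer."""
        state["tick"] += 1
        return state

    def render(state):
        """Placeholder: draw the current frame."""
        pass

    state = {"tick": 0}
    while state["tick"] < 600:            # run ~10 seconds at 60 ticks/second
        command = get_player_input()      # top of the loop: check for input
        if command is not None:
            state = apply_input(state, command)
        state = run_ai(state)             # the game's internal logic runs every tick
        render(state)
        time.sleep(1 / 60)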

Likewise, the most interesting part of the A.I. engine is not when it gets input from the user, but what it does on its own cycles, unobstructed by meddling humans.   During this period, the engine is making inferences based on the information it has at hand.  Here is a list of the kinds of inference our A.I. uses:

  1. Deduction – a premise, a conclusion and a rule that the former implies the latter
  2. Induction – inference is from the specific case to the general
  3. Intuition – the answer simply appears, with no proven theory behind it
  4. Heuristics – rules of thumb based on experience
  5. Generate and test – trial and error
  6. Abduction – reasoning back from a true condition to the premises that may have caused the condition
  7. Default – assuming common, general knowledge in the absence of specific knowledge
  8. Autoepistemic – reasoning about one’s own knowledge (self-knowledge)
  9. Nonmonotonic – previous conclusions may be retracted when new knowledge arrives
  10. Analogy – inferring conclusions based on similarities with other situations

We can chain multiple types of inference together to connect a problem to a solution.
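
As a concrete example of the simplest case, here is a minimal sketch of deduction implemented as forward chaining over if/then rules; the facts and rules are made up purely for illustration.

    # Forward-chaining deduction: repeatedly apply rules of the form
    # "if all premises are known, conclude X" until no new facts appear.
    rules = [
        ({"it is raining"}, "the ground is wet"),
        ({"the ground is wet"}, "the ground is slippery"),
    ]

    def deduce(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)   # premises imply the conclusion
                    changed = True
        return facts

    print(deduce({"it is raining"}, rules))
    # {'it is raining', 'the ground is wet', 'the ground is slippery'}

Abduction would run the same rules in the other direction, asking which premises could explain an observed conclusion.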

Arthur C. Clarke’s Three Laws

Arthur C. Clarke’s three “laws” of prediction are:

  1. “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
  2. “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”
  3. “Any sufficiently advanced technology is indistinguishable from magic.”

I like the second law in particular. Think back to the time before radio or television: had somebody suggested that there would one day be devices in everyone’s homes where you could hear and see people from thousands of miles away (and without a cable), my guess is that everyone would have thought they were crazy. We are at that point in time for Strong A.I. It’s impossible until somebody does it.

On a personal note, my father was good friends with Arthur, and I remember him well from my teenage years. I was saddened to hear of his passing. Godspeed, Arthur C. Clarke.