Humans and AI

I have been reading quite a bit about AI and neuroscience on the web, and there are many good sites with commentary on the state of the mind and its relationship to animal and artificial intelligence. It is a wealth of information, and I am incorporating my methods into a public interface that will be open source. My experience with AI is similar to other areas of software and design: often, the people considered experts in what they are studying are not experts in the real sense. Like any method I use, it must be provable and useful. It seems that no matter what the area (science, pseudoscience, business, religion, government, crime, personal), there are people who misrepresent the truth for any number of reasons, but mostly for personal advantage.

Human intelligence:
10,000,000,000,000 cells
It is said that we use only 10%, so then I need to simulate only:
1,000,000,000,000 cells
Of that, 99% is involved in thinking about sex, so that doesn't need to be included. So:
From that, 90% is used for reruns of The Simpsons, movies, and general stuff that has no computational value. So:
Now it is getting manageable, but we can also exclude things that are simply related to functions which do not solve problems: things like eating, breathing, walking, and so on. So:
This is getting to the area where it is possible to simulate the actual intelligence with a personal computer. I have read that it might be a good model that human thought is Bayesian, and since the probability there should be interpreted as 'subjective', it is more like statistics, which has many pitfalls, and Bayesian reasoning, which is even worse. If thought is based on this type of reasoning, then 90% can be excluded as pure bunk, since it is likely to be too subjective. So:
This is a level that could be reasonable to simulate on a single machine, except for the connectivity issues. However, even the most rational of processes only deals with a limited scope, and it is easy to create optical and logical illusions. Even though the eye can see millions of pixels, it doesn't actually deal with them as strict data in the same way a computer does. It would not be possible to identify a single-pixel change in an image, or even a close color change, as things tend to be relativistic and general. So we can exclude about 99% of information as just being generalized to common cases and sets:
Now we are getting somewhere. Since a computer runs at 2 GHz and the mind at 100 Hz, we have an advantage of 2,000,000,000:100, which is 20,000,000:1.
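For fun, the chain of reductions above can be sketched as a quick back-of-envelope script. The percentages are the ones stated in the text; the "eating, breathing, walking" step gives no number, so it is skipped here (my assumption, not the original's).

```python
# Back-of-envelope sketch of the reduction chain above.
# The "eating, breathing, walking" step is unquantified in the
# text, so it is omitted (an assumption on my part).

cells = 10_000_000_000_000        # total cells
cells *= 0.10                     # only 10% are used
cells *= 0.01                     # drop the 99% occupied with sex
cells *= 0.10                     # drop the 90% spent on Simpsons reruns
cells *= 0.10                     # drop the 90% of subjective Bayesian "bunk"
cells *= 0.01                     # drop the 99% generalized away by perception

cpu_hz, brain_hz = 2_000_000_000, 100
speedup = cpu_hz // brain_hz      # the 20,000,000:1 advantage from the text

print(f"{cells:,.0f} cells to simulate, {speedup:,}:1 speed advantage")
```

With the unquantified step skipped, the chain bottoms out around a million cells, which is indeed the sort of number a single machine can entertain.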
That is good enough for a simulation, so I am going to try it. ..... OK, so here are the results:
It says, "I'm bored, what's new on YouTube?"
Oh well, maybe I should simulate actual intelligence instead of human intelligence.
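As an aside on the Bayesian point above: a tiny example shows why 'subjective' probability earns the complaint. Two observers who see the same evidence but start from different priors end with different conclusions. The numbers here are purely illustrative.

```python
# Minimal sketch of prior-sensitivity in Bayesian updating:
# identical evidence, different subjective priors, different posteriors.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

same_evidence = (0.8, 0.3)  # P(E|H), P(E|not H) -- illustrative values
optimist = posterior(0.50, *same_evidence)
skeptic = posterior(0.05, *same_evidence)
print(f"optimist: {optimist:.2f}, skeptic: {skeptic:.2f}")
```

Same data in, quite different beliefs out, which is the "too subjective" pitfall the paragraph above is poking at.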


Automated Intelligence

Mission of the infinite LOL cats