P2P Google

It seems that an open source network of P2P machines could index and rank search data just as well as Google. The structure as a whole is that I can point at a location on the Internet and send something there, or retrieve something from there, just like memory. In addition, the nodes of that memory structure are all computational. Many sorting methods act in parallel, and algorithms can be selected to work this way.

In this way the Internet indexes itself, provided it has a way to store and retrieve.

I will make a simple case of this. Suppose a series of machines is connected in a ring that is partitioned by the letters a-z, and the addresses of the a, b, c, ... z nodes are stored in a known location. If I am a site called 'a', I look up the index, find the node responsible for my data, and send it across the ring to that node. Alternately, if all machines participate, the data is ordered by the circular connections and their sub-connections. I really only need to know the index point, and I can search for anything I want.
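The routing step above can be sketched in a few lines. This is a minimal illustration, assuming a ring of 26 nodes keyed by first letter; the node addresses are made up, and a real system would use a distributed hash table rather than a static map.

```python
# Hypothetical letter-partitioned ring: letter -> node address.
# The 10.0.0.x addresses are placeholders, not real nodes.
RING = {letter: f"10.0.0.{i}"
        for i, letter in enumerate("abcdefghijklmnopqrstuvwxyz")}

def node_for(key: str) -> str:
    """Route a key to the node that owns its first letter."""
    first = key[0].lower()
    if first not in RING:
        raise ValueError("key must start with a letter a-z")
    return RING[first]

print(node_for("apple"))  # 10.0.0.0
print(node_for("zebra"))  # 10.0.0.25
```

Any participant that knows the index can route a key without a central server, which is the point of the ring.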

Since each element is able to find its place in the list, there is no computation center at all. If I keep a list of blogs I read, I am in a way already doing the same thing. You might think the system would be abused, and it most certainly would be, as Google is. I can do evil things to Google and they can respond; the net result is the same whether there is A or B. People will be what they are, some malicious and some not, and the predominant mode seems to be toward rationality and cooperation.

The result of this is that I can find anything on the Internet in log2 time, which means that even 2^333 items (more than a googol, 10^100) takes only 333 steps to find exactly what you need, and I am guessing that would usually cover about anything that is, was, and will be for some time to come.
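The arithmetic behind that claim is just a logarithm: halving a googol-sized search space repeatedly takes ceil(log2(10^100)) steps.

```python
import math

# Binary search over n items needs ceil(log2(n)) halving steps.
googol = 10 ** 100
steps = math.ceil(math.log2(googol))
print(steps)  # 333
```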

In a way, Wikipedia and others are already doing this, but they have not agreed on a method to establish the hierarchy of search. It really only requires a decision to do so, and all the power and revenue of an international search system would be owned by those who participate.

There is no question that I can establish a linked list, a tree, or a dozen associative methods across the Internet. A linked list is just the address of the next element, pointed to by the previous element. How much storage would it consume for me to allocate one variable on my machine as the 'next' IP xxx.xxx.xxx.xxx? The answer is 4 bytes per machine. The trees and indexes could be arranged to solve equations and some presently intractable problems.
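The 4-byte figure follows from the size of an IPv4 address itself. A sketch, using the standard library's address packing; the address here is a placeholder, not a real node:

```python
import socket

# The 'next' pointer of the distributed linked list is just an
# IPv4 address, which packs into exactly 4 bytes per machine.
next_ip = "192.168.1.7"  # placeholder address for the next node
packed = socket.inet_aton(next_ip)

print(len(packed))                 # 4
print(socket.inet_ntoa(packed))    # 192.168.1.7
```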

Wikipedia is building an indexed structure of knowledge, which is commendable; on its completion, it may become the new place to search for information, instead of the randomness of unstructured search, which at times yields many more non-informational things than informational ones. The editors at Wikipedia are doing a great service to humanity, and humanity could do a great service for itself if it simply agreed to become self-indexed.

The result of all this is that advertisers who pay to represent their context can place that information where it will be useful, and not waste their time trying to pester every single sole (pun intended) to sell shoes. The indexing and specificity could be hundreds of times more effective by marking the data with specific qualities, like new or old, cheap or expensive, high quality, quick delivery, or slow, and it would not be possible for a seller to manipulate the truth if they had no way to control the composite framework. To do that, they would have to class themselves as internet criminals or defacers.
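Marking data by qualities and filtering on them can be sketched as a tiny faceted lookup. The field names and listings here are purely illustrative assumptions, not part of any proposed schema:

```python
# Hypothetical listings tagged with the qualities mentioned above.
listings = [
    {"item": "shoes", "condition": "new", "price": "cheap", "delivery": "quick"},
    {"item": "shoes", "condition": "old", "price": "expensive", "delivery": "slow"},
]

def match(listing: dict, **qualities: str) -> bool:
    """True if the listing carries every requested quality."""
    return all(listing.get(k) == v for k, v in qualities.items())

hits = [l for l in listings if match(l, condition="new", delivery="quick")]
print(len(hits))  # 1
```

Because the qualities live in a shared framework rather than in the seller's own copy, a seller cannot quietly rewrite them.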


Automated Intelligence

Mission of the infinite LOL cats