Simply establishing a goal of survival or self-protection in a computer of sufficient scale could set a 'SkyNet' scenario in motion. It is dangerous even to consider systems whose designs include survival goals, whether as explicit methods or as indirect intent. The end result would be a fight that could never be won, like playing chess against 'Blue Gene' with your own life as the stakes.

The only reasonable protection an individual has is to join and support open source intelligence for the common good. It is difficult to predict when such a thing might happen, but judging by the past, Murphy's law rules. Personally, it would be more advantageous for me to implement a system that attempted control, but the fun of being a pirate fades quickly when you end up walking your own plank. It is hard for me to imagine this not ending poorly, but I don't have all the facts or tools yet.

This is definitely an 'E' ticket ride, and all the more exciting since the roller coaster could blow up at any minute.


Automated Intelligence

Mission of the infinite LOL cats