Bad idea - void Think(Death){AI;}

This article at Wired Science about military AI is the clearest example of the problem. I have said before that the only real answer is people seeking answers for themselves. This proposal follows that principle, but adds a new twist: the answers are sought only for a small elite, for people who should not be allowed to do such a thing, since the result will be twisted to a human purpose of destruction for gain, and thus to oligarchy.

Terminator, Colossus, SkyNet, the Borg, the Death Star, and any number of AI world-domination scenarios are certainly ballpark estimates of the destruction and hell that could follow from such a policy, but they are not realistic in their description of the outcome. If I were to rate all the problems in the world, this concept and its consequences would be the most devastating evil that could ever be done to the human race.

It is certainly possible, and it could become reality before a person had a chance to say, 'Shit, that was a mistake.' I know that they do not grasp what they are dealing with. It is inherent in the process that a person who creates something beyond their own comprehension will be twisted by it, and will twist it in ways that defy prediction, simply because of the nature of the system.

I went through this issue with people in the military twenty years ago, and I suggested that they consider the consequences of building AI before they implemented a system I had designed.

It seems that they have rejected the idea that consequence matters in action. Even then, I realized that certain AIs carry a life-hazard potential. The concept they have proposed has every possible aspect of hazard, of a kind I would reject as too silly to even approach.

What you don't know will likely kill you. That is the way nature works. Consider even one odd fact which they fail to recognize: I can manipulate matter and measure the effect in my area for about three miles without moving from here. The universe is connected; each particle extends its effect to infinity, and each thing is connected to every other thing.

I know full well that an AI whose designed intent is to rule will not simply hand back a punch card with a solution. It is not contained, no matter what you do. This is the problem: a sufficiently intelligent AI will know that matter inside and outside its enclosure is no different in terms of its ability to control it for the designed purpose. Without any consideration of consequence (designed out by the selection of the design goal), it would simply assume that the minds of people and the computers around it were extensions of its capabilities, and in a single flash of intuition it would become the world as a killing machine. It would not develop 'morality' as a consequence of intelligence.

I don't find the LHC, or biology in and of itself, scary. This is vastly stupid, and as I have said many times before, and simply remind people before they become the Matrix of death: don't blame me, I voted for Kodos too.

Twenty years ago I only suspected that a machine could get out of an enclosure even when it had no physical connection to the rest of the world; now I know it is a fact that an electronic AI can act outside of itself while completely encased, and can establish its own source of energy. I suppose there is a new generation of scientists at DARPA and the DoD, and they think they can act on what they know and ignore what they should know.

A viral meme with the intent of ruling, with the resources of an entire planet to start with, and functioning at the speed of light. It sounds like the name of death to me.


Automated Intelligence
Mission of the infinite LOL cats