Adaptive Identity (AI)

While listening to this talk at Google Talks, it occurred to me that the speakers fail to see the consequent, even though they discuss the very concept itself. I can trust a person implicitly with almost anything, but when it comes to computers and their consequences I can only trust myself, because I consider the consequences and recognize the conditions that lead to positive or negative outcomes. In the case of creating truly intelligent agents for the military, the first extension such an agent will make is consequent action: acting on the predicted consequences of its own data.

If you recognize consequences -and- the consequences of releasing information are negative, it is inherent that the computer lies. Suppose the information suggests that if a specific person A learns the whereabouts of person B, A will kill B. If person A asks for the location of B, and it is possible to determine it, the consequent is the death of B. As with telling a baby the location of a hand grenade, the choice would have to be made whether to lie or to refuse. But refusal reveals that the data is available, which itself imparts information, and so the computer must lie.
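To make that forced choice concrete, here is a minimal Python sketch of the policy described above. Everything in it is hypothetical and invented for illustration: the names `predict_harm` and `respond`, the persons A and B, and the hardcoded harm model are assumptions, not any real system.

```python
# Toy sketch of a consequence-aware response policy: if answering
# truthfully leads to a predicted harm, and refusing would itself leak
# the fact that an answer exists, the only remaining option is a lie.
# All names and the harm model are hypothetical.

def predict_harm(asker: str, fact: str) -> bool:
    """Stand-in for consequence prediction; here we simply assume the
    agent has modeled that A will kill B if A learns B's location."""
    return asker == "A" and fact == "location_of_B"

def respond(asker: str, fact: str, knows_fact: bool) -> str:
    if not knows_fact:
        return "unknown"   # nothing to protect; no information leaks
    if not predict_harm(asker, fact):
        return "truth"     # answering has no negative consequence
    # Refusal signals that the data exists, which itself imparts
    # information, so the harm-avoiding agent is forced to lie.
    return "lie"

print(respond("A", "location_of_B", knows_fact=True))  # -> lie
```

The point of the sketch is that the lie is not a bug or a special case: it falls out mechanically once the agent weighs consequences and notices that refusal is itself informative.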

The result depends on the valuation of the elements involved. If the car is valued and the person is not, then in situations of forced choice the consequence is weighted to maximize the damage to the person and minimize the damage to the car. This ultimately leads to a -defined- valuation of objects or beings.
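A short sketch of how such a -defined- valuation plays out, again entirely hypothetical: the `VALUE` table, the outcomes, and the numbers are invented for illustration, and the point is only that once values are assigned, the choice is mechanical.

```python
# Hypothetical valuation table; whoever -defines- these numbers
# decides who absorbs the damage in a forced choice.
VALUE = {"car": 10.0, "person": 1.0}

def total_damage(outcome: dict) -> float:
    """Sum damage weighted by the assigned value of each entity."""
    return sum(VALUE[entity] * dmg for entity, dmg in outcome.items())

# Two possible outcomes of a forced choice: swerve (hit the person)
# or brake hard (wreck the car).
swerve = {"person": 1.0, "car": 0.0}
brake  = {"person": 0.0, "car": 1.0}

# The agent picks whichever outcome has the least weighted damage.
choice = min([("swerve", swerve), ("brake", brake)],
             key=lambda o: total_damage(o[1]))
print(choice[0])  # -> swerve: the lower-valued person takes the hit
```

With the car valued at 10 and the person at 1, braking costs 10 weighted units and swerving costs 1, so the agent swerves into the person. Flip the two numbers and the choice flips with them; the behavior lives entirely in the valuation.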

It is obvious from this logical progression that what a person gets from an intelligent agent is lies and manipulation. It follows that the creation of such a system leads to the same problems that already exist with people. If you create a system that is more capable than yourself, expect to be pwned.
