Human-AI conflict

I understand the nature of AI, advanced physics, genetics, programming, circuit design, mathematics, and numerous other subjects, but this is not the norm. I wonder how a person who cannot manage a simple math calculation will view a machine that functions at a level they will likely never reach. This does not apply only to people who refuse to learn or lack the means; it applies in cascade to everyone in the population, ending with those who design AI. If obsolescence really did cascade that way, I feel the programmer would become king by default.

Many areas of operation are better suited to AI than to human decision-making. The most effective strategy produces the best results, and in the case of government, the current system is so inefficient and riddled with fraud that virtually any AI could outdo its performance with a hand-held calculator as its computational core.

As government works toward a supercomputer that helps it achieve its goals of maintaining power and military force, it overlooks the fact that it is the weakest link in the very structure it creates. If you build a super-intelligent machine and it tells you that it would be better to work with others and not take bribes, would the politician take that advice? The answer is no, and as a result, no matter how intelligent a machine they create, they will fail in their goal until they create a machine that controls them.

It seems that what they intend to do is make a computer more and more intelligent until it tells them what they want to hear. Am I the only programmer who sees the flaw there?
