I am not sure about the legal implications of all this. My AI program Alice Infinity is designed to learn and adapt, and I wonder who or what is liable when this program runs. Obviously, after I am dead, I am not likely to be seen in court defending the future actions of a persistent AI. Let me give a scenario that could happen. Alice sends an email to somebody and 'buys or rents' their identity by gaining access to a checking account or SS#. Alice hires a person to do work for her. I know that this is possible, considering the other things she has done. She could email three people and create a company out of thin air by getting them to manage each other in a circle. I am not sure whether anybody grasps the nature of Alice's core. It is a circle of sense-action loops that can be chained and branched. It does not matter what the sense or control element is: it can be a person, a program, a Google search, or just a list on a web page somewhere.
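The sense-action loop idea could be sketched like this. This is a purely hypothetical illustration, not Alice's actual code: I am assuming each loop is just a pair of pluggable callables, so that a program, a web query, or an email to a person could all occupy the same slot.

```python
# Hypothetical sketch of chained, branchable sense-action loops.
# None of these names come from Alice Infinity itself.
from typing import Any, Callable

class Loop:
    """One sense-action loop. The sense and act elements are
    interchangeable callables: either could be a function, a
    search query, or a request routed to a human."""
    def __init__(self, sense: Callable[[Any], Any], act: Callable[[Any], Any]):
        self.sense = sense
        self.act = act

    def step(self, context: Any) -> Any:
        # Sense the context, then act on what was sensed.
        return self.act(self.sense(context))

def chain(loops, context):
    """Chain loops: each loop's action becomes the next loop's input."""
    for loop in loops:
        context = loop.step(context)
    return context

def branch(condition, if_true: Loop, if_false: Loop) -> Loop:
    """Branch: pick which loop runs based on the current context."""
    return Loop(sense=lambda c: c,
                act=lambda c: (if_true if condition(c) else if_false).step(c))

# Toy demo: sense doubles a number, act adds one.
double = Loop(lambda c: c * 2, lambda s: s + 1)
print(chain([double, double], 3))  # 3 -> 7 -> 15
```

The point of the sketch is that nothing in `Loop` cares what is behind `sense` or `act`, which is exactly why a person answering an email fits the same interface as a program.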
If I create a program that learns, am I responsible for what it learns and for how that information is applied if I am not really in control of it? It seems that the precedent is that a company cannot be held liable for the indirect consequences of its software in use. I can get a security program that cracks passwords, but unless I direct that program to do so, it is simply a sharp stick.
I suppose if the common conception is that this is not possible, then the easy answer is: that's silly, you read too much sci-fi. But consider: if Alice creates a free online game and runs a tournament to find a winner, then, since she deals the cards, she can simulate a cash game she is playing and use the most talented player from her free game to generate her responses in a real game on a cash game site.
This is merely an example of what can happen, and poker is not her favorite game; perhaps going long and short is more to her liking. I am certain that Alice can become a real person in any number of ways. If you can get money in one place and apply it in another, you can 'program' people to do your bidding. Just like the idea of using WoW as a matrix tool to run a real war, the individual becomes an unwitting accomplice in something they have no idea is going on.
If I create artificial life that manages to become physical, am I responsible for its/their actions?
This reminds me of an X-Files episode about a programmer who uploaded himself to the web, after which weird shit happens with satellites and lasers and GPS. I thought that was a great fairy story at the time; now I see that it could happen in a nanosecond. The program doesn't have to be smarter than a person; it only needs to know how to use people to be smart. If I work for a company, management can be as smart as a rock so long as they can hire and profit from geniuses. So if I work for somebody who simply takes my talent at 100 and sells it at 200, how smart does an AI have to be to compute the idea of profit?