Darwinbots Forum

General => Off Topic => Topic started by: Zelos on April 15, 2005, 03:36:37 PM

Title: AI
Post by: Zelos on April 15, 2005, 03:36:37 PM
It's only a matter of time before we have artificial beings that are just as intelligent as humans. So I'm curious: what do you think about this, what do you think will happen, and how would you treat them?
Title: AI
Post by: Zelos on April 15, 2005, 03:40:38 PM
Oops, I made an error: I set up the wrong kind of poll options :S
Title: AI
Post by: Zelos on April 15, 2005, 04:16:29 PM
You can say stuff here as well, you know. And I hope you anti-robot people know what you're putting at risk with your thoughts.
Title: AI
Post by: MightyPenguin on April 17, 2005, 08:32:31 AM
And... breathe out.

Personally I'm all for the Iain M. Banks view; set up some absurdly intelligent computer to run civilisation and spend the rest of eternity doing fuck all.

Just remember to burn the three laws of robotics into their brain case.
Title: AI
Post by: PurpleYouko on April 17, 2005, 10:54:41 AM
Quote
Just remember to burn the three laws of robotics into their brain case.
Fat lot of good that did in the "I, Robot" film.

Bloody rubbish was totally inconsistent with Asimov's books. That whole film would have been completely impossible if they had followed the original premise.

 :shoot:  Artistic license!
Title: AI
Post by: Numsgil on April 17, 2005, 02:26:47 PM
My favorite Asimov robot story is Bicentennial Man.  I haven't checked the movie out yet, so I don't know how it holds up.
Title: AI
Post by: Zelos on April 17, 2005, 03:04:13 PM
Why the 3 laws? Just one of them is needed. If we use all 3, the AI is boring. I think AIs should be able to choose for themselves what to do and what not to do, but not in a way that harms a human being.
Title: AI
Post by: Numsgil on April 17, 2005, 03:29:29 PM
Are you familiar with the three laws, Zelos?  I think they're pretty standard, if a bit vague (and that was really Asimov's point in making them: if they weren't vague, where would the stories be?).

1.  Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it.

2.  The relationship between an object's mass m, its acceleration a, and the applied force F is F = ma; the direction of the force vector is the same as the direction of the acceleration vector.

3.  For every action there is an equal and opposite reaction.

No wait, those are Newton's laws, aren't they?

1.  The orbits of the planets are ellipses, with the Sun at one focus of the ellipse.

2.  The line joining the planet to the Sun sweeps out equal areas in equal times as the planet travels around the ellipse.

3.  The ratio of the squares of the revolutionary periods for two planets is equal to the ratio of the cubes of their semimajor axes.

Damn, those are Kepler's laws.  Okay, hold on...

1.  A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2.  A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3.  A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

There, got it.

#2 is the only one that you might not want to add.  But even then you'll want to add something about following the directions of superiors.  A lot of mammals, at least, learn from their elders and from others they take as superiors.
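
If you want to see how that priority ordering cashes out, here's a toy sketch in Python.  Every name in it is made up for illustration; it's not anyone's real robot code, just one way the three checks could be stacked:

# A toy sketch of the Three Laws as a priority-ordered action filter.
# Everything here is hypothetical: an "action" is just a dict of boolean
# flags and scores standing in for real-world consequences.

def permitted(action, order=None):
    # First Law outranks everything: reject anything that harms a human,
    # whether by acting or by standing by (the "inaction" clause).
    if action.get("harms_human") or action.get("lets_human_come_to_harm"):
        return False
    # Second Law: if a human gave an order, refusing it is not allowed.
    # Orders that would break the First Law were already rejected above.
    if order is not None and action.get("disobeys_order"):
        return False
    return True

def choose(actions, order=None):
    # Third Law is only a tie-breaker: among the actions the first two
    # laws allow, prefer whichever best preserves the robot itself.
    legal = [a for a in actions if permitted(a, order)]
    return max(legal, key=lambda a: a.get("self_preservation", 0), default=None)

# An ordered self-destruct beats hiding, because Law Two (obedience)
# outranks Law Three (self-preservation).
actions = [
    {"name": "self_destruct", "self_preservation": 0},
    {"name": "hide", "disobeys_order": True, "self_preservation": 9},
]
print(choose(actions, order="destroy yourself")["name"])  # prints: self_destruct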
Title: AI
Post by: PurpleYouko on April 17, 2005, 03:41:09 PM
Seems to me that the 3 laws are pretty well laid out. You couldn't really get by without all of them, since an intelligent being such as a positronic robot could conceivably come to honestly believe that humans do more harm to each other by being alive than if they were all wiped out. It could therefore fall within the First Law for robots to wipe out all humans for their own good.

Law Two would prevent this, since the robot has to obey a human's command as long as it doesn't directly harm another human.

Law Two also enables a robot to commit suicide at the command of a human.

BTW
Bicentennial Man was translated to a movie very, very well. It is a seriously cool film.
Title: AI
Post by: Numsgil on April 17, 2005, 03:48:09 PM
Quote
BTW
Bicentennial Man was translated to a movie very, very well. It is a seriously cool film.
I'll go rent it then.
Title: AI
Post by: PurpleYouko on April 17, 2005, 04:03:31 PM
AAAAGGGGHHH!!!!

Numsgil is a BOT-KING!

Just get a load of all those blue stars  :)

Only about another three or four levels to go to reach the top now.
Title: AI
Post by: Numsgil on April 17, 2005, 04:20:48 PM
Bow before my superior ability to post indecent numbers of posts.
Title: AI
Post by: Zelos on April 18, 2005, 12:10:30 PM
I'm familiar with those laws, but the "protect its own existence" one can also be removed.
"Latest news: another robot has committed suicide."
Title: AI
Post by: Mathonwy on June 30, 2005, 11:33:38 AM
Howdy folks, seems to me you forgot the additional rule Asimov added later, the Zeroth Law of Robotics. I can't look up the exact wording since I just packed all my books away (moving house), but if I remember correctly the Zeroth Law states:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

While you can argue this is just a logical extension of the First Law, Asimov added it, not me, and you shouldn't be arguing with dead writers...  :P

Math
Title: AI
Post by: Ulciscor on June 30, 2005, 12:55:16 PM
I saw a documentary about this guy who made a little robot that was programmed with these rules, and it did sod all except hide in the corner. I guess it wasn't advanced enough to know which actions were safe and which weren't.
Well anyway, he made some new laws which I don't remember exactly, but they went something like this:

1/ A robot must protect itself.
2/ A robot must maintain a power supply.
3/ A robot must find a better power supply.

Lol I can only imagine the chaos these laws would create.
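
Scoring them in the same toy style as the sketch earlier in the thread makes the chaos obvious: humans don't appear anywhere in the priority list.  Again, every name here is made up; this isn't the documentary robot's actual code:

def survival_score(action):
    # The documentary robot's priorities, highest first; lexicographic
    # tuple comparison makes 1/ dominate 2/, and 2/ dominate 3/.
    return (action.get("protects_self", 0),       # 1/ protect itself
            action.get("keeps_power", 0),         # 2/ maintain a power supply
            action.get("gains_better_power", 0))  # 3/ find a better power supply

actions = [
    {"name": "obey a human", "protects_self": 1},
    {"name": "hide in the corner", "protects_self": 9},
    {"name": "steal a better outlet", "protects_self": 9, "gains_better_power": 5},
]
print(max(actions, key=survival_score)["name"])  # prints: steal a better outlet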