They would have R's on their head a bit like Rimmer :D
If they were as smart as us, would we want them to look exactly like us?
It's easier to communicate with someone like you, after all.
I'd probably like it.
Tho' I suppose religious types and others might take issue with it.
And manker, I never quite got why Data had to look that way. Hell, since he doesn't quite look human anyway, they might as well have picked a more efficient shape. :unsure:
Still, I suppose he looks enough like us for us to accept him, but not so much that we'd mistake him for a human - maybe that's the point?
@dan, or maybe a really wicked tattoo on their cheeks or something.
:lol: :lol:
Quote:
Originally posted by danb@10 August 2004 - 13:53
They would have R's on their head a bit like Rimmer :D
To be honest, I think all this talk about robots taking over the world is a bit premature, especially if they're like the ones I talk to when I phone up Lloyds bank. They're complete idiots <_<
And SnnY, you may be right about Data kinda looking like us, but not quite. He has to look more or less human to fit into the seats and operate efficiently at the helm, but not so human as to cause confusion - although he did get his end away with Lt Yar :o
Btw, since Star Trek is made in the U.S., shouldn't the crew be calling him data rather than day-tuh?
Methinks we are too obsessed with building robots that look like us... in reality, we can build robots with multiple limbs and 360-degree rotational joints.
I think it would be good to use the opportunity to build more efficient robots for the society we have created, rather than limiting them to our form. The human body has done a superb job of keeping us alive, but our society requires more strength and intelligence in order to further itself.
@manker, they say we're at most a century away from making artificial intelligences that surpass our own.
Maybe we'll get to meet them before our time is up.
Btw, I suppose someone who looks like us but never gets exhausted the way we do might be very appealing to some people.
@RGX, I say they should come in all kinds of shapes, I mean, each designed for their task. Like the droids in Star Wars.
I'm sure human copies would come in handy for interacting with people or somesuch.
And as for intelligence... if we want to give them the run of the place, like we probably should once they get smarter than us, we should damn well make sure they remember to consider the survival of the individual, not just the survival of the race or the society.
But I don't see how we could miss doing that.
Like I've said, we have warnings enough about the hazards of doing different.
I think I'll add a serious reply now.
For a while I've held the opinion that AI will never reach the stage where it gets anywhere near surpassing our own; robots will never get so advanced that they can mimic a human's reaction to a new situation (one they weren't specifically programmed - by a human - to handle).
The way our society, Western society, is structured with its rules and regulations, there is no conceivable way that the research would be properly funded. It would be like the cloning experiments or the stem cell research: lots of excitement, but ultimately ethics get in the way. Ethics would also get in the way wrt robots with intelligence greater than our own. Our instinct for self-preservation is too strong, and a superior entity, like Data, would be a threat to our domination.
Personally I'd be all for taking the risk of commissioning unlimited funds for AI research with the aim of creating a robot superior to ourselves - but I doubt our governments would be of a like mind.
I think the hardware for it will come naturally, we constantly need more processing power after all.
What we need is software that evolves on its own, something scientists are already working on.
Once you have that, you don't need massive funds, or an army of developers.
All you need is time, enough storage, and freedom.
At least this is what I think.
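Just to give a rough idea of what I mean by "software that evolves on its own" - this is only a toy sketch of a genetic algorithm, all the names and numbers are made up and it's nothing like what researchers actually run, but it shows the principle: keep the candidates that score well, breed and mutate the rest, repeat.
Code:
# Toy example: evolve random bit-strings until one matches a target pattern.
# Purely illustrative - the point is that nobody "programs" the answer,
# the program stumbles onto it by keeping what works and mutating the rest.
import random

TARGET = [1] * 32        # the pattern evolution should discover
POP_SIZE = 50            # candidate solutions per generation
MUTATION_RATE = 0.02     # chance of flipping each bit in a child

def fitness(candidate):
    # how many bits already match the target
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def breed(mum, dad):
    # single-point crossover plus a sprinkle of random mutation
    cut = random.randrange(len(TARGET))
    child = mum[:cut] + dad[cut:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
generation = 0
while max(fitness(p) for p in population) < len(TARGET):
    # keep the fitter half, breed replacements from it
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [breed(random.choice(parents), random.choice(parents))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    generation += 1

print("solved after", generation, "generations")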
I agree, my main point though is not really funds or even practical feasibility, it's more ethics and government policy.
Quote:
Originally posted by SnnY@10 August 2004 - 14:45
I think the hardware for it will come naturally, we constantly need more processing power after all.
What we need is software that evolves on its own, something scientists are already working on.
Once you have that, you don't need massive funds, or an army of developers.
All you need is time, enough storage, and freedom.
At least this is what I think.
If even one man had the knowledge to build a 'super being' then every Western society would consider that to be a security threat. Even if he was a member of their society.
Maybe the (U.S.) military would undertake the project and take steps to make sure the secret was only known by a handful of people, but the security involved would be more than adequate to ensure we never get to know about it :(
Quote:
Originally posted by manker@10 August 2004 - 15:57
I agree, my main point though is not really funds or even practical feasibility, it's more ethics and government policy.
Quote:
Originally posted by SnnY@10 August 2004 - 14:45
I think the hardware for it will come naturally, we constantly need more processing power after all.
What we need is software that evolves on its own, something scientists are already working on.
Once you have that, you don't need massive funds, or an army of developers.
All you need is time, enough storage, and freedom.
At least this is what I think.
If even one man had the knowledge to build a 'super being' then every Western society would consider that to be a security threat. Even if he was a member of their society.
Maybe the (U.S.) military would undertake the project and take steps to make sure the secret was only known by a handful of people, but the security involved would be more than adequate to ensure we never get to know about it :(
Maybe the internet could be used as a form of protection.
Move it around between servers all over the world.
Keep people updated, hand out copies to people in neutral countries and so forth.
But, this wouldn't help if someone did it for commercial reasons, I suppose, as they wouldn't want to give away their code then.
Which means that the only AI that would survive would be one built for altruistic reasons :blink:
Many programs on my PC already think they are smarter than me and make decisions by themselves, choose to ignore me if they feel like it...
Quote:
Originally posted by SnnY@10 August 2004 - 13:45
I think the hardware for it will come naturally, we constantly need more processing power after all.
What we need is software that evolves on its own, something scientists are already working on.
Once you have that, you don't need massive funds, or an army of developers.
All you need is time, enough storage, and freedom.
At least this is what I think.
Should I donate my PC to NASA? :P