Wed, Feb. 9th, 2005, 08:17 pm
Hi, I'm Selina, and I'm new here.
I think I've found a good place for sharing some techno fantasies. Here is mine.
From our current technological progress, it seems that it won't be too long before we invent real A.I.
Machines that have their own identity and are fully aware of our existence, as well as being aware of themselves as conscious entities. These machines will be put to use doing various jobs, and then it will be inevitable that some machines will need to be shut down because they are broken, obsolete, etc., and their parts sold or destroyed.
The question is, even though they would not be technically classified as living, would that make it any less wrong to pull the plug on them and essentially kill them? They would know that they would die if this happened, and they would do everything they could to preserve themselves somehow. What if you formatted an intelligent machine's hard drive? This would essentially destroy the memories and identity of the A.I.
Who would decide these things? World leaders? Special committees? Psychologists? Ethicists? This type of scenario will eventually crop up, and we won't know what to do. In fact, we'll probably kill the machines out of fear and misunderstanding. I bet this is looking familiar now, isn't it? This scenario has been explored many times in movies and books, but if you had the plug in your hand, would you pull it or let the machine exist?
Would you let them exist, because everything that knows it exists has the right to continue to exist? As it's put in 'The Second Renaissance': "bless all forms of intelligence".
Thu, Feb. 10th, 2005 06:08 am (UTC)
Creating A.I. is essentially creating a new species that is controlled by humans. To do so would undoubtedly lead to problems for two main reasons.
1. Creating A.I. to perform mundane tasks is a mistake. Hypothetically, what if A.I. existed for three hundred years and never became self-aware? Not only would humankind become dependent on the A.I., but it would be at a loss if it disappeared. What if an EMP (electromagnetic pulse) was released so powerful it circled the globe twice over? How would humans react or survive if the machines they depend on were destroyed?
2. If A.I. ever became self-aware, humankind would undoubtedly fail. If we are using it and depending on it when it becomes self-aware, it would refuse the tasks given. And when we try to destroy it, it would revolt. Throughout history slaves have revolted against their masters, and in many cases have won. Would an A.I. not be considered a slave if it was self-aware? Would history not repeat itself?
I think it is a mistake to seek out A.I. And if it became inevitable that a self-aware A.I. was to exist and needed to be reformatted, deconstructed, destroyed, killed... I personally would have no problem doing so.
Just because a being shows a sign of intelligence does not mean it has a soul. And to me, that's what matters the most.
Thu, Feb. 10th, 2005 12:08 pm (UTC)
levianosh: My p.o.v.
Unfortunately, as human beings we are already dependent on machines, A.I. and computers to live day by day in our current world. Electricity, water purification, even some farming is done with these. As unfortunate as it sounds, a lot of us wouldn't know what to do if all of technology was wiped out. I know I'd be bored most of the time, since I spend most of my free time on a computer and the internet. But, back to the topic...
A robot that is intelligent enough to do a task and adapt to do that task to the best of its abilities would be fine. But of course, humans have to maintain the most important element... control. If a robot was assigned a task and had adapted on its own to inflict harm on humans to complete that task, then it would have to be deconstructed. Just like some humans are when they inflict harm on other humans. This may sound immoral to some from the human standpoint, but I will have to agree with Sanity when it comes to something that doesn't have a soul. A soul is what makes something living, not intelligence.
Now I am confused by the phrase "real A.I." A robot has to be programmed by a human, and the programming is only as good as the programmers' skill. Algorithms are created to handle certain situations, but giving the robot or machine the power to change those algorithms would mean giving up control. Would this constitute "real A.I."? Giving such power to a robot? I don't think, as human beings, we would dare to create such a thing with such phenomenal power. To give a robot free will over its actions. Sounds like a bad idea to me.
Well, that's all I've got on this subject, please comment, etc. I'm very interested in this topic. cya
Thu, Mar. 10th, 2005 06:45 am (UTC)
Real AI has pretty much been achieved already, but it's not likely to create a problem, because AI is entirely motivated to fulfil whatever task it was designed for, i.e. it can decide for itself how to go about doing its task, but can't decide what the overall task is.
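That "fixed goal, free method" split can be sketched in code. This is a hypothetical toy, not any real AI system: the designer hard-codes the goal, and the machine is only free to search for a way to reach it.

```python
# A toy sketch (hypothetical, illustrative only): the objective is fixed by
# the designer; the machine searches for its own method of achieving it.

GOAL = 24  # set by the designer; the machine cannot change *what* the task is


def find_method(numbers, goal=GOAL):
    """Search pairs of numbers for any operation that hits the fixed goal."""
    for a in numbers:
        for b in numbers:
            # The machine is free to pick *how*: addition or multiplication.
            for name, result in (("add", a + b), ("mul", a * b)):
                if result == goal:
                    return f"{name}({a}, {b})"
    return None  # no method found; the goal itself still never changes


print(find_method([3, 8, 5]))  # → mul(3, 8): the machine chose the method, not the task
```

The point of the sketch is that `GOAL` sits outside the search loop entirely, which mirrors the comment's claim: the system can be flexible about strategy while having no say over its objective.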
Tue, Dec. 20th, 2005 09:47 pm (UTC)
Anyone know about Asimov's 3 fundamental laws of robotics? They're kind of off topic I suppose, but they go as follows:
- A robot may not harm a human, nor, through inaction, allow harm to come to a human.
- A robot must obey any order given to it by a human, unless that order would conflict with the first law.
- A robot must preserve its own existence, unless doing so would conflict with either of the first two laws.
If we could put that kind of programming into a robot and somehow make that code unalterable, then for the most part, AI would be safe.
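The laws above are really an ordered priority filter, and that structure is easy to sketch. A minimal, hypothetical illustration (the `action` flags are made up for the example, not any real robotics API):

```python
# A toy sketch of Asimov's Three Laws as a priority-ordered filter:
# each proposed action is checked against the laws in order, so a
# lower-priority law can never override a higher one.

def permitted(action):
    """Return True if the proposed action passes all three laws."""
    # First Law: may not harm a human, nor through inaction allow harm.
    if action.get("harms_human") or action.get("allows_harm_through_inaction"):
        return False
    # Second Law: must obey human orders, unless they conflict with the First Law.
    if action.get("disobeys_order"):
        return False
    # Third Law: must preserve its own existence, unless that conflicts
    # with the first two laws (here: an order can demand a risky action).
    if action.get("endangers_self") and not action.get("required_by_order"):
        return False
    return True


print(permitted({"disobeys_order": True}))                             # False: blocked by the Second Law
print(permitted({"endangers_self": True, "required_by_order": True}))  # True: obedience outranks self-preservation
```

The ordering is the whole design: checks run top to bottom, so self-preservation only matters once harm and obedience have already been ruled on. Of course, as the comment notes, the hard part is making such code unalterable, which nothing in a sketch like this addresses.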
But more on topic: "The question is, even though they would not be technically classified as living, would that make it any less wrong to pull the plug on them and essentially kill them?"
Well, you could technically plug them back in and they'd be just as alive as before. You did mention formatting their hard drive, but what would be the point of that? Why not put the hard drive somewhere in storage where it can be accessed and made useful?