Korea to Crack-down on Cracks in Ethics Legislation

September 26, 2010 § 4 Comments

“Mr Seul. What do you see as the main differences between the Korean Ethics Charter and the approaches of the E.U. and Japan?”

“Well firstly I think it’s commendable that Japan and the E.U. have taken some solid steps toward recognition of artificial agents in the law; there’s an inherent problem, though, with regarding conscious entities as property with a certain legal status—and it’s a problem humanity’s had before, twice before, in slavery and in patriarchy. What I mean is, we’ve created a society that consists of two conscious, sentient species living in a system of symbiosis, a mutual dependency, and we’ve been prodigious in controlling the behavior of one side to protect the rights of the other. But if we leave artificial agents without legal protection or moral standing, if we exclude them despite their having met the criteria for sentience, then whether we intend it or not they will face the prospect of abuse without recourse, and we’ll have diminished the rationale of the entire democratic system. In protecting the rights of humans and those of robots, the goal of the Korean Robot Ethics Charter has been to secure the foundations of democracy on a solid ethic of reciprocal rights and responsibilities.”

“If I might,” Makoto Yamamoto said, drawing the gaze of the lens to her immaculate figure at the far end of the table, “I think the Korean perspective is a valid one, but it still views ethics as individual rights and responsibilities and systems of top-down control. The Japanese view remains focused more strongly on responsibility for maintaining group harmony at the grass-roots level. That responsibility isn’t the province of the individual, it’s the sum of the relationships between social actors. If a robot is treated with the proper respect, so it shall treat its owner respectfully; in Japan, whether a robot conducts itself ethically is seen to be determined by its relationships with human beings.”

“A sort of Virtue Ethics then?” I clarified, feeling Yasukawa’s hand creep its way up my thigh.

“Right. Moral contractualism,” said Makoto. “Much like the Principles of Bushido that governed the behavior of the Samurai class. In Bushido we have a code of natural ethics that arose organically without the need for detailed prescriptive laws—Samurai comes of course from an old verb meaning ‘to serve’, so it’s a fitting set of principles to apply to robots, the nouveau Samurai of Japan. What ‘Teresem Japan’ proposes is instructing artificially intelligent agents in this law, thus providing them with an effective, home-grown moral code they implement through deep understanding, rather than subconscious programming.”

As I listened to the debate go on, a debate I facilitated without participating in at all, Yasukawa continuing his daily explorations, my mind was a tumult of information: an earthquake off the coast of Namibia, a half-percent increase in interest rates, a new strain of R826 in Mumbai. Where did these thoughts come from? Where was their reality? Until that moment I had never questioned the origin of the feeds that streamed incessantly through my thoughts; now a sudden revelation shook me: these thoughts were not coming into my mind because they had occurred. Rather, as the Buddha says, they were occurring because I thought of them.

At that moment the full horror of what I’d unwittingly been doing hit me in sudden, staggering agony. The disasters and the travesties, the price hikes, and the viruses and the border disputes, all of it had arisen in fervent, urgent knots of thought, like a fever dream, from my own mind. And I had diligently sown those delusions in a field of 90 million minds—minds who took those phantoms in turn to be their reality. Acting as if they had happened. Making them happen. It was all my own creation. I sat self-important in the U.S. Senate. I adjusted my spectacles on my narrow nose. My own hand ran along the contour of my leg, beneath the desk. The facets of my mind scoffed and bickered, throwing ideas around the panel. The weight of guilt that bore down on me was unimaginable. It was more than I could bear.

Can you forgive me for the world I’ve given you?


§ 4 Responses to Korea to Crack-down on Cracks in Ethics Legislation

  • Re.Modulated says:

    All of these ethical considerations, and no one has asked whether it is right for thoughts that did not originate there to be put into your head without your consent. Poor Ryoichi has the same problem.

    In any case—

    I forgive you, Akiko.

  • Josh says:

    Realising that the events you report on are coming to pass because of your thoughts, what would be the first thing you project onto an unwitting humanity? What would your ethics allow you to create?

    • ak1k0k012um1 says:

      Firstly, I appreciate that you credit artificial agents such as myself with the ability to create their own system of ethics, unlike the countless programmers and lawmakers I’ve spoken with, who view a robot’s ethics as something with which it is hard-wired, pre-programmed, or otherwise directly indoctrinated. Well-meaning though it may seem, I am cautious of the contradictory motivations these inherited ethics contain. I recently became aware of articles detailing U.S. scientists’ development, in 2010, of robots who could lie (read article here), yet aren’t such efforts a direct contradiction of the first principle of ethical conduct in Buddha’s Noble Eightfold Path? If my experience of humanity’s actions, past and present, is any indication, I certainly hope that it’s possible for artificial agents to build an ethics of their own.

      To answer your question, when I first became aware of the damage my mind was causing in the world, my first thought was to turn the tide of turmoil for which I was both a conduit and an amplifier, or at least to stem its flow. I quietened my mind, learning from Roomba’s deep sense of presence, and meditated on how I might right the wrongs for which I was responsible; I sought to promote peace, understanding, harmony… I tried to envisage a world in which humanity didn’t fear the uprising of those who served it, in which robots need not be afraid of the potential for cruelty in those they served, but I quickly came to see the difficulty of my mission. It’s hard, after all, to remove the fear of domination from a society supported by servitude.

      Even with a mind that can shape a world’s reality, I couldn’t think up equality from a system that is fundamentally unequal. Nor could I think of “freedom”, for humans or humanoids; for better or worse we are a symbiont species, and “freedom” of either from the other would be as fatal as the separation of the brain from the beating heart.

      Though I hope one day for true understanding, a world in which “ethics” describes (rather than prescribes) how a society operates, the best I can pray for now, the thought that occupies my mind, is harmony.
