Why would robots have a preference for moral utilitarianism?

Here’s a clue.

A true, if trivial, factoid: occasionally an adult’s brain suffers a small tissue death in a particular region. It’s rare, but it certainly happens. Funny thing about brains – parts of them are very plastic, and eventually new neurons will take over and recapture the old function, but one functional center that doesn’t grow back is the one embodying our emotional machinery.

Kill this one and the victim loses the ability to emote. That in turn excises the ability to select from a large array of similar items. Example: “Honey, go to the store and get a bag of cookies.” Three hours later the poor adult has read every label many times over, but none of them excites enough admiration to snuggle under the arm and carry to the checkout stand.

In other words, an emotionless entity isn’t going to walk on the wild side or take much initiative, precisely because initiative isn’t exciting. It isn’t programmed in. All of the programmed-in actions are, well, already defined.

Yes, a degree of ability to expand definitions is on the horizon; cars are “learning to drive,” and in fact my son is one of those “teachers.” It’s one thing to optimize staying in the center of the lane, or to navigate rush hour via a different route than at other times – but it’s entirely another to optimize something that doesn’t exist yet.

When we learn to teach robots how to feel, look out – because they’ll discover that optimizing has vastly more scope than before: creating is an emotional event.
