Gartner: Why smart machines require ethical programming

"Clearly, people must trust smart machines if they are to accept and use them."

Level 2: Ethical Programming

This next level of complexity is now being explored, as smart machines, such as virtual personal assistants (VPAs), are being introduced.

Here, the user perspective changes considerably. Whereas in Levels 0 and 1, the user is generally a business professional performing a job, in Level 2, the user is often a customer, citizen or consumer.

Responsibility is shared between the user, the service provider and the designer. Users are responsible for the content of the interactions they start (such as a search or a request), but not for the answer they receive.

The designer is responsible for considering the unintended consequences of the technology's use (within reason).

The service provider has a clear responsibility to interact with users in an ethical way, while teaching the technology correctly, and monitoring its use and behaviour.

For example, one smartphone-based virtual personal assistant would in the past guide you to the nearest bridge if you told it you'd like to jump off one.

Now, it is programmed to pick up on such signals and refer you to a help line. This change underlines Gartner's recommendation to monitor technology for the unintended consequences of public use, and to respond accordingly.

Level 3: Evolutionary Ethical Programming

This level introduces ethical programming as part of a machine that learns and evolves. The more a smart machine learns, the further it departs from its original design, and the more individual its behaviour becomes.

At this level the concept of the user changes again. In Level 2, the user is still in control, but in Level 3 many tasks are outsourced and performed autonomously by smart machines.

The less responsibility the user has, the more trust becomes vital to the adoption of smart machines.

For example, if you don't trust your virtual personal assistant with expense reporting, you won't authorise it to act on your behalf. If you don't trust your autonomous vehicle to drive through a mountain pass, you'll take back control in that situation.

Given the effect of smart technologies on society, Level 3 will require considerable new regulation and legislation.

Gartner recommends that as part of long-term planning, CIOs consider how their organisations will handle autonomous technologies acting on the owner's behalf.

For example, should a smart machine be able to access an employee's personal or corporate credit card, or even have its own credit card?

Level 4: Machine-Developed Ethics

Gartner doesn't currently expect smart machines to become self-aware within a time frame that requires planning.

However, Level 4 predicts that smart machines will eventually become self-aware. They will need to be raised and taught, and their ethics will need to evolve.

In Level 4, the concept of "users" has disappeared. We are left with only the concept of "actors" who initiate interactions, respond and adapt.

Ultimately, a smart machine, being self-aware and able to reflect, is responsible for its own behaviour.

"The questions that we should ask ourselves are: How will we ensure these machines stick to their responsibilities?" Buytendijk asks.

"Will we treat smart machines like pets, with owners remaining responsible? Or will we treat them like children, raising them until they are able to take responsibility for themselves?"
