Automobiles now contain
sophisticated AI systems to exercise control over the vehicle and respond
faster than humans can. For example, by using a wide set of sensors,
accident-avoidance systems can apply the brakes and even change lanes to help
drivers avoid accidents. In the coming years, these systems may exercise even
more control over your vehicle, as with the self-driving cars under
development by Google and major auto manufacturers.
But who controls the ethical
constraints by which automated robotic machinery like this operates? Consider a
choice among only bad options: you suddenly find that you must either
brake, knowing you will still hit the school bus ahead of you; swerve into
oncoming traffic; or swerve to the other side into a tree. Which choice would
you make? Which choice would you want your automated car to make? Should owners
be able to override the programming in their vehicles and adjust the
ethical parameters of their robotic systems? What should manufacturers do until
laws are passed regarding roboethics? Table 1.9 lists
different views on the issue of robotic machinery ethics. Where do your views
fall?