A resolution that would grant robots legal status in order to hold them “responsible for acts or omissions” may be passed by the European Parliament’s legal affairs committee.
If it passes, the resolution would effectively grant legal status to “robots” and categorize them as “electronic persons.” According to the Independent, MEPs have voted to propose such legal status, adding that the legislation is needed to address how the machines can be held responsible for their “acts or omissions.”
The draft report, tabled by Mady Delvaux-Stehres, states that current rules are “insufficient” for what it calls the “technological revolution.” The proposal pushes for the EU to establish “basic ethical principles” to avoid potential pitfalls.
Delvaux-Stehres’ resolution has easily gone through the EU’s legal affairs committee. The vote by the full parliament is expected to take place in February. The report suggests that robots and other manifestations of AI, i.e., bots and androids, will likely “leave no stratum of society untouched” as they are poised to “unleash a new industrial revolution.”
The more autonomous robots become, the less they can be considered mere “tools” of actors such as owners, users and manufacturers. This, the report argues, calls for new rules that focus on how a machine can be held, partly or entirely, responsible for its actions.
This legal framework is needed, according to Delvaux-Stehres, to ensure that robots “are and will remain in the service of humans.”
According to the Independent, just last year Google introduced a life-like “robot dog” that can clean houses, while in May the tech company announced that German researchers were already in the process of creating an artificial nervous system that could let robots experience pain.
The Independent notes, however, that this highlights an important moral dilemma: in some situations, for instance with self-driving cars, the robot driver might have to deliberately crash the car, effectively killing everyone in it, in order to avoid killing others outside of it.
© 2016 NatureWorldNews.com All rights reserved. Do not reproduce without permission.