Thread: Twilight 2020
#50
01-07-2019, 07:47 PM
StainlessSteelCynic
Registered Registrant
 
Join Date: Feb 2009
Location: Western Australia
Posts: 2,375

Quote:
Originally Posted by unkated View Post
<snip>
The Israelis have a couple of (armed) ground vehicles capable or an amount of autonomy (it can navigate a track; stop and report if anomalies detected). Distinctly, it cannot attack automatically - but that is more a matter of programming than a lack of capability.
<snip>
Uncle Ted
To go further with this: the programming issue is more a matter of legalities and liabilities than of any inability to create the software that would let an unmanned vehicle attack autonomously.
Another aspect of the situation is that in some cases communications difficulties have reduced human oversight of robots used in combat zones, so a push to have them "think" for themselves is not seen as a bad thing by some people.

There's enough research, and even practical deployment, of target recognition software to show the idea is viable - e.g. traffic monitoring systems that can single out specific vehicles, such as heavy trucks using roads they aren't supposed to. Refining the ability would probably be a matter of using several different sensors to confirm that a potential target is an armed enemy, rather than changing the recognition software itself.
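To make the "confirmation from several sensors" idea concrete, here's a minimal sketch of that kind of voting logic. Every sensor name, class label, and threshold below is hypothetical - this illustrates the principle, not any fielded system:

```python
# Illustrative sketch of multi-sensor target confirmation.
# All sensor inputs, classes, and thresholds here are invented for the example.

def confirm_target(visual_class, thermal_signature, radar_cross_section, weapon_detected):
    """Require independent sensors to agree before a target is 'confirmed'.

    Returns True only when every check passes; any single ambiguous reading
    fails the vote, which in practice would defer the decision to a human.
    """
    checks = [
        visual_class in {"tank", "apc", "armed_technical"},  # image recognition pass
        thermal_signature > 0.7,    # running engine / hot barrel (normalised 0-1)
        radar_cross_section > 5.0,  # vehicle-sized radar return, square metres
        weapon_detected,            # separate weapon-recognition pass
    ]
    return all(checks)

# A civilian heavy truck matches on size and heat but fails the other checks:
print(confirm_target("heavy_truck", 0.9, 12.0, False))  # False
print(confirm_target("tank", 0.9, 12.0, True))          # True
```

The point of requiring all checks to pass is that no single sensor glitch can, by itself, turn a bystander into a "target" - which is exactly the liability concern driving the hesitation.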

Many people are not happy with the idea of "armed robots", let alone the idea of those "armed robots" having the discretion to attack as their AI decides... echoes of Skynet and the Terminator...
Do you remember back in 2007 in South Africa, when a 35mm AA system opened fire on some troops, killing 9 of them and injuring others? That AA system wasn't even autonomous; it apparently just glitched.
https://www.wired.com/2007/10/robot-cannon-ki/

But the push for combat robots with more autonomy was being pursued even with that sort of negative publicity.
https://www.wired.com/2007/10/roomba-maker-un/

The legal/liability aspect plays a big part, as obviously very few governments want the bad publicity that would follow if an autonomous unit shot innocent bystanders.
This is similar to what's happening in the world of self-driving vehicles. In industrial areas, where there are no unauthorised or untrained personnel around, they are in use and working well.
There's a great barrier to their introduction as private vehicles, though, because governments have not yet defined who would be at fault if a self-driving car crashed into someone or something.