Kris Osborn
How soon will the US need to be prepared to fight armies of autonomous robots? The answer, while unclear in some respects, may be "pretty soon." As clichéd as "Terminator" comparisons have become in analysis of robotics, autonomy and AI, there is something quite real about this possibility.
The consequences are serious. While the Pentagon and the US services heavily emphasize "ethics" in AI and the need to keep a "human in the loop" for decisions about lethal force, there is little to no assurance that potential adversaries will embrace a similar approach. This introduces potentially unprecedented dangers not lost on the Pentagon, and it is one reason there are so many current efforts to optimize the use of AI, autonomy and machine learning in combat operations and weapons development.