🟢 Is the war in Gaza already changing what we can accept from AI?

As I was recently exploring this notion of 80%-OK vs. 99%-OK problems, I dove into how the current wars in Ukraine and Gaza have changed AI. For instance, how much leeway do you give to errors when a drone has to identify and lock onto a specific target in a war zone, let alone an urban one? You would expect any military system assisted by AI (if not fully autonomous) to be designed to produce, at the very least, 99%-OK solutions to the problems it has to solve.

Well, buckle up, because you might be surprised.