Permissive airstrikes on non-military targets and the use of an AI system have enabled the Israeli army to carry out its deadliest war on Gaza.
Fascinating article, thanks!
This is the first I’ve heard of this being implemented. Are any other militaries using AI to generate targets?
I certainly hope that, unlike many AI systems, they can see what criteria led to targets being selected, because this often happens in a black box. Without that visibility, oversight and debugging become difficult if not impossible. Is the point to ensure that no human can be blamed if it goes wrong? This article certainly seems to be making the case that whatever human verification exists is insufficient and that the standards for acceptable civilian casualties are lax.
It would be nice if some of their sources would go on the record, if these accusations regarding target selection are true; I'd like to see the IDF respond to them and clarify its standards for selecting targets and what it considers acceptable collateral damage. Though there are probably serious consequences to whistleblowing during wartime, so I'm not holding my breath.