As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
When it comes to deadly “mistakes” in a military context, there should be strong laws barring any “appeal to AI fuckery” as a defense, so that militaries don’t get comfortable making such “mistakes.”