Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies

In the coming months, NYC Mayor Bill de Blasio will announce a new task force on “Automated Decision Systems” — the first of its kind in…

While these systems are already influencing important decisions, there is still no clear framework in the US to ensure that they are monitored and held accountable.¹ Indeed, even many simple systems operate as "black boxes": they lie outside the scope of meaningful scrutiny and accountability. This is worrying. If governments continue on this path, they and the public they serve will increasingly lose the ability to understand how decisions are made, and thus to identify or respond to bias, errors, or other problems. The urgency of this concern is why AI Now has called for an end to the use of black box systems in core public agencies. Black boxes must not prevent agencies from fulfilling their responsibility to protect basic democratic values, such as fairness and due process, and to guard against threats like illegal discrimination or deprivation of rights.