The GDPR explicitly requires controllers to inform individuals of decisions made about them by automated or artificially intelligent algorithmic systems. The controller must inform the individual up front about the existence of the decision-making activity and provide information about its underlying logic, significance, and envisaged consequences. There is growing debate, however, about whether the GDPR also provides individuals with a right to an explanation of the output of such automated decision-making.
In their post for the International Association of Privacy Professionals (a summary of a longer article first published on the Oxford Business Law Blog), Morrison & Foerster Senior Of Counsel Lokke Moerel and Associate Marijn Storm argue that the GDPR does provide individuals with a right to explanation, but they also note that both sides of this debate may be missing the forest for the trees. Under the GDPR, controllers are ultimately accountable for the outcomes of their automated decision-making processes, so in addition to informing individuals, they must also be able to justify that the correlations applied in the algorithm are meaningful and unbiased.
Read more, including steps controllers can take to ensure algorithmic accountability, in Lokke and Marijn's post for the IAPP.