
Are we ALGO-Ready?

December 09, 2019

Maria Sevlievska is a Bulgarian legal academic writing in the UK and US context. She holds an LLB from the London School of Economics and is currently pursuing an LLM in Information Technology Law there. She is an incoming Trainee Solicitor, and previously participated in the creation of the UK’s first student-led Law Review and served as its Editor. 

The notion of a tech-savvy local government conjures a distant utopia. Yet news has emerged that London Councils are deploying Xantura’s predictive analytics system to detect children at risk of maltreatment. The system collates and “analyses various data sources, including school and health records, to judge families’ risk scores” (Graham, 2017). The aim is to allow London Councils to act before a crisis occurs, with the eventual goal that “screening …becomes fully automated, with data being shared across all…agencies” (ibid).
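The technical details of Xantura’s model are not public, so the following is purely an illustrative sketch, in Python, of what multi-source risk scoring of the kind Graham describes might involve. The feature names, weights and referral threshold are invented for exposition and bear no relation to the real system.

    # Hypothetical illustration only: Xantura's actual model is proprietary and not public.
    # The point is the general shape of multi-source risk scoring: records from several
    # agencies are joined per family and combined into a single score.
    from dataclasses import dataclass

    @dataclass
    class FamilyRecord:
        family_id: str
        school_absence_rate: float   # fraction of school days missed (invented feature)
        missed_health_visits: int    # count of missed appointments (invented feature)
        council_debt: bool           # household in arrears with the council (invented feature)

    def risk_score(record: FamilyRecord) -> float:
        """Combine the features into a 0-1 score with hand-picked, illustrative weights."""
        return (
            0.5 * record.school_absence_rate
            + 0.3 * min(record.missed_health_visits / 5, 1.0)
            + 0.2 * (1.0 if record.council_debt else 0.0)
        )

    def screen(records, threshold: float = 0.6):
        """Return the IDs of families whose score meets the referral threshold."""
        return [r.family_id for r in records if risk_score(r) >= threshold]

    families = [
        FamilyRecord("F-001", school_absence_rate=0.40, missed_health_visits=4, council_debt=True),
        FamilyRecord("F-002", school_absence_rate=0.05, missed_health_visits=0, council_debt=False),
    ]
    print(screen(families))  # ['F-001'] under these illustrative weights

Even in this toy form, the design choices that matter are visible: which agencies’ records are joined, how the features are weighted, and where the referral threshold sits are all fixed by the system’s designers rather than by the families affected, which is precisely where the concerns discussed below arise.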

Despite the proposal’s instinctive appeal, we must ask whether fully automated screening should be employed here. This post argues that it should not.

Practicalities

Admittedly, the proposal to fully automate screening is not entirely ungrounded. First, with fully automated screening, social workers will be able to refocus their efforts from assessing referrals to more prompt and effective early intervention. Second, fully automated screening has incontestable cost advantages: financially strained London Councils would benefit from savings of $1,263,000 per annum, enabling more efficient budget allocation (McIntyre & Pegg, 2018). Finally, eliminating the human element in case selection avoids the risk of wasting resources on emotion-driven decision-making.

The benefits of fully automated screening, however, are overstated. Crucially, the aforementioned advantages come at a significant cost: privacy. Xantura’s algorithm “uses statistics from multiple agencies”, accessing information about school attainment and debt (Graham, 2017). Although such monitoring may be justifiable for the protection of at-risk children, aggregating data from multiple sources is invasive, giving London Councils questionable insight into the private lives of the families identified (Degeling & Berendt, 2017). Furthermore, with fully automated screening, algorithmic bias becomes a pertinent issue. Since parents are assessed on the basis of their connections with others rather than their actual behavior, and absent human judgement, lower-class families may be unfairly discriminated against (Mittelstadt, 2016).

The Law

The case against fully automated screening is equally compelling from a legal perspective. This is because the only legal tool we have to regulate algorithmic decision-making, the General Data Protection Regulation, is entirely unfit for purpose. In theory, the Regulation imposes a series of obligations on ‘data controllers’ and accords ‘data subjects’ corresponding rights (Scott, 2017). The obligations include the requirement that personal data be processed fairly and lawfully, while the rights range from being informed about which of your data is being processed to ordering the controller to cease processing your information (ibid). As with other areas of the law, however, the legislation’s application in practice falls short of its intended purpose.

First, fully automated screening should not be implemented, since algorithmic inferences do not unambiguously merit the law’s protection. This stems from the General Data Protection Regulation’s characterization of personal data as “any information relating to an identified or identifiable natural person” (4(1)). Such a definition is difficult to square with anonymized profiles, despite non-binding statutory supplements and European Court of Justice case law suggesting an expansive reading of personal data (Wachter & Mittelstadt, 2018). Absent explicit legislative text recognizing algorithmic profiles as personal data, inferences relating to affected families (bar those whose identities are ultimately discerned) may have no protection in law.

Second, the introduction of fully automated screening is undesirable because of the General Data Protection Regulation’s lax transparency standards. Because these fail to ensure that families are appropriately informed about Xantura’s data processing, exercising control over algorithmic profiles will prove difficult. For example, in imposing notification requirements on controllers, Articles 13 and 14 of the General Data Protection Regulation merely require that subjects are informed about the “categories” of data received, rather than its specifics (13(1)(e)). In addition, local councils may bypass disclosure requirements altogether by evidencing that these entail “a disproportionate effort, in particular for processing in the public interest” (14(5)(b)). Equally full of legal lacunae is Article 15, a provision which allows proactive data subjects to request information about the processing themselves. To evade disclosure, London Councils need only evidence that it may “adversely affect the rights and freedoms of others” (here, the at-risk children) (15(4)). Since there is no legal guarantee that affected families will be informed that fully automated screening is taking place, we must oppose it. This is especially so because knowledge of processing is an essential prerequisite to the exercise of the remaining data protection rights (Wachter & Mittelstadt, 2018).

A final reason to reject fully automated screening is that, under the General Data Protection Regulation, affected families lack genuine avenues for redress. Article 22, which purports to grant subjects the right “not to be subject to a decision based solely on automated … profiling” (i.e. a right to object (Kaminski, 2018)), is no more than a ‘paper dragon’. Indeed, it is rendered inapplicable when automated processing is authorized by law (22(2)(b)), an exception we may speculate the UK government would invoke in implementing fully automated screening. Although such a law must also lay down safeguards for subjects’ rights, profiled families are not guaranteed a mechanism to contest algorithmic decisions (in contrast to processing justified by consent or contract) (22(3)). As such, wrongly categorized families may lack access to justice, an unacceptable predicament considering the dangers outlined above.

Conclusions

Clearly, when it comes to fully automated screening, local councils are putting the cart before the horse. To strike an effective balance between its practical pros and cons, the General Data Protection Regulation must explicitly class algorithmic profiles as personal data, while limiting the palette of legal loopholes available to public-sector controllers. Regulators should also consider embedding code in algorithms to automatically audit legal compliance (Cate, 2017); a sketch of what such an audit hook could look like follows below.
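None of the sources cited here prescribes a particular implementation of such auditing. Purely as a hypothetical sketch of what “embedding code to automatically audit legal compliance” could mean in practice, the following Python wrapper refuses to run a scoring routine unless a lawful basis is recorded and every data category the model draws on has been notified to the affected families, and it logs each run for later inspection. All names and fields are invented for illustration.

    # Illustrative sketch only: not a real compliance framework, and not how any
    # particular council or vendor implements auditing. Names and fields are invented.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("compliance-audit")

    class ComplianceError(Exception):
        """Raised when a processing run lacks a recorded legal justification."""

    def audited_run(scoring_fn, records, *, lawful_basis, notified_categories):
        """Run scoring_fn only if a lawful basis is recorded and every data category
        the model draws on has been notified to data subjects (cf. Arts. 13-14)."""
        used_categories = {"school", "health", "debt"}  # assumed inputs of the model
        if not lawful_basis:
            raise ComplianceError("No lawful basis recorded for this processing run.")
        missing = used_categories - set(notified_categories)
        if missing:
            raise ComplianceError(f"Data categories not notified to subjects: {sorted(missing)}")
        log.info("time=%s basis=%s categories=%s n_records=%d",
                 datetime.now(timezone.utc).isoformat(), lawful_basis,
                 sorted(used_categories), len(records))
        return scoring_fn(records)

    # Example call, reusing the hypothetical screen() function sketched earlier:
    # audited_run(screen, families,
    #             lawful_basis="public task, Art. 6(1)(e)",
    #             notified_categories={"school", "health", "debt"})

The point of such a hook is not that it would cure the transparency and redress problems identified above, but that it would at least leave an audit trail against which those rights could later be enforced.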

While the above analysis focuses specifically on the UK, non-UK states should proceed with caution when considering the use of algorithms in furtherance of social goals, whether for detecting child maltreatment or otherwise. This is especially important for the United States (“US”), where Pittsburgh has recently become the first US jurisdiction to employ a predictive analytics algorithm for child abuse hotline screening. Technology is merely an enabler, not a panacea.

End Notes

Art. 4(1), Regulation (EU) 2016/679 (General Data Protection Regulation). Available at: https://gdpr-info.eu/art-4-gdpr/.

Art. 13(1)(e), Regulation (EU) 2016/679 (General Data Protection Regulation). Available at: https://gdpr-info.eu/art-13-gdpr/.

Art. 14(5)(b), Regulation (EU) 2016/679 (General Data Protection Regulation). Available at: https://gdpr-info.eu/art-14-gdpr/.

Art. 15(4), Regulation (EU) 2016/679 (General Data Protection Regulation). Available at: https://gdpr-info.eu/art-15-gdpr/.

Art. 22(2)(b), Regulation (EU) 2016/679 (General Data Protection Regulation). Available at: https://gdpr-info.eu/art-22-gdpr/.

Art. 22(3), Regulation (EU) 2016/679 (General Data Protection Regulation). Available at: https://gdpr-info.eu/art-22-gdpr/.

Cate, F. H., Kuner, C., Svantesson, D. J. B., Lynskey, O., & Millard, C. (2017). Machine Learning with Personal Data: Is Data Protection Law Smart Enough to Meet the Challenge? Retrieved from https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?referer=https://www.google.co.uk/&httpsredir=1&article=3633&context=facpub

Degeling, M., & Berendt, B. (2017, May). What’s wrong about Robocops as Consultants? - A technology-centric critique of predictive policing. AI and Society, Online First.

Graham, J. (2017, September 18). London uses data to predict which children will be abused. Retrieved from Apolitical: https://apolitical.co/solution_article/london-uses-data-predict-which-children-abuse/

Kaminski, M. E. (2018). The Right to Explanation, Explained. Berkeley Technology Law Journal, 18-24.

McIntyre, N., & Pegg, D. (2018, September 16). Councils use 377,000 people’s data in efforts to predict child abuse. Retrieved from The Guardian: https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data and Society, 1-21.

Wachter, S., & Mittelstadt, B. (2018). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review.