Regulating algorithms and what this means in delivery
It’s reasonable to expect that local and national governments should be digitally and data-enabled1 without compromising the public interest. However, evidence has shown that the unregulated application of AI in the public sector disproportionately impacts the rights of the poorest and most vulnerable2.
Recently, the UK’s attempt to use an algorithm to determine A-Level and GCSE grades widened the class divide and triggered public outrage. It seems that artificial tools can have a very human impact.
We are at the stage where we must standardize transparency practices, particularly in the UK, to ensure public confidence in AI services by building in transparency from the outset.
The current ad hoc response to data subjects exercising their individual rights is not sustainable, as AI tools are expected to become more prevalent. Maintaining trust is key to encouraging continued investment in AI development.
The impact of AI regulation on service and product design
Regardless of the mode of regulation, principles from both legislation and guidance will have tangible effects on how AI is deployed. A few areas are discussed below.
What’s clear from almost all indicative guidance is the significance of risk assessment, which extends both to known risks and to risks that may emerge as the tool is used for its intended purpose3. From a delivery perspective, the additional effort around risk measurement, mitigation, avoidance and data diligence may lengthen delivery timelines.
As regulation is enforced, certain types of data will lend themselves to different types of risk, so we may see government departments introduce department-specific codes of practice. These could flag categories of risk that delivery partners need to consider, factoring in lessons learnt from unforeseen risks in similar delivery programs. For example, the immigration department may view risks differently than local councils do. This risk management will be part of design and build, and will be revisited iteratively throughout the delivery process.
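To make "revisited iteratively" concrete, the sketch below models a minimal risk register entry that a delivery team could score and reassess at each iteration. It is an illustration only: the structure, field names and scoring scheme are assumptions, not taken from any department's code of practice.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """Hypothetical risk register entry; field names are illustrative only."""
    description: str      # e.g. "training data under-represents one applicant group"
    likelihood: int       # 1 (rare) to 5 (almost certain)
    impact: int           # 1 (negligible) to 5 (severe harm to individuals)
    mitigation: str       # planned or implemented mitigation
    owner: str            # who is accountable for the risk
    next_review: date     # when the risk is next revisited
    history: list = field(default_factory=list)

    def score(self) -> int:
        # Simple likelihood x impact score used to prioritise reviews.
        return self.likelihood * self.impact

    def reassess(self, likelihood: int, impact: int, note: str, review_date: date) -> None:
        # Keep the previous assessment so changes in risk stay traceable over time.
        self.history.append((self.next_review, self.likelihood, self.impact, note))
        self.likelihood, self.impact = likelihood, impact
        self.next_review = review_date
```

A register like this would be reviewed as part of each delivery iteration, with the history giving an audit trail of how risks were judged and mitigated over the life of the program.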
To mitigate risks on an ongoing basis, delivery partners must provide the necessary expertise around the potential ethical impacts of AI. Finding individuals to fill this role may not be as straightforward as finding a delivery or project manager.
Testing, testing
The UK Guidance4 states that:
- Users and developers will need to produce a policy specification, in other words, a statement of the purpose for which the tool being developed will be used
- The policy specification must not only inform the risk assessment; parties will also need to test their data against this specification, documenting how the data achieves or hinders the intended outcome while avoiding unintended effects or consequences
- Testing should be based on factors that include the quality, relevance, accuracy, diversity, ethics and size of datasets, with regular impact assessments (a minimal sketch of such checks follows this list). This is not an exhaustive list, but there must be a baseline against which these factors can be measured. For example, what is the right dataset size? The answer will differ depending on the use and context. Another question is who will decide on the baseline. Will it be self-regulated by public sector organizations themselves, or set by an independent organization? If the latter, it will involve an external approval process and introduce another layer of governance.
- A formal review is mandatory, at least quarterly, to challenge and test both the data and how well it delivers the intended outcomes. This will help manage the risk of unintended consequences and prevent risks from compounding over time. Arguably, the review points should be much more frequent and dynamic than the UK government’s guidance suggests. The harm from a biased algorithm can be felt very quickly, and catching it early should be at the top of any delivery partner’s agenda.
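To illustrate what testing against a baseline could look like in practice, the sketch below checks a dataset against a handful of the factors listed above: size, completeness, and representation of groups. The thresholds, field names and checks are assumptions for illustration; in practice the baseline would come from the policy specification and the relevant code of practice.

```python
from collections import Counter

# Illustrative baseline derived from a hypothetical policy specification.
# Thresholds are placeholders, not values from the UK Guidance.
BASELINE = {
    "min_records": 10_000,     # minimum dataset size for this use case
    "max_missing_rate": 0.05,  # at most 5% missing values across all fields
    "min_group_share": 0.10,   # each group must hold at least 10% of records
}

def check_dataset(records: list[dict], group_field: str) -> dict:
    """Run simple baseline checks: size, completeness and group representation."""
    results = {}
    results["size_ok"] = len(records) >= BASELINE["min_records"]

    # Completeness: share of missing (None) values across all fields.
    total_cells = sum(len(r) for r in records) or 1
    missing = sum(1 for r in records for v in r.values() if v is None)
    results["completeness_ok"] = (missing / total_cells) <= BASELINE["max_missing_rate"]

    # Representation: every group in the chosen attribute meets the minimum share.
    counts = Counter(r.get(group_field) for r in records)
    results["representation_ok"] = all(
        count / len(records) >= BASELINE["min_group_share"]
        for count in counts.values()
    ) if records else False

    return results
```

The outputs of checks like these would be documented as part of the impact assessment and re-run at every review point, so that drift in the data or its representativeness is caught early rather than quarterly.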
Rigorous testing of this type will no doubt be a time-intensive task, potentially slowing down delivery timelines and increasing costs.
Who is your development team?
It is essential to consider who is developing the AI tools and what conflicts of interest may arise. The interests of public sector organizations and private AI developers can, at times, be in opposition to each other. Both groups need to be brought together: businesses should be encouraged to invest and to inject private sector innovation, while the public interest, which public sector organizations exist to serve, must be upheld.
While creating and implementing a tool, do the developers have the broader public interest in mind? The UK Guidance asserts that multidisciplinary and diverse teams will ensure fair service for all users. Their varied perspectives help avoid omissions and bias in outcomes that can have grave consequences. Having advisors from various interest groups (industry, government and academia) will be critical in the future. It would be reasonable to have a role that acts as the interface between these groups, ensuring the quality of outcomes from both technical and policy perspectives.
To conclude, a more thorough AI design and delivery process is key to protecting the interests of every individual who may unwillingly be subject to AI-based decision making.
Being diligent and transparent about decision making is the only way to maintain the integrity of the Government and the trust of the public, who are already wary of the misuse of AI.
It’s time to change the narrative, and the first step is an operational and enforceable regulatory framework. With recent developments, the UK Government is leading the charge.
In November 2021, the Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) published the world’s first algorithmic transparency standard. It sets the tone for AI regulation grounded in building trust across the public sector, and it will be interesting to see the outcome and adoption of the standard.
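As a rough illustration of the kind of information a transparency record might capture, consider the sketch below. The field names and values are hypothetical and do not reproduce the official CDDO/CDEI schema; they simply show how purpose, data sources, risks and accountability could be recorded in a machine-readable form.

```python
# Hypothetical transparency record; field names are illustrative, not the
# official CDDO/CDEI schema.
transparency_record = {
    "tool_name": "Example eligibility triage tool",
    "owning_organisation": "Example local authority",
    "purpose": "Prioritise applications for manual review; no automated final decisions",
    "decision_role": "decision support only",  # a human remains accountable
    "data_sources": ["application form data", "historical case outcomes"],
    "risks_and_mitigations": [
        {"risk": "under-representation of some applicant groups",
         "mitigation": "representation checks at each review point"},
    ],
    "review_frequency": "monthly",
    "contact": "example-team@localauthority.example",
}
```

Publishing records along these lines is what turns transparency from a principle into something the public, and delivery partners, can actually inspect.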