Regulating algorithms and what this means in delivery

It’s reasonable to expect that local and national governments should be digitally and data-enabled [1] without compromising the public interest. However, evidence has shown that the unregulated application of AI in the public sector disproportionately impacts the rights of the poorest and most vulnerable [2].

Recently, the UK’s attempt to use an algorithm to determine A-Level and GCSE grades widened the class divide and triggered public outrage. It seems that artificial tools can have a very human impact.

We are at the stage where we must standardize transparency practices, particularly in the UK, to ensure public confidence in AI services by building in transparency from the outset.

The current ad hoc response to data subjects exercising their individual rights is not sustainable as AI tools become more prevalent. Maintaining trust is key to encouraging continued investment in AI development.

The impact of AI regulation on service and product design

Regardless of the mode of regulation, principles from both legislation and guidance will have tangible effects on how AI is deployed. A few areas are discussed below.

What is clear from almost every piece of indicative guidance is the significance of risk assessment, which extends both to known risks and to risks that may emerge from a tool’s intended purpose [3]. From a delivery perspective, the additional effort around risk measurement, mitigation, avoidance and data diligence may lengthen delivery timelines.

As regulation is enforced, certain types of data will lend themselves to different types of risk, so we may see government departments enforce department-specific codes of practice. These could flag categories of unforeseen risk that delivery partners need to consider, factoring in the lessons learnt from similar delivery programs. For example, the immigration department may view risks differently than local councils do. This risk management will be part of build and design, and iteratively revisited throughout the delivery process; one possible shape for that is sketched below.
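
To make "iteratively revisited" concrete, one lightweight option is a versioned risk register that is re-scored at each delivery milestone, keeping an audit trail of how each risk was assessed over time. This is a minimal sketch only: the fields, scoring scale and example entry are hypothetical assumptions, not anything prescribed by regulation.

```python
# Minimal sketch of a risk register revisited throughout delivery.
# All fields, scores and the example entry are hypothetical.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (severe)
    mitigation: str
    # Trail of (review date, likelihood * impact) scores for auditors.
    history: list[tuple[date, int]] = field(default_factory=list)

    def rescore(self, likelihood: int, impact: int) -> None:
        """Re-assess at each milestone and keep the previous scores."""
        self.likelihood, self.impact = likelihood, impact
        self.history.append((date.today(), likelihood * impact))


register = [
    Risk("Training data under-represents rural applicants", 3, 4,
         "Augment dataset; add representation check to the test suite"),
]
register[0].rescore(2, 4)  # revisited at the next delivery milestone
```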

To mitigate risks on an ongoing basis, delivery partners must provide the necessary expertise around the potential ethical impacts of AI. Finding individuals to fill this role may not be as straightforward as finding a delivery or project manager.

Testing, testing

The UK Guidance [4] states that:

  • Users and developers will need to produce a policy specification: in other words, a statement of the purpose of the tool being developed
  • Not only must the policy specification inform the risks, but parties will also need to test their data against this specification, documenting how the data achieves or hinders the intended outcome while avoiding unintended effects or consequences
  • Testing should be based on factors that include the quality, relevance, accuracy, diversity, ethics and sizing of datasets, with regular impact assessments. This is not an exhaustive list, but there must be a baseline against which these factors can be measured (a minimal sketch of such baseline checks follows this list). For example, what is the right size of dataset? The answer will differ depending on the use and context. Another question is who will decide on the baseline: will it be self-regulated by public sector organizations themselves, or set by an independent organization? If the latter, it will involve an external approval process and introduce another layer of governance.
  • A formal review is mandatory, at least quarterly, to challenge and test both the data and how well it delivers the intended outcomes. This will help manage the risk of unintended consequences and prevent risks from developing over time. Arguably, the review points should be much more frequent and dynamic than the UK government’s guidance suggests: the harm from a biased algorithm can be felt very quickly, and catching it early should be at the top of any delivery partner’s agenda.
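
As promised above, here is a minimal sketch of the kind of automated baseline checks a delivery team might run at each review point. It is illustrative only: the Baseline fields, thresholds and check_dataset helper are all hypothetical assumptions, not requirements taken from the UK guidance.

```python
# Illustrative sketch only: baseline checks run at each review point.
# Every field name and threshold here is a hypothetical assumption.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Baseline:
    """Measurement points agreed per use and context."""
    min_rows: int             # what counts as a large-enough dataset here
    max_missing_ratio: float  # tolerated share of records with missing values
    protected_field: str      # attribute whose representation is monitored
    min_group_share: float    # minimum acceptable share for any group


def check_dataset(records: list[dict], baseline: Baseline) -> list[str]:
    """Return baseline violations to record in the regular impact assessment."""
    findings: list[str] = []
    n = len(records)
    if n < baseline.min_rows:
        findings.append(f"only {n} rows; baseline requires {baseline.min_rows}")
    missing = sum(1 for r in records if any(v is None for v in r.values()))
    if n and missing / n > baseline.max_missing_ratio:
        findings.append(f"missing-value ratio {missing / n:.1%} exceeds baseline")
    counts = Counter(r.get(baseline.protected_field, "unknown") for r in records)
    for group, count in counts.items():
        if count / n < baseline.min_group_share:
            findings.append(f"group '{group}' holds only {count / n:.1%} of records")
    return findings


# Example: run at every review point, not just once at build time.
baseline = Baseline(min_rows=10_000, max_missing_ratio=0.05,
                    protected_field="region", min_group_share=0.02)
```

Whoever ends up setting the baseline, whether the public sector organization itself or an independent body, encoding it as explicit, versioned thresholds like these makes each review auditable rather than a matter of judgment after the fact.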

Rigorous testing of this type will no doubt be a time-intensive task, potentially slowing down delivery timelines and increasing costs.

Who is your development team?

It is of paramount importance to consider who is developing the AI tools and what conflicts of interest may arise. The primary interests of public sector organizations and private AI developers can, at times, be in opposition to each other. Both groups must be brought into alignment: businesses should be encouraged to invest and inject private-sector innovation, while the public interest that public sector organizations exist to serve is upheld.

While creating and implementing a tool, do the developers have the wider public interest in mind? The UK Guidance asserts that multidisciplinary and diverse teams will ensure fair service for all users: their varied perspectives help avoid omissions and biased outcomes that can have grave consequences. Having advisors from the various interest groups (industry, government and academia) will be critical in the future. It would be reasonable to create a role that acts as the interface between these groups, ensuring the quality of outcomes from both technical and policy perspectives.

To conclude, a more thorough AI design and delivery process is key to protecting the interests of every individual who may unwillingly be subject to AI-based decision-making.

Being diligent and transparent about decision-making is the only way to maintain the integrity of the Government and the trust of a public already wary of the misuse of AI.

It’s time to change the narrative, and the first step is an operational and enforceable regulatory framework. With recent developments, the UK Government is leading the charge.

In November 2021, the Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) published the world’s first algorithmic transparency standard, setting the tone for AI regulation grounded in building trust across the public sector. It will be interesting to see the outcome and adoption of this standard.

By Sara Alasadi, Account Manager

Posted on: February 21, 2022

Topics

Artificial Intelligence

Public Sector

About Sara Alasadi
Account Manager
Sara Alasadi is an Account Manager in the Public Sector. Alongside her day job, she has a legal background and is in the midst of her Master’s degree at the University of Edinburgh, exploring the influence of technology on the law, and vice versa. She has a particular research interest in the role of artificial intelligence in criminal justice and, more recently, the impact of technology on access to justice. Aside from this, she is a strong advocate for multicultural diversity in the workplace and is actively engaged in Atos’ Together Network, which drives cultural awareness and celebration.
