Closing the Digital Divide through Corporate Digital Responsibility
We live in divided times.
Some have. Many have not.
Many have access to the internet. Some still do not.
70% trust digital technologies to deliver good outcomes for society, 30% do not.
At the recent CAMSS UK CIO conference in London, Christopher Joynson and I presented our research thoughts on Digital Society, the survey we conducted on Digital Inclusion at the end of 2017, and then talked about why it mattered.
Having introduced our research and thinking, we asked the room: "On a scale of Utopian to Dystopian, how do you feel about the impact that Digital is making on Society?" I was surprised to find that everybody (bar me) placed their mark much closer to the Utopian side than the Dystopian. Equally, no one in the room watched Black Mirror (apart from Christopher and me).
I should say that we were asking the question of senior IT professionals, people who had spent their careers using technology to enable businesses to perform better. And I should also add that we had earlier had various debates on the "What is Digital" topic, ranging from the usual "what's different" through the potentially radical changes of the Artificial Intelligence (AI) age. Just as a reminder, for me Digital is a mindset – "what would the web do" – and a bucket of technologies, but it is NOT just Cloud, Social, Mobile and Analytics; they are simply where it started.
The keynote of CAMSS UK was the incredible Tanmay Bakshi – 14-year-old AI evangelist, a brilliant and passionate speaker, and clearly a sponge for technology capability and expertise. Plenty of others have praised his session. Yet Tanmay and the room were firmly in the Utopian space: AI, and Digital, will help us all; those who fear job losses are wrong; people are happy to be extensively analysed by computers. I don't buy it. Alongside the two claims that concern me, he made an excellent point – namely that there is always both good and bad use of technology. Absolutely true. The challenge is to ensure that the good can always mitigate the bad.
Job losses, or improved productivity?
So why am I challenging two points? Firstly, job losses. It is undoubtedly true that AI has the power to assist – it will help health professionals make faster and better choices, lawyers retrieve information more quickly, and retailers better understand the effectiveness of their advertising. But by the very nature of this description, AI enables us to save time – to operate faster. If we are operating faster, we are working at higher productivity, and if we are doing that, then a shareholder business mentality says there are costs to be saved to increase profitability. And that is generally an indicator of job losses.
Let's go briefly to Brexit. Irrespective of personal position, one might hypothesise based on current data that we won't have enough health professionals or carers to look after an ageing (and growing) population, and we won't have enough pickers to harvest produce in the fields. In fact, let's be clear: there are skills shortages across a number of industries. We are about to lose a wave of experience from the market – people who made their careers in one place, a pattern that is no longer the norm.
We need AI. We need robotics and automation. I believe that, with big reductions in net migration, it is the only way in which we maintain a functioning society. And in that context, I completely agree with Tanmay that it is a necessity to help us. But let us not think that there will be no impact on jobs. It will change the nature of the workplace even more than offshoring has – indeed, it will partly replace offshoring, given the decreasing cost differential between onshore and offshore.
How do we now feel about Privacy?
My second challenge to utopia (and AI) returns to how we feel about privacy. Post-Facebook/Cambridge Analytica, I strongly believe that if we were to repeat our survey now, we would see a downward trend in trust towards big business use of personal data. A question was asked at the event: "Do you think that people will be OK with AI-led interpretation of their behaviours, to a degree even greater than perhaps they know themselves?" The answer was positive – that people would be good with it, as it would deliver them personal benefit.
Now this takes us into the whole privacy debate, and the impact that the General Data Protection Regulation (GDPR) will make. Our Digital Inclusion survey sits on the fence on AI. It is clearly not yet trusted. Many people expressed concerns about having voice assistants listening to them 24x7, or even simply being given advice on major retail purchases. As society becomes more aware of the insight that organisations gain from your purchases, your posts, your likes, your movements, your browsing history, even the movement of your cursor … will everyone be comfortable with it?
I think we have some guidance from our survey. If a technology clearly gives an individual a personal time benefit, and/or it relates to improving their personal health, and their information is trusted to be safe and secure, then (on the whole) we're good. So how do we capture this in a way that provides guidance to organisations on how best to engage with these technologies for the good of all people, and therefore of society?
We’ve got a proposition.
Towards Corporate Digital Responsibility
The final point of our presentation was to introduce the term “Corporate Digital Responsibility” (CDR).
Now, you'll be more familiar with Corporate Social Responsibility (CSR), which has evolved quickly over the past few years to cover more than community and carbon – to encompass the many ways an organisation is seen to be doing the right thing for the sustainability of the planet and society. I am delighted to see the alignment of Worldline's own CSR policies to the UN's 17 Sustainable Development Goals, and we are at the point of calling out the 18th – Digital Responsibility.
Corporate Digital Responsibility (CDR) therefore builds on CSR; it is not a replacement for it. CDR must be about more than data, because Digital is about more than data. How do we encapsulate the right, the ethical, the best of outcomes we can create with all the data and technologies under the Digital banner – social media, blockchain, AI and machine learning … the list goes on? How can we commit to a set of principles demonstrating that our focus is to drive for more positive, more utopian outcomes for society, in a conscious and aware state rather than simply and blindly hoping for the best?
Corporate Digital Responsibility is about protecting people's rights around data (in line with regulation), and about ensuring that trust is maintained because people see that products and services save them personal time, help them with their health and ageing, and protect them from less acceptable or threatening uses of those same technologies.
As we presented this, there was a very valid point from the audience, in essence stating that we were talking about principles ahead of their time. We agree. When we started talking about and exploring Digital Society 18 months ago, we got lots of quizzical looks and confusion. But that has changed. Bigger names in the industry are saying similar things. The thinking is coalescing and maturing. Our goal is to continue to raise the profile of closing the #DigitalDivide – to ensure that digital inclusion, accessibility, positive outcomes for more of society, increasing trust and increasing personal value are at the forefront of the discussion.
In that regard, we are proposing to continue to measure on an annual basis how society, through people around the world, feels about Digital Technologies. This will be our guide as to the success of business and government balancing R&D and innovation, regulation and privacy, opportunity and threat over the coming years during the most technologically disruptive time of our existence. For those who share similar concerns, whether currently sitting on the Utopian or Dystopian side of the #digitaldivide, we would welcome your input.
Source: Atos Scientific Community 2017 Digital Inclusion Survey – "on aggregate 70% of those love or like, 30% of those dislike or hate".