The ethics of AI: When and when not to use it
Artificial intelligence (AI) is truly living up to its name, becoming more intelligent by the day. Traditional mathematical logic, underpinned by large-scale compute power, can now make intelligent decisions on a level never before possible. To be clear: the math was always there; the compute power was not. Many would label the earliest computer models primitive compared to what is available now.
In the 1980s, the ability to play Pong on an Atari was seen as exciting, but talk of performing machine learning at scale would have been laughed down. Merely storing the data was a challenge; churning millions (or even billions) of rows of data through mathematical algorithms was a complete non-starter.
Fast forward to today: this problem no longer exists, and AI has penetrated almost every domain. It is used successfully by many Atos clients.
Going one level deeper
The sophistication of AI truly reveals itself in the recent surge of deep learning models. Deep learning is a subset of machine learning that seeks to mimic the logic of the human brain. Given that the brain has evolved over millions of years, borrowing its neurological architecture to design AI models is not such a bad idea. The mathematicians who originally hypothesized the success of deep learning models have been proven correct: deep learning models are among the highest-performing machine learning models in existence, owing to their multidimensionality, non-linearity and ability to generalize.
With the surge of machine learning models and high-performance computers, it’s fair to say that the evolution of AI is rapidly converging toward the full capabilities of the human left brain. At times, it almost behaves like a human. Take image recognition, for example: AI deep learning models can now identify faces, shapes and other visual patterns just as a human can. Naturally, these models depend on masses of training data and human coding (i.e., we teach computers in our own image) to be truly effective. However, this is no different from the human brain, which relies on a storehouse of experiences and memories to identify new visual patterns.
Testing the limits
Nevertheless, it’s important to remember that a human brain can do much more. The human brain comprises two distinct hemispheres – the analytical left brain and the creative and emotional right brain. AI mimics analytical behaviors only. It does not and cannot produce emotions. AI is non-sentient. Consequently, it will never be possible for AI to outperform humans in all areas.
So, herein lies the challenge. How is AI to be used? Furthermore, is it being used appropriately?
There are a sizable number of businesses and enterprises that have yet to embrace AI. Many others remain unconvinced that artificially programmed models can make better decisions than they can. Some question whether AI models are fit to make such decisions at all.
AI has proven itself across many domains. Anything that relies solely on analytical and scientific rationale is a perfect fit for AI. Financial modeling, diagnosing physical illnesses (such as cancer), pharmaceutical drug research, demand forecasting and manufacturing scheduling have all seen repeated success.
Domains such as job recruitment, social media, policing, military, product advertisement and retail have seen some success, but have been subject to ethical concerns and legal interventions over how AI is being used. These domains involve a greater level of human sentiment and right brain activity, and thus require additional considerations when determining whether to use AI.
Historical bias has been flagged in policing data, warranting significant caution over how AI is used in law enforcement. Facial recognition has also seen ethical pushback. Privacy concerns have been raised over how social media data is used, along with bespoke advertisements; the Cambridge Analytica scandal is still fresh in the minds of many. Driverless cars have also raised countless ethical concerns, given the legal complexities involved if anything were to go wrong. Who would be accountable?
How to move forward responsibly?
With all this in mind, not every use case is suitable for AI. For projects where more right-brain activity (creative or emotional) is engaged, a hybrid human-AI solution might be better suited than a solely machine-led design. Careful planning, ethical consideration and sometimes plain common sense can help avoid future pitfalls!
Therefore, the role of a data scientist goes far beyond the obvious technical and modeling competencies. Data scientists must have the skills and awareness to evaluate a client’s domain and deduce whether a business challenge can be solved using AI and technology alone.
It may be that AI can be used but not in the intended capacity. Rather than automatically providing recommendations to end users, the solution may adopt a hybrid approach where a machine learning model suggests a series of recommendations that a human must approve or overwrite. This is an example of utilizing AI while ensuring that the human is fully accountable for all decision making.
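A hybrid workflow like this can be sketched in a few lines. In the sketch below, `model_recommend` is a hypothetical stand-in for a trained model (all names are illustrative, not from any real system); the key design point is that the model only suggests, and the returned decision is always the human reviewer's:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's score between 0.0 and 1.0

def model_recommend(case_id: str) -> list:
    """Stand-in for a real ML model: returns ranked suggestions,
    never a final decision."""
    return [
        Recommendation("approve", 0.91),
        Recommendation("refer to specialist", 0.62),
    ]

def decide(case_id: str, reviewer_override: Optional[str] = None) -> str:
    """The model suggests; a human approves or overwrites.

    Because the reviewer's input always takes precedence, accountability
    for the final decision stays with the human, not the model."""
    suggestions = model_recommend(case_id)
    top_suggestion = suggestions[0].action
    return reviewer_override if reviewer_override is not None else top_suggestion
```

The essential property is in the last line: the model's top suggestion is only a default, and any human overwrite wins unconditionally, which keeps the decision trail human-owned.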
All technical and business professionals have a duty to speak up when they feel that an AI proposition pitched by their organization is not appropriate.
Anything that oversteps the boundaries of moral and ethical conduct should be met with caution. After all, end users would expect nothing less than complete integrity.
The question is not whether AI should be used. The answer is obvious, and the value of AI has been proven time and time again. The question is when.
Get this right and everything else falls into place.