Artificial Intelligence (AI) and ethics can seem like a strange pairing. Yet here we are, with cohorts of academic researchers and AI labs across the globe dedicated to the ethics of emerging technologies. Or, more specifically, to the real and negative impact AI technology can have if left unchecked and built in the absence of ethics.
What is AI Alignment?
What is known as responsible AI is a nascent space. Ten years ago, there was little or no voice representing the idea of responsibility in technology design and development. After several crises with far-reaching implications, the field of “AI ethics” is now growing, though still not as quickly as the technology itself.
There are also teams of academics and professionals working on something called AI alignment. While in the same ballpark, AI ethics and AI alignment are not strictly the same thing. AI alignment describes the effort to ensure that machine learning and advanced analytics tools actually pursue the goals their designers intend, rather than being misappropriated or producing unintended consequences.
A reality that’s already here.
Data Privacy & Bias Issues
As ethics and AI have moved into the mainstream, concerns have mostly centered on data privacy and bias. Cambridge Analytica is the obvious example here, given its unprecedented misuse of personal data. Undoubtedly, these are two critical areas that need relentless focus. However, relinquishing critical decision-making to algorithms, particularly within public services, could have serious consequences. Essentially, AI ethicists are working to understand where AI systems are discriminatory and are reaching misleading decisions about individuals.
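To make concrete the kind of check this work involves, here is a minimal, hypothetical sketch in Python. It compares a model's approval rates across two demographic groups, a simple "demographic parity" check that fairness auditors commonly start with. The group names and counts are invented for illustration; real audits are far more involved than this.

```python
# A minimal, illustrative sketch of one check an AI ethicist might run:
# comparing a model's approval rates across demographic groups.
# The groups and counts below are hypothetical.

approvals = {
    # group -> (number approved by the model, total applicants)
    "group_a": (80, 100),
    "group_b": (55, 100),
}

# Per-group approval rate.
rates = {group: approved / total for group, (approved, total) in approvals.items()}

# Demographic parity difference: the gap between the highest and
# lowest approval rates. A large gap flags the model for closer
# review; it is a signal, not proof of discrimination.
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.0%}")
print(f"Demographic parity difference: {gap:.0%}")
```

A single metric like this is only a starting point; in practice, auditors look at many metrics and at the decisions behind the data itself.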
AI alignment professionals, meanwhile, are focused on how societies are outsourcing ever more decision-making to AI systems, especially in how resources are allocated, from school places to social welfare. It should be noted that our policymakers and lawmakers do not understand these systems anywhere near well enough to legislate and regulate them effectively.
AI Ethics as a PR Strategy?
While AI ethicists and alignment researchers don’t see eye to eye on everything, they share a disdain for “ethics designed by PR committees.” In a recent tweet, Jack Clark, co-founder of Anthropic, an AI safety and research company, shed some light on how moral concerns around AI are often treated:
“A surprisingly large fraction of AI policy work at large technology companies is about doing ‘follow the birdie’ with government – getting them to look in one direction, and away from another area of tech progress,” said Clark.
This is not encouraging.
Indeed, many AI ethicists and alignment professionals have described how tech companies treat the growing concerns around AI as a box-ticking exercise. The number one priority is to minimize the chance of a PR disaster, rather than to build AI capabilities founded on open collaboration, with instruments for challenging the system. This disillusionment is so widespread that in 2021, Pew Research Center published a survey entitled “Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade.”
Our Limited Understanding of AI
Rather than dismissing PR outright, we should recognize the role this business practice can, and should, play in AI ethics. There is a reason Big Tech is not being held accountable for AI tools that affect millions of people: limited public understanding of what is truly happening with AI.
That limited understanding stems from poor digital literacy. Irrespective of socio-economic background, most people don’t have a mental model for understanding algorithms or for tracing where their data travels and resides. But when the citizenry gets behind something, big businesses sit up and take notice.
Think of Apple and its App Tracking Transparency feature. Being able to afford Apple means being able to buy a more robust contract for data privacy. No one is claiming Apple did this exclusively out of the goodness of its heart; there is a strong brand and revenue strategy behind the move, and Apple could make it only because of its walled-garden operating system. But Apple has caught the public’s attention, and the public is increasingly interested in gaining transparency about where their data is shared, who has access to it, and where it’s stored.
We are also seeing the European Union (EU) rigorously tackle AI ethics issues, including transparency, bias, and data protection. The EU introduced the Artificial Intelligence Act in April 2021; it is currently going through an evaluation and rewrite process and is expected to take effect in early 2023, with a “broad impact on the use of AI and machine learning for citizens and companies around the world.”
According to a presentation on the law by the European Commission, the EU’s executive body, the act seeks to codify the EU’s regulatory and legal framework for how AI is used. This framework “requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law.”
A Commitment to Ethics
Technology companies that truly want to build AI solutions that improve people’s lives will enhance their own sustainability by communicating transparently with all stakeholders. There is too much at stake for this to be relegated to a box-ticking exercise. It is necessary to demonstrate a commitment to AI ethics and AI alignment that goes beyond lip service to truly building helpful and non-harmful AI tools and systems.
- Artificial Intelligence
- Connectivity
- Data Analytics
- Privacy
- Security