Pace of AI adoption necessitates a radical approach to policy making

The following is an original article written by Christina Caljé, published on Medium.

I was honored to join members of UNESCO, UNICRI and the Council of Europe at The Hague Summit for Accountability in the Digital Age on 7 November, tackling the timely and complex topic of how to ensure accountability in AI-based systems.

The Institute for Accountability in the Digital Age will compile a summary report of the findings. I was asked to contribute my personal recommendations on how to bridge the gap between digital technology and legal frameworks.

I share this contribution with you in full below.

Christina Caljé presenting her views on accountability in AI systems, The Hague Summit 2019

Urgency to Address the Accountability Gap

As CEO of a media technology company, my frame of reference for accountability in AI is predominantly the core technical and ethical risks, such as the explainability of model results or bias in system decision making.

However, when looking at it through a universal governance prism, the absence of specific legal frameworks becomes the key and immediate risk to resolve. Decisions are increasingly made by autonomous AI-based systems, and the existing mechanisms of accountability are not translating neatly to the digital world.

As the pace of AI adoption accelerates across geographies and industries, so does the urgency to evolve our legal system to address the resulting accountability gap. That said, we must strike the right balance between speed and inclusivity in executing this foundational step towards creating accountability for AI systems.

Framing the Argument for a Sector-Based Approach

Personally, I see a sector-based approach as most effective in bringing quick alignment on the perceived moral, social and economic risks, and on potential solutions, in cases of ‘AI gone wrong’. It’s important to recognize that not only will the applications of AI vary per sector, but so will the spectrum and severity of potential consequences.

To illustrate why this would be the right path forward, let’s analyze the varying dynamics of similar AI techniques applied in two different sectors: specifically, use cases for computer vision and machine learning algorithms in the marketing and healthcare industries.

Contrasting applications and implications of AI: Marketing vs. Healthcare

As a marketing use case, I’ll reference Autheos, since our platform employs both forms of AI to optimize video marketing strategy for our brand clients. Computer vision algorithms systematically detect elements such as objects, emotional sentiment and human demographics in our clients’ video content. The recognized elements are fed into our data warehouse as output tags. Based on the client use case, those tags become one of the (many) input factors for our performance-based machine learning algorithms, which autonomously decide which video is shown to a visitor on the client’s consumer site.

Besides optimizing the consumer’s onsite video experience, the interplay between our computer vision and machine learning algorithms yields quantitative insights. Autheos shares these insights with the client’s content team to inform their digital (video) marketing strategy.
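To make the pipeline above concrete, here is a minimal sketch of how detected tags might feed a performance-based selection loop. The tag values, visitor segments and epsilon-greedy policy below are illustrative assumptions of mine, not the actual Autheos implementation.

```python
import random
from collections import defaultdict

# Hypothetical output of a computer vision pass over three videos.
# In a real pipeline these tags would come from object, sentiment and
# demographic detection models, not be hard-coded.
VIDEO_TAGS = {
    "video_a": {"objects": ["sneaker"], "sentiment": "upbeat", "demographic": "18-24"},
    "video_b": {"objects": ["sneaker"], "sentiment": "calm", "demographic": "25-34"},
    "video_c": {"objects": ["backpack"], "sentiment": "upbeat", "demographic": "18-24"},
}

class VideoSelector:
    """Epsilon-greedy selection: mostly show the best-performing video
    for a visitor segment, occasionally explore an alternative."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        # (segment, video) -> [clicks, impressions]
        self.stats = defaultdict(lambda: [0, 0])

    def select(self, segment):
        # Tags are one input factor: prefer videos whose demographic tag
        # matches the visitor's segment, falling back to the full catalog.
        candidates = [v for v, t in VIDEO_TAGS.items()
                      if t["demographic"] == segment] or list(VIDEO_TAGS)
        if random.random() < self.epsilon:
            return random.choice(candidates)  # explore
        # Exploit: highest observed click-through rate for this segment.
        def ctr(video):
            clicks, shows = self.stats[(segment, video)]
            return clicks / shows if shows else 0.0
        return max(candidates, key=ctr)

    def record(self, segment, video, clicked):
        self.stats[(segment, video)][1] += 1
        self.stats[(segment, video)][0] += int(clicked)

# A visitor in the 18-24 segment arrives; the selector picks between
# video_a and video_c, then learns from the observed click.
selector = VideoSelector()
shown = selector.select("18-24")
selector.record("18-24", shown, clicked=True)
```

The epsilon-greedy policy is simply the most compact example of the trade-off such systems manage: mostly exploit the video with the best observed performance, while occasionally exploring alternatives so the system keeps learning.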

Computer vision example – Autheos

Contrasting this with the healthcare context, the same combination of computer vision and machine learning serves a completely different use case. Such AI systems are transforming medical fields such as dermatology and radiology by identifying complex patterns and making diagnoses that might otherwise be missed by a (human) doctor.

With these two sector examples in mind, the worst-case scenario of AI gone wrong might ‘only’ result in reputational damage or lost revenue in the marketing example. For a marketeer, this downside scenario might feel disastrous, but it pales in comparison to the potentially life-changing or, in extreme cases, life-ending effects in the healthcare use case of AI.

With such wide-ranging applications and downside risks across just two industries, aligning multi-industry stakeholders on the most urgent risks to tackle seems an almost impossible task, let alone establishing a full legal framework in the immediate term.

Roundtable discussions, The Summit for Accountability in the Digital Age

A new and iterative approach to policy making

Instead, the aforementioned complexities necessitate a new and iterative approach to policy making in this digital age. Establishing a preliminary legal framework per sector, created and overseen by a global stakeholder group of industry experts, would yield a first, quick win towards regulating AI and defining accountability.

In selecting the stakeholders per industry to spearhead these sector-based initiatives, it’s crucial to adopt an inclusive approach that extends beyond the obvious choices. Since innovation is global, diverse representation across continents and from companies at varying stages of maturity, from startup to scale-up to publicly traded company, will enrich the perspectives and help to ‘future-proof’ the eventual frameworks.

Calling upon international agencies to bring alignment as a last step

After preliminary sector-based frameworks are in place, a network of international agencies should take the next step: scanning the frameworks and identifying foundational similarities to build upon. These ‘battle-tested’ industry frameworks will deliver learnings that inform a v2 overarching framework, spearheaded by the international agencies, which eventually replaces or complements the sector-based initiatives.

This sector-driven approach will allow us not only to move quickly, but also to yield a flexible solution that maintains relevance and effectiveness as the applications and implications of AI continue to drastically alter the world we live in. This would be a new strategy for policy making, but as society evolves in this digital age, so too must our approach to governance.