IBM calls for “precision regulation” of AI

IBM Policy Lab

IBM has called for “precision regulation” of artificial intelligence (AI), just 24 hours after a statement by Google CEO Sundar Pichai on the same topic.

IBM officially launched the IBM Policy Lab at the World Economic Forum. The initiative aims to offer policy-makers suggestions on technology-related problems.

In a post on its official blog, IBM said the Policy Lab supports targeted policies that “would increase the responsibilities for companies to develop and operate trustworthy AI.”

The IT major acknowledged that, given the ubiquity of AI, there will never be one-size-fits-all rules that can properly accommodate the many unique characteristics of every industry making use of this technology and its impact on individuals.

IBM has also outlined a set of priorities to be taken into consideration when looking at AI regulation, including several directly addressing issues around compliance and explainability.

Incidentally, the Policy Lab was soft-launched in November 2019.

IBM’s proposed regulation framework takes into account whether companies are providers or owners (or both) of AI systems, in addition to the level of risk presented by particular products as determined by:

(a) the potential for harm associated with the intended use;

(b) the level of automation and human involvement; and

(c) whether an end-user is substantially reliant on the AI system, given the end-user and use-case.
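IBM’s post does not spell out how these three factors would be combined, but the tiered “initial risk assessment” it describes can be sketched in code. The following Python snippet is purely illustrative: the class names, the 0–2 scales, and the additive scoring are assumptions made for demonstration, not anything IBM has proposed.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    # All fields are hypothetical stand-ins for the three factors IBM lists,
    # each scored here on an assumed 0-2 scale.
    harm_potential: int      # (a) potential for harm of the intended use
    automation_level: int    # (b) degree of automation vs. human involvement
    end_user_reliance: int   # (c) how reliant the end-user is on the system

def assess_risk(use_case: AIUseCase) -> RiskTier:
    """Map the three factors to a risk tier via a simple additive score."""
    score = (use_case.harm_potential
             + use_case.automation_level
             + use_case.end_user_reliance)
    if score >= 5:
        return RiskTier.HIGH
    if score >= 3:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Example: a fully automated system with high harm potential and a heavily
# reliant end-user would land in the highest tier.
print(assess_risk(AIUseCase(harm_potential=2, automation_level=2,
                            end_user_reliance=2)))
# RiskTier.HIGH
```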

The post said: “Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems.

This implicit recognition of the fundamental difference in accountability between stages of AI development can help appropriately assign responsibility for providing transparency and ensuring fairness and security, based on who has better control over the protection of privacy, civil liberties, and harm-prevention activities in a given context.”

– IBM Policy Lab

In the lifecycle of AI capabilities in the marketplace, organisations may contribute research, the creation of tooling, and APIs; in later stages of operation, organisations will train, manage and control, operate, or own the AI models that are put to real-world commercial use. These different functions may allow for a distinction between “providers” and “owners,” with expectations of responsibility based on how an organisation’s role falls into one or both categories, said IBM.

IBM’s proposed precision regulation framework incorporates five policy imperatives for companies, including appointing a “Lead AI ethics officer” and framing different rules for different risks.

These policies would vary in robustness according to the level of risk presented by a particular AI system, which would be determined by an initial risk assessment covering the three factors outlined above: the potential for harm associated with the intended use, the level of automation and human involvement, and the extent to which an end-user relies on the AI system in a given use-case, said IBM.

For more, click here.

Image credit: IBM Policy Lab

