The Council of Europe (CoE) has introduced this Convention with the aim of ensuring that artificial intelligence systems operate in accordance with its core values: human rights, democracy, and the rule of law. It is a response to growing concerns about how AI might influence both physical and virtual environments, particularly in ways that could interfere with these principles.
The new CoE Convention defines an AI system as "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments", noting that different systems vary in their levels of autonomy and adaptiveness after deployment. This definition is virtually identical to the definition of an AI system in the EU AI Act.
In plain terms, the Convention describes AI systems as machines that, based on the information they receive, work out how to produce outputs such as predictions, content, recommendations, or decisions. These outputs can affect both physical and digital environments, and once deployed, such systems can operate with varying degrees of independence and adaptability.
Two important general obligations are prescribed: each party must adopt measures to ensure that activities within the lifecycle of AI systems are consistent with its obligations to protect human rights, and each party must adopt measures to protect the integrity of democratic processes and respect for the rule of law.
The Convention is built on seven core principles: human dignity and individual autonomy; equality and non-discrimination; respect for privacy and personal data protection; transparency and oversight; accountability and responsibility; reliability; and safe innovation.
In order to mitigate the risks and adverse impacts of AI systems, the Convention prescribes several measures as part of its risk and impact management framework. These include evaluating the severity and likelihood of potential impacts based on the context and intended use of AI systems; engaging relevant stakeholders, especially those whose rights may be affected; continuously applying risk-prevention measures; monitoring and documenting any risks to human rights, democracy, and the rule of law; and, where required, testing AI systems both before their first use and after significant modifications.
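The Convention does not prescribe a specific assessment methodology, but the severity-and-likelihood evaluation described above is commonly implemented as a simple risk matrix. The following sketch is purely illustrative: the scales, thresholds, and function names are assumptions for the example, not requirements drawn from the Convention.

```python
# Illustrative risk-matrix sketch for an AI impact assessment.
# The 1-3 scales and the score thresholds are assumptions made for
# this example; the Convention itself does not mandate them.

SEVERITY = {"minor": 1, "serious": 2, "critical": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}


def risk_score(severity: str, likelihood: str) -> int:
    """Combine severity and likelihood into a single numeric score."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]


def risk_level(score: int) -> str:
    """Map a score to a qualitative level that could trigger measures
    such as documentation, monitoring, or pre-deployment testing."""
    if score >= 6:
        return "high"    # e.g. test before first use and after changes
    if score >= 3:
        return "medium"  # e.g. documented, ongoing monitoring
    return "low"


# Example: a system whose impact on rights would be serious and is
# likely to occur in its intended context of use.
print(risk_level(risk_score("serious", "likely")))  # prints "high"
```

A scheme like this makes the Convention's context-dependence concrete: the same system can score differently depending on its intended use, which in turn changes which mitigation measures apply.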
Signatories of the Convention will be required to maintain measures ensuring that relevant information regarding AI systems that have the potential to significantly affect human rights is documented and provided to authorised bodies. Where appropriate, this information should also be made available or communicated to affected persons.
In addition, measures must be in place to ensure that the information provided is sufficient for affected individuals to contest decisions made, or significantly influenced, by the system and, where relevant, to contest the use of the system itself. Finally, measures should guarantee that individuals have an effective means of filing complaints with the competent authorities.
The signatories of the Convention will periodically consult with the aim of facilitating the application and implementation of the Convention, considering amendments to it, making recommendations on its interpretation and application, facilitating the exchange of information on legal and technological developments, facilitating the friendly settlement of disputes arising from its application, and facilitating cooperation with stakeholders.
Each signatory will report to the conference of signatories, detailing the steps taken to apply the Convention to the public and private sectors and to address risks arising from activities within the lifecycle of AI systems.