The European Union and the Council of Europe have respectively adopted the Regulation on Artificial Intelligence (the so-called AI Act) and the Framework Convention on Artificial Intelligence. Both instruments pursue a twofold purpose: on the one hand, to facilitate and expand the use of artificial intelligence, promoting innovation and the uptake of such systems; on the other, to ensure that the use of artificial intelligence systems remains compatible with human rights standards and, more broadly, with democratic principles. To achieve these objectives, both organisations adopt, albeit in partially different ways, a “risk-based approach”. This paper examines the main aspects of the risk-based approach, highlighting its critical issues and arguing that it does not appear to be the most appropriate regulatory model for addressing the potential offered by artificial intelligence.