4.2 General requirements applicable to high-risk AI systems
The AI Regulation imposes the following general requirements on high-risk AI systems:
(a) Transparency: High-risk AI systems must be designed and developed to ensure that the system is sufficiently transparent to enable users to interpret its output and use it appropriately;
(b) Human oversight: High-risk AI systems must be designed and developed in such a way that there is human oversight of the system, aimed at minimising risks to health, safety and fundamental rights;
(c) Risk management system: A risk management system must be established and maintained throughout the lifecycle of the system to identify and analyse risks and adopt suitable risk management measures;
(d) Training and testing: Data sets used to support training, validation and testing must be subject to appropriate data governance and management practices and must be relevant, representative, accurate and complete;
(e) Technical documentation: Complete technical documentation that demonstrates compliance with the AI Regulation must be in place before the AI system is placed on the market and must be maintained throughout the lifecycle of the system; and
(f) Accuracy, robustness and security: A high level of accuracy, robustness and security must consistently be ensured throughout the lifecycle of the high-risk AI system.