Q1
A financial services company based in Canada is deploying a high-risk credit scoring AI system that will process applications from EU citizens. The company is already compliant with PIPEDA. To comply with the EU AI Act, which of the following represents the MOST critical additional governance requirement for them as a 'provider'?
Q2 (Multiple answers)
An AI development team is building a model to predict equipment failure in a factory. They are in the process of establishing data provenance for their training dataset. Which THREE of the following activities are essential for this process? (Select THREE)
Q3
A large multinational corporation has a centralized AI ethics board but allows individual business units to develop and deploy their own AI applications. This has led to inconsistent application of corporate AI principles and duplicated risk assessment efforts. To address this, the Chief AI Governance Officer proposes a 'hub-and-spoke' governance model. What is the primary advantage of this model in this scenario?
Q4
During a post-deployment audit of an AI-powered recruitment tool, it was discovered that the system disproportionately rejects qualified candidates from a specific demographic group for technical roles. The model was trained on the company's historical hiring data. This is a classic example of which type of AI risk?
Q5
**Case Study**

A healthcare provider, 'WellCare,' wants to deploy a third-party, cloud-hosted AI model that analyzes medical images to detect early signs of a specific disease. The AI vendor claims a 98% accuracy rate in its lab tests. WellCare's patient population has a significantly different demographic and genetic makeup from the population used to train the vendor's model. The vendor's contract contains a strict limitation-of-liability clause and does not provide access to the model's internal logic or full training dataset, citing intellectual property concerns.

WellCare's AI Governance Council is reviewing the deployment decision. The primary business objective is to improve patient outcomes by catching the disease earlier. However, the legal team is concerned about potential misdiagnoses and the associated liability, and the IT department is concerned about data security, since patient images will be processed by the third-party vendor.

Which of the following is the MOST critical governance activity for WellCare to perform before signing the contract and deploying the model?
Q6
True or False: According to the NIST AI Risk Management Framework (RMF), the 'GOVERN' function is primarily focused on post-deployment monitoring and measurement of AI system performance.
Q7
An insurance company uses an AI model to detect fraudulent claims. The model is a complex deep learning system, making its decisions difficult to interpret. To meet the 'explainability' principle of responsible AI, the company uses SHAP (SHapley Additive exPlanations) to generate a report for each decision, highlighting the top three factors that contributed to the outcome. This approach is an example of which type of explainability?
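To make the mechanism behind this question concrete: SHAP attributes a prediction to input features using Shapley values, i.e. each feature's marginal contribution averaged over all orders in which features could be "revealed." The sketch below computes exact Shapley values in pure Python for a hypothetical three-feature fraud-scoring function (the model, feature names, and coefficients are invented for illustration; a real deployment would use the `shap` library against the actual model).

```python
from itertools import permutations

# Hypothetical fraud-score model with an interaction term
# (purely illustrative, not a real scoring formula).
def model(claim_amount, num_prior_claims, days_since_policy_start):
    return (0.4 * claim_amount
            + 0.5 * num_prior_claims
            + 0.1 * claim_amount * num_prior_claims
            - 0.2 * days_since_policy_start)

FEATURES = ["claim_amount", "num_prior_claims", "days_since_policy_start"]

def predict(values):
    return model(*(values[f] for f in FEATURES))

def shapley_values(instance, baseline):
    """Exact Shapley values: average each feature's marginal
    contribution over every ordering in which it can be added."""
    phi = {f: 0.0 for f in FEATURES}
    orderings = list(permutations(FEATURES))
    for order in orderings:
        current = dict(baseline)       # start from the baseline input
        prev = predict(current)
        for f in order:
            current[f] = instance[f]   # reveal feature f's true value
            new = predict(current)
            phi[f] += new - prev       # marginal contribution of f
            prev = new
    return {f: total / len(orderings) for f, total in phi.items()}

instance = {"claim_amount": 10.0, "num_prior_claims": 3.0,
            "days_since_policy_start": 2.0}
baseline = {f: 0.0 for f in FEATURES}

phi = shapley_values(instance, baseline)
# The scenario's "top three factors" report is just the features
# ranked by the magnitude of their Shapley values:
top_factors = sorted(phi, key=lambda f: abs(phi[f]), reverse=True)
```

Because the attributions explain one specific decision after the model has been trained, this is a post-hoc, local explanation; a defining property (useful as a sanity check) is that the Shapley values sum to the difference between the prediction for the instance and the prediction for the baseline.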
Q8
A retail company deploys a generative AI chatbot for customer service. To improve performance, they decide to fine-tune the base model using transcripts of their own customer service calls. From a governance perspective, what is the MOST significant new risk introduced by this fine-tuning process?
Q9
An AI governance professional is reviewing the design of a new AI model intended for a high-risk application. They are applying the 'risk mitigation hierarchy' as part of their assessment. According to this principle, what should be their first consideration?
Q10
A city's transportation authority plans to use an AI system to optimize traffic light timing. The system uses real-time camera feeds from intersections. Under the EU AI Act's risk classification framework, which category would this system MOST likely fall into?