Is Your Vendor Using AI? – Third-Party Risk and Regulatory Readiness (feat. KPMG Report)
Earlier this month, KPMG released a comprehensive report outlining the legal, ethical, and regulatory issues surrounding the use of AI. From the EU AI Act to privacy compliance, copyright challenges, and third-party risk, the report covers a wide range of developments that legal and compliance teams should monitor closely.
Below is a translated summary of the report’s key findings, which are especially relevant for companies adopting AI or contracting for AI-powered services.
1. Introduction: The Legal and Ethical Risks of AI
AI holds enormous promise to improve efficiency, cut costs, and enhance lives. At the same time, it raises growing concerns about its social, legal, and ethical implications.
Legal departments play a critical role in evaluating AI risks and guiding responsible deployment. Establishing trustworthy AI strategies is no longer just about compliance—it’s about building market trust and securing long-term competitive advantage.
2. Regulation and Governance Landscape
EU – The AI Act:
The EU AI Act entered into force in August 2024 and imposes strict obligations on providers and deployers of “high-risk AI systems,” including:
- AI literacy and training
- Risk assessment frameworks
- Documentation and audit trails
- Human oversight mechanisms
- Security and data governance measures
These obligations apply even to companies outside the EU if their AI systems are used within the EU. The main compliance deadline is August 2026.
UK – Principles-Based Regulation:
Rather than codifying AI-specific legislation, the UK government promotes sector-led, principles-based regulation built on:
- Safety, security, and robustness
- Transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The UK is also laying the groundwork for broader regulatory efforts through its AI Safety Institute and initiatives to expand access to training data.
3. Ethical Considerations
Common ethical challenges around AI include:
- Lack of transparency or explainability
- Bias or discrimination from training data
- Unclear responsibility in automated decision-making
Global guidelines from UNESCO, the OECD, and the G7 provide ethical frameworks companies can adopt. Many legal teams are forming AI ethics boards and investing in talent capable of managing these complexities.
4. Third-Party Risk Management
As AI becomes increasingly integrated into vendor solutions, organizations must conduct due diligence not only on their own systems but also across their entire supply chain.
Legal teams should:
- Require pre-disclosure of AI use in vendor products
- Demand appropriate safeguards and audit rights
- Allocate liability clearly in case of AI-related failures
Recent regulatory developments emphasize the need for contractual protections and proactive third-party risk management when AI is involved.
5. Privacy and Data Protection
AI solutions that process personal data must comply with data protection laws such as the GDPR and UK GDPR. Key principles include:
- Transparency in data usage
- Fairness and non-discrimination
- Data minimization
Companies must perform Data Protection Impact Assessments (DPIAs) where processing is likely to pose a high risk to individuals, and must clearly identify a lawful basis for each use of personal data. Individuals must also be able to object to the use of their personal data in AI training or automated decision-making.
6. Copyright and AI
Generative AI tools raise difficult questions around the use of copyrighted works. The UK government is reviewing proposals to allow text and data mining of publicly available works for AI training.
Legal teams must:
- Clarify whether internal materials or contracts may be used for AI training
- Determine who holds rights over any client or user data involved
Bar associations around the world are calling for strict guidelines on the use of generative AI in legal practice.
7. General Purpose AI (GPAI) – Legal Implications
GPAI models, due to their versatility and broad capabilities, are subject to dedicated obligations under the EU AI Act. These include:
- Disclosure of training data summaries
- Continuous monitoring for reliability
- Technical documentation and risk management
The UK Law Society encourages in-house legal teams to leverage GPAI for contract review, compliance tasks, and risk analysis, while emphasizing the need for secure, high-quality inputs.
8. Legal Department Transformation
AI is transforming the legal function. Automation of routine tasks allows legal teams to focus on high-impact, strategic roles.
Key emerging capabilities include:
- Technical understanding of AI
- The ability to assess AI risk and navigate regulatory frameworks
- The redesign of legal operations for AI-integrated environments
Legal teams must now position themselves not only as advisors but also as strategic partners in corporate innovation.
For AI-related legal inquiries, please contact: sc@lexsoy.com
© LexSoy Legal LLC. All rights reserved. All content on this site is the property of LexSoy Legal LLC and is protected under copyright and intellectual property laws.