
January 2024 Newsletter

A message from Jen Kitson, Customer Success and Operations Director and Deputy GC (Private Sector)

We wish you a very happy New Year and hope you enjoyed the Christmas break!

I am delighted to share with you our latest Private Sector Newsletter. This quarter we focus on bringing you the latest developments in the AI space and our top tips to ensure you are prepared – a recurring theme, but deliberately so given the queries we keep receiving from clients on this topic. In addition, we discuss the new ICO/CMA position paper on “Reject All” cookie consent banners and the changes in the UK, post Brexit, on transfer mechanisms to highlight the impending 21 March 2024 deadline! If you wish to learn more on any topics featured in this Newsletter, do not hesitate to contact us.

I am excited to see what the rest of 2024 has to offer and the new ways in which we can succeed together!

Jen

Latest AI Developments

As artificial intelligence (AI) continues to transform industries, governments worldwide are responding with evolving regulatory frameworks. These regulatory advancements are shaping how businesses integrate and leverage AI technologies. Understanding these changes and preparing for them is crucial to remain compliant and competitive.

Recent Developments in AI Regulation:

1. United Kingdom: The UK's approach to AI focuses on creating a pro-innovation regulatory framework. This approach, outlined in the AI white paper released by the Department for Science, Innovation and Technology (DSIT), aims to foster public trust by formulating rules proportionate to AI risks across various sectors. The UK's strategy differs from the EU's in that it prioritises guidance over new legislation, favouring context-specific regulation based on the outcomes of particular AI uses, and it requires collaboration between regulators and innovators.
2. European Union: The EU is at the forefront of AI regulation, with the European Parliament and Council reaching a political agreement on the European Union's Artificial Intelligence Act on 8 December 2023. The EU AI Act (final text pending) will shortly be officially adopted and enter into force, with most provisions applying after a two-year grace period; prohibitions will take effect after six months and obligations for general-purpose AI models after 12 months. The EU AI Act intends to ensure the safety of AI systems and legal certainty for AI innovation while minimising risks to consumers. It introduces a risk-based approach with four risk classes for AI systems: unacceptable-risk, high-risk, limited-risk and minimal/no-risk. Unacceptable-risk AI systems, such as those infringing on fundamental rights or EU values, are banned outright; prohibited uses include untargeted scraping for facial recognition databases and social scoring based on personal characteristics. High-risk AI systems face stringent obligations, including testing, transparency, risk mitigation and data governance. Obligations are also introduced for all general-purpose AI models, with additional requirements, such as model evaluations and risk assessments, for those posing systemic risks. Enforcement will be through national market surveillance authorities and a new European AI Office. Penalties vary based on the severity of infringements, with more proportionate fines for smaller companies.
3. United States: In the US, AI regulation is evolving, marked by the White House's Executive Order on AI, which requires safe, secure and trustworthy development and use of AI. It recognises AI's potential for solving challenges and boosting prosperity, while also acknowledging risks such as fraud, bias and national security threats, advocating for a coordinated federal approach grounded in principles of safety and responsible innovation. Despite this, there is no comprehensive federal AI legislation akin to the EU's AI Act. State-specific AI legislation remains limited, though state privacy laws may impact AI systems handling personal data. Sector-specific guidance is emerging, with the National Institute of Standards and Technology's AI Risk Management Framework offering a voluntary guide for managing AI risks. The Federal Trade Commission is increasing scrutiny on AI use to prevent unfair practices, while the Food and Drug Administration plans to regulate AI-powered tools in healthcare, all pointing to a growing trend towards sector-specific AI regulatory frameworks in the US.
4. China: China has implemented a comprehensive regulatory framework focusing on generative AI services. The Interim Administrative Measures for Generative Artificial Intelligence Services, effective from August 2023, specifically target generative AI services (those creating text, images, audio or video) and impose strict rules on lawful data sourcing, user consent for personal information, data labelling and quality assurance. They also require companies to protect personal information, moderate content to prevent illegal activities and conduct security assessments for AI services that could influence public opinion or social mobilisation. Non-compliance with these regulations can result in penalties under key laws such as the Cybersecurity Law.
5. United Nations: The UN AI Advisory Board issued its interim report on Governing AI for Humanity in December 2023, which did not propose any single model for AI governance but instead focused on the principles and functions that, it says, an eventual global AI governance framework should address following further consultation.
6. G7 Nations: The G7 countries (US, Canada, Japan, UK, France, Germany and Italy) are collaboratively working on a unified approach to AI regulation (the Hiroshima AI Process), prioritising ethical deployment that aligns with human rights and democratic values, with an aim to set global norms and standards.
7. AI Safety Summit 2023: The summit, held at Bletchley Park in November 2023, marked a pivotal moment in AI governance, highlighted by the adoption of the Bletchley Declaration on AI Safety, calling for collaborative action to manage AI's risks and opportunities. The summit convened around 150 global representatives from government, industry, academia and civil society for comprehensive discussions. The outlined objectives and the commitment to future summits in South Korea and France reflect a strong international dedication to a human-centric and collaborative approach towards AI safety.

Actions for AI Preparedness

Now that we understand the latest regulatory developments in artificial intelligence (AI), let’s turn to what organisations should be doing in order to get “AI Prepared”.
1. Responsibility: Appoint a workplace AI taskforce to proactively consider AI use within the organisation. This AI taskforce can carry out internal AI audits and prepare appropriate policies and procedures. They should also be consulted when introducing AI functionality into products and services and to vet AI service providers. Having a centralised point of contact to co-ordinate and assess organisational AI issues enhances visibility and risk management.
2. Know where you stand: Conduct AI audits and risk assessments internally to understand your organisation’s use of AI tools and data. This could be a function of the AI taskforce. Assess AI applications, especially where the organisation operates in high-risk sectors like healthcare or finance. Understanding where AI use falls in terms of potential risk categories is vital. Regular audits of AI systems can ensure compliance with data protection laws and uncover potential biases or ethical issues.
3. Data Governance: With many AI regulations focusing on data privacy and security, robust data governance policies are essential (as per KC’s prior newsletter article). Ensure that data practices are compliant and that your organisation has the appropriate processes and documentation in place.
4. AI Frameworks: Consider creating an AI policy, adopting ethical AI principles and frameworks to ensure that areas of concern are addressed, particularly regarding data security (e.g. ensuring the organisation’s data is not inadvertently placed into the public sphere), accuracy and quality assurance (e.g. ensuring employees don’t solely rely on outputs from AI tools, but validate the generated information), control (agreeing what is and isn’t reasonable use) and fairness (e.g. ensuring that use of AI tools guard against bias). An AI policy may also be helpful when engaging with regulators and to educate employees about the ethical and legal implications of AI use within the organisation.
As AI continues to advance, regulatory frameworks globally will also evolve. Organisations are advised to proactively prepare for these changes to leverage AI effectively and responsibly and to remain compliant with applicable regulatory frameworks.

UK Privacy reminder – Data transfers based on the old EU SCCs must be replaced before 21 March 2024

In February 2022, the UK introduced the International Data Transfer Agreement (IDTA) and the UK Addendum to the European Commission's new standard contractual clauses (new EU SCCs).

These documents, essential for data protection in the post-Brexit era, are designed to ensure that personal data transfers from the UK to countries not covered by the UK's adequacy regulations comply with Article 46 of the UK GDPR.

The IDTA and UK Addendum were created in response to GDPR, Brexit and European case law, particularly the CJEU's Schrems II decision. They address the limitations of the old EU SCCs, which are outdated and not fully aligned with the UK GDPR.

The UK Addendum acts as an add-on to the new EU SCCs, ideal for multinational organisations dealing with data transfers under both EU and UK GDPR. It is noted for its flexibility, ease of execution and automatic incorporation of ICO revisions. However, it also has limitations, such as being dependent on the new EU SCCs and not addressing all data transfer scenarios.

The IDTA, on the other hand, is a standalone agreement suitable for UK-based organisations. It is praised for its flexibility, user-friendly format and broader applicability compared to the new EU SCCs. Despite these advantages, the IDTA does not include mandatory processor requirements under Article 28 of the UK GDPR, meaning that organisations relying on the IDTA would need to put in place additional terms regulating processor requirements (e.g. by executing a separate data processing agreement).

Transfer arrangements using the old EU SCCs executed before 21 September 2022 will remain valid until 21 March 2024, unless processing operations change. After 21 September 2022, organisations must use the IDTA or UK Addendum for any new UK GDPR transfer arrangements. These new arrangements must replace any existing ones based on the old EU SCCs before 21 March 2024.

Organisations are also required to conduct a Transfer Risk Assessment (TRA) before any transfer, ensuring supplementary measures are in place if necessary.

The ICO has updated its UK GDPR guide to reflect these changes, clarifying the definition of "restricted transfer", with further guidance on the IDTA, UK Addendum and TRAs to follow. Additionally, the UK has adequacy regulations for certain countries and territories, which facilitate data transfers without needing these new mechanisms.
Stock pictures supplied by Freepik