
You must have been living under a rock not to realise the increasingly significant role that artificial intelligence (AI) is playing in the UK.

According to Government figures, around 15% of all businesses have adopted at least one AI technology, which translates to around 432,000 companies. While current usage of AI technologies is limited to a minority of businesses, it is more prevalent in certain types of larger business, including IT, telecommunications and the legal sector. But perhaps of more concern is understanding the risks it may pose, including to staff rights and civil liberties, and, perhaps most importantly, who is in charge of its regulation.

Launched in November 2022, ChatGPT is the latest AI technology to embed itself in our daily lives. But in recent weeks, Italy has banned ChatGPT, and Elon Musk and other AI experts are calling for a pause on AI development so we can decide what direction we want it to take. And only last month, the Trades Union Congress (TUC) held a half-day conference to highlight the challenges of ensuring that workers are treated fairly, as what it calls "management by algorithm" becomes increasingly prevalent.

Regulation is key

The advantages of AI are clear to see: its time and cost-saving capabilities, and its ability to monitor and streamline activities, eliminate biases and automate repetitive tasks. But legislation and regulation are surely key, which is why the UK government's recently published white paper, entitled "A pro-innovation approach to AI regulation", makes for interesting reading.

Intended to build public trust in AI, promote innovation and make it easier for businesses to grow and create jobs, the document sets out a series of principles for the use of the technology, including the need for safety, transparency, fairness, accountability and contestability.

The government’s aim is to adopt a regulatory (as opposed to a legislative) approach that doesn’t stifle creativity. And whilst there is no single regulatory body in the UK responsible for dealing with the use of AI, the white paper suggests that the existing regulators – including the Health and Safety Executive (HSE), the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and the Employment Agency Standards Inspectorate (EASI) – should interpret and apply the five new “values-focused cross-sectoral principles” to address any AI risks that fall within their remits, in accordance with existing laws and regulations.

Implications for employers

Over the coming months, regulators will be working hard to clarify certain areas and identify any barriers to the application of the principles, so additional guidance will probably be issued. However, before then, it’s important that employers take some preparatory steps to help “future proof” their approach to the use of AI. These may include:

  1. Getting familiar with existing laws that govern AI use, including the Equality Act 2010 and the UK GDPR.
  2. Identifying and auditing your current AI-based technologies to check how they stack up against existing employment and data protection legislation, as well as the five proposed principles.
  3. Making sure anyone dealing with your AI-based technologies is familiar with the government’s white paper proposals and that any areas of concern, or areas for further consultation, have been flagged.
  4. Identifying which of the sector-specific regulators may apply to you and looking out for their updates, ensuring that you are aware of what you should be doing.
  5. Looking at the global picture to see what other countries are doing, as the UK may follow their lead.

If you need any advice on HR or employment law matters, please email caroline.robertson@actifhr.co.uk