Comment & Opinion

Should employees use ChatGPT? Why businesses need a GenAI policy

“Employees are increasingly turning to GenAI tools to help with their day-to-day tasks, often without the knowledge of their employer. An employee-facing policy governing the use of GenAI at work could help employers to optimise the productivity benefits that GenAI offers, whilst protecting against the various risks it poses.”

- Philippa de la Fuente, Employment & Immigration

What is GenAI?

Generative AI (GenAI) refers to a type of artificial intelligence that can create new content, such as text, images, music – or even video – based on patterns and data it has learned from existing information. Unlike traditional AI, which is mainly designed to recognise patterns or make predictions, GenAI can produce novel and creative outputs that resemble the data it was trained on, but are not exact replicas. [1]

Large Language Models (LLMs) are a category of GenAI which specialise in producing text and are trained to generate the response that is statistically most likely given the prompt. ChatGPT, a chatbot developed by OpenAI, is a widely known example of an LLM. By way of example, if you asked ChatGPT to explain what GenAI is (like we did), it might produce something very similar to the paragraph above.

With a survey carried out by Deloitte finding that one in seven people have used GenAI for work (up 66% from the year before), it is clear that UK employees are increasingly turning to GenAI for help with their day-to-day tasks. [2]

Should employees use GenAI?

GenAI has been identified as a useful tool for a variety of time-consuming tasks, such as producing documents, summarising documents, research and idea generation. Research conducted by Pearson predicts that GenAI has the potential to help UK workers to save 19 million hours a week by 2026 on “routine and repetitive” tasks. [3]

Businesses will be alive to the clear benefits of increased worker productivity, and embracing this technology could be imperative to remaining competitive. However, there are a plethora of legal concerns which arise from its use, including data security, disclosure of confidential information, generation of intellectual property, and concerns over employee performance.

What are the risks from an employment perspective?

In relation to the work produced by GenAI, there are risks of “hallucination” (meaning the tool fabricates information) and of the information being out of date (if the data it was trained on is historic). Over-reliance on these tools could lead to de-skilling and disengagement of employees, and could also damage stakeholder relationships where personal connections (and, therefore, the authenticity of individuals) are valued.

We are seeing examples of employees using these tools to “re-write” emails to sound more “intellectual”, or to cut corners by feeding the tool information about a required work product or report (which could well include confidential information or employee data) and using its output as the final product. Not only does this raise serious confidentiality and data breach issues, but it also calls into question the quality of the work and whether the employee is fulfilling the duties they were employed to do. The result may be disciplinary sanctions or performance management. In order for an employer to take such action, having a clear policy on GenAI use is valuable.

Where employees can use the tool in line with the business’ policy, there may be other knock-on impacts for the business to consider. If significant efficiencies are realised, then restructures, redundancies or changes to roles and duties may need to be considered – the workplace is rapidly changing and therefore roles and working practices will need to change with it.

The potential for unlawful discrimination is also a key concern. As discussed in our previous publications regarding the use of (i) AI in recruitment and (ii) AI by employers, whilst future AI may help to eliminate discrimination, current models tend to have been trained on inherently biased data, meaning there is also the potential for GenAI to produce biased and/or discriminatory work. Naturally, it will be difficult to justify the decisions made by an algorithm, and any Tribunal claims resulting from discriminatory GenAI-produced work could be challenging to defend.

Why do we need a policy?

Research suggests that UK employees are using GenAI at work regardless of whether they have been encouraged to or not.[4] In fact, Deloitte’s study found that 31% of UK employees who use GenAI for work access publicly available tools that they personally pay for.[5] Some employees may have found that – or at least feel that – doing so has given them an advantage over their peers.

An employee-facing policy governing the use of GenAI in your workplace could help to both optimise the benefits of GenAI whilst protecting against the risks it poses. Encouraging your employees to use GenAI in a responsible and controlled way is likely to prevent covert use of GenAI tools, and enable GenAI-produced work to be thoroughly checked for accuracy, suitability and bias. Setting the parameters around its use will also give a better basis on which to conduct disciplinary or performance processes where the tool is not used in line with policy.

Whilst some employers may prefer to implement an outright ban on GenAI use, particularly in light of the risks outlined above, this approach may result in inefficiencies for the business and falling behind the competition in the long run; there is a balance to be struck and so it’s a decision which businesses will want to give serious thought to.

What should our policy say?

As a minimum, your policy should address:

  • the GenAI tools which employees are permitted to use at work (if any);
  • the tasks for which employees are permitted to use GenAI and whether they should be required to disclose that they have used GenAI for a particular task;
  • record-keeping (e.g. watermarking of GenAI-produced work);
  • requirements for GenAI work to be checked for accuracy, suitability and bias;
  • how its use will be monitored (and considerations around employee privacy);
  • actions employees must take to protect confidential information and mitigate the risk of copyright infringement;
  • who has accountability for GenAI work; and
  • possible consequences of failing to comply with the policy (e.g. disciplinary action).

Your policy can also be supported by employee training, which should explain the benefits and risks of using GenAI at work. As this is a rapidly developing area, it will be important to continue to review and update the policy over time as your business becomes more accustomed to its use and what type of use does and does not work for you – along with the list of which applications can (or cannot) be used. If the policy will result in significant changes to workplace practices, duties or roles, consultation with employees (and their agreement to changes) may be required.

Should you have any questions relating to the contents of this article, or if you would like assistance with implementing a GenAI policy, we are happy to help.


[1] Definition generated by ChatGPT

[2] Over 18 million people in the UK have now used Generative AI | Deloitte UK

[3] New research from Pearson shows how Generative AI could help UK workers to save 19 million hours a week by 2026 | Pearson plc

[4] Over 18 million people in the UK have now used Generative AI | Deloitte UK

[5] One in three employees who use GenAI for work pay for it themselves | Deloitte UK

Our people

Philippa de la Fuente, Associate, Employment & Immigration

Charlotte Smith, Partner, Employment & Sport