ChatGPT: Is it all talk and what use is it for the nuclear sector?

By Jeremy Gordon

Everyday use of artificial intelligence has taken a huge leap forward with the release of ChatGPT. It feels like a tool we can really begin to make use of, but will its technology make it in the nuclear industry?

ChatGPT caused a sensation with its ability to quickly produce texts that respond to our questions and commands in a lifelike and creative way. We cannot help but recognise it as some form of intelligence that we can interact with (more on that later). However, before we decide how much trust to place in it, we need to consider how it actually works.

ChatGPT is based on a Large Language Model (an LLM). That is a huge data structure built by analysing millions of books and web pages and recording how closely words are connected to each other: it knows which words are likely to come next in a sentence, given what came before. That's why ChatGPT outputs very professional-looking text, most of which can be verified against authoritative sources somewhere on the internet.
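The next-word idea can be seen in miniature with a toy bigram model. This is not ChatGPT's actual architecture (real LLMs learn far richer statistics from billions of documents), and the corpus below is invented purely for illustration, but the underlying principle is the same: record which words follow which, then predict accordingly.

```python
from collections import Counter, defaultdict

# A tiny invented corpus, purely for illustration.
corpus = "the reactor core is cooled by water and the core is shielded".split()

# Record how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("core"))  # "is" -- it follows "core" both times
```

Scaled up enormously and combined with much deeper context, this is why the output reads so fluently: each word is, statistically, a plausible continuation of what came before.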

But which documents does the information come from, exactly? It is impossible to say. Having combined so many sources into a single model, ChatGPT is a blurry representation of all of them at once, but a fully accurate representation of none of them. Like a photograph compressed as a JPEG, ChatGPT seems complete and accurate at the top level, but if you ‘zoom in’ and look for fine detail you will find the information simply isn’t there.

This is where we hit a problem, because the texts produced by ChatGPT do have very definite meanings, and the crispness and confidence with which they are produced slips under the radar of our instinctive fact-checking. It gives us information that is probably about right. That’s very useful, especially in a conversational form, but it is of very limited use in engineering subjects where roughly right is not right. And not right is wrong. And wrong could lead to big problems, sooner or later. Put it this way, there’s little hope that a regulator would accept justifications generated by an LLM, even if they were perfectly correct. And a future where a contractor’s LLM provides reasoning for decisions and that is validated by a customer’s LLM seems a very bad idea indeed!

So where might roughly-right AI be useful in nuclear work? Here are a few ideas.

  • It could be a learning tool for new staff. An LLM could be trained with historical and technical documents from a large company and answer questions to help new hires learn the ropes. Imagine an international firm with many subsidiaries and operations all over the world. A chatbot could support people to learn how the company functions and the context behind non-obvious working practices. Similarly an LLM could help people get up to speed with the history of a large and complex site undergoing decommissioning, perhaps. In both of those cases, a worker would not be able to take significant action before checking their understanding with an experienced person or against original documents, but as a talking history book there are some obvious benefits.
  • It could be a conversational or interviewing tool for HR. ChatGPT is able to contextualise its responses according to previous inputs. In other words it ‘gets to know you’ through the course of a conversation. It could also describe the content of those conversations in a different way for the HR department itself, for example describing how a worker displays a number of attributes. However, it is easy to imagine workers becoming guarded. Not knowing how their inputs will be used, they might perceive the chatbot as a kind of surveillance. By the same token workers might figure out which inputs lead to better results for them and begin to game the system. Despite these pitfalls, it is quite likely tools like this will be commonplace in future.
  • It could help the hiring process by summarising CVs and covering letters. Some companies already use simple word scanning to pick out CVs of people with the right skills, but that is very prone to error and gaming. CV scanning can also reinforce prejudice if applied to personal characteristics. For example, a system trained on the CVs of existing staff as a model for future hires could backfire by boosting the ranking of applicants with common names, whereas applicants with less common names would be downgraded. These kinds of things can easily be more trouble than they are worth, so, not a good idea for an intelligent company or a thoughtful recruiter.
  • Similarly, tools like ChatGPT could rapidly draft job descriptions. But remembering how LLMs work, they would probably only be able to produce variations on established themes, and nothing really innovative. Generic job descriptions are already quick to produce, with the most important aspects usually being the subtleties. It’s hard to predict, but I would expect these kinds of small but personally meaningful decisions will (and should) remain something for human intelligence for a while yet.
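The fragility of the simple word-scanning approach mentioned above is easy to see in a sketch. The keywords and CV snippets here are entirely hypothetical; the point is that exact-match scanning scores synonyms and rephrasings as zero, which is precisely why it is error-prone and easy to game.

```python
# Minimal sketch of naive CV keyword scanning (hypothetical keywords and CVs).
keywords = {"reactor operator", "radiation protection", "welding"}

def score_cv(text: str) -> int:
    """Count exact keyword hits in a CV. Synonyms score nothing."""
    text = text.lower()
    return sum(1 for kw in keywords if kw in text)

cv_a = "Licensed reactor operator with radiation protection training."
cv_b = "Ten years operating reactors; qualified in radiological safety."

print(score_cv(cv_a))  # 2 -- two exact matches
print(score_cv(cv_b))  # 0 -- same skills, different wording
```

An LLM-based summariser would handle the rephrasing in the second CV far better, which is the appeal; the prejudice-reinforcement risk described above is the price of moving from transparent rules like these to an opaque model.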

Artificial Intelligence has advanced in leaps and bounds in recent years. One of the founders of the discipline, Alan Turing, set out criteria in 1950 to help us recognise AI when it arrives. He said if a machine could hold a text conversation with a human without that person reliably suspecting they were talking to a machine then the machine would have to be considered intelligent. Many would say ChatGPT passed this Turing test soon after its release.

But something else from AI history is appropriate here: a slide from a presentation on AI made by an IBM staffer in 1979, which said simply: “A computer can never be held accountable. Therefore a computer must never make a management decision.”

We now find ourselves somewhere between these two maxims, while applications of AI expand every day. Will ChatGPT’s technology find a way into nuclear? It probably will find a role, somewhere around the edges, and perhaps its successors will earn our trust for a position more central.
