Why I strive to be more like ChatGPT

Dr Rosemary Francis

--

The UK’s technology secretary, Peter Kyle, got caught earlier this week using ChatGPT for policy and campaign advice. New Scientist used our powerful freedom of information laws to obtain records of the searches the minister had made using the AI. Kyle asked for help on a number of matters, including why the adoption of artificial intelligence is so slow in the UK business community and which podcasts he should appear on.

This has sparked a debate about whether it is appropriate for a member of parliament to use an AI to make decisions. He has been supported by his party, which is keen to push AI adoption. He has also been using the tool to define technical terms on which he needs to make decisions. Like most politicians heading up a technology brief, he is not a scientist or engineer himself and will be on a very steep learning curve.

In the good old days perhaps he would have asked Google, but now even asking Google is likely to throw up a load of AI-generated results through the AI Overview feature of its search platform. Wikipedia is the other source of truth for many, and it is notoriously riddled with errors and omissions. In particular, women and people of colour are routinely deleted or overlooked, both in articles about living people and in historical accounts. So if a politician wants to use an AI to regurgitate this stuff instead of reading the inaccuracies first hand, what’s the difference and what is the harm? I hope he still reads the reports from organizations such as the Royal Academy of Engineering, which is set up to inform the government about engineering trends and strategies, but those documents often assume a certain level of understanding.

Hashers gathering on the 10th Century Castle Mound for a beer stop

Like all good debates, this one took place nationally in pubs, and after hashing I polled my fellow runners for their thoughts. This being Cambridge, most people work in or adjacent to the technology industry: our tiny city is the birthplace of Arm, Amazon Alexa, affordable genome sequencing and more. A friend of mine has a new role at a company with a wider range of engineering disciplines than he is used to, and he was keen to point out that ChatGPT is really useful for working out what a teammate is talking about.

“I’ve been in smaller companies where everyone worked together because there was only one of you and they spoke my language and I spoke theirs and everything was fine. Now I’m in a larger organization where each discipline has a whole team and each team has a completely different way of describing the same thing. You ask a simple question and get a completely bizarre or useless answer in response,” said my friend in the pub.

With hardware engineers, embedded engineers, software engineers, and mathematicians each in their own team, there is a lot of potential for misunderstanding. This is not unique to the company he works for; it is a problem that all organizations face when bringing together people from adjacent fields of expertise. Domain-specific language often comes up in conversation at the Royal Academy of Engineering, where we bring together engineers and business leaders from a wide range of backgrounds to work on policy and advisory issues. I am new to policy work, so as well as having the same problems everyone else has when we mix disciplines, I also have to learn the language of policy making. The other academy fellows are all good communicators, but I am having to learn the language of government, which is entirely new to me.

Back in the pub, my friend said he uses ChatGPT all the time at work to decipher the requests and information from his colleagues:

“ChatGPT is a better listener than any colleague I’ve ever had,” said my fellow hasher. “It is not always right, but it is patient and it doesn’t mind how many times you ask the same question.”

I’ve hired or mentored a lot of junior staff, and creating a culture in which people can ask questions is important, but making sure they can ask a question more than once is even more important. The company I founded made software using techniques that few developers had used before, so this was equally important for senior developers coming across our technology for the first time. Even my most experienced developers occasionally had information overload when they were new. It’s really common for information to take a bit of time to settle when anyone is new, so asking the same question multiple times is a natural way to learn, adding context each time.

This is something the large language models give us that is harder to get elsewhere. When learning something new, we can trawl books, blogs and tutorials to get the same information explained in different ways until it sinks in. That takes time, though. When we want to be spoon-fed information more quickly, I can see the appeal of a digital friend who will always be keen to help and won’t mind how often you bother them. ChatGPT may not always be right, but it won’t be annoyed or busy. I can absolutely see why you would ask the AI first before approaching your scary colleague. I have had scary colleagues before, and I have probably been that scary colleague for some people.

When leading teams I have always tried to be the person who speaks all the languages. As an engineer, this is something I can bring to a leadership role that others with a less technical background cannot. I have the skills and knowledge to pick up enough about the work of each of my teams to act as the translation layer. It is not practical to do that for every interaction, but if I know what each term means to each team, it helps when working with my engineering managers to define specifications and to spot gaps in our planning. This was especially true in my most recent role at Altair, where I led the HPC workload management team and we had at least four workload manager products across the company. Each product had been developed by a different team, so the same functionality went by completely different names, or the same term meant different capabilities in different products. I had to learn the product-specific language as well as the more general domain-specific terms of workload management and HPC. It was actually one of the best parts of my job, and a great way to remain usefully technical despite being in a senior leadership role.

I believe a good leader will always speak your language. That should be the first thing to do when building a team: gather and disseminate the terms we use and bring everyone to a common understanding. Without this, I don’t think we can work together towards a common goal.

An AI never says “go away, you asked me that yesterday”.

This year I want to go beyond being the person who understands their colleagues. I’m setting a goal to be as patient as ChatGPT: to never be too busy to answer a question to the best of my abilities. So for colleagues, friends and my children, whether they ask me what quantum computing is or whether they ask me if it’s time to go yet, I’m going to give it my all.
