ChatGPT is not intelligent

As I understand things, ChatGPT uses probabilistic methods, applied to a massive amount of data (currently the language content of the internet up to September 2021), to predict which word is most appropriate to follow the preceding words in its output. To kick-start that process you give it an instruction which guides the form of its response. So its answers are a re-packaging of the previously written material on which it has been trained; it does not create new ideas. There is a parameter called ‘temperature’, however, which can vary its answers from more focused and deterministic to more diverse and unpredictable; a kind of creativity, perhaps.
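As a rough illustration of what temperature does (a simplified sketch, not ChatGPT’s actual implementation), the model’s scores for candidate next words can be divided by the temperature before being converted into probabilities. A low temperature sharpens the distribution towards the most likely word; a high temperature flattens it, making less likely words more probable:

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw scores into probabilities; lower temperature
    sharpens the distribution, higher temperature flattens it."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate next words.
scores = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(scores, 0.2)  # near-deterministic
hot = softmax_with_temperature(scores, 2.0)   # more diverse

# At low temperature the top candidate dominates; at high
# temperature the probabilities spread out across candidates.
print(cold)
print(hot)
```

The same mechanism is what the API’s `temperature` setting controls: at 0.2 the model would almost always pick the top-scoring word, while at 2.0 it would often pick one of the alternatives.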

At present, therefore, we are the intelligent agents and ChatGPT is simply our assistant. That assistant can retrieve information quickly and package it in ways that help us think through the ideas we are pursuing. We can also ask it to do things that help us analyse the matter at hand, for example taking large sets of data from several sources, combining them and charting certain characteristics. And when we ask it to identify connections between things, it will sometimes find connections we would not have thought of ourselves.

‘In-context learning’ is an emergent ability of large language models. It refers to the AI model’s ability to understand and generate responses based on the context provided within the current conversation. I don’t know where other AI models stand on this, but at present ChatGPT cannot remember or learn from one interaction to the next.

In-context learning has given rise to a new discipline of ‘prompt engineering’ (inputs to ChatGPT are known as prompts). Prompt engineering uses the concept of roles: system, user, and assistant. The roles allow for a structured back-and-forth conversation between the user and the assistant, with the system role setting up the general behaviour and context of the assistant. ChatGPT explains them in this short conversation here.

Conversations with ChatGPT can be carried out through its Chat Completions API. Instead of supplying just a single string as a prompt, you provide an array of message objects. Each message object has a ‘role’ that can be ‘system’, ‘user’, or ‘assistant’, and ‘content’, which is the text of the message from that role. The content can be of various types: instructions (simple or complicated), primary content (text to be worked on by the AI in accordance with the instructions), examples of the desired form of output, cues about what the output should include, and supporting content that provides context for the framing of the output.
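A minimal sketch of such a message array in Python (the wording of the messages is illustrative, and the commented-out call shows the general shape of a request with the `openai` package; actually running it would need an API key):

```python
# A Chat Completion request is an array of message objects,
# each with a 'role' and 'content', rather than one prompt string.

messages = [
    # System message: sets the assistant's general behaviour and context.
    {"role": "system",
     "content": "You are a concise technical editor."},
    # Primary content: the text the AI is asked to work on.
    {"role": "user",
     "content": "Rewrite in plain English: "
                "'Utilisation of the API necessitates authentication.'"},
    # An assistant message can supply an example of the desired output...
    {"role": "assistant",
     "content": "Using the API requires logging in."},
    # ...after which the conversation continues with a new user turn.
    {"role": "user",
     "content": "Now rewrite: 'Commencement of the process is imminent.'"},
]

# The actual call would look something like this:
#   import openai
#   response = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo", messages=messages, temperature=0.2)
#   print(response["choices"][0]["message"]["content"])
```

Because the model does not remember anything between calls, the whole conversation so far is re-sent in `messages` on every request; that is what makes in-context learning work within a single conversation.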

ChatGPT is not intelligent. But it and other large language models are becoming more complex individually; they are capable, if allowed, of updating their underlying knowledge, including with what they learn during conversations; and they are being interconnected with other types of AI offering a wide range of different capabilities. We can therefore expect these increasingly complex systems to begin exhibiting emergent abilities that surprise us and that might become difficult to distinguish from intelligence.