Plugins expand ChatGPT’s capability

ChatGPT can make use of third-party plugins that give it access to external sources of information. This is useful because it enables the AI to apply its impressive language capabilities to information that was not in its training data and, unlike the training data, which is now two years old, that information can be current.

ScholarAI is a ChatGPT plugin “designed to provide users with access to a database of peer-reviewed articles and academic research”. In this conversation with ChatGPT, I explore a little of what the AI can do when the ScholarAI plugin has been installed. I found that it was able to search for papers on a given subject, summarise the content of a paper, and answer questions about that content. I have not yet investigated the quality of the answers provided.

Plugins can also provide ChatGPT with additional functionality. In an earlier post, I mentioned the prospect of the AI interfacing with Wolfram Alpha. The Wolfram Alpha plugin is one instance of that, and it enables ChatGPT to give correct answers to prompts that require computation. See below for an example. We can be confident that answers obtained from Wolfram Alpha are of high quality.

There are many plugins to choose from. Websites such as whatplugin.ai can help us to find the ones we need.

Continue reading “Plugins expand ChatGPT’s capability”

How are neural networks implemented?

What can ChatGPT tell us about its inner workings? I instructed it to take the role of a professor in the faculty of engineering at University College London and said that I had the role of a third year undergraduate student. A full transcript of the conversation is here.

To begin, I asked the professor for a high-level explanation of network theory, telling him to express the ideas in British English and without using jargon, to take it step by step, and to begin by getting my agreement on what the steps in the explanation would be. After orientating this outline more towards AI networks, we experimented with the professor’s ability to reflect on what he had produced – i.e. to re-evaluate his previous outline. This resulted in a notably better structure for the explanation I sought – see below.

Professor ChatGPT then led me through the subject one step at a time. There were many diversions as I asked him to clarify things. We got as far as Step 2.2 before one of these diversions took me down a path to a good breakpoint: agreement on an understandable explanation of how the concept of an AI’s neuron – a node in its neural network – is implemented in software and hardware and how very large numbers of these nodes can execute simultaneously.
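The node-level idea we arrived at can be sketched in a few lines of Python. This is only an illustration of the standard textbook model of an artificial neuron – a weighted sum of inputs plus a bias, passed through a non-linear activation function – not the professor’s actual explanation, and all the numbers are made up:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs
    plus a bias, passed through a non-linear activation function
    (here, the sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the result into (0, 1)

# Example: one neuron receiving three input signals
output = neuron([0.5, -1.0, 2.0], [0.4, 0.3, -0.2], bias=0.1)
```

In practice, very large numbers of these nodes are not evaluated one by one like this; whole layers are computed at once as matrix multiplications, which is what allows specialised hardware such as GPUs to execute huge numbers of them simultaneously.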

Continue reading “How are neural networks implemented?”

ChatGPT’s temperature

In a previous post I mentioned a parameter called ‘temperature’ which ChatGPT uses to vary its answers from being more focused and deterministic to being more diverse and unpredictable; a kind of creativity, perhaps. This is how the AI explains its ‘temperature’.

Question: Consider the questions I have asked you in this conversation. If I now ask you the same questions in the same sequence, would your answers be different and, if so, might they contradict the answers given first time around?

ChatGPT

As an AI, my responses are generated based on patterns learned from a large dataset during training, and are influenced by the specific wording and context of the prompts given to me. If you ask the exact same questions in the same sequence, you should generally expect to receive the same or very similar answers, assuming the underlying model and its parameters remain unchanged.

However, GPT-4, like previous versions of the model, includes an element of randomness in generating responses. This is intended to make the output more diverse and to avoid always giving the exact same response to the same input. As a result, the specific wording and phrasing of my responses may vary slightly each time you ask the same question.

Continue reading “ChatGPT’s temperature”

ChatGPT is not intelligent

As I understand things, ChatGPT uses probabilistic methods, applied to a massive amount of data (currently the language content of the internet up to September 2021), to predict which word is most appropriate to follow the preceding words in its output. To kick-start that process you give it an instruction which guides the form of its response. So its answers are a re-packaging of the previously written material on which it has been trained; it does not create new ideas. There is a parameter called ‘temperature’, however, which can vary its answers from being more focused and deterministic to being more diverse and unpredictable; a kind of creativity, perhaps.

At present, therefore, we are the intelligent agents and ChatGPT is simply our assistant. Our assistant can retrieve information quickly and package it in ways that can help us think through the ideas we are pursuing. Also, we can ask it to do things that will help us analyse the matter at hand, for example by taking large stacks of data from several sources, combining them and charting certain characteristics. And when we ask it to identify connections between things, it will sometimes find connections we would not have thought of ourselves. 

Continue reading “ChatGPT is not intelligent”

Synthetic Interactive Persona Agents

In this article by Sunil Ramlochan, a Synthetic Interactive Persona Agent (SIPA) is presented as “offering realistic and effective dialogues that align with real-world human tendencies”. That is, it can take the place of human participants in online conversations. To illustrate what can be achieved, Sunil takes us through the creation of a fictional character: Jack Thompson is from Texas and identifies as a Republican. The task is to:
incorporate the socio-political context of Texas, Republican values and beliefs, and personal characteristics in developing Jack’s persona;
simulate how Jack would think, speak, and act in various scenarios, ensuring his actions reflect his political identity;
make the depiction … respectful and nuanced, avoiding stereotypes while highlighting the individuality and complexity of Jack’s persona.

Jack is then asked questions and his answers are shown. At face value, Jack’s answers seem convincing.

Sunil suggests eleven use cases out of many possibilities. Most seem reasonable, for example the creation of synthetic data for use in the training of customer service agents and chatbot agents. Others give pause for thought, for example to simulate the political viewpoints of different demographics for use in political polling. Hence the article ends with thoughts about limitations and ethics and urges us “not to misrepresent synthetic data as actual human responses in any form of reporting or decision-making”. That will prove quite a challenge.

Cypherpunks and surveillance power

A persuasively argued essay by Scott Ludlam about the global struggle for our digital rights, published in May 2019 in Griffith Review 64: The New Disruptors. Scott is a former Australian politician representing the Australian Greens. He served as a senator for Western Australia from 2008 to 2017, and as co-deputy leader of his party from 2015 to 2017. He is currently a columnist for The Guardian.

Conspiracy thinking and partisan conflict

Wikipedia describes Sam Harris as “an American neuroscientist, philosopher, author, critic of religion, blogger, public intellectual, and podcast host”. In his Making Sense podcasts he converses with a wide range of people whose experiences, thinking and analysis of what is happening in the world today have much to offer us.

A recent conversation with Renée DiResta explored the methods used by Russia to influence society in the United States. She gives a very clear explanation of how these are made possible by the way social media have developed and discusses the main lines of attack, which are to increase the polarisation that already exists in society and to amplify conspiracy thinking.

Insights into surveillance capitalism

Two conversations by Kara Swisher, recently published on her podcast Recode Decode, offer thought-provoking insights into surveillance capitalism and what we can do to counter it.

One is with Roger McNamee, venture capitalist, erstwhile mentor to Mark Zuckerberg and early investor in Facebook. He is now speaking out against the social media platform and has written a book with the title Zucked: waking up to the Facebook catastrophe.

The other conversation is with Nuala O’Connor, CEO of the Center for Democracy and Technology, around the question ‘Can big tech be fixed?’ Her organisation is headquartered in Washington, with an international presence in Brussels; it supports laws, corporate policies and technology tools to protect the privacy of internet users, and advocates for stronger legal controls on government surveillance.

Surveillance capitalism

On her website, the Harvard professor Shoshana Zuboff says: “I’ve dedicated this part of my life to understanding and conceptualising the transition to an information civilization”. Her latest book is ‘The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power’. It draws together four themes to argue that we have allowed the global technology companies – notably Google and Facebook – to become dangerously out of control. Her themes are the historical emergence of psychological individuality, the conditions for human development, the digital revolution and the evolution of capitalism. In Start the Week on 4 February, Andrew Marr explored these ideas with her.

Continue reading “Surveillance capitalism”

Where are the new jobs going to come from?

Thomas Friedman, talking at Intelligence Squared about his latest book Thriving in the Age of Acceleration, discussed the question of where the new jobs are going to come from. His answer: for many generations we worked with our hands; in the modern era we began to work with our heads; but in the age of acceleration we are going to work more with our hearts.