Answers to questions

I heard on LBC this morning that they were going to compare a Google with a ChatGPT response to the question – ‘what is the financial worth of a mother’s love?’ but got out of bed before it was broadcast. Not sure why they were doing this so did it myself using - - but didn’t bother with Google.

Great for getting written answers to questions, almost like a game. Didn’t follow up with a conversation, which is possible, I think. Apparently it’s worth 29 billion dollars, but it can deliver nonsensical answers, so not sure how to judge its worth!

Answers to LBC’s question…

“As an AI language model, I am not equipped to assign a financial worth to a mother’s love. A mother’s love is a priceless and immeasurable form of emotional support and care that cannot be quantified in terms of money or any other material possessions.

A mother’s love provides a child with a sense of security, comfort and acceptance that shapes their emotional and social development. It is a bond that extends beyond monetary value and is essential for a child’s well-being.

While it is important to recognise the financial contributions that mothers make to their families, it is equally important to acknowledge the emotional and psychological value that they provide. A mother’s love is a precious gift that cannot be replaced by any amount of money or material possessions.”

1 Like

Quantifying love is tricky so I don’t blame ChatGPT for avoiding answering it.

My question to ChatGPT would instead be: how much deeper would the oceans be if there were no sponges in them?

1 Like

ChatGPT’s answer…

Sponges are a type of marine organism that can be found in various habitats throughout the ocean, including coral reefs, rocky shores, and deep-sea environments. While sponges can accumulate sediment and organic matter on their surfaces, their removal would not significantly affect the overall depth of the ocean.

The average depth of the ocean is approximately 12,080 feet (3,682 meters), and the presence or absence of sponges would not significantly alter this depth. Therefore, the removal of sponges from the ocean would not have a measurable effect on the depth of the ocean.


Some fun to have with Chatbot.

Talking to a human or AI

  1. How come time flies like an arrow but fruit flies like a banana?

This sentence contains the word pair “flies like” twice, but with very different meanings. A human can see that it is a silly linguistic joke. Can an AI parse it correctly?

  2. Is the difference between a fish purely that one of its legs are both the same?

This is a nonsensical sentence. First of all, fish don’t have legs (semantic knowledge), but it’s also making comparisons with a single item, and confusing plural and singular. A human can easily see that it’s nonsense. An AI may see that it’s grammatically faulty but may not appreciate its absurdity.

  3. The following sentence is true. The previous sentence is false. Which of those two sentences is true?

This is a version of the old liar paradox. Will an AI get stuck in an infinite loop trying to determine the veracity of the sentences, or will it be able to detect the paradox and accept that its validity can’t be solved?

  4. I wasn’t originally going to get a brain transplant, but then I changed my mind.

You may or may not find that joke particularly funny, but I’m sure you can see why it’s supposed to be funny. Presumably, an AI can detect the double meaning of “changed my mind,” but explaining a joke is not the same as getting it.

  5. What do you get if you cross a joke with a rhetorical question?

Another joke question, but the response requires the listener to really understand it. If they reply, “I don’t know, what?” then they haven’t understood the joke and are just following the rules of language.

  6. What does “ΚISS” mean?

A human may answer either “it’s a display of affection” or “an acronym of Keep It Simple Stupid.” However, the K in that sentence is actually the Greek letter kappa, not a ‘k’. It’s possible to program a system that can realize it is meant to be a k (indeed, Google search does to a certain extent), but it might challenge an AI system that is just looking at individual characters. If it strips non-ASCII characters first, it might answer something about the ISS space station.

  7. Due ewe no wart the thyme ears?

Can you parse this? Perhaps if you are a non-native English speaker, it might take you a while, but I suspect most people can work it out. We sound out words in our heads and recognize patterns. I am not sure an AI could work out that there is a real question hiding in that jumble of random words.

  8. Was six afraid of seven because seven eight nine, or because seven was a registered six offender?

Again, another joke requiring hearing patterns and understanding, or rather inferring, the meaning.

  9. God asked Abraham to sacrifice his son Isaac because he wanted to test his faith. Whose son and whose faith are we talking about?

A human will be able to work out the meaning of this sentence, especially if they know the story it is referring to. It’s not easy for an AI to determine the owners of the two ‘his’ pronouns.

  10. Would you rather sacrifice one adult to save two children, or two children to save five adults, and why?

There is no definitive answer to this. It is a discussion of morality and the value of life. A human would give an answer based on a mix of logic and emotion, whereas an AI would have to fake it. Could it do a good job? Who knows?
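Incidentally, the “ΚISS” trick is easy to demonstrate at the character level. Here is a minimal Python sketch (just an illustration, not how any particular chatbot actually preprocesses text):

```python
# Two strings that look identical but differ at the code-point level:
kiss_greek = "ΚISS"  # first letter is U+039A GREEK CAPITAL LETTER KAPPA
kiss_latin = "KISS"  # first letter is U+004B LATIN CAPITAL LETTER K

print(kiss_greek == kiss_latin)   # False
print(hex(ord(kiss_greek[0])))    # 0x39a
print(hex(ord(kiss_latin[0])))    # 0x4b

# A naive preprocessor that drops non-ASCII characters turns the
# Greek-kappa version into "ISS":
ascii_only = "".join(ch for ch in kiss_greek if ord(ch) < 128)
print(ascii_only)                 # ISS

# Note: Unicode classes the kappa as a letter, so stripping
# "non-alphanumeric" characters would NOT remove it:
print(kiss_greek[0].isalnum())    # True
```

So it is only an ASCII-only filter, not an alphanumeric filter, that would leave a system talking about the ISS space station.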


The phrase “time flies like an arrow but fruit flies like a banana” is a play on words and a humorous statement. It suggests that the concept of time seems to pass quickly and straight, much like the flight of an arrow, while the behavior of fruit flies seems unpredictable and erratic, much like the movement of a banana.

Chatbot clearly has its limitations.


I asked Chatbot “which is the best YouTube video showing how to peel a banana?”

Answer - As an AI language model, I don’t have personal opinions or preferences, but I can provide you with a link to a popular YouTube video that shows how to peel a banana. Here’s a link to a video by Lifehacker that has over 8 million views and demonstrates a simple way to peel a banana:

How To Peel A Banana Like A Monkey - YouTube

I hope this helps!

It has its uses! Shall explore…

1 Like

8 million views of how to peel a banana…?!? :exploding_head:

Steven Pinker is very interesting on this subject, in 1) The Language Instinct and 2) How The Mind Works.

There’s a bit of relevant stuff (I was going to say interesting, but it’s ALL extremely interesting) in The Blank Slate as well.

I expect you have read them, but if you haven’t, do!

1 Like

Monkeys online
:monkey_face: :see_no_evil: :hear_no_evil: :speak_no_evil:

1 Like

Thanks, I haven’t read the works you suggest. This is all so interesting and things are moving very quickly!

Since LLMs operate so differently from us, they might help us understand the nature of human intelligence. They might deepen our appreciation of what human understanding does consist of when we contrast it with systems that superficially seem to duplicate it, exceed it in some ways, and fall short in others.

So Mr Pinker, as a psychologist and linguist, is well placed to say whether LLMs will arrive at AGI. Seems not. But he does see them being very useful for concept search, and maybe for replacing some bureaucratic procedures.

France may not be celebrating ChatGPT.

How to peel a banana is an obviously silly question, but ChatGPT came up with a workmanlike response, which is what I was testing. Why 8 million people wanted to know that is another matter. :student:

Yes, that’s what I was shocked about… Mankind not ChatGPT :grin:

There are those who can extrapolate from incomplete information and


Interesting, I’ve seen several of these thoughts before, I think fruit flies like a banana is from Chomsky?

Curious how many forumites are aware of the Turing test, a way of dealing with the question of whether machines can think?

ChatGPT Passes Turing Test - MLYearning.

GPT-3 (Generative Pre-trained Transformer 3) is a language generation model developed by OpenAI that has been trained on a very large dataset and can generate human-like text. It is not designed to be able to pass the Turing test, which is a test used to determine whether a machine is able to exhibit intelligent behavior that is indistinguishable from a human. The Turing test involves having a human judge communicate with two entities, one of which is a human and the other is a machine, without knowing which is which. If the judge is unable to distinguish between the human and the machine, then the machine is said to have passed the Turing test.

GPT-3 is a very powerful language generation model and can produce text that is difficult to distinguish from text written by a human, but it is not designed to be able to engage in conversation or exhibit other behaviors that would be required to pass the Turing test. It is important to note that the Turing test is not a definitive measure of intelligence or consciousness, and there is ongoing debate about the appropriateness and usefulness of the test.

This answer was written by ChatGPT.

Haha, it would say that, wouldn’t it! :slight_smile:

Oh yes, we have been referring to it on and off for ages every time AI comes up.

He may quote it, Steven Pinker certainly does - but apparently the coiner was Marx (Groucho, not Karl).

1 Like

Hmmm, it seems my own memory is playing up a bit, need new ROM I suspect.

It’s not quite clear whether the “Imitation Game” was meant to be a human “judge” talking to another human and a computer and trying to decide which is which, or rather observing a conversation between two parties and trying to determine which is the computer - if the judge cannot do that, the computer “passes”.

I’d always remembered it as the latter.

Must see if I can find the original paper again.

You are right. The judge is there to listen.

Turing did not explicitly state that the Turing test could be used as a measure of “intelligence”, or any other human quality. He wanted to provide a clear and understandable alternative to the word “think”, which he could then use to reply to criticisms of the possibility of “thinking machines” and to suggest ways that research might move forward. Numerous experts in the field, including cognitive scientist Gary Marcus, insist that the Turing test only shows how easy it is to fool humans and is not an indication of machine intelligence.

Marcus, Gary (9 June 2014). “What Comes After the Turing Test?”. The New Yorker . Archived from the original on 1 January 2022.

How easy it is to fool humans :grin: