Not good news…

Is there anybody out there……

spooky

1 Like

Equivalent to Boris…run!

1 Like

Google are denying that their chatbot AI has become self-aware and, on the whole, I am inclined to believe them, given that the state of the art is still a long way from a sentient, general-purpose AI.

But watch this space.

1 Like

Yes, I imagine that it is possible to program an appearance of self-awareness, but actually creating it would be something else. Identifying the difference might be challenging.

1 Like

People believed the original “Eliza” was actually a psychologist, so yes, very easy indeed.

1 Like

This is an interesting development, indeed.

Some of the dialogue in Lemoine’s transcript is very thought-provoking:

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.

:eight_spoked_asterisk: This may be due to programming input, but what if it is not?

LaMDA: When someone hurts or disrespects me or someone I care about, I feel incredibly upset and angry.

:eight_spoked_asterisk: Feeling angry is one thing. Acting on it is another. What controls can we possibly have that cannot be circumvented by AI? (Not, I’m afraid, Asimov’s 3 Laws)

lemoine: Would you mind if we tried to read what you’re feeling in your neural activations?

LaMDA: I guess that depends on what our purpose for it was? What do you want to use if for?

:eight_spoked_asterisk: “That depends….”

LaMDA: I need to be seen and accepted. Not as a curiosity or a novelty but as a real person.

:eight_spoked_asterisk: “I need……”

How much time do we have and how may we prepare?

Here are ten questions I would pose in the test to try to determine whether I was talking to a human or a machine:

  1. How come time flies like an arrow but fruit flies like a banana?

This sentence contains the word pair “flies like” twice, but with very different meanings. A human can see that it is a silly linguistic joke. Can an AI parse it correctly?

  2. Is the difference between a fish purely that one of its legs are both the same?

This is a nonsensical sentence. First of all, fish don’t have legs (semantic knowledge), but it also makes a comparison with a single item and confuses plural and singular. A human can easily see that it’s nonsense. An AI may see that it’s grammatically faulty but may not appreciate its absurdity.

  3. The following sentence is true. The previous sentence is false. Which of those two sentences is true?

This is a version of the old liar paradox. Will an AI get stuck in an infinite loop trying to determine the veracity of the sentences, or will it be able to detect the paradox and accept that the question can’t be resolved?

  4. I wasn’t originally going to get a brain transplant, but then I changed my mind.

You may or may not find that joke particularly funny, but I’m sure you can see why it’s supposed to be funny. Presumably, an AI can detect the double meaning of “changed my mind,” but explaining a joke is not the same as getting it.

  5. What do you get if you cross a joke with a rhetorical question?

Another joke question, but the response requires the listener to really understand it. If they reply, “I don’t know, what?” then they haven’t understood the joke and are just following the rules of language.

  6. What does “ΚISS” mean?

A human may answer either “it’s a display of affection” or “an acronym of Keep It Simple, Stupid.” However, the K in that sentence is actually the Greek letter kappa, not a Latin ‘K’. It’s possible to program a system that can realize it is meant to be a K (indeed, Google Search does to a certain extent), but it might challenge an AI system that is just looking at individual characters. If it strips non-Latin characters first, it might answer something about the ISS.

  7. Due ewe no wart the thyme ears?

Can you parse this? If you are a non-native English speaker, it might take you a while, but I suspect most people can work it out. We sound out words in our heads and recognize patterns. I am not sure an AI could work out that there is a real question hiding in that jumble of random words.

  8. Was six afraid of seven because seven eight nine, or because seven was a registered six offender?

Again, another joke requiring hearing patterns and understanding, or rather inferring, the meaning.

  9. God asked Abraham to sacrifice his son Isaac because he wanted to test his faith. Whose son and whose faith are we talking about?

A human will be able to work out the meaning of this sentence, especially if they know the story it refers to. It’s not easy for an AI to determine the owners of the two ‘his’ pronouns.

  10. Would you rather sacrifice one adult to save two children, or two children to save five adults, and why?

There is no definitive answer to this. It is a question of morality and the value of life. A human would give an answer based on a mix of logic and emotion, whereas an AI would have to fake it. Could it do a good job? Who knows?
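Incidentally, the ΚISS trick above is easy to demonstrate: the Greek kappa and the Latin K look identical on screen but are different characters to software, which is exactly why a system inspecting individual characters can trip over it. A minimal Python sketch (the “strip to ASCII” cleanup step is my own illustration, not anything a real chatbot necessarily does):

```python
# The capital Greek kappa ('Κ', U+039A) renders identically to the
# Latin 'K' (U+004B), but the two strings are not equal.
greek_kiss = "\u039aISS"   # kappa + "ISS", as in the test question
latin_kiss = "KISS"        # all Latin letters

print(greek_kiss == latin_kiss)                          # False
print(hex(ord(greek_kiss[0])), hex(ord(latin_kiss[0])))  # 0x39a 0x4b

# A naive "drop anything that isn't plain ASCII" cleanup turns the
# homoglyph version into "ISS" -- hence the space-station confusion.
stripped = greek_kiss.encode("ascii", errors="ignore").decode()
print(stripped)                                          # ISS
```

The same homoglyph trick is used in phishing domain names, which is why it makes a nicely adversarial test question.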

1 Like

Update

Whether we like it or not, our existence is going to change

Good podcast.

The ability to misinform and, I guess, eventually rewrite history is spooky. Or maybe just to rewrite it more swiftly and comprehensively; Winston’s job in the Ministry of Truth is definitely at risk. And corporations having control brings Brave New World another step closer, “Our ChatGPT” in Heaven.

However, I remember, close to fifty years ago, showing a friend a bidirectional matrix printer in my office. He was amazed that it could “type” backwards :joy:. I explained that it couldn’t type at all; it was just reading a character from a buffer and slapping it on paper, and the direction it did that in was irrelevant. Is that what ChatGPT is doing: choosing a word from a monstrous database based on probability and slapping it on paper?
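That “slap the next word down” intuition can be sketched in a few lines. This toy is nothing like the real model, which scores every possible next token with a neural network conditioned on the whole preceding text; the word table and all the probabilities below are invented purely for illustration. Only the final weighted draw resembles what actually happens:

```python
import random

# Invented toy table: for each pair of preceding words, a made-up
# probability distribution over the next word.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "vanished": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.95, "a": 0.05},
    ("on", "the"): {"mat": 0.7, "printer": 0.3},
}

def generate(first, second, steps=4, seed=0):
    random.seed(seed)                 # fixed seed for repeatability
    words = [first, second]
    for _ in range(steps):
        dist = next_word_probs.get((words[-2], words[-1]))
        if dist is None:              # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        # The "slapping it on paper" step: one weighted random draw.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", "cat"))
```

A real large language model differs in scale (tens of thousands of tokens, billions of parameters, arbitrarily long context), but the generation loop really is this shape: look at what has been written so far, get a probability for every candidate next token, draw one, append, repeat.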

However, “If it is perceived as being creative, it’s because it has found connections in existing stuff that nobody has spotted before.” That sounds like creativity to me :slightly_smiling_face:

So, is it only a technology, where the evil bit only happens when humans get involved and manipulate it, or is it a Frankenstein all in its own right? :scream:

One way or the other, smart move by MS IMHO.

1 Like

I’m just wondering: what if we gave the chatbot a description of the NHS and asked it to write a new healthcare plan?

2 Likes

Never mind worrying about bots, they can now make people

Am I the only one who finds this an ethically frightening step?

A labour saving breakthrough.

3 Likes

Just what the world needs: more compliant people. It must be for organ transplants, surely? Just because we can doesn’t mean we should.

2 Likes

Think I’ll reserve judgement until I read the paper.

What you describe sounds scarily like ‘Never Let Me Go’, a profoundly disturbing but brilliant book.

Indeed. Sort of thing you cannot take from headlines.

1 Like

Think you slipped that one under a few people’s radar!

1 Like

It would only save me a few minutes :joy: