Answers to questions

More (from Wiki) on how the Turing test is unsuitable for where we are with AI now

The Turing test does not directly test whether the computer behaves intelligently. It tests only whether the computer behaves like a human being. Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:

Some human behaviour is unintelligent

The Turing test requires that the machine be able to execute all human behaviours, regardless of whether they are intelligent. It even tests for behaviours that may not be considered intelligent at all, such as susceptibility to insults,[75] the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviours in detail, it fails the test.

This objection was raised by The Economist, in an article entitled “artificial stupidity” published shortly after the first Loebner Prize competition in 1992. The article noted that the first Loebner winner’s victory was due, at least in part, to its ability to “imitate human typing errors.”[51] Turing himself had suggested that programs add errors into their output, so as to be better “players” of the game.[76]

Some intelligent behaviour is inhuman

The Turing test does not test for highly intelligent behaviours, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test.

Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.[77]

The Language-centric Objection

Another well-known objection to the Turing test concerns its exclusive focus on linguistic behaviour (i.e. it is only a “language-based” experiment, while all the other cognitive faculties are not tested). This drawback downplays the role of the other modality-specific “intelligent abilities” of human beings that the psychologist Howard Gardner, in his “multiple intelligence theory”, proposes to consider (verbal-linguistic abilities are only one of those).[78]

Consciousness vs. the simulation of consciousness

Main article: Chinese room

See also: Synthetic intelligence

The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of the mind. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.
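To make the point concrete, the ELIZA approach can be sketched in a few lines. The rules below are hypothetical stand-ins, not Weizenbaum's original script; they show how a plain list of pattern-and-template pairs can produce conversation-like output with no understanding at all.

```python
import random
import re

# Hypothetical ELIZA-style rules: each pattern maps to canned reflections.
# A real script would have many more entries; the mechanism is the same.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(utterance: str) -> str:
    """Reflect the first matching pattern back at the speaker."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured fragment back inside a template.
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    # No rule matched: fall back to a non-committal prompt.
    return random.choice(DEFAULTS)
```

The program never represents meaning; it only rearranges the user's own words, which is exactly why its occasional success at seeming human says little about intelligence.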

John Searle has argued that external behaviour cannot be used to determine if a machine is “actually” thinking or merely “simulating thinking.”[45] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be “about” something.)

Turing anticipated this line of criticism in his original paper,[79] writing:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[80]

Naïveté of interrogators

In practice, the test’s results can easily be dominated not by the computer’s intelligence, but by the attitudes, skill, or naïveté of the questioner.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term “average interrogator”: “[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning”.[81]

Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the “interrogators” are not even aware of the possibility that they are interacting with computers. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.

Early Loebner Prize competitions used “unsophisticated” interrogators who were easily fooled by the machines.[52] Since 2004, the Loebner Prize organisers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[82]

One interesting feature of the Turing test is the frequency of the confederate effect, when the confederate (tested) humans are misidentified by the interrogators as machines. It has been suggested that what interrogators expect as human responses is not necessarily typical of humans. As a result, some individuals can be categorised as machines. This can therefore work in favour of a competing machine. The humans are instructed to “act themselves”, but sometimes their answers are more like what the interrogator expects a machine to say.[83] This raises the question of how to ensure that the humans are motivated to “act human”.

Silence

A critical aspect of the Turing test is that a machine must give itself away as being a machine by its utterances. An interrogator must then make the “right identification” by correctly identifying the machine as being just that. If, however, a machine remains silent during a conversation, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess.[84] Even taking into account a parallel/hidden human as part of the test may not help the situation, as humans can often be misidentified as being a machine.[85]

Impracticality and irrelevance: the Turing test and AI research

Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research.[53] Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: “AI researchers have devoted little attention to passing the Turing test.”[86] There are several reasons.

First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as object recognition or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Stuart Russell and Peter Norvig suggest an analogy with the history of flight: Planes are tested by how well they fly, not by comparing them to birds. “Aeronautical engineering texts,” they write, “do not define the goal of their field as ‘making machines that fly so exactly like pigeons that they can fool other pigeons.’”[86]

Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence.

Turing did not intend for his idea to be used to test the intelligence of programs — he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence.[87] John McCarthy argues that we should not be surprised that a philosophical idea turns out to be useless for practical applications. He observes that the philosophy of AI is “unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.”[88]

Gordon Moore, Intel co-founder and creator of Moore’s Law, dies aged 94

2 Likes

Dear ChatGPT, write me a short story about a farm in France, with stone barns, farming folk, fields, crops, sunflowers, a scythe, a river, blue sky, cats, sacks of grain, an ancient tractor and a rusty old bicycle, in not more than 500 words.

Here it is, for what it’s worth…!

In the heart of rural France, nestled among vast fields of sunflowers, stood an old farm. The stone barns and cottages were weathered and worn, but they exuded a certain rustic charm that the farming folk loved. They worked tirelessly on the land, tending to the crops with their scythes and hauling sacks of grain to and fro.

The farm sat beside a winding river, which sparkled in the sunlight under a bright blue sky. Cats roamed the farm, lazily basking in the sun and occasionally pouncing on unsuspecting mice.

In the centre of the farmyard sat an ancient tractor, rusty and creaky, but still useful for hauling heavy loads. Beside it leaned a worn old bicycle, its wheels bent and spokes missing, but it still served its purpose for the farmer’s son.

Despite the hard work, the farming folk loved their way of life. They relished the quiet, peaceful days spent under the warm sun, surrounded by the golden fields of sunflowers. And every evening, they gathered around a large oak table, sharing stories and laughter over a simple but satisfying meal.

One hot summer day, as the sun beat down on the farm, a group of travellers arrived at the gates. They were weary and hungry, having travelled for days without rest. The farmer and his family welcomed them with open arms, offering them a place to rest and food to eat.

As the travellers rested in the shade of the stone barns, they marvelled at the beauty of the farm. They had never seen such vast fields of sunflowers, or such a peaceful way of life.

That evening, as the farming folk and the travellers sat around the table, the travellers shared stories of their adventures. They spoke of far-off lands, of towering mountains and bustling cities, of vast oceans and exotic creatures.

The farming folk listened intently, their eyes wide with wonder. And as the night drew on, they realized that they had everything they needed right there on the farm. They had the beauty of the fields, the comfort of their homes, and the warmth of their community.

As the travellers departed the next day, the farming folk watched them go with a newfound appreciation for their way of life. They realized that the most precious things in life were often the simplest, and that the beauty of the farm and the joy of their community were worth more than any riches or adventures that lay beyond their fields of sunflowers.

2 Likes

Crikey @Bonzocat ! I think I can see a cunning plan here by AI to send us all back to the 19th century.

I better start buying candles to add to my stock of :roll_of_toilet_paper::roll_of_toilet_paper::roll_of_toilet_paper:

I think it must have been the mention of a scythe in my instructions…!

I asked ChatGPT - can artificial intelligence pose a catastrophic threat to the world? Part of its answer included “For example, an AI system designed to maximize paperclip production could destroy the world in pursuit of that goal.” Really?

Its full answer…

Artificial Intelligence (AI) has the potential to bring about many benefits to the world, including improvements in healthcare, transportation, and communication. However, like any powerful technology, it also has the potential to pose catastrophic threats.

One of the most significant concerns with AI is the possibility of a superintelligence that could become uncontrollable and act against human interests. If an AI system were to become more intelligent than humans, it could potentially make decisions that are harmful to humanity. For example, an AI system designed to maximize paperclip production could destroy the world in pursuit of that goal.

Another concern is the potential for AI to be used in malicious ways, such as in cyber attacks or in the development of autonomous weapons. Such uses of AI could pose a significant threat to global security.

While these concerns are valid, it is essential to note that they are currently hypothetical scenarios. Researchers and policymakers are actively working to mitigate these risks and ensure that AI is developed and used responsibly. As AI continues to advance, it is crucial to prioritize safety and ethical considerations to ensure that it is used for the betterment of humanity.

@Bonzocat re the rustic story, as I suspected it might be, about the level of a 12-year-old.

1 Like

I’ll ask ChatGPT to write the same story, but add in with my original words, some more adult words or themes…! May not be able to post the story here though – we’ll see. I wonder how prudent it will be.

And it forgot the last sentence. “After the travellers departed, the farmers started wondering where they’d left the old tractor.”

1 Like

Aaaahhhh… The old paperclip maximizer

I rather think ChatGPT has been reading Wiki.

Something rather interesting about what ChatGPT can reproduce by parsing the different texts it locates throughout the web: what happens to copyright?

If a writer finds his own prose/text/research in print, say perhaps in a media publication, assembled by a journalist using ChatGPT without crediting the original author, is this not plagiarism?

Who will be ‘policing’ all texts spewing from ChatGPT for content? Will there be a system of royalties for such partial reprints?

Authors and publishers should be aware.

Well done and thank you @Bonzocat for your efforts in running ChatGPT on our behalf!

2 Likes

Can you imagine a lifelike tactile humanoid type head perched on your desk, to which you ask questions and it replies, with a voice and other facial characteristics of your choice, using a brain load of ChatGPT? Super secretary/research assistant? It’s feasible I’d have thought, but slightly horrifying.

1 Like

Gross. Things don’t need to be humanoid, we’re prone enough to pareidolia to do fine with boxy looking things, and those horrors are in the uncanny valley.

3 Likes

I asked ChatGPT to rewrite its original story about rural France to a higher literary level in the style of John Steinbeck!

In the rolling hills of France, there was a farm, a small haven amidst the vast expanse of fields. Stone barns, weathered by years of farming tradition, stood in testament to the unyielding labour of the farming folk who called this place home.

These were hardy people, their roughened hands and sinewy arms a testament to the back-breaking toil that defined their existence. From dawn to dusk, they tended to the fields, wielding their scythes with a practiced ease, slicing through the golden wheat and emerald corn that were their lifeblood. Each swing of the scythe was like a prayer, a supplication to the bountiful earth that sustained them.

In the summer, the fields burst with the vibrant yellow of sunflowers, their nodding heads following the path of the sun. The farmers would wander through the fields, their calloused hands gently plucking the largest and most robust blooms to grace their humble abodes.

Nearby, a river flowed, providing a never-ending source of fresh water for the crops. The farmers would pause in their work to dip their toes in the cool water, feeling the rejuvenating power of the earth’s lifeblood course through their veins.

Above them, the vast expanse of blue sky stretched, a boundless canopy that whispered of endless possibility. Even the cats that roamed the farm, darting and playing among the crops, seemed to recognize the sacredness of this place.

Through the hard labour and the endless demands of the land, the farmers found solace in the knowledge that they were a part of something greater, something that transcended the limitations of their own existence. It was the eternal rhythm of the land, a symphony of toil and reward, sacrifice and bounty.

As the day drew to a close, the farmers loaded sacks of grain onto their wooden trailers hauled by tractors that had seen better days. They would cycle down the dusty road that wound through the fields, the cool evening air brushing past their faces as they made their way back to the barns.

The setting sun cast a warm glow over the landscape, infusing the fields with a serene radiance. The farmers would gather outside the barns, sharing glasses of wine and tales of their daily labours. They would watch as the cats chased each other through the fields, and the sunflowers bowed their heads in the gentle breeze.

As night descended, the farmers retired to their humble abodes, exhausted but content. They knew that they were a part of something greater, something timeless and sacred, a testament to the enduring power of the land.

Shan’t do it again…

Very good effort, though a little verbose.

I’m just thinking, no more need for publishers of Mills & Boon type chick lit and soppy romance to employ human writers. ChatGPT can churn out daydreams by the dozen in less than the blink of an eye. Repetition will actually be an advantage because readers of the genre take consolation from familiarity.

:melting_face:

I think it is possible to learn to manipulate ChatGPT to achieve whatever you strive to create. Better than Mills & Boon. I foresee people wanting to write using AI and publishing/selling their books through YouTube or other similar means. It’s probably already happening.

No it’s not. It is perfectly allowable to quote from someone else’s copyrighted works. To violate copyright, you have to publish a substantial portion of the work. What counts as a substantial portion is not clearly defined.

Thank you @hairbear . Truly a meal ticket for the legal profession. :grimacing:

First they state it is not copyright infringement, then they say “However….”

Just in case, I always take care in making quotes and crediting the source and/or author. You never know, internet postings have a long life, and one day there may be retrospective rulings as there are currently regarding artworks.

A substantial part is not defined in copyright law but has been interpreted by the courts to mean a qualitatively significant part of a work even where this is not a large part of the work. Therefore, it is quite likely that even a small portion of the whole work will still be a substantial part.

Is it not perfectly OK as long as you credit, reference or otherwise identify the copyright holder?

I got permission to quote directly a large chunk of prose from another book in my own book. I made an effort to get permission from the author and did so, not from him because he was dead, but from his publisher. But would it have still been ok as long as I made the direct reference to the writer and from whence it came, but without the expressed permission?

Yes, I expect it is better to be safe now than sorry later. I think in the case of your book, you did the right thing.

@hairbear correctly pointed out that British (common) law is a bit vague on the issue of what constitutes “a substantial part” of text copied.

I just think the vagueness leaves open a door to opportunistic litigation at some time in the future. I personally cannot live under a sword of Damocles, so I tend to err on the side of caution in all things.