(How) would you know if one of us was a chatbot?

That’d be a Chaibot.

Meanwhile……

3 Likes

TL;DR programmers aren’t out of a job yet.

OK, I tried asking it a question. Having recently upgraded a few of my machines to Fedora 37 I am going to have to start writing my firewall descriptions in nftables, rather than the venerable iptables. I’m not *that* big a fan of firewalld: it’s OK for simple stuff, but I don’t really trust it for more complex setups, and it creates/uses a huge amount of boilerplate that, for the most part, I don’t need.

So,

This is what I got back.

# Define the limit and time interval
define ssh_conn_limit = 4
define ssh_time_interval = 600

# Create a new chain for SSH connections
table inet filter {
    chain ssh_conn_limit {
        # Match all new incoming SSH connections
        tcp dport 22 counter packets 0 bytes 0 comment "ssh connection limit" 
            recent name ssh_recent timeout $ssh_time_interval add

        # Drop connections if the number of connections exceeds the limit
        limit rate $ssh_conn_limit/minute counter packets 0 bytes 0 drop comment "exceeded ssh connection limit"

        # Accept all other SSH connections
        accept
    }
}

# Apply the chain to the input traffic
table inet filter {
    chain input {
        # Match all incoming TCP packets
        tcp dport 22 jump ssh_conn_limit
    }
}

I’m not totally convinced.

I need to have a look at what it has done in detail. It might well work as I asked and, as I said, I’m new to nftables so I need to take time to understand it all, but on a quick look I think there are three problems.

First off, I do really like the fact that it put a couple of defines at the top of the code so that parameters can be adjusted. I didn’t ask it to do this, and it’s good practice, so a definite :+1: there.

I quite like the fact that it is a “complete” example as well, but with the actual solution self-contained. Another :+1:

BUT

It hasn’t really understood that I want to rate limit only new TCP connections; I don’t want to throttle ssh packets belonging to established connections, so there really ought to be a “ct state established” in there somewhere. I only want to add incoming new connections to the “recent” list (nice generation of the list name there, as well), so there needs to be a “ct state new” as part of that rule.
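
For the record, the sort of conntrack matches I mean would be something like this (just a sketch, not tested, and I’m reusing the ssh_conn_limit chain name from its answer):

```
table inet filter {
    chain input {
        type filter hook input priority filter; policy accept;

        # Leave packets on established/related connections alone
        ct state established,related accept

        # Only brand-new inbound SSH connections go to the limiter
        tcp dport 22 ct state new jump ssh_conn_limit
    }
}
```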

Finally, it has used the “recent” filter to track connections (it looks correct, in that it’s what you’d use in iptables, but I can’t see it documented for nftables at all), yet it never refers to that list again - it just uses a conventional rate-limit filter to drop (all) packets.

So, I don’t actually think the supplied answer will work.

I know that some people have said they got good solutions by working iteratively with the tool - but that requires either recognising that the answer is incorrect/limited, or trying it and discovering its limitations, and then knowing how to push ChatGPT towards a better solution (which sounds like programming skills to me).

So, I don’t think programmers are quite out of their jobs yet.

Of course, 70 years ago they said that, with SQL and COBOL, managers would be able to write code in English and there would be no need for programmers - and apparently it’s still a job with prospects.

1 Like

Found one!

2 Likes

Excellent, pushing it for more details crashed it.

It’s convinced “recent” is a thing, though - I understand its training data only goes up to 2020/2021, so I suppose it could have been a thing then but isn’t any longer.

Looking at the Fedora documentation, you’re supposed to do it with dynamic sets.
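
Something along these lines, going by the docs (untested; the set/chain names are my own, and note that nft limit rates use fixed units like /minute, so this isn’t exactly the 4-per-600-seconds I originally asked for):

```
table inet filter {
    # Dynamic set of source addresses; entries expire after 10 minutes
    set ssh_meter {
        type ipv4_addr
        flags dynamic
        timeout 10m
    }

    chain input {
        type filter hook input priority filter; policy accept;

        # Drop sources opening new SSH connections too fast
        tcp dport 22 ct state new \
            add @ssh_meter { ip saddr limit rate over 4/minute } drop

        # Remaining SSH traffic is accepted
        tcp dport 22 accept
    }
}
```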

Bottom line is that I’m a bit less worried than I used to be.

That said, a comp sci lecturer that I know is definitely worried. They have been running their exam questions through it to see how well it does, to improve their ability to spot its output, and to improve their exam questions as well.

Can I just say a great big thank you to @toryroo for inviting me into this thread, the contents of which I have absolutely no understanding. :roll_eyes: :rofl:

10 Likes

How would we know??

Turing’s test to tell whether we are talking to a computer or a human is being superseded. We must use our own reasoning to stay aware.

In the months and years to come, these bots will help you find information on the internet. They will explain concepts in ways you can understand. If you like, they will even write your tweets, blog posts and term papers.

They will tabulate your monthly expenses in your spreadsheets. They will visit real estate websites and find houses in your price range. They will produce online avatars that look and sound like humans. They will make mini-movies, complete with music and dialogue.

“This will be the next step up from Pixar — superpersonalized movies that anyone can create really quickly,” said Bryan McCann, former lead research scientist at Salesforce, who is exploring chatbots and other A.I. technologies at a start-up called You.com.

As ChatGPT and DALL-E have shown, this kind of thing will be shocking, fascinating and fun. It will also leave us wondering how it will change our lives. What happens to people who have spent their careers making movies? Will this technology flood the internet with images that seem real but are not? Will their mistakes lead us astray?

“All the President’s Men,” Carl Bernstein and Bob Woodward’s classic tale of uncovering Watergate, tells a story about a history paper that Mr. Woodward wrote as a freshman at Yale. After reading countless documents that described King Henry IV standing barefoot in the snow for days as he waited to beg forgiveness from Pope Gregory in 1077, Mr. Woodward included the anecdote in his paper. His professor gave the paper a failing grade, explaining that no human being could stand barefoot in the snow for so long without his feet freezing off.

“The divine right of kings did not extend to overturning the laws of nature and common sense,” the professor said.

Drawing from endless documents about King Henry’s visit to Canossa, ChatGPT might well make the same mistake. You must play the professor.

Certainly, these bots will change the world. But the onus is on you to be wary of what these systems say and do, to edit what they give you, to approach everything you see online with skepticism. Researchers know how to give these systems a wide range of skills, but they do not yet know how to give them reason or common sense or a sense of truth.

That still lies with you.

Turing’s “test” is always misunderstood/mis-described.

It was originally postulated as a third party monitoring the conversation between two others - a human and a computer. The computer was said to have “passed” if the impartial observer could not tell which was the human and which the computer.

It is a mistake to assume that ChatGPT is “smart”. It is very, very good at understanding a question put to it in natural language, and very good at composing a natural language reply - but it is quite happy for that reply to be utter garbage, as above (and as disclaimed on the ChatGPT website).

We are still some way off the “holy grail” of the General AI, though we’re a lot closer than we used to be.

It’s a bit like Boston Dynamics robots - they are almost frighteningly good, but there’s no real intelligence, it’s all choreographed - for now

4 Likes

Since people don’t even do that now with “traditional” media what chance they will with AI generated “information”? Welcome to the Twilight Zone :scream: Pass me the Soma, I need another hit.

1 Like

Nothing wrong with that provided you recognise and make intelligent use of misinformation.

In this instance I’d imagine the original anecdote had been repeated and repeated uncritically and with additional embroidery. One could possibly trace its evolution by chronologically sequencing examples. If a student did that to demonstrate the unreliability of second and third hand sources, I’d give them a very good grade.

One of the Open University modules I used to teach required students to analyse the reliability or otherwise of historical accounts of Cleopatra (all surviving historical accounts are second-hand and were written by Romans in the centuries following her death).

2 Likes

There’s an interesting podcast by Tim Harford on this:

I was on Amazon services via chat last week, and the responses seemed very scripted, so I asked if they were a bot. They replied that no, they were human, and asked what made me think that. It did strike me that that would be exactly the response an AI bot would give; however, a few things suggested they probably were human.

It won’t be long before we can no longer tell them apart. Combine that with deep-fake technology, and we’re going to have a hard time distinguishing truth from fiction within the decade. It’s going to open up all sorts of cans of worms… It also begs the question: how do we know we’re not already in a simulation? We could be a simulation within a simulation within a simulation, ad infinitum.

1 Like

Certainly is!

I hadn’t realised the public had been conversing online with chatbots since the early 90s. The ‘Amelie’ bot sounded just like a regular narcissist. Mind you, I always wanted an Aibo robot dog, so I guess we convince ourselves we are hearing what we want to sometimes.

Listening to this podcast, I can’t help wondering how very worried all the workers at scam factories in India must be feeling. Without a doubt, their greedy bosses will soon be replacing them all with ChatGPT plug-ins, and the online scamming can continue - thousands more scams per day at zero increased cost.

I think we need to ask ChatGPT for some 100% foolproof questions we can ask to see if we’re talking to a chatbot. Or a narcissist
:smirk_cat:

1 Like

“Alphabet, the parent company of Google, unveiled Bard, a chatbot powered by artificial intelligence. It will rival ChatGPT, a popular Microsoft-backed venture that generates text, images and video that seem as though they are created by humans. Various firms plan to add similar technology, including Baidu, a Chinese search giant. Bard will be tested on some users before a full launch in the coming weeks” - Economist.

1 Like

https://www.aa.com.tr/en/science-technology/artificial-intelligence-uncovers-lost-work-by-spanish-playwright-lope-de-vega/2802624#

There’s a sweet Isaac Asimov story with a storytelling chatbot called a Bard in it :slightly_smiling_face:

1 Like

I wonder what would happen if the two had a dialogue!

Here is a sad and automatic development

No real surprise that some humans would immediately look for a way to twist what could be such a wonderful tool into an accelerated pathway for wickedness.

I wonder what AI is learning about humans

It may just be a coincidence of timing, but I expect there will be many, many more discoveries like this.

Doesn’t actually say the code was broken using ChatGPT or similar, but:

The team was computer scientist and cryptographer George Lasry, music professor Norbert Biermann, and physicist Satoshi Tomokiyo, who stumbled across 57 letters in the national library of France’s online archives.

Maybe an achievement using Fragmentarium.

An article to help decide which chatbot you might want to use