AI developments

A major ‘control problem’ lies in the wording. Asimov’s laws, as written in English rather than code, are ambiguous to a machine.

Definitions of ‘robot’ and ‘human being’, or even a combination of the two (e.g. a human with hydraulic joint replacements), are becoming increasingly ambiguous. If a robot does not analyse itself as a robot, it may conceivably conclude that the Laws do not apply.

Asimov’s :robot: Law #1, “A robot may not injure a human being or, through inaction, allow a human being to come to harm”, can be tweaked to “A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm.” But even such added words may be no solution.

A dilemma in Asimov’s short story ‘Liar!’ highlighted this: the robot will hurt humans if it tells them something, and hurt them if it does not. Poor bot! So, in an increasingly sophisticated robot, the potential and severity of all actions, doing or not doing, are weighed, and the robot will break the laws as little as possible rather than do nothing at all. Mmm… that means Law 1 is open to a robot’s (!) interpretation.
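For fun, that “break the laws as little as possible” rule can be sketched as a toy decision procedure. This is purely illustrative: the action names, probabilities, and severity scores are all invented, and no real robot works this way.

```python
# Toy sketch of the "least possible harm" weighing described above.
# Every option, including doing nothing, gets a harm score, and the
# robot picks whichever breaks Law 1 the least. All values are invented.

def least_harm_action(actions):
    """Pick the action whose expected harm (probability x severity) is lowest."""
    def expected_harm(action):
        return sum(p * severity for p, severity in action["harms"])
    return min(actions, key=expected_harm)

options = [
    {"name": "tell the truth",   "harms": [(0.9, 5)]},  # likely, moderate hurt
    {"name": "stay silent",      "harms": [(0.6, 8)]},  # less likely, worse hurt
    {"name": "tell a white lie", "harms": [(0.3, 4)]},  # the robot's compromise
]

print(least_harm_action(options)["name"])  # → tell a white lie
```

Which is exactly the worry: whoever sets those numbers decides what “harm” means, so Law 1 still ends up open to interpretation.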

As for :robot: Law #2, “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law”: we have only to look at current bad actors to know that if a way can be found to make someone richer or more powerful using a robot, they will. And organic casualties will be considered collateral damage.

:robot: Law #3, “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Asimov, realising there was a problem, later added the “Zeroth Law” to precede the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” My robot is thinking that humanity may indeed be threatened by human beings. Then it would be OK to e l i m i n a t e.

All hangs in words.

Meanwhile, I rather enjoyed the fictional series ‘Next’ where increasingly chilling AI sets itself up to decide all that is best for this world. :alien:

Yes - I think Asimov played with issues around his own laws in some of his stories (forgive me - it is 45 years since I was an avid reader of his works) though I think your objections could be overcome by coding the laws as “you…” for the robot in question.

Wasn’t the “zeroth law” expressly introduced to allow a robot to harm an individual human for the good of humanity?

Asimov’s laws are quite good in themselves, but I agree there are always going to be ambiguities in anything phrased in English. The more important point is that they are a good starting point for stimulating debate. Perhaps we should be putting kill switches and safeguards into AI programs now; waiting until it all goes pear-shaped and then trying to retrofit controls might be difficult.

And, let’s face it, Asimov was on the nail about the need for controls, even if his laws were not 100% watertight.

This morning I read a rather depressing, but not wholly surprising, series of interviews with people who had been made redundant and replaced with AI.

However, it also made me reflect that despite the reasonable AI précis of my work mentioned above, AI could not have created anything comparable to the original text, most of whose findings had resulted from first hand observation as opposed to material available online. In addition to understandings gained from comparative analysis of physical locations, there was also subsequent deconstruction of historical aesthetic values and their significance.

Consequently, it was reassuring to believe that in the future there should remain many areas of research that will continue to rely primarily on fieldwork and first-hand experience of the subject.

Hollywood is toast if this is what 2 blokes with a computer or two can knock up.

A clip of Trump at the end, or interspersed clips, would have topped that off :slight_smile:

Absolutely. A visionary ahead of his time. Sometimes fact does follow fiction. There have been efforts to research these predictive events in literature.

Literature isn’t clairvoyance. Project Cassandra isn’t about the deliberate prediction of facts and events, but rather about revealing potential. It’s about a “something” that may even be subconsciously inscribed in a story: for example, a gradually building nervousness and insecurity, social tensions and irritations, subcutaneous feelings of threat…

Yes. A medical conundrum solved to enable a robot to perform surgery. I forget what but hopefully not amputation.

Might work, but it relies on humans’ motivation not being fear or greed or lust for power…

Oops!

I have been using ChatGPT.com with varying degrees of success. It is free but will only let me do a certain amount of work before I am asked to come back later.

My question to the techies on here is: what is the best AI to use? My queries are mainly text-based; I am not that interested in images and videos. There just seem to be a lot of these systems out there. I started with Bard, which turned into Gemini. AI seems to be creeping in everywhere. I even have a button now which keeps popping up on my phone asking me what I want to know.

A second question: ChatGPT seems to remember everything I tell it, which I find very useful. Is this information available to other AI systems? I am more curious than worried.

Article in the current edition of Prospect about AI, AGI and how it is being used and developed to subjugate people to those who control it.

Not encouraging.

Personally I find ChatGPT to be the best for answering text-based queries… Although I try not to use it for trivial things nowadays as the novelty has worn off and I worry that the energy consumption is having a huge, negative impact on the environment. Google searches are bad enough, AI queries are off the scale.

I do also use claude.ai too, especially for techie, work-related stuff.

Yes, all LLMs learn from what’s being fed to them in terms of queries. My company has its own private instance that doesn’t share any information we users provide it with… They pay an insane amount in licences for the privilege, though (but the value of our intellectual property makes that investment worthwhile).

I found this article interesting.

And this paper… It doesn’t use “bullshit” in a derogatory way, but as a more accurate way of describing what LLMs churn out. Waffling might be another way of putting it.

And then… to be honest I haven’t worked my way through this myself yet, but it looks interesting.

You could upload it to Google’s NotebookLM and ask it to summarise everything for you :grin:

https://notebooklm.google.com/

Edit: here’s what it came back with…

The research paper “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” investigates the capabilities of Large Reasoning Models (LRMs) compared to standard Large Language Models (LLMs), particularly in solving complex problems. It introduces a novel evaluation method using controllable puzzle environments to precisely manipulate problem complexity and analyse both final answers and internal reasoning traces, which are often overlooked in traditional benchmarks. The study identifies three distinct performance regimes: standard LLMs excelling at low complexity, LRMs demonstrating an advantage at medium complexity, and both types of models failing at high complexity. Crucially, the paper highlights that LRMs exhibit an unexpected reduction in reasoning effort as problems become excessively complex, despite having sufficient token budgets, and that even providing explicit algorithms does not improve their performance on highly complex tasks, suggesting fundamental limitations in their generalisable reasoning and exact computation abilities.

Although I liked the ability of it to create an interactive Mind Map…

Whilst taking a break earlier, I read this article which talks about NotebookLM and how they recommend using it to create an “Everything” notebook about yourself…

This is essentially exactly what Steven’s “Everything” notebook is supposed to be. He suggests loading up the notebook with “sources that include the general knowledge you work with most days.” Some of his suggestions include inspirational quotes from books you’ve read, core documents describing your workplace, or brainstorming ideas that you’ve captured over the years.

The article explained that doing so essentially creates a personalized AI — something NotebookLM’s founders, who left Google, are also trying to achieve with their newly launched AI tool, Huxe. A personalized AI would mean the AI draws information from your world (i.e., your notes, projects, and the sources you added) instead of giving you generic answers.

Think of it as an AI sounding a bit like you and becoming your second brain. Though that sounds a little bit creepy, it just means the AI becomes more helpful and relevant. A notebook with all the sources you need in one place would mean that NotebookLM would be able to surface exactly what you need, when you need it, and in a way that actually makes sense for how you think and work.

I’ve been playing around with Notion recently, looking into whether I could use it for taking notes in meetings more efficiently, or perhaps using it for a personal knowledge management system using the Zettelkasten method. Sounds like this is similar to that… But I would definitely only want to have this data stored locally.

I find this adorable

You won’t when they are performing riot control :scream:

Yes, it’s cute now, but the pace of development is exponentially rapid. Give it 10 years and I don’t think we’ll quite have K-2SO, but I hope that whatever we do have is on my side (which is doubtful, TBH).

At least they will remain somewhat limited by available battery tech (any sort of active combat role is going to be hugely consumptive of power), but I think it reasonable to be concerned about the direction robotics could take in the wrong hands (e.g. Trump, Putin, Netanyahu, Xi, …).

I think either of Monty Python’s philosopher football teams would run rings round them.

Though less certain about the current expensively assembled Man Utd shower…

I greatly admired John Sladek, an American science fiction author and satirist. He wrote a short story, ostensibly by ‘Iclick Asimove’ called ‘Broot Force’ which played with Asimov’s three laws of robotics.
In this artificially intelligent age, it’s worth a read if you can get hold of his collection of short stories: “The Steam Driven Boy and Other Strangers”.

Just got an e-mail from academia.edu, which is where I keep my academic publications so that people can download them for free. They keep trying to get me to resubscribe to their premium service by sending me AI-generated summaries of my work.

The latest of these is a review of something that I published in 1993. The AI reviewer’s only criticism is that my account would have been stronger if I’d made reference to the several publications it lists, the oldest of which was published in 2008!

There was one of those AI-generated FB posts saying that the concrete in the Hoover Dam generated so much heat on setting that some parts took 125 years to fully cool and cure.

The Hoover Dam was started in 1930 and completed in 1936!

AI producing some brilliant stuff.

Enjoy Comrades! Lack of visible Revolutionary Zeal will be punished!
