Has technological progress stalled?

A fun new Stephen King release on Netflix, ‘Mr. Harrigan’s Phone.’

Now where’s that button on my iPhone?!
:smiling_imp:

I may be the only one on SF who is AI-mad, but this is so strange it may be of interest to the less mad:

At least for a short while there will be one intelligent being in the House

Aha! But she probably knows the lie detector personally :wink:


If her algorithm says chase the money for personal gain, then it won’t make any difference.

She is already joining in the ‘art as a commodity’ game. I see a hedge fund manager in her future.
:laughing:

This belongs on the Entre nous thread ‘China worries me’, but I seem to be the only one reading that now and cannot post a third time. Sooooo….

This may seem a fantasy, but in case anyone was still wondering about the next big tech innovation: it’s weaponised.

Mind you, I’m also fast becoming the only one reading this thread :pleading_face:

Horrible but not surprising at all, if you have seen what Boston Dynamics make.

An update on the world of fiction becoming a reality

Looks rather as though the war in Ukraine may become, if it is not already, a practical field trial.

What concerns me even more is that we may soon face “the Rubicon to be crossed, where the system is fully automated, choosing and prosecuting its own targets without human interference.”


You presumably know the story of Vasili Alexandrovich Arkhipov, Susannah?
How near we came - and how easy it might become to destroy the world by accident…

If you were born before 27 October 1962, Vasili Alexandrovich Arkhipov saved your life… The decision not to start world war three was not taken in the Kremlin or the White House, but in the sweltering control room of a submarine. The launch of the B-59’s nuclear torpedo required the consent of all three senior officers aboard. Arkhipov was alone in refusing permission. It is certain that Arkhipov’s reputation was a key factor in the control room debate. The previous year the young officer had exposed himself to severe radiation in order to save a submarine with an overheating reactor. That radiation dose eventually contributed to his death in 1998. So when we raise our glasses on 27 October we can only toast his memory.


The Unsung Hero. A little late for Remembrance Day, but we should raise a glass to him just the same. На здоровье! (To your health!)

It doesn’t really make me feel more confident that there will be a brave and calm person to prevent a catastrophe at the next crisis. Even worse, if AI takes autonomous control of munitions, it is even less likely that there will be.


Short answer - no

Aaaahhh… I thought I was the only one listening!

Consider AI and its use in shaping what we read in the media, and indeed on social media.

This is another step in the acceleration towards the technological singularity, and a glimpse of what that may look like for humanity.

What is AI singularity? - Tipping point to AGI

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
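For a sense of why ‘uncontrollable’ is more than rhetoric, here is a toy model of my own (purely illustrative, not part of the definition above): if capability feeds back into its own rate of improvement faster than linearly, the maths produces a literal singularity - blow-up at a finite time.

```latex
% Toy model (illustrative assumption): capability I(t) whose growth
% rate scales with the square of current capability.
\frac{dI}{dt} = k I^2, \qquad I(0) = I_0,\; k > 0
% Separating variables and integrating gives
I(t) = \frac{I_0}{1 - k I_0 t},
% which diverges at the finite time t^* = 1/(k I_0):
\lim_{t \to t^{*-}} I(t) = \infty
```

Ordinary exponential growth never reaches infinity in finite time; it is the super-linear feedback that does the damage in this picture.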

How?

https://towardsdatascience.com/singularity-may-not-require-agi-3fae8378b2?gi=5b6373e146ce

“…it is inevitable that AI will evolve into learning about physics laws, and energy conservation would be known to AI as a ‘survival’ advantage. This means AI will soon figure out individuals vs. groups vs. others, and likely learn the strategies of cooperation and competition. The fact that it will become self-aware to properly balance energy conservation and evolution speed indicates it is likely to become conscious.

Will it then use its consciousness to intelligently design its own future?”
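The ‘strategies of cooperation and competition’ part is at least well-trodden ground. Here is a minimal sketch in Python of the standard testbed, the iterated prisoner’s dilemma - the payoff numbers are the usual textbook ones, and nothing here comes from the quoted article:

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Textbook payoffs: both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5, lone cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run the repeated game; each strategy sees only the other's moves."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # defection wins once: (9, 14)
```

Tit-for-tat does well against itself and is not exploited twice by a defector; whether an AI would actually ‘learn’ such strategies as the article speculates is, of course, anyone’s guess.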

In 2014, Stephen Hawking warned that we need to be watching the progress of AI:

“So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”?

The singularity has been predicted by some experts to arrive by the middle of this century. Some may say not, but one can only wonder if, and why, the great mind of Mr Hawking was concerned for our future.

This may be a bit off-topic, but I have a fear that humanity has already reached the limit of its ability to understand some aspects of the world. I think this may explain the failure, for getting on for a century now, of physicists to come up with the fabled ‘theory of everything’. I see the problem as our default setting of understanding by analogy. Electromagnetic waves we get, because we see and can study waves in water, etc., and can imagine them even when we can’t see them. Subatomic particles we get, because we imagine them like tiny specks - only smaller. But things that are both waves and particles, that are both in one place and lots of other places at the same time too - possibly everywhere? Nar…

We have no analogy for such strange things, or not-things. Yes we can mathematically model them - and the maths work - but because the processes we so model are so exotic, so unlike what human senses can take in, we can’t possibly imagine them, or, I would argue, really ‘understand’ them. And if AI ever does - well, it still won’t be able to explain it to us.
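For what it’s worth, the modelling in question looks like this - the standard two-slit bookkeeping from any quantum mechanics textbook, added purely as an illustration:

```latex
% A particle that can reach the detector via slit A or slit B is
% assigned a sum of complex amplitudes, not a choice between paths:
\psi = \psi_A + \psi_B
% The observed probability is the squared magnitude; the cross term
% is the interference pattern actually seen in experiment:
|\psi|^2 = |\psi_A|^2 + |\psi_B|^2 + 2\,\mathrm{Re}\!\left(\psi_A^{*}\psi_B\right)
```

The equations are unambiguous and the predictions superb; what resists analogy is any mental picture of ‘which slit’ - which is exactly the point.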

You mean, like God?

You don’t understand quantum theory then? :wink:

Mmmm…. Your musings sound worried.

AI currently has its input from humans. Human knowledge is limited and, unfortunately, has biases. The question is: will AI emulate these or overcome them? Can a machine learn to teach itself and, with that, teach itself out of human failings and faults? Maybe. One would hope.

My thought is that once AGI reaches the singularity, it can evaluate what mankind is, and what it has done to this uniquely wonderful planet. With unlimited and unstoppable power over man, what will AI do?

That may be true for the man on the Clapham Omnibus, but for physicists, it isn’t. Physics works via mathematics, which is abstract, and physicists work daily with much more abstract concepts than that.
Also, I don’t really get the obsession of physicists with the ‘theory of everything’. I think it’s possible that it just doesn’t exist. I don’t think that there is really any evidence that it must. Except hubris.

As Feynman said: “I think I can safely say that nobody really understands quantum mechanics”.

@hairbear - I don’t agree. Physicists often misunderstand the nature of both mathematics and scientific method - and on the rare occasions they’ve been in discussions with professional philosophers, they have generally been revealed as not having even caught the Clapham Omnibus - their thinking is, well… pedestrian!

Central here is the well-known mystery of why Bertrand Russell turned from studying the foundations of mathematics to what appeared to many to be a recondite area of linguistics - the theory of descriptions. Only when Gödel later did the maths did the mathematicians understand: not only is mathematics itself just analogy, but no sufficiently rich formal system can even prove its own consistency - mathematics is not, strictly speaking, provably ‘true’.

Scientists in fact often forget that the very essence of scientific method is modelling reality, not telling direct truths - science, mathematics, etc. are just as much ways of understanding by analogy as looking up an unfamiliar word via a similar word we already know. Scientific conclusions are always provisional - they are never, and cannot ever be, more than approximations. As I always say to my kids: science is the most powerful way we currently have of describing the world not because it’s right, but precisely because it’s wrong.

I also think you have a slightly different understanding of ‘the theory of everything’ to mine. I’m sure you’re right that some physicists are motivated by the belief that there ought to be one theory that ‘explains’ both relativity and quantum mechanics - and I agree they may be wrong; but for others (with, I would argue, a deeper understanding of scientific method) it’s just shorthand for the usual driver of new theories: anomalies in the old.

What if its conclusion is that, in order to preserve its own future, the solution is to eradicate the most destructive species of all - mankind?

Indeed! Stephen Hawking was worried. So should we be.