My observation is that intelligence has little to do with self-control; it's much more about beliefs around ethics and morals, often but not always influenced by upbringing. Intelligence is often an enabler rather than a moderator. Perhaps Asimov's Three Laws need reconsidering for AI?
Asimov’s robot books are all about the way these laws go wrong, with various consequences.
The flaw with Asimov’s laws lies perhaps in this: They assume that morality and moral decisions can be made by means of an algorithm, that discrete yes/no answers suffice to “solve” moral quandaries.
The challenge for programmers of robotics is that they must code in rules for the decisions AI should make. However, individual moral decisions require wisdom: a mind trained how to think so that it can handle each individual case properly. An algorithm is not wisdom.
Assuming no 'wisdom', programmers will need many, many carefully worded examples to clarify. E.g.:
Take Asimov's rule to avoid, "through inaction, allow[ing] a human being to come to harm." Without further exacting clarification, AI could bring a halt to all war, abortion, and euthanasia throughout the world, and embark on a massive evangelisation effort to prevent humans causing themselves infinite harm. And what if 'harm' were interpreted as applying to the collective rather than the individual, and overpopulation were deemed harmful?
The best we can say about Asimov’s books is that they are a warning. And a fine read!
It seems this has been noted by others: scientific "references" which look the part but do not point to actual papers.
I would expect universities and other concerned institutions to arrange for a parallel creation of an authentication program. Possibly AI, as in, “Is this gobbledegook by you or one of your ‘friends’?”
If AI answers "Ha! Ha!", maybe we should worry.
The BBC article has a quote that may be of interest:
“They look perfect - they’ve got the right names of authors, they’ve got the right names of journals, the titles all sound very sensible - they just don’t exist,” James says.
If an LLM like ChatGPT is just mining answers from across the internet, how is it 'making up' plausible-sounding authors and publication titles? Where do these come from in its neural network?
I may have said already, I worked for someone like that. Perhaps a part of creating perceived intelligence was generating a new set of data based on samples of old data, carefully curated around a median value of variability?
Which is exactly what we need for protection - you can’t have someone who’s only a ‘bit dead’. But a point of AI is to make a robot brain that can manage the enormous complexity of moral variables to come up with the ‘right’ answer, otherwise they are just an electronic reiterator of a set of views. We usually manage these conflicts by deciding one view is (mostly) right and the other (mostly) wrong - the example of abortion you gave is a good one.
What this is leading to, of course, is that we want to invent ‘God’ to tell the right way.
I like the juxtaposition of your two preceding posts @Ancient_Mariner !
The first explains carefully curated lying and the second describes a self-designed AI Solomon. Will we see the difference?
I wholeheartedly agree with you that there needs to be some gateway set up as protection. Wise and experienced IT leaders have requested a pause to hash out these protections. So far, no pause in sight.
I’m guessing the curated lying may win.
It’s “seeing” that references are normal in scientific discourse and replicating the general form. Lacking actual intelligence and not realising that the reference has to be genuine it just cooks up something that looks right.
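That pattern-matching can be illustrated with a deliberately crude sketch. This is not how a transformer works internally (real models generate token by token from learned statistics), but it shows how recombining fragments that each look right produces a whole citation that is pure invention. All the names here are made up for the illustration:

```python
import random

# Toy sketch of reference hallucination: recombine plausible fragments
# "seen in training data" into a citation with the right *form*.
# Every part looks legitimate; the assembled whole cites nothing real.
AUTHORS = ["Smith, J.", "Chen, L.", "Garcia, M."]
TITLES = ["Neural Text Generation at Scale",
          "On Citation Accuracy in Language Models",
          "Emergent Behaviour in Deep Networks"]
JOURNALS = ["Journal of Machine Learning Research", "Nature Communications"]

def fake_reference(rng):
    # Assemble author + year + title + journal in the standard shape
    return (f"{rng.choice(AUTHORS)} ({rng.randint(2015, 2022)}). "
            f"{rng.choice(TITLES)}. {rng.choice(JOURNALS)}.")

print(fake_reference(random.Random(0)))
```

The output has the right names, the right journal, a sensible title, and corresponds to no paper that exists, which is exactly the failure mode the quote above describes.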
Yikes! What else might it cook up next???
Sticks and pitchforks are coming out
Apart from the general fears expressed in this article, there is an interesting bit of info that is not exactly AI, but tech:
One example of harm, they said, was the use of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the under treatment of their hypoxia.
Tech tools should have the same stringent checks and tests as medicines before open release to the public, but currently they do not. This is rightly a concern.
I have been trying to (self!) learn about the method behind the algorithm in LLMs and AI neural networks: backpropagation.
[Translated from the French:] Backpropagation is then a method to evaluate the gradient in a much less expensive way.
The major difference from the previous method is that when the gradient is evaluated, only the value of the partial derivative at the point considered is calculated, not the partial derivative function itself.
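To make that quote concrete, here is a minimal sketch comparing the two approaches on a one-neuron "network" (the weights, input, and target values are arbitrary, chosen just for the demo). Backpropagation applies the chain rule once per parameter at the current point, whereas the naïve finite-difference method re-evaluates the whole loss function repeatedly, which is the "expensive" part the quote refers to:

```python
import math

# Tiny network: y = sigmoid(w*x + b), squared-error loss
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, target):
    return (sigmoid(w * x + b) - target) ** 2

def backprop_grads(w, b, x, target):
    # Forward pass
    y = sigmoid(w * x + b)
    # Backward pass: chain rule evaluated at this point only
    dL_dy = 2.0 * (y - target)       # d(loss)/dy
    dy_dz = y * (1.0 - y)            # sigmoid derivative
    dL_dz = dL_dy * dy_dz
    return dL_dz * x, dL_dz          # dL/dw, dL/db

def numeric_grads(w, b, x, target, eps=1e-6):
    # Finite differences: two full loss evaluations per parameter
    dw = (loss(w + eps, b, x, target) - loss(w - eps, b, x, target)) / (2 * eps)
    db = (loss(w, b + eps, x, target) - loss(w, b - eps, x, target)) / (2 * eps)
    return dw, db

w, b, x, target = 0.5, -0.2, 1.5, 1.0
print(backprop_grads(w, b, x, target))  # matches numeric_grads to ~6 decimals
print(numeric_grads(w, b, x, target))
```

With millions of parameters, the finite-difference route means millions of forward passes per update; backpropagation gets all the partial derivatives from one forward and one backward pass, which is why it made training large networks feasible.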
How AI ‘learns’ and why it makes interesting errors in releasing false answers may be down to merely not having enough computing power. With time, and $$$, this will be remedied.
Or, the programmers will find a new way other than backpropagation
Now I need to go and learn about sparse networks….
Tools for medical use are heavily regulated, but those for purchase by the public are just for guidance.
Ah, good. I don’t know if the oximeter is available over the counter but just wanted to warn any readers.
Actually, I expect that the medical field is one area AI can make incredible advances.
I don’t know about this oximeter specifically, but instruments etc for professional health care use are regulated as Medical Devices with stringent requirements for performance and efficacy, just like a pharmaceutical or diagnostic product. I hope that’s reassuring.
But, as Futurism points out, a big concern here is that false information on the sites could serve as the basis for future AI content, creating a vicious cycle of fake news.
At present there are still some working tests to identify AI.
While we are focussing on where AI fails, it is busy working, with open release, on its own ‘repairs’.
AI is here now and will be world-changing.
we’re unlikely to have anything like as long to get used to these new ideas, and it’s unlikely to be boring!
An excellent article about what AI is not
Who remembers back when they told us computers would make paperwork obsolete?
Not in my world…
Given we live in an age when politicians lie and the electorate don’t seem to care, does it really matter what Chatbots say?
I think all this A"I" (because there is no intelligence involved in LLMs at all) is, as usual with tech, overhyped. Remember how Blockchain was going to change the world?
Google gets panicky about Bing etc., but search engines are already manipulated, so they get a bit more manipulated. A few more social media positive feedback loops intensify, bubbles get bubblier. There will be an early-mover advantage and then things will settle down again until the next big "breakthrough".
Meanwhile, human bad actors, whether using LLMs or not, are and will continue to be the biggest threat we face.
Pass me the soma.