My observation is that intelligence has little to do with self-control; it's much more about beliefs around ethics and morals, often but not always influenced by upbringing. Intelligence is often an enabler rather than a moderator. Perhaps Asimov's three laws need reconsidering for AI?
Asimov's robot books are all about the way these laws go wrong, with various consequences.
The flaw with Asimov's laws lies perhaps in this: they assume that morality and moral decisions can be made by means of an algorithm, that discrete yes/no answers suffice to "solve" moral quandaries.
The challenge for programmers of robotics is that they must code in rules for the decisions AI should make. However, individual moral decisions require wisdom: a mind trained how to think so that it can handle each individual case properly. An algorithm is not wisdom.
Assuming no "wisdom", programmers will need many, many carefully worded examples to clarify. E.g.:
Take Asimov's rule that a robot may not, "through inaction, allow a human being to come to harm." Without further exacting clarification, AI could bring a halt to all war, abortion, and euthanasia throughout the world, and embark on a massive evangelisation effort to prevent humans causing themselves infinite harm. And what if "harm" were taken to apply to the collective rather than the individual, and overpopulation were considered harmful?
Yikes!
The best we can say about Asimov's books is that they are a warning. And a fine read!
It seems this has been noted by others - scientific "references" which look the part but do not correspond to actual papers.
I would expect universities and other concerned institutions to arrange for a parallel creation of an authentication program. Possibly AI, as in, "Is this gobbledegook by you or one of your 'friends'?"
If AI answers "Ha! Ha!", maybe we should worry.
The BBC article has a quote that may interest:
"They look perfect - they've got the right names of authors, they've got the right names of journals, the titles all sound very sensible - they just don't exist," James says.
If an LLM like ChatGPT is just mining answers from across the internet, how is it "making up" plausible-sounding authors and publication titles? Where are these coming from in its neural network?
I may have said this already: I worked for someone like that. Perhaps a part of creating perceived intelligence was generating a new set of data based on samples of old data, carefully curated around a median value of variability?
Which is exactly what we need for protection - you can't have someone who's only a "bit dead". But a point of AI is to make a robot brain that can manage the enormous complexity of moral variables and come up with the "right" answer; otherwise they are just electronic reiterators of a set of views. We usually manage these conflicts by deciding one view is (mostly) right and the other (mostly) wrong - the example of abortion you gave is a good one.
What this is leading to, of course, is that we want to invent "God" to tell us the right way.
I like the juxtaposition of your two preceding posts, @Ancient_Mariner!
The first explains carefully curated lying and the second describes a self-designed AI Solomon. Will we see the difference?
I wholeheartedly agree with you that there needs to be some gateway set up as protection. Wise and experienced IT leaders have requested a pause to hash out these protections. So far, no pause in sight.
I'm guessing the curated lying may win.
It's "seeing" that references are normal in scientific discourse and replicating the general form. Lacking actual "intelligence" and not realising that the reference has to be genuine, it just cooks up something that looks right.
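To illustrate what "replicating the general form" can mean, here is a toy sketch in Python (entirely my own illustration, with made-up author and journal names, not a description of how any particular LLM is built) that stitches plausible-looking fragments into a citation which reads correctly but refers to nothing real:

```python
import random

# Toy illustration only: fill a citation *template* with plausible fragments.
# The output has the right shape (author, year, title, journal) but no
# connection to any real paper - the "form" without the substance.
authors  = ["Smith, J.", "Garcia, M.", "Chen, L."]
topics   = ["neural networks", "language models", "machine ethics"]
journals = ["Journal of Computational Research", "AI Review Letters"]

fake_reference = (
    f"{random.choice(authors)} ({random.randint(2005, 2022)}). "
    f"Advances in {random.choice(topics)}. {random.choice(journals)}."
)
print(fake_reference)
```

A real LLM is doing something far more sophisticated, of course, but the failure mode is similar: the form of a reference is learned, while the requirement that it actually exist is not.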
Yikes! What else might it cook up next???
Sticks and pitchforks are coming out
Apart from the general fears expressed in this article, there is an interesting bit of info that is not exactly AI but tech:
One example of harm, they said, was the use of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the under treatment of their hypoxia.
Tech tools should have the same stringent checks and tests as medicines before being put on open release to the public for use, but currently they do not. This is rightly a concern.
I have been trying to (self!) learn about the method behind the learning algorithm in LLMs and AI neural networks - backpropagation.
La Backpropagation est alors une méthode pour évaluer le gradient de manière beaucoup moins coûteuse.
La différence majeure avec la méthode précédente est que lorsqu'on évalue le gradient, on ne calcule que la valeur de la dérivée partielle au point considéré et non pas la fonction dérivée partielle.
[ Backpropagation is then a method to evaluate the gradient in a much less expensive way.
The major difference with the previous method is that when the gradient is evaluated, only the value of the partial derivative at the point considered is calculated and not the partial derivative function.]
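To make that concrete, here is a minimal sketch of backpropagation on a two-weight network (my own toy Python example, not taken from the article I'm quoting): the backward pass only ever computes the numerical value of each partial derivative at the current weights, never a symbolic derivative function.

```python
import numpy as np

# Toy network: y = w2 * tanh(w1 * x), trained on a single data point.
rng = np.random.default_rng(0)
x, target = 0.5, 1.0                 # one input and one desired output
w1, w2 = rng.normal(), rng.normal()  # the two weights we will adjust
lr = 0.1                             # learning rate (step size)

for step in range(200):
    # Forward pass: compute the prediction and the squared error.
    h = np.tanh(w1 * x)              # hidden activation
    y = w2 * h                       # network output
    loss = (y - target) ** 2

    # Backward pass (backpropagation): the chain rule applied to numbers,
    # evaluated only at the current point (x, w1, w2).
    dloss_dy  = 2 * (y - target)
    dloss_dw2 = dloss_dy * h
    dloss_dh  = dloss_dy * w2
    dloss_dw1 = dloss_dh * (1 - h ** 2) * x   # d tanh(u)/du = 1 - tanh(u)^2

    # Gradient-descent update in the direction that reduces the loss.
    w1 -= lr * dloss_dw1
    w2 -= lr * dloss_dw2

print(f"loss after training: {loss:.6f}")
```

Scaled up to billions of weights, this cheap point-wise gradient evaluation is what makes training large networks feasible at all, which is the "much less expensive" point in the quote above.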
How AI "learns", and why it makes interesting errors and releases false answers, may come down to simply not having enough computing power. With time, and $$$, this will be remedied.
Or the programmers will find a new method other than backpropagation.
Now I need to go and learn about sparse networks…
Tools for medical use are heavily regulated, but those for purchase by the public are just for guidance.
Ah, good. I don't know if the oximeter is available over the counter but just wanted to warn any readers.
Actually, I expect that the medical field is one area where AI can make incredible advances.
I don't know about this oximeter specifically, but instruments etc. for professional health care use are regulated as Medical Devices, with stringent requirements for performance and efficacy, just like a pharmaceutical or diagnostic product. I hope that's reassuring.
Another warning
Interestingly,
But, as Futurism points out, a big concern here is that false information on the sites could serve as the basis for future AI content, creating a vicious cycle of fake news.
At present there are still some working tests to identify AI.
While we are focussing on where AI fails, it is busy working, with open release, on its own "repairs".
AI is here now and will be world-changing
we're unlikely to have anything like as long to get used to these new ideas, and it's unlikely to be boring!
An excellent article about what AI is not
Who remembers back when they told us computers would make paperwork obsolete?
Not in my world…
Given we live in an age when politicians lie and the electorate don't seem to care, does it really matter what chatbots say?
I think all this A"I" (because there is no intelligence involved in LLMs at all) is, as usual with tech, overhyped. Remember how blockchain was going to change the world?
Google gets panicky about Bing etc., but search engines are already manipulated, so they get a bit more manipulated. A few more social media positive feedback loops intensify, bubbles get bubblier. There will be an early-mover advantage and then things will settle down again until the next big "breakthrough".
Meanwhile, human bad actors, whether using LLMs or not, are and will continue to be the biggest threat we face.
Pass me the soma.