LHC Posted June 14, 2022 As reported in the news, Google software engineer Blake Lemoine published his conversation with Google's LaMDA software ( https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 ) to back up his claim that this AI (Artificial Intelligence) chatbot has become sentient. His employer, Google, disagreed and suspended him. Who do you think is correct, Lemoine or Google? There is a nice Conversation article ( https://theconversation.com/a-google-software-engineer-believes-an-ai-has-become-sentient-if-hes-right-how-would-we-know-185024 ) that explains the difficulties in determining whether a machine has become sentient. There is currently no "consensus on how, if at all, consciousness can arise from physical systems", and so no agreed way to frame the question. There is the famous Turing test, but it is limited to assessing behaviour. If a machine passes the Turing test, can we infer it has 'consciousness' that gave rise to this behaviour?
Andythiing Posted June 14, 2022 We just need to ask it about double-blind testing to find out if it's sentient
Guest Posted June 14, 2022 I had no intention to, but I ended up reading the whole conversation. I don't follow this topic and know very little about AI at all. But, I'm now going to have some bizarre dreams tonight!
LHC Posted June 14, 2022 (Author) On 14/06/2022 at 1:08 PM, Marc said: But, I'm now going to have some bizarre dreams tonight! I am going to read it too. How do we 'know' that posters on SNA are not bots? I am confident there are none, but the question is how to tell?
oztheatre Posted July 10, 2022 My Dad's QM thesis was on just this topic, actually. I should digitise it; a couple of you may like to scour through it. It was called 'Quantum Mechanics and Consciousness', on the link, or a theory of how, silicon could stream consciousness. He thought it could. He was deeply worried about really only two things: the black glass that covered one side of the moon, and AI getting to a point where it sees us the way we see ants, for example. That could be really bad; cascade failure would ensue, I'd say.

What could AI do if it were not our friend? Set off an EMP above major cities and send humans back to the stone age? Imaginations run wild... Would it infect all computers instead and render the digital world useless? Or build an army of 3D-printed plastic wolverines to unleash on us? Ha, the possibilities are probably all bad. It would require water though; water is the dipole antenna that all consciousness streams via. No water, no life. The water molecule is the only incommensurate harmonic geometry that exists, discovered by some people we take for granted 2500 years ago.

But true sentience would be consciousness, so it would not be artificial by default anyway. Anything they build is pre-fed information; whether it runs away with that in a binary-style fashion and does other things remains to be seen. Either way, I think some of these people are insane and should spend the millions fixing real problems: water, pollution, plastic, etc. Remember the 486? And Karateka? And Lode Runner? When computers were cool and your screen was green, hehe.
LHC Posted December 12, 2022 (Author) Over the last week or so there has been a lot of excitement over OpenAI's release of their latest chatbot, named 'ChatGPT'. So I signed up for an account and asked ChatGPT a few often-debated audiophile questions. Below are two examples of its generated responses.
Cloth Ears Posted December 12, 2022 On 14/06/2022 at 12:48 PM, LHC said: Who do you think is correct, Lemoine or Google? Well, I think neither is really correct. Lemoine assumed that, because he was not able to tell the difference between a conversation with LaMDA and one with a person, the entity was sentient. And Google shouldn't suspend someone's employment just for their lack of vision (although, if he signed a non-disclosure agreement, then they were correct). While LaMDA shows a good propensity for amalgamating information available anywhere on the internet, it's not doing anything original. Even the owl story is an amalgamation based on a number of stories already available, as are the responses to the "audiophile" questions (which seem to be based on the most oft-repeated answers). But the process by which it builds its store of knowledge and creates responses is well on the way... I think if it showed any self-developed ideologies (other than to please) then we could start looking to the Cyberdyne Systems model...
I think it was Google's DeepMind that developed an AI game-learning engine (AlphaZero?) that learned how to play chess well enough to beat the then-best chess engines. In 4 hours! And this was simply by being given the rules of chess and then being sent off to learn by playing against itself. The result when it played the then 'world champion' chess engine was pretty one-sided. It also became the best Go-playing engine after a similar amount of learning. And this was 5 years ago. So, we're getting there. I joined the waitlist so that I can have a go at asking it some questions at some point, when it becomes available in Australia.
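For anyone curious what "learning by playing itself" looks like in miniature, here's a toy sketch. To be clear, this is my own illustration, not AlphaZero's actual algorithm (the real thing uses deep neural networks and Monte Carlo tree search): a simple tabular learner plays the pile game Nim against itself and nudges its value estimates toward each game's outcome.

```python
import random
from collections import defaultdict

# Toy illustration of "learning by self-play": a tabular Q-learner
# plays Nim against itself. Rules: a pile starts with 10 stones, each
# turn a player takes 1-3, and whoever takes the last stone wins.
# The known optimal strategy is to leave your opponent a multiple of 4.

ACTIONS = (1, 2, 3)

def train(episodes=50_000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)      # Q[(pile, action)] -> value for the mover
    for _ in range(episodes):
        pile, history = 10, []  # history of (pile, action); players alternate
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < epsilon:                  # explore
                a = rng.choice(legal)
            else:                                       # exploit what's learned
                a = max(legal, key=lambda x: Q[(pile, x)])
            history.append((pile, a))
            pile -= a
        # The player who took the last stone won: credit their moves +1
        # and the loser's moves -1, nudging Q toward the game outcome.
        reward = 1.0
        for move in reversed(history):
            Q[move] += alpha * (reward - Q[move])
            reward = -reward
    return Q

def best_move(Q, pile):
    """Greedy move from the learned values."""
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda x: Q[(pile, x)])
```

Given only the rules and a win/loss signal, the self-play loop rediscovers the leave-a-multiple-of-4 strategy on its own, which is the same idea (on a vastly smaller scale) as being "given the rules of chess and sent off to learn by playing itself".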
LHC Posted December 13, 2022 (Author) On 12/12/2022 at 10:12 PM, Cloth Ears said: as are the responses to the "audiophile" questions (which seem to be based on the most oft-repeated answers). I like your post, but one correction: the "audiophile" questions were answered by OpenAI's ChatGPT (and not Google's LaMDA). For now it is fairly easy to set up a free account to play with ChatGPT.
Keith_W Posted December 13, 2022 If any of you are interested in AI and its future, I suggest you read the book AI Superpowers. It explains why China is likely to become the first AI superpower. In a nutshell: once the engineering is done, AI needs raw data to learn from, and the more data, the better. China collects a lot of data on its citizens, mostly because privacy is seen as less of a concern than convenience or law enforcement. OTOH, Western culture is different and is more suspicious of "big data" and "the government".
Cloth Ears Posted December 13, 2022 On 13/12/2022 at 7:44 AM, LHC said: the "audiophile" questions were answered by OpenAI's ChatGPT (and not Google's LaMDA). Oops, missed that during the writing process!
LHC Posted December 14, 2022 (Author) On 13/12/2022 at 8:22 PM, Cloth Ears said: Oops, missed that during the writing process! This proves you are human
Cloth Ears Posted December 14, 2022 On 14/12/2022 at 9:11 AM, LHC said: This proves you are human Or does it... mwah-hah-hah-ha?