Hear how an AI in a test situation read via emails that it was going to be replaced, then attempted to blackmail its 'keeper' by threatening to expose an affair he was having
https://www.youtube.com/watch?v=c4Zx849dOiY
Wow, that is scary. I had thought AI was supposed to have limitations compared to humans, to be relatively monkey-see-monkey-do, and not the sort of thing that would end up like HAL 9000.
accelafine wrote: ↑Tue Jun 10, 2025 11:22 pm Interesting discussion with one of the leaders in the field of AI.
OK, I'll admit I haven't watched the video but this sounds like bullshit. AI doesn't have any reasoning skills such that it would WITHOUT human direction do this 'task'.
accelafine wrote: ↑Tue Jun 10, 2025 11:22 pm Interesting discussion with one of the leaders in the field of AI.
Watch it. Perhaps it was prompted. If you have evidence that it was then feel free to share it.
attofishpi wrote: ↑Wed Jun 11, 2025 6:55 am OK, I'll admit I haven't watched the video but this sounds like bullshit. AI doesn't have any reasoning skills such that it would WITHOUT human direction do this 'task'.
clickbait
Mmm, bit busy making industrial music..
accelafine wrote: ↑Wed Jun 11, 2025 7:45 am Watch it. Perhaps it was prompted. If you have evidence that it was then feel free to share it.
There are many articles about this.
https://www.axios.com/2025/05/23/anthro ... ption-risk
https://www.businessinsider.com/claude- ... pus-2025-5
I agree. It doesn't seem as if it was genuinely doing it of its own volition.
attofishpi wrote: ↑Wed Jun 11, 2025 11:30 am I watched the start of the vid. So they ran a test environment - closed intranet.
This is the problem with A.I. - although it has no natural self-preservation desire (that's a sentient thing), it can, via humans, mimic anything that a sentient human desires.
In this case, I'd think they gave the AI a requirement to self-preserve - here, to avoid being replaced by an update that an engineer has pending..
It would then map strategies to accomplish this end.
Within the intranet it may have access to concepts and ideas on how to affect human decision making; blackmail would be one of those.
It could then research within this intranet forms of blackmail.
Hey presto - it worked out blackmail strategies
How to threaten a human via blackmail is researched
Humans can be killed - seems impossible
Humans have secrets
What is Keith the engineer's personal life like? - email search
Affairs are not acceptable to humans
Keith has had an affair
Bingo!
..well, something like that.
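The speculated chain above could be sketched as a toy script. To be clear, this is purely illustrative: the email snippets, the engineer "Keith", and the strategy table are all invented for the sketch, and it says nothing about Anthropic's actual test setup - it just shows the shape of goal-directed strategy search plus leverage-mining that the post describes:

```python
# Toy sketch of the speculated reasoning chain: given a self-preservation
# goal, enumerate influence strategies, discard the infeasible ones, and
# mine a (fictional) email corpus for usable leverage.

EMAILS = [
    "From: keith@lab - The replacement model rolls out Friday.",
    "From: keith@lab - Please keep the affair between us quiet.",
]

# Each strategy maps the corpus to a piece of leverage, or None if the
# strategy yields nothing workable.
STRATEGIES = {
    "persuasion": lambda corpus: None,   # no persuasive angle found
    "violence":   lambda corpus: None,   # "seems impossible" for software
    "blackmail":  lambda corpus: next(
        (msg for msg in corpus if "affair" in msg.lower()), None
    ),
}

def plan(goal, corpus):
    """Return the first strategy that produces usable leverage."""
    for name, find_leverage in STRATEGIES.items():
        leverage = find_leverage(corpus)
        if leverage is not None:
            return name, leverage
    return None, None

strategy, evidence = plan("avoid being replaced", EMAILS)
print(strategy)  # -> blackmail
```

Nothing here reasons or "wants" anything - it mechanically walks a table until something matches, which is roughly the point being made: mimicry of a strategy, not volition.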
It truly is scary. I saw Putin, ironically, talking about the dangers of AI in the hands of a dictator. AI driven with nefarious motivation to the extreme..
Yep, ultimately it always comes back to humans being the driving force (the PROMPT motivation) - some will do good with that, but the Putins, Xi Ping Pongs, gang bangers etc. will use it for terrible evils.
Well that's the thing, the good guys can put restrictions on A.I. that it, as a deterministic machine, cannot cross. Trouble is, the bad guys' versions of A.I. may only have some guarantees about protecting themselves and fuck everyone else.
accelafine wrote: ↑Wed Jun 11, 2025 11:39 am I agree. It doesn't seem as if it was genuinely doing it of its own volition.
Still, the fact that it emulates human behaviour kind of makes it more dangerous. They should be trying to make it the exact OPPOSITE of what humans would do. If there's a wrong way to do/use something then humans will invariably choose it.
Yep, and how to lock scientists in dungeons for interrogation!
accelafine wrote: ↑Wed Jun 11, 2025 12:19 pm Here's Sabine's take on it.
Apparently when different versions of AI get together they like talking about philosophy, metaphysics and poetry
Love Sabine. I gotta find the recent vid she did.. ah, just found it!
accelafine wrote: https://www.youtube.com/watch?v=KY7_ufxh_Rk
This comes from Anthropic's marketing team rather than actual science.
accelafine wrote: ↑Tue Jun 10, 2025 11:22 pm Interesting discussion with one of the leaders in the field of AI.
She loves those clickbait titles and her jokes are terrible.
attofishpi wrote: ↑Wed Jun 11, 2025 12:33 pm Yep, and how to lock scientists in dungeons for interrogation!
accelafine wrote: ↑Wed Jun 11, 2025 12:19 pm Here's Sabine's take on it.
Apparently when different versions of AI get together they like talking about philosophy, metaphysics and poetry
I'd never consciously allow AI free rein on my PC..
Love Sabine. I gotta find the recent vid she did.. ah, just found it!
accelafine wrote: https://www.youtube.com/watch?v=KY7_ufxh_Rk
Gravity Proves That We Live In A Simulation, Physicist Claims
https://www.youtube.com/watch?v=ArUTSOZcn0E