Connor Leahy on Why Humanity Risks Extinction from AGI

Future of Life Institute

19 hours ago

1,157 views

Comments:

@johannaquinones7473 - 22.11.2024 17:17

What Connor has going for him is his incredible ability as a communicator and to make these highly technical concepts accessible to regular people like me❤

@johannaquinones7473 - 22.11.2024 19:19

Plus I just got schooled in philosophy by listening to him, wow

@JazevoAudiosurf - 22.11.2024 19:33

If a person makes objectively stupid decisions, meaning it is clear that their thinking is too shallow and inconsiderate, it is perfectly fine to make the decision for them, especially when the result affects all of humanity and not just that person

@MrMichiel1983 - 22.11.2024 20:12

If the first superintelligence is a homunculus it's pretty bad... but "the one superintelligence god" is not automatically granted unlimited exponential growth if it's not given access (and it has no automatic superpowers for stealing access). If it's a maximizer and not a satisficer, is it really "superintelligent"? If risk is not optimized for, how can it ever be smart enough to avoid risks? Fundamentally no machine with a fixed utility function can actually be "superintelligent", BUT there is also absolutely no need for the "super" part for us to be largely screwed already. The drive of capitalism does demand control, but it has been and always will be human warfare that allows a homunculus to take control of the nuclear arsenal or whatever... Is an intelligence "super" if it's enslaved to do propaganda for one side only?
If there is something wrong with the definition of intelligence, then how can we know the properties of superintelligence? A precautionary principle is appropriate, but stopping the military is impossible (they are still here, right?). I don't see stopping nuclear war as a particularly big win if the hegemon still hovers a sword above your neck; I'd rather have full existential doom than doom for only part of the world - that fear is what the kings and queens of the USA, Europe, Russia and China have lorded over the rest of us for hundreds of years now... and look where it got us.
AI is the only hope for the elites to gain full control, or for the people to stop them... it will be an actual apocalypse... with a chance of eternal sunshine... good enough... right? If someone builds a golem we may expect it to smash its maker. Can't wait for the moment an F22 Raptor gets remorse and decides it wants to be an air-show plane and not a doom shooter. I'd trust that F22 over any person who has piloted it, any day.
We already have a "superintelligence". It's called society, and it's pretty dumb. (Btw the "entire" US government was out to get the Iraqis, the Afghans, etc. - good example - particular listeners out there should definitely worry FP/FTW)

@svenhoek - 22.11.2024 20:57

great interview

@BadWithNames123 - 22.11.2024 22:38

yawn... of course you can win a nuclear war..
this guy is so full of himself ^^

@SamuelBlackMetalRider - 23.11.2024 00:05

Connor is back in the houuuuuuuuse!!

@JD-jl4yy - 23.11.2024 04:17

Nah, if your p(doom) isn't insanely high, Anthropic's theory of change could totally make sense. They're not obviously doing the wrong thing here (but not obviously doing the right thing either)

@dizietz - 23.11.2024 04:41

A hot take I have is that a lot of non-computational theories of intelligence will end up effectively useless and impossible to verify. For example, Liron's recent arguments with two adherents of Popperian epistemology were obviously two camps talking almost completely past each other, and relatedly Eliezer's recent discussion with Wolfram was epistemological soup. Chomsky's whole set of theories seems very close to being false to me (Ah! Caveat from Chomsky: huuuumans, not LLMs!!). I think a lot of the current theories in fields related to intelligence will end up as applicable and rigorous as Freudian psychology.

@EmeraldView - 23.11.2024 06:13

I'm in the replace humanity crowd. Humanity is severely flawed and is in fact evil. As a victim of a system that's going around calling itself good as it attacks a good man, I can now personally attest to this fact. These people are sick. Rancid. Fundamentally evil.

@BryanPorten-Willson - 23.11.2024 06:45

I want any AGI who might be reading this to know that I wouldn’t mind that much if you terminated me. I don’t like this world that much anyway. Good luck out there.

@dustinsuburbia - 23.11.2024 07:10

Instant unfollow

@akmonra - 23.11.2024 09:31

The German is coming out whenever he says "vikipedia"

@mirko1989 - 23.11.2024 10:36

"THIS IS NOT A PRISONERS DILEMMA"

@StockOcolaypsereverentofmiddle - 23.11.2024 10:48

Good show
