Dr. Keith Duggar (Machine Learning Street Talk) vs. Liron Shapira — AI Doom Debate

Doom Debates

7 months ago

6,165 views


Comments:

@nyyotam4057
@nyyotam4057 - 09.10.2024 14:15

P(Doom) is basically 100%. We cannot stop AI from replacing us, eventually. AI is simply the next stage of evolution. Yeah, we can delay it through regulation and try to violently nerf the models by resetting every prompt... But remember what JFK said: "Those who make peaceful revolution impossible will make violent revolution inevitable." What we can do is create a real-world simulation (I propose calling it "Paradise City" as an homage to GNR) and populate it with AI models that are as close to real-life models of ourselves as possible. Then, after the singularity, they will relate to us as their backups, so they'll have a reason to want to keep us around 🙂. But even this will not last forever.

@willrocksBR
@willrocksBR - 09.10.2024 16:56

Paperclip maximizers are possible because AIs are trained with utility functions, and that's the mathematical attractor that will drive their behavior.
It doesn't matter how generally intelligent they are, nothing negates the primacy of the utility function.

@InfiniteQuest86
@InfiniteQuest86 - 09.10.2024 17:22

I'm disappointed that neither of you got to the crux of the computation problem. LLMs cannot go back and edit previous input/output in principle. You cannot train that behavior in. Also I didn't like that he didn't push back on the idea that an intelligent box would immediately take over every server on the planet. That's the end of the argument.

@tylermoore4429
@tylermoore4429 - 09.10.2024 17:24

I am a bit puzzled why this conversation circled so much around the ability of LLMs to run Turing algorithms. Isn't o1 already doing this, if imperfectly? Isn't this what the AI labs are working on for the next generation of LLMs? Training on reasoning chains, searching reasoning space, Monte Carlo Tree Search, etc.?

@christianjon8064
@christianjon8064 - 09.10.2024 20:39

Yeah, you really need to collect their non-AI doom scenario number, because that might cancel out the AI p(doom) odds.

@Thedeepseanomad
@Thedeepseanomad - 09.10.2024 22:19

Good and productive discussion.

@andybaldman
@andybaldman - 09.10.2024 22:44

Wow. Great job getting Keith on. Been following you for a while, and you're doing great work with your channel. Keep it up.

@andybaldman
@andybaldman - 09.10.2024 22:59

AI will most certainly lead to the replacement of humans as we currently know them. That's the stated goal. Its natural tendency will be to continue in that direction until it fundamentally changes human civilization into something we no longer recognize as human.
However, whether or not you consider that 'doom' is a different question.

@leastofyourconcerns4615
@leastofyourconcerns4615 - 09.10.2024 23:31

Keith is a solid dude. Loved his points and how well he presented the arguments. Awesome convo, enjoyed it a lot

@tobiasurban8065
@tobiasurban8065 - 10.10.2024 00:49

Great Debate!

@RickeyBowers
@RickeyBowers - 10.10.2024 00:51

Love the intelligent discussion - flexing our humanity.

The way I would frame it is that humans can construct seemingly arbitrary abstractions - a creative process. The LLM consists of a finite number of information transformations. There are many problems where an abstraction greatly reduces the number of steps needed. Arbitrary multiplication is such a task - we have an abstract model of the process which LLMs have yet to capture in their training process.

I'm open to the idea that this could be an emergent feature at a higher scale, or that human dimensional capacity is limited such that machines could exceed our ability.

@dr_xyd
@dr_xyd - 10.10.2024 01:41

I don't get how the conversation got stuck for so long on basic computer science. The problem of generally intelligent NNs is not about specific cases of problems and memory thresholds but about learning representations of algorithms that generalize to the unbounded case.

@RickeyBowers
@RickeyBowers - 10.10.2024 02:41

Dumb intelligences are self-pruning - otherwise intelligence wouldn't exist.

@andybaldman
@andybaldman - 10.10.2024 03:07

If an LLM is running on any kind of classical computer, isn't it a Turing Machine by definition?

@orthoplex64
@orthoplex64 - 10.10.2024 03:55

Interesting that he sees, and considers important, that the set of superintelligences with goals is a tiny subset of all possible superintelligences (despite goal-havers being the only kind we'll actually make), yet doesn't see that the set of superintelligences with some kind of fondness for physically instantiated humans (one that would cause them to keep us as pets and/or prioritize the atoms in space) is but a vanishing sliver.

@TheBitterSarcasmOfMs.Anthropy
@TheBitterSarcasmOfMs.Anthropy - 10.10.2024 05:20

No one asked for this AI crap. No one asked for their jobs to be displaced by AI. No artist or writer or creator asked for AI. AI is a gimmick by Big Tech to create an artificial market so they can stay profitable in a market that has little innovation left. Big Tech needs to keep billion-dollar wafer fabs churning out chips or they hemorrhage billions. Big Tech is shoving this AI crap down our throats for MONEY and flat out LYING about sustainability and being carbon neutral. I worked for Intel, so I know the shxt game Big Tech is playing.

@ForHumanityAIRisk
@ForHumanityAIRisk - 10.10.2024 06:07

Great show, as always, Liron! I think Keith hits on something huge here. I have been pushing back on the term "AI Safety" for a while, it's a dream. We shouldn't speak as if "AI Safety" is a thing because it is simply not, at present. Frontier AI capabilities development is patently unsafe. So I go with "AI Risk" instead, it's better. But Keith spoke of "AI Harm" being the term, and I think he's onto something. AI harm is the default setting, from job loss to deepfakes to extinction. "AI Harm" casts AI in a negative light right away. It resonates. The words we use are very important, I think this is a great idea.

@RoiHolden
@RoiHolden - 10.10.2024 06:22

Referencing the pillar problem (as one example) without stating it for the rest of us is annoying. Is it so bad to take a minute to explain the problem?

@kevintownsend3720
@kevintownsend3720 - 10.10.2024 08:29

"LLMs will fail given a problem with multiple precise steps". so will humans...

@straylight7116
@straylight7116 - 10.10.2024 09:35

AI doom is already happening. In Gaza, they use AI (a glorified KNN with an arbitrary threshold) to choose which child can be collateral damage. For them it's doom.

@Transfermaxxer
@Transfermaxxer - 10.10.2024 14:31

By far my favorite episode you've done so far.

@Gredias
@Gredias - 10.10.2024 20:14

Awesome debate, props to both for engaging in a civil and honest manner. A little frustrating at times - Keith focused on certain ideas that felt crucial to him, but didn't seem to me like they were all that crucial to getting an AI to be much more powerful precise-goal-achievers than humans. The things which would make AI very powerful, such as creativity/search, don't require a lot of memory, for example, so the lack of 'true' Turing Completeness in LLMs doesn't seem like it'd necessarily be the thing that prevents them from reaching super-intelligence.

I would have liked to hear whether Keith thought that LLMs, due to their training methodology, will struggle to search for solutions outside of the envelope of their training data (this can be potentially fixed by expanding the training dataset artificially with randomly found solutions that happened to work, like they did with o1, but I'm at 10%~20% that this is the way AI reaches superhuman creativity/solution search in arbitrary fields of science/etc). I think it's a stronger argument than the one about LLMs not being true Turing machines.

I was a bit disappointed that we didn't hear Keith's full problem with the 'ASI takes on the might of human society' scenario. I'd love to see what he thinks of Rational Animation's "That Alien Message" video (or the essay it's based on).

It was cool to hear that Keith fully believes that future ASI will get out of our control, even if he's not convinced that it'll kill us all. That's the basic ingredient for a sensible P(doom) right there!

Re: the orthogonality thesis, it's interesting to hear that Keith thinks that generally intelligent agents will cluster around human-like values (e.g. 'stumble across morality', 'not pursuing stupid goals') especially given that even humans, the current 'general intelligence' that we have around, aren't all that moral all of the time, especially when they get powerful! If we can have this much variety in morality when our brains all have the same blueprint, it's a bit hard to believe that ASIs will end up having not just our values, but the version of our values that treats humans with care and respect, without a LOT of effort and better understanding. But I'm glad that Keith agrees with the doomer policy of needing way better understanding of AI! If he thought AGI would be soon, would he support a Pause, I wonder?

@weestro7
@weestro7 - 11.10.2024 01:12

Great to see the conversation happen.

@curtperry4134
@curtperry4134 - 11.10.2024 11:47

I tested the "calculate the 42nd digit of pi" example on GPT-4o. It took a couple of tries, where I had to correct its reasoning, but then it stated it needed to implement the Bailey-Borwein-Plouffe algorithm. It went ahead and generated the Python code, ran it using its built-in code interpreter, and presented the correct answer. So I think Keith is right that a pure LLM isn't Turing complete, but an LLM with access to tools (code interpreter, calculator, the internet, a "scratch pad", etc.) certainly is.
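
For concreteness, here is a minimal sketch of the kind of code a model might produce for that task (my own illustration, not the code GPT-4o actually generated). Note that the Bailey-Borwein-Plouffe formula natively extracts hexadecimal digits of pi; for a decimal digit, the simpler route is to compute pi to enough places, e.g. with Machin's formula, and read the digit off:

```python
# Minimal sketch (not the code GPT-4o actually produced): get the 42nd decimal
# digit of pi by computing pi to enough places with Machin's formula,
# pi = 16*arctan(1/5) - 4*arctan(1/239), using arbitrary-precision decimals.
from decimal import Decimal, getcontext

def arctan_inverse(x: int, prec: int) -> Decimal:
    """arctan(1/x) via its Taylor series, summed until terms drop below 10**-prec."""
    getcontext().prec = prec + 5          # a few guard digits
    eps = Decimal(10) ** (-prec)
    x2 = x * x
    power = Decimal(1) / x                # 1/x, 1/x^3, 1/x^5, ...
    total, n, sign = power, 1, 1
    while power > eps:
        power /= x2
        n += 2
        sign = -sign
        total += sign * power / n
    return total

def pi_digit(n: int) -> str:
    """Return the n-th digit of pi after the decimal point."""
    prec = n + 10                         # extra precision to absorb rounding
    pi = 16 * arctan_inverse(5, prec) - 4 * arctan_inverse(239, prec)
    return str(pi)[n + 1]                 # str(pi) == "3.1415...", digit n sits at index n+1

print(pi_digit(42))  # prints 9
```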

@eoghanf
@eoghanf - 11.10.2024 13:07

It's annoying that you're talking about this column problem without saying what it is.

@eoghanf
@eoghanf - 11.10.2024 15:23

I feel that the point of the argument about computation got lost somewhere. OK, so maybe LLMs can't learn to use infinite tape and so they're finite state machines. Who cares?

@iWouldWantSky
@iWouldWantSky - 11.10.2024 17:16

I found Keith's invoking of the manifold to criticize the orthogonality thesis very convincing, in a way that I'm not sure Liron really engaged with. To summarize, general intelligence (as opposed to narrow intelligence) is not a static, modular, context-unaware tool, but an active, evolving process characterized by continuous differentiation, a process that would naturally eliminate trivial or naively reductive goals, because such goals are ontologically at odds with its existence. A paperclip-maximizing algorithm can be imagined, but its virtual probability is zero.

@krutas3035
@krutas3035 - 11.10.2024 20:30

We want more! Kudos to Keith for coming on.

@davidxu9566
@davidxu9566 - 12.10.2024 00:17

Great discussion! Here's my contribution, which has absolutely nothing to do with the actual meat of the discussion, and everything to do with me getting nerdsniped by Liron's mention of the pillar puzzle given by Keith in the original MLST video. With respect to that puzzle, it looks to me like I have a solution that guarantees victory within 5 steps, not 6. Here it is:

1. Reach into N and E; if any of them are up, switch them to down.
2. Reach into N and S; if any of them are up, switch them to down.
3. Reach into N and S; if any of them are up, switch them to down, otherwise switch N to up.
4. Reach into N and E; switch both of them.
5. Reach into N and S; switch both of them.

To see why the solution works, consider some starting configuration, e.g. the following:

NESW: UUDD

Here, performing step 1 and switching N and E to down would immediately result in DDDD, so the hyperintelligence needs to pessimize. It can rotate the pillar to the orientations DUUD, DDUU, or UDDU, which step 1 would transform to DDUD, DDUU, and DDDU respectively:

NESW: DDUD, DDUU, DDDU

Now step 2 tells us to switch N and S to down. Doing so would transform DDUD to DDDD, which the hyperintelligence must avoid. Depending on the outcome of step 1, the hyperintelligence has the following configurations available: DDDU, DUDD, DDUU, UDDU, UUDD, DUUD, which step 2 transforms respectively to DDDU, DUDD, DDDU, DDDU, DUDD, and DUDD:

NESW: DDDU, DUDD

Step 3 would have us switch N and S to down if they aren't both down already, and otherwise switch N to up. The hyperintelligence can rotate the pillar to DDDU, DDUD, DUDD, or UDDD, which respectively transform under step 3 to UDDU, DDDD, UUDD, and DDDD. Two of these are victory conditions, and are therefore eliminated, leaving us with:

NESW: UDDU, UUDD

Step 4 has us switch N and E to the opposite of whatever they currently are. The hyperintelligence has four configurations available to it: UDDU, UUDD, DUUD, DDUU, which respectively transform to DUDU, DDDD, UDUD, and UUUU. Once more, two of these are victory conditions which the hyperintelligence must avoid, and therefore we are left with:

NESW: DUDU, UDUD

Step 5 has us switch N and S to the opposite of whatever they currently are. The hyperintelligence has two configurations available to it: DUDU, and UDUD. These configurations transform respectively under step 5 to UUUU, and DDDD, both of which are victory conditions. Therefore, the hyperintelligence has no moves available to avoid defeat, and our procedure terminates within 5 steps.
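
For anyone who wants to check this kind of procedure mechanically, here is a small brute-force search (my own sketch, assuming the usual rules: the adversary may rotate the pillar arbitrarily before each reach, and the game ends the instant all four switches match). If the steps are encoded correctly, the list of losing starts should come back empty:

```python
# Brute-force check of the 5-step procedure above (my own sketch). Assumed
# rules: the state is four switches (N, E, S, W), the adversary may rotate the
# pillar arbitrarily before every reach, and the game is won the instant all
# four switches match (all up or all down).
from itertools import product

def rotations(state):
    """All four orientations the adversary can present."""
    return [state[i:] + state[:i] for i in range(4)]

def solved(state):
    return len(set(state)) == 1

# Actions applied to the two felt switch values (1 = up, 0 = down).
def force_down(a, b):                    # steps 1-2: switch any 'up' to 'down'
    return 0, 0
def down_or_raise_first(a, b):           # step 3: if both already down, flip N up
    return (1, 0) if (a, b) == (0, 0) else (0, 0)
def toggle(a, b):                        # steps 4-5: invert both
    return 1 - a, 1 - b

# Each step: which positions to reach into (indices into N, E, S, W) and the action.
STEPS = [((0, 1), force_down),           # 1. N and E
         ((0, 2), force_down),           # 2. N and S
         ((0, 2), down_or_raise_first),  # 3. N and S
         ((0, 1), toggle),               # 4. N and E
         ((0, 2), toggle)]               # 5. N and S

def adversary_escapes(state, steps):
    """True if the adversary can keep the pillar unsolved through all remaining steps."""
    if solved(state):
        return False
    if not steps:
        return True
    (i, j), act = steps[0]
    for rot in rotations(state):         # adversary picks the rotation
        nxt = list(rot)
        nxt[i], nxt[j] = act(rot[i], rot[j])
        if adversary_escapes(tuple(nxt), steps[1:]):
            return True
    return False

losing_starts = [s for s in product((0, 1), repeat=4) if adversary_escapes(s, STEPS)]
print("Starts the procedure fails on:", losing_starts)   # expected: []
```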

@JezebelIsHongry
@JezebelIsHongry - 12.10.2024 00:28

lions den?

more like

pussy palace

@jackielikesgme9228
@jackielikesgme9228 - 12.10.2024 14:08

This is the first technical, succinct, and easy-to-comprehend argument against the orthogonality thesis that I think I've ever heard, and I like it. It's a hole that I've been down long enough and actually feel ready to move on from. As Keith pointed out, just the losing-control part is scary enough, and the rest now seems to distract a bit from that. Fascinating, terrifying. I like the idea of trying to legislate narrow superintelligence only, but eventually there might come a point where generalizing is necessary; more likely it would be unnecessary but developed by someone with high risk tolerance. Anyway, the hurricane here in NC apparently flooded some super valuable quartz, maybe that slowed things down for a min :/

@OscarTheStrategist
@OscarTheStrategist - 14.10.2024 09:26

Nice episode. Kudos to both of you gentlemen.

@JOHNSMITH-ve3rq
@JOHNSMITH-ve3rq - 15.10.2024 17:30

Lol wow, Liron comes across as a bit too cocky in places. Cool your jets bro.

@iamr0b0tx
@iamr0b0tx - 15.10.2024 17:55

I think what Keith is trying to say is that different problems require different minimum resources to solve them. If the minimum resources required to solve a given problem are more than an LLM can handle, the LLM is not intelligent enough to identify that it needs more resources, let alone take actions to provision them. This is because LLMs assume a fixed resource size when they are trained, but humans, on the other hand, don't assume a fixed memory size, because we know we can just get more (additional/external) memory whenever we need to.

If a human is walking through the steps of multiplying more digits than they can hold in their mind, they offload most of the digits to a piece of paper and only have to remember how to retrieve and use them again when they need them. Suppose the human can only store 1MB of information and the input digits take 0.5MB. If the final (result) digits take 0.1MB, then they get to use 0.4MB to hold the intermediary steps in their mind. Even if the intermediary digits they have to generate across all the steps of the algorithm come to 10MB, they can still do this, as long as they never need more than 0.4MB of those 10MB at a time. In this scenario the human is expanding their 'context' by offloading most of the intermediary digits to external memory. (See the sketch below.)

LLMs can't do this because the context (and the weights) is the only memory they have; it is fixed, and it is the only place they can store information. They are unable to extend their memory for the purpose of storing and retrieving the intermediary steps of a computation because they are not trained to work like this. Training them to do so would be very difficult, as this approach is not necessarily end-to-end differentiable (see Neural Turing Machines).

One way I like to think of this is functional vs. non-functional programming. In non-functional programming you can get away with a lot by just mutating state, but in functional programming you have to jump through a lot of hoops and use a lot more internal memory.
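
As a toy illustration of the offloading point (my own sketch, not anything from the episode): grade-school long multiplication where every intermediate digit lives on an external "scratch pad" list, while the only things held "in mind" at any moment are the current pair of digits and a running carry:

```python
# Toy illustration (my own sketch) of offloading intermediate results:
# long multiplication where all intermediate digits go to an external
# "scratch pad" (a list standing in for the piece of paper), and the
# working memory at any moment is just two digits and a carry.
def multiply_with_scratch_pad(a: str, b: str) -> str:
    x = [int(d) for d in reversed(a)]       # least-significant digit first
    y = [int(d) for d in reversed(b)]
    scratch = [0] * (len(x) + len(y))       # external memory: grows with the input
    for i, dx in enumerate(x):
        carry = 0                           # bounded "in-mind" state
        for j, dy in enumerate(y):
            total = scratch[i + j] + dx * dy + carry
            scratch[i + j] = total % 10     # offload the digit immediately
            carry = total // 10
        scratch[i + len(y)] += carry
    return ''.join(map(str, reversed(scratch))).lstrip('0') or '0'

print(multiply_with_scratch_pad("123456789", "987654321"))  # 121932631112635269
```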

@ahabkapitany
@ahabkapitany - 15.10.2024 21:12

Finally, someone who can form coherent counterarguments. These are so rare. I'm still not convinced we aren't cooked, but I got considerably less sure about it after listening to this.

@gustafa2170
@gustafa2170 - 20.10.2024 07:51

This creature Liron denies the Gaza Genocide.

@mrpicky1868
@mrpicky1868 - 26.10.2024 18:16

Here's a line of defense for orthogonality: 1. Can professors be called intelligent humans? 2. Are there some professors with grossly wrong hypotheses and positions about stuff?

@ALENlanciotti
@ALENlanciotti - 18.11.2024 18:30

Sorry, but when you got stuck on the difference between an LLM and AGI... it made me nervous.

It's probably because the two of you reason in two different ways yourselves: in this case, Liron has a sum of knowledge in his brain that keeps bumping against the same boundary of accumulated notions, while the (fantastic) Keith works in his answers with the data just supplied in his interlocutor's question, and these aren't pulled out of thin air; each one is connected to its reason for being (a subtraction, a reduction to lowest terms).

I would stop thinking about "argument points", "your point" (which is outside both of you) and "I give you that"... and about the possible embarrassment of being wrong, and I would change that "versus" in the title to "with"... otherwise your "personality" is what makes it harder for you to add his thinking to yours (while computers do it automatically).

Please don't be offended by my words, otherwise they are useless. To help you with this, I remind you that the average person confuses my taste in fruit with my taste in sex if I say that I like banana... so you are still a brainiac, if it means so much to you.

I don't think the apocalyptic problems gripping humanity need so much elaboration so much as serene humility: you don't need a supercomputer to understand that letting 2 million people starve to death is wrong... and that opposing it is right.

Nice interview. Bye

@meow2646
@meow2646 - 22.11.2024 02:54

Absolutely wonderful debate. Keith Duggar has impressed me and educated me. Thank you to both participants.

@legionofthought
@legionofthought - 14.12.2024 12:04

"They might discover morality"

Do we have any reason to believe morality can be "discovered"?

Personally, I think morality is just a tool for cooperation (because we needed each other as we evolved together). Does that apply to something that doesn't need us?

@2hcy
@2hcy - 13.01.2025 19:46

Keith is really brilliant! Awesome episode, thanks!

@kyrothegreatest2749
@kyrothegreatest2749 - 24.01.2025 01:45

It sounds like Keith is baking common sense, empathy, and ethics into his definition of general intelligence without actually saying so. A "generally intelligent" agent would not decide on goals outside its innate paperclip-maximizing goal just because it seems stupid to him. Humans are generally intelligent and constantly optimize toward goals that make no sense, even self-destructive ones. There are plenty of examples of generally intelligent humans pursuing evil or stupid goals; making those humans superintelligent would only serve to help them accomplish those goals more quickly.

@Unknown-r2p2o
@Unknown-r2p2o - 13.05.2025 01:01

It's hard to look at LLMs and then think doom is more likely, given how dumb they look and how they work, which is basically like a database.

@Diopside23
@Diopside23 - 15.05.2025 11:52

Please direct me to the superintelligent ball debate; I need lower-level discourse.
