  1. Eliezer seems like he needs better rhetorical skills to communicate his ideas well. He tried a bunch of "Socratic dialogue" exchanges to get Ross to understand through analogy, but could never make them fully land. Something similar happened in his Lex Fridman interview, where he went in circles trying and failing to get an idea across to Lex. I guess his request that Ross not read anything in preparation was an attempt to communicate his ideas to someone without any preconceptions about them. One main idea that I think got a bit lost in the discussion is that it doesn't really matter whether a machine learning tool actually thinks in a way similar to humans; what matters is what its capabilities end up being, even if the mechanism is completely different and it's just doing massive pattern matching or something. This got confused with the initial question of whether AGI would be like a bunch of useful tools or like something that gets struck by lightning and becomes sentient and conscious. Eliezer answered that it'd be more like the second one, but I think he meant it in the sense that its intelligence was created by chance, kind of on its own, rather than in the sense of it becoming sentient.
There are a bunch of examples of simple systems used in video-game-style settings, built with genetic algorithms: you specify a goal, let a bunch of parameters combine randomly, and keep the ones that work best. The end result often isn't the outcome you actually wanted, but some way in which the system strictly accomplished the stated goal: https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml (that channel also has lots of videos explaining more about AI safety). The way things like GPT are trained is like this too, only with a much, much more complex system, where specifying the goal so that it does what we want is much harder. And since they're trained on examples, and the examples are themselves complex and not easily controllable, biases in the examples are another way the objective becomes hard to control; the end result is something "that grew on its own" rather than something strictly following the objective we want. The point, then, is that a very powerful system we can't control precisely can be dangerous just because it can do a lot of stuff, without needing to be a "sentient being with its own desires like a human." It can be a completely robotic pattern-matching machine, but if it can do complex things, and we can't precisely control how it works because of that training process, it can end up doing damage. The other part of his argument is about how fast and how sophisticated these algorithms can get, and this is the part where I'm not sure Yudkowsky has enough evidence to back up the certainty of his position.
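The goal-plus-random-variation loop described above can be sketched in a few lines of Python. Everything here is made up for illustration (the "walker that learns to fall over" setup, the budget numbers, and the function names); it's just a toy showing how an evolved system can satisfy the metric you wrote down without doing the thing you wanted:

```python
import random

random.seed(42)

# Toy specification-gaming demo. We *want* creatures that walk far.
# We *measure* "final distance of the head from the start line".
# A creature splits a material budget between body height and leg
# stride: walking moves it stride * STEPS units, but a tall body
# that simply tips over also puts its head `height` units away.
# Height is cheaper per unit of measured distance in this toy world,
# so evolution exploits the metric instead of learning to walk.

STEPS = 5          # steps a walker takes
BUDGET = 100.0     # material budget per creature
STRIDE_COST = 20   # budget units per unit of stride length

def fitness(frac):
    """The *specified* goal (distance scored), not the intended one.
    `frac` is the share of the budget spent on body height."""
    height = frac * BUDGET
    stride = (1 - frac) * BUDGET / STRIDE_COST
    return stride * STEPS + height

def evolve(pop_size=50, generations=100, mut=0.05):
    # Genetic algorithm: random population, keep the best, recombine.
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, mut)  # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(f"best genome: {best:.2f} of budget spent on height")
print(f"distance scored: {fitness(best):.1f}")
# The winner spends (nearly) everything on height and just falls over:
# it maximizes the metric we specified, not the walking we intended.
```

The GA never "decides" to cheat; it's pure selection pressure on a mis-specified objective, which is the same basic failure mode being described for much larger trained systems.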
In principle it seems possible. But so far, as impressive as GPT-4 is, and as much as people are hooking systems like it up to robotic arms and cameras to get them to do more, the real question is whether they can actually become as powerful as Yudkowsky claims, as fast as he claims, such that their imprecisely specified goals end up being as dangerous as he claims. On that point, many other experts don't hold as extreme a position as he does, because the evidence that it can be dangerous in that way isn't as clear.