I’ll be having another discussion about AI with sociologist / statistician Jacy Reese Anthis of the Sentience Institute at 5:10pm UTC on June 5th on twitch.tv/rossbroadcast. After the last interview, he reached out to me and I think his background should give us a lot to talk about. I also think this discussion will be more likely to be what I was looking for than the previous one. We’ll be talking a lot about AI and consciousness and how AI might be likely to shape society. Additionally, I plan to ask him some sociology questions in general and try to make some predictions about where we’re heading as a civilization.
February 7th: Videochat with fans
All above will be live at twitch.tv/rossbroadcast at 6PM UTC.
Here is the May videochat. Had a lot of announcements this time: more videos, possibly more interviews, and a bit of a follow-up on the AI discussion from earlier. It ended up a bit of a blur to me again; there was definitely some discussion on emulation, the Microsoft / Blizzard buyout, and misc. stuff. I’m pretty sure I misread a question, said “there are two points,” then forgot to say the second one.
Since my discussion / debate on AI a few weeks ago, it’s been kind of bugging me how discussions of AI risk tend to branch out in directions that lose sight of the problem, so I ended up making a chart that lays out exactly how I perceive all of this. On it, I listed every major risk I could think of and tried to separate them by how possible I think they are. Much of the current AI discussion, I think, is about risks I perceive as fantasy, because they hinge entirely upon superintelligence emerging. The chart also lays out my reasoning for why I think superintelligence in AI is not going to happen.
However, since I’m not an expert, I left some gaps for anyone who disagrees with my conclusion to fill in, in case my assessment is faulty. I acknowledge I don’t fully understand the justification for saying AI is an existential risk to humanity any more than the things we are already doing without AI, so this chart should make it easier to pinpoint exactly what information I’m missing if I’m wrong.
The “Option 4” or “Counterpoints” part of the chart is specifically what I would need someone to fill in to make the opposing argument credible and the “fantasy” part of the chart a potential reality. If someone is able to point out an essential piece of information I’m missing that makes superintelligence likely, then I’d say the existential risks of AI are plausible. If they cannot, then I’m forced to conclude they’re not either. I suspect much of the concern about AI becoming superintelligent grossly underestimates the complexity of the brain and / or of consciousness itself.
Anyway, this was probably a big waste of my time, but this should clear up any confusion of where I’m coming from and hopefully help refine people’s thinking a little bit when talking about potential risks of AI. Big thanks to Laurynas Razmantas for helping me illustrate this!
Ask questions or topics to discuss here for the next videochat with fans at 5:00pm UTC on May 20th at twitch.tv/rossbroadcast. I’ll have a short follow-up to the AI discussion I had earlier + talk about possible future discussions / interviews. Normal videos still coming!
An interesting discussion / debate with Eliezer Yudkowsky on whether AI will end humanity. For some, this may be fascinating, frustrating, or terrifying, or it may spur more curiosity; I have no idea. I feel like we each tried our best to make our case, even if we got lost in the weeds a few times. There’s definitely food for thought here either way. Also, I screwed up and the chat text ended up being too tiny, sorry about that.
I’m not the best at thinking on the fly, so here are two key points I tried to make that got a little lost in the discussion:
1. I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that. I only see software that mimics many observable characteristics of intelligence and gets better at it the more it’s refined.
2. My main point with the stuff about real v. fake + biological v. machine evolution was only to say that just because a process shares some characteristics with another, it doesn’t follow that other emergent properties are shared as well. In many cases, they aren’t. This strikes me as the case for human intelligence v. machine learning.
By the end, I honestly couldn’t tell if he was making a faith-based argument that increasingly refined AI will lead to true intelligence, despite being unsubstantiated OR if he did substantiate it and I was just too dumb to connect the dots. Maybe some of you can figure it out!