Ask questions or topics to discuss in the link here for the next videochat with fans at 5:00pm UTC on June 17th at twitch.tv/rossbroadcast. My previous plans went to hell, but I’m still trying hard to get the next Game Dungeon out before the next videochat; it will be close. If it’s not out by Saturday, it shouldn’t be more than a day or two afterwards. Other videos are still coming!
-
Due to malicious attention stemming from the Stop Killing Games campaign, site registrations have been temporarily disabled.
All other normal logins and site functions still work. Thank you for your understanding.
-
February 7th: Videochat with fans
It will be live at twitch.tv/rossbroadcast at 6PM UTC.
I had a good discussion with sociologist and statistician Jacy Reese Anthis. I thought this one went really well: we talked about AI, consciousness, reality, some philosophy, and some directions society might be going in general. While we had different outlooks, I think I was able to get some of the answers I was looking for on AI. I think this chat was a good omen for future discussions with other people as a side thing I’m doing.
I’ll be having another discussion about AI with sociologist / statistician Jacy Reese Anthis of the Sentience Institute at 5:10pm UTC on June 5th on twitch.tv/rossbroadcast. After the last interview, he reached out to me and I think his background should give us a lot to talk about. I also think this discussion will be more likely to be what I was looking for than the previous one. We’ll be talking a lot about AI and consciousness and how AI might be likely to shape society. Additionally, I plan to ask him some sociology questions in general and try to make some predictions about where we’re heading as a civilization.
Here is the May videochat. I had a lot of announcements this time: more videos, possibly more interviews, and a bit of a followup on the earlier AI discussion. It ended up a bit of a blur to me again, but there was definitely some discussion on emulation, the Microsoft / Blizzard buyout, and misc. stuff. I’m pretty sure I misread a question and said “there are two points,” then forgot to say the second one.
Since I had a discussion / debate on AI a few weeks ago, it’s been bugging me how discussions of AI risks tend to branch out in different directions and lose sight of the problem, so I ended up coming up with a chart that lays out exactly how I perceive all of this. On this chart, I listed every major risk I could think of and tried to separate them by how possible I thought they were. Much of the current AI discussion, I think, is about risks I perceive as fantasy, because they hinge entirely upon superintelligence emerging. This chart also lays out my reasoning for why I think superintelligence in AI is not going to happen.
However, since I’m not an expert, I left some gaps for anyone who disagrees with my conclusion to fill in, in case my assessment is faulty. I acknowledge I don’t fully understand the justification for saying AI is any more of an existential risk to humanity than the things we are already doing without AI, so this chart should make it easier to pinpoint exactly what information I’m missing if I’m wrong.
The “Option 4” or “Counterpoints” part of the chart is specifically what I would need someone to fill in in order to make the opposing argument credible and the “fantasy” part of the chart a potential reality. If someone is able to point out an essential piece of information I’m missing that makes superintelligence a likelihood, then I’d say the existential risks of AI are plausible. If they cannot, then I’m forced to conclude those risks aren’t plausible either. I suspect much of the concern about AI becoming superintelligent grossly underestimates the complexity of the brain and / or consciousness itself.
Anyway, this was probably a big waste of my time, but it should clear up any confusion about where I’m coming from and hopefully help refine people’s thinking a little when discussing the potential risks of AI. Big thanks to Laurynas Razmantas for helping me illustrate this!