Videochat May 2023

Here is the May videochat. Had a lot of announcements this time: more videos, possibly more interviews, and a bit of a follow-up on the AI discussion from earlier. It ended up a bit of a blur to me again; there was definitely some discussion on emulation, the Microsoft / Blizzard buyout, and misc. stuff. I’m pretty sure I misread a question at one point and said “there are two points,” then forgot to say the second one.

Ross's Interpretation of AI Risks

[Chart: Ross's interpretation of AI risks]

Since I had a discussion / debate on AI a few weeks ago, the tendency for discussions of AI risks to branch out in directions that lose sight of the actual problem has been kind of bugging me, so I ended up making a chart that lays out exactly how I perceive all of this. On this chart, I listed every major risk I could think of and tried to separate them by how possible I think they are. Much of the current AI discussion, I think, is about risks I perceive as fantasy, because they hinge entirely upon superintelligence emerging. The chart also lays out my reasoning for why I think superintelligence in AI is not going to happen.

However, since I’m not an expert, I left some gaps for anyone who disagrees with my conclusion to fill in, in case my assessment is faulty. I acknowledge I don’t fully understand the justification for saying AI is an existential risk to humanity beyond anything we are already doing without AI, so this chart should make it easier to pinpoint exactly what information I’m missing if I’m wrong.

The “Option 4” or “Counterpoints” part of the chart is specifically what I would need someone to fill in to make the opposing argument credible and the “fantasy” part of the chart a potential reality. If someone can point out an essential piece of information I’m missing that makes superintelligence likely, then I’d say the existential risks of AI are plausible. If they cannot, then I’m forced to conclude they’re not. I suspect much of the concern about AI becoming superintelligent grossly underestimates the complexity of the brain and / or consciousness itself.

Anyway, this was probably a big waste of my time, but it should clear up any confusion about where I’m coming from and hopefully help refine people’s thinking a little when talking about the potential risks of AI. Big thanks to Laurynas Razmantas for helping me illustrate this!

Questions for Videochat May 2023

Ask questions or suggest topics to discuss here for the next videochat with fans at 5:00pm UTC on May 20th at twitch.tv/rossbroadcast. I’ll have a short follow-up to the AI discussion I had earlier, plus talk about possible future discussions / interviews. Normal videos are still coming!

Discussion / debate with AI expert Eliezer Yudkowsky

An interesting discussion / debate with Eliezer Yudkowsky on whether AI will end humanity. For some, this may be fascinating, frustrating, or terrifying, or it may spur more curiosity; I have no idea. I feel like we each tried our best to make our case, even if we got lost in the weeds a few times. There’s definitely food for thought here either way. Also, I screwed up and the chat text ended up being too tiny, sorry about that.

 

EXTRA:

I’m not the best at thinking on the fly, so here are two key points I tried to make that got a little lost in the discussion:

1. I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that. I only see software that mimics many observable characteristics of intelligence and gets better at it the more it’s refined.

2. My main point with the stuff about real v. fake and biological v. machine evolution was only that just because a process shares some characteristics with another one, its other emergent properties aren’t necessarily shared as well. In many cases, they aren’t. This strikes me as the case for human intelligence v. machine learning.

MY CONCLUSION
By the end, I honestly couldn’t tell if he was making a faith-based argument that increasingly refined AI will lead to true intelligence, despite it being unsubstantiated, OR if he did substantiate it and I was just too dumb to connect the dots. Maybe some of you can figure it out!

Upcoming discussion / possible debate with AI expert Eliezer Yudkowsky

I’ll be having a discussion / possible debate with AI expert Eliezer Yudkowsky on May 3rd at 6:10PM UTC at twitch.tv/rossbroadcast. He reached out to me after the last videochat I had where he was mentioned, so we’re going to have a full discussion on AI and possibly the end of humanity. I’ll post it on Youtube afterwards. There will also be a twist to it that I’ll reveal when it starts.
Contact

The best way to contact me is via email. I only sometimes see Youtube and forum comments, and I only use Twitter for quick announcements.

I'm happy to get email from people and will try to answer questions; however, I can often get overwhelmed by the volume of email I receive, and I often neglect it in favor of spending more time making videos. I DO read all email sent to me, but replies can often take a while. Please don't be offended by slow replies, even if they can sometimes take months. It's the only way I can have enough time to keep making more videos. If it's something time-sensitive, it may help to note that in the subject line. Alternatively, if it's been a LONG time and you're waiting on a reply, it's okay to send me another message to remind me.

Due to limited time, I probably won't reply to emails that are only news reports or requests for Ross's Game Dungeon. There is about a 0.1% chance your email will be flagged as spam by Gmail and I'll miss it. As flawed as the process is for me, email is still the best way to ensure I'll see what you have to say. Also, be sure to check the FAQ!
