Everything posted by Ross Scott
-
Here is the May videochat. Had a lot of announcements this time: more videos, possibly more interviews, and a bit of a follow-up on the earlier AI discussion. It ended up a bit of a blur to me again; there was definitely some discussion on emulation, the Microsoft / Blizzard buyout, and misc. stuff. I'm pretty sure I misread a question, said "there are two points," then forgot to say the second one.
-
Since I had a discussion / debate on AI a few weeks ago, the tendency for discussions of AI risk to branch out in directions that lose sight of the problem has been kind of bugging me, so I ended up making a chart that lays out exactly how I perceive all of this. On the chart, I listed every major risk I could think of and tried to separate them by how possible I thought they were. Much of the current AI discussion, I think, is about risks I perceive as fantasy, because they hinge entirely on superintelligence emerging. The chart also lays out my reasoning for why I think superintelligence in AI is not going to happen. However, since I'm not an expert, I left some gaps for anyone who disagrees with my conclusion to fill in, in case my assessment is faulty.

I acknowledge I don't fully understand the justification for saying AI is an existential risk to humanity any more than things we are already doing without AI, so this chart should make it easier to pinpoint exactly what information I'm missing if I'm wrong. The "Option 4" / "Counterpoints" part of the chart is specifically what I would need someone to fill in to make the opposing argument credible and the "fantasy" part of the chart a potential reality. If someone can point out an essential piece of information I'm missing that makes superintelligence a likelihood, then I'd say the existential risks of AI are plausible. If they cannot, then I'm forced to conclude they're not. I suspect much of the concern about AI becoming superintelligent grossly underestimates the complexity of the brain and / or consciousness itself.

Anyway, this was probably a big waste of my time, but it should clear up any confusion about where I'm coming from and hopefully help refine people's thinking a little when talking about the potential risks of AI. Big thanks to Laurynas Razmantas for helping me illustrate this!
-
Trying to get gamescope working on Linux with Nvidia card
Ross Scott posted a topic in Miscellaneous
I've been trying to get gamescope working with an Nvidia card (GeForce 970) on any Ubuntu-based distro of Linux. I've had some people helping me, but we've been running into a wall in terms of getting gamescope working. I want gamescope specifically for the ability to downsample the image, and I was told that's the best way to do it.

PROBLEM: I can't get gamescope to work in any capacity. I've tried it with multiple games in Lutris, but also in the terminal just trying to run glxgears (errors below):

gamescope -f -- glxgears

Below are some logs I took of the GOG copy of Judge Dredd v. Death; however, this problem has occurred on every game I've tested so far.

Log of Judge Dredd v. Death running using DXVK + dgvoodoo2:
Log of Judge Dredd v. Death failing to launch using DXVK + dgvoodoo2 + gamescope:

GENERAL STEPS:
-Installed Ubuntu MATE 22.04 LTS with internet enabled and let it update everything + 3rd-party drivers
-Installed Lutris via the software center
-Under the System tab for the Lutris profiles, I turned on "Disable Lutris Runtime"
-sudo apt update
-sudo apt install wine64
-sudo apt install libvulkan1 libvulkan1:i386 vulkan-tools
-sudo add-apt-repository ppa:samoilov-lex/gamescope
-sudo apt install gamescope
-sudo nano /etc/default/grub
-added "nvidia-drm.modeset=1" to GRUB_CMDLINE_LINUX=
-sudo update-grub
-reboot

Since the above didn't work, and compiling gamescope from source is apparently rather problematic on Ubuntu, one of the people assisting me compiled a build for it (attached to this post). After extracting it, I ran this to copy it over the installed one:

sudo cp gamescope /usr/games/

EXTRA STUFF:
-I have tested this on other distros also:
•Ubuntu (regular)
•Pop!_OS
They gave me the same result with gamescope.
-I have been using the 530 proprietary Nvidia driver, but tested it on 525 also.
-I also tested this on a GeForce 1070 and had the same result.
-This was part of a test to check results on both AMD and Nvidia graphics, so simply switching to an AMD card would defeat the purpose of the experiment.
-While I'm willing to switch to another distro, I'd prefer to stick with something Ubuntu-based to maximize compatibility.

I'd essentially like to confirm from people that it's not possible to get gamescope working on Nvidia hardware on Ubuntu before abandoning it. Thanks in advance for any help. If anyone posts a solution, I'll add it to the post later.

gamescope.zip

- - -

EDIT 1: The PPA I was directed to was outdated. The one I should have been using was:

sudo add-apt-repository ppa:ar-lex/gamescope

EDIT 2: User ReflexiveTransativeClosure has posted an alternate version of gamescope where downsampling does work, but it introduces oversharpening and a blur on the entire image.

EDIT 3: User ReflexiveTransativeClosure fixed the issue with a custom build! It's the latest version he posted in the thread on page 2. Turns out gamescope downsampling always worked, but only for non-integer numbers. Brilliant.
-
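For anyone landing here from a search: once gamescope does launch, downsampling just means rendering the game at a higher internal resolution than the output window. A minimal sketch of the invocation, assuming the -w/-h (internal game resolution), -W/-H (output resolution), and -f (fullscreen) flags from gamescope's usage text; double-check them against your build with gamescope --help. Per EDIT 3 above, a non-integer scale factor (here 3200x1800 down to 1920x1080, i.e. 1.67x) is the case that was reported working:

```shell
# Sketch only: render glxgears at 3200x1800 internally and scale down to a
# 1920x1080 output. The flag names are taken from gamescope's usage text and
# may differ between builds -- verify with `gamescope --help` first.
INTERNAL_W=3200
INTERNAL_H=1800
OUTPUT_W=1920
OUTPUT_H=1080

# Build the command line so it can be inspected before running it.
CMD="gamescope -w $INTERNAL_W -h $INTERNAL_H -W $OUTPUT_W -H $OUTPUT_H -f -- glxgears"
echo "$CMD"
```

In Lutris the same arguments would go in the command prefix, with the game's executable in place of glxgears.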
Ask questions or topics to discuss here for the next videochat with fans at 5:00pm UTC on May 20th at twitch.tv/rossbroadcast. I'll have a short follow-up to the AI discussion I had earlier + talk about possible future discussions / interviews. Normal videos still coming!
-
An interesting discussion / debate with Eliezer Yudkowsky on whether AI will end humanity. For some, this may be fascinating, frustrating, or terrifying, or it may spur more curiosity; I have no idea. I feel like we each tried our best to make our case, even if we got lost in the weeds a few times. There's definitely food for thought here either way. Also, I screwed up and the chat text ended up being too tiny, sorry about that.

EXTRA: I'm not the best at thinking on the fly, so here are two key points I tried to make that got a little lost in the discussion:

1. I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that. I only see software that mimics many observable characteristics of intelligence and gets better at it the more it's refined.

2. My main point with the stuff about real v. fake + biological v. machine evolution was only to say that just because a process shares some characteristics with another one, other emergent properties aren't necessarily shared also. In many cases, they aren't. This strikes me as the case for human intelligence v. machine learning.

MY CONCLUSION: By the end, I honestly couldn't tell if he was making a faith-based argument that increasingly refined AI will lead to true intelligence, despite being unsubstantiated, OR if he did substantiate it and I was just too dumb to connect the dots. Maybe some of you can figure it out!
-
Upcoming discussion / possible debate with AI expert Eliezer Yudkowsky
Ross Scott posted an article in General News
I'll be having a discussion / possible debate with AI expert Eliezer Yudkowsky on May 3rd at 6:10PM UTC at twitch.tv/rossbroadcast. He reached out to me after the last videochat I had where he was mentioned, so we're going to have a full discussion on AI and possibly the end of humanity. I'll post it on Youtube afterwards. There will also be a twist to it that I'll reveal when it starts. -
Here’s the April videochat. I finally fixed the framerate! This went on too long, but there were several good questions. Covered a range of topics, but a whole lot was on AI this time.
-
Freeman Freeman Freeman
-
Ask questions or topics to discuss here for the next videochat with fans at 5:00pm UTC on April 22nd at twitch.tv/rossbroadcast. Still working on a bunch of stuff; the next Freeman's Mind should be out before then. Also, the pace on future FM episodes will increase a lot since it fell behind.
-
I'm using the same mvtools method I used to make Freeman's Mind HD. That's because it takes an intuitive leap with motion vectors and pretty much eliminates all ghosting. Srcdemo2 can look very good, but the faster the motion, the higher the chance of ghosting; you'd essentially need an infinite framerate with Srcdemo2 for that not to happen.
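For anyone who wants to play with the same idea without mvtools: ffmpeg's minterpolate filter in mci mode also does motion-compensated interpolation, synthesizing in-between frames from estimated motion vectors rather than cross-fading frames, which is what avoids the ghost trails. This is a generic sketch with placeholder filenames, not the actual pipeline used for Freeman's Mind:

```shell
# Not the actual Freeman's Mind pipeline -- a generic motion-compensated
# interpolation example. mi_mode=mci builds new frames from motion vectors
# instead of blending, so fast motion doesn't leave ghost images behind.
IN="input_30fps.mp4"    # placeholder input filename
OUT="output_60fps.mp4"  # placeholder output filename

# fps=60: target framerate; mc_mode=aobmc: adaptive overlapped block
# motion compensation, which reduces blocking artifacts at motion edges.
FILTER="minterpolate=fps=60:mi_mode=mci:mc_mode=aobmc"

# Assemble and print the command rather than running it blind.
CMD="ffmpeg -i $IN -vf $FILTER $OUT"
echo "$CMD"
```

The trade-off is the same one Ross describes: vector-based interpolation guesses where things are moving, so it stays clean on fast motion where frame-blending approaches like Srcdemo2 would need an extremely high source framerate.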
-
Here's the March videochat. Had the usual rambling. Everything is running behind unfortunately, but it's still happening, long FM episode should be next. Also, it looks like I screwed up and clipped some of the chat and didn't notice, but it's mostly there.
-
Ask questions or topics to discuss here for the next videochat with fans at 6:00pm UTC on March 18th at twitch.tv/rossbroadcast. I'm running behind schedule, again, but I have several interesting videos coming!
-
Here's the February videochat. Talked about some of the usual topics that come up in these, also a chunk on AI and job implications. I updated OBS for this one, so fingers crossed that clicking sound didn't return when I did. Freeman's Mind has resumed!
-
Ask questions or topics to discuss here for the next videochat with fans at 6:00pm UTC on February 11th at twitch.tv/rossbroadcast. I'm still working on some larger videos, but may break it up with shorter ones, haven't decided yet, more stuff still coming!