
Videochat June 2022

Here’s the latest June videochat. This one was a bit of a mess since my voice started going after 15 minutes (but came back), the framerate recorded low again (I need to troubleshoot that), and I rambled way too much. I kept the door closed to my room to cut back on noise, but I think it resulted in me not getting enough oxygen. I’m not sure I said anything of value except the part at the end about a documented case where 80% of public comments to the FCC were faked.



1:45:54 - Grimrock has cooldowns rather than turns. It requires a little waiting between attacks, although you can move while you wait (and doing so is helpful for dodging enemy attacks).

On 6/20/2022 at 7:56 PM, Ross Scott said:

"[...] I kept the door closed to my room to cut back on noise, but I think it resulted in me not getting enough oxygen. [...]"

Maybe keep your AC on next time? (If you have a central one, that is, and not just an in-room wall unit.)

Edited by kerdios



2:03:30

On the subject of terminology that didn't exist at the time for certain scenarios: I'm pretty sure Ross had another violation (besides mentioning Jackass) when he dropped the term "toxic relationship" in one of the more recent episodes. I'm pretty damn sure that phrase wasn't in use around Y2K.


Evangelisation warning...

 

I've been guzzling down the realtime raytracing Kool-Aid since the day, long ago, when I popped a 50MHz MC68060 into my Amiga and a fullscreen preview render of a simple scene in Real3D v2 suddenly took only a few seconds. (That preview: a single light source pinned to the camera, no shadows, everything the same opaque diffuse material.)

 

Even after these long years of waiting, I remain impatient for things to get to where rasterization (including the hybrid approach currently used in games with RTX, or its DX equivalent) can be consigned to a museum, and I do think the difference is "game changing": a lot of the nice lighting in current games comes from static, prerendered lightmaps (sometimes multiple sets, to simulate a day-night cycle), and even UE5's Lumen is but a poor substitute.

I'd argue that even when you do not consciously take note of the exchange of reflected light between your shoes and the floor you stand on (or the lack thereof), your brain does, and its absence inevitably produces that "cut-and-paste clipart" appearance. :7

 

 

There are a few bits of reasoning to my madness... On one hand, raytracing is *a lot* more processing-heavy than rasterisation; on the other hand, since it pretty much works per-pixel, it has the potential to allow for optimisation options that are much harder to do full-buffer:

 

For starters: If the implementer thinks ahead just a little bit, computing should be a fair bit more distributable.

It is not going to happen, of course, but I'd like to see multi-GPU return, in a form where you could supplement a main graphics card with an arbitrary number of GPU-and-cache-only, RT-cores-only, power/performance-optimised "render farm" cards. The bus/network/protocol connecting them would be an industry standard that allows you to freely mix cards from different manufacturers and generations, with the managing software doling out work allotments among them according to their respective capabilities.

Home- and pro users could run the exact same application builds, and differ only in the size of their hardware stacks.
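The "doling out work allotments" idea can be sketched in miniature. Here is a toy scheduler, assuming per-card throughput scores have already been measured somehow; the function name and the numbers are mine, purely for illustration:

```python
# Toy sketch of the work-doling idea: split a frame's render tiles
# among a heterogeneous stack of cards in proportion to each card's
# measured throughput (the scores below are made up).

def split_tiles(total_tiles, throughputs):
    """Return per-device tile counts proportional to throughput."""
    total = sum(throughputs)
    shares = [total_tiles * t // total for t in throughputs]
    # Hand any rounding remainder to the fastest device.
    shares[throughputs.index(max(throughputs))] += total_tiles - sum(shares)
    return shares

# A main card plus two older "render farm" cards:
print(split_tiles(1024, [60, 25, 15]))  # -> [615, 256, 153]
```

Every tile gets assigned exactly once, so mixing generations only changes the split, not the result.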

 

Next: given this fragmentation of the work, one could conceivably cast rays in passes, per working unit, adding fidelity with each one, and stop dead at any arbitrary point in time (typically in anticipation of an imminent screen refresh) to collate what one has so far and construct the finished frame from it, inherently scaling rendering quality dynamically to performance and scene complexity.
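A minimal sketch of that render-until-the-deadline loop; the pass function here is a stand-in for a sparse ray-casting sweep, not real rendering:

```python
import time

def render_progressive(cast_pass, budget_s):
    """Run sample passes until the time budget runs out, then
    return the average of whatever passes completed."""
    deadline = time.monotonic() + budget_s
    accum, passes = 0.0, 0
    # Always complete at least one pass, then stop at the deadline.
    while passes == 0 or time.monotonic() < deadline:
        accum += cast_pass()  # one sparse pass over the frame
        passes += 1
    return accum / passes, passes

# Dummy "pass" standing in for a low-density ray-casting sweep:
value, n = render_progressive(lambda: 1.0, budget_s=0.01)
```

A slow scene simply completes fewer passes before the refresh; the frame still ships on time, just noisier.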

 

Then there is the matter of the viewplane. Working per-pixel, this does not necessarily have to be a flat rectangle - it could be a sphere or cone section, or anything, which could save a lot of unnecessary work and buffer memory -- in particular with wide fields of view, since rendering to a single rectangle becomes more and more inefficient (by the tangent), the farther out from the centre you go, until you reach infinity at 180°.
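That tangent blow-up is easy to put in numbers. A quick sketch (the function name is mine) of how many pixels a flat viewplane spends per degree at the edge of the FOV, relative to the centre:

```python
import math

def edge_to_centre_density(fov_deg):
    """On a flat viewplane, x = tan(theta), so pixels-per-degree grows
    as sec^2(theta); return the edge-vs-centre ratio for a given FOV."""
    half = math.radians(fov_deg / 2)
    return 1 / math.cos(half) ** 2

for fov in (90, 120, 160):
    print(f"{fov} deg FOV: {edge_to_centre_density(fov):.1f}x")  # 2.0x, 4.0x, 33.2x
```

At 160° the outermost pixels are already some 33 times denser (in angular terms) than the central ones, and the ratio diverges as you approach 180°.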

 

Continuing along the line of thought above, we have the distortion caused by the lenses in a VR headset, which stretches the imagery toward the periphery of the lens (the usual "pincushion" type), so the compensating software pre-distorts the rendered frame the opposite way. This too could be accounted for (e.g. by shaping the viewplane), saving work by weighting its distribution toward the parts of the image where it does the most good and doesn't go to waste. Here one could, from the get-go, cast one's rays with direction deviations given by modelling the lens; again optimising one's efforts by working smarter, not harder, and eliminating the need to distort the rendered image in post to compensate for the lens.
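A sketch of the "model the lens up front" idea, using a one-term radial distortion polynomial; the coefficient is invented for illustration and is not any real headset's profile:

```python
import math

def ray_angle(px, py, k1=0.22):
    """Map a normalised screen position (centre = 0,0) to an angle off
    the view axis, folding a simple radial lens model into the ray
    direction itself, instead of warping the finished frame in post."""
    r = math.hypot(px, py)
    r_lens = r * (1 + k1 * r * r)  # one-term radial distortion model
    return math.atan(r_lens)

# The centre ray is undeviated; rays near the lens edge bend further out.
print(ray_angle(0.0, 0.0), ray_angle(1.0, 0.0))
```

Each ray is simply aimed where the lens will put it, so no pixels are rendered only to be squeezed away by the post-distortion step.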

 

...and then there is the matter of foveated rendering. Given the very heavy bias in the density of cone-type photoreceptors on the human retina toward a tiny spot aligned with one's gaze, there is a fair bit of work to be saved by putting the lion's share of one's rays into the narrow view cone of that spot, where one's vision is sharpest. Many upcoming HMDs will purportedly have eyetracking fast and accurate enough to follow the direction the user is watching, and to update on the fly which part of the frame receives this preferential treatment.

(EDIT: ...again something that is easier to do when one can work per-pixel. All the techniques I have mentioned are, to my mind, going to be pretty much prerequisite to making decent use of high-resolution, high-field-of-view VR headsets.)
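A toy version of such a gaze-weighted ray budget: full density inside a small foveal cone, thinning out with angular distance from the tracked gaze point. The cone size and falloff curve are invented for illustration:

```python
def sample_weight(angle_deg, fovea_deg=5.0):
    """Relative ray budget for a pixel, by angular distance from the
    tracked gaze point: full density inside the foveal cone, an
    inverse-square falloff outside it, floored so the periphery
    never goes completely dark."""
    if angle_deg <= fovea_deg:
        return 1.0
    return max(0.05, (fovea_deg / angle_deg) ** 2)

print(sample_weight(2.0), sample_weight(20.0))  # -> 1.0 0.0625
```

With fast enough eyetracking, the dense spot simply follows the gaze, and the bulk of the frame gets by on a small fraction of the rays.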

 

So there are my assessments and opinions... I fully expect the industry to disappoint. :7

Edited by jojon


Regarding monitors, another reason "competitive players" wouldn't want to go ultrawide is that some modern games cap your FOV horizontally. In other words, your view ends up more zoomed in than it is at 16:9, and you have chunks of the screen missing.
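To see why a horizontal cap forces the zoom: horizontal FOV follows from vertical FOV and aspect ratio as hFOV = 2·atan(tan(vFOV/2)·aspect), so an ultrawide naturally implies a wider horizontal angle at the same vertical one. A quick check (the 60° figure is just an example):

```python
import math

def hfov(vfov_deg, aspect):
    """Horizontal FOV implied by a vertical FOV at a given aspect ratio."""
    half_v = math.radians(vfov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half_v) * aspect))

print(f"16:9 at 60 deg vertical: {hfov(60, 16/9):.1f} deg horizontal")
print(f"21:9 at 60 deg vertical: {hfov(60, 21/9):.1f} deg horizontal")
```

If a game clamps the horizontal value to the 16:9 figure, the 21:9 screen has to shrink the vertical FOV to fit under the cap, which is exactly the zoomed-in, cropped view described above.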

