The software of my dreams

On 1/13/2021 at 4:46 AM, Ross Scott said:

The scope of what I'm looking for is too large for me to be pushing it in any one direction beyond tool development. What you're talking about would be best if I wanted a specific game world converted, not hundreds or thousands. My video is intentionally broader, to try to ignite interest in this, so that as advancements are made on this front in software, people who saw the video can keep this kind of application in mind, get incentivized to make it happen for games they're interested in, and hopefully lead to more worlds being explorable.

This is true, but there are real-world limitations. Some reconstruction projects for real cities have taken tens, if not hundreds, of hours, largely because of GPU constraints. Hardware just isn't at that level yet.

My point is that if something like this is going to be pushed, we have to start somewhere, don't we?

I say, pick an old game you really like. Old, too, to avoid legal trouble. Bring the community together to take screenshots of everything, have two or three people who know their shit stitch and clean the results, and make the meshes public.

Then have people explore those worlds.


Public interest rises, and many more projects advance trying to automate this.


Edit: I believe I'm failing to make my point clear, so here we go. Making software to extract 3D data from games using SfM (structure from motion) is NOT a commercially viable option, at least from a AAA perspective.

I don't want to look as if I am shilling my shit, but it is an example.

I made an online application for SfM, meant to let agricultural sectors with fewer resources analyze crops with just a cellphone: analyze 3D reconstructions of plants with a toolset for morphological analysis.


It was a bitch to get initial funds, and we were barely able to get a webapp going.

I then opened a crowdfunding campaign here, with no real success: https://gogetfunding.com/plarepi-localization/

Sure, we're only looking to localize and, at best, add AI algorithms; the product is already finished... but we're still from a small country with a small budget. Getting users in our own country has been hard.


The reality is, empowering people with tools is a risky venture, because it requires as much effort educating people as it does selling the tools. It becomes niche as hell.

It may be because agriculture isn't seen as innovative on its own, it may be our country, or that we're huge nerds with no social pull... but that doesn't change the fact that AAA companies aren't going to take this risk.

This is why I say that something like "the dream software" must be financed by the fans, for the fans. I don't see it happening otherwise.

Even things like testing must start with people's help.

Sure, I can make a crowdfunding project, but we aren't gonna achieve anything without LOTS of support.

Oh, and for those wondering: it should be financed, but the final product MUST be open source.

Edited by Ovimeister (see edit history)

On 1/14/2021 at 6:43 PM, Roundcube said:

It could be that I just haven't heard of it, but I've wondered for a while now why there isn't a community-built game engine produced in a manner similar to Linux: a community-owned and -developed engine that is constantly evolving to incorporate the wants of the community rather than the budgetary shackles of shareholders.


I admit, it would have to be a baller engine to get programmers to opt into it.

IIRC, there was an attempt to do this sort of thing with the Torque engine, but it never really took off. https://en.wikipedia.org/wiki/Torque_(game_engine)



I've got multiple dream softwares... a lot of these tie into "The GUI should be better", so... :P

1: A Discord client with proper customization. I don't like how *flat* Discord is, and the only other Discord clients I know of are Ripcord (which I'm using right now) and BetterDiscord (which is a pain in the ass!)

2: Something to make Windows 10 MUCH more customizable. Once again, flatness blows. Give me Aero or give me death.


Hey Ross, I came up with an idea/suggestion for the next Freeman's Mind episode! How about including the all-knowing vortigaunt after the part where you destroy the chopper? If you don't know what I'm talking about, you can search it on YouTube. Of course it's completely up to you. Just an idea.


Ross: I've put a big reply up on my site, with some detailed thoughts on technical direction, unexpected outcomes and ethical issues relating to 3D gameplay capture and 3D world reconstruction:






Apologies to everyone who has been commenting here, I have not read through things yet.  I spent quite a bit longer on this reply than I anticipated.  I'll get there.


I only found out about Ross' video a few days ago; Ross' RSS feed is being blocked by his site's Cloudflare setup, so my (and everyone else's?) feed reader can't access it.


Edited by Veyrdite (see edit history)

On 1/23/2021 at 9:35 PM, Veyrdite said:

Ross' RSS feed is being blocked by his site's Cloudflare setup, so my (and everyone else's?) feed reader can't access it.

It's probably because someone is trying to DDoS the site, or trying to bruteforce passwords.

Edited by BTGBullseye (see edit history)



If anyone is interested, I can do some work on prototyping an application for this. What I would need is somebody to generate some training data.


First, to deal with the "shadow problem": if someone can generate screenshot pairs of game scenes with lighting turned on and lighting turned off, we can use them to train a model to de-shadow an image, producing a texture layer and a shadow layer. This should be done programmatically so the screenshots match up exactly: record a demo of walking around a map, then on playback have the engine screencap each frame once as-is and again with lighting disabled. The same can be done for specular highlights as well, but it's best to start off basic.
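To make the target of that de-shadowing model concrete, here is a minimal sketch assuming a simple multiplicative lighting model (lit = texture × shading). A trained network would have to recover this decomposition from the lit image alone, but with the proposed screenshot pairs you can compute the ground-truth shadow layer directly; the function name and toy pixel values are mine, not from any engine:

```python
import numpy as np

def split_shadow_layer(lit, unlit, eps=1e-6):
    """Decompose a lit screenshot into texture and shadow layers.

    Assumes the multiplicative model lit = texture * shading, so the
    shadow (shading) layer is recovered as lit / unlit. Both inputs are
    float arrays in [0, 1] with identical shapes; eps avoids division
    by zero on pure-black texels.
    """
    lit = np.asarray(lit, dtype=np.float64)
    unlit = np.asarray(unlit, dtype=np.float64)
    shading = lit / np.maximum(unlit, eps)  # per-pixel shadow layer
    return unlit, np.clip(shading, 0.0, 1.0)

# A fully lit pixel (0.8) over texture value 0.8 gives shading 1.0;
# a half-shadowed pixel (0.4) over the same texture gives 0.5.
texture, shading = split_shadow_layer(
    np.array([[0.8, 0.4]]), np.array([[0.8, 0.8]])
)
```

These (texture, shading) pairs would then serve as training targets for the de-shadowing network.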


I'm not familiar with photogrammetry algorithms, but we can give that a shot. Off the top of my head, we could create another model to add depth information to a screenshot (perhaps even better if it's from a video, but for now it's better to focus on the single-screenshot case), with the dataset again generated the same way as above, just with a depth-based shader instead. Stitching the depth images together into 3D space should then be easier. Otherwise, there are tons of other algorithms to look into as well (e.g. https://github.com/natowi/3D-Reconstruction-with-Deep-Learning-Methods ).
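On the "stitching depth images into 3D space" step: once you have per-pixel depth, each screenshot becomes a point cloud by back-projecting through the camera model. A minimal sketch, assuming an idealized pinhole camera with square pixels and a known field of view (a real pipeline would export the engine's exact intrinsics instead):

```python
import numpy as np

def depth_to_points(depth, fov_deg=90.0):
    """Back-project a per-pixel depth map into camera-space 3D points.

    depth: (H, W) array of distances along the camera's view axis.
    Returns an (H*W, 3) array of (x, y, z) points. Assumes a pinhole
    camera with the given horizontal field of view.
    """
    h, w = depth.shape
    # Focal length in pixels from the horizontal FOV.
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    u = np.arange(w) - (w - 1) / 2.0  # pixel coords, centered on the image
    v = np.arange(h) - (h - 1) / 2.0
    uu, vv = np.meshgrid(u, v)
    x = uu * depth / f                # similar-triangles back-projection
    y = vv * depth / f
    z = depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat wall 2 units away becomes a planar grid of points at z = 2.
points = depth_to_points(np.full((4, 4), 2.0))
```

Merging clouds from successive frames then reduces to estimating the camera pose per frame (which a demo file conveniently records for free) and transforming each cloud into world space.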


So if anyone wants to take on the dataset task, or if anyone has old gaming hardware to devote to this, post here, I guess. For anyone interested in the deep learning side of things, trying to implement some of these algorithms could be a good way to get into AI.



Edited by Beaker (see edit history)


Finally got around to watching this.


This got SO CLOSE to my own dream software: real-time world creation.


Imagine a game that created itself AS YOU EXPLORED IT. It would take data from real places, other games, etc, and throw it at a super-AI that would then generate more world on the fly. We're talking machine learning based on real places, not just random procedural stuff.


I imagine an MMO where there are literally no limits to where you can go. The game could initially take place in a city, but someone could hop in a car and drive past the city limits, and the game would start filling in the surrounding countryside, even generating a neighboring city. Then the players take it from there and breathe life into those places by, say, moving into the houses and apartments, taking up jobs in the commercial buildings, etc. You'd feel like you're exploring a real place, which also makes it perfect for VR! All you have to do to determine what content is generated is pick what initial data is fed to the AI (i.e. sample images, 3D data, etc.).


I actually think this is within our reach with machine learning in its current state. The issue is server space. Whatever content is generated would need to be stored and preserved for consistency, and that data is potentially infinite. We're talking Google-size server space here. But the end result would be a video game completely unbounded in where, and how far, you can go.

On 2/15/2021 at 12:49 AM, Ganymede105 said:

The issue is server space. Whatever content is generated would need to be stored and preserved for consistency, and that data is potentially infinite. We're talking Google-size server space here. But the end result would be a video game completely unbounded in where, and how far, you can go.

Easy fix: make it a sphere of limited size rather than an endless flat expanse. Assign different structure parameters to small indicative symbols, and the world can be recreated on the fly without actually storing it shape by shape.
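That "recreate on the fly" idea is usually implemented by making every chunk of the world a pure function of a global seed and the chunk's coordinates, so nothing generated ever needs to be stored. A minimal sketch in Python (the `WORLD_SEED` value and the toy 4x4 height grid are hypothetical placeholders for real terrain generation):

```python
import hashlib
import random

WORLD_SEED = 1337  # hypothetical global seed shared by server and clients

def chunk_rng(cx, cy):
    """Derive a deterministic RNG for the chunk at integer coords (cx, cy).

    Hashing (seed, cx, cy) gives every chunk its own reproducible random
    stream, so any machine regenerates the identical chunk on demand.
    """
    key = f"{WORLD_SEED}:{cx}:{cy}".encode()
    digest = hashlib.sha256(key).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

def generate_chunk(cx, cy, size=4):
    """Toy terrain: a size x size grid of height values in [0, 255]."""
    rng = chunk_rng(cx, cy)
    return [[rng.randrange(256) for _ in range(size)] for _ in range(size)]

# Regenerating a chunk yields identical terrain every time, so only
# player-made *changes* to a chunk ever need server storage.
chunk = generate_chunk(10, -3)
```

This is essentially how Minecraft-style worlds avoid storing untouched terrain: the seed is the world, and the database only holds diffs.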



I sent an email to Ross a while back asking for his thoughts, but I haven't gotten a response. So I guess I'll just do a public post here.


The short version is: Have we considered using voxels?


If anyone here isn't familiar with them, voxels are basically just pixels in 3D space. You've probably seen them in some retro-esque 3D games that are going for a visual style similar to something like Minecraft. Cube World is probably the most obvious example I can think of.


However, voxels have much, MUCH more versatility than just blocky retro-throwback stuff. Computers can now render really detailed models using voxel geometry instead of polygons, and for a while some lesser-known titles used voxel-based engines instead of polygonal ones to try to get more detailed environments. Nowadays, because polygons have become the standard and additional levels of detail aren't really that big of a deal anymore, voxels have fallen into a niche. Usually they're used for simulation games where updating polygonal 3D models on the fly isn't really an option. I'm sure a lot of you are familiar with Teardown, and in my search I also found this:


However both of these games seem to be low-balling what voxels are capable of.

Here's a tech demo I found of a tank, and this is from over five years ago: 



And here's an example of voxels being used to create realistic terrain on the Nintendo DS:



I bring all this up because I feel like this is the easiest compromise for reading a 3D world off of a video file. For one, the AI wouldn't have to carefully construct polygonal models for everything it finds; it just needs to replicate what it sees: pixels. And two, we don't have the special engines with advanced optimization tricks needed to generate those huge, detailed open-world games... but voxels support a very crude level-of-detail system: as an object gets further away, just merge its voxels into fewer, larger ones.
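That merge-with-distance idea maps directly onto 2x2x2 block pooling over a voxel occupancy grid. A minimal sketch, assuming a boolean solid/empty grid with even dimensions (real voxel engines would also merge colors/materials, which this ignores):

```python
import numpy as np

def merge_voxels(grid):
    """Halve a voxel occupancy grid's resolution by merging 2x2x2 blocks.

    A merged voxel is solid if any of its 8 children is solid. Applying
    this repeatedly to distant chunks is the crude LOD scheme described
    above: fewer, larger voxels to render the further away you look.
    Grid dimensions are assumed to be even.
    """
    g = np.asarray(grid, dtype=bool)
    d, h, w = g.shape
    # Split each axis into (block index, position within block)...
    blocks = g.reshape(d // 2, 2, h // 2, 2, w // 2, 2)
    # ...and reduce over the within-block axes.
    return blocks.any(axis=(1, 3, 5))

fine = np.zeros((4, 4, 4), dtype=bool)
fine[0, 0, 0] = True           # one solid voxel in a corner
coarse = merge_voxels(fine)    # a (2, 2, 2) grid; only the corner is solid
```

Using `any` keeps silhouettes from developing holes at a distance; using a majority vote instead would thin objects out but avoid inflating them.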


The main thing I'm worried about is Ross's obsession with anti-aliasing, because (and correct me if I'm wrong) I think most proper anti-aliasing systems in games these days rely on the model and texture data to get the best result. FXAA could probably be a decent general substitute, but if you're in VR and you mash your face into something, it's just gonna look weird.

Edited by HQDefault (see edit history)

