Exnihilator

  1. Could video game capture be one of those best-case scenarios, with a stable camera, a vastly simplified light model (pre-raytracing at least), and repeated materials/textures? Going back to my previous post where I mentioned reading the game's memory - reading the exact camera position/direction/FOV from memory would also be more accurate than gyroscope data and could help any algorithms. I get the feeling that in the real world AI is mainly used to make sense of messy data, and is basically statistics if you dig deep enough -- but video/other data from games could be way less messy if you think about what you're feeding it. Disabling bloom/motion blur/other fancy camera effects could make the (still near-impossible) task much easier, if you treat it as a specific problem distinct from real-world video-to-3D.
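To make the "read the camera from memory" idea concrete, here's a minimal sketch. It assumes (hypothetically - the layout and offset are made up for this example, and would have to be found with a memory scanner per game) that the engine stores the camera as seven consecutive little-endian floats: position xyz, direction xyz, FOV.

```python
import struct

# Hypothetical layout: 7 consecutive little-endian floats
# (pos x/y/z, dir x/y/z, fov). The offset is invented for this
# demo; in practice you'd locate it with a memory scanner first.
CAMERA_OFFSET = 0x40

def read_camera(dump: bytes, offset: int = CAMERA_OFFSET) -> dict:
    """Unpack a camera pose from a raw memory dump."""
    x, y, z, dx, dy, dz, fov = struct.unpack_from("<7f", dump, offset)
    return {"pos": (x, y, z), "dir": (dx, dy, dz), "fov": fov}

# Build a fake dump to demonstrate: padding, then the packed floats.
dump = bytes(CAMERA_OFFSET) + struct.pack(
    "<7f", 10.0, 2.5, -3.0, 0.0, 0.0, 1.0, 90.0
)
cam = read_camera(dump)
print(cam["pos"], cam["fov"])  # (10.0, 2.5, -3.0) 90.0
```

Exact per-frame pose like this is the kind of ground truth that gyroscope-based real-world capture can only approximate.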
  2. I think for games, using the video input is just the first step - the extra steps of segmenting the data and getting individual models out of it would be too messy/lossy to be easily usable without doing some actual archaeology-level work of cataloging/labeling objects and such. The AI would have no context for this from videos alone. The "NES games to 3D" approach, where the AI actually looks at and tries to understand the game code/memory a bit, would be more likely to produce usable, quality data. Applying that to 3D games might not be theoretically impossible, but it's of course not actually feasible, because the search space is so much bigger and everything is much more complex. What if, instead of video, the input were a screenshot plus a dump of the game's RAM/VRAM to search? It could try to find the textures/3D data in memory that correlate with the input image. It'd still require doing photogrammetry on the source image, so the AI would know exactly what it's trying to find and what transforms/camera angles are needed. I really just want the raw data (textures/geometry) to see how the original creators made it in the first place - an AI-created imitation doesn't give me that. It might also be a less open-ended problem with clear goals to train an AI for: recreate this image in a 3D engine using that memory data. Another, actually possible-to-implement variant of this would be something like an asset flip detector that matches against a catalog of known models and gives you the models from there instead of reverse engineering anything - like a reverse image search, but for 3D models. It's cheating of course, and wouldn't give you any of the interesting and unique stuff. From that idea you could maybe even work towards recreating game worlds using generic assets instead of exact matches - because it usually doesn't really matter if it's rock_03b or rock_7d. 
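A toy version of the "find the texture in the memory dump" step could look like this. It assumes a best-case simplification (my assumption, not anything a real engine guarantees): the texture sits in the dump as raw, uncompressed RGBA bytes, so an exact byte search works. Real engines compress, swizzle, and tile texture memory, so an actual tool would need format-aware decoding on top of this.

```python
# Toy "texture archaeology": search a raw memory dump for the byte
# pattern of a small RGBA tile taken from a screenshot region.
# Exact matching only works because this demo skips compression
# and swizzling, which any real engine would apply.

def find_tile(dump: bytes, tile: bytes) -> int:
    """Return the offset of the tile's raw bytes in the dump, or -1."""
    return dump.find(tile)

# Fake 2x2 RGBA tile (red, green, blue, white pixels) embedded
# in a fake dump at offset 128.
tile = bytes([255, 0, 0, 255,   0, 255, 0, 255,
              0, 0, 255, 255,   255, 255, 255, 255])
dump = bytes(128) + tile + bytes(64)
print(find_tile(dump, tile))  # 128
```

The reverse-image-search-for-models variant would swap the exact byte match for a fuzzy one - e.g. comparing feature descriptors or perceptual hashes of rendered views against a catalog - but the "locate the asset, then hand back the original data" shape is the same.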
I haven't played it, but there's a game* that feels adjacent to this whole thing of visiting other worlds, called Jazztronauts, where you steal props from random Source engine maps. The concept of stealing anything from any game speaks to me. * it's a mod for Garry's Mod
