  1. Well, I can also elaborate a bit about that. Actually the problem is a bit more complex - a virtual CPU works _exactly_ like a normal one, so if a normal one may perform an MSAA/FXAA pass, then a virtual one can do this as well, no problem. That's the whole point. So if we are talking about software emulation rendering here - we are screwed. Anti-aliasing in an emulator (a "hard mode" emulator, such as VirtualBox, VMWare and such) should work out of the box! Except that no antialiasing works in software emulation (no hardware acceleration).
I don't think you quite understand the issue. The issue isn't about performing anti-aliasing - all Turing machines can do that - the issue is figuring out WHAT data to apply AA to. The virtual CPU is just a dumb CPU that speeds through instructions; it doesn't understand any high-level concepts about the work it's doing. The virtual CPU is no help whatsoever in applying anti-aliasing, because it doesn't understand where in memory the frame buffer is or where the 3d data is stored, and it can't find the depth buffer either (which may not even exist, since older games tended to use the painter's algorithm or other techniques). It just performs calculations; it has no understanding of what it is actually doing.
Ultimately, adding AA to an emulated game (again, talking about low-level emulation) is as hard as adding AA to a native game - you need to add it in the game's code itself. That requires per-game hacks. Typically, it's *easier* to add AA if the emulator is using hardware emulation. With hardware emulation, your emulation code is already capable of deconstructing the 3d data the game uses, as it needs to construct the various buffers to send to the GPU. In software emulation, you can very easily get away with a dumb pixel write from the emulated device's memory-mapped regions and let the game's code perform all the 3d calculations in software.
However, because these calculations are happening within the game itself, your emulator has no understanding of what the data is. It cannot easily apply AA in that case.
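The "dumb pixel write" above can be sketched in a few lines. This is a toy model (all names, addresses and sizes are hypothetical, not any real emulator's): the emulated game rasterizes its own scene into a memory-mapped framebuffer region of guest RAM, and the emulator just copies those bytes out.

```python
# A minimal sketch of software-emulation rendering. The guest game rasterizes
# its own 3D scene into a memory-mapped framebuffer; the emulator only ever
# copies raw bytes from guest RAM to the host display. It never sees vertices
# or a depth buffer, so there is nothing for it to apply MSAA to.

FRAMEBUFFER_BASE = 0x0010_0000          # hypothetical guest physical address
WIDTH, HEIGHT = 320, 240                # hypothetical display mode

guest_ram = bytearray(2 * 1024 * 1024)  # the emulated machine's RAM

def guest_game_draws_frame():
    """Stand-in for the guest's own software rasterizer. The emulator only
    sees the CPU instructions this produces, never the 3D logic behind them."""
    for i in range(WIDTH * HEIGHT):
        guest_ram[FRAMEBUFFER_BASE + i] = i % 256  # opaque pixel values

def emulator_present_frame():
    """Everything the emulator can do: blit raw bytes from the memory-mapped
    region. Just pixels - no geometry, no depth, no edges to find."""
    return bytes(guest_ram[FRAMEBUFFER_BASE : FRAMEBUFFER_BASE + WIDTH * HEIGHT])

guest_game_draws_frame()
frame = emulator_present_frame()
```

The emulator's half of this is deliberately trivial - that is the whole point: all the semantic knowledge lives inside the guest's code, out of the emulator's reach.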
  2. I know this isn't exactly what you want to hear, but I'd like to elaborate on why AA in emulators is such a headache. I'm assuming you are talking about MSAA, and not FXAA or other screen-space AA techniques, which are usually a little easier to implement. Computer speed is (usually) not the issue. It doesn't matter if current computers are 10x faster, because applying AA in a low-level emulator (it's a lot easier with high-level emulators, but they aren't as accurate) usually isn't a problem of not having enough cycles, it's a problem of finding the information.
To make this a little easier to understand, let me explain exactly what a low-level emulator does. A low-level emulator takes in the original binary data that represents the game's code (the original platform's equivalent of an .exe), and then runs it through a virtual CPU. However, it has no in-depth understanding of the meaning of the data - just like the CPU in our computers, the virtual CPU in an emulator isn't really smart. It doesn't understand why it does something, it's just really fast at doing it.
MSAA works by finding the edges of the 3d shapes and sampling them at a higher rate, then averaging the samples, which makes edges look smoother. The issue is that this requires the code to have direct access to, and understanding of, where in RAM the 3d shapes are and the format they are stored in. A CPU doesn't think to itself "let's do transformations on 3d shapes", it thinks "let's do some abstract maths on abstract data representing a certain type of number". If somebody passed you a spreadsheet of binary data and told you to perform MSAA on it, you'd have no idea where to start. The emulator has the same issue. Through pretty complex code analysis you could determine what data is of a certain type (floating point numbers, usually, for 3d data), but determining what data represents 3d shapes and how to anti-alias it just isn't really feasible.
The only true way to do that would be to have a human reverse-engineer it beforehand and write game-specific hacks to add MSAA.
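The "spreadsheet of binary data" problem can be shown directly: the exact same bytes decode equally well as a vertex position, as three integers, or as opaque bytes, and nothing in the data itself says which reading the game intended.

```python
# A toy illustration of why an emulator can't find the 3D data: the same
# 12 bytes are a perfectly valid 3D vertex, or three unrelated integers,
# or just raw bytes. Code scanning RAM has no way to know which
# interpretation the game intends, so it can't tell what to anti-alias.
import struct

raw = struct.pack("<3f", 1.0, 2.5, -0.5)  # maybe a vertex position...

as_floats = struct.unpack("<3f", raw)     # (1.0, 2.5, -0.5)
as_ints = struct.unpack("<3i", raw)       # ...or three arbitrary integers
as_bytes = list(raw)                      # ...or 12 opaque bytes
```

Only the game's code "knows" that these bytes are a vertex; the virtual CPU just shuttles them through registers.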
  3. I'm firstly going to just flat-out admit that I didn't watch the entire episode - I got bored and stopped shortly into the first one. I think the thing I enjoy in your videos is the interesting things you say. The game you play doesn't particularly need to be fun or even enjoyable to watch, it just needs to provide something interesting to talk about, something that hasn't been said before. That's why 2 of my favourite episodes were Polaris Snocross and Nyet 3. You were saying and discovering things that people hadn't seen before. Same goes for The Last Stand - despite being a fairly simple flash game in the grand scheme of things (and despite not being particularly obscure), you looked at it from an interesting new perspective by critically reviewing it as a game that most people wouldn't even bother with. By contrast, one of my least favourite episodes was Wolfenstein, simply because it was a boring game that we all already know about, and nothing new or interesting could be said. The reason I'm mentioning this is because I think this dual-commentator style didn't really work, because we ultimately watch your videos for the things that are said in them. There are 2 commentators, but neither can really say something interesting that the other hasn't yet, especially in this unedited style. I think the system could work if the other person had a drastically different viewpoint, because then they could add interesting discussion to the video that didn't exist before - for example, a programmer could go into detail about some of the technical tricks games used, or an artist could talk about how the art design helps the gameplay.
  4. Usually particle physics and other cosmetic physics are completely clientside, since it doesn't really matter if 2 people's explosions look a little different. I can't speak for most of these games since I don't have much knowledge of their internal workings. Generally, physics is client-side if it doesn't affect gameplay, while if it does, it is serverside. In Garry's Mod, the dead ragdolls are completely clientside, but all the props are done serverside, because building physics-based contraptions is essentially all the game is about. In CS: GO, the ragdolls are serverside, because knowing where players died and where they were shot from is a pretty big part of the game - however, random small physics props are clientside because they can't affect movement and are purely cosmetic. In both of these games, the actual players' movements are calculated serverside to stop hacking - in CS: GO competitive matches or in Gmod RP, noclip hacks would be disastrous!
The main issue with clientside physics is that it's very easy to cheat by just modifying the client. In a game like Planetside or World of Tanks, you could make vehicles much faster and easier to handle if the server wasn't authoritative over physics. The client says "my vehicle is here" (where "here" is an invalid position like hovering up in the air or inside an object), while the server has no information to counter that. In Planetside 2, you could have heat-seeking grenades bouncing across the map if it was clientside. Perhaps hacks like these DO exist for these games, but I've not played enough of them to know.
Some games use a mix of serverside and clientside physics. For example, a while back one of my friends was considering creating a multiplayer Kerbal Space Program mod (this was before the existing MP mod, before official MP was announced, and while the developers were pretty adamant that it was impossible...).
He decided that he would have the server hold the authoritative state of a player's position in the universe, while the clients performed the intense PhysX physics calculations in a bubble around them, to handle ships colliding and bending and so on.
In Path of Exile, the desync you are hearing of came from another networking system, which is pretty dense to explain... When Path of Exile was first released, they used a networking model similar to most FPS games, which the Source engine uses - an authoritative server would hold the game state while each client would tell the server about their movements. The primary issue with this system is that the server actually lives in the future due to latency - each client sees the world in the past. To get around this, each client performs what is called clientside prediction, where it predicts what the state of the world is on the server and shows that instead.
The issue with clientside prediction is that, like all predictions, it can be wrong. Have you ever played a Source game where you were being shot at, ducked behind a wall, then died while already behind the wall? This is because the client incorrectly predicted that you'd still be alive by the time you got behind the wall, while the server finds that 1 extra bullet hit you. In FPS games this is mostly unnoticeable, but in a top-down game like this it is pretty huge. Dodging attacks becomes a game of probabilities. On the client you may dodge the attack, but the server finds it hit you anyway, or vice versa.
So Path of Exile switched to another networking system, used more in RTS games (and in League of Legends). In this system, instead of sending across the state of game entities, you just send over the inputs. There's no authoritative server in this case, usually.
This is very useful for RTS games, because instead of needing to say that 10,000 units moved and sending their new positions, you can just say that a player performed a movement command, and then every client will independently simulate the results of those inputs. Consider a game where player A tells his units to move right - instead of telling players B and C the new positions of his units, he just tells them to simulate the move-right command. This is very useful here because now every client sees the exact same state of the world, with nobody in the future or past. If a client fires an attack, every other client sees exactly where it is, and can dodge it without prediction messing them up.
Of course, as with every system, this one has a couple of disadvantages. Firstly, every computer is capped to the simulation (not graphics) FPS of the slowest computer; you can't simulate the next physics frame while another computer hasn't simulated the current one. This is what causes the slow-motion in RTS multiplayer games, even if you were sure your computer could handle it. Secondly, the system has increased latency. There's a visible delay before your commands are sent to and simulated across each client, so if you pressed "move right" there could be a delay of a few hundred ms before you actually do.
The third primary issue is the one which caused the desync. The game client needs to be perfectly deterministic. This means that any calls to rand() must be identically seeded across every client, so each client sees the same sequence of random numbers, and that all clients always call it at the same times. It means that no multithreaded race conditions can cause the state of the game world to diverge. Each client must ALWAYS do exactly the same thing when given the same input.
If anything ever diverges, you quickly get a butterfly effect where a small change causes the entire world state to diverge so much over time that each computer sees completely different things, making the game unplayable. This is called "desync", and considering that this networking model was added to Path of Exile partway through development instead of being planned from the start, it's clear why it could happen. I suspect they'll work these bugs out over time. So really, as with anything complex, the answer to everything is "it depends".
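The determinism requirement above can be sketched in a few lines (everything here is a hypothetical toy, not Path of Exile's actual code): clients share only inputs, and a per-tick checksum of the simulated state is a common way for lockstep games to detect that a desync has happened.

```python
# A minimal lockstep sketch: every client runs the same simulation over the
# same shared inputs. Clients exchange a per-tick checksum of their state;
# a single divergent input (or a mis-seeded rand()) makes the checksums
# disagree forever afterwards - the butterfly effect described above.
import random

def simulate(seed, inputs):
    rng = random.Random(seed)     # must be identically seeded on every client
    x = 0                         # toy world state: one unit's position
    checksums = []
    for cmd in inputs:            # only inputs are shared, never positions
        if cmd == "move_right":
            x += 1
        x += rng.randint(0, 3)    # e.g. a random speed or damage roll
        checksums.append(hash((x,)))  # compared between clients each tick
    return x, checksums

shared = ["move_right", "wait", "wait"]
client_a = simulate(42, shared)
client_b = simulate(42, shared)  # same seed, same inputs
client_c = simulate(42, ["move_right", "move_right", "wait"])  # one bad input

assert client_a == client_b              # determinism: perfectly in sync
assert client_a[1][0] == client_c[1][0]  # fine until the divergent tick...
assert client_a[1][1] != client_c[1][1]  # ...then the states never re-converge
```

Real games checksum far more state than one integer, but the principle is the same: the checksums are cheap to compare, and the first tick where they disagree pinpoints when the desync began.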
  5. Physics is typically server-side, otherwise every player would see a different state of the world. Some games which don't care much about physical state (like CS: GO) have client-side physics, but in games like Garry's Mod, where the physical location of items is shared with every player, the physics is serverside. Animation is usually client-side when it comes to actually performing the blending and IK and manipulating the skeleton, but the animation state machines are usually server-side (if a player is crouching, walking, or running, every other player should know about it). Of course, many other components are often at least partially serverside. Usually games will have their own networking systems instead of using plain TCP (usually UDP with some added features), since TCP is built for reliable delivery, like in a web browser, and so doesn't care much about latency. These networking systems are obviously partially serverside, and can often be middleware.
I do agree that releasing some code is better than none, but it's not really that much better. Honestly, releasing code with redacted elements could actually be counter-productive because of the added confusion it'd create in the codebase. And of course, because of software patents it can be illegal for a company to release code they own fully. If your game uses Perlin noise for procedural generation, if it uses marching cubes for voxel meshing, or if it uses Carmack's reverse, then you'd be unable to share that section of the code. Undoubtedly in most games there will be many other portions of the codebase which are riddled with patented techniques. I know what you're thinking - "so what? Just release everything else! Something is better than nothing!". That's true... but it misses the point. The "release the source code" option was listed as the last possible choice, for when nothing else is possible and they don't have the time or money to continue - but it isn't a cheap option.
To release the source requires hiring technically inclined lawyers to read your source code, it requires programmers to look through it and redact information, it requires collaboration with middleware companies to figure out how much they are willing to allow, and in reality it requires masses of effort. Releasing the code could take as much effort as making a patch would! So, the "go fuck yourself" option isn't chosen because of a selfish desire to hide the source. It's chosen because the other options could amount to "go fuck ourselves". No company wants to risk being sued, and no company wants to waste money and effort on patching a game nobody plays.
The issue stems from the decision to make online-only games in the first place, not from the difficulty of making online-only games playable after the fact. Although it's very sad that these games are gone, trying to fix it by requesting that companies release source is just hiding the wound instead of healing it. The entire concept of an online-only game is fundamentally flawed if you ever want the game to survive. I'd like to say that NONE of what I'm saying is an excuse - it's just an explanation. I don't think companies should be allowed to do this. However, companies do not do it out of malicious intent, they do it out of brutal efficiency and in the pursuit of money. When every possible option except for 4 requires spending money for no gain, they are always going to choose option 4.
I've greatly enjoyed this conversation. It's fantastic having a place to communicate where people are fair! On places like Reddit I find myself being downvoted because I say what's true instead of what people want to hear, regardless of what I actually believe from a moral standpoint. As always, I love your videos. Can't wait for the next one!
I'm not saying that doing this is impossible, I'm just stating how difficult it is. A game like Deus Ex is very old and its techniques are frankly pathetic by today's standards.
My point is more about the companies' decisions. This discussion has gone off on a bit of a tangent here... It's ALWAYS possible to reverse-engineer a game to a working state. However, I think that highly redacted source just isn't that important, ultimately. The parts that are redacted are usually the parts which are toughest to replace, which kinda defeats the point. The Deus Ex example was just about converting the models to an intermediate format which they could modify, and has nothing to do with actually replacing middleware systems.
  6. Well, you are assuming they would want to use the same exact middleware. While it IS more faithful to the original, sometimes it's simpler or even better to replace the module with something else. Let's say that the developers wanted to use Havok as their physics engine. By removing it and releasing the source code, the modders might want to be unfaithful and use another physics engine, which may be a hypothetical free iteration of Havok, or another physics engine which might be open source. Now, this is stepping into mod territory from total preservation territory, but if that's the only way of keeping at least some aspect of the game alive, I'm fine with it. Heck, if there's a community for that game after some time has gone by, they might even improve upon it with some tinkering and state-of-the-art software. That's opening a previously unmodifiable game up for total modding, and that's a pretty big stride.
Often there's no replacement for the middleware, and even if there is, converting to it can be incredibly time-consuming and difficult. Consider VPhysics, a project to replace the Havok physics of the Source engine with Bullet - although it's definitely impressive, it's still hugely buggy and causes many game-breaking issues. Other middleware just cannot reasonably be replaced. An animation middleware like Granny has its own special animation format which stores curves instead of keyframes, and converting from that could be difficult. Bink has its own video formats, and Wwise has its own audio formats. Finding converters for file formats is hard enough, but even then, ensuring that the game can correctly interface with replaced components could be practically impossible.
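The curves-versus-keyframes point can be made concrete with a toy model (purely illustrative - this is not Granny's or any real middleware's format): the best a converter can generally do is sample the curve into keyframes, and playback that lerps between those keyframes only approximates the original motion.

```python
# A toy sketch of why curve-stored animation doesn't convert cleanly to
# keyframes. The middleware stores a compact curve; a converter can only
# sample it at fixed times, and a runtime that interpolates linearly
# between the samples matches the curve exactly at the keyframes but only
# approximately in between.

def curve(t):
    """Stand-in for the middleware's compact curve representation."""
    return 3.0 * t * t - 2.0 * t + 1.0

def bake_keyframes(n):
    """Convert the curve into n evenly spaced (time, value) keyframes."""
    return [(i / (n - 1), curve(i / (n - 1))) for i in range(n)]

def play_back(keys, t):
    """Simple runtime playback: linear interpolation between keyframes."""
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)
    return keys[-1][1]

keys = bake_keyframes(5)
exact_error = abs(play_back(keys, 0.25) - curve(0.25))      # zero at a keyframe
between_error = abs(play_back(keys, 0.125) - curve(0.125))  # nonzero elsewhere
```

Sampling more densely shrinks the error but bloats the data, which is exactly the trade-off that makes faithful conversion painful.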
  7. The issue with requiring the consumer to install X, Y and Z first is that it only works when the middleware has a freely available user component. If the middleware isn't freely available to users (like an animation middleware - you can't exactly pop onto RAD Game Tools' site and just download the .dll for Granny 3D), and especially if the middleware is statically linked instead of dynamically linked, then it'd be practically impossible to legally redistribute the required data, if you can even get hold of it in the first place.
I do agree that it gives a head start to modders. However, I've found many other modders I've worked with are rather... cynical, and it actually backfires. People would rather be given nothing than be given half of what they wanted, surprisingly enough. I've worked on a Total War mod, and whenever a new modding tool was announced people would immediately point out limitations in the system and get very angry - to the extent that they'd be happier when left without any sort of developer support at all! If you release only half of what is needed, people will usually just accuse the developers of malicious intent - "you pretend to support us, but in reality you give us broken tools so our lives are harder!". It's not a particularly logical mindset, but I've seen it a lot. I wish it weren't this way, but ultimately it means that the developers either get a whole bunch of negative press for releasing tools or they get no press and people are silently angry. Just take a look at this, from a modding summit hosted by the Creative Assembly (developers of Total War): http://www.twcenter.net/forums/showthread.php?678823-Mod-Summit-2015-Assembly-Kit-for-Total-War-ATTILA Just a few posts down you can see exactly this attitude. These people are probably the minority, but they are just so vocal that it's often not worth the risk from the developer's perspective.
  8. Wow. That's pathetic... let's hope nobody is dumb enough to buy this.
  9. I think we're having a communication problem. You're literally describing the exact opposite scenario I was condoning: I was saying rip out the middleware (as in remove it from what you do distribute), then pass on the source code that your company did write for tinkerers of the dying game. Even then, it would be for noncommercial use only.
Ah, yeah, we may be having a communication issue. My thoughts were a bit mixed up when I said that - here's what I meant: let's assume you are a modder trying to bring the game back to life, and the game's developers have given you the code (but without any of the middleware). If you wished to, you could theoretically rip the middleware machine code out of the server executable and plug it into the game, which would allow the game to work (with a bit of patching up) - importantly, you wouldn't need to code a replacement for the middleware itself. However, such a practice is illegal if you wish to actually share the modified executable with other players - which means that creating a standalone modified version of the game is actually very difficult, even with the source code to the non-middleware-related parts. I was talking about the programmer's options when trying to revive the game given only the source without middleware, not the company's options for releasing the source.
Removing an online-only component is almost never impossible. However, it can be very expensive. Anyway, usually when game developers say "we are using the cloud to perform calculations", they are just talking out of their ass to try to justify online-only DRM, like with SimCity.
  10. Saying that Valve released "most of the relevant source code" is simply not true. Valve released the SDK - they released the tools pipeline to make content for the engine, and released some wrapper code to actually talk to the engine. The meat and bones of the engine, and by far the significant majority of the codebase, is still completely under wraps. Why does this matter? Because if Valve decided on a whim that distributing builds of the Source engine itself was no longer allowed (instead of being allowed freely, as now), then modders and game developers on Source would be breaking the law if they released their work, and could be sued.
The same goes for external middleware. You can't just legally rip the Havok physics engine machine code out of Source and use it in your own games... and on that matter, you can't legally rip some AI middleware out of the server software and push it into the client. Reverse engineering itself is legal for compatibility purposes, but releasing the actual results of this reverse-engineering (a modified build of the game) is often highly illegal. It's mostly down to how anal the middleware companies want to be, but if they wanted to, they could effectively make the game's source worthless. They could even try and argue that their API is their legal property and can't be shared - so then what can the game developer do? Redact all the lines which actually communicate with the middleware? That's hardly practical, and the end code would be practically unreadable.
Releasing only the relevant source isn't that helpful, even if it's possible to release it at all. It'd be like releasing Java source code in an alternate universe where nobody has a Java runtime - technically they've given you the code, but it's impossible for you to actually compile and run it. Ultimately the programmer at the end would still need to reverse engineer the codebase and/or API of the middleware they don't have.
Ultimately that's better than reverse engineering EVERYTHING, but it's not much better, really, considering the complexity of these middleware tools. I agree with you on a moral standpoint. Killing games is something that nobody would ever like to see... I just think that sometimes it's unavoidable, if the devs have dug themselves deep enough into a hole. Better planning at the start could always avoid it, but sometimes it's just not practical to release source code. Of course, better planning at the start could ensure they'd planned for the contingency of releasing source code... but now we're just back at the same issue again.
The underlying issue is that games die because the developers ultimately dug them a grave from the start. They are on life support from the moment of release until the developer decides to pull the plug, and trying to get around this is incredibly difficult even in an ideal world where you have the full source code. The real solution is for developers to plan ahead - preferably, I think, laws should be put in place to enforce this. However, the old games which have had their servers shut down, or the new games that are destined to some day have their servers shut down? They are pretty much gone, and I don't think it's really possible to save them, sadly. The only people who can properly save them are the developers, and that's only if they make the financial decision to save a dying game.
  11. I'd like to mention that option 3 for ensuring the survival of online games isn't always realistic. Many (if not most!) games utilize middleware to handle certain aspects of the game - for example, physics engines, sound engines and animation engines. Although it's perfectly legal to release the game code, usually it's not legal to release the middleware's source, if you even have it in the first place - and without that middleware, the game source is essentially neutered and useless. Releasing the source code of a game is rarely easy, even under ideal circumstances. Consider Doom 3: even though it was built on its own engine from scratch with minimal outside middleware, releasing the source was STILL a headache, because software patents meant that the lighting code (despite being independently discovered and written by John Carmack himself - Carmack's reverse) couldn't be released as-is.
I don't think that's the reason. I honestly think it was a legitimate (if perhaps misguided) attempt at art direction. Tonemapping isn't inherently some "lazy" approach to faking lighting - with some tweaking of the "true" lighting colours, you'd get a practically identical effect. The point is, the game wouldn't look any better or much different whether you used tonemapping or modified the light colours to be identical. Tonemapping is used in many games, and when it's used correctly you never even question it or realize it is there. That doesn't necessarily mean the effect is subtle, it just means it's done in a natural way. There's no flaw with the technology, it's a flaw with the art direction. You should criticize the art decision to make the world look so grey and dull, but not blame the graphics technique used to make that happen. If the tonemapping wasn't used to make it look crap, then the lighting would be used to make it look crap. I see this fallacy too often.
People will say they hate bloom and HDR, but in reality they hate the way it's applied, not the technique itself. When a technology is correctly applied, you often can't even tell it's being applied at all.
I have severe issues with this analogy. Simply put, we are entitled to play games that we paid money for. If we pay for a game, a company shouldn't be able to freely revoke access to playing it. If it's a subscription-based MMO, that's fair enough - we are paying the company for a consumable month of gameplay. If it becomes unprofitable, I'd say the developer is morally justified in pulling the plug, because ultimately they are paying money to host the servers. It's not a *good thing*, but it's not fair to force the developers to pay to host servers when they aren't breaking even. If it's a game where components are uselessly and deliberately made server-side for the sake of DLC or other purposes, then it's completely unfair, because we pay only once. We pay a higher price tag up-front because we are buying a copy of the product - not just mere access to the product. Revoking access at that point is taking away a game which we already paid for.
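The earlier tonemapping point - that the same dull look can come from the grading or from the lights themselves - can be sketched with the classic Reinhard operator, used here only as a stand-in for whatever curve the game actually applies.

```python
# A sketch of the tonemapping argument: a tonemap is just a function from
# lighting values to display values, so identical final pixels can come
# from tonemapping bright "true" lighting, or from an artist authoring
# duller lights directly. Reinhard stands in for the game's real operator.

def reinhard(hdr):
    """Classic Reinhard tonemap: compresses HDR values in [0, inf) into [0, 1)."""
    return hdr / (1.0 + hdr)

true_light = 4.0                  # bright HDR lighting value
on_screen = reinhard(true_light)  # what the player actually sees: 0.8

# An artist could instead bake the same dull look straight into the light:
pre_dulled_light = 0.8            # authored directly in display range

assert abs(on_screen - pre_dulled_light) < 1e-9
```

Either route produces the same grey pixels, which is why the grey look is an art-direction choice rather than a flaw in the technique.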
  12. Honestly, I think this is my least favourite Game Dungeon yet. It's not because it's modern-ish, but mostly because it's essentially a review of a not very interesting game, and there weren't any interesting facts or many funny parts. Compared to my favourite Game Dungeon episodes, like Polaris Snocross or Nyet 3, it just isn't as entertaining. The game is not interesting on a technical or design level, like the Snocross AI modding and such, and there's little room for humour, which at best arises from the situations the game presents, like the difficulty and style of Nyet. Wolfenstein lacks both of these. I don't think it's something bad about modern games, just more so that Wolfenstein is predictable and lacks any really interesting points of discussion. Regardless, I greatly enjoy your videos! I'm looking forward to the next one.
  13. Even if he did claim fair use, it's illegal for him to make money off Freeman's Mind, since Half-Life is legally Valve's copyright. Fair use is only applicable as long as you're not selling your product, afaik. Yup. If it was entirely original map design and scripting, then he could sell it as a standard machinima (kinda along the lines of Civil Protection, if they used different character models/textures). But could he? Freeman's Mind and Civil Protection are made using the Half-Life 2 engine, Source. Even if he doesn't use the assets, Half-Life 2 and the Source engine itself aren't free or open-source.
  14. It's really interesting to be able to explore the entire facility! Despite the couple of overlaps, Black Mesa seems surprisingly coherent - I would've imagined that the world map overall would be far less compact than it actually is.