  • I’ve found that disabling VSync in games entirely and then letting MangoHud do the limiting works a bit better. Some of that will be because I’m using Proton on Linux, which has DXVK as a translation layer. Games will be trying to limit their frames the DirectX way, whereas MangoHud is limiting them the Vulkan way and is ‘closer to the monitor’ for keeping the pace right.


  • Also, MangoHud can set fps_limit per game, which generally results in much smoother frame-pacing than most games achieve by default. That’s awesome for eg. Dark Souls / Elden Ring, which are stuttery at 60 fps but buttery at 59 for some reason, but also for random strategy games which would be just fine at 30 fps but instead have all the fans roaring to render at 144.
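    If you want to try it, it’s roughly this - a sketch, so check the MangoHud docs for your version; 59 fps is just the Elden Ring example above:

      # per-game, in the Steam launch options:
      MANGOHUD_CONFIG="fps_limit=59" mangohud %command%

      # or the same thing for everything, in ~/.config/MangoHud/MangoHud.conf:
      fps_limit=59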




  • Love Tyranny and PoE. Think Deadfire would have been an exceptional game if there were about half as much of it, but even as an epic RPG it does go on. Ten bucks for ‘three big games’ of content is a steal, though.

    It isn’t that ‘successful game gets a better-funded sequel that loses the magic to feature creep’ is exactly unheard of - it’s a tale as old as time. But Deadfire was a sales disappointment, which it probably wouldn’t have been if they’d spent only half as much making it, and so we won’t be getting a PoE3 :-(


  • Agreed. Amazing game, but that’s because most of it is excellent, so the jank is easy to ignore - not because the whole thing is polished.

    I think they made the parry-heavy emphasis of the game even more difficult to ‘read’ by having all the early enemies be very twitchy robots with difficult-to-anticipate parry timings. It becomes much easier to get the timing right once the enemies become more ‘organic’ a bit later. That’s also the point where you have some better gear and some level ups, so it’s not quite so brutal.

    Giving the early enemies slow, smooth attacks with big swings would make sense for robots, sort out the difficulty curve, and give you plenty of chances to get used to parries. They could reasonably require a lot of damage, so that ripostes would be the only way to defeat them effectively - health you could then strip from a lot of the late-game enemies, who are stupidly robust.

    Never felt like P actually has iframes on his dodge. It’s serviceable enough when the important thing is to move away from where an attack is going to land, but it’s certainly not a Dark Souls-style ‘dodge through the attack’. It’s not Sekiro’s ‘running away to tease out an attack you can punish’ either - he’s a very slow dude in comparison.




  • Not so much “remade” - the original Marathon engine was open-sourced and has been kept up-to-date for modern computers. Exact same levels, graphics and sound effects as it ever had, but obviously the resolution now is much higher than it was in the early nineties. Think my graphics card can push it at 4K 144Hz while still being in power-saving mode; it does more work rendering desktop fonts nicely.

    There’s also a port of Pathways Into Darkness onto the engine, if you want to play it? It’s a real bitch to emulate a classic Mac to get it running, but this is basically drag-and-drop. It was brutally unfair even at the time, and contains a lot of features which have not aged well and are distinctly un-fun - it is not a game that’s afraid to waste your time, put it like that. I do love the idea of it, though - the atmosphere is probably the best bit - and I’d love a modern remake.

    https://lochnits.com/aopid/



  • You are not joking. Compare a $2000 Purism Liberty with eg. a $200 HMD Fusion. The Fusion has a somewhat better screen and battery, a much better processor and camera, more RAM, the option of more storage, and NFC. It’s also designed to be easy to maintain, yet is somewhat thinner and lighter despite having a larger screen area. Are ‘made in USA’ and ‘open-source drivers’ worth paying 10x as much for a noticeably worse phone? (It’s not really ‘made in USA’ either - it’s a mix of US, Chinese and Indian parts assembled in the USA.)

    I think the people who believe a US-made iPhone would also cost $2k are kidding themselves - even with economies of scale, it must come out substantially more.


  • Yeah, mine was similar. Had some old Win95 machines from work that were getting thrown away; scavenged as much RAM as possible into one case and left Red Hat Linux downloading overnight on the company modem. Needed two boxes of floppy disks for the installer, and I joined up a 60 MB and an 80 MB hard drive using LVM to create the installation drive. It was a surprisingly functional machine - much better at networking than it ever was as a Win95 computer - but yeah, those days are long gone.
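    Roughly what the LVM part looks like with today’s tools (the incantations were a little different back then); /dev/hda and /dev/hdb standing in for the two small drives:

      pvcreate /dev/hda /dev/hdb         # mark both disks as physical volumes
      vgcreate vg0 /dev/hda /dev/hdb     # pool them into one volume group
      lvcreate -l 100%FREE -n root vg0   # one ~140 MB logical volume spanning both
      mkfs.ext2 /dev/vg0/root            # format it and install away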





  • There’s two kinds of motion blur, really: camera-based and model-based. Camera-based requires calculating one motion vector for the whole screen, which is basically free. Model-based requires projecting the motion of each vertex of the model in the projected view; one matrix multiply per vertex is not ‘expensive’ on a modern graphics card. Depth of field reads the depth buffer, which you’ll already have created as part of rendering, and then takes several ‘taps’ around each point on the screen, blurring each pixel more the further its actual distance is from the ‘focus distance’. The final image post-processing will generally process the whole screen anyway, so you’re just throwing a couple of extra steps in for the two effects.
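    In shader terms, the depth-of-field pass is roughly this shape - a GLSL sketch with made-up names (sceneTex, depthTex, focusDist), not any real engine’s implementation, and assuming a linearised depth buffer:

      #version 330 core
      uniform sampler2D sceneTex;   // the finished frame
      uniform sampler2D depthTex;   // depth buffer, already produced by rendering
      uniform float focusDist;      // distance that should stay sharp
      uniform vec2 texelSize;       // 1.0 / screen resolution
      in vec2 uv;
      out vec4 fragColor;

      void main() {
          float depth = texture(depthTex, uv).r;
          // 'Circle of confusion': blur grows as actual distance leaves focusDist.
          float coc = clamp(abs(depth - focusDist) * 8.0, 0.0, 4.0);

          // A few 'taps' around the pixel, averaged. Real implementations use
          // nicer kernels, but this is the principle.
          const vec2 offsets[4] = vec2[](vec2(1, 0), vec2(-1, 0), vec2(0, 1), vec2(0, -1));
          vec4 sum = texture(sceneTex, uv);
          for (int i = 0; i < 4; i++)
              sum += texture(sceneTex, uv + offsets[i] * texelSize * coc);
          fragColor = sum / 5.0;
      }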

    Now, what does it save you? If your engine is using TAA (temporal anti-aliasing), that’s performed by ‘twitching’ the camera a tiny amount (less than a pixel) every frame. If nothing’s moving, you can merge the last several frames to get a really high-quality anti-alias; all the detail that wouldn’t be caught with a ‘completely static’ camera gets captured, and the result looks great. But things do move; if you recalculate ‘where things were’ then you can get a reasonable idea of what colour ought to be at each pixel. Since we need to calculate all the movement vectors to do that anyway, the same info gives us the motion blur data ‘for free’ - we can add a little blur in post-processing to hide the TAA mistakes, and when implemented well(*) it looks pretty effective. It’s certainly much, much cheaper to calculate than ‘proper’ anti-aliasing like MSAA.
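    The ‘for free’ bit, sketched the same way - the velocity buffer TAA needs for reprojection is exactly what the blur samples along (again, all the names are made up):

      #version 330 core
      uniform sampler2D currentTex;   // this frame, rendered with sub-pixel jitter
      uniform sampler2D historyTex;   // the merged previous frames
      uniform sampler2D velocityTex;  // screen-space motion vectors ('where things were')
      in vec2 uv;
      out vec4 fragColor;

      void main() {
          vec2 velocity = texture(velocityTex, uv).rg;

          // TAA: fetch where this pixel was last frame, blend a little of 'now' in.
          vec4 history  = texture(historyTex, uv - velocity);
          vec4 resolved = mix(history, texture(currentTex, uv), 0.1);

          // Motion blur 'for free': a few taps back along the same vector also
          // smear over any mistakes the reprojection made.
          vec4 blurred = resolved;
          for (int i = 1; i <= 3; i++)
              blurred += texture(currentTex, uv - velocity * (float(i) / 3.0));
          fragColor = blurred / 4.0;
      }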

    (*) It is also quite easy to not implement TAA well, and earn the ire of gamers for turning everything into a blurry mess. Doom (2016) does a fantastic job of it - it’s in the engine at a low level - and I’ve never seen anyone complain about that game being blurry or smeared.

    It takes time to load high-quality textures and models from disk, and they eat into the RAM budget for each frame. Using lower-quality textures and models for distant objects greatly helps rendering speed and prevents stutter, and a bit of depth-of-field smears over the low-quality rendering.

    Now, if your graphics card greatly exceeds the design requirement (which was probably some kind of console) then you can switch these effects off and the game will look even better, which might make you question why they’re there in the first place. To help consoles look better with some ‘cinematic’ effects, is why.



  • Another fantastic project that makes gaming on Linux so much easier. It’s incredibly strong on configurability and ‘robustness’. Yes, you might have to set up all of your Wine bottles and things like that, which can be a faff, but once it’s working in Lutris, it just keeps on working in Lutris.

    Great for long-running series, too. I’ve been a big fan of the XCOM series since the Amiga days; in Lutris, it’s easy to have UFO: Enemy Unknown / Terror from the Deep running in OpenXcom, Apocalypse in DOSBox, and connected up to the Firaxis remakes in Steam. Similarly, love me a metroidvania, and I’ve got most of the 40+ Castlevania games lined up and ready to go, just a double-click away.


  • Heroic has made me start buying games on GOG again.

    I used to dual-boot “Windows for games” and “Linux for work”, and would buy from GOG in preference to Steam because I love what they do.

    Got rid of Windows years ago because it’s more of a PITA than it’s worth, and basically went 100% Steam because Proton is so good.

    Heroic is so awesome - better interface than Steam, in many ways - that GOG is back on the menu.

    Awesome interview as well, @PerfectDark@lemmy.world - a really interesting read.


  • CMake, which is kind of the universal standard build system for C++ now, has FetchContent since v3.11. Give it the URL of a repository (which can be remote, but also local, which is handy) and optionally the branch / commit ID that you’d like, and it will pull it into your build directory automatically. So yeah, you can pull anything nefarious that you’d like. I don’t think most people would question pulling and building a library from GitHub as part of the build, especially if it had a sensible name for the task at hand.
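    A minimal sketch of the shape of it (the module arrived in 3.11; the convenient FetchContent_MakeAvailable in 3.14) - ‘somelib’ and the URL are placeholders:

      cmake_minimum_required(VERSION 3.14)
      project(demo CXX)

      include(FetchContent)
      FetchContent_Declare(
        somelib
        GIT_REPOSITORY https://github.com/example/somelib.git  # or a local path
        GIT_TAG        v1.2.3  # branch / tag / commit ID - optional, but sensible
      )
      FetchContent_MakeAvailable(somelib)  # cloned into the build dir at configure time

      add_executable(app main.cpp)
      # assumes the fetched project defines a 'somelib' target:
      target_link_libraries(app PRIVATE somelib)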


  • You’ve got that a bit backwards. Integrated memory on a desktop computer is more “partitioned” than shared - there’s a chunk for the CPU and a chunk for the GPU, and it’s usually quite slow memory by the standards of graphics cards. The integrated memory on a console is completely shared, and very fast. The GPU works at its full speed, and the CPU is able to do a couple of things that are impossible to do with good performance on a desktop computer:

    • load and manipulate models which are then directly accessible by the GPU. When loading models, there’s no need to read them from disk into the CPU memory and then copy them onto the GPU - they’re just loaded and accessible.
    • manipulate the frame buffer using the CPU. Often used for tone mapping and things like that, and a nightmare for emulator writers. Something like RPCS3 emulating Dark Souls has to turn this off: a real PS3 can just read and adjust the output using the CPU with no frame-time hit, but a desktop would need to copy the frame from the GPU to main memory, adjust it, and copy it back, which would kill performance - see the sketch below.
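    The desktop version of that round-trip looks something like this in C with OpenGL - a sketch to show where the cost is, with tone_map() as a stand-in for whatever per-pixel adjustment the game wants:

      #include <GL/gl.h>
      #include <stdlib.h>

      /* Placeholder curve - the point is the copies, not the maths. */
      static unsigned char tone_map(unsigned char c) {
          return c > 235 ? 235 : c;
      }

      void cpu_tone_map_roundtrip(int w, int h, GLuint tex) {
          unsigned char *pixels = malloc((size_t)w * h * 4);

          /* 1. Stall: drag the finished frame (currently bound framebuffer)
                across the bus into main memory. */
          glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

          /* 2. Adjust on the CPU. A console skips steps 1 and 3 entirely -
                the CPU just writes through a pointer into the same unified
                memory the GPU rendered to. */
          for (size_t i = 0; i < (size_t)w * h * 4; i++)
              pixels[i] = tone_map(pixels[i]);

          /* 3. Pay the bus cost again to push the result back to the GPU. */
          glBindTexture(GL_TEXTURE_2D, tex);
          glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);

          free(pixels);
      }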