“Then” indicates time, sequence, or a causal relationship. “Than” is used with comparative adjectives to draw a comparison.

  • 0 Posts
  • 198 Comments
Joined 1 year ago
Cake day: November 12th, 2024




  • Yeah.
    Although I usually send a link directly to the post (the one relevant to what is being discussed), the things around it might change their impression. And considering that there is more political stuff than plain tech stuff almost everywhere on the internet rn, that kind of result is expected.



  • Ah well, I am not good at that.
    The best I have gotten people to say is how I “know so much” about stuff at work, and the best I can point them to is Wikipedia, StackOverflow and the like. Of course, they aren’t really interested in doing that, and their lines are probably just a way to flatter me.



  • When you’re talking about the PCIe peripheral, you’re talking about a separate dedicated graphics card or something else?

    Yes, similar to what a PCIe Graphics Card does.
    A PCIe slot is the slot on a desktop motherboard that lets you fit various expansion cards: networking cards (Ethernet, Wi-Fi and even specialised RTC stuff), sound cards, graphics cards, SATA/SAS adapters, USB adapters and all kinds of other stuff.

    I guess the main point of NPUs are that they are tiny and built in

    GPUs are also available built-in. Some of them are even tiny.
    Go 11-12 years back in time and you’ll see video processing units embedded into the motherboard instead of in the CPU package.
    Eventually, some people will want more powerful NPUs with RAM better suited to neural workloads (GPUs have their own type of RAM too); they won’t care about the NPU in the CPU package and will feel like they are uselessly paying for it. Others will not need an NPU at all and will feel the same way.

    So, much better to have NPUs be made separately in different tiers, similar to what is done with GPUs rn.

    And even external (PCIe) Graphics Cards can be thin and light instead of being a fat package. It’s usually just the (i) extra I/O ports and (ii) the cooling fins+fans that make them fat.
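One reason a single PCIe slot can host such different cards is that every PCIe function advertises a class code telling the OS what kind of device it is. A toy sketch of that idea (the base-class values are the standard PCI-SIG assignments; the helper function itself is made up for illustration):

```python
# Map a PCI base-class byte to the kind of card you might find in a
# PCIe slot. Values are the standard PCI-SIG base-class assignments;
# the helper function is hypothetical, for illustration only.
PCI_BASE_CLASSES = {
    0x01: "mass storage controller (SATA/SAS adapters)",
    0x02: "network controller (Ethernet, Wi-Fi cards)",
    0x03: "display controller (graphics cards)",
    0x04: "multimedia controller (sound cards)",
    0x0C: "serial bus controller (USB adapters)",
}

def describe_pci_class(class_code: int) -> str:
    """class_code is the full 24-bit class value, e.g. 0x030000 for a VGA device."""
    base = (class_code >> 16) & 0xFF  # top byte is the base class
    return PCI_BASE_CLASSES.get(base, "other/unknown device")

print(describe_pci_class(0x030000))  # a graphics card reports base class 0x03
```

On Linux you can see the real thing with `lspci -nn`, which prints each device alongside its class code.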



  • Well, the server ECC variant is still pretty useful for desktop workloads. Just make sure AMD always supports it in the next generations. If it’s still a DIMM, then it can be sold right away.

    GDDR7, again: if the chip has the required pins, as in GPUs, then GPU manufacturers can simply buy them, test them for a few hours maybe, and pop them into their lineups with a bit of re-routing of traces (in case the exact pinout differs). Of course you get some re-soldering damage, but there’s not much you can do about that. On the other hand, if the GDDR7 is already in GPUs, most that companies would need to do is alter the firmware a bit and sell refurbished units.

    HBM2: it seems possible to get slottable modules with HBM2. Pretty sure some industrious people in China will find a good use for them, perhaps with RISC-V processors?
    And the AI-specialised units shouldn’t be fully useless either. Remember the cancer-studies case?
    It is still useful computing ability that can be put to good use by those who know how.