• FooBarrington@lemmy.world · 9 months ago

    Edit: As an aside, it’s worth noting that the Steam Reviews metric is a tad misleading in a similar way to Rotten Tomatoes, in that it only gauges the ratio of positive reviews rather than what those reviews actually say. A universal consensus that a game is a 7/10 (if we assume 7/10 counts as positive) will appear “better” than a game where 99% of people believe it’s a 10/10 but 1% think it sucks. It’s good at predicting whether you will like a game, but bad at predicting how much.

    I’m not sure how relevant this is, since the situation you describe pretty much doesn’t happen. Like so many things in life, reviews are expected to follow a normal distribution. There are definitely counter-examples (e.g. shitstorms leading to massive downvote waves), but due to the large number of reviewers, things should average out in normal cases.
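    To make that concrete, here’s a rough simulation (just a sketch; the 7/10 thumbs-up threshold and the spread are made-up numbers, not anything Steam actually uses). If underlying opinions are roughly normal around a game’s true quality, the positive ratio rises steadily with that quality:

    ```python
    # Rough simulation of the "averaging out" claim: if reviewers' underlying
    # opinions are roughly normal around a game's true quality, the share of
    # opinions above a thumbs-up threshold rises steadily with that quality.
    # The 7/10 threshold and the spread (sd) are assumptions for illustration.
    import random

    random.seed(0)

    def positive_ratio(true_quality, sd=1.5, threshold=7, n=10_000):
        scores = [random.gauss(true_quality, sd) for _ in range(n)]
        return sum(s >= threshold for s in scores) / n

    for quality in (5, 6, 7, 8, 9):
        print(quality, round(positive_ratio(quality), 3))
    # Prints roughly 0.09, 0.25, 0.50, 0.75, 0.91, so the ratio still
    # orders games by quality in the normal case.
    ```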

    • bogdugg@sh.itjust.works · 9 months ago

      I suck at math, but if the mean is sufficiently above the “positive” threshold and there’s a low standard deviation across reviews, wouldn’t this have the problem I describe? The more certain people are about the quality of good games, the less relevant the ratio becomes, which is perhaps the opposite of what you would want.
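      Here’s a rough sketch of what I mean (the threshold and standard deviation are made-up numbers). Once the mean clears the positive threshold with a low standard deviation, the ratio saturates near 100% and stops telling good and great games apart:

      ```python
      # Sketch of the concern: with a low standard deviation, once the mean
      # clears the thumbs-up threshold the positive ratio saturates near
      # 100% and stops discriminating. Threshold and sd are made-up numbers.
      import random

      random.seed(0)

      def positive_ratio(mean, sd=0.5, threshold=7, n=100_000):
          scores = [random.gauss(mean, sd) for _ in range(n)]
          return sum(s >= threshold for s in scores) / n

      for mean in (7.5, 8.5, 9.5):
          print(mean, round(positive_ratio(mean), 3))
      # Prints roughly 0.841, 0.999, 1.0: an 8.5/10 game and a 9.5/10 game
      # become indistinguishable by ratio alone.
      ```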

      • FooBarrington@lemmy.world · 9 months ago

        I suck at math, but if the mean is sufficiently above the “positive” threshold and there’s a low standard deviation across reviews, wouldn’t this have the problem I describe?

        Since Steam reviews are only positive or negative, not on a point scale, I’m not sure how this problem would come to pass. The distribution of reviews around the mean is expected to be similar for your described 10/10 game and the 7/10 game, and since the review system itself is boolean in nature, there is no distorted result.

        The more certain people are about the quality of good games, the less relevant the ratio becomes, which is perhaps the opposite of what you would want.

        Why does the ratio become less relevant the more certain people are about the quality of good games? Again, a review is only positive or negative, with no actual score assigned. In which cases do you expect the ratio to drift away from the actually useful information?
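        To put it another way, here’s a minimal model (the approval probability is a made-up parameter). Each review is just a thumbs-up/thumbs-down draw, so with many reviewers the observed ratio converges on the true approval rate, which is the only quantity a boolean system records in the first place:

        ```python
        # Minimal boolean-review model: each review is a thumbs-up/down draw
        # with some true approval probability p (a made-up parameter). The
        # observed ratio converges on p as reviewers accumulate, and p is
        # the only quantity a boolean system records, so there is nothing
        # for the ratio to drift away from.
        import random

        random.seed(0)

        def observed_ratio(p, n_reviews):
            return sum(random.random() < p for _ in range(n_reviews)) / n_reviews

        for n in (100, 1_000, 100_000):
            print(n, round(observed_ratio(0.93, n), 3))
        # With more reviewers, the observed ratio homes in on 0.93.
        ```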