This is more “home networking” than “homelab,” but I imagine the people here might be familiar with what I’m talking about.

I’m trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc., I assume that the ISP has a number of fiber links to “the internet.” Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume that they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn’t they also support 1000 users at 100Mb upload, since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don’t see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn’t help them add additional users. Upload bandwidth doesn’t magically turn into download bandwidth.

Obviously there’s some reason for this, but I can’t think of one.

  • Arthur Besse@lemmy.ml

    Upload bandwidth doesn’t magically turn into download bandwidth

    Actually, it does. Various Cable and DSL standards involve splitting up a big (eg, measured in MHz) band of the spectrum into many small (eg, around 4 or 8 kHz wide) channels which are each used unidirectionally. By allocating more of these channels to one direction, it is possible to (literally) devote more band width - both the kinds measured in kilohertz and megabits - to one of the directions than is possible in a symmetric configuration.
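
    A rough sketch of that idea in code (the channel count, width, and per-channel rate below are invented for illustration, not taken from any actual DOCSIS/DSL profile):

    ```python
    # Illustrative only: carve a shared band into many narrow unidirectional
    # channels and see how the up/down allocation shifts each direction's capacity.
    TOTAL_CHANNELS = 1000        # e.g. a wide band split into ~4-8 kHz slices
    MBPS_PER_CHANNEL = 0.1       # assumed throughput of one narrow channel

    def capacity(down_channels: int) -> tuple[float, float]:
        up_channels = TOTAL_CHANNELS - down_channels
        return down_channels * MBPS_PER_CHANNEL, up_channels * MBPS_PER_CHANNEL

    print(capacity(500))   # symmetric split   -> (50.0, 50.0) Mbps
    print(capacity(900))   # download-weighted -> (90.0, 10.0) Mbps
    ```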

    Of course, since the combined up and down maximum throughput configured to be allowed for most plans is nowhere near the limit of what is physically available, the cynical answer that it is actually just capitalism doing value-based pricing to maximize revenue is also a correct explanation.

    • corroded@lemmy.worldOP

      You are absolutely correct; I phrased that badly. Over any kind of RF link, bandwidth is just bandwidth. I was referring more to modern Ethernet standards, all of which assume a separate link for upload and download. As far as I am aware, even bidirectional fiber links still work symmetrically, just using different wavelengths over the same fiber.

      If you have a 10GBASE-T connection, only using 5Gb in one direction doesn’t give you 15Gb in the other. It’s still 10Gb either way.

      • jdnewmil@lemmy.ca

        You are being obtuse. Fiber and cable and DSL are not “ethernet standards,” and Ethernet is not used for last-mile connections. Re-read the excellent explanation.

        • notgold@aussie.zone

          Ethernet is used last mile all the time. Fibre is just the medium. Wide area networks use many protocols; Ethernet is very common. See metropolitan area networks (MANs) for some more context.

      • poVoq@slrpnk.net

        If you have a 10GBASE-T connection, only using 5Gb in one direction doesn’t give you 15Gb in the other. It’s still 10Gb either way.

        That’s just a question of adhering to standards. The chip that does the routing internally has a total throughput and that is obviously both directions combined.

  • AmbiguousProps@lemmy.today

    Cable does this because of the inherent bandwidth restrictions it comes with, along with being able to price-gouge customers who pay for faster upload. Coax (really, DOCSIS) and the related infrastructure around it simply does not have the bandwidth to offer symmetric connections, at least for companies like Comcast (yuck). They will wait until it’s absolutely necessary to upgrade their infrastructure to support faster upload. Even then, it likely will not be symmetric, since there need to be channels for TV/phone too.

    Fiber has much more bandwidth, and the related infrastructure does too. That’s why it’s almost always symmetric.

  • litchralee@sh.itjust.works

    Historically, last-mile technologies like dial-up, DSL, satellite, and DOCSIS/cable had limitations on their uplink power. That is, the amount of energy they can use to send data upstream through the medium.

    Dial-up and DSL had to comply with rules on telephone equipment, which I believe limited end-user equipment to less power than what the phone company can put onto the wires, premised on the phone company being better positioned to identify and manage interference between different phone lines. Generally, using reduced power reduces signal-to-noise ratio, which means less theoretical and practical bandwidth available for the upstream direction.

    Cable has a similar restriction, because cable plants could not permit end-user “back feeding” of the cable system. To make cable modems work, some amount of power must be allowed to travel upstream, but too much would potentially cause interference to other customers. Hence, regulatory restrictions on upstream power. This also matched actual customer usage patterns at the time.

    Satellite is more straightforward: satellite dishes on earth are kinda tiny compared to the bus-sized satellite’s antennae. So sending RF up to space is just harder than receiving it.

    Whereas fibre has a huge amount of bandwidth, to the point that when new PON standards are written, they don’t even bother reusing the old standard’s allocated wavelength, but define new wavelengths. That way, both old and new services can operate on the fibre during the switchover period. So fibre by default allocates symmetrical bandwidth, although some PON systems might still be closer to cable’s asymmetry.

    But there’s also the backend side of things: if a major ISP only served residential customers, who predominantly have asymmetric traffic patterns, then they will likely have to pay money to peer with other ISPs, because of the disparity. Major ISPs solve this by offering services to data centers, which generally are asymmetric but tilted towards upload. By balancing residential with server customers, the ISP can obtain cheaper or even free peering with other ISPs, because symmetrical traffic would benefit both and improve the network.
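
    A back-of-the-envelope illustration of that balancing act (all of the traffic numbers here are invented):

    ```python
    # Hypothetical traffic mix: download-heavy residential users plus
    # upload-heavy data-center customers. Numbers are illustrative only.
    residential = {"down_gbps": 40.0, "up_gbps": 2.0}
    datacenter = {"down_gbps": 5.0, "up_gbps": 35.0}

    def down_to_up_ratio(customers: list[dict]) -> float:
        down = sum(c["down_gbps"] for c in customers)
        up = sum(c["up_gbps"] for c in customers)
        return down / up

    print(down_to_up_ratio([residential]))              # ~20:1, very lopsided peering
    print(down_to_up_ratio([residential, datacenter]))  # ~1.2:1, close enough to trade as peers
    ```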

    • corroded@lemmy.worldOP

      This is a really good explanation; thank you!

      There is one thing I’m having a hard time understanding, though; I’m going to use my ISP as an example. They primarily serve residential customers and small businesses. They provide VDSL connections, and there isn’t a data center anywhere nearby, so any traffic going over the link to their upstream provider is almost certainly very asymmetrical. Their consumer VDSL service is 40Mb/2Mb, and they own the phone lines (so any restriction on transmit power from the end-user is their own restriction).

      To make the math easy, assume they have 1000 customers, and they’re guaranteeing the full 40Mb even at peak times (this is obviously far from true, but it makes the numbers easy). This means that they have at least a 40Gbit link to their upstream provider. They’re using the full 40Gb on one side of the link, and only 2Gbit on the other. I’ve used plenty of fiber SFP+ modules, and I’ve never seen one that supports any kind of asymmetrical connection.
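
      Spelling out that napkin math (same illustrative numbers as above):

      ```python
      # The worked example above: 1000 subscribers on a 40/2 Mbps plan, assuming
      # every line is saturated at once (unrealistic, but it keeps the math easy).
      subscribers = 1000
      plan_down_mbps, plan_up_mbps = 40, 2

      trunk_down_gbps = subscribers * plan_down_mbps / 1000   # 40.0 Gbps needed downstream
      trunk_up_gbps = subscribers * plan_up_mbps / 1000       #  2.0 Gbps needed upstream

      # On a symmetric 40 Gbps trunk, roughly 38 Gbps of upstream capacity sits idle.
      print(trunk_down_gbps, trunk_up_gbps, trunk_down_gbps - trunk_up_gbps)
      ```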

      With this scenario, I would think that offering their customers a faster uplink would be free money. Yet for whatever reason, they don’t. I’d even be willing to buy whatever enterprise-grade equipment is on the other end of my 40/2 link to get a symmetrical 40/40; still not an option. Bonded DSL, also not an option.

      With so much unused upload bandwidth on the ISP’s part, I would think they’d have some option to upgrade the connection. The only thing I can think is that having to maintain accounts for multiple customers with different service levels costs more than selling some of their unused upload bandwidth.

      • poVoq@slrpnk.net

        The routing equipment at the distribution boxes is likely a limit, both in regard to power consumption and heat production and, especially with older equipment, the total throughput it is capable of.

      • litchralee@sh.itjust.works

        My last post didn’t substantially address smaller ISPs, and from your description, it does sound like your ISP might be a smaller operator. But essentially, on the backend, a smaller ISP won’t have the customer base to balance their traffic in both directions. But they still need to provision for peak traffic demand, and as you observed, that could mean leaving capacity on the table, err fibre. This is correct from a technical perspective.

        But now we touch on the business side of things again. The hypothetical small ISP – which I’ll call the Retail ISP, since they are the face that works with end-user residential customers – will usually contract with one or more regional ISPs in the area for IP transit. That is, upstream connectivity to the broader Internet.

        It would indeed be wasteful and expensive to obtain an upstream connection that guarantees 40 Gbps symmetric at all times. So they don’t. Instead, the Retail ISP would pursue a burstable billing contract, where they commit to specific, continual, averaged traffic rates in each direction, but have some flexibility to use more or less than that committed value.

        So even if the Retail ISP is guaranteeing each end-user at least 40 Mbps download, the Retail ISP must write up a deal with the Upstream ISP based on averages. And with, say, 1000 customers, the law of averages will hold true. So let’s say the average rates are actually 20 Gbps down/1 Gbps up.
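
        For concreteness, one common (though not universal) way such commitments are measured is 95th-percentile billing; here is a minimal sketch, assuming 5-minute samples over a month:

        ```python
        # Hypothetical 95th-percentile ("burstable") billing calculation.
        # samples_mbps would be 5-minute average throughput readings for the month.
        def percentile_95(samples_mbps: list[float]) -> float:
            ordered = sorted(samples_mbps)
            cutoff = int(len(ordered) * 0.95) - 1   # drop the top 5% of samples
            return ordered[max(cutoff, 0)]

        # Toy month: mostly ~18 Gbps with a handful of 35 Gbps bursts (values in Mbps).
        samples = [18_000.0] * 8_500 + [35_000.0] * 140
        print(percentile_95(samples))   # 18000.0 -- the bursts fall into the ignored 5%
        ```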

        To be statistically rigorous, though, I should mention that traffic estimation is a science, with applicability to everything from data-network and road-traffic planning to queuing for the bar at a music venue and managing electric-grid stability. Looking at historical data to determine a weighted average would be somewhat straightforward, but compensating for variables so that it becomes future-predictive is the stuff of statisticians with post-nominal letters.

        What I can say, though, from what I remember of calculus at uni, is that if each end-user’s traffic rate is independent of the other end-users’ (a proposition that is usually true, but not necessarily at all times of day), then the Central Limit Theorem says that the aggregate traffic across all end-users will approximate a normal distribution (aka Gaussian, or bell curve), getting closer with more users. This was a staggering result when I first learned it, because it really doesn’t matter what each user is doing; it all becomes a bell curve in the end.
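
        A quick simulation of that effect (the per-user demand values are made up; only the shape of the aggregate matters):

        ```python
        # Sum heavily skewed, independent per-user demand and watch the aggregate
        # tend toward a bell curve as the user count grows. Numbers are arbitrary.
        import random

        def aggregate_demand_mbps(n_users: int) -> float:
            # Each user: usually light usage, occasionally streaming hard (very non-normal).
            return sum(random.choice([0.5, 1.0, 2.0, 25.0]) for _ in range(n_users))

        trials = [aggregate_demand_mbps(1000) for _ in range(2_000)]
        mean = sum(trials) / len(trials)
        std = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
        print(f"mean ~{mean:.0f} Mbps, std dev ~{std:.0f} Mbps")
        # A histogram of `trials` is approximately normal, even though each
        # individual user's demand distribution is anything but.
        ```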

        The Retail ISP’s contract with the Upstream ISP probably has two parts: a circuit, and transit. The circuit is the physical line, and for the given traffic, a 50 Gbps fibre connection might be provisioned to allow lots of burstable bandwidth. But if the Retail ISP is somewhat remote, perhaps a microwave RF link could be set up, or leased from a third party. But we’ll stick with fibre, as that’s going to be symmetrical.

        As a brief aside, even though a 40 Gbps circuit would also be sufficient, sometimes the Upstream ISP’s nearby equipment doesn’t support certain speeds. If the circuit is Ethernet-based, then a 40 Gbps QSFP+ circuit is internally four 10 Gbps links bundled together on the same fibre line. But supposing the Upstream ISP normally sells 200 Gbps circuits, then 50 Gbps to the Retail ISP makes more sense, as a 200 Gbps QSFP56 circuit is internally made from four 50 Gbps lanes, which oftentimes can be broken out. The Upstream and Retail ISPs need to agree on the technical specs for the circuit, but it certainly must provide overhead beyond the averages agreed upon.
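
        The lane arithmetic in that aside, written out (the lane groupings are the standard ones; whether a given port supports breakout is an assumption that varies by device):

        ```python
        # Pluggable optics are built from parallel lanes; breakout support varies
        # by platform, so treat this purely as lane arithmetic.
        circuits = {
            "QSFP+ 40G":   {"lanes": 4, "gbps_per_lane": 10},   # 4 x 10G
            "QSFP56 200G": {"lanes": 4, "gbps_per_lane": 50},   # 4 x 50G
        }
        for name, c in circuits.items():
            total = c["lanes"] * c["gbps_per_lane"]
            print(f"{name}: {total} Gbps total, breakout candidate of {c['gbps_per_lane']} Gbps per lane")
        ```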

        And those averages are captured in the transit contract, where brief exceedances/underages are not penalized but prolonged conditions would be subject to fees or even result in new contract negotiations. The “waste” of circuit capacity (especially upload) is something both the Retail ISP (who saves money, since guaranteed 50 Gbps would cost much more) and the Upstream ISP willingly accept.

        Why? Because the Upstream ISP is also trying to balance the traffic to their upstream, to avoid fees for imbalance. So even though the Retail ISP can’t guarantee symmetric traffic to the Upstream ISP, what the Retail ISP can offer is predictability.

        If the Upstream ISP can group the Retail ISP’s traffic with a nearby data center, then that could roughly balance out, and allow them to pursue better terms with the subsequent higher tier of upstream provider.

        Now we can finally circle back to why the Retail ISP would decline to offer end-users faster upload speeds. Simply put, the Retail ISP may be aware that even if they offer higher upload, most residential customers won’t really take advantage of it, even if it were a free upgrade. This is the reality of residential Internet traffic. Indeed, the few ISPs in the USA offering residential 10 Gbps connections have to be thoroughly aware that even the most dedicated of, err, Linux ISO aficionados cannot saturate that connection for more than a few hours per month.

        But if most won’t take advantage of it, then that shouldn’t impact the Retail ISP’s burstable contract with the Upstream ISP, and so it’s a free choice, right? Well, yes, but it’s not the only consideration. The thing about offering more upload is that while most customers won’t use it, a small handful will. And maybe those customers are the type that will complain loudly if the faster upload isn’t honored. And that might hurt Retail ISP’s reputation. So rather than take that gamble through guaranteeing faster upload for residential connections, they’d prefer to just make it “best effort”, whatever that means.

        EDIT: The description above sounds a bit defeatist for people who just want faster upload, since it seems that ISPs just want to do the bare minimum and not cater to users who are self-hosting, whom ISPs believe to be a minority. So I wanted to briefly – and I’m aware that I’m long winded – describe what it would take to change that assumption.

        Essentially, existing “average joe” users would have to start uploading a lot more than they are now. With so-called cloud services, it might seem that upload should go up, if everyone’s photos are stored on remote servers. But cloud services also power major sites like Netflix, which are larger download sources. So net-net, I would guess that the residential customer’s download-to-upload ratio is growing wider, and isn’t shrinking.

        It would take a monumental change in networking or computing or consumer demand to reverse this tide. Example: a world where data sovereignty – bona fide ownership of your own data – is so paramount that everyone and their mother has a social-media server at home that mutually relays and amplifies viral content. That is to say, self-hosting and upload amplification.

  • Max-P@lemmy.max-p.me

    Apart from the technical reasons already mentioned, before things like Twitch and TikTok and Instagram were a thing, people mostly downloaded content and very rarely uploaded much. So it made sense for the ISPs to allocate more downstream channels and advertise much higher download speeds, which is what everyone cared about. Especially with DSL and aging copper lines, it kind of tops out at 40-50 Mbps for most people when they’re lucky (even though VDSL2-Vplus can technically go up to 300/100). And if you’re shoving IPTV onto that line, 25/25 is much less desirable for the average consumer than, say, 45/5.
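
    To make that last trade-off concrete (a toy split of a ~50 Mbps line with an assumed ~15 Mbps IPTV stream; not any particular VDSL profile):

    ```python
    # Toy split of a ~50 Mbps copper line between download and upload, with an
    # assumed IPTV stream riding on the downstream side. All numbers illustrative.
    LINE_TOTAL_MBPS = 50
    IPTV_MBPS = 15

    for down, up in [(25, 25), (45, 5)]:
        assert down + up <= LINE_TOTAL_MBPS
        headroom = down - IPTV_MBPS
        print(f"{down}/{up}: {headroom} Mbps of download left while IPTV is on")
    # 25/25 leaves 10 Mbps of download headroom; 45/5 leaves 30 Mbps, which the
    # average household notices far more than the extra upload.
    ```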

    And as others have said, it’s much easier for the ISP to throw more power on your lines to sustain faster speeds, so it just kind of happened that it was convenient for everyone to do it that way.

    It also has the side effect of heavily discouraging hosting servers at home, and it reduces the amount of bandwidth used by torrenting and the like.

  • BearOfaTime@lemm.ee

    There’s a fixed amount of available bandwidth (signaling) on a given connection, regardless of directionality.

    • Markaos@lemmy.one

      That really depends on the technology used. For example, all modern Ethernet standards (which includes both copper and fiber optic) are full duplex, meaning they can provide the full bandwidth in both directions at once. So a gigabit Ethernet link can do a gigabit in one direction AND a gigabit in the other direction at the same time (but not two gigabits in one direction).

    • corroded@lemmy.worldOP

      This is only true when you have a single transmission medium and a fixed band. Cable internet is a great example; you only have a few MHz of bandwidth to be used for data transmission, in any direction; the rest is used up by TV channels and whatever else. WiFi is also like this; you may have full-duplex communications, but you only have a very small portion of the 2.4 GHz or 5 GHz band that your WiFi router can use.

      Ethernet is not like this. You have two independent transmission lines; each operates in one direction, and each is completely isolated from any other signals outside the transmitter and receiver. If your Ethernet hardware negotiates a 10Gb connection, you have 10Gb in one direction and 10Gb in the other. Because the transmission lines are separate, saturating one has absolutely no effect on the other.

      • poVoq@slrpnk.net

        This is only the theory. In the end there is still a chip doing the routing, and it has a total throughput it is capable of regardless of the direction.

  • poVoq@slrpnk.net

    A factor I noticed here with my fiber ISP that hasn’t been mentioned: total bandwidth of the router that comes with the contract.

    While this is finally changing now, the cheap SoCs that were used for building these mass-produced routers topped out at about 1.5 Gbit total throughput.

    So to avoid people complaining about false advertising while still selling “1 Gbit” fiber, the maximum they are offering is a 1000/400 Mbit connection.
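
    The arithmetic behind that cap (the ~1.5 Gbit figure is from above; the rest is illustrative):

    ```python
    # If the bundled router's SoC can only move ~1.5 Gbit/s in total (both
    # directions combined), the advertised up/down split has to fit inside it.
    SOC_TOTAL_MBPS = 1500

    def fits(down_mbps: int, up_mbps: int) -> bool:
        return down_mbps + up_mbps <= SOC_TOTAL_MBPS

    print(fits(1000, 400))    # True  -> sellable as "1 Gbit" without complaints
    print(fits(1000, 1000))   # False -> symmetric gigabit would exceed the SoC
    ```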

    • corroded@lemmy.worldOP

      I thought I spelled something wrong; I didn’t, and now I’m lost. Aren’t most non-fiber connections asymmetrical connections?

  • IsoKiero@sopuli.xyz

    Part of it is because the technology, especially a decade or so ago, had restrictions. Like ADSL, which often (if not always) couldn’t support higher upload speeds due to the end-user hardware; and the same goes for 4G/5G today, where your cellphone just doesn’t have the power to transmit as fast or as far as the tower access point.

    But with wired connections, especially with fibre/coax, that doesn’t apply and money comes into play. ISPs pay for the bandwidth to the ‘next step’ on the network. Your ‘last mile’ ISP buys some amount of traffic from the ‘state-wide operator’ (kind of; it depends heavily on where you live, but the analogy should work anyway), and that’s where the “upload” and “download” traffic starts to play a part. I’m not an expert by any stretch here, so take this with a spoonful of salt, but the traffic inside your ISP’s network and going through their hardware doesn’t cost ‘anything’ (electricity for the switches/routers and their maintenance is excluded as a cost of doing business), but once you push an additional 10Gbps to the neighboring ISP, it requires resources to manage that.

    And that (at least here) is where the asymmetric connection plays a part. Let’s say that you have a 1Gbps connection to youtube/netflix/whatever. The original source needs to pay the network for the bandwidth for your stream to go through in order to give a decent user experience. But the traffic from your ISP to the network is far less; a blunt analogy would be that your computer sends a request to the network saying ‘show me the latest Mr. Beast video’ and the youtube server says ‘sure, here’s a few gigabits of video’.

    Now, everyone pays for the ‘next step’ connection by the actual amount of data consumed (as their hardware needs to have the capacity to take the load). For your generic home-user profile, the amount downloaded (and going through your network) is vastly bigger than the traffic going out of your network. That way your last-mile ISP can negotiate with the ‘upstream’ operator so that there is capacity to take 10Gbps in (which is essentially free once the hardware is purchased) while you only send 1Gbps out, so the ‘upstream’ operator needs to have a lot less capacity going through their network ‘the other way’.
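
    Put roughly into numbers (a toy model assuming the upstream provisions and prices each direction’s peak capacity separately; the prices and volumes are invented):

    ```python
    # Toy model of why selling asymmetric plans lines up with asymmetric
    # upstream costs. Prices and traffic volumes are invented for illustration.
    PRICE_PER_GBPS = 100   # assumed monthly cost per Gbps of upstream capacity

    def upstream_cost(peak_down_gbps: float, peak_up_gbps: float) -> float:
        # Assume each direction must be provisioned (and paid for) separately.
        return (peak_down_gbps + peak_up_gbps) * PRICE_PER_GBPS

    print(upstream_cost(10, 1))    # asymmetric plans -> 1100 "credits"/month
    print(upstream_cost(10, 10))   # symmetric plans  -> 2000 "credits"/month
    # If most customers never fill the upstream anyway, the extra capacity is pure cost.
    ```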

    So, as the link speed and the amount of traffic are billed separately, it’s way more profitable to offer 1Gbps down and 100Mbps up to the home user. All of this is of course a gross simplification, and in the real world things are vastly more complex, with caching servers, multiple connections to other networks, and so on. But at the end of the day, every bit you transfer has a price, and if you mostly sink in the data your users want while the data your users push upstream is significantly less, there’s money to be made in that imbalance, and that’s why your connection might be asymmetric.