We (A&A) sell gigabit services, as both Ethernet and FTTP. We see other ISPs selling 950M or 900M - why? Will I get a gigabit?
What is a gigabit anyway?
A gigabit speed means one billion bits of data per second, simples...
There are a couple of gotchas to start with - firstly a gigabit is 1,000,000,000 bits per second, it is not 2³⁰ (i.e. 1,073,741,824) bits per second. A lot of files on computers are measured in gibibytes or mebibytes, not gigabytes or megabytes. The second gotcha is that this is bits, not bytes. There are 8 bits in a byte, so a gigabit is 125,000,000 bytes per second.
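If you want to see that arithmetic spelled out, here is a quick sketch in Python (nothing more than the numbers above):

```python
# The two gotchas: a gigabit is a decimal billion bits, not 2^30 bits,
# and it is bits, not bytes.
gigabit = 1_000_000_000    # SI gigabit, bits per second
gibibit = 2**30            # 1,073,741,824 bits - not the same thing
print(f"{gibibit - gigabit:,} bits difference")  # 73,741,824 (about 7%)
print(f"{gigabit // 8:,} bytes per second")      # 125,000,000
```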
So where exactly are we measuring that speed?
The answer is that when packets of data are sent to you, i.e. to your equipment in your home or office, the bits of each packet arrive at one billion bits per second for the duration of that packet.
That sounds like a cop out, what do we mean by that exactly?
Well, that is how it works: packets of data are sent. You don’t actually get a “file” in one go - it arrives as packets, there can be lots of other packets for other reasons, and there can even be gaps between the packets.
Why gaps?
The main reason for gaps between the packets is that the Internet as a whole, and much of the infrastructure on the way to you, is a shared service. You are sharing infrastructure with other people. How much is shared, and where, depends on the service you have and where you are transferring data from. An un-contended Ethernet service from us means we can send back-to-back packets at a gigabit from our data centre to you and get that true full gigabit speed with no gaps - yay! But even then, our equipment and the rest of the Internet are shared with other people. For services like FTTP, some of the infrastructure between us and you, and even between the exchange and you, is shared - indeed BT basically only guarantee about 20% of the download speed during their busiest period.
Being shared does not necessarily mean you don’t get the full gigabit though - that happens when there is contention, and lots of the Internet is built with spare capacity at all times. We aim for A&A never to be the bottleneck, for example, so yes, most of the time, if you have a gigabit service you can expect a gigabit throughput.
Other stuff in packets?
Even when there is no contention and packets are coming back to back with no gaps at that full gigabit speed, that does not mean you can transfer a file at a gigabit speed (i.e. 125MB/s). But why?
The answer is that packets don’t just contain the data in your file. Even a full sized packet carrying, say, 1440 bytes of data from your file will have extra bits: typically 20 bytes of TCP, 40 bytes of IP, 26 bytes of Ethernet with a VLAN tag, and 8 bytes of PPPoE, all of which take up space in that packet. That means over 5% is not part of the file transfer even with full sized packets. That is one reason services are sold as 950M: not because the actual service is not a gigabit (it is), but because the way you are using it (Internet access) has overheads.
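To make that concrete, here is a rough sketch of the sums, using the header sizes quoted above (your actual overheads will vary with packet size and protocols):

```python
# Overhead per full sized packet, using the figures in the text.
payload = 1440                # bytes of file data per packet
overhead = 20 + 40 + 26 + 8   # TCP + IP + Ethernet/VLAN + PPPoE = 94 bytes
frame = payload + overhead    # 1534 bytes on the wire
print(f"{overhead / frame:.1%} of the wire is not file data")      # ~6.1%
print(f"{1e9 / 8 * payload / frame / 1e6:.1f} MB/s of file data")  # ~117.3
```

So even a perfect gigabit, used for a TCP file transfer over PPPoE, tops out well short of 125MB/s.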
What is special about gigabit?
All of what we have said is true for any speed of Internet connection: contention and overheads mean the speed of a file transfer does not look the same as the underlying speed of the bits on the wire, which is what you are buying. So what makes gigabit special?
Well, a gigabit is fast. Indeed, with even just tens of milliseconds of latency, it is way faster than TCP/IP was originally designed for (there is a worked example of this just after the list below). But let’s look at a few of the reasons a gigabit can be a challenge.
- Your computer may only have a gigabit port, and so may your switches. That alone means you are sharing your own gigabit infrastructure with other traffic on your own network, which can have some impact. Indeed, actually achieving gigabit throughput can be a challenge for many computers, depending on their age. If you have a modern computer, you should be able to get pretty close though.
- Remember WiFi is rarely sensible at these speeds. Some specifications of modern WiFi do claim to handle a gigabit over short distances, but achieving it is rare. If you want to get close you need wired Ethernet.
- Your router / firewall may not be up to it. Even the firewalls we make, such as the FireBrick FB2900 cannot quite do a gigabit, sorry. We have new designs planned which can, but component shortages mean they are a year or two off. Just because a router or firewall has gigabit ports does not mean it can handle a gigabit throughput - so check the specification.
- Whilst much of the Internet backbone is very fast fibres, some even terabits in speed, lots of bits at the edges are slower. Gigabit end user connections are relatively new, and so some companies serving files to you may have lots of servers, and ports, but those ports could be only gigabit themselves. So even though they may have capacity for thousands of customers downloading at once, if you are one of 2 people on a server port that is only a gigabit, you may only get half that speed. The good news is this is changing, and lots of kit is faster - notably the big content delivery networks which are used for a lot of downloads (like software updates and so on) are a lot faster.
- Even where there are big servers with very fast ports, gigabit speed end users can hog all of the download capacity, and so servers may well limit individual transfer rates to make it fair. Having a gigabit means you can easily “suck” ten times as much as most other people.
- A lot of speed checkers were designed for ADSL and VDSL and simply don’t have the capacity to measure gigabit speeds. Some do now, because there are gigabit users, so check whether a tester claims to handle gigabit before trusting its results.
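On the TCP point above, a minimal sketch of the bandwidth-delay product shows why latency bites at these speeds. The 64 KiB figure is the classic TCP window limit before window scaling; modern stacks scale the window, but something has to keep that much data in flight:

```python
# Bytes that must be "in flight" to keep a gigabit pipe full.
rate = 1_000_000_000 / 8        # bytes per second at a gigabit
classic_window = 64 * 1024      # the pre-window-scaling TCP limit
for rtt_ms in (10, 20, 50):
    bdp = rate * rtt_ms / 1000  # bandwidth-delay product in bytes
    print(f"{rtt_ms} ms RTT: {bdp / 1024:.0f} KiB in flight needed "
          f"vs {classic_window // 1024} KiB classic window")
```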
What can I actually expect?
YMMV, as they say (your mileage may vary). I personally have a gigabit fibre but use a FireBrick FB2900, and in my case, most of the time, the FB2900 is the bottleneck. Even so, I have seen sustained downloads of a game software update at 850Mb/s. Some people with faster equipment than mine do get much higher speeds. A gigabit (allowing for those packet overheads) is possible, but it will depend on a lot of factors, as you can see.
Ah, the concept of whether a bit/kilobit/megabit is worked out on a base-10 calculation or a base-2 calculation. Have struggled with this for 15 years. I want to use the proper, correct calculation. Most people don't care and want to use the base-10 calculation.
CityFibre - any thoughts on their wholesale product? They offer it as a symmetric product, so a 500 Mbps line is 500 Mbps download AND 500 Mbps upload.
Whether a router/network-device with 1 Gbps ports can actually handle 1 Gbps throughput - oh, yes. Had this debate with a supposed IT expert who was taking over one of my client sites. Gave him a detailed and friendly briefing on the whole infrastructure, and pointed out that the router was the oldest piece of hardware and probably top of his list for replacement, as it wouldn't handle any future data throughput speeds. He wouldn't believe me and categorically stated that a router with 1 Gbps ports can transfer 1 Gbps of data. Pointed him to the router manufacturer's specifications showing that the router's CPU, RAM, etc. cannot handle 1 Gbps. He ignored me, and I wish him good luck when he tries to put 1 Gbps through that router.
Do any domestic premises actually need 1 Gbps in 2022? I haven't found one yet. I have heard of one house which is maxing out a 1 Gbps leased line and is shortly to upgrade to a 10 Gbps leased line, but this is a multi-million pound country estate with large numbers of employees and staff on site.
1 Gbps to the residence is nice to have, of course, and if budget allows then why not have it? Have I got 1 Gbps in my own home? Yes, of course! In fact I have two separate 1 Gbps lines. Do I need either of them? No.
What bandwidth is actually needed for residential or SME? I suggest 100 Mbps at present. Most of our business clients (max 20 users in a building) have 100 Mbps lines, or thereabouts, and are perfectly happy with zero complaints. Indeed, checking their data transfer throughput, most of them use 10 or 20 Mbps maximum.
Do any of my comments mean that we should not innovate and should not push ahead with bigger and better bandwidth? - of course not. We should always strive for the best.
Data transmission has been base-10 prefixed since the first kilobit was transferred on a wire, but memory and storage have used the same prefixes to mean base-2 for even longer.
And they're both correct.
Yes, it's confusing to noobs and whiners, no I don't care. I'm not using prefixes as bloody stupid as mebi, gibi, and kibi.
"Data transmission has been base-10 prefixed since the first kilobit was transfered on a wire but memory and storage have used the same prefixes to mean base-2 for even longer."
Have you got a definitive reference for that? As it's an argument I struggle to win when trying to calculate theoretical data transfer times.
Early on, data transmission was over cables, which used variations of physical signals to transmit the data - a 56Kbps modem really was a (theoretical) 56,000 bits per second. I always assumed that, because it was done by people with a science/radio background, they'd use 1000, because that's what that discipline does.
Memory... well, when you're programming it makes little sense to use base 10. The chips are matrices of switches that naturally end up as powers of 2. Maybe not so much these days, but back in the day, if your array didn't align to 16 byte paragraph boundaries you would get weird effects at the edges, so you routinely made things 1024 bytes long etc. because you didn't want to bugger up the alignment. So that's how programmers learn to think (I mostly still do that with arrays even today, where there's no performance hit).
Disks have had a bit of a journey. They're based on power-of-two sector sizes (128/512/etc.), but the number of sectors was always 'how many can we fit in one revolution of this spinny thing', and similar for tracks, so there was never a standard. Originally they were quoted as capacities in powers of 2 (occasionally still are), but then someone noticed they sound bigger if you use powers of 10. Then, as they got bigger and more complex - a modern SSD reserves part of the disk for remapping, for example - it all becomes a bit handwavey how big they actually are.
Anyone who's tried to replace a disk in a RAID will be familiar with one 1TB disk not having the same number of sectors as another 1TB disk - but they're both 1TB... ish.
Now if you want to rage at suffixes, rage at how a 4k TV doesn't have 4k anything.
Yeh, it was really only memory that made any sense being base-2 based, as it has address lines. Some disks even combined base 2 and base 10 to make their size (the 1.44MB floppy, for example, is 1440 × 1024 bytes).
Of course kilo, mega, etc. all have long-standing meanings, as well as legally defined meanings as SI prefixes (base 10), which is why we have had, for a very long time, kibi, mebi, etc. to mean the base-2 magnitudes.
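For anyone following along at home, the practical difference is easy to see (a sketch, comparing a "1 TB" disk as sold with what a binary-reporting OS shows):

```python
tb = 10**12     # "1 TB" as sold (SI, base 10)
tib = 2**40     # 1 TiB (binary)
print(f"1 TB is {tb / tib:.1%} of a TiB")  # ~90.9%
print(f"1 TB = {tb / 2**30:.0f} GiB")      # ~931 GiB as the OS may report it
```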
Most routers will only give the quoted throughput with full size IP packets; start sending 56 byte packets through and watch the throughput drop off dramatically.
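A quick illustration of why (ignoring framing overheads, so the numbers are approximate): at the same bit rate, small packets mean far more packets per second, and per-packet processing cost is usually what limits a router:

```python
rate = 1_000_000_000           # bits per second
for size in (1500, 56):        # bytes per packet, as in the comment above
    pps = rate / (size * 8)
    print(f"{size} byte packets: {pps:,.0f} packets/second")
# 1500 byte packets: ~83,333/s; 56 byte packets: ~2,232,143/s
```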
To make matters worse, most consumer router manufacturers don't even tell you what NAT speed they can do. Or they give a generic value for a whole range of routers, which is clearly not accurate, as those routers have different SoCs with different numbers of cores and clock rates.
One of the main reasons I moved to x86 PCs as routers was that I was fed up of buying a router and not knowing what NAT throughput it was actually capable of, especially once you start turning on things like QoS or using WiFi.
Modern consumer routers are even worse thanks to hardware NAT offloading, with no clarity at all about what you can and can't enable before that offload gets turned off.
Why they can't implement a function in the UI that warns you hardware NAT will be turned off and gives you a rough idea of maximum throughput once that happens, I do not know. I'd argue the UI should clearly state an estimated NAT speed for your current configuration, period, though quite how they would calculate that I do not know - especially as it could vary depending on whether you are loading the CPU with WiFi traffic too.
Then your other problem is, your average user doesn't even know what NAT is.
Even on x86 it's a problem, as I've not found a good way to test things like PPP throughput. I tried setting up rp-pppoe on a fast Linux box, but it didn't seem to handle above about 300Mbit, which is well below what my pfSense box should do, so presumably an issue with rp-pppoe.