Background
Some businesses choose to deploy their systems across both Azure China and Azure Global. For these systems to function effectively, it is crucial that their communication over the internet performs well. To assess the current state of the network between Azure China and Azure Global, I conducted a series of tests and would like to share the results.
Targeted:
- Azure Virtual Machines with the B1s size
Not Targeted:
- Azure Virtual Machines with accelerated networking (typically B12ms, B16ms, B20ms, or F-series)
- Azure ExpressRoute (primarily an option for large enterprises)
The goal of these tests is not to achieve the highest possible network speed or to promote Azure. This is why I intentionally avoided using high-end VM and network configurations. My aim is to provide a realistic representation of what individuals and small businesses in China typically experience.
Test Method
I use iPerf3 to perform the tests. The method is fully explained in my previous blog post "Test Network Speed Between Azure VMs Using iPerf".
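In short: one VM runs iPerf3 in server mode, and the other connects to it as a client. A minimal sketch (the server IP is a placeholder):

# On the target (server) VM: listen for tests on TCP port 5201
iperf3 -s -p 5201

# On the source (client) VM: run a 10-second TCP test against the server
iperf3 -c <server-public-ip> -p 5201 -t 10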
- VM configuration
- System: Ubuntu Server 22.04 LTS
- Size: B1s
- Disk: Standard SSD
- NSG: Allow inbound TCP port 5201
- Azure China Regions used
- China East 2
- China North 2
- Azure Global Regions used
- East Asia
- Southeast Asia
- Japan East
- Japan West
- Korea Central
- West US
- East US
- Australia East
- West Europe
- 3 test runs for each VM pair from Azure China to Azure Global (see the automation sketch after this list)
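To keep the three runs per pair consistent, they can be scripted. A minimal sketch, assuming iperf3 is already listening on the target VM and jq is installed on the client; the IP address below is a documentation placeholder:

#!/usr/bin/env bash
# Run 3 iperf3 tests against the target and print each sender bitrate in Mbps
TARGET="203.0.113.10"   # placeholder: the target VM's public IP
for i in 1 2 3; do
  # -J emits JSON; .end.sum_sent.bits_per_second is the sender-side average
  iperf3 -c "$TARGET" -p 5201 -t 10 -J \
    | jq '.end.sum_sent.bits_per_second / 1000000'
  sleep 5   # brief pause between runs
done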
Cloud-init content
#cloud-config
runcmd:
- apt update && apt upgrade -y
- apt install iperf3 -y
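For reference, this is roughly how such a VM could be created with the Azure CLI, passing the cloud-init file via --custom-data. The resource group and VM names below are placeholders, not the ones I actually used:

# Create a B1s Ubuntu 22.04 VM with a Standard SSD OS disk,
# injecting the cloud-init file above as custom data
az vm create \
  --resource-group rg-iperf-test \
  --name vm-iperf-client \
  --image Ubuntu2204 \
  --size Standard_B1s \
  --storage-sku StandardSSD_LRS \
  --custom-data cloud-init.yaml \
  --generate-ssh-keys

# Allow inbound traffic on port 5201 (adds an NSG rule)
az vm open-port --resource-group rg-iperf-test --name vm-iperf-client --port 5201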
Ubuntu Server uses the following TCP congestion control settings:
edi@****:~$ sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic bbr
edi@****:~$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = bbr
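As the output shows, BBR was already the default on these VMs. If your image defaults to cubic instead, you can switch; a sketch, assuming the tcp_bbr kernel module is available:

# Switch the congestion control algorithm to BBR on the running system
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist the setting across reboots
echo "net.ipv4.tcp_congestion_control = bbr" | sudo tee /etc/sysctl.d/99-bbr.conf
sudo sysctl --system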
Keep in mind that these tests only reflect the state of the network in February 2023. Microsoft could change the network configuration at any time, so treat these results as a reference only and run your own tests whenever you need current numbers.
About Azure's Network Bandwidth Limit
You can find the expected network bandwidth for various VM sizes here. Most series have a table with networking specifications in the last column, titled "Max NICs / Expected network performance (Mbps)".
However, the B series, including the B1s I used in these tests, has no published network bandwidth limit. According to the Microsoft product team, the B series does not provide a consistent level of network performance, which is why they don't publish bandwidth limit information for it. This is also one of the reasons why I am writing this post.
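To check what Azure publishes for a given size yourself, you can list the SKU's capabilities with the Azure CLI; whether a bandwidth-related entry appears depends on the series. A sketch (the region is just an example):

# List the published capabilities of the Standard_B1s SKU in one region
az vm list-skus --location eastasia --size Standard_B1s \
  --query "[0].capabilities" --output table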
Test Example
I follow the same process for each datacenter pair. To keep this post short and clean, I do not post every test run's output here; you can find the aggregated results in the "Conclusion" section of this post.
Datacenters: China East 2 - East Asia
Note: Sensitive host name and IP address information is masked as `*`.
Test run #1
edi@****:~$ iperf3 -c *.*.*.* -p 5201 -t 10
Connecting to host *.*.*.*, port 5201
[ 5] local 10.1.0.4 port 54568 connected to *.*.*.* port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 68.0 MBytes 571 Mbits/sec 0 6.04 MBytes
[ 5] 1.00-2.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
[ 5] 2.00-3.00 sec 86.2 MBytes 723 Mbits/sec 0 6.04 MBytes
[ 5] 3.00-4.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
[ 5] 4.00-5.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
[ 5] 5.00-6.00 sec 86.2 MBytes 723 Mbits/sec 0 6.04 MBytes
[ 5] 6.00-7.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
[ 5] 7.00-8.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
[ 5] 8.00-9.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
[ 5] 9.00-10.00 sec 87.5 MBytes 734 Mbits/sec 0 6.04 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 853 MBytes 716 Mbits/sec 0 sender
[ 5] 0.00-10.06 sec 852 MBytes 711 Mbits/sec receiver
Test run #2
edi@****:~$ iperf3 -c *.*.*.* -p 5201 -t 10
Connecting to host *.*.*.*, port 5201
[ 5] local 10.1.0.4 port 54568 connected to *.*.*.* port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 67.1 MBytes 563 Mbits/sec 0 6.01 MBytes
[ 5] 1.00-2.00 sec 86.2 MBytes 723 Mbits/sec 0 6.01 MBytes
[ 5] 2.00-3.00 sec 85.0 MBytes 713 Mbits/sec 0 6.01 MBytes
[ 5] 3.00-4.00 sec 85.0 MBytes 713 Mbits/sec 0 6.01 MBytes
[ 5] 4.00-5.00 sec 86.2 MBytes 724 Mbits/sec 0 6.01 MBytes
[ 5] 5.00-6.00 sec 86.2 MBytes 723 Mbits/sec 0 6.01 MBytes
[ 5] 6.00-7.00 sec 85.0 MBytes 713 Mbits/sec 0 6.01 MBytes
[ 5] 7.00-8.00 sec 85.0 MBytes 713 Mbits/sec 0 6.01 MBytes
[ 5] 8.00-9.00 sec 85.0 MBytes 713 Mbits/sec 0 6.01 MBytes
[ 5] 9.00-10.00 sec 85.0 MBytes 713 Mbits/sec 0 6.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 836 MBytes 701 Mbits/sec 0 sender
[ 5] 0.00-10.04 sec 835 MBytes 698 Mbits/sec receiver
Test run #3
edi@****:~$ iperf3 -c *.*.*.* -p 5201 -t 10
Connecting to host *.*.*.*, port 5201
[ 5] local 10.1.0.4 port 54568 connected to *.*.*.* port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 66.8 MBytes 560 Mbits/sec 0 6.06 MBytes
[ 5] 1.00-2.00 sec 85.0 MBytes 713 Mbits/sec 0 6.06 MBytes
[ 5] 2.00-3.00 sec 85.0 MBytes 713 Mbits/sec 0 6.06 MBytes
[ 5] 3.00-4.00 sec 83.8 MBytes 703 Mbits/sec 0 6.06 MBytes
[ 5] 4.00-5.00 sec 85.0 MBytes 713 Mbits/sec 0 6.06 MBytes
[ 5] 5.00-6.00 sec 83.8 MBytes 703 Mbits/sec 0 6.06 MBytes
[ 5] 6.00-7.00 sec 85.0 MBytes 713 Mbits/sec 0 6.06 MBytes
[ 5] 7.00-8.00 sec 83.8 MBytes 703 Mbits/sec 0 6.06 MBytes
[ 5] 8.00-9.00 sec 83.8 MBytes 703 Mbits/sec 0 6.06 MBytes
[ 5] 9.00-10.00 sec 85.0 MBytes 713 Mbits/sec 0 6.06 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 827 MBytes 694 Mbits/sec 0 sender
[ 5] 0.00-10.04 sec 827 MBytes 690 Mbits/sec receiver
Average: 702 Mbps
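For what it's worth, 702 Mbps appears to be the mean of the sender and receiver bitrates across the three runs: (716 + 711 + 701 + 698 + 694 + 690) / 6 ≈ 701.7, which rounds to 702 Mbps.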
Conclusion
- As of 2/25/2023, the network speeds for B1s VMs between a few datacenters in Azure China and Azure Global are listed below.
- Azure does not provide official bandwidth limit information for the B1s; based on these results, the limit appears to be around 1 Gbps.
- The fastest connection is between China East 2 in Azure China and East Asia in Azure Global.
| Source location | Target location | Average speed |
| --- | --- | --- |
| China East 2 | East Asia | 702 Mbps |
| China East 2 | Southeast Asia | 357 Mbps |
| China East 2 | Japan East | 272 Mbps |
| China East 2 | Japan West | 275 Mbps |
| China East 2 | Korea Central | 297 Mbps |
| China East 2 | West US | 125 Mbps |
| China East 2 | East US | 65 Mbps |
| China East 2 | Australia East | 112 Mbps |
| China East 2 | West Europe | 73 Mbps |
| China North 2 | East Asia | 564 Mbps |
| China North 2 | Southeast Asia | 276 Mbps |
| China North 2 | Japan East | 268 Mbps |
| China North 2 | Japan West | 261 Mbps |
| China North 2 | Korea Central | 296 Mbps |
| China North 2 | West US | 108 Mbps |
| China North 2 | East US | 78 Mbps |
| China North 2 | Australia East | 126 Mbps |
| China North 2 | West Europe | 86 Mbps |