
SCP performance with ssm-agent

Apr 3rd 2020 · 2 min read


At globaldatanet we love Amazon's solution for using SSH over AWS SSM. It removes the need for an SSH bastion host and lets us control access with AWS IAM. Check our blog post if you want to know how to set this up.
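For reference, SSH over SSM is typically enabled with a ProxyCommand entry in ~/.ssh/config, as documented by AWS (our earlier post covers the full setup; instance IDs and AWS CLI profiles will differ per environment):

```
# ~/.ssh/config
# Route ssh to instance IDs (i-*) and managed instances (mi-*) through SSM
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With this entry in place, scp to an instance ID (as in the transcripts below) is tunneled transparently through the SSM data channel instead of a direct TCP connection.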

But recently one of our customers reported slow file copies made with scp. That led us to run a small test case to check the performance difference between scp with SSM and without.

Our test setup

We used the default VPC in the Frankfurt region eu-central-1 and created two EC2 t2.micro instances in the same availability zone. After setting up SSH to allow both direct SSH access and SSH access via SSM from server A to server B, we downloaded a 649 MB ISO for our test transfer. We copied the file three times with each method from server A to server B and measured the execution time with the Linux time command.

Results

Our test file took about 11 seconds to copy with scp directly and up to 13 minutes and 19 seconds using scp with SSM. Averaged over the three runs, scp with SSM is roughly 75 times slower than the direct connection. See our results below.
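As a sanity check, the slowdown factor can be computed from the "real" wall-clock times in the transcripts below (a quick back-of-the-envelope calculation, not part of the original test run):

```python
# "real" times from the three runs of each method, in seconds
direct = [10.562, 10.630, 10.690]
via_ssm = [13 * 60 + 19.052, 13 * 60 + 18.015, 13 * 60 + 18.327]

avg_direct = sum(direct) / len(direct)   # ~10.6 s
avg_ssm = sum(via_ssm) / len(via_ssm)    # ~798.5 s

print(f"scp via SSM is ~{avg_ssm / avg_direct:.0f}x slower")  # ~75x
```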

SSH without SSM

[ssm-user@ip-172-31-42-176 ~]$ time scp archlinux-2020.04.01-x86_64.iso ssm-user@172.31.44.11:./ssh1.iso
archlinux-2020.04.01-x86_64.iso                                                                                 100%  649MB  62.9MB/s   00:10

real    0m10.562s
user    0m3.238s
sys     0m1.229s

[ssm-user@ip-172-31-42-176 ~]$ time scp archlinux-2020.04.01-x86_64.iso ssm-user@172.31.44.11:./ssh2.iso
archlinux-2020.04.01-x86_64.iso                                                                                 100%  649MB  62.9MB/s   00:10

real    0m10.630s
user    0m3.303s
sys     0m0.996s

[ssm-user@ip-172-31-42-176 ~]$ time scp archlinux-2020.04.01-x86_64.iso ssm-user@172.31.44.11:./ssh3.iso
archlinux-2020.04.01-x86_64.iso                                                                                 100%  649MB  62.9MB/s   00:10

real    0m10.690s
user    0m3.217s
sys     0m1.038s

SSH with SSM

[ssm-user@ip-172-31-42-176 ~]$ time scp archlinux-2020.04.01-x86_64.iso ssm-user@i-04d0008e348c08e7c:./ssm-agent-1.iso
archlinux-2020.04.01-x86_64.iso                                                                                 100%  649MB 833.0KB/s   13:17

real    13m19.052s
user    0m5.902s
sys     0m0.871s

[ssm-user@ip-172-31-42-176 ~]$ time scp archlinux-2020.04.01-x86_64.iso ssm-user@i-04d0008e348c08e7c:./ssm-agent-2.iso
archlinux-2020.04.01-x86_64.iso                                                                                 100%  649MB 833.7KB/s   13:17

real    13m18.015s
user    0m5.512s
sys     0m0.724s

[ssm-user@ip-172-31-42-176 ~]$ time scp archlinux-2020.04.01-x86_64.iso ssm-user@i-04d0008e348c08e7c:./ssm-agent-3.iso
archlinux-2020.04.01-x86_64.iso                                                                                 100%  649MB 833.4KB/s   13:17

real    13m18.327s
user    0m5.424s
sys     0m0.794s

Conclusion

Even with this drastic performance difference, we recommend using SSH with SSM for administrative access to EC2 instances. But if you need to transfer files between EC2 instances, an alternative is required. It is usually a good pattern to use a workflow based on AWS S3 or AWS EFS instead of scp file transfers. That also helps keep your servers stateless. Of course, it is still possible to allow direct SSH connections between servers and use SSH via SSM only for admin access. But this weakens the security setup of your EC2 instances, since you then need to handle private SSH keys on your servers.
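An S3-based transfer could look like this sketch (the bucket name is a hypothetical placeholder, and both instances need an IAM role granting access to it):

```shell
# On server A: upload the file to a staging bucket
aws s3 cp archlinux-2020.04.01-x86_64.iso s3://my-transfer-bucket/

# On server B: download it again
aws s3 cp s3://my-transfer-bucket/archlinux-2020.04.01-x86_64.iso ./
```

This keeps the transfer on the fast S3 data path, needs no SSH connectivity between the instances at all, and access can be controlled with the same IAM policies you already use.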
