112

I have been using sshfs to work remotely, but it is really slow and annoying, particularly when I use Eclipse on it.

Is there any faster way to mount the remote file system locally? My no. 1 priority is speed.

The remote machine is Fedora 15, the local machine is Ubuntu 10.10. I can also use Windows XP locally if necessary.

CuriousMind

15 Answers

57

If you need to improve the speed for sshfs connections, try these options:

oauto_cache,reconnect,defer_permissions,noappledouble,nolocalcaches,no_readahead

(Note: defer_permissions, noappledouble, and nolocalcaches are specific to the macOS/macFUSE build of sshfs; leave them out on Linux.) The command would be:

sshfs remote:/path/to/folder local -oauto_cache,reconnect,defer_permissions
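
For completeness, a minimal Linux mount/unmount cycle with a Linux-friendly subset of these options might look like this (host and paths are placeholders):

# create a local mountpoint and mount with the caching options
mkdir -p ~/remote
sshfs user@server:/path/to/folder ~/remote -o auto_cache,reconnect,no_readahead

# unmount when finished
fusermount -u ~/remote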
Meetai.com
28

sshfs uses the SSH File Transfer Protocol, which means encryption.

If you just mount via NFS, it's of course faster, because it's not encrypted.

Are you trying to mount volumes on the same network? Then use NFS.
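
For reference, a minimal NFS setup might look like this (hostnames, paths, and export options are placeholders to adapt to your network):

# on the server: export the directory and reload the export table
echo '/home/dev 192.168.1.0/24(rw,async,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# on the client: mount the export
sudo mount -t nfs server:/home/dev /mnt/dev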

Tilo
22

Besides the already proposed solutions of using Samba/NFS, which are perfectly valid, you can also get some speed boost while sticking with sshfs by using quicker encryption: supply the -o Ciphers=arcfour option to sshfs. Authentication stays as safe as usual, but the transferred data itself becomes easier to decrypt. This is especially useful if your machine has a weak CPU.
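
A minimal sketch of that command, assuming your server's sshd still accepts arcfour (recent OpenSSH releases have removed it; run ssh -Q cipher to see what your client supports):

# mount with the lightweight (and weak!) arcfour cipher to cut CPU overhead
sshfs -o Ciphers=arcfour user@server:/path/to/folder ~/remote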

aland
15

I do not have any alternatives to recommend, but I can provide suggestions for how to speed up sshfs:

sshfs -o cache_timeout=115200 -o attr_timeout=115200 ...

This should avoid some of the round trip requests when you are trying to read content or permissions for files that you already retrieved earlier in your session.

sshfs simulates deletes and changes locally, so new changes made on the local machine should appear immediately, despite the large timeouts, as cached data is automatically dropped.

But these options are not recommended if the remote files might be updated without the local machine knowing, e.g. by a different user, or a remote ssh shell. In that case, lower timeouts would be preferable.

Here are some more options I experimented with, although I am not sure if any of them made a difference:

sshfs_opts="-o auto_cache -o cache_timeout=115200 -o attr_timeout=115200   \
-o entry_timeout=1200 -o max_readahead=90000 -o large_read -o big_writes   \
-o no_remote_lock"

You should also check out the options recommended by Meetai in his answer.

Recursion

The biggest problem in my workflow is when I try to read many folders, for example in a deep tree, because sshfs performs a round trip request for each folder separately. This may also be the bottleneck that you experience with Eclipse.

Making requests for multiple folders in parallel could help with this, but most apps don't do that: they were designed for low-latency filesystems with read-ahead caching, so they wait for one file stat to complete before moving on to the next.

Precaching

But something sshfs could do would be to look ahead at the remote file system, collect folder stats before I request them, and send them to me when the connection is not immediately occupied. This would use more bandwidth (from lookahead data that is never used) but could improve speed.

We can force sshfs to do some read-ahead caching, by running this before you get started on your task, or even in the background when your task is already underway:

find project/folder/on/mounted/fs > /dev/null &

That should pre-cache all the directory entries, reducing some of the later overhead from round trips. (Of course, you need to use the large timeouts like those I provided earlier, or this cached data will be cleared before your app accesses it.)

But that find will take a long time. Like other apps, it waits for the results from one folder before requesting the next one.

It might be possible to reduce the overall time by asking multiple find processes to look into different folders. I haven't tested to see if this really is more efficient. It depends whether sshfs allows requests in parallel. (I think it does.)

find project/folder/on/mounted/fs/A > /dev/null &
find project/folder/on/mounted/fs/B > /dev/null &
find project/folder/on/mounted/fs/C > /dev/null &

If you also want to pre-cache file contents, you could try this:

tar c project/folder/on/mounted/fs > /dev/null &

Obviously this will take much longer, will transfer a lot of data, and requires you to have a huge cache size. But when it's done, accessing the files should feel nice and fast.

joeytwiddle
  • 1,795
11

I found that turning off the part of my zsh theme that was checking git file status helped enormously; just entering the directory was taking 10+ minutes. Likewise, turning off git status checkers in Vim.
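
For example, if you happen to use oh-my-zsh, its git prompt honors per-repository switches that skip the status check (these two settings are oh-my-zsh specific; other themes have their own knobs):

# run inside the repository on the sshfs mount
git config --add oh-my-zsh.hide-status 1   # skip `git status` in the prompt
git config --add oh-my-zsh.hide-dirty 1    # skip the dirty-state check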

Franck Dernoncourt
8

After searching and trial and error, I found that adding -o Compression=no speeds things up a lot. The delay may be caused by the compression and decompression process. Besides that, using 'Ciphers=aes128-ctr' seems faster than other ciphers, and some posts have run experiments on this. My command then looks something like this:

sshfs -o allow_other,transform_symlinks,follow_symlinks,IdentityFile=/Users/maple/.ssh/id_rsa -o auto_cache,reconnect,defer_permissions -o Ciphers=aes128-ctr -o Compression=no maple@123.123.123.123:/home/maple ~/mntpoint

maple
5

I've been testing various tools on macOS 12.1 on an M1 Mac and wanted to share some possibly helpful results.

Short Version: Try using rclone mount instead of sshfs. This enabled me to get full gigabit speed both up and down.

A little more about my experience and testing:

Setup: M1 Mac connected over gigabit Ethernet to a server running Rocky 8, with a big high-speed RAID filesystem. Speeds below are in MB/s, so wire speed would be about 125 MB/s (1 Gb/s).

For me, the default settings of sshfs gave ~30 MB/s down from the server and the full 120 MB/s up. Using the option -o Ciphers=aes128-ctr increased that to about 50 MB/s down (arcfour is no longer supported in OpenSSH, so it didn't work).

Using rclone mount, I was able to get full 120+ MB/s both up and down, and the mount has otherwise worked great so far as well.
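
A sketch of the rclone setup, assuming you have already created an sftp remote named server via rclone config:

# mount the sftp remote in the background; the VFS write cache helps upload speed
rclone mount server:/home/user ~/mnt/server --vfs-cache-mode writes --daemon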

Most other non-mount tools I tried gave me roughly full wire speed up and down (Forklift, command-line sftp, FileZilla, rclone copy, rsync).

Cyberduck gave me very slow performance up and down, ~15 MB/s, which I suspect is due to compression that I have not been able to figure out how to turn off.

4

SSHFS is really slow because it transfers the file contents even when it does not have to (for example, when doing a cp within the mount). I reported this upstream and to Debian, but got no response. :/

3

NFS should be faster. How remote is the filesystem? If it's over the WAN, you might be better off just syncing the files back and forth instead of using direct remote access.
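
If syncing turns out to be the better fit, a sketch with rsync (host and paths are placeholders):

# pull the project down, work on it locally, then push changes back
rsync -az user@server:/remote/project/ ~/project/
# ... edit, build, test locally ...
rsync -az ~/project/ user@server:/remote/project/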

Adam Wagner
2

Either NFS or Samba if you have large files. Using NFS with something like 720p movies is really a PITA. Samba will do a better job, though I dislike Samba for a number of other reasons and wouldn't usually recommend it.

For small files, NFS should be fine.

1

New option: max_conns

Since version 3.7.0 sshfs includes an option called max_conns.

This option has the potential to greatly improve your performance.

Check your sshfs version with the following command:

sshfs -V

If your version is >= 3.7.0, then consider adding the following parameter:

-o max_conns=4

where 4 is the number of cores on your machine (you can check this with the command below):

# To retrieve the number of cores:
grep -c ^processor /proc/cpuinfo

NOTE

This might have an impact on the CPU load used by ssh / sshfs. If you do not want to saturate your CPU for disk access, consider using a lower connection count.
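
Putting it together, a possible mount command (host, path, and mountpoint are placeholders):

# open four SSH connections in parallel, plus local caching
sshfs -o max_conns=4 -o auto_cache user@server:/path/to/folder ~/remote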

JohannesB
0

I use plain SFTP. I did it primarily to cut out unneeded authentication, but I am sure that dropping the layer of encryption helps, too. (Yes, I need to benchmark it.)

I describe a trivial usage here: https://www.quora.com/How-can-I-use-SFTP-without-the-overhead-of-SSH-I-want-a-fast-and-flexible-file-server-but-I-dont-need-encryption-or-authentication
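
As a rough sketch of this approach, for trusted networks only (the sftp-server path varies by distribution and is an assumption here): expose sftp-server on a plain TCP port with socat, then point sshfs at it with -o directport, which bypasses ssh entirely:

# on the server: serve the SFTP protocol unencrypted and unauthenticated (trusted LAN only!)
socat TCP-LISTEN:2222,reuseaddr,fork EXEC:/usr/lib/openssh/sftp-server

# on the client: connect sshfs straight to that port, skipping ssh
sshfs -o directport=2222 server:/path/to/folder ~/remote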

0

sshfs is for sure not a very performant way to mount a remote file system in general and other options are often faster. However, if you experience incredibly sluggish performance it might be that some I/O is happening over the SSH connection that you are not aware of.

To investigate what is happening, you can mount with sshfs -d. This runs sshfs in the foreground and displays debugging information, so you can see what kind of requests are being sent to the remote host and whether any of that I/O should be happening in the first place.

This is not directly relevant to the question, but here's what my problem was specifically: a simple ls was taking 8 seconds to complete. Using the debug mode, I found that during the ls command there were requests like /libselinux.so.1 and /libpcre.so.3, etc. This made no sense to me, until I figured out that my LD_LIBRARY_PATH variable contained a trailing :, so it effectively contained an empty entry, which caused shared libraries to be looked up over SSHFS.
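
In case anyone hits the same thing, a one-line fix that strips the trailing colon before exporting the variable:

# remove a trailing ':' so the loader no longer searches the (sshfs) working directory
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH%:}"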

jlh
0

Meetai.com's answer was pure magic for me...

I'm on Linux Mint Cinnamon 20.0 right now... Just to add on to that answer, here is a little script built around Meetai's solution: it pops up a list of hosts from your SSH config file to select from. My two cents.

#!/bin/bash

# list host aliases from the user's ssh config file (skipping wildcard entries)
hosts="$(grep -P "^Host ([^*]+)$" "$HOME/.ssh/config" | sed 's/Host //')"

# let the user select a host from the list
select host in ${hosts}; do echo "You selected ${host}"; break; done

# make sure the mountpoint exists, then call sshfs to mount the host
mkdir -p ~/mnt/"$host"
sshfs "$host":/ ~/mnt/"$host" -o auto_cache,reconnect,no_readahead

0

My SSHFS got very slow all of a sudden at noon. I fiddled with almost all the hints on every page I found on the internet, and none of them fixed my problem. After 12 hours of trying to fix the issue, I went and tweaked some settings on the server, increasing almost all the parameters (Apache settings, connection settings, a little bit of everything), and by trial and error it started working as before. It seems that heavy traffic, combined with settings meant for a light server, was causing the slow file serving and data transfer.

I suggest you review both servers, and if something needs changing, try increasing those limits (maybe you just need some more memory or workers). A server can work fine for a while, but once you get heavy traffic, things may become a little tight.