Disabling vsftpd


I know that I can stop FTP on my Raspberry Shake with the following command.

docker exec -it rfe pkill -9 vsftpd

Unfortunately, I’m having a difficult time getting that permanently stopped. It starts again after a reboot. While I could eventually find the right timing in systemd to wait for the docker image to fully start and then kill vsftpd after it has started, it would be better if it never started.

Is there a way to edit start.sh in the docker image to permanently prevent it from starting? Web browsers can’t use FTP by default these days. I’m trying to minimize power consumption, reduce boot time, and improve security by not having an insecure FTP service running. I really don’t want FTP running. If I need the files, I can use ssh.

If there is an official git repository with start.sh, I’m also fine with creating a pull request with this change.

I know that I’m not the only person who has asked about this.

Hello BlackDiamond,

After killing the process, could you try to execute the following?

echo manual | sudo tee /etc/vsftpd/vsftpd.override

This should prevent vsftpd from starting again on the next reboot. And, if for some reason you need to enable it again, you can execute sudo rm /etc/vsftpd/vsftpd.override to bring things back to normal.
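If you want to confirm the override took effect after the next reboot, a quick check from the Shake’s command line (assuming the container is still named rfe, as in the command above):

```shell
# Look for a vsftpd process inside the rfe container; pgrep prints nothing
# and exits non-zero when no process matches, so the fallback message fires.
docker exec rfe pgrep vsftpd || echo "vsftpd is not running"
```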


as advertised, every owner of a shake is free to tweak and configure their system however they like. in this case, disabling VSFTP by killing the daemon process inside the rfe container isn’t really useful, since:

  • modifications to a running docker container are lost upon restart of the container
    • as you’ve discovered, this “fix” will be undone on next boot-up
    • to get around this, you would have to run your kill command on every boot-up

(out of curiosity, why do you need to reduce boot-time? and how much time do you think would be saved by not starting vsftpd inside the rfe container?)

regarding security:
since both SSH and VSFTP rely on the same underlying secure protocol, SSH is no more secure than VSFTP; if SSH is secure, VSFTP is even more so (the V stands for Very). further, when our recommendations are followed (namely, change the myshake user’s password, and connect the Shake to a router rather than directly to the internet-at-large), the possibility of an intrusion is minimized to the greatest extent possible. disabling VSFTP does not make your unit any more secure than it already is.
while web browsers can’t use FTP these days, FTP and VSFTP are not the same thing: one is VERY SECURE, the other is an invitation to be hacked.

regarding energy consumption:
again, without knowing the exact details, i’m not sure that having VSFTP simply available actually uses any power when it is doing nothing; this protocol is only ever used when downloading the SWARM app, Shake data, or the log files through the front-end config app. don’t download anything, and no extra power will be asked for. in any case, if a permanently idle VSFTP does use some amount of power, i can’t imagine it’s enough for the overall reduction to be interesting. what is your use-case, btw? is your unit powered by a solar panel in a very cloudy part of the world, perhaps?
to answer the question definitively, measure the exact power consumption difference between:

  • no vsftpd running, and
  • vsftpd running and not doing anything, i.e., not downloading files

i would be very curious to see the results.

a more drastic method you could employ would be to disable the rfe container itself altogether:

> sudo systemctl stop rsh-fe-config
> sudo systemctl disable rsh-fe-config

but then you would have no access to the front-end config, and, for all the reasons stated above, you would likely gain little to nothing in terms of increased security and / or reduced power consumption.

hope this helps,


The reliability of my setup has been slowly degrading over the years. The reliability of the “Server Connection” has been the biggest problem, and it has become a frequent issue lately. Toggling off-line mode sometimes helps. Rebooting sometimes helps. Shutting it down and unplugging it for an hour sometimes helps. When you have to do this 20 times in a row, it gets really tedious.

I know that there are other parts that contribute more to the boot time, but every bit helps. I need it to get to the server connection as quickly as possible.

I think you’re confusing FTP (the insecure protocol) with vsftpd (the software implementation), which should not be confused with SFTP (the SSH-based FTP) or FTPS (FTP over SSL). The “VS” in vsFTP just refers to how the program was written, not to the underlying protocol. If you have scp/sftp enabled, there is no need for FTP port 21 to be open.

Security is like a stack of Swiss cheese slices: you can have multiple layers. Putting the device behind a firewalled router is one layer, but if the holes align, you get a security breach. I’ve seen several worms spread behind firewalls because someone brought a compromised computer inside the perimeter. Firewalls only help so much. Worms are getting pretty sophisticated these days, and they’re recruiting poorly secured devices into DDoS attacks. It’s not safe to assume that this device will sit on a safe network or be operated by security experts. The safest network port is no open port, and this unused, hard-to-use service is best not existing at all.

If you run ps aux, you will notice that vsFTP is using resources, as are the docker-proxy instances spawned for each port opened for vsFTP, including the PASV ports. Each docker-proxy instance uses 1.8% of memory per port, and 12 of the 17 docker-proxy instances are devoted to vsFTP alone. Disabling vsFTP also needs to stop all of these docker-proxy instances from being spawned.
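For anyone wanting to put a number on this, the per-process figures can be totalled from the same ps output (the %MEM value is field 4 of ps aux; this is a quick sketch, not an official tool):

```shell
# Sum the %MEM column (field 4 of `ps aux`) over all docker-proxy processes.
# The [d] in the grep pattern keeps the grep process itself out of the list.
ps aux | grep '[d]ocker-proxy' | awk '{ sum += $4 } END { printf "total docker-proxy memory: %.1f%%\n", sum }'
```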

Actually, only the Shake data is offered through FTP in the UI. The rest is available over HTTP through nginx. It would be nice if that one link in the web UI were changed from FTP to the web server so that it’s easier to download the data.

If the following changes were made in the Raspberry Shake web server, then there would be no FTP references in the GUI anymore, and the usability for the average person would be improved.


I do use the Helicorder display, since the server connection is unreliable, doesn’t try to repair itself, and doesn’t upload any of the data from the period it was disconnected from the server. There have been days when an interesting seismic event happened, only for me to notice that my Raspberry Shake had disconnected hours or days earlier. The data is on my Raspberry Shake, but it’s not available through the iOS app. If the web server, Helicorder and FTP were in separate docker images, I’d stop only the FTP service. At the moment, it’s not possible to run only some of the applications in rsh-fe-config.

On a related note, the rsh-fe-config docker image uses nginx/1.20.1. There have been releases since that version with fixes for CVE-reported exploits; see Releases | NGINX Plus for details. There have also been new OpenSSH and OpenSSL releases since the versions installed on my Raspberry Shake, and some of the CVE fixes seem pretty important. So even if vsFTP were configured to use FTP over SSL, the SSL implementation would still be out of date and vulnerable.

hi there,

i totally agree, worms are bad!

since the vsftpd process will, for the moment, remain a non-configurable option, killing it at each boot-up really is your only way forward. rather than involving systemd and its annoying timing issues:

  • write a script
  • executed by cron
    • once at boot-up

to do the following:

  • monitor log file /opt/log/postboot.log:
    • to detect that the docker container FE Config Server has been started
  • wait a few seconds for the container to get up and running
  • issue your kill command
  • DONE
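a minimal sketch of that cron job (assumptions: the log line announcing the container contains something like “FE Config” — check postboot.log on your own unit for the exact text, and adjust the pattern and timings to taste):

```shell
#!/bin/bash
# /usr/local/bin/kill-vsftpd.sh -- run from cron with an @reboot entry:
#   @reboot /usr/local/bin/kill-vsftpd.sh

# Poll a log file until a pattern appears, or give up after N tries.
wait_for_line() {
    local file=$1 pattern=$2 tries=${3:-60}
    while [ "$tries" -gt 0 ]; do
        grep -q "$pattern" "$file" 2>/dev/null && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# "FE Config" is an assumed log message -- verify against your postboot.log.
if [ -r /opt/log/postboot.log ] && wait_for_line /opt/log/postboot.log "FE Config" 120; then
    sleep 10                          # let the container finish coming up
    docker exec rfe pkill -9 vsftpd   # the kill command from the first post
fi
```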

i will still maintain, however, this will get you little to no gain in any context.

regarding the “Server Connection” reliability problem, not really sure what to say about this without log files. our servers are up all the time, permanently listening for connection requests and responding. most of the 1800+ Shakes that are currently connected remain connected for very long periods of time without issue. your experience is not the norm. can you send your log files? perhaps there’s something there that can point to your issue.

on that note: weren’t your frequent reboots part of how you had to handle the problem with the Pi overheating? (n.b., rebooting and / or placing it into OFFLINE mode really won’t help faulty connections to the server; that it eventually reconnects is a coincidental artifact, a red herring, if you will.) as an experiment, it might be interesting to relocate your shake to a cleaner environment, remove the enclosure to let all the heat freely escape, and then see how it performs differently, if at all.

cheers, hope this helps,


So disabling FTP alone didn’t save a lot of memory, but running the following command did save about 20 MB of RAM and closed all of the related FTP ports, making it more secure. You don’t need to patch something that isn’t exposed. This command is probably the most important part.

sudo kill -9 `ps aux | /bin/grep docker-proxy | /bin/egrep 'port (10[0-9]{3,3}|21)' | /usr/bin/awk '{print $2}'`
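A note on the egrep pattern: it matches the FTP control port (21) and the five-digit 10000–10999 passive-port range in docker-proxy’s arguments. A quick way to sanity-check it against made-up argument strings before pointing kill -9 at real PIDs:

```shell
pattern='port (10[0-9]{3,3}|21)'
# The FTP control port and a passive port should match; HTTP should not.
echo 'docker-proxy -proto tcp -host-port 21'    | grep -Ec "$pattern"            # 1
echo 'docker-proxy -proto tcp -host-port 10042' | grep -Ec "$pattern"            # 1
echo 'docker-proxy -proto tcp -host-port 80'    | grep -Ec "$pattern" || true    # 0
```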

I did find a way to patch the docker image to prevent starting vsftpd, but the docker image is configured to expose the ports, which is where the wasted resources are. It’s apparently hard to close a port on an existing docker image; having the original Dockerfile would help. In its absence, I wrote the following commands, but I couldn’t reliably start the docker image from systemd, only launch it manually. I’m providing this here for posterity, and I don’t recommend using these commands.

sudo systemctl stop rsh-fe-config
rm -rf /tmp/disable-vsftpd
mkdir /tmp/disable-vsftpd
cd /tmp/disable-vsftpd
docker save `cat /opt/settings/sys/rfe.txt` | tar xv
sed -i -r 's/"ExposedPorts.*\/tcp"\:\{\}\}/"ExposedPorts":\{"80\/tcp":\{\}\}/g' `ls *.json | grep -v manifest.json`
tar cv . | docker load
sudo systemctl start rsh-fe-config
docker run -d -p 80:80 -v /opt/settings:/opt/settings -v /opt/DL:/opt/DL -v /opt/data:/opt/data -v /opt/log/:/opt/log -v /sys/fs/cgroup:/sys/fs/cgroup:ro -e DOCKER_IP= --name=lessports `cat /opt/settings/sys/rfe.txt`
docker exec -it lessports sed -i 's/"ftp:\/\/"[+]window\.location\.hostname/"\/archive\/"/g' /usr/src/fe/bundle.js
docker exec -it lessports chmod 755 /start.sh
docker exec -it lessports sed -i 's/^\/usr\/sbin\/vsftpd/#\/usr\/sbin\/vsftpd/' /start.sh
docker commit --change='CMD ["/start.sh"]' -c "EXPOSE 80" `docker ps | fgrep registry.gitlab.com/rshake-public/rsh-fe-config | awk '{ print $1 " " $2 "-disable-vsftpd" }'`
echo `cat /opt/settings/sys/rfe.txt`-disable-vsftpd > /opt/settings/sys/rfe.txt
docker stop lessports
sudo touch /usr/local/bin/rsh-fe-config-start
sudo chown myshake /usr/local/bin/rsh-fe-config-start
chmod 755 /usr/local/bin/rsh-fe-config-start
cat <<'EOF' > /usr/local/bin/rsh-fe-config-start
#!/bin/bash
/usr/local/bin/rsh-fe-config START
kill_counter=0
while [ "`/bin/ps aux | /bin/grep docker-proxy | /bin/egrep 'port (10[0-9]{3,3}|21)' | /usr/bin/awk '{print $2}' | /usr/bin/wc -l`" -lt 12 ] && [ $kill_counter -le 20 ]
do
    sleep 1
    kill_counter=$(($kill_counter + 1))
done
kill `ps aux | /bin/grep docker-proxy | /bin/egrep 'port (10[0-9]{3,3}|21)' | /usr/bin/awk '{print $2}'`
EOF
sudo sed -i 's/ExecStart=.*/ExecStart=\/usr\/local\/bin\/rsh-fe-config-start/' /lib/systemd/system/rsh-fe-config.service
sudo systemctl daemon-reload
sudo systemctl start rsh-fe-config

These are the remaining open ports after killing the docker port proxies.

myshake@raspberryshake:~ $ sudo netstat -tulp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0   *               LISTEN      426/sshd            
tcp        0      0*               LISTEN      799/python          
tcp6       0      0 [::]:18000              [::]:*                  LISTEN      1358/docker-proxy   
tcp6       0      0 [::]:http               [::]:*                  LISTEN      705/docker-proxy    
tcp6       0      0 [::]:18002              [::]:*                  LISTEN      1351/docker-proxy   
tcp6       0      0 [::]:18006              [::]:*                  LISTEN      1328/docker-proxy   
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN      426/sshd            
tcp6       0      0 [::]:16032              [::]:*                  LISTEN      1498/docker-proxy   
udp        0      0*                           392/dhcpcd          
udp        0      0*                           500/ntpd            
udp        0      0*                           500/ntpd            
udp        0      0*                           500/ntpd            
udp        0      0*                           500/ntpd            
udp        0      0*                           500/ntpd            
udp        0      0 localhost:ntp *                           500/ntpd            
udp        0      0   *                           500/ntpd            
udp6       0      0 [::]:ntp                [::]:*                              500/ntpd            

I’ll try sending the logs in a separate post. The connection is currently solid. I’m currently using an outdoor case purchased from this website, and I may just try to purchase an indoor case instead. That may help with the overheating problem, and I can find alternatives to avoid the dust problem in the garage. It’s currently staying 20-30°F cooler inside my house with the case top off.

I’ll also have to review the ntpd stuff that’s running. It might be a coincidence, but the server connection might have a tougher time when ntp/ntpd/ntpdate get into a bad state during startup. There might be a configuration conflict between them.

I would be tempted to find out why the connections drop so frequently rather than spending time playing with an image that works well for ~2,000 other people.

Re-burn image?



i can give you the NTP information you’re looking for, no need to go looking for “the conflict”:

  • on boot-up, ntpdate is run first, to set the system clock which may be far off from real time
  • only once ntpdate has successfully completed is the ntpd daemon started
  • only after ntpd has a successful lock on an NTP server will the rsh-data-producer container forward data to the server
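on a unit where this is suspected, ntpd’s sync state can be checked directly (a quick sketch, assuming the ntpq utility is available alongside ntpd):

```shell
# A '*' in column one of `ntpq -np` marks the peer ntpd is synchronized to;
# no starred line means no lock yet (and no data forwarded to the server).
ntpq -np | awk '/^\*/ { print "synced to " substr($1, 2); found = 1 }
                END   { if (!found) print "ntpd has no sync peer yet" }'
```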

when there are problems with any of this, there will be corresponding messages in the log files, postboot.log and odf_SL_plugin.*.

but without the log files, it’s all just a guessing game, which can be alternatingly fun and not.


Here are my logs. The last one, at 2023 039, failed today. NTP failed, and the server connection didn’t work. Perhaps NTP started up before the DNS resolver had fully started? It also looks like /opt/settings/user/UDP-data-streams.conf is no longer readable. Perhaps it’s time to just reimage the station.

RSH.RAAE6.2023-02-08T04_52_18.logs.tar.gz (313.0 KB)


ah, log files are so helpful!

there is a curious message there: "host: command not found"

any idea where that command went? or why it’s not found? this command is used as part of the process to confirm access to the internet as well as access to the data servers. when this can’t be confirmed, it’s assumed there is no access or connection.
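for what it’s worth, on Raspbian/Debian the host utility comes from the bind9-host (or dnsutils) package, and getent can serve as a stopgap DNS sanity check while it’s missing (a sketch; the package suggestion assumes a stock Debian-style image):

```shell
# Is `host` on the PATH at all? command -v prints its path when found.
if command -v host >/dev/null 2>&1; then
    echo "host found at $(command -v host)"
else
    # Debian packages `host` in bind9-host; reinstall with apt if missing.
    echo "host missing -- try: sudo apt-get install --reinstall bind9-host"
fi

# getent queries the system resolver directly, a handy fallback check:
getent hosts localhost
```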

given all the tweaking that’s happened, it would be an interesting experiment to burn a new SD card with the latest image, leave it untweaked, and see whether it behaves differently.

or, figure out why host on the command line doesn’t work and fix that. i predict good things will happen when this DNS problem is resolved (pardon the pun).

hope this helps,


So vsftpd still running remains a problem, but my situation has improved.

  1. The temperature has significantly improved after switching from an outdoor enclosure to an indoor enclosure, with some of the unnecessary holes taped over to keep dust out.
  2. I found out that I had a marginal ethernet cable. Apparently it had been crushed and bent by a box too many times by other people in the house, which was likely a cause of the intermittent failures. That’s the first time I’ve ever seen a cable fail.

My uptime is much better now. Thanks for the help.