I wanted to see if I could build and run Tezos on the RPi4. I'm currently running a bakery in the cloud; 4 nodes and a local signer. But cloud machines have a running cost, at least if you want a somewhat beefy machine with a large SSD. The RPi4 looks interesting since it now comes with a 4GB RAM model and a USB 3.0 interface for connecting a fast SSD.
After discovering this bug I moved from the default Raspbian image to Ubuntu 19.10 for Raspberry Pi, running aarch64 instead of the 32-bit armv7. I'm also running the rootfs from the USB SSD instead of the MicroSD card, which gave a significant performance increase.
I initially used the standard Raspbian distribution, and chose the Raspbian Buster Lite image since I don't need a desktop for this project.
However, after discovering a bug with Tezos on 32-bit architectures, I moved to Ubuntu 19.10 for Raspberry Pi running aarch64 instead. The procedure is exactly the same: flash the image to a MicroSD card and boot up.
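For the flashing step, a minimal sketch using dd (the image filename and the target device are assumptions - check yours with lsblk before writing, since dd will happily overwrite the wrong disk):

```shell
# Sketch: flash the Ubuntu image to the MicroSD card.
# The image name and /dev/mmcblk0 are assumptions - verify with `lsblk` first!
xzcat ubuntu-19.10-preinstalled-server-arm64+raspi3.img.xz | \
  sudo dd of=/dev/mmcblk0 bs=4M status=progress conv=fsync
```

conv=fsync makes dd flush to the card before exiting, so it's safe to pull the card as soon as the command returns.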
At the time of writing the Ubuntu image ships with the
5.3.0-1007 kernel. This kernel has some issues with memory > 3GB, so if you have the 4GB version you need to mount the boot partition elsewhere and add the following to
There is a newer kernel available though,
5.3.0-1012, which fixes this issue. So if you update to it you can remove these lines and enjoy the full 4GB.
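To confirm which kernel you actually ended up on, and whether all 4GB are visible after the update:

```shell
# Show the running kernel version and total memory the kernel can see
uname -r
free -h | grep Mem
```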
If you want to run the rootfs from the USB SSD (which I highly recommend since it's a real performance boost), flash the exact same Ubuntu image onto the SSD too. After that we just have to modify
/boot/firmware/btcmd.txt (keep the rest):
Do that on both the MicroSD card's boot partition and the SSD's. Sometimes Ubuntu mounts the SSD's boot partition on
/boot/firmware for some reason.
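To check which device actually got mounted on /boot/firmware (and avoid editing the wrong copy), findmnt is handy:

```shell
# Show which block device is mounted at /boot/firmware
findmnt -no SOURCE,TARGET /boot/firmware
# Compare with the full block device tree to see MicroSD vs SSD
lsblk -o NAME,MOUNTPOINT
```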
The only other software I installed was Docker. I ❤️ Docker. Cannot live without it. Get it!
curl -sSL https://get.docker.com | sh
It builds and works on the RPi4 😬🎉
Next, let's make a few folders to keep the Tezos data:
mkdir -p /data/tezos/mainnet/client
mkdir -p /data/tezos/mainnet/node/data
mkdir -p /data/tezos/mainnet/snapshots
I want to run a full node, so let's get a snapshot so we don't have to sync the full chain. Find snapshots here.
cd /data/tezos/mainnet/snapshots
wget <snapshot_url>
Next, let's load the snapshot:
docker run --rm \
  -v /data/tezos/mainnet/client:/var/run/tezos/client \
  -v /data/tezos/mainnet/node:/var/run/tezos/node \
  -v /data/tezos/mainnet/snapshots:/snapshots \
  --entrypoint bash \
  -it asbjornenge/tezos-ubuntu-arm:latest
Notice I modified the entrypoint to run the container without executing the
entrypoint.sh script - it currently does not support loading a snapshot.
Next, let's make sure the permissions of our folders are correct:
container> cat /etc/passwd | grep tezos
Note the uid and gid of the tezos user. Change the permissions (outside the container):
chown -R <uid>:<gid> /data/tezos/mainnet
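As a sketch, you can also pull the uid and gid out of the passwd entry with cut instead of reading them by eye (the example passwd line below is hypothetical - use the real output of the grep above):

```shell
# Hypothetical passwd entry; substitute the real line from the container
passwd_line="tezos:x:100:65533::/var/run/tezos:/bin/sh"
# Fields 3 and 4 are the uid and gid; cut keeps the ':' between them
ids=$(echo "$passwd_line" | cut -d: -f3,4)
echo "$ids"   # prints "100:65533"
# Then, outside the container:
# sudo chown -R "$ids" /data/tezos/mainnet
```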
Next, inside the container, load the snapshot:
container> tezos-node snapshot import /snapshots/mainnet.full --data-dir /var/run/tezos/node/data/
Before we can start the node we need to add the
alphanet_version file (because of this bug):
echo 2018-06-30T16:07:32Z-betanet > /var/run/tezos/node/alphanet_version
Now we are ready to start the node!
docker run --rm \
  -p 8732:8732 \
  -v /data/tezos/mainnet/client:/var/run/tezos/client \
  -v /data/tezos/mainnet/node:/var/run/tezos/node \
  -v /data/tezos/mainnet/snapshots:/snapshots \
  -it asbjornenge/tezos-ubuntu-arm:latest tezos-node
And that's it 😬🎉 Give it some time to start and you are running a full Tezos node on an RPi4 🚀
Once it's started, you can verify the block level of your node:
curl -s localhost:8732/chains/main/blocks/head | jq .header.level
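While it catches up, it can also be handy to know how far behind the head is. A small sketch comparing the head timestamp to the current time (assumes jq and GNU date; the sample JSON just illustrates the header shape):

```shell
# On the live node you would fetch the header like this:
# header=$(curl -s localhost:8732/chains/main/blocks/head/header)
header='{"level":700000,"timestamp":"2020-01-01T00:00:00Z"}'  # sample shape
ts=$(echo "$header" | jq -r .timestamp)
# Difference between wall clock and the head block's timestamp, in seconds
lag=$(( $(date -u +%s) - $(date -u -d "$ts" +%s) ))
echo "node head is $lag seconds behind"
```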
CPU performance and memory usage are well within bounds. The most important thing I wanted to test was the disk read/write speed.
I used a very basic read/write test to benchmark the disk:
cd /data
sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.49814 s, 195 MB/s
dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.51642 s, 708 MB/s
As we can see we get a write speed of around
200 MB/s and a read speed of around
700 MB/s. I was hoping for much better performance than this, since both the SSD and the adapter support SATA 3 and USB 3.1 "superspeed". But it seems the limiting factor here might be that the RPi4 has a USB 3.0 port.
I was a bit encouraged by checking the read/write on my cloud nodes, where both were only around
80 MB/s 😉
It seems the above was also somewhat related to running the OS from the MicroSD card. After moving to Ubuntu and running the OS from the SSD I got the following results:
sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.7732 s, 225 MB/s
dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.819122 s, 1.3 GB/s
- Put OS on SSD ✅
That poor MicroSD card has a disk read speed of around
11 MB/s, so I believe the
tezos-node RPC API sometimes feeling a bit sluggish on the Pi was caused by having the OS (and also Docker) running on that slow MicroSD card.
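If you want to put a number on that sluggishness, curl can time the RPC calls directly (a quick sketch, nothing Tezos-specific):

```shell
# Time a head request against the local node's RPC
curl -s -o /dev/null -w 'head RPC took %{time_total}s\n' \
  localhost:8732/chains/main/blocks/head
```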
- UPS (powerbank battery) & 4G modem
I want to make the nodes as highly available as possible, so I want to add a battery bank and a 4G modem so they stay up no matter what (almost) 😉
- Signer with Ledger support
I haven't tested running a signer on the PI yet - it's on my TODO list.
Hope this was useful ✨