
xSPECTAR XRPL nodes how-to

This will not be your typical step-by-step guide on how to set up rippled, but rather an outline of the decisions we made for our nodes.

For the initial setup, XRPL has great documentation, which you can find at https://XRPL.org/install-rippled.html

We will kick off with our full history node first and then move on to our validator nodes. Before we jump into the full history node, you need some idea of what type of hardware we are working with for our nodes:

Full history node:

· CPU: Xeon Gold 6134 (8 cores, 3.2 GHz) x 2

· RAM: 16 GB 2666 MHz PC4 ECC RDIMM x 16 = 256 GB

· Chassis: 24 bays (HBA mode)

· A lot of SSDs (~50 TB)

Our validator nodes (currently 2), per node:

· CPU: AMD Ryzen 9 5950X (16 cores)

· RAM: 16 GB 2666 MHz PC4 ECC RDIMM x 8 = 128 GB

· Storage: 2x 3.84 TB NVMe

And of course location! We use different locations and data centers.

These are our current default hardware specs, but we are also experimenting with different Xeon Gold processors for other nodes in the near future (clustering).

For the history node we focused more on the filesystem and how we could make it as fault tolerant as possible. We looked at the different filesystems available on Linux and chose ZFS. When we talk about ZFS we are referring to OpenZFS (https://openzfs.org/wiki/Main_Page).

Some features of ZFS:

· Protection against data corruption. Integrity checking for both data and metadata.

· Continuous integrity verification and automatic “self-healing” repair

· Data redundancy with mirroring, RAID-Z1/2/3 [and DRAID]

· Support for high storage capacities — up to 256 trillion yobibytes (2¹²⁸ bytes)

· Efficient storage with snapshots and copy-on-write clones

See https://docs.freebsd.org/en/books/handbook/zfs/#zfs-term for the complete list of features.

These are the important ones for our project:

Pool:

A storage pool is the most basic building block of ZFS. A pool consists of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a GUID. The ZFS version number on the pool determines the features available.

Vdev:

A pool consists of one or more vdevs, which themselves are a single disk or a group of disks transformed into a RAID. When using more than one vdev, ZFS spreads data across the vdevs to increase performance and maximize usable space. All vdevs must be at least 128 MB in size.

Copy-On-Write:

Unlike a traditional file system, ZFS writes data to a new block rather than overwriting the old data in place. Only when this write completes does the metadata update to point to the new location. When a shorn write (a system crash or power loss in the middle of writing a file) occurs, the entire original contents of the file are still available and ZFS discards the incomplete write. This also means that ZFS does not require an fsck after an unexpected shutdown.

Dataset:

Dataset is the generic term for a ZFS file system, volume, snapshot or clone. Each dataset has a unique name in the format poolname/path@snapshot. The root of the pool is a dataset as well. Child datasets have hierarchical names like directories. For example, mypool/home, the home dataset, is a child of mypool and inherits properties from it.

Volume:

ZFS can also create volumes, which appear as disk devices. Volumes have many of the same features as datasets, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS, for virtualization, or for exporting iSCSI extents.

Hot Spares:

The hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in a storage pool. A hot spare device is inactive in a pool until the spare replaces the failed device.
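To make these terms concrete, here is a quick sketch of the standard commands for inspecting a pool once it exists (mypool is the pool name we use throughout this guide):

root@host:~# zpool list                 # all pools with their size and free space
root@host:~# zpool status mypool        # vdev layout, hot spares and per-disk health
root@host:~# zfs list -t all            # datasets, volumes and snapshots in the pool
root@host:~# zfs get all mypool         # every property set on the root dataset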


The above are the basics you need to know before starting with ZFS, but I recommend looking into the advanced settings like ashift. In short, ashift lets you manually set the sector size the pool assumes for its disks (as a power of two, so ashift=12 means 4 KiB sectors), which can improve overall performance.
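As a small example of what that looks like in practice (the disk name is illustrative): check the physical sector size your disks report before creating the pool, then confirm what the pool actually ended up with.

root@host:~# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdc   # physical and logical sector size reported by the disk
root@host:~# zdb -C mypool | grep ashift              # the ashift each vdev was created with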

If you want to geek out, you can also check some benchmarks from 45Drives about ZFS and RAID: https://openbenchmarking.org/result/2110221-TJ-45DRIVESX73

For our history node setup, we used 6-disk-wide vdevs in RAID-Z1 (comparable to RAID 5), meaning each vdev has 6 disks. Small detail: we used partitions (slices) rather than whole disks for our ZFS setup.

All disks have the following GPT partition table in our setup:

part1 - 50M - Reserved BIOS boot area
part2 - 50G - mdadm softraid RAID1 - mounted as /
part3 - 10G - mdadm softraid RAID1 - used as swap
part4 - rest of the disk - unformatted (for ZFS pool)
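For reference, and only as a sketch (the device names and the two-disk mirrors are illustrative, not our exact installer steps), a layout like this can be created with sgdisk and mdadm:

root@host:~# sgdisk --new=1:0:+50M --typecode=1:EF02 /dev/sda   # reserved BIOS boot area
root@host:~# sgdisk --new=2:0:+50G --typecode=2:FD00 /dev/sda   # member of the RAID1 for /
root@host:~# sgdisk --new=3:0:+10G --typecode=3:FD00 /dev/sda   # member of the RAID1 for swap
root@host:~# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # softraid for /
root@host:~# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # softraid for swap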

With this setup, if one disk fails entirely, our server will still boot and we will be operational in no time. As an extra failsafe we opted to include a hot spare, just in case one of the disks in our ZFS pool fails.

With that combination we should be relatively safe during a disk outage. The only issue would be multiple disk failures in the same vdev at the same time. We could eliminate this problem by moving to a different RAID level, like RAID-Z2, which survives two disk failures per vdev without data loss. But that would mean our overall storage pool takes a hit in terms of usable storage. In future setups we will cluster our nodes for further redundancy.
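To put a rough number on that usable-storage trade-off (taking 8 TB disks purely as an illustration and ignoring metadata and padding overhead): a 6-disk RAID-Z1 vdev gives roughly 5 x 8 TB = 40 TB usable, while the same 6 disks in RAID-Z2 give roughly 4 x 8 TB = 32 TB, i.e. about 20% less usable space in exchange for surviving a second disk failure in the same vdev.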

Besides that, we still have backups of course, and we are working on a separate NAS with the full history on it as a last resort in case of a deep failure of our full history node.

I’m not going to explain how to install the operating system; this should be self-explanatory.

Next, create the 5th partition that we are going to use for our ZFS pool:

root@host:~# sgdisk --new=5:0:0 --typecode=5:BF00 /dev/sdc
root@host:~# sgdisk --new=5:0:0 --typecode=5:BF00 /dev/sdd
root@host:~# sgdisk --new=5:0:0 --typecode=5:BF00 /dev/sde
......
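Since every data disk needs the same treatment, a small loop saves some typing. This is just a sketch: the device range below is illustrative and has to match your own chassis layout.

root@host:~# for disk in /dev/sd{c..k}; do sgdisk --new=5:0:0 --typecode=5:BF00 "$disk"; done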

Create the ZFS pool:

root@host:~# zpool create -f -d -m none -o ashift=12 -O atime=off -o feature@lz4_compress=enabled mypool raidz /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5

Adding another vdev to the pool later is pretty much the same idea, except you use add instead of create (and only the ashift property needs to be passed again, since the other options were already set when the pool was created):

root@host:~# zpool add -f -o ashift=12 mypool raidz /dev/sdg5 /dev/sdh5 /dev/sdi5 /dev/sdc5 /dev/sdj5 /dev/sdk5

The next command adds a hot spare:

root@host:~# zpool add mypool spare /dev/sdl5
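To verify, the new disk should now show up in a separate "spares" section of zpool status. If a disk ever faults and the spare does not kick in automatically, it can be attached by hand with zpool replace (the failed device below is hypothetical):

root@host:~# zpool status mypool                        # look for the disk under "spares"
root@host:~# zpool replace mypool /dev/sdd5 /dev/sdl5   # manually swap a failed disk for the spare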

Mount the pool:

root@host:~# zfs set mountpoint=/data mypool
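A quick sanity check that the pool ended up where rippled will expect it:

root@host:~# zfs get mountpoint,mounted mypool   # should report /data and yes
root@host:~# df -h /data                         # free space as seen by the OS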

Once everything is ready and configured at the disk level, it’s time to install rippled. For this part you can refer to the official documentation of XRPL: https://XRPL.org/install-rippled.html
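Assuming you followed those docs and added the Ripple package repository, the remaining steps on a Debian/Ubuntu system look roughly like this (package and service names are the ones used by the official packages):

root@host:~# apt-get update && apt-get install -y rippled   # the repo must already be configured per the XRPL docs
root@host:~# systemctl enable rippled                       # start/restart it only once the config below is in place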

Full history config:

[server]
port_rpc_admin_local
port_ws_public
port_peer
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_ws_public]
port = 80
ip = 0.0.0.0
protocol = ws
[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer
[node_size]
huge
[node_db]
type=NuDB
path=/data/db/nudb
advisory_delete=0
[ledger_history]
full
[database_path]
/data/db
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
[validators_file]
validators.txt
[ips_fixed]
r.ripple.com 51235
zaphod.alloy.ee 51235
sahyadri.isrdc.in 51235
[peer_private]
1
[rpc_startup]
{ "command": "log_level", "severity": "info" }[ssl_verify]
1
[node_seed]
xxxxxxxxxxxxxxxxx
[cluster_nodes]
xxxxxxxxxxxxxxxxx st02.public
xxxxxxxxxxxxxxxxx val01.internal
xxxxxxxxxxxxxxxxx val02.internal

As you can see, it’s a pretty standard configuration file for the full history node. Only the [cluster_nodes] stanza is new; for more information see https://XRPL.org/cluster-rippled-servers.html#cluster-rippled-servers.
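Per that clustering documentation, each server in the cluster gets its own key pair: generate it once per server, put the resulting seed in that server's [node_seed] and the matching public key in the [cluster_nodes] stanza of every other member, optionally followed by a nickname such as val01.internal. A quick sketch, assuming the default install path:

root@host:~# /opt/ripple/bin/rippled validation_create   # prints a validation_seed (node_seed) and validation_public_key (cluster_nodes entry)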

Note that we have enabled port_ws_public, but this is not reachable from the outside world, as we have firewall rules in place to prevent it.
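Purely as an illustration (your firewall tooling and internal subnet will differ), with ufw that could look roughly like this, with rules evaluated top to bottom:

root@host:~# ufw allow 51235/tcp                                   # peer protocol, open to the world
root@host:~# ufw allow from 10.0.0.0/24 to any port 80 proto tcp   # websocket only from the internal network
root@host:~# ufw deny 80/tcp                                       # everybody else is blocked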

Now that we’ve configured our full history node, we can follow the same steps as above for our other nodes, using the validator config below. The disk setup can differ depending on the needs (validators don’t need a huge amount of storage).

Validator xx config:

[server]
port_rpc_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[node_size]
huge
[node_db]
type=NuDB
path=/data/db/nudb
advisory_delete=0
online_delete=256
[ledger_history]
256
[database_path]
/data/db
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
[rpc_startup]
{ "command": "log_level", "severity": "info" }
[ssl_verify]
1
[ips_fixed]
<internal ip> 51235
<internal ip> 51235
[peer_private]
1
[validator_token]
xxxxxxxxxxxxxxxxx
[node_seed]
xxxxxxxxxxxxxxxxx
[cluster_nodes]
xxxxxxxxxxxxxxxxx fh01.public
xxxxxxxxxxxxxxxxx st02.public
xxxxxxxxxxxxxxxxx val02.internal

As you can see there are small differences, the most obvious one being the validator_token. Also note that we use peer_private on every node for additional security. You can adjust the online_delete=256 value if you want more ledger versions to remain in the database (especially for validator nodes); the server periodically deletes any ledger versions older than this number. 256 is the minimum, and if you have a slower CPU or less performant disks it’s recommended to keep it at the minimum. We are testing higher values, but our initial setup starts at the minimum settings to find the optimum value.
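If we do settle on a higher value, the two settings move together: [ledger_history] must not be larger than online_delete, or rippled refuses to start. A sketch of what a roomier validator setting could look like (2000 is just an illustrative number, not our production value):

[node_db]
type=NuDB
path=/data/db/nudb
advisory_delete=0
online_delete=2000
[ledger_history]
2000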

Speaking of security, our validator nodes only make outgoing connections, and only to a set of predefined endpoints. This is the recommended setup if you want a secure validator. If you ever plan to become a trusted validator (like we are doing), see the following guides and best practices:

https://foundation.XRPL.org/unl/

https://rabbitkick.club/validator.html

As stated by @XRPLLabs, the minimum hardware requirements can differ from the specs given by https://XRPL.org/, but keep in mind that you need a very stable network and some decent SSDs (with fault tolerance), and make sure there’s a decent monitoring system running on your node(s).
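Monitoring can start as simple as polling the admin API and alerting on the results. The sketch below assumes jq is installed and uses the default install path; the fields shown are just examples of useful health indicators.

root@host:~# /opt/ripple/bin/rippled server_info | jq -r '.result.info.server_state'   # expect "full" (or "proposing" on a validator)
root@host:~# /opt/ripple/bin/rippled server_info | jq -r '.result.info.peers'          # number of connected peers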

We look forward to your comments on our setup on Twitter: @xSPECTAR
