
Load Balancing Freemium Ethereum Endpoints Pt.3

First Things First: Some Warnings

When we left off in Part 2 we had a fully configured and functioning Dshackle instance with load balancing across local nodes first, and then Freemium and Public Endpoints as fallback nodes.

We also left off with a big WARNING, so let’s address that first.

WARNING: Some of the methods exposed on your Local Nodes can be dangerous. Don’t expose them to the internet, or disable all of the methods that might be dangerous. I’ll touch more on this in Part 3 if you want to know the details!

Let’s start with Geth, because in my opinion it is the most dangerous of all of the clients to expose publicly, especially if you expose the debug namespace.

The Dangers of Exposing Geth

The first problem with exposing Geth to the public is that there is no way to filter which functions users can or cannot call within a given namespace. You either enable the entire namespace, or you disable it. This isn’t a problem if we stick Dshackle in front of it, but it can certainly be a problem if users can hit Geth directly.

Pretty much all other clients have some kind of built-in method filter (e.g. Nethermind, OpenEthereum, and Turbo-Geth). Nethermind even goes a step further and protects your node from excessively large or complex debug calls by implementing an internal (tunable) timeout. Geth doesn’t do any of this.

https://github.com/ethereum/go-ethereum/issues/21963

In fact, Geth (and Turbo-Geth) will happily consume all of your RAM until the process is killed by the OS with an OOM error.

https://github.com/ledgerwatch/turbo-geth/issues/1458
https://github.com/ethereum/go-ethereum/issues/22244

However, Nethermind is not without its faults either: it’s impossible to get a debug response larger than 2GB out of it. So, you win some, you lose some.

The 2GB limit above doesn’t apply if you have response buffering turned off!

Dealing with Dapps and Browsers with NGINX

With those warnings out of the way, let’s talk about some of Dshackle’s shortcomings.

HTTP OPTIONS and HTTP GET

The first thing to know about Dshackle is that it has no support for handling HTTP OPTIONS requests, which many browser-based apps send first (as a CORS preflight) before making any POST requests for data.

It also doesn’t do any kind of header manipulation, so headers like Access-Control-Allow-Origin and Access-Control-Allow-Methods are essentially a non-starter with just Dshackle.

WebSocket Support

I hesitate to even list this, but many people expect WebSocket support, despite the fact that support for it varies widely among clients. Either way, Dshackle doesn’t accept WebSocket connections. It can talk to nodes over WebSocket and subscribe to new blocks/headers, etc., but it doesn’t support clients connecting to it via WebSocket.

This is what happens when everyone just assumes you’re using Geth…

Poor Error Handling

One of the things people like to use their nodes for is debugging and replaying transactions with eth_call – the problem is that, at the time of writing, the clients all handle EVM errors inside of eth_call differently and return different error codes.

Maybe it’ll get fixed at some point, and there’s also a JSON-RPC standards project in the works – so maybe we’ll get standardized error codes some day (heh).

Session Stickiness for Filters

In the JSON-RPC API there are some methods that allow you to create “filters”, which give you back a filter ID that you can use on subsequent calls to reference that filter. Unfortunately, Dshackle has no support for stickiness, which means you have no guarantee of hitting the same node you just created a filter on.

The one exception is that you could theoretically configure ONLY 1 node in your Dshackle configuration to support the “filter” methods, and then configure a “fallback” node in case that one goes down. However, that will result in ALL filter traffic going to 1 endpoint…YMMV.
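
If you end up putting NGINX in front of your nodes (as we’ll do below), its upstream module gives you the same primary/fallback pattern via the backup parameter, so the filter traffic has somewhere to go if the primary dies. A minimal sketch, with hypothetical host names:

upstream filternodes {
  # all filter traffic goes to filter01 while it's healthy
  server filter01:8545;
  # filter02 only receives traffic if filter01 is marked down
  server filter02:8545 backup;
}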

Overcoming Some Issues!

The majority of the issues listed above are really issues that developers and people doing blockchain analysis have to deal with; most users are not going to be sending reverting eth_call requests or creating log filters. Still, I’ll talk about some of the ways you can get around them.

Enter NGINX

I’m certainly not going to go into depth about how to set up and configure NGINX itself; it’s one of the most popular pieces of web software in existence, you got this. What I will talk about is how I configure NGINX to talk to Dshackle and other nodes.

NGINX for Dshackle

Let’s start with the server config:

server {
  listen 80;
  listen [::]:80;
  listen 443 ssl;
  listen [::]:443 ssl;
  
  server_name dshackle.chasewright.com;
  
  ssl_certificate /etc/nginx/ssl/dshackle.chasewright.com.crt;
  ssl_certificate_key /etc/nginx/ssl/dshackle.chasewright.com.key;
  
  location / {
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'Content-Type';
      add_header 'Cache-Control' 'no-cache';
      add_header 'Connection' 'close';
      add_header 'Content-Type' 'text/plain; charset=utf-8';
      add_header 'Content-Length' 0;
      return 200;
    }
    if ($request_method = 'GET' ) {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'Content-Type';
      add_header 'Cache-Control' 'no-cache';
      add_header 'Connection' 'close';
      add_header 'Content-Type' 'text/plain; charset=utf-8';
      add_header 'Content-Length' 0;
      return 200;
    }
    if ($request_method = 'POST') {
      add_header 'Access-Control-Allow-Origin' '*';
    }
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_pass http://dshackle:8080/eth;
  }
}

So, breaking this down.

  • We’re going to listen on ports 80 and 443.
  • We’re going to serve an SSL/TLS certificate on 443 (if you’d rather force HTTPS everywhere, see the optional redirect snippet after this list).
  • We’re going to handle HTTP OPTIONS requests directly
  • We’re going to handle HTTP GET requests directly
  • We’re going to add the Access-Control-Allow-Origin header to POST requests
  • We’re going to set the Host header on proxied requests sent upstream
  • We’re going to pass requests to Dshackle on port 8080 at the /eth route
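
If you’d rather not serve plain HTTP at all, you can swap the port 80 listeners for a small redirect-only server block instead. This is optional and just a sketch:

server {
  listen 80;
  listen [::]:80;
  server_name dshackle.chasewright.com;
  # send everything over to the HTTPS listener
  return 301 https://$host$request_uri;
}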

There isn’t a whole lot of reason for us to encrypt between NGINX and Dshackle, assuming they’re on the same host or local network, but you certainly can if you have the certificate infrastructure set up. I’m not going to go into details about how to manage SSL/TLS certificates in this post. Most of this data is public blockchain data anyway 🙂
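
If you do decide to encrypt that hop, NGINX can originate TLS to the upstream with its proxy_ssl_* directives. Here’s a minimal sketch of what the location block might look like, assuming something in front of Dshackle terminates TLS on a hypothetical port 8443 and you have an internal CA:

  location / {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # verify the upstream's certificate against your internal CA
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/ssl/internal-ca.crt;
    proxy_ssl_server_name on;
    proxy_pass https://dshackle:8443/eth;
  }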

Happy Browser!

If you’ve made it this far, you should be able to point just about anything to your NGINX over HTTPS and have a happily working Dapp.

But…I want WebSocket…

Well, Dshackle can’t help you here…but NGINX can! Let’s look at a load balanced setup for NGINX that supports WebSockets and has some basic failover capabilities.

upstream gethnodes {
  least_conn;
  server geth01:8545;
  server geth02:8545;
}
server {
  listen 80;
  listen [::]:80;
  listen 443 ssl;
  listen [::]:443 ssl;
    
  server_name websocket.chasewright.com;   
  
  ssl_certificate /etc/nginx/ssl/websocket.chasewright.com.crt;
  ssl_certificate_key /etc/nginx/ssl/websocket.chasewright.com.key;
  location / {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_pass http://gethnodes/;
  }
}

Yeah, that’s about it. This is a basic config that will let you run a WebSocket connection load balanced between 2 Geth nodes. Remember though, this is a direct connection to the nodes, so it’s kinda risky to support this. But it does also solve the Error Handling and Session Stickiness problems, since each WebSocket connection stays pinned to a single upstream node for its lifetime.
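
One gotcha with proxied WebSockets: by default NGINX closes a proxied connection that sits idle for 60 seconds (the proxy_read_timeout default), which can silently kill quiet subscriptions. If that bites you, bump the timeouts inside the WebSocket location; the values here are just an example:

  location / {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    # keep idle subscriptions alive longer than the 60s default
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_pass http://gethnodes/;
  }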

But let’s say you’re not using Geth, you’re using OpenEthereum or Nethermind, AND you have properly configured JSON-RPC method filters AND you want to support both WebSocket and JSON-RPC as well as session stickiness. How do you go about doing that?

upstream oenodes {
  ip_hash;
  server oe01:8545;
  server oe02:8545;
}

server {
  listen 80;
  listen [::]:80;
  listen 443 ssl;
  listen [::]:443 ssl;
    
  server_name oe.chasewright.com;   
  
  ssl_certificate /etc/nginx/ssl/oe.chasewright.com.crt;
  ssl_certificate_key /etc/nginx/ssl/oe.chasewright.com.key;

  if ($http_upgrade = "websocket") {
    rewrite ^.*$ /websocket;
  }

  location /websocket {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_pass http://oenodes/;
  }
  
  location / {
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'Content-Type';
      add_header 'Cache-Control' 'no-cache';
      add_header 'Connection' 'close';
      add_header 'Content-Type' 'text/plain; charset=utf-8';
      add_header 'Content-Length' 0;
      return 200;
    }
    if ($request_method = 'GET' ) {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'Content-Type';
      add_header 'Cache-Control' 'no-cache';
      add_header 'Connection' 'close';
      add_header 'Content-Type' 'text/plain; charset=utf-8';
      add_header 'Content-Length' 0;
      return 200;
    }
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_pass http://oenodes/;
  }
}

There, now you have two routes: root (/) for JSON-RPC and /websocket for WebSocket. Both will be load balanced across the same nodes while retaining ip_hash stickiness.

You should now be able to combine all of these configs into a setup that works for you, supporting direct connections and WebSockets when you need it, and proxied traffic through Dshackle when you don’t. Hopefully, all the pieces are here for you to lego your own solution.

I think that about covers my ranting for now. If I get any feedback or questions on this I’ll try to expand it into a Part 4. Feel free to reach out to me on Twitter @MysticRyuujin or come talk to me on Discord in the ArchiveNode.io channel.
