Building a Banana Pi server cluster

I wanted to build a web site about Banana Pis and host it on a Banana Pi.  Because I want the site to handle as much traffic as possible, I'm using a cluster of Banana Pi servers.  I built the cluster by setting up the site on one Banana Pi and then copying it to three other Banana Pi servers.  You can see the site here:

I used Nginx, but Apache will also work well.  I haven't tested Pyplate with Nginx on Ubuntu, but Apache should work well.  I had a few issues setting up Nginx and uWSGI on Ubuntu.

I'm using a CMS that I wrote myself called Pyplate.  It's written in Python, and it's very lightweight.  This command installs Pyplate:

curl | sudo bash

If you want to use Apache, you would use this command instead:

curl | sudo bash

After this, you need to copy one of the sample server configuration files into your server's configuration directory.  See this link for more detailed information on setting up Pyplate:
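The copy step can be sketched roughly as below.  The sample filename and the Nginx paths are assumptions (check the Pyplate documentation for the real names); to keep the sketch runnable without root, it works in a temp directory standing in for /etc/nginx:

```shell
# Hedged sketch of the "copy a sample config" step. The sample filename
# (pyplate_nginx.conf) is an assumption. A temp dir stands in for
# /etc/nginx so this runs without root.
TMP=$(mktemp -d)
mkdir -p "$TMP/sites-available" "$TMP/sites-enabled"

# Stand-in for the sample file shipped with Pyplate:
printf 'server { listen 80; }\n' > "$TMP/pyplate_nginx.conf"

# On a real server: sudo cp <sample> /etc/nginx/sites-available/pyplate
cp "$TMP/pyplate_nginx.conf" "$TMP/sites-available/pyplate"

# ...and enable it by symlinking into sites-enabled:
ln -s "$TMP/sites-available/pyplate" "$TMP/sites-enabled/pyplate"
```

On a real Debian-style server the last two commands would target /etc/nginx/sites-available and /etc/nginx/sites-enabled, followed by a reload of Nginx.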

Copying the SD card

Once I had one server set up, I shut it down and connected the SD card to my Linux PC so that I could create a back-up image of the card.

sudo dd if=/dev/sde of=/home/steve/banana_pi_server.img

I used this command to write the image to three other cards:

sudo dd if=/home/steve/banana_pi_server.img of=/dev/sde
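dd gives no feedback on whether the write actually succeeded, so it can be worth comparing the card against the image afterwards.  A small sketch, demonstrated on temp files so it runs anywhere; the filenames below just stand in for the image and the card device:

```shell
# Hedged sketch: verify that a written card matches the image.
# Temp files stand in for the image and /dev/sde.
TMP=$(mktemp -d)
printf 'pretend image contents' > "$TMP/banana_pi_server.img"
cp "$TMP/banana_pi_server.img" "$TMP/card"   # stands in for dd to the card

# cmp -s exits 0 only when the two byte streams are identical
if cmp -s "$TMP/banana_pi_server.img" "$TMP/card"; then
    VERIFIED=yes
else
    VERIFIED=no
fi
```

On real hardware the card is larger than the image, so bound the read to the image's length, e.g. `head -c "$(stat -c%s banana_pi_server.img)" /dev/sde | cmp - banana_pi_server.img`.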

I then used each card to boot up a Pi, and I changed the host name and IP address of each Pi.  I changed some other details as well, like usernames and passwords.
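The per-node edits can be sketched as below.  The hostname is an assumption (use whatever naming scheme you like), and the sketch operates on copies of the files so it runs without root; on each real Pi you would edit /etc/hostname and /etc/hosts in place with sudo, plus the static IP in /etc/network/interfaces on a Debian-style image:

```shell
# Hedged sketch of the per-node edits, on copies of the files.
# NEW_HOST is a made-up example name.
NEW_HOST=bpi-node1
TMP=$(mktemp -d)

# /etc/hostname holds just the machine's name
echo "old-hostname" > "$TMP/hostname"        # stand-in for the cloned value
echo "$NEW_HOST" > "$TMP/hostname"

# Debian-based systems also map 127.0.1.1 to the hostname in /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 old-hostname\n' > "$TMP/hosts"
sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $NEW_HOST/" "$TMP/hosts"
```

A reboot (or `sudo hostname $NEW_HOST`) makes the new name take effect on the real machine.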

Set up rsync and ssh

The Pyplate CMS caches static copies of the site's pages in the server's web root directory.  To make each node serve the site, I don't need to sync the entire CMS, which contains scripts and an SQLite database; I only need to sync the cache, which can be done very easily with rsync.  I have set up rsync to use ssh to connect the master server to the worker nodes.

I created ssh keys so that ssh connections can be made without a password prompt, which streamlines the synchronization process.  I generated the keys on the master node with this command:

ssh-keygen -t rsa -f /usr/share/pyplate/.ssh/id_rsa

I transferred the public key to the other nodes:

cat ./.ssh/ | ssh node0@ 'cat >> ./.ssh/authorized_keys'
cat ./.ssh/ | ssh node1@ 'cat >> ./.ssh/authorized_keys'
cat ./.ssh/ | ssh node2@ 'cat >> ./.ssh/authorized_keys'

Now the worker nodes can be synchronized with the master using these commands:

rsync -a --no-perms -e "ssh -i /usr/share/pyplate/.ssh/id_rsa" /var/www/. node0@
rsync -a --no-perms -e "ssh -i /usr/share/pyplate/.ssh/id_rsa" /var/www/. node1@
rsync -a --no-perms -e "ssh -i /usr/share/pyplate/.ssh/id_rsa" /var/www/. node2@
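Repeating the rsync command for every node gets unwieldy as the cluster grows, so a small loop can generate the same commands.  A sketch, where the user@host pairs and the destination path are assumptions (substitute your workers' real addresses); the echo makes it a dry run that just prints the commands, so remove it to actually synchronize:

```shell
# Sketch: build the per-node rsync commands in a loop.
# The IPs and destination path below are made-up examples.
NODES="node0@192.168.0.101 node1@192.168.0.102 node2@192.168.0.103"
KEY=/usr/share/pyplate/.ssh/id_rsa
DEST=/var/www

CMDS=$(for n in $NODES; do
    echo "rsync -a --no-perms -e \"ssh -i $KEY\" /var/www/. $n:$DEST"
done)
echo "$CMDS"
```

This could be dropped into a script on the master node and run from cron after publishing changes.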

Configuring the load balancer

I already have a load balancer set up to serve traffic to my Raspberry Pi cluster, so it was easy to modify this to serve traffic to my new Banana Pi cluster.  

The load balancer uses Apache, so I created another virtual host file and saved it in /etc/apache2/sites-available/banoffeepiserver.conf. In Apache, each virtual host file represents a virtual server.  The existing virtual host file carries on serving traffic to the old site.  In the new virtual host file, I used the ServerName directive so that Apache sends all traffic for the new site's domain to the correct virtual server.

This is what the new virtual host file looks like:

<VirtualHost *:80>
    ServerName
    ServerAlias *
    ProxyRequests Off

    <Proxy balancer://bpicluster>
        BalancerMember
        BalancerMember
        BalancerMember
        BalancerMember
        AllowOverride None
        Order allow,deny
        allow from all
        ProxySet lbmethod=byrequests
    </Proxy>

    <Location /bpi-balancer-manager>
        SetHandler balancer-manager
        Order allow,deny
        allow from 192.168.0
    </Location>

    <Location /admin/*>
        Order allow,deny
        allow from None
    </Location>

    ProxyPass /bpi-balancer-manager !
    ProxyPass / balancer://bpicluster/

    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

This file contains a definition of the cluster and its members' IP addresses.  The ProxyPass directive tells Apache to pass requests on to the cluster.
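This virtual host only works if Apache's proxy modules are enabled.  A dry-run sketch listing the commands a Debian-style system would need (echo prints them rather than running them, since they need root; lbmethod_byrequests is only a separate module on Apache 2.4 and later):

```shell
# Sketch: modules the balancer vhost depends on, printed as a dry run.
MODULES="proxy proxy_http proxy_balancer lbmethod_byrequests"

CMDS=$(
    for m in $MODULES; do
        echo "sudo a2enmod $m"
    done
    echo "sudo service apache2 restart"
)
echo "$CMDS"
```

After enabling the modules and the new site, the restart makes Apache pick up the balancer configuration.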

I already have port forwarding set up in my router, so I didn't need to make any changes to it.  It's set up to forward HTTP traffic to the front end of the load balancer.

Then I bought a domain name and pointed it to my WAN IP address. I launched the site four days ago, and it seems pretty stable.  Page load times are good, and the cluster seems to handle spikes in traffic reasonably well.