Setting Up Cassandra Cluster in AWS

Apache Cassandra is a NoSQL database that allows for easy horizontal scaling, using a consistent hashing mechanism. Seven years ago I tried it and decided not to use it for a side project of mine because it was too new. Things are different now: Cassandra is well established, there's a company behind it (DataStax), and there are a lot more tools, documentation and community support. So once again, I decided to try Cassandra.

This time I need it to run in a cluster on AWS, so I went on to set up such a cluster. Googling how to do it gives several interesting results, like this, this and this, but they are either incomplete, outdated, or have too many irrelevant details, so they are only of moderate help.

My goal is to use CloudFormation (or Terraform potentially) to launch a stack which has a Cassandra auto-scaling group (in a single region) that can grow as easily as increasing the number of nodes in the group.

Also, in order to have the web application connect to Cassandra without hardcoding the node IPs, I wanted to have a load balancer in front of all Cassandra nodes that does the round-robin for me. The alternative would be client-side round-robin, but that would mean some extra complexity on the client, which seems avoidable with a load balancer in front of the Cassandra auto-scaling group.

The relevant bits from my CloudFormation JSON can be seen here; an abbreviated sketch follows the list below. What it does:

  • Sets up 3 private subnets (1 per availability zone in the eu-west region)
  • Creates a security group which opens the ports Cassandra needs: client connections (9042) and inter-node gossip (7000/7001). Note that the ports are only accessible from within the VPC; no external connection is allowed. SSH goes only through a bastion host.
  • Defines a TCP load balancer for port 9042 where all clients will connect. The load balancer requires a so-called “Target group” which is defined as well.
  • Configures an auto-scaling group with a pre-configured number of nodes. The auto-scaling group has a reference to the "target group", so that the load balancer always sees all nodes in the auto-scaling group.
  • Each node in the auto-scaling group is identical based on a launch configuration. The launch configuration runs a few scripts on initialization. These scripts will be run for every node – either initially, or in case a node dies and another one is spawned in its place, or when the cluster has to grow. The scripts are fetched from S3, where you can publish them (and version them) either manually, or with an automated process.
  • Note: this does not configure specific EBS volumes and in reality you may need to configure and attach them, if the instance storage is insufficient. Don’t worry about nodes dying, though, as data is safely replicated.
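For orientation, here is a heavily abbreviated sketch of what such a template might contain. The resource names, the VpcId/subnet/AMI/bucket parameters, the VPC CIDR and the instance type are illustrative placeholders rather than the actual template, and the IAM instance profile needed for the S3 fetch and the AWS API calls is omitted:

```json
{
  "Resources": {
    "CassandraSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Cassandra client (9042) and gossip (7000/7001) ports, VPC-internal only",
        "VpcId": { "Ref": "VpcId" },
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": 9042, "ToPort": 9042, "CidrIp": "10.0.0.0/16" },
          { "IpProtocol": "tcp", "FromPort": 7000, "ToPort": 7001, "CidrIp": "10.0.0.0/16" }
        ]
      }
    },
    "CassandraTargetGroup": {
      "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
      "Properties": { "Port": 9042, "Protocol": "TCP", "VpcId": { "Ref": "VpcId" } }
    },
    "CassandraLoadBalancer": {
      "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
      "Properties": {
        "Type": "network",
        "Scheme": "internal",
        "Subnets": [ { "Ref": "PrivateSubnetA" }, { "Ref": "PrivateSubnetB" }, { "Ref": "PrivateSubnetC" } ]
      }
    },
    "CassandraListener": {
      "Type": "AWS::ElasticLoadBalancingV2::Listener",
      "Properties": {
        "LoadBalancerArn": { "Ref": "CassandraLoadBalancer" },
        "Port": 9042,
        "Protocol": "TCP",
        "DefaultActions": [ { "Type": "forward", "TargetGroupArn": { "Ref": "CassandraTargetGroup" } } ]
      }
    },
    "CassandraLaunchConfiguration": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "ImageId": { "Ref": "BaseAmiId" },
        "InstanceType": "m4.large",
        "SecurityGroups": [ { "Ref": "CassandraSecurityGroup" } ],
        "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
          "#!/bin/bash\n",
          "aws s3 cp s3://", { "Ref": "ScriptsBucket" }, "/setup-cassandra.sh /tmp/setup-cassandra.sh\n",
          "bash /tmp/setup-cassandra.sh\n"
        ] ] } }
      }
    },
    "CassandraAutoScalingGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "MinSize": "3",
        "MaxSize": "6",
        "DesiredCapacity": "3",
        "VPCZoneIdentifier": [ { "Ref": "PrivateSubnetA" }, { "Ref": "PrivateSubnetB" }, { "Ref": "PrivateSubnetC" } ],
        "LaunchConfigurationName": { "Ref": "CassandraLaunchConfiguration" },
        "TargetGroupARNs": [ { "Ref": "CassandraTargetGroup" } ]
      }
    }
  }
}
```

The key bit of wiring is the TargetGroupARNs property of the auto-scaling group: it keeps the load balancer's target group in sync with whatever instances the group currently contains.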

That was the easy part: a bunch of AWS resources and port configurations. The Cassandra-specific setup is a bit harder, as it requires an understanding of how Cassandra functions.

The two scripts are setup-cassandra.sh and update-cassandra-cluster-config.py: bash for setting up the machine, and Python for the Cassandra-specific configuration. Instead of the bash script one could use a pre-built AMI (image), e.g. created with Packer, but since only two pieces of software are installed, I thought maintaining AMIs would be a bit of an overhead.

The bash script can be seen here. It simply installs Java 8 and the latest Cassandra, runs the python script, starts the Cassandra service and creates (if needed) a keyspace with a proper replication configuration. A few notes here: the cassandra.yaml.template could be supplied via the CloudFormation script instead of being fetched via bash (which requires passing the bucket name); you could also have it fetched in the python script itself, it's a matter of preference. Cassandra is not configured to use SSL, which is generally a bad idea, but the SSL configuration is out of scope of this basic setup. Finally, the script waits for the Cassandra process to start (using a while/sleep loop) and then creates the keyspace if needed. The keyspace (=database) has to be created with a NetworkTopologyStrategy, and the number of replicas for the particular datacenter (=AWS region) has to be configured. The value is 3, for the 3 availability zones where we'll have nodes. That means there's a copy in each AZ (each AZ is seen as a "rack", which is exactly what it is).
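A minimal sketch of what such a setup script might do, assuming the scripts and the template live in an S3 bucket referenced via a SCRIPTS_BUCKET variable and that the keyspace is called my_keyspace (both are illustrative placeholders, as are the package and path names):

```bash
#!/bin/bash
# Illustrative sketch only; the real setup-cassandra.sh differs in details.

# Install Java 8 and Cassandra (repository setup omitted for brevity)
yum install -y java-1.8.0-openjdk cassandra

# Fetch the configuration template and the python configuration script from S3
aws s3 cp "s3://${SCRIPTS_BUCKET}/cassandra.yaml.template" /etc/cassandra/conf/cassandra.yaml.template
aws s3 cp "s3://${SCRIPTS_BUCKET}/update-cassandra-cluster-config.py" /tmp/update-cassandra-cluster-config.py

# Let the python script fill in listen_address/rpc_address and the seed node
python /tmp/update-cassandra-cluster-config.py

# Start Cassandra and wait until it accepts CQL connections on 9042
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
service cassandra start
until cqlsh "$PRIVATE_IP" -e "DESCRIBE KEYSPACES" > /dev/null 2>&1; do
    sleep 5
done

# Create the keyspace with NetworkTopologyStrategy and 3 replicas in this
# datacenter (with Ec2Snitch the datacenter name is the region without the
# trailing digit, e.g. "eu-west" for eu-west-1)
cqlsh "$PRIVATE_IP" -e "CREATE KEYSPACE IF NOT EXISTS my_keyspace \
    WITH replication = {'class': 'NetworkTopologyStrategy', 'eu-west': 3};"
```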

The python script does some very important configuration; without it the cluster won't work. (I don't normally work with Python, so feel free to criticize my Python code.) The script does the following (a minimal sketch follows the list):

  • Gets the current autoscaling group details (using AWS EC2 APIs)
  • Sorts the instances by launch time
  • Fetches the first instance in the group in order to assign it as seed node
  • Sets the seed node in the configuration file (by replacing a placeholder)
  • Sets the listen_address (and therefore rpc_address) to the private IP of the node in order to allow Cassandra to listen for incoming connections
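A minimal sketch of such a script, assuming boto3 is available, the instance role allows the describe calls, the instance metadata service is reachable without a token, and the group and path names below are hypothetical placeholders:

```python
# Illustrative sketch; not the actual update-cassandra-cluster-config.py.
import boto3
import requests

CONFIG_TEMPLATE = "/etc/cassandra/conf/cassandra.yaml.template"  # assumed path
CONFIG_FILE = "/etc/cassandra/conf/cassandra.yaml"
ASG_NAME = "cassandra-asg"   # hypothetical auto-scaling group name
REGION = "eu-west-1"

# Private IP of the current node, from the EC2 instance metadata service
private_ip = requests.get("http://169.254.169.254/latest/meta-data/local-ipv4").text

autoscaling = boto3.client("autoscaling", region_name=REGION)
ec2 = boto3.client("ec2", region_name=REGION)

# Get all instances currently in the auto-scaling group
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME])["AutoScalingGroups"][0]
instance_ids = [i["InstanceId"] for i in group["Instances"]]

reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
instances = [i for r in reservations for i in r["Instances"]]

# Sort by launch time; the oldest instance is designated as the seed node
instances.sort(key=lambda i: i["LaunchTime"])
seed_ip = instances[0]["PrivateIpAddress"]

# Substitute the placeholders in the template and write the final cassandra.yaml
with open(CONFIG_TEMPLATE) as f:
    config = f.read()
config = config.replace("${seeds}", seed_ip).replace("${private_ip}", private_ip)
with open(CONFIG_FILE, "w") as f:
    f.write(config)
```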

Designating the seed node is important, as all cluster nodes have to join the cluster by specifying at least one seed. You can get the first two nodes instead of just one, but it shouldn’t matter. Note that the seed node is not always fixed – it’s just the oldest node in the cluster. If at some point the oldest node is terminated, each new node will use the second oldest as seed.

What I haven't shown is the cassandra.yaml.template file. It is basically a copy of the cassandra.yaml file from a standard Cassandra installation, with a few changes (an excerpt is shown after the list):

  • cluster_name is modified to match your application name. This is just for human readability; it doesn't matter what you set it to.
  • allocate_tokens_for_keyspace: your_keyspace is uncommented and the keyspace is set to match your main keyspace. This enables the new token distribution algorithm in Cassandra 3.0. It allows for evenly distributing the data across nodes.
  • endpoint_snitch: Ec2Snitch is set instead of the SimpleSnitch to make use of AWS metadata APIs. Note that this setup is for a single region. For multi-region there's another snitch and some additional complications of exposing ports and changing the broadcast address.
  • as mentioned above, ${private_ip} and ${seeds} placeholders are placed in the appropriate places (listen_address and rpc_address for the IP) in order to allow substitution.
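The relevant lines of such a template might look roughly like this (an illustrative excerpt, with my_keyspace and the cluster name as placeholders, not the full file):

```yaml
cluster_name: 'MyApplication'

# even token distribution for the main keyspace (Cassandra 3.0+)
allocate_tokens_for_keyspace: my_keyspace

# AWS-aware snitch for a single-region setup
endpoint_snitch: Ec2Snitch

# placeholders substituted by update-cassandra-cluster-config.py
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "${seeds}"
listen_address: ${private_ip}
rpc_address: ${private_ip}
```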

This lets you run a Cassandra cluster as part of your AWS stack, one that is auto-scalable and doesn't require any manual intervention, neither on setup nor on scaling up. Well, allegedly: there may be issues that have to be resolved once you hit real-world use cases. For clients to connect to the cluster, simply use the load balancer DNS name (you can print it to a config file on each application node).
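For example, the stack could expose that DNS name as a CloudFormation output, assuming the load balancer resource is named CassandraLoadBalancer as in the sketch above (the output name is hypothetical):

```json
"Outputs": {
  "CassandraContactPoint": {
    "Description": "DNS name the application should use to reach the Cassandra cluster",
    "Value": { "Fn::GetAtt": ["CassandraLoadBalancer", "DNSName"] }
  }
}
```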

Published on Java Code Geeks with permission by Bozhidar Bozhanov, partner at our JCG program. See the original article here: Setting Up Cassandra Cluster in AWS

Opinions expressed by Java Code Geeks contributors are their own.

Bozhidar Bozhanov

Senior Java developer, one of the top Stack Overflow users, fluent with Java and Java technology stacks: Spring, JPA, Java EE, as well as Android, Scala and any framework you throw at him. Creator of Computoser, an algorithmic music composer. Has worked on telecom projects, e-government and large-scale online recruitment and navigation platforms.