This is the manual for tlp-cluster, a provisioning tool for developers who want to benchmark and test Apache Cassandra. It assists with building Cassandra and starting instances on AWS.

If you are looking for a tool to aid in benchmarking these clusters, please see the companion project tlp-stress.

If you’re looking for tools to help manage Cassandra in production environments, please see the Reaper project and cstar.

Prerequisites

  • An AWS access key and secret. tlp-cluster uses Terraform to create and destroy instances. You will be prompted for these the first time you start tlp-cluster.

  • The access key needs permissions to create an S3 bucket as well as create SSH keys. A dedicated key pair is created by default, rather than reusing your existing keys, for security reasons.
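
The exact permissions required depend on the tlp-cluster version and on what Terraform creates on your behalf. As a rough, unofficial starting point (the action list below is an assumption, not a documented policy), an IAM policy along these lines should cover bucket and key creation plus instance management:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TlpClusterExampleOnlyNotOfficial",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:PutObject",
        "ec2:CreateKeyPair",
        "ec2:ImportKeyPair",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "ec2:CreateTags",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}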

Installation

The easiest way to get started is to use one of our prebuilt packages.

Installing a Package

Packages are available for both Debian and Red Hat based systems; use whichever your package manager supports.

The current version is 0.5.

Deb Packages

$ echo "deb https://dl.bintray.com/thelastpickle/tlp-tools-deb weezy main" | sudo tee -a /etc/apt/sources.list
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 2895100917357435
$ sudo apt update
$ sudo apt install tlp-cluster

RPM Packages

You’ll need the bintray repo set up on your machine. Create the following file as /etc/yum.repos.d/tlp-tools.repo:

[bintray-thelastpickle-tlp-tools-rpm]
name=bintray-thelastpickle-tlp-tools-rpm
baseurl=https://dl.bintray.com/thelastpickle/tlp-tools-rpm
gpgcheck=0
repo_gpgcheck=0
enabled=1

Then run the following to install:

$ sudo yum install tlp-cluster

Tarball Install

If you’re using a Mac, for now you’ll need to grab our tarball:

$ curl -L -O "https://dl.bintray.com/thelastpickle/tlp-tools-tarball/tlp-cluster-0.5.tar"
$ tar -xf tlp-cluster-0.5.tar

To get started, add the bin directory of tlp-cluster to your $PATH and, if you are working from a source checkout rather than a package, build the tool with Gradle. For example:

export PATH="$PATH:/path/to/tlp-cluster/bin"
cd /path/to/tlp-cluster
./gradlew assemble
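
After building, you can confirm the tool is on your $PATH by printing the help:

tlp-cluster -h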

Setup

The tool currently depends on shell scripts, meaning you’ll need a Mac or Linux machine to use it.

If you’ve never used the tool before, the first time you run a command you’ll be asked to supply some information. This generates a configuration file placed in $HOME/.tlp-cluster/profiles/default/settings.yaml.
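
For reference, the generated file stores your AWS credentials and defaults. The exact keys depend on the version you are running, so treat this as an illustrative sketch rather than the authoritative schema:

# settings.yaml - keys and values below are illustrative only
email: you@example.com
region: us-west-2
keyName: tlp-cluster-key
sshKeyPath: /home/you/.tlp-cluster/profiles/default/secret.pem
awsAccessKey: AKIA...
awsSecret: ...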

We currently only support the Ubuntu 16 AMI in us-west-2. We’d love a pull request to improve this!

Running the command without any arguments will print out the usage:

tlp-cluster

The help looks like this:

Usage: tlp-cluster [options] [command] [command options]
  Options:
    --help, -h
      Shows this help.
      Default: false
  Commands:
    init      Initialize this directory for tlp-cluster
      Usage: init [options] Client, Ticket, Purpose
        Options:
          --ami
            AMI
            Default: ami-51537029
          --cassandra, -c
            Number of Cassandra instances
            Default: 3
          --instance
            Instance Type
            Default: c5d.2xlarge
          --monitoring, -m
            Enable monitoring (beta)
            Default: false
          --region
            Region
            Default: us-west-2
          --stress, -s
            Number of stress instances
            Default: 0
          --up
            Start instances automatically
            Default: false

    up      Starts instances
      Usage: up [options]
        Options:
          --auto-approve, -a, --yes
            Auto approve changes
            Default: false

    start      Start cassandra on all nodes via service command
      Usage: start [options]
        Options:
          --all, -a
            Start all services on all instances. This overrides all other
            options
            Default: false
          --monitoring, -m
            Start services on monitoring instances
            Default: false

    stop      Stop cassandra on all nodes via service command
      Usage: stop [options]
        Options:
          --all, -a
            Stop all services on all instances. This overrides all other
            options
            Default: false
          --monitoring, -m
            Stop services on monitoring instances
            Default: false

    install      Install Everything
      Usage: install

    down      Shut down a cluster
      Usage: down [options]
        Options:
          --auto-approve, -a, --yes
            Auto approve changes
            Default: false

    build      Create a custom named Cassandra build from a working directory.
      Usage: build [options] Path to build
        Options:
          -n
            Name of build

    ls      List available builds
      Usage: ls

    use      Use a Cassandra build
      Usage: use [options]
        Options:
          --config, -c
            Configuration settings to change in the cassandra.yaml file
            specified in the format key:value,...
            Default: []

    clean      null
      Usage: clean

    hosts      null
      Usage: hosts


Done

Initialize a Cluster

The tool treats the current working directory as a project. To get started, run the following, substituting your client or project, ticket, and purpose.

tlp-cluster init CLIENT TICKET PURPOSE

Where:

  • CLIENT - Name of the customer, client, or project associated with the work you are doing with tlp-cluster.

  • TICKET - Jira or GitHub ticket number associated with the work you are doing with tlp-cluster.

  • PURPOSE - Reason why you are creating the cluster.
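
For example, with made-up values:

tlp-cluster init myclient TICKET-123 "testing compaction throughput"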

This will initialize the current directory with a terraform.tf.json file. You can open it in an editor to change the number of Cassandra nodes in the cluster, configure the number of stress nodes, or change the instance type. Generally speaking, though, you shouldn’t have to do this. If you find yourself doing it often, please submit an issue describing your requirements and we’ll work with you to solve the problem.

Certain instance types may not work with the AMI that’s hard-coded at the moment; we’re looking to fix and improve this.

Launch Instances

Launch your instances with the following:

tlp-cluster up

Terraform will eventually ask you to type yes and fire up your instances. Optionally, you can pass --yes to the up command and you won’t be prompted.
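
For example, to skip the prompt entirely:

tlp-cluster up --yes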

tlp-cluster will create a file, env.sh, containing aliases and bash functions that make it easier to work with your cluster. Run the following:

source env.sh

This will set up SSH, SCP, SFTP, and rsync to use a local sshConfig file, as well as some other helpful aliases.

SSH aliases for all Cassandra nodes are automatically created as c0-cN; you don’t need to type ssh yourself. For example:

c0 nodetool status

In addition, the following are defined:

  • c-all executes a command on every node in the cluster sequentially.

  • c-collect-artifacts collects metrics, nodetool output, and system information into the artifacts directory. It takes a name as a parameter. This is useful when doing performance testing to capture the state of the system at a given moment.

  • c-start starts Cassandra on all nodes.

  • c-restart restarts Cassandra on all nodes. This is not a graceful operation; to test true rolling restarts we recommend using cstar.

  • c-status executes nodetool status on cassandra0.

  • c-tpstats executes nodetool tpstats on all nodes.
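
For example, building on the aliases above (the artifact name is arbitrary, and c-all may need its command quoted depending on your shell):

c-all nodetool flush          # run nodetool flush on each node in turn
c-collect-artifacts baseline  # capture cluster state under the name "baseline"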

Installing Cassandra

The Easy Way - Use a Released Build

The easiest path forward to getting a cluster up and running is the following:

tlp-cluster use 3.11.4
tlp-cluster install
tlp-cluster start

Simply replace 3.11.4 with the release version.

The Hard Way - Use a Custom Build

To install Cassandra on your instances, you will need to follow these steps:

  1. Build the version you need and give it a build name (optional)

  2. Tell tlp-cluster to use the custom build

The first step is optional because you may already have a build in the ~/.tlp-cluster/build directory that you want to use.

If you have no builds you will need to run the following:

tlp-cluster build -n BUILD_NAME /path/to/repo

Where:

  • BUILD_NAME - Name you want to give the build e.g. my-build-cass-4.0.

  • /path/to/repo - Full path to clone of the Cassandra repository.

If you already have a build that you would like to use you can run the following:

tlp-cluster use BUILD_NAME

This will copy the binaries and configuration files to the provisioning/cassandra directory in your tlp-cluster project. The provisioning directory contains a number of files that can be used to set up your instances. Being realistic, since we do so much non-standard work (EBS vs instance store, LVM vs a filesystem directly on a device, caches, etc.) we need the ability to run arbitrary commands. This isn’t a great use case for Puppet / Chef / Salt / Ansible (yet), so we are just using easy-to-modify scripts for now.
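
Putting the two steps together, a session might look like this (the build name and repository path are illustrative):

tlp-cluster build -n my-build-cass-4.0 /path/to/cassandra
tlp-cluster use my-build-cass-4.0

Per the help output above, use also accepts --config to override cassandra.yaml settings in key:value form, for example:

tlp-cluster use my-build-cass-4.0 --config concurrent_writes:64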

If you want to install other binaries or perform other operations during provisioning of the instances, you can add them to the provisioning/cassandra directory. Note that any new scripts you add should be prefixed with a number, which determines the order in which the install.sh script executes them.
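
For example, a hypothetical extra provisioning step might look like the following. The filename and package are illustrative; the numeric prefix controls where the script runs in the sequence:

#!/usr/bin/env bash
# provisioning/cassandra/045_install_sysstat.sh (hypothetical example)
# Installs an extra package on each Ubuntu instance during provisioning.
set -e
sudo apt-get install -y sysstat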

To provision the instances run the following:

tlp-cluster install

This will push the contents of the provisioning/cassandra directory up to each of the instances you have created and install Cassandra on them. The command takes no arguments; it uses the SSH key pair that was generated when the cluster was created.

Dashboards

This documentation is very rough, and needs improvement before the 1.0 release.
If you want to do anything with the dashboards, you first need to run ./gradlew buildJsonnet to build the container.

You can regenerate the dashboards using the following:

./gradlew generateDashboards

Optionally you can add the -t flag and Gradle will watch for changes, rebuilding the dashboards when it detects them.
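
For example, to rebuild continuously while editing:

./gradlew generateDashboards -t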

The generated dashboards are written to src/main/resources/com/thelastpickle/tlpcluster/commands/origin/provisioning/monitoring/config/grafana/dashboards.

There’s a docker-compose file, docker-compose-monitoring-dev.yml, which has all the necessary configuration to start a full environment designed to make editing dashboards easier. You can start it with:

./gradlew previewDashboards

This will start Cassandra, tlp-stress, Prometheus and Grafana.

The following ports are open:

Port   Purpose

3000   Grafana web interface
9090   Prometheus web interface
9042   Cassandra native protocol (CQL)
9500   tlp-stress Prometheus metrics
9103   Cassandra Prometheus metrics

The normal ports are all mapped for you, so you can reach Prometheus on 9090 and Grafana on 3000.
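
For example, once the preview environment is up, you can browse to http://localhost:3000 for Grafana and, assuming cqlsh is installed locally, connect to Cassandra with:

cqlsh 127.0.0.1 9042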