Setting Up a Multiple Node Cluster

To run an OpenDaylight controller in a three-node cluster, do the following:

  1. Determine the three machines that will make up the cluster and copy the controller distribution to each of those machines.
  2. Unzip the controller distribution.
  3. Navigate to the <Karaf-distribution-location>/bin directory.
  4. Run Karaf: ./karaf
  5. Install the clustering feature: feature:install odl-mdsal-clustering
Note: To run clustering, you must install the odl-mdsal-clustering feature on each of your nodes.
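
As a quick sketch, the console interaction for steps 3 through 5 looks like this on each node (the opendaylight-user prompt is illustrative):

    $ cd <Karaf-distribution-location>/bin
    $ ./karaf
    opendaylight-user@root> feature:install odl-mdsal-clustering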

  6. If you are using the integration distribution of Karaf, you should also install the OpenFlow plugin flow services: feature:install odl-openflowplugin-flow-services
  7. Install the Jolokia bundle: install -s mvn:org.jolokia/jolokia-osgi/1.1.5
  8. On each node, open the following .conf files:

    • configuration/initial/akka.conf
    • configuration/initial/module-shards.conf
  9. In each configuration file, make the following changes:

    1. Find every instance of the following lines and replace 127.0.0.1 with the hostname or IP address of the machine on which the controller will run:

      netty.tcp {
        hostname = "127.0.0.1"
Note: The value you need to specify will be different for each node in the cluster.
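
For example, assuming the three machines' addresses are 10.0.0.1, 10.0.0.2, and 10.0.0.3 (placeholder values), the first node's akka.conf would contain:

      netty.tcp {
        hostname = "10.0.0.1"

while the second and third nodes would use 10.0.0.2 and 10.0.0.3, respectively.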

    2. Find the following lines and replace 127.0.0.1 with the hostname or IP address of any of the machines that will be part of the cluster:

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@127.0.0.1:2550"]
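
Continuing with the placeholder addresses above, every node could point at the first machine as the seed:

    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.0.0.1:2550"]

Akka accepts more than one entry in this list, so you can also name several of the members to avoid depending on a single seed at startup.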
    3. Find the following section and specify the role for each member node. For example, you could assign the member-1 role to the first node, member-2 to the second, and member-3 to the third.

    roles = [
      "member-1"
    ]
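
Following the example assignment above, the second node's akka.conf would read:

    roles = [
      "member-2"
    ]

and the third node's would list "member-3" instead.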
    4. Open the configuration/initial/module-shards.conf file and update the replicas listed in the following section so that they match the roles defined in this host's akka.conf file.

    replicas = [
        "member-1"
    ]
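
For a three-node cluster in which every node holds a replica of a given shard, the list would name all three roles; a sketch:

    replicas = [
        "member-1",
        "member-2",
        "member-3"
    ]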

For reference, view a sample akka.conf file here: https://gist.github.com/moizr/88f4bd4ac2b03cfa45f0

  10. On each of your cluster's nodes, start Karaf with increased memory limits:

    JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf

The OpenDaylight controller can now run in a three-node cluster. Use any of the three member nodes to access the data residing in the datastore.

Say you want to view information about the shard named member-1-shard-inventory-config on a node. To do so, query the shard's data by making the following HTTP request:

GET http://<host>:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore

Note: If prompted, enter admin as both the username and password.
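
For example, with curl and the default credentials from the note:

    curl -u admin:admin "http://<host>:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore"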

This request should return information similar to the following:

{
    "timestamp": 1410524741,
    "status": 200,
    "request": {
        "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore",
        "type": "read"
    },
    "value": {
        "ReadWriteTransactionCount": 0,
        "LastLogIndex": -1,
        "MaxNotificationMgrListenerQueueSize": 1000,
        "ReadOnlyTransactionCount": 0,
        "LastLogTerm": -1,
        "CommitIndex": -1,
        "CurrentTerm": 1,
        "FailedReadTransactionsCount": 0,
        "Leader": "member-1-shard-inventory-config",
        "ShardName": "member-1-shard-inventory-config",
        "DataStoreExecutorStats": {
            "activeThreadCount": 0,
            "largestQueueSize": 0,
            "currentThreadPoolSize": 1,
            "maxThreadPoolSize": 1,
            "totalTaskCount": 1,
            "largestThreadPoolSize": 1,
            "currentQueueSize": 0,
            "completedTaskCount": 1,
            "rejectedTaskCount": 0,
            "maxQueueSize": 5000
        },
        "FailedTransactionsCount": 0,
        "CommittedTransactionsCount": 0,
        "NotificationMgrExecutorStats": {
            "activeThreadCount": 0,
            "largestQueueSize": 0,
            "currentThreadPoolSize": 0,
            "maxThreadPoolSize": 20,
            "totalTaskCount": 0,
            "largestThreadPoolSize": 0,
            "currentQueueSize": 0,
            "completedTaskCount": 0,
            "rejectedTaskCount": 0,
            "maxQueueSize": 1000
        },
        "LastApplied": -1,
        "AbortTransactionsCount": 0,
        "WriteOnlyTransactionCount": 0,
        "LastCommittedTransactionTime": "1969-12-31 16:00:00.000",
        "RaftState": "Leader",
        "CurrentNotificationMgrListenerQueueStats": []
    }
}

The key thing here is the name of the shard. Shard names are structured as follows:

<member-name>-shard-<shard-name-as-per-configuration>-<store-type>

Here are a couple of sample shard names:

  • member-1-shard-topology-config
  • member-2-shard-default-operational
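
Putting the pattern together, a request for the second sample shard would target the operational datastore MBean type instead; a sketch, assuming the same Jolokia endpoint and the DistributedOperationalDatastore type:

    GET http://<host>:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-2-shard-default-operational,type=DistributedOperationalDatastore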
