I am programming with the Apache Hama graph API and have a scalability problem when running my program on a cluster. I expected that increasing the number of machines in the cluster would reduce the execution time, but instead the job takes longer to complete.
I run my program on a graph of 8,500 vertices. With a cluster of 2 machines the job takes 479 seconds, with 3 machines it takes 503 seconds, and with 10 machines it takes 530 seconds. Can someone tell me what I am missing?
Here is my configuration in hama-site.xml:
<configuration>
  <property>
    <name>bsp.master.address</name>
    <value>master</value>
  </property>
  <property>
    <name>bsp.system.dir</name>
    <value>/tmp/hama-hadoop/bsp/system</value>
  </property>
  <property>
    <name>bsp.local.dir</name>
    <value>/tmp/hama-hadoop/bsp/local</value>
  </property>
  <property>
    <name>hama.tmp.dir</name>
    <value>/tmp/hama-hadoop</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
  <property>
    <name>hama.zookeeper.quorum</name>
    <value>master,slave1,slave2,slave3</value>
  </property>
</configuration>
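I have not set any limit on tasks per groom server, so it is at the default. If that matters, I believe it would be capped with a fragment like the following in hama-site.xml (I am not certain this property name is the right one for my Hama version):

```xml
<property>
  <!-- assumed property: maximum number of BSP tasks each groom runs -->
  <name>bsp.tasks.maximum</name>
  <value>3</value>
</property>
```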
Contents of the groomservers file:
master
slave1
slave2
slave3
In the main method of my job, I have the following code:
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    Configuration config = new Configuration();
    FileSystem hdfs = FileSystem.get(config);

    HamaConfiguration conf = new HamaConfiguration();
    GraphJob job = new GraphJob(conf, run.class);
    job.setJobName("job");

    // Ask the cluster how many groom servers are available and start one BSP task per groom
    BSPJobClient jobClient = new BSPJobClient(conf);
    ClusterStatus cluster = jobClient.getClusterStatus(true);
    job.setNumBspTask(cluster.getGroomServers());
    ...
    job.setPartitioner(HashPartitioner.class);
    ....
    // Note: this previously read matcherJob.waitForCompletion(true); the variable is called job
    if (job.waitForCompletion(true)) {
        System.out.println("Job Finished");
    }
}