Just adding the node name to dfs.include and mapred.include is not enough. You must also update the slaves file on the namenode/jobtracker, start the datanode and tasktracker daemons on the new node, and run the refreshNodes command on the namenode and jobtracker so they learn about the new node.
Below are instructions on how to do this.
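A minimal sketch of those steps for a Hadoop 1.x cluster, run as the Hadoop user. The hostname `newnode.example.com` and the paths to the include and slaves files are assumptions; substitute your own values.

```shell
# On the namenode/jobtracker host: register the new worker.
# (Paths assume dfs.hosts and mapred.hosts point at these files.)
echo "newnode.example.com" >> /etc/hadoop/conf/dfs.include
echo "newnode.example.com" >> /etc/hadoop/conf/mapred.include

# Add the node to the slaves file so cluster-wide scripts reach it.
echo "newnode.example.com" >> /etc/hadoop/conf/slaves

# Tell the namenode and jobtracker to re-read their include files.
hadoop dfsadmin -refreshNodes
hadoop mradmin -refreshNodes

# On the new node itself: start the worker daemons.
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
```

Once the daemons are up, the new node should appear in the live-node lists of the NameNode and JobTracker web UIs.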
According to 'Hadoop: The Definitive Guide':

"The file (or files) specified by the dfs.hosts and mapred.hosts properties is different from the slaves file. The former is used by the namenode and jobtracker to determine which worker nodes may connect. The slaves file is used by the Hadoop control scripts to perform cluster-wide operations, such as restarting the cluster. It is never used by the Hadoop daemons."
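To illustrate the distinction, here is a sketch of how dfs.hosts might be wired up in hdfs-site.xml (the file path is an assumption; mapred.hosts is set analogously in mapred-site.xml):

```xml
<!-- hdfs-site.xml: the namenode only admits datanodes listed in this file -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
```

The slaves file, by contrast, has no property pointing at it; only scripts such as start-dfs.sh read it to know which hosts to SSH into.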