I am trying to get started with Stanford CoreNLP. I used some code from the Internet to understand how the coreference tool works. I tried to run the project in Eclipse, but I keep running into an out-of-memory exception. I tried to increase the heap size, but it made no difference. Any ideas on why this is happening? Is this a code-specific issue? Any help with using CoreNLP would be great.
EDIT - code added
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.util.Map;
import java.util.Properties;

public class testmain {
    public static void main(String[] args) {
        String text = "Viki is a smart boy. He knows a lot of things.";
        Annotation document = new Annotation(text);

        // dcoref also requires the lemma and ner annotators upstream.
        Properties props = new Properties();
        props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        pipeline.annotate(document);

        // Print each coreference chain, keyed by its chain id.
        Map<Integer, CorefChain> graph =
                document.get(CorefCoreAnnotations.CorefChainAnnotation.class);
        for (Map.Entry<Integer, CorefChain> entry : graph.entrySet()) {
            // Iterating entries avoids the Integer-vs-String key mismatch
            // that made graph.get(key) return null in the earlier version.
            System.out.println(entry.getKey() + " " + entry.getValue());
        }
    }
}
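For reference, a minimal standalone check of whether the -Xmx increase is actually reaching the JVM (Eclipse applies VM arguments per run configuration, so it is easy to set them on the wrong one). This is just a diagnostic sketch, not part of the project:

public class HeapCheck {
    public static void main(String[] args) {
        // Report the maximum heap the JVM was actually given.
        // If this still shows the default after setting -Xmx,
        // the VM argument is not reaching this run configuration.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMb + " MB");
    }
}

Running this from the same Eclipse run configuration used for testmain shows whether the heap setting took effect at all.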