Engine PredictionIO

The PredictionIO engine throws a StackOverflowError while iterating over the data during training:

ERROR org.apache.spark.executor.Executor [Executor task launch worker-0] - Exception in task 0.0 in stage 30.0 (TID 76)
java.lang.StackOverflowError
    at java.io.ObjectInputStream$BlockDataInputStream.readByte(ObjectInputStream.java:2774)
    at java.io.ObjectInputStream.readHandle(ObjectInputStream.java:1450)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1512)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
1 answer

There are two ways to resolve this error:

1. Simply reduce the numIterations parameter for the algorithm in the engine.json file of your prediction engine.

If this does not work, try the second solution below.
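In PredictionIO, numIterations lives under the algorithm's params in engine.json. A minimal illustrative fragment (the engine factory class, algorithm name, and other values here are placeholders, not taken from the original post):

```json
{
  "id": "default",
  "description": "Default settings",
  "engineFactory": "org.example.RecommendationEngine",
  "algorithms": [
    {
      "name": "als",
      "params": {
        "rank": 10,
        "numIterations": 5,
        "lambda": 0.01
      }
    }
  ]
}
```

Lowering numIterations shortens the RDD lineage that has to be serialized, which is what overflows the stack.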

2. Add checkpointing, which prevents the recursion used by the codebase from overflowing the stack. First create a new directory for storing checkpoints. Then tell the SparkContext to use that directory for checkpointing. Here is an example in Python:

sc.setCheckpointDir('checkpoint/')

You may also need to add checkpointing to ALS itself, but I could not determine whether it makes a difference. To set the checkpoint interval (perhaps not necessary), simply do:

ALS.checkpointInterval = 2
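The underlying cause is that each ALS iteration extends the RDD lineage, and Java serialization walks that chain recursively, so a long enough chain exhausts the stack; checkpointing truncates the chain. The same effect can be reproduced in plain Python with pickle (a standalone analogy, not PredictionIO code):

```python
import pickle

# Each iteration wraps the previous result in another object,
# just as each ALS iteration extends the RDD lineage chain.
class Node:
    def __init__(self, parent):
        self.parent = parent

chain = None
for _ in range(100_000):  # a "long lineage"
    chain = Node(chain)

try:
    # The serializer recurses through the whole chain.
    pickle.dumps(chain)
    print("serialized fine")
except RecursionError:
    print("recursion limit hit while serializing the chain")
```

Checkpointing every few iterations plays the same role as cutting this chain into short pieces: the serializer never has to recurse through more than a few links at a time.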

