I am trying to write my own coprocessor that creates a secondary index using the prePut hook. To start, I was just trying to get the prePut coprocessor to work at all. So far, I am able to add an additional KeyValue to the Put object passed in, but I found that I cannot make the coprocessor write to a row other than the one the Put object itself writes to. Obviously, in order to create a secondary index, I need to do exactly that.
Below is the code of my coprocessor, but it does not work.
Yes, all tables exist, and "colfam1" also exists.
HBase Version: HBase 0.92.1-cdh4.1.2 from Cloudera CDH4
Does anyone know what the problem is?
@Override
public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e,
                   final Put put, final WALEdit edit, final boolean writeToWAL)
        throws IOException {
    KeyValue kv = new KeyValue(Bytes.toBytes("COPROCESSORROW"),
                               Bytes.toBytes("colfam1"),
                               Bytes.toBytes("COPROCESSOR: " + System.currentTimeMillis()),
                               Bytes.toBytes("IT WORKED"));
    put.add(kv);
}
I get the following error:
ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues:
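For context, my understanding is that Put.add(KeyValue) only accepts a KeyValue whose row key matches the Put's own row, so inside prePut I can only add extra columns to the row that is being written anyway. A sketch of that restricted variant (same class and imports as the snippet above; the family, qualifier, and value strings are just the ones from my code):

@Override
public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e,
                   final Put put, final WALEdit edit, final boolean writeToWAL)
        throws IOException {
    // Reuse the row key of the incoming Put so the KeyValue is accepted by put.add().
    KeyValue kv = new KeyValue(put.getRow(),
                               Bytes.toBytes("colfam1"),
                               Bytes.toBytes("COPROCESSOR: " + System.currentTimeMillis()),
                               Bytes.toBytes("IT WORKED"));
    put.add(kv);
}

That only decorates the row being written, though, which is not enough for a secondary index in a separate row or table.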
UPDATE:
I changed my coprocessor to the following, but I still get the error. Now the postPut row (the secondary index) does get written, but there is still a timeout error.
The whole table in that region also crashes afterwards and the region server needs a restart. Sometimes restarting the region server does not help either, and the whole region server (all tables) is broken and needs to be recovered.
I do not know why...!?
@Override
public void start(CoprocessorEnvironment env) throws IOException {
    LOG.info("(start)");
    pool = new HTablePool(env.getConfiguration(), 10);
}

@Override
public void postPut(final ObserverContext<RegionCoprocessorEnvironment> observerContext,
                    final Put put, final WALEdit edit, final boolean writeToWAL)
        throws IOException {
    byte[] tableName = observerContext.getEnvironment().getRegion().getRegionInfo().getTableName();
    // not necessary if you register the coprocessor for the specific table, SOURCE_TABLE
    if (!Bytes.equals(tableName, Bytes.toBytes(SOURCE_TABLE))) return;
    try {
        LOG.info("STARTING postPut");
        HTableInterface table = pool.getTable(Bytes.toBytes(INDEX_TABLE));
        LOG.info("TURN OFF AUTOFLUSH");
        table.setAutoFlush(false);
        // create row
        LOG.info("Creating new row");
        byte[] rowkey = Bytes.toBytes("COPROCESSOR ROW");
        Put indexput = new Put(rowkey);
        indexput.add(Bytes.toBytes("data"),
                     Bytes.toBytes("CP: " + System.currentTimeMillis()),
                     Bytes.toBytes("IT WORKED!"));
        LOG.info("Writing to table");
        table.put(indexput);
        LOG.info("flushing commits");
        table.flushCommits();
        LOG.info("close table");
        table.close();
    } catch (IllegalArgumentException ex) {
        // handle exception
    }
}

@Override
public void stop(CoprocessorEnvironment env) throws IOException {
    LOG.info("(stop)");
    pool.close();
}
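(The snippet above uses a few fields I did not paste. Roughly they look like this; the concrete table names are placeholders, and LOG/pool come from org.apache.commons.logging and org.apache.hadoop.hbase.client.HTablePool:)

private static final Log LOG = LogFactory.getLog(MyCoprocessor.class);
// Placeholders: SOURCE_TABLE is the table the observer is registered on ("test_table" in my logs),
// INDEX_TABLE is the table the secondary-index rows are written to.
private static final String SOURCE_TABLE = "test_table";
private static final String INDEX_TABLE = "test_table_index";
private HTablePool pool;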
Here is the region server log (note my own log entries):
2013-01-30 19:30:39,754 INFO my.package.MyCoprocessor: STARTING postPut
2013-01-30 19:30:39,754 INFO my.package.MyCoprocessor: TURN OFF AUTOFLUSH
2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: Creating new row
2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: Writing to table
2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: flushing commits
2013-01-30 19:31:39,813 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Failed all from region=test_table,,1359573731255.d41b77b31fafa6502a8f09db9c56b9d8., hostname=node01, port=60020
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to node01/<private_ip>:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/<private_ip>:56390 remote=node01/<private_ip>:60020]
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1557)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:949)
    at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.flushCommits(HTablePool.java:449)
    at my.package.MyCoprocessor.postPut(MyCoprocessor.java:81)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postPut(RegionCoprocessorHost.java:682)
    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1901)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1742)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3102)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1345)
Solved: from inside the coprocessor I was writing to the same table the coprocessor is registered on. In short: when a cell was written, the coprocessor wrote another cell, which triggered the coprocessor again, which wrote yet another cell, and so on. I stopped this loop by checking the row key before writing, so that rows written by the coprocessor do not trigger it again.
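In other words, the fix is a guard at the top of postPut that skips rows written by the coprocessor itself. Roughly a sketch like this (the "COPROCESSOR" prefix check just matches how my index row keys happen to be named; the rest of the method stays as above):

@Override
public void postPut(final ObserverContext<RegionCoprocessorEnvironment> observerContext,
                    final Put put, final WALEdit edit, final boolean writeToWAL)
        throws IOException {
    // Rows written by this coprocessor all start with "COPROCESSOR".
    // Bail out for those, otherwise every index write re-triggers postPut
    // on the same table and the coprocessor loops until the RPC times out.
    if (Bytes.startsWith(put.getRow(), Bytes.toBytes("COPROCESSOR"))) {
        return;
    }
    // ... write the secondary-index row as before ...
}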