[Topic] Fixing table query failures caused by an HBase client/server version mismatch

For a recent company project I had to look into Kettle's HBase table input and table output steps. I wrote a test demo in Java against our in-house cluster (HBase 2.1.2) and everything worked fine, but when the product went out for customer testing it failed, and the log showed:

org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException:org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family table does not exist in region hbase:meta,,1.1588230740 in table 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', BLOOMFILTER => 'NONE', VERSIONS => '10', IN_MEMORY => 'true', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 
    at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:7721)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6876)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2007)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32381)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:279)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:266)
    at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:129)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
    at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
    at org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1066)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:389)
    at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:437)
    at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:434)

So began a long slog through the bug. It turned out to be a version mismatch: our product ships the HBase 2.1.2 client jars, while the customer runs HBase 1.2.1 — a whole major version apart. Since the customer's cluster is in production and cannot be upgraded, the boss decided we should downgrade and pin the code to the 1.2.1 client. Before doing that, though, I wanted to find out exactly where things broke.

Stepping through with breakpoints, I found the throw site. Reading the source, the problem is that the new client's tableExists check conflicts with how older servers store table metadata: as the stack trace shows (MetaTableAccessor.getTableState), the 2.x client resolves table state by reading the 'table' column family in hbase:meta, and a 1.2.1 server's meta table simply does not have that family — hence the NoSuchColumnFamilyException.

 /*
 try {
   if ( !admin.tableExists( tbname ) ) {
     // message text translated from the original Chinese
     throw new KettleException( BaseMessages.getString(
         HBaseInputMeta.PKG, "HBaseInput: table does not exist", sourceName ) );
   }
 } catch ( Exception ex ) {
   throw new KettleException( BaseMessages.getString(
       HBaseInputMeta.PKG, "HBaseInput: failed to read table", sourceName ), ex );
 }
 */
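Before trusting tableExists at all, a client can compare its own major version with the cluster's. This is a sketch of my own, not Kettle code; where the two version strings come from is assumed (for example `org.apache.hadoop.hbase.util.VersionInfo.getVersion()` on the client side and the cluster version reported by the master on the server side) — the comparison itself is plain Java:

```java
// Minimal sketch: flag a client/server major-version mismatch before
// relying on Admin#tableExists. Version strings like "2.1.2" / "1.2.1"
// are assumed; obtaining them is left to the caller.
public final class HBaseVersionGuard {

    // "2.1.2" vs "1.2.1" -> false; "2.1.2" vs "2.0.0" -> true
    public static boolean sameMajorVersion(String clientVersion, String serverVersion) {
        return majorOf(clientVersion) == majorOf(serverVersion);
    }

    private static int majorOf(String version) {
        int dot = version.indexOf('.');
        return Integer.parseInt(dot < 0 ? version : version.substring(0, dot));
    }

    public static void main(String[] args) {
        System.out.println(sameMajorVersion("2.1.2", "1.2.1")); // prints "false"
        System.out.println(sameMajorVersion("2.1.2", "2.0.0")); // prints "true"
    }
}
```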

The fix is simply to remove this table-existence check from your project. The remaining insert, delete, update, and read operations still work fine through the new client.
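Rather than deleting the check outright, a middle ground is to keep it but fall back to a table listing when the server rejects the meta read. This is my own sketch, not Kettle code: the two callables are stand-ins for `() -> admin.tableExists(...)` and a lambda that collects the names from `admin.listTableNames()` (that wiring is hypothetical and not shown here, so the logic stays testable without a cluster):

```java
import java.util.Set;
import java.util.concurrent.Callable;

// Sketch: try the normal existence check first; if the server throws
// (e.g. NoSuchColumnFamilyException because an older hbase:meta lacks
// the 'table' family), fall back to membership in the table listing.
public final class SafeTableExists {

    public static boolean exists(String tableName,
                                 Callable<Boolean> tableExistsCall,
                                 Callable<Set<String>> listTableNamesCall) throws Exception {
        try {
            return tableExistsCall.call();
        } catch (Exception serverRejectedMetaRead) {
            // Older cluster: answer the question via the table listing instead.
            return listTableNamesCall.call().contains(tableName);
        }
    }
}
```

In the Kettle step this would wrap the `admin.tableExists( tbname )` call shown above.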

Of course, the cleanest fix is to swap your jar dependencies to match the server version, though that is more work.
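If the project is built with Maven (an assumption on my part), the downgrade amounts to pinning the client artifact to the server's release line, e.g.:

```xml
<!-- Pin the HBase client to the customer's 1.2.1 server line -->
<dependency>
  <groupId>org.apache.hadoop.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>1.2.1</version>
</dependency>
```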

Posted: 2019-01-25 10:46:43
