Fixing ORC metadata sections that exceed the protobuf message size limit

Problem description

I used DataX to sync data from a business database into the data warehouse; the Hive table is stored in ORC format. The source table is large (over 40 million rows) and fairly wide, with more than 100 columns. The sync took roughly 3 hours, and a subsequent select count(*) to check the row count failed with the error below:

Diagnostic Messages for this Task:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:266)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:213)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:333)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:720)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:438)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:252)
... 11 more
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawBytes(CodedInputStream.java:811)
at com.google.protobuf.CodedInputStream.readBytes(CodedInputStream.java:329)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StringStatistics.<init>(OrcProto.java:1331)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StringStatistics.<init>(OrcProto.java:1281)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StringStatistics$1.parsePartialFrom(OrcProto.java:1374)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StringStatistics$1.parsePartialFrom(OrcProto.java:1369)
at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$ColumnStatistics.<init>(OrcProto.java:4897)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$ColumnStatistics.<init>(OrcProto.java:4813)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$ColumnStatistics$1.parsePartialFrom(OrcProto.java:5005)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$ColumnStatistics$1.parsePartialFrom(OrcProto.java:5000)
at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeStatistics.<init>(OrcProto.java:14334)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeStatistics.<init>(OrcProto.java:14281)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeStatistics$1.parsePartialFrom(OrcProto.java:14370)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeStatistics$1.parsePartialFrom(OrcProto.java:14365)
at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$Metadata.<init>(OrcProto.java:15008)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$Metadata.<init>(OrcProto.java:14955)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$Metadata$1.parsePartialFrom(OrcProto.java:15044)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$Metadata$1.parsePartialFrom(OrcProto.java:15039)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$Metadata.parseFrom(OrcProto.java:15155)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl$MetaInfoObjExtractor.<init>(ReaderImpl.java:473)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:319)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:237)
at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat.getRecordReader(VectorizedOrcInputFormat.java:155)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createVectorizedReader(OrcInputFormat.java:1089)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1102)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:67)
... 16 more

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 198 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Identifying the cause

1. Check the size of the files backing the Hive table. There are two files, and one of them is over 50 GB:

[hdfs@www /usr/local/xyhadoop/hive/conf]$ hdfs dfs -ls /user/hive/warehouse/xy_ods.db/java_cardloan_db_tb_cardloan_apply_new_20200417/xy_date=20200416
Found 2 items
-rw-r--r-- 3 hdfs hive 52966880342 2020-04-17 18:57 /user/hive/warehouse/xy_ods.db/java_cardloan_db_tb_cardloan_apply_new_20200417/xy_date=20200416/path__3f1a3d19_2acd_45cb_9ab5_057f58b98555
-rw-r--r-- 3 hdfs hive 34243399 2020-04-17 18:57 /user/hive/warehouse/xy_ods.db/java_cardloan_db_tb_cardloan_apply_new_20200417/xy_date=20200416/path__f240230a_d363_4079_8dba_662bdb566012

2. Search online for the cause and a fix based on the error message. The following Hive JIRA describes exactly this problem:

https://issues.apache.org/jira/browse/HIVE-11592

If there are too many small stripes and with many columns, the overhead for storing metadata (column stats) can exceed the default protobuf message size of 64MB. Reading such files will throw the following exception

The description above matches my problem exactly. I am running hive-exec-1.2.2, a version that is affected by this issue.
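The 64 MB figure comes from protobuf itself: CodedInputStream keeps a running count of the bytes it has consumed from the underlying stream and throws the "Protocol message was too large" exception once that count passes its default size limit of 64 MB. That is why a metadata section larger than 64 MB fails even though every individual column-statistics field inside it is tiny. Below is my own self-contained illustration of that behaviour; the class and method names are made up for the demo, and only CodedInputStream, readRawBytes(), and setSizeLimit() come from the real protobuf 2.5 API bundled with hive-exec.

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.InvalidProtocolBufferException;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Contrived demo of protobuf's default 64 MB stream size limit (not Hive/ORC code).
public class ProtobufSizeLimitDemo {

    private static final int TOTAL = 70 * 1024 * 1024; // more than the 64 MB default limit

    public static void main(String[] args) throws IOException {
        try {
            consume(CodedInputStream.newInstance(newBigStream()));
        } catch (InvalidProtocolBufferException e) {
            // Prints the same message seen in the stack trace:
            // "Protocol message was too large. May be malicious. ..."
            System.out.println("default limit: " + e.getMessage());
        }

        // Raising the limit first, which is the essence of the HIVE-11592 fix
        // for the ORC metadata section, lets the same amount of data through.
        CodedInputStream relaxed = CodedInputStream.newInstance(newBigStream());
        relaxed.setSizeLimit(Integer.MAX_VALUE);
        consume(relaxed);
        System.out.println("raised limit: ok");
    }

    // Read the stream in many small pieces, the way a large Metadata message is
    // parsed field by field; the limit applies to the cumulative total consumed.
    private static void consume(CodedInputStream in) throws IOException {
        for (int read = 0; read < TOTAL; read += 1024) {
            in.readRawBytes(1024);
        }
    }

    private static InputStream newBigStream() {
        return new ByteArrayInputStream(new byte[TOTAL]);
    }
}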

Solution

Candidate approaches:

1. Upgrade Hadoop or Hive. Too disruptive and too risky without thorough preparation, and it could affect production workloads.

2. Drop in a newer hive-exec jar, ideally a release as close to hive-exec-1.2.2 as possible. If it works, this is the simplest option.

In practice, however, the downloaded hive-exec jar turned out to be incompatible with the other dependencies shipped with Hive 1.2.2, so this approach does not work.

3. Apply the fix described in https://issues.apache.org/jira/browse/HIVE-11592.

I downloaded the Hive 1.2.2 source from GitHub again, applied https://issues.apache.org/jira/secure/attachment/12751102/HIVE-11592.3.patch (which modifies ReaderImpl.java), rebuilt hive-exec, and replaced hive-exec-1.2.2.jar on the client machine. Retesting confirmed the fix; the idea behind the patch is sketched below, followed by the verification run.
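The core of the patch is to make sure the metadata section is parsed through a CodedInputStream whose size limit has been raised to cover the section, rather than through a parseFrom(...) call that keeps protobuf's 64 MB default. The following is a simplified sketch of that idea, not the actual diff: parseMetadata and metadataLength are names I chose for illustration, with the length assumed to come from the ORC postscript, which records the size of the metadata section.

import com.google.protobuf.CodedInputStream;
import org.apache.hadoop.hive.ql.io.orc.OrcProto;

import java.io.IOException;
import java.io.InputStream;

// Simplified sketch of the HIVE-11592 idea, not the actual patched ReaderImpl code.
public class OrcMetadataParseSketch {

    // protobuf's built-in default limit for stream-backed parsing
    private static final long DEFAULT_PROTOBUF_LIMIT = 64L * 1024 * 1024;

    /**
     * Parse the ORC metadata section (the per-stripe column statistics) from an
     * already-positioned, already-decompressed stream. metadataLength is assumed
     * to be the section length recorded in the file's postscript, so the limit
     * can be sized to the data actually present instead of the 64 MB default.
     */
    static OrcProto.Metadata parseMetadata(InputStream metadataStream, long metadataLength)
            throws IOException {
        CodedInputStream coded = CodedInputStream.newInstance(metadataStream);
        long limit = Math.min(Integer.MAX_VALUE, Math.max(metadataLength, DEFAULT_PROTOBUF_LIMIT));
        coded.setSizeLimit((int) limit);
        return OrcProto.Metadata.parseFrom(coded);
    }
}

With the rebuilt hive-exec-1.2.2.jar deployed on the client, the same count query now runs to completion: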

[hdfs@www /home/xyhadoop/apache-hive-1.2.2-bin/lib]$ hive
ls: cannot access /usr/local/xyhadoop/spark/lib/spark-assembly-*.jar: No such file or directory

Logging initialized using configuration in file:/home/xyhadoop/apache-hive-1.2.2-bin/conf/hive-log4j.properties
hive> SELECT count(1) FROM xy_ods.java_cardloan_db_tb_cardloan_apply_new_20200417 LIMIT 100;
Query ID = hdfs_20200506182655_1751da8a-fa7f-41a2-ba9b-0f4c450f0f7e
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1585740815900_682744, Tracking URL = http://xydw56:8088/proxy/application_1585740815900_682744/
Kill Command = /usr/local/xyhadoop/hadoop/bin/hadoop job -kill job_1585740815900_682744
Hadoop job information for Stage-1: number of mappers: 198; number of reducers: 1
2020-05-06 18:27:05,816 Stage-1 map = 0%, reduce = 0%
2020-05-06 18:27:13,267 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.98 sec
2020-05-06 18:27:14,294 Stage-1 map = 4%, reduce = 0%, Cumulative CPU 109.65 sec
2020-05-06 18:27:15,319 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 378.91 sec
2020-05-06 18:27:16,345 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 775.53 sec
2020-05-06 18:27:17,370 Stage-1 map = 40%, reduce = 0%, Cumulative CPU 1123.49 sec
2020-05-06 18:27:18,394 Stage-1 map = 64%, reduce = 0%, Cumulative CPU 1787.71 sec
2020-05-06 18:27:19,418 Stage-1 map = 88%, reduce = 0%, Cumulative CPU 2453.68 sec
2020-05-06 18:27:20,443 Stage-1 map = 98%, reduce = 0%, Cumulative CPU 2751.21 sec
2020-05-06 18:27:21,472 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2794.03 sec
2020-05-06 18:27:26,600 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2798.89 sec
MapReduce Total cumulative CPU time: 46 minutes 38 seconds 890 msec
Ended Job = job_1585740815900_682744
MapReduce Jobs Launched:
Stage-Stage-1: Map: 198 Reduce: 1 Cumulative CPU: 2798.89 sec HDFS Read: 11140081325 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 46 minutes 38 seconds 890 msec
OK
42413937
Time taken: 32.638 seconds, Fetched: 1 row(s)