Recently, while using Spark to process fairly large data files, I ran into the 2GB partition limit. Spark logs the following warning:
WARN scheduler.TaskSetManager: Lost task 19.0 in stage 6.0 (TID 120, 10.111.32.47): java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:123)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:132)
at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:517)
at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:432)
at org.apache.spark.storage.BlockManager.get(BlockManager.scala:618)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:146)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
Solution:
Manually set the number of RDD partitions. The job was running with the default of 18 RDD partitions; after manually raising it to 500, the problem went away. The root cause is visible in the stack trace: Spark reads each cached block through FileChannelImpl.map, which returns a ByteBuffer capped at Integer.MAX_VALUE bytes, so any single partition larger than 2GB fails; splitting the data into more partitions keeps every block below that cap. After an RDD is loaded, you can call RDD.repartition(numPartitions: Int) to reset the number of partitions.
val data_new = data.repartition(500)
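
For reference, here is a slightly fuller Scala sketch (the HDFS path and the variable names other than data and data_new are hypothetical, and it assumes an existing SparkContext sc). Besides calling repartition() after loading, you can also pass a minPartitions hint to textFile() so the input is split more finely from the start:

// Load the file; by default the partition count follows the input splits (18 in this case).
val data = sc.textFile("hdfs:///path/to/large_input.txt")
println(s"partitions before: ${data.partitions.length}")

// Option 1: repartition after loading (triggers a full shuffle).
val data_new = data.repartition(500)
println(s"partitions after: ${data_new.partitions.length}")

// Option 2: request more partitions at load time via the minPartitions hint.
val data_alt = sc.textFile("hdfs:///path/to/large_input.txt", minPartitions = 500)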
Below are some related resources for readers who want to dig deeper:
2GB limit in spark for blocks
create LargeByteBuffer abstraction for eliminating 2GB limit on blocks
Why does Spark RDD partition has 2GB limit for HDFS
The Java code that throws the exception: FileChannelImpl.java