When the data size of a partition exceeds the GPU memory share implied by spark.task.resource.gpu.amount, will the framework automatically process the partition in batches?
For example, suppose a partition is 50 MB and spark.task.resource.gpu.amount=0.001625. On a 16 GB GPU, the GPU memory available to each task would be 16 GB × 0.001625 × 1024 = 26.6 MB. How will the partition's data be processed in this case?
Will the partition be handled by 2 tasks, or by one task that processes it in multiple passes?
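For reference, the per-task figure quoted above can be checked with a quick sketch. This assumes a 16 GB GPU and that the fraction in spark.task.resource.gpu.amount maps linearly onto GPU memory, which is the premise of the question rather than a documented guarantee:

```python
# Sketch: per-task GPU memory share implied by spark.task.resource.gpu.amount,
# assuming a 16 GB GPU and a linear fraction-to-memory mapping.
gpu_memory_gb = 16
task_gpu_amount = 0.001625  # value of spark.task.resource.gpu.amount

per_task_mb = gpu_memory_gb * 1024 * task_gpu_amount
print(f"{per_task_mb:.1f} MB per task")  # 26.6 MB per task
```

Under these assumptions a 50 MB partition is roughly twice the per-task share, which is what motivates the question of whether it is split across tasks or batched within one task.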