Description: Fix an out-of-memory bug caused by the _check_code function in Trainer ignoring the pin_memory parameter
Main reason:
An out-of-memory error occurred while using the fastNLP library. The scenario: the model was being trained on the CPU when the memory error appeared. Debugging showed that in core/trainer.py, the _check_code function does not specify the pin_memory parameter when it constructs the Tester, while the Tester class initializes pin_memory to True by default.
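To make the mechanism concrete, here is a minimal sketch at the plain PyTorch level (illustrative code with a made-up dataset, not fastNLP internals): with pin_memory=True, the DataLoader copies every batch into page-locked host memory through the CUDA caching host allocator, and that allocation is what fails in the traceback below even though the model itself runs on the CPU.

```python
# Minimal illustration (plain PyTorch, made-up data), not fastNLP code.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 128))

# What Tester effectively does today: pin_memory defaults to True, so every
# batch is copied into pinned (page-locked) host memory via the CUDA caching
# host allocator, which is the allocation that fails in the traceback below.
pinned_loader = DataLoader(dataset, batch_size=32, pin_memory=True)

# What a CPU-only run needs, and what this PR makes the default:
plain_loader = DataLoader(dataset, batch_size=32, pin_memory=False)

for (batch,) in plain_loader:
    pass  # CPU training never needs pinned host memory
```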
Full error traceback:
```
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCCachingHostAllocator.cpp line=278 error=2 : out of memory
Traceback (most recent call last):
  File "/data/ouyhlan/TextClassification/main.py", line 52, in <module>
    trainer = Trainer(train_data=data_bundle.get_dataset('train'), model=model, loss=loss,
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/fastNLP/core/trainer.py", line 558, in __init__
    _check_code(dataset=train_data, model=self.model, losser=losser, forward_func=self._forward_func, metrics=metrics,
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/fastNLP/core/trainer.py", line 1013, in _check_code
    evaluate_results = tester.test()
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/fastNLP/core/tester.py", line 184, in test
    for batch_x, batch_y in data_iterator:
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/fastNLP/core/batch.py", line 266, in __iter__
    for indices, batch_x, batch_y in self.dataiter:
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 477, in _next_data
    data = _utils.pin_memory.pin_memory(data)
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 55, in pin_memory
    return [pin_memory(sample) for sample in data]
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 55, in <listcomp>
    return [pin_memory(sample) for sample in data]
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 51, in pin_memory
    return {k: pin_memory(sample) for k, sample in data.items()}
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 51, in <dictcomp>
    return {k: pin_memory(sample) for k, sample in data.items()}
  File "/home/ouyhlan/miniconda3/envs/env1/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 47, in pin_memory
    return data.pin_memory()
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/THCCachingHostAllocator.cpp:278
```
Setting the pin_memory parameter to False makes the problem disappear. In addition, following https://github.com/pytorch/pytorch/issues/57273 , it is suggested that the Trainer and Tester classes leave pin_memory disabled by default on all torch versions.
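A rough sketch of the shape of this change, using simplified stand-in signatures rather than the real fastNLP ones (the actual classes take many more arguments): pin_memory should default to False, and _check_code should forward the caller's value to the Tester it creates instead of relying on Tester's old default.

```python
# Illustrative sketch only: simplified stand-ins, not the actual fastNLP
# signatures.
from torch.utils.data import DataLoader

class Tester:
    def __init__(self, data, batch_size=16, pin_memory=False):  # default was True
        # the flag is passed straight through to the underlying DataLoader
        self.data_iterator = DataLoader(data, batch_size=batch_size,
                                        pin_memory=pin_memory)

def _check_code(dataset, batch_size=16, pin_memory=False):
    # forward the caller's setting instead of silently falling back to
    # Tester's old default of pin_memory=True
    return Tester(dataset, batch_size=batch_size, pin_memory=pin_memory)
```

GPU users who benefit from pinned memory can still opt in by passing pin_memory=True explicitly; only the default changes.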
Checklist: confirm that each of the items below is done
Please feel free to remove inapplicable items for your PR.
- [x] The PR title starts with [$CATEGORY] (e.g. [bugfix] for fixing a bug, [new] for adding a new feature, [test] for changing tests, [rm] for removing old code)
- [x] Changes are complete (i.e. I finished coding on this PR); a PR should only be submitted once the changes are done
- [x] All changes have test coverage and the modified code passes its tests; for changes under fastnlp/fastnlp/, test code must be provided in fastnlp/test/
- [x] Code is well-documented; comments are in place, and the API documentation is extracted from them
- [x] To the best of my knowledge, examples are either not affected by this change or have been fixed to be compatible with it; if the change alters examples or tutorials, contact a core developer
Changes: describe each modification, item by item
- The Tester and Trainer classes no longer enable pin_memory by default
Mention: tag someone to review your PR
@yhcc