[API Compatibility] Support arg closure for paddle.optimizer.optimizer.step #78570
algorithm1832 wants to merge 6 commits into PaddlePaddle:develop

Conversation
Your PR was submitted successfully. Thank you for contributing to the open-source project!
Codecov Report
❌ Patch coverage is 85.71%.
❌ Your patch status has failed because the patch coverage (85.71%) is below the target coverage (90.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files:
@@ Coverage Diff @@
##           develop   #78570   +/- ##
======================================
  Coverage        ?   85.71%
======================================
  Files           ?        3
  Lines           ?       21
  Branches        ?        0
======================================
  Hits            ?       18
  Misses          ?        3
  Partials        ?        0

View full report in Codecov by Sentry.
>>> adam.step()
>>> adam.clear_grad()
"""
if closure is not None:
loss = None
if closure is not None:
with imperative_base.enable_grad():
loss = closure()
...
return loss
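The suggested handling follows the common `step(closure=None)` protocol (as in PyTorch): when a closure is passed, re-evaluate the loss before applying the update, and return it. A framework-free sketch of just this control flow, with a toy class and illustrative names (not Paddle's actual internals):

```python
class ToyOptimizer:
    """Toy stand-in showing only the closure protocol of step()."""

    def __init__(self):
        self.num_updates = 0

    def step(self, closure=None):
        # Re-evaluate the loss via the closure if one is given; a real
        # optimizer would do this inside an enable-grad context.
        loss = None
        if closure is not None:
            loss = closure()
        self.num_updates += 1  # stand-in for the parameter update
        return loss


opt = ToyOptimizer()
print(opt.step())             # None: no closure given
print(opt.step(lambda: 0.5))  # 0.5: the closure's loss is returned
```

Either way the update runs; the closure only adds the re-evaluation step and makes the loss available to the caller.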
python/paddle/optimizer/optimizer.py
Outdated
>>> adam.step()
>>> adam.clear_grad()

>>> # With closure
Choose one of the two usages and show each separately:
# usage 1: without closure
optimizer.zero_grad()
output = model(X)
loss = criterion(output, y)
loss.backward()
optimizer.step()
# usage 2: with closure
def closure():
optimizer.zero_grad()
output = model(X)
loss = criterion(output, y)
loss.backward()
return loss
loss = optimizer.step(closure)
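For context (not part of the PR diff): the main reason optimizer `step` methods accept a closure is that some algorithms, such as L-BFGS with line search, must re-evaluate the loss several times within a single step. A toy sketch of that pattern, framework-free and with stand-in names:

```python
def step_with_line_search(closure, num_trials=3):
    """Toy step() that, like a line-search optimizer, calls the
    closure repeatedly instead of exactly once."""
    loss = None
    for _ in range(num_trials):
        loss = closure()  # each call stands in for forward + backward
    return loss


calls = []

def closure():
    calls.append(1)  # record each re-evaluation
    return 0.25      # stand-in loss value

loss = step_with_line_search(closure)
print(loss, len(calls))  # 0.25 3
```

With "usage 1" the optimizer has no way to trigger these re-evaluations itself, which is why both usages are worth documenting.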
... out = linear(a)
... loss = paddle.mean(out)
... loss.backward()
... return loss
test/legacy_test/test_optimizer.py
Outdated
parameters=linear.parameters(),
)

def closure():
The unit test should be more thorough: follow the logic of the example code and cover a complete forward-and-backward pass.
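The kind of end-to-end check being asked for can be sketched without the framework: run one full forward/backward/update cycle through the closure, then assert both the returned loss and that the parameter actually changed. Toy one-parameter model; all names are illustrative, not Paddle's API:

```python
import unittest


class TinySGD:
    """Toy scalar SGD supporting the step(closure) protocol."""

    def __init__(self, lr=0.1):
        self.w = 2.0      # single scalar parameter
        self.grad = None
        self.lr = lr

    def step(self, closure=None):
        loss = closure() if closure is not None else None
        self.w -= self.lr * self.grad  # apply the update
        return loss


class TestClosureStep(unittest.TestCase):
    def test_full_forward_backward_cycle(self):
        opt, x, y = TinySGD(), 3.0, 0.0

        def closure():
            pred = opt.w * x              # forward
            loss = 0.5 * (pred - y) ** 2  # loss
            opt.grad = (pred - y) * x     # backward: d(loss)/dw
            return loss

        w_before = opt.w
        loss = opt.step(closure)
        self.assertEqual(loss, 18.0)          # 0.5 * (2.0 * 3.0)**2
        self.assertNotEqual(opt.w, w_before)  # parameter was updated


suite = unittest.TestLoader().loadTestsFromTestCase(TestClosureStep)
unittest.TextTestRunner(verbosity=0).run(suite)
```

A real Paddle test would do the same with `paddle.nn.Linear` and a concrete optimizer, parametrized over Adam, AdamW, and the base Optimizer class.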
Many CI checks are failing @algorithm1832
The coverage check failed: first, the tests only cover Adam, not AdamW or the base Optimizer class; second, the branch below is not covered. I'll take another look.
if paddle.base.dygraph.base.in_to_static_mode():
    self._declarative_step()
    return loss
I tried the condition above, but I couldn't get the code to enter that branch; once the model is converted to static graph, it no longer seems to go through the original step function. I'd like to ask @SigureMo how to write test coverage for this, thanks~
This branch cannot be reached under the current logic. If this line is the only one left uncovered, I think it can be exempted; for now, prioritize covering the other lines.
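If the static-graph branch genuinely cannot be reached from dynamic-graph tests, one standard way to exempt it is coverage.py's `# pragma: no cover` marker. A sketch with a stubbed `in_to_static_mode` (illustrative only, not Paddle's actual internals):

```python
def in_to_static_mode():
    # Stub for paddle.base.dygraph.base.in_to_static_mode(); always
    # False here, matching what dynamic-graph tests observe.
    return False


def step(closure=None):
    loss = None
    if closure is not None:
        loss = closure()
    if in_to_static_mode():  # pragma: no cover  (unreachable in dygraph tests)
        raise RuntimeError("declarative path, exercised only under to_static")
    return loss


print(step(lambda: 1.0))  # 1.0: dygraph path, closure loss returned
```

Whether an exemption or a Codecov target adjustment is preferred is a project policy decision.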
PR Category
User Experience

PR Types
New features

Description
Support arg closure for paddle.optimizer.optimizer.step. Used AI Studio.

Does this cause any precision change?
No