remove gradient in state.positions after mace forward #540
thomasloux wants to merge 1 commit into TorchSim:main
Conversation
I'm OK with other solutions too, like passing a copy of
LGTM. Just for awareness: as we move to the external posture for mace on the mace 0.3.16 release, this will likely change upstream. I hope this tooling PR goes in before then, and I don't think this issue is addressed over there. cf. #524
This should be fine in the new external Mace Model, because the positions used by the model are the positions after wrapping, so when you run
But this problem is a good reminder to be careful about side effects when modifying input arguments.
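To illustrate that point, one defensive pattern is to hand the model a detached copy, so any in-place mutation stays on the copy and never reaches the caller's tensor. This is a minimal sketch, not the actual TorchSim or MACE code; `model_forward` and the names here are hypothetical stand-ins:

```python
import torch

def model_forward(positions: torch.Tensor) -> torch.Tensor:
    # Simulates a model that mutates its input in place,
    # the way MACE calls requires_grad_ on the positions it receives.
    positions.requires_grad_(True)
    return (positions ** 2).sum()  # stand-in for an energy computation

state_positions = torch.zeros(4, 3)

# Defensive call: pass a detached clone, so the in-place flag change
# lands on the copy and state_positions is left untouched.
energy = model_forward(state_positions.detach().clone())

print(state_positions.requires_grad)  # False: caller's tensor untouched
```

The cost is one extra copy of the positions per call, which is usually negligible next to the model forward itself.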
Merge or close? Did you try checking out that PR to see if it fixes the issue? If so, a bump on the thread would be appreciated, to prompt ilyas toward merging and cutting a new mace-torch release.
Summary
MACE adds a gradient requirement to positions by running `data["positions"].requires_grad_(True)` (https://github.com/ACEsuit/mace/blob/main/mace/modules/models.py#L776).
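Because `requires_grad_` works in place, calling it inside a model forward mutates whatever tensor the caller passed in. A minimal sketch of the leak, assuming a toy `model_forward` that stands in for the MACE call:

```python
import torch

# Sketch (not the actual MACE code) of how an in-place
# requires_grad_ call inside a model mutates the caller's tensor.
def model_forward(positions: torch.Tensor) -> torch.Tensor:
    positions.requires_grad_(True)   # side effect: flips the flag in place
    return (positions ** 2).sum()    # stand-in for an energy computation

state_positions = torch.zeros(4, 3)  # caller's state; no grad expected
assert not state_positions.requires_grad

model_forward(state_positions)
# The flag leaked out: the caller's tensor now tracks gradients, so
# later autograd-unaware updates to it can fail in surprising ways.
print(state_positions.requires_grad)  # True
```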
Because the MACE TorchSim interface passes `state.positions` directly, the gradient flows back into TorchSim algorithms. In the following relaxation script, this ends up failing with a weird, seemingly random error. I've added a Claude Code diagnostic explaining the steps for those interested.

Script to reproduce:

```
uv sync --extra mace
uv run --with pymatgen --with moyopy python fix_relax_gradient.py
```

Checklist
Before a pull request can be merged, the following items must be checked: