# freegrad

Alternative backward rules and gradient transforms alongside PyTorch autograd.

## 🔥 Why freegrad?
freegrad is a lightweight research framework that lets you decouple the forward and backward passes in PyTorch.
It provides:
- Custom gradient rules (e.g. noise, clipping, jamming; a plain-PyTorch sketch of the idea follows below)
- A clean context manager API for applying rules selectively
- Drop-in activation wrappers with alternative backward behavior
- Compatibility with vanilla autograd — nothing is patched
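
To make the idea concrete, here is a minimal plain-PyTorch sketch of what a "rectangular" backward rule can mean: keep the ordinary ReLU forward, but let gradients through only where the input falls in `[a, b]`. This is not freegrad's implementation; the class name `RectReLU` and the exact window semantics are illustrative assumptions.

```python
import torch

class RectReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, a, b):
        ctx.save_for_backward(x)
        ctx.a, ctx.b = a, b
        return x.clamp(min=0)  # the forward pass is an ordinary ReLU

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Replace the usual ReLU derivative with a rectangular window on the input.
        window = ((x >= ctx.a) & (x <= ctx.b)).to(grad_output.dtype)
        return grad_output * window, None, None  # no gradients for a, b

x = torch.randn(8, requires_grad=True)
RectReLU.apply(x, -1.0, 1.0).sum().backward()
print(x.grad)  # 1.0 where x lies in [-1.0, 1.0], 0.0 elsewhere
```

freegrad generalizes this pattern: instead of hard-coding the backward rule in a `Function` subclass, the rule is selected at runtime through the context manager shown in the Quickstart.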
## 🚀 Quickstart

```bash
pip install -e .[dev]
```
```python
import torch
import freegrad as xg
from freegrad.wrappers import Activation

x = torch.randn(8, requires_grad=True)
act = Activation(forward="ReLU")  # forward pass is a standard ReLU

# Apply the "rectangular_jam" rule to activation wrappers inside this context.
with xg.use(rule="rectangular_jam", params={"a": -1.0, "b": 1.0}, scope="activations"):
    y = act(x).sum()
    y.backward()

print(x.grad)
```
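
Because nothing is patched globally, the same wrapper is expected to follow vanilla autograd outside the context manager. The snippet below (continuing from the Quickstart) checks that; the fallback behavior is an assumption, not something this README guarantees.

```python
# Assumption: with no rule active, Activation(forward="ReLU") falls back to
# the standard ReLU backward.
x2 = torch.randn(8, requires_grad=True)
act(x2).sum().backward()
print(x2.grad)  # expected: the ordinary ReLU derivative (1 where x2 > 0, else 0)
```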
## 📖 Documentation

## 🤝 Contributing

Contributions are very welcome! Please see CONTRIBUTING.md.

## 📄 License

Released under the MIT License.