feature_vis.gradient_ascent receives a function $f(x)$ to maximize, an initial estimate $x$, and optimization parameters such as the step size and the number of iterations.
Optionally, it can receive any of: a differentiable transform $t(x)$ to apply to $x$ at each iteration before evaluating $f$; a differentiable regularization $r(x)$ to be minimized, i.e., the optimization becomes:
$$\arg\max_{x} f(t(x)) - r(t(x))\text{ ,}$$
a gradient_f function $g(x)$ to apply to the gradient before the update; and a post_update function $p(x)$ to apply to the updated $x$ after each iteration.
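The loop these pieces form can be summarized with a minimal sketch. This is an illustration in PyTorch, not the actual feature_vis implementation; the argument names and defaults are assumptions:

```python
import torch

def gradient_ascent(f, x, step_size=0.05, n_iters=256,
                    transform=None, regularization=None,
                    gradient_f=None, post_update=None):
    # Sketch of the update loop described above; signature and defaults
    # are illustrative assumptions, not the exact feature_vis API.
    x = x.clone().requires_grad_(True)
    for _ in range(n_iters):
        t_x = transform(x) if transform is not None else x
        objective = f(t_x)                      # scalar to maximize
        if regularization is not None:
            objective = objective - regularization(t_x)
        grad, = torch.autograd.grad(objective, x)
        if gradient_f is not None:
            grad = gradient_f(grad)             # e.g. normalize the gradient
        with torch.no_grad():
            x += step_size * grad               # gradient *ascent* step
            if post_update is not None:
                x.copy_(post_update(x))         # e.g. clamp to a valid range
    return x.detach()
```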
These functions ($t$, $r$, $g$, and $p$) should cover the most common scenarios when creating feature visualizations for neural network models. We provide implementations of many commonly used choices in feature_vis.ops.
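For instance, a typical call might look like the following. This is a hypothetical usage sketch assuming the signature above: the inline lambdas stand in for whatever feature_vis.ops actually provides, and the layer and channel are arbitrary illustrative choices:

```python
import torch
import torchvision.models as models

# Maximize the mean activation of one channel in an intermediate VGG layer.
vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
layer = vgg.features[:10]  # first two conv blocks of VGG16

def objective(x):
    return layer(x)[:, 42].mean()  # channel 42 is an arbitrary example

x0 = torch.rand(1, 3, 224, 224)
image = gradient_ascent(
    objective, x0,
    step_size=0.05, n_iters=256,
    regularization=lambda x: 1e-4 * x.abs().mean(),  # simple L1 penalty
    post_update=lambda x: x.clamp(0.0, 1.0),         # keep pixels in range
)
```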
You can check the Examples.ipynb notebook to see how to visualize features from a VGG network or real neurons[1] under different configurations.
[1]: Models for real neurons come from a private repo, but the examples should still be a useful starting point.