Activations Are Bad for Geometry
A neural network layer is a map. Its Jacobian decides whether that map preserves geometry (distances, angles, volumes on the data manifold) or destroys it. For the standard form $f(x) = \phi(Wx)$, with $\phi$ applied coordinatewise, the Jacobian factors as

$$J_f(x) \;=\; D_\phi(Wx)\, W,$$

where $D_\phi(Wx) = \mathrm{diag}\big(\phi'((Wx)_1), \dots, \phi'((Wx)_m)\big)$. $D_\phi$ is everything the activation contributes; $W$ does the rotating and mixing. The activation cannot produce new geometric structure: it can only modulate what the linear part provides, and only coordinatewise.
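A minimal NumPy sketch of this factorization, under illustrative assumptions (tanh as the activation, arbitrary small widths, a random $W$): build the analytic Jacobian $\mathrm{diag}(\phi'(Wx))\,W$ and check it against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(z):    # tanh as an example pointwise activation
    return np.tanh(z)

def dphi(z):   # its derivative: phi'(z) = 1 - tanh(z)^2
    return 1.0 - np.tanh(z) ** 2

n, m = 4, 6                      # illustrative input / output widths
W = rng.normal(size=(m, n))
x = rng.normal(size=n)

# Analytic Jacobian of x -> phi(Wx): J_f(x) = diag(phi'(Wx)) @ W
J_analytic = np.diag(dphi(W @ x)) @ W

# Finite-difference Jacobian for comparison
eps = 1e-6
J_numeric = np.column_stack([
    (phi(W @ (x + eps * e)) - phi(W @ (x - eps * e))) / (2 * eps)
    for e in np.eye(n)
])

print(np.max(np.abs(J_analytic - J_numeric)))   # tiny: same matrix
```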
That is a strong constraint. The only structure $D_\phi$ can have is whatever $\phi'$ happens to look like at the pre-activations $Wx$ the input produces. Under almost every activation in current use, that structure is one of: zero, small, or of bounded magnitude. None of these are accidental; they are what gives the activation its selectivity. They are also what makes it destroy geometry.
The rest of this post makes that destructive role concrete: how rank collapses under ReLU, how the pullback metric warps under saturation, how high-dimensional layers make these pathologies the rule rather than the exception, and why the tradeoff between selectivity and geometric fidelity is structural — no pointwise activation escapes it.
The Jacobian, activation by activation
The Jacobian $J_f = D_\phi(Wx)\,W$ is just $W$ with its rows rescaled by the entries of $D_\phi$, and its singular values get dragged up or down with those entries. So everything turns on $\phi'$ at the pre-activations the layer actually sees.
identity: $\phi(z) = z$, $\phi'(z) = 1$ everywhere.
ReLU: $\phi(z) = \max(0, z)$, $\phi'(z) \in \{0, 1\}$.
leaky ReLU: $\phi(z) = \max(\alpha z, z)$, $\phi'(z) \in \{\alpha, 1\}$.
sigmoid: $\phi(z) = \sigma(z)$, $\phi'(z) = \sigma(z)(1 - \sigma(z)) \in (0, \tfrac{1}{4}]$.
tanh: $\phi(z) = \tanh(z)$, $\phi'(z) = 1 - \tanh^2(z) \in (0, 1]$.
GELU: $\phi(z) = z\,\Phi(z)$, $\phi'(z) = \Phi(z) + z\,\varphi(z)$, roughly in $[-0.13, 1.13]$.
The picture is uniform. ReLU gives $\phi' \in \{0, 1\}$: a hard gate that zeroes rows of $W$ wherever the pre-activation is negative. Sigmoid and tanh never zero $\phi'$ exactly but saturate at both ends; small singular values multiply, and the Jacobian becomes ill-conditioned the moment any coordinate is far from zero. Leaky ReLU keeps $\phi'$ strictly positive but pins it into $\{\alpha, 1\}$ with $\alpha \ll 1$, taking a factor-of-$1/\alpha$ hit to the condition number in expectation. GELU and softplus are smooth, and their $\phi'$ decays gradually without ever being pinned to zero on a whole region; their conditioning cost is mild.
These differences are not stylistic. They are the difference between a layer that can collapse, a layer that can degenerate, and a layer that mostly behaves.
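A sketch of those differences in one loop, under illustrative assumptions (a random Gaussian $W$, a pre-activation shifted so some coordinates saturate, $\alpha = 0.01$ for leaky ReLU): print how each activation's diagonal rescales the spectrum of $D_\phi W$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 64, 32
W = rng.normal(size=(m, n)) / np.sqrt(n)
z = W @ rng.normal(size=n) + 1.5          # illustrative shift: some coordinates saturate
sig = 1.0 / (1.0 + np.exp(-z))            # sigmoid of the pre-activations

derivatives = {
    "identity":   np.ones_like(z),
    "ReLU":       (z > 0).astype(float),
    "leaky ReLU": np.where(z > 0, 1.0, 0.01),
    "sigmoid":    sig * (1.0 - sig),
    "tanh":       1.0 - np.tanh(z) ** 2,
}

for name, d in derivatives.items():
    s = np.linalg.svd(np.diag(d) @ W, compute_uv=False)   # spectrum of D_phi W
    print(f"{name:>10s}: sigma_max = {s.max():.3f}, sigma_min = {s.min():.2e}")
```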
Rank collapse, made concrete
ReLU’s effect on rank is exact. With active set $A(x) = \{\, i : (Wx)_i > 0 \,\}$,

$$\operatorname{rank} J_f(x) \;=\; \operatorname{rank} W_{A(x)},$$

the rank of $W$ restricted to its surviving rows. If those rows no longer span the input space, the Jacobian loses column rank and a neighborhood of $x$ is crushed onto a lower-dimensional subset of output space. Information that lived along the killed directions is gone in the strong sense: the next layer receives the same image regardless of where you move along the null directions of $J_f(x)$.
The usual reply is that $m \gg n$ makes the surviving submatrix wide enough to retain full column rank. That is correct in width but misleading in depth. The end-to-end Jacobian of an $L$-layer network is

$$J(x) \;=\; \prod_{\ell = L}^{1} D^{(\ell)}_\phi\, W_\ell,$$

and any rank lost at any layer is lost end-to-end. ReLU rank collapse compounds; it does not undo. Residual connections bias each layer toward the identity plus a small perturbation and mitigate the compounding, but they do not guarantee against it.
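A sketch of the compounding, assuming small random layers with a negative bias shift so each ReLU kills a few coordinates (the widths, depth, and shift are arbitrary illustrations): the rank of the end-to-end Jacobian can only go down.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu_layer_jacobian(W, b, x):
    """Jacobian of x -> relu(Wx + b): zero out the inactive rows of W."""
    active = (W @ x + b) > 0
    return W * active[:, None], np.maximum(W @ x + b, 0.0)

n = 16
x = rng.normal(size=n)
J_total = np.eye(n)

for layer in range(6):
    W = rng.normal(size=(n, n)) / np.sqrt(n)
    b = rng.normal(size=n) - 0.5          # negative shift: some units go dead
    J_layer, x = relu_layer_jacobian(W, b, x)
    J_total = J_layer @ J_total           # chain rule: rank can only shrink
    print(f"layer {layer}: rank(J_total) = {np.linalg.matrix_rank(J_total)}")
```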
When activations don’t break things
The negative result has a positive sibling. If $\phi$ is strictly monotone and $W$ has full column rank, then $f(x) = \phi(Wx)$ restricted to any compact data manifold is a homeomorphism onto its image: distinct points stay distinct, topology is preserved, no rank collapse anywhere. If $\phi$ is also smooth, the pullback metric $J_f^\top J_f$ is a well-defined Riemannian metric (possibly ill-conditioned, but never singular).
The test sorts the standard activations cleanly. Sigmoid, tanh, softplus, and the identity are strictly monotone (GELU very nearly so, apart from a shallow dip on the negative side); under a full-rank $W$ they preserve topology, and their only sin is condition number. ReLU is not strictly monotone: it has a flat half-line, and that flat half-line is the source of every pathology above. Leaky ReLU with $\alpha > 0$ scrapes by: it is strictly monotone, so the homeomorphism is well-defined, but $\phi'$ has a jump at zero, so the pullback metric is only piecewise smooth.
The corollary is that the activation question is largely the question of strict monotonicity. Lose it, and you lose homeomorphism on a measurable region of input space. Keep it, and the only thing left to manage is conditioning.
What the activation does to the metric
The pullback metric induced by the layer on a submanifold $\mathcal{M} \subset \mathbb{R}^n$ is

$$g(x) \;=\; J_f(x)^\top J_f(x) \;=\; W^\top D_\phi(Wx)^2\, W.$$

This is the metric the network thinks the data lives in. $D_\phi$ enters twice, squared, with two consequences worth naming.
Directional rescaling. Each row $w_i$ of $W$ is weighted by $\phi'((Wx)_i)^2$ in $g$. Sigmoid and tanh saturation, the leaky-ReLU slope $\alpha$, every situation in which a $\phi'((Wx)_i)$ goes small: all push the corresponding row's contribution toward zero. The "learned distance" the layer imposes is dominated by the rows whose neurons haven't saturated; the rest contribute almost nothing to perceived similarity.
Directional erasure. When $\phi'((Wx)_i) = 0$ exactly, the row drops from $g$ entirely. The metric becomes singular along directions in $\ker J_f(x)$: distances collapse to zero. This is the manifold-side picture of rank collapse, the geometric statement that the layer has stopped being a homeomorphism at $x$.
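A sketch of the pullback metric $g = W^\top D_\phi^2 W$ for a ReLU layer (dimensions, weights, and input are illustrative): any eigenvalue of $g$ at zero marks a direction the layer has erased.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 5
W = rng.normal(size=(m, n))
x = rng.normal(size=n)

d = ((W @ x) > 0).astype(float)        # ReLU: phi'(z) in {0, 1}
g = W.T @ np.diag(d ** 2) @ W          # pullback metric g = W^T D^2 W

eigvals = np.linalg.eigvalsh(g)
print("eigenvalues of g:", np.round(eigvals, 4))
print("erased directions:", int(np.sum(eigvals < 1e-10)))
```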
[Figure: a regular grid on the unit disk in input space (left) and its image under $\phi(Wx)$ in output space (right), for the activations above.]
The unit disk is the cleanest place to see it. Identity and GELU stretch it; sigmoid and tanh compress it without folding; ReLU folds the negative half-planes onto the axes and crushes entire wedges of the disk onto a 1D set. There is no separate metric tensor the network keeps somewhere — the post-activation grid spacing is the metric.
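A sketch of the disk experiment for ReLU, under illustrative assumptions (the $2 \times 2$ weight matrix is an arbitrary rotation-and-scale): sample the unit disk, push it through $\mathrm{relu}(Wx)$, and count how much of it lands on a coordinate axis, i.e. on a one-dimensional set.

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.pi / 5
W = 1.5 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])   # rotation + scale

pts = rng.uniform(-1.0, 1.0, size=(20_000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]            # the unit disk

out = np.maximum(pts @ W.T, 0.0)                         # relu(Wx)
on_axis = np.any(out == 0.0, axis=1)                     # folded onto an axis (or the origin)
print(f"{on_axis.mean():.0%} of the disk is crushed onto a 1D set")
```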
High dimensions make this worse, not better
The intuition that “with a wide enough layer, ReLU sparsification is fine” survives in width but not in pressure. Under the simplest model, with each pre-activation coordinate $(Wx)_i$ independent and symmetric about zero, the probability that at least one coordinate is zeroed is

$$\Pr\big[\exists\, i : \phi'((Wx)_i) = 0\big] \;=\; 1 - 2^{-m}.$$

Already at moderate $m$ this is indistinguishable from $1$. In a transformer hidden layer with width in the thousands, every forward pass has approximately half its coordinates zeroed at every point. Whether this turns into rank collapse depends on the structure of $W$, but the pressure toward sparsification does not disappear in the limit: it becomes the operating regime, and the analysis above stops being worst-case and becomes typical.
The point is not that ReLU is bad in high dimensions. It is that high dimensions are exactly where the geometric pathologies of pointwise activations live, and that handwaving about width does not make them go away.
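A sketch under the same simplest model (independent, symmetric pre-activations; the width and trial count are illustrative): estimate the probability that at least one coordinate is zeroed and the mean fraction zeroed.

```python
import numpy as np

rng = np.random.default_rng(5)
m, trials = 4096, 1000                       # illustrative hidden width

z = rng.normal(size=(trials, m))             # symmetric pre-activations
zeroed = z <= 0                              # coordinates ReLU kills

print("P(at least one zeroed) =", zeroed.any(axis=1).mean())   # ~1.0
print("mean fraction zeroed   =", zeroed.mean())                # ~0.5
print("formula 1 - 2**-m      =", 1 - 2.0 ** -m)                # numerically 1.0
```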
The expressivity–geometry tradeoff
Why have an activation at all? Without one, a stack of layers is the single linear map $W_L \cdots W_1$: no nonlinear class boundary, no useful expressivity. The activation buys selectivity: $\phi' \approx 1$ when a neuron's prototype matches the input and the projection passes through, $\phi' \approx 0$ when it doesn't. Selectivity is what the activation is for.
But selectivity is exactly what damages geometry. Sharper activations, with derivatives closer to the hard gate $\{0, 1\}$, separate classes better and lose more geometry. Smoother, never-zero activations preserve more geometry but suppress selectivity, leaving the layer near its linear part. The choice of activation is the choice of where to sit on this axis. ReLU is a corner solution: maximum selectivity, maximum geometric damage. GELU and softplus are middle solutions. Identity is the other corner: perfect geometry, no expressivity gain over a single linear layer.
There is no escape from this tradeoff as long as the nonlinearity is pointwise. Every dimension spent on selectivity is taken from the metric.
Reading common tricks as Jacobian regularization
Several standard practices in deep learning, usually treated as separate phenomena, are all variations on a single intervention: keep $D_\phi$ away from rank collapse and saturation.
Residual connections turn each layer into $x \mapsto x + F(x)$, whose Jacobian is $I + J_F(x)$ instead of $D_\phi W$. The identity term gives the Jacobian a floor: full rank by construction, well-conditioned as long as $\|J_F\|$ stays modest. The cumulative rank decay that plagues stacked ReLU layers becomes a perturbation around the identity instead of a multiplicative product of degenerate matrices.
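A sketch comparing the smallest singular value of a plain ReLU-layer Jacobian $D_\phi W$ with its residual counterpart $I + D_\phi W$ (the 0.3 scale on the branch is an arbitrary choice that keeps the perturbation modest):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64
W = 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)   # modest residual branch
x = rng.normal(size=n)

D = np.diag(((W @ x) > 0).astype(float))          # ReLU gate
J_plain = D @ W                                    # plain layer Jacobian
J_resid = np.eye(n) + D @ W                        # residual layer Jacobian

for name, J in [("plain", J_plain), ("residual", J_resid)]:
    s = np.linalg.svd(J, compute_uv=False)
    print(f"{name:>8s}: sigma_min = {s.min():.3f}, rank = {np.linalg.matrix_rank(J)}")
```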
Batch and layer normalization rescale the pre-activation $Wx$ to roughly zero mean and unit variance. This is exactly the regime in which sigmoid/tanh/GELU have their largest $\phi'$ and ReLU keeps a healthy active fraction. Without normalization, the pre-activation distribution drifts during training; the saturation set grows; $D_\phi$ shrinks. Normalization holds the input distribution in the activation's live zone, where $\phi'$ stays away from zero.
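A sketch of that effect on tanh, assuming a drifted pre-activation distribution (the mean of 3 and scale of 2 are arbitrary stand-ins for training drift): standardizing the pre-activations restores a large average $\phi'$.

```python
import numpy as np

rng = np.random.default_rng(7)
z_drifted = rng.normal(loc=3.0, scale=2.0, size=100_000)     # drifted pre-activations

def mean_tanh_grad(z):
    return np.mean(1.0 - np.tanh(z) ** 2)

z_normed = (z_drifted - z_drifted.mean()) / z_drifted.std()  # layer-norm-style rescale

print("mean phi' before normalization:", round(mean_tanh_grad(z_drifted), 3))
print("mean phi' after  normalization:", round(mean_tanh_grad(z_normed), 3))
```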
Weight and spectral normalization bound the singular values of $W$. They have no direct effect on $D_\phi$, but by keeping $W$'s spectrum tight they prevent the linear factor from compounding whatever damage $D_\phi$ has already inflicted.
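A sketch of spectral normalization in its simplest form, dividing $W$ by its largest singular value (computed here with a full SVD rather than the power iteration used in practice; the shape of $W$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
W = rng.normal(size=(128, 64))

sigma_max = np.linalg.svd(W, compute_uv=False)[0]   # largest singular value of W
W_sn = W / sigma_max                                 # spectrally normalized weight

print("sigma_max before:", np.linalg.svd(W,    compute_uv=False)[0].round(3))
print("sigma_max after :", np.linalg.svd(W_sn, compute_uv=False)[0].round(3))  # 1.0
```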
These are not activation replacements. They are stabilizers: they keep the architecture in the regime where $D_\phi$ is least bad. The fact that the same analysis explains three different "tricks" is the content: each one holds a different piece of the Jacobian away from a different failure mode.
Why this matters for evaluation
Downstream operations on representations (cosine similarity, $k$-nearest neighbors, clustering, retrieval) all assume the representation space carries the geometry they're reading. If the layer has collapsed rank, cosine similarity compares vectors whose angles are artifacts of the surviving directions rather than properties of the data manifold. If the layer has saturated, small distances in representation space correspond to entirely different scales of input distance depending on which coordinates were saturated where. The metric you evaluate with is not the metric the network actually exposed.
The conclusion is not "don't use cosine similarity." It is that cosine similarity (and every other downstream metric) is only meaningful when the network has preserved the geometric structure the metric is reading. Choose activations that preserve Jacobian rank where the task requires it. Control input magnitudes via normalization so the activation does not saturate. Match the evaluation metric to the geometry the architecture actually preserves. None of this is optional if the goal is to compare representations rather than collateral damage.
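A sketch of the failure mode with a hand-built example (the weights and inputs are synthetic, and $W$ is deliberately full rank so the collapse is purely ReLU's doing): two inputs that differ only along a ReLU-killed coordinate receive identical representations, so cosine similarity reports 1.0 no matter how far apart they are.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

W = np.eye(2)                          # full-rank linear part: not to blame
x1 = np.array([1.0, -1.0])
x2 = np.array([1.0, -9.0])             # far away, but only along a ReLU-killed row

r1, r2 = relu(W @ x1), relu(W @ x2)    # both equal [1., 0.]
cos = float(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
print("input distance   :", np.linalg.norm(x1 - x2))   # 8.0
print("cosine similarity:", cos)                        # 1.0
```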
The kernel alternative
The tradeoff exists because the architecture has separated geometry (carried by $W$) from selectivity (provided by $\phi$), and each piece can do its job only at the other's expense. A kernel-machine layer dissolves the separation. A symmetric positive-definite kernel $k$ provides selectivity (sharper kernels are more selective) and geometry (a Gram matrix is a metric) at the same time. The function the layer computes is a finite expansion in kernel sections,

$$f(x) \;=\; \sum_{j=1}^{p} a_j\, k(x, w_j),$$

with a closed-form RKHS norm $\|f\|_{\mathcal{H}}^2 = \sum_{i,j} a_i a_j\, k(w_i, w_j)$. There is no $D_\phi$ sitting in the middle to collapse rank or saturate the metric. The primitive is the geometry; selectivity is implemented as geometry; kernel similarity is the score.
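A sketch of such a layer using an RBF kernel as a placeholder, with a single scalar output for brevity (the prototypes $w_j$, coefficients $a_j$, and bandwidth are illustrative, not taken from the underlying paper): the layer is a finite kernel expansion, and its RKHS norm has the stated closed form $a^\top K a$.

```python
import numpy as np

rng = np.random.default_rng(10)

def rbf(x, w, gamma=1.0):
    """Symmetric positive-definite kernel k(x, w) = exp(-gamma * ||x - w||^2)."""
    return np.exp(-gamma * np.sum((x - w) ** 2, axis=-1))

n, p = 8, 16                           # input dim, number of kernel sections
Wp = rng.normal(size=(p, n))           # prototypes w_j
a = rng.normal(size=p)                 # expansion coefficients a_j

def f(x):
    """f(x) = sum_j a_j k(x, w_j): selectivity and geometry from the same kernel."""
    return a @ rbf(x[None, :], Wp)

# Closed-form RKHS norm: ||f||_H^2 = sum_{i,j} a_i a_j k(w_i, w_j) = a^T K a
K = np.exp(-1.0 * np.sum((Wp[:, None, :] - Wp[None, :, :]) ** 2, axis=-1))
print("f(x)      =", f(rng.normal(size=n)))
print("||f||_H^2 =", a @ K @ a)
```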
The pointwise activation is not the price of expressivity. It is the price of refusing to make the primitive a kernel. Pick activations with the same care you pick a loss. They are not there for nonlinearity, and they are not free.
Cite as
Bouhsine, T. (2026). Activations Are Bad for Geometry. Records of the !mmortal Data Scientist. https://tahabouhsine.com/blog/activations-are-bad-for-geometry/
BibTeX
@misc{bouhsine2026activationsarebadforgeometry,
author = {Bouhsine, Taha},
title = {Activations Are Bad for Geometry},
year = {2026},
month = {feb},
howpublished = {\url{https://tahabouhsine.com/blog/activations-are-bad-for-geometry/}},
note = {Blog post, Records of the !mmortal Data Scientist}
}
For the underlying paper
Bouhsine, T. (2026). Manifolds, Activations, and Lost Geometry: How Pointwise Nonlinearities Break the Map. Unpublished manuscript. [PDF]
BibTeX
@unpublished{bouhsine2026manifoldsactivations,
author = {Bouhsine, T.},
title = {Manifolds, Activations, and Lost Geometry: How Pointwise Nonlinearities Break the Map},
year = {2026},
note = {Unpublished manuscript}
}