digga tries to do something clever by loading all inputs' overlays, but fails
roberth opened this issue · 9 comments
Expected Behavior
- Do not load the overlays of all inputs. Not appropriate for a framework, as it's O(n) in the number of inputs, and it makes a single overlay strict in all overlays.
- Do not fail if loading an overlay of an input fails.
Current Behavior
See the steps to reproduce below.
Steps to Reproduce
```
git clone https://github.com/sweenu/nixfiles
cd nixfiles
git checkout 41c881c76fc89f44cbefaf0da430c49c4153933c
nix repl --debugger
# ignore the warning about flake-parts args, that's been fixed
nix-repl> :lf .
nix-repl> overlays
error: attribute 'stdenv' missing
nix-repl> prev
{ callPackage = «lambda @ /nix/store/233086pcwz4qx09l1xgdlzn2yh1p04zy-source/lib/exportOverlays.nix:61:23»; isFakePkgs = true; lib = { ... }; system = "fake-system"; }
```
Additional Context
`pkgs.stdenv.hostPlatform.system` is the recommended way to get a system from a `pkgs`. There's been talk of deprecating `pkgs.system`, and Nixpkgs has in large part migrated.
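For illustration, a tiny sketch of the two access paths (nothing digga-specific; `import <nixpkgs> { }` is just a stand-in for whatever `pkgs` is in scope):

```nix
let
  pkgs = import <nixpkgs> { };
in {
  recommended = pkgs.stdenv.hostPlatform.system;  # e.g. "x86_64-linux"
  legacy      = pkgs.system;                      # historically the same value for non-cross pkgs; discouraged
}
```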
You might want to do something like `fakePkgs = mapAttrs (k: v: throw "exportOverlays: not allowing access to the real Nixpkgs for X reason, when the overlay tried to access ${k}") realPkgs` to improve the UX a tiny bit. Might break `if pkgs ? foo then seq pkgs.foo bar` type of logic, though.
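A minimal sketch of that suggestion, with `import <nixpkgs> { }` standing in for the real package set and a made-up overlay for illustration:

```nix
let
  realPkgs = import <nixpkgs> { };

  # Same attribute names as the real pkgs, but every value throws when forced.
  fakePkgs = builtins.mapAttrs
    (k: _: throw "exportOverlays: not allowing access to the real Nixpkgs; the overlay tried to access `${k}`")
    realPkgs;

  # Hypothetical overlay with static keys, only for illustration.
  overlay = final: prev: { myTool = prev.callPackage ./my-tool.nix { }; };
in
  # Enumerating the keys works, because attrNames never forces the values...
  builtins.attrNames (overlay fakePkgs fakePkgs)  # => [ "myTool" ]
  # ...but `if prev ? foo then builtins.seq prev.foo x else y` style logic
  # inside an overlay would now hit the throw.
```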
Your Environment
```
[user@system:~]$ nix run nixpkgs#nix-info -- -m; nix flake metadata
```
Nix 2.13.0pre20230103_1534133.
Checkpoint Note: This originates from a desire to pull the overlay's keys out of an overlay. Before that, it was `(overlay null null)`, in the futile hope that the first argument would not be used to construct the keys, which makes their evaluation strict.
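For context, a minimal sketch (not digga's or FUP's actual code) of why the null trick only works while an overlay's keys are static:

```nix
let
  staticOverlay = final: prev: { hello = prev.hello; };
  dynamicOverlay = final: prev: {
    # The *key* itself depends on prev, so enumerating the names forces prev.
    ${if prev.stdenv.isLinux then "hello-linux" else "hello"} = prev.hello;
  };
in {
  ok = builtins.attrNames (staticOverlay null null);  # => [ "hello" ]
  # builtins.attrNames (dynamicOverlay null null)
  #   => error: value is null while a set was expected
  # builtins.attrNames (dynamicOverlay fake fake), where `fake` has no stdenv,
  #   => error: attribute 'stdenv' missing  (the failure reported above)
}
```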
I've read the file more than twice now, and I can't figure out why that code exists, let alone why it behaves the way it does.
Why not just name overlays by the names they already have?
Why no error when names collide?
Why remove the overlay's effect from final?
Why does this problem even show up? https://github.com/sweenu/nixfiles/blob/main/flake.nix doesn't seem to do anything flake-related, except importOverlays, which seems ok.
I'm not sure if I actually want to know.
I just barely remember the motivation, to be honest. But at that time, it might have been related to an observation that upstream overlay providers (when optimizing for their own NixOS config) don't design for the granularity that one might want to consume.
In that vein, this may have been an attempt to "unpack the bundle" downstream.
But that's about as much context on the motivation as I can still provide at this point.
Here is the motivation.
And here it's even clearer. This is forensics, indeed.
The goal has been to automatically export everything that is local to the repo to downstream users, nicely namespaced per channel, so that downstream has an idea of which channel an overlay is supposed to work on ("version bands light").
I've long switched myself to divnix/std (also for NixOS, via divnix/hive), which has a clearer structure for importers and no predefined opinion for exports. It doesn't even recognize the Nix CLI as "force majeure", and regards it as just another client application.
And since the (client-facing) flake schema isn't also (mis-)used via inputs.self as the "global staging area" anymore (an explicit design choice in std), it's actually trivial to tell "own art" apart from input art.
> Why not remove the overlay's effect from final?

I don't think I actually understand this.

I noticed, though, that there is a tryEval block in FUP which we may assume was originally intended to catch these problems, but maybe it lacks the attrNames to even trigger evaluation of the keys at all. 🤷
> Why not remove the overlay's effect from final?
>
> I don't think I actually understand this.

Accidental negation; edited to fix that.
> Here is the motivation.

From the linked comment:

> can't detect overlays owned by self

Then it seems like the mistake was pulling the overlays out of their context without remembering anything about their context. I don't understand why you'd lose track of which overlay is which.

> I've long switched myself

Unless you're interested in fixing a bunch of expressions you've switched away from, I'm going to leave it here.
> Then it seems like the mistake was pulling the overlays out of their context without remembering anything about their context.

Yes! That's not good design, and I remember clearly how and why I grew an aversion to accessing self.* as the "global staging area" ever since. To the point that I made it impossible to do in std 😊

I'll keep this on my todo; after reloading context from the back of my memory, maybe there's actually a quick fix to flake-utils-plus possible.
Attempted fix: gytis-ivaskevicius/flake-utils-plus#126 cc @sweenu
`attrNames` has the same strictness behavior as `isAttrs`, so I would expect this code to be equivalent.

cc @infinisil: with `tryEval` in the language, this would make a lazier `isAttrs` discernible from the current semantics and lead to a backwards incompatibility.
@blaggacao getting the names does seem like a better behavior for this hypothetical reason (although the rest of the file is still inexcusable imo)
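For reference, a small repl-style sketch of the strictness claim above: both force only the set's keys (weak head normal form), never its values:

```nix
let
  s = { ok = throw "the value is never forced"; };
in {
  a = builtins.isAttrs s;     # => true
  b = builtins.attrNames s;   # => [ "ok" ]
  # A set whose keys themselves fail to evaluate fails for both:
  #   builtins.isAttrs   { ${throw "key"} = 1; }   # error: key
  #   builtins.attrNames { ${throw "key"} = 1; }   # error: key
}
```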
> attrNames has the same strictness behavior as isAttrs, so I would expect this code to be equivalent.

Ah! 😏 Then it's strange that it doesn't error on the `tryEval` in the first place. The whole file should be a no-op if the `tryEval` doesn't succeed, iiuc.
`abort` and other "programming errors" can't be caught; the same goes for most I/O, unless it's custom code that uses `throw` after a `pathExists`, for example. We're not supposed to rely on `tryEval` because "it's a DSL", from what I've learned before. Not sure I agree, but Nix works very well without `tryEval`. Making some runtime aspects unobservable to the code gives a bit more freedom to refactor and change language semantics, as exhibited in the comment to @infinisil.
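To make that concrete, a small sketch of what `builtins.tryEval` does and doesn't catch (as I understand the current semantics; worth double-checking in a repl):

```nix
{
  caughtThrow  = builtins.tryEval (throw "boom");       # => { success = false; value = false; }
  caughtAssert = builtins.tryEval (assert false; "x");  # => { success = false; value = false; }
  plainValue   = builtins.tryEval "ok";                 # => { success = true;  value = "ok"; }
  # Not catchable -- these abort the whole evaluation instead:
  #   builtins.tryEval (abort "nope")
  #   builtins.tryEval ({ }.stdenv)   # "attribute 'stdenv' missing", as in this issue
}
```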