tc39/proposal-pipeline-operator

Effect of Hack proposal on code readability

arendjr opened this issue · 299 comments

Over the past weekend, I did some soul searching on where I stand between F# and Hack, which led me into a deep-dive that anyone interested can read here: https://arendjr.nl/2021/09/js-pipe-proposal-battle-of-perspectives.html

During this journey, something unexpected happened: my pragmatism had led me to believe Hack was the more practical solution (even if it didn't quite feel right), but as I dug into the proposal's examples, I started to dislike the Hack proposal to the point where I think we're better off not having any pipe operator than having Hack's.

And I think I managed to articulate why this happened: as the proposal's motivation states, an important benefit of using pipes is that it allows you to omit naming of intermediate results. The F# proposal allows you to do this without sacrificing readability, because piping into named functions is still self-documenting as to what you're doing (this assumes you don't pipe into complex lambdas, which I don't like regardless of which proposal). The Hack proposal's "advantage" however is that it allows arbitrary expressions on the right-hand side of the operator, which has the potential to sacrifice any readability advantages that were to be had. Indeed, I find most of the examples given for this proposal to be less readable than the status quo, not more. Objectively, the proposal adds complexity to the language, but it seems the advantages are subjective and questionable at best.

I'm still sympathetic towards F# because its scope is limited, but Hack's defining "advantage" is more of a liability than an asset to me. And if I have to choose between a language without any pipe operator and one with Hack's, I'd rather have none.

So the main question I would like to raise is: is there any objective evidence on the Hack proposal's impact on readability?

Actually, it's bad practice to tweak the binary operator itself in a way that essentially breaks its algebraic structure. Without messing up the math, we can compose the "advantage" on top of the basic operator if we want to.

Mathematically,
HackStyle = MinimalStyle * PartialApplication("advantage" you feel)
where * is an operator of function composition.
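This composition claim can be sketched in plain JavaScript today. The helper names below are illustrative, and since neither pipe proposal is implemented, the pipe lines appear only in comments:

```javascript
// A runnable sketch of the claim above: an F#-style pipe step combined with
// manual partial application reproduces a Hack-style step without changing
// the operator itself.
const add = (a, b) => a + b;

// Hack style (hypothetical syntax):      5 |> add(^, 10)
// Minimal style + partial application:   5 |> addTen
const addTen = (x) => add(x, 10); // the partially applied "advantage"

addTen(5); // 15
```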

@kiprasmel made an excellent presentation; have you read it?
#202 (comment)

With it (partial application + F# pipes), you get the same functionality as you'd get with Hack, but 1) without the need for an additional operator |>>, 2) the partial application functionality would work outside the scope of pipeline operators (meaning the whole language), which is great, as opposed to the special token of Hack that only works in the context of pipeline operators, and 3) all the other benefits of F# over Hack.

Another insight
#205 (comment)
@SRachamim

Hack style is a bad proposal that tries to combine two great proposals: F# |> + ? partial application.
For me there's no question. F# is the only way to go.

In software design, we must know:

  • a function should perform only a single task ("one task at a time")
  • use function composition

or more generally, the KISS principle.

When we know the two are equal:

  • HackStyle
  • MinimalStyle * PartialApplication

The latter should be chosen for robustness. And finally, to echo the original question:

So the main question I would like to raise is: is there any objective evidence on the Hack proposal's impact on readability?

I've just posted
Concern on Hack pipes on algebraic structure in JavaScript #223
This issue actually mentions the impact on readability, so please read it if you have time.

@stken2050 Thanks for pitching in. So what I got from your post is that the usage of lambda expressions on the RHS of the pipe operator would open up the possibility of accidentally referencing variables that happen to be in scope, and in the worst case developers might try to abuse that scope for implicit state management between invocations. That's indeed a good argument against the usage of lambdas there, and it is also another argument against the Hack proposal specifically, as its ability to embed arbitrary expressions makes the likelihood of such maintainability issues even larger. Hope I understood correctly :)

So what I got from your post is that the usage of lambda expressions on the RHS of the pipe operator would open up the possibility of accidentally referencing variables that happen to be in scope, and in the worst case developers might try to abuse that scope for implicit state management between invocations.

Actually, yes, and I'm very sorry that I misunderstood your context.

In any case, I firmly believe that if we pursue some advantage on top of the operator, we should compose it: first we provide the minimal and pure operator |>, then we build advanced features on top of it, which might become |>>.

@arendjr One clarification from your blog post: the Hack proposal doesn't introduce +> as part of the current proposal. That's a potential follow-on proposal. Hack also doesn't kill the partial application proposal, which could advance as an orthogonal proposal (see #221 for some info on its advancement issues).

Otherwise, the only somewhat-amusing note is that I find all of your examples with Hack pipe an improvement over the status quo versions. The only one I'd speak to specifically is the npmFetch version, which I much prefer over the baseline. If that were broken onto multiple lines, rather than one single line, it would separate the data transformation work being done in the first two steps from the API call on the third step in a way I find much clearer & easier to understand.

Beyond that, I appreciate the examples & exploration. clip is a great example of an API written in more mainstream idiomatic JavaScript that would be harmed by the F# proposal, as it would require either a pipe-specific API or arrow function wrapping.

@arendjr One clarification from your blog post: the Hack proposal doesn't introduce +> as part of the current proposal. That's a potential follow-on proposal. Hack also doesn't kill the partial application proposal, which could advance as an orthogonal proposal (see #221 for some info on its advancement issues).

Yes, good clarification!

Otherwise, the only somewhat-amusing note is that I find all of your examples with Hack pipe an improvement over the status quo versions. The only one I'd speak to specifically is the npmFetch version, which I much prefer over the baseline. If that were broken onto multiple lines, rather than one single line, it would separate the data transformation work being done in the first two steps from the API call on the third step in a way I find much clearer & easier to understand.

For that example specifically, I would rather write it like this:

const { escapedName } = npa(pkgs[0]);
const json = await npmFetch.json(escapedName, opts);

No pipe necessary at all, and in my opinion it’s clearer than both options given in the example. It also achieves the separation between processing and actual fetching you were aiming for.

And this is getting to the heart of why I feel the Hack proposal might be actively harmful. It encourages people to “white-wash” bad code as if it’s now better because it uses the latest syntax. But not only do I not find it an improvement, we have better options available today.

And people will mix and match this with nesting at will. It will not rid us of bad code, but it will open up new avenues for bad code we had not seen before. That’s why in the end I think we’re better off without than with.

For the F# proposal, I don’t see so much potential for abuse, hence why I’m still sympathetic to it. But if we could prohibit lambdas from even being used with it, I would probably be in favor. It’d make it even less powerful, but I do believe this is a case where less is more.

Beyond that, I appreciate the examples & exploration. clip is a great example of an API written in more mainstream idiomatic JavaScript that would be harmed by the F# proposal, as it would require either a pipe-specific API or arrow function wrapping.

Yeah, absolutely. I would gladly make the adjustment if the F# proposal were to be accepted, but it’s true it’s a pain point. I do think once libraries had their time to adjust, we might come out for the better, but I cannot deny it will cause short-term friction.

If the Hack proposal were accepted however, things might initially appear more smooth. But once we have to deal with other people’s code that uses pipes willy-nilly, I fear we may regret it forever.

And this is getting to the heart of why I feel the Hack proposal might be actively harmful. It encourages people to “white-wash” bad code as if it’s now better because it uses the latest syntax. But not only do I not find it an improvement, we have better options available today.

And people will mix and match this with nesting at will. It will not rid us of bad code, but it will open up new avenues for bad code we had not seen before. That’s why in the end I think we’re better off without than with.

This is a reasonable take, even if I disagree with the cost/benefit calculation.

For the F# proposal, I don’t see so much potential for abuse, hence why I’m still sympathetic to it. But if we could prohibit lambdas from even being used with it, I would probably be in favor. It’d make it even less powerful, but I do believe this is a case where less is more.

I think this makes the use-case for F# unreasonably narrow which would really limit the utility of the operator.

Yeah, absolutely. I would gladly make the adjustment if the F# proposal were to be accepted, but it’s true it’s a pain point. I do think once libraries had their time to adjust, we might come out for the better, but I cannot deny it will cause short-term friction.

My general perspective is forcing mainstream JS to adapt to this API will be more painful than asking functional JS to adapt to Hack. Note that if you were to adapt to the curried style you reference in the blog post, you would no longer be able to use clip outside of a pipe without a very odd calling convention (clip(length, options)(text)) or using a curry helper.

Note that if you were to adapt to the curried style you reference in the blog post, you would no longer be able to use clip outside of a pipe without a very odd calling convention (clip(length, options)(text)) or using a curry helper.

My idea was to just check the type of the first argument: if it's a number, return a lambda; if it's a string, continue as before. It's JS, so why not :) But alternatively, I could just ask the FP folks to use import { clip } from "text-clipper/fp" and expose the new signature there.
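The type-check overload described above could be sketched like this. This is purely illustrative: the `clip` behavior here is a stand-in, not text-clipper's actual API.

```javascript
// Hypothetical sketch: one function that is data-first when called with a
// string, and returns a pipe-friendly unary lambda when called with a number.
function clip(first, second, options) {
  if (typeof first === "number") {
    // Called as clip(length, options): return a unary function for piping.
    const length = first;
    const opts = second;
    return (text) => clip(text, length, opts);
  }
  // Called as clip(text, length, options): clip the string directly.
  const text = first;
  const length = second;
  return text.length > length ? text.slice(0, length - 1) + "…" : text;
}

clip("hello world", 5);   // direct, data-first call
clip(5)("hello world");   // curried call, usable in an F#-style pipe
```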

I'm still surprised to see claims against the minimal/F# style, saying that it will force curried style. It won't, just as .then and .map won't. Why don't we introduce ^ in the then and map methods as well?

The PFA proposal is a universal solution for those who worry about arrow-function noise in all three cases: map, then, and pipe. If PFA is not ready, and we don't want minimal/F# without PFA, then let's wait instead of introducing irreversible Hack pipes.

Pipe, in any resource you'll find, is defined as a composition of functions. From UNIX to Haskell, that's what it is. I really think we should put this proposal on hold ASAP and continue discussing it.
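For reference, pipe-as-function-composition can be sketched as a userland helper today (the `pipe` name and helpers are illustrative):

```javascript
// A minimal userland pipe: feed a value through a list of unary functions,
// left to right.
const pipe = (...fns) => (x) => fns.reduce((acc, fn) => fn(acc), x);

const double = (n) => n * 2;
const increment = (n) => n + 1;

const doubleThenIncrement = pipe(double, increment);
doubleThenIncrement(5); // 11
```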

@SRachamim The loss of tacit programming is explicitly a complaint against Hack (see #215 and #206). If the intention is to just use lambdas for everything, then F# is at a significant disadvantage vs Hack.

See #221 for concerns about PFA's advancement.

I think it's worth trying to define what "readability" can mean across paradigms and across time.

Functional programmers worry about mathematical consistency: piping g into f is the same as f(g).

  • In F#, this looks like: g |> f
  • In Hack, it's g |> f(^)

You really don't have to be on one side or the other to appreciate that Hack robs the JavaScript FP community of a certain kind of transferable mathematical thinking. "Readability" to them means something very different to "avoiding temporary variables". F# affords another kind of functional composition to JavaScript.

Should it have it? JS is of course a multi-paradigm language, but despite the current declarative push it's no secret that it's still very much used in an imperative way, so perhaps Hack suits it "better"?

But JavaScript also has -- by pure luck of history thanks to its original author -- first-class functions. It's baby steps away from being completely FP friendly, and it has an active FP community that seems acutely aware of this.

I'm really trying to avoid falling on one side or the other here, but after that throat-clearing I contend that:

  • Hack solves "soft" readability issues for imperative steps that are better solved with existing syntax and patterns, especially the "unrolling" of logic into dedicated functions / across temporary-variables
  • F# solves "hard" FP readability issues and opens-up more in terms of transferable knowledge from the discipline of mathematics / other FP languages

In other words, Hack is the readability ceiling for imperative programming, whereas F# raises the ceiling for compositional readability for those working in FP, either now or in the future.

@SRachamim The loss of tacit programming is explicitly a complaint against Hack (see #215 and #206). If the intention is to just use lambdas for everything, then F# is at a significant disadvantage vs Hack.

See #221 for concerns about PFA's advancement.

Lambda is an (apparently scary) name for something that we're all comfortable with: a function. Functions are already everywhere: class methods, the callbacks to map, filter, reduce, and then. This is a simple concept. But more importantly, it's the essence of the pipe concept.

Every concern you have about the minimal/F# proposal can be applied anywhere else you use a function. So why don't we tackle them all with a universal PFA solution?

Why don't we use promise.then(increment(^)) instead of promise.then(increment)? You see, the Hack style is a verbose proposal that tries to tackle two things at once.

And as I said, if PFA is stuck, then that's not a good reason to introduce Hack. We should either wait, or avoid pipes altogether (or introduce the minimal/F# style anyway).
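The `then` comparison above is runnable today; `increment` is an illustrative helper:

```javascript
// Point-free style: pass the named function itself, versus wrapping it in an
// explicit lambda. Both resolve to the same value.
const increment = (n) => n + 1;

const pointFree = Promise.resolve(1).then(increment);
const withLambda = Promise.resolve(1).then((n) => increment(n));
```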

Functional Programmers worry about mathematical consistency: g -> f is the same as f(g).

Does this imply that Elixir isn't mathematically consistent, because x |> Enum.map(fn num -> 1 + num end) is the same as Enum.map(x, fn num -> 1 + num end)?

Elixir pipes allow you to pipe into a lambda or a named function. Whether it pipes to the first argument or the last is an interesting stylistic question, and I get what you're referring to. In JavaScript, our functions can be thought of as taking a tuple. This adds the wrinkle of whether you're injecting into the first value of the tuple or the last, since there is an isomorphism between (a * b * c) -> d and a -> b -> c -> d. Elixir is still mathematically consistent with the general idea, in my opinion, because we have various flip functions which could, if need be, take an a -> (b * c) -> d and turn it into (b * c) -> a -> d. So they're algebraically equivalent, and we don't really need to worry about it: if we get the a -> (b * c) -> d, we can just use a flip function to get (b * c) -> a -> d, and vice versa.
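The flip idea above can be sketched in plain JavaScript (all names here are illustrative):

```javascript
// Derive a data-last, pipe-friendly shape from a data-first function,
// rather than rewriting the function itself.
const flip = (fn) => (...rest) => (first) => fn(first, ...rest);

// Data-first shape, like Elixir's Enum.map or most JS APIs:
const map = (arr, fn) => arr.map(fn);

// Data-last shape, suitable for an F#-style pipe:
const mapLast = flip(map);

mapLast((n) => n + 1)([1, 2, 3]); // [2, 3, 4]
```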

What is a concern is whether I have to learn a new construct to reason about the new operator. The functional community will simply avoid placeholder pipes, because they're pipes that don't take functions, and we think in terms of passing functions around. You can call it superstitious or close-minded, but it is what it is. I don't think the FP community will ever use placeholder pipes, because they are difficult for us to reason about in terms of edge cases.

I actually agree with @arendjr in that I would prefer to have no pipe rather than Hack pipes, and many, many functional devs I have talked to this week have agreed with that. As excited as we are about pipes, we want to avoid things which will make code harder to read and harder to reason about in terms of potential consequences. Functions and lambdas are easy to understand and read; expressions, by contrast, are very open-ended in what they can accomplish and what they mean. Expressions are very powerful, but we FP devs often explicitly trade power for the ability to think clearly. In my opinion JavaScript is already very open-ended, and some creative constraints can make things easier to use.

Whether it pipes to the first value or the last value is an interesting stylistic question, and I get what you're referring to.

Yeah, my general point is that whether the syntax of a given language specifically matches the syntax of the math is less important than whether it adheres to the values/principles of that math.

we think in terms of passing functions around.

Nothing about Hack pipe prevents you from passing functions around. x |> f vs x |> f(^) is immaterial to the underlying math, and is immaterial to your desire to just do function passing. x |> f implies that f is called; x |> f(^) makes the call explicit. It doesn't change the actual behavior.

Functions and lambdas are easy to understand and read, expressions by contrast are very open ended in what they can accomplish and what they mean.

The body of a lambda, specifically arrow functions in JavaScript, is an expression. Most of the open-ended things you can do with a Hack pipe you can do in the body of a lambda (save await / yield); Hack just enables you to do them in the same scope as the pipeline, instead of the nested scope of the lambda/arrow. Neither pipe prevents you from using expressions, but because expressions are so central to the language, making them easier to use would be significantly more beneficial to the broader language & ecosystem than it would be harmful to your ability to chain together function calls.

Neither pipe prevents you from using expressions, but because expressions are so central to the language, making them easier to use would be significantly more beneficial to the broader language & ecosystem than it would be harmful to your ability to chain together function calls.

[Citation needed]
I’m literally arguing the other way around. I even proposed it might be better to bar F# pipes from piping into lambdas at all, to prevent exactly this (which you rejected). I’m okay if you want to argue that this gives neither pipe proposal a reason to exist, but please don’t use this as an argument for why Hack would supposedly be better.

The problem is that in real life I have to deal with other people's code, and expressions are pretty open-ended. If you constrain expressions to exactly what a lambda does, that's great, but now I have a lambda that is not obviously a lambda, which is confusing. Therefore, yes, in a codebase with myself alone, Hack pipes are fine, but in real life I have to deal with my peers, and I will tell them "Please avoid the placeholder pipes, they are confusing and may not work as you think," because in practice people will do all kinds of stuff that is, well, confusing.

My problem with placeholder pipes is explicitly that they are too open-ended. The very thing you espouse as a killer feature makes it, to me, a DOA feature. After all this discussion, for example, I still don't know if anyone fully understands the scope of an expression pipe. If my coworker declares a var in an expression pipe, is that global, or local like a function? Will my coworker know? They will not, because I don't know, and neither will most people. It's an entirely new construct with new expectations that could frankly be anything. That's why I don't think this proposal works well for JavaScript: it's actually subtly complicated, and JavaScript is already quite complex.

Remember, even if you can give me a nice succinct answer to this question, consider that people will forget, because it's unintuitive for expressions to have a scope. Do those placeholder expressions have a this? Can I add attributes to one, or treat it like an object? There are so many ways that people will use placeholder pipes to make weird cursed code, and we both know it. By contrast, with function pipes I can just refactor any sufficiently cursed lambda into a named function and know that the scope will be exactly the same, period. If it's a lambda, I can just say, "It's a lambda," and they get it.
I don't think awaiting mid-pipe is actually a good feature, and I think it will just be hard to read, because awaits normally go at the beginning of the line. So @arendjr, if you're worried about cursed lambdas, they would be trivial to refactor into named functions; there would probably even be an action in VS Code and your favorite editor to do that. Whether such a refactor is even strictly possible with expressions is frankly unanswerable, especially when you consider all the things which may yet be added to the language.

[Citation needed]

None of the built-in types or Web APIs are designed around unary functions. Even tools as basic as setTimeout are going to require you to wrap them in a lambda in order to use them in F# pipes. If we're trying to introduce new syntax to the language, requiring all of those APIs to be wrapped in functions to be useful significantly impairs the usefulness of the operator.


If my coworker declares a var in an expression pipe, is that global, or local like a function?

It would be a SyntaxError because var is a statement, not an expression.

@mAAdhaTTah You are worried about unary functions on a pipe, but why aren't you worried about unary functions on every other method, like then? Why is it OK to accept a lambda there?

If you worry about unary functions, why not push the PFA proposal, which will allow you to use setTimeout without a lambda in both a pipe and a then (and actually everywhere)?

This claim is not a reason to push the Hack proposal.

Why is it OK to accept a lambda there?

then is a method; it's not new syntax.

If you worry about unary functions, why not push the PFA proposal

I already referred you to #221 to discuss the issues PFA had advancing.

@mAAdhaTTah New syntax must not imply inventing new, unexpected ways of reasoning. A great syntactic sugar is one which leverages existing, familiar constructs. You can describe the minimal proposal as a |> f === f(a). It's simple, and doesn't require anyone to learn anything new. It's just |>. Compare it with the Hack proposal, which is an entirely new concept, not only in the JavaScript world, but in programming in general!

And again, some variation of PFA is the solution to your concern about curried/unary functions. Don't let that concern leak into other proposals: where the PFA proposal stands should not affect the nature of a pipe.

Pipe should be a simple function composition, whatever function means in JavaScript. If you feel functions in JavaScript need another syntactic sugar - that's a different proposal!

All of the built-in types, all of the Web apis, none of them are designed around unary functions. Even tools as basic as setTimeout are going to require you to wrap it in a lambda in order to use it in F# pipes. If we're trying to introduce new syntax to the language, requiring all of those APIs to wrapped in functions to be useful significantly impairs the usefulness of the operator.

You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place.

What you’re suggesting is a new syntax for any call expression in the language, without asking the question of whether we should want that. Adding two arbitrarily interchangeable syntaxes for the same thing is not a benefit to me; it just complicates things. It complicates the language for newcomers, it invites endless discussion about which style should be preferred when and where, it promotes hard-to-read code for which better alternatives exist today, and ultimately it has very little to show for it.

Someone on Reddit replied to me:

“I don't look forward to trying to debug a broken npm package and see it's written with pipes for every function call.”

And yet that seems to be exactly what you’re encouraging.

So I did ask this question to myself, and I think that No, we should not want this.

And yet I keep reading statements from champions making sweeping generalizations that this is “beneficial to the broader language” or “beneficial to everyone” and I don’t feel represented by this. And frankly I suspect there might be a large underrepresented, silent majority that will not feel their concerns were represented if this proposal is accepted.

I am confident there are a lot of unexplored edge cases of expressions which my coworkers or teammates will use, and abuse. For example, okay, so these are expressions, not statements; I get it now. But what about this? What about super? What about properties? If we consider the this scope of a pipe, is that even well defined in the spec? There are just countless unanswered questions, in my opinion, and what we gain is the ability to write riddles to punish our future selves for hubris.

Not to mention that from here on, any time an expression is added to the language, we'd have to carefully consider how it should work with placeholder pipes, what it can and cannot do. Placeholder pipes are, in my opinion, to be avoided, as we should not trust that future features will behave well with them, because the design space is far too large and novel. An awkward case I just thought of with expression pipes is setting a property, which is an expression, so the value that was set gets piped through, counterintuitively.

var x = 10 |> this.whatever = 'yeah'
// x is 'yeah' and yeah you can easily forget to put in the placeholder entirely and imo it is not at all obvious. 
var y = {lets: 'go' } |> ^.lets = 'hooray'
//y is 'hooray' and is no longer an object

This is extremely unintuitive, and will definitely blend the mind of a newbie into fresh paste. By contrast, in a function or lambda it's all about returning stuff; there's a clearly communicated path, and you can even use braces and a return statement to be explicit (and I often do). In the first example, I don't even know what I'd expect: probably either 10 to be passed through unaffected, or undefined. You'd better believe my coworkers who don't understand pipes will put every valid expression in there. At least I kind of understand functions; they are simpler, while expressions can be so many things. There is pre-existing guidance on how and when to use functions; I have no such guidance for placeholders.
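For what it's worth, the surprising result in the second example follows from existing JavaScript behavior, observable today without any pipe:

```javascript
// An assignment expression evaluates to the assigned value, which is exactly
// why `^.lets = 'hooray'` would pipe "hooray" onward instead of the object.
const obj = { lets: "go" };
const y = (obj.lets = "hooray"); // the assignment expression yields "hooray"
// y is "hooray", and obj.lets is now "hooray" too
```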

noppa commented

var x = 10 |> this.whatever = 'yeah'

From the readme

A pipe body must use its topic value at least once. For example, value |> foo + 1 is invalid syntax, because its body does not contain a topic reference. This design is because omission of the topic reference from a pipe expression’s body is almost certainly an accidental programmer error.

So it would be pretty difficult for this mistake to go unnoticed. Any sort of linter would point it out, and trying to run the code would just crash with a syntax error.

var y = {lets: 'go' } |> ^.lets = 'hooray'

I don't see how the F# version is any different

var y = {lets: 'go' } |> _ => _.lets = 'hooray'

I'm also wondering where y'all find these crazy coworkers who bend over backwards to write code as unintelligible as possible, and can't be leveled with.

The difference is that people expect a lambda to return a value. What value are you intending to return there? By contrast, an expression can be a lot of things. The root of it is, you're treating two very different things as if they're equivalent. I guarantee you that when this gets released, we will see ^.lets = 'hooray' on Stack Overflow with placeholders, and we won't with lambdas, because lambdas have been in the language for several years now, and function expressions for several years before that. People have a general intuition of what is expected with function expressions and lambdas; that intuition will have to be learned from scratch with placeholder expressions.

Specifically, they're your bootcamp grads, fresh-out-of-college kids, mom-and-pop shops, and sometimes your boss who mostly writes Java/C# but thinks JavaScript is easy. You can explain things to them, but then someone new will come along with the same misconception, or perhaps the same person who forgot, so I'd really appreciate it if we made sure to value making things intuitive the first go-around. If you think I'm being absurd, okay, but it's going to happen, and it would be a lot less likely to happen if we leveraged an existing construct that is more generally understood.

You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place.

Right, it's not like when they added OOP syntax sugar to JS they expected everyone to wrap all their existing code in classes, or everyone to use classes for all new code. It's an option if you want to write that way. They didn't pull back from OOP because devs would have to write code differently to interface with it.

Same goes for adding FP syntax sugar: why do we step back from true FP principles like function composition and currying because someone might have to call a curried function or wrap their code? If you don't want to use the features, you don't have to! But don't kneecap the language with Hack just because using F# would require writing new/different code! This argument doesn't make much sense.

You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place.

I'm not. I am arguing that the universe of things you can put in a Hack pipe is far greater than the universe of things you can put in an F# pipe. I would by extension argue that the universe of things that benefit from the Hack pipe is far greater than that of the F# pipe.

What you’re suggesting is a new syntax for any call expression in the language, regardless of asking the question whether we should want that.

I'm not sure what you mean by this, especially insofar as x |> f(^) looks familiar to anyone who has written f(x). The whole goal of Hack pipe is to avoid a novel call expression.

“I don't look forward to trying to debug a broken npm package and see it's written with pipes for every function call.”

Amusingly enough, this is how I feel every time I pull up a codebase that uses Ramda! I used that library extensively for like 2 years; it's an absolute nightmare to debug because there's no reasonable place to put breakpoints, and it's impossible to explain to non-functional coders because there are too many concepts to explain. So I don't use it anymore, which is a shame, cuz there are significant benefits to the library, but they're impossible to integrate without taking on a lot of cognitive overhead.

With syntax we can put a breakpoint in between individual steps in the pipe, similar to how you can put a breakpoint after the arrow in a one-line arrow function. Including an explicit placeholder means I can hover it like a variable & get its type in VSCode. It's easier to debug when it's integrated into the language proper, and we still get many of the benefits of Ramda's pipe / compose.
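As an aside, the common userland workaround for the breakpoint problem is a tap helper (the `pipe` and `tap` names below are illustrative), though it doesn't match the ergonomics of syntax-level support:

```javascript
// `tap` runs a side effect on the value flowing through a pipeline and
// passes the value along unchanged, giving each step a spot for a
// breakpoint or console.log.
const pipe = (...fns) => (x) => fns.reduce((acc, fn) => fn(acc), x);
const tap = (fn) => (value) => { fn(value); return value; };

const result = pipe(
  (n) => n + 1,
  tap((v) => { /* set a breakpoint or console.log(v) here */ }),
  (n) => n * 2,
)(4);
// result === 10
```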

Let me provide an illustrative example, loosely modeled on an actual example of something I had to do at work. At a high level, I have some user input that I need to combine with some back-end data which I then need to ship off to another API. Using built-in fetch, the code looks something like this:

// Assume this is the body of a function
// `userId` & `input` are provided as parameters
const getRes = await fetch(`${GET_DATA_URL}?user_id=${userId}`);
const body = await getRes.json();
const data = {...body.data, ...input }
const postRes = await fetch(`${POST_DATA_URL}`, {
  method: "POST",
  body: JSON.stringify({ userId, data })
});
const result = await postRes.json();
return result.data

This sort of data fetching, manipulation, & sending (all async) is a very common problem to solve. Any of the fetch calls could be other async work (writing to the file system, asking the user for input from a CLI, requesting bluetooth access (!), interacting with IndexedDB) – all of these APIs are built around promises. Sure, I could linearize it with intermediate variables as I did but these variables are basically useless to me except as inputs to the next step (I actually wrote res as the first variable and had to go back & change it once I got to the second fetch). There was an extensive discussion about this in #200 (see my comment here for another real-world example).

Sure, maybe in the real world I'd use axios, and maybe I'd wrap some of these functions up into an API client. Even then, we'd have something like:

const fetchedData = await api.getData(userId);
const mergedData = { ...fetchedData, ...input };
const sentData = await api.sendData(userId, mergedData);
return sentData;

Compare all of that to Hack:

// The `fetch` example:
return userId
  |> await fetch(`${GET_DATA_URL}?user_id=${^}`)
  |> await ^.json()
  |> { ...^.data, ...input }
  |> await fetch(POST_DATA_URL, {
    method: "POST",
    body: JSON.stringify({ userId, data: ^ })
  })
  |> await ^.json()
  |> ^.data;

// Or the axios example:
return userId
|> await api.getData(^)
|> { ...^, ...input }
|> await api.sendData(userId, ^);

This makes it significantly clearer to see, step-by-step, what's going on. Admittedly, I could combine steps (maybe the data merging can happen inline with the API call; it would save me from writing mergedData) or use better variable names (the data suffix everywhere annoys me), but at its core, this is what the pipe operator is for: linearizing a nested sequence of expressions.

There are even like small cases, where I've got a small function call with maybe an expression in it, and then I need to call one other function after it to post-process. I would love to be able to just do |> Object.keys(^) (this is a common one for me) after evaluating that function + expression, rather than having to put Object.keys in front and wrap everything, which now puts the order-of-operations in the wrong direction – Object.keys reads first, but evaluates last. I'm not intending to use it everywhere, but this one-off case is more readable than the base case because the code now evaluates in the same order it's read.
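For concreteness, here is the status-quo shape being described; this is a hedged sketch where getEntries is a made-up stand-in for "a small function call with maybe an expression in it":

```javascript
// Hypothetical helper standing in for "a small function call
// with maybe an expression in it":
const getEntries = (prefix) => ({ [`${prefix}_a`]: 1, [`${prefix}_b`]: 2 });

// Status quo: Object.keys reads first but evaluates last.
const keys = Object.keys(getEntries("user")); // ["user_a", "user_b"]
```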

I guarantee you when this gets released we will see ^.lets = 'hooray' on stack overflow with placeholders, and we won't with lambdas, because lambdas have been in the language for several years now, and function expressions several years before that.

Oh yeah, you'd never see that with arrow functions.

@mAAdhaTTah It is already easy to linearize your code example using .then():

return fetch(`${GET_DATA_URL}?user_id=${userId}`)
  .then((response) => response.json())
  .then(({ data }) => ({...data, ...input }))
  .then((data) => fetch(`${POST_DATA_URL}`, {
    method: "POST",
    body: JSON.stringify({ userId, data }),
  }))
  .then((response) => response.json())
  .then(({ data }) => data);

In fact I would probably write a utility function here: getResponseData.

async function getResponseData(response) {
  const { data } = await response.json();

  return data;
}

return fetch(`${GET_DATA_URL}?user_id=${userId}`)
  .then(getResponseData)
  .then((data) => fetch(`${POST_DATA_URL}`, {
    method: "POST",
    body: JSON.stringify({
      userId,
      data: { ...data, ...input },
    }),
  }))
  .then(getResponseData);

@mAAdhaTTah at least you can google that. Placeholder now creates a context where all these questions need to be asked, again, because they could have new answers (regardless of whether they do). It's part of what I'm going through with this proposal, and it's part of why I don't support it.

Personally I find the ^'s everywhere just visual noise. I've mostly abstained from bringing this up since it's almost entirely a matter of taste, but it's an opinion I've seen a lot and seems somewhat on topic for #225. I genuinely had to stare for a while at ...^.data, and maybe I'm dumb as a bag of bricks, but I'm probably not the dumbest. To be clear, I don't think the other placeholder values that have been discussed are particularly more readable; they're just different shaped line noise to me. The most readable may have been ## because at least it is a little harder to overlook. I personally would discourage inline code and encourage named functions. I think lambdas are more okay because they're easier to refactor to named functions and they have variable names baked in. In this way Hack pipes work directly against my personal tastes, because the easiest thing to do is to put code after the |> and nothing is named.

On the point of breakpoints, there's no rule saying you can't put breakpoints with a pipe operator that takes functions. If I were implementing it, I would.

edit: wow just saw runarberg's translation to then and that's a lot easier for me to read and broadly a lot more descriptive.

Functional Programmers worry about mathematical consistency: g -> f is the same as f(g).

  • In F#, this looks like: g |> f
  • In Hack, it's g |> f(^)

You really don't have to be on one side or the other to appreciate that Hack robs the JavaScript FP community of a certain kind of transferable mathematical thinking. "Readability" to them means something very different from "avoiding temporary variables". F# affords another kind of functional composition to JavaScript.

Thank you for mentioning that @shuckster. Actually, my concern is not only "readability" in the mathematical sense: with the Hack pipe we are no longer able to write mathematically consistent code, because it breaks the laws of algebra.

Please refer to Concern on Hack pipes on algebraic structure in JavaScript
My #223 (comment)

The thing is, there aren't fewer temporary variables in Hack-style pipes; they're just all labeled ^. If we want to pipe into a lambda, so we can have a variable name with placeholder pipes, this is what we'll have to do:

var x = 10
var y = x |> (goodName => goodName * 12)(^)
// instead of
var y = x |> goodName => goodName * 12

I don't think ^ for all intermediate variable names is very readable at all. ^ may be readable in this trivial case, but we're under the understanding that this would benefit from a name, otherwise we wouldn't have variable names in javascript, we'd just have $1, $2, $3 etc... So then they say "well you can create a function" okay let's compare that case...

var x = 10
var y = x |> goodFunction(^)
// instead of
var y = x |> goodFunction

So if you like names for intermediate variables, placeholder pipes aren't great, regardless of your views on functional programming. They are, in my opinion, more verbose in the cases where you name things, and more difficult to read when you don't. Placeholders sound nice with multi-parameter functions, but I think wrapping them in a unary function is vastly more readable because it's actually named. I find ^ hard to visually tease out and difficult to predict: for example, in f(x, z, ^, y, a) you literally can't tell what ^ should be without looking at the function definition anyway. I feel like placeholder pipes avoid work in a way that makes things murkier, which really is the opposite of what I would think most people want. People complain all the time about bad variable names. I don't like the idea of having the equivalent of 20 functions or "expressions" all with the variable named ^, and I genuinely think it's going to bite all of us later on when we're wondering what ^ means in a given pipe.
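To make the comparison concrete with code that runs today, here is a minimal userland pipe() with named unary steps; pipe(), double, and increment are illustrative names, not an existing API:

```javascript
// Userland sketch of F#-style piping with named unary functions.
const pipe = (...fns) => (x) => fns.reduce((acc, f) => f(acc), x);

const double = (n) => n * 2;
const increment = (n) => n + 1;

// Each step carries its own name, so no placeholder is needed:
const result = pipe(double, increment)(10); // 21
```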

#225 (comment)

My general perspective is forcing mainstream JS to adapt to this API will be more painful than asking functional JS to adapt to Hack.

As several others indicated, including the issue owner, the readability does not come from imperative programming.

Fundamentally, this proposal introduces a new binary operator to JS, which is in the same league as the exponentiation operator ** introduced in ES2016.
Syntax
Math.pow(2, 3) == 2 ** 3
Math.pow(2, Math.pow(3, 5)) == 2 ** 3 ** 5 // ** is right-associative
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Exponentiation

The mainstream has been OOP or imperative programming; as a result, we have been forced to write Math.pow(2, Math.pow(3, 5)).

Having obtained the new binary operator **, I think the mainstream is going to use 2 ** 3 ** 5, because this is much easier to read and understand.

Replacing an OOP-style expression with a binary operator in the algebraic sense makes the code concise and readable, and makes it easier to grasp the mathematical structure, which leads us to avoid mistakes, so the code becomes robust. This is a very powerful approach, and perhaps some people see this manner as FP code. Haskell code is full of binary operators, and Haskellers basically do just math/algebra in their code.
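A quick runnable check of the ** grouping (note that ** is right-associative, so explicit parentheses are needed to get the left-grouped nesting):

```javascript
// ** is right-associative: 2 ** 3 ** 5 groups as 2 ** (3 ** 5),
// i.e. Math.pow(2, Math.pow(3, 5)), not (2 ** 3) ** 5.
const rightGrouped = 2 ** 3 ** 5;

const sameAsExplicitParens = rightGrouped === 2 ** (3 ** 5);          // true
const sameAsNestedPow = rightGrouped === Math.pow(2, Math.pow(3, 5)); // true
const sameAsLeftGrouped = rightGrouped === (2 ** 3) ** 5;             // false
```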

asking functional JS to adapt to Hack

In principle, functional JS will never adapt to Hack, because from my perspective this Hack "operator" is the first binary operator in JavaScript that breaks the basic rules of algebra.

This "operator" is something between an expression of the existing algebraic binary operation of function application and a newly invented statement, made by people who are free from mathematics and tweak it mostly because they need it to fit other artificially designed statements, according to the Brief history of the JavaScript pipe operator, especially async & await, which is so unfortunate.

So it's highly possible that the FP community will ignore the Hack pipe.

As for the mainstream: they get a new context variable in hand every time they use the new Hack operator. Although the FP community won't touch such a thing from the beginning, the mainstream will, and I don't think the majority can avoid the complexity of context variables.

Finally, the Hack operator will produce many syntax errors because, by design, again, it does not obey the associative law, so some parentheses in the context of the "operator" itself have no meaning on their own and certainly cannot be regrouped:

#223 (comment)

(foo(1, ^) |> bar(2, ^)) is not even legal syntax on its own.

In mathematics, there is no situation where (b * c) is not legal syntax on its own while a * (b * c) is.
The huge disadvantage of losing the robust principle of referential transparency is also inevitable for the mainstream. I feel sorry for them.

so you're saying x - y is the same as y - x? or x in y is the same as y in x? I'm really not sure where you're getting the idea that "binary operator" guarantees any of the things you're suggesting it does.

Yeah, they are not associative and do not form a monoid.
In contrast, as you may have seen in #223 (comment):


g(f(x)) === x |> f |> g is associative and forms a monoid.

And no one here told you that every "binary operator" is associative.
What I told you is that the binary operator of function application, which is associative by definition, becomes no longer associative, and the algebraic structure is spoiled.

Do I make myself clear?

Right, so it's perfectly fine that there's a binary operator that's not associative. So why is it so tragic that pipeline isn't?

it's perfectly fine that there's a binary operator that's not associative.

Yeah, it's perfectly fine. You are right.

What I told you is that the binary operator of function application, which is associative by definition, becomes no longer associative, and the algebraic structure is spoiled.

This is not fine and very tragic. Do I make myself clear?

It's not a function composition operator though, and never has been. Maybe that's the disconnect?

It's not a function composition operator though, and never has been. Maybe that's the disconnect?

It's a typo and I fixed it.

What I told you is that the binary operator of function application, which is associative by definition, becomes no longer associative, and the algebraic structure is spoiled.

This is not fine and very tragic. Do I make myself clear?

No, because function application can't possibly be associative in the way you describe, in JS, so it seems like this is a category error.

You don't understand: for associativity, the meaning is identical in algebra; only the layer differs, whether in a Monoid (S = S * S) or a Monad (M = M * F).

#223 (comment)
I'm sorry it seems you have not studied the Haskell page I linked:
https://wiki.haskell.org/Monad_laws
[image: the monad laws from the Haskell wiki]

For your convenience, I would rewrite it as
Associativity: (m |> g) |> h === m |> (x => g(x) |> h)
or
Associativity: (m |> g) |> h === m |> (x => x |> g |> h)
and since (x => x |> g |> h) is the composition of g and h (h . g in Haskell notation):
Associativity: (m |> g) |> h === m |> (h . g)

Function application IS associative, as I described.

Do I make myself clear?
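For readers who want to check the associativity law in today's JavaScript, here is a sketch using Array as the monad and flatMap as bind; g and h are arbitrary example functions:

```javascript
// Monad associativity with Array as the monad and flatMap as bind:
//   m.flatMap(g).flatMap(h) === m.flatMap(x => g(x).flatMap(h))
const m = [1, 2, 3];
const g = (x) => [x, x * 10];
const h = (x) => [x + 1];

const left = m.flatMap(g).flatMap(h);
const right = m.flatMap((x) => g(x).flatMap(h));

const lawHolds = JSON.stringify(left) === JSON.stringify(right); // true
```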

JS isn't algebra. Saying "something is math" doesn't explain why it's necessary or important.

Function application in javascript is not associative, because f(g(x)) and g(f(x)) have precisely zero guarantees that they do the same things. Obviously you could write an f and g where that holds, but the only thing that matters is that you can write them where it does not hold.

So do you accept

What I told you is a binary operator of function application that is associative in the definition becomes no longer associative, and Algebraic structure is spoiled.

? If you accept the fact I told you, are you now opening a new topic?

JS isn't algebra. Saying "something is math" doesn't explain why it's necessary or important.

Well, then again I must repeat:

In mathematics, there is no situation where (b * c) is not legal syntax on its own while a * (b * c) is.
The huge disadvantage of losing the robust principle of referential transparency is also inevitable for the mainstream. I feel sorry for them.

I'm still not understanding. It seems like you have some assumptions that aren't shared - for one, that it makes remotely any difference whatsoever if a binary operator is associative; that algebraic laws matter even a little.

At the risk of inciting debate, programming isn't mathematics. Math education is not required to program, and mathematical laws do not constrain programming. You seem to be presuming that they do - but that's not the case.

Function application in javascript is not associative, because f(g(x)) and g(f(x)) have precisely zero guarantees that they do the same things.

That is the commutative law; function application generally does not have the commutative property, and that is a general fact in algebra, not only in JS.

No one is discussing the commutative law. What I told you is that the associative law is spoiled. OK?

No, I still don't understand. Maybe assume I have no college degree and no formal math education, and maybe assume that if a concept requires those things, then it's not necessarily a good concept to apply to JS.

The assumed degree level of a programmer has nothing to do with adding a binary operator to a programming language.

Are you insisting that if a programmer only has elementary-school knowledge, the rules of algebra can be ignored? Do I understand you correctly?

I have a knowledge that far exceeds "elementary school" and yet isn't "formal math education". I'm saying that if you want to explain your argument successfully, you need to be capable of explaining it to someone whose education stopped after high school (after age 18, since "high school" may be a colloquial American term).

So I assume you basically insist that the language design of JavaScript should target, at most, someone whose education stopped after high school, and that since they don't care, claims grounded in higher math mean nothing in general. Do I understand you correctly?

Nevertheless, whatever the educational level of programmers, the form f(x) === x |> f is monadic. |> has a monadic structure; that is a fact, and function application is not some special, rare operation at all. It is not the Promise functor.

Spoiling the mathematical element does have effects on things, even if you don't care now. Do you still not care?

Would you mind explaining it as if we were 5 years old? What's the concrete point you are trying to make?

Since it is elementary school, we should be able to understand your explanation. Otherwise, I will put the ownership on you as the teacher here.

I respect these inquiries from all of the members here. I appreciate them, and I will reply.
Just give me some time, say 24 hours, as I have stuff to do like anyone else. Thanks.

Also @stken2050 I am not appreciating your condescending way to express yourself right now, you are one of the few people that have no activity in GitHub,

Ah, I'm sorry, but that is a misunderstanding: I have other accounts for other projects that have been very active.

hijack the conversations around the operator forcing everything else to deal with it, we like it or not.

This is a new concept to me, and please stop the personal attacks; that is against the code of conduct here. Be careful.

I think the TC39 members are really smart people who know what they are doing; assuming that such people are not capable of making decisions that take into consideration what you explained is a bit far off.

I do not appreciate authoritarianism, and you speak without any evidence.

Maybe it is a matter of a combination of how you express yourself thru language and patience. I would appreciate it if you are a bit more friendly communicating.

Sure that is why I told you

I respect these inquiries from all of the members here. I appreciate them, and I will reply.
Just give me some time, say 24 hours, as I have stuff to do like anyone else. Thanks.

and your reply is

Also @stken2050 I am not appreciating your condescending way to express yourself right now, you are one of the few people that have no activity in GitHub, and hijack the conversations around the operator forcing everything else to deal with it, we like it or not.

and personal attacks. Thanks.

My comment has moved to

#223 (comment)

I’m sorry, I’m feeling somewhat feverish so I’ll postpone replying to other comments, but please @stken2050, you have your own issue about algebraic concerns. Can we keep the discussion here focused on readability implications?

Ok, I've moved my post, sorry about that, and take care.

Lokua commented
james 
|> goosify(^)
|> unfoo(^)

As a reader, I cannot understand what the second instance of ^ is without reading the previous line. It means every line I read with ^ is a double take. As a reader, I have to constantly compute the value of ^, and it doesn't help that ^ is not the same thing but looks the same, so my brain is in constant conflict while reading.

That said, the F# style also requires I read the previous line:

james 
|> goosify
|> unfoo

But I don't have to constantly compute the placeholder. I do have to compute the meaning of one pipe to the next, but now I can do it immediately without a middleman getting in the way. It's like the placeholder is screaming "comprehend me! decode me!" - basically asking me to do the job of a computer.

Also, as a writer whose job is to communicate meaning, hack style removes my ability to provide descriptive names, which is basically the universal first step of writing readable code. I'm honestly shocked at this proposal. Imagine if you started at a new company and they enforced that all unary functions you write regardless of context had to name their single argument x (or god forbid, ^ :trollface:). That's how I feel and I'm not exaggerating - this proposal scares me because it's going to lead to code that is uglier, harder to read, and harder to refactor/abstract later on!

Edit: I too would rather see no pipe than this and am admittedly biased as I've always thought a pipe operator (or pretty much any more new syntax) is a bad idea. But if we are going to have it, I'd rather see it implemented in a way that improves readability, not (IMHO) hinders it.

again from frontend/fullstack programming perspective, the focus-of-interest is the final return-value (to be message-passed).

readability is improved (at least in frontend-systems) if you use a temp-variable with a descriptive name for the final return-value:

// what's the intent of data being message-passed?
james 
|> goosify
|> unfoo
|> await postMessage

// vs

// temp-variable `userid` clearly describes the final-data being message-passed
let userid = james;
userid = goosify(userid);
userid = unfoo(userid);
await postMessage(userid);

I think there is no difference in how an operator is used between frontend and backend or anywhere else, and temp-variables are an anti-pattern in software development.
Generally, if code breaks when let is replaced with const, it's bad code.
Destructive substitution (reassignment) should be avoided in general, and if some mechanism of a programming language fits a manner that should be avoided, that mechanism should itself be avoided.

That is why const was introduced in JS: it is an excellent mechanism for prohibiting destructive substitution, which is an "error" in the code.
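A small runnable illustration of the const test described above; normalize and validate are made-up helpers for this sketch:

```javascript
// Illustrative helpers (made up for this sketch):
const normalize = (s) => s.trim().toLowerCase();
const validate = (s) => {
  if (s.length === 0) throw new Error("empty id");
  return s;
};

// Reassignment style: replacing this `let` with `const` breaks the code.
let id = normalize("  James  ");
id = validate(id);

// Single-assignment style: every binding can be `const`.
const normalized = normalize("  James  ");
const validated = validate(normalized); // "james"
```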

It is quite interesting that, after participating in these threads as an advocate for F# for quite some time now, and reading what people have to say, I am starting to arrive at the opinion that we might not need a pipe operator at all.

One of the reasons why I wanted a pipe operator in the first place was to allow library authors to provide point free operators on their constructs. So basically because of RxJS and the like. The current pipe function (or method in the case of RxJS) serves this purpose but has limitations. I personally—after reading what people have to say, and realizing what the TC39 committee believes about this programming style in general—am fine continuing the status quo for the time being. If and when Operator overloading arrives, these libraries can then overload the bitwise OR operator | with a Function RHS to give us the pipes that I want (#206 (comment)).

import Pipe from "my-pipe-lib";
import { map, filter, range, take, reduce } from "my-iter-lib";
with operators from Pipe;

const { value } = new Pipe(
  range(0, Number.POSITIVE_INFINITY),
)
  | filter(isPrime)
  | map((n) => n * 2)
  | take(5)
  | reduce((sum, n) => sum + n, 0);

If and when operator overloading arrives and library authors start providing their operators with this style I honestly think that hack pipelines would be in the way. When reading this above example you can clearly see what is going on. However if hack pipelines are also available as an option, and if you have seen a lot of code written with hack pipes, you might be a little confused by what is going on here. “These are pipes, but there is no topic marker. Why? How does this even work? What magic is this?” Without the hack pipelines in the language these questions are irrelevant. If you are confused, you just go look at the docs for the library where the operator overloading is explained. No trying to figure things out in relation to the similar but distinct hack pipes.
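For comparison, the overloaded-operator pipeline above can be approximated today with a plain pipe() function and hand-rolled generator helpers. This is only a sketch: the helper names mirror the hypothetical my-iter-lib imports and are not a real published API, and the infinite range is replaced with a finite one:

```javascript
// A plain pipe() function threading a value through unary steps:
const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

function* range(start, end) {
  for (let i = start; i < end; i++) yield i;
}
const filter = (pred) =>
  function* (iter) {
    for (const x of iter) if (pred(x)) yield x;
  };
const map = (f) =>
  function* (iter) {
    for (const x of iter) yield f(x);
  };
const take = (n) =>
  function* (iter) {
    for (const x of iter) {
      if (n-- <= 0) return;
      yield x;
    }
  };
const reduce = (f, init) => (iter) => {
  let acc = init;
  for (const x of iter) acc = f(acc, x);
  return acc;
};
// Naive primality check, fine for small numbers:
const isPrime = (n) => n > 1 && ![...range(2, n)].some((d) => n % d === 0);

// Sum of the first five primes, each doubled: (2+3+5+7+11) * 2 = 56
const value = pipe(
  range(0, 1000), // finite stand-in for the infinite range
  filter(isPrime),
  map((n) => n * 2),
  take(5),
  reduce((sum, n) => sum + n, 0),
);
```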

If anything that’s an argument against operator overloading, not hack pipes.

The argument against hack pipes is that we believe they are a hazard (sometimes for different reasons, but the conclusion is the same). Most of us in this thread, and I suspect in the wild, would rather have no pipes, than hack pipes.

I also think the Hack pipe is so harmful that if it's the default route, as they claim, it's far better not to have any pipe at all. I feel very sorry not to get the F# or minimal style, but that's better than JS being broken forever.
Then I also hope we get operator overloading, and then there's no problem at all.

If anything that’s an argument against operator overloading, not hack pipes.

It's hard to believe these guys don't get it... no acquiescence to our point of view, no "great point" or "that makes sense" or "maybe we need to rethink this", etc.

Just a steadfast wall of rebuttals and basically:

"Hacks right, it's coming like-it-or-not, we can do this all-day-long, deal with it :trollface:".

@aadamsx it was indeed a great point - about how operator overloading can cause confusion by being similar to existing non-overloaded, or visually similar operators. It is not, however, a great point against hack pipes.

It's hard to believe these guys don't get it... no acquiescence to our point of view, no "great point" or "that makes sense" or "maybe we need to rethink this", etc.

Since you're interested in seeing things from each others' perspectives, what arguments in favor of Hack do you think is a good point or makes sense?

it was indeed a great point - about how operator overloading can cause confusion by being similar to existing non-overloaded, or visually similar operators. It is not, however, a great point against hack pipes. — @ljharb

Can you please elaborate? I admit that having both Hack pipelines and overloaded pipeline operators may be confusing (though I am not 100% sure of this), but why are you so sure that operator overloading is the problem and not Hack pipes? It seems to me that you've simply jumped to a conclusion and picked one.

Here is my reason to think that hack pipelines might not be needed if operator overloading allows simple opt-in pipelines:

The explainer for operator overloading provides a number of good use cases other than pipeline operator overloads, including CSS unit calculations and easy calculations on custom user types such as vectors or matrices. Given that Hack pipelines and operator overloading would be redundant in giving us the ability to pipe values into functions, it is often wise to pick the one with the broader use cases. It can be debated which operator that is, F# pipes or Hack pipes, but it is obvious that operator overloading beats Hack pipelines in breadth of use cases.

In many sciences, a measure of the quality of a theory is how much it can predict outside of its initial scope. If the only thing a theory can predict is how much a 20-22 year old supermarket customer will spend on an average weekday, it is a pretty lousy theory; if, however, the theory can predict not only the spending of all ages in all kinds of shops, but also how likely customers are to commute to the supermarket by bus, how likely they are to bring their own bag, and how their shopping changes if they meet their friends, that's a much better theory. Hack pipelines are pretty much only good for linearizing nested function calls. Operator overloading can do that and much, much more.

Now tell me, given that operator overloading and Hack pipelines are redundant in providing a pipeline operator to the language, why should we lose operator overloading but not Hack pipelines?

@runarberg your argument in #225 (comment) seems to me to be roughly "with operator overloading, and an overloaded operator that's similar to a default one but has different semantics, users might be confused". That could be true about any operator, not just pipeline - you could overload + to do something that's not semantic addition/combination, and that would be confusing. This confusion has in fact played out in my experience in languages that offer operator overloading.

If operator overloading had advanced as far, or farther, than this proposal, then it would be a reasonable claim to make that "maybe we don't need this proposal, since users can do it with operator overloading". However, they haven't, and it's not clear they will advance, and this proposal is stage 2, so it wouldn't be appropriate to subjugate this proposal to an earlier one.

If you believe operator overloading is a better overall proposal, and would obviate this and others, then I'd encourage you to participate on that proposal's repo and help drive it forward.

@runarberg

Given that both hack pipelines and operator overloading might be redundant in giving us the ability to pipe values into functions, it is often wise to pick the one which has more broad use-cases. It can be debated which operator that is F# pipes or Hack pipes, but it obvious that operator overloading beats hack pipelines in broadness of use-cases.

A great insight that I can agree with.

@ljharb

However, they haven't, and it's not clear they will advance, and this proposal is stage 2, so it wouldn't be appropriate to subjugate this proposal to an earlier one.

@js-choi wrote a great article in Prior difficulty persuading TC39 about F# pipes and PFA #221 and Brief history of the JavaScript pipe operator for us.

Also, @rbuckton #91 (comment)

I have been a staunch advocate for F#-style pipes since the beginning, but it's been difficult to argue against the groundswell in support of Hack-style pipes due to some of the limitations of F#-style pipes. If we kept debating on F# vs. Hack, neither proposal would advance, so my position as Co-champion is "tentative agreement" with Hack-style.

He emphasizes "tentative agreement", so I think stage 2 was not reached with full consensus, and it is not fair to treat it as if it were the established manner. Most importantly, note the reaction against the advancement to stage 2:

Frustrations of people looking forward to tacit syntax #215

Since this is a significant spin-off issue from the discussion here, why don't you open a new issue? @runarberg

@stken2050 yes, but my comment was speaking to operator overloading, although the same statement can be applied to PFA.

Stage 2 is stage 2, there is no "partial consensus" in the TC39 process. Hack Pipelines for stage 2 have the only kind of consensus we have - 100% consensus - but that doesn't preclude individual delegates being hesitant or unexcited about it.

"Consensus" also doesn't mean "set in stone" - the process allows for changes, and certainly a delegate could decide to block stage 3, and that is more what I infer from the "tentative" part of the agreement: that we should not assume stage 3 consensus will be automatic.

@ljharb

"Consensus" also doesn't mean "set in stone" - the process allows for changes, and certainly a delegate could decide to block stage 3, and that is more what I infer from the "tentative" part of the agreement: that we should not assume stage 3 consensus will be automatic.

That is a virtue, and I appreciate that we don't have to debate this point. But since we are in such a process, I'm afraid to say I don't see much reasoning in:

However, they haven't, and it's not clear they will advance, and this proposal is stage 2, so it wouldn't be appropriate to subjugate this proposal to an earlier one.

Furthermore, it's possible to post a pointer to https://github.com/tc39/proposal-operator-overloading from here, and then I'm sure the human resources will migrate. In fact, the reason the discussion here suddenly heated up is that we noticed the announcement that Hack-style had proceeded to stage 2.

https://github.com/tc39/proposal-operator-overloading/blob/master/README.md

  • Bitwise operators: unary ~; binary &, ^, |, <<, >>, >>>

The binary | looks nice to overload as a pipeline operator.

I created a new issue to talk about the potential confusion of pipelines created with overloaded operators #228.

Responding to @shuckster's comments on readability from here:

I simply cannot get my head around the fact that you'd consider the latter easier to read than the former, other than taking your word for it. I'm sure you do find the latter easier to read - I'm not saying otherwise - except to say that perhaps the "readability" argument isn't quite as clear-cut as we both might think?

I'm actually not arguing that this is more readable to me, because frankly, both of those syntaxes are equally comfortable to me. I did point-free programming extensively for a period of time, and I understand why the point-free version feels elegant, like everything superfluous has been trimmed down to its cleanest form. But I don't have any problem reading or comprehending the second one either. It's explicit & clear, I can see exactly where the arguments are going, and I don't really have a problem with the placeholder.

My argument for the readability of Hack isn't just based on my perception of its readability (although obviously, that plays into it). It's based on my experience having written point-free style code in a team and the level of comprehension (or lack thereof) I got from coworkers about that code. It just didn't look anything like the code they read & write on a regular basis. I worked at an agency so there were a number of projects going on at a time, and I was the only one who wrote code like that. A coworker once joked that my code came "preminified" cuz it was a bunch of tiny, one-line compositions 😂! Unless they were familiar and comfortable with that style, the amount of overhead going from React/Vue/Backbone/jQuery/etc. projects into heavy Ramda was significant, and I'm relieved the project I wrote that code for has long since shuffled off to the great bitbucket in the sky, because I have no faith another developer could maintain it.

Further, projecting outward, I don't think my experience is out of sorts relative to the wider ecosystem. I think functional and/or point-free JavaScript is a minority style, which means me being the only one writing that style in a team of 10 is... about right. I have not run across that much point-free code since I stopped writing it regularly myself, and when I have, it's generally regarded as the more gnarly part of the codebase. That means non-point-free code (or whatever you want to call it) is what people are going to be most familiar with. They're going to mostly have seen foo(x) and bar(x), and if they need to sequence them, they're more likely to do bar(foo(x)) than pipe(foo, bar)(x).

This means that for most developers, they're going to see x |> foo(^) and recognize the RHS as a function call. They've seen that before, they know what it means, so they'll already be familiar with part of what's going on here. Because familiarity is most closely correlated with readability, I'm thus really arguing that x |> foo(^) will be more readable to more people because it's more syntactically similar to a wider swath of the JavaScript code written, so more developers will have experienced code like it.

I don't think there's a single idiomatic JavaScript, but there are communities within JavaScript with their own idioms. What is readable, what is familiar, in your JavaScript community is very likely to be different than mine, which makes arguing about readability very difficult. Even worse, if you've been using pipe(foo, bar)(x), you're being asked to take a readability hit compared to your current baseline, requiring now a point where you were previously point-free. I get that, and I want to acknowledge that it's frustrating. I hope this at least explains why I think Hack is more readable generally, even if it isn't for you.


For that matter, I'll also mention I know OP is opposed to any pipe, so this comment doesn't really pertain to the primary thrust of this thread, but I'm posting in here in the interest of keeping similar arguments collected together.

@mAAdhaTTah The pipeline operator is ubiquitous with writing code primarily in a left-to-right functional composition manner in a variety of disciplines. That in and of itself is a minority way of programming technically. A JS function doesn't have to be written in a composable way at all after all–let alone an entire codebase.

Because it takes more effort than not doing it, using and composing functions will always be a minority aspect of JS code. To me, the net effect of the Hack-style pipeline operator is that it makes non-function abstractions less arduous to pipeline than functions.

However, pipelining non-function abstractions isn't what most think of when it comes to pipelining across disciplines and programming contexts, and it was most certainly not the primary intent of those who clamored for the operator to exist in the language; this is why there has been substantial pushback against this version of the operator continuing as-is.

The people who will use the pipeline operator as early adopters, who championed it as a standard, and who will accordingly keep using it long after it exists in the language (hopefully without the Hack-style semantics) will always be a (vocal) "minority" by your framing, since functional programming can always be framed as the minority way of writing code among everyone who decides to code in JS.

That said, this group of people happens to be a meaningful portion of the JS community, and it is the group whose attention created the need for this proposal to exist.

Overall, the commonality of functional programming use cases shouldn't be downplayed in the manner you seem to be doing (and potentially other TC39 members?), considering how popular functional libraries are and how well-received functional-programming-related features added to the language have been. This is a concern a meaningful portion of the JS community has shared in recent years about how TC39 has operated regarding things related to functional programming; such sentiment has only gained considerable traction with how this pipeline operator proposal has been handled thus far.

Anyhow, one factor in code not being written in a pipeline-oriented way more often is that the pipeline operator is not natively available in the language, which is problematic for engineers in a variety of ways.

Even with that in mind, it's telling that in an overwhelming number of surveys polling the JS community on what new features TC39 should consider, the pipeline operator is consistently one of the most desired; so are the efforts TC39 member companies such as Microsoft and Netflix have made to create or use libraries such as Reactive Extensions (RxJS, RxPython, RxSwift, etc.) that make its semantics reliably available across platforms.

Overall, the pipeline operator has been in demand for quite some time, and certainly not in the manner the Hack-style currently combats arbitrarily.

The problem with the Hack-style pipeline operator when it comes to readability by a meaningful amount of the JS community that write their code in a pipeline-oriented manner today

The Hack style has undoubtedly disappointed many who feel that tacit function application is being arbitrarily removed in order to support linearizing non-functional expressions, when those things should instead be represented as functions, preserving the tacit syntax for the simplest functions to compose: those with one parameter (or none).

While the effort and thought to linearize expressions and things not explicitly written as functions (which @tabatkins helpfully explained) is applauded, it's long been evident that the majority who wanted this operator natively in the language don't think it should come at the expense of the simplest thing a pipeline operator, historically a functional-composition-oriented operator, should accept: unary functions expressing first-class composition. First-class composition is typically understood to be written without tokens like (^), which the Hack style currently requires. Read more about it here

The Hack style had a choice between requiring extra characters for the more generic linearizing of non-function expressions or requiring them for functions, and it chose the latter, which is problematic for an operator tied to functional composition; so much so that it has been objected to by some of the most prominent functional programmers in the language, including maintainers of some of the most popular libraries associated with the language today (i.e. @benlesh).

It shifts so far away from this ubiquitous understanding of linearization as function-oriented that it's inconvenient to use with the code the majority of people write to compose things today. The way the Hack style handles unary functions explicitly violates first-class composition semantics.

Some have attempted to dismiss this by blaming the functions for being written in a HOF manner, which is a tangent from pipelining. Functions that are augmented to return functions, or that handle invocation without all their core parameters filled in (currying), are merely written to be more flexible to compose, regardless of whether they're used with a pipeline operator or not.

That's something the pipeline operator itself doesn't need to concern itself with.

Having a partial-application token to compose with functions of multiple parameters (arity greater than 1) should instead be closely tied to and leveraged by the pipeline operator; there doesn't seem to be much contention with that standpoint besides which token to use for the purpose.

Regardless, the Hack style adds cognitive noise by requiring (^) for even the simplest function composition calls (first-class composition); many want additional tokens on functions only to communicate that the piped data won't be the first parameter.

It shouldn't matter much if a developer decides to do

x |> multiply(2, ?) |> add(10, ?)

or (F#/minimal-style)

x |> multiply(2) |> add(10)

or more succinctly (point-free)

x |> double |> add10

to represent

add(10, multiply(2, x))
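To make the comparison concrete, here is a sketch of how the three pipeline spellings above land in today's JS. The curried `multiply`/`add` helpers are assumptions for illustration; curried shapes are what the F#-style and point-free forms rely on:

```javascript
// Hypothetical helpers, curried so the F#-style and point-free forms work:
const multiply = (a) => (b) => a * b;
const add = (a) => (b) => a + b;
const double = multiply(2);
const add10 = add(10);

// All three pipeline spellings reduce to the same call today:
add10(double(4)); // 18, i.e. add(10)(multiply(2)(4))
```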

Also, in contexts where a function is passed in other places for later execution in JS like as a parameter of another function, you don't need any additional characters for the functions:

function composeWithTwoFunctions(aFunction, anotherFunction) {
  // You'd use rest parameters (`...fns`) + `reduceRight` to generalize this
  // to any number of functions passed in.
  return (x) => anotherFunction(aFunction(x));
}
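The variadic generalization alluded to in the comment might look like the following sketch (`compose` and the sample functions are hypothetical names, not from the thread):

```javascript
// Variadic composition via reduceRight: the rightmost function runs first,
// so compose(g, f)(x) === g(f(x)).
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const aFunction = (n) => n + 1;
const anotherFunction = (n) => n * 2;

compose(anotherFunction, aFunction)(3); // 8, i.e. anotherFunction(aFunction(3))
```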

So why does the pipeline operator, with the current Hack-style syntax, arbitrarily want to break away from such conventions associated with composition?

Anyhow, with the pipeline operator historically so strongly tied to linearizing functions, the tacit means of expressing composition of unary functions present in the alternate proposals should ideally remain intact for readability; I think it can be inferred beyond reasonable doubt that this view is shared by the majority of programmers across multiple languages, whether or not they primarily program functionally.

Expressions such as ? + 2 can easily be understood as (x) => x + 2, so that functions don't need (^) while non-functional expressions remain supported.

The same should be done for expressions using await (turned into async functions) and so on.

The proposal as-is arbitrarily goes out of its way to contradict how pipelining is seen, as a left-to-right function composition operator, by the majority of the JS community whose support carried the original proposal as far as it went; it is also contrary to how problem solvers understand pipelining in general.

My anecdotal remarks since you shared yours

Thanks again for sharing your anecdotal experience with point-free programming to better understand your perspective, @mAAdhaTTah!

Here's my anecdotal take: At Google, I work directly with a wide variety of engineers solving machine learning (ML) and user experience (UX) problems. I often facilitate the delivery of Web apps as large as YouTube TV, AI image classification tools, and design system tools used by hundreds of Google teams world-wide.

For ML especially, pipelining with the linearization of functions first and foremost, with first-class composition of unary functions represented in a tacit manner, aligns with the mental model of how people prepare and transform data for ML use cases + CLI tooling (especially in bash).

In Julia you can today do

a |> add6 |> div4 |> println

For functions that aren't unary you'd import the Pipe package to then write something like

@pipe a |> addX(_, 6) |> divY(4, _) |> println

In Python you can use RxPython's pipe(), or write something like the following with short composable functions usually written using lambda:

compose(f, g, h)(10)

Either way, first class composition is represented in a tacit manner aligning with functional composition conventions.

The lack of a pipeline operator in JS natively often gets in the way of human-resource alignment tasks (i.e. interview processes) and performance optimization processes, because people accustomed to relying on libraries such as RxJS, Lodash, or Ramda to model problems in a pipeline-oriented manner are suddenly inconvenienced when it's not available natively in the language.

Tasks represented as functions programmatically are the default to be passed through pipelining abstractions such as a pipeline operator.

With my experience working with Code School, Treehouse, CodeNewbies, being a TA for peers at USC, and several prominent bootcamps in mind, this understanding of pipelining, and of the pipeline operator as representing functional composition, has been intuitive.

It accordingly hasn't been a problem to communicate f(g(x)) as x |> g |> f during the past 8 years I've been a full-time professional working for a variety of entities (start-ups, non-profits, corporate, etc.), whether to existing SWEs, recent grads, interns, or new teammates who may not be accustomed to modeling their problems in a pipeline-oriented way.

For Web apps and UIs small and large, I've successfully integrated reactive functional programming into codebases leveraging RxJS's Observable/Subject as an endofunctor abstraction to pipeline zero or more values over time efficiently. This has been extremely popular and well-received, even by the harshest critics who were originally against such things.

Such style of programming is so popular at Google that Angular, our most popular front-end framework, uses such abstractions heavily in its end-user APIs.

Many love it so much that they leverage the framework-agnostic abstraction as is in a variety of front-end and back-end JS code. I use RxJS in my React, Vue, and Lit projects all the time–especially for async problems and data transformation tasks.

While there has been onboarding I needed to lead, I've been credited with getting new college grads, existing engineers, and new hires quickly up to speed, able to contribute to and maintain code using such abstractions. A consistently large factor in why it has been easy for me to onboard engineering stakeholders is the intuitiveness of pipe.

The behavior of pipe(), acknowledged by the champions of Hack, coincidentally aligns with the pipeline behavior of the F#-style and so on. Unlike the Hack-style, it makes first-class composition clear and concise without needing any tokens other than parentheses around the functions it accepts as params to communicate first-class composition.
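For readers unfamiliar with the helper being referenced, a minimal sketch of a pipe() function in the style of RxJS/Ramda might look like this (simplified; the real library implementations handle more cases):

```javascript
// pipe() threads a value left-to-right through a list of unary functions,
// mirroring the F#-style `5 |> double |> add10`.
const pipe = (...fns) => (x) => fns.reduce((acc, fn) => fn(acc), x);

const double = (n) => n * 2;
const add10 = (n) => n + 10;

pipe(double, add10)(5); // 20
```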

A consistent frustration by bootcamp students, recent grads, interns, and work colleagues is that it's hard for them to leverage their growing ability to solve problems in a pipeline-oriented manner that they prefer without it being natively in the language.

This is especially the case for global-scale projects where every byte counts; the lack of a native operator has made it difficult for me to comply with performance budgets I could not change. For recent grads and interns, it's frustrating that they can't lean on dramatically simplifying their functional code in interviews and so on, with the semantics of the pipeline operator not yet natively available.

Since Hack's prominence on reaching this stage, my attempts to leverage it through TypeScript forks and through Babel, replacing pipe() behavior from libraries (or replacing the F# TypeScript fork with the Hack-style one), have been universally panned. So much so that I've had to revert such changes to keep using pipe, or revert to the TypeScript fork supporting the F#/minimal proposal.

Throughout the summer, newcomers (especially CS grads) and existing engineers alike were consistently confused about why (^) is needed for unary functions/tasks executed in a chain communicated by the pipeline operator. "It breaks first-class composition semantics" was a common complaint.

It also added significant cognitive noise when chaining functions that return a function, which was expected. It confused them, and me, as to how many parentheses were needed, whether the higher-order function had been set up correctly the first time, and so on; this is especially the case with shallow curried functions:

// Hack style
x |> multiply(2)(^) |> add(10)(^)

Such problems essentially disappeared by merely replacing the Hack-style pipeline operator with pipe or the TypeScript fork supporting the F#-style/minimal-style version of the operator.

The hack-style operator in comparison seemed more arduous and verbose than it needed to be; as one intern best put it:

It's like JavaScript [is] now actually trying to be Java in its verbosity with this take on functional composition!

Because of this experience, I do not think Hack-style should go forward with its current treatment of passing in functions–especially unary ones–in its effort to accommodate non-functional expressions.

The Hack-style approach to linearization isn't what the majority of people had in mind for the realization of this operator in JS; the pipeline operator the majority had in mind is a functional composition operator that better communicates consumable data processed left-to-right, sequentially, by functions, with no tokens such as (^) needed to represent first-class composition.

I'm of the opinion that the sentiment about the pipeline operator needing to better accommodate unary functions in a tacit manner for the sake of readability exceeds the popularity of #SmooshGate (Array.prototype.flat); the impasse between JS developers and TC39 members on the matter accordingly, and understandably, leaves many wondering why more isn't done to accommodate this ubiquitous understanding of the pipeline operator.

@lozandier

The pipeline operator is ubiquitous with writing code primarily in a left-to-right functional composition manner in a variety of disciplines. That in and of itself is a minority way of programming technically.

This is not true.
We use 1 + 2 + 3 or "hello " + "world" in JS.

Fundamentally, this proposal introduces a new binary operator to JS, in the same league as the exponentiation operator ** introduced in ES2016.
Syntax
Math.pow(2, 3) == 2 ** 3
Math.pow(2, Math.pow(3, 5)) == 2 ** 3 ** 5 // ** is right-associative
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Exponentiation
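Since the associativity of ** often surprises people, a quick illustration of how unparenthesized chains nest:

```javascript
// ** is right-associative, so an unparenthesized chain nests to the right:
2 ** 3 ** 2;   // 512 == 2 ** (3 ** 2) == Math.pow(2, Math.pow(3, 2))
(2 ** 3) ** 2; // 64  == Math.pow(Math.pow(2, 3), 2)
```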

The Hack-style approach to linearization isn't what majority of people had in mind with the realization of this operator in JS; the pipeline operator majority of people had in mind is a functional composition operator

To be exact, the pipeline operator majority of people had in mind is a functional application operator, and which is like this:

a |> f |> g |> h ===
a |> f |> (g . h) ===
a |> (f . (g . h)) ===
a |> (f . g) |> h ===
a |> ((f . g) . h) ===
a |> (f . g . h)

where
|> function application
. function composition
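The equivalences above can be checked in today's JS with a forward-composition helper standing in for `.` (the `fwd` name and sample functions are hypothetical; application is an ordinary call):

```javascript
// Left-to-right composition: fwd(p, q)(x) === q(p(x)).
const fwd = (p, q) => (x) => q(p(x));

const f = (n) => n + 1;
const g = (n) => n * 2;
const h = (n) => n - 3;

h(g(f(5))) === fwd(fwd(f, g), h)(5);           // true
fwd(fwd(f, g), h)(5) === fwd(f, fwd(g, h))(5); // true: composition associates
```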

Please read #223 (comment) if you have a time.

That "what majority of people had in mind with the realization of this operator in JS" is functional is true; in fact, there is proof, which I mentioned here: #228 (comment)

Having said that, your statements are very reasonable, and the problem is that they no longer listen to any reasonable claims, as he (the RxJS author) said: #228 (comment)

I've said my piece. Pretty thoroughly. I wasn't really heard. And I lost a friend over it. I simply just don't care what happens with this anymore.
Therefore, anything that might help functional programming libraries is unlikely to pass the TC39. It is what it is. It's the hack proposal or nothing.

We agree with you, so why don't you stop trying to persuade the members?
Because it's a waste of your precious time.
It's impossible to have any reasonable discussion with someone who has already decided to ignore reasonable claims, and you and we are the ones forced to listen to unreasonable, empty explanations as if they mattered a lot.

It's the hack proposal or nothing.

If it's a choice of poison or nothing, we want to choose nothing and wait for other features such as operator overloading.

So why don't you join Pipelines with operator overloading (#228)?

We don't want pipe any more. It's redundant given operator overloading. We no longer support this pipeline-operator proposal as a whole.

Thank you for the reply @mAAdhaTTah . I must say that @lozandier's astonishing post is a far better rebuttal than I could conceive. It completely reshapes my own perspective on how up-and-coming the FP JavaScript landscape really is, and I'm grateful to hear the encouraging anecdotes about its take-up.

It also inflames my concern for just how much FP opportunity is up for squandering by choosing the wrong pipe operator. We are so very lucky to find ourselves with first-class functions in this crazy language, and I say this not at all to diminish the hard work of the VM authors who have been practically forced to improve it because of the explosion of the web as a platform. (I think they enjoy the challenge, though. ;) )

To keep it on the topic of readability, I see far greater potential for that in the resulting APIs of FP libraries that would use the pipeline operator they want, rather than a token-based one that only seems to help those on the very first rungs of the FP ladder (and judging from @lozandier 's reply, I'm no longer convinced they stay there for very long!)

@lozandier I appreciate the thoughtful response. I started writing up a response of my own, but I deleted it because it would be shitty of me to argue with your experience, even if it's different than mine. Instead, given that you've done some exploration of a transition to Hack in a functional codebase, I would be appreciative if you could share some of those before and after code samples (if you have them and are able to share them – I understand work code may not be sharable). As I argued here, I believe Hack pipe would be beneficial to the functional community & functional programming in JavaScript, so I'd be curious to see, in practical terms, the downside to that style in Hack in real-world code, rather than the add/multiply trivial examples that have so far left me unconvinced.

I still don't get it. Why do we even discuss the nature of a naturally-occurring, ancient, discovered, proved, concept of function composition? A feature which was implemented correctly in almost all modern languages? A pipe should just let us pipe a value through one or more functions. There's only one way to go: a |> f === f(a). The only thing we need to discuss on a pipeline operator proposal is which symbol to use...

That's how simple a pipe is. Why ruin it and make it something which is not?

I wish we'll abandon this proposal entirely. I rather see the minimal pipe or nothing at all.

As a non-purist I do think that the Hack proposal isn't the shortest and sweetest syntax. But the benefits of having the flexibility of the right hand being any expression is very intriguing / appealing and I can't wait to be able to use it.

I would from my general perspective consider linearising expressions far more beneficial than a pure yet restricted functional approach. So I'm optimistic that it's going to be a great addition to js and save a lot of unnecessary temporary naming and advance declaration of non-unary functions which the F# solution appears to require.

I'm confident as with most things once it has been used enough times, the perceived lack of readability of the operator will slowly cede to the familiarity of using it so frequently.

The prevailing opinion in this thread seems to be "my way or nothing at all" - I'm actually quite happy with this more rounded, general solution.

@Nazzanuk A pipeline is ubiquitously understood in data science and computer science as a series of logic blocks/tasks/subroutines that will be executed, in order, when data becomes available.

Please provide examples of expressions today you can't encapsulate in a function (logical blocks of execution / imperative code packaged as a single unit to perform a specific task) that you think Hack-style helps with to justify

  1. Its right hand expressions not being automatically inferred as (<piped data>) => <expression>

  2. Its verbosity for unary functions, which represent the simplest subroutines you can write in JS to consume data while still being instances of the Function first-class object

Note at the time of this writing, functions only return one value in JS (no tuples).

The most essential representation of a pipeline task is a unary function; that it is more verbose to write via the Hack-style pipeline operator than expressions that are far rarer to pipeline is something a meaningful number of developers have objected to for almost half a decade.

@lozandier I'll contend that 3 characters worth of verbosity for unary functions is a very fair trade off for less verbosity around literally every other use case.

The syntax for arithmetic, awaiting promises, object literals, template literals etc. all look comparatively nicer. I spend more time with these as a whole than I do composing pure functional pipe operations, I appreciate that probably is not the same for everybody.

The Hack-style looks like a great implementation overall for JS, providing readability benefits outside of the functional scope. It may not be the purest most essential representation of a pipe from a data science and computer science perspective because of an extra 3 characters - but I think that's ok.

@Nazzanuk That doesn't necessarily answer my question, but when it comes to verbosity, it needs to be asked why expressions can't be automatically rendered as unary functions for developers by a pipeline operator that still enables tacit syntax when chaining unary functions:

// latter expression should be same as (x) => ({bar: x });
value |> foo |> {bar: ?};

// latter should be the same as (x) => `Hey, ${x}!`; 
value |> foo |> `Hey, ${?}!` 

// latter should be same as async (x) => await bar(x);
value |> foo |> await bar
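A sketch of what the tacit forms above would desugar to under this suggestion (the `foo`, `toBarObject`, and `greet` names are hypothetical stand-ins for the imagined expansions):

```javascript
const foo = (n) => n + 1;                // hypothetical pipeline step
const toBarObject = (x) => ({ bar: x }); // expansion of `{bar: ?}`
const greet = (x) => `Hey, ${x}!`;       // expansion of `Hey, ${?}!`

greet(foo(1));       // "Hey, 2!"
toBarObject(foo(1)); // { bar: 2 }
```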

Readability is pretty subjective, but with regards to helping with readability around arithmetic and awaiting, I'm unsure it's better than what is currently available:

Chaining arithmetic operations with |> is a bit rough looking. x |> ^ + 2 |> 10 / ^ Who would do that when they can just do 10 / (x + 2)?

Chaining awaited promises is already achievable, in a more memory-efficient way*, with .then(). x |> await foo(^) |> await bar(^) |> await baz(^, 'something') is just foo(x).then(bar).then(x => baz(x, 'something')).

(* large async-await blocks will not GC what is in the async function's scope, where then chains will GC what they don't use, generally)
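A runnable sketch of the `.then()` equivalence described above (foo/bar/baz are hypothetical async functions, not from the thread):

```javascript
const foo = async (x) => x + 1;
const bar = async (x) => x * 2;
const baz = async (x, s) => `${s}: ${x}`;

// Equivalent to: x |> await foo(^) |> await bar(^) |> await baz(^, 'something')
foo(3)
  .then(bar)
  .then((x) => baz(x, 'something'))
  .then(console.log); // logs "something: 8"
```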

Honestly, yield may be the only thing here that is solidly different with the current proposal. Although, much like generator functions, I seriously doubt most people have the chops (or the need) to use them for coroutines.

@lozandier I appreciate the thoughtful response. I started writing up a response of my own, but I deleted it because it would be shitty of me to argue with your experience, even if it's different than mine. Instead, given that you've done some exploration of a transition to Hack in a functional codebase, I would be appreciative if you could share some of those before and after code samples (if you have them and are able to share them – I understand work code may not be sharable). As I argued here, I believe Hack pipe would be beneficial to the functional community & functional programming in JavaScript, so I'd be curious to see, in practical terms, the downside to that style in Hack in real-world code, rather than the add/multiply trivial examples that have so far left me unconvinced.

@mAAdhaTTah Yeah, my work code is unfortunately not sharable; I currently work for Google's AI Responsible Innovation Team; my code is tightly coupled with sensitive and proprietary code. I've only recently begun to slowly do open-source code again with Chrome and Material.

FWIW, @benlesh's code examples that you've probably already seen pretty much align with how I typically write code; I prefer to solve problems using the reactive functional programming (RFP) paradigm.

I use RxSwift, RxJS/IxJs, RxPython, RxJava/RxKotlin, and so on to normalize my ability to use such paradigm across languages to solve problems while simultaneously advocating for end users as a User Experience Engineer.

Concerns with the arduous nature of Hack-style pipelining with mixins

Something I forgot to mention with my earlier response is Hack-style's handling of Mixins:

class Comment extends Model |> Editable(^) |> Sharable(^) {}

Code such as the above has been deemed unnecessarily arduous in my experience, compared to just

class Comment extends Model |> Editable |> Sharable {}

I utilize mixins fairly often (i.e. mixin shareable behavior onto UI Components).
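For context, a minimal sketch of the mixin pattern under discussion (hypothetical `Model`/`Editable`/`Sharable`; each mixin is a class factory taking a base class):

```javascript
class Model {}
const Editable = (Base) => class extends Base { edit() { return 'edited'; } };
const Sharable = (Base) => class extends Base { share() { return 'shared'; } };

// Today, without a pipeline operator, mixins nest inside-out:
class Comment extends Sharable(Editable(Model)) {}

new Comment().edit();  // 'edited'
new Comment().share(); // 'shared'
```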

For the following code:

class AppRoot extends connect(store)(LitElement){}

Hack-style would require me to write the following:

// Hack-style
class AppRoot extends LitElement |> connect(store)(^) {}

instead of the following matching ordinary pipeline functional composition semantics

class AppRoot extends LitElement |> connect(store) {}

The latter is seen by my colleagues and me as far more readable, with the ubiquitous pipeline functional composition conventions I mentioned in my previous comment in mind.

@lozandier fair, that is probably optimal and I'd be happy with that as well, I'd argue that the Hack-style would be consistent and still very succinct and readable.

// latter expression should be same as (x) => ({bar: x });
value |> foo(^) |> {bar: ^};

// latter same as (x) => `Hey, ${x}!`; 
value |> foo(^) |> `Hey, ${^}!` 

// latter as async (x) => await bar(x);
value |> foo(^) |> await bar(^)

But it really does seem trivial to be a sticking point, considering the enormous benefits the operator will bring as a whole.

Readability is pretty subjective, but with regards to helping with readability around arithmetic and awaiting, I'm unsure it's better than what is currently available:

Chaining arithmetic operations with |> is a bit rough looking. x |> ^ + 2 |> 10 / ^ Who would do that when they can just do 10 / (x + 2)?

I'd probably expect the chaining to be done around the arithmetic not inside

x |> await fetchVal(^) |> 10 / (^ + 2) |> await saveVals([otherVal, ^])

Chaining awaited promises is already achievable, in a more memory-efficient way*, with .then(). x |> await foo(^) |> await bar(^) |> await baz(^, 'something') is just foo(x).then(bar).then(x => baz(x, 'something')).

Yeah, chaining promises is great, but that's obviously limited to promises; I think the value is the flexibility in more general usage

x |> await foo(^) |> convertData(arg, ^) |> `hello: ${^.name}` 

(* large async-await blocks will not GC what is in the async function's scope, where then chains will GC what they don't use, generally)

This is an interesting point. Maybe it deserves its own discussion (couldn't find one; has there been any?)

The way Hack pipe is specced right now, it keeps intermediate topic values alive for the whole duration of the pipeline. That makes it kinda unsuitable for chaining awaits.

I'll be frank, I don't see this as "arduous":

class Comment extends Model |> Editable(^) |> Sharable(^) {}

This fits into the same category to me as the multiply / add function examples seen elsewhere: they're small examples with marginal differences in the syntax. It's marginally worse than the point-free version, sure, but I don't see that as rising to the level of "arduous", hence the reason I'm interested in more involved examples.

The connect example is more interesting because while yes, the current API would require you to write it as:

class AppRoot extends LitElement |> connect(store)(^) {}

My suggestion is that the API itself should change to an uncurried alternative:

class AppRoot extends LitElement |> connect(^, store) {}

It's only necessary to write const connect = store => element => ... because of its interaction with the pipe function and the tools currently available for function composition in JavaScript. As you mentioned up top, functions need to be designed for composition, requiring its own calling style which looks weird when used outside of the pipeline. An operator which leans into mainstream call syntax doesn't require functions to be designed for it, so you can go back to writing connect such that it takes all its arguments at once.
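
To make the two shapes concrete, here is a toy sketch; the connect, LitElement, and store names are illustrative stand-ins, not the real APIs:

```javascript
class LitElement {}
const store = { name: "app-store" };

// Curried shape, designed for point-free composition:
const connectCurried = (store) => (Base) =>
  class extends Base { static store = store; };

// Uncurried shape, taking all its arguments at once:
const connect = (Base, store) =>
  class extends Base { static store = store; };

// connect(store)(LitElement) vs connect(LitElement, store):
class AppRootCurried extends connectCurried(store)(LitElement) {}
class AppRoot extends connect(LitElement, store) {}
```

Both produce the same class; the difference is purely in which call syntax the API is optimized for.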

This dovetails with @lightmare's comments, although I concede the Hack version of RxJS specifically (and importable methods generally) is worse than using .pipe in that example and prefer dedicated syntax for that case.

Funny enough, I think this reads more fluently:

class Comment extends Sharable(Editable(Model)) {}

And this reads better too, IMO:

class AppRoot extends connect(LitElement, store) {}

I don't think I'd even want the F# pipeline for either of those, TBH. Even if it would still be a little cleaner to read than the current proposal. Just doesn't seem like a good use for a pipeline.

@benlesh I agree with that first one, in that it literally reads as a "sharable, editable model". I think if we combined the two examples, we'd see an advantage to piping them (setting aside the fact that we're mixing two paradigms here):

// Unpiped
class MyModel extends Sharable(Editable(connect(LitElement, store))) {}

// With pipe
class AppRoot extends LitElement |> connect(^, store) |> Sharable(^) |> Editable(^) {}

The latter version (imo) makes the base class clear by putting it first, then listing the mixins it's composed with, while the first one starts to suffer from the "inside out evaluation" issue pipes are intended to address.

@mAAdhaTTah @benlesh Yeah, this example is much more real world; I try to keep my examples small, to a fault. Pipelining makes the base class clear, which is why I primarily pursue pipeline syntax with mixins, though the benefit is somewhat marginal when only a small number of mixins is needed.

That said, I prefer

class AppRoot extends LitElement |> connect(^, store) |> Sharable |> Editable {}

My chains are often 5+ tasks long; this is especially the case for data transformations common in data-science tasks involving things like normalization. As mentioned in my comment up top, solving such problems with a pipeline-oriented approach is common, with data-science-oriented languages like Julia already having a pipeline operator (|>).

With RxJS / IxJS in mind, it's not uncommon to have multiple instances of map(), tap(), etc. in a data-processing pipeline, or in pipeline processing involving asynchronous UI code.

The need to type (^) for first-class composition accordingly adds up quickly. When I say it's arduous to use the Hack-style syntax for mixins here, I mean it is so physically and ergonomically.

Even as a Kinesis Advantage 2 and ZSA Moonlander user, I'm legitimately concerned about my pinkies over a workday of typing (^) every time I want to do first-class composition. :)

So much so, I daresay I would rather type

Sharable(Editable(connect(LitElement, store)))

for ease of typing, rather than rewriting it with the Hack-style pipeline operator, even though I find it far less readable than the Hack-style pipeline, simply because of how tedious it would be to type (^) over and over again.

It's happened in trials already; the logic of such behavior is that you're merely replacing the () of nested composition with |> and (^) to express first-class/basic composition with the Hack-style pipeline operator. Experiences like this are why I don't think it's valuable to require (^) for first-class composition.

The appeal of a dedicated pipeline operator, to me, is that it's more fluid and potentially needs fewer characters (usually the same: |> vs ()) to functionally compose with unary functions, which again merely represent the simplest and most essential kind of subroutine to add to a pipeline of tasks that will be executed, in order, as data becomes available immediately or over time.

All that said, I would very much prefer not needing to rewrite connect at all as you have.

class AppRoot extends LitElement |> connect(store) |> Sharable |> Editable {}

It's also not necessarily easy, or even possible, to rewrite connect (though connect can be made curryable in a way that supports both, for optimal versatility), and it's problematic that I would need to because of Hack's cognitive tax with its use of (^).

In my opinion, a good pipeline operator is accommodating of all functions common in composition. It's evident that HOFs are a common enough type of valid function for people to write and compose with; they're common enough in general JS codebases that, for years, employers have audited JS developers in interviews on whether they understand them.

// latter should be same as async (x) => await bar(x);
value |> foo |> await bar

Note that this is not true; this is why the F#-style proposal had to have a special "bare await" syntax. Just putting an async function in a pipeline means the next step in the pipeline will see a promise, not an awaited value; you've effectively written an identity function there. You have to switch over to piping via .then(), and if you want to do normal piping on the value after that point it'll be nested into the callback.
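
This can be demonstrated today with a userland pipe() helper standing in for the operator (bar here is a hypothetical async function):

```javascript
// Userland stand-in for a left-to-right pipeline.
const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

const bar = async (n) => n * 2;       // async: always returns a promise
const describe = (v) => typeof v;

// The step after `bar` sees a Promise object, not the resolved number:
const kind = pipe(21, bar, describe); // "object", not "number"

// To keep working on the resolved value, you must drop into .then():
pipe(21, bar, (p) => p.then((n) => n + 1)).then(console.log); // logs 43
```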

@tabatkins That was hypothetical syntax. I was suggesting something can be hashed out to allow that syntax.

As an aside, will there be examples of the handling of Promised-based abstractions failing within chains?

I was suggesting something can be hashed out to allow that syntax.

Implicitly awaiting an async function isn't viable; that was discussed back when we were first adding Promises to the language and got rejected pretty hard, for code-predictability and perf reasons. It needs an explicit await somehow, and the only way to do that in F#-style is to have a syntax carve-out (or do nothing, so you'd have to use parens around the pipeline and prefix it with an await, but that's pretty bad).

As an aside, will there be examples of the handling of Promised-based abstractions failing within chains?

Sure, what sort of example are you looking for? The code's the same as without pipelines - if you await a rejected promise, the rejection reason is thrown and you can try/catch it. Or you can do .then() on the promise to handle it directly, if you prefer.

FWIW, this example is effectively the "Smart Mix" proposal we had:

class AppRoot extends LitElement |> connect(^, store) |> Sharable |> Editable {}

The issue with this is the RHS of the pipe changes from "function application" to "expression evaluation" depending on the presence of the placeholder, which is a refactoring hazard. It also wouldn't have supported LitElement |> connect(store) because "bare style" was limited to bare identifiers in order to minimize those hazards, but the champion group eventually dropped Smart Mix in favor of Hack.

The need to type (^) for first-class composition accordingly adds up quickly. When I say it's arduous to use the Hack-style syntax for mixins here, I mean it is so physically and ergonomically.

My disagreement here is in privileging the typing of the pipe over the reading of it. If code is generally read far more often than it is written, then we should be privileging reading it. The version with the placeholder is more readable (to me/imo) because it makes explicit that which is implicit in the other examples.

This is admittedly my issue with point-free function composition in general. While the initial writing of said code can produce elegant-looking syntax, I have found testing and debugging code written in that style extremely challenging. By extension, while I can see the physical tax of literally typing ^, I find the cognitive tax of point-free code to be much higher, especially for anyone transitioning into it from non-point-free code.

As an aside, I suspect we'll end up back at % as the token (see latest on #91), although I don't think it changes the point you're making.

In my opinion, a good pipeline operator is accommodating of all functions common in composition.

This is the thing tho: F# pipe doesn't actually do that. It specifically, narrowly accommodates functions written in a unary, point-free style. If I want to compose anything else, even other functions, I have to rewrite it in a particular way, or use a curry helper, or wrap it in an arrow function. We can accommodate function composition with functions that aren't written narrowly in this style but are otherwise perfectly composable with Hack pipe.
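
To illustrate the three workarounds, here is a sketch with a userland pipe() helper standing in for F#-style piping; clamp and curryLast are hypothetical helpers, not real APIs:

```javascript
// Userland stand-in for a left-to-right pipeline of unary functions.
const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

// A multi-argument function that doesn't fit a unary pipeline as-is:
const clamp = (value, min, max) => Math.min(Math.max(value, min), max);

// 1. Wrap it in an arrow at the call site:
const a = pipe(150, (v) => clamp(v, 0, 100));

// 2. Use a curry helper:
const curryLast = (f, ...rest) => (v) => f(v, ...rest);
const b = pipe(150, curryLast(clamp, 0, 100));

// 3. Rewrite the function itself in curried form:
const clampBetween = (min, max) => (value) => clamp(value, min, max);
const c = pipe(150, clampBetween(0, 100));

// a, b, and c are all 100.
```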

As an aside, will there be examples of the handling of Promised-based abstractions failing within chains?

Sure, what sort of example are you looking for? The code's the same as without pipelines - if you await a rejected promise, the rejection reason is thrown and you can try/catch it. Or you can do .then() on the promise to handle it directly, if you prefer.

Showing errors handled directly with then would suffice for what I'm looking for.
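
For what it's worth, a minimal sketch of both styles, with a hypothetical loadUser call standing in for any promise-based API:

```javascript
// Hypothetical API call that can reject.
const loadUser = (id) =>
  id > 0
    ? Promise.resolve({ id, name: "Avery" })
    : Promise.reject(new Error(`bad id: ${id}`));

// Style 1: handle the rejection in the chain itself, via then's second argument.
const nameOrDefault = (id) =>
  loadUser(id).then(
    (user) => user.name, // runs on fulfillment
    (err) => "anonymous" // runs on rejection instead of throwing
  );

// Style 2: await turns the rejection back into a throw you can try/catch.
async function nameOrDefault2(id) {
  try {
    const user = await loadUser(id);
    return user.name;
  } catch (err) {
    return "anonymous";
  }
}
```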

This is the thing tho: F# pipe doesn't actually do that. It specifically, narrowly accommodates functions written in a unary, point-free style. If I want to compose anything else, even other functions, I have to rewrite it in a particular way, or use a curry helper, or wrap it in an arrow function. We can accommodate function composition with functions that aren't written narrowly in this style but are otherwise perfectly composable with Hack pipe.

This is where we often reach an impasse, and the common reason the response to the Hack style has been so contentious, regardless of how frequently someone "functionally programs": the simplest and most essential representation of a subroutine in a pipeline is one that takes the starting data, or the result of the previous subroutine in the pipeline. That's what a unary function is.

This is accordingly why there's severe frustration over the need for (^) when you know the result of the previous task/subroutine must pipe into the following subroutine.

|> alone is sufficient to communicate this; the (^) is redundant when tacked onto unary functions, given that you know the previous result should be passed to them. A placeholder is desirable only when you need to redirect the result into a parameter other than the first parameter of the following subroutine; there, a placeholder token such as ^ makes sense for efficient typing and clarity:

2 |> double |> adjustStatOfCharacter("Avery", "speed", ^) 

The simplest function you can write to directly integrate into a pipeline is a unary function. It is accordingly strongly desired for unary functions to be tacit in representation. Everything else that isn't such a function being represented as (<piped data>) => <expression> seems pretty clear to me.
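
Rendered with a userland pipe() helper, that convention looks like this; double and adjustStatOfCharacter are illustrative stand-ins from the example above:

```javascript
// Userland stand-in for the pipeline: unary steps stay tacit,
// anything else becomes an explicit arrow.
const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

const double = (n) => n * 2;
const adjustStatOfCharacter = (name, stat, value) => ({ name, stat, value });

const result = pipe(
  2,
  double,                                           // tacit unary step
  (v) => adjustStatOfCharacter("Avery", "speed", v) // explicit arrow for the rest
);
// result: { name: "Avery", stat: "speed", value: 4 }
```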

Glad to see the discussion advanced healthily :)

I only have a few specifics left I would like to raise: after reading all the replies, I’m still squarely in the camp where I prefer no pipe operator over the current proposal.

However, what really surprised me was someone’s claim that temporary variables are an anti-pattern. I mean, what?! Sure, I see examples with excessive amounts of temporaries that are supposed to justify the need for the Hack proposal, but I’m not buying it. Most of those examples could be cleaned up with a slight amount of common sense and no new operators.

In fact, I see temporaries as something valuable for readability, because they provide context for what is going on. They provide breathing room in trying to understand how a function works.

Let’s look at an innocent example of fetching some data using fetch (in following one of the themes used above):

const headers = { Accept: "text/csv" };
if (token) {
    headers.Authorization = `Bearer ${token}`;
}

const url = `${baseUrl}/data/export`;

const csv = await fetch(url, { headers })
    .then(response => response.text());

return parseCsv(csv);

I would say that’s fine code that doesn’t need any intervention by new operators.

But some would apparently rewrite it like this:

const headers = { Accept: "text/csv" };
if (token) {
    headers.Authorization = `Bearer ${token}`;
}

return `${baseUrl}/data/export`
    |> await fetch(^, { headers })
    |> await ^.text()
    |> parseCsv(^);

But what’s the benefit here? The code was already linearized; is it really so we can eliminate all temporaries? In fact, I think we already lost something here: context as to what all the temporaries are! It might be hard to see, because I take it everyone in this thread is familiar enough with the fetch() API that the above poses no real problems. But imagine you’re a junior and you have to parse the above: you just cannot catch a break. You have to understand the whole thing to even make sense of the individual lines.

In a pipe, you need to understand the lines before a step to understand what the input is and you need to understand the lines after to understand what the output is. If you’re not familiar with the API being used that can be a real challenge. One that named temporaries help you with.

But if elimination of temporaries is really the goal, why not go all the way:

return Object.fromEntries(
    [["Accept", "text/csv"]]
        .concat(token ? [["Authorization", `Bearer ${token}`]] : [])
)
    |> await fetch(`${baseUrl}/data/export`, { headers: ^ })
    |> await ^.text()
    |> parseCsv(^);

I hope no one thinks this is desirable. So where do we draw the line? Apparently not all temporaries are an anti-pattern, and where each of us draws that line is just a subjective matter based on our values.

And it’s with those values that I think the crux lies. Maybe my values are out of line, but I suspect it’s the values of this proposal’s champions that are. I can very well imagine how, after championing a pipe proposal for years, you might feel so inclined towards it that common sense starts to look like an anti-pattern.

I find it telling that when I posted my article to Reddit I got several dozens of responses that could be roughly grouped into two camps: people that were happy because they read it as an endorsement of F# and people that were appalled because they don’t want any pipe operator at all. Not a single person spoke up in defense of Hack.

Maybe Reddit doesn’t represent a majority opinion, but maybe the champions don’t either. Personally, I think I’m a rather mainstream guy doing React and Redux at work. As far as I know, none of my current or previous co-workers are wishing for Hack, but maybe that’s not a majority opinion either.

And yet I keep seeing claims from the champions that Hack is better for the language overall and better for “everyone”. But they make these claims through abstract reasoning, where they assume “everybody” (or at least a majority) shares their values. They might feel emboldened because TC39 shares their view when it comes to F# vs. Hack, but they might vastly overestimate how many in the community really want a pipeline operator (if it's not F#) to begin with.

So my advice is: we should find quantitative evidence that there is a real majority that would find this proposal more helpful than harmful (because of readability concerns or otherwise).

To illustrate the value of temporaries, I looked at some code snippets from my open-source projects and rewrote them using Hack:

return `(${BLOCK_ELEMENTS.join("|")}|br)`
    |> new RegExp(`^<${^}[\t\n\f\r ]*/?>`, "i")
    |> ^.test(string.slice(index));

(Original: https://github.com/arendjr/text-clipper/blob/master/src/index.ts#L556)

index = findResultItem(resultItems, highlightedResult.id)
    |> resultItems.indexOf(^) + delta;

(Original: https://github.com/arendjr/selectivity/blob/master/src/plugins/keyboard.js#L54)

const room2Ref = rooms.get(getId(portal.room))
    |> {
        description: "",
        flags: ^.flags,
        name: "",
        portals: [],
        position: portal.room2.position,
    }
    |> await sendApiCall(`object-create room ${JSON.stringify(^)}`);

(Original: https://github.com/arendjr/PlainText/blob/main/web/map_editor/components/map_editor.js#L99)

This is the type of code you would be running into in the wild if the Hack proposal were accepted, and frankly I think we'd be worse off for it.

Another readability downside: note how all of these expressions move the actual result away from what we're trying to do with them. In the first example, the return is moved away from the call to RegExp#test(), making it harder to see what is actually being returned. In the third example, const room2Ref = is moved away from await sendApiCall(), making it harder to see what room2Ref ends up being, or (the inverse) to see what we are actually doing with the result of the API call. To some extent, the Hack operator turned already linear code into less linear code.


Interestingly, while doing this exercise I quickly realized how few code snippets actually lend themselves to being "pipelined" at all. I ran into multiple snippets where I thought I could transform them, only to realize a temporary was referenced by more than one expression afterwards, making a pipeline unsuitable. While this is not a readability argument against pipelines, it is a maintainability one: imagine needing to make a modification to a function, only to realize some intermediate value is now needed more than once. Suddenly you'll be stuck tearing open the pipeline, moving results back into temporaries, and rewriting the code.
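
A minimal sketch of that situation, with hypothetical price helpers:

```javascript
// Toy helpers, purely for illustration.
const parsePrice = (s) => Number(s.replace("$", ""));
const applyDiscount = (n, pct) => n * (1 - pct);
const formatPrice = (n) => `$${n.toFixed(2)}`;

// Pipeline-friendly: each intermediate result is used exactly once.
const label = formatPrice(applyDiscount(parsePrice("$20.00"), 0.5));

// New requirement: also report the amount saved. The parsed price is now
// referenced twice, so it has to come back out into a named temporary and
// the linear pipeline is torn open.
const price = parsePrice("$20.00");
const discounted = applyDiscount(price, 0.5);
const saved = price - discounted; // second use of `price`
```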

Interesting discussion regarding class declaration. As mixin itself is an OOP concept, and we're nesting them to achieve code composability and readability in the first place (Sharable, Editable are adjectives and expected to nest), I would never have thought of using them with a pipe operator on my own.

That said, if people want their BaseClass to be leftmost for clarity, while retaining the readability of mixin in general, the solution is to mix (heh) styles to get the best of both OOP/FP worlds; the trick here is to find just "the right blend":

class AppRoot extends LitElement |> Sharable(Editable(storeConnected(^, store))) {}

The above basically reads: "AppRoot is a LitElement that is sharable, editable, and store-connected." And, to me, it reads like regular English already. It's true that you still need to type 3 more chars toward the end, but the difference between the F#/Hack styles is minuscule here and doesn't affect the understandability or maintainability of this code. If anything, people from either an FP or OOP background can understand such code and have no trouble maintaining it. And 6 months, 3 years, or 6 years from now, the code will still be as readable as when it was first written.


From my experience, the mistake people often make when utilizing any programming style is to go "too deep in" to the point of absurdity. Yes, if you abuse hack pipe operator hard enough, you reach the point of diminishing returns. But that very same statement can also be made to both F#/Hack styles, or even arrow functions/HOF in general, for that matter. None of the syntax or programming style in existence is invulnerable to abuses. Like many other aspects in life, moderation is key.

In my opinion, this is a rather half-hearted attempt to dismiss concerns with the Hack proposal (paraphrasing): "anything can be abused, so the fact that Hack can be abused is no argument against it". My argument is not so much that it can be abused, but that, by my readability standards, most of the examples touted by the champions as "improvements" already constitute abuses (i.e. they are worse than what we can write today with the status quo and some common sense), and that by being applicable to any expression, the operator invites use in scenarios I would clearly characterize as abuse. In fact, between the status quo and the various abuses encouraged by this operator, I see only a very thin band of legitimate usage, and that is what makes me say its value is outweighed by its harmfulness.

Just because both spoons and guns can be abused, doesn't mean we should allow or disallow both to an equal extent.

In general, twisting the pipeline operator to support anything that is not a function is abuse. A pipeline operator should just pipe a value through one function or more. Doing flip-flops to support things like new and await just shows that we didn't understand the simplicity of a pipeline operator.

Please put the proposal on hold ASAP, so we'll all have a chance to understand its simplicity before we introduce irreversible damage to the language.

From @mAAdhaTTah :

My disagreement here is in privileging the typing of the pipe over the reading of it. If code is generally read far more often than it is written, then we should be privileging reading it.

From the proposal:

It is often simply too tedious and wordy to write code with a long sequence of temporary, single-use variables. It is arguably even tedious and visually noisy for a human to read, too.

Temporary variables have a chance of being renamed on refactor. Actually, has refactoring been considered at all?

Code that is hard to intuit can at least be helped in the first pass of a refactor by renaming. Perhaps we can overcome this in Hack, but there's still no chance of developing the same kind of intuition that is already writ large in the minds of the FP developers clamoring for the feature in the first place: that everything in a pipeline is a function that takes the result of the previous step as its only argument.

Come to think of it, intuition is extremely important for readability. It's easy to forget the intuitions we've built-up over time. Reading new code that is written to convention, coupled with good naming practices, rewards our intuitions with the delight of quickly grokking what it does.

Learning a new convention also rewards us with revelatory understanding of code that was abstruse only 5 minutes ago.

Do Hack and F# offer equivalent opportunities at learning conventions of this kind? By themselves they both conceal information, one with a token, the other with tacitness. But the learned conventions that overcome this "mystery meat" are not equivalent.

In Hack's case, the conventions are not even transferable. It may be easy in principle to learn the purpose of the Hack placeholder, but it's an obstacle to developing intuitions about pipelines generally, and the first-class functional nature of the language specifically.

Learning what the Hack tokens mean will reward the developer with an understanding of Hack pipes, but really nothing else. In what sense then is the Hack proposal "privileging reading code"?