airbnb/javascript

What is the recommended way to deal with no-param-reassign in reducers?

callumlocke opened this issue · 10 comments

This problem has been discussed before, but I haven't seen a general agreement on the best way to solve it.

Although this reduce() operation (as a whole) is pure, the callback within it is not, so it breaks no-param-reassign:

const arr = [['fred', 24], ['bob', 32]];

const obj = arr.reduce((accumulator, item) => {
  accumulator[item[0]] = item[1]; // ERROR! no-param-reassign
  return accumulator;
}, {});

// obj === { fred: 24, bob: 32 }

Option 1 – suppress the linter:

const obj = arr.reduce((accumulator, item) => {
  accumulator[item[0]] = item[1]; // eslint-disable-line no-param-reassign
  return accumulator;
}, {});

Option 2 – make a new object each time:

const obj = arr.reduce((accumulator, item) => (
  { ...accumulator, [item[0]]: item[1] }
), {});

Neither option feels great.

  1. Suppressing the linter should be reserved for unusual edge cases, imo, not everyday idiomatic JavaScript like .reduce().
  2. Creating a new object on every iteration creates extra garbage. A programmer can take one look at the operation and see that it's pure. I realise that technically arr.reduce might refer to something other than Array.prototype.reduce, so the linter can't make a special exception here. But it feels wrong to write slower code just because of a limitation in the linter.

What's the best thing to do?

You can also specify the ignorePropertyModificationsFor option. (http://eslint.org/docs/rules/no-param-reassign#options)

"creates extra garbage" is a memory concern, and in a memory-managed language, that should be none of the programmer's. Create a new object on every iteration, which makes the reducer pure (as well as the overall reduce, which is pure in both cases).

However, ignorePropertyModificationsFor is already enabled in the next release of eslint-config-airbnb-base, so if your accumulator matches one of the names that option covers, the reassignment will be allowed.
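For reference, a sketch of how that option can be set in your own ESLint config (the names in the array here are illustrative — the actual list shipped by eslint-config-airbnb-base may differ):

```js
// .eslintrc.js (fragment)
module.exports = {
  rules: {
    'no-param-reassign': ['error', {
      props: true,
      // allow property writes on parameters with these names
      ignorePropertyModificationsFor: ['acc', 'accumulator'],
    }],
  },
};
```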

mr21 commented

"creates extra garbage" is a memory concern, and in a memory-managed language, that should be none of the programmer's

But it should be a concern for great programmers.

Like many "web developers", you seem to think the GC is magic, right? Infinitely fast?
Have you ever once heard of a browser leaking memory? Does that ring a bell?

@mr21 please don’t be hostile.

mr21 commented

Please don't write incredibly wrong comments.

@mr21 it’s not wrong, and if you continue that tone you’ll be blocked from the repository.

mr21 commented

Saying a dev shouldn't take the garbage collector into account is exactly like saying a citizen can throw garbage on the street because people are paid to clean it up afterwards.

The people paid to do that aren't magic and aren't everywhere, exactly like the GC. If you create 100 useless objects, they need to be cleaned up, and that takes 100×n seconds.
And after a while it becomes apparent that the GC doesn't collect everything (or Google and Mozilla don't know how to free their mallocs, because sometimes I have 0 tabs open and 4 GB of memory in use?)

@mr21 When n is a number of microseconds, it doesn't matter in most programs. In other words, if you follow best practices, then for most programs the GC is magic and requires zero thought. JS offers you no control over, or observability of, GC — you have no way of knowing whether creating those 100 objects is slow, or whether the engine is inlining and optimizing it to be virtually instantaneous, and it's none of your concern. The point of a higher-level language abstraction is that you don't have to burden yourself with the layers below the abstraction.

mr21 commented

Yes, but you have to multiply it out. GC is negligible only if you code really precisely (something nobody does on the web). Currently devs just use react/redux/etc., and when any datum changes the WHOLE store is duplicated, ALL the render functions are re-called, and the whole shadow DOM is diffed against the real one.
Do you think all the objects needed for that huge operation (an operation that surely takes more CPU than it took to send humans to the moon, by the way), this huge diff via callbacks, take 0s to clean up each time?

you have no way of knowing if you creating those 100 objects is slow

So why, when I close a tab, don't I see my RAM go down? If the GC's magic is real, why does this happen every day?

Do you think Chrome has a malloc somewhere that's never freed? And Firefox too? That one day somebody will say,
"Oh! The leak was here — now when we close a tab the RAM is exactly as it was before opening the tab!"?

No, that will never happen, except for people who reuse their objects with high precision.

You're not describing something a JS developer should be thinking about. GC is negligible most of the time - you don't need to "code really precisely" (something most competent developers do on the web), you just need to write clean code.

If you close a tab and your RAM doesn't go down, then clearly that's a bug in the browser - no amount of attempted memory management in JS can fix that.

You clearly prefer to write your JS code as if it was C, tightly controlling memory usage. Go for it! This guide does not endorse or encourage that mindset, nor will it.