JSRocksHQ/slush-es20xx

Add front-end preset

UltCombo opened this issue · 10 comments

Might also be interesting to support both back-end and front-end transpiling in the same project.

A .babelrc file should be all we need, once we decide on a directory structure and add a prompt to the generator.
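A minimal sketch of what that `.babelrc` could look like (the preset names here are assumptions; the exact options depend on the Babel version the generator targets):

```json
{
  "presets": ["es2015", "stage-0"],
  "ignore": "node_modules"
}
```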

My front-end structure proposal:

Project example megazord:

src/
├── app.js
├── lib
│   ├── megazord.js
│   └── modules
│       └── utils.js
├── tests
│   └── main.js
└── sample
    └── index.html

_src/app.js_
The application bootstrap file.
Ex:

import Megazord from './lib/megazord';
let app = new Megazord(); 

_src/lib_
The application core files stay here, like the main class Megazord.
Ex:

export default class Megazord {
  constructor() {
    console.log('App init');
  }
}

_src/lib/modules_
The application modules live here, like the utils helper.
Ex:

let util = {
  log(msg) {
    console.log(msg);
  }
};

export default util;

_src/tests_
Some initial unit tests are placed here.

_src/sample/index.html_
A sample app up and running: the index HTML file loads the already-generated bundle.
Ex:

<html>
  <head>
    <script src="dist/bundle.js"></script>
  </head>
</html>

PS: We'd need to handle modules on the front-end, using Browserify if the project was generated with CJS, or RequireJS for AMD.

I guess the directory structure doesn't (read: shouldn't) matter too much.
I was thinking about something like public/js/**/*.js as the default, so you can use both back-end and front-end in the same project, and the directory structure is up to the user. The default glob(s) (public/js/**/*.js) should also be configurable through the build.js file, of course.

As for the output format, I was thinking about using AMD (RequireJS) so that only the changed files are touched -- easier to implement incremental build, with better performance as you don't have to cache and concatenate the other files on every incremental build.
Of course, Browserify can be implemented in the future as well, but I'd see it as an extra step after the AMD implementation.
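To illustrate why AMD makes incremental builds cheap: each source file compiles to a standalone `define()` call, so only the changed file needs to be re-emitted. The sketch below is what Babel's AMD module formatter *might* produce for `src/lib/modules/utils.js` (not actual generator output), with a tiny `define()` shim included so the example is self-contained:

```javascript
// Minimal define() shim so this example runs on its own:
var modules = {};
function define(name, deps, factory) {
  var exports = {};
  factory(exports);
  modules[name] = exports;
}

// Hypothetical AMD output for src/lib/modules/utils.js — one
// standalone define() per source file, independent of the others:
define('lib/modules/utils', ['exports'], function (exports) {
  'use strict';
  exports['default'] = {
    log: function (msg) { console.log(msg); }
  };
});

// Consuming the registered module:
modules['lib/modules/utils']['default'].log('App init');
```

Because each file's output stands alone, a watcher can rewrite just that one file on disk without touching or re-concatenating the rest of the build.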

Having a skeleton is good for novices, and I think a lot of novices (in ES20xx at least) will use slush-es20xx.
We can encourage a good, simple structure with modules, a starting-point main class, etc.
Also, having an HTML file is a good approach because with one simple step the new ES20xx app is up and running for the developer, which is kind of rewarding and motivational.

About AMD and CJS, I agree we can start with RequireJS, but IMHO Browserify should be supported soon, as we can keep the same CJS structure both in Node.js and the Browser.

Agreed, a simple and functional starting structure is good. That's what I'm trying to do with the Node skeleton as well.

> About AMD and CJS, I agree we can start with RequireJS, but IMHO Browserify should be supported soon, as we can keep the same CJS structure both in Node.js and the Browser.

From the implementation point of view, things are a bit more complicated:

  • In Node.js, CJS is consumed natively. We just have to compile the files that changed and they're ready to go.
  • In the browser, CJS needs a further transform (browserify/webpack/etc.). When a file is changed, it has to be compiled and concatenated with other files that didn't change. This means we need some extra logic to keep the incremental builds efficient.
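The extra bookkeeping in the second bullet can be sketched like this (an illustrative toy, not the actual pipeline code): keep each file's compiled output in a cache, recompile only the file that changed, and re-concatenate the cached outputs into the bundle.

```javascript
// Toy incremental-bundle cache. compile() is a stand-in for the real
// Babel transform; the point is the caching/concatenation shape.
var cache = {};

function compile(file, source) {
  // placeholder for the actual per-file transform
  return '/* ' + file + ' */ ' + source;
}

function onFileChanged(file, source) {
  // Only the changed file is recompiled...
  cache[file] = compile(file, source);
  // ...but the whole bundle still has to be re-concatenated:
  return Object.keys(cache).sort().map(function (f) {
    return cache[f];
  }).join('\n');
}
```

The per-file compile stays cheap, but the concatenation step is what AMD output avoids entirely, which is why it's the easier first target.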

So far, I've been trying to avoid splitting the JS pipeline logic, but it might as well be inevitable at some point.

Might be worth experimenting with two independent watchers/pipelines in this case.

I'm starting to doubt whether adding a front-end preset would be a good idea.

My main concern is that front-end development has such a diverse ecosystem, and es20xx is not (easily) pluggable. Think about CSS preprocessors, custom build/output formats, sprite generation and so much more.

I've been experimenting with webpack's react-starter, and I'm astonished how efficient it is. Its hot module replacement mode is far more efficient than pretty much any other workflow out there (similar to JetBrains' live DOM update and JS recompilation). Also, webpack is pluggable.

Bottom line: I don't really feel like releasing a suboptimal, non-pluggable front-end workflow.
Perhaps it would be best to keep rocksflow as a workflow for Node.js/io.js applications (focused on CLI, API and web server applications) for now, and launch a sister project for the front-end in the future if need be.

Thoughts?
/cc @jaydson

Hmm, I don't know.
I was thinking of something really easy to get started with.
Forget about the whole front-end ecosystem and think about a simple starting point for writing JavaScript apps with ES6/ES7, etc.

Yeah, it is not hard to implement a simple, opinionated front-end preset.
We can begin with that, and improve over time. Perhaps we can think about how to integrate rocksflow into more complex workflows, instead of integrating other things into rocksflow.

Here's my current plan:

  • Rename the "node" environment to "base", which will be copied to all generated projects.
  • Add a "Target environments" question to the generator. The available options will be "Node.js" and "Browser" checkboxes. (this would partially fix #11)
  • If the user ticked the "Browser" checkbox, then we add webpack to the gulp pipeline, and copy files from a new "browser" environment template.
  • We should implement #13 so we can easily enable/disable webpack (and other plugins) without messing around with the gulpfile.
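The "Target environments" question from the plan above could look something like this, assuming the generator uses an inquirer-style prompt (which slush generators commonly do; the question name and choices here are illustrative):

```javascript
// Hypothetical "Target environments" checkbox prompt for the generator.
// Shape follows the inquirer question format; names are assumptions.
var targetEnvironmentsQuestion = {
  type: 'checkbox',
  name: 'environments',
  message: 'Target environments:',
  choices: [
    { name: 'Node.js', checked: true }, // current default behavior
    { name: 'Browser' }                 // opts into the webpack pipeline
  ]
};
```

The generator would then branch on whether `'Browser'` appears in the answer to decide whether to copy the "browser" environment template and wire webpack into the gulp pipeline.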

Perhaps we can make this much simpler: add Webpack to the build and watch pipelines, allowing the user to disable it through the config options (#13).

"Isomorphic"/"Universal" JavaScript is quite mainstream nowadays, so there's little or no separation between back-end and front-end code. Even if you have completely distinct front-end code, it should be no problem to configure it to be compiled with Babel and Webpack in the current infrastructure.