Client side include feature for HTML
TakayoshiKochi opened this issue · 251 comments
Spun off from HTML modules discussion
There is a certain amount of interest in including an HTML snippet into an HTML document without using JavaScript. That would be similar to <iframe>
, but more lightweight, and merged into the same document.
It would work as a naive way to have your header and footer sections defined in one place.
I personally do not buy this much (sorry!), as we have enough primitives (fetch, DOM APIs, maybe Custom Elements) to realize an equivalent feature very easily. Other than ease of use, what is the benefit of having this in the platform?
How would this differ from HTML Imports?
HTML Imports load an HTML document via <link rel="import" href=...>, and its contents are never rendered without DOM manipulation via script. The document is stored in the link's import property. HTML Imports do more besides, e.g. <script> elements are executed.
This idea is about inserting an HTML snippet into an HTML document, e.g.
main document
<include src="header.html"></include>
Awesome contents
<include src="footer.html"></include>
header.html
<h1>Welcome!</h1>
footer.html
<footer>Copyright 2017 by me</footer>
will result in
<h1>Welcome!</h1>
Awesome contents
<footer>Copyright 2017 by me</footer>
I was always dreaming of seeing this feature in the browser. This tag, as proposed by @TakayoshiKochi, should allow putting some HTML content into the DOM in a simple way. I think the <include> tag should stay and not be replaced.
I would like to propose the following:
<include id="my-include" src="an_URL.html"></include>
And the event could be:
var included = document.querySelector('#my-include');
included.addEventListener('load', e => {
// ...
});
included.loaded.then((included_) => {
// here you see that
// included_ === included
// and this promise is ready once the HTML code
// from included.src has been fetched and appended to the DOM
});
Reproducing the behavior of document.currentScript, I found document.currentInclude easy to use, so if a script is executed inside an <include> then it knows where it is.
So, an include has a small set of features:
- load event
- loaded promise
- currentInclude (or a better name)
Hope this idea will be useful.
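The load event and loaded promise proposed above can be wired together with very little code. A minimal sketch of that relationship, modeled on a plain EventTarget so it runs without a DOM; FakeInclude and its finish() method are hypothetical stand-ins for the proposed <include> element and the moment its fetch completes:

```javascript
// Sketch (not a spec): how the proposed `loaded` promise could relate
// to the `load` event. `loaded` resolves with the element itself, as
// the proposal above describes.
class FakeInclude extends EventTarget {
  constructor () {
    super ()
    this.loaded = new Promise (resolve =>
      this.addEventListener ('load', () => resolve (this), { once: true }))
  }

  // Pretend the fetch + append just finished:
  finish () { this.dispatchEvent (new Event ('load')) }
}

const inc = new FakeInclude ()
inc.loaded.then (same => console.log (same === inc)) // true
inc.finish ()
```

Both consumption styles from the proposal (addEventListener and .loaded.then) then observe the same moment, which is the point of exposing both.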
There are some questions around this tag that I'd like to expose too.
- How to resolve the src path if it's relative?
- What if src changes? Should the fetched content change too?
- What if an include contains some tags?
<include src="an_URL.html">
<div class="preloader">...</div>
</include>
After fetching, should the innerHTML content be replaced?
- In which order should the load event be dispatched? From the most nested up to the top?
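The first question above, resolving a relative src, can be made concrete with plain URL math. A sketch under one plausible answer: the include's src resolves against the including document's base URL, exactly as <img src> does today. All URLs here are hypothetical; no DOM is needed:

```javascript
// Hypothetical document and include URLs; only the URL arithmetic matters.
const documentBase = 'https://example.com/blog/post/index.html'
const includeSrc = 'partials/header.html'

const resolved = new URL (includeSrc, documentBase).href
console.log (resolved) // https://example.com/blog/post/partials/header.html

// A nested include inside header.html would then resolve against
// header.html's own URL:
const nested = new URL ('logo.html', resolved).href
console.log (nested) // https://example.com/blog/post/partials/logo.html
```

Whether nested includes should resolve against the parent include (as sketched) or against the main document is exactly the open question the thread goes on to debate.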
I don't think we should do this. The user experience is much better if such inclusion is done server-side ahead of time, instead of at runtime. Otherwise, you can emulate it with JavaScript, if you value developer convenience more than user experience.
I'd encourage anyone interested in this to create a custom element that implements this functionality and try to get broad adoption. If it gets broad adoption we can consider building it into the platform, as we have done with other things like jQuery -> querySelectorAll.
@domenic I tried to develop this idea as a custom element for my projects, and found that it's possible to achieve HTML import, but there are some things that made that solution hard to debug. For instance, beforescriptexecute was removed or never implemented. Because of that I was forced to turn all my scripts into "inline" scripts.
I'll keep on spreading the word with more cases about how to split the code into small pieces without using extra JS effort.
What's the actual purpose of this? As domenic mentioned, you can already do this quite easily server-side, so why do we need an HTML element to do it less effectively?
Personally, I found this feature very useful in my projects. But this is only my personal opinion, and what @domenic said sounds fair. The only thing that I'd like to repeat is the absence of the beforescriptexecute event, which forces me to turn all the scripts into inline scripts. All other primitives are enough to implement this functionality in a custom element.
I'll be happy to share with you @Yay295 or anybody else my experience with this feature, the pros and cons, but that chat should be outside this issue.
I think it would be quite useful for any cases where we want DRY html authoring but not the burden of running code server side or requiring JS. It's actually what I naively expected html imports to do at first.
The use cases may be relegated primarily to the realm of small, static-only websites, but I think it's a huge advancement for those cases. Simple static-only sites make up a large number of websites, along with sites that probably should be purely static but cannot be, for reasons such as requiring server-side rendering to DRY up shared fragments such as the header, footer, etc. I'm thinking of all the shared-web-hosting site-builder tools and a large number of WordPress sites (a security/maintenance nightmare for typical site owners, in my experience) and things along those lines. These kinds of sites are typically owned/maintained by the least tech-savvy operators and are therefore likely under-represented in these kinds of platform-advancement discussions. I'm aware that dynamic rendering or static build tools can get the job done, but those are inaccessible tools for the majority of simple website owners (again, in my personal experience).
The JS-free aspect gets back into the philosophy of progressive enhancement including "site basically works without scripting enabled" and I think that's still important, personally, particularly when we have Brave browser picking up steam with JS disabled by default for security/privacy purposes.
I may try to take a stab at faking this using a custom element backed by fetch, but it wouldn't fill the same gap IMHO and would merely be a demonstration for illustrating the convenience it can provide to the page authoring experience once it's all set up.
I might also comment that I would expect client side includes to do something efficient with caching based on server headers or whatever, minimizing the UX cost of the extra round trips after first load (and I would presume we could also use link rel=preload etc. to great effect for load time beyond the first page). With http/2 implemented appropriately the UX cost of this feature should go away entirely.
I want to jump in and mention that PHP (Personal Home Page) was literally created to solve this problem "in the most simple way possible". This could be done so much more easily at the browser/markup level.
Imagine if the client could cache the entire header and footer and only need to download the main content... Sounds like a pretty dang powerful feature to me!
The HTML import feature is what big frameworks offer indirectly. I think if we have this feature then we have more possibilities to write nice things in a simple way. If HTML imports are built right into the browser, then I'll feel that it is a complete framework.
Further to @brandondees' point, I think I'd point out that offline-first PWAs using Service Worker very much encourage a client-side approach. For example in our PWA (editor.construct.net), despite it being a large and complex web app, we generate virtually nothing on the server side. This is the obvious way to design something that keeps working offline, because everything is static and local, and there's no need for a server to be reachable, especially if all the server is doing is a trivial substitution of content that could easily be done client side. So I think there are actually some significant use cases where you might want to process an include client-side, and "just do it on the server" doesn't cover everything.
FYI, the same discussion happened at WICG/webcomponents#280
I've implemented my own very quick-and-dirty demonstration here devpunks/snuggsi#109 to begin experimenting with the pros/cons this feature might have, and we're attempting to keep track of other related efforts for reference as well. @snuggs took it beyond the most basic proof of concept and appears to have brought it close to general production-readiness.
I had a discussion recently with a colleague whose initial impression was that this concept merely re-invents server-side includes, which should otherwise be easy enough to work with for most content authors, but I think there are some significant subtle differences still. It's not clear to me why server side includes have not been well leveraged in commonly used website building tools, and I think the reasons boil down to a lack of accessible (read: free) and user-friendly (enough for non tech-savvy users) authoring tools supporting that technology, and lack of standardization. There can be performance benefits from automatically leveraging client side caching of partial documents, which is something I was always baffled by the absence of since I first began learning web dev. New page loads for a given site can retrieve primarily only the portions of the document that are unique, without the need to re-transmit boilerplate sections such as header, navigation, footer, sidebars, etc. without even getting into how the same kinds of benefits also apply when using web component templates.
Oops - sorry about closing accidentally.
I had not been sure about the advantage of client-side processing over server-side includes (including PHP's include(), which sounds popular, though I don't have any data), but the PWA story (especially using a service worker to save client-server round trips) in Ashley's #2791 (comment) sounds like one of the good reasons for having client-side processing of HTML.
Indeed @TakayoshiKochi, we created a super simple <include- src=foo.html> iteration utilizing DOMParser. Methinks this is how polyfills are (not doing a good job of) handling HTML Imports.
I'd encourage anyone interested in this to create a custom element that implements this functionality and try to get broad adoption. If it gets broad adoption we can consider building it into the platform, as we have done with other things like jQuery -> querySelectorAll.
I concur with @domenic on providing a sound iteration / adoption / developer-ergonomics story, being worked on in this pull request.
The algorithm was as simple as follows. It also works with nested dependencies, thanks to custom element lifecycle reactions:
Element `import-html`

(class extends HTMLElement {

  onconnect () {
    this.innerHTML = 'Content Loading...'
    this.context.location = this.getAttribute `src`

    // (fixed: this was declared `headers` but passed to fetch as `hdrs`)
    let headers = new Headers ({ 'Accept': 'text/html' })

    fetch (this.context.location, { mode: 'no-cors', headers })
      .then (response => response.text ())
      .then (content => this.parse (content))
      .catch (error => console.warn (error))
  }

  // Parse the fetched markup into a detached document, adopt it,
  // then move its <head> and <body> children into this element.
  parse (string) {
    let
      root = (new DOMParser)
        .parseFromString (string, 'text/html')
        .documentElement

      , html = document.importNode (root, true)
      , head = html.querySelector ('head').childNodes
      , body = html.querySelector ('body').childNodes

    this.innerHTML = ''
    this.append (...[...head, ...body])
  }
})
Any caveats about DOMParser would be great to hear, especially for older versions of IE.
Hope this helps @TakayoshiKochi
/cc @brandondees
I've been thinking these last days, since @TakayoshiKochi opened this issue, about how to integrate this include feature with Worker, <link>, <iframe>, and so on; also, don't forget to take CORS into account... It looks too hard to achieve the goal of HTML import in a simple way. If <base> could be more flexible, then this feature could be done "as we have enough primitives".
Ignoring the fact that that code doesn't work, at all, you're really overthinking it. Here's a complete HTML test page. Just change the source to include.
<!DOCTYPE html>
<html>
<head>
<script>
class include extends HTMLElement {
connectedCallback() {
fetch(this.getAttribute('src'), {mode: 'cors', credentials: 'same-origin'})
.then(response => response.text())
.then(text => this.outerHTML = text)
.catch(err => console.warn(err));
}
}
customElements.define('include-html', include);
</script>
</head>
<body>
<!-- Include the partial HTML. -->
<!-- If the included HTML has includes, they will be included too. -->
<include-html id="test" src="to_include.html"></include-html>
<!-- No problems here either. It just logs an error if this happens. -->
<!-- script>document.getElementById('test').remove()</script -->
</body>
</html>
This should be a void element in my opinion. There's nowhere to put any nested elements except after everything, so you might as well just put them outside the include instead.
p.s. "Any caveats to DOMParser would be great. Especially older versions of IE." is irrelevant considering custom elements currently only work in WebKit browsers.
@Yay295 Nice. But how do we execute scripts that are present in src="to_include.html"?
The concept of HTML import should be more than just pasting static HTML, right?
I think the intent here is just to paste DOM content into another document. HTML imports are a different feature.
Ok @AshleyScirra, you're right. As a consumer, if I paste DOM content into another document then I expect to see scripts, links, workers, et al., and other inclusions parsed and executed. Hope this feature will gain broad adoption.
Ignoring the fact your code doesn't work ...
- @Yay295 the code works fine. It was a snippet from the PR that was clearly referenced in the previous comment. Spared you the details.
- This code is also intended to be used as a polyfill of sorts for the crappy implementation of the webcomponentsjs polyfill, which currently breaks for reasons outside of this thread. Therefore, to be clear, we PERSONALLY need a bona fide Document, not a string.
- I tried your method but ran into a few issues on different (ancient) platforms. Have you tried it (with scripts) on more than just your browser, @Yay295? Just curious.
- This was the fastest path I could think of that runs external scripts and styles.
DOMParser is fairly "ancient" based off the spec. (Thanks for the refactor, though; I'll add it to our pull request if HTML Imports keels over.)
/cc @brandondees @pachonk
@rianby64 You have to understand that from our tracking/observation of the insane multitude of issues related to imports and includes, I feel there's a lack of understanding of what the respective terms even mean. At least for my simple brain. CSS is a great example of saying import when include is actually what's happening. I've seen the community fracture over the last x years over the former of the two with module imports. Add the mess from the W3C for more confusion. Seems like they need tons of help, but I'm not too versed in the backstory, and there's much duplication.
I feel understanding the definitions is the first step. And I know I'm not the only one confused while piecing the bigger picture together relative to subresources of type text/html. Now I have to keep an issue tracker for the issues that round up the issues for the following 3:
- HTML Imports - May have been before its time but feel it's a great spec. Possibly PTSD happening.
- HTML modules - So far everything looks great on paper. More JS heavy.
- HTML Includes - Place fetched content in DOM (AND execute scripts/dep resolution/etc. if need be)
No. 3 must be able to run scripts no differently than loading images, as that's how the browser is intended to work today. What I learned is that instead of figuring out implementation details, I learned to appreciate connectedCallback CERs (custom element reactions). Without that it would be difficult to do dependency resolution.
As an aside. Many of the convos happening respectively are people saying similar things but not knowing what to call said thing. #CATAT (Call A Thing A Thing) - @tmornini
I hope I'm not the only one that has empathy for the first-day developer, which we all were once. And one of the reasons PHP was even made is because everyone had "includes" for CSS/JS but ironically not HTML.
My takeaways are clear(er) now. It can be done. It doesn't require any change to any specs. It's a simple implementation. All that said, we still have to use JS (unfortunately, but no longer a concern for our authors). Maybe it's just the "first day developing as a kid" in me who thought <frameset> was an amazing way to separate my concerns before I even knew JavaScript.
Lastly, CERs (custom element reactions) are RAD.
Thanks for input. Just trying to figure out where/who/what(org) to contribute to when/how. And most importantly learn along the way from some bright people. The code is the easy part ;-)
Happy Friday
@snuggs , Thanks a lot for these 3 definitions.
No. 3 must be able to run scripts no differently than loading images. As that's how the browser is intended to work today.
Can't figure out which part of the HTML standard states that restriction.
What I wanted to point out is the importance of other parts of HTML, like Workers, links with styles, and so on. Let's suppose the browser supports HTML include. What can be included? If the main concept is (as @AshleyScirra stated)
to paste DOM content in to another document
Then the first idea I'll try is to put scripts, links, and many other things inside that include, and expect that the addresses of all these things are resolved from the include's base address. Unfortunately, this can't be achieved using the current primitives: fetch, DOM APIs, Custom Elements, etc.
So, can workers be included? If so, then one last question: try to change the baseURL of a worker before loading it.
Then the first idea I'll try is to put scripts, links and many other things inside that include, and expect that the addresses of all these things are being resolved from the include's base address.
I'm not sure this is a good idea, actually. It means you could end up with two identical-looking scripts that actually load from different URLs:
<script src="script.js"></script> <!-- was in document originally -->
<script src="script.js"></script> <!-- was included from subfolder, loads different script -->
The easiest solution is probably just what @Yay295's polyfill did, essentially setting the outerHTML so DOM content is pasted in place in the main document. That uses the same base URL, but will still load scripts, images etc.
@AshleyScirra thanks for that baseURL nod. Hadn't even thought of that edge!
@AshleyScirra , what about the restrictions? Look at this question, please.
But how to execute scripts that are present in src="to_include.html"?
Doesn't inserting a script tag by assigning outerHTML already download and execute it?
No, it doesn't. Please, consider this restriction.
When inserted using the innerHTML and outerHTML attributes, they do not execute at all.
So, what I want to understand is whether an HTML include should execute scripts shipped inside it or not... and not only scripts: links, workers, images, and so on...
Oh, I didn't know that. Well, I agree that an HTML include feature should do that. Perhaps the polyfill could be modified to insert DOM elements instead. If it just fetches a document and then does appendChild for each of the root-level elements, that should execute scripts, right?
No, that way won't work either. A good candidate that allows you to execute scripts after insertion is Range.createContextualFragment. But the problem with createContextualFragment is that all references to scripts inside the documentFragment will be resolved against the first baseURL. And <base> can't be changed once defined.
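The baseURL pitfall described here can be shown with plain URL math. A sketch assuming an include fetched from a /partials/ path whose markup references a sibling script; all URLs are hypothetical, and only the arithmetic (not the DOM call itself) is demonstrated:

```javascript
// Hypothetical URLs for the main document and the fetched include.
const mainBase = 'https://example.com/index.html'
const includeBase = 'https://example.com/partials/nav.html'

// What the author of nav.html meant by <script src="menu.js">:
const intended = new URL ('menu.js', includeBase).href

// What a fragment built via createContextualFragment in the main document
// would actually fetch, since it resolves against the main document's base:
const actual = new URL ('menu.js', mainBase).href

console.log (intended) // https://example.com/partials/menu.js
console.log (actual)   // https://example.com/menu.js
```

The two paths differ whenever the include lives in a different directory from the main document, which is exactly the case a header/footer include encourages.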
Oh, I didn't know that. Well, I agree that a HTML include feature should do that. Perhaps the polyfill could be modified to insert DOM elements instead. If it just fetches a document then does appendChild for each of the root-level elements, that should execute scripts, right?
@AshleyScirra I proposed this earlier today and got lashback for overthinking. It suits our needs just fine, though. Again, this works with infinitely nested <include->s: devpunks/snuggsi#109 (comment). The trick was to call document.importNode on the documentElement from the import.
@rianby64 I never thought of Range.createContextualFragment. What does it do? Would this strategy be considered a "hack"? Have any code examples? Curious.
I think we're being derailed in to talking about how to implement the polyfill, rather than focusing on what the spec should say about this feature. Perhaps someone should make a series of test cases for what the feature is expected to do. Then anyone can write a polyfill that passes those tests, and the particular manner the polyfill does that is just an implementation detail that doesn't need to be discussed here.
The key here is to write tests, not polyfills, since they define what is expected of the feature. Writing the polyfill first can accidentally enshrine quirks of the polyfill in to the spec, since if it gains wide adoption it becomes very difficult to change it. It also provides browser makers with a way to verify their implementation is compliant if it ever becomes a real standard.
@AshleyScirra good point. Let's come back to the main thread.
As a consumer, I suggest that an HTML include should allow pasting DOM content which can hold scripts, workers, links, images, and other HTML includes. And the address of every pasted piece of content must be resolved against the HTML include's address. That's what I'd be happy to see in this feature.
Ok @AshleyScirra pretty simple.
An HTML include should bring nodes over (anything that inherits from HTMLElement, and Text/Comment nodes, honestly), and act as if they had been within that element in the first place when the parser encountered them.
I do believe <link> and <meta>, believe it or not, are OK to be in <body> these days, last I checked.
That's the spec I'd love to have. Resolving the baseURL was something I didn't think about, but a must for sure.
Further to my previous comment, I think URLs in the included document should either resolve against the URL of the main document, or any src, href, etc. attributes should be rewritten when inserting into the main document, so elements don't need a hidden base URL. Changing attributes upon insertion is more complex, though (you'd have to consider every attribute in all of the HTML spec as well as any future additions), and it would be simpler to just resolve against the main document URL, since that's more like a copy-paste of HTML content.
Another problem with changing the base URL might be that any included script that makes a fetch will do so against the base URL of the main document, not the included document. So if the base URL is changed for subresource requests, that becomes inconsistent with script fetches. So my vote is to not change the base URL.
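The attribute-rewriting alternative can at least be sketched for the simple cases. A hedged sketch, assuming only the URL arithmetic: each relative src/href is made absolute against the included file's URL at insertion time, so inserted elements stop depending on any base. The function name and URLs are hypothetical:

```javascript
// Rebase one attribute value against the include's own URL.
// Already-absolute URLs pass through new URL() unchanged.
function rebaseAttribute (value, includeURL) {
  return new URL (value, includeURL).href
}

rebaseAttribute ('image.png', 'https://example.com/partials/footer.html')
// -> 'https://example.com/partials/image.png'

rebaseAttribute ('https://cdn.example.net/lib.js', 'https://example.com/partials/footer.html')
// -> 'https://cdn.example.net/lib.js' (absolute URLs are untouched)
```

The hard part is exactly what the comment above notes: enumerating every URL-carrying attribute in the HTML spec (and future ones), which this one-liner does not attempt.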
Here's another example to illustrate the difficulties of changing the base URL:
<img src="image.png">
<script>
fetch("image.png")
</script>
Obviously these both ought to fetch the same resource. However if you include that HTML file with an altered base URL, they fetch different paths, because the script runs in the context of the main document.
@AshleyScirra, as far as I understand you, it looks like the HTML include tag should be something that appends content once it is fetched from a URL. This point is important to define because, as @snuggs pointed out, there are three different concepts, and the spec should choose one of them.
@domenic said:
The user experience is much better if such inclusion is done server-side ahead of time
It doesn't seem relevant to me what's possible to do on the server.
There are no HTML servers, only HTTP servers, so the HTML spec should be completely indifferent with respect to what's possible on the server, as that's an entirely separate protocol.
Using this along with an importable document as a remote example: I had to enable CORS to get <include-> to work. Afterwards all resources (<img>, <link>, & <script>) loaded.
The following is a smoke-test server I threw up just as a proof of concept, with two documents on two different domains. I was speaking to @brandondees about this and realized that at any given moment an author can (and should) use explicit absolute URLs when leaving their own domain, if I'm not mistaken.
I know you'd rather not get into implementation details, but that doesn't render real test cases useless IMHO. Each layer imports a script that states where it's being called from.
It utilizes about 3 layers of hierarchy with <include-> dependencies. I'm not sure what's linking to what, as I'm currently lost in inception, but a demo's worth a million words IMHO. @AshleyScirra I'm sure you can test your baseURL theory out.
I left <include- src=...> in as a wrapper and didn't replace .outerHTML, to make the placement understandable.
Pardon the bright colors. The red background comes from a <link rel=stylesheet> resource nested three <include->s deep in the hierarchy. https://snuggsi.now.sh/examples/include-
What other use cases there are besides these, we shall see.
To add another complication, includes should probably be blocking. They could then use async and defer for asynchronous loading, similar to how script elements work.
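A markup sketch of how that could look; the syntax here is entirely hypothetical, with the attributes simply borrowed from <script>:

```html
<!-- Hypothetical syntax, not a spec: blocking by default,
     with opt-outs mirroring <script>'s attributes. -->
<include src="header.html"></include>          <!-- blocks parsing, like a classic script -->
<include src="sidebar.html" async></include>   <!-- fetch in parallel, insert when ready -->
<include src="footer.html" defer></include>    <!-- insert after parsing completes -->
```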
that's a really key point @Yay295 we'll have to work on that part still I think.
IMHO, instead of trying to develop yet another import/include tool, we should rethink our approaches and start using <link rel="import">. It should be implemented in all browsers...
@rianby64 I couldn't agree with you more, and we continue to use them. However, it's sluggish, as you're guaranteed to be waiting until at least DOMContentLoaded. There are of course <link rel=pre*> mechanisms, but there are still complications around not being able to hook into constructor and connectedCallback, if I'm not mistaken. At least that's our current situation and firing order, based on the current webcomponentsjs polyfill, which leaves much to be desired.
We would also not be able to block with a polyfill, which, as @Yay295 pointed out, would be a nice feature. I'm not saying these have to be our only constraints, nor do I like it. But we've got to respect the reality that they won't ship across the board "ever", and I'm not sure we can change that on the FF side.
I fear FF will never implement HTML Imports, based on this great summary provided by @annevk. The irony is that in 2014 it was believed we would have reached consensus by now on (imports, modules, & includes), yet much is still in the same place. I wonder how Anne feels 3 years later, but he probably has PTSD from the plethora of people asking like broken records years ago.
Both ES6 modules and service workers open up resource dependency management to web developers. And while ES6 modules is mostly intended for JavaScript, we want to see what kind of dependency systems will be built on these two systems in libraries and frameworks before committing to a standardized design. Especially before committing to one that is not influenced by them. - Anne van Kesteren
On reading it again for the first time in a few years, there wasn't a "no"; more a "let's see what you build and whether people use it before consideration", just like anything that is considered for the spec. We do have a head nod towards using service workers, and even the polyfill we use doesn't do this for imports, let alone includes. What I am thankful for is getting a clearer understanding and separation of needs across the three terms. I feel the people who just wanted include were getting drowned out by (and were confusing to) people who are focused on the trajectory of ES6 modules / HTML Imports, which is more ancillary than relevant to this include discussion.
That said, I really, really like HTML Imports. I also think it came too soon and got some unnecessary flak. I also think there should be a more robust HTML Imports polyfill, something we are working on. I also realize that although imports don't have the adoption we'd like, that doesn't mean they don't solve problems very similar to what we are discussing, like the ability to get a sub-document into a master document. I realize there are ways to do this, as we are showing here, but it's still somewhat of a hack (conceptually) and requires the author to know JavaScript. It feels like kicking the can down the wrong alley with HTML modules, but that's outside the scope of this discussion.
You can get the desired include by wrapping a link rel=import inside a custom element. I haven't tried it, but I'm sure this idea will work.
Correct! I covered that use case in my example above, @rianby64. However, the convention selects the default <template> within the imported doc, not all immediate .childNodes within the document. What I believe we are discussing is that all immediate .childNodes are included as well.
Your best polyfill won't support workers. link rel=import should.
To circle back to the relevant topic, @rianby64: I am uncertain of the underpinnings of the import implementation, but surely some details are relevant to this discussion in regards to async, defer, etc.
Conceptually, IMHO, if you have HTML import via link rel=import, then you can emulate HTML include. The inverse is not possible.
Ok... here we go...
class IncludeHTMLElement extends HTMLElement {
connectedCallback() {
if (this.alreadyConnected) return;
this.alreadyConnected = true;
var link = document.createElement('link');
link.rel = 'import';
link.href = this.getAttribute('src');
link.addEventListener('load', e => {
while (link.import.body.children.length > 0) {
this.appendChild(link.import.body.firstElementChild);
}
});
this.appendChild(link);
}
}
customElements.define('x-include', IncludeHTMLElement);
So, I guess that means I quit... Client-side include can be achieved by using link rel="import".
The last thing I'd like to understand is whether I can link workers via CORS.
@rianby64 I believe you can also do the same using the following, without defining a custom element. @brandondees and I were discussing the other night that since <link>s are compliant within <body> now, perhaps imports within body should be included, OR <link rel=include> << maybe a bit more of a stretch, but I kinda like it. Also, interesting catch you found on the new Worker(...) domain URL resolution. Not to dig off topic, but I think a question relevant to any sub-resource from an include: what if an absolute URL is used?
It may only work for the first occurrence. I do understand it would not be feasible to run until DOMContentLoaded, but most Web Components polyfills work around the spec in this manner anyway. That's just an implementation detail. Focus on the what (spec), not the how (implementation). Merely playing with your algo:
void function () {
for (let link of document.body.querySelectorAll `link[rel=import]`)
link.addEventListener
('load', e => e.target.replaceWith ( ... e.target.import.childNodes ))
} ()
What I am sure of is that a tag for import is a nice-to-have feature. link rel=import is a good candidate that already covers what we're discussing here.
@rianby64 I don't want to make it look like a spammy ad. But I think what you need is https://github.com/Juicy/imported-template
In general I implemented an HTML Include custom element using HTML Imports, that:
- imports an external document,
- executes scripts and applies styles etc., as for any HTML Import,
- clones a <template> given in the external doc,
- stamps its content into the main/importing document,
- if you stamped scripts, executes them as expected.
It makes use of HTML Imports de-duping, dependencies, etc.
You can read more at https://starcounter.io/html-partialsincludes-webcomponents-way/
I like the idea of having such thing done natively, mainly to support:
- server-less use cases - when HTML is served locally, with no need for remote requests, but we would like to make our pages more DRY in the same manner we did it for years with JS and CSS,
- PWAs as mentioned by @AshleyScirra #2791 (comment)
- non-JS DRY pages,
- empathizing with "first-day dev" who would like to create DRY HTML page without learning any server-side language
Maybe the performance overhead of client-side rendering with HTTP/2 Push will not be that bad.
I understand the idea of waiting for broader adoption. Hopefully, now it will be easier to track with Custom Elements.
But I believe we already see adoption and need. Remember AJAX? (Back in the days when it was used to load HTML, not only JSON.) In every single company I worked for, we ended up having the case where we loaded and stamped a piece of HTML in JS. I think it would be great to be able to use a standardized approach and, for example, just stamp <include src="path/to.html"> instead of writing XHRs over and over again.
Anyway, I think the HTML Imports/Modules are letting us finally implement a structured solution for that problem. So, if we are going to put <include>
element discussion on hold, it would be nice to consider use cases mentioned in this issue in HTML Modules discussion.
@tomalec in your support list I β€οΈ # 4 "empathizing with "first-day dev" who would like to create DRY HTML page without learning any server-side language"
I cannot express enough how this is being overlooked, but it is the most important reasoning behind this feature IMHO. We tend to forget which one we learned first in the HTML, CSS, and JS trifecta. It usually gets learned in that order as well. Just think how many times we copy & pasta'ed the same header, footer, and nav in HTML in our early days. Or even worse, typed the same thing out and just corrected tons of broken links. Or maybe 'twas just my experience. As @pachonk mentioned before, truth be told, quite possibly the main reason PHP was invented is the lack of a partialized HTML implementation within the platform.
I teach web development @ NYU and "open notepad and type <!DOCTYPE html> <html></html>
" Is a real thing on the first day. Some are fearless enough to use an <iframe>
to keep HTML dry in order to avoid learning a programming language. I'm running out of responses for why that method is considered an anti-pattern. Developer ergonomics is real.
/cc @brandondees
<link rel="include" href="path/to/file.html">
Sorry for coming to the party late, but FWIW this is pretty old, but used fairly widely:
https://github.com/mnot/hinclude/
... and there's a Web Components version here:
https://github.com/gustafnk/h-include
@tomalec Can you please check the case where an HTML include includes a Worker?
How would you handle the worker's relative path?
And how would the importScripts keyword be handled?
I really wanted to encapsulate workers into something like HTML include, but even native HTML Imports from Google can't handle these cases. Let's say you want to wire up some workers via CORS, or some scripts via CORS, that are inside a <link rel="import" ...>
; then the browser won't fetch them... And, after reading this I realized that ReactJS is the Holy Grail.
Hm.. Interesting, I'll try to take a look, however, I'm a newbie to Workers.
Regarding
HTML imports from Google can't handle these cases
What cases do you mean? So far (since 2014) I have had no problems with relative paths and executing external scripts with HTML-Import-driven includes - Juicy/imported-template
```html
<!-- For the sake of the example, let's assume that the current URL
     points to http://mysite.com/a-directory/index.html -->
<x-whatever-include
  src="/an-absolute-complex/path-or/external-url/case-worker.js">
</x-whatever-include>
```

```javascript
// case-worker.js
var myworker = new Worker('myworker-in-current-path.js');
```
You may wonder why the browser looks for the worker at
http://mysite.com/a-directory/myworker-in-current-path.js
instead of
http://mysite.com/an-absolute-complex/path-or/external-url/myworker-in-current-path.js
?
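The mismatch above can be reproduced with the URL API alone. A minimal sketch, using the hypothetical URLs from the example (nothing here is a real endpoint):

```javascript
// Demonstrate how a relative worker path resolves, depending on the base URL.
const documentURL = 'http://mysite.com/a-directory/index.html';
const includeURL =
  'http://mysite.com/an-absolute-complex/path-or/external-url/case-worker.js';

// What the browser does: new Worker('myworker-in-current-path.js') resolves
// the relative path against the document's URL...
console.log(new URL('myworker-in-current-path.js', documentURL).href);
// → http://mysite.com/a-directory/myworker-in-current-path.js

// ...while the commenter expected it to resolve against the included script:
console.log(new URL('myworker-in-current-path.js', includeURL).href);
// → http://mysite.com/an-absolute-complex/path-or/external-url/myworker-in-current-path.js
```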
Try to play with workers and you will find very interesting cases. This also applies to importScripts.
The same applies if you put not a relative but an absolute path, as far as I remember...
The frustrating part is that WHATWG has no interest (I hope they will get interested...) to cover these cases.
And I'm sure that at some point in your development, you will face the case where you want to use Workers inside an HTML include. That happened to me, and I was forced to put the workers outside of my include tag. After that I found Tcl/Tk and... ok, nevermind
@domenic @Yay295 I agree that doing it server-side is better UX when the HTML is used to draw the initial page.
The use case where this feature would be desirable is when HTML is only used dynamically. For example a user clicks a button which shows a different screen. You probably don't want to include every possible <template>
you need as part of the initial page response for the same reasons why you might use import()
for JS.
Currently to do this you need to fetch the HTML from JS then parse it somehow (probably innerHTML), so you lose the ability to stream it in. I think @jakearchibald's idea of a streaming document fragment (or some other related idea) would be sufficient to build this include idea as a custom element.
I agree that doing it server-side is better UX when the HTML is used to draw the initial page
@matthewp What if the designer has no ability to do it server-side?
I simply cannot understand why anyone would think this ability, so desirable and prevalent in so much of computing, doesn't already exist in HTML, or why anyone would suggest it doesn't belong.
The designer has full control of the HTML so they do have the ability. You can't design a spec based on the idea that developers are unable to use some other existing spec, that leads to a lot of duplicate features.
Luckily this seems doable as a custom element if we can get the streaming document fragment (or some related idea, I'm not up to date on how it has evolved) through, and if that proves to be popular we can circle back to making it a standard.
People... don't forget to include the Workers and relative paths in these HTML imports. The Workers' and importScripts paths must be resolved against the source of the HTML import.
@matthewp I was referring to server-side-includes.
Of course the designer can use an external templating tool to stitch together composite HTML pages...but that's not what's being discussed here.
Happy you're open to making it a standard, and agree with the approach.
I'm new to Electron apps but AFAIK there's no tangible server-side and sharing snippets of HTML across pages (header, menu, etc.) is as desirable as it is in web apps that are completely driven by JS and could be served as static resources. Such apps have no rendering server-side, only API server-side. And there are uses of HTML (e.g. Electron, nw.js) which only have filesystem as server-side.
I'm the author of issue #3200 that has just been closed by @domenic and referenced here.
The primary objective of includes is the ability to cache your views client-side, reducing web traffic and server load. You cannot do that with a server-side scripting language (except by putting your HTML inside a JS file, which is basically a hack and, well, requires JS to be enabled).
The goal is not to include 3rd-party software, so the question of linking style and JS is irrelevant; You can do that in your 'main' document. The idea is just to split your 'main' document into document fragments (which are NOT documents, but merely just static content, a text file) such that you can reuse them on other pages.
Nobody expects PHP includes to be independent (although they can be with namespaces and such), they are simply pasted into the calling document. It is up to the programmer to make sure both the calling document and the included document are compatible.
Please read issue #3200 for more details on my thoughts about the problem. I don't even think a new element is needed, just extending the definition of an already existing attribute.
One issue with using the src
attribute is that people may already be using it to store data. You're not supposed to do that, but that doesn't mean people aren't.
+1
-1
Why not reinstate <iframe seamless>
?
@maherbo said:
It is up to the programmer to make sure both the calling document and the included document are compatible
π―
@domenic & @Yay295 both wonder why someone would want to do this client-side -vs- server-side.
This really blows my mind.
The answer is that HTML applications should live and be developed on-the-web, rather than on-the-server(s).
There's a huge movement underway where developers are moving applications toward the browser, away from the server and, in fact, the backend toward server-free architectures.
Of course, there are always servers underneath, and yes, I recognize that what you're describing can be accomplished identically in a server-less architecture.
The idea that this should not be implemented because the user-experience is better when it's rendered server-side assumes that the user in question is the end-user of the application, not the web-developer creating it...
Let's set our goals away from server-think and towards web applications that run in your browser and access resources from the web.
I continue to reject the idea that "the web" includes only clients, and not servers.
I suggested no such thing.
If a developer wishes to render on the server, that's their choice.
If that developer wishes to grab content from-the-filesystem or from-the-web is irrelevant.
I see zero reason to prevent the client from conveniently assembling pages as desired -- the same as the server does.
And, let's face it, client/server is rather quickly simplifying to peer-to-peer.
Do you believe the WebRTC spec should have required server-side participation?
@tmornini Oh, but that's what iframe[seamless]
was, except it did it in a backwards-compatible way. Browsers that didn't understand the attribute would still download and display the HTML fragment, and it reused the useful attributes that <iframe>
had developed over the years, like crossorigin
and friends.
Do you believe the WebRTC spec should have required server-side participation?
It does...
Obviously some people write apps on the back-end and some people write apps on the front-end. Nobody is right or wrong or better or worse, they're just different approaches. It's good to bear both in mind when designing features.
Server-side includes are already wide-spread. I think the most interesting use-case for client-side includes is offline-first PWAs, which need to work even when the server is not reachable.
The user experience is much better if such inclusion is done server-side ahead of time, instead of at runtime. Otherwise, you can emulate it with JavaScript, if you value developer convenience more than user experience.
I continue to reject the idea that "the web" includes only clients, and not servers.
@domenic :
I still don't understand how you fail to see how important that is for the server AND UX, both at the same time.
There are 2 ways I know of to serve document fragments right now:
- include a JS file;
- include an IFrame.
With both methods, if the client already requested the files before, with proper web caching, the server load can be greatly reduced, document fragments can be served by local or proxy cache, thus a better UX and less cost for the server.
The problem with both methods is the loading part when building the document.
With JS files, you have to:
1. load the `<script>` source file;
2. compile the script;
3. run the script;
4. modify the HTML document.

With IFrames, you have to:
1. load the `<iframe>` source file;
2. load (i.e. build) the IFrame document;
3. copy the IFrame document content (with JS);
4. modify the main document by replacing the IFrame with the copied IFrame content (with JS).

With both methods, steps 2 & 3 are useless and take time, thus a worse UX. Testing with the IFrame method, I have often seen a noticeably shorter loading time when serving a fresh, full version of the document (no templating or caching) than when sending 304 (Not Modified) responses for all document fragments (including the main document). That's not normal.
So I'm winning server-side (sending no content), but the UX suffers. Having to use JS makes this problematic, as it can also become a nightmare on the dev side to apply a Progressive Enhancement ideology with such methods.
From the browser's point of view, a document fragment should work like a constant in computer programming (emphasis mine):
Many high-level programming languages, and many assemblers, offer a macro facility where the programmer can define, generally at the beginning of a source file or in a separate definition file, names for different values. A preprocessor then replaces these names with the appropriate values before compiling, resulting in something functionally identical to using literals, with the speed advantages of immediate mode. Because it can be difficult to maintain code where all values are written literally, if a value is used in any repetitive or non-obvious way, it is often done as a macro.
By doing so, a browser would identify the document fragments in a document BEFORE doing anything else, replace them by their content and then parse the document to build it. This would give the best UX and make server admins and developers happy.
A seamless iframe would have behaved a bit differently from a normal iframe. Steps 3 and 4 wouldn't have been necessary. Also, how do you plan on showing HTML without step 2 (load the document)?
@AshleyScirra said:
Obviously some people write apps on the back-end and some people write apps on the front-end
Bingo!
It does...
I don't believe that's true.
Discovery may practically require a server to initiate the connection and, because most browsers are inaccessible from the open internet thanks to NAT, many streams may be proxied through servers; but IIRC WebRTC is technically peer-to-peer.
In any case, we've wandered off topic, and @AshleyScirra's argument is most concise and on-point at the moment.
Do you believe the WebRTC spec should have required server-side participation? @tmornini
It does... @annevk
WebRTC lets you choose whether you need a server. I can just double-click an HTML page in Chrome, do the same on another connected device, fill in an ICE candidate, and have communication. I may have to write down the candidate, but I believe a server is not required; it's merely an implementation detail, like the blockchain. Technically, a person can use their body as a signaling server. Although doable, that's definitely not a recommendation, for security reasons.
A STUN server is used to get an external network address.
TURN servers are used to relay traffic if direct (peer to peer) connection fails.
STUN (which is a concept in and of itself) requires a server to work around NAT traversal, and that's only due to IPv4 NAT restrictions. IPv6 removes the need for STUN, and therefore for a server. TURN is de facto distributed, but not a requirement for peer-to-peer. STUN is just a (temporary) rope bridge, IMHO. Of course this is in theory and not practical, as there are a multitude of pings and handshakes. As long as there is a way to distribute/consume the multitude of post-handshake SDP offers and UDP/TCP packets.
Why is this relevant? From that context, I feel that just because most apps use a server doesn't mean one should be assumed as a requirement, which is totally relevant to our discussion here. If I'm wrong, I'll take a "That's false" without description, so as not to derail this important conversation about including HTML fragments.
I LOVE seeing all the different opinions from the community. Helps me realize there are many ways people are attempting to use the web other than my own. Very humbling. Looking forward to us all coming to a sound consensus.
Just in case... constructing a complete, fully covered HTML-fragment web component is still not possible with the present set of primitives.
Check @TakayoshiKochi comment:
I personally do not buy this much (sorry!), as we have enough primitives (fetch, DOM APIs, maybe Custom Elements) to realize a equivalent feature very easily.
The lack of beforescriptexecute, and the fact that importScripts and Worker don't offer you a way to resolve the URL when a relative path is given, force this web component to be incomplete.
Thanks @domenic for pointing to this. This was my post:
Currently, even the simplest templating, such as headers and footers across multiple pages of a static website, requires server-side code or build tools, both of which are complicated for non-developers and take time for everyone. JS solutions exist, but this reduces indexability and makes essential parts of the website content very fragile.
This proposal is for an HTML element that accepts an src
attribute, asynchronously fetches the content (following usual CORS rules) and embeds it in the page's DOM tree, executing any <script>
elements. Since <template>
is taken, one potential name (for the sake of discussion) could be <include>
. Alternatively, <link rel="html">
, so that it can be used in the <head>
too without the issues a new element would introduce.
Not another <iframe seamless>
!
This element transcludes the content of the linked HTML document. This means better performance (no multiple documents and window contexts), and simpler CSS. This essentially mimics how server-side includes work.
Why not use a custom element or JS?
See comment re:JS solutions above. Several indexers and server-side renderers do not process JavaScript. Also, JS is inherently fragile, and this is needed for essential parts of a website's content.
Furthermore, a declarative solution has some accessibility benefits, since browsers could expose UI to skip certain templates (e.g. a website header with no "skip to content" link).
Questions that need to be answered
Several things need to be ironed out for this to happen, but I firstly wanted to gauge interest and see if there are any major implementation obstacles I'm missing (since this seems like very low-hanging fruit, so there must be a reason it doesn't exist).
Does the <include>
element stay in the DOM?
Regarding how the transclusion happens, I see three possibilities, each with its own pros and cons:
- The `<include>` element remains in the DOM and the fetched HTML is added as a child.
- The `<include>` element remains in the DOM and the fetched HTML is added after it (akin to `document.write()` in `<script>`).
- The `<include>` element is replaced by the fetched HTML.
Does DOMContentLoaded
wait for the content to load? What about the load
event?
It seems reasonable that the answer is no and yes respectively, but probably another thing to discuss.
I was under the impression seamless iframes allowed CSS of the parent page to inherit inside, and no longer considered it a separate browsing context.
I'm still interested in extending iframe since it provides a built-in fallback for other browsers, but maybe that could also happen like so:
```html
<include src="foo.html">
  <iframe src="foo-iframe.html"></iframe>
</include>
```
Reply to @domenic:
I don't think we should do this. The user experience is much better if such inclusion is done server-side ahead of time, instead of at runtime. Otherwise, you can emulate it with JavaScript, if you value developer convenience more than user experience.
I'd encourage anyone interested in this to create a custom element that implements this functionality and try to get broad adoption. If it gets broad adoption we can consider building it into the platform, as we have done with other things like jQuery -> querySelectorAll.
- Not everyone can run server-side code or build tools to do this. Not everyone authoring HTML is a programmer or can afford to pay for a server that runs server-side code. Static HTML hosts are often free.
- People typically don't want to depend on JavaScript for content, so if we start from a JS solution, it cannot take off, just like it hasn't all these years even though people need the functionality.
- The user experience is better with server-side includes because you are unfairly comparing them to a JS solution. If this was native in the platform browsers could use all the same tricks for loading resources fast that they use for CSS because they know what you're doing. A JS solution cannot take advantage of the lookahead pre-parser and fetch resources as early as possible. Not to mention the time spent downloading and parsing the JS. Imagine if the same argument was used against CSS in 1996 because you can pre-render presentational HTML on the server so why depend on an external CSS file?
It still seems like this should be prototyped as a web component (or similar) first. In terms of the extensible web, the ability to stream HTML text into the DOM seems like a more important primitive, and could be used to build the web component.
A JS solution cannot take advantage of the lookahead pre-parser and fetch resources as early as possible.
We have <link rel="preload">
to solve this.
I agree that a non-JS solution could perform better than a JS one, but without a prototype it isn't clear how this thing should behave, or how it would perform. JS is suitable for the prototype.
@jakearchibald said:
but without a prototype it isn't clear how this thing should behave, or how it would perform.
The behavior seems clear to me: Every programming language uses some kind of Β«includeΒ» or Β«importΒ» feature.
You write something like include filename;
and before doing any kind of parsing, the line include filename;
is replaced by the content of the file filename.
It is a simple copy & paste function to create the desired HTML file.
That is exactly what needs to be done here.
That behavior is already available; like in such languages, you use a preprocessor or compiler, and before trying to "execute" your output code, the file you feed to the computer has the simple copy & paste to create the desired HTML file done.
@domenic said:
That behavior is already available; like in such languages, you use a preprocessor or
compiler, and before trying to "execute" your output code, the file you feed to the
computer has the simple copy & paste to create the desired HTML file done.
I'm assuming you are talking about using hacks with client-side script like JS. Because doing it server-side is useless as already-sent HTML content has to be sent again with each request, which is exactly what we want to avoid.
But, even with JS, Ajax, <template>
and document fragments, it cannot be done efficiently. Especially for included document fragments within included document fragments. There are too many steps that are repeatedly done over & over, where only one execution at the end would be necessary.
I think people want a lot from browsers. They aren't rubber bands that can stretch infinitely. Try to keep your apps as small as possible. After all, overdoing it is bad practice... Or try another way to build your apps, not for browsers.
Why is it acceptable to link external CSS, JS, images, videos, etc., but not text or markup itself? If HTML should convey some semantics, then adding the header and footer via JavaScript breaks it. If it should not, then why aren't <button>
and many more HTML tags deprecated? Surely they can be implemented through JavaScript.
@rianby64 said:
I think people want a lot from browsers. They aren't rubber bands that can stretch infinitely.
Try to keep your apps as small as possible. After all, overdoing it is bad practice...
Or try another way to build your apps, not for browsers.
That is the thing: browsers are NOT meant for apps, they are made for interpreting HTML. Using scripting to dynamically change the HTML of a particular page is bad practice in my view. It's a nice feature, but it shouldn't be abused.
What is asked here is really a way to simply - and efficiently - incorporate pure static HTML content. Using browsers for what they were meant for.
Bad practice is sending content you already sent. Since HTML is designed for the web, it should - and can easily - incorporate a mechanism to avoid that.
I'm following this issue with great interest. I'd like to make a few suggestions to help it stay on track.
Arguments about what "browsers" are or aren't and what they should or shouldn't do are probably not very helpful. Browsers have changed a lot over time, and they can continue to change. However, they should change to accommodate use cases.
If someone has a use case that others don't share, that's fine. Not every feature is for everyone. Trust that people have good reasons for the things they want, or ask them to explain the reasons and listen to them. Resist the temptation to label use cases as "good" or "bad".
Alongside any formal proposal, and especially in the absence of one, prototypes help illuminate and test the details of the functionality. If you don't want to build a prototype then you don't have to. If you'd rather write proposed specification language, you can do that. A debate over whether a prototype is necessary is probably not helpful. Discussing specific behavior and whether it addresses the use cases is a better use of energy. Having a prototype will definitely help do that more productively, but it's not a precondition for productive conversation.
Thanks for opening this discussion. This sort of publishing feature has existed in some form in many imaginations even long before HTML existed. I'm looking forward to watching this conversation unfold.