imjasonh/gcping

https support (mixed content error) ?

vans163 opened this issue · 9 comments

If HTTPS were supported, it would allow querying from a browser page that is served over HTTPS; currently the endpoints only support HTTP, which results in a mixed content error.

Free wildcard certificates could be issued via Let's Encrypt and automatically renewed every 3 months.

+1, I'd expect gcping.com/ to be HTTPS.
Where are you hosting it? Firebase Hosting, Cloud Run, or App Engine all provide free and automatic SSL certs.

Definitely agree.

In my limited tests 2+ years ago, HTTPS added enough noticeable latency, and enough toil to support (at the time), that it wasn't worth it for a ~20%-time weekend project.

Those options don't have a presence in every GCP region, so to take advantage of them I'd need some SSL-terminating proxy, which again adds latency. Self-signed certs aren't trusted by browsers by default.

More than happy to discuss alternatives, and as always, PRs welcome 🙏

I could continue to just use GCE, with HTTPS load balancing using Google-managed SSL certs, and set up domains like us-east1.gcping.com, europe-west1.gcping.com, etc.

I'm not sure whether load balancing adds any noticeable latency; the global LB seems to add 5-10 ms, sometimes enough to make a region slower than a direct connection to the nearest region and bump it down into second or third place.

What latency increase did you measure using SSL?

@vans163 I don't remember, it was a couple years ago.

At this point it's probably better to have HTTPS even if it does incur a noticeable latency overhead: gcping would still report relative latencies between regions, and since users should be using HTTPS anyway, it's worth including TLS in the reported latencies.

The question now is how best (and most easily) to support HTTPS without incurring unnecessary extra latency, e.g., from added proxying.

Mmm, I think I see the problem: the latency probably comes from the handshake?

Idea 1: two requests. The first is a dummy to establish the handshake; the second measures latency. Browsers keep connections to endpoints alive, so make sure keep-alive is set on the server.
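A rough sketch of idea 1 (not the project's actual code; it assumes each region exposes an HTTPS endpoint the browser can fetch cross-origin, e.g. with mode: "no-cors"):

// Warm up the connection with a throwaway request, then time a second one.
const measure = async (url) => {
    // First request establishes the TCP + TLS connection (handshake cost).
    await fetch(url, { mode: "no-cors", cache: "no-store" });
    // Second request reuses the kept-alive connection, so it measures the
    // round trip without the handshake.
    const start = performance.now();
    await fetch(url, { mode: "no-cors", cache: "no-store" });
    return Math.round(performance.now() - start);
};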

Idea 2: use WebSockets and ping/pong.
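A minimal sketch of idea 2, assuming each region exposed a wss:// endpoint that echoes messages back (no such endpoint exists today):

// The WebSocket handshake happens once up front; each subsequent ping/pong
// round trip then measures only network latency.
const wsPing = (url) =>
    new Promise((resolve, reject) => {
        const ws = new WebSocket(url);
        ws.onerror = (err) => reject(err);
        ws.onopen = () => {
            const start = performance.now();
            ws.onmessage = () => {
                resolve(Math.round(performance.now() - start));
                ws.close();
            };
            ws.send("ping");
        };
    });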

The temporary solution I'm using (it spits out console warnings and marks the site as insecure on the padlock, but it works): load an image, which browsers still allow over HTTP even when the origin page is HTTPS.

export const promise_timeout = (ms) =>
    new Promise(resolve => setTimeout(resolve, ms));

// Resolves once the request completes. The endpoints don't return a real
// image, so onerror fires when the HTTP response has been received.
export const promise_img = (img, url) =>
    new Promise(resolve => {
        img.onload = () => resolve("ok");
        img.onerror = () => resolve("ok");
        img.src = url;
    });

// `zones` is assumed to be an array of { name, url } region endpoints.
export const ping = async () => {
    const results = {};
    for (const zone of zones) {
        const img = new Image();
        const start = performance.now();
        const res = await Promise.race([
            promise_img(img, zone.url),
            promise_timeout(250)              // give up after 250 ms
        ]);
        if (res === "ok") {
            results[zone.name] = Math.round(performance.now() - start);
        } else {
            results[zone.name] = "failed";
        }
        img.src = "";                         // cancel any in-flight request
    }
    return results;
};

Let me know if you need help setting up HTTPS - you should be able to issue an auto-renewing HTTPS cert and attach it [at least] to the Global HTTPS LB [for free] per https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#gcloud
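For reference, a rough sketch of that with gcloud (the cert and proxy names here are hypothetical; the exact flags are in the doc linked above):

# Create a Google-managed certificate covering the per-region hostnames.
gcloud compute ssl-certificates create gcping-cert \
    --domains=us-east1.gcping.com,europe-west1.gcping.com --global

# Attach it to the target HTTPS proxy behind the global load balancer.
gcloud compute target-https-proxies update gcping-https-proxy \
    --ssl-certificates=gcping-cert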

I would expect that although the initial handshake is slightly slower (TLS setup), subsequent responses would be faster due to HTTP/2 & QUIC, as well as session resumption.

One approach might be to discard the first response to each /ping endpoint from a client, treating it as cold, before displaying the live data.

If you're interested in hacking on it, that'd be swell, and any PRs would be welcome. Pricing isn't a terrible concern (I bill gcping usage back to Google anyway 🤑 ) but maintainability definitely is. This is a 20% project, which of course means I spend ~1% of my time on it, if that 😅 .

I also don't want to incur too much overhead latency, but even then the goal of gcping is to display relative latency between regions, so as long as the overhead is roughly even across locations it should be okay.

Fixed by #29