Unleash/unleash-client-node

Bootstrap option with startUnleash method

aleczratiu opened this issue · 6 comments

Describe the bug

Currently, I'm facing a problem calling startUnleash with a proxy URL. The URL in the configuration points to a proxy, and the proxy itself goes out over the network to the Unleash API and fetches the resources.

The issue I'm facing is that the proxy URL is only open for the path http://domain/proxy, but the getUrl method in this package appends /client/features at the end (const url = resolve(base, './client/features');), so the call ends in a 404.
The outgoing request looks like http://unleash:3000/proxy/client/features, a route that does not exist on the proxy, so it returns 404.

I can use the initialize method instead of startUnleash and use bootstrap to override the URL, but initialize doesn't return a promise and doesn't wait for the connection to be established, so all the experiments come back as false when I call unleash.isEnabled('unleashExperimentKey').

If I use the Unleash API URL directly instead of the proxy, everything works fine with startUnleash. But if you want to use a proxy (to avoid rate limits), you will run into the same issue as me.

Has anyone else faced this kind of problem? Are there any other solutions for it?

The configuration:

export const unleash = async () => {
    return startUnleash({
        url: 'http://unleash:3000/proxy',
        appName: 'app-test',
        environment: 'development',
        customHeaders: {
            Authorization: 'appTest',
            appName: 'app-test',
            environment: 'development',
        },
        bootstrap: {
            url: 'http://unleash:3000/proxy',
        },
    });
};

The proxy I'm using is:

https://hub.docker.com/r/unleashorg/unleash-proxy

The proxy is already used by the React application; now I wanted to add Unleash to the Node.js server to check some features there as well.

Steps to reproduce the bug

No response

Expected behavior

No response

Logs, error output, etc.

No response

Screenshots

No response

Additional context

No response

Unleash version

No response

Subscription type

No response

Hosting type

No response

SDK information (language and version)

No response

Hey, @aleczratiu! 👋🏼

Thanks for opening the issue. If I understand you correctly, you're trying to connect the Node.js SDK to the Unleash proxy and fetch toggle states from there, correct? Unfortunately, that is something we don't support today. The Node.js SDK needs to connect to Unleash and not to the proxy to get toggle configurations.

> I can use the initialize method instead of startUnleash and use bootstrap to override the URL, but initialize doesn't return a promise and doesn't wait for the connection to be established, so all the experiments come back as false when I call unleash.isEnabled('unleashExperimentKey').

The proxy and Unleash have different formats for their responses, which is why the Node.js SDK can't make sense of the response it gets from the proxy.

> If I use the Unleash API URL directly instead of the proxy, everything works fine with startUnleash. But if you want to use a proxy (to avoid rate limits), you will run into the same issue as me.

Out of curiosity: what rate limiting are you talking about here? The SDKs only poll for features at intervals, so unless you're creating new instances on every request or running them in lambdas, you shouldn't have any issues.

Just a little follow-up here: I spoke a bit too fast and forgot about the /proxy/client/features endpoint on the proxy. If you use the EXP_SERVER_SIDE_SDK_CONFIG_TOKENS configuration option on the proxy to set an API key, you should be able to connect to the proxy's /proxy/client/features endpoint to fetch toggles for server-side SDKs (such as the Node.js SDK). In your case, however, where only the base /proxy endpoint is exposed, you're probably better off just using Unleash directly, as explained above 💁🏼
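For reference, enabling that option on the Docker image might look roughly like this (a sketch only; EXP_SERVER_SIDE_SDK_CONFIG_TOKENS is the option named above, while the other variable names and values are assumptions about the proxy image's configuration, so check the image's docs before relying on them):

```shell
# Run the Unleash proxy with a token that server-side SDKs can use
# against /proxy/client/features. Placeholder values throughout.
docker run -p 3000:3000 \
  -e UNLEASH_URL=https://your-unleash-instance/api \
  -e UNLEASH_API_TOKEN=<client-api-token> \
  -e UNLEASH_PROXY_CLIENT_KEYS=<frontend-client-key> \
  -e EXP_SERVER_SIDE_SDK_CONFIG_TOKENS=<server-side-sdk-token> \
  unleashorg/unleash-proxy
```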

Hi, @thomasheartman thanks for your response.
I already noticed the Unleash proxy returns toggles instead of features.
It's inconsistent: if I look at the response from calling the Unleash API directly, it returns a features property, and the unleash-client-node implementation expects the response to contain features, not toggles. It's odd that the Unleash Proxy decided to return toggles instead of features. Anyhow, you're right about connecting unleash-client-node to the proxy; I noticed it wasn't compatible after I created this issue.

I haven't found anywhere that mentions how many requests are allowed per second or minute. From the architecture side, I was trying to make sure we don't hit any request limits in production, if they exist. Many APIs limit the number of requests per minute or second, and that was the only reason I was looking at going through the Unleash Proxy.

Let me give an example: unleash-client-node would serve some requests on my backend. Say a client requests an operation from the server, and that operation should return a result based on a feature (passing the context). If the feature returned from Unleash is enabled, the user can proceed with the operation; otherwise not. This pattern applies in many other cases, and with a large number of users requesting these flags, I was looking to avoid any rate limits.

Got it! Let me try to address your items one by one 😄

> I already noticed the Unleash proxy returns toggles instead of features.
> It's inconsistent: if I look at the response from calling the Unleash API directly, it returns a features property, and the unleash-client-node implementation expects the response to contain features, not toggles.

Yes, this is correct. But that's not the only difference: the proxy returns the toggles fully evaluated, only showing you toggles that are enabled for the given context, whereas the Unleash client API returns the full toggle information including all the strategies, constraints, etc.

The reason for this is that the proxy and the client API serve two different use cases:

  1. The proxy is intended to be used with front-end clients, such as browser and mobile app clients. For security/privacy reasons, these SDKs do not receive the full set of features that you have. Instead, the proxy (which runs in an environment you control) gets the full configuration from the Unleash server and then evaluates the features based on the context it receives in a request. The SDKs then receive a list of enabled toggles. Importantly, they get no information on how to evaluate toggles, because they're not built to do that.
  2. The client API returns a toggle's full configuration for the specified environment. This includes all its variants, all its strategies and constraints, and anything else that the SDK needs to evaluate a feature toggle. Importantly, the client API does not have any notion of whether a toggle is enabled or not.

In other words: the proxy's main use case is to take a context provided by a request and evaluate what features are enabled or not based on the context. Behind the scenes, the proxy uses the Node SDK to do this and connects to the Unleash server's client API.

I don't know why the proxy uses toggles and the client API uses features, but it might be because the proxy returns fully evaluated "toggles", whereas the client API returns "feature data" with no evaluation.
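To sketch the difference, the two payloads might look roughly like this (illustrative shapes only; the field details are assumptions for the example, not the exact API schemas):

```javascript
// What the proxy returns: pre-evaluated toggles, only the enabled ones
// for the given context, with no strategy information.
const proxyResponse = {
  toggles: [
    { name: 'unleashExperimentKey', enabled: true },
  ],
};

// What the client API returns: raw feature configuration (strategies,
// constraints, variants), with no notion of enabled/disabled per context.
const clientApiResponse = {
  features: [
    {
      name: 'unleashExperimentKey',
      strategies: [{ name: 'default', constraints: [] }],
    },
  ],
};

console.log('toggles' in proxyResponse);       // true
console.log('features' in clientApiResponse);  // true
```

This is why a server-side SDK pointed at the proxy can't make sense of the response: it looks for a features array to evaluate locally and finds none.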

> I haven't found anywhere that mentions how many requests are allowed per second or minute. From the architecture side, I was trying to make sure we don't hit any request limits in production, if they exist.

> Let me give an example: unleash-client-node would serve some requests on my backend. If the feature returned from Unleash is enabled, the user can proceed with the operation; otherwise not. With a large number of users requesting these flags, I was looking to avoid any rate limits.

I see your concern, but I think you may have misunderstood how the Unleash system works: Server-side SDKs, such as the Node SDK, do not reach out to Unleash when they're evaluating features. All our SDKs use a polling model and only reach out to the client API at specified intervals. When they connect to the API, they receive all the information they need to fully evaluate features. Features are then evaluated in-memory by the SDKs. The Unleash client API never does any feature evaluation.

Because of this, if your application uses a server-side SDK to evaluate features based on provided context, you can take as many requests as you want without your number of requests to the Unleash server increasing. Well, with some caveats: when an SDK starts up, it'll try to connect to an API to fetch configurations. So if you scale horizontally, you'll have more SDKs reach out to the API. And if you start a new SDK per incoming request (which you really shouldn't), that'll quickly become too much.

So as long as you avoid spinning up new SDKs per incoming request, you should be just fine: request count shouldn't be an issue.
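A minimal sketch of that "one SDK instance per process" pattern (`getClient` is a hypothetical helper, and `createClient` stands in for the SDK's initialize call; neither is part of the SDK's API):

```javascript
// Keep a single SDK instance for the whole process instead of creating
// a new one per incoming request.
let client = null;

function getClient(createClient) {
  if (client === null) {
    client = createClient(); // runs once; later calls reuse the instance
  }
  return client;
}

// Every request handler then shares the same instance:
// const unleash = getClient(() => initialize({ url, appName }));
// unleash.isEnabled('unleashExperimentKey', context);
```

Since evaluation happens in memory, each `isEnabled` call costs nothing on the network regardless of request volume.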

If you're using an Unleash-hosted proxy for front-end clients, on the other hand, that does have some request limits as explained on the Unleash plans page.

Does that address your concerns? Feel free to let me know if you've got any follow-up questions or anything ☺️

Everything is clear now, thank you for your time and your explanation! 👍☺️

Great! And no worries; that's why I'm here 😄 But in that case, I'll go ahead and close this issue for now. Feel free to reopen if you feel it hasn't been addressed.