"serverless-offline" v6 alpha feedback
dnalborczyk opened this issue · 139 comments
Thank you for giving serverless-offline v6 alpha a try!
In this issue thread you can provide feedback regarding the ongoing work for v6.
The current v6 releases are marked as alpha versions to signal that things might break, due to major code refactoring, the reasons listed below (breaking changes), and the current lack of tests (which will hopefully improve over time). If you encounter any bugs, please open a new issue stating the alpha version you are using.
Installation
npm i serverless-offline@next --save-dev
Latest release
New features
- (initial) OpenAPI/swagger support [`alpha.10`]
- Worker thread support (requires Node.js v11.7.0+) [`alpha.13`]
- Lambda Function Pool to better simulate Lambda behavior and keep Lambdas alive [`alpha.16`]
- `Python` and `Ruby` users should be able to use `websocket` events now [`alpha.22`]
- add schedule event source [`alpha.47`]
- (initial) `docker` support [`alpha.54`] (thanks to @frozenbonito)
- display memory usage
Planned new features
- Authorizer support for WebSocket (thanks to @computerpunc) PR: #732, work has been moved to a branch: https://github.com/dherault/serverless-offline/tree/websocket-fixes-authorizer
Maybe new features
- support for custom runtimes, `runtime: provided`, see #570, #648
- look into supporting custom layers, see #630
- Add `Java 8` support
- Add `Dotnet core` support
Improvements
- Refactoring `Python` handler execution #742 [`alpha.15`]
- Refactoring `Ruby` handler execution [`alpha.17`]
- Added AWS Lambda environment variables [`alpha.15`]
- `Billed Duration` was added to the console log [`alpha.23`]
- use high-resolution timer for `duration` and `billed duration` [`alpha.27`]
- ES6 modules [`alpha.28`]
- rollup build (chunked) [`alpha.28`] (subsequently removed in [`alpha.42`])
Breaking changes (see below for a migration path)
- default hot-reloading of handler module was removed [`alpha.32`]
- remove `--binPath` option [`alpha.6`]
- remove `--exec` option [`alpha.9`]
- remove `--location` option [`alpha.4`]
- remove `--noEnvironment` option [`alpha.9`]
- remove `--showDuration` option, `lambda execution time` will be displayed by default [`alpha.1`]
- remove `--providedRuntime` option [`alpha.8`]
- remove `--region` option [`alpha.14`]
- remove `--stage` option [`alpha.14`]
- remove `--prefix` option [`alpha.29`]
- remove `--preserveTrailingSlash` option [`alpha.29`]
- `useSeparateProcesses` was renamed to `useChildProcesses` [`alpha.32`]
- remove `--skipCacheInvalidation` [`alpha.32`]
- remove `--cacheInvalidationRegex` [`alpha.32`]
- remove `event.isOffline=true` [`alpha.9`]
- remove `process.env.IS_OFFLINE=true` [`alpha.25`]
- remove functionality shared with https://github.com/svdgraaf/serverless-plugin-stage-variables [`alpha.14`]
Temporary Breaking changes
- `timeout` feature has been removed temporarily to ease refactoring (will be reimplemented soon) [`alpha.0`]
Bugfixes
- Lambda integration: Response parameters exception #756, #757 (thanks to @jormaechea) [`alpha.0`]
- Lambda integration: Response parameters exception for headers with null and undefined values #758, #759 (thanks to @jormaechea) [`alpha.0`]
- Function-specific runtimes #750 [`alpha.1`]
- Do not remove unhandledRejection listeners as it breaks Serverless #767 [`alpha.1`]
- context.callbackWaitsForEmptyEventLoop property missing #770 [`alpha.2`]
- LambdaContext does not consider function-level memorySize and default memory size [`alpha.5`]
- ipcProcess did not contain `memoryLimit` when it was not specified (missing default) [`alpha.5`]
- plugin should consider path to `serverless` executable: "Proxy Handler could not detect JSON" #774 [`alpha.6`]
- Return 502 on Error #355, #678 (thanks to @computerpunc) [`alpha.9`]
- fixed `env` merging and handler passing (process.env, provider.env, function.env) [`alpha.9`]
- catch-all route causes exception (regression) #782 [`alpha.11`]
- JSON response detection from stdout in local invocations #785, #781 (thanks to @cmuto09) [`alpha.13`]
- pass correct env to worker threads [`alpha.15`]
- added missing apiKeyId to LambdaIntegrationEvent context #797 [`alpha.27`]
- fix `go-lang` support (with `docker` support) [`alpha.54`]
- fix HEAD requests
- fix CORS
Performance
- ipcProcess now loads on first request, as does the module containing the handler [`alpha.4`]
- only load and instantiate Api Gateway and Api Gateway WebSocket if used [`alpha.8`]
- lazy load process runners [`alpha.12`]
- lazy load in-process handlers [`alpha.12`]
Other
- remove experimental warning for `WebSocket` support [`alpha.9`]
- add apollo-server-lambda (graphql) scenario test
- fix failing Python 2 tests [`alpha.6`]
- fix failing Ruby tests [`alpha.6`]
- add tests for Python 3 support
- add tests for `Go` support
- add tests for `Java 8` support
- add tests for `Dotnet Core` support
- fix failing (skipped) `python` tests on `windows` (not a blocker)
Migration Paths
`--binPath` option has been removed
We're no longer invoking `serverless invoke local`; instead, we moved the (lightly modified) code into the plugin and call it ourselves. This is also much faster: ~25 times (on a Mac, with Python).
`--exec` option has been removed
Just call your script as part of a shell script, npm script, JavaScript, etc.
why was this option removed?
- feature creep
- maintenance burden
- no script hook for server shut down (only startup)
- possible issues with supporting multiple OS (linux, macos, windows)
`--location` option has been removed
If your handler file has different locations depending on the environment, use custom variables instead!
e.g.

```yml
custom:
  location:
    development: some-other-path/src
    production: src

functions:
  hello:
    handler: ${self:custom.location.${opt:stage}}.hello
    # or alternatively, whichever you prefer
    # handler: ${self:custom.location.${env:NODE_ENV}}.hello
```

why was this option removed?
- feature creep
- the functionality can easily be achieved by using custom variables
- parity with `serverless`: `serverless` doesn't know about `--location`, and therefore `local invokes` (and possibly other things) would be broken
- `serverless` might add this (or similar) functionality to the main framework, which would complicate things even more
`--noEnvironment` option has been removed
There shouldn't be a need for such functionality, as everything should be manageable through custom variables. If you used this feature and need help migrating, please file an issue.

why was this option removed?
- feature creep
- parity with `serverless`, as the option would also cause problems for `serverless invoke local`
`--showDuration` option has been removed
Nothing needs to be done; it just works.

why was this option removed?
- because it would be redundant. :)
`--providedRuntime` option has been removed
Instead, use custom variables!

At the provider level, e.g.:

```yml
custom:
  runtime:
    development: nodejs10.x # use supported runtime here
    production: provided

provider:
  runtime: ${self:custom.runtime.${opt:stage}}
  # or alternatively, whichever you prefer
  # runtime: ${self:custom.runtime.${env:NODE_ENV}}
```

or at the function level, e.g.:

```yml
custom:
  runtime:
    development: nodejs10.x # use supported runtime here
    production: provided

functions:
  hello:
    handler: handler.hello
    runtime: ${self:custom.runtime.${opt:stage}}
    # or alternatively, whichever you prefer
    # runtime: ${self:custom.runtime.${env:NODE_ENV}}
```

why was this option removed?
- `--providedRuntime` support falls apart when someone wants to use multiple different provided runtimes at the function level; this is not possible with (one) `--providedRuntime` option. see #750
- the functionality can easily be achieved by using custom variables
- allows an implementation for `runtime: provided`, see #570, #648
`--region` option has been removed
coming soon

`--stage` option has been removed
coming soon

`--prefix` option has been removed
coming soon

`--preserveTrailingSlash` option has been removed
coming soon
`event.isOffline=true` has been removed
- just check `process.env.YOUR_OWN_VARIABLE` instead (see below)

`process.env.IS_OFFLINE=true` has been removed
- you can just provide your own environment variable, e.g. in `serverless.yml`:

```yml
# in the provider definition:
provider:
  environment:
    IS_OFFLINE: true

# or in the function definition
functions:
  hello:
    handler: handler.hello
    environment:
      IS_OFFLINE: true
```
why was this option removed?
- for parity with the AWS event object
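A handler migrated this way might look like the following sketch (a minimal illustration, not code from the plugin; the `IS_OFFLINE` variable name matches the `serverless.yml` example above, and the handler name is hypothetical):

```javascript
// Hypothetical handler showing the migration: instead of reading
// event.isOffline, read an environment variable you defined yourself
// (in serverless.yml or via the CLI).
const hello = async () => {
  const isOffline = process.env.IS_OFFLINE === 'true'

  return {
    statusCode: 200,
    body: JSON.stringify({ offline: isOffline }),
  }
}

module.exports = { hello }
```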
remove functionality shared with https://github.com/svdgraaf/serverless-plugin-stage-variables
coming soon
This list will be updated with each alpha release.
@dnalborczyk
What would be the recommended way of pushing updates to v6 alpha?
I'm thinking of:
@computerpunc awesome!
> What would be the recommended way of pushing updates to v6 alpha?

just do a PR against `master`. I didn't branch off, as it originally wasn't planned to be what it became.
there's a `5.x` branch for v5 releases, but I'd rather use that only if absolutely necessary, as the code base has diverged too much.
> I'm thinking of:

sure, go for it! it would be great if you could also add some tests, similar to: https://github.com/dherault/serverless-offline/tree/master/src/__tests__/integration/handler
I also want to move everything non-unit test out of the "__tests__" folders, as those should be only used for unit tests.
I'm also planning an additional (optional) test script to automatically deploy to AWS and run the same tests against the deployment and expect the same results. that would be most likely done only locally by any developers or users, unless we have or get an AWS test account.
regarding #678, I introduced a `LambdaFunction` class, which also runs the lambda handler. unit tests are included (unit tests for exceptions I have in a stash, not in master yet). we probably eventually need some generic version of how to reply to 'function errors' with 'lambda', 'proxy', 'websocket' etc.
> - Moving all current WebSocket tests into the integration phase so everyone (including CI) can run WebSocket tests and not just me :)

that would be great!
btw, I want to look at the `websocket-authorizer` merge later today so we can merge into `master` soon. maybe I'll move the test folders afterwards, otherwise it causes more friction.
Python integration test fails.

```
python tests › should work with python
FetchError: invalid json response body at http://localhost:3000/hello reason: Unexpected end of JSON input
```
hey @computerpunc, thank you! just fixed it. seems like it was only failing on Windows, because shebangs are not supported, and it tried to run `serverless` with the Windows scripting host and not node.
I'm running on OSX. Now both Ruby and Python fail. This is the latest plain vanilla `master`.

```
python tests › should work with python
FetchError: invalid json response body at http://localhost:3000/hello reason: Unexpected end of JSON input

ruby tests › should work with ruby
FetchError: invalid json response body at http://localhost:3000/hello reason: Unexpected end of JSON input
```
you probably have to run `npm ci` locally. I added a new dependency, `execa`, to fix spawning a process on Windows. let me know if that fixes it.
> you probably have to run `npm ci` locally.

It didn't fully help. Python still fails. Below are the errors generated:
```
Serverless: GET /hello (λ: hello)
Proxy Handler could not detect JSON:
Proxy Handler could not detect JSON: Error --------------------------------------------------

  spawn python2 ENOENT

  For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues
     Issues:        forum.serverless.com

  Your Environment Information ---------------------------
     Operating System:          darwin
     Node Version:              12.7.0
     Serverless Version:        1.49.0
     Enterprise Plugin Version: 1.3.8
     Platform SDK Version:      2.1.0
```
> I'm running on OSX.

me too. plus, the tests on travis are passing: https://travis-ci.org/dherault/serverless-offline/builds/570274311
I believe `ruby` and `python` are pre-installed on macOS. I'm not a `python` nor a `ruby` user, but we should probably also check if python and ruby are installed on the machine where the tests are to be run, and skip them if not. I'm gonna look into this.

could you run `python --version` and `ruby --version` on your box, just to make sure?

also, could you run the following: go to `__tests__/integration/python` and then run `npx serverless invoke local -f hello`
Ruby test runs OK. Python doesn't, but I checked on another Mac and it runs OK.

> could you run `python --version` and `ruby --version` on your box, just to make sure?

On the problematic Mac, `Python 2.7.10`; on the OK Mac, `Python 2.7.16`.

> and then run: `npx serverless invoke local -f hello`

Same error as before (on the problematic Mac).
In addition, I tried to push the changes of #772 from a Mac that can run all tests without a problem. Nevertheless, when trying to push via the GitHub client, I get the same error.
ah, ok. thanks for checking!
just noticed: `spawn python2 ENOENT`. I guess it can't find `python2` -> `python2 --version` probably fails.
@computerpunc should be fixed now
I still cannot push any update because the push is rejected (from a Mac that runs the tests without any error). Github Desktop says that both python2 and python3 tests fail.
Where exactly do tests run when I try to push updates of a PR? Is it locally or remotely?
> I still cannot push any update because the push is rejected (from a Mac that runs the tests without any error). Github Desktop says that both python2 and python3 tests fail.
it's just a git hook; you can run `git push --no-verify` to get around it. not always recommended, but in this case a shortcut until we figure out what's wrong.
not sure how to do `--no-verify` with `github desktop`, and also not sure why `github desktop` behaves differently in that case.
> Where exactly do tests run when I try to push updates of a PR? Is it locally or remotely?
local only
@computerpunc I can also add an ENV flag and/or npm script to skip 'flaky' tests - if that helps?
I managed to push. I didn't need to use `--no-verify`, because the tests only fail with GitHub Desktop, not from the CLI.
Hello guys, it seems like the Readme has an incorrect `npm install`:
Instead of `npm i serverless@next --save-dev` it should probably be `npm i serverless-offline@next --save-dev`
> Hello guys, it seems like the Readme has an incorrect `npm install`:
> Instead of `npm i serverless@next --save-dev` it should probably be `npm i serverless-offline@next --save-dev`
thank you @lightningboss ! Fixed.
I see that for testing #781 you added another folder (`python-big-json`) in `./__tests__/integration`.
That's going to be very messy if there's a new folder for each `serverless.yml`.
`serverless` has the ability to consume configuration files with different names and locations.
In #732 and #778 (`/src/__tests__/manual/websocket`), I use the command `sls offline --config ./config/serverless.$1.yml`, in which `$1` can be `main`, `authorizer` or `RouteSelection`, to run tests from `.yml` files stored in the same folder.
I hope this helps.
> That's going to be very messy if there's a new folder for each `serverless.yml`.

yeah, you are right. that was just a quick drop-in without thinking about the folder structure. that needs a proper rethinking and reorg for sure.
> serverless has the ability to consume configuration files with different names and locations.
> In #732 and #778 (`/src/__tests__/manual/websocket`), I use the command `sls offline --config ./config/serverless.$1.yml`, in which `$1` can be `main`, `authorizer` or `RouteSelection`, to run tests from `.yml` files stored in the same folder.
> I hope this helps.
yeah, thanks for the tip!
I tried to use `serverless.yml` sub-configuration files but couldn't get it to work at the time: https://github.com/dherault/serverless-offline/blob/master/__tests__/integration/handler/serverless.yml#L15
I might have to go back and have another look at it.
EDIT: Created an issue instead of posting the whole thing here: #787. Sorry about that!
While working on #788, I found that the Python 3 tests fail on AWS.
Run command (after `sls deploy` from folder `./__tests__/integration/python3`):

```
npm --endpoint=https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/dev run test __tests__/integration/python3/python3.test.js
```
Output:

```
> serverless-offline@6.0.0-alpha.15 test .../serverless-offline
> jest --verbose --silent --runInBand "__tests__/integration/python3/python3.test.js"

FAIL __tests__/integration/python3/python3.test.js
  Python 3 tests
    ✕ should work with python 3 (549ms)

  ● Python 3 tests › should work with python 3

    expect(received).toEqual(expected) // deep equality

    - Expected
    + Received

      Object {
    -   "message": "Hello Python 3!",
    +   "message": "Forbidden",
      }

      56 | const response = await fetch(url)
      57 | const json = await response.json()
    > 58 | expect(json).toEqual(expected)
         |              ^
      59 | })
      60 | })
      61 | })

      at Object.toEqual (__tests__/integration/python3/python3.test.js:58:20)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 1 total
Snapshots:   0 total
Time:        1.31s, estimated 3s
```
> While working on #788, I found that the Python 3 tests fail on AWS.
thanks for the info @computerpunc ! gonna check it out!
@computerpunc It seems the npm endpoint stuff wasn't defined in that file. I fixed it. a57cc6d
> @computerpunc It seems the npm endpoint stuff wasn't defined in that file. I fixed it. a57cc6d
Sorry for not being clear enough; the error I provided above is from running #788, where I had already added 'endpoint' support in the code myself.
Checking it again on `master` with your update, the outcome is the same error.
> Sorry for not being clear enough; the error I provided above is from running #788, where I had already added 'endpoint' support in the code myself.
> Checking it again on `master` with your update, the outcome is the same error.
ah, thanks again! the path is wrong https://github.com/dherault/serverless-offline/blob/master/__tests__/integration/python/python3/python3.test.js#L64
I'm fixing some stuff around tests right now. I'm also gonna change the npm_config stuff to use a standard env variable, as it is more commonly used and understood; plus, I'm not sure if `yarn` would support it.
> ah, thanks again! the path is wrong https://github.com/dherault/serverless-offline/blob/master/__tests__/integration/python/python3/python3.test.js#L64
Yes, I missed it too.
> I'm fixing some stuff around tests right now. I'm also gonna change the npm_config stuff to use a standard env variable, as it is more commonly used and understood; plus, I'm not sure if `yarn` would support it.
@dnalborczyk can you check #788 and merge it if OK? All the big changes make it very difficult to keep PRs up-to-date with merges. If a PR is merged, then there's no hassle because it's already in, but if it's waiting, it goes out of date, as #732 did :(
> can you check #788 and merge it if OK?
@computerpunc it's very high on my list. I just needed to fix the `python` and `ruby` bugs yesterday, because it felt like things were getting out of control. now I'm just touching up the test setup (fairly small) so we can implement more tests.
I believe I looked at the HEAD stuff before once; I probably still have the stash waiting around somewhere. I just remember it was a bit simpler, but it might not have covered every use case. I'll try to find it and then we can compare.
btw, running the tests against AWS works great!!! the only thing we need is some sort of auto-deployment script for all (or individual) `serverless.yml` files, which we could use in a setup and teardown process. do you have something like that in place (or similar)?
> btw, running the tests against AWS works great!!! the only thing we need is some sort of auto-deployment script for all (or individual) `serverless.yml` files, which we could use in a setup and teardown process. do you have something like that in place (or similar)?
I don't have any specific script as I deployed manually but take a look at this folder which has all kind of scripts for similar tasks (on local machine): https://github.com/computerpunc/serverless-offline/tree/websocket-fixes-authorizer/src/__tests__/manual/websocket/scripts
In any case, I hope you are not going to make git run the tests against AWS upon every commit :)
Check the integration tests again, as they do not work with AWS.
In order to fix, you can:

```diff
+ const { pathname } = url
+ url.pathname = `${pathname}${pathname === '/' ? '' : '/'}${path}`
```

and rename `AWS_ENPOINT` => `AWS_ENDPOINT`.
I've checked and it now works. Thanks.
If you want to greatly improve the running time of tests when running against AWS, move requiring `serverless` and `serverless-offline` after the `if` in `setup()` in `setupTeardown.js`, as shown below:

```js
if (RUN_TEST_AGAINST_AWS) {
  return
}

const Serverless = require('serverless') // eslint-disable-line global-require
const ServerlessOffline = require('../../../src/ServerlessOffline.js') // eslint-disable-line global-require

const serverless = new Serverless({ servicePath })
```

`require('serverless')` and `require('../../../src/ServerlessOffline.js')` take a lot of time to load, and there's no reason to pay this penalty if a server already exists (as in the AWS case).

EDIT: BTW, it also makes a very big improvement when creating a `serverless-offline` server in the tests, i.e. when `AWS_ENDPOINT` is null when running `npm run test` (although I don't understand why).
I updated to `v6.0.0-alpha.20` and Python endpoints are now failing: #742 (comment)
i have another issue with webpack (followup of #787).
it seems that serverless-offline doesn't pick up the changes after a bundle is rebuilt.
I see:

```
Built at: 08/26/2019 4:43:24 PM
              Asset      Size        Chunks             Chunk Names
    src/handler1.js  4.11 KiB  src/handler1            src/handler1
src/handler1.js.map  3.96 KiB  src/handler1            src/handler1
    src/handler2.js  4.13 KiB  src/handler2 [emitted]  src/handler2
src/handler2.js.map  3.98 KiB  src/handler2 [emitted]  src/handler2
Entrypoint src/handler1 = src/handler1.js src/handler1.js.map
Entrypoint src/handler2 = src/handler2.js src/handler2.js.map
[./src/handler1.js] 293 bytes {src/handler1}
[./src/handler2.js] 306 bytes {src/handler2} [built]
Serverless: Watching for changes...
```

but the actual code that is running is not the one after the build.
I can reproduce a minimal example if needed.
now I realized that it takes ~1 minute to pick up the changes, but it's not consistent...
- it works with serverless-offline 5.x.x
another bug in 6 alpha regarding env vars:
It looks like `process.env` accessed outside of the handler receives a different value (or no value).
This outputs `true` in 6 but `false` in 5 with the same configuration (`A` is declared in the yml with a value):

```js
const A = process.env.A;

const handler = (event, context, callback) => {
  callback(null, {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(A !== process.env.A, null, 2),
  })
}

module.exports = { handler };
```
> If you want to greatly improve the running time of tests when running against AWS, move requiring `serverless` and `serverless-offline` after the `if` in `setup()` in `setupTeardown.js` [...]
> `require('serverless')` and `require('../../../src/ServerlessOffline.js')` take a lot of time to load, and there's no reason to pay this penalty if a server already exists (as in the AWS case).
thank you @computerpunc! great idea!! it makes sense, since `jest` creates a new module reference for each individual test module.
> EDIT: BTW, it also makes a very big improvement when creating a `serverless-offline` server in the tests, i.e. when `AWS_ENDPOINT` is null when running `npm run test` (although I don't understand why).
not sure either, other than one less `require`.
in general, there's probably lots of room for improvement regarding the tests. e.g., right now, all tests run in-band, meaning not in parallel, because of the single port we're currently using. if we used random ports for the tests, they would likely run faster as well. the AWS tests could always run in parallel, since they point to different endpoints anyway.
@Jordan-Eckowitz I answered in the other issue: #742 (comment)
Hi @dnalborczyk,
I'm still using `v6.0.0-alpha.23` and I'm now getting the following error when `serverless-offline` boots up:

```
serverless-offline" initialization errored: Cannot find module './misc/polyfills.js
```
@Jordan-Eckowitz thank you! the `misc` folder is missing in the deployed package. I'll fix that!
update: `alpha.26` is out, and the issue should be fixed.
> @Jordan-Eckowitz thank you! the `misc` folder is missing in the deployed package. I'll fix that!
> update: `alpha.26` is out, and the issue should be fixed.
Working perfectly now - thanks @dnalborczyk !
> `process.env.IS_OFFLINE=true` has been removed
> - you can just provide your own environment variable, e.g. in `serverless.yml`:
> `# in the provider definition: provider: environment: IS_OFFLINE: true # or in the function definition functions: hello: handler: handler.hello environment: IS_OFFLINE: true`

This piece of documentation is not correct, as `process.env.IS_OFFLINE` was `true` only when running offline, not on AWS. With the above approach it will be `true` on both.
BTW, why did you remove it in the first place?
> This piece of documentation is not correct, as `process.env.IS_OFFLINE` was `true` only when running offline, not on AWS. With the above approach it will be `true` on both.

I don't think this is incorrect. Nobody is saying that you should deploy it with this `environment` set.
Although I admit that it might need further clarification to make more sense, e.g.:

```yml
provider:
  environment:
    # pass through env
    IS_OFFLINE: ${env:IS_OFFLINE}
    # or pass cli option
    IS_OFFLINE: ${opt:IS_OFFLINE}
```

I'll add this to the migration example. I need to update the migration steps; they are not up-to-date with the latest breaking changes.
> BTW, why did you remove it in the first place?

same reason `event.isOffline` was removed: aa7fd51
- it adds additional complexity to the plugin
- not everyone needs this
- it doesn't exist on AWS
- it can be easily achieved by the user (see above)
- adding it explicitly shows clear intent; e.g. people not using this plugin would not know what the meaning of `IS_OFFLINE` would be

besides all of the above, one could also set `process.env.IS_OFFLINE` in the cli, but that would not always work depending on how you run your handlers (in-process, worker thread, child process). I have it on my to-do list to write something up regarding `process.env`, global state, module state etc.
Congratulations @dnalborczyk on becoming the first contributor of serverless-offline through so much work. You did an amazing job implementing new features and refactoring the code. Best.
They say that sometimes a single picture is better than a thousand words :)
Well done!!!
thank you guys very much!! that's nice to hear!! very much appreciated!!
Hi guys, how can I test my defined openapi in serverless.yml?
On the latest `master`, `npm run test` reports the following:

```
Test Suites: 8 failed, 17 passed, 25 total
Tests:       23 failed, 1 skipped, 216 passed, 240 total
Snapshots:   4 passed, 4 total
Time:        77.606s
```

The 2 main errors are:

```
ServerlessError: Serverless plugin "./../../../../" initialization errored: Cannot find module '......../serverless-offline/' from 'PluginManager.js'
FetchError: request to http://localhost:3000/xyz failed, reason: connect ECONNREFUSED 127.0.0.1:3000
```

Do you experience the same?
> i have another issue with webpack (followup of #787).
> it seems that serverless-offline doesn't pick up the changes after a bundle is rebuilt. [...]
> but the actual code that is running is not the one after the build.
> I can reproduce a minimal example if needed.
> now I realized that it takes ~1 minute to pick up the changes, but it's not consistent...
> - it works with serverless-offline 5.x.x
I'm having the same issue. Is there a known workaround?
Hi, having the same issue as ozsay & uldissturms, but I'm using webpack without the serverless-webpack plugin.
I think it's related to the fact that serverless-offline replicates the cold start mechanism of serverless functions.
This is very useful, but it should be dismissable on demand, e.g. with a flag like `--always-cold` or something.
The only workaround I found so far is to exit/restart the `serverless offline` command, which is kinda... not fun.
Thanks for your hard work on this useful plugin!
@leny, thank you. That makes sense, as I see the changes picked up if I invoke the function again after waiting for a while.
Digging around in the code, the "problem" is here, and already marked as TODO.
I hope we'll be able to specify the idle time, including setting it to `0`.
hey @uldissturms @leny
there is a discussion about a fix in another issue: #793
Hey all, I am new to serverless. When I run the command `serverless offline`, the error below comes up, but when I type `serverless plugin list` on the Windows command prompt, it shows the serverless-offline plugin.

```
serverless offline

Serverless Error ---------------------------------------
Serverless command "offline" not found. Did you mean "config"? Run "serverless help" for a list of all available commands.

Get Support --------------------------------------------
   Docs:          docs.serverless.com
   Bugs:          github.com/serverless/serverless/issues
   Issues:        forum.serverless.com

Your Environment Information ---------------------------
   Operating System:         win32
   Node Version:             10.16.0
   Framework Version:        1.53.0
   Plugin Version:           3.1.1
   SDK Version:              2.1.1
   Components Core Version:  1.1.1
   Components CLI Version:   1.2.3
```
> when i type this command on windows command promt: serverless plugin list
> it show serverless-offline plugin

Yes it does, because `serverless-offline` is a known plugin for `serverless`.
To activate it, please add the following to your `serverless.yml` file:

```yml
plugins:
  - serverless-offline
```
@computerpunc Yeah, but it's already there. Actually, I have now installed it with `serverless plugin install --name serverless-offline` in the working dir; that added it to my serverless.yml file, and it's working fine now. Thank you!
@dnalborczyk could you not remove `process.env.IS_OFFLINE` and `event.isOffline`, please?
It is too big a breaking change for most users, I believe.
Also, I wonder: since `remove --skipCacheInvalidation [alpha.32]`, how will users reload their work? Using nodemon? It should be in the README; I'm taking care of it.
@dnalborczyk can I work on `display memory usage`?
Is the Replay feature still here?
> @dnalborczyk could you not remove `process.env.IS_OFFLINE` and `event.isOffline`, please?
> It is too big a breaking change for most users, I believe.

@dherault Not a big issue while making the transition on my side. Just use `process.env.IS_OFFLINE` instead of `event.isOffline` and set an environment var, as in `export IS_OFFLINE=true`.
The main issue here is the incorrect documentation above on setting the var in `serverless.yml`:
process.env.IS_OFFLINE=true
has been removed
- you can just provide your own environment variable, in e.g.
serverless.yml
:# in the provider definition: provider: environment: IS_OFFLINE: true # or in the function definition functions: hello: handler: handler.hello environment: IS_OFFLINE: true
Hi guys, how can I test my defined OpenAPI in serverless.yml?
@justinlazaro-iselect what do you mean? could you clarify? currently you have an overview of the api in the browser, under /documentation. the port will likely change to avoid collisions with a potential existing /documentation path. as an alternative we could also add the stage to the urls.
On the latest master, npm run test reports the following:

Test Suites: 8 failed, 17 passed, 25 total
Tests:       23 failed, 1 skipped, 216 passed, 240 total
Snapshots:   4 passed, 4 total
Time:        77.606s

The 2 main errors are:
ServerlessError: Serverless plugin "./../../../../" initialization errored: Cannot find module '......../serverless-offline/' from 'PluginManager.js'
FetchError: request to http://localhost:3000/xyz failed, reason: connect ECONNREFUSED 127.0.0.1:3000
Do you experience the same?
@computerpunc sorry, been extremely busy lately, but things are calming down now. anyhow, are you still experiencing that problem? I changed a couple things around it I believe, but not sure if it had anything to do with your issue.
@dherault good to have you back!!! sorry, been extremely busy lately, but things are calming down now. let me try to answer some of your questions:
Is the Replay feature still here ?
it should be. I haven't removed it (not intentionally), although I'm not sure how it's being used, and how it's intended to work.
can I work on display memory usage ?
absolutely! I started working on it, but it seemed that this feature might not quite fully work; nonetheless, it could be a good addition.
depending on the process, we'd access memory usage either in-process, in a worker-thread, or in a child-process.
in-process is entirely shared, so it's probably almost useless, since serverless, serverless-offline, hapi, and all other 3rd party code including all handlers share the same memory.
worker-thread should get us a little closer, but there was another issue, which I don't recall currently.
child-process should be fairly (more or less) correct, I assume.
Note: Ruby and Python could be handled similarly to the child-process functionality.
Also i wonder since remove --skipCacheInvalidation [alpha.32] how will users reload their work ? Using nodemon ? It should be in the README, I'm taking care of it.
yes, for the time being, something like nodemon would be preferred. I saw a tremendous amount of issues with skipCacheInvalidation (mainly memory-leak related). I intend to fix this with worker threads, where we could have an opt-in watch mode, similar to webpack, or alternatively "hard-reload" on every request (similar to what skipCacheInvalidation did).
nodemon also has the advantage that it reloads the process when anything else changes (e.g. yml files, .env files, etc.). The only current downside is that it loads serverless including the plugins as well (which is not desired).
I added a couple examples here: https://github.com/dherault/serverless-offline/tree/master/examples . we definitely should mention it, or point to it, from the README. just haven't gotten to it yet. feel free to add it.
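For the time being, a nodemon-based setup might look like this package.json fragment (just a sketch; the script name and watched extensions are assumptions, not serverless-offline defaults):

```json
{
  "scripts": {
    "dev": "nodemon --ext js,yml,env --exec \"serverless offline start\""
  }
}
```

As noted above, nodemon restarts the whole process on changes, which reloads serverless and its plugins too.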
process.env.IS_OFFLINE=true has been removed - you can just provide your own environment variable, in e.g. serverless.yml:
this is just a very simple example
# in the provider definition:
provider:
environment:
IS_OFFLINE: true
# or in the function definition
functions:
hello:
handler: handler.hello
environment:
IS_OFFLINE: true
one can expand on this, and use conditions (e.g. NODE_ENV, stage etc.) as mentioned already here: #768 (comment)
provider:
environment:
# pass through env
IS_OFFLINE: ${env:IS_OFFLINE}
# or pass cli option
IS_OFFLINE: ${opt:IS_OFFLINE}
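A handler can then branch on the self-defined variable. A minimal sketch (the local endpoint value is a made-up placeholder; note that serverless.yml environment values arrive in process.env as strings):

```javascript
// Branch on a self-defined IS_OFFLINE environment variable
// (set via serverless.yml as shown above).
function resolveEndpoint() {
  // compare against the string 'true', since env vars are always strings
  return process.env.IS_OFFLINE === 'true'
    ? 'http://localhost:8000' // e.g. a local database endpoint (placeholder)
    : undefined // fall back to the real AWS endpoint
}

module.exports = { resolveEndpoint }
```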
The main issue here is the incorrect documentation above to set the var in
serverless.yml
:
I don't think it's incorrect, just very simplified. there's also more to it. @computerpunc your export xyz wouldn't work in worker-threads or in-process, as well as for ruby and python, if I remember correctly. all that is on my to-do list for documentation. I think I should do that first before going on with any additional coding.
@computerpunc sorry, been extremely busy lately, but things are calming down now. anyhow, are you still experiencing that problem? I changed a couple things around it I believe, but not sure if it had anything to do with your issue.
I don't experience this problem anymore.
What's most problematic on my end is that WebSocket support has been broken since alpha.41 (at least as far as I can see on my end), as reported in #814 (comment).
I hope you can fix it ASAP.
@computerpunc I replied in the PR.
yes, you are right. the new lambda mechanism tries to mimic the cold start as well as the container reuse of lambdas. the 1 min hard-coded setting is a bit short, and needs to become a config setting for sure (thank you for the PR @leny)
I think it's related to the fact that serverless-offline is replicating the cold start mechanism of serverless functions. This is very useful, but it should be dismissible on demand, with a flag like --always-cold or something.
yes, something like that might be needed. the problem is that this would only work for worker-threads, as in-process reloading does not work reliably. I'm planning on picking up development again, and will get back once I've given it a bit more thought.
Happy Days! :D The new features blow my mind! And the upcoming changes sound awesome!
Regarding "--exec option has been removed":
Hopefully someone will step up and create a serverless plugin that allows you to run commands once serverless-offline has successfully started up and is listening (for http calls). It seems like there isn't a hook (ServerlessOffline.js#L45) yet; I'm guessing it could be triggered somewhere after all promises have been resolved (ServerlessOffline.js#L92).
I could give it a go if nobody else is interested in this.
Not having something like that makes it painful to run automated integration tests, since you can only run them once the service is listening, and using a sleep is ugly.
I've found an issue with loading custom authorizer functions when the project is using serverless-webpack.
A similar issue was fixed in #787 (the case of loading handlers directly), so I've explained it within this thread as well: #787 (comment)
There is a PR that fixes custom authorizer case reusing the same solution: #835
Just updated to the latest alpha release and noticed the --prefix option was removed.
Will there be an alternative?
@dorsegal yes, there is an alternative! the stage
option is now implemented in serverless-offline
and provides the same functionality:
provider:
stage: foo
or as cli parameter: --stage foo
@dnalborczyk I noticed that as well but it isn't really an alternative for me.
I use serverless-domain-manager with basePath option that sets the URL for every microservice we have.
https://domain/{some_service}/function
https://domain/{some_other_service}/function
And without the prefix option, I cannot do this offline; instead, I'm getting:
https://domain/dev/function
I use serverless-domain-manager with the basePath option that sets the URL for every microservice we have.
https://domain/{some_service}/function https://domain/{some_other_service}/function
And without the prefix option, I cannot do this offline; instead, I'm getting:
https://domain/dev/function
what would you like it to be?
it seems the custom parameter basePath is something serverless-domain-manager implements (or adds to serverless). you could either try the formerly mentioned stage (which defaults to dev), or you could try custom variables in paths in the serverless config yaml. or, as an alternative, you could suggest an implementation/extension to serverless directly.
@dnalborczyk I noticed that as well but it isn't really an alternative for me. I use serverless-domain-manager with the basePath option that sets the URL for every microservice we have.
https://domain/{some_service}/function https://domain/{some_other_service}/function
And without the prefix option, I cannot do this offline; instead, I'm getting:
https://domain/dev/function
To help understand better, can you describe your staging strategy when not running on production (dev, test, etc.)?
And what is your strategy when running locally? Since you have a different yml file for each microservice, you can't run all of them at the same time on the same port locally, can you?
To help understand better, can you describe your staging strategy when not running on production (dev, test, etc.)?
And what is your strategy when running locally? Since you have a different yml file for each microservice, you cannot run all of them at the same time on the same port locally?
We use a multi-repo setup to manage all of our microservices and the domain manager for different environments.
When running locally I tell my devs to run their service on localhost and just change the URL in postman
from https://dev-application.example.com/ to https://localhost:3000
This is why we need the prefix option to work.
Until now I used the stage param to test code against environment variables that differ between stages.
As for running them all locally, right now they work on one service at a time, so there is no need to run them all locally, but I started to wrap all of our services in a docker container with Nginx to emulate AWS API GW base mapping.
I hope I was clear enough, but if you have any more questions about my use case please feel free to ask.
As for running them all locally, right now they are working on one service at a time so there is no need to run them all locally but I started to wrap all of our services in a docker container with Nginx to emulate AWS API GW base mapping.
Then isn't this a solution to the problem?
Run each microservice in the docker container on a different port, for example, microservice1 on http://localhost:3000
, microservice2 on http://localhost:3001
, etc. and let Nginx reroute to expose a single application, for example, https://localhost:8000/microservice1/xyz
goes to http://localhost:3000/dev/xyz
, https://localhost:8000/microservice2/xyz
goes to http://localhost:3001/dev/xyz
, etc.
This solution seems to work, if you don't care about the structure of the URL from within the microservices.
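As a sketch, the rerouting described above could look roughly like this nginx config (the ports, the service names, and the dev stage are assumptions):

```nginx
# Hypothetical reverse proxy emulating API GW base-path mapping.
server {
  listen 8000;

  # /microservice1/xyz -> http://localhost:3000/dev/xyz
  location /microservice1/ {
    proxy_pass http://localhost:3000/dev/;
  }

  # /microservice2/xyz -> http://localhost:3001/dev/xyz
  location /microservice2/ {
    proxy_pass http://localhost:3001/dev/;
  }
}
```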
Then isn't this a solution to the problem? Run each microservice in the docker container on a different port and let Nginx reroute to expose a single application. This solution seems to work, if you don't care about the structure of the URL from within the microservices.
@computerpunc
This is exactly what I'm trying to do, but I'm facing issues with it, so I cannot say this works just yet.
And as I mentioned, we don't need the docker solution right now and it's quite an overkill for developing one service at a time.
Any chance you will reconsider adding the prefix option again?
I really like the alpha version but I'm afraid I cannot upgrade without --prefix option
@dhruvbhatia I noticed you linked my request to another plugin. Could you please explain how those issues are related? (just out of curiosity)
EDIT: after digging around more I've found this is probably an already known issue: #793. Ignore my comment if so! ;)
With serverless-offline 6.0.0-alpha.59
, serverless-webpack
+typescript
hot reloading is not working.
The easiest way to replicate is to use this template: serverless create --template aws-nodejs-typescript --path myService
Then do yarn add -D serverless-offline
and add it to serverless.yml
:
plugins:
- serverless-webpack
- serverless-offline
sls offline to start -> check the response when calling the endpoint. Change message in handler.ts -> hit the endpoint again. It shows the code changes.
Then do yarn add -D serverless-offline@next and repeat the previous tests. Code changes are not reflected in the function, only when restarting sls offline.
@DennisKo re-implementing hot-reloading is on the road map for v6
. it's currently not working, even without serverless-webpack
.
I think I have found an issue with the introduction of the stage parameter into the URL.
Within API Gateway, an API with a URL such as https://**********.execute-api.eu-west-2.amazonaws.com/stage/foo/bar
will pass the event.path
property through to the Lambda as /foo/bar
, stripping out the stage parameter.
The plugin currently passes through the full path to the Lambda handler e.g. /stage/foo/bar
. This causes issues for Lambdas using Express, which handles the routing internally.
For now, I have worked around this by stripping out the stage (local
in my specific use case) from the event before passing it on to Express:
event.path = event.path.replace(/^\/local/, "");
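A slightly more general version of that workaround, as a sketch (the stage value and its source are assumptions; API Gateway itself strips the stage before invoking the Lambda):

```javascript
// Strip the stage segment from event.path before handing the event
// to an Express adapter. The stage default ('local') is an assumption.
function stripStage(event, stage = process.env.STAGE || 'local') {
  const prefix = `/${stage}`
  // '/local' alone becomes the root path
  if (event.path === prefix) return { ...event, path: '/' }
  // only strip whole path segments, so '/localize/foo' is untouched
  if (event.path && event.path.startsWith(`${prefix}/`)) {
    return { ...event, path: event.path.slice(prefix.length) }
  }
  return event
}

module.exports = { stripStage }
```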
Got the same issue as above while trying to test some Angular + Express + Serverless integration: everything works fine after deployment but, unfortunately, the serverless-offline plugin is prefixing all urls with the stage parameter, making it impossible to load assets and such while testing offline.
@jamesmbourne @KingDarBoja thanks for the report. I'll have a look!
Am I correct in understanding that there's currently no way to not have the stage as the first segment of the path in the URL? My goal would be to have my dev host be http://localhost:4000/ (like it's been for years and as I expect most people expect it to be) and NOT http://localhost:4000/development
I shouldn't have to empty the provider.stage
prop for this to happen.
As it has been said, it's very difficult to understand this change.
I know this is the behavior of AWS and we're just adding some domains as "aliases" to the complete url (with the stage) of the service.
But the developer experience, for this kind of, huh, developer tool, should be the main focus while developing this plugin.
I think there's a solution that can make everyone happy: keep the new behavior (adding the stage to the url), as it's the behavior of AWS, but also add a new option to the plugin, like noStageInUrl (not really inspired, sorry), to have the possibility of restoring the old behavior.
I don't think it's incorrect, just very simplified. there's also more to it @computerpunc your export xyz wouldn't work in worker-threads or in-process, as well as for ruby and python, if I remember correctly. all that is on my to-do list for documentation. I think I should do that first before going on with any additional coding
@dherault @dnalborczyk I'm not sure if you guys realize that removing a stupid one-liner in the plugin citing "complexity" is forcing many other people to put more work into their projects just to get around it. See also the link to serverless-dynamodb-client, which is now broken. As the comment above says, the goal of this plugin is to make our lives easier. This change is not doing that.
Not everybody's setup is the same; in my case I wanted this to tweak the allowed "methods" for a lambda hosting a GraphQL endpoint, so that it would accept ANY while running in offline mode, but only POST when in AWS. And I am starting serverless offline from multiple places, from a couple of package.json commands and from VSCode for debugging. Not having the flag anymore means duplicating whatever workaround you may have - such as setting YOUR_ENV - in all these places. Furthermore, export IS_OFFLINE won't work on Windows, and coming up with a cross-platform solution to all this is... adding complexity?
For anyone else troubled by this, in the end my workaround was to look at the command arguments passed to serverless; invoking "serverless offline" makes "offline" a command. In my case I was already using a config.js as explained here; the commands are available in the serverless variable that you get as config parameter, you can access them as follows:
module.exports = context => {
  // "offline" shows up in the processed commands when invoked as "serverless offline"
  const offlineMode =
    context.config.serverless.processedInput &&
    context.config.serverless.processedInput.commands &&
    context.config.serverless.processedInput.commands.includes('offline');

  return {
    get offlineBasedProperty() {
      return offlineMode ? 'offline-mode-value' : 'real-value';
    }
  };
};
Another alternative, in serverless.yml
add
custom:
offline-mode: ${opt:offline-mode, 'default'}
# Setting that you want to depend on offline mode
offlineSetting:
offline: 'offline-mode-value'
default: 'real-value'
[...]
yourValueThatNeedsTheOfflineSetting: ${self:custom.offlineSetting.${self:custom.offline-mode}}
Then you can start with serverless offline --offline-mode offline
. Ugly repetition, but should work...
Anyone else experiencing issues with the docker integration?
I keep getting:
FetchError: request to http://localhost:9001/2015-03-31/functions/hello/invocations failed, reason: connect ECONNREFUSED 127.0.0.1:9001
First time I set up it to run it works for a few seconds and then stops working.
I'm using it with a Golang Project.
service: my-project
provider:
name: aws
runtime: go1.x
region: us-east-1
stage: dev
memorySize: 128
plugins:
- serverless-offline
package:
exclude:
- ./**
include:
- ./bin/**
custom:
serverless-offline:
useDocker: true
functions:
hello:
events:
- http:
method: get
path: hello
handler: bin/hello
Using docker for mac V2.2.0.0
Request examples:
ฮป sls offline
offline: Starting Offline: dev/us-east-1.
offline: Offline [http for lambda] listening on http://localhost:3002
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ โ
โ GET | http://localhost:3000/dev/hello โ
โ โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
offline:
offline: [HTTP] server ready: http://localhost:3000 ๐
offline:
offline: Enter "rp" to replay the last request
offline:
offline: GET /dev/hello (ฮป: hello)
Lambda API listening on port 9001...
START RequestId: 50d6592f-522a-129e-e740-26f2c232188e Version: $LATEST
END RequestId: 50d6592f-522a-129e-e740-26f2c232188e
REPORT RequestId: 50d6592f-522a-129e-e740-26f2c232188e Init Duration: 213.32 ms Duration: 5.36 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 20 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: f2123202-98f8-1b3a-92c0-16503d1b5335 Version: $LATEST
END RequestId: f2123202-98f8-1b3a-92c0-16503d1b5335
REPORT RequestId: f2123202-98f8-1b3a-92c0-16503d1b5335 Duration: 2.99 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: d4855897-eab7-1e86-0e88-15deef3bdd0d Version: $LATEST
END RequestId: d4855897-eab7-1e86-0e88-15deef3bdd0d
REPORT RequestId: d4855897-eab7-1e86-0e88-15deef3bdd0d Duration: 2.47 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 631b01e7-f8ca-1474-65d9-f614d8329c6a Version: $LATEST
END RequestId: 631b01e7-f8ca-1474-65d9-f614d8329c6a
REPORT RequestId: 631b01e7-f8ca-1474-65d9-f614d8329c6a Duration: 2.50 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 930e2727-24f7-1932-3497-ccfc732e897f Version: $LATEST
END RequestId: 930e2727-24f7-1932-3497-ccfc732e897f
REPORT RequestId: 930e2727-24f7-1932-3497-ccfc732e897f Duration: 4.13 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: ef15ceff-cf97-14e3-814c-95919d328246 Version: $LATEST
END RequestId: ef15ceff-cf97-14e3-814c-95919d328246
REPORT RequestId: ef15ceff-cf97-14e3-814c-95919d328246 Duration: 5.36 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 21 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: db79e834-e328-1a72-905a-532bebff3ac3 Version: $LATEST
END RequestId: db79e834-e328-1a72-905a-532bebff3ac3
REPORT RequestId: db79e834-e328-1a72-905a-532bebff3ac3 Duration: 5.75 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: dc63c16a-9ca1-1196-d829-55a640949277 Version: $LATEST
END RequestId: dc63c16a-9ca1-1196-d829-55a640949277
REPORT RequestId: dc63c16a-9ca1-1196-d829-55a640949277 Duration: 2.94 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: f428c996-5f9e-11a7-8090-5f7cd7f49699 Version: $LATEST
END RequestId: f428c996-5f9e-11a7-8090-5f7cd7f49699
REPORT RequestId: f428c996-5f9e-11a7-8090-5f7cd7f49699 Duration: 3.52 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 0062ac7d-35e7-1115-56f8-636f1800c9f1 Version: $LATEST
END RequestId: 0062ac7d-35e7-1115-56f8-636f1800c9f1
REPORT RequestId: 0062ac7d-35e7-1115-56f8-636f1800c9f1 Duration: 3.14 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 12e71257-46ac-1499-2e14-aa91502a03c4 Version: $LATEST
END RequestId: 12e71257-46ac-1499-2e14-aa91502a03c4
REPORT RequestId: 12e71257-46ac-1499-2e14-aa91502a03c4 Duration: 2.74 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: a98758a0-c586-1b12-8f01-21376faa1efa Version: $LATEST
END RequestId: a98758a0-c586-1b12-8f01-21376faa1efa
REPORT RequestId: a98758a0-c586-1b12-8f01-21376faa1efa Duration: 3.45 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 2f365577-c245-166c-28cb-6ba248a81ce0 Version: $LATEST
END RequestId: 2f365577-c245-166c-28cb-6ba248a81ce0
REPORT RequestId: 2f365577-c245-166c-28cb-6ba248a81ce0 Duration: 2.71 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 384b71b3-99e3-120b-7753-e80ea4f4c8ab Version: $LATEST
END RequestId: 384b71b3-99e3-120b-7753-e80ea4f4c8ab
REPORT RequestId: 384b71b3-99e3-120b-7753-e80ea4f4c8ab Duration: 4.20 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 12ddaff3-d7fd-132c-8542-8a7472aebf79 Version: $LATEST
END RequestId: 12ddaff3-d7fd-132c-8542-8a7472aebf79
REPORT RequestId: 12ddaff3-d7fd-132c-8542-8a7472aebf79 Duration: 3.46 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: d195fe14-ea39-1ea7-b940-b7e8c9abfd80 Version: $LATEST
END RequestId: d195fe14-ea39-1ea7-b940-b7e8c9abfd80
REPORT RequestId: d195fe14-ea39-1ea7-b940-b7e8c9abfd80 Duration: 3.03 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 23 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 1c46b629-cf29-124a-c39b-1743215dd78e Version: $LATEST
END RequestId: 1c46b629-cf29-124a-c39b-1743215dd78e
REPORT RequestId: 1c46b629-cf29-124a-c39b-1743215dd78e Duration: 5.32 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 23 MB
offline: GET /dev/hello (ฮป: hello)
START RequestId: 707de0b4-b97c-12c9-4ac6-7656229f3c49 Version: $LATEST
END RequestId: 707de0b4-b97c-12c9-4ac6-7656229f3c49
REPORT RequestId: 707de0b4-b97c-12c9-4ac6-7656229f3c49 Duration: 3.98 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 23 MB
offline: GET /dev/hello (ฮป: hello)
offline: GET /dev/hello (ฮป: hello)
START RequestId: cc9de168-b8ab-1566-058f-0ed1da5f75ee Version: $LATEST
END RequestId: cc9de168-b8ab-1566-058f-0ed1da5f75ee
REPORT RequestId: cc9de168-b8ab-1566-058f-0ed1da5f75ee Duration: 4.44 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 23 MB
Lambda API listening on port 9001...
offline: Failure: request to http://localhost:9002/2015-03-31/functions/hello/invocations failed, reason: connect ECONNREFUSED 127.0.0.1:9002
FetchError: request to http://localhost:9002/2015-03-31/functions/hello/invocations failed, reason: connect ECONNREFUSED 127.0.0.1:9002
Hi,
Since version 6.0.0-alpha.63
(and also 6.0.0-alpha.64
), I got the following error when using invoke local
command:
$ ./node_modules/.bin/sls invoke local --function sync_users
Error: Cannot find module '[PROJECT]/node_modules/serverless-offline/babel.config.js' from '[PROJECT]'
at module.exports ([PROJECT]/node_modules/resolve/lib/sync.js:74:15)
at sync ([PROJECT]/node_modules/gensync/index.js:177:19)
at resolve ([PROJECT]/node_modules/gensync/index.js:204:19)
at Generator.next (<anonymous>)
at loadConfig ([PROJECT]/node_modules/@babel/core/lib/config/files/configuration.js:147:48)
at loadConfig.next (<anonymous>)
at buildRootChain ([PROJECT]/node_modules/@babel/core/lib/config/config-chain.js:76:47)
at buildRootChain.next (<anonymous>)
at loadPrivatePartialConfig ([PROJECT]/node_modules/@babel/core/lib/config/partial.js:95:62)
at loadPrivatePartialConfig.next (<anonymous>)
at loadFullConfig ([PROJECT]/node_modules/@babel/core/lib/config/full.js:57:46)
at loadFullConfig.next (<anonymous>)
at Function.<anonymous> ([PROJECT]/node_modules/@babel/core/lib/config/index.js:31:43)
at Generator.next (<anonymous>)
at evaluateSync ([PROJECT]/node_modules/gensync/index.js:244:28)
at Function.sync ([PROJECT]/node_modules/gensync/index.js:84:14)
at [PROJECT]/node_modules/@babel/core/lib/config/index.js:41:61
at OptionManager.init ([PROJECT]/node_modules/@babel/core/lib/index.js:257:36)
at compile ([PROJECT]/node_modules/@babel/register/lib/node.js:63:42)
at compileHook ([PROJECT]/node_modules/@babel/register/lib/node.js:104:12)
at Module._compile ([PROJECT]/node_modules/pirates/lib/index.js:93:29)
at Module._extensions..js (internal/modules/cjs/loader.js:947:10)
at Object.newLoader [as .js] ([PROJECT]/node_modules/pirates/lib/index.js:104:7)
at Module.load (internal/modules/cjs/loader.js:790:32)
at Function.Module._load (internal/modules/cjs/loader.js:703:12)
at Module.require (internal/modules/cjs/loader.js:830:19)
at require (internal/modules/cjs/helpers.js:68:18)
at AwsInvokeLocal.invokeLocalNodeJs ([PROJECT]/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:616:33)
at AwsInvokeLocal.invokeLocal ([PROJECT]/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:167:19)
at AwsInvokeLocal.tryCatcher ([PROJECT]/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler ([PROJECT]/node_modules/bluebird/js/release/promise.js:547:31)
at Promise._settlePromise ([PROJECT]/node_modules/bluebird/js/release/promise.js:604:18)
at Promise._settlePromiseCtx ([PROJECT]/node_modules/bluebird/js/release/promise.js:641:10)
at _drainQueueStep ([PROJECT]/node_modules/bluebird/js/release/async.js:97:12)
at _drainQueue ([PROJECT]/node_modules/bluebird/js/release/async.js:86:9)
at Async._drainQueues ([PROJECT]/node_modules/bluebird/js/release/async.js:102:5)
at Immediate.Async.drainQueues [as _onImmediate] ([PROJECT]/node_modules/bluebird/js/release/async.js:15:14)
at processImmediate (internal/timers.js:439:21)
at process.topLevelDomainCallback (domain.js:131:23) {
code: 'MODULE_NOT_FOUND'
}
Error --------------------------------------------------
Error: Exception encountered when loading [PROJECT]/bin/handlers/sync/users
at AwsInvokeLocal.invokeLocalNodeJs ([PROJECT]/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:621:13)
at AwsInvokeLocal.invokeLocal ([PROJECT]/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:167:19)
at AwsInvokeLocal.tryCatcher ([PROJECT]/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler ([PROJECT]/node_modules/bluebird/js/release/promise.js:547:31)
at Promise._settlePromise ([PROJECT]/node_modules/bluebird/js/release/promise.js:604:18)
at Promise._settlePromiseCtx ([PROJECT]/node_modules/bluebird/js/release/promise.js:641:10)
at _drainQueueStep ([PROJECT]/node_modules/bluebird/js/release/async.js:97:12)
at _drainQueue ([PROJECT]/node_modules/bluebird/js/release/async.js:86:9)
at Async._drainQueues ([PROJECT]/node_modules/bluebird/js/release/async.js:102:5)
at Immediate.Async.drainQueues [as _onImmediate] ([PROJECT]/node_modules/bluebird/js/release/async.js:15:14)
at processImmediate (internal/timers.js:439:21)
at process.topLevelDomainCallback (domain.js:131:23)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 12.10.0
Framework Version: 1.61.3
Plugin Version: 3.2.7
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
Thoughts on replacing the execa usage in the docker runner and using https://github.com/apocas/dockerode instead?
Anyone else experiencing issues with the docker integration?
I keep getting:FetchError: request to http://localhost:9001/2015-03-31/functions/hello/invocations failed, reason: connect ECONNREFUSED 127.0.0.1:9001
I want to investigate the bug, but I was not able to replicate it locally.
Could you create a new issue and provide the following information?
- the SLS_DEBUG=* sls offline log
- the output of docker ps -a -f "ancestor=lambci/lambda:go1.x" when the error occurred
It may be related to the lambda function cleaner (it cleans up lambda functions that have been idle for over 1 minute, please refer to the following). Does the error occur on a request sent more than 1 minute after the last request?
serverless-offline/src/lambda/LambdaFunctionPool.js
Lines 17 to 38 in 2d87610
@frozenbonito Right now I doubt it's related to the lambda cleaner: I was able to perform a request to http://localhost:9001/2015-03-31/functions/hello/invocations as a POST, replicating the same request; it works and returns 200. Something really weird happens between creating an http request to the docker container and the port used for that. Trying to investigate.
@frozenbonito Could you point me in the right direction here? I know it's expected that runners can be instantiated on every single request. But for docker, we might want to preserve a singleton? Right now every request creates a new container with a different port, because of course there will always be a new instance of DockerRunner and DockerContainer.
@frozenbonito I was able to figure out the main problem.
Looks like the container is not ready by the time it performs a new request. For tests, I was able to put in a sleep timer and it started working again. That's one of the reasons I suggested using dockerode, so it could properly rely on streams and callbacks for things to be ready.
@pragmaticivan
Thank you for your investigation.
Looks like the container is not ready by the time it performs a new request. For tests, I was able to put a sleep timer and it started working again. That's one of the reasons I suggested using dockerode so it could properly rely on streams and callbacks for things to be ready.
I see, but I think dockerode
cannot not fix this issue.
serverless offline
have to wait for API server starting in container, but it is impossible to get the status of program in the container via docker API.
Therefore, serverless offline
waits for the API server startup by parsing the output.
The promise in the above code waits for container output like Lambda API listening on port 9001.... This output usually means the container is ready (ref: https://github.com/lambci/docker-lambda#running-in-stay-open-api-mode). I think the main problem is that the log line appears before the API server has started completely when the golang runtime is used.
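The log-parsing wait described above can be sketched roughly like this. This is illustrative only: `waitForReadyLine` and `logStream` are hypothetical names, not serverless-offline's actual internals.

```javascript
// Rough sketch of waiting for the container's "listening" line before
// resolving. `logStream` is any readable stream carrying the container's
// output; the function name and wiring are assumptions for illustration.
function waitForReadyLine(
  logStream,
  readyLine = 'Lambda API listening on port 9001...',
) {
  return new Promise((resolve) => {
    let buffered = ''
    const onData = (chunk) => {
      buffered += chunk.toString()
      if (buffered.includes(readyLine)) {
        logStream.off('data', onData)
        resolve()
      }
    }
    logStream.on('data', onData)
  })
}
```

As the rest of this comment explains, with the golang image this line can appear before the API server is actually accepting connections, so a check like this alone is not sufficient.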
Please refer to init.go. It is the common entry point for all runtimes except golang (e.g. node12.x).
Roughly speaking, it works as follows:
1. waits for bootstrap to run (here)
2. launches the API server (here)
3. outputs Lambda API listening on port 9001... (here)

In this case, the API server is ready when Lambda API listening on port 9001... appears.
In the golang image, the entrypoint is not init.go but aws-lambda-mock.go (the entrypoint of lambci/lambda:go1.x).
It works as follows:
1. executes init.go to launch the API server (here)
2. in init.go, skips waiting for bootstrap because DOCKER_LAMBDA_NO_BOOTSTRAP is set (here)
3. launches the API server in init.go
4. outputs Lambda API listening on port 9001... in init.go
5. waits for the API server to start (here)
6. executes the handler binary, creates a socket communication client for it, etc. (here)
If we send a request before step 6, a connection refused error occurs even after Lambda API listening on port 9001... has appeared.
For easy reproduction, execute the following command.
$ docker run --rm -e DOCKER_LAMBDA_NO_BOOTSTRAP=1 -e DOCKER_LAMBDA_STAY_OPEN=1 lambci/lambda:provided
Then, send a request after Lambda API listening on port 9001... appears.
$ curl -d '{}' http://localhost:9001/2015-03-31/functions/myfunction/invocations
You can see connection refused.
There is no way to know when the container is ready in the golang image. I think this is a lambci/lambda issue, but it is possible to add a retry of the request on connection refused.
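The retry idea could look roughly like the sketch below. `sendRequest` and the retry parameters are assumptions for illustration, not existing serverless-offline code.

```javascript
// Hypothetical sketch of retrying the invocation request while the
// container's API server is still starting up. Only connection-refused
// errors are retried; anything else is rethrown immediately.
async function invokeWithRetry(sendRequest, { retries = 5, delayMs = 100 } = {}) {
  for (let attempt = 0; ; attempt += 1) {
    try {
      return await sendRequest()
    } catch (err) {
      // Retry only while the API server is not yet accepting connections.
      if (err.code !== 'ECONNREFUSED' || attempt >= retries) throw err
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}
```

Unlike a fixed sleep, this resolves as soon as the server actually accepts a connection, so it adds latency only when the race is hit.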
Updated:
Sorry, the above command is incorrect (it is missing port exposure). With the port exposed, the request will time out.
@frozenbonito I agree with you. It feels like lambci/lambda should have some sort of health check and only change its container status/state to running when it's properly ready to be used. And on our side, we would need a way to verify that.
Note:
This might be something everyone experiences. I was able to work around it by adding a 100 ms wait before performing the request, so for now it might be worth adding that until we can address the root cause.
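The stop-gap could be something as simple as the following sketch. The 100 ms figure and `sendRequest` are assumptions for illustration, not a vetted fix.

```javascript
// Stop-gap sketch: wait briefly after the "listening" log line before the
// first invocation request, giving the golang image's API server time to
// come up. READY_DELAY_MS and sendRequest are illustrative assumptions.
const READY_DELAY_MS = 100

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function invokeAfterGracePeriod(sendRequest) {
  await delay(READY_DELAY_MS)
  return sendRequest()
}
```

A fixed delay masks the race rather than removing it, which is why retrying on connection refused would be the more robust option.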
Another suggestion for the docker integration: we might want to be able to broadcast the stdout and stderr of a function.
e.g.: I have a Golang function that prints some stuff on stdout, and right now that doesn't show anything in my terminal.
It works when I do sls invoke local -f myfunc, though.
ex: I have a Golang function that prints some stuff on stdout and right now that doesn't show anything in my terminal.
I tried fmt.Println(), fmt.Fprintln() (to stdout), and fmt.Fprintln() (to stderr), but their output appeared in the serverless-offline log (though their order did not seem to be preserved). What kind of code does not show output?
The output order issue may be related to execa, so I hope dockerode will fix this.
@frozenbonito I found out that the stdout issue was actually my own problem while refactoring.
Using dockerode solves the problem:
{"level":"info","msg":"Sending 1","time":"2020-01-31T20:24:01Z"}
{"level":"info","msg":"Sending 2","time":"2020-01-31T20:24:01Z"}
{"level":"info","msg":"Sending 3","time":"2020-01-31T20:24:01Z"}
{"level":"info","msg":"Sending 4","time":"2020-01-31T20:24:01Z"}
Sending raw
Just tested using github.com/sirupsen/logrus for the first four lines and fmt.Println() for the last one.
Is there any date for releasing this branch as a stable npm package?