UniversalDevicesInc/Polyglot

Provide means for a node server to determine success/fail of a message to ISY

Opened this issue · 8 comments

Currently, a node server has no means to determine the success or failure of a message sent to the ISY, such as a status update, node add, or any other ISY node server API message. It would be desirable for Polyglot to provide a way for a node server to get a response message back, containing at least a success/fail indication, or preferably the HTTP status code should the node server want it. This mechanism should be completely optional and should not require any existing node servers to be modified (i.e., it should extend, not modify, the existing APIs).
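As a rough illustration of what such an opt-in acknowledgement could look like, here is a minimal sketch in Python. The "result", "status_code", and "seq" names echo the messages that appear later in this thread; everything else ("addnode", the field layout, the I/O plumbing) is purely illustrative and not a committed API. The node server attaches a sequence number to an outgoing command and, only if it cares, matches the reply:

```python
import json

# Hypothetical example only: the "seq" field and the shapes below are
# illustrative, not the actual Polyglot API.  Attaching a sequence number
# requests an acknowledgement; omitting it keeps today's fire-and-forget
# behavior, so existing node servers need no changes.
request = {
    "addnode": {
        "node_address": "n002_sonoscontrol",
        "node_def_id": "sonoscontrol",
        "name": "Sonos Control",
        "seq": 1007,
    }
}
print(json.dumps(request))  # node servers talk to Polyglot as JSON on stdout

# Acknowledgement Polyglot could later write to the node server's stdin:
ack = json.loads('{"result": {"status_code": 200, "seq": 1007, "elapsed": 0.031}}')
result = ack["result"]
if result["seq"] == 1007 and result["status_code"] == 200:
    print("add node succeeded")
else:
    print("add node failed, HTTP status", result["status_code"])
```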

Hi Mike,

I very much agree and I think it should be mandatory and not optional.

With kind regards,


Michel Kohanim
CEO

(p) 818.631.0333
(f) 818.436.0702
http://www.universal-devices.com/



An initial version of support has been added. At present, only the "add node" API requests (and handles) the response; this can be extended as needed.
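For readers wondering how a request and the ISY's reply get matched up, the sketch below shows one common way to do it with a sequence number and a per-request event. All names are invented for illustration and do not reflect Polyglot's actual internals:

```python
import itertools
import threading

# Rough correlation sketch -- names invented for illustration only.
_seq = itertools.count(1)
_pending = {}   # seq -> threading.Event waiting on a reply
_results = {}   # seq -> HTTP status code returned by the ISY

def send_add_node(address, node_def_id, name, wait=True, timeout=5.0):
    seq = next(_seq)
    _pending[seq] = threading.Event()
    # ... here the real code would queue the add-node request to the ISY,
    # tagging it with `seq` so the reply can be matched up later ...
    if not wait:
        del _pending[seq]
        return None                      # caller opted out: fire-and-forget
    _pending[seq].wait(timeout)
    return _results.pop(seq, None)       # e.g. 200 on success, None on timeout

def on_isy_response(seq, status_code):
    # Called when the ISY's HTTP response for request `seq` arrives.
    _results[seq] = status_code
    event = _pending.pop(seq, None)
    if event is not None:
        event.set()
```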

This mechanism is now implemented - this issue is marked closed.

Is how to use this documented somewhere?

No, there's no real documentation for external use -- the mechanism ended up being almost completely internal and is not exposed at the Node and SimpleNodeServer object level.

There's no good reason for that; it's mainly that I couldn't come up with an obvious single solution that made sense (SimpleNodeServer became NotSoSimpleNodeServer pretty quickly!). Some real-life use cases would be really helpful!
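As one possible use case, a node server author might want to retry or log a failed status report. Below is a minimal sketch of an opt-in hook that would leave existing node servers untouched; the class and method names are invented stand-ins, not the real polyglot package API:

```python
# Illustrative only: one way the response could surface at the Node level
# without touching existing node servers.  Nodes that do not override the
# hook behave exactly as they do today.
class Node:
    def report_driver(self, driver, value, uom):
        status = self._send_status_to_isy(driver, value, uom)
        self.on_isy_result("report_driver", status)   # new, optional hook

    def on_isy_result(self, operation, status_code):
        pass   # default: ignore the result, preserving current behavior

    def _send_status_to_isy(self, driver, value, uom):
        return 200   # stand-in for the real REST call to the ISY

class SonosNode(Node):
    # A node server that cares about failures simply overrides the hook.
    def on_isy_result(self, operation, status_code):
        if status_code != 200:
            print("ISY rejected", operation, "with HTTP status", status_code)

SonosNode().report_driver("ST", 42, 56)
```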

Re-opening this in hopes that we can revive this one.

@mjwesterhof I have been running this for several days now, and everything seems to be stable. I have noticed it is quite a bit noisier in the log, with all sorts of stuff like this:

Sonos STDIN: {"result": {"text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?><nodeInfo><node flag=\"128\" nodeDefId=\"sonoscontrol\"><address>n002_sonoscontrol</address><name>Sonos Control</name><family instance=\"2\">10</family><type>1.1.0.0</type><enabled>true</enabled><deviceClass>0</deviceClass><wattage>0</wattage><dcPeriod>0</dcPeriod><pnode>n002_sonoscontrol</pnode><ELK_ID>A04</ELK_ID></node><properties></properties></nodeInfo>", "status_code": 200, "seq": 1007, "elapsed": 0.03100109100341797}}
DEBUG [05-28-2016 01:26:29] polyglot.nodeserver_manager: Sonos: **DEBUG: probe: st=200 na="sonoscontrol" text: <?xml version="1.0" encoding="UTF-8"?><nodeInfo><node flag="128" nodeDefId="sonoscontrol"><address>n002_sonoscontrol</address><name>Sonos Control</name><family instance="2">10</family><type>1.1.0.0</type><enabled>true</enabled><deviceClass>0</deviceClass><wattage>0</wattage><dcPeriod>0</dcPeriod><pnode>n002_sonoscontrol</pnode><ELK_ID>A04</ELK_ID></node><properties></properties></nodeInfo>
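For reference, the "result" payload in that STDIN line already carries everything a node server needs to check success. The snippet below just unpacks the fields shown above (the XML "text" value is shortened here for readability; the surrounding stdin-reading plumbing is omitted):

```python
import json

# The line shown above, trimmed to the fields that matter here.
stdin_line = ('{"result": {"text": "<?xml version=\\"1.0\\" ...", '
              '"status_code": 200, "seq": 1007, "elapsed": 0.031}}')

result = json.loads(stdin_line)["result"]
ok = result["status_code"] == 200            # HTTP status from the ISY
print("request", result["seq"],
      "succeeded" if ok else "failed",
      "in", result["elapsed"], "seconds")
```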

Is there anything else you want to do with this issue before we work on rolling 0.0.3 up to unstable-rc?

Yep, it's very noisy at the DEBUG level -- I usually run at INFO unless I'm actually trying to capture a debug log (see the sketch after this comment). We can disable the STDIN and probe response text altogether (there are no levels beyond DEBUG), but if we do that, we can't expect that level of data from a user to help debug a problem.

Thoughts?
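For context, a minimal sketch of the INFO-versus-DEBUG trade-off being discussed, assuming Polyglot uses Python's standard logging module; the logger name comes from the excerpt above, and everything else is illustrative:

```python
import logging

# Day-to-day: keep the log quiet at INFO.
logging.basicConfig(level=logging.INFO)
# logging.basicConfig(level=logging.DEBUG)  # switch when collecting a debug log

log = logging.getLogger("polyglot.nodeserver_manager")

# At INFO the STDIN/probe dumps are suppressed; at DEBUG they appear, which
# is exactly the data a user would need to capture when reporting a problem.
log.debug('Sonos STDIN: {"result": {"status_code": 200, "seq": 1007}}')
log.info("node server running; ISY responses handled internally")
```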

I'm good with that. Sounds like this issue is resolved. Nice job.