[Feature Request] ComfyUI integration (code snippet for obtaining json inside)
SRagy opened this issue
I noticed that someone already mentioned this in the discussions section, and the suggestion there was that the primary obstacle is that the OpenPose preprocessor nodes for ComfyUI output images rather than JSON.
The creators of the comfyui_controlnet_aux node pack seem to provide a method for extracting the required JSON data here: https://github.com/Fannovel16/comfyui_controlnet_aux#faces-and-poses. I'm no expert, but hopefully this saves some time. The relevant part is quoted below.
```js
const poseNodes = app.graph._nodes.filter(node =>
    ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"].includes(node.type));
for (const poseNode of poseNodes) {
    const openposeResults = JSON.parse(app.nodeOutputs[poseNode.id].openpose_json[0]);
    console.log(openposeResults); // An array containing OpenPose JSON for each frame
}
```
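In case it's useful, the same lookup could presumably be wired into a small web extension so the JSON is logged automatically after each run. A rough, untested sketch (the extension name is made up; the import paths and the "executed" event are how I understand the stock ComfyUI front-end to work):

```js
import { app } from "../../scripts/app.js";
import { api } from "../../scripts/api.js";

app.registerExtension({
    name: "pose.json.logger", // made-up name for illustration
    setup() {
        // The front-end dispatches an "executed" event as each node's outputs
        // come back; detail.node is the node id and detail.output holds the
        // UI outputs (including openpose_json, when present).
        api.addEventListener("executed", ({ detail }) => {
            const node = app.graph.getNodeById(Number(detail.node));
            const poseTypes = ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"];
            if (node && poseTypes.includes(node.type) && detail.output?.openpose_json) {
                console.log(JSON.parse(detail.output.openpose_json[0]));
            }
        });
    },
});
```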
Thanks for filing the issue.
I actually have a WIP from several weeks ago here: https://github.com/huchenlei/ComfyUI-openpose-editor. I noticed that the ControlNet preprocessor node pack in Comfy exposes the JSON to the front-end. However, I failed to get my initial design working due to how ComfyUI's dataflow works.
I saw that someone ported the openpose-editor project to Comfy to serve as a data provider node, but that use case seems too limited in my opinion.
I probably need to implement the editor as a right-click action that can modify the result from a preprocessor node. WDYT?
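Roughly what I'm picturing, as an untested sketch - `openPoseEditor` here is just a placeholder for launching the actual editor dialog, and the hooks are ComfyUI's standard extension API:

```js
import { app } from "../../scripts/app.js";

app.registerExtension({
    name: "openpose.editor.contextmenu", // placeholder name
    beforeRegisterNodeDef(nodeType, nodeData) {
        const poseTypes = ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"];
        if (!poseTypes.includes(nodeData.name)) return;
        // Append an entry to the node's right-click menu.
        const original = nodeType.prototype.getExtraMenuOptions;
        nodeType.prototype.getExtraMenuOptions = function (canvas, options) {
            original?.apply(this, arguments);
            options.push({
                content: "Edit pose",
                callback: () => {
                    const json = app.nodeOutputs[this.id]?.openpose_json?.[0];
                    if (!json) return; // node hasn't produced output yet
                    openPoseEditor(JSON.parse(json)); // placeholder for the editor UI
                },
            });
        };
    },
});
```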
Hey, thanks for replying so promptly.
Firstly, I think your openpose editor is superior to the other ones - you've done an excellent job making it flexible and powerful, so even having a v0.1 as just a data provider node would be useful, especially to anyone who has used the webui implementation and already has some saved poses and such. Other editors I've found don't allow for hands, faces, or missing joints, whereas yours does all of this and goes beyond with grouping, reflections, etc.
As for everything else, I should start with the disclaimer that I have limited knowledge of ComfyUI - I'm just getting to grips with it myself at the moment - but I know it can be useful to talk these things over, so I'll try my best (although I probably haven't properly grasped the scope of the issue).
What aspect of the dataflow was limiting in your attempt to implement the node? As I understand it, it isn't really possible to do partial graph/workflow evaluations, which would be useful for generating pose previews before computing the rest of the graph. However, it seems to me that a suboptimal workaround would be to generate the poses on one run and then conditionally detach the preceding nodes (i.e. the preprocessor) for subsequent runs - at worst this could be done manually, or perhaps via a toggleable option to rerun the preprocessor or not.
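For what it's worth, the toggle itself looks mechanically simple from the browser console - something like the untested sketch below, where the mode values follow litegraph's conventions (0 = always, 2 = never/muted); I haven't checked how this interacts with ComfyUI's output caching:

```js
// Enable or disable the pose preprocessor nodes for subsequent queue runs.
function setPoseProcessorsEnabled(enabled) {
    const poseNodes = app.graph._nodes.filter(node =>
        ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"].includes(node.type));
    for (const node of poseNodes) {
        node.mode = enabled ? 0 : 2; // 0 = always run, 2 = muted/skip
    }
    app.graph.setDirtyCanvas(true, true); // redraw so the state change is visible
}

setPoseProcessorsEnabled(false); // e.g. once the first run has produced poses
```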