/gpt-torch

Compress HTML as much as possible for LLM inference.
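
A minimal sketch of the idea, not necessarily this repository's actual implementation: strip scripts, styles, tags, and redundant whitespace so the remaining visible text consumes far fewer tokens when passed to an LLM. The `compress_html` helper below is a hypothetical name used only for illustration and relies solely on the Python standard library.

```python
import re
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script>/<style>/<noscript> contents."""

    SKIP_TAGS = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP_TAGS:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP_TAGS and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0:
            self.parts.append(data)


def compress_html(html: str) -> str:
    """Return whitespace-collapsed visible text extracted from raw HTML."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts)
    return re.sub(r"\s+", " ", text).strip()


if __name__ == "__main__":
    sample = "<html><head><style>p{color:red}</style></head><body><p>Hello,   <b>world</b>!</p></body></html>"
    print(compress_html(sample))  # -> "Hello, world !"
```

Dropping markup rather than abbreviating it is the simplest lever for token savings; a fuller approach might also preserve selected structure (headings, links, tables) when the downstream task needs it.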
