enable scripted submission workflow
znd4 opened this issue · 2 comments
First of all, thanks for the cool project! I love bringing as much to the terminal as possible.
Second, I'm new to Rust, so I'll probably use some incorrect lingo.
I like keeping my Rust leetcode solutions inside a full Cargo repo so that rust-analyzer works. I'm currently copy-pasting from my editor into the browser, but it'd be cool to be able to fire off a script like:
```sh
leetcode test ... --code="$(cat .../1234_some_problem.rs | sed ...)" 1234
```
or passing a file path (using `exec` this time):

```sh
cat .../1234_some_problem.rs | sed ... > /tmp/solution.rs
leetcode exec ... --path=/tmp/solution.rs
```
or via stdin (super fancy):
```sh
cat .../1234_some_problem.rs | sed ... | leetcode test ... --path=- 1234
```
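Whichever shape it takes, the elided `sed ...` above is just some filter that strips local-only test scaffolding before the code reaches the CLI. As an illustration only, here's a minimal Rust stand-in; the `// --- local tests ---` marker is invented for this sketch and isn't a convention the CLI knows about:

```rust
// strip_tests: copy a solution from stdin to stdout, stopping at a marker
// line, so only the leetcode-visible code gets submitted.
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut out = io::stdout();
    for line in stdin.lock().lines() {
        let line = line?;
        // Invented convention: everything below this marker is local test
        // scaffolding that should not be sent to leetcode.
        if line.trim() == "// --- local tests ---" {
            break;
        }
        writeln!(out, "{line}")?;
    }
    Ok(())
}
```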
I have a rough idea of where I could add some spaghetti code to support either of these, but I'm curious which you'd prefer, or whether you think this use case should be supported some other way (e.g. via some configuration).
Of course, if there's a workaround to get rust-analyzer working with the standalone leetcode solution files, that'd be great, although I could imagine people still wanting the script-based approach (e.g. if they really want to keep their leetcode solutions in a git repo).
---

I have also been doing leetcode in Rust, and thinking about how to improve and automate parts of the workflow for interacting with problems. I think it should be possible to:
- query leetcode for `content` (both the description text and a parse for `Constraints:`), `codeSnippet.rust`, `envInfo.Rust`, `metaData`, and `sampleTestCase` => curl zsh PoC | graphql `getQuestionDetail` PoC
- create a Cargo package that includes:
  - `testcases.txt` (or `.json`?) beginning with the `sampleTestCase` data
  - `main.rs` with a test harness that will `use Solution;`, read the `testcases` file, and run `cargo test` for each test case (probably via a declarative macro) invoking the `solution.rs` (see the sketch after this list)
  - `solution.rs` with the `codeSnippet.rust` as a standalone file that matches exactly what is written to the leetcode editor UI
  - optionally, request a judge via the API, run the code against the loaded list of test cases, and save the judge results locally for additional iterative testing
- create a template commit message and solution post with pre-filled data for the problem title, link, description, constraints, and example and/or useful added testcases, plus blank/TODO sections for the approach / analysis / lessons learned
Ideally, a script that calls `cargo new` (or perhaps something like `cargo-generate`) would build a new Cargo project, either querying leetcode directly or using the saved result of a CLI query, and create these files; then we can just open `solution.rs`, leverage our LSPs or IDEs to see type and build errors, and add any testcases we feel would be useful to automatically test against locally.
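As a sketch of the query half, something like the following could fetch the fields named in the list above. The endpoint and field names mirror what the leetcode web client sends today, but since the API is undocumented (see below), every name here is an assumption that may break:

```rust
// Sketch only; assumes reqwest = { version = "0.12", features = ["blocking", "json"] }
// and serde_json as dependencies. The endpoint and field names mirror the
// undocumented queries the leetcode web client sends.
use serde_json::json;

fn fetch_question(title_slug: &str) -> Result<serde_json::Value, reqwest::Error> {
    let query = r#"
        query getQuestionDetail($titleSlug: String!) {
            question(titleSlug: $titleSlug) {
                content
                codeSnippets { langSlug code }
                envInfo
                metaData
                sampleTestCase
            }
        }"#;

    reqwest::blocking::Client::new()
        .post("https://leetcode.com/graphql")
        .json(&json!({
            "query": query,
            "variables": { "titleSlug": title_slug },
        }))
        .send()?
        .json()
}

fn main() -> Result<(), reqwest::Error> {
    // "two-sum" is the slug from the problem URL, chosen purely as an example.
    let detail = fetch_question("two-sum")?;
    println!("{}", serde_json::to_string_pretty(&detail).unwrap());
    Ok(())
}
```

Dumping the raw JSON to disk would give exactly the "saved result of a cli query" that the scaffolding script could then consume offline.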
This should be flexible enough to add MLE / TLE checks and performance testing, though some guessing and experimentation will be required to establish leetcode system limits that reflect what's run locally on an arbitrary platform.
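On the TLE side, a local check can start as crude as the sketch below, written as if it lived inside the harness's test module above. The two-second budget is a pure guess at leetcode's Rust limit, which is exactly the kind of number that needs the calibration just mentioned:

```rust
use std::time::{Duration, Instant};

/// Fail if `f` overruns `limit`. The budget is a guessed stand-in for
/// leetcode's real (unpublished) limit, so calibrate it empirically.
fn assert_within<F: FnOnce()>(limit: Duration, f: F) {
    let start = Instant::now();
    f();
    let elapsed = start.elapsed();
    assert!(elapsed <= limit, "local TLE: {elapsed:?} against a {limit:?} budget");
}

#[test]
fn stress_case_under_guessed_limit() {
    // Input sized to the problem's stated constraints (here: two_sum's
    // nums.length <= 10^4, per the illustrative example above).
    let nums: Vec<i32> = (0..10_000).collect();
    assert_within(Duration::from_secs(2), || {
        let _ = Solution::two_sum(nums, 19_997);
    });
}
```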
It's heartening to find someone who is (or was, not too long ago) thinking about the same kinds of problems and solutions, but it may be up to us to implement, as @skygragon has been quiet for a few years.
There's also the reliance on the closed-source and (as far as I can tell) undocumented leetcode GraphQL API, which they may change and break at any time, and which hypothetically could be more likely to change if a project like this gains steam (maybe ¯\_(ツ)_/¯).