robertvacareanu/llm4regression
Examining how large language models (LLMs) perform across various synthetic regression tasks when given (input, output) examples in their context, without any parameter updates
Python
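The core setup can be illustrated with a short sketch. This is not the repository's code; the prompt format, the synthetic linear task, and the idea of parsing the model's completion as a number are assumptions for illustration only:

```python
# Minimal sketch (illustrative, not the repository's actual code) of
# in-context regression: serialize (input, output) pairs into a text
# prompt and ask an LLM to complete the output for a held-out input.
# No gradient updates are involved; the model only sees the examples.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic regression task: y = 3*x1 - 2*x2 (noise-free).
X = rng.uniform(0, 10, size=(20, 2))
y = 3 * X[:, 0] - 2 * X[:, 1]


def make_prompt(X, y, x_query):
    """Serialize in-context (input, output) examples followed by a query."""
    lines = []
    for xi, yi in zip(X, y):
        feats = ", ".join(f"x{j + 1}={v:.2f}" for j, v in enumerate(xi))
        lines.append(f"Input: {feats}\nOutput: {yi:.2f}")
    feats = ", ".join(f"x{j + 1}={v:.2f}" for j, v in enumerate(x_query))
    lines.append(f"Input: {feats}\nOutput:")  # model completes this line
    return "\n".join(lines)


prompt = make_prompt(X, y, x_query=np.array([5.0, 1.0]))
print(prompt)
# The prompt would then be sent to an LLM; its completion is parsed as
# the predicted numeric output for the query input.
```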