"Don't be afraid to restart, if there is a better way."
To get involved in this project, just remember the following:
- Manage Everything That Matters
  - Convert every factor that can influence model behavior into a readable, executable configuration (see the sketch after this list).
  - e.g., dataset versioning, data preprocessing logic updates, adjustments to loss functions, learning rates, evaluation protocols, etc.
- Collaborate Widely, Automate Wisely
  - Amplify your efforts through collaboration, not only with humans but also with AI (see the review sketch after this list).
  - e.g., AI-driven research discussion, automated commenting workflows, AI-based consistency checks on logic implementations, automatic cloud launching, automatic tutorial generation, etc.
- Share and Inspire
  - Make your small experiments known and find significance in even minor discoveries. Inspire and be inspired by letting more people connect with your vision and imagination.
  - e.g., technical blogs, seminars, paper publications, open discussions, participation in academic reviews, etc.
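To make "readable & executable configurations" concrete, here is a minimal sketch in Python. The `ExperimentConfig` name and every field in it are illustrative assumptions, not part of this repository; the point is that dataset version, preprocessing, loss, learning rate, and evaluation protocol all live in one diffable object that can be logged with every run.

```python
from dataclasses import dataclass, asdict

# A minimal, hypothetical sketch of an executable experiment configuration.
# Every factor that can influence model behavior is a named, readable field.
@dataclass(frozen=True)
class ExperimentConfig:
    dataset_version: str = "v1.2"         # dataset versioning
    preprocessing: str = "lowercase+bpe"  # data preprocessing logic
    loss_fn: str = "cross_entropy"        # loss function choice
    learning_rate: float = 3e-4           # optimizer hyperparameter
    eval_protocol: str = "dev-bleu"       # evaluation protocol

if __name__ == "__main__":
    config = ExperimentConfig()
    print(asdict(config))  # serializable, so it can be stored alongside each run
```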
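The automation examples above can start as a single API call. Below is a minimal sketch of an AI-based consistency check on a code diff, assuming the official `openai` Python SDK is installed; the `review_diff` helper, the model name, and the prompt wording are assumptions for illustration, not this project's fixed workflow.

```python
from openai import OpenAI  # assumes the official openai SDK (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_diff(diff_text: str) -> str:
    """Ask a model to flag logic inconsistencies in a code diff.

    Hypothetical helper: the model name and prompt are illustrative
    assumptions, not part of this project.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You review code diffs for logic inconsistencies."},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content
```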
- Dynamic Configuration Tracking
  - Ensure that exactly one factor varies between any two compared experiments (see the sketch at the end of this list).
  - Help explore, compare, and collapse highly complex configurations.
  - Draw a successful experiment as a path in the tree of configurations.
- AI Software Engineer
- Experimental Hub
- MBTI of LLMs
- arXiv HTML (experimental) page translator: Chrome extension
- Practical README.md generator for Hugging Face models
- Mining unreached questions
- GPT/Claude/Gemini-based code review and research discussion.
- Pre-training of BERT, RoBERTa, Longformer, GPT, and T5.
- Fine-tuning of machine translation models with state-of-the-art LLMs.
- 1M context length fine-tuning method.
- GPT-based data augmentation.
- arXiv Korean summarization with EEVE.
  - It cannot replace GPT-4.
- Participating in challenges
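To make the "Dynamic Configuration Tracking" rule above concrete, here is a minimal sketch of the single-varying-factor check. The `diff_configs` helper and the example configurations are hypothetical, not this project's actual tooling; they only illustrate that when exactly one key differs, any change in results is attributable to that factor.

```python
def diff_configs(a: dict, b: dict) -> set:
    """Return the set of keys whose values differ between two configs.

    Hypothetical helper illustrating the rule that exactly one factor
    should vary between two compared experiments.
    """
    keys = set(a) | set(b)
    return {k for k in keys if a.get(k) != b.get(k)}


baseline = {"dataset_version": "v1.2", "learning_rate": 3e-4, "loss_fn": "cross_entropy"}
candidate = {"dataset_version": "v1.2", "learning_rate": 1e-4, "loss_fn": "cross_entropy"}

changed = diff_configs(baseline, candidate)
assert len(changed) == 1, f"More than one factor varies: {changed}"
print(changed)  # {'learning_rate'} -- the comparison isolates a single factor
```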