openai/weak-to-strong

Some thoughts

SEU-zxj opened this issue · 1 comment

Thank you for your exceptional work!🥰

At present, we humans identify problems and create numerous datasets for various tasks, then train models to learn and solve those tasks.
This paradigm relies on the human capacity to supervise and guide model behavior, since these tasks are at or below the human level. I am wondering whether superhuman models ought to be able to independently identify and formulate real-world problems (above the human level) and attempt to solve them on their own. We humans, or perhaps other superhuman entities, could then act as peer reviewers, much like current academic practice (not that I'm implying reviewers and authors are not at the same level 😂).

Just sharing some personal reflections. 🙈

Many interesting philosophical questions for humanity to sort out before we welcome any robot overlords :)