Design for Create Skill Contribution wizard
To improve the contribution experience, UXD recommends dividing the contribution form into wizard steps. The current contribution task is long and complex and would benefit from being broken into smaller, more manageable steps. This lets the user focus on smaller tasks without feeling overwhelmed, and it lets us provide more detailed instructions and tailored contextual help.
UXD is working with the content team to finalize the microcopy. Another issue will be created to capture those changes.
The initial step is to select which contribution type the user wants to submit: Knowledge or Skill. This issue covers the skill contribution experience; for knowledge, see issue #402.
Step 1: Details
The user details should be pulled in from Git if possible. The user can add contributors. Once the wizard is submitted, the newly added contributor(s) will receive a notification. The brief summary should be a description of the purpose of this contribution in fewer than 60 characters. The detailed description should be a description of the contribution of 40 characters or more. The Directory path determines the domain/subdomain in the taxonomy in which this contribution will appear.
Step 2: Create training data
Users will need a minimum of 5 examples. If their skill is a grounded skill and requires context, they will need to add a context for each QNA pair. Users will not be able to move to the next step until all 5 samples have been filled in. Note: a scroll bar will appear within the content area so users can scroll their content if needed.
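For reference, here is a rough sketch of how the five samples could serialize into the skill's qna.yaml seed_examples (field names follow the current skills schema as we understand it; the exact format may change):

```yaml
# Rough sketch only: how the Step 2 inputs might map to seed_examples in qna.yaml.
seed_examples:
  - question: What is a synonym for "happy"?
    answer: A synonym for "happy" is "joyful".
  - question: What is a synonym for "fast"?
    answer: A synonym for "fast" is "quick".
  # ...three more pairs, for a minimum of five.
  # For grounded skills, each pair would also include a context field.
```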
Step 3: Attributions
We are hoping most of the attribution details can be pulled in during the document ingestion process. A warning icon alerts the user to something that needs their attention. Users can click Edit to modify the details.
Step 4: Review
The last step in the wizard includes a summary of what the user has input so they may confirm before submitting their contribution. Clicking Submit sends a notification to the InstructLab triager to review the submission.
Hi @Mdenisco! Some feedback:
UXD recommends
It might be a good idea to explain this acronym for community members who aren't familiar.
Step 1: Details
The user details should be pulled in from Git if possible.
It would probably make sense for there to be a user onboarding step where the user's git username (for whatever git system is backing the InstructLab installation; it may be GitHub, it may be something else) is configured as a one-time exercise against a user profile.
Once the wizard is submitted, the newly added contributor(s) will receive a notification.
What is the form of the notification? Is it an email? Is it an in-app notification? Is it transitory (e.g. a toaster popup) or in a queue inside the app?
The brief summary should be a description of the purpose of this contribution in fewer than 60 characters. The detailed description should be a description of the contribution of 40 characters or more.
How do these map to the fields in the qna.yaml? The qna.yaml format for skills only has one such field: task_description. Which of the two (brief summary or detailed description) is meant to map to task_description? For the one that does not: why does it exist, and what would it be used for?
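For reference, a skills qna.yaml today looks roughly like this (illustrative values; the exact version number and field set may differ slightly from the current schema):

```yaml
# Rough shape of a skills qna.yaml (illustrative values only).
version: 2                     # schema version; may differ in the current repo
task_description: "Teach the model to suggest synonyms"   # the one description-like field
created_by: some-git-username  # git username of the contributor
seed_examples:
  - question: What is a synonym for "happy"?
    answer: A synonym for "happy" is "joyful".
```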
The Directory path determines the domain/subdomain in the taxonomy in which this contribution will appear.
Why not just ask the user for a name for the skill, and use that name as the leaf directory name in the taxonomy? Triagers are likely going to change where in the taxonomy the contribution lives anyway. Exposing end users to the directory tree might make this seem more complicated than it is.
You could probably have them select a suggested location in the tree (as a separate field from what they want to name it) via a nested category list, where each directory in the tree past taxonomy/ is a category / subcategory / sub-subcategory / etc.
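As a sketch of that idea, the wizard could collect a name and a category path and derive the directory from them (these field names are hypothetical, not part of any existing schema):

```yaml
# Hypothetical wizard fields, not part of any existing schema.
skill_name: haiku              # becomes the leaf directory name
suggested_category:            # chosen from a nested category list
  - writing
  - freeform
  - poetry
# Derived suggestion (triagers may still move it):
#   compositional_skills/writing/freeform/poetry/haiku/qna.yaml
```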
Step 2: Create training data
Calling this step "training data"
I wouldn't name this "Create training data", because it isn't training data. :) It's seed data that will be used to generate training data for the model at the end of SDG (synthetic data generation). But we're not at that step yet. I would simply call this "Create quiz" or "Create questions and answers" or something like that - naming it for what it is, so it won't cause confusion about other parts of the process.
Disambiguate between knowledge and skill and what is possible
It would be nice to have more contextual information about the differences between the types up front and more examples of what is a skill vs. knowledge. In our initial pilot, the difference between what you can do with a skill vs. a knowledge contribution was the biggest point of confusion so addressing that inline with this design would be very helpful for users.
If their skill is a grounded skill and requires context, they will need to add a context for each QNA pair.
If they're creating a grounded skill, there must be a context for every question and answer pair. The way this is laid out, it makes it seem as if a context is something that question and answer pairs can have or not have arbitrarily within the skill. They either all have it (grounded skill) or none have it (compositional skill).
So you probably need a control higher up, or even a branch in this wizard between compositional and grounded skills. I would suggest the latter, with additional explanation as to why one would use one vs. the other.
The context should probably appear visually above the question and answer drafts, since you have to start with the grounding and draft the question and answer from that, not vice versa. You want to reinforce the appropriate order of operations.
The context is likely going to be much longer than a single line. You can look through sample skills in the taxonomy repo to see this.
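For example, one seed example of a grounded skill might look roughly like this (content invented for illustration; note the multi-line context block):

```yaml
# Illustrative grounded-skill seed example; the context is usually multi-line.
seed_examples:
  - context: |
      Name      | Department | Start date
      A. Rivera | Finance    | 2021-03-01
      B. Chen   | Marketing  | 2019-11-15
    question: Which person started most recently?
    answer: A. Rivera started most recently, on 2021-03-01.
```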
What is the difference between compositional and grounded skills?
Because these are two different skill types, it might make sense to disambiguate between the two types up front for the user and ask which they intend?
Sample question and answer text in fields
The placeholder question and answer text in the fields isn't super helpful. It might be better to have realistic sample question and answer text in there.
Step 3: Attributions
Attributions for context data entered in a skills qna.yaml are not the same as for a knowledge qna.yaml. We don't accept PDFs in skills qna.yaml, so the context in a skills qna.yaml right now is not processed the same way we process PDFs for knowledge qna.yaml. That means there is no automated process for determining attribution; these fields will need to be filled out manually.
It would make sense to display the context above the attribution fields to be filled out.
Even better: provide the attribution fields on the same screen as the context, instead of having the user copy/paste context from three different sources and then have to backtrack / retrace their steps through the same three again.
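For reference, attribution entries in the taxonomy repo are plain key/value text along these lines (field names quoted from memory; check the repo's templates for the exact set):

```yaml
# Approximate shape of an attribution entry (field names quoted from memory).
Title of work: Example source table
Link to work: https://example.com/source
Revision: v1
License of the work: CC-BY-SA-4.0
Creator names: Example Author
```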
Step 4: Review
This review is going to be quite long. It might be nice to have each of the QNA pairs of the skill in a row of cards or something that could be flipped through, with the attribution data attached to any contexts that are present, depending on whether the skill is grounded or compositional.
General feedback
Compositional skills are the most difficult type of skill to contribute. It would really make sense to review good, successful skills that are already in the tree, and perhaps provide them as samples or templates users could go through. Dumping them into a long how-to document isn't going to be as helpful as being able to look at actual successful skill samples.
I know there have been changes to these designs over the past week, like the ones shown in the project's UI sync yesterday. I just wanted to address some additional notes I'm finding as I go through a first attempt at using the UI as it currently is:
- A lot of folks have multiple email addresses at this point. Can we add a tooltip or some other indicator to make clear it needs to be the same email attached to your GitHub account? And is it the primary email only, or can it search the secondary emails to find anything related?
- I think a lot of users don't know what DCO or the Developer Certificate of Origin means as they're not devs. Can we link to https://github.com/instructlab/community/blob/main/CONTRIBUTING.md#developer-certificate-of-origin-dco and explain that this is a legal process (aka, not something that only devs do)?
Thanks for your feedback, @mairin and @nimbinatus. We'll work on getting this incorporated and post an update/share at the next sync after shutdown.