google/generative-ai-go

How can I manage the safety settings? (Question)

ArsArsArsArs opened this issue · 4 comments

Unlike the other parameters, the safety settings parameter doesn't have its own method on *genai.GenerativeModel, so I'm not sure how to configure it.
I tried the following code but ended up with this error: googleapi: Error 400: * GenerateContentRequest.safety_settings[0]: element predicate failed: $.category in (HarmCategory.HARM_CATEGORY_HATE_SPEECH, HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, HarmCategory.HARM_CATEGORY_HARASSMENT)

	model := client.GenerativeModel("gemini-pro")
	model.SafetySettings = []*genai.SafetySetting{
		{
			Category:  genai.HarmCategory(1),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(2),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(3),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(4),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(5),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(6),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(7),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(8),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(9),
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategory(10),
			Threshold: genai.HarmBlockNone,
		},
	}

@eliben No, honestly

jba commented

I think some of the values you're passing are not valid for HarmCategory. I would stick to the names defined in the genai package.

The error, translated into plain English, says something like: "There was a problem with your request. I have a check that says the harm category has to be one of HATE_SPEECH, SEXUALLY_EXPLICIT, DANGEROUS_CONTENT, or HARASSMENT, and the category you sent failed that check."

@jba I guess you are right, thanks. I tried the following code and the error is gone:

model.SafetySettings = []*genai.SafetySetting{
		{
			Category:  genai.HarmCategoryDangerousContent,
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategoryHarassment,
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategoryHateSpeech,
			Threshold: genai.HarmBlockNone,
		},
		{
			Category:  genai.HarmCategorySexuallyExplicit,
			Threshold: genai.HarmBlockNone,
		},
}
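For anyone landing here, the snippet above can be turned into a minimal end-to-end sketch. This assumes an API key (the "YOUR_API_KEY" placeholder is not real) and covers all four categories that Gemini accepts; other HarmCategory values trip the server-side predicate shown in the 400 error above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/generative-ai-go/genai"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Placeholder key -- replace with your own credentials.
	client, err := genai.NewClient(ctx, option.WithAPIKey("YOUR_API_KEY"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	model := client.GenerativeModel("gemini-pro")

	// Only these four categories are valid for Gemini; build a
	// HarmBlockNone setting for each of them.
	for _, c := range []genai.HarmCategory{
		genai.HarmCategoryHarassment,
		genai.HarmCategoryHateSpeech,
		genai.HarmCategorySexuallyExplicit,
		genai.HarmCategoryDangerousContent,
	} {
		model.SafetySettings = append(model.SafetySettings, &genai.SafetySetting{
			Category:  c,
			Threshold: genai.HarmBlockNone,
		})
	}

	resp, err := model.GenerateContent(ctx, genai.Text("Hello"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Candidates[0].Content.Parts[0])
}
```

The loop is just a compact way to express the same literal slice as in the comment above; either style works, since SafetySettings is a plain field on the model.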