Pixray returns similar output regardless of prompt
Wheest opened this issue · 3 comments
I am running the pixray model using the following code:
```python
import replicate
import requests


def save_image(url, filename):
    response = requests.get(url)
    if response.status_code == 200:
        with open(filename, "wb") as f:
            f.write(response.content)


model = replicate.models.get("pixray/text2image")
i = 0
for image in model.predict(prompt="An old man eating a cake"):
    print(image)
    save_image(image, f"{i}.png")
    i += 1
```
However, I find that regardless of the prompt I give, the output is very similar: it appears to match the default "Cairo skyline at sunset." prompt.
The URLs of the generated images I print are different between runs, and if I do a checksum of the images between runs (e.g. `shasum 19.png`), the images are different. However, visually they look very similar.
Here is the final output for the prompt: "An old man eating a cake":
And here is the output for "A poke of chips":
I got the code for this inference from the "Run with API" tab of the https://replicate.com/pixray/text2image page.
Ah -- this is because there is a small typo in the code. The input to pixray is `prompts`, not `prompt`.
This isn't your fault though. The problem is pixray doesn't validate its inputs.
But it isn't pixray's fault either. We should validate inputs inside Replicate before even sending them to pixray, because we know what inputs are required. We could also do it in the client before even sending the request to the API. #26
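For illustration, here is a minimal sketch of what such client-side validation could look like. This is not the actual replicate client API, and the input names below are assumptions for the example, not pixray's real schema:

```python
# Hypothetical sketch only -- not the actual replicate client API.
# Idea: compare caller-supplied keyword arguments against the set of
# input names the model is known to accept, and fail fast on unknown keys.
KNOWN_INPUTS = {"prompts", "quality", "aspect"}  # assumed example schema


def validate_inputs(**kwargs):
    unknown = set(kwargs) - KNOWN_INPUTS
    if unknown:
        raise ValueError(
            f"Unknown input(s) {sorted(unknown)}; "
            f"expected a subset of {sorted(KNOWN_INPUTS)}"
        )


validate_inputs(prompt="An old man eating a cake")
# -> ValueError: Unknown input(s) ['prompt']; expected a subset of ...
```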
Let me know if updating your code to read `model.predict(prompts="An old man eating a cake")` (plural `prompts`!) works for you. In the meantime we'll get some validation added.
Thanks for the report. :)
Thanks. I note that the README for the Python API uses the singular "prompt" for pixray (see here), which is where I believe the error was introduced. I've added a patch for it.
However, yes, on the Pixray page the code is correct; see below:
Nice. Glad to hear that's working, and thanks for the fix. I missed the example on the readme. We'll get better validation added.