huggingface/transformers.js

Uncaught (in promise) TypeError: r.logits is not iterable


Question

Hey guys,

I have been trying to train a model for text classification and then convert it to an ONNX file for use in Transformers.js, following this video:
https://www.youtube.com/watch?v=W_lUGPMW_Eg

I keep getting the error `Uncaught (in promise) TypeError: r.logits is not iterable`.

Any ideas on where I might be going wrong, or has something changed since the video was released?

This is my basic code; I have Python hosting the files locally:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>TinyBERT Model in Vanilla JS</title>
</head>
<body>

    <h1>TinyBERT Model Inference</h1>
    <p>Enter text for classification:</p>
    <input type="text" id="inputText" placeholder="Enter your text here" size="50"/>
    <button id="runModel">Run Model</button>

    <p><strong>Prediction:</strong> <span id="prediction"></span></p>

    <script type="module">

        import { pipeline, env } from "https://cdn.jsdelivr.net/npm/@xenova/transformers";

        document.getElementById('runModel').addEventListener('click', async function () {
            const inputText = document.getElementById('inputText').value;
            
            // Load the TinyBERT model for sequence classification from local files
            const classifier = await pipeline('text-classification', './finalModel/');

            // Run the model to get the prediction
            const result = await classifier(inputText);

            // Display the result
            document.getElementById('prediction').innerText = JSON.stringify(result);
        });
    </script>

</body>
</html>

Hi there 👋 We recommend using either our conversion script, or Optimum to convert the model to ONNX, as this will ensure the output names are correct.

For example, you can run this command after cloning the repo and installing the requirements in scripts/requirements.txt:

python -m scripts.convert --quantize --model_id <model_name_or_path>
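If you would rather use Optimum directly, a rough equivalent is to export with `optimum-cli` (the `--task` flag and output directory name here are illustrative, not from the thread):

```shell
# Install Optimum with ONNX export support
pip install "optimum[exporters]"

# Export the fine-tuned checkpoint to ONNX, specifying the task so the
# exported graph produces the output names (e.g. `logits`) that
# Transformers.js expects
optimum-cli export onnx \
  --model <model_name_or_path> \
  --task text-classification \
  ./finalModel/
```

Either route should give you an ONNX model whose outputs match what the `text-classification` pipeline looks for, which is what the `r.logits is not iterable` error suggests is missing.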


Would converting a Gemma 2B model also work?