Getting the same error even after using the slightly older versions.
ayush714 opened this issue · 2 comments
ayush714 commented
I am getting this error when running the code below:
```
ValueError: Configurable 'make_layer_stack' doesn't have a parameter named 'use_universal_transformer'.
  In file "gs://unifiedqa/models/large/operative_config.gin", line 83
    decoder/make_layer_stack.use_universal_transformer = False
```
```python
import os
from time import time

import t5
import tensorflow as tf

MODEL_SIZE = "large"
BASE_PRETRAINED_DIR = "gs://unifiedqa/models/large"
PRETRAINED_DIR = BASE_PRETRAINED_DIR
MODEL_DIR = os.path.join(MODEL_DIR, MODEL_SIZE)  # MODEL_DIR base path is defined earlier in my script

# Per-size settings: (model_parallelism, train_batch_size, keep_checkpoint_max)
model_parallelism, train_batch_size, keep_checkpoint_max = {
    "small": (1, 256, 16),
    "base": (2, 128, 8),
    "large": (8, 64, 4),
    "3B": (8, 16, 1),
    "11B": (8, 16, 1)}[MODEL_SIZE]

tf.io.gfile.makedirs(MODEL_DIR)
ON_CLOUD = False

model = t5.models.MtfModel(
    model_dir=MODEL_DIR,
    tpu=None,
    model_parallelism=model_parallelism,
    batch_size=train_batch_size,
    sequence_length={"inputs": 128, "targets": 32},
    learning_rate_schedule=0.003,
    save_checkpoints_steps=5000,
    keep_checkpoint_max=keep_checkpoint_max if ON_CLOUD else None,
    iterations_per_loop=100,
)

FINETUNE_STEPS = 9
logInfo("Started training the model")  # logInfo is my own logging helper
start = time()
model.finetune(
    mixture_or_task_name="qa_t5_meshs",
    pretrained_model_dir=PRETRAINED_DIR,
    finetune_steps=FINETUNE_STEPS,
)
logInfo("Completed model training.", time_taken=time() - start)
```
I have seen one answer in a related issue, but I don't understand what it suggests.
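The error means the pretrained model's `operative_config.gin` binds a parameter (`use_universal_transformer`) that the installed mesh-tensorflow version no longer accepts. One possible workaround, sketched below under assumptions (the function name and the list of offending parameters are mine, not an official t5 or gin API), is to copy the gin file locally and strip the stale bindings before pointing `pretrained_model_dir` at the patched directory:

```python
# Hypothetical workaround: remove gin bindings for parameters that the
# installed mesh-tensorflow no longer recognizes. The parameter list here
# is an assumption based on the error message above.
UNKNOWN_PARAMS = ("use_universal_transformer",)

def strip_unknown_bindings(gin_text, unknown_params=UNKNOWN_PARAMS):
    """Return the gin config text without lines that bind an unknown parameter."""
    kept = []
    for line in gin_text.splitlines():
        # Drop e.g. "decoder/make_layer_stack.use_universal_transformer = False"
        if any(param in line for param in unknown_params):
            continue
        kept.append(line)
    return "\n".join(kept)
```

You would download the model directory (e.g. with `gsutil cp -r`), run `operative_config.gin` through this filter, and fine-tune from the local copy. Alternatively, if you load the config yourself, gin's `gin.parse_config_file(path, skip_unknown=True)` can ignore unknown configurables.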
ayush714 commented
I get a different error if I try version 0.12.