Lightning-Universe/lightning-transformers

Problem with reading in large json files

itamblyn opened this issue · 6 comments

๐Ÿ› Bug

JSON files with a large number of rows (e.g. thousands) result in a parsing error. The exact same type of data with fewer rows works fine, so I'm not sure how to debug it.

Currently, it is not possible to ingest large amounts of text for transfer learning with JSON.

I created some test datasets for a toy problem that highlights the issue. The problem is a 2-class text classification task where each example contains either an even or odd number of 'a' tokens.
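
A throwaway self-check of that labeling rule (my own sketch, with the rule inferred from the two examples that follow):

def label(text: str) -> str:
    # count the 'a' tokens; the trailing '.' is not counted
    n = text.split().count('a')
    return 'even' if n % 2 == 0 else 'odd'

assert label('a a a .') == 'odd'
assert label('a a a a a a .') == 'even'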

For example

"a a a ." = odd

"a a a a a a ." = even

For JSON files with a small number of rows (e.g. 1000), the files are read in correctly and the model trains with good results on validation/test (which I take to mean they have been parsed correctly).

For datasets with more examples (rows), I get errors related to parsing the JSON, even though the format is identical (all files were generated with the same Python script, also attached). I had to change the file extension to txt in order to get GitHub to accept it.
even_odd.txt

Files are attached.
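
For reference, a minimal way to localize this kind of parse failure is to try each row independently and report the first bad one (a sketch, assuming the file is meant to hold one JSON object per line; the path is illustrative):

import json

# Parse each line on its own and report the first one that fails.
# Assumes one JSON object per line (JSON Lines); adjust the path as needed.
with open('/mnt/nfs/itamblyn/lt-v100/train.json') as f:
    for lineno, line in enumerate(f, start=1):
        if not line.strip():
            continue  # skip blank lines
        try:
            json.loads(line)
        except json.JSONDecodeError as e:
            print(f'line {lineno}: {e}')
            break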

To Reproduce

2021-08-19 18:10:37.246138: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
dataset:
  _target_: lightning_transformers.task.nlp.text_classification.TextClassificationDataModule
  cfg:
    batch_size: ${training.batch_size}
    num_workers: ${training.num_workers}
    dataset_name: null
    dataset_config_name: null
    train_file: /mnt/nfs/itamblyn/lt-v100/train.json
    validation_file: /mnt/nfs/itamblyn/lt-v100/validation.json
    test_file: /mnt/nfs/itamblyn/lt-v100/test.json
    train_val_split: null
    max_samples: null
    cache_dir: null
    padding: max_length
    truncation: only_first
    preprocessing_num_workers: 1
    load_from_cache_file: true
    max_length: 128
    limit_train_samples: null
    limit_val_samples: null
    limit_test_samples: null
task:
  _recursive_: false
  _target_: lightning_transformers.task.nlp.text_classification.TextClassificationTransformer
  optimizer: ${optimizer}
  scheduler: ${scheduler}
  backbone: ${backbone}
  downstream_model_type: transformers.AutoModelForSequenceClassification
tokenizer:
  _target_: transformers.AutoTokenizer.from_pretrained
  pretrained_model_name_or_path: ${backbone.pretrained_model_name_or_path}
  use_fast: true
backbone:
  pretrained_model_name_or_path: bert-base-cased
optimizer:
  _target_: torch.optim.AdamW
  lr: ${training.lr}
  weight_decay: 0.001
scheduler:
  _target_: transformers.get_linear_schedule_with_warmup
  num_training_steps: -1
  num_warmup_steps: 0.1
training:
  run_test_after_fit: true
  lr: 5.0e-05
  output_dir: .
  batch_size: 16
  num_workers: 16
trainer:
  _target_: pytorch_lightning.Trainer
  logger:
    _target_: pytorch_lightning.loggers.TensorBoardLogger
    save_dir: logs/
  checkpoint_callback: true
  callbacks: null
  default_root_dir: null
  gradient_clip_val: 0.0
  process_position: 0
  num_nodes: 1
  num_processes: 1
  gpus: 1
  auto_select_gpus: false
  tpu_cores: null
  log_gpu_memory: null
  progress_bar_refresh_rate: 1
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 5
  min_epochs: 1
  max_steps: null
  min_steps: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  val_check_interval: 1.0
  flush_logs_every_n_steps: 2
  log_every_n_steps: 1
  accelerator: ddp
  sync_batchnorm: false
  precision: 16
  weights_summary: top
  weights_save_path: null
  num_sanity_val_steps: 2
  truncated_bptt_steps: null
  resume_from_checkpoint: null
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_epoch: false
  auto_lr_find: false
  replace_sampler_ddp: true
  terminate_on_nan: false
  auto_scale_batch_size: false
  prepare_data_per_node: true
  plugins: null
  amp_backend: native
  amp_level: O2
  move_metrics_to_cpu: false
experiment_name: ${now:%Y-%m-%d}_${now:%H-%M-%S}
log: true
ignore_warnings: true

[2021-08-19 18:10:46,966][datasets.builder][WARNING] - Using custom data configuration default-9eb11fc83a0213c5
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-9eb11fc83a0213c5/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...
0 tables [00:00, ? tables/s][2021-08-19 18:10:48,456][datasets.packaged_modules.json.json][ERROR] - Failed to read file '/mnt/nfs/itamblyn/lt-v100/train.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 2372
Error executing job with overrides: ['task=nlp/text_classification', 'trainer.gpus=1', 'trainer.min_epochs=1', 'trainer.max_epochs=5', 'trainer.precision=16', 'trainer.log_every_n_steps=1', 'trainer.flush_logs_every_n_steps=2', 'log=true', '+trainer/logger=tensorboard', 'dataset.cfg.train_file=/mnt/nfs/itamblyn/lt-v100/train.json', 'dataset.cfg.validation_file=/mnt/nfs/itamblyn/lt-v100/validation.json', 'dataset.cfg.test_file=/mnt/nfs/itamblyn/lt-v100/test.json']
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
    dataset = json.load(f)
  File "/usr/lib/python3.7/json/__init__.py", line 296, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.7/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 5 column 1 (char 142)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 371, in <lambda>
    overrides=args.overrides,
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/hydra.py", line 110, in run
    _ = ret.return_value
  File "/usr/local/lib/python3.7/dist-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/usr/local/lib/python3.7/dist-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "/mnt/nfs/itamblyn/lightning-transformers/train.py", line 10, in hydra_entry
    main(cfg)
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/cli/train.py", line 78, in main
    logger=logger,
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/cli/train.py", line 53, in run
    data_module.setup("fit")
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/datamodule.py", line 428, in wrapped_fn
    fn(*args, **kwargs)
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/core/nlp/data.py", line 31, in setup
    dataset = self.load_dataset()
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/core/nlp/data.py", line 67, in load_dataset
    return load_dataset(extension, data_files=data_files)
  File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 852, in load_dataset
    use_auth_token=use_auth_token,
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 616, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 693, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1163, in _prepare_split
    generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET)
  File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1185, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/json/json.py", line 134, in _generate_tables
    raise e
  File "/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/json/json.py", line 115, in _generate_tables
    BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
  File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a name for object member. in row 2372

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/nfs/itamblyn/lightning-transformers/train.py", line 14, in <module>
    hydra_entry()
  File "/usr/local/lib/python3.7/dist-packages/hydra/main.py", line 53, in decorated_main
    config_name=config_name,
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 368, in _run_hydra
    lambda: hydra.run(
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 251, in run_and_report
    assert mdl is not None
AssertionError
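
For context on the failure above: the datasets JSON builder first hands the file to pyarrow's streaming reader and only falls back to Python's json.load when that fails, which is why two exceptions appear in the traceback. pyarrow's streaming reader expects newline-delimited JSON. A minimal sketch of what it accepts (the file name is illustrative):

from pyarrow import json as paj

# pyarrow's streaming reader expects newline-delimited JSON:
# each line is a complete object, e.g. {"label": "odd", "text": "a a a ."}
table = paj.read_json('train.jsonl')
print(table.num_rows)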

Environment

print(torch.__version__)
1.9.0+cu102

  • OS (e.g., Linux):
    uname -a
    Linux 1fe8d7c6ea19 4.19.0-17-cloud-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64 x86_64 x86_64 GNU/Linux
    lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 18.04.5 LTS
    Release: 18.04
    Codename: bionic

  • How you installed PyTorch (conda, pip, source):
    I installed the most recent version of lightning-transformers from GitHub with
    pip install .

  • Build command you used (if compiling from source):

  • Python version:
    Python 3.7.5 (default, Feb 23 2021, 13:22:40)

  • CUDA/cuDNN version:

Thu Aug 19 18:18:11 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01 Driver Version: 460.73.01 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |
| N/A 36C P0 40W / 300W | 0MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

  • GPU models and configuration:
    GPU = V100 (16 GB)

  • Any other relevant information:

Additional context

json_problem.tar.gz

Hi, maybe you are generating the data incorrectly.
I just looked at your train.json file, and it is not valid JSON.

If you want to write a lot of data, it should look like this:

 [ 
  {"label": "even","text": "a a a a a a a a a a a a a a a a a a a a a a ."}, 
  {"label": "odd","text": "a a a a a a a a a a a ."} 
 ] 

import numpy as np
import json

num_train = 1000

# pre-allocate one record per example
out = [{'label': '', 'text': ''} for i in range(num_train)]

for i in range(num_train):
    # draw a random count of 'a' tokens and label its parity
    number = np.random.randint(1, 100)
    if number % 2 == 0:
        out[i]['label'] = 'even'
    else:
        out[i]['label'] = 'odd'
    out[i]['text'] = 'a ' * number + '.'

# write all records as a single top-level JSON array
with open('data.json', 'w+') as f:
    json.dump(out, f)
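
Alternatively, the same records can be written one object per line (JSON Lines), which avoids a single top-level array altogether. A sketch of that layout, since the datasets JSON loader is generally aimed at newline-delimited files (the output name is illustrative):

import json

records = [
    {'label': 'even', 'text': 'a a a a .'},
    {'label': 'odd', 'text': 'a a a .'},
]

# JSON Lines: one complete JSON object per line, no enclosing array
with open('data.jsonl', 'w') as f:
    for record in records:
        f.write(json.dumps(record) + '\n')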

Hello, thank you for the suggestion; unfortunately, it still does not seem to work. I ran your code (with num_train=2), but I still get errors, see below. (I also tried simply copying and pasting the example JSON you posted and hit the same issue.)

more test.json
[{"label": "odd", "text": "a a a a a a a a a ."}, {"label": "even", "text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a ."}]

./go.sh
2021-08-20 15:14:50.820417: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/usr/local/lib/python3.7/dist-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'config': Defaults list is missing _self_. See https://hydra.cc/docs/upgrades/1.0_to_1.1/default_composition_order for more information
warnings.warn(msg, UserWarning)
dataset:
  _target_: lightning_transformers.task.nlp.text_classification.TextClassificationDataModule
  cfg:
    batch_size: ${training.batch_size}
    num_workers: ${training.num_workers}
    dataset_name: null
    dataset_config_name: null
    train_file: /mnt/nfs/itamblyn/lt-v100/train.json
    validation_file: /mnt/nfs/itamblyn/lt-v100/validation.json
    test_file: /mnt/nfs/itamblyn/lt-v100/test.json
    train_val_split: null
    max_samples: null
    cache_dir: null
    padding: max_length
    truncation: only_first
    preprocessing_num_workers: 1
    load_from_cache_file: true
    max_length: 128
    limit_train_samples: null
    limit_val_samples: null
    limit_test_samples: null
task:
  _recursive_: false
  _target_: lightning_transformers.task.nlp.text_classification.TextClassificationTransformer
  optimizer: ${optimizer}
  scheduler: ${scheduler}
  backbone: ${backbone}
  downstream_model_type: transformers.AutoModelForSequenceClassification
tokenizer:
  _target_: transformers.AutoTokenizer.from_pretrained
  pretrained_model_name_or_path: ${backbone.pretrained_model_name_or_path}
  use_fast: true
backbone:
  pretrained_model_name_or_path: bert-base-cased
optimizer:
  _target_: torch.optim.AdamW
  lr: ${training.lr}
  weight_decay: 0.001
scheduler:
  _target_: transformers.get_linear_schedule_with_warmup
  num_training_steps: -1
  num_warmup_steps: 0.1
training:
  run_test_after_fit: true
  lr: 5.0e-05
  output_dir: .
  batch_size: 16
  num_workers: 16
trainer:
  _target_: pytorch_lightning.Trainer
  logger:
    _target_: pytorch_lightning.loggers.TensorBoardLogger
    save_dir: logs/
  checkpoint_callback: true
  callbacks: null
  default_root_dir: null
  gradient_clip_val: 0.0
  process_position: 0
  num_nodes: 1
  num_processes: 1
  gpus: 1
  auto_select_gpus: false
  tpu_cores: null
  log_gpu_memory: null
  progress_bar_refresh_rate: 1
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 5
  min_epochs: 1
  max_steps: null
  min_steps: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  val_check_interval: 1.0
  flush_logs_every_n_steps: 2
  log_every_n_steps: 1
  accelerator: ddp
  sync_batchnorm: false
  precision: 16
  weights_summary: top
  weights_save_path: null
  num_sanity_val_steps: 2
  truncated_bptt_steps: null
  resume_from_checkpoint: null
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_epoch: false
  auto_lr_find: false
  replace_sampler_ddp: true
  terminate_on_nan: false
  auto_scale_batch_size: false
  prepare_data_per_node: true
  plugins: null
  amp_backend: native
  amp_level: O2
  move_metrics_to_cpu: false
experiment_name: ${now:%Y-%m-%d}_${now:%H-%M-%S}
log: true
ignore_warnings: true

[2021-08-20 15:15:00,412][datasets.builder][WARNING] - Using custom data configuration default-735c8f66108cbed6
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-735c8f66108cbed6/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...
0 tables [00:00, ? tables/s][2021-08-20 15:15:01,769][datasets.packaged_modules.json.json][ERROR] - Failed to read file '/mnt/nfs/itamblyn/lt-v100/train.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column() changed from object to array in row 0
Error executing job with overrides: ['task=nlp/text_classification', 'trainer.gpus=1', 'trainer.min_epochs=1', 'trainer.max_epochs=5', 'trainer.precision=16', 'trainer.log_every_n_steps=1', 'trainer.flush_logs_every_n_steps=2', 'log=true', '+trainer/logger=tensorboard', 'dataset.cfg.train_file=/mnt/nfs/itamblyn/lt-v100/train.json', 'dataset.cfg.validation_file=/mnt/nfs/itamblyn/lt-v100/validation.json', 'dataset.cfg.test_file=/mnt/nfs/itamblyn/lt-v100/test.json']
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/json/json.py", line 115, in _generate_tables
    BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
  File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/nfs/itamblyn/lightning-transformers/train.py", line 10, in hydra_entry
    main(cfg)
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/cli/train.py", line 78, in main
    logger=logger,
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/cli/train.py", line 53, in run
    data_module.setup("fit")
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/datamodule.py", line 428, in wrapped_fn
    fn(*args, **kwargs)
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/core/nlp/data.py", line 31, in setup
    dataset = self.load_dataset()
  File "/mnt/nfs/itamblyn/lightning-transformers/lightning_transformers/core/nlp/data.py", line 67, in load_dataset
    return load_dataset(extension, data_files=data_files)
  File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 852, in load_dataset
    use_auth_token=use_auth_token,
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 616, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 693, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1163, in _prepare_split
    generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET)
  File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1185, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/json/json.py", line 136, in _generate_tables
    f"Not able to read records in the JSON file at {file}. "
AttributeError: 'list' object has no attribute 'keys'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
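
The final AttributeError is consistent with the loader expecting a top-level JSON object rather than an array: after pyarrow rejects the file, the json.load fallback returns a list, and the builder then calls .keys() on it. Under that reading, a file with one JSON object per line should load cleanly. A minimal sketch (the file name is illustrative):

from datasets import load_dataset

# train.jsonl is assumed to hold one JSON object per line, e.g.
# {"label": "odd", "text": "a a a ."}
dataset = load_dataset('json', data_files={'train': 'train.jsonl'})
print(dataset['train'][0])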

Any movement on this?

stale commented

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Looking into this. It seems like a good opportunity to learn how to write custom train/valid/test data.

This issue still persists. Is there any intention of fixing it?