SmallDeer34 No worries, I'm happy to hear the issue disappeared 🙂
Sure, I don't seem to be having any trouble with 1.0.3rc1. As for 1.0.2, like I said, the original issue seems to have mysteriously gone away, like some sort of heisenbug that disappears when I mess with the notebook.
With a completely fresh notebook I added the cells to install clearml 1.0.2 and initialize a Task, ran the notebook again, and... the issue seems to have disappeared again.
Not sure how to even replicate the original issue anymore, sorry I couldn't be of more help!
SmallDeer34
I think this is somehow related to the JIT compiler torch is using.
My suspicion is that the JIT cannot be initialized after certain things have happened in the process (such as spawning a subprocess or a thread).
I think we managed to get around it with 1.0.3rc1.
Can you verify?
But then I took out all my additions except for `pip install clearml` and
```
from clearml import Task
task = Task.init(project_name="project name", task_name="Esperanto_Bert_2")
```
and now I'm not getting the error? But it's still installing 1.0.2, so I'm just thoroughly confused at this point. I'm going to start with a fresh copy of the original Colab notebook from https://huggingface.co/blog/how-to-train
Did a couple tests with Colab, moving the installs and imports up to the top. Results... seem to suggest that doing all the installs/imports before actually running the tokenization and such might fix the problem too?
It's a bit confusing. I made a couple of cells at the top, like this:
`!pip install clearml`
and
```
from clearml import Task
task = Task.init(project_name="project name", task_name="Esperanto_Bert_2")
```
and
```
# Check that PyTorch sees it
import torch
torch.cuda.is_available()
```
and
```
# We won't need TensorFlow here
!pip uninstall -y tensorflow
# Install transformers from master
!pip install git+
!pip list | grep -E 'transformers|tokenizers'
# transformers version at notebook update --- 2.11.0
# tokenizers version at notebook update --- 0.8.0rc1
```
and it seems that no matter what order I run them in, I don't get an error. This is complicated by the fact that I'm trying to get Colab to give me a clean runtime each time, but I'm having some odd issues with that.
So I wonder if it's got something to do with not just the installs but all the other imports along the way, e.g. importing the tokenizer object and so forth?
OK, so with the RC, the issue has gone away. I can now import torch without issue.
One additional question: if you import clearml after you call torch, does it work?
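That torch-first ordering can be sketched as a small check. This is a hypothetical helper (the function name and structure are mine, not from the thread); it only attempts the ordering test when both packages are actually installed, so it degrades gracefully elsewhere:

```python
import importlib.util

def clearml_after_torch_check():
    """Try the torch-first import order; return True only if it completes."""
    for name in ("torch", "clearml"):
        if importlib.util.find_spec(name) is None:
            return False  # package not installed; ordering test can't run here
    import torch  # import torch first, so its JIT/CUDA state initializes early
    torch.cuda.is_available()  # touch CUDA, as the Colab cell did
    from clearml import Task  # only then bring in clearml
    return True
```

In a Colab runtime with both packages installed, a `True` result would suggest the torch-before-clearml order is fine there.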
Hi SmallDeer34
Can you try with the latest RC? I think we fixed something with the jupyter/colab/vscode support:
`!pip install clearml==1.0.3rc1`