What OS are you using?
Hi @<1654294828365647872:profile|GorgeousShrimp11> , it appears the issue is due to running with different Python versions. It looks like the Python interpreter you're running the agent with doesn't have virtualenv installed.
Is something failing? I think that's the suggested method
I noticed that the base docker image does not appear in the autoscaler task's configuration_object
It should appear in the General section
@<1542316991337992192:profile|AverageMoth57> , here you go - None
Please do 🙂
Hi @<1562610703553007616:profile|CloudyCat50> , you can use Task.set_tags() to 're-set' the tags and omit the tag you want removed.
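For example, a minimal sketch of removing a single tag this way (the task ID and tag name are placeholders, and it assumes Task.get_tags() is available in your clearml version):

```python
from clearml import Task

# fetch the task, read its current tags, and overwrite the list without the unwanted tag
task = Task.get_task(task_id="<your-task-id>")  # placeholder ID
current_tags = task.get_tags() or []
task.set_tags([t for t in current_tags if t != "tag-to-remove"])  # placeholder tag name
```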
EnviousPanda91 , in the Task object, what do you see under the 'Execution' tab in the 'Container' section?
Hi @<1546303254386708480:profile|DisgustedBear75> , what do you mean?
Do you get different results, or does your experiment fail?
Running in venv mode can be more prone to failure if you're running across different operating systems and Python versions.
The default behavior of ClearML when running locally is to detect the packages used in the code execution (you can also provide specific packages manually or override auto-detection entirely) and log them in the backend.
When a worker in a virtual...
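To complement the note above on package auto-detection, here is a minimal sketch of pinning packages manually before the task is created (package names/versions and project/task names are placeholders; Task.add_requirements should be called before Task.init):

```python
from clearml import Task

# pin specific packages instead of relying purely on auto-detection
Task.add_requirements("torch", "2.1.0")   # placeholder package and version
Task.add_requirements("my-internal-lib")  # placeholder package, no version pin
task = Task.init(project_name="examples", task_name="manual requirements")  # placeholder names
```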
Hi @<1659005876989595648:profile|ExcitedMouse44> , you can simply configure the agent not to install anything and just use the existing environment 🙂
The relevant env variables for this are: CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
None
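These are normally just exported in the shell that launches the agent. As a rough sketch, assuming you start clearml-agent from Python via subprocess (queue name and interpreter path are placeholders):

```python
import os
import subprocess

# tell the agent to skip environment creation and reuse an existing interpreter
env = dict(os.environ)
env["CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL"] = "1"                  # skip python env installation entirely
env["CLEARML_AGENT_SKIP_PIP_VENV_INSTALL"] = "/usr/bin/python3.10"  # placeholder path to an existing interpreter
subprocess.run(["clearml-agent", "daemon", "--queue", "default"], env=env, check=True)
```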
Please open developer tools (F12) and see if you're getting any console errors when loading a 'stuck' experiment
FreshKangaroo33 ,
On the top right of the experiments view there is a cog wheel; if you click on it, it gives you the option to add hyperparameters to the table. I think you can figure something out from the API calls made there 🙂
Please try the following:
In [1]: from clearml.backend_api.session.client import APIClient
In [2]: client = APIClient()
In [3]: tasks = client.tasks.get_all()
In [4]: tasks[0]
Out[4]: <Task: id=0a27ca578723479a9d146358f6ad3abe, name="2D plots reporting">
In [5]: tasks[0].data
Out[5]:
<tasks.Task: {
"id": "0a27ca578723479a9d146358f6ad3abe",
"name": "2D plots reporting",
"user": "JohnC",
"company": "",
"type": "training",
"status": "published",
"comment": "Aut...
Same repo as the private repo?
Do you have a custom certificate for SSL by chance? If this is the case please see the following:
https://github.com/allegroai/clearml/issues/7
The solution would be changing the following to false:
https://github.com/allegroai/clearml/blob/9624f2c715df933ff17ed5ae9bf3c0a0b5fd5a0e/docs/clearml.conf#L16
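For reference, a sketch of what that part of clearml.conf would look like after the change (assuming the linked line is the api.verify_certificate key):

```
api {
    # do not verify the host SSL certificate (only needed for custom/self-signed certificates)
    verify_certificate: false
}
```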
How did you verify that the files are still there? Is it just that you can see the folder name itself, or are there actual contents? If you use the same links through the web UI, can you still download the files even after the task/dataset is deleted?
DrabCockroach54 , you can set it all up. I suggest you open developer tools (F12) and see how it is done in the UI. You can then implement this in code.
For example, filtering tasks that started in the last 10 minutes is something you can set up via the UI and then replicate in code, as sketched below.
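A rough sketch of the equivalent API call (assuming the tasks.get_all endpoint accepts date-range filters of the form ">=<ISO timestamp>" on the started field; verify the exact payload the UI sends via developer tools):

```python
from datetime import datetime, timedelta, timezone
from clearml.backend_api.session.client import APIClient

client = APIClient()
# build an ISO timestamp for "10 minutes ago" and use it as a range filter
since = (datetime.now(timezone.utc) - timedelta(minutes=10)).isoformat()
tasks = client.tasks.get_all(started=[">={}".format(since)])
print([t.id for t in tasks])
```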
Can you please elaborate on what AWS Lambda is doing here and what your use case is with it? Does this error occur when running in a regular setup?
RotundSquirrel78 , can you please check the webserver container logs to see if there were any errors?
TrickyRaccoon92 , after looking at the documentation a bit more, it seems I was a bit wrong.
If you look here: https://clear.ml/docs/latest/docs/references/sdk/task/#upload_artifact
there is a parameter called wait_on_upload
which is set to false by default. If you set it to True, you should get the artifact object back as well. This is because the artifact upload is asynchronous, and I'm guessing the artifact hadn't finished uploading by the time you called it.
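For example, a minimal sketch of a synchronous upload (project, task, and artifact names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact upload")  # placeholder names
# block until the artifact is fully uploaded and registered before accessing it
task.upload_artifact(name="results", artifact_object={"accuracy": 0.9}, wait_on_upload=True)
print(task.artifacts["results"])  # the artifact object is now available
```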
Hi @<1523701066867150848:profile|JitteryCoyote63> , you mean a global "env" variable that can be passed along the pipeline?
Can you curl http://10.3.19.183:8008 from your workstation?
Hi EnviousPanda91 , are you running in docker mode? It looks like you're trying to use a CUDA image on a machine without a GPU
Can you please add the full log of the task here?
Hi @<1529633475710160896:profile|ThickChicken87> , I would suggest opening developer tools (F12) and observing which API calls go out when you browse the experiment object. This way you can replicate the API calls to pull all the relevant data. I'd suggest reading more here - None
MinuteGiraffe30 , Hi ! 🙂
What if you try to manually create such a folder?
SucculentBeetle7 , can you please give an example of the pathing for an artifact?
Hi @<1590514572492541952:profile|ColossalPelican54> , you can use the Logger module to manually report metrics - None
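For example, a minimal sketch of manual scalar reporting (project/task names and values are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="manual metrics")  # placeholder names
logger = task.get_logger()
# report a scalar point under the "loss" plot, "train" series, at iteration 1
logger.report_scalar(title="loss", series="train", value=0.42, iteration=1)
```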
@<1702492411105644544:profile|YummyGrasshopper29> , I suggest you take a look here - None
ScaryBluewhale66 ,
If you want to re-run it, you need the agent. It's still a Task object, so you can just use Task.close()
I'm not sure if something exists at the moment, but you could write it fairly easily in code