Or are you trying to change something in the docker compose?
Which version of clearml are you using?
WickedBee96, I think you basically need to find a way to run docker commands without requiring sudo
Oh I see. Technically, the pipeline controller is itself a task of a special type. So you could provide the task ID of the controller and clone that. You would need to make sure the relevant system tags are also applied so it shows up properly as a pipeline in the webUI.
In addition, you can also trigger it via the API
Hi @<1772795696529805312:profile|LethalCoral80> , I think what you're looking for is the offline mode - None
In that case you are correct. If you want to have a 'central' source of data then Datasets would be the suggested approach. Regarding your question on adding data, you would always have to create a new child version and append new data to the child.
Also maybe squashing the dataset might be relevant to you - None
Hi @<1562973083189383168:profile|GrievingDuck15> , I think you'll need to re-register it
Hi @<1610083503607648256:profile|DiminutiveToad80> , I'd suggest using the Datasets feature. However, you can of course upload it as artifacts instead.
Where are you trying to upload it? Can you provide the full log? Also, a code snippet would help.
Oh, I would suggest asking on the main support channel for ClearML 🙂
You mean the community server?
Setting the upload destination correctly and doing the same steps again
Hi @<1717350310768283648:profile|SplendidFlamingo62> , you can also use artifacts on the task itself in order to pass data between tasks - None
What version of clearml are you using? Can you try in a clean python virtual env?
Hi WickedCat12,
During Task.init() you can specify auto_connect_frameworks=False for the framework you're working with. However, please note that this will also stop auto-reporting of scalars etc.
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
@<1544853721739956224:profile|QuizzicalFox36> , are you running the steps from the machine whose config you checked?
Doesn't seem to reproduce for me (I just ran a pipeline and nothing changed in my project)
And are they the same tasks?
Are you using the OS autoscaler or the PRO version?
Hi ResponsiveHedgehong88,
The best indication would be in the 'INFO' section of the experiment. If it was run via the CLI, it should show N/A in the worker/queue section
Regarding the packages issue:
Which Python did you run with originally? It looks like 1.22.3 is only supported by Python 3.8. You can circumvent this entire issue by running in docker mode with an image that has 3.7 pre-installed
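For example, something along these lines (the queue name and image are placeholders):

```
clearml-agent daemon --queue default --docker python:3.7
```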
Regarding the data file loading issue: how do you specify the path? Is it relative?
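To illustrate why relative paths bite: they resolve against the current working directory, which can differ when an agent runs the script. A small sketch of anchoring the path explicitly (the helper name and file names are hypothetical):

```python
from pathlib import Path

def resolve_data_path(path_str, base=None):
    """Anchor a possibly-relative path to a known base directory
    (e.g. the script's directory) instead of the current working dir."""
    base = Path(base) if base else Path.cwd()
    p = Path(path_str)
    return p if p.is_absolute() else (base / p).resolve()

# In a real script, base would typically be Path(__file__).resolve().parent
print(resolve_data_path("data/train.csv"))
```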
It looks like there might be a firewall or something of the sort. Please try running the curl command from the machine itself to verify
Hi,
From the looks of it, it always returns a string. What is your use case for this? Does your logic depend on the types of the parameters?
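If you do depend on the types, a small best-effort casting helper (the name cast_param and its heuristics are mine, not part of clearml) might look like:

```python
def cast_param(value):
    """Best-effort cast of a string parameter back to bool/int/float.
    Anything that doesn't parse is returned as the original string."""
    if not isinstance(value, str):
        return value
    lowered = value.strip().lower()
    if lowered in ("true", "false"):
        return lowered == "true"
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

print(cast_param("True"), cast_param("3"), cast_param("0.5"), cast_param("abc"))
```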
Is there a reason it requires pytorch?
The script you provided has only clearml as a requirement
Hi @<1739818374189289472:profile|SourSpider22> , can you provide a full log of the run?
Can you check the apiserver logs to see if something happened during this time? Is the agent still reporting?
Must have missed this:
I have also hit my computer with a shoe.
Might need a bigger shoe 😄
I'm guessing you didn't move ES?
Hi RobustFlamingo1 ,
Can you point to where the website suggests that K8S is a requirement?
I use the ClearML-Agent on a local machine without any K8S. It is certainly not a requirement. From what I understand you can run it on K8S as well.
So to answer your question:
You can definitely use ClearML Orchestration (ClearML-Agent) with OR without K8S
I hope this helps 🙂