Hi @<1539417873305309184:profile|DangerousMole43> , unless both steps run on the same machine you will have to upload it as an artifact somehow
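For example, a minimal sketch of passing a file between steps via an artifact (the task id and names here are placeholders):
```python
from clearml import Task

# In the first step: register the file as an artifact on the current task
task = Task.current_task()
task.upload_artifact(name="processed_data", artifact_object="data.csv")

# In the second step: fetch the first step's task and pull the artifact down
first_step = Task.get_task(task_id="<first_step_task_id>")
local_path = first_step.artifacts["processed_data"].get_local_copy()
```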
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , did you try the solution in GitHub? Did it not help?
AttractiveShrimp45 , can you please open a GitHub issue so we can follow up on this?
Hi @<1654294828365647872:profile|GorgeousShrimp11> , it appears the issue is due to running with different Python versions. It looks like the Python interpreter the agent is running with doesn't have virtualenv installed.
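Installing it for that interpreter should resolve it, e.g.:
```
python -m pip install virtualenv
```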
Hi @<1751777178984386560:profile|ConfusedGoat3> , is the data itself still accessible anywhere in its entirety?
Hi @<1584716373181861888:profile|ResponsiveSquid49> , what optimization method are you using?
Hi @<1649946171692552192:profile|EnchantingDolphin84> , it's not a must but it would be the suggested approach 🙂
Please also don't spam the main channel; keep messages on the same topic in the same thread.
Just making sure we cover all bases - you updated the optimizer to use a base task with _allow_omegaconf_edit_ : True ?
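For reference, a sketch of setting that flag programmatically on the base task (assuming the flag lives under the standard Hydra parameter section; the task id is a placeholder):
```python
from clearml import Task

# Assumption: the base task uses the Hydra binding, so the flag sits under "Hydra/"
base_task = Task.get_task(task_id="<base_task_id>")
base_task.set_parameter(name="Hydra/_allow_omegaconf_edit_", value=True)
```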
SubstantialElk6 , can you please verify that you have all the required packages installed locally? Also, in your ~/clearml.conf, what is the setting of agent.package_manager.system_site_packages ?
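For reference, that setting sits in ~/clearml.conf roughly like this:
```
agent {
    package_manager {
        # when true, the virtualenv created for the task inherits the system packages
        system_site_packages: true
    }
}
```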
Try to set agent.enable_git_ask_pass: true for the agent running inside the container, perhaps that will help
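i.e. something along these lines in the agent's clearml.conf:
```
agent {
    # pass git credentials to git inside the container via askpass
    enable_git_ask_pass: true
}
```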
The "template" task
Hi UpsetCrow72 ,
Can you please explain which steps you took to make this happen? I'm not sure I understand what exactly happened.
Yes, links to the data should all be in MongoDB. Under the hood, datasets are a 'special' type of task, so you can just find that experiment and check the registered artifacts; those should contain the links to the data itself.
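A quick sketch of inspecting those artifacts from code (the task id is a placeholder):
```python
from clearml import Task

# datasets are tasks under the hood, so fetch the dataset's backing task directly
dataset_task = Task.get_task(task_id="<dataset_task_id>")
for name, artifact in dataset_task.artifacts.items():
    # each artifact's url points at the stored data
    print(name, artifact.url)
```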
What does your requirements.txt look like?
Why does the figure change so drastically? And how can I solve it?
What are you referring to specifically? The data plots seem to be identical.
Sidenote: there seems to be a bug in the plot viewer, as the axes are a bit chaotic...
Do you mean the x/y intersection?
MortifiedDove27 , in the docker ps output you attached, everything seems to be running fine.
Can you try going to <HOST>/login
Are you using a self-hosted server or the community server?
If you run in docker mode you can specify a startup shell script.
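If I remember correctly it's the extra docker shell script section in clearml.conf, roughly (the package name is a placeholder):
```
agent {
    # shell commands executed inside the container at startup, before the task runs
    extra_docker_shell_script: [
        "apt-get update",
        "apt-get install -y <some-package>",
    ]
}
```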
Hi AbruptCow41 ,
I think you need to call Task.init before creating the argparse arguments.
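i.e. something like this (project and task names are placeholders):
```python
import argparse
from clearml import Task

# Task.init must come first so ClearML can hook argparse and log the arguments
task = Task.init(project_name="examples", task_name="argparse example")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
args = parser.parse_args()
```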
Hi 🙂
Please try specifying the file itself explicitly
@<1523704089874010112:profile|FloppyDeer99> , can you try upgrading your server? It appears to be a pretty old version.
When looking at the user in MongoDB, is it some special user or just a regular one?
Not sure. I think it would require the admin vault to implement something like this via env variables.
You can always instruct the users to add it to their code
ExcitedSeaurchin87 , Hi 🙂
I think it's correct behavior - you wouldn't want leftover files flooding your computer.
Regarding preserving the datasets - I'm guessing you're doing the pre-processing & training in the same task, so if the training fails you don't want to re-download the data?
Hi @<1533619716533260288:profile|SmallPigeon24> , is it possible you're selecting multiple experiments? Or maybe there were two initial steps that were aborted? How does your pipeline look in the UI and do you have something that reproduces that?
I think the pipeline runs from start to end, starting when the first step starts