You also need an agent listening to the queue you're enqueuing to
Hi @<1533257278776414208:profile|SuperiorCockroach75> , what do you mean? ClearML automatically logs scikit-learn
Hi @<1531807732334596096:profile|ObliviousClams17> , are you self-deployed? Can you please provide the full log?
What happens if you clear the commit and just run with latest master?
VexedCat68 hi!
Hi FrustratingShrimp3 , which framework would you like added?
I think you can periodically upload them to S3; I think the StorageManager would help with that. Do consider that artifacts are logged in the system with links (each artifact is, in the end, a link). So even if you upload a file to an S3 bucket in the backend, there will still be a link pointing at the file server, so you would have to amend that somehow.
Why not upload specific checkpoints directly to s3 if they're extra heavy?
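For example, something along these lines (the bucket name and folder layout here are made up; `StorageManager.upload_file` is the real SDK call, but double-check the credentials setup for your bucket):

```python
from pathlib import Path

def checkpoint_s3_url(bucket: str, task_id: str, local_path: str) -> str:
    # Build a deterministic S3 destination per task (layout is just an example)
    return f"s3://{bucket}/checkpoints/{task_id}/{Path(local_path).name}"

def upload_checkpoint(local_path: str, bucket: str, task_id: str) -> str:
    # StorageManager does the actual transfer (needs clearml installed
    # and AWS credentials configured in clearml.conf or the environment)
    from clearml import StorageManager
    remote = checkpoint_s3_url(bucket, task_id, local_path)
    return StorageManager.upload_file(local_file=local_path, remote_url=remote)
```

That way only the heavy checkpoints go to S3 and everything else keeps its default upload destination.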
Hi @<1523704461418041344:profile|EnormousCormorant39> , on the agent. Although I think you could even pass them as env variables if you're running in docker mode
Hi @<1835488771542355968:profile|PerplexedShells66> , you can set that up directly with set_repo - None
Hi SteepDeer88 , I think this is the second case. Each artifact URL is simply saved as a string in the DB.
I think you can write a very short migration script to rectify this directly in MongoDB, OR manipulate it via the API using the tasks.edit endpoint
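A rough sketch of the API route (the prefix-rewrite helper is illustrative; `APIClient` and `tasks.edit` are real, but check the exact artifact payload shape and whether your server needs `force=True` for non-draft tasks):

```python
def rewrite_artifact_uris(artifacts: list, old_prefix: str, new_prefix: str) -> list:
    # Each artifact URI is stored as a plain string, so this is a pure string rewrite
    fixed = []
    for art in artifacts:
        art = dict(art)
        uri = art.get("uri", "")
        if uri.startswith(old_prefix):
            art["uri"] = new_prefix + uri[len(old_prefix):]
        fixed.append(art)
    return fixed

def migrate_task(task_id: str, old_prefix: str, new_prefix: str) -> None:
    # Push the rewritten artifact list back through the tasks.edit endpoint
    # (APIClient ships with the clearml SDK; field names may vary by server version)
    from clearml.backend_api.session.client import APIClient
    client = APIClient()
    task = client.tasks.get_by_id(task=task_id)
    artifacts = rewrite_artifact_uris(
        [a.to_dict() for a in task.data.execution.artifacts],
        old_prefix, new_prefix,
    )
    client.tasks.edit(task=task_id, execution={"artifacts": artifacts}, force=True)
```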
Hi VividDucks43 ,
I think what you're looking for is this:
https://clear.ml/docs/latest/docs/references/sdk/task#taskforce_requirements_env_freeze
🙂
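For example (`force_requirements_env_freeze` is a classmethod and has to be called before `Task.init`; the freeze makes the agent install from the exact pip list instead of analyzing imports):

```python
def freeze_requirements():
    # Must run before Task.init so the frozen environment is what gets recorded
    from clearml import Task
    Task.force_requirements_env_freeze(force=True)
```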
Why go into the environment variable and not just state it directly?
task = Task.init(
    project_name="my_project",
    task_name="my_task",
    output_uri=""
)
Can you try reinstalling clearml-agent?
Hi Danil,
You can use the following env variable to set it 🙂 CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
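Something like this, for example (the queue name is just a placeholder):

```shell
# Skip creating a fresh virtualenv; reuse the agent's existing Python environment
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1

# then start the agent as usual, e.g.:
# clearml-agent daemon --queue default
```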
What version of python do you need to run on?
Hi @<1714451225295982592:profile|FreshWoodpecker88> , is it possible that you didn't get permissions to the relevant directories to act as the actual storage for mongodb?
Hi VastShells9 , can you add the full log of the execution?
Can you provide a snippet to try and reproduce?
Hi @<1546303277010784256:profile|LivelyBadger26> I'm afraid that in the free version everyone is an admin. In the Scale & Enterprise licenses you have full role-based access controls on all elements in the system (from experiments to which workers can be provisioned by whom)
Hi @<1569496075083976704:profile|SweetShells3> , do you mean to run the CLI command via python code?
Hi @<1576381444509405184:profile|ManiacalLizard2> , I don't think such a capability currently exists. I would suggest opening a GitHub feature request for this. As a workaround you could zip them up together and then bind them to an output model.
What do you think?
Hi @<1673501397007470592:profile|RelievedDuck3> , I think this is more of a Grafana core capability
If it's metrics why not report them as scalars?
https://clear.ml/docs/latest/docs/references/sdk/logger#report_scalar
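A minimal sketch, assuming a task is already initialized (`get_logger()` and `report_scalar` are the real SDK calls; the "loss"/"train" names are just examples — series sharing the same title get drawn on one graph):

```python
def report_training_loss(task, losses):
    # One scalar per iteration; ClearML plots them as a line series
    logger = task.get_logger()
    for i, loss in enumerate(losses):
        logger.report_scalar(title="loss", series="train", value=loss, iteration=i)
```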
SubstantialElk6 , I think this is what you're looking for:
https://clear.ml/docs/latest/docs/references/sdk/dataset#get_local_copy
Dataset.get_local_copy(..., part=X)
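Roughly like this (`part` and `num_parts` are real parameters of `get_local_copy`; running it of course needs a reachable ClearML server):

```python
def fetch_dataset_shard(dataset_id: str, part: int, num_parts: int) -> str:
    # Download only one shard of the dataset instead of the whole thing
    from clearml import Dataset
    ds = Dataset.get(dataset_id=dataset_id)
    return ds.get_local_copy(part=part, num_parts=num_parts)
```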
Hi GiganticMole91 ,
Can you please elaborate on what you are trying to do exactly?
Doesn't HyperParameterOptimizer change parameters out of the box?
Hi @<1874264260817719296:profile|RipeSheep74> , I think this is possible with the API. You would need to manually move the tasks back to running mode, remove all the scalars, and then re-report them, also manually, via the API.
Hi @<1549202366266347520:profile|GorgeousMonkey78> , at what point does it get stuck? What happens if you remove the Task.init line from the script?
You can set the docker image you want to run with using Task.set_base_docker
None
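For example (`set_base_docker` accepts a `docker_image` argument; the image name below is just an example):

```python
def configure_docker(task, image: str = "nvidia/cuda:11.8.0-runtime-ubuntu22.04"):
    # Tell the agent which container to run this task in
    # (only takes effect when the agent runs in docker mode)
    task.set_base_docker(docker_image=image)
```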
SarcasticSparrow10 , it seems you are right. At which step in the instructions do the errors start?