Hi AverageRabbit65, can you elaborate on what you're trying to do?
ClearML-Agent will automatically create a venv and install everything
Also, is there a reason you don't want to work with the default ports provided in the docker-compose.yml?
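If you do need to remap them, a minimal sketch of the usual docker-compose approach (the service name and host port here are just examples; the container-side port must stay whatever the image listens on):

```
# Hypothetical override: expose the ClearML webserver on host port 9080
# instead of the default 8080. Host port on the left, container port on the right.
services:
  webserver:
    ports:
      - "9080:80"
```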
Hi Chen, please go to Settings -> Configuration and enable 'Show hidden projects'
Hi CheerfulGorilla72 ,
Tensorboard images aren't captured for you in ClearML?
How are you self hosting the ClearML server?
Why do you want to keep containers running after the code finished executing?
AttractiveShrimp45, can you please open a GitHub issue so we can follow up on this?
Hi GiganticTurtle0!
Which versions are you using? Also, do you have an example snippet by chance?
> task that reads a message from a queue
Can you give a specific example?
Hi @<1524922424720625664:profile|TartLeopard58>, can you elaborate on what you mean by code-server?
Hi CostlyFox64,
Can you try configuring your ~/clearml.conf with the following?
```
agent.package_manager.extra_index_url = [
    "https://<USER>:<PASSWORD>@packages.<HOSTNAME>/<REPO_PATH>"
]
```
You can specify a different docker image per experiment, so the same agent can run many different docker images (As long as it is run in docker mode from the start) 🙂
CrookedWalrus33, in Task.init you can set output_uri=True. This should upload to the fileserver, since by default models are saved locally
SubstantialMonkey63, Hi! What exactly are you looking for? I think you might find some relevant things here: https://github.com/allegroai/clearml/tree/master/examples
I think you can configure agent.reload_config in clearml.conf and then push the change to the file programmatically somehow
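A hedged sketch of what that could look like in ~/clearml.conf (I'm not certain this option is available in every clearml-agent version, so verify against the docs for yours):

```
agent {
    # hypothetical: have the agent re-read its configuration while running
    reload_config: true
}
```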
If I'm not mistaken, Task.get_last_iteration()
https://clear.ml/docs/latest/docs/references/sdk/task#get_last_iteration
reports the last iteration that was reported. However, someone has to report that iteration: either you report it manually yourself during the script, OR something else like TensorFlow/TensorBoard does the reporting and ClearML captures it.
Does that make sense?
Feels like a cookie issue to me
SuperiorPanda77 , how did you deploy your server?
Hi AdventurousButterfly15, what version of clearml-agent are you using?
but you can use it with or without K8s
What specific compatibility issues are you getting?
Hmmmmm, do you have a specific use case in mind? I think pipelines are created only through the SDK, but I might be wrong
Hi @<1654294834359308288:profile|DistressedCentipede23> , do you mean completely circumventing that basic function of the agent and simply pointing it to a specific script/env?
Hi @<1570220852421595136:profile|TeenyHedgehog42>, are you using the latest version of clearml-agent? Can you provide a standalone code snippet that reproduces this behavior for you?
The setting for the python binary should be explicit, since the agent can't really 'detect' where you installed your python. For example:
```
agent.python_binary: "C:\ProgramData\Anaconda3\python.exe"
```
Hi UpsetTurkey67 ,
Is this what you're looking for?
https://clear.ml/docs/latest/docs/references/sdk/trigger#add_model_trigger
I think as specified here:
https://github.com/allegroai/clearml-server/blob/master/docker/docker-compose.yml#L125
```
Status: Downloaded newer image for nvidia/cuda:10.2-runtime-ubuntu18.04
1657737108941 dynamic_aws:cpu_services:n1-standard-1:4834718519308496943 DEBUG docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
time="2022-07-13T18:31:45Z" level=error msg="error waiting for container: context canceled"
```
As can be seen here 🙂
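That "could not select device driver" error usually means Docker on the host can't find an NVIDIA runtime. A hedged sketch of the common fix on an Ubuntu host (assuming the NVIDIA Container Toolkit apt repository is already configured, per NVIDIA's docs):

```shell
# Install the NVIDIA Container Toolkit so Docker can expose GPUs to containers
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

# Sanity check: the container should now see the GPU
docker run --rm --gpus all nvidia/cuda:10.2-runtime-ubuntu18.04 nvidia-smi
```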