Yes, it was set to nvidia/cuda:10.1-runtime-ubuntu18.04... ok, I'll try again and see if that was the problem, thank you
If it helps to understand, this is what I'm doing
I've just seen it is a known issue https://clearml.slack.com/archives/CTK20V944/p1611763839133700 . Has a new version been released in the meantime?
Yes, the workaround is working 🙂
Hi AgitatedDove14 , I'm interested in this feature to run the agent and force it to install packages from requirements.txt. Is it available?
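From what I understand it would be controlled from the agent's clearml.conf, something like the snippet below (the key name is my guess, so please correct me if it's different):
` agent {
    package_manager {
        # (assumed key name) force the agent to install from the repository's
        # requirements.txt instead of the task's "installed packages"
        force_repo_requirements_txt: true
    }
} `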
Yes, I think it's only related to the UI. Do you think it can be fixed somehow? It would be the easiest way to launch new experiments with a different configuration
It's working correctly, thank you!
Actually I had the same issue even with that value set to False
Also, if I want to modify another parameter, e.g. ui.height, I have this problem:
Hi AgitatedDove14 , I noticed that in the Hydra parameters section it is not possible to add parameter keys containing dots: ".(dot) $(dollar) and space are not allowed in parameter key".
However, it's very useful to add parameters with a dot in order to change something in a sub-configuration, for example training.max_epochs=10. Do you think it's possible to allow this?
` # ClearML - Hydra Example
from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import OmegaConf

from clearml import Task


@dataclass
class MySQLConfig:
    host: str = "localhost"
    port: int = 3306


cs = ConfigStore.instance()
# Registering the Config class with the name 'config'.
cs.store(name="config", node=MySQLConfig)


@hydra.main(config_name="config")
def my_app(cfg: MySQLConfig) -> None:
    # type: (DictConfig) -> None
    ...


if __name__ == "__main__":
    my_app() `
Hi TimelyPenguin76 , I used api_client.tasks.create and it works, thank you!
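For reference, this is roughly how I'm calling it (project ID and task name are placeholders):
` from clearml.backend_api.session.client import APIClient

client = APIClient()
# Create a new task through the API (name / project / type are placeholders)
new_task = client.tasks.create(
    name="my new task",
    project="<project_id>",
    type="training",
)
print(new_task.id)  # the response should contain the new task's ID `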
Hi AgitatedDove14
I implemented the pipeline manually as you suggested. I also used task.wait_for_status() after each task.enqueue() so I was able to implement a full pipeline in one script. It seems to be working correctly. Thank you!
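Roughly, the pattern I ended up with looks like this (base task IDs and the queue name are placeholders):
` from clearml import Task

# Template tasks for each pipeline step (placeholder IDs)
step_ids = ["<base_task_id_step1>", "<base_task_id_step2>"]

for base_id in step_ids:
    base_task = Task.get_task(task_id=base_id)
    # Clone the template so the original stays untouched
    step = Task.clone(source_task=base_task, name=base_task.name + " (pipeline run)")
    Task.enqueue(step, queue_name="default")
    # Block until this step finishes before launching the next one
    step.wait_for_status(
        status=(Task.TaskStatusEnum.completed,),
        raise_on_status=(Task.TaskStatusEnum.failed, Task.TaskStatusEnum.stopped),
    ) `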
However, if I edit the OmegaConf directly in the UI, then the port changes correctly. I'd still prefer to override the Args so I can change an entire sub-configuration, e.g. ['dataset=cifar'] to ['dataset=imagenet'], instead of having to change all the parameters inside the OmegaConf
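To make it concrete, this is what I'd like to be able to do (it assumes the overrides show up as the "Args/overrides" parameter, which is what I see in my UI, and the queue name is a placeholder):
` from clearml import Task

base = Task.get_task(task_id="<base_task_id>")
cloned = Task.clone(source_task=base, name="dataset: cifar -> imagenet")
# Change only the Hydra overrides, leaving the rest of the configuration as-is
cloned.set_parameter("Args/overrides", "['dataset=imagenet']")
Task.enqueue(cloned, queue_name="default") `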
Ok, now I noticed that if I change the value of the port inside the Hydra parameters section (not the overrides), it does actually change in the experiment as well. The overrides don't seem to be working
After the agent finished installing the "requirements.txt" it will put back the entire "pip freeze" into the "installed packages", this means that later we will be able to fully reproduce the working environment, even if packages change (which will eventually happen as we cannot expect everyone to constantly freeze versions)
This would be perfect
Yes it does 👍 Btw, at the moment I added import s3fs to my entry point and it's working, thank you!
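Concretely, the workaround is just this at the top of the entry-point script (project/task names are placeholders):
` import s3fs  # noqa: F401  - imported only so ClearML detects it and the agent installs it

from clearml import Task

task = Task.init(project_name="examples", task_name="train")  # placeholder names `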
Does it work if I launch the clearml-agent in a docker container and pip doesn't know which packages to install?
Make sure you have the S3 credentials in your agent's clearml.conf
Ok, this could be a problem, as right now I'm using EC2 instances with an instance profile (I use it in the autoscaler), so they have the right S3 permissions by default. But I'll try it anyway
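If I do add them, I guess the relevant part of the agent's clearml.conf would look roughly like this (key/secret/region are placeholders):
` sdk {
    aws {
        s3 {
            # Default credentials used for S3 access (placeholders)
            key: "<aws_access_key>"
            secret: "<aws_secret_key>"
            region: "<region>"
        }
    }
} `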
Please let me know if my explanation is not really clear
As an example, in Task.create() there is the possibility to install packages using a requirements.txt, and if not specified, it uses the requirements.txt of the repository. I'd like something similar for Task.init(), if possible
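For example, with Task.create() I can do something like this (repo, script and names are placeholders), and I'd like an equivalent when using Task.init():
` from clearml import Task

task = Task.create(
    project_name="examples",          # placeholder
    task_name="remote task",          # placeholder
    repo="https://github.com/<user>/<repo>.git",
    script="train.py",
    requirements_file="requirements.txt",  # install these instead of auto-detected packages
) `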
Because at the moment I'm having a problem with the s3fs package: I have it in my requirements.txt, but the import manager at the entry point doesn't install it
If in the "installed packages" I have all the packages installed from the requirements.txt, then I guess I can clone it and use "installed packages"
My problem right now is that PyTorch Lightning needs the s3fs package to store model checkpoints in S3 buckets, but it is not in my "installed packages" and I get an import error
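For context, the checkpointing setup is roughly this (bucket path is a placeholder), and writing to the s3:// path is what pulls in s3fs:
` import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# Writing checkpoints to an S3 path goes through fsspec/s3fs under the hood
checkpoint_cb = ModelCheckpoint(dirpath="s3://<my-bucket>/checkpoints/")
trainer = pl.Trainer(callbacks=[checkpoint_cb], max_epochs=10) `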
Hi AgitatedDove14 , do you mean the k8s glue autoscaler here https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py ? If yes, I understood that this service deploys pods on the nodes in the cluster, but I'd prefer to have a new instance deployed for each new experiment, and that it also terminates when no new experiments are queued