FriendlySquid61 Your solution seems to have solved the problem, but only after I removed the
```
export CLEARML_API_HOST={api_server}
export CLEARML_WEB_HOST={web_server}
export CLEARML_FILES_HOST={files_server}
```
commands from the bash script executed when the instance is launched.
Hi AgitatedDove14
I implemented the pipeline manually as you suggested. I also used task.wait_for_status() after each task.enqueue() so I was able to implement a full pipeline in one script. It seems to be working correctly. Thank you!
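For reference, this is roughly what the manual pipeline looks like (a minimal sketch; the project, task names, and queue below are placeholders):
```python
from clearml import Task

# Hypothetical two-step pipeline: clone a template task for each step,
# enqueue it, and block until it finishes before starting the next one.
for step_name in ["preprocess", "train"]:
    template = Task.get_task(project_name="my_project", task_name=step_name)
    step = Task.clone(source_task=template, name=f"pipeline {step_name}")
    Task.enqueue(step, queue_name="default")
    # Blocks until the step is completed/stopped/closed; raises if it failed
    step.wait_for_status()
```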
Yes, the workaround is working 🙂
Hi TimelyPenguin76 , I used api_client.tasks.create and it works, thank you!
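Roughly, what I did looks like this (a sketch; the project id and field values are placeholders, and the exact fields depend on the tasks.create endpoint):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# Create a bare task directly through the API ("project" expects a project id)
task = client.tasks.create(
    name="my task",
    project="<project_id>",
    type="training",
)
print(task.id)
```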
Hi AgitatedDove14 , I'm interested in this feature to run the agent and force it to install packages from requirements.txt. Is it available?
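If it is, I imagine it would look something like this in the agent's clearml.conf (I believe recent agent versions expose a force_repo_requirements_txt flag under package_manager, but the exact name may differ by version):
```
agent {
    package_manager {
        # Force installing from the repository's requirements.txt
        # instead of the task's "installed packages" list
        force_repo_requirements_txt: true
    }
}
```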
```python
# ClearML - Hydra Example
from clearml import Task
from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import OmegaConf


@dataclass
class MySQLConfig:
    host: str = "localhost"
    port: int = 3306


cs = ConfigStore.instance()
# Registering the Config class with the name 'config'.
cs.store(name="config", node=MySQLConfig)


@hydra.main(config_name="config")
def my_app(cfg: MySQLConfig) -> None:
    # type: (DictConfig) -> None
    ...
```
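(With the config registered this way, any field can then be overridden from the command line, e.g. `python my_app.py port=3307`.)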
Hi TimelyPenguin76 , I tried your approach and it works, thank you! However, it's a bit different from what I was trying to do: instead of cloning an existing task, I'd like to specify the repository and a specific commit tag to use, as is done in Task.create. If this is possible with the API client, it would be perfect.
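Something like this sketch is what I mean (repo URL, commit, and script are placeholders):
```python
from clearml import Task

# Create a task from a repository at a specific commit; if no
# requirements are specified, the repository's requirements.txt is used
task = Task.create(
    project_name="my_project",
    task_name="from repo",
    repo="https://github.com/user/repo.git",
    commit="<commit_sha>",
    script="train.py",
)
```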
"Pytorch Lightning need the s3fs " s3fs is not needed, let PL store the model locally and use "output_uri" to automatically upload the model to your S3 bucket.
So I can set output_uri = "s3://<bucket_name>/prefix" and the local models will be uploaded to the S3 bucket by ClearML?
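i.e. a minimal sketch of what I mean (project and bucket names are placeholders):
```python
from clearml import Task

# Checkpoints saved locally by PL are automatically uploaded to the bucket
task = Task.init(
    project_name="my_project",
    task_name="pl training",
    output_uri="s3://<bucket_name>/prefix",
)
```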
As an example, in Task.create() there is the possibility to install packages using a requirements.txt, and if not specified, it uses the requirements.txt of the repository. I'd like something similar for Task.init(), if possible.
My problem right now is that PyTorch Lightning needs the s3fs package to store model checkpoints in S3 buckets, but it is not in my "installed packages" and I get an import error.
Back to the feature request: if this is taken care of (both adding a missed package and the S3 upload), do you still believe there is room for this kind of feature?
Well, I can add `import s3fs` even if I don't really use it in my own code. One problem could be if this happens for a lot of packages; then I'd need to add these imports to all the entry points of all my repos, while if I just install the right packages from the requirements.txt I don't need to think about...
Make sure you have the S3 credentials in your agent's clearml.conf
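For example, something like this section (key, secret, and region are placeholders):
```
sdk {
    aws {
        s3 {
            # Default credentials used for all buckets
            key: "<access_key>"
            secret: "<secret_key>"
            region: "us-east-1"
        }
    }
}
```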
Ok, this could be a problem, as right now I'm using EC2 instances with an instance profile (I use it in the autoscaler), so they have the right S3 permissions by default. But I'll try it anyway.
Yes it does 👍 Btw, at the moment I added `import s3fs` to my entry point and it's working, thank you!
Hi AgitatedDove14 , I noticed that in the Hydra parameters section it is not possible to add parameter keys containing dots: ".(dot) $(dollar) and space are not allowed in parameter key". However, it's very useful to add parameters with a dot to change something in a sub-configuration, for example training.max_epochs=10. Do you think it's possible to allow this?
Actually I had the same issue even with that value set to False
Nice, I didn't know that 🙂
if in the "installed packages" I have all the packages installed from the requirements.txt than I guess I can clone it and use "installed packages"
Hi AgitatedDove14 , do you mean the k8s glue autoscaler here: https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py ? If yes, I understood that this service deploys pods on the nodes in the cluster, but I'd prefer to have a new instance deployed for each new experiment, and that it also terminates when no new experiments are queued.
AgitatedDove14 that seems like the best option. Once the aws autoscaler is inside a docker container I can deploy it inside a kube pod or a job. This, however, requires that I slightly modify the clearml helm chart with the aws-autoscaler deployment, right?
Yes it was set to nvidia/cuda:10.1-runtime-ubuntu18.04... ok I'll try again and see if that was the problem, thank you
Ok, now I noticed that if I change the value of the port inside the Hydra parameters section (not the overrides), it does actually change in the experiment as well. The overrides don't seem to be working.
Hi AgitatedDove14 , thank you for your answer!
At the moment I can't configure both internal and external access with the same DNS. Before changing the server infrastructure, I'm trying a workaround where I upload the artifact with the internal file server path, and then upload a string artifact which is the first artifact's URL with the internal DNS replaced by the external DNS, and use that to download the artifact from the UI.
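In code, the workaround is roughly this sketch (DNS names and file names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="artifacts")
# Upload the artifact; its URL will use the internal file-server DNS
task.upload_artifact(name="model", artifact_object="model.pkl", wait_on_upload=True)

# Publish a second, string artifact holding the externally reachable URL
internal_url = task.artifacts["model"].url
external_url = internal_url.replace("internal-fileserver:8081", "files.example.com")
task.upload_artifact(name="model_external_url", artifact_object=external_url)
```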
It's working correctly, thank you!
Because at the moment I'm having a problem with the s3fs package: it is in my requirements.txt, but the import manager at the entry point doesn't install it.
Hi AgitatedDove14 , FriendlySquid61 ! I managed to grant permission to the AWS autoscaler to spin up instances using the instance profile, as suggested by FriendlySquid61 . The instances are created and terminated correctly; however, the new instances don't execute the queued tasks and shut down immediately. I noticed that the ClearML credentials at
```
self.web_server = Session.get_app_server_host()
self.api_server = Session.get_api_server_host()
self.files_server = S...
```