
So downgrading to Python 3.8 would be a workaround?
Yes, but the issue arises because rmdatasets is installed in the local environment. I needed it installed in order to test the code locally, so it gets picked up in the package list.
I will probably stop installing the sibling packages and add them to sys.path manually instead.
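Something like this is what I have in mind; the layout below is only illustrative (a libs folder next to the training code), not the actual repo structure:
` import sys
from pathlib import Path

# hypothetical layout: this script lives in <repo>/train/project1,
# while the sibling package rmdatasets lives in <repo>/libs/rmdatasets
repo_root = Path(__file__).resolve().parents[2]
sys.path.insert(0, str(repo_root / "libs"))

import rmdatasets  # usable without pip-installing it, so it no longer shows up in the detected packages `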
This is being started as a command line script.
Also tried saving the model with:
task.set_model_config(cfg.model)
task.update_output_model("best_model.onnx")
But got the same exception.
SuccessfulKoala55, how do I set the agent version when creating the autoscaler?
It's an S3 bucket, and it is working: I am able to upload models before this call, and also custom artifacts in the same script.
Ubuntu 18.04
Python: 3.9.5
Clearml: 1.0.4
I am using Hydra to configure my experiments. Specifically, I want to retrieve the OmegaConf data created by Hydra.
config = task.get_configuration_objects()
returns a string with those values, but I do not know how to parse it, or whether I can get this data as a nested dict.
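For reference, this is how I would expect to turn that string back into a nested dict, assuming it is the OmegaConf YAML text (the "OmegaConf" key name is a guess on my side):
` from omegaconf import OmegaConf

config_objects = task.get_configuration_objects()
# if a dict of name -> string is returned, the relevant entry is assumed to be "OmegaConf"
yaml_text = config_objects["OmegaConf"] if isinstance(config_objects, dict) else config_objects
cfg = OmegaConf.create(yaml_text)                    # parse the YAML text back into an OmegaConf object
nested = OmegaConf.to_container(cfg, resolve=True)   # plain nested dict `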
I get the same error:
⋊> /d/c/p/c/e/reporting on master ◦ python model_config.py (longoeixo) 17:48:14
ClearML Task: created new task id=xxx
ClearML results page: xxx
` Any model stored from this point onwards,...
Hydra params are still not uploaded on 1.0.4.
The failure part of the log follows:
` Requirement already satisfied: pip in /root/.clearml/venvs-builds/3.1/lib/python3.10/site-packages (22.2.2)
Collecting Cython
Using cached Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (1.9 MB)
Installing collected packages: Cython
Successfully installed Cython-0.29.32
Collecting boto3==1.24.59
Using cached boto3-1.24.59-py3-none-any.whl (132 kB)
ERROR: Could not find a version that satisfies the requ...
I think not; I have not set any env variable. I just went to the web UI, added an autoscaler, filled in the data in the UI, and launched the autoscaler.
By inspecting the scaler task, it is running the following docker image: allegroai/clearml-agent-services-app:app-1.1.1-47
Thank you, I set it, but ClearML still creates its own environment regardless of my environment.yaml.
So it could be launched by the ClearML CLI? I can also try that.
Hi, any update on that?
Yes, the example works. As in the example, my code basically starts with the following; wasn't that supposed to work?
` import hydra
from omegaconf import DictConfig


@hydra.main(config_path="config", config_name="config")
def main(cfg: DictConfig):
    import os
    import pytorch_lightning as pl
    import torch
    import yaml
    import clearml

    pl.seed_everything(cfg.seed)
    task = clearml.Task.init(
        project_name=cfg.project_name,
        task_name=cfg.task_name,
    ) `
Thank you, now I am getting AttributeError: 'DummyModel' object has no attribute 'model_design'
when calling task.update_output_model("best_model.onnx").
I checked the code and thought it was related to the model not having a config defined, so I tried to set it with task.set_model_config(cfg.model), but I am still getting the error.
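An alternative I could try is registering the file through an explicit OutputModel instead of task.update_output_model; this is only a sketch and assumes cfg.model converts to a plain dict:
` from clearml import OutputModel
from omegaconf import OmegaConf

# sketch: attach an explicit output model to the task and upload the ONNX weights
output_model = OutputModel(
    task=task,
    config_dict=OmegaConf.to_container(cfg.model, resolve=True),
    framework="ONNX",
)
output_model.update_weights(weights_filename="best_model.onnx") `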
AgitatedDove14, here follows the full log:
Yes, I tried with Python 3.8 and now it works.
With the account admin email, the one where I received the receipt.
They are in the same git repo, something like:
my-repo
    train
        project1
        project2
    libs
        lib1
        ...
Thank you, it is working now.
The only reason is that I can specify the Python version to be used and conda will install it. With requirements.txt, the default Python version will be used.
So I guess I am referring to the auto package detection. I am running the job through the web UI. My actual problem is that I have a private repo in my requirements.txt
(listed with the GitHub URL) that is not being installed. Also, my environment.yaml
uses Python 3.8, while 3.9 is being installed.
Hi, sorry for the delay. rmdatasets == 0.0.1 is the name of the local package that lives in the same repo as the training code; it gets picked up by name instead of by the relative path to the package.
As a workaround, I set the option that forces the use of requirements.txt, and I am using this script to generate it:
` import os
import subprocess

# "pip freeze" lists the installed packages; local/VCS installs show up as "<package> @ <url>"
output = subprocess.check_output(["pip", "freeze"]).decode()
with open("requirements.txt", "w") as f:
    for line in output.split("\n"):
        if " @" in line...
Hi AgitatedDove14,
How do I set the version to 1.5.1? When I launch the autoscaler, version 1.5.0 is picked by default.
I ran some tests; I think I got it now.
After creating the new dataset, it is necessary to run sync again, but now only the new files are uploaded.
And when running get, the files in the parent dataset will be available as links.
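If I understood it correctly, the equivalent flow with the Python Dataset API would be something like this (project/dataset names and the data folder are just placeholders):
` from clearml import Dataset

# create a child dataset on top of the existing one
child = Dataset.create(
    dataset_name="improved_dataset",
    dataset_project="my_project",
    parent_datasets=["<existing_dataset_id>"],
)
# sync the local folder again: only the new/changed files are added
child.sync_folder(local_path="data/")
child.upload()    # uploads just the delta
child.finalize()

# getting a local copy resolves the parent files as links/cached copies
local_path = Dataset.get(dataset_id=child.id).get_local_copy() `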
I used the autogenerated clearml.conf; I will try removing the unnecessary parts.
Thank you. I have defined the AMI manually instead of using the default; now I am getting the following error:
Error: An error occurred (InvalidParameterValue) when calling the RunInstances operation: User data is limited to 16384 bytes
Basically, I am following the steps in this video:
https://www.youtube.com/watch?v=j4XVMAaUt3E
Thank you for your response. So what is the difference between sync and add? From your description it seems to make no difference whether I add the files via sync or add, since I will have to create a new dataset either way.
Let's see if I got how it works on the CLI.
So if I execute:
clearml-data create --name <improved_dataset> --parents <existing_dataset_id>
where the parent dataset was updated with sync, I just need to run:
clearml-data upload --id <created_dataset_id>
and the delta will be automatically uploaded to the new dataset?