Hi TenseOstrich47,
If you want to get all the scalars, you can use task.get_last_scalar_metrics(). Can this help?
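A minimal sketch of pulling and flattening the scalars (the task id is a placeholder, and I'm assuming the nested dict shape {title: {series: {"last"/"min"/"max": value}}} that get_last_scalar_metrics() returns):

```python
def flatten_scalars(metrics):
    # Flatten {title: {series: {"last"/"min"/"max": value}}} into
    # {"title/series": last_value} for easy inspection.
    flat = {}
    for title, series_map in metrics.items():
        for series, values in series_map.items():
            flat[f"{title}/{series}"] = values.get("last")
    return flat

def get_all_scalars(task_id):
    # Requires a configured ClearML server; task_id is a placeholder.
    from clearml import Task  # lazy import keeps flatten_scalars() standalone
    task = Task.get_task(task_id=task_id)
    return flatten_scalars(task.get_last_scalar_metrics())

# Example with a hand-made metrics dict (same shape as the API returns):
sample = {"loss": {"train": {"last": 0.12, "min": 0.1, "max": 0.9}}}
print(flatten_scalars(sample))  # {'loss/train': 0.12}
```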
SubstantialElk6 you are right, only an agent running in docker mode will do it; you are running in venv mode.
The clearml-agent will try to build a specific virtual environment for your task with virtualenv.
You can just install it in the environment the clearml-agent is running from (python3.6?) with python3.6 -m pip install virtualenv
and it should work 🙂
yep, you need it to be part of the environment
Hi SubstantialElk6 ,
You can configure S3 credentials in your ~/clearml.conf
file, or with environment variables:
os.environ['AWS_ACCESS_KEY_ID'] = "***"
os.environ['AWS_SECRET_ACCESS_KEY'] = "***"
os.environ['AWS_DEFAULT_REGION'] = "***"
get_local_copy()
will return the entire dataset, but you can divide the dataset into parts and give all of them the same parent, can this work?
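A hedged sketch of that layout: each part becomes its own Dataset with the same parent, so you can fetch just the part you need. The project and dataset names are placeholders, and the ClearML calls need a configured server.

```python
def chunk(files, n_parts):
    # Split a file list into n_parts roughly equal parts.
    size = -(-len(files) // n_parts)  # ceil division
    return [files[i:i + size] for i in range(0, len(files), size)]

def create_dataset_parts(parent_id, part_file_lists, project="Datasets"):
    # Requires a configured ClearML server; names and ids are placeholders.
    from clearml import Dataset  # lazy import keeps chunk() standalone
    part_ids = []
    for i, files in enumerate(part_file_lists):
        part = Dataset.create(
            dataset_name=f"my_dataset_part_{i}",
            dataset_project=project,
            parent_datasets=[parent_id],  # all parts share the same parent
        )
        for f in files:
            part.add_files(f)
        part.upload()
        part.finalize()
        part_ids.append(part.id)
    return part_ids
```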
SubstantialElk6 you can try:
from clearml import Dataset
dataset_upload_task = Dataset.get(dataset_id=dataset_task)
path_with_data = dataset_upload_task.get_local_copy()
I just tried and everything works.
I ran this for the template task:
` from clearml import Task
task = Task.init(project_name="Examples", task_name="task with connected dict")
period = {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}
task.connect(period, name="period") `
and this for the clone one:
` from clearml import Task
template_task = Task.get_task(task_id="<Your template task id>")
cloned_task = Task.clone(source_task=template_task,
name=templat...
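For reference, a hedged sketch of the full clone-and-enqueue flow (the snippet above got truncated; the task id and queue name here are placeholders):

```python
def clone_and_enqueue(template_task_id, new_name, queue_name="default"):
    # Requires a configured ClearML server; arguments are placeholders.
    from clearml import Task
    template_task = Task.get_task(task_id=template_task_id)
    cloned_task = Task.clone(source_task=template_task, name=new_name)
    # Optionally edit parameters before enqueueing, e.g.:
    # cloned_task.set_parameters({"period/start": "2021-01-01 00:00"})
    Task.enqueue(cloned_task, queue_name=queue_name)
    return cloned_task
```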
Hi GiganticTurtle0 ,
Does it happen when you change the parameters from the UI too, or only from code? Same flow as in https://github.com/allegroai/clearml/blob/master/examples/automation/manual_random_param_search_example.py#L47 ?
Can you try those? Do you have an example of the cloning code?
Hi GiganticTurtle0 ,
Not directly with the SDK, but you can use the APIClient:
` from clearml.backend_api.session.client import APIClient
api_client = APIClient()
api_client.queues.create("your queue name") `
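To double-check the queue was created, you can list the queues with the same client (a sketch; requires a configured ClearML server):

```python
def list_queue_names():
    # Requires a configured ClearML server.
    from clearml.backend_api.session.client import APIClient
    api_client = APIClient()
    # queues.get_all() returns queue objects; collect their names.
    return [q.name for q in api_client.queues.get_all()]
```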
Hi DepressedChimpanzee34. Once you change the parameters in the cloned task from the UI, those will be the parameters your task uses when running with the ClearML agent.
The configuration you see in the UI will be the actual running configuration for the task.
Hi DepressedChimpanzee34 ,
Hydra should be auto patched, did you try this example?
https://github.com/allegroai/clearml/blob/master/examples/frameworks/hydra/hydra_example.py
Hi KindBlackbird59,
You can always clone the first task and change the parameters in the second one, is this what you are looking for?
So to add a model to serve with an endpoint, you can use:
clearml-serving triton --endpoint "<your endpoint>" --model-project "<your project>" --model-name "<your model name>"
When the model gets updated, it should use the new one.
clearml-agent can listen to one or more queues; once a task is enqueued to one of those queues, the clearml-agent will pull it and run it.
You can allocate your resources to the clearml-agent (like https://clear.ml/docs/latest/docs/clearml_agent#allocating-resources ) and you can prioritize your queues (if you have more than one - https://clear.ml/docs/latest/docs/clearml_agent#queue-prioritization )
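For example (a sketch; the queue names and GPU index are placeholders): listing queues in order makes the first one higher priority, per the queue-prioritization docs above.

```shell
# Listen to two queues; "important_jobs" is pulled from first (higher priority).
# Queue names and the GPU index are placeholders.
clearml-agent daemon --queue important_jobs default --gpus 0
```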
Hi UnsightlySeagull42, I didn't really get your setup, do you have more than one CUDA version on your system?
I can’t use Docker because I need 4 different Tensorflow versions and my company is not allowed to use conda.
You can use Docker without conda
Hi NuttyOctopus69,
I'm getting the same; I suspect it's some pypi server issue (from https://status.python.org/ ).
You can install the latest version from GitHub with pip install git+
Seems like an issue with pypi; when trying to install a package from the pypi server, it fails
for furl.
Try with: pip install git+https://github.com/gruns/furl
pip install clearml
works for me now, if you'd like to try…
From the ClearML UI you can just change the value under the BASE DOCKER IMAGE section to your image.
Maybe I missed something, what's your flow? Do you have some kind of “template task” that you clone?
And how do i pass in new env parameters?
If you don't set a value in the task for BASE DOCKER IMAGE, it will use the default; if you are setting the BASE DOCKER IMAGE, add the env vars to it too:
dockerrepo/mydocker:custom --env GIT_SSL_NO_VERIFY=true
Hi SubstantialElk6 , does the task have a docker image too (you can check it in the UI)?
where task is the value returned from your Task.init call:
task = Task.init(project_name=<YOUR PROJECT NAME>, task_name=<YOUR TASK NAME>)
I can help you with that 🙂
task.set_base_docker("dockerrepo/mydocker:custom --env GIT_SSL_NO_VERIFY=true")
In the task you cloned, do you have torch as part of the requirements?
according to this part
Applying uncommitted changes Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
I don't see the requirements change; let's try without the cache, can you clear it (the ClearML cache dir is located at ~/.clearml
)?