hi AbruptHedgehog21
which s3 service provider will you use ?
do you have a precise list of the variables you need to add to the configuration to access your bucket ? 🙂
hi RobustRat47
the field name is active_duration, and it is expressed in seconds
to access it for the task my_task , use my_task.data.active_duration
check that your tasks are enqueued in the queue the agent is listening to.
from the webUI, in your step's task, check the default_queue in the configuration section.
when you fire the agent you should have a log that also specifies which queue the agent is assigned to
finally, in the webApp, you can check the Workers & Queues section. There you could see the agent(s), the queue they are listening to, and what tasks are enqueued in what queue
hi ScaryBluewhale66
Can you please elaborate ? I am not sure that I get your question. What do you need to compare ?
hi GentleSwallow91
Concerning the warning message, there is an entry in the FAQ. Here is the link :
https://clear.ml/docs/latest/docs/faq/#resource_monitoring
We are working on reproducing your issue
Do you think that you could send us a bit of code in order to better understand how to reproduce the bug ? In particular about how you use dotenv...
So far, something like this is working normally, with both clearml 1.3.2 & 1.4.0 :
`
import os
from clearml import Task, StorageManager

task = Task.init(project_name=project_name, task_name=task_name)
img_path = os.path.normpath("**/Images")
img_path = os.path.join(img_path, "*.png")
print("==> Uploading to Azure")
remote_url = "azure://****.blob.core.windows.net/*****/"
StorageManager.uplo...
Have you tried setting your agent to conda mode ( https://clear.ml/docs/latest/docs/clearml_agent#conda-mode ) ?
Hey GentleSwallow91
The bug has been corrected in the new version. Please update your clearml 🙂
hey @<1523704089874010112:profile|FloppyDeer99>
did you manage to get rid of your issue ?
thanks
hi FiercePenguin76
Can you also send your clearml packages versions ?
I would like to sum your issue up, so that you can check I got it right :
you have a task that has a model, that you use to make some inference on a dataset
you clone the task, and would like to make inferences on the dataset, but with another model
the problem is that you have a cloned task with the first model....
How have you registered the second model ? Also can you share your logs ?
No, it is supposed to have its status updated automatically. We may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here ?
I will check that
Concerning the snippet example, here is the link :
https://github.com/allegroai/clearml/issues/682
yes it is 🙂 did you manage to upgrade ?
We also brought a lot of new features to datasets in version 1.6.2 !
Hi UnevenDolphin73
I have reproduced the error :
Here is the behavior of that line, according to the version : StorageManager.download_folder('s3://mybucket/my_sub_dir/files', local_dir='./')
1.3.2 downloads the my_sub_dir content directly into ./
1.4.x downloads the my_sub_dir content into ./my_sub_dir/ (so the dotenv module can't find the file)
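If you need one script to work with both versions, you can compute where a file will actually land before reading it. A minimal sketch - the helper name and version handling are mine, not a clearml API, they just mirror the behavior described above :

```python
import os

def downloaded_file_path(local_dir, sub_dir, filename, clearml_version):
    # hypothetical helper mirroring the observed behavior:
    # 1.4.x recreates the remote sub dir under local_dir, 1.3.2 does not
    major, minor = clearml_version[:2]
    if (major, minor) >= (1, 4):
        return os.path.join(local_dir, sub_dir, filename)
    return os.path.join(local_dir, filename)

print(downloaded_file_path('./', 'my_sub_dir', '.env', (1, 4, 0)))  # ./my_sub_dir/.env on linux
print(downloaded_file_path('./', 'my_sub_dir', '.env', (1, 3, 2)))  # ./.env on linux
```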
please keep in touch if you still have issues, or if this helps you to solve the problem
Hi,
We are going to try to reproduce this issue and will update you asap
you can try something like this - which reproduces the gui behavior
` import math
import datetime
from clearml.backend_api.session.client import APIClient

client = APIClient()
q = client.queues.get_all(name='queue-2')[0]
n = math.floor(datetime.datetime.now().timestamp())
res = client.queues.get_queue_metrics(from_date=n-1, to_date=n, interval=1, queue_ids=[q.id]) `Be careful though of the null values in the results. They happen when there are more values in the res than intervals between start...
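One way to deal with those nulls is simply to drop them before plotting. A small sketch, assuming the response exposes parallel dates / values lists per queue (the exact response shape may differ) :

```python
def drop_null_points(dates, values):
    # keep only the (date, value) pairs where a value was actually recorded
    kept = [(d, v) for d, v in zip(dates, values) if v is not None]
    return [d for d, _ in kept], [v for _, v in kept]

dates, values = drop_null_points([10, 20, 30], [0.5, None, 0.7])
print(dates, values)  # [10, 30] [0.5, 0.7]
```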
can you share with me an example or part of your code ? I might be missing something in what you intend to achieve
Hi UnevenDolphin73
I am going to try to reproduce this issue, thanks for the details. I'll keep you updated
hey UnevenDolphin73
you can mount your s3 bucket in a local folder and provide that folder to your clearml.conf file.
I used s3fs to mount my s3 bucket as a folder. Then I modified agent.venvs_dir and agent.venvs_cache
(As mentioned here https://clear.ml/docs/latest/docs/clearml_agent#environment-caching )
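For reference, the relevant clearml.conf section would look something like this - the mount point /mnt/s3_bucket is an assumption, use wherever you mounted the bucket with s3fs :

```
agent {
    # both paths live on the s3fs-mounted bucket (assumed mount point)
    venvs_dir: /mnt/s3_bucket/venvs
    venvs_cache: {
        path: /mnt/s3_bucket/venvs_cache
    }
}
```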
hi RattyLouse61
here is a code example, i hope it will help you to understand better the backend_api.
` from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import events

task = Task.get_task('xxxxx', 'xxxx')
session = Session()
res = session.send(events.GetDebugImageSampleRequest(
    task=task.id,
    metric=title,   # the title of the debug image
    variant=series  # the series of the debug image
))
print(res.response_data) `
hi PanickyMoth78
from within your function my_pipeline_function here is how to access the project and task names :
task = Task.current_task()
task_name = task.name
full_project_path = task.get_project_name()
project_name = full_project_path.split('/')[0]
Note that you could also use the full_project_path to get both project and task names :
task_name = full_project_path.split('/')[-1]
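The splitting itself is plain string manipulation, so you can check it without a server. A sketch assuming the 'project/task' path format described above (the helper name is mine) :

```python
def split_full_path(full_project_path):
    # first component = project, last component = task (per the format above)
    parts = full_project_path.split('/')
    return parts[0], parts[-1]

project, task_name = split_full_path('my_project/my_pipeline_task')
print(project, task_name)  # my_project my_pipeline_task
```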
Interesting. We are opening a discussion to weigh the pros and cons of those different approaches - I'll of course keep you updated
Could you please open a github issue about that topic ? 🙏
http://github.com/allegroai/clearml/issues
AverageRabbit65
Any tool that lets you edit a text file. I personally use nano . Note that the indentation is not crucial, so any tool, either GUI or CLI, will be ok
thanks for all those precisions. I will try to reproduce and keep you updated 🙂
hey
You have 2 options to retrieve a dataset : by its id, or by the project_name AND dataset_name - those two work together, you need to pass both of them !
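That id-or-both rule can be sketched as a small validation helper - hypothetical, just to illustrate the constraint (the real call is Dataset.get) :

```python
def dataset_lookup_is_valid(dataset_id=None, dataset_project=None, dataset_name=None):
    # either an id alone, or project AND name together
    if dataset_id is not None:
        return True
    return dataset_project is not None and dataset_name is not None

print(dataset_lookup_is_valid(dataset_id='abc123'))                       # True
print(dataset_lookup_is_valid(dataset_project='proj'))                    # False
print(dataset_lookup_is_valid(dataset_project='proj', dataset_name='d'))  # True
```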
You can force the agent to install only the packages that you need using a requirements.txt file. Type in what you want the agent to install (pytorch and, optionally, clearml). Then call that function before Task.init :
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
you can freeze your local env and thus get all the installed packages. With pip (on linux) it would be something like this :
pip freeze > requirements.txt
(doc here https://pip.pypa.io/en/stable/cli/pip_freeze/ )
Hi
could you please share the logs for that issue (without the cred 🙂 ) ?