Hey Yossi
Do you want to erase it from the UI?
You first have to erase the dataset/project content: select it and archive it. The archive is basically the recycle bin! Then you can easily erase the empty dataset/project.
hi AbruptHedgehog21
clearml-serving will use your clearml.conf file
Configure it to access your S3 bucket - that is where the bucket, host, etc. go.
DepravedSheep68 you could also try to add the port to your URI:
output_uri: "s3://......:port"
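For example, a minimal sketch of pointing a task's output to an S3-compatible endpoint with an explicit port (the host, port and bucket names are placeholders for your own setup):
```
from clearml import Task

# Placeholders: replace host, port and bucket with your own values
task = Task.init(
    project_name="examples",
    task_name="s3 output",
    output_uri="s3://my-storage-host:9000/my-bucket",
)
```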
What bothers me is that it worked until yesterday, and you didn't change your code. So the only thing I can think of is a bug introduced with the new SDK version that was released yesterday. I am investigating with the SDK team, and I will keep you updated asap! 🙂
But that still doesn't explain why it was working 2 days ago and now it is not!
I am investigating, and will keep you updated.
Hmm, interesting. Have you updated your clearml to the latest version? We have released new versions these past few days.
Is it a task you are trying to access, or a dataset? If you need to retrieve a task, you should use Task.get_task().
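Something like this, for example (the id and names are just placeholders):
```
from clearml import Task

# Retrieve an existing task either by id ...
task = Task.get_task(task_id="<your_task_id>")
# ... or by project / name
task = Task.get_task(project_name="examples", task_name="my experiment")
```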
If I do this: ds = Dataset.create(dataset_project='datasets', dataset_name='dataset_0')
it will result in the creation of 2 experiments:
- results page: the task that corresponds to the script that launched the dataset creation - it will be in PROJECTS/datasets/.datasets/dataset_0
- dataset page: the dataset itself - it will be in DATASETS/dataset_0
hi SparklingElephant70
I was wondering about this datasets / .datasets / None
This None is weird: if you look at the example I sent, you should see the dataset name there. Just to be sure, can you confirm that when you fire the command line you pass both dataset_project AND dataset_name?
Hi SparklingElephant70
The function doesn't seem to find any dataset whose project_name matches your request.
Some more detailed code on how you create your dataset, and how you try to retrieve it, could help me to better understand the issue 🙂
Hope it will help 🤞 . Do not hesitate to ask if the error persists
hi TenderCoyote78
Can you please give some more detail about what you intend to achieve? I am afraid I don't fully understand your question.
Can you check that your server ports are open?
Btw Ofir, can you send me your different clearml package versions?
hi ApprehensiveSeahorse83
I am also working on your issue. It seems that there is some wrong behavior here, so we need to dig a bit deeper to understand what's happening. We will keep you updated asap, thanks for your contribution! 🙏
report_scalar lets you manually report a scalar series; it is the dedicated function for that. There are other ways to report a scalar, for example through TensorBoard - in that case you would report to TensorBoard, and ClearML will automatically pick up and report the values.
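A minimal sketch of manual scalar reporting (project/task names and values are just placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()

# Report one point of the "train" series under the "loss" graph
logger.report_scalar(title="loss", series="train", value=0.42, iteration=1)
```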
hey ApprehensiveSeahorse83
Can you please check that the trigger is correctly added? Simply retrieve the return value of add_task_trigger:
res = trigger.add_task_trigger(.....)
print(f'Trigger correctly added ? {res}')
hi NervousFrog58
Can you share some more details with us please ?
Do you mean that when an experiment fails, you would like a snippet that resets and relaunches it, the way you do through the UI?
Your ClearML package versions and your logs would be very useful too 🙂
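If that is what you mean, a minimal sketch could look like this (the task id and queue name are placeholders):
```
from clearml import Task

task = Task.get_task(task_id="<failed_task_id>")
task.reset()                              # clear the previous run's outputs
Task.enqueue(task, queue_name="default")  # send it back to an agent queue
```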
What do you mean? The average time that tasks are waiting before being executed by an agent, that is to say the average difference between enqueue time and start time?
Hey UnevenDolphin73
Is there any particular reason not to create the dataset? I mean, you need to use it in different tasks, so it could make sense to create it so that it exists on its own, and then to use it at will in any task, by simply retrieving its id (using Dataset.get).
Makes sense?
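A minimal sketch of that pattern (project/dataset names and the data path are placeholders):
```
from clearml import Dataset

# Create the dataset once, so it exists on its own
ds = Dataset.create(dataset_project="datasets", dataset_name="dataset_0")
ds.add_files("data/")
ds.upload()
ds.finalize()

# Later, in any task, retrieve it by id (or by project/name)
ds = Dataset.get(dataset_id=ds.id)
local_path = ds.get_local_copy()
```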
Hello Ofir,
in general, the agent parses the script and finds all the imports through an intelligent analysis (it installs only the ones you actually use/need).
It then builds an environment (docker, venv/pip, etc.) where it installs them and runs the script.
You can also force a package / package version.
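For instance, a small sketch of forcing a package version before Task.init (the package and version are just examples):
```
from clearml import Task

# Force a specific package / version for the agent to install
Task.add_requirements("pandas", "1.5.3")
task = Task.init(project_name="examples", task_name="forced requirements")
```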
For pipelines (and the different ways to implement them), it is a bit different.
In order to answer you precisely, we would need a bit more detail about what you need to achieve:
Is it a pipeline that ...
hey Ofir
did you try to put the repo in the decorator where you need the import?
if you can send me some code to illustrate what you are doing, it could help me reproduce the issue
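Just to illustrate what I mean, a hedged sketch (the repo URL, branch and packages are assumptions about your setup):
```
from clearml import PipelineDecorator

# Hypothetical component that needs code from a specific repository
@PipelineDecorator.component(
    repo="https://github.com/your-org/your-repo.git",  # assumption: your repo URL
    repo_branch="main",
    packages=["pandas"],
)
def load_data(path: str):
    import pandas as pd
    return pd.read_csv(path)
```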
you can try something like this - it reproduces the GUI behavior:
```
import math
from datetime import datetime

from clearml.backend_api.session.client import APIClient

client = APIClient()
# grab the queue object so we can use its id
q = client.queues.get_all(name='queue-2')[0]
# current time as a whole-second unix timestamp
n = math.floor(datetime.now().timestamp())
res = client.queues.get_queue_metrics(from_date=n - 1, to_date=n, interval=1, queue_ids=[q.id])
```
Be careful though of the null values in the results: they appear when there are fewer values in the res than intervals between the start...
hi RobustRat47
the field name is active_duration, and it is expressed in seconds
to access it for the task my_task, do my_task.data.active_duration
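For example (the task id is a placeholder):
```
from clearml import Task

my_task = Task.get_task(task_id="<your_task_id>")
# active_duration is expressed in seconds
print(my_task.data.active_duration)
```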
hey
"when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty"
You are right, I think there is a bug here. We will release a fix asap 🙂
Hey UnevenDolphin73
When you use the parameter "use_current_task", the dataset and the resulting task will be the same (same id). So to retrieve this dataset for use in another task, use Task.get(...) to retrieve its id.
Then, when you need it in another task, simply retrieve it from within that task using Dataset.get(dataset_id=...)
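A short sketch of that flow, assuming the use_current_task parameter of Dataset.create (names are placeholders):
```
from clearml import Task, Dataset

task = Task.init(project_name="examples", task_name="create dataset")
# With use_current_task=True, the dataset and the current task share the same id
ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset", use_current_task=True)

# Later, from another task, retrieve it with that same id
ds = Dataset.get(dataset_id=task.id)
```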
hey RoughTiger69
Can you describe how you are setting up the environment variable, please?
Setting that flag will skip the virtual env installation: the agent will use your environment and the packages installed in it.
Using Task.add_requirements('requirements.txt') allows you to add specific packages at will. Note that this function will still be taken into account even with the flag CLEARML_AGENT_SKIP_PIP_VENV_INSTALL set.
Great to hear that the issue is solved. Btw, sorry for the time it took me to get back to you.
can you please try to replace client.queues.get_all with client.queues.get_default?
this is a specific function for retrieving the default queue 🙂
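A small sketch, assuming the default queue call returns the queue entity with its id and name:
```
from clearml.backend_api.session.client import APIClient

client = APIClient()
q = client.queues.get_default()
print(q.id, q.name)
```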
can you try to create an empty text file and provide its path to Task.force_requirements_env_freeze(requirements_file=your_empty_txt_file)?