Oops, please pardon me, I got confused — this answer is not related to your issue. My fault 🙏
hey
"when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty"
you are right, i think there is a bug here. We will release a fix asap 🙂
can you please try replacing client.queues.get_all with client.queues.get_default ?
this is a specific function for retrieving the default queue 🙂
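For reference, a minimal sketch of the swap (assuming the clearml package is installed and a valid clearml.conf points at your server):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# get_default returns the server's single default queue,
# instead of the full list returned by get_all
default_queue = client.queues.get_default()
print(default_queue.id, default_queue.name)
```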
hey SteepDeer88
did you manage to get rid of that issue, or do you still need support on it?
If the data is updated in the same local / network folder structure, which serves as the dataset's single point of truth, you can schedule a script that uses the dataset sync functionality to update the dataset based on the modifications made to the folder.
You can then modify precisely what you need in that structure, and get a new updated dataset version
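As a sketch, the scheduled script could look something like this (the project/dataset names and the folder path are placeholders, assuming the clearml SDK's Dataset API):

```python
from clearml import Dataset

# open a new dataset version on top of the current latest one
parent = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
dataset = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    parent_datasets=[parent.id],
)

# sync_folder diffs the folder (the single point of truth) against the
# parent version and registers the added / modified / removed files
dataset.sync_folder(local_path="/path/to/folder")

dataset.upload()    # upload the changed files
dataset.finalize()  # close this new version
```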
when you spin up a container, you map a host port to a container port using the -p parameter: docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest
Here you map your computer's port 8080 to the container's port 8080. If your port 8080 is already in use, you can use another one, for example -p 8081:8080
Hi WickedElephant66
When you are in the Projects section of the WebApp (second icon on the left), enter either "All Experiments" or any project you want to access. At the top center is the Models section. You can find the URL the model can be downloaded from in the details section
great to hear that the issue is solved. btw sorry for the time it took me to come back to you
hi TenderCoyote78
can you please give some more details about what you intend to achieve? I am afraid I don't quite understand your question
Do you think that you could send us a bit of code in order to better understand how to reproduce the bug ? In particular about how you use dotenv...
So far, something like this is working normally, with both clearml 1.3.2 & 1.4.0:
```
task = Task.init(project_name=project_name, task_name=task_name)
img_path = os.path.normpath("**/Images")
img_path = os.path.join(img_path, "*.png")
print("==> Uploading to Azure")
remote_url = "azure://****.blob.core.windows.net/*****/"
StorageManager.uplo...
```
Hi WittyOwl57 ,
The function is :
task.get_configuration_object_as_dict(name="name")
with task being your Task object.
You can find a bunch of pretty similar functions in the docs. Have a look here: https://clear.ml/docs/latest/docs/references/sdk/task#get_configuration_object_as_dict
hi ReassuredTiger98
Can you give some details on which function you are calling for deleting please ?
hey UnevenDolphin73
you can mount your s3 bucket in a local folder and point your clearml.conf file at that folder.
I used s3fs to mount my s3 bucket as a folder, then modified agent.venvs_dir and agent.venvs_cache
(as mentioned here https://clear.ml/docs/latest/docs/clearml_agent#environment-caching )
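For example, if the bucket is mounted with s3fs at ~/s3_mount, the relevant clearml.conf section could look like this (the paths are placeholders):

```
agent {
    # both paths live inside the s3fs-mounted bucket
    venvs_dir: ~/s3_mount/venvs-builds
    venvs_cache: {
        path: ~/s3_mount/venvs-cache
    }
}
```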
No, it is supposed to have its status updated automatically. We may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here?
AverageRabbit65
Any tool that lets you edit a text file. I personally use nano. Note that the indentation is not crucial, so any tool, either GUI or CLI, will be fine
Hi CourageousKoala93
Yes, you can use Google Storage as storage. Have a look at the docs: https://clear.ml/docs/latest/docs/integrations/storage/#configuring-google-storage
Basically, this part of the doc will show you how to set the credentials into the configuration file.
You will also have to specify the destination URI by adding output_uri="path to my bucket" to Task.init()
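A sketch of what that looks like (the project, task and bucket names are placeholders):

```python
from clearml import Task

# output_uri makes models / artifacts upload to the bucket
task = Task.init(
    project_name="my_project",
    task_name="my_task",
    output_uri="gs://my_bucket/my_folder",
)
```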
Do not hesitate to ask for clarification if needed
If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script
I was also speaking with another user this morning who has the very same issue
can you give me some more details about your config, and share your error logs please ?
hey ZanyPig66
Have you set development.default_output_uri in the configuration file? When you init your task, add the parameter output_uri=True.
You can bind a local volume to the docker container and make the output_uri point to it
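A sketch of that setup (the host path and image name are placeholders):

```shell
# bind the host folder /data/model_outputs to /model_outputs in the container
docker run -v /data/model_outputs:/model_outputs my_image:latest

# then, in the task code, point the output at the bound path:
#   Task.init(..., output_uri="/model_outputs")
```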
hey SoggyBeetle95
You're right that's an error on our part 🙂
Could you please open an issue in https://github.com/allegroai/clearml-server/issues so we can track it?
We'll update there once a fix for that issue is released! 😄
Hey Atalya 🙂
Thanks for your feedback. This is indeed a good feature to think about.
So far there is no ordering other than alphabetical. Could you please create a feature request on GitHub?
Thanks
Hi UnevenDolphin73
I have reproduced the error.
Here is the behavior of that line, according to the version: StorageManager.download_folder("s3://mybucket/my_sub_dir/files", local_dir='./')
1.3.2 downloads the my_sub_dir content directly into ./
1.4.x downloads the my_sub_dir content into ./my_sub_dir/ (so the dotenv module can't find the file)
please keep me posted if you still have some issues, or if this helped you solve it
what versions do you have for the clearml packages ?
I'll check that
Concerning the snippet example, here is the link :
https://github.com/allegroai/clearml/issues/682
This means that the function will create a directory structure at local_folder mirroring the minio server's structure. That is to say, it will create directories corresponding to the buckets there, hence your clearml directory, which is the bucket the function found in the server root
I have found some threads that deal with your issue and propose interesting solutions. Can you have a look at these?
hi RobustRat47
the field name is active_duration, and it is expressed in seconds
to access it for the task my_task , do my_task.data.active_duration