Hmm interesting. Have you updated your clearml to the latest version? We released new versions in the last few days
is it a task or a dataset you are trying to access? if you need to retrieve a task, you should use Task.get_task()
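for illustration, a minimal sketch of both retrieval calls (project / task / dataset names here are just placeholders):
`
from clearml import Task, Dataset

# retrieve an existing task by project and task name
task = Task.get_task(project_name='my_project', task_name='my_task')

# retrieve an existing dataset the same way, via Dataset.get
ds = Dataset.get(dataset_project='datasets', dataset_name='dataset_0')
`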
if I do this: ds = Dataset.create(dataset_project='datasets', dataset_name='dataset_0')
it will result in the creation of 2 experiments:
results page: the task that corresponds to the script that launched the dataset creation - it will be in PROJECTS/datasets/.datasets/dataset_0
dataset page: the dataset itself - it will be in DATASETS/dataset_0
hi SparklingElephant70
I was wondering about this datasets / .datasets / None
this None is weird: if you look at the example I sent, you should see the dataset name here. Just to be sure, can you confirm that when you fire the command line you pass both dataset_project AND dataset_name ?
Hi SparklingElephant70
The function doesn't seem to find any dataset whose project_name matches your request.
Some more detailed code on how you create your dataset, and how you try to retrieve it, could help me to better understand the issue 🙂
Hope it will help 🤞 . Do not hesitate to ask if the error persists
you also have to set your agent to use your partner's credentials
simply run:
clearml-agent init
and paste your partner's credentials
hey GiganticMole91
you can configure the logger to use your bucket as the default upload destination:
task.get_logger().set_default_upload_destination('s3://xxxxx')
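for context, a minimal sketch of where that call would sit in a script (the project, task and bucket names are just placeholders):
`
from clearml import Task

task = Task.init(project_name='my_project', task_name='my_task')
# hypothetical bucket/prefix - replace with your own
task.get_logger().set_default_upload_destination('s3://my-bucket/clearml')
`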
Hey UnevenDolphin73
I have tried to reproduce the issue but with no success. I managed to auto-report any graph designed according to your description - values between [0,50] and sudden extreme values. So far everything seems to be ok on my side. Have you found anything new regarding this issue? Could you send me more details on the graph whose reporting hangs?
Thanks
🙂
here is a bit of code that seems to do the job. have a look
`
# get the task whose console log we want to download
wrapper = Task.get_task(project_name="***", task_name="***")
# build the request and send it through the task's session
req_obj = events.DownloadTaskLogRequest(wrapper.id)
res = wrapper.session.send_request(
    service=req_obj._service,
    action=req_obj._action,
    version=req_obj._version,
    json=req_obj.to_dict(),
    method=req_obj._method,
    async_enable=False,
    headers=None,
)
print(res.json()) `
hey SoggyBeetle95
You're right that's an error on our part 🙂
Could you please open an issue in https://github.com/allegroai/clearml-server/issues so we can track it?
We'll update there once a fix for that issue is released! 😄
it works locally and not on a remote execution: can you check that the machine the agent is executed from is correctly configured? The agent there needs to be provided with the correct credentials. Also, the autolog uses the file extension to determine what it is reporting - can you try to use the regular .pt extension ?
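for illustration, a minimal sketch of saving a checkpoint with the conventional extension so the framework autologging can pick it up (the model and path here are just placeholders):
`
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder model
# saving with the standard .pt extension
torch.save(model.state_dict(), 'model.pt')
`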
When the pipeline or any step is executed, a task is created, and its name will be taken from the decorator parameters. Additionally, for a step, the name parameter is optional: if not provided, the function name will be used instead.
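for reference, a minimal sketch of where those name parameters come from (pipeline/project names here are just placeholders):
`
from clearml import PipelineDecorator

@PipelineDecorator.component()  # no name given: the function name "step_one" is used
def step_one():
    return 42

@PipelineDecorator.pipeline(name='my_pipeline', project='my_project', version='1.0.0')
def run_pipeline():
    step_one()
`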
It seems to me that your script fails to create the pipeline controller task because it fails to pull the name parameter, which is weird... weird because in the last error line we can see that name!
can you provide some more details please? Do you intend to store your artefacts locally or remotely?
Does the manual reporting also fail?
If you could also give your clearml packages versions it could help 🙂
Hi CrabbyKoala94
I am working on your issue, I will update you asap. Thanks
Hey LuckyKangaroo60
So far there isn't a CLI command to check the conf file format: if there is an error, it is detected at the beginning of the execution and the program fails. Here is what I use as a conf for accessing my local docker-based MinIO:
`
s3 {
    # S3 credentials, used for read/write access by various SDK elements
    # Default, used for any bucket not specified below
    region: ""
    # Specify explicit keys
    key: "david"
    ...
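for context, a typical MinIO-style credentials section in clearml.conf looks roughly like this (host, keys and secrets here are placeholders, not my actual values):
`
sdk {
    aws {
        s3 {
            key: "minio-access-key"
            secret: "minio-secret-key"
            region: ""
            credentials: [
                {
                    host: "localhost:9000"
                    key: "minio-access-key"
                    secret: "minio-secret-key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
`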
Hey Yossi
Do you want to erase it from the UI?
You first have to erase the dataset/project content: select it and archive it. The archive is more or less a recycle bin! Then you can easily erase the empty dataset/project
Hi VexedPeacock35
can you share some more details about what occurs?
What are you trying to do (or to be precise, when does this error appear)?
What are your package versions (clearml, and server if you are self-hosted)?
If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script
oops, please pardon me, I mixed things up: this answer is not related to your issue. my fault 🙏
Yep sorry, I didn't paste the import line. You should add something like this:
from clearml.backend_api.services import events
🙏
great to hear that the issue is solved. btw sorry for the time it took me to get back to you
hi SoggyBeetle95
I reproduced the issue, could you confirm that this is the one you are hitting?
here is what i did :
I declared some secret env vars in the agent section of clearml.conf and used extra_keys to have them hidden in the console. They are indeed hidden there, but in the execution -> container section they appear in clear text.
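for reference, this is roughly the agent configuration I mean (the variable name is just a placeholder):
`
agent {
    hide_docker_command_env_vars {
        enabled: true
        extra_keys: ["MY_SECRET_TOKEN"]
    }
}
`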
hi DizzyHippopotamus13
Yes you can generate a link to the experiments using this format.
However I would suggest using the SDK, for more safety:
task = Task.get_task(project_name=xxx, task_name=xxx)
url = task.get_output_log_web_page()
Or in one line:
url = Task.get_task(project_name=xxx, task_name=xxx).get_output_log_web_page()
hi RattyLouse61
here is a code example, I hope it will help you better understand the backend_api.
` from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import events

task = Task.get_task('xxxxx', 'xxxx')
title = 'xxxx'    # the metric title of the debug image
series = 'xxxx'   # the variant / series name
session = Session()
res = session.send(events.GetDebugImageSampleRequest(
    task=task.id,
    metric=title,
    variant=series)
)
print(res.response_data) `
hope it will help. keep me informed 🙂