task.get_tags()
task.set_tags()
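For reference, a minimal sketch of using the two (the project/task names and the tag value are made up):

from clearml import Task

task = Task.init(project_name="examples", task_name="tags demo")
tags = task.get_tags()              # returns the current list of tags
task.set_tags(tags + ["reviewed"])  # replaces the task's tags with the new list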
Notice the error code: Action failed <400/401: tasks.create/v1.0 (Invalid project id: id=first_attempt)>
If that is the case, the project ID is incorrect (the project ID is not the same as the project name).
It seems like the server returned a 400 error; verify that you are working with your own trains-server and not the demo server :)
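For reference, the low-level tasks.create call expects the project ID, not the name; if I'm not mistaken you can look the ID up from the SDK first. A minimal sketch (the project name is taken from the error above):

from clearml import Task

project_id = Task.get_project_id(project_name="first_attempt")
print(project_id)  # None means no project with that name exists on the server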
Thanks BroadSeaturtle49
I think I was able to locate the issue: != breaks the PyTorch lookup.
I will make sure we fix it ASAP and release an RC.
BTW: how come 0.13.x has no Linux x64 support? Same for 0.12.x:
https://download.pytorch.org/whl/cu111/torch_stable.html
Can you also share the full log? The numbers seem off (and ClearML cannot actually "invent" those numbers, they are coming from somewhere...)
AbruptWorm50 can you send the full image? (the X axis is missing from the graph)
Hi AbruptWorm50
I am currently using the repo cache,
What do you mean by "using the repo cache"? This is transparent; the agent does that, and users should not need to access that folder.
I also looked at the log you sent. Why do you think it is re-downloading the repo?
I created my own docker image with a newer python and the error disappeared
I'm not sure I understand how that solved it?!
UpsetTurkey67 are you saying there is a symlink in the original repository, and when it is copied, the symlink breaks?
OddShrimp85 you can see the full configuration at the top of the Task log. What do you have there? Also, what is the clearml Python package version?
Hi OddShrimp85
If you pass output_uri=True to Task.init, it will upload the model automatically, or, as you said, you can do it manually with the OutputModel class.
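A minimal sketch of both options (the project/task names and the weights filename are made up):

from clearml import Task, OutputModel

# automatic: output_uri=True uploads stored models to the default files server
task = Task.init(project_name="examples", task_name="model upload", output_uri=True)

# manual: register and upload a weights file explicitly
model = OutputModel(task=task)
model.update_weights("model.pt")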
Hi OddShrimp85
Is this the right place to ask about clearml-serving?
It is 🙂
I did not manage to get clearml-serving to work with my own clearml server and Triton setup.
Yes it should have been updated already, apologies.
Until we manage to sync the docs, what seems to be the issue? Maybe we can help here.
Where is it persisted? If I have multiple sessions I want to persist, is that possible?
On the file server. Yes, it should support that; you can specify --continue-session to continue a previously used session.
Notice it does delete older "snapshots" (i.e. the previous workspace) when you are continuing a session (use --disable-session-cleanup to disable that).
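For example (the session ID placeholder stands for the previously created session's task ID):

clearml-session --continue-session <previous_session_id> --disable-session-cleanup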
Sorry ScaryLeopard77 I missed the reply,
the tutorial in the readme of clearml-serving repo doesn't mention it though. Where should I set it?
Oh dear... you are right (I think it was there in previous versions). See clearml-serving --help:
https://github.com/allegroai/clearml-serving/blob/ce6ec847b1e01c6f5bf35d638e6ceb8148db8a7a/clearml_serving/main.py#L142
This is the equivalent of what is created here in the example:
https://github.com/allegroai/clearml-serving/blob/ce6ec847b...
No worries, just wanted to make sure it doesn't slip away 🙂
Thanks ShortElephant92! The PR looks good, I'll ask the guys to take a look.
Hi EnthusiasticCow4
Oh dear, I think this argument is not exposed 🙂
- You can open a GH issue
- If you want to add a PR, this is very simple:
    include_archived=False,
):
    if not include_archived:
        system_tags = ["__$all", cls.__tag, "__$not", "archived"]
    else:
        system_tags = [cls.__tag]
    ...
    system_tag...
that using a "local" package is not supported
I see. I think the issue is actually pulling the git repo of the second local package (assuming you add the requirement manually, with Task.add_requirements), is that correct?
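For reference, a minimal sketch of adding a requirement manually (the package name and version are made up; note it must be called before Task.init):

from clearml import Task

Task.add_requirements("my_local_package", "0.1.0")
task = Task.init(project_name="examples", task_name="local package demo")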
Hi EnthusiasticCow4
will ClearML remove the corresponding folders and files on S3?
Yes, and it will ask you for credentials as well. I think there is a way to configure it so that the backend has access to the storage (somehow), but this breaks the "federated" approach.
Hmm, so the SaaS service? And when you delete (not archive) a Task, it does not ask for S3 credentials when you select "delete artifacts"?
Hmm, you can delete the artifact with: task._delete_artifacts(artifact_names=['my_artifact'])
However this will not delete the file itself.
To delete the file itself I would do:

from clearml.storage.helper import StorageHelper

remote_file = task.artifacts['delete_me'].url
h = StorageHelper.get(remote_file)
h.delete(remote_file)
task._delete_artifacts(artifact_names=['delete_me'])
Maybe we should have a proper interface for that? WDYT? What's the actual use case?
VexedCat68 the remote checkpoints (i.e. Models) represent the local storage, so if you overwrite the files locally, that is exactly what will happen in the backend. So the following should work (and keep only the last 5 checkpoints):

epochs += 1
torch.save(model, "model_{}.pt".format(epochs % 5))
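In context, a minimal runnable sketch of the rotation (the model and training step are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder model
for epoch in range(20):
    # ... training step would go here ...
    # overwriting the same 5 filenames keeps only the last 5 checkpoints in the backend
    torch.save(model, "model_{}.pt".format(epoch % 5))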
Regarding deleting / getting models: Model.remove(task.models['output'][-1])
What is the Model URL?
print(model.url)
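Putting the two together, a minimal sketch (assumes the task has at least one output model; the task ID is a placeholder):

from clearml import Task, Model

task = Task.get_task(task_id="<task_id>")
last_model = task.models['output'][-1]
print(last_model.url)     # where the weights file is stored
Model.remove(last_model)  # removes the model entry from the backend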
VexedCat68
delete the uploaded file, or the artifact from the Task?
SmugLizard25 are you saying that with the latest version it does not work?
Hi VexedCat68
txt file or pkl file?
If this is a string, it just stores it (not as a file; this is considered a "link"):
https://github.com/allegroai/clearml/blob/12fa7c92aaf8770d770c8ed05094e924b9099c16/clearml/binding/artifacts.py#L521
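For example, a minimal sketch of the distinction (the names and paths are made up):

from pathlib import Path
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")
task.upload_artifact('my_data', artifact_object=Path('/tmp/data.pkl'))  # a Path object: the file is uploaded
task.upload_artifact('my_link', artifact_object='/tmp/data.pkl')        # a plain string is stored as a link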