Feels like a cookie issue to me
Hi @<1859043976472956928:profile|UpsetWhale84> , output models are registered in the model repository.
It really depends on the use case regarding artifacts vs datasets. If the outputs are relevant only in the context of the pipeline then use artifacts, otherwise use datasets
And what was the result from 19:15 yesterday? The 401 error? Please note that's a different set of credentials
Didn't have a chance to try and reproduce it, will try soon 🙂
That's weird. Did you do docker-compose down and up properly?
I think the tasks.get_all call should have you covered for extracting all the information you need.
The request body should look something like this:
{
    "id": [],
    "scroll_id": "b77a32d585604b098f685b00f30ba2c2",
    "refresh_scroll": true,
    "size": 15,
    "order_by": [
        "-last_update"
    ],
    "type": [
        "__$not",
        "annotation_manual",
        "__$not",
        "annotation",
        "__$not",
        "dataset_i...
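To illustrate, here is a minimal sketch of building and sending a body like that to the tasks.get_all endpoint using only the standard library. The server URL and port 8008 endpoint are assumptions based on a default ClearML deployment, and `build_body` is a hypothetical helper; adjust both to your setup:

```python
import json
import urllib.request

# Assumed URL: a default ClearML API server listens on port 8008.
API_URL = "http://localhost:8008/tasks.get_all"

def build_body(page_size=15):
    """Build a tasks.get_all request body; "__$not" before a value excludes it."""
    return {
        "id": [],
        "size": page_size,
        "order_by": ["-last_update"],
        # Exclude annotation task types from the results
        "type": ["__$not", "annotation_manual", "__$not", "annotation"],
    }

body = build_body()
request = urllib.request.Request(
    API_URL,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would send it; against a real server you
# also need authentication (API credentials / a session token).
```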
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , you can do it if you run in docker mode
Hi @<1717350332247314432:profile|WittySeal70> , can you attach the full log and your clearml.conf ?
Can you please add the ~/clearml.conf for the agent? Also, are you trying to run everything on the same machine or different ones?
Hi @<1523708920831414272:profile|SuperficialDolphin93> , is it possible you're using Optuna as the optimization method?
MoodySheep3 , a screenshot would be useful just to understand the structure via the UI 🙂
Hi @<1534344465790013440:profile|UnsightlyAnt34> , I'm afraid there is no easy way to do it. You would need to edit the links in mongodb/elastic to properly migrate it
And do you get the error only when there are no credentials, or is this unrelated?
Hi, SmugTurtle78
Hi, is there any manifest of the relevant policies needed for the AWS account (if we are using autoscaling)?
I'm not sure. From my experience, the autoscaler requires permissions to spin instances up & down, list instances, check tags and read machine logs.
You should try running with those permissions. If something is missing, you'll see it in the log 🙂
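For reference, a minimal IAM policy sketch along those lines (the exact set of actions is an assumption, not an official list; start here and extend based on what the log reports as missing):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:TerminateInstances",
                "ec2:DescribeInstances",
                "ec2:CreateTags",
                "ec2:DescribeTags",
                "ec2:GetConsoleOutput"
            ],
            "Resource": "*"
        }
    ]
}
```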
Also, is there a way to use a GitHub deploy key instead of a personal token?
Do you mean git user/...
Hi @<1837300695921856512:profile|NastyBear13> , there is currently no direct visualization per task/project of how much space they take.
Experiments that have a lot of console logs, scalars & plots would be the ones that take up most metrics space.
Hi @<1533619725983027200:profile|BattyHedgehong22> , I think it needs to be part of the repository
I'm asking because I used an older version of ClearML, back when it was named Allegro, and I remember I was able to see all the datasets..
Is it possible you had the enterprise version at a previous position? I think you're talking about HyperDS
I notice the links don't have a hash. Do all the data versioning features also work when using links (and not actual image files)?
Yes
You don't need to have the services queue, but you need to enqueue the controller into some queue if not running locally. I think this is what you're looking for.
I don't believe this is part of the open documentation. In the enterprise there is an admin panel, SSO integration and RBAC on top of all the user management system. All of this is managed via an API like everything else in the system.
May I ask why you need docs on this?
Great to hear, and now you also have the latest version 🙂
ShinyLobster84 , can you please elaborate? I'm guessing you mean your Jupyter notebook? How are you running it? Did you run the experiment from it?
SubstantialElk6 , you can find some neat examples here:
https://github.com/allegroai/clearml/tree/master/examples/pipeline
RotundSquirrel78 , do you have an estimate of how much RAM the machine running the ClearML server has? Is it dedicated to ClearML only, or are there other processes running?
Is it possible the image you used doesn't have docker? Did you find any errors in the log?
In that case you have the "packages" parameter for both the controller and the steps
Hi SpotlessPenguin79 , can you please elaborate on this?
for non-aws cloud providers?
What exactly are you trying to do?
GiganticTurtle0 , then I'd guess that's the task that would be returned 🙂
Did you try?
Can you please add the full log of the task here?
However, when I try to bind a volume and run the code, everything runs perfectly.
Can you please elaborate on what this means?