The difference is whether you are only supplying "minutes", or also passing hour/day etc.
See the examples:
Every 15 minutes: add_task(task_id='1235', queue='default', minute=15)
Every hour on minute 20 of the hour (i.e. 00:20, 01:20 ...): add_task(task_id='1235', queue='default', hour=1, minute=20)
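For context, a minimal sketch of how those calls would fit together; I'm assuming the TaskScheduler from clearml.automation, keeping the parameter names from the snippet above, and the task ID / queue name are placeholders:

from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
# every 15 minutes
scheduler.add_task(task_id='1235', queue='default', minute=15)
# every hour on minute 20 (00:20, 01:20, ...)
scheduler.add_task(task_id='1235', queue='default', hour=1, minute=20)
# start polling and launching the scheduled tasks
scheduler.start()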
Hmm should be pushed later today, meanwhile:
from clearml import Task
from clearml.automation.trigger import TriggerScheduler

def func(*args, **kwargs):
    print('test', args, kwargs)

if __name__ == '__main__':
    # poll the backend for trigger events once a minute
    s = TriggerScheduler(pooling_frequency_minutes=1.0)
    # call func() whenever a model in the 'examples' project gets the 'deploy' tag
    s.add_model_trigger(
        name='trigger 1', schedule_function=func,
        trigger_project='examples', trigger_on_tags=['deploy']
    )
    s.add_model_trigger(
        name='trigger 2',
        schedule_task_id='3f7...
Is this like a local minio?
What do you have under the sdk/aws/s3 section?
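For reference, this is roughly what the sdk.aws.s3 section of clearml.conf looks like when pointing at a local MinIO; the host and keys below are placeholders:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # local minio endpoint and credentials
                    host: "localhost:9000"
                    key: "minio_access_key"
                    secret: "minio_secret_key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}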
GrumpyPenguin23 could you help and point us to an overview/getting-started video?
I want to run only that sub-DAG on all historical data in an ad-hoc manner
But wouldn't that be covered by the caching mechanism ?
Hi @<1689446563463565312:profile|SmallTurkey79>
App Credentials now disappear on restart.
You mean in the web UI?
Nice 🙂
@<1523710674990010368:profile|GreasyPenguin14> for future reference, the agent part in the clearml.conf is only created when you call clearml-agent init (no need for it for the python SDK). Full default configuration is here:
None
Yes, which looks like a lot, but you only need to do that once.
The autoscaler would make (1) redundant (as it would spin the instance up/down based on the jobs in the queue)
Sorry, I meant the point where you select the interpreter for PyCharm
Oh I see...
Hi MiniatureShells8
The torch.save triggers the model creation.
If you are using the same filename, then the same model in the system will be used.
New filenames will create new models.
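A minimal sketch of that behavior (the model and filenames are arbitrary placeholders):

import torch
from clearml import Task

task = Task.init(project_name='examples', task_name='model autolog demo')
model = torch.nn.Linear(4, 2)

# first save registers an output model for this task
torch.save(model.state_dict(), 'model.pt')
# saving again to the same filename updates that same model entry
torch.save(model.state_dict(), 'model.pt')
# a new filename registers an additional, separate model
torch.save(model.state_dict(), 'model_final.pt')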
What exactly is your use case ?
MelancholyElk85
How do I add files without uploading them anywhere?
The files themselves need to be packaged into a zip file (so we have an immutable copy of the dataset). This means you cannot "register" existing files (in your example, files on your S3 bucket?!). The idea is to make sure your dataset is protected against changes on the one hand, but on the other to allow you to change it, and only store the changeset.
Does that make sense ?
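For reference, a minimal sketch of the flow this implies, using the Dataset API (names and paths are placeholders):

from clearml import Dataset

# create a new dataset version; the files are copied, zipped and uploaded
ds = Dataset.create(dataset_name='my_dataset', dataset_project='datasets')
ds.add_files(path='/local/copy/of/the/data')
ds.upload()      # packages the files and stores the immutable zip
ds.finalize()    # closes this version; later versions only store the changeset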
Hi EnviousPanda91
You mean like collect plots, then generate a pdf?
Hi @<1523701066867150848:profile|JitteryCoyote63>
Hi, how does agent.enable_git_ask_pass work?
Basically it pushes the password through stdin to git when git asks for it (it is a git feature)
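If it helps, this is roughly how the setting would look in the agent section of clearml.conf (value shown for illustration):

agent {
    # feed the git password/token to git via stdin when git prompts for credentials
    enable_git_ask_pass: true
}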
and the inet (IP address) of the same network card?
Using agent v1.01r1 in k8s glue.
I think a fix was recently committed, let me check it
I do expect it to pip install though, which doesn't need root access I think
Correct, it is installed in a venv (exactly for that).
It will not fail if the apt-get fails (only warnings)
Let me know if it worked
that does make more sense 🙂
RipeWhale0 are you taking them from here?
https://artifacthub.io/packages/helm/allegroai/clearml
ValueError('Task object can only be updated if created or in_progress')
It seems the task is not "running", hence the error. Could that be the case?
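A quick way to check the status from code (a sketch; the task ID is a placeholder):

from clearml import Task

task = Task.get_task(task_id='your_task_id')
# updates are only allowed while the status is "created" or "in_progress"
print(task.get_status())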
I changed them to the one exposed to the users (the same I have in my local clearml.conf) and things work.
Nice!
But I can't really figure out why that would be the case...
So the thing is, the links to the files are generated by the clients, which means the actual code generated an internal link to the file server (i.e. a link that only works inside the k8s cluster). When you wanted to see the image/plot you were accessing it from outside the cluster, and the link simply ...
Hi SarcasticSquirrel56
But if I then clone the task, and execute it by sending it to a queue, the experiment succeeds,
I'm assuming that on the remote machine the "files_server" is not configured the same way as in the local execution. For example, it points to an S3 bucket but the credentials for the bucket are missing.
(in your specific example I'm assuming that the plot is non-interactive which means this is actually a PNG stored somewhere, usually the file-server configuration). Does tha...
ReassuredTiger98 there is an open issue on supporting a bash script as a pre-run step inside a docker (which will be supported in the next major release)
BTW: if you already have a docker file, the fastest way would be just to build the docker file and push it once, then you just specify the docker image:tag; this can be done at a Task-specific level.
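For example, something along these lines (the image name is a placeholder; set_base_docker is the call I'd use to attach the image at the Task level):

from clearml import Task

task = Task.init(project_name='examples', task_name='docker image demo')
# use the pre-built image for remote execution instead of rebuilding it every run
task.set_base_docker('my_registry/my_image:latest')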
:param list(str) xlabels: Labels per entry in each bucket in the histogram (vector), creating a set of labels for each histogram bar on the x-axis. (Optional)
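For reference, a minimal sketch of passing xlabels to report_histogram (all values here are made up):

from clearml import Task

task = Task.init(project_name='examples', task_name='histogram demo')
task.get_logger().report_histogram(
    title='value distribution',
    series='bucket counts',
    values=[3, 7, 2, 9],
    iteration=0,
    xlabels=['a', 'b', 'c', 'd'],  # one label per histogram bar on the x-axis
)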
SmarmyDolphin68
BTW: there is no automatic reporting when you have task = Task.get_task(task_id='your_task_id')
It's only active when you have one "main" task.
You can also check the continue_last_task argument in Task.init, it might be a good fit for your scenario:
https://allegro.ai/docs/task.html#trains.task.Task.init
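A minimal sketch of using it (project/task names are placeholders):

from clearml import Task

# continue reporting into the most recent task with this project/name
# instead of creating a new one
task = Task.init(
    project_name='examples',
    task_name='long running experiment',
    continue_last_task=True,
)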
To automate the process, we could use a pipeline, but first we need to understand the manual workflow
Yes, I think the API is probably the easiest:
from clearml.backend_api.session.client import APIClient
client = APIClient()
project_list = client.projects.get_all()
print(project_list)
In the UI you can see all the agents and their IDs
Then you can do
clearml-agent daemon --stop <agent id>
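If you prefer to get the worker IDs programmatically instead of from the UI, a sketch using the same APIClient as above (assuming the workers endpoint is available):

from clearml.backend_api.session.client import APIClient

client = APIClient()
# list registered workers/agents and print their IDs
for worker in client.workers.get_all():
    print(worker.id)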
(Go to the profile page, and click "Disable HiDPI browser scale override" see if that helps)
apologies @<1798887585121046528:profile|WobblyFrog79> somehow I missed your reply,
My workflow is based around executing code that lives in the same repository, so it’s cumbersome having to specify repository information all over the place, and changing commit hash as I add new code.
It automatically infers the repo, as long as the pipeline code itself (by that I mean the pipeline logic) is inside the repo, when you run it the first time (think development etc). If it s...