Hi @<1582542029752111104:profile|GorgeousWoodpecker69> , I think you can also manually register them using Task.upload_artifact
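To make the suggestion concrete, here is a minimal sketch of manually registering an artifact, assuming the clearml SDK is installed and a `clearml.conf` is configured (project/task names are placeholders):

```python
# A minimal sketch, assuming the clearml SDK is installed and configured.
def register_artifact(task, name, obj):
    # upload_artifact accepts dicts, file paths, numpy arrays, dataframes, etc.
    task.upload_artifact(name=name, artifact_object=obj)

# Usage (requires a configured clearml.conf and a reachable server):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="manual artifacts")
# register_artifact(task, "eval_metrics", {"accuracy": 0.92})
```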
I'm not entirely sure which steps you took and if you missed something. Elastic is complaining about permissions - Maybe you missed one of the steps?
And for future reference - Always a good thing to do a backup before upgrading elastic versions (or upgrades in general). Periodic upgrades are also advised 🙂
I was suspecting connectivity issues. Glad to hear it's working
Can you try with Task.connect()?
https://clear.ml/docs/latest/docs/references/sdk/task#connect
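For reference, a minimal sketch of what connecting parameters looks like (assumes the clearml SDK is installed and configured; the parameter names are placeholders):

```python
# A minimal sketch; Task.connect registers a mutable dict of hyperparameters
# so they show up in the UI and can be overridden on remote (cloned) runs.
def connect_params(task, params):
    # connect() returns the (possibly overridden) parameters dict
    return task.connect(params)

# Usage (requires a configured clearml.conf):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="connect demo")
# params = connect_params(task, {"lr": 0.001, "batch_size": 32})
```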
If on Linux, that's one option. Basically, any way you see fit to make sure the env variable is available to the agent.
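For example, a sketch of the Linux option (bash assumed; the variable name and value are placeholders):

```shell
# Export the variable in the same shell that launches the agent,
# so the daemon process inherits it.
export MY_ENV_VAR="some-value"
# clearml-agent daemon --queue default   # would now see MY_ENV_VAR
echo "$MY_ENV_VAR"   # sanity check
```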
Hi @<1732933002259861504:profile|ComfortableRobin65> , I believe that you would be pulling all 150 files. Why not test it out?
Hi @<1603560525352931328:profile|BeefyOwl35> , what do you mean by setting up the agent remotely? Do you run the agent on your machine and want it to keep running when the machine is shut down?
Hi @<1561885921379356672:profile|GorgeousPuppy74> , yes it should be possible
Build a Docker container that when launched executes a specific experiment, or a clone (copy) of that experiment.
From the docs
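As a sketch, the CLI side of that docs quote looks roughly like this (task id and image name are placeholders; the command is printed rather than executed, since it needs clearml-agent installed and configured):

```shell
# Bake a specific task (or a clone of it) into a docker image.
cmd='clearml-agent build --id <task-id> --docker --target my-experiment-image'
echo "$cmd"   # print the command rather than running it
```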
Doesn't work for me either. I guess the guys are already looking into it
RobustFlamingo1, I think this is because you looked at 'Orchestrate for DevOps' and not 'Automate for Data Scientist'. If you switch to the other option you will see no K8S is required 🙂
I am guessing the use case shown there is closer to what you're looking for. K8S is for larger-scale deployments, where the DevOps team sets the system up to run on a K8S cluster
Hi AdventurousButterfly15, what version of clearml-agent are you using?
Internal references are resolved only in decorator/function pipelines if I recall correctly
the experiments themselves 🙂
Imagine you have very large diffs or very large (several MB) configuration files logged into the system - all of this sits in a database somewhere in the backend
Do try with the port, though
As I wrote, you need to remove the s3 from the start of the host section.
Hi @<1720249421582569472:profile|NonchalantSeaanemone34> , happy to hear you also enjoy using ClearML 😄
You are spot on - ClearML provides the full end-to-end solution for your MLOps needs, meaning you don't need to use DVC, MLRun, MLflow or the many others, as all these capabilities (and more) are covered in ClearML!
Are you currently looking for any specific capability?
Can you please provide a screenshot of how it looks?
You can do it in one API call as follows:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
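A sketch of what that single call could look like with nothing but the standard library (the host, project id, and auth are placeholders/assumptions; a self-hosted API server on port 8008 is assumed):

```python
import json
import urllib.request

# Build a tasks.get_all request; only_fields keeps the response small.
payload = {
    "project": ["<project-id>"],
    "only_fields": ["id", "name", "status"],
    "page_size": 500,
}
req = urllib.request.Request(
    "http://localhost:8008/tasks.get_all",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # add authentication headers before actually calling
```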
There is a CLI for working with datasets but nothing specific for task artifacts I think, only the SDK. What is your use case?
Hi @<1726772411946242048:profile|CynicalBlackbird36> , what you're looking at is the metrics storage; this refers to all the console outputs, scalars, plots, and debug samples.
This is saved in the backend of ClearML. There is no direct way to pull this but you can technically fetch all of this using the API.
wdyt?
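For instance, a sketch of fetching that data through the SDK (assumes the clearml SDK is installed and configured; the task id is a placeholder):

```python
# A sketch of pulling reported outputs stored in the ClearML backend.
def fetch_reported_outputs(task):
    # scalars come back as {metric_title: {series: {"x": [...], "y": [...]}}}
    scalars = task.get_reported_scalars()
    # last console log report(s) for the task
    console = task.get_reported_console_output(number_of_reports=1)
    return scalars, console

# Usage (requires a configured clearml.conf):
# from clearml import Task
# task = Task.get_task(task_id="<task-id>")
# scalars, console = fetch_reported_outputs(task)
```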
Hi UnevenDolphin73, I think this is analyzed in the code
And when you run it again under exactly the same circumstances it works fine?
Are you able to access other sites? What about http://app.clear.ml?
For example artifacts or debug samples
The chart already passes the --create-queue command-line option to the agent, which means the agent will create the queue(s) it's given. The open source chart simply doesn't let you define multiple queues in detail or provide override pod templates for them; however, it does allow you to tell the agent to monitor multiple queues.
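Outside the chart, the equivalent agent invocation is roughly this (queue names are placeholders; the command is printed rather than executed, since it needs clearml-agent installed and configured):

```shell
# One agent monitoring two queues, creating them if they don't exist yet.
cmd='clearml-agent daemon --queue high_prio low_prio --create-queue'
echo "$cmd"   # print the command rather than running it
```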
Hi @<1603198153677344768:profile|ExasperatedSeaurchin40> , I think this is what you're looking for - None
Does this happen always?