Please run the following commands and share the results. Chances are that the default mappings we apply on index creation were somehow not applied to your events scalar index.
- First, run the following command:
curl -XGET "localhost:9200/_cat/indices/events-training_stats_scalar-*"
- And then for each of the returned indices run the following:
curl -XGET "localhost:9200/<index_name>/_mappings"
Hi JumpyPig73 ,
It appears that only the AWS autoscaler is in the open version and the other autoscalers are only in the advanced tiers (Pro and onwards):
https://clear.ml/pricing/
I think you can get the task from outside and then add tags to that object
ExcitedSeaurchin87 , Hi 🙂
I think it's correct behavior - You wouldn't want leftover files flooding your computer.
Regarding preserving the datasets - I'm guessing that you're doing the pre-processing & training in the same task so if the training fails you don't want to re-download the data?
Hi UpsetSheep55 ,
The permissions feature indeed exists only in the enterprise version. There are no examples for this since it's an enterprise-only feature.
As far as I know Hyper-datasets also support csv/tabular data quite well 🙂
Hi RoughTiger69 , how are you running the pipeline? Locally or on agents? How is the controller running?
Hi @<1566596960691949568:profile|UpsetWalrus59> , yes that sounds like a good way 🙂
Can you connect directly to the instance? If so, please check how large /opt/clearml is on the machine and then see the folder distribution
RobustFlamingo1 , I think this is because you looked at 'Orchestrate for DevOps' and not 'Automate for Data Scientist'. If you switch to the other option you will see no K8S is required 🙂
I am guessing that the use-case shown there would be more what you're looking for. K8S is something for larger-scale deployments, where the DevOps folks set up the system to run on a K8S cluster
If you set Task.init(..., output_uri=<PATH_TO_ARTIFACT_STORAGE>) everything will be uploaded to your artifact storage automatically.
Regarding models - to skip the joblib dump hack, you can simply connect the models manually to the task with this method:
https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#connect
Are you running a self deployed server? What is the version if that is the case?
Hi AbruptHedgehog21 , it looks like you need to pass parameters.dataset_id on the data_creation step
Can you add a bit more from the log for more context as well?
Hi @<1523701842515595264:profile|PleasantOwl46> , I suggest opening a GitHub feature request for this 🙂
Any specific reason not to use the autoscaler? I would imagine it would be even more cost effective
Are you using the community server or are you using the open source and self hosting?
What is the use case of accessing clearml.conf during runtime?
Hi!
I believe you can stop and resume studies by adding these actions to your script:
Add save points via joblib.dump()
and connect them to clearml via clearml.model.OutputModel.connect()
Then, when you want to start or resume a study, load the latest study file via joblib.load() and connect it to clearml with clearml.model.InputModel.connect()
This way you can stop your training sessions with the agent and resume them from nearly the same point
I think all the required references are h...
Hi @<1524922424720625664:profile|TartLeopard58> , can you elaborate on what do you mean by code-server?
Hi SoreHorse95 ,
Does ClearML not automatically log all outputs?
Regarding logging maybe try the following setting in ~/clearml.conf sdk.network.metrics.file_upload_threads: 16
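For reference, that setting nests under the sdk section of ~/clearml.conf (HOCON format), so the relevant part of the file would look roughly like this:

```
sdk {
  network {
    metrics {
      # threads used for uploading files when reporting metrics
      file_upload_threads: 16
    }
  }
}
```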
Hi SillyGoat67 ,
Hmmm. What if you run these in separate experiments and each experiment reports its own result? This way you could use comparison between experiments to see the different results grouped together.
Also you can report different scalars for the same series so you can see something like this:
Can you gain access to the apiserver logs?
Also can you provide the full log for better context?
VexedCat68 Hi 🙂
Please try with pip install clearml==1.1.4rc0
DeliciousSeal67 , you need to update the docker image in the container section - like here:
Hi @<1523701842515595264:profile|PleasantOwl46> , you can use users.get_all to fetch them