Of course, I'm using report_table in the above; it seems the support for Pandas DataFrame does not include MultiIndex other than by concatenating the indices together.
That's fine (as in, it works), but it looks a bit weird and defeats the purpose of a MultiIndex 🤔 Was wondering if there are plans to add better support for it.
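For reference, this is the concatenation workaround I mean, a minimal sketch (the project, task, and column names are made up):

```
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="multiindex table")

df = pd.DataFrame(
    {"mae": [0.1, 0.2, 0.3, 0.4]},
    index=pd.MultiIndex.from_product([["train", "test"], ["fold0", "fold1"]]),
)

# Flatten the MultiIndex into plain strings before reporting,
# e.g. ("train", "fold0") -> "train_fold0".
flat = df.copy()
flat.index = ["_".join(map(str, level)) for level in df.index]

task.get_logger().report_table(title="results", series="mae", iteration=0, table_plot=flat)
```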
The only thing I could think of is that the output of pip freeze would be a URL?
It does, but I don't want to guess the JSON structure (what if ClearML changes it, or the folder structure it uses for offline execution?). If I do this, I'm writing a test that relies on ClearML's implementation of offline mode, which is tangential to the unit test.
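For context, the SDK-level round trip I'd rather lean on than the raw files, a rough sketch (this assumes Task.set_offline / Task.import_offline_session behave as I read them; project and task names are made up):

```
from clearml import Task

# Record a session offline...
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="offline test")
task.get_logger().report_scalar("metric", "value", value=1.0, iteration=0)
task.close()

# ...then hand the recorded session back to the SDK, so the test never
# touches the internal JSON layout. (Assumption: offline mode needs to be
# switched off again before importing.)
Task.set_offline(offline_mode=False)
Task.import_offline_session(str(task.get_offline_mode_folder()))
```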
But to be fair, I've also tried with python3.X -m pip install poetry etc., and I get the same error.
I couldn't find it directly in the SDK at least (in the APIClient)... 🤔
Something like this, SuccessfulKoala55?
1. Open a bash session in the docker container (docker exec -it <docker id> /bin/bash)
2. Open a mongo shell (mongo)
3. Switch to the backend DB (use backend)
4. Get the relevant project IDs (db.project.find({"name": "ClearML Examples"}) and db.project.find({"name": "ClearML - Nvidia Framework Examples/Clara"}))
5. Remove the relevant tasks (db.task.remove({"project": "<project_id>"}))
6. Remove the project IDs (db.project.remove({"name": ...}))
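And the same cleanup sketched in Python, in case the shell is awkward; the connection string is an assumption, and pymongo's delete_many stands in for the shell's remove:

```
from pymongo import MongoClient

# Assumed connection details; point this at the server's MongoDB instance.
client = MongoClient("mongodb://localhost:27017")
db = client["backend"]

for name in ("ClearML Examples", "ClearML - Nvidia Framework Examples/Clara"):
    for project in db.project.find({"name": name}):
        # Drop the project's tasks first...
        db.task.delete_many({"project": project["_id"]})
    # ...then the project itself.
    db.project.delete_many({"name": name})
```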
But it does work on Linux 🤔 I'm using it right now, and the environment variables are not defined in the terminal, only in the .env 🤔
Hm, I did not specify any specific versions previously. What was the previous default?
Maybe they shouldn't be placed under /tmp if they're mission-critical, but rather in the ClearML cache folder? 🤔
- in the second scenario, I might not have changed the results of the step, but my refactoring changed the speed considerably, and this is something I measure.
- in the third scenario, I might not have changed the results of the step and my refactoring just cleaned the code; besides that, nothing substantial changed. Thus I do not want a rerun.

Well, I would say then that in the second scenario it's just rerunning the pipeline, and in the third it's not running it at all 🙂
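If it helps pin down what "rerun or not" means per step, a rough sketch using add_function_step's cache_executed_step flag (my understanding is that it reuses a previous run only when the step's code and inputs are unchanged; the step names and function are made up):

```
from clearml.automation import PipelineController

def preprocess(n):
    return n * 2

pipe = PipelineController(name="cache demo", project="examples", version="1.0")

# Reuse a previous execution when code + inputs are unchanged (scenario 3).
pipe.add_function_step(
    name="preprocess_cached",
    function=preprocess,
    function_kwargs=dict(n=1),
    cache_executed_step=True,
)

# Always execute fresh, e.g. when timing itself is measured (scenario 2).
pipe.add_function_step(
    name="preprocess_fresh",
    function=preprocess,
    function_kwargs=dict(n=1),
    cache_executed_step=False,
)
```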
(I ...
I think now there's the following:
- Resource type: Queue (name), which defines a resource + max instances

And I'm looking for:
- Resource type: a "pool" of resources (type + max instances)
- A pool can be shared among queues
Heh, my bad, the term "user" is very much ingrained in our internal way of working. You can think of it as basically any technically-inclined person in your team or company.
Indeed the options in the WebUI are too limited for our use case, so we've developed "apps" that take a YAML configuration file and build a matching pipeline.
With that, our users do not need to code directly, and we can offer much finer control over the pipeline.
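To illustrate, a heavily simplified sketch of such an "app"; the config schema and step functions below are invented for the example, not our real code:

```
import yaml
from clearml.automation import PipelineController

CONFIG = """
pipeline:
  name: daily-train
  steps:
    - name: prepare
      function: prepare_data
    - name: train
      function: train_model
      parents: [prepare]
"""

def prepare_data():
    return "dataset-id"

def train_model():
    return "model-id"

FUNCTIONS = {"prepare_data": prepare_data, "train_model": train_model}

# Translate the YAML description into a pipeline, step by step.
cfg = yaml.safe_load(CONFIG)["pipeline"]
pipe = PipelineController(name=cfg["name"], project="examples", version="1.0")
for step in cfg["steps"]:
    pipe.add_function_step(
        name=step["name"],
        function=FUNCTIONS[step["function"]],
        parents=step.get("parents"),
    )
pipe.start_locally(run_pipeline_steps_locally=True)
```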
As for the imports, what I meant is that I encounter...
Of course now it's not there anymore 🙂 If/when it happens again I'll ping you here 🙂
Maybe it's better to approach this the other way: if one uses Task.force_requirements_env_freeze(), then the locally updated packages aren't reflected in poetry 🤔
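For concreteness, the usage I'm referring to, a minimal sketch (as far as I understand, it has to run before Task.init; names are made up):

```
from clearml import Task

# Freeze the full local environment (pip-freeze style) instead of letting
# ClearML infer the requirements; called before Task.init().
Task.force_requirements_env_freeze()
task = Task.init(project_name="examples", task_name="frozen env")
```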
That's fine for the current use-case I believe.
Once the team is happy with the logging functionality, we'll move on to remote execution and things will update.
Another example - trying to validate dataset interactions ends with
```
else:
    self._created_task = True
    dataset_project, parent_project = self._build_hidden_project_name(dataset_project, dataset_name)
    task = Task.create(
        project_name=dataset_project, task_name=dataset_name, task_type=Task.TaskTypes.data_processing)
    if bool(Session.check_min_api_server_version(Dataset.__min_api_version)):
        get_or_create_proje...
```
I'm guessing that's not on PyPI yet?
Hurrah! Added git config --system credential.helper 'store --file /root/.git-credentials' to the extra_vm_bash_script and now it works
(logs the given git credentials in the store file, which can then be used immediately for the recursive calls)
That's probably in the newer ClearML server pages then, I'll have to wait still 🙂
Yes, a lot of moving pieces here as we're trying to migrate to AWS and set up the autoscaler and more 🙂
Yes 🙂 I want ClearML to load and parse the config before that. But now I'm not sure those config settings are even exposed as environment variables?
I will! (once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile, I'm wondering where I got a random worker from.
Can I query where the worker is running (IP)?
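In case it's useful, the closest I can see from the SDK side, a sketch via the APIClient (I'm assuming the returned worker entries expose an ip field; that may differ by server version):

```
from clearml.backend_api.session.client import APIClient

client = APIClient()
for worker in client.workers.get_all():
    # `ip` is an assumption about the worker report's fields.
    print(worker.id, getattr(worker, "ip", "<no ip reported>"))
```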
Thanks AgitatedDove14, I'll first have to prove viability with the free version :)
Indeed. I'll open an issue, sure!
One more UI question TimelyPenguin76, if I may -- it seems one cannot simply report single integers. The report_scalar feature creates a plot of a single data point (or single iteration).
For example, if I want to report a scalar "final MAE" for easier comparison, it's kinda impossible 🙂
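For context, this is the call I mean; it produces the single-point plot described above ("final MAE" and the value are illustrative):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="final metric")

# Reports a scalar plot that ends up containing exactly one data point.
task.get_logger().report_scalar(title="final MAE", series="MAE", value=0.123, iteration=0)
```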
We're not using the Docker setup though. The CLI run by the autoscaler is python -m clearml_agent --config-file /root/clearml.conf daemon --queue aws_small, so no Docker.
Using the PipelineController with add_function_step
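Concretely, something along these lines, a minimal sketch (step names and functions are illustrative; step_two consumes step_one's return value by reference):

```
from clearml.automation import PipelineController

def step_one():
    return 42

def step_two(data):
    print("got", data)

pipe = PipelineController(name="example", project="examples", version="1.0")
pipe.add_function_step(name="step_one", function=step_one, function_return=["data"])
pipe.add_function_step(
    name="step_two",
    function=step_two,
    # Reference step_one's returned artifact by name.
    function_kwargs=dict(data="${step_one.data}"),
)
pipe.start_locally(run_pipeline_steps_locally=True)
```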