Okay, trying again without detached
It does, but I don't want to guess the JSON structure (what if ClearML changes it, or the folder structure it uses for offline execution?). If I do this, I'm writing a test that relies on ClearML's implementation of offline mode, which is tangential to the unit test
Hey SuccessfulKoala55 ! Is the configuration file needed for Task.running_locally() ? This is tightly related to issue #395, where we need additional files for remote execution but have no way to attach them to the task other than using the StorageManager as a temporary cache.
Any leads TimelyPenguin76 ? I've also tried setting up a MinIO s3 bucket, but I'm not sure if the remote agent has copied the credentials and host 🤔
So where should I install the latest clearml version? On the client that's running a task, or on the worker machine?
I wouldn't put it past ClearML automation (a lot of stuff depends on certain suffixes), but I don't think that's the case here, hmm
Heh, good @<1523704157695905792:profile|VivaciousBadger56> 🙂
I was just repeating what @<1523701070390366208:profile|CostlyOstrich36> suggested, credits to him
We have the following, works fine (we also use internal zip packaging for our models):
# register the serialized weights as an OutputModel on the current task
model = OutputModel(task=self.task, name=self.job_name, tags=kwargs.get('tags', self.task.get_tags()), framework=framework)
model.connect(task=self.task, name=self.job_name)
# cc_model.save() returns the path of the saved weights file
model.update_weights(weights_filename=cc_model.save())
FWIW, we prefer to set it in the agent's configuration file; then it's all automatic
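For reference, a minimal sketch of what that agent-side setting can look like in clearml.conf (the `sdk.development.default_output_uri` key; the bucket URL here is a placeholder, not ours):

```
# clearml.conf on the agent machine
sdk {
    development {
        # tasks run by this agent upload artifacts/models here automatically
        default_output_uri: "s3://my-bucket/clearml-outputs"
    }
}
```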
Oh nono, more like:
1. Create a pipeline
2. Add N steps to it
3. Run the pipeline
4. It fails/succeeds, the user does something with the output
5. The user would like to add/modify some steps based on the results now (after closer inspection).

I wonder, at (5), do I have to recreate the pipeline every time? 🤔
Well, you could start by setting output_uri=True in Task.init .
I can only say I've found ClearML to be very helpful, even given the documentation issue.
I think they've been working on upgrading it for a while, hopefully something new comes out soon.
Maybe @<1523701205467926528:profile|AgitatedDove14> has further info 🙂
Something like this, SuccessfulKoala55 ?
1. Open a bash session in the container: docker exec -it <docker id> /bin/bash
2. Open a mongo shell: mongo
3. Switch to the backend db: use backend
4. Get the relevant project IDs: db.project.find({"name": "ClearML Examples"}) and db.project.find({"name": "ClearML - Nvidia Framework Examples/Clara"})
5. Remove the relevant tasks: db.task.remove({"project": "<project_id>"})
6. Remove the project IDs: db.project.remove({"name": ...})
TimelyPenguin76 here's the full log (took a moment to anonymize completely):
`
Using environment access key CLEARML_API_ACCESS_KEY=xxx
Using environment secret key CLEARML_API_SECRET_KEY=********
Current configuration (clearml_agent v1.3.0, location: /tmp/.clearml_agent.zs4e7egs.cfg):
sdk.storage.cache.default_base_dir = ~/.clearml/cache
sdk.storage.cache.size.min_free_bytes = 10GB
sdk.storage.direct_access.0.url = file://*
sdk.metrics.file_history_size = 100
sdk.m...
Sure, for example when reporting HTML files:

Still, anyone? 🥹 @<1523701070390366208:profile|CostlyOstrich36> @<1523701205467926528:profile|AgitatedDove14>
Would be great if it is 🙂 We have a few files that change frequently and are quite large, and it would be quite a storage hit to save all of them
I'll try upgrading to 1.1.5, one moment
I couldn't find it directly in the SDK at least (in the APIClient)... 🤔
We have a read-only user with a personal access token for these things; it works seamlessly throughout and on our current on-premises servers... So perhaps something is missing in the autoscaler definitions?
SuccessfulKoala55 could this be related to the monkey patching for logging platform? We have our own logging handlers that we use in this case
I think I may have brought this up multiple times in different ways :D
When dealing with long and complicated configurations (whether config objects, yaml, or otherwise), it's often useful to break them down into relevant chunks (think hydra, maybe).
In our case, we have a custom YAML tag !include , i.e.
` # foo.yaml
bar: baz

# bar.yaml
obj: !include foo.yaml
maybe_another_obj: !include foo.yaml `
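For context, such an !include tag can be implemented roughly like this (a simplified sketch using PyYAML's add_constructor; the class and function names are mine, and a production loader would handle more edge cases):

```python
import os

import yaml  # PyYAML


class IncludeLoader(yaml.SafeLoader):
    """SafeLoader that understands an `!include <path>` tag."""

    def __init__(self, stream):
        # Resolve relative include paths against the including file's directory
        self._root = os.path.dirname(getattr(stream, "name", "") or ".")
        super().__init__(stream)


def _include(loader, node):
    path = os.path.join(loader._root, loader.construct_scalar(node))
    with open(path) as f:
        # Recurse with the same loader class so nested !include tags work too
        return yaml.load(f, IncludeLoader)


IncludeLoader.add_constructor("!include", _include)
```

Loading bar.yaml from the example above with `yaml.load(open("bar.yaml"), IncludeLoader)` would then substitute the parsed contents of foo.yaml at each tagged key.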
The results from searching in the "Add Experiment" view (can't resize column widths -> can't see project name ...)
We have a more complicated case but I'll work around it 🙂
Follow up though - can configuration objects refer to one-another internally in ClearML?
Last but not least - can I cancel the offline zip creation if I'm not interested in it 🤔
EDIT: I see it's not possible; guess one has to patch ZipFile ...
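For anyone else hitting this, here is a rough sketch of the kind of ZipFile patch I mean: temporarily swapping zipfile.ZipFile for a no-op stand-in so no archive gets written. This assumes the offline packaging goes through zipfile.ZipFile; ClearML's internals may differ, and the names below are my own:

```python
import contextlib
import zipfile


class _NoOpZipFile:
    """Accepts the ZipFile API but writes nothing to disk."""

    def __init__(self, *args, **kwargs):
        pass

    def write(self, *args, **kwargs):
        pass

    def writestr(self, *args, **kwargs):
        pass

    def close(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False


@contextlib.contextmanager
def suppress_zip_creation():
    """Temporarily replace zipfile.ZipFile so no zip file is created."""
    original = zipfile.ZipFile
    zipfile.ZipFile = _NoOpZipFile
    try:
        yield
    finally:
        zipfile.ZipFile = original
```

Code that creates zips while inside `with suppress_zip_creation():` produces no file; anything patched through the module attribute is restored afterwards. Obviously a blunt instrument, since it suppresses all zip writes in the process for the duration.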