One must then ask, of course, what to do if e.g. a text refers to a dictionary configuration object? 🤔
Okay this was a deep dive into clearml-agent code 😁
Took a long time to figure out that there was a specific Python version with a specific virtualenv that was old (Python 3.6.9 and Python 3.8 had the latest virtualenv, but Python 3.7.5 had an old one).
Then the task requested Python 3.7, and that old virtualenv version was broken.
As a result: could the agent maybe also output the virtualenv version used when setting up the environment for the first time? (In the meantime, a quick manual check is sketched below.)
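In case anyone hits the same thing, this is roughly how I checked which virtualenv each interpreter picks up (plain shell, assuming the interpreters are on the PATH):
```
# Print the virtualenv version bundled with each interpreter;
# a mismatch like the one above shows up immediately.
python3.6 -m virtualenv --version
python3.7 -m virtualenv --version
python3.8 -m virtualenv --version
```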
I'll have yet another look at both the latest agent RC and at the docker-compose, thanks!
There was no "default" services agent btw, just the queue; I had to launch an agent myself (not sure if that's relevant)
AgitatedDove14
I'll make a PR for it now, but the long story is that you have the full log, but the virtualenv version is not logged anywhere (the usual output from virtualenv just says which Python version is used, etc.).
I also tried setting agent.python_binary: "/usr/bin/python3.8", but it still uses Python 2.7?
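For reference, this is roughly the clearml.conf section I mean (a sketch, assuming the standard config layout):
```
agent {
    # interpreter the agent should use when building the task's venv
    python_binary: "/usr/bin/python3.8"
}
```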
Yes; I tried running it both outside a venv and inside one. No idea why it uses 2.7?
Any follow up thoughts SuccessfulKoala55 or CostlyOstrich36 ?
Oh and clearml-agent==1.1.2
I've also followed https://clearml.slack.com/archives/CTK20V944/p1628333126247800 but it did not help
I also tried switching to dockerized mode now, getting the same issue 🤔
I'm using 1.1.6 (upgraded from 1.1.6rc0) - should I try 1.1.7rc0 or smth?
I'll try it out, but I would not like to rewrite that code myself and maintain it, that's my point 😅
Or are you suggesting I use Task.import_offline_session?
I'm working on the config object references 😉
It does, but I don't want to guess the JSON structure (what if ClearML changes it, or the folder structure it uses for offline execution?). If I did that, I'd be writing a test that relies on ClearML's implementation of offline mode, which is tangential to the unit test.
I guess the thing that's missing from offline execution is being able to load an offline task without uploading it to the backend.
Or is that functionality provided by setting offline mode and then importing an offline task?
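For context, the flow I'm describing would look roughly like this (a minimal sketch; the project/task names and the session path are placeholders):
```python
from clearml import Task

# Run entirely offline -- nothing is sent to the backend
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="offline-run")
# ... the code under test ...
task.close()

# Later, with a backend available, the recorded session can be imported:
# Task.import_offline_session("/path/to/offline_session.zip")
```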
I dunno 🤷‍♂️ but Task.init is clearly incompatible with pytest and friends
Seems like Task.create is the correct use-case then, since again this is about testing flows using e.g. pytest, so the task is not the current process.
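i.e. something like this (a sketch; the names are placeholders):
```python
from clearml import Task

# Task.create registers a new task without binding it to the current process,
# so pytest itself never becomes "the experiment"
task = Task.create(project_name="tests", task_name="flow-under-test")
```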
I've at least seen references in dataset.py's code that seem to apply to offline mode (e.g. in Dataset.create there is if output_uri and not Task._offline_mode:, so someone did consider datasets in offline mode)
This seems to be fine for now, btw, if any future lookup finds this thread: with mock.patch('clearml.datasets.dataset.Dataset.create'): ... (fuller sketch below)
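Spelled out as a pytest fixture (a sketch; the fixture name and test are made up):
```python
from unittest import mock

import pytest

@pytest.fixture
def no_clearml_dataset():
    # Stub out Dataset.create so the test never talks to the ClearML backend
    with mock.patch('clearml.datasets.dataset.Dataset.create') as create_mock:
        yield create_mock

def test_my_flow(no_clearml_dataset):
    ...  # code under test that would normally call Dataset.create
```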
I'm running tests with pytest, and it consumes/owns the stream
Any updates @<1523701087100473344:profile|SuccessfulKoala55> ? 🙂
Any updates @<1523701087100473344:profile|SuccessfulKoala55> ? 🫣
Or is it just integrated in the ClearML Slack space, and for some reason it's showing the clearml address then?
(in the current version, that is, we’d very much like to use them obviously :D)
Sure, for example when reporting HTML files:
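Something along these lines (a sketch; I'm using Logger.report_media here, and the file name is a placeholder):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="html-report")
# Upload an HTML file as a media/debug-sample artifact
task.get_logger().report_media(
    title="report",
    series="summary",
    iteration=0,
    local_path="report.html",
)
```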
Any simple ways around this for now? @<1523701070390366208:profile|CostlyOstrich36>
I tried that, unfortunately it does not help 😞
MinIO was a tiny bit of a headache to configure, but I'd be happy to help if you want, CrookedWalrus33; I just went through this process yesterday and today (see a few threads up...)
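The part that tripped me up was the credentials section (a sketch of my clearml.conf; the host, bucket, and keys are placeholders):
```
sdk.aws.s3 {
    credentials: [
        {
            # MinIO endpoint, not a real AWS region
            host: "my-minio-host:9000"
            bucket: "clearml"
            key: "minio-access-key"
            secret: "minio-secret-key"
            multipart: false
            secure: false
        }
    ]
}
```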
Thanks CostlyOstrich36 !
And can I make sure the same budget applies to two different queues?
So that, for example, an autoscaler would have a resource budget of 6 instances and would listen to both the aws and default queues as needed?