Those are for specific packages; I'm wondering about the package managers as a whole.
Local changes are applied before installing requirements, right?
Fair enough 🙂
Could be nice to be able to define the fallbacks under `type`, maybe? `type: [ poetry, pip ]` (current way under the hood) vs `type: [ pip, poetry ]`
Also something we are very much interested in (including the logger-based scatter plots etc)
`StorageManager.download_folder(remote_url='s3://some_ip:9000/clearml/my_folder_of_interest', local_folder='./')` yields a new folder structure, `./clearml/my_folder_of_interest`, rather than just `./my_folder_of_interest`
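In the meantime one could undo that extra prefix by hand; a minimal sketch (the helper name is made up, and it assumes the download really lands under `<local_folder>/clearml/...` as described above):

```python
import shutil
from pathlib import Path


def flatten_download(local_folder: str, nested_path: str) -> Path:
    """Move `<local_folder>/<nested_path>` up to `<local_folder>/<basename>`.

    Hypothetical post-processing step to run after
    StorageManager.download_folder(), undoing the extra
    'clearml/...' prefix in the local layout.
    """
    src = Path(local_folder) / nested_path
    dst = Path(local_folder) / src.name
    shutil.move(str(src), str(dst))
    # Clean up the now-empty intermediate directories.
    parent = src.parent
    while parent != Path(local_folder) and not any(parent.iterdir()):
        parent.rmdir()
        parent = parent.parent
    return dst
```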
Not necessarily on the same branch, no
AgitatedDove14
I'll make a PR for it now, but the long story is that you have the full log, but the `virtualenv` version is not logged anywhere (the usual output from `virtualenv` just says which Python version is used, etc.).
I also tried setting `agent.python_binary: "/usr/bin/python3.8"` but it still uses Python 2.7?
My suspicion is that this relates to https://clearml.slack.com/archives/CTK20V944/p1643277475287779 , where the config file is loaded prematurely (upon `import`), so our `dotenv.load_dotenv()` call has not yet registered.
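A pure-stdlib stand-in (not clearml itself, and using a made-up variable name) for why the load order matters, assuming the config expands environment variables at the moment it is parsed:

```python
import os


def parse_config() -> str:
    # Stand-in for a config parse that expands an env var on read.
    return os.environ.get("MY_APP_API_HOST", "<unset>")


# Config parsed before the .env file is loaded -> variable not visible yet.
before = parse_config()

# What dotenv.load_dotenv() would do, done by hand here:
os.environ["MY_APP_API_HOST"] = "http://localhost:8008"

# Config parsed after loading -> variable is picked up.
after = parse_config()
```

This is why parsing the config at import time defeats any `load_dotenv()` call that runs afterwards.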
Yes and no SmugDolphin23
The project is listed, but there is no content and it hides my main task that it is attached to.
Or if it wasn't clear, that chunk of code is from clearml's dataset.py
That could be a solution for the regex search; my comment on the pop-up (in the previous reply) was a bit more generic - just that it should potentially include some information on what failed while fetching experiments 🙂
Is there a preferred way to stop the agent?
Okay so the only missing piece of the puzzle, I think, is that it would be nice if this propagates to the autoscaler as well; that then also allows hiding some of the credentials etc 😮
That still seems to crash SuccessfulKoala55 🤔
EDIT: No, wait, the environment still needs updating. One moment still...
And `task = Task.init(project_name=conf.get("project_name"), ...)` is basically a no-op in remote execution, so it does not matter if `conf` is empty, right?
I think I may have brought this up multiple times in different ways :D
When dealing with long and complicated configurations (whether config objects, yaml, or otherwise), it's often useful to break them down into relevant chunks (think hydra, maybe).
In our case, we have a custom YAML instruction `!include`, i.e.
```
# foo.yaml
bar: baz
```
```
# bar.yaml
obj: !include foo.yaml
maybe_another_obj: !include foo.yaml
```
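For reference, one common way to wire such a tag up with PyYAML; a sketch that assumes included paths are resolved relative to the including file (`IncludeLoader` is a made-up name, not our actual implementation):

```python
import os

import yaml


class IncludeLoader(yaml.SafeLoader):
    """SafeLoader extended with a custom !include tag."""


def _include(loader: IncludeLoader, node: yaml.Node):
    # loader.name is the path of the file being parsed (when loading from a
    # file object), so includes resolve relative to the including file.
    base = os.path.dirname(getattr(loader, "name", ""))
    path = os.path.join(base, loader.construct_scalar(node))
    with open(path) as f:
        return yaml.load(f, IncludeLoader)


IncludeLoader.add_constructor("!include", _include)
```

With the two files above, loading `bar.yaml` through `IncludeLoader` would yield `{"obj": {"bar": "baz"}, "maybe_another_obj": {"bar": "baz"}}`.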
I dunno :man-shrugging: but `Task.init` is clearly incompatible with pytest and friends
Any thoughts @<1523701070390366208:profile|CostlyOstrich36> ?
I wouldn't want to run the entire notebook, just a specific part of it.
I guess in theory I could write a `run_step.py`, similarly to how the pipeline in ClearML works… 🤔 And then use `Task.create()` etc?
Not really - it will just show the string. A preview would be more like a low-res version of the uploaded image or similar.
We're wondering how many on-premise machines we'd like to deprecate. For that, we want to see how often our "on premise" queue is used (how often a task is submitted and run), for how long, how many resources it consumes (on average), etc.
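As a sketch of the kind of aggregation we have in mind (the field names and task shape here are assumptions for illustration, not the actual task schema):

```python
from datetime import datetime
from statistics import mean


def queue_stats(tasks: list[dict]) -> dict:
    """Summarize how often and how long tasks ran on a queue.

    `tasks` is assumed to be a list of dicts with ISO-8601
    'started'/'completed' timestamps (hypothetical shape).
    """
    durations = [
        (
            datetime.fromisoformat(t["completed"])
            - datetime.fromisoformat(t["started"])
        ).total_seconds()
        for t in tasks
        if t.get("started") and t.get("completed")
    ]
    return {
        "runs": len(durations),
        "avg_seconds": mean(durations) if durations else 0.0,
    }
```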
Is it currently broken? 🤔
Hm, just a small update - I just verified and it does indeed work on linux:
```
import clearml
import dotenv

if __name__ == "__main__":
    dotenv.load_dotenv()
    config = clearml.backend_api.Config.load()  # Success, parsed with environment variables
```
Interesting, why won't it be possible? Quite easy to get the source code using e.g. `dill`.
It can also generate a log file with this method; it does not have to output it to the CONSOLE tab.
I wouldn't mind going the `requests` route if I could find the API endpoint from the SDK?
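For context, the API server's REST endpoints follow a `<host>/<service>.<action>` pattern (e.g. `queues.get_all`); a minimal sketch of building such a call with `requests` (the endpoint name and the bearer-token auth here are assumptions, to be checked against the server's API reference):

```python
import requests


def api_url(api_host: str, service: str, action: str) -> str:
    # ClearML-style API routes look like <host>/<service>.<action>
    return f"{api_host.rstrip('/')}/{service}.{action}"


def call_api(api_host: str, token: str, service: str, action: str,
             payload: dict) -> dict:
    # Sketch only: a real token comes from your API credentials /
    # auth endpoint; verify the details against your server's docs.
    resp = requests.post(
        api_url(api_host, service, action),
        headers={"Authorization": f"Bearer {token}"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()
```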
`~` is a bit weird since it's not part of the package (might as well let the user go through `clearml-init`), but using `${PWD}` works! 🎉 🙏
(Though I still had to add the `CLEARML_API_HOST` and `CLEARML_WEB_HOST` ofc, or define them in the `clearml.conf`)