
Using detect_with_pip_freeze: true runs into "package version not found" for some of the packages I have installed locally.
Hey AgitatedDove14, after playing around it seems that if the callback filepath points to an hdf5 file, it is not uploaded.
AgitatedDove14 I filed an issue on fire asking them to point us to the argument parsing method: https://github.com/google/python-fire/issues/291
Pigar is capturing different versions than the ones I have installed on my local machine (not a problem except for one). I just want to force the version of that package in a way that I don't have to manually change it from the UI for every experiment.
Basically one points to an hdf5 file and the other one has no extension.
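For reference, a rough sketch of the two callbacks I mean, assuming they are Keras ModelCheckpoint callbacks (the filepaths here are placeholders):
` from tensorflow.keras.callbacks import ModelCheckpoint

# the callback whose filepath ends in .hdf5 (this is the one that is not uploaded)
ckpt_hdf5 = ModelCheckpoint(filepath="checkpoints/best_model.hdf5", save_best_only=True)

# the other callback, whose filepath has no extension
ckpt_no_ext = ModelCheckpoint(filepath="checkpoints/best_model", save_best_only=True) `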
Thanks TimelyPenguin76 , the example works fine! I’ll debug further on my side!
On the server through the command line?
Using get_weights(True) I get ValueError: Could not retrieve a local copy of model weights <ID>, failed downloading <URL>
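Roughly what the failing call looks like on my side (a sketch; loading the model by its ID via InputModel is an assumption about my exact code):
` from clearml import InputModel

model = InputModel(model_id="<ID>")     # the model id from the error message
weights_path = model.get_weights(True)  # raise_on_error=True, which gives the ValueError above `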
Makes sense! Then where would I have to add output_uri to save the weights?
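A minimal sketch of where output_uri could go, assuming it is passed to Task.init (project/task names and the destination below are placeholders):
` from clearml import Task

task = Task.init(
    project_name="my_project",           # placeholder
    task_name="my_experiment",           # placeholder
    output_uri="s3://my-bucket/models",  # placeholder destination for uploaded weights
) `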
Yes, everything is that way (work dir and args are ok) except the script path. It shows -m module arg1 arg2.
I’ll show you what I have through PM!
I’ll open the PR!
I am using the code inside the on_train_epoch_end hook, inside a metric. So the important part is:
` import matplotlib.pyplot as plt

fig = plt.figure()
# ... code that draws the plot ...
logger.experiment.add_figure("fig", fig)
plt.close(fig) `
AgitatedDove14 I am not sure why the packages get different versions; maybe since the package is not directly imported in my code it can resolve to a different version from what I have locally (?). Should all the library versions match exactly between my local environment and the code that runs in the agent? The Task.add_requirements(package_name, package_version=None) workaround works perfectly! I just add the previous version that doesn't break the code. Yes, a force flag could definitely help ...
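For reference, roughly what that workaround looks like in my script (the package name and version below are placeholders):
` from clearml import Task

# pin the requirement before Task.init so the recorded requirements use this version
Task.add_requirements("some_package", package_version="1.2.3")  # placeholder name/version
task = Task.init(project_name="my_project", task_name="my_experiment")  # placeholders `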
Yes! I think that's what I will do 👌 Let me know if there is a way to contribute a mode to keep logging off. We just don't want to pollute the server when debugging.
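A possible sketch of what I mean by keeping logging off while debugging, assuming ClearML's offline mode (Task.set_offline) covers this use case:
` from clearml import Task

# keep everything local while debugging; nothing is reported to the server
Task.set_offline(offline_mode=True)
task = Task.init(project_name="my_project", task_name="debug_run")  # placeholders `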
Hi CostlyOstrich36 ! The message is the following:
clearml.model - INFO - Selected model id: 27c1a1700b0b4e25a4344dc4ef9868fa
They are not models, those are intermediate tensors I am caching to make training faster. I don't need to log them.
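One possible way to keep those cached tensors from being registered as models, assuming they are picked up by the framework auto-logging (the framework key below is an assumption):
` from clearml import Task

task = Task.init(
    project_name="my_project",                   # placeholder
    task_name="my_experiment",                   # placeholder
    auto_connect_frameworks={"pytorch": False},  # assumed framework; disables its model auto-logging
) `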
Thanks SuccessfulKoala55 !
Yes! What env variables should I pass?
I'll give that a try! Thanks CostlyOstrich36
Thanks AgitatedDove14 !