Hi PanickyMoth78
import torch
from clearml import Task

torch.save(net.state_dict(), PATH)  # auto-uploads to GCS
# get all the models from the Task
output_models = Task.current_task().models["output"]
# get the last one
last_model = output_models[-1]
# set meta-data
last_model.set_metadata(key="my key", value="my value", type="str")
Also, there was a trick that worked with a previous bug: could you zoom out in the browser, and see if you suddenly get the plot?
This is an odd error, could it be conda is not installed in the container (or not in the PATH)?
Are you trying with the latest RC?
Yes, I think we just found out it breaks clearml 🙂
could you test with the latest stable, just in case ?
(I'll make sure we have an RC that supports the hydra dev version)
is there a built-in programmatic way to adjust development.default_output_uri ?
How about passing it in your Task.init(output_uri='...') call?
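For example, a minimal sketch (the bucket URI and names are illustrative; any storage URI such as s3:// or gs:// works):
from clearml import Task

# output_uri here overrides development.default_output_uri for this task
task = Task.init(
    project_name="examples",
    task_name="output-uri-demo",
    output_uri="gs://my-bucket/clearml-outputs",  # hypothetical bucket
)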
oh sorry, my bad, then you probably need to define all the OS environment variables for the Python temp folder for the agent (the Task process itself is a child process, so it will inherit them)
TMPDIR=/new/tmp TMP=/new/tmp TEMP=/new/tmp clearml-agent daemon ...
We made two TB (TensorBoard) writers for one task and wrote to them in parallel. And I wanted to know if it is possible here as well.
Basically you will have different series (based on the TB log file) on the same graph, so you can compare 🙂 all automatically
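A minimal sketch of that setup, assuming PyTorch's SummaryWriter (log dir names are illustrative):
from clearml import Task
from torch.utils.tensorboard import SummaryWriter

task = Task.init(project_name="examples", task_name="parallel-tb-writers")

# two writers with separate log dirs; each shows up as its own series
# on the same ClearML scalar plot
writer_a = SummaryWriter(log_dir="runs/variant_a")
writer_b = SummaryWriter(log_dir="runs/variant_b")

for step in range(100):
    writer_a.add_scalar("loss", 1.0 / (step + 1), step)
    writer_b.add_scalar("loss", 0.5 / (step + 1), step)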
And you have the exact same folder structure / content, and server A/B give a different set of experiments ?
(is serverB empty, meaning no experiments at all?)
OddAlligator72 FYI, in your current code you can always do:
if use_trains:
    from trains import Task
    Task.init()
Might be easier 🙂
OddAlligator72 quick question:
"suggest that you implement a simple entry-point API"
How would the system get the correct packages / git repo / arguments if you are only passing a single function entrypoint ?
OddAlligator72 okay, that is possible, how would you specify the main python script entry point? (wouldn't that make more sense rather than a function call?)
How do you determine which packages to require now?
Analysis of the actual repository (i.e. it will actually look for imports 🙂 ). This way you get the exact versions you have, but not the clutter of the entire virtual environment.
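If you ever need to add or pin a package on top of that analysis, a minimal sketch using Task.add_requirements (it must be called before Task.init; the package and version are illustrative):
from clearml import Task

# add an extra requirement on top of the automatic import analysis
Task.add_requirements("torch", "1.13.1")  # hypothetical version pin
task = Task.init(project_name="examples", task_name="pinned-requirements")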
Hi ClumsyElephant70
So do you need both requirements.txt combined ?
How will the agent be able to reproduce both repo on the remote machine ?
Will this still be considered as global site-packages ?
This is a pip setting, I "think" it inherits from the local user's installation, but I would actually install with "sudo pip", which will definitely be "inherited"
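Related, a hedged sketch: if you want the virtualenv the agent creates to see those global site-packages, there is a clearml.conf switch for it (assuming the standard agent section):
agent {
    package_manager {
        # let the venv the agent creates inherit the system site-packages
        system_site_packages: true
    }
}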
Hi UnevenDolphin73
I think there is an open issue on github, I'm not sure but I think there is already some internal progress
https://github.com/allegroai/clearml/issues/199
Seems already supported for a while now ...
Could it be it defaulted to the demo server instead of your own server?
or by trains
We just upload the image as is ... I think this is a SummaryWriter issue
I keep getting a "failed getting token" error
MiniatureCrocodile39 what's the server you are using ?
Hi UnsightlySeagull42
How can I reproduce this behavior ?
Are you getting all the console logs ?
Is it only the Tensorboard that is missing ?
Hi UnsightlySeagull42
Could you test with the latest RC? pip install clearml==1.0.4rc0
Also could you provide some logs?
Hi @<1523722267119325184:profile|PunySquid88> I guess it's a good thing we talk, because I believe that what you are looking for is already available :)
Logger.current_logger().report_media('title', 'series', iteration=1337, local_path='/tmp/bunny.mp4')
This will actually work on any file, that said, the UI might display the wrong icon (which will be fixed in the next version).
We usually think of artifacts as data you want to reuse, so all the files uploaded there are accessible...
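A minimal sketch of that reuse pattern (project, artifact, and file names are illustrative):
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact-demo")
task.upload_artifact(name="report", artifact_object="/tmp/report.csv")  # hypothetical file

# later, from any other script or task:
prev = Task.get_task(project_name="examples", task_name="artifact-demo")
local_copy = prev.artifacts["report"].get_local_copy()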
Hmm MiniatureHawk42 how many files in the zip ?
Hi OutrageousSheep60
Is there a way to instantiate a clearml-task while providing it a Dockerfile that it needs to build prior to executing the task?
Currently not really, as at the end the agent does need to pull a container.
But you can achieve basically the same by adding the "dockerfile" script as --docker_bash_setup_script
Notice of course that this is an actual bash script, not a Docker script, so no need for the "RUN" prefix.
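For example, a hedged sketch (the script and file names are illustrative):
clearml-task --project examples --name docker-setup-demo --script train.py --docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --docker_bash_setup_script setup.sh
where setup.sh holds what the Dockerfile's RUN lines would do, as plain bash (e.g. apt-get install -y libsm6 followed by pip install opencv-python).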
wdyt?
SweetGiraffe8 Task.init will autolog everything (git/python packages/console etc), for your existing process.
Task.create purely creates a new Task in the system, and lets you manually fill in all the details on that Task
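A minimal sketch of the difference (project, repo, and script names are illustrative):
from clearml import Task

# Task.init: attaches to this running process and auto-logs git, packages, console, etc.
task = Task.init(project_name="examples", task_name="auto-logged-run")

# Task.create: only registers a new Task entry; you fill in the details yourself
draft = Task.create(
    project_name="examples",
    task_name="manually-defined-task",
    repo="https://github.com/me/my-repo.git",  # hypothetical repo
    script="train.py",
)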
Make sense ?
Hi JumpyDragonfly13
Let's assume we have two machines, one we call remote, one we call laptop (at least for this discussion)
On the Remote machine we need to run (notice we must have docker preinstalled on the remote machine; it can work without docker, let me know if this is the case for you):
clearml-agent daemon --queue interactive --create-queue --docker
On the Laptop we run:
clearml-session --docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
What clearml-session will do is create...