Does ClearML automatically capture all stdout/stderr, like TensorFlow C++ stdout? Is there an extra process for that? Where is this done and what are the assumptions?
ClearML should capture any output from Python code. C++ output is not supported.
Are you referring to the pipeline task?
Hi IrateDolphin19,
Can you give a simple schema of what you're doing or trying to achieve? Are you using pipelines for this?
Hi @<1564060263047499776:profile|ThoughtfulCentipede62>, you can specify the Python interpreter in clearml.conf on the remote machine (search for 'binary' or 'python')
Also, yes ClearML can pull it from an artifactory as long as the machine has access 🙂
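If it helps, this is roughly what that could look like in clearml.conf on the remote machine - a sketch, and the key name and path are assumptions, so check them against your clearml.conf reference:

    agent {
        # assumed key - force the agent to use a specific Python interpreter
        python_binary: "/usr/local/bin/python3.9"
    }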
I think maybe you're right. Let me double check. I might be confusing it with the previous version
Hi @<1690896098534625280:profile|NarrowWoodpecker99>, can you please elaborate on what you mean by 'limit code access'? You define access to the code via the git credentials in the agent config
Try creating a new version and syncing the local folder (or try to specifically add files) 🙂
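Something along these lines, as a rough sketch using the Dataset SDK (project, dataset name, and paths are placeholders):

    from clearml import Dataset

    # Get the existing dataset and create a new version on top of it
    parent = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
    new_version = Dataset.create(
        dataset_project="my_project",
        dataset_name="my_dataset",
        parent_datasets=[parent.id],
    )

    # Sync the local folder (or add specific files with new_version.add_files(path=...))
    new_version.sync_folder(local_path="/path/to/local/folder")
    new_version.upload()
    new_version.finalize()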
Hi WittyOwl57,
Can you give a screenshot of how it's saved in the UI currently? Also, can you look at the developer tools and see what tasks.get_configuration_names and tasks.get_configurations return when looking at the experiment's configurations in the UI?
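If it's easier than digging through the browser dev tools, you could also try calling those endpoints directly with the APIClient - a rough sketch (the task id is a placeholder, and the parameter names are from memory, so double-check them against the API reference):

    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    task_id = "your_task_id_here"  # placeholder

    # The same calls the UI makes for the configuration section
    names = client.tasks.get_configuration_names(tasks=[task_id])
    configs = client.tasks.get_configurations(tasks=[task_id])
    print(names)
    print(configs)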
Hi @<1534344465790013440:profile|UnsightlyAnt34> , try installing clearml==1.10.4rc1
Hi @<1540867420321746944:profile|DespicableSeaturtle77> , I think you need to define it per step
Hi @<1638712141588467712:profile|ExuberantTurtle48>, I think you can use Task.create() to write similar code - None
However, I would suggest you also investigate the pipelines.
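For reference, a rough sketch of what Task.create() usage could look like (the repo, script, and queue values are placeholders, not your actual setup):

    from clearml import Task

    # Create a task definition from a repo + script without running it locally
    task = Task.create(
        project_name="examples",
        task_name="created_from_code",
        repo="https://github.com/your-org/your-repo.git",  # placeholder
        branch="main",
        script="train.py",
        requirements_file="requirements.txt",
    )

    # Then enqueue it for an agent to pick up
    Task.enqueue(task, queue_name="default")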
IrateDolphin19, can you give a bit of an explanation of how and what you're doing, and what on the clearml side seems to fail - how do you create the tasks and manage them...
You need to follow the instructions here - None
Hi NervousFrog58, version 1.1.1 seems to be quite old. I would suggest upgrading your server. Please note that since then there have been a couple of DB migrations, so make sure to follow all the steps 🙂
Can you please explain your use case?
Hi HarebrainedBaldeagle11 , not that I know of. Did you encounter any issues?
Hi 🙂
Are you asking if you can share experiments between a self-hosted server and http://app.clear.ml ?
I think in that case you should use environment variables that are set on the machine, and then access them from the task's code
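For example, a minimal sketch (the variable and parameter names are just placeholders):

    import os
    from clearml import Task

    task = Task.init(project_name="examples", task_name="env_example")

    # Read a value that was set on the machine (placeholder name)
    endpoint = os.environ.get("MY_SERVICE_ENDPOINT", "http://localhost:8080")

    # Optionally connect it so it's visible (and overridable) on the task
    params = task.connect({"service_endpoint": endpoint})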
Hi @<1544853695869489152:profile|NonchalantOx99> , do you mean config parameters on the task itself?
Hi @<1544853695869489152:profile|NonchalantOx99> , can you please add the full log?
I think in this case you can fetch the task object, force it into running mode, and then edit whatever you want. Afterwards, just mark it as completed again.
None
Note the force parameter.
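Roughly something like this (a sketch - the task id and the edited parameter are placeholders):

    from clearml import Task

    task = Task.get_task(task_id="your_task_id")  # placeholder

    # Force the completed task back into "running" so it can be edited
    task.mark_started(force=True)

    # ...edit whatever you need, e.g. parameters
    task.set_parameters({"General/batch_size": 64})

    # Mark it completed again
    task.mark_completed()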
Hi @<1578193384537853952:profile|MoodyOx45> , those are actually pretty good questions 🙂
- I think so, yes, but your code & pipeline inputs would need to allow this. Your pipeline would need to be written with decorators, and there would need to be some logic dependent on the parameters you give the pipeline when running (see the sketch after this list)
- I'm afraid not, currently. I would suggest opening a GitHub feature request for this 🙂
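To illustrate the first point, a rough sketch of parameter-dependent logic with pipeline decorators (the step names, arguments, and exact import path are assumptions - adapt them to your code):

    from clearml import PipelineDecorator

    @PipelineDecorator.component(return_values=["data"], cache=False)
    def load_data(source: str):
        # placeholder step
        return {"source": source}

    @PipelineDecorator.component(return_values=["result"], cache=False)
    def heavy_processing(data):
        # placeholder step
        return {"processed": data}

    @PipelineDecorator.pipeline(name="conditional_pipeline", project="examples", version="0.1")
    def run_pipeline(source: str = "s3", do_heavy_step: bool = False):
        data = load_data(source=source)
        # logic that depends on the parameters given when running the pipeline
        if do_heavy_step:
            data = heavy_processing(data=data)
        return data

    if __name__ == "__main__":
        PipelineDecorator.run_locally()  # debug the whole pipeline locally
        run_pipeline(source="local", do_heavy_step=True)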
Then you'd need to change the image in the docker compose, or spin up the webserver individually and make the necessary changes in the docker compose. Either way, you need a backend to work with the web UI.
Hi SmugTurtle78, I'm not sure it's possible. Maybe SuccessfulKoala55 knows a workaround. Do you want to get rid of them in all scenarios or just in some specific use cases?
What do you mean? You can report as many artifacts as you want
@<1556812486840160256:profile|SuccessfulRaven86> , did you install poetry inside the EC2 instance or inside the Docker container? Basically, where do you put the poetry installation bash script - in the 'init script' section of the autoscaler, or in the task's 'setup shell script' in the execution tab (this is basically the script that runs inside the Docker container)?
It sounds like you're installing poetry on the EC2 instance itself, but the experiment runs inside a Docker container.