SparklingHedgehong28 , have you tried upgrading to pro? That is the easiest way to evaluate 🙂
SarcasticSquirrel56 , you're right. I think you can use the following setting in ~/clearml.conf: sdk.development.default_output_uri: <S3_BUCKET>. Tell me if that works
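For reference, a rough sketch of where that setting sits inside ~/clearml.conf (the bucket name is a placeholder):
```
sdk {
    development {
        # default destination for task outputs (artifacts and models)
        default_output_uri: "s3://<S3_BUCKET>"
    }
}
```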
Hi @<1673501379764686848:profile|VirtuousSeaturtle4> , what do you mean? Connect to a server someone else set up?
Hi @<1674226153906245632:profile|PreciousCoral74> , you certainly can, just use the Logger module 🙂
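For example, a minimal sketch of explicit logging (project/task names and values are placeholders):
```python
from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="manual logging")

# report an explicit scalar on top of the automatic framework logging
Logger.current_logger().report_scalar(
    title="accuracy", series="validation", value=0.91, iteration=10
)
```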
Hi ExasperatedCrocodile76 ,
When running in docker mode, the agent should handle all the points you raised above and just work 🙂
Hi @<1655744373268156416:profile|StickyShrimp60> , I think it would be good to open a GitHub issue if there isn't one 🙂
@<1610083503607648256:profile|DiminutiveToad80> , can you give a standalone code example for such a pipeline that reproduces the issue? Each task should have its own requirements logged. What is failing, the controller or the individual steps?
Can you add the api section of your clearml.conf and also a log of a task?
Hi @<1541954607595393024:profile|BattyCrocodile47> , how does ClearML react when you run the scripts this way? The repository is logged as usual?
Hi GorgeousMole24 , you can certainly compare across different projects.
Simply go to "all projects" and select the two experiments there (you can search for them at the top right to find them easily)
Hi UpsetTurkey67 ,
Is this what you're looking for?
https://clear.ml/docs/latest/docs/references/sdk/trigger#add_model_trigger
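A rough sketch of wiring it up (the task ID, queue and project names are placeholders, and the keyword arguments are best double-checked against the docs page above):
```python
from clearml.automation import TriggerScheduler

scheduler = TriggerScheduler(pooling_frequency_minutes=3)

# launch an existing task on a queue whenever a model in the project is published
scheduler.add_model_trigger(
    name="retrain-on-publish",
    schedule_task_id="<TASK_ID_TO_LAUNCH>",
    schedule_queue="default",
    trigger_project="examples",
    trigger_on_publish=True,
)
scheduler.start()
```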
Hi @<1534496192929468416:profile|EagerGiraffe33> , what if you try to put a specific version of pytorch you've tested on your remote environment in the requirements section of the cloned task?
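For example, a rough sketch of pinning it on the cloned task before enqueuing it (the task ID, queue name and versions are placeholders, and set_packages assumes a reasonably recent clearml SDK):
```python
from clearml import Task

# clone the task and pin the exact packages the agent should install remotely
cloned = Task.clone(source_task="<ORIGINAL_TASK_ID>", name="pinned torch version")
cloned.set_packages(["torch==2.1.0", "torchvision==0.16.0"])  # placeholder versions

Task.enqueue(cloned, queue_name="default")
```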
AdventurousButterfly15 please try upgrading to 1.4.0 - this should solve the issue:
pip uninstall clearml-agent -y && pip install -U clearml-agent
Hi GentleSwallow91 , I would highly recommend upgrading to 1.9 as it also brings a new major feature (as well as minor bug fixes). I'm not sure about DB migration - there might be one or two. I suggest taking a look at the versions in between 🙂
Hi VivaciousBadger56 , this is a good question. In my opinion it's best to start with watching the videos on ClearML's YouTube channel. This one is especially useful:
https://www.youtube.com/watch?v=quSGXvuK1IM
As for which steps to take, I think the following should cover most bases:
Experiment tracking & management - See that you can see all of the expected outputs in the ClearML webUI
Remote experiment execution - Try and execute an experiment remotely using the agent. Change some c...
Hi @<1761199244808556544:profile|SarcasticHare65> , and if you run locally for the same number of iterations, this does not happen?
Hi @<1540867420321746944:profile|DespicableSeaturtle77> , what didn't work? What showed up in the experiment? What was logged in the installed packages?
SmugDolphin23 , maybe you have an idea?
Hmmmmm, do you have a specific use case in mind? I think pipelines are created only through the SDK, but I might be wrong
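For reference, the usual SDK route looks something like this minimal PipelineController sketch (project, task names and queue are placeholders):
```python
from clearml.automation import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")

# each step clones an existing task and runs it through an agent queue
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess data",
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train model",
)
pipe.start(queue="default")
```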
Hi @<1544853721739956224:profile|QuizzicalFox36> , currently there is no SDK option for this, however you can automate this using the API. I suggest opening developer tools (F12) to see what the UI sends when creating/editing reports and that way you can automate it
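As a heavily hedged sketch of that approach: authenticate against the apiserver and then replay whatever request the UI sends. The reports.create endpoint name and payload below are assumptions; copy the real ones from the Network tab.
```python
import requests

api_server = "https://api.clear.ml"  # or your self-hosted apiserver URL
access_key, secret_key = "<ACCESS_KEY>", "<SECRET_KEY>"

# get a session token via the standard auth.login endpoint
token = requests.post(
    f"{api_server}/auth.login", auth=(access_key, secret_key)
).json()["data"]["token"]

# replay the call the Web UI makes; the endpoint name and body below are assumptions,
# verify them in the browser dev tools (F12 -> Network) while creating/editing a report
resp = requests.post(
    f"{api_server}/reports.create",
    headers={"Authorization": f"Bearer {token}"},
    json={"name": "Automated report", "project": "<PROJECT_ID>"},
)
print(resp.json())
```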
GrievingTurkey78 , did you try calling task.set_resource_monitor_iteration_timeout after the task init?
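i.e. something along these lines (the 1800 seconds value is just an example, and seconds_from_start is the argument name as I recall it):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="resource monitor timeout")

# switch resource-monitor reporting to wall-clock time after this many seconds
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)
```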
And regarding model deployment, do you mean serving the model through a serving engine such as Triton?
Hi @<1523702251011444736:profile|ScaryBluewhale66> , I think the only port you need is the one that is allocated to the apiserver
Hi AdventurousButterfly15 , what version of clearml-agent are you using?
Hi @<1547028031053238272:profile|MassiveGoldfish6> , regarding the login errors - it looks like it's failing to connect to the clearml backend. Is it possible there is something mistyped in the credentials or the server address? Any chance that the credentials were revoked?
Hi @<1660817806016385024:profile|FantasticMole87> , you should have Task.init() regardless of set_base_docker. But it appears you can't set it in Task.init
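For example, a minimal sketch of setting it right after the init (the image name is a placeholder, and the docker_image keyword assumes a recent SDK; older versions took a single positional string):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker base image")

# container image the agent should use when this task is executed remotely
task.set_base_docker(docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04")
```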
Hi OddShrimp85 ,
I think it's about your own preference and how you like to work.
Hi @<1547028031053238272:profile|MassiveGoldfish6> , what version of clearml-serving do you have? Can you please add the full terminal outputs for better context?
Please try setting it to True - that should fix it
Hi @<1637624992084529152:profile|GlamorousChimpanzee22> , did you test this section and it doesn't work, or did you just not find where it's being read in the code?