The webUI uses the API for everything, so I'd suggest using the webUI as a reference for how to approach this.
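If it helps, the Python APIClient hits the same REST API the webUI does; a minimal sketch (the project name is just a placeholder):
```python
# Query the ClearML server through the same REST API the webUI uses.
from clearml.backend_api.session.client import APIClient

client = APIClient()  # credentials are read from clearml.conf
projects = client.projects.get_all(name="MyProject")  # "MyProject" is a placeholder
for p in projects:
    print(p.id, p.name)
```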
Hi @<1523701132025663488:profile|SlimyElephant79> , are you running both from the same machine? Can you share the execution tab of both pipeline controllers?
Also, the reason they are in a queued state is that no worker is picking them up. You can control the queue each step is pushed to; I think by default they are sent to the 'default' queue.
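For reference, a minimal sketch of routing steps to specific queues (queue, step, and project names are placeholders):
```python
from clearml import PipelineController

def step_one():
    # placeholder step logic
    return 42

pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
pipe.set_default_execution_queue("default")  # queue used by steps that don't override it
pipe.add_function_step(
    name="step_one",
    function=step_one,
    execution_queue="gpu_queue",  # per-step override
)
pipe.start()  # the controller itself is enqueued to the "services" queue by default
```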
Also, I would suggest trying pipelines from decorators; I think it would be much smoother for you.
In that case you have the "packages" parameter for both the controller and the steps
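A minimal sketch of a decorator pipeline with packages set on both levels (package lists and names are just examples):
```python
from clearml.automation.controller import PipelineDecorator

# packages can be set per step...
@PipelineDecorator.component(packages=["pandas>=1.5"], execution_queue="default")
def load_data():
    import pandas as pd
    return pd.DataFrame({"a": [1, 2, 3]})

# ...and on the controller itself
@PipelineDecorator.pipeline(name="demo-pipeline", project="examples", packages=["clearml"])
def run_pipeline():
    df = load_data()
    print(df)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # optional: debug the whole pipeline locally
    run_pipeline()
```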
I'm not sure if I'm missing something, but why not use that environment in PyCharm then?
Hi @<1639799308809146368:profile|TritePigeon86> , where in the documentation did you see these parameters: active_duration, job_started, and job_ended?
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , you mean you'd like to have some web interface to interact with the API?
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , what error are you getting?
Hi, what version of clearml-agent are you using? Does this always happen? Can you add your clearml.conf file here (make sure to remove any credentials/personal data)?
I'd suggest using the agent in --docker mode
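Roughly like this (queue name and image are placeholders):
```bash
# Run the agent in docker mode; each task then executes inside a container
clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04
```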
Hi @<1523701842515595264:profile|PleasantOwl46> , I think that's what's happening. If the server is down, the code continues running as if nothing happened, and ClearML will simply cache all results and flush them once the server is back up.
Hi @<1543766544847212544:profile|SorePelican79> , can you provide a sample of how this looks? The suggested method is the one in the examples:
None
PunyWoodpecker71, what do you mean by 'experiment detail of the a project'? Can you give me an example?
Reports is a separate area; it sits between the 'Pipelines' and 'Workers & Queues' buttons on the bar on the left 🙂
Hi @<1547028031053238272:profile|MassiveGoldfish6> , you should set output_uri in Task.init to point towards your S3 bucket 🙂
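Something like this (project, task, and bucket names are placeholders):
```python
from clearml import Task

# output_uri routes model/artifact uploads to your bucket
task = Task.init(
    project_name="examples",
    task_name="s3-output-demo",
    output_uri="s3://my-bucket/clearml-artifacts",
)
```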
From a code perspective, it looks like you're basically saving a pickle to a file via dump, and that file just happens to be the model. ClearML doesn't patch into pickle. You can save the pickle as an artifact with ClearML using Task.register_artifact - None
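For reference, a minimal sketch of saving a pickled file as an artifact. Note I'm using upload_artifact here, which takes a file path (register_artifact is geared toward live-updating objects like DataFrames); names and paths are placeholders:
```python
import pickle
from clearml import Task

task = Task.init(project_name="examples", task_name="pickle-artifact-demo")

model = {"weights": [0.1, 0.2]}  # stand-in for your real model object
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Upload the pickled file as a task artifact
task.upload_artifact(name="model-pickle", artifact_object="model.pkl")
```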
FiercePenguin76, we'll try to reproduce. Thanks for the input!
You can save it as a dataset and then fetch it at runtime, or am I missing something?
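Roughly (dataset names and paths are placeholders):
```python
from clearml import Dataset

# Create and upload the dataset once
ds = Dataset.create(dataset_name="my-data", dataset_project="examples")
ds.add_files(path="data/")
ds.upload()
ds.finalize()

# Later, at runtime, fetch a local read-only copy
local_path = Dataset.get(dataset_name="my-data", dataset_project="examples").get_local_copy()
print(local_path)
```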
It might be a good idea to specify the Python binary as well. You should go over the various configurations here:
None
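For example, in clearml.conf the agent's interpreter can be pinned (the path is a placeholder; assuming the agent.python_binary setting fits your setup):
```
agent {
    # full path to the python interpreter the agent uses to build the venv
    python_binary: "/usr/bin/python3.10"
}
```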
Is it normal that it's slower than on my device, even though the agent machine is much more powerful? Or is it because it's just simple code?
I'm not sure I understand. Can you elaborate please?
Hi @<1570583227918192640:profile|FloppySwallow46> , can you please add the full log?
GiganticTurtle0, which ClearML version are you using? From what I can see in the documentation, to add the new parameters you'll have to call task.connect() again to add the new args.
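Something like this (parameter names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="connect-demo")

params = {"lr": 0.01}
params = task.connect(params)  # initial hyperparameters

# later on, new parameters require another connect() call
extra = {"batch_size": 32}
extra = task.connect(extra)    # the new args now appear in the task's configuration
```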
Can you maybe provide a snippet I can play with?
PanickyMoth78, pipeline tasks are usually hidden. If you go to Settings -> Configuration, you will have an option to show hidden projects. This way you can find the projects that the tasks reside in, plus the pipeline steps.
ReassuredTiger98, on the Settings page, at the bottom right, there should be a version number. Can you please tell me what it shows?
Hi @<1523704757024198656:profile|MysteriousWalrus11> , do you have a snippet that reproduces this? Does the pipeline show up the same way when running locally and remotely?
Hi RoughTiger69 ,
If you create a child version and add only the delta of files to the child, fetching the child version will also fetch the parent's files.
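A minimal sketch of that flow (names and paths are placeholders):
```python
from clearml import Dataset

parent = Dataset.get(dataset_name="my-data", dataset_project="examples")

# Child version: add only the delta of files on top of the parent
child = Dataset.create(
    dataset_name="my-data",
    dataset_project="examples",
    parent_datasets=[parent.id],
)
child.add_files(path="new_files/")  # only the new/changed files
child.upload()
child.finalize()

# Fetching the child pulls the parent's files as well
local = Dataset.get(dataset_id=child.id).get_local_copy()
print(local)
```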
Hi @<1575656665519230976:profile|SkinnyBat30> , what version of ClearML are you using? Are you uploading datasets from the same machine also to GCS?