That's the theory, but I still see it is not there
Any chance you can zip the entire folder? I can't figure out what's missing, specifically the `from config_files` import, i.e. I have no package or file named config_files
Python 3.8? I can quickly check, give me a minute
Well, at this point I'm not sure it is still essential. We have 3 run modes (offline, local-server, cloud-server) and this option made it work for all of them... it could be that it is not required anymore and it's just legacy.
LOL, sure if you have so many setups, that makes sense 🙂
this is strange.. you ran it with the dataclass config I added?
Yes, but I had to remove the `from config_files import cfg` line
and instead used:
`@hydra.main(config_path="config_files", config_name="confi...`
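For reference, the decorator pattern being described is roughly this (a minimal sketch; the config file name below is a placeholder, since the original snippet is truncated):
```
import hydra
from omegaconf import DictConfig

# "config" is a placeholder name; config_files/ holds the dataclass-backed config
@hydra.main(config_path="config_files", config_name="config")
def main(cfg: DictConfig) -> None:
    print(cfg)

if __name__ == "__main__":
    main()
```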
Hi DepressedChimpanzee34
Why do you need to have the configuration added manually? Isn't the clearml.conf easier? If not, I think OS environment variables are easier, no? I ran the above code and everything worked with no exception/warning... What does the try/except solve exactly?
RipeGoose2 That sounds familiar. Could you test with the latest RC? `pip install trains==0.16.4rc0`
Sounds good to me. DepressedChimpanzee34 any chance you can add a GitHub feature request, so we do not forget to add it?
Hmm can you run the agent in debug mode, and check the specific console log?
```
clearml-agent --debug daemon --foreground ...
```
ThickFox50 I should also point out that there is a free hosted server here 🙂 https://app.community.clear.ml
Hi TightElk12
Are you looking for a way to set the output_uri from an environment variable? Is that it?
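In the meantime, something along these lines should work as a stopgap (a sketch; the environment variable name here is arbitrary, just for illustration):
```
import os
from clearml import Task

# MY_OUTPUT_URI is a hypothetical variable name; use whatever you export in your shell
task = Task.init(
    project_name="examples",
    task_name="env var output_uri",
    output_uri=os.environ.get("MY_OUTPUT_URI"),  # e.g. "s3://my-bucket/models"
)
```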
And if you could also update the docs with all the environment variables that can be set, it would be awesome!
Yes, I'll pass it on, that is a good point
Thanks! Yes, this could be great !
Could you please open a GitHub issue, so we remember to update the feature ?
Okay, I'm pretty sure there is a hack, let me see if there is something "nicer"
Hi PanickyMoth78 an RC with a fix is out, let me know if it works (notice you can now set the max_workers from the CLI or the Dataset functions): `pip install clearml==1.8.1rc1`
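For example, something like this should now be possible (a sketch, assuming the RC exposes the new `max_workers` argument on the Dataset upload call):
```
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset")
ds.add_files(path="./data")
ds.upload(max_workers=4)  # limit the number of parallel upload workers
ds.finalize()
```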
I want the task of human tagging a model to be “just another step in the pipeline”
That makes total sense.
Quick question, would you prefer the pipeline controller to "wait" for the tagging and then continue, or would it make more sense to create a trigger on the tagging ?
Based on your code snippet: `Logger.current_logger().report_confusion_matrix(title='confusion', series='confusion', matrix=confmat_tensor.cpu().numpy(), iteration=i)`
or `Task.current_task().get_logger()`, which is the same as `Logger.current_logger()`
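Both resolve to the same logger for the current task, e.g.:
```
from clearml import Task, Logger

# these two return the same logger instance
logger_a = Logger.current_logger()
logger_b = Task.current_task().get_logger()
logger_a.report_scalar(title="loss", series="train", value=0.1, iteration=1)
```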
LudicrousParrot69
I "think" I have a better handle on what you wish to do.
Is it kind of generic "serving" solution?
FYI:
A model artifact is usually a weights/model file. The idea is that later you will be able to access it and serve it. Now the problem is (and I think this is what you are referring to) that there is usually a specific piece of code tied to that model that can use it (a.k.a. pyfunc).
A few ideas:
These days everyone is trying to build their models with a generic interface, so that scik...
Hi TightElk12
One option would be to call `task.close()` at the end of each step and `Task.init()` at the beginning of the next.
Will that do?
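Roughly like this (a sketch; project/task names are just placeholders):
```
from clearml import Task

# step 1
task = Task.init(project_name="examples", task_name="step 1")
# ... do the work for step 1 ...
task.close()

# step 2
task = Task.init(project_name="examples", task_name="step 2")
# ... do the work for step 2 ...
task.close()
```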
JitteryCoyote63 that makes total sense!!
The reporting subprocess is not being updated with the new value! Let me check how we can pass it along...
After you call `task.set_initial_iteration(0)`, what do you get from `task.get_initial_iteration()`? Is it 0?
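i.e. a quick check:
```
task.set_initial_iteration(0)
print(task.get_initial_iteration())  # expecting 0 here
```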
pywin32 isn't in my requirements file,
CloudySwallow27 what's the OS/env?
(pywin32 is not in the direct requirements of the agent)
Hi JitteryCoyote63, I cannot reproduce it... when I call set_initial_iteration(0), it does what I'm expecting and resends the scalars. I tested with the ClearML Ignite example; any thoughts on how I can reproduce it?
My question is what happens if I launch multiple doit commands in parallel that create new Tasks.
Should work out of the box.
I would like to confirm that current_task ...
Correct.
Hmm so I guess the actual code adds it into the reporting itself ...
How about we call: `task.set_initial_iteration(0)`
Hi TightElk12
would like to understand the limitations of `Task.current_task()`
Basically this will always get you an instance of the current Task. This will work from sub-processes as well as the main process. Is there a specific scenario you have in mind, or a challenge with the use case ?
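For example, a minimal sketch (project/task names are placeholders):
```
from multiprocessing import Process
from clearml import Task

def worker():
    # Task.current_task() also resolves inside a sub-process
    task = Task.current_task()
    task.get_logger().report_text("hello from a sub-process")

if __name__ == "__main__":
    Task.init(project_name="examples", task_name="current_task demo")
    p = Process(target=worker)
    p.start()
    p.join()
```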
BTW: if you want to sync artifacts / settings between processes, I would recommend calling `task.reload()` to get the latest values back from the server.
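e.g.:
```
# pull the latest state from the server before reading artifacts / settings
task.reload()
print(task.artifacts)  # now reflects whatever was registered server-side
```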
Let me see if I can reproduce something