JitteryCoyote63 try to add the prefix to the parameter name, e.g. instead of "artifact_name" use "Args/artifact_name"
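For context, a rough sketch of why the prefix matters (the task id and parameter name below are placeholders): connected arguments are namespaced by section, typically "Args", so the fully-qualified name carries that prefix.
```python
from clearml import Task

# "<task-id>" is a placeholder for the Task you are reading parameters from
task = Task.get_task(task_id="<task-id>")

# parameters are namespaced by section, so the full name is "Args/artifact_name"
artifact_name = task.get_parameter("Args/artifact_name")
print(artifact_name)
```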
MelancholyElk85 that looks great, let me see how quickly we can push it (I think 1.1.5 needs to be pushed very soon, I'll check if we can have it in before then 🙂)
Hi DashingHedgehong5
Is the text the labels on the histogram buckets?
Notice the xlabels argument, is this what you are looking for?
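Something like this minimal sketch (project/task names and the data are placeholders), assuming Logger.report_histogram with the xlabels argument:
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="histogram xlabels")
logger = task.get_logger()

# one text label per bucket, shown under the x axis
logger.report_histogram(
    title="bucket histogram",
    series="counts",
    values=[7, 3, 12, 5],
    iteration=0,
    xlabels=["cats", "dogs", "birds", "fish"],
)
```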
Hi RobustRat47
What do you mean by "log space for hyperparameter", what would be the difference? (Notice that on the graph itself you can switch to log scale when viewing in the UI)
Or are you referring to the hyperparameter optimization, allowing you to add log space?
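If it is the latter, a rough sketch of what that could look like, assuming LogUniformParameterRange is available in your clearml version (where min/max are exponents of the base); the base task id, metric names and queue are placeholders:
```python
from clearml.automation import (
    HyperParameterOptimizer,
    LogUniformParameterRange,
    UniformIntegerParameterRange,
    RandomSearch,
)

optimizer = HyperParameterOptimizer(
    base_task_id="<base-task-id>",
    hyper_parameters=[
        # exponents of 10, i.e. lr sampled in [1e-5, 1e-1]
        LogUniformParameterRange("Args/lr", min_value=-5, max_value=-1),
        UniformIntegerParameterRange("Args/batch_size", min_value=16, max_value=128, step_size=16),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=RandomSearch,
    max_number_of_concurrent_tasks=2,
    execution_queue="default",
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```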
Oh I see, these are to secure your server (basically we recommend you replace the default key/secret 🙂)
Make sense?
EnviousStarfish54 you can use Task.set_credentials
Notice that OS environment variables or trains.conf will override the programmatic credentials
https://allegro.ai/docs/task.html#trains.task.Task.set_credentials
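A minimal sketch (the hosts/keys are placeholders; depending on the trains/clearml version, the host arguments may be a single host parameter instead of api_host/web_host/files_host):
```python
from clearml import Task  # `from trains import Task` on older versions

# must be called before Task.init(); environment variables or an existing
# trains.conf / clearml.conf will still take precedence over these values
Task.set_credentials(
    api_host="https://api.your-server.example",
    web_host="https://app.your-server.example",
    files_host="https://files.your-server.example",
    key="<access-key>",
    secret="<secret-key>",
)

task = Task.init(project_name="examples", task_name="programmatic credentials")
```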
Hi JitteryCoyote63
If you want to stop the Task, click Abort (Reset will not stop the task or restart it, it will just clear the outputs and let you edit the Task itself).
I think we witnessed something like that due to DataLoader multiprocessing issues, and I think the solution was to add multiprocessing_context='forkserver' to the DataLoader: https://github.com/allegroai/clearml/issues/207#issuecomment-702422291
Could you verify?
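Something along these lines (the dataset here is just a placeholder to make the snippet self-contained):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# placeholder dataset
train_dataset = TensorDataset(torch.randn(128, 10), torch.randint(0, 2, (128,)))

train_loader = DataLoader(
    train_dataset,
    batch_size=32,
    num_workers=4,
    # the workaround from the linked issue: spawn workers via a forkserver context
    multiprocessing_context="forkserver",
)
```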
But a warning instead of an error would be good.
Yes, that makes sense, I'll make sure we do that
Does this sound like a reasonable workflow, or is there a better way maybe?
makes total sense to me, will be part of next RC 🙂
When is clearml-deploy coming to the open source release?
Currently available under clearml-serving (more features are being worked on, e.g. additional stats and backends)
https://github.com/allegroai/clearml-serving
What's interesting to me (as a ClearML newbie) is it's clearly compiling that wheel using my host machine (macOS).
Hmm kind of, and kind of not.
If you take a look at the Tasks created (regardless of how they are created: pipeline, manually, etc.), you have a list of python packages required by the code, as they are detected at runtime (i.e. when the code was first executed, on the development machine). When creating a Pipeline controller (runner), the pipeline Tasks are just lists, ...
ReassuredTiger98 maybe we should add an option to send a text next to the abort?
(Actually it is just a matter of passing the argument)
wdyt?
The issue itself is the name of the function (bottom line, it has to be unique for every call). So the only very ugly hack is to copy-paste the function X times?! 🙂
(I'll see if we can push the fix to GitHub sooner)
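If this is about decorator-based pipeline steps (an assumption on my part), the "copy the function" workaround would look roughly like this, with the same step logic duplicated under different names:
```python
from clearml.automation.controller import PipelineDecorator

# same logic, duplicated only so that each step has a unique function name
@PipelineDecorator.component(return_values=["data"])
def load_data_a(path):
    return open(path).read()

@PipelineDecorator.component(return_values=["data"])
def load_data_b(path):
    return open(path).read()
```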
named as venv_update (I believe it's still in beta). Do you think enabling this parameter significantly helps to build environments faster?
This is deprecated... it was a test to use a package that can update pip venvs, but it was never stable, we will remove it in the next version
Yes, I guess. Since pipelines are designed to be executed remotely it may be pointless to enable an output_uri parameter in the PipelineDecorator.componen...
If the same Task is run with different parameters...
ShinyWhale52 sorry, I kind of missed that in the explanation
The pipeline will always* create a new copy (clone) of the original Task (step), then modify the step's inputs etc.
The idea is that you have the experiment management (read: execution management) to create full transparency into the pipelines and steps. Think of it as the missing part in a lot of pipeline platforms where after you executed the pipeline you need to furthe...
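A minimal sketch of that clone-and-override flow, assuming a recent clearml version with PipelineController (project/task/queue names and parameter values are placeholders):
```python
from clearml.automation.controller import PipelineController

pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")
pipe.set_default_execution_queue("default")

# each step clones the referenced base Task, then overrides its inputs
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="train template",
    parameter_override={"Args/lr": 0.01, "Args/artifact_name": "dataset_v2"},
)

pipe.start()  # or pipe.start_locally() to debug the controller on this machine
```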
Shout-out to Emilio for quickly stumbling on this rare bug and letting us know. If you have a feeling your process is stuck on exit, just upgrade to 1.0.1 🙂
Hi ComfortableHorse5
Yes, this is more of a suggestion that you should write them using the platform capabilities. The UI implementation is being worked on, as well as a few helper classes; I think you'll be able to see a few in the next release 🙂
ClearML seems to store stuff that's relevant to script execution outside of clearml.Task
Outside of the clearml.Task?
Hi JitteryCoyote63 , is there a callback for that?
a task of queue B if the next task is of type A it will have to wait,
It seems you imply there are two types of Tasks and they need to be executed one after the other?
Hmm, you are correct
Which means this is some conda issue; basically, when installing from the env file, conda is not resolving the correct pytorch version.
Not sure why... Could you try to upgrade conda?
Amazing! 🙂
Let me know how we can help 🙂
Hmm, and you are getting an empty list for this one:
server_info['url'] = f"http://{server_info['hostname']}:{server_info['port']}/"
Notice the args will be set on the connect() call, so the check on whether they are empty should come after it.
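For example (project/task names and the parameter are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="connect example")

# defaults used when running locally
args = {"artifact_name": ""}

# when executed by the agent, values edited in the UI are injected here
task.connect(args)

# so any "is it empty?" check has to come after connect()
if not args["artifact_name"]:
    raise ValueError("artifact_name was not set")
```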