This only affects single files. If you wish to add directories (with wildcards as well), you should be able to
We used to have "<=20" as the default pip version in the agent. Looks like this default value still exists on your machine. But that version of pip doesn't know how to install your version of pytorch...
Hi @<1691620883078057984:profile|ConfusedSeaanemone5> ! Those are the only 3 charts that the HPO constructs and reports. You could construct other charts/plots yourself and report them when a job completes using the job_completed_callback parameter.
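Something like this could work as a starting point (a rough sketch — I'm assuming the callback receives the completed job's task id, so double-check the exact signature in the HyperParameterOptimizer docs; the metric names here are made up):

```python
def on_job_completed(job_id):
    # Hypothetical sketch: fetch the finished task and report a custom chart
    # from the controller. Lazy import: clearml is only needed at runtime.
    from clearml import Logger, Task

    task = Task.get_task(task_id=job_id)
    metrics = task.get_last_scalar_metrics()
    # "validation"/"loss" are placeholder metric names
    val_loss = metrics.get("validation", {}).get("loss", {}).get("last")
    Logger.current_logger().report_scalar(
        title="custom HPO chart",
        series="final val loss",
        value=val_loss if val_loss is not None else float("nan"),
        iteration=0,
    )
```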
One more question, FierceHamster54: what Python/OS/clearml version are you using?
@<1566596968673710080:profile|QuaintRobin7> not for now. Could you please open a GH issue about it? Maybe we can fit this in a future patch.
Hi @<1694157594333024256:profile|DisturbedParrot38> ! We weren't able to reproduce, but you could find the source of the warning by appending the following code at the top of your script:
import traceback
import warnings
import sys

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    log = file if hasattr(file, "write") else sys.stderr
    traceback.print_stack(file=log)
    log.write(warnings.formatwarning(message, category, filename, lineno, line))

warnings.showwarning = warn_with_traceback
I meant the code where you upload an artifact, sorry
Hi @<1864479785686667264:profile|GrittyAnt2> ! For OS datasets, this is currently not supported unfortunately
Hi @<1765547897220239360:profile|FranticShark20> ! Do you have any other logs that could help us debug this, such as tritonserver logs?
Also, can you use model.onnx as the model file name both in the upload and in default_model_filename, just to make sure this is not a file extension problem? (This can happen with Triton.)
Hi @<1555000557775622144:profile|CharmingSealion31> ! When creating the HyperParameterOptimizer, pass the argument optuna_sampler=YOUR_SAMPLER .
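For example (a sketch; the base task id, metric names and parameter range are placeholders, and I'm assuming optuna is installed):

```python
def make_optimizer(base_task_id):
    # Lazy imports: optuna and clearml are only needed when running the HPO.
    import optuna
    from clearml.automation import HyperParameterOptimizer, UniformParameterRange
    from clearml.automation.optuna import OptimizerOptuna

    return HyperParameterOptimizer(
        base_task_id=base_task_id,              # placeholder template task
        hyper_parameters=[UniformParameterRange("General/lr", 1e-5, 1e-1)],
        objective_metric_title="validation",    # placeholder metric
        objective_metric_series="loss",
        objective_metric_sign="min",
        optimizer_class=OptimizerOptuna,
        # forwarded to the Optuna optimizer, as described above
        optuna_sampler=optuna.samplers.TPESampler(seed=42),
    )
```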
Hi @<1523715429694967808:profile|ThickCrow29> ! What do you think of this behavior when using pipelines from decorators: suppose we have the following controller:
a = step_1() # step_1 gets aborted/failed
b = step_2(a)
c = step_3()
In this case, if abort_on_failure is set to False, then step_2 will be skipped.
If the controller uses a, doing something like:
a = step_1() # step_1 gets aborted/failed
print(a)
then an exception will be thrown. step_3 will run...
Hi @<1571308010351890432:profile|HurtAnt92> ! Yes, you can create intermediate datasets. Just batch your datasets, for each batch create new child datasets, then create a dataset that has as parents all of these resulting children.
I'm surprised you get OOM though; we don't load the files into memory, just the name/path of the files + size, hash, etc. Could there be some other factor causing this issue?
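The batching could look roughly like this (a sketch; the project and dataset names are placeholders):

```python
def batch_ids(dataset_ids, batch_size):
    # Split a long list of dataset ids into fixed-size batches.
    return [dataset_ids[i:i + batch_size]
            for i in range(0, len(dataset_ids), batch_size)]

def merge_datasets(dataset_ids, project, batch_size=16):
    # Lazy import: clearml is only needed when actually creating datasets.
    from clearml import Dataset

    # create one intermediate child dataset per batch of parents
    children = []
    for i, batch in enumerate(batch_ids(dataset_ids, batch_size)):
        child = Dataset.create(
            dataset_project=project,
            dataset_name=f"intermediate_{i}",   # placeholder name
            parent_datasets=batch,
        )
        child.finalize()
        children.append(child.id)
    # the final dataset has all the intermediate children as parents
    final = Dataset.create(dataset_project=project, dataset_name="merged",
                           parent_datasets=children)
    final.finalize()
    return final.id
```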
@<1643060801088524288:profile|HarebrainedOstrich43> we released 1.14.1 as an official version
Hi EnergeticGoose10. This is a bug we are aware of. We have already prepared a fix and will release it ASAP.
Hi DeliciousKoala34. I was able to reproduce your issue. I'm now looking for a solution to your problem. Thank you
ClearML does not officially support spawning more tasks from a remotely executed task; we do that through pipelines, if that helps you somehow. Note that doing things the way you do them right now might break some other functionality.
Anyway, I will talk with the team and maybe change this behaviour because it should be easy 👍
Hi DilapidatedDucks58 ! Browsers display double spaces as a single space by default. This is a common problem. What we could do is add a copy to clipboard button (it would copy the text properly). What do you think?
Regarding number 2, that is indeed a bug and we will try to fix it as soon as possible.
Hi @<1817731756720132096:profile|WickedWhale51> ! ClearML is tolerant to network failures. Anyway, if you wish to upload the offline data periodically, you could zip the offline mode folder and import it:
from zipfile import ZIP_DEFLATED, ZipFile
from clearml import Task

# make sure the state of the offline data is saved
Task.current_task()._edit()
# create zip file
offline_folder = Task.current_task().get_offline_mode_folder()
zip_file = offline_folder.as_posix() + ".zip"
with ZipFile(zip_file, "w", allowZip64=True, compression=ZIP_DEFLATED) as zf:
    for f in offline_folder.rglob("*"):
        zf.write(f.as_posix(), arcname=f.relative_to(offline_folder).as_posix())
# the zip can later be imported with Task.import_offline_session(zip_file)
Hi @<1654294828365647872:profile|GorgeousShrimp11> ! add_tags is an instance method, so you will need the controller instance to call it. To get the controller instance, you can do PipelineDecorator.get_current_pipeline() then call add_tags on the returned value. So: PipelineDecorator.get_current_pipeline().add_tags(tags=["tag1", "tag2"])
Hi @<1523701713440083968:profile|PanickyMoth78> ! Make sure you are calling Task.init in my_function (this is because the bindings made by clearml will be lost in a spawned process, as opposed to a forked one). Also make sure that, in the spawned process, you have the CLEARML_PROC_MASTER_ID env var set to the pid of the master process and CLEARML_TASK_ID set to the ID of the task initialized in the master process (this should happen automatically).
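Roughly like this (a sketch; my_function stands for whatever you run in the spawned process):

```python
import os

def my_function():
    # Re-initialize clearml inside the spawned process: the bindings made by
    # the parent are lost when spawning (as opposed to forking).
    from clearml import Task  # lazy import, only needed at runtime
    task = Task.init()
    # ... the rest of the spawned process's work goes here ...

# For the child to attach to the master's task, these must be set in its
# environment (clearml normally does this automatically):
#   CLEARML_PROC_MASTER_ID = pid of the master process
#   CLEARML_TASK_ID        = id of the task initialized in the master
```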
Each step is a separate task, with its own separate logger. You will not be able to reuse the same logger. Instead, you should get the logger in the step where you want to use it, by calling current_logger .
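For example, inside a step function (the metric names here are made up):

```python
def some_step():
    # Get this step's own logger from within the step's process.
    from clearml import Logger  # lazy import, only needed at runtime
    logger = Logger.current_logger()
    logger.report_scalar(title="metrics", series="loss",
                         value=0.05, iteration=0)
```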
Hi ExasperatedCrocodile76 ! I noticed you are trying to install 'torch==1.10.0+cu113':
Conda: Installing requirements: step 2 - using pip:
['Send2Trash==1.8.0', 'clearml==1.7.2', 'detectron2==0.6+cu113', 'fvcore==0.1.5.post20220512', 'imgaug==0.4.0', 'numpy==1.23.4', 'omegaconf==2.2.3', 'open3d==0.15.2', 'opencv_python==4.6.0.66', 'pycocotools==2.0.5', 'pytest==7.1.3', 'scikit_learn==1.1.2', 'scipy==1.9.2', 'tensorboard==2.10.1', 'torch==1.10.0+cu113', 'torch_cluster==1.6.0', 'torchvision==0...
Hi @<1717350332247314432:profile|WittySeal70> ! The pre_execute_callback runs before the task is even created. For better control, I recommend using status_change_callback
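A sketch of what such a callback could look like — I'm assuming the (pipeline, node, previous_status) signature used by pipeline step callbacks, so double-check it against the docs:

```python
def on_status_change(pipeline, node, previous_status):
    # Called whenever the step's underlying task changes status, so the task
    # already exists at this point (unlike with pre_execute_callback).
    msg = f"step '{node.name}' moved from status '{previous_status}'"
    print(msg)
    return msg

# pass it when adding the step, e.g. status_change_callback=on_status_change
```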
Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...