Hi @<1610083503607648256:profile|DiminutiveToad80> ! You need to somehow serialize the object. Note that we try different serialization methods and default to pickle if none work. If pickle doesn't work, then the artifact can't be uploaded by default. But there is a way around it: you can serialize the object yourself. The recommended way to do this is using the serialization_function argument in upload_artifact . You could try using something like dill , which can serialize more objects than pickle.
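A rough sketch of what that could look like (the project/task/artifact names are placeholders, and dill is just one option for the custom serializer):

```python
import dill
from clearml import Task

task = Task.init(project_name="examples", task_name="custom artifact serialization")

# A lambda is a typical example of an object the default pickle path can't handle
my_object = lambda x: x * 2

task.upload_artifact(
    name="my_artifact",
    artifact_object=my_object,
    serialization_function=dill.dumps,  # returns the serialized bytes
)
```

When retrieving the artifact you will likely need to supply a matching deserialization function (e.g. dill.loads ), depending on your SDK version.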
The only exception is models, if I'm not mistaken, which are stored locally by default.
UnevenDolphin73 looking at the code again, I think it is actually correct. It's a bit hackish, but we do use deferred_init as an int internally. Why do you need to close the task exactly? Do you have a script that would highlight the behaviour change between <1.8.1 and >=1.8.1 ?
No problem. We will soon release an RC that solves both issues.
I think I understand. In general, if your communication worked without clearml, it should also work when using clearml.
But you won't be able to upload an artifact to the shared memory using None , for example. Same thing for debug samples etc.
Hi @<1657918706052763648:profile|SillyRobin38> ! If it is compatible with http/rest, you could try setting api.files_server to the endpoint or sdk.storage.default_output_uri in clearml.conf (depending on your use-case).
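For reference, a sketch of what the clearml.conf entry could look like (the endpoint URL is a placeholder):

```
api {
    # point the files server at your http/rest-compatible storage endpoint
    files_server: "http://my-storage-endpoint:8081"
}
```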
Hi @<1566596968673710080:profile|QuaintRobin7> ! Sometimes, ClearML is not capable of transforming matplotlib plots to plotly , so we report the plot as an image to Debug Samples. It looks like report_interactive=True makes the plot unparsable.
Basically, I think that the pipeline run starts from __main__ and not the pipeline function, which causes the file to be read.
Hi FlutteringWorm14 ! Looks like we indeed don't wait for report_period_sec when reporting data. We will fix this in a future release. Thank you!
Hi @<1523702652678967296:profile|DeliciousKoala34> ! Looks like this is a bug in set_metadata . The model ID is not set, and set_metadata doesn't set it automatically. I would first upload the model file, then set the meta-data to avoid this bug. You can call update_weights to do that. None
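A rough sketch of that order of operations (the file name and metadata key/value are placeholders):

```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="model metadata")

output_model = OutputModel(task=task)
# Upload the weights first so the model entry (and its ID) exists...
output_model.update_weights(weights_filename="model.pt")
# ...then attach the metadata
output_model.set_metadata("my_key", "my_value")
```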
Hi DangerousDragonfly8 ! The file is there to test the upload to the bucket, as the name suggests. I don't think deleting it is a problem, and we will likely do that automatically in a future version
DeliciousKoala34 can you upgrade to clearml==1.8.0 ? The issue should be fixed now.
QuaintJellyfish58 We will release an RC later today that adds the region to boto_kwargs . We will ping you when it's ready so you can try it out.
Hi @<1558986821491232768:profile|FunnyAlligator17> ! There are a few things you should consider (see the sketch below):
- Artifacts are not necessarily pickles. The objects you upload as artifacts can be serialized in a variety of ways. Our artifacts manager handles both serialization and deserialization. Because of this, you should not pickle the objects yourself, but specify artifact_object as being the object itself.
- To get the deserialized artifact, just call task.artifacts[name].get() (not get_local...
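A minimal sketch of that flow (project/task/artifact names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact example")

my_dict = {"a": 1, "b": 2}
# Pass the object itself; ClearML picks a serialization method for it
task.upload_artifact(name="my_dict", artifact_object=my_dict)

# Later, from the same task object (or one fetched with Task.get_task):
restored = task.artifacts["my_dict"].get()  # the deserialized Python object
```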
Hi @<1545216070686609408:profile|EnthusiasticCow4> ! This is a known bug, we will likely fix it in the next version
@<1719524641879363584:profile|ThankfulClams64> you could try using the compare feature in the UI to compare the experiments run on the machine where the scalars are not reported properly with the experiments run on a machine that reports them properly. I would then suggest replicating the working environment exactly on the problematic machine. None
Hi @<1523701132025663488:profile|SlimyElephant79> ! Looks like this is a bug on our part. We will fix this as soon as possible
Hi @<1547752791546531840:profile|BeefyFrog17> ! Are you getting any exception trace when you are trying to upload your artifact?
Hi @<1719524641879363584:profile|ThankfulClams64> ! What tensorflow/keras version are you using? I noticed that in the TensorBoardImage you are using tf.Summary , which no longer exists since tensorflow 2.2.3 , which I believe is too old to work with tensorboard==2.16.2.
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your prog...
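If you do want to do that, a rough sketch (assuming Task.set_initial_iteration is available in your SDK version; the offset value is a placeholder):

```python
from clearml import Task

# continue_last_task resumes reporting into the previous task
task = Task.init(project_name="examples", task_name="training", continue_last_task=True)

# Hypothetical offset: the last iteration the previous run reported
task.set_initial_iteration(1000)
```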
Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...
Also, do you need to close the task? It will close automatically when the program exits
Hi RoundMole15 ! Are you able to see a model logged when you run this simple example?
` from clearml import Task
import torch.nn.functional as F
import torch.nn as nn
import torch


class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        s...
Hi @<1628202899001577472:profile|SkinnyKitten28> ! What code do you see that is being captured?
Hi @<1603198163143888896:profile|LonelyKangaroo55> ! Each pipeline component runs in a task. So you first need the IDs of each component you want to query. Then you can use Task.get_task None to get the task object, then you can use Task.get_status to get the status None .
To get the IDs, you can use something like [None](https://clear.ml/docs/...
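As a rough sketch, once you have a component's task ID (the ID below is a placeholder):

```python
from clearml import Task

component_task_id = "abc123"  # hypothetical ID of one pipeline component
component_task = Task.get_task(task_id=component_task_id)
print(component_task.get_status())  # e.g. "completed", "failed", "in_progress"
```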
Hi @<1626028578648887296:profile|FreshFly37> ! You could try getting the version via user properties as well: None .
So, something like p._task.get_user_properties().get("version")
That makes sense. You should generally have only 1 task (initialized in the master process). The other subprocesses will inherit this task, which should speed things up.
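A rough sketch of that setup (assuming a fork-based start method, and that Task.current_task() returns the inherited task in the child processes):

```python
from multiprocessing import Process
from clearml import Task


def worker(i):
    # The subprocess inherits the task created in the master process
    task = Task.current_task()
    task.get_logger().report_scalar("workers", f"p{i}", value=i, iteration=0)


if __name__ == "__main__":
    Task.init(project_name="examples", task_name="multiprocess example")
    procs = [Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```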
Hi @<1719162259181146112:profile|ShakySnake40> ! It looks like you are trying to update an already finalized dataset. Datasets that are finalized cannot be updated. In general, you should create a new dataset that inherits from the dataset you want to update (via the parent_datasets argument in Dataset.create ) and operate on that dataset instead
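A quick sketch of that pattern (names and the parent dataset ID are placeholders):

```python
from clearml import Dataset

parent = Dataset.get(dataset_id="parent_dataset_id")  # the finalized dataset

child = Dataset.create(
    dataset_name="my_dataset_v2",
    dataset_project="examples",
    parent_datasets=[parent.id],
)
child.add_files("path/to/new_or_changed_files")
child.upload()
child.finalize()
```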
Hi @<1587615463670550528:profile|DepravedDolphin12> ! get() should indeed return a python object. What clearml version are you using? Also, can you share the code?
Hi @<1803598647749775360:profile|VividSpider84> ! Thank you for reporting, we were able to reproduce the issue. We will fix it in the next version
Hi RoundMosquito25 ! What clearml version are you using? Do you get any error messages when you are setting floats instead of strings?