Yeah I think using voxel for forensics makes sense. What's your use case?
StraightDog31 can you elaborate? Where are the parameters stored? Who is trying to access them, and maybe for what purpose?
That's the question I want to raise too.
No file size limit
Let me try to run it myself
GrievingTurkey78 I have to admit I can't see the difference, can you help me out? 🙂
The Task status changes to "completed" only after all artifact uploads have finished.
JitteryCoyote63 that seems like the correct behavior for your scenario
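For reference, a minimal sketch (assuming a task is running in this process; artifact name and value are arbitrary) that makes sure uploads finish before the task can complete:
from clearml import Task

task = Task.current_task()
# block on this artifact upload instead of uploading in the background
task.upload_artifact("results", artifact_object={"acc": 0.9}, wait_on_upload=True)
# wait for any remaining background uploads before the status can flip to "completed"
task.flush(wait_for_uploads=True)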
I have the agent configured to force install requirements.txt
what do you mean by that?
Could you verify you have 8 subfolders named 'venv.X' in the cache folder ~/.trains?
Follow up: I see that if I move an Experiment to a new project, it does not copy the associated model files; this must be done manually. Once I moved the models to the new project, the query works as expected.
Correct 🙂
Nice catch!
URLs that it was uploaded with, as that URL could change.
How would that change? The actual files are there.
UnsightlyShark53 Awesome, the RC is still not available on pip, but we should have it in a few days.
I'll keep you posted here :)
CrookedWalrus33 any chance you can think of a sample code to reproduce?
This feature is, however, available in the Enterprise version as HyperDatasets. Am I correct?
Correct
BTW you could do:
from clearml import Dataset

datasets_used = dict(dataset_id="83cfb45cfcbb4a8293ed9f14a2c562c0")
task.connect(datasets_used, name='datasets')
dataset_path = Dataset.get(dataset_id=datasets_used['dataset_id']).get_local_copy()
This will ensure that not only do you have a new section called "datasets" on the Task's configuration, but you will also be able to replace the dataset...
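And a hedged sketch of that replace flow (task IDs and queue name are placeholders): clone the Task and override the connected value before enqueueing:
from clearml import Task

template = Task.get_task(task_id='<template-task-id>')  # placeholder ID
cloned = Task.clone(source_task=template, name='run with another dataset')
# override the value connected under the 'datasets' section
cloned.set_parameters({'datasets/dataset_id': '<other-dataset-id>'})
Task.enqueue(cloned, queue_name='default')  # placeholder queue name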
Hi SpotlessWorm70
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program.
This seems like an OpenMP issue.
I would assume something is off with the local environment (not really connected to clearml but to one of the frameworks, for example TF, Keras, etc.)
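If it helps, the usual (unofficial) workaround for OMP Error #15 is to let the duplicate OpenMP runtimes coexist; Intel flags this as unsupported, so the proper fix is still cleaning up the environment:
import os

# must be set before importing the frameworks that load OpenMP (TF, PyTorch, etc.)
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'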
Are you sure you added the pytorch channel in clearml.conf ?
https://github.com/allegroai/clearml-agent/blob/822984301889327ae1a703ffdc56470ad006a951/docs/clearml.conf#L64
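For reference, the relevant setting in clearml.conf looks roughly like this (channel order here is just an example):
agent {
    package_manager {
        # conda channels the agent resolves packages from
        conda_channels: ["pytorch", "conda-forge", "defaults"]
    }
}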
Hi YummyMoth34, they will keep on trying to send reports.
I think they try for at least several hours.
What's the error you are getting ?
SubstantialElk6
The CA is taken automatically by urllib; check the OS environment variables you need to configure it:
https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error
SSL_CERT_FILE
REQUESTS_CA_BUNDLE
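For example (the path is a placeholder for your CA bundle):
export SSL_CERT_FILE=/path/to/ca-bundle.crt
export REQUESTS_CA_BUNDLE=/path/to/ca-bundle.crt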
Hi GreasyPenguin14
Yes, I think you are right the series name should be next to the title. Let me check it...
try:
docker_install_opencv_libs: true
I have a timeseries dataset with dimensions 1,60,1, where the first dimension is the number of data points and the second one is the timestep.
I think it should be --input-size 1 60 if the last dimension is the batch size?
(BTW: this goes directly to Triton configuration, it is the information Triton needs in order to run the model itself)
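If it helps, a sketch of how that is passed when registering the model with the clearml-serving CLI (service ID, endpoint, and tensor names are placeholders; flags as I recall them from clearml-serving, so double-check against your version):
clearml-serving --id <service-id> model add \
    --engine triton \
    --endpoint 'timeseries_model' \
    --input-size 1 60 --input-name 'input' --input-type float32 \
    --output-size 1 --output-name 'output' --output-type float32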
BeefyCow3 if you are trying to optimize a specific metric (i.e. a scalar on a graph), the template Task should report it with the same title/series combination, which should be easy enough to verify in the UI 🙂
You can either report with Tensorboard or with the Trains Logger, either way will work.
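For example, reporting the target scalar with a fixed title/series pair via the Logger (the names and values here are arbitrary):
from clearml import Task

task = Task.init(project_name='examples', task_name='template')
# the optimizer matches the objective by this exact title/series combination
task.get_logger().report_scalar(title='validation', series='accuracy', value=0.92, iteration=10)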
Hi GrievingTurkey78
Could you provide some more details on your use case, and what's expected?
Hi JitteryCoyote63
If you want to stop the Task, click Abort (Reset will not stop the task or restart it, it will just clear the outputs and let you edit the Task itself). I think we witnessed something like that due to DataLoader multiprocessing issues, and I think the solution was to add multiprocessing_context='forkserver' to the DataLoader:
https://github.com/allegroai/clearml/issues/207#issuecomment-702422291
Could you verify?
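For reference, a minimal sketch of that workaround (dummy dataset, sizes are arbitrary):
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
loader = DataLoader(
    dataset,
    batch_size=32,
    num_workers=4,
    multiprocessing_context='forkserver',  # avoids the fork-related hang
)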
My driver says "CUDA Version: 11.2" (I am not even sure this is correct, since I do not remember installing CUDA on this machine, but idk) and there is no PyTorch build for 11.2, so maybe it falls back to CPU?
For some reason it detects CUDA 11.1 (I assume this is what you have installed; the driver's CUDA version is the highest it will support, not necessarily what you have installed).
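If needed, you can also force the version the agent uses instead of relying on detection; a sketch in clearml.conf (assuming the standard agent section):
agent {
    # force the CUDA version the agent resolves packages against
    cuda_version: "11.1"
}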
Notice that the Task has to have already been started by the master process.
the storage configuration appears to have changed quite a bit.
Yes, I think this is part of the cloud-ready effort.
I think you can find the definitions here:
https://artifacthub.io/packages/helm/allegroai/clearml
Hi RipeGoose2
What exactly is being uploaded ? Are those the actual model weights or intermediate files ?
Oh that is odd... let me check something