Does the other PC have the package locally there somewhere?
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , I see that there is also no force
flag for the SDK. Maybe open a GitHub feature request to add a force option, or to allow deleting archived published tasks.
Currently, you can delete those experiments through code using the API.
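As a rough sketch, the call would go through the ClearML APIClient. The payload builder below is just an illustration; the `tasks.delete` endpoint and its `force` field are how I recall the REST API, so double-check against your server's API docs:

```python
def delete_request_payload(task_id, force=True):
    # Body for a tasks.delete API call; `force` should be needed for
    # published tasks (field names are an assumption - verify in the API docs)
    return {"task": task_id, "force": force}

# Hypothetical usage (requires clearml installed and credentials configured):
#   from clearml.backend_api.session.client import APIClient
#   client = APIClient()
#   client.tasks.delete(**delete_request_payload("<task-id>"))
```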
By default these tasks are hidden. You can go into settings and show hidden projects
Is there a vital reason why you want to keep the two accounts separate when they run on the same machine?
Also, what if you try aligning all the cache folders for both configuration files to use the same folders?
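For reference, each `clearml.conf` lets you set the cache location explicitly, so pointing both configuration files at the same directory would look roughly like this (the path is just an example):

```
sdk {
    storage {
        cache {
            default_base_dir: "/shared/clearml-cache"
        }
    }
}
```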
Hi GiganticMole91 , what version of ClearML server are you using?
Also, can you take a look inside the elastic container to see if there are any errors there?
If you shared an experiment to a colleague in a different workspace, can't they just clone it?
UnevenDolphin73 , can you provide a small snippet of exactly what you were running? Are you certain you can see the task in the UI? Is it archived?
Hi BroadSeaturtle49 , can you please elaborate on what the issue is?
Were there any changes to your Elastic or your server in the past few days?
Does the issue occur for all users in the workspace?
I would suggest directly using the API for this. Then simply look at what the web UI sends as a reference 🙂
Hi @<1648134232087728128:profile|AlertFrog99> , I don't think there is an automatic way to do this out of the box but I guess you could write some automation that does that via the API
Hi @<1664079296102141952:profile|DangerousStarfish38> , it means that it's not supported out of the box and might require more tinkering, but I've managed to run the agent in docker mode on a Windows machine previously 🙂
Hi @<1570220852421595136:profile|TeenyHedgehog42> , the Docker images are stored together with all other images - this is managed by Docker, not ClearML
Hi ObedientToad56 , you can simply delete all of them since it's only cache. It's safe to delete cache 🙂
@<1523703961872240640:profile|CrookedWalrus33> , you can use the UI as reference. Open dev tools (F12) and see the network (filter by XHR).
For example, in scalars/plots/debug samples tabs the relevant calls seem to be:
events.get_task_single_value_metrics
events.scalar_metrics_iter_histogram
events.get_task_plots
events.get_task_metrics
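To replay one of those calls from a script, you'd hit the same endpoint the UI does. The helper below only builds the URL; the server address, request body, and auth scheme in the commented usage are assumptions - copy the exact body and headers you see in the dev tools network tab:

```python
def endpoint_url(api_server, service_call):
    # Build the URL for a ClearML API call, e.g. "events.get_task_plots"
    # (the server address passed in is hypothetical)
    return "{}/{}".format(api_server.rstrip("/"), service_call)

# Hypothetical usage with `requests`, mirroring what the web UI sends:
#   import requests
#   resp = requests.post(
#       endpoint_url("https://api.clear.ml", "events.get_task_plots"),
#       json={"tasks": ["<task-id>"]},  # copy the real body from dev tools
#       headers={"Authorization": "Bearer <token>"},  # auth depends on your setup
#   )
```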
Hi @<1686909730389233664:profile|AmiableSheep6> , I could suggest using the StorageManager module to pull specific files from S3.
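A minimal sketch of that StorageManager approach - the bucket and key names are hypothetical, and the actual download is left commented since it needs a configured environment:

```python
def s3_url(bucket, key):
    # Compose the s3:// URL that StorageManager expects
    return "s3://{}/{}".format(bucket, key)

# Hypothetical usage (requires clearml installed and S3 credentials configured):
#   from clearml import StorageManager
#   local_path = StorageManager.get_local_copy(
#       remote_url=s3_url("my-bucket", "data/sample.csv")
#   )
```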
There is no option to download specific files from a dataset. I would suggest breaking it into maybe smaller versions.
You would however need to pull the data locally for training anyways, wouldn't breaking it into smaller versions help this issue?
Hi @<1799974757064511488:profile|ResponsivePeacock56> , in that case I think you would need to actually migrate the files from the files server to S3 and then also update the artifact links stored in MongoDB.
RotundSquirrel78 , can you please check the webserver container logs to see if there are any errors?
Hi HarebrainedBaldeagle11 , not that I know of. Did you encounter any issues?
I think this is what you're looking for 🙂
https://clear.ml/docs/latest/docs/references/sdk/dataset#datasetlist_datasets
Sounds like some issue with queueing the experiment. Can you provide a log of the pipeline?
Hi SuperiorCockroach75 , yes you should be able to run it on a local setup as well 🙂
@Alex Finkelshtein, if the parameters you're using are like this:
parameters = {
    'float': 2.2,
    'string': 'my string',
}
Then you can update the parameters as mentioned before:
parameters = task.connect(parameters)
parameters['new_param'] = 'this is new'
parameters['float'] = '9.9'
Please note that parameters['float'] = '9.9' will update that specific parameter. I don't think you can update the parameters en masse...
DepressedChimpanzee34 you can try naming the connected configuration differently. Let me see if there is some other more elegant solution 🙂
Hi @<1562610703553007616:profile|CloudyCat50> , you can use Task.set_tags()
to 're-set' tags and omit the tag you want removed.
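A small sketch of that re-set approach - the filtering helper is mine, and the task ID, tag name, and `get_tags()` call in the commented usage are assumptions to verify against the SDK reference:

```python
def without_tag(tags, tag_to_remove):
    # Keep every existing tag except the one being removed
    return [t for t in tags if t != tag_to_remove]

# Hypothetical usage (requires a configured clearml environment):
#   from clearml import Task
#   task = Task.get_task(task_id="<task-id>")
#   task.set_tags(without_tag(task.get_tags(), "obsolete"))
```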
And what was the result from 19:15 yesterday? The 401 error? Please note that's a different set of credentials