Can you verify your ~/.clearml.conf has the proper configuration? If you run the following, does it work?
from clearml import Task
t = Task.init()
Hi! The status is in_progress from the backend's perspective. Please try it like that 🙂
Edit clearml.conf on the agent side and add the extra index url there - https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
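For example, a minimal sketch of the relevant section in clearml.conf (the index URL below is just a placeholder for your private repository):
agent {
    package_manager {
        # extra PyPI index URLs the agent passes to pip when installing packages
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}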
Hi @<1644147961996775424:profile|HurtStarfish47> , published is technically one step after finalized, similar to tasks
VexedCat68, can you give a small example?
I would ask the IT people managing your server to check server uptime and look for any errors in the apiserver log. Whoever is managing the server will know what to do.
Is the agent running on the same machine as the original code that didn't get any errors?
Do you see any errors in the dev tools console (F12)?
Also are there any errors in elastic?
Hi!
I believe you can stop and resume studies by adding these actions to your script:
Add save points via joblib.dump()
and connect them to clearml via clearml.model.OutputModel.connect()
Then, when you want to start or resume a study, load the latest study file via joblib.load() and connect it to ClearML with clearml.model.InputModel.connect()
This way you can stop your training sessions with the agent and resume them from nearly the same point - see the sketch below
I think all the required references are h...
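A minimal sketch of that flow, assuming an Optuna study (the file name and model ID are placeholders):
import joblib
import optuna
from clearml import Task
from clearml.model import InputModel, OutputModel

task = Task.init(project_name='examples', task_name='resumable study')

def objective(trial):
    x = trial.suggest_float('x', -10, 10)
    return (x - 2) ** 2

# save point: dump the study and register it as an output model on the task
study = optuna.create_study()
study.optimize(objective, n_trials=20)
joblib.dump(study, 'study.pkl')
OutputModel(task=task).update_weights(weights_filename='study.pkl')

# resume: fetch the stored study file and continue optimizing where it stopped
input_model = InputModel(model_id='<STUDY_MODEL_ID>')
input_model.connect(task)
study = joblib.load(input_model.get_weights())
study.optimize(objective, n_trials=20)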
Hi @<1670964687132430336:profile|SpicyFrog56> , can you please add the full log?
My bad, if you set auto_connect_streams to false, you basically disable the console logging... Please see the documentation:
auto_connect_streams (Union[bool, Mapping[str, bool]]) – Control the automatic logging of stdout and stderr.
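For example, a minimal sketch, assuming you want stdout captured but not stderr:
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='stream logging',
    # capture stdout in the console log, skip stderr and the logging module
    auto_connect_streams={'stdout': True, 'stderr': False, 'logging': False},
)
print('this line will show up in the console log')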
Can you add a full log from startup of both Elastic and apiserver containers?
Is it your own server installation or are you using the SaaS?
Hi @<1676762887223250944:profile|FancyKangaroo34> , it would be possible, for example, if the docker image has a different Python version from the one you ran on previously
Hi SuperiorCockroach75 , yes you should be able to run it on a local setup as well 🙂
DeliciousBluewhale87, Hi!
I think you can have models/artifacts automatically copied to a location if the experiment is initialized with output_uri
For example:
task = Task.init('examples', 'model test', output_uri='<DESTINATION_URI>')
What version of ClearML are you using? I'd suggest upgrading to the latest 🙂
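To make the destination concrete, a sketch with a hypothetical S3 bucket - every model/artifact the task produces is uploaded there automatically:
task = Task.init('examples', 'model test', output_uri='s3://my-bucket/models')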
Hi @<1529271085315395584:profile|AmusedCat74> , can you please provide the full log of the autoscaler?
Hi @<1645235137467650048:profile|WobblyHamster93> , what exactly are you trying to achieve?
Hi @<1523703961872240640:profile|CrookedWalrus33> , metrics are considered to be scalars, logs, plots, and the experiment objects themselves that are saved in the backend databases.
You must be reporting some very metric-heavy experiments 🙂
Sounds like an issue with your deployment. Did your Devops deploy this? How was it deployed?
Also, please try a different station from scratch. Again, I suspect something is misconfigured in your environment
How are you trying it programmatically? Are you providing API keys for authentication?
AbruptWorm50, that's strange. I'll take a look as well. What version of clearml are you using?
You can add scalars/plots manually with the following:
https://clear.ml/docs/latest/docs/references/sdk/logger#report_scalar
And you can see usage in the following example:
https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py
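For instance, a minimal sketch (the title, series, and values are just placeholders):
from clearml import Task

task = Task.init(project_name='examples', task_name='manual reporting')
logger = task.get_logger()
for iteration in range(10):
    # one point per iteration on the 'loss' series of the 'train' graph
    logger.report_scalar(title='train', series='loss', value=1.0 / (iteration + 1), iteration=iteration)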
You can also report various plots, as shown in the following examples:
https://github.com/allegroai/clearml/blob/master/examples/reporting/scatter_hist_confusion_mat_reporting.py
https://github.com/allegroai/clearml/blob/...
AgitatedDove41, what version of ClearML are you using?
Was the artifact very large by any chance, or could you have been having network issues at the time?
RotundSquirrel78, try going into localhost:8080/login
SubstantialElk6, Hi 🙂
In the UI, do you get ubuntu:20.04 as the docker container for the experiment?
Hi @<1797075640948625408:profile|MotionlessSeagull29> , you can get it with the following:
from clearml import Dataset

# fetch the dataset by its ID and print the project it belongs to
ds = Dataset.get(dataset_id="<SOME_ID>")
print(ds.project)
You can always use
dir(<PYTHON_OBJECT>)
to see its different attributes/methods
DefiantLobster38, please try the following - change verify_certificate to False
https://github.com/allegroai/clearml/blob/aa4e5ea7454e8f15b99bb2c77c4599fac2373c9d/docs/clearml.conf#L16
Tell me if it helps 🙂
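Something like this in the api section of clearml.conf (a sketch - only disable verification if you trust the server):
api {
    # skip SSL certificate verification for the ClearML server endpoints
    verify_certificate: false
}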
I think you might find this video helpful