You can fetch the task object via the SDK and inspect task.data, or run dir(task) to see what else is inside.
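For example, a minimal sketch (the task ID is a placeholder):
```python
from clearml import Task

# Fetch an existing task by its ID ("abc123" is a placeholder)
task = Task.get_task(task_id="abc123")

# task.data holds the raw backend task object
print(task.data)

# dir() lists everything else available on the task object
print(dir(task))
```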
You can also fetch it via the API using tasks.get_by_id
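Something along these lines should work with the Python APIClient (the ID is a placeholder, and the exact response shape may differ slightly):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# tasks.get_by_id fetches a single task by its ID
task = client.tasks.get_by_id(task="abc123")
print(task.name, task.status)
```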
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , HyperDatasets are built mainly for unstructured data, since that problem is harder, but all of the features can also be applied to tabular data. Is there something specific you're looking for?
Hi @<1523703397830627328:profile|CrookedMonkey33> , not sure I follow. Can you please elaborate more on the specific use case?
Currently you can add plots to the preview section of a dataset
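For example, a rough sketch using the dataset's logger (the file name and project are placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
ds.add_files("data.csv")  # placeholder local file

# Anything reported via the dataset's logger shows up in its preview section
ds.get_logger().report_table(title="Data head", series="preview", csv="data.csv")

ds.upload()
ds.finalize()
```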
Hi @<1603198153677344768:profile|ExasperatedSeaurchin40> , I think this is what you're looking for - None
What about tasks.get_all, where you specify the ID of the task you want as well:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
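A quick sketch of that call via the APIClient (placeholder ID, response fields assumed):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# 'id' accepts a list, so you can fetch several tasks in one call
tasks = client.tasks.get_all(id=["abc123"])
for t in tasks:
    print(t.id, t.name, t.status)
```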
It looks like you're on a self-hosted server. The community server is app.clear.ml, where you can just sign up and don't have to maintain your own server 🙂
The communication is done via HTTPS so relevant ports should be open.
Did you try with a hotspot connection from your phone?
Hi @<1541592204353474560:profile|GhastlySeaurchin98> , how are you running the experiments - which type of machines - local or cloud? Are you running your own server or using the community?
Hi @<1523701842515595264:profile|PleasantOwl46> , you can use users.get_all to fetch them - None
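For example, a minimal sketch via the APIClient:
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# Returns the list of users registered in the system
users = client.users.get_all()
for user in users:
    print(user.id, user.name)
```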
Hi @<1523711002288328704:profile|YummyLion54> , can you please add a full log of both runs for reference?
GreasyPenguin14 Hi!
If I understand you correctly, you would have to change the models' URLs yourself so they no longer point to the now-downed instances.
You can also set sdk.development.default_output_uri: "SOME_URL" in your ~/clearml.conf to send the models anywhere you want them to go from the get-go 🙂
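For reference, this is roughly how that block might look in ~/clearml.conf (the bucket URI is a placeholder):
```
sdk {
    development {
        # all models/artifacts are uploaded here by default
        default_output_uri: "s3://my-bucket/clearml-models"
    }
}
```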
Is that helpful?
If there is a change in the code (not just the script itself, but a different commit or different uncommitted changes in the repo). Makes sense?
Hi @<1623491856241266688:profile|TenseCrab59> , are you self-deployed? Can you provide some logs/screenshots? If you go directly into the task information of each step, is the console empty?
Where is the error?
Hi @<1717350332247314432:profile|WittySeal70> , are you using a self hosted server or the community?
TrickyRaccoon92, Hi!
Yes, I believe this is the intended behavior: automatic upload can produce many artifacts during a single run, whereas with a manual upload you create the object yourself.
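For reference, a minimal sketch of the manual flow (project/task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")
# Manual upload: you explicitly create and name the artifact object yourself
task.upload_artifact(name="stats", artifact_object={"accuracy": 0.9})
```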
Hi NervousFrog58, version 1.1.1 seems quite old. I would suggest upgrading your server. Please note that there have been a couple of DB migrations since then, so make sure to follow all the steps 🙂
My bad, I should have asked you to check the Network tab as well to see if anything returns errors.
Hi @<1612982606469533696:profile|ZealousFlamingo93> , I'm not sure I understand. You're trying to run the autoscaler, so how is the clearml-agent connected to this?
Think of it this way. You have the pipeline controller which is the 'special' task that manages the logic. Then you have the pipeline steps. Both the controller and the steps need some agent to execute them. So you need an agent to execute the controller and also you need another agent to run the steps themselves.
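For example, assuming the controller goes to the 'services' queue and the steps to 'default' (those queue names are just the common defaults):
```
# one agent to execute the pipeline controller
clearml-agent daemon --queue services --detached

# another agent to execute the pipeline steps
clearml-agent daemon --queue default --detached
```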
I would suggest clicking on 'task_one' and going into its full details. My guess is that it's in the 'enqueued' state, probably in the 'default' queue.
Hi @<1670964687132430336:profile|SpicyFrog56> , can you please add the full log?
Are you using the community server, or the open source version with self-hosting?
AbruptWorm50, the guys tell me it's in progress and we should have an update in the next few minutes 🙂
Hi @<1533619716533260288:profile|SmallPigeon24> , is it possible you're selecting multiple experiments? Or maybe there were two initial steps that were aborted? How does your pipeline look in the UI and do you have something that reproduces that?
CluelessElephant89, the relevant command should be something of the sort: sudo docker logs clearml-apiserver
Is there a way to lower the needed credentials for specific actions such as run, stop, and start instances, etc.? For example: restricting it to work only with conditions on a specific subnet, security group, and instance types? (I was trying to do it, but as I said it failed with this message:
Can you elaborate on the specific configuration?
Hi DepravedCoyote18, as long as you have everything (configurations and data) backed up from /opt/clearml/ (I think this is the default folder for storing ClearML-related stuff), the server migration should work (data is a different issue).
However, ClearML holds internal links for datasets, debug samples, artifacts, and maybe a few other outputs. Everything currently logged in the system to a certain MinIO server will still point to that MinIO server.
Does that make sense?
Hi @<1577468611524562944:profile|MagnificentBear85> , can you please elaborate a bit on how exactly you want to structure this?
AbruptWorm50 , can you confirm it works for you as well?
Hi @<1523703572984762368:profile|SlimyDove85> , conceptually I think it's possible. However, what would be the use case? In the end it would all be abstracted into a single pipeline.