you're always running a single task at a time. The whole point is that everything is reported to the task (auto-magic bindings, console logs etc.), so there cannot be any ambiguity. You can close the current task (task.close()) and init a new one if you'd like, but you can't init several at the same time.
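For example, a minimal sketch of that pattern (project and task names are just placeholders):
```python
from clearml import Task

# first task - console logs, framework outputs etc. are auto-reported to it
task = Task.init(project_name="examples", task_name="step 1")
# ... first piece of work ...
task.close()  # close it before initializing another task in the same process

# now a second task can be initialized
task = Task.init(project_name="examples", task_name="step 2")
# ... second piece of work ...
task.close()
```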
@<1523701553372860416:profile|DrabOwl94> , I would suggest restarting the elastic container. If that doesn't help, check the ES folder permissions - maybe something changed
Hi @<1523701553372860416:profile|DrabOwl94> , can you check if there are some errors in the Elastic container?
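For reference, on a default docker-compose deployment of the server those checks usually look something like this (container name and data path are the compose defaults - adjust to your setup):
```bash
# check the Elasticsearch container logs for errors
docker logs --tail 200 clearml-elastic

# restart just the Elasticsearch container
docker restart clearml-elastic

# check ownership/permissions of the ES data folder (default location)
ls -ld /opt/clearml/data/elastic_7
# Elasticsearch in the compose file runs as uid 1000, so ownership is usually fixed with:
# sudo chown -R 1000:1000 /opt/clearml/data/elastic_7
```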
Having the latest versions is always a good practice
Just make sure you make regular backups
Did you check permissions?
Hi @<1617693654191706112:profile|HelpfulMosquito8> , can you please elaborate? Did you self-deploy on Redshift? I don't think it really matters. What is the workflow you're trying to achieve?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I'm guessing it's a self-deployed server. What version are you on? Did you ever see any errors/issues in mongodb/elastic?
Do you mean that ALL experiments are being deleted from all projects?
I think you would have to re-register it
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I believe a hotfix is right around the corner 🙂
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , not sure I understand this line
Is the order of --ids the same as the order of the returned rows?
Also, regarding the hash, I'd suggest opening a github feature request for this.
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the agent command that you run to your system's bootup sequence
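One rough sketch of doing that with cron (agent path and queue name are placeholders - use the output of `which clearml-agent` and your own queue):
```bash
# add a crontab entry so the agent starts on every boot
(crontab -l 2>/dev/null; echo "@reboot /usr/local/bin/clearml-agent daemon --queue default --detached") | crontab -
```
A systemd service works just as well if you prefer something more robust.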
Yes, links to data should all be in mongodb. Under the hood, datasets are a 'special' type of task, so you can just find that experiment and check the registered artifacts - they should contain the links to the data itself
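A small sketch of how to inspect that from Python (the dataset/task id is a placeholder):
```python
from clearml import Task

# a dataset is backed by a regular task, so fetch it by its id
dataset_task = Task.get_task(task_id="<dataset_id>")

# the registered artifacts hold the links to the actual data
for name, artifact in dataset_task.artifacts.items():
    print(name, artifact.url)
```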
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I don't think there are any login credentials for mongodb by default in the OS
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to set up your s3 key/secret in clearml.conf
I suggest following this documentation - None
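In case it helps, the relevant clearml.conf section usually looks roughly like this (all values are placeholders):
```
sdk {
    aws {
        s3 {
            # default credentials
            key: "<access_key>"
            secret: "<secret_key>"
            region: "<region>"
            credentials: [
                {
                    # optional per-bucket credentials
                    bucket: "<bucket_name>"
                    key: "<access_key>"
                    secret: "<secret_key>"
                }
            ]
        }
    }
}
```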
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , please try to keep your messages to a single thread and not spam the main channel on the same topic 🙂
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you can just leave the packages listed like any other package and add the --extra-index-url in the clearml.conf of the agent
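Something along these lines in the agent's clearml.conf (the URL is a placeholder):
```
agent {
    package_manager {
        # extra pip repositories the agent will use when installing packages
        extra_index_url: ["https://my.private.pypi.example.com/simple"]
    }
}
```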
@<1590514584836378624:profile|AmiableSeaturtle81> , I would suggest opening a github feature request then 🙂
It would work from your machine as well, but the machine needs to be turned on... just like an ec2 instance that is running.
The agent is basically a daemon process that sits on a machine and is capable of running jobs. You can set it up on any machine you would like, but the machine has to be turned on...
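For example, starting it on an always-on machine is typically just (queue name is a placeholder):
```bash
# pull and execute jobs from the "default" queue
clearml-agent daemon --queue default --detached

# or, to run every job inside a docker container
clearml-agent daemon --queue default --docker --detached
```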
But... if the machine is shut down, how do you expect the agent to run?
Hi @<1603560525352931328:profile|BeefyOwl35> , what do you mean set up the agent remotely? You run the agent on your machine and want the agent to run when it's shut down?
I think you should investigate what happens during docker-compose up to see why the services agent container isn't running
Hi @<1576381444509405184:profile|ManiacalLizard2> , please note the failure error: docker: Error response from daemon: pull access denied for new_docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
I think you need to first log in to the repository
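i.e. on the machine running the agent, something like (registry address is a placeholder):
```bash
docker login my.registry.example.com
```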
Hi @<1576381444509405184:profile|ManiacalLizard2> , it looks like the default setting is still false
Hi @<1564785037834981376:profile|FrustratingBee69> , you can simply run the agent without the --docker flag
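i.e. something like (queue name is a placeholder):
```bash
# runs jobs in virtualenv mode instead of inside a docker container
clearml-agent daemon --queue default
```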