Hi DeliciousKoala34 , is there also an exceptionally large number of files in that Dataset? How do you create the dataset? What happens if you use something like S3, if you have it available?
CluelessElephant89 , I think the RAM requirement for Elastic might be 2 GB. You can try the following hack, and maybe it will work.
On the machine it's running on there should be a docker-compose.yml file (I'm guessing in the home directory).
In the following https://github.com/allegroai/clearml-server/blob/master/docker/docker-compose.yml#L41 you can try changing it to `ES_JAVA_OPTS: -Xms1g -Xmx1g` , which might limit the Elastic memory to 1 GB; however, please note this might ...
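For reference, the relevant part of the compose file would look something like this (a sketch only; the exact service definition differs between server versions, so check your own docker-compose.yml):

```yaml
services:
  elasticsearch:
    environment:
      # Cap the Elastic JVM heap at 1 GB (both min and max)
      ES_JAVA_OPTS: -Xms1g -Xmx1g
```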
Hmmm, but what should the default task state be? What is the use case, by the way?
Hi EnviousPanda91 , I'm not quite sure what you want to extract but you can extract everything from the UI using the API. The docs can be found here: https://clear.ml/docs/latest/docs/references/api/events
And for the best reference - You can open developer tools in the UI and see how the requests are handled there 🙂
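As a rough sketch of calling one of those endpoints directly (the endpoint name `events.get_task_log` and its fields follow the API reference linked above; the server URL, token, and task ID are placeholders):

```python
import json
import urllib.request


def get_task_log(api_server: str, token: str, task_id: str) -> dict:
    """Sketch: POST to the events.get_task_log endpoint of the ClearML
    apiserver. URL, token, and task_id are placeholders."""
    payload = json.dumps({"task": task_id, "navigate_earlier": True}).encode()
    req = urllib.request.Request(
        url=f"{api_server}/events.get_task_log",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The request/response shapes you see in the browser's network tab should map directly onto a call like this.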
Hi ShaggySquirrel23 , is this package inside some artifactory?
The agent needs access to the package while running, so you need to make it accessible on the remote machine as well. What is your setup?
In that case I suggest you write some basic code that will aggregate and compute those values for you, for comparison.
`connected_config = task.connect({})`
Looks like you're connecting an empty config.
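For contrast, a minimal sketch of connecting a non-empty configuration (the parameter names and values here are hypothetical; `task` is an already-initialized Task object):

```python
def connect_config(task, params=None):
    """Sketch: connect a non-empty config dict so the values show up
    (and are editable) in the UI. Keys/values here are hypothetical."""
    params = params or {"learning_rate": 0.001, "batch_size": 32}
    return task.connect(params)
```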
Hi @<1856869640882360320:profile|TriteCoral46> , you can add custom columns in the webUI and filter/sort according to them. The webUI uses the API in order to get this data from the apiserver, so you can use the webUI to generate whatever filtering you want and then implement it via the API/SDK, depending on what you want to create.
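For example, a sketch of fetching tasks with a filter via the SDK (this assumes the clearml package and a configured server; the filter keys follow the `task_filter` argument of `Task.get_tasks`, and the project/status values are placeholders):

```python
def find_recent_tasks(project_name, statuses=("completed",)):
    """Sketch: fetch tasks filtered/sorted the same way a custom webUI
    column would. Requires the clearml package and a configured server."""
    from clearml import Task  # imported lazily so the sketch stays standalone

    return Task.get_tasks(
        project_name=project_name,
        task_filter={
            "status": list(statuses),      # e.g. ["completed", "published"]
            "order_by": ["-last_update"],  # newest first
        },
    )
```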
Hi @<1531807732334596096:profile|ObliviousClams17> , I think for your specific use case it would be easiest to use the API - fetch a task, clone it as many times as needed and enqueue it into the relevant queues.
Fetch a task - None
Clone a task - None
Enqueue a task (or many) - [None](https://clear.ml/docs/latest/docs/references/api/ta...
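Putting those three calls together, a sketch might look like this (assuming the clearml SDK and a configured server; `Task.get_task`, `Task.clone`, and `Task.enqueue` are the relevant helpers, and the IDs/queue names are placeholders):

```python
def clone_and_enqueue(template_task_id, queue_names):
    """Sketch: fetch a task, clone it once per queue, and enqueue each
    clone into its queue. IDs and queue names are placeholders."""
    from clearml import Task  # imported lazily so the sketch stays standalone

    template = Task.get_task(task_id=template_task_id)  # fetch
    clones = []
    for queue in queue_names:
        cloned = Task.clone(                            # clone
            source_task=template,
            name=f"{template.name} ({queue})",
        )
        Task.enqueue(cloned, queue_name=queue)          # enqueue
        clones.append(cloned)
    return clones
```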
Hi, I think this is the default behavior, but you can probably edit the source code (the output_uri parameter of Task.init would be a good lead).
In what format would you like it saved?
And if you switch back to 1.1.2 in the setup where 1.1.1 worked, does it still fail?
the question how does ClearML know to create env and what files does it copy to the task
Either by automatically detecting the packages from requirements.txt, or by using the packages listed on the task itself.
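If you want to override the automatic detection, a sketch would be (assuming the clearml SDK; the project, task, and package names here are placeholders):

```python
def init_with_explicit_requirements():
    """Sketch: pin an explicit requirement instead of relying on
    auto-detection. Requires the clearml package and a configured server."""
    from clearml import Task  # imported lazily so the sketch stays standalone

    # Must be called before Task.init for the requirement to be picked up
    Task.add_requirements("torch", package_version=">=2.0")
    return Task.init(project_name="examples", task_name="explicit requirements")
```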
@<1577468638728818688:profile|DelightfulArcticwolf22> , after checking internally with the guys I think you should have received an email from Tina about 9 days ago
Hi @<1787653566505160704:profile|EnchantingOctopus35> , what are you running?
Can you see in the APIserver logs whether something happened during this time? Is the agent still reporting?
I recall a big fix to plots in server version 1.6.0; can you try upgrading to see if it fixes the issue?
Hi @<1774245260931633152:profile|GloriousGoldfish63> , can you please elaborate on what your use case is?
Hi MoodySheep3 ,
Can you please provide screenshots from the experiment showing what the configuration looks like?
Hi PerfectMole86 ,
how do I connect it to clearml installed outside my docker container?
Can you please elaborate?
I see. When you're working with catboost, as what type of object is it being passed?
Hi @<1829328217773707264:profile|DiminutiveButterfly84> , how are you building the pipeline? Is it from tasks or from decorators?
Hi @<1717350310768283648:profile|SplendidFlamingo62> , are you using a self hosted server or the community?
Yes; however, I think you might be able to expose this via an env variable on the Task object itself.
Did you download the same data directly to the NAS and it took 5 secs?
VexedCat68 , It appears to be a bug of sorts, we'll sort it out 🙂
Hi @<1829328217773707264:profile|DiminutiveButterfly84> , Is the code part of some repository or its just a folder with some script files?
Hi RoughTiger69 , you can specify a queue per step with the execution_queue parameter in add_function_step
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller
Same goes for the docker image: the docker parameter of add_function_step
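A sketch of both parameters together (assuming the clearml SDK; the step function, queue name, and docker image here are placeholders):

```python
def my_train_step():
    # Hypothetical step function body
    pass


def build_pipeline():
    """Sketch: set a per-step queue and docker image on a pipeline step.
    Requires the clearml package; names and images are placeholders."""
    from clearml.automation import PipelineController  # lazy import keeps the sketch standalone

    pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
    pipe.add_function_step(
        name="train",
        function=my_train_step,
        execution_queue="gpu_queue",                      # queue for this step only
        docker="nvidia/cuda:11.8.0-runtime-ubuntu22.04",  # docker image for this step only
    )
    return pipe
```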