Hi TrickySheep9, can you be a bit more specific?
Hi DistressedGoat23, can you please elaborate a bit on what you'd like to do?
Can you check in the UI whether the execution parameters were logged?
I think it depends on your code and the pipeline setup. You can also cache steps, avoiding the need to worry about artifacts entirely.
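For reference, a minimal sketch of step caching with the PipelineDecorator (the project, function, and URL names here are placeholders):
```python
from clearml import PipelineDecorator

# cache=True: if the step's code and inputs are unchanged,
# the previous run's outputs are reused instead of re-executing
@PipelineDecorator.component(cache=True, return_values=["data"])
def prepare_data(source_url):
    import pandas as pd  # imports inside a component run on the worker
    return pd.read_csv(source_url)

@PipelineDecorator.pipeline(name="demo-pipeline", project="examples", version="1.0")
def run_pipeline(source_url):
    prepare_data(source_url)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # drop this line to enqueue on agents
    run_pipeline("https://example.com/data.csv")  # placeholder URL
```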
Hi @<1654294828365647872:profile|GorgeousShrimp11>, are you running in docker mode?
AbruptWorm50, can you confirm it works for you as well?
Hi NuttyCamel41, what kind of additional information are you looking to report? What is your use case?
Community server?
Interesting, how long ago do you figure?
Hi @<1635813046947418112:profile|FriendlyHedgehong10>, can you please elaborate on the exact steps you took? When you view the model in the UI - can you see the tags you added during the upload?
@<1526734383564722176:profile|BoredBat47>, that could indeed be an issue. If the server is still running, data could still be written to the databases, creating conflicts.
I mean in the Execution section of the task, under the Container section.
Hi EnviousPanda91, what version of ClearML are you using? Are you running on a self-hosted server?
Did you try what I added? Also, the screenshot is too small; nothing is readable.
Hi @<1562610703553007616:profile|CloudyCat50>, can you provide some code examples?
Hi @<1668427950573228032:profile|TeenyShells80>, you would need to configure it in the clearml.conf of the machine running the clearml-agent.
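As an illustration only - the thread doesn't say which setting this is about, so assuming it's cloud storage credentials - the relevant fragment of ~/clearml.conf on the agent machine might look like:
```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    bucket: "my-bucket"   # placeholder values
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                }
            ]
        }
    }
}
```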
Hi @<1582904448076746752:profile|TightGorilla98>, can you check on the status of the elastic container?
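Assuming the standard docker-compose deployment (where the Elasticsearch container is named clearml-elastic), something like this shows its state and recent logs:
```
docker ps -a | grep elastic              # is the container up, or restarting?
docker logs --tail 100 clearml-elastic   # recent Elasticsearch logs
```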
I see. Leave the files_server section as it was by default. Then, in the CLI, specify the --output-uri flag.
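For example, assuming the clearml-task CLI is what's in play here (the project, script, queue, and bucket names are placeholders):
```
clearml-task --project examples --name my-training \
    --script train.py --queue default \
    --output-uri s3://my-bucket/models
```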
Hi EnormousCormorant39, how did it fail?
Hi VivaciousBadger56, this is a good question. In my opinion, it's best to start by watching videos from ClearML's YouTube channel. This one is especially useful:
https://www.youtube.com/watch?v=quSGXvuK1IM
As for which steps to take, I think the following should cover most bases:
- Experiment tracking & management - check that you can see all of the expected outputs in the ClearML web UI
- Remote experiment execution - try to execute an experiment remotely using the agent. Change some c...
Do any of these API calls have a "Dataset Content" field anywhere in the "configuration" section?
Hi ImmenseMole41, so your issue is specifically when trying to download compressed CSV files? You mentioned that the values are correct when downloading via the StorageManager. Do you get corrupted values somewhere else?
Also, how are you saving these CSV files?
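To help narrow it down, here's a minimal save/upload/download roundtrip with StorageManager (the paths and bucket are placeholders):
```python
from clearml import StorageManager
import pandas as pd

# save a gzip-compressed CSV and upload it (placeholder paths)
df = pd.DataFrame({"a": [1, 2, 3]})
df.to_csv("data.csv.gz", index=False, compression="gzip")
StorageManager.upload_file("data.csv.gz", "s3://my-bucket/data/data.csv.gz")

# download it again without auto-extracting the archive,
# then verify the values survived the roundtrip
local_path = StorageManager.get_local_copy(
    "s3://my-bucket/data/data.csv.gz", extract_archive=False
)
df_back = pd.read_csv(local_path, compression="gzip")
assert df_back.equals(df)
```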
@<1556812486840160256:profile|SuccessfulRaven86>, did you install poetry inside the EC2 instance or inside the docker container? Basically, where did you put the poetry installation bash script - in the 'init script' section of the autoscaler, or in the task's 'setup shell script' in the execution tab? (The latter is basically the script that runs inside the docker container.)
It sounds like you're installing poetry on the EC2 instance itself, but the experiment runs inside a docker container.
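If so, one option (a sketch using poetry's official installer) is to put the installation in the task's 'setup shell script' so it runs inside the container:
```
# runs inside the task's docker container, before the experiment starts
curl -sSL https://install.python-poetry.org | python3 -
export PATH="$HOME/.local/bin:$PATH"
poetry --version   # sanity check
```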
Where do these arguments (`--input-test-data`) come from?
BoredPigeon26, please try the following setting in your ~/clearml.conf: `sdk.metrics.tensorboard_single_series_per_graph: true` and see if it helps 🙂
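In the nested form clearml.conf typically uses, that key sits under the sdk.metrics section:
```
sdk {
    metrics {
        # render each TensorBoard series on its own graph
        tensorboard_single_series_per_graph: true
    }
}
```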
I'm guessing this is a self-deployed server, correct?
The chart already passes the `--create-queue` command line option to the agent, which means the agent will create the queue(s) it's passed. The open source chart simply doesn't allow you to define multiple queues in detail and provide override pod templates for them; however, it does allow you to tell the agent to monitor multiple queues.
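For reference, the agent-level equivalent of what the chart sets up (queue names are placeholders) would be:
```
# monitor two queues; --create-queue creates any that don't exist yet
clearml-agent daemon --queue high_priority low_priority --create-queue
```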