1.0.1 is only for the clearml python client, no need for a server upgrade (or agent)
is there a way that i can pull all scalars at once?
I guess you mean from multiple Tasks ? (if so then the answer is no, this is on a per Task basis)
Or, can i get experiments list and pull the data?
Yes, you can use Task.get_tasks to get a list of task objects, then iterate over them. Would that work for you?
https://clear.ml/docs/latest/docs/references/sdk/task/#taskget_tasks
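Something along these lines (a rough sketch, the project name is a placeholder):
```python
from clearml import Task

# collect the reported scalars from every task in a project (placeholder project name)
tasks = Task.get_tasks(project_name='my_project')
all_scalars = {}
for task in tasks:
    # get_reported_scalars() returns the scalars that task reported
    all_scalars[task.id] = task.get_reported_scalars()
```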
Hi PompousBeetle71
Could you test the latest RC, I think the warnings were fixed: pip install trains==0.16.2rc0
Let me know...
So if everything works you should see "my_package" package in the "installed packages"
the assumption is that if you do: pip install "my_package"
It will set "pandas" as one of its dependencies, and pip will automatically pull pandas as well.
That way we do not list the entire venv you are running on, just the packages/versions you are using, and we let pip sort the dependencies when installing with the agent
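For example, a minimal setup.py sketch for a hypothetical "my_package" that declares pandas as a dependency:
```python
# setup.py of "my_package" (placeholder name)
from setuptools import setup, find_packages

setup(
    name="my_package",
    version="0.1.0",
    packages=find_packages(),
    # pip resolves and installs pandas automatically when my_package is installed
    install_requires=["pandas"],
)
```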
Make sense ?
BTW: server-side vault is in progress, hopefully will be available in the upcoming releases :)
Hi FiercePenguin76
So currently the idea is you have full control over per user credentials (i.e. stored locally). Agents (depending on how deployed) can have shared credentials (with AWS the easiest is to push to the OS env)
ShaggyHare67 could you send the console log trains-agent outputs when you run it?
Now the trains-agent is running my code but it is unable to import trains
Do you have the package "trains" listed under "installed packages" in your experiment?
Hi VexedCat68
Could it be the python version is not the same? (this is the only reason not to find a specific python package version)
WorriedParrot51 I now see ...
Two solutions that I can quickly think of:
1. In the code add: import sys; sys.path.append('./my_sub_module')
Assuming you always have to add the sub-directories to make the code work, and assuming they are part of the repository, this is probably the stable solution
2. In the UI, in the Docker base image field, add -e PYTHONPATH=/folder
or from code (which is exactly what you did)
a clean interface: task.set_base_docker("nvidia/cuda -e PYTHONPATH=/folder")
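Putting the two options together in one sketch (image name, paths and project/task names are just illustrative):
```python
# Option 1: extend sys.path from the code itself (the sub-directory is part of the repo)
import sys
sys.path.append('./my_sub_module')

from clearml import Task  # `from trains import Task` on older versions

task = Task.init(project_name='examples', task_name='sub-module demo')

# Option 2: set PYTHONPATH through the docker base image definition
task.set_base_docker("nvidia/cuda -e PYTHONPATH=/folder")
```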
Hi MagnificentSeaurchin79
This sounds like a deeper bug (of a sort), I think the best approach is to open a GitHub issue with some code that can reproduce this behavior, or at least enough information so that we could try to catch the bug.
This way we will make sure it is not forgotten.
Sounds good ?
EnviousStarfish54
oh, this is a bit different from my expectation. I thought I can use artifact for dataset or model version control.
You totally can use artifacts as a way to version data (actually we will have it built in in the next versions)
Getting an artifact programmatically:
Task.get_task(task_id='aabb').artifacts['artifactname'].get()
Models are logged automatically. No need to log manually
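A minimal sketch of both sides (task id, names and paths are placeholders):
```python
from clearml import Task

# producing side: register an object/file as an artifact on the current task
task = Task.init(project_name='examples', task_name='artifact demo')
task.upload_artifact('my_dataset', artifact_object='/path/to/dataset.csv')

# consuming side: fetch the artifact from any other script by task id
producer = Task.get_task(task_id='aabb')
local_path = producer.artifacts['my_dataset'].get_local_copy()  # cached local copy
obj = producer.artifacts['my_dataset'].get()  # or deserialize the object directly
```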
What's the difference between the example pipeline and this code ?
Could it be the "parents" argument ? what is it?
This is because we have a pub-sub architecture that we already use, it can handle retries, etc. also we will likely want multiple systems to react to notifications in the pub sub system. We already have a lot of setup for this.
How would you integrate with your current system? Do you have a REST API or similar to trigger events?
but I was hoping ClearML had a straightforward way to somehow represent ALL ClearML events as JSON so we could land them in our system.
Not sure I'm followi...
but never executes/enqueues them (they are all in Draft mode).
All pipeline steps are not enqueued ?
Is the pipeline controller itself running?
Task deletion failed: unhashable type: 'dict'
Hi FlutteringWorm14 trying to figure out where this is coming from, give me a sec
using the cleanup service
Wait FlutteringWorm14 , the cleanup service , or task.delete call ? (these are not the same)
Hi ReassuredTiger98
However, the clearml-agent also stops working then.
you mean the clearml-agent daemon (the one that spun the container) is crashing as well ?
I see. If you are creating the task externally (i.e. from the controller), you should probably call task.close(), it will return when everything is in order (including artifacts uploaded, and other async stuff).
Will that work?
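Roughly like this (a sketch; how the controller obtains the task object depends on your setup):
```python
from clearml import Task

# controller-side: the task object the controller created / is holding (placeholder names)
task = Task.init(project_name='examples', task_name='step created by controller')

# ... run the step, report scalars, upload artifacts ...

# close() returns only when uploads and other async operations have completed
task.close()
```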
Hi @<1523704207914307584:profile|ObedientToad56>
What would be the right way to extend this with, let's say, a custom engine that is currently not supported ?
as you said 'custom' 🙂
None
This is actually a custom engine (see (3) in the readme, and the preprocessing.py implementing it). I think we should actually add a specific example to custom so this is more visible. Any thoughts on what would...
The problem is that I currently don't have a way to get them "from outside".
Maybe as a hack (until we add the model object)
```python
# note: the import path may differ between trains/clearml versions
from clearml.binding.frameworks import WeightsFileHandler

class MyModelCB:
    current_args = dict()

    @classmethod
    def callback(cls, load_save, model_info):
        # only rename on "save" operations
        if load_save != "save":
            return model_info
        model_info.name = "my new name " + str(cls.current_args)  # make a name from args
        return model_info

WeightsFileHandler.add_pre_callback(MyModelCB.callback)
MyModelCB.current_args = {"args": "value"}
```
wdyt?
Hi AdventurousRabbit79
In the wizard
https://github.com/allegroai/clearml/blob/1ab3710074cbfc6a19dd8a57078b10b31b2df31a/examples/services/aws-autoscaler/aws_autoscaler.py#L214
Add the S3 section like you would in the clearml.conf:
https://github.com/allegroai/clearml/blob/1ab3710074cbfc6a19dd8a57078b10b31b2df31a/docs/clearml.conf#L73
What you actually specified is torch; the @ is kind of a pip remark, and pip will not actually parse it 🙂
Use only the link: https://download.pytorch.org/whl/cu100/torch-1.3.1%2Bcu100-cp36-cp36m-linux_x86_64.whl
Go to the workers & queues page, right side panel, 3rd icon from the top
Could it be pandas was not installed on the local machine ?
Tried context provider for Task?
I guess that would only make sense inside notebooks ?!