What's the Jupyter / notebook version you have?
Also from within the jupyter could you send me "sys.argv" ?
Hi StickyMonkey98
a very large number of running and pending tasks, and doing that kind of thing via the web-interface by clicking away one-by-one is not a viable solution.
Bulk operations are now supported, upgrade the clearml-server to 1.0.2 🙂
Is it possible to fetch a list of tasks via Task.get_tasks,
Sure:
Task.get_tasks(project_name='example', task_filter=dict(system_tags=['-archived']))
How can I reproduce it?
but when I run the same task again it does not map the keys...
SparklingElephant70 what do you mean by "map the keys" ?
Hmmm why don't you use "series" ?
(Notice that with iterations, there is a limit to the number of images stored per title/series , which is configurable in trains.conf, in order to avoid debug sample explosion)
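As a side note, that per title/series limit is set in the configuration file. A minimal sketch, assuming the `sdk.metrics.file_history_size` key (which controls how many debug samples are kept per title/series combination):

```
# trains.conf (clearml.conf in newer versions)
sdk {
    metrics {
        # number of debug samples kept per title/series combination
        file_history_size: 100
    }
}
```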
Oh I see, what you need is to pass '--script script.py' as entry-point and ' --cwd folder' as working dir
Sure thing, let me know ... 🙂
Hi RotundSquirrel78
Could those be the example experiments ?
Are you running your own server, or is it the SaaS free tier server?
Any chance your code needs more than the main script, but it is Not in a git repo? Because the agent supports either single script file, or a git repo with multiple files
Great, if this is what you do, how come you need to change the entry script in the UI?
ShallowCat10 so you mean like meta-data on top of the image? or another level of title series ?
because the iteration field itself is an integer...
This seems to be the issue: PYTHONPATH = '.'
How is that happening ?
Can you try to run the agent with:
PYTHONPATH= clearml-agent daemon ....
(Notice the prefix PYTHONPATH= clears the environment variable that obviously fails the python commands)
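The effect of that empty-variable prefix can be checked with plain shell. This is just a sketch of the shell behaviour; the path is made up:

```shell
# Simulate a broken PYTHONPATH in the current shell
export PYTHONPATH=/some/broken/path

# A normal invocation inherits it
python3 -c 'import os; print(os.environ.get("PYTHONPATH"))'
# prints: /some/broken/path

# Prefixing the command with `PYTHONPATH=` overrides the variable
# (sets it to the empty string) for that single command only
PYTHONPATH= python3 -c 'import os; print(os.environ.get("PYTHONPATH"))'
# prints an empty line
```

The same mechanism is why prefixing `clearml-agent daemon` with `PYTHONPATH=` lets the agent's python commands run cleanly without touching the shell's own environment.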
Are you saying you had that odd script entry-point created by calling Task.init? (To clarify this is the problem)
Btw after you clone the experiment you can always manually edit both entry point and working dir, which based on what you said should be "script.py" and "folder"
🙂 It's working as expected for me...
That said I tested on Linux & pip,
Any specific req to test with? From the log I see this is conda on windows; are you using the base conda env or a venv inside conda?
The only workaround I can think of is: series = series + 'IoU>X'
It doesn't look that bad 🙂
(since you are using venv mode, if the cuda is not detected at startup time, it will not install the GPU version, as it has no CUDA support)
I see, so in theory you could call add_step with a pipeline parameter (i.e. pipe.add_parameter etc.)
But currently the implementation is such that if you are starting the pipeline from the UI
(i.e. rerunning it with a different argument), the pipeline DAG is deserialized from the Pipeline Task (the idea being that one could control the entire DAG externally without changing the code)
I think a good idea would be to actually allow the pipeline class to have an argument saying always create from cod...
You can try calling task._update_repository()
I'm still trying to figure out how to reproduce it...
AbruptWorm50 my apologies, I think I misled you. Yes, you can pass generic arguments to the optimizer class, but specifically for optuna this is disabled (not sure why)
Specifically to your case, the way it works is:
your code logs to tensorboard, clearml catches the data and moves it to the Task (on clearml-server), optuna optimization is running on another machine, trial values are manually updated (i.e. the clearml optimization pulls the Task reported metric from the server and updat...
Hi AbruptWorm50
I was wondering if it is possible to specify 'patience' of the pruning algorithm?
Any of the kwargs passed to **optimizer_kwargs
will be directly passed to the optuna object
https://github.com/allegroai/clearml/blob/2e050cf913e10d4281d0d2e270eea1c7717a19c3/clearml/automation/optimization.py#L1096
It should allow you to control the parameters, no?
Regrading the callback, what exactly do you think to put there?
Is this callback enough?
https://github.com/allegro...
These instructions should create the exact chart:
None
What am I missing ?
Nice! I'll see if we can have better error handling for it, or solve it altogether 🙂
feature value distribution over time
You mean how to create this chart? None
ThickDove42 looking at the code, I suspect it fails interacting with the actual jupyter server (that is running on the same machine, but still).
Any chance you have a firewall on the Windows machine ?
Like what would be the exact query given an endpoint, for requests per sec.
You mean in Grafana ?
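If it is Grafana backed by Prometheus, a typical per-second request-rate query would look something like the sketch below. The metric and label names are hypothetical; substitute whatever the serving endpoint actually exports:

```
# requests per second, averaged over the last minute,
# filtered to one endpoint (metric/label names are assumptions)
rate(http_requests_total{endpoint="my_model"}[1m])
```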
SweetGiraffe8 Task.init will autolog everything (git/python packages/console etc), for your existing process.
Task.create purely creates a new Task in the system, and lets you manually fill in all the details on that Task
Make sense ?