UnevenDolphin73 it's looking for any of the files:
Should pass only_published:
https://github.com/allegroai/clearml/blob/071caf53026330f3bb8019ee5db3d039562072f3/clearml/model.py#L444
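For example (a minimal sketch, assuming a recent clearml where Model.query_models exposes only_published; the project name is illustrative):

from clearml import Model

# query only models that were published
models = Model.query_models(project_name="examples", only_published=True)
for model in models:
    print(model.id, model.name)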
At the top there should be the URL of the notebook (I think)
Sure 🙂
BTW: by default, clearml-agent will mount your host's .ssh into the docker container at /root/.ssh.
So there is no need to do that manually
ReassuredTiger98 that is a good point. At the moment they are designed as "machine level" configs, but we do have built-in support to allow multiple configurations. The technical issue is that we have to read the configuration file before we initialize the Task object, which means we are still not aware of the git root (which I assume is where we would put a configuration file)
BTW: regarding the detect_with_conda_freeze
we hope that this flag is rarely used, as ClearML should auto-detect t...
JitteryCoyote63 I remember something with "!" in the name, or maybe "/" in the name, that might cause this behavior. May I suggest checking with clearml-server 1.3?
I am thinking about just installing this manually on the worker ...
If you install them system-wide (i.e. with sudo) and set agent.package_manager.system_site_packages: true
then they will always be available for you 🙂
And then also use
priority_optional_packages: ["carla"]
This actually means that it will always try to install the package carla
first, but if it fails, it will not raise an error.
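In clearml.conf on the agent machine it would look roughly like this (a sketch; adjust to your setup):

agent {
    package_manager {
        # use system-wide (sudo-installed) packages inside the created venv
        system_site_packages: true
        # try to install these first, but do not fail the run if they cannot be installed
        priority_optional_packages: ["carla"]
    }
}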
BTW: this would be a good use case for dockers, just saying :w...
Hi WackyRabbit7 ,
Yes we had the same experience with kaggle competitions. We ended up having a flag that skipped the task init :(
Introducing offline mode is on the to-do list, but to be honest it has been there for a while. The thing is, since the Task object actually interacts with the backend, creating an offline mode means simulating the backend response. I'm open to hacking suggestions though :)
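The flag itself can be as simple as this (a sketch; the environment variable name here is hypothetical):

import os
from clearml import Task

# skip Task.init entirely when running in a no-network environment (e.g. kaggle)
if not os.environ.get("SKIP_CLEARML"):
    task = Task.init(project_name="examples", task_name="kaggle run")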
I still do not get why this leads to some 0.5 values when in my plot there should only be 0 and 1.
Smart sub-sampling (lowpass filter before, aka averaging on a window)
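A toy numpy illustration of why window-averaging a 0/1 series produces 0.5 values:

import numpy as np

signal = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float)
# average over non-overlapping windows of 2 before sub-sampling
averaged = signal.reshape(-1, 2).mean(axis=1)
print(averaged)  # [0.5 0.5 0.5 0.5]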
it should be fairly easy to write such a daemon
from time import time
from datetime import datetime

from clearml.backend_api.session.client import APIClient

client = APIClient()
timestamp = time() - 60 * 60 * 2  # last 2 hours
tasks = client.tasks.get_all(
    status=["in_progress"],
    only_fields=["id"],
    order_by=["-last_update"],
    page_size=100,
    page=0,
    created=[">{}".format(datetime.utcfromtimestamp(timestamp))],
)
...
references:
https://clear.ml/...
What happens when you call:
from clearml.backend_interface.task.repo import ScriptInfo
print(ScriptInfo._ScriptInfo__legacy_jupyter_notebook_server_json_parsing(None))
Sure, try to run the clearml-agent with: clearml-agent daemon -O
https://clear.ml/docs/latest/docs/clearml_agent/clearml_agent_daemon
so I assume clearml moves them from one queue to the other?
Correct. When it creates the k8s job and launches it on the cluster, it moves it into the queue.
Can you see it on your k8s cluster (meaning the job/pod)?
Thanks JitteryCoyote63 !
Any chance you want to open a GitHub issue with the exact details, or fix it with a PR?
(I just want to make sure we fix it as soon as we can 🙂)
clearml will register conda packages that cannot be installed if clearml-agent is configured to use pip. So although it is nice that a complete package list is tracked, it makes it cumbersome to rerun the experiment.
Yes, mixing conda & pip is not supported by clearml (or by conda or pip, for that matter)
Even Python package version numbers might not exist on both.
We could add a flag not to update back the pip freeze, it's an easy feature to add. I'm just wondering on the exact use case
Basically it is the same as "report_scatter2d"
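For reference, a minimal report_scatter2d call (project/task names are illustrative):

import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="scatter demo")
scatter = np.random.rand(50, 2)  # Nx2 array of (x, y) points
task.get_logger().report_scatter2d(
    title="example", series="points", iteration=0,
    scatter=scatter, xaxis="x", yaxis="y",
)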
GiganticTurtle0
If there are several tasks running concurrently, which task should Task.current_task() return?
How could you have that?
Per process, there is one Main current Task (until you close it).
Are you referring to a pipeline with multiple steps ?
If this is the case, Task.current_task()
will return the Task of the component (if executed from the component) and the pipeline (if called from the pipeline logic function).
Notice we added the ability to s...
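To make the per-process behavior concrete (a minimal sketch; names are illustrative):

from clearml import Task

task = Task.init(project_name="examples", task_name="current task demo")
# within the same process, current_task() returns the task created above
assert Task.current_task().id == task.id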
GiganticTurtle0
I'm assuming here that self.dask_client.map(read_and_process_file, filepaths)
actually does the multi-process/node processing. The way it needs to work, it has to store the current state of the process and then restore it on any remote node/process. In practice this means pickling the local variables (Task included).
First I would try to use a standalone static function for the map; Dask might be able to deduce it does not need to pickle anything, as it is standalone.
A...
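For example, a module-level function means Dask only has to pickle the arguments (a sketch; the file paths and cluster setup are assumed):

from dask.distributed import Client

def read_and_process_file(filepath):
    # standalone function: no object state has to be pickled
    with open(filepath) as f:
        return len(f.read())

client = Client()  # assumes a local or remote Dask cluster
futures = client.map(read_and_process_file, ["a.txt", "b.txt"])
results = client.gather(futures)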
Sorry if it's something trivial. I recently started working with ClearML.
No worries, this has actually more to do with how you work with Dask
The Task ID is the unique id of any Task in the system (task.id will return the UID str)
Can you post a toy Dask code snippet here? I'll explain how to make it compatible with clearml 🙂
I'm sorry JitteryCoyote63, no 🙂
I do know that the enterprise edition has these features (a.k.a. vault & permissions), basically to answer these types of situations.
Basically if I pass an arg with a default value of False, which is a bool, it'll run fine originally, since it just accepted the default value.
I think this is the nargs="?", is that right?
Yep, and this is the root cause of the issue (but easily fixable) 🙂
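To see the pitfall in isolation (plain argparse, nothing clearml-specific):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--flag", type=bool, nargs="?", const=True, default=False)
print(parser.parse_args([]).flag)                   # False, the default is used
print(parser.parse_args(["--flag"]).flag)           # True, the const is used
print(parser.parse_args(["--flag", "False"]).flag)  # True! bool("False") is truthy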
SubstantialElk6 I know they have full permission control in the enterprise edition; if this is something you need, I suggest you contact http://allegro.ai 🙂
ValueError('Task object can only be updated if created or in_progress')
It seems the task is not "running", hence the error. Could that be?
how can I for example convert it back to a pandas dataframe?
You can always report csv file with report_media as well, or if this is not for debugging maybe an artifact ?
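For example, a DataFrame artifact round-trips like this (a sketch; names are illustrative):

import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")
df = pd.DataFrame({"a": [0, 1], "b": [1, 0]})
task.upload_artifact(name="table", artifact_object=df)

# later, from any script: fetch the task and read the artifact back as a DataFrame
source = Task.get_task(task_id=task.id)
df_back = source.artifacts["table"].get()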