Hi GentleSwallow91
I think this would be a good start:
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
wdyt?
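In case it helps, this is roughly what the decorator-based pipeline in that example boils down to (a minimal sketch; the names and values are placeholders, not the exact example code):
```python
from clearml.automation.controller import PipelineDecorator

# each component becomes its own Task when the pipeline runs remotely
@PipelineDecorator.component(return_values=["data"])
def step_one():
    data = [1, 2, 3]
    return data

# the pipeline function orchestrates the components
@PipelineDecorator.pipeline(name="my pipeline", project="examples", version="0.1")
def run_pipeline():
    data = step_one()
    print(data)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug everything in the local process
    run_pipeline()
```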
Please go ahead with the PR 🙂
Oh, fork the repository (this creates a copy under your GitHub account); this is done from GitHub's web page
Then commit to your repository (on the master branch)
Then on the GitHub page of the repository under your account, you will have a green button suggesting you open a PR 🙂
You are doing great 🙂 don't worry about it
Hi MagnificentSeaurchin79
Yes this is a bit confusing 🙂
Datasets are stored as delta changes from parent versions.
A dataset contains a list of files and a list of artifacts where these files exist. This means that if we want to add a new file, we create a new dataset from a parent version, add a link to the file, and store a new artifact containing just the delta (i.e. the new file) from the parent version. When you delete a file you just remove the link...
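To make the delta behavior concrete, here is a minimal sketch (the project/dataset names and file paths are placeholders):
```python
from clearml import Dataset

# get the parent version, then create a child version on top of it
parent = Dataset.get(dataset_project="examples", dataset_name="my_dataset")
child = Dataset.create(
    dataset_project="examples",
    dataset_name="my_dataset_v2",
    parent_datasets=[parent.id],
)
child.add_files("new_file.csv")  # only a link + the delta are stored
child.upload()                   # the new artifact holds just the added file
child.finalize()
```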
PanickyMoth78 'tensorboard_logger' is an old, deprecated package that was meant to create TB events without TB; it was created before TB became a separate package. Long story short, it is not supported. That said, if you run the same code and replace tensorboard_logger with tensorboard, you should see all the scalars in the UI
background:
ClearML logs TB events as they are created, in real time. tensorboard_logger is not TB; it creates events and dumps them directly into a TB-equivalent event file
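In practice the swap is just the import plus a Task.init call so ClearML picks the events up automatically; a minimal sketch using PyTorch's SummaryWriter (names are placeholders):
```python
from clearml import Task
from torch.utils.tensorboard import SummaryWriter  # instead of tensorboard_logger

task = Task.init(project_name="examples", task_name="tb scalars")
writer = SummaryWriter()

for step in range(100):
    # every scalar written here is picked up and shown in the ClearML UI
    writer.add_scalar("loss", 1.0 / (step + 1), step)
writer.close()
```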
MagnificentSeaurchin79 are you using the latest RC ?
(I think this was exactly the issue)
EDIT:
after you upgrade to the latest RC (0.17.5rc3), try to create the version with the file removed; in the summary you should see 1 file removed.
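Something like this (a sketch; names and file paths are placeholders):
```python
from clearml import Dataset

# create a new version whose only delta is the removal
parent = Dataset.get(dataset_project="examples", dataset_name="my_dataset")
child = Dataset.create(
    dataset_project="examples",
    dataset_name="my_dataset_v3",
    parent_datasets=[parent.id],
)
child.remove_files("old_file.csv")  # recorded as a removal from the parent
child.upload()                      # nothing new to upload, closes the version
child.finalize()
# the version summary should now report: 1 file removed
```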
MelancholyChicken65 found it! Thank you for finding this issue.
I'm hoping to get an update soon 🙂
MelancholyChicken65 which clearml-serving version are you using? (I believe this issue was fixed in 1.2)
Hmm, is "model_monitoring_eps" another version of the model, one that does not have all the properties of the "original" one?
I think that what you need is the triggers, check this one:
https://clear.ml/docs/latest/docs/references/sdk/trigger
Task.enqueue will execute immediately, I need to execute the task at a specific time
Oh I see what you mean: trigger -> scheduled (cron-like) -> Task executed.
Is that correct?
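If so, the TaskScheduler should cover the cron-like part; a minimal sketch (the task ID and queue names are placeholders, and I'm assuming the add_task signature from the docs):
```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_task_id="aabbcc",  # the Task to clone & enqueue
    queue="default",            # the execution queue
    hour=9, minute=30,          # run every day at 09:30
    recurring=True,
)
# keep the scheduler itself running, e.g. on the services queue
scheduler.start_remotely(queue="services")
```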
Thanks @<1523701601770934272:profile|GiganticMole91> !
(As usual, MS decided to invent a new "standard")
I'll make sure the guys look at it and get an RC with a fix out
each epoch runs about 55 minutes, and that screenshot I posted earlier kind of shows the logs for the rest of the info being output, if you wanted to check that out
I thought you disabled the stdout log, no?
Maybe ClearML is using tensorboard in ways that I can fine tune? I...
You can open your TB and see: every report there is logged into ClearML
DeterminedCrab71 that is a good point, how does Plotly adjust for NaNs on graphs?
FranticCormorant35 DeterminedCrab71 please continue the discussion in this thread
The issue is the 400 returned from the server, let me check with the backend guys
MagnificentSeaurchin79 YEY!!!!
Very cool!
Do you feel like making it public? I have the feeling a lot of people will appreciate it, this is very useful 🙂
I lost you SmallBluewhale13, is this the Task.init call you used?
task = Task.init(
    project_name="examples",
    task_name="load_artifacts",
    output_uri="s3://company-clearml/artifacts/bethan/sales_journeys/",
)
Hmm, worked now...
When Task.init is called with output_uri='s3://my_bucket/sub_folder', I get:
s3://my_bucket/sub_folder/examples/upload issue.4c746400d4334ec7b389dd6232082313/artifacts/test/test.json
So you are saying it ignored everything after the bucket's "/" ?
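For reference, this is the flow as I understand it (a sketch; the bucket/prefix and artifact are placeholders):
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="upload issue",
    output_uri="s3://my_bucket/sub_folder",  # artifacts go under this prefix
)
# expected location (prefix + project + "task_name.task_id"):
# s3://my_bucket/sub_folder/examples/upload issue.<task_id>/artifacts/test/test.json
task.upload_artifact(name="test", artifact_object={"key": "value"})
```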
SmarmySeaurchin8
args = parse.parse()
task = Task.init(project_name=args.project or None, task_name=args.task or None)
You should probably look at the docstring 😉
:param str project_name: The name of the project in which the experiment will be created. If the project does
    not exist, it is created. If project_name is None, the repository name is used. (Optional)
:param str task_name: The name of Task (experiment). If task_name is None, the Python experiment
    ...
which to my understanding has to be given before a call to an argparser,
SmarmySeaurchin8 You can call argparse before Task.init, no worries: it will catch the arguments, and trains-agent will be able to override them :)
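i.e. something like this (a minimal sketch mirroring your snippet):
```python
import argparse
from clearml import Task

# argparse runs before Task.init; the arguments are still captured,
# and the agent can override them on remote execution
parser = argparse.ArgumentParser()
parser.add_argument("--project", default=None)
parser.add_argument("--task", default=None)
args = parser.parse_args()

task = Task.init(
    project_name=args.project or None,
    task_name=args.task or None,
)
```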
Regarding the project name:
set_project will support project_name in the next version 🙂
project_id = [p.id for p in Task.get_projects() if p.name == project_name][0]
Yes, including this. (There was a fix to an issue with trains-agent and disabling frameworks; it is already part of 0.16.3)
SmarmySeaurchin8 regarding the original question:
task.set_project(project_id)
Task.get_projects() to get all the project names/ids
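Putting the two together (a sketch; the task ID and project name are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="aabbcc")  # or the currently running task
# until set_project accepts a name, look up the project ID first
project_id = [p.id for p in Task.get_projects() if p.name == "new_project"][0]
task.set_project(project_id)
```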
SmarmySeaurchin8
When running in "dev" mode (i.e. writing the code) only packages imported directly are registered under "installed packages" , then when the agent is executing the experiment, it will update back the entire environment (including derivative packages etc.)
That said, you can set detect_with_pip_freeze to true (in trains.conf) and it will basically store the entire pip freeze.
https://github.com/allegroai/trains/blob/f8ba0495fb3af1f99732fdffbbccd2fa992934a4/docs/trains.c...
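i.e. in trains.conf, something like this (a sketch, assuming the sdk.development section):
```
sdk {
    development {
        # store the full `pip freeze` output instead of only
        # the directly-imported packages
        detect_with_pip_freeze: true
    }
}
```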
In theory task.tags.remove(tag) might also work, but I'm not sure if it will automatically be updated on the backend
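A safer route might be reading the tags and writing the full list back with set_tags (a sketch; the task ID and tag name are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="aabbcc")
tags = list(task.tags or [])
if "remove-me" in tags:
    tags.remove("remove-me")
task.set_tags(tags)  # pushes the updated list back to the backend
```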