
great!
So it is still in master and should be included in 1.0.5?
correct, RC will be released soon with this fix included
Because we are working with very big files, having them stored at multiple locations is something we try to avoid
Just so I better understand, is this for storing files as part of a dataset, or as debug samples?
In other words, can two different processes create the exact same file (image)?
Hi @<1523711619815706624:profile|StrangePelican34>
You can either report on the Model itself:
None
or you can force it on the Task:
task = Task.get_task("task id here")
task.mark_started(force=True)
task.get_logger().report_scalar(...)
task.mark_completed(force=True)
LazyTurkey38
The last part makes sense. Not sure I get the "if clone" part: we are calling execute_remotely, so I'm assuming we do not need to clone ourselves, but send the current Task.
Other than that, yes, makes sense (BTW, assuming you have upgraded the server to >=1.0 you can just do mark_stopped, no need to reset).
thought the agent created a new conda env and installed all packages
It does, but I was asking what is written on the Original Task (the one created when you executed the code on your laptop, not when the agent was executing it). When the agent is executing the Task, it writes back all the packages of the entire venv it created; when the Task is run manually, it will list only the packages you import directly (i.e. "from package" or "import package" - it actually analyses the code).
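For illustration, a minimal sketch (project/task names are made up): running this manually records only numpy and clearml in the installed packages, because those are the only direct imports the analysis finds.
import numpy as np
from clearml import Task

# executed manually: the import analysis lists numpy and clearml,
# not every package that happens to be installed in the local environment
task = Task.init(project_name="examples", task_name="requirements analysis demo")
print(np.ones(3))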
My point...
I was just able to reproduce with "localhost"
What I mean is that I don't need to have cudatoolkit installed in the current conda env, right?
Wait, are you using conda as the package manager?
EDIT: meaning configured in trains.conf as the package manager
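For reference, this is roughly the section I mean in trains.conf / clearml.conf (layout from memory, double-check against your own conf file):
agent {
    package_manager {
        # "pip" (default) or "conda"
        type: conda
    }
}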
This is assuming you can just run two copies of your code, and they will become aware of one another.
Hi GrittyHawk31
this one?
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server
GiddyTurkey39 Hmm, I'm assuming that by default it cannot access that IP range.
Are you using VirtualBox for the VM?
EDIT:
Can I assume the machine running the VM (a.k.a. the host) can access the trains-server?
Regarding this, does this work if the task is not running locally and is being executed by the trains agent?
This line: "if task.running_locally():" makes sure that when the code is executed by the agent it will not reset its own requirements (the agent updates the requirements/installed_packages after it installs them from the requirements.txt, so that later you know exactly which packages/versions were used).
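A minimal sketch of that pattern (project name and requirements file are placeholders, and I'm assuming set_packages accepts a requirements file path - substitute whatever call you actually use to set the requirements):
from clearml import Task

task = Task.init(project_name="examples", task_name="requirements demo")

if task.running_locally():
    # only when running on the dev machine: overwrite the recorded requirements;
    # when the agent executes this Task the block is skipped, so the agent's
    # installed_packages list is not reset
    task.set_packages("requirements.txt")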
@<1523701079223570432:profile|ReassuredOwl55>
Hey, here's a quickie - is it possible to specify different "types" of input parameters ("Args/…") such that they are handled nicely on the front end?
You mean cast / checked in the UI?
MuddySquid7
Are you saying that for some reason the models pick up the artifacts? Is that reproducible? (they are two different things)
Can you see the df.pkl in the Models section of the Task (in the UI)?
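To illustrate that they are two different things (a rough sketch, variable and file names made up; argument names may differ a bit between versions):
# registered as an artifact -> shows up under the Artifacts tab of the Task
task.upload_artifact(name="df", artifact_object=df)

# registered as an output model -> shows up under the Models section of the Task
task.update_output_model("model.pkl", name="my model")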
An example for something like spacy would be useful for the community.
That's awesome, any chance you can PR something? (no need for it to be perfect, we can take it from there)
Follow up: I see that if I move an Experiment to a new project, it does not copy the associated model files and must be done manually. Once I moved the models to the new project, the query works as expected.
Correct 🙂
Nice catch!
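For anyone hitting the same thing later, a rough sketch of the query in question (assuming Model.query_models and that the models now live under the new project):
from clearml import Model

# list the models registered under the project the experiment was moved to
models = Model.query_models(project_name="new project name")
for m in models:
    print(m.id, m.name)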
GiddyTurkey39 Just making sure, you ran ping IP
not ping ip:port
right?
PlainSquid19 I will look into it as well.
Maybe for some reason model.keras_model.save_weights is not caught ...
Hi EnthusiasticCoyote38
Does clearml-agent have an option
Fully supported 🙂
Should work out of the box, it will always clone with --recursive and will bring all submodules
p.s. you should remove this line 🙂 extra_index_url: ["git@github.com:salimmj/xxxx"]
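For context, a sketch of what extra_index_url is actually meant for in the agent conf - additional pip package indexes, not git repositories (the URL below is just a placeholder):
agent {
    package_manager {
        # extra pip indexes to search, e.g. an internal PyPI mirror
        extra_index_url: ["https://my.internal.pypi/simple"]
    }
}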
And it is not working? What's the Working Dir you have under the Execution tab?
BTW: make sure to install the agent on the system python packages and not in any venv.
MysteriousBee56 when you run the trains-agent with --foreground, before it starts the docker it prints the full command line, could you send it please?
I can't figure out where the extra ' came from...
Also could you send the trains.conf file?
(feel free to redact any confidential information)
ClumsyElephant70
Can you manually run the same command? ['python3.6', '-m', 'virtualenv', '/home/user/.clearml/venvs-builds/3.6']
Basically: python3.6 -m virtualenv /home/user/.clearml/venvs-builds/3.6
Regarding resetting it via code, if you need I can write a few lines for you to do that, although that might be a bit hacky.
Maybe we should just add a flag saying "use requirements.txt"?
What do you think?
Is there a way to move existing pipelines between projects?
You should be able to. Go to your settings page and turn on "show hidden folders".
Then go to your project, you should see a ".pipeline" sub-project there; right click it and move it to another folder.
Hi CurvedDolphin95
I would first check the free space on the instance (it might be that git is reporting an inaccurate error, and it's free space, not permissions, that is causing the clone to fail).
I would also check your GitHub account; notice that they now only support user/api-key (and not user/pass), which means you need to create an api-key and add it as your password in the clearml.conf.
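Roughly, the relevant clearml.conf section (values are placeholders; the api-key goes in git_pass):
agent {
    git_user: "your-github-username"
    git_pass: "your-github-api-key"
}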
Any chance that for some reason some of the Tasks are running from a different user? Or not using docker?
Hi SkinnyPanda43
Yes, I think you are right, the documentation might be missing it. I'll make sure they know it 🙂
In the meantime: task.update_output_model
https://github.com/allegroai/clearml/blob/d3929033c016476c580557639ff44f900e65904a/clearml/backend_interface/task/task.py#L734
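A minimal sketch of using it in the meantime (path and names are placeholders, and the exact argument names may differ between versions - see the linked source for the signature):
from clearml import Task

task = Task.init(project_name="examples", task_name="manual model registration")
# ... train and save the model to disk ...
# register the saved weights file as the Task's output model
task.update_output_model("model_weights.pkl", name="my model")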
JitteryCoyote63 I think this only holds for the conda distribution.
(Actually quite interesting, I wonder what happens if you already installed cudatoolkit...)