
Firstly, thank you for your efforts and your support.
Thanks SmugOx94!
Are you running trains-agent in docker mode? The aforementioned scripts are executed before the experiment is cloned; they are meant to be part of the docker setup, not a per-experiment script.
You could try to edit the experiment and have:
Working Directory: "."
(that means the root of the repository)
Script Path: "experiments_that_uses_library/train.py"
This will make sure you can do "import l...
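To illustrate (the package name below is hypothetical, since the message above is truncated): with the Working Directory set to the repository root, the script can import a top-level package that lives next to it.
```python
# experiments_that_uses_library/train.py
# Because the agent launches from the repository root (Working Directory "."),
# a top-level package at the repo root is importable directly.
import library  # "library" is a hypothetical package name for illustration

def main():
    print(library.__name__)  # placeholder for real training code

if __name__ == "__main__":
    main()
```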
Well, I guess you can say this is definitely not a self-explanatory line 🙂
but it is actually asking whether we should extract the code. Think of it as:
```python
if extract_archive and cached_file:
    return cls._extract_to_cache(cached_file, name)
```
Hmm, you are missing the entry point in the execution (script path).
Also, as I mentioned, you can either have a git repo or a script in the uncommitted changes, but not both (if you have a git repo, then the uncommitted changes are the git diff).
Yes, that seems to be the case. That said they should have different worker IDs agent-0 and agent-1 ...
What's your trains-agent version ?
I prefer serving my models in-house and only performing the monitoring via ClearML.
clearml-serving is an infrastructure for you to run models 🙂
To clarify, clearml-serving is running on your end (meaning this is not SaaS where a 3rd party is running the model).
By the way, I saw there is a project dashboard app which might support the visualization I am looking for. Is it suitable for such use case?
Hmm interesting, actually it might; it does collect metrics over time ...
Hi UnsightlySeagull42
Basically you can get the agent to always add additional arguments for the docker run, such as -v for mounting:
https://github.com/allegroai/clearml-agent/blob/948fc4c6ce1ecf33a74619ad570d69b8188f6db9/docs/clearml.conf#L133
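For reference, a minimal sketch of what that setting looks like in clearml.conf (the mount path is a placeholder):
```
agent {
    # extra arguments the agent appends to every "docker run" it issues
    extra_docker_arguments: ["-v", "/host/data:/data"]
}
```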
do I need to have the repo that I am running on my account
If it is a public repo, then no need, credentials are only needed for private repos 🙂
Am I missing something?
WickedGoat98 I suspect the main difference is that with GitHub you are cloning over https (i.e. no credentials needed), but with GitLab you are using SSH authentication to clone the repository. If, on the machine running the trains-agent,
you can "git clone" your repository (i.e. from the command line), the trains-agent should be able to do the same (basically make sure you have the SSH keys in your ~/.ssh folder).
Are you testing the trains-agent service (i.e. from the docker compose) o...
I'm not running in docker mode though
Hmm, that might be the first issue: it cannot skip venv creation. It can, however, use a pre-existing venv (but it will change it every time it installs a missing package),
so setting CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 in non-docker mode has no effect.
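In docker mode, by contrast, the variable does take effect; a hedged sketch of a typical invocation (image and queue names are placeholders):
```
# reuse the python environment baked into the docker image instead of creating a venv
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 clearml-agent daemon --docker my_training_image --queue default
```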
Hmm I see, add this for example
extra_docker_shell_script: ["rm ~/.bashrc", "echo removed bashrc"]
Maybe this one?
https://github.com/allegroai/clearml/issues/448
I think it is already there (i.e. 1.1.1)
Hi UnsightlyShark53, just a quick FYI, you can also log the entire config file config.json;
this will be stored as model configuration, and you can see it in the input/output models under the artifacts tab.
See the example here: you can pass either the path to the configuration file, or the dictionary itself after you loaded the json, whatever is more convenient :)
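For instance, a minimal sketch using Task.connect_configuration (project and task names are placeholders):
```python
import json
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")

# option 1: pass the path to the configuration file
task.connect_configuration("config.json")

# option 2: load the json yourself and pass the dictionary
# config = task.connect_configuration(json.load(open("config.json")))
```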
Hi MagnificentPig49, unfortunately it's only in the trains-server docker; we are working on making it "presentable" and uploading it to its own repo.
It's written in Angular (v8 I think). Do you want to help out? It would definitely incentivize the guys to tidy up the code and upload it :)
Hi WorriedParrot51
Let me shed some light on this complicated mechanism, because it is not very straightforward.
Basically the agent signals the trains package that it should ignore the code calls and use a specific Task in the backend (i.e. in manual mode, the trains package logs the data into the trains-server; in agent mode (remote mode), it does the opposite and takes the data from the trains-server "into" the code).
Specifically, just like in manual mode, calling argparse.parse is be...
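A minimal sketch of what this looks like from the code's side (the argument is illustrative):
```python
from argparse import ArgumentParser
from clearml import Task

task = Task.init(project_name="examples", task_name="argparse demo")

parser = ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)

# manual mode: the parsed values are logged to the trains-server;
# remote mode (under an agent): the values stored on the server are
# injected here, overriding the command-line defaults
args = parser.parse_args()
```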
Regulatory reasons and proprietary data is what I had in mind. We have some projects that may need to be fully self hosted in the end
If this is the case then, yes do self-hosted, or talk to clearml sales to get the VPC option, but SaaS is just not the right option
I might take a look at it when I get a chance but I think I'd have to see if ClearML is a good fit for our use case before I can justify the commitment
I hope it is 🙂
For artifacts already registered, it simply returns the entry; for artifacts that do not exist, it contacts the server to retrieve them.
This is the current state.
Downloading the artifacts is done only when actually calling get()/get_local_copy()
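In other words (the artifact name and task id are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")

# this only returns the registered entry; nothing is downloaded yet
artifact = task.artifacts["data"]

# the actual download happens only here
local_path = artifact.get_local_copy()
```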
Please send the full log, I just tested it here, and it seems to be working
It's just that to access that comparison page, you have to make a comparison first.
Makes total sense to me 🙂
So I wonder - why should an agent be related to a specific user's credentials? Is the right way to go about this is to create a "fake user" for the sake of the agent?
Very true, you have to have credentials for the trains-agent so it can "report" to the trains-server; that said, the creator of the Task (i.e. the person who cloned it) will be registered as the "user" in the UI.
I would recommend creating an "agent" user and putting its credentials on the trains-agent machine (the same way...
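A sketch of the relevant section of the conf file on the agent machine (URLs and keys are placeholders for the ones generated for the "agent" user):
```
api {
    web_server: https://app.your-server.example
    api_server: https://api.your-server.example
    credentials {
        "access_key" = "AGENT_USER_ACCESS_KEY"
        "secret_key" = "AGENT_USER_SECRET_KEY"
    }
}
```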
I meant even just a link to a blank comparison and one can then add the experiments from that view
Just making sure you are aware: once you are in a comparison you can always add Tasks (any Task).
Notice you can press on "Add experiments", then select Any experiment (including from all projects, using the filters).
Notice you need to remove all filters (right-side red x on the filter icon).
why not let the user start with an empty comparison page and add them from "Add Experiment" button as well?
Apologies, I was not clear. Yes I'm with you, this is a great idea 🙂
Hi UnevenDolphin73
Can one compare experiments/tasks from different projects?
Yes, the easiest way is to go to the parent project ("all projects" if they have no common parent), then search for the specific Tasks (i.e. filter or use the search bar), then multi-select them.
wdyt?
could be nice to have a direct "task comparison" link in the UI somewhere,
you mean like a "cart" for comparison ? or just to "save the state" so you can move between projects ?
Hi MotionlessSeagull22
Hmm, I'm not sure this is possible in the UI.
You can compare multiple experiments and view the images as thumbnails one next to the other, but full view will be a single image...
You can however right-click on the image and get a direct link, then open a new tab ... :(
Hi AttractiveWoodpecker16
I think is the correct channel for that question.
(any chance you can move your thread there?)
Specifically, just email billing@clear.ml and they will cancel (no need to worry about the beginning of the month; just explain and they will not charge for Nov).
EDIT: I know they are working on making it a one-click in the UI; the main limit is what happens with the data that was stored above the free tier threshold. Anyhow, I think the next version will sort that as well.
Hi AbruptWorm50
I am currently using the repo cache,
What do you mean by "using the repo cache"? This is transparent; the agent does that, and users should not access that folder.
I also looked at the log you sent; why do you think it is re-downloading the repo?
gm folks, really liking ClearML so far as my top choice (after looking at dvc, mlflow), and thank you for your help here!
Thanks HurtWoodpecker30!
Is there a recommended workflow to be able to "drop into" the exact env (code, venv, data) of a previous experiment (which may have been several commits ago), to reproduce that experiment?
You can use clearml-agent on your local machine to build the env of any Task,
clearml-agent build --id <ta...
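The command above is truncated (it presumably continues with the Task id). A hedged sketch of such an invocation, assuming the --target flag selects the build folder:
```
# rebuild the full environment of a given Task into a local folder
clearml-agent build --id <task-id> --target ~/task_env
```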