Thanks a lot. I meant running a bash script after cloning the repository and setting the environment
Hmm that is currently not supported 😞
The main issue in adding support is where to store this bash script...
Perhaps somewhere inside ClearML there is an order of startup actions that can be changed?
Not that I can think of,
but let's assume you could have such a thing — what would you have put in the bash script? (basically I want to see if maybe there is a workaround)
Thanks GreasyPenguin66 ! please keep us updated 🙂
But a warning instead of an error would be good.
Yes, that makes sense, I'll make sure we do that
Does this sound like a reasonable workflow, or is there a better way maybe?
makes total sense to me, will be part of next RC 🙂
GiganticTurtle0 fix was just pushed to GitHub 🙂
pip install git+
dataset catalogue as advertised.
Creating the Dataset on ClearML is the catalog: you can move datasets around, put them in sub-folders, add tags, add metadata, search, etc. I think this qualifies as a dataset catalog, no?
I'm not sure I follow the example... Are you sure this experiment continued a previous run?
What was the last iteration on the previous run ?
MysteriousBee56
Well, we don't want to ask for sudo permission automatically, and usually setups do not change, but you can definitely call this one before running the agent 😉
sudo chmod 777 -R ~/.trains/
we have a separate cache
Why? they can share
I mean the caching will work, but it will reinstall this repository on top of the cached copy.
make sense ?
JitteryCoyote63 to filter out 'archived tasks' (i.e. exclude archived tasks):
Task.get_tasks(project_name="my-project", task_name="my-task", task_filter=dict(system_tags=["-archived"]))
Hmm should not make a diff.
Could you verify it still doesn't work with TF 2.4 ?
clearml python version: 1.9.1
could you upgrade to 1.9.3 and try?
Minio is on the same server and the 9000 and 9001 ports are open for tcp
just to be clear, the machine that runs your clearml code can in fact access the minio on port 9000 ?
I tested with the latest and everything seems to work as expected.
BTW: regarding "bucket-name", make sure it complies with the S3 standard; as a test, try changing it to just "bucket" with no hyphens
Hi UnsightlySeagull42
Could you test with the latest RC?
pip install clearml==1.0.4rc0
Also could you provide some logs?
Oh then this should just work:
cp -R --link b a/
You can achieve the same linked copy from python as well
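A minimal sketch of what that could look like in Python, assuming the same `b` source directory and `a/` destination as in the shell example above (here created inside a temp directory so the snippet is self-contained): `shutil.copytree` with `copy_function=os.link` creates hard links instead of copying file contents, just like `cp -R --link`.

```python
import os
import shutil
import tempfile

# Build a small source tree "b" inside a temp dir (stand-in for the
# real directories from the shell example above).
root = tempfile.mkdtemp()
src = os.path.join(root, "b")
os.makedirs(src)
with open(os.path.join(src, "data.bin"), "wb") as f:
    f.write(b"payload")

# Hard-link "copy": equivalent of `cp -R --link b a/`.
# copy_function=os.link makes copytree create hard links instead of
# duplicating file contents, so no extra disk space is used.
dst = os.path.join(root, "a", "b")
shutil.copytree(src, dst, copy_function=os.link)

# Both paths now reference the same inode (on POSIX filesystems).
print(os.path.samefile(os.path.join(src, "data.bin"),
                       os.path.join(dst, "data.bin")))  # True
```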
JumpyPig73 you should be able to find it at the bottom of the page, try scrolling down (it should be after the installed packages)
I pass my dataset as parameter of pipeline:
@<1523704757024198656:profile|MysteriousWalrus11> I think you were expecting the dataset_df
dataframe to be automatically serialized and passed, is that correct ?
If you are using add_step, all arguments are simple types (i.e. str, int etc.)
If you want to pass complex types, your code should be able to upload it as an artifact and then you can pass the artifact url (or name) for the next step.
Another option is to use pipeline from decorators...
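To illustrate the pattern described above (not ClearML's actual API, just the general idea): a step that produces a complex object serializes it to a file — the "artifact" — and only a simple `str` reference is handed to the next step. In ClearML itself you would upload via `task.upload_artifact(...)` and fetch it back from the task's artifacts; the step and variable names below are hypothetical.

```python
import os
import pickle
import tempfile

def producer_step(workdir: str) -> str:
    """Produce a complex object and 'upload' it as a file artifact."""
    dataset_df = {"col": [1, 2, 3]}  # stand-in for a real dataframe
    path = os.path.join(workdir, "dataset_df.pkl")
    with open(path, "wb") as f:
        pickle.dump(dataset_df, f)
    return path  # a simple str — safe to pass through add_step arguments

def consumer_step(artifact_ref: str):
    """Next step: resolve the reference and load the object back."""
    with open(artifact_ref, "rb") as f:
        return pickle.load(f)

workdir = tempfile.mkdtemp()
ref = producer_step(workdir)
restored = consumer_step(ref)
print(restored)  # {'col': [1, 2, 3]}
```

The key point is that the value crossing the step boundary (`ref`) stays a simple type, while the complex object travels through storage.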
SourSwallow36 it is possible.
Assuming you are not logging metrics by the same name, it should work.
try:
Task.init('examples', 'training', continue_last_task='<previous_task_id_here>')
Another question: do you have the argparse argument with type=str?
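For reference, a minimal sketch of the argparse pattern being asked about — declaring the argument with an explicit `type=str` (the argument name here is made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
# Explicit type=str, as asked about above; the flag name is hypothetical.
parser.add_argument("--previous-task-id", type=str, default="")
args = parser.parse_args(["--previous-task-id", "abc123"])
print(args.previous_task_id)  # abc123
```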
Hi John. sort of. It seems that archiving pipelines does not also archive the tasks that they contain so
This is correct, the rationale is that the components (i.e. Tasks) might be used (or already used) as cached steps ...
First let's verify with the manual change, but yes
Should I map the poetry cache volume to a location on the host?
Yes, this will solve it! (maybe we should have that automatically if using poetry as package manager)
Could you maybe add a github issue, so we do not forget ?
Meanwhile you can add the mapping here:
https://github.com/allegroai/clearml-agent/blob/bd411a19843fbb1e063b131e830a4515233bdf04/docs/clearml.conf#L137
extra_docker_arguments: ["-v", "/mnt/cache/poetry:/root/poetry_cache_here"]
CooperativeFox72 this is indeed sad news 😞
When you have the time, please see if you can send a code snippet to reproduce the issue. I'd like to have it fixed
I'll make sure we add the reference somewhere on GitHub
Could you amend the original snippet (or verify that it also produces plots in debug samples) ?
(Basically I need something that I can run 🙂 )
Thanks PompousBaldeagle18 !
Which software did you use to create the graphics?
Our designer, should I send your compliments 😉 ?
You should add which tech is being replaced by each product.
Good point! we are also missing a few products from the website, they will be there soon, hence the "soft launch"
So you are saying 156 chunks, with each chunk about ~6500 files ?
DrabOwl94 how many 1M files did you end up having ?