Yep, the trains server is basically a docker-compose based service.
All you have to do is change the ports in the docker-compose.yml
file.
If you followed the instructions in the docs you should find that file in /opt/trains/docker-compose.yml
and then you will see that there are multiple services (apiserver, elasticsearch, redis, etc.) and in each there might be a section called ports which states the mapping of the ports.
The number on the left is ...
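To illustrate (service names and port numbers below are just the usual trains-server defaults, so double-check against your own docker-compose.yml): the number on the left of each mapping is the port on the host machine, the one on the right is the port inside the container, so the left side is what you'd change.
```yaml
# illustrative ports sections - the values are common defaults, not necessarily yours
services:
  apiserver:
    ports:
      - "8008:8008"   # host_port:container_port - change the left number to move the API server
  fileserver:
    ports:
      - "8081:8081"
  webserver:
    ports:
      - "8080:80"     # e.g. expose the web UI on host port 8080
```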
I was referring to what the returned object of Task.artifacts['...'] is when I call .get
I understand what I get; I'm asking because I want to see how the object I'm calling .get on behaves
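Roughly this is what I mean (the task ID and artifact name here are made up, purely to show where the .get call sits):
```python
from trains import Task

# made-up task ID and artifact name, just to show which object the question is about
task = Task.get_task(task_id="0123456789abcdef")
artifact = task.artifacts["my_artifact"]  # <- what kind of object is this?
obj = artifact.get()                      # <- and what exactly does .get() return here?
```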
When you are inside a project, the search bar searches for experiments.
So if you want to search inside a specific project, go to that project and use the search bar; if you want to search across everything, go to the project called "All Experiments" and search there
thx! I was looking in the docs for something like URL/URI, now I know why I didn't find it 😅
or is it the same place in the config file where the docker mode agent base image is configured?
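(The place I mean is this bit of trains.conf - the image value here is just an example, not what I actually run:)
```
# the section of ~/trains.conf I'm referring to (example image only)
agent {
    default_docker {
        image: "nvidia/cuda:10.1-runtime-ubuntu18.04"
    }
}
```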
Cool - so that means the fileserver which comes with the host will stay empty? Or is there anything else being stored there?
🤔 is the "installed packages" part editable? good to know
Isn't it a bit risky to manually change a package version? What if it won't be compatible with the rest?
TimelyPenguin76 I think our problem is that the agent is not using this environment, and I'm not sure which one it does use... Is there a way to hard-code the agent environment?
I don't even know where trains is coming from... While using the same environment I can't even import trains, see
even though I apply append
I mean I don't get how all the pieces add up
Will try this out and report
192.168.1.71?
Wait, suddenly the UI changed to 0.16.1, seems like I was shown a cached page
Wait but I don't want to execute it
If this includes scheduling through pipelines, in my opinion there should be an option to execute a pipeline without an agent. Sometimes for development I just want to execute a pipeline on my local machine just as I would a task...
For example, I have a DATA_DIR environment variable which points to the directory where the data is stored on disk
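Something like this, just to show what I mean (DATA_DIR is my own variable, nothing trains-specific):
```python
import os
from pathlib import Path

# DATA_DIR is my own environment variable pointing at the on-disk data, not a trains setting
data_dir = Path(os.environ["DATA_DIR"])
dataset_path = data_dir / "my_dataset"  # hypothetical subfolder, just for illustration
```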
the level of configurability in this thing is one of the best I've seen
AgitatedDove14 is the scale a part of the problem? Because not only are the colors wrong, the scale does not appear either
I tried what you said in the previous response, setting sdk.aws.s3.key and sdk.aws.s3.secret to the ones in my MinIO. Yet when I try to download an object, I get the following:
```
>>> result = manager.get_local_copy(remote_url="s3://*******:9000/test-bucket/test.txt")
2020-10-15 13:24:45,023 - trains.storage - ERROR - Could not download s3://*****:9000/test-bucket/test.txt , err: SSL validation failed for https://*****:9000/test-bucket/test.txt [SSL: WRONG_VERSION_NU...
```
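(Side note in case it helps whoever reads this later: the WRONG_VERSION_NUMBER part usually means an HTTPS request hit a plain-HTTP port. My understanding is that for MinIO you also need a per-host entry under sdk.aws.s3.credentials with secure set to false, rather than only the top-level key/secret - the host, key and secret below are placeholders:)
```
# sketch of the trains.conf section I believe MinIO needs - values are placeholders
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "my-minio-host:9000"   # MinIO address, including the port
                    key: "minio-access-key"
                    secret: "minio-secret-key"
                    multipart: false
                    secure: false                # plain HTTP, so no SSL handshake against port 9000
                }
            ]
        }
    }
}
```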