So just to be clear - the file server has nothing to do with the storage?
I tried what you said in the previous response, setting sdk.aws.s3.key and sdk.aws.s3.secret to my MinIO credentials. Yet when I try to download an object, I get the following:
>>> result = manager.get_local_copy(remote_url="s3://*******:9000/test-bucket/test.txt")
2020-10-15 13:24:45,023 - trains.storage - ERROR - Could not download s3://*****:9000/test-bucket/test.txt , err: SSL validation failed for https://*****:9000/test-bucket/test.txt [SSL: WRONG_VERSION_NU...
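(For reference: an SSL WRONG_VERSION_NUMBER error usually means the client is speaking HTTPS to an endpoint that only serves plain HTTP. A minimal sketch of the relevant trains.conf section, with placeholder host and keys for the MinIO instance; `secure: false` is the part that tells the SDK not to use HTTPS against that endpoint:)
```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # Placeholder MinIO address and keys
                    host: "my-minio-host:9000"
                    key: "minio-access-key"
                    secret: "minio-secret-key"
                    multipart: false
                    secure: false   # MinIO served over plain HTTP
                }
            ]
        }
    }
}
```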
Okay Jake, so that basically means I don't have to touch any server configuration regarding the file-server on the trains server? It will simply get ignored, and all I/O initiated by clients with the right configuration will take care of that?
I just tried setting the conf in the section Martin mentioned, and it works perfectly.
Martin: In your trains.conf, change the value files_server: 's3://ip:port/bucket'
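(In trains.conf that change would look roughly like the sketch below, with a placeholder MinIO address and bucket; files_server sits under the api section, and it is what the clients read when deciding where to upload artifacts and debug samples:)
```
api {
    # Placeholder MinIO endpoint and bucket
    files_server: "s3://my-minio-host:9000/test-bucket"
}
```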
Isn't this a client configuration (trains-init)? Shouldn't there be any change to the server configuration (/opt/trains/config...)?
I know I can configure the file server via trains-init - but that only touches the client side; what about the container on the trains server?
To be clearer - how do I refrain from using the built-in file-server altogether, and use MinIO for all storage needs?
This just keeps getting better and better.... 🤩
I think I got it, I'll ping here again if it doesn't succeed.
So could you re-explain assuming my pipeline object is created by pipeline = PipelineController(...)?
AgitatedDove14
So nope, this doesn't solve my case, I'll explain the full use case from the beginning.
I have a pipeline controller task, which launches 30 tasks. Semantically there are 10 applications, and I run 3 tasks for each (those 3 are sequential, so in the UI it looks like 10 lines of 3 tasks).
In one of those 3 tasks that run for every app, I save a dataframe under the name "my_dataframe".
What I want to achieve is once all tasks are over, to collect all those "my_dataframe" arti...
and then how would I register the final artifact to the pipeline? AgitatedDove14 ⬆
I want to collect the dataframes from the red tasks, and display them in the pipeline task
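(Something along these lines might work as a sketch. It assumes each child task uploaded a pandas DataFrame artifact named "my_dataframe" and that the children carry the pipeline task as their parent; the parent filter and the combined artifact name are assumptions, the Task.get_tasks / upload_artifact calls are the standard trains SDK:)
```python
import pandas as pd
from trains import Task

# The PipelineController's own task (the code below runs inside the controller)
pipeline_task = Task.current_task()

# Fetch the child tasks; filtering by 'parent' is an assumption about how
# the pipeline links its steps - adjust to however your tasks are tagged.
children = Task.get_tasks(task_filter={'parent': pipeline_task.id})

frames = []
for child in children:
    artifact = child.artifacts.get('my_dataframe')
    if artifact is not None:
        # Downloads the stored artifact and deserializes it back into a DataFrame
        frames.append(artifact.get())

# Register the combined result on the pipeline task so it shows up in its ARTIFACTS tab
if frames:
    combined = pd.concat(frames, ignore_index=True)
    pipeline_task.upload_artifact('my_dataframe_combined', artifact_object=combined)
```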
Okay, looks interesting but actually there is no final task, this is the pipeline layout
few minutes and I'll look at it
AgitatedDove14 worked like a charm, thanks a lot!
I'm trying it now
Sorry.. I still don't get it - when I'm launching an agent with the --docker flag or with the --services-mode flag, what is the difference? Can I use both flags? What does it mean? 🤔
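(For context, a rough sketch of the two invocations; the queue names and default image are placeholders. --docker makes the agent run each task inside a docker container, --services-mode lets a single agent keep pulling and run several lightweight tasks concurrently, and the two can be combined:)
```bash
# --docker: every task this agent pulls runs inside a docker container
# (the image here is just a placeholder default).
trains-agent daemon --queue default --docker nvidia/cuda:10.1-runtime-ubuntu18.04

# --services-mode: the agent runs multiple tasks concurrently, intended for
# lightweight "service" tasks such as pipeline controllers; combined with --docker here.
trains-agent daemon --queue services --services-mode --docker
```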
I don't think the problem is setting that variable; I think it has something to do with it, but it's not that obvious... Because it did work for me in the past, and since then we docker-compose up/downed a few times, changed some other things, etc... Can't figure out what got it to this point.
I re-executed the experiment, nothing changed
I guess what I want is a way to define environment variables in agents
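(One sketch of how that might be done when the agent runs tasks in docker mode: the agent-side trains.conf accepts extra arguments that are passed straight to docker run, so environment variables can be injected there. Variable names and values below are placeholders:)
```
agent {
    # Passed verbatim to "docker run" for every task container this agent launches
    extra_docker_arguments: ["-e", "MY_VAR=my_value", "-e", "OTHER_VAR=42"]
}
```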
I followed the upgrade instructions, still nothing