In the larger context I'd look at how other object stores treat similar problems; I'm not that advanced in these topics.
But adding a simple `force_download` flag to the `get_local_copy` method could solve many of the cases I can think of. For example, I'd set it to true in my case, since I don't mind the occasional unnecessary re-download: the file is quite small. (Currently I always delete the local file first, but that looks pretty ugly.)
Continuing on this line of thought... Is it possible to call `task.execute_remotely` on a CPU-only machine (a data scientist's laptop, for example) and have the agent that fetches this task run it on a GPU? I'm asking because it is mentioned that it replicates the running environment of the task creator, which is exactly what I'm trying to avoid 😄
Could be. My point is that, in general, the ability to attach a named scalar (without an iteration/series dimension) to an experiment is valuable and basic when you want to track a metric across different experiments.
I assume we are talking about the IP I would find here, right?
https://www.whatismyip.com/
The Google Storage package could be the cause, because we do indeed have the env var set, but we don't actually use the Google Storage package.
Yeah, but I see it gets enqueued to the default queue, and I don't know what that queue is connected to.
If I execute this task using `python .....py`, will it execute on the machine I ran it on?
Any news on this? It's kind of worrying; this is something so basic, and I can't trust my prediction pipeline because sometimes it fails randomly for no reason.
Whatttt? I looked at `config_obj` and didn't find any `set` method.
Can you say a few words about how the non-pip-freeze mechanism for detecting packages works?
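My rough mental model of such a mechanism (an assumption on my side, not the documented implementation) is that it parses the script's source and records only the top-level modules it actually imports, instead of dumping the whole environment like `pip freeze` does. A sketch using the standard `ast` module:

```python
import ast

def detect_imported_packages(source_code: str) -> set:
    """Collect top-level module names imported by a script."""
    tree = ast.parse(source_code)
    packages = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "numpy.linalg" -> "numpy"
                packages.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            # absolute "from pandas.io import json" -> "pandas"
            packages.add(node.module.split(".")[0])
    return packages

script = "import numpy as np\nfrom pandas.io import json\nimport os"
print(detect_imported_packages(script))  # {'numpy', 'pandas', 'os'}
```

A real implementation would additionally map module names to installed distribution names and versions, but the key difference from `pip freeze` is that unused installed packages never make it into the requirements list.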
Is there a way to do so without touching the config? directly through the Task object?
In standard Docker, TimelyPenguin76, the quoting you mentioned is wrong, since the whole argument needs to be passed as one string; hence the tricky double quotation I posted above.
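The quoting issue can be reproduced generically with plain `sh`, no Docker needed (this is an illustrative stand-in for an agent building a `docker run ... bash -c "..."` line, not the actual command involved): an extra evaluation layer word-splits an unquoted argument, so an inner quoting layer is required to keep it whole.

```shell
# Without inner quotes, the inner shell word-splits the argument
# into three separate words:
sh -c 'printf "[%s]" $1' _ "a b c"    # prints [a][b][c]
echo

# Quoting the expansion inside the inner command keeps it as
# one argument:
sh -c 'printf "[%s]" "$1"' _ "a b c"  # prints [a b c]
echo
```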
SuccessfulKoala55 here it is
Do I need to copy this AWS scaler task to every project I want auto scaling on? What does it mean to enqueue the AWS scaler?
`logger.report_table(title="Inference Data", series="Inference Values", iteration=0, table_plot=inference_table)`
I don't fully get it; it says it has to be enqueued.
I'll check if this works tomorrow
Well this will have to wait a bit... my clearml-server is causing problems
Now I get this error in my Auto Scaler task:
`Warning! exception occurred: An error occurred (AuthFailure) when calling the RunInstances operation: AWS was not able to validate the provided access credentials. Retry in 15 seconds`
Oh, I get it. I thought it was only a UI issue... but it actually doesn't send it O_O
And once this is done, what is the file server IP good for? Will it redirect to the bucket?