I think poetry should somehow return an error if the toml is "empty", then we can detect it...
@<1533619716533260288:profile|SmallPigeon24> , a failed task should not actually be reused (i.e. cached). Are you saying a failed Task is being reused? Or are you saying that you want to "invalidate" the cache in the execution but still leave the Task as completed?
JitteryCoyote63 that makes total sense!!
The reporting subprocess is not being updated with the new value! Let me check how we can pass it along...
${PWD} works!
This will be resolved on every call to Task.init (so I would recommend against it), how about "$HOME/" ?
So I think there are two bugs here?
1. --args overrides="key=value" does not work
2. request: add --hydra to override hydra arguments (and if this is added, the first one is not needed)
Is that correct?
If a Task is in the 'Completed' state, I think the only option is to 'Reset' it (see image).
In the UI yes, in code you can do task.mark_aborted(force=True)
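Roughly, the code route would look something like this (the task id is just a placeholder):

from clearml import Task

task = Task.get_task(task_id="<your-task-id>")  # placeholder id

# force the Task out of 'Completed' so it can be modified / re-enqueued
task.mark_aborted(force=True)

# or reset it entirely, which clears the previous run's execution info
task.reset()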
You do clear the previous run's execution, but I think for a repetitive task this is fine.
I would avoid that, no?
Hi WackyRabbit7
Yes, we definitely need to work on wording there ...
"Dynamic" means you register a pandas object that you are constantly logging into while training, think for example the image files you are feeding into the network. Then Trains will make sure it is constantly updated & uploaded so you have a way to later verify/compare different runs and detect dataset contemplation etc.
"Static" is just, this is my object/file upload and store it as an artifact for me ...
Make sense ?
TrickyFox41 are you saying that if you add Task.init in the code it works, but when you are calling "clearml-task" it does not work? (in both cases editing the Args/overrides?)
It's the same, but done from the outside; you want the same for "offline" as well, right?
Hi UnevenDolphin73
Does ClearML somehow remove any loggers from the logging module? We suddenly noticed that we have some handlers missing when running in ClearML
I believe it adds a logger; it should not remove any loggers.
What's the clearml version you are using ?
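If it helps to verify, a quick (generic Python) check of the attached handlers before and after Task.init (project/task names are placeholders):

import logging
from clearml import Task

print(logging.getLogger().handlers)  # handlers before ClearML
task = Task.init(project_name="examples", task_name="logging check")
print(logging.getLogger().handlers)  # ClearML should only add here, not remove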
FierceRabbit20 it seems the Pipeline Task that was created is missing the "installed requirements" section. How are you creating the actual pipeline Task? is this from code?
Hmm, we could add an optional test for the python version, and then fail the Task if the python version is not found. wdyt?
Won't it be too harsh to have a system-wide restriction like that?
oh dear ...
ScrawnyLion96 let me check with front-end guys 😞
Notice that in your example you have
plt.figure()
This actually opens a new (blank) matplotlib figure, which is why we get a first white image and then the actual plot;
once I removed it, I got a single plot (no need for the manual reporting)
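So the fixed snippet is basically just (a minimal sketch, assuming the usual Task.init auto-reporting; names are placeholders):

import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib demo")

# no plt.figure() here, otherwise an extra blank figure is captured first
plt.plot([1, 2, 3], [4, 5, 6])
plt.title("my plot")
plt.show()  # auto-reported by ClearML as a single plot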
from sklearn.datasets import make_regression

X, y = make_regression(
    n_samples=100,     # Number of samples
    n_features=10,     # Number of features
    noise=0.1,         # Standard deviation of the gaussian noise
    random_state=42    # For reproducibility
)
# Convert to DataFrame for better f...
Hmm, could you check if it makes a difference to import ClearML before shap?
If this changes nothing, could you put together a standalone script that reproduces the issue?
right now I can't figure out how to get the session in order to get the notebook path
you mean the code that fires "HTTPConnectionPool" ?
Hi @<1784754456546512896:profile|ConfusedSealion46>
Is this reproducible ?
Could it be out of storage?
and since the update the docs seem to be a bit off, but I think I got it
Working on a whole new site 😉
The data I'm syncing comes from a data provider which supports only an FTP connection...
Right ... that makes sense :)
No worries WickedGoat98 , feel free to post questions when they arise. BTW: we are now improving the k8s glue, so by the time you get there the integration will be even easier 🙂
Sure @<1523720500038078464:profile|MotionlessSeagull22> DM me 🙂
FierceHamster54 are you saying that inside the container it took 20 min to run? Or that spinning up the GCP instance until it registered as an Agent took 20 min?
Most of the time is taken by building wheels for numpy and pandas ...
BTW: this happens if there is a version mismatch and pip decides it needs to build numpy from source. Can you send the full logs of that? Maybe we can somehow avoid it?
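One way to avoid the source build (just a sketch, the version pins are placeholders and should match wheels available for your Python) is to pin the packages before Task.init:

from clearml import Task

# hypothetical pins for illustration only
Task.add_requirements("numpy", "1.24.4")
Task.add_requirements("pandas", "2.0.3")
task = Task.init(project_name="examples", task_name="pinned deps")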
Hi @<1655744373268156416:profile|StickyShrimp60>
My hydra OmegaConf configuration object is not always being picked up, and I am unable to consistently reproduce it.
... I am using clearml v1.14.4,
Hmm, how can we reproduce it? What are you seeing when it does "miss" the hydra config, i.e. are you seeing any Hydra section at all? How are you running the code (manually, with an agent)?
@<1639799308809146368:profile|TritePigeon86> +1
data is going to s3 as well as ebs. Why so? It should only go to s3
This sounds odd; if this is mounted then it goes to S3 (the link will point to the files server, but it will be stored on the mounted drive, i.e. S3)
wdyt?
as a backup plan: is there a way to have an API key set up prior to running docker compose up?
Not sure I follow; the clearml API credentials pair is persistent across upgrades, and the storage access tokens are unrelated (i.e. also persistent). What am I missing?
Hi @<1689446563463565312:profile|SmallTurkey79>
App Credentials now disappear on restart.
You mean in the web UI?
It reflects what is stored by Keras, so if Keras stores the best model, this is what you get. BTW, if you pass output_uri=True it will automatically upload the models.
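For example (a minimal sketch, project/task names are placeholders):

from clearml import Task

# output_uri=True uploads model snapshots to the default files server / storage
task = Task.init(
    project_name="examples",
    task_name="keras training",
    output_uri=True,
)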