Hello everybody, I have been testing ClearML as an all-in-one solution for MLOps in our team. I deployed the ClearML server and everything worked fine 🙂 ! T...
EagerGiraffe33
1 Question · 3 Answers · Active since 09 February 2023 · Last activity one year ago
Hey, Is There Some Way / Workaround To Speed Up Working With Datasets With A Large Number Of Files? Getting A Local Copy Of One Of Our Datasets With 70K Files Already Takes Longer Than Expected, But Working With A Dataset Of Around 100K Files That Has Multip…
Hello, I am a data engineer but new to ClearML.
If you train in batches then you only need access to the current batch of documents out of those 100k. You could keep the files on S3 and implement the fetch in the Dataset's __getitem__ method :)
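A minimal sketch of what I mean (not ClearML-specific; the bucket/prefix names are placeholders, and you would still decode the raw bytes into whatever your model expects):

```python
import boto3
from torch.utils.data import Dataset

class LazyS3Dataset(Dataset):
    """Lists S3 keys up front but only downloads a file when it is indexed,
    so a 100k-file dataset never has to be copied locally in full."""

    def __init__(self, bucket: str, prefix: str):
        self.bucket = bucket
        self.s3 = boto3.client("s3")
        # Materialize only the key listing; file bodies stay on S3.
        paginator = self.s3.get_paginator("list_objects_v2")
        self.keys = [
            obj["Key"]
            for page in paginator.paginate(Bucket=bucket, Prefix=prefix)
            for obj in page.get("Contents", [])
        ]

    def __len__(self) -> int:
        return len(self.keys)

    def __getitem__(self, idx: int) -> bytes:
        # One GET per sample; DataLoader workers parallelize these requests.
        obj = self.s3.get_object(Bucket=self.bucket, Key=self.keys[idx])
        return obj["Body"].read()
```

With `num_workers > 0` in the DataLoader, the per-sample fetches overlap, so you only ever pay for the files a batch actually touches.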
one year ago
Hello Everybody, I Have Been Testing ClearML As An All-In-One Solution For MLOps In Our Team. I Deployed The ClearML Server And Everything Worked Fine
Here is the full log of the experiment
one year ago
Hello Everybody, I Have Been Testing ClearML As An All-In-One Solution For MLOps In Our Team. I Deployed The ClearML Server And Everything Worked Fine
Thanks for the response.
I have tried exactly this and still get the same issue 😕 . I think the clearml-agent adds a specific search for the PyTorch wheels with its PytorchRequirements class and then raises this issue.
I don't quite understand why clearml tries to resolve specifically for the pytorch package 🤔 . Couldn't it just add the repo URL to pip's extra index and let pip resolve the wheels itself?
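For anyone landing here, what I am suggesting would look roughly like this in `clearml.conf` on the agent machine (`agent.package_manager.extra_index_url` is a documented setting; the CUDA 11.8 wheel index URL is just an example, substitute the one matching your driver):

```
agent {
    package_manager {
        # Example only: point pip at the PyTorch wheel index for your CUDA
        # version instead of relying on the agent's own PyTorch wheel lookup.
        extra_index_url: ["https://download.pytorch.org/whl/cu118"]
    }
}
```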
one year ago