I'm using pro. Sorry for the delay, I didn't notice I never sent the response.
I'm not sure why the logs were incomplete. I think part of the reason it wasn't pulling from the repo was that it was pulling from cache. I cleared the clearml cache for that project and reran it. This should be the full log.
@<1523701087100473344:profile|SuccessfulKoala55> You wouldn't happen to know what's going on here? :D
It seems that the error is related to this part of the code block. However, when I comment this out I get the error I had 2 days ago with the missing configuration object.
Ah, that makes sense. What's supposed to be hidden changes depending on the section you're in, which makes sense. Now there needs to be a Pac-Man sprite easter egg hidden somewhere else.
Let me give that a try. Thanks for all the help.
That's what I was getting at. It wasn't clear to me from the documentation that it saves the state.
So far, when I delete a task or dataset that has artifacts on S3 using the web interface, it doesn't prompt me for credentials.
Hyperdatasets are the only ones that require a premium. If you're using normal datasets it should be fine.
I might have found the answer. I'll reply if it works as expected.
Thanks for always checking in @<1523701087100473344:profile|SuccessfulKoala55> 😛
Unfortunately, that doesn't seem to have solved the problem. I tried the same thing with https and it seems to skip the lines with the @ symbol like it did before. Honestly, it seems more like it just isn't parsing those lines during the install.
Collecting darts==0.25.0
Using cached darts-0.25.0-py3-none-any.whl (760 kB)
Collecting lightgbm
Using cached lightgbm-4.1.0-py3-none-manylinux_2_28_x86_64.whl (3.1 MB)
Collecting prophet
Using cached prophet-1.1.4-py3-none-manylinux_2_1...
They will be related through the task. Get the task information from the dataset, then get the model information from the task.
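Something like this should do it (rough sketch; I'm assuming the dataset id is also the id of its backing task, which I believe is the case in recent SDK versions):
from clearml import Dataset, Task
dataset = Dataset.get(dataset_id="<your dataset id>")  # placeholder id
task = Task.get_task(task_id=dataset.id)  # backing task of the dataset
print(task.models["input"], task.models["output"])  # models attached to that task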
Maybe the sleep between scheduler.mark_completed() and scheduler.delete() is too short? But I don't get why deleting the old scheduler task would break the new scheduler. I'm going to try testing by running the scheduler locally.
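Roughly what that part of my script does (simplified sketch; old_scheduler_id is just a placeholder for how I look up the previous scheduler task):
import time
from clearml import Task
old_scheduler = Task.get_task(task_id=old_scheduler_id)  # previous scheduler task
old_scheduler.mark_completed()
time.sleep(5)  # maybe this gap needs to be longer?
old_scheduler.delete()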
It's a corporate one. We are also looking into options on Github's end.
No error. Just a new task each time.
Alright, I fixed the issue with the scheduler eating itself. But now I'm still getting the same bug as two days ago. So the Scheduler process starts fine and doesn't "crash." But I don't get the config object in the web-app again. It seems to work if I run it locally.
To answer your earlier question, I'm using the app.clear.ml portal, so:
- WebApp: 3.20.1-1525
- Server: 3.20.1-1299
- API: 2.28
- And my Python ClearML version: 1.14
Actually, clearing the cache on the other project might have fixed it. I just tested it out and it seems to be working.
This doesn't really make a lot of sense. ClearML is better suited to tracking which version of the code you used for a corresponding task; you'd use something like GitHub or GitLab to host and version the code itself. You could use ClearML to help you reconstruct the environment and code from a task, given that the code is tracked by git and hosted somewhere you can access.
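For example, something along these lines (a rough sketch, assuming the original task was created from a git repo your agent can reach):
from clearml import Task
original = Task.get_task(task_id="<task id>")  # task that already points at your repo + commit
cloned = Task.clone(source_task=original, name="rerun of my experiment")
Task.enqueue(cloned, queue_name="default")  # the agent recreates the environment and checks out the code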
Sorry I disappeared (went on a well-deserved vacation). The problem is happening because of the ordering of the install. If I install using pip install -r ./requirements.txt, then pip installs the packages in the order of the requirements file. However, during the installation process from ClearML, it installs the packages in order UNLESS there's a custom path provided, in which case that package is saved for last. The reason this breaks my code is that I have later packages that depend on the custom packages, as ...
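To make it concrete, my requirements file is shaped roughly like this (package names made up for illustration):
git+https://github.com/myorg/custom-package.git   # custom path dependency
downstream-package==1.0.0   # needs custom-package to already be installed
Plain pip installs those top to bottom, but ClearML pushes the git+ line to the end, so the downstream package breaks.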
Thanks for your reply @<1523701070390366208:profile|CostlyOstrich36> Is there an example where a pipeline is built from existing tasks? I'd like to experiment with it and I don't see any examples of what you describe with my (clearly lacking) google-fu. What happens if you wrap a function that has a task.init() with a pipeline decorator, or is that the process you're speaking of?
Provide a bit more detail. What framework are you using?
I had 2 archived datasets and 0 unarchived. When I ran the following command:
Dataset.list_datasets(dataset_project=self.task.get_project_name(), only_completed=True)
It returned two entries for the two datasets I had archived.
Oh, I get what's happening. That segment of the code is rerun when the task is enqueued remotely, so it's deleting itself. This also explains why it works fine locally. It's an ouroboros: the task is deleting itself.
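In case anyone else hits this, the guard I'm planning to add looks roughly like this (assuming Task.running_locally() is the right check; old_scheduler_id is a placeholder):
from clearml import Task
if Task.running_locally():
    # only clean up the previous scheduler from the local run,
    # so the remotely enqueued copy doesn't delete itself
    old_scheduler = Task.get_task(task_id=old_scheduler_id)
    old_scheduler.mark_completed()
    old_scheduler.delete()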
What version of ClearML server are you using?