CostlyOstrich36 This is for a step in the pipeline
I didn't; I prefer not to add temporary workarounds
What I'm doing is getting
parent = Task.get_task(task.parent)
and then checking parent.data.user, but the user is some unknown id that doesn't exist in the all_users list
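For context, a minimal sketch of the check described above. The helper names and the shape of `all_users` (a list of user-id strings) are my assumptions, not ClearML API; `Task.get_task`, `task.parent`, and `.data.user` are the accessors mentioned in the message:

```python
def user_is_known(user_id, all_user_ids):
    # Pure check: is this user id in the known-user list?
    return user_id in set(all_user_ids)

def get_parent_user(task_id):
    # ClearML calls kept inside the function so the sketch imports cleanly.
    from clearml import Task
    task = Task.get_task(task_id=task_id)
    parent = Task.get_task(task_id=task.parent)
    return parent.data.user
```

If `user_is_known(...)` returns False for the parent's user, the id may belong to a deleted account or a service account rather than a listed user.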
TimelyMouse69
Thanks for the reply, this is only regarding automatic logging, where I want to disable logging altogether (avoiding the task being added to the UI)
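One way to keep a run out of the UI entirely is ClearML's offline mode, which stops the SDK from talking to the server. `Task.set_offline` is part of the ClearML SDK; the env-var gate (`MY_DISABLE_CLEARML`) is a hypothetical convention of mine, shown here as a sketch:

```python
import os

def logging_disabled(env=None):
    # Hypothetical convention: export MY_DISABLE_CLEARML=1 to skip UI logging.
    env = os.environ if env is None else env
    return env.get("MY_DISABLE_CLEARML", "0") == "1"

def init_task(project, name):
    from clearml import Task  # lazy import keeps the sketch importable
    if logging_disabled():
        # Offline mode: nothing is sent to the server, so no task in the UI.
        Task.set_offline(offline_mode=True)
    return Task.init(project_name=project, task_name=name)
```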
Looks like it's working 🙂 tnx
It's models not datasets in our case...
But we can also just tar the folder and return that... Was just hoping to avoid doing that
BTW, I would expect this to happen automatically when running 'local' and 'debug'
It's a lot of manual work that you need to remember to undo
Also, I don't need to change it during execution, I want it for a specific run
Nothing that I think is relevant; I'm using the latest from master. It might be a new bug on their side, I wasn't sure.
This is the next step not being able to find the output of the last step
ValueError: Could not retrieve a local copy of artifact return_object, failed downloading
Yes, and the old version only works without the patch.
I see the model on the artifacts tab, but it's not actually uploaded.
@<1523701435869433856:profile|SmugDolphin23> @<1523701087100473344:profile|SuccessfulKoala55> Yes, the second issue still persists and is currently breaking our pipeline
Hey 🙂 Thanks for the update!
What I'm missing is the point where you report to ClearML between the cast and casting back 🤔
@<1523701118159294464:profile|ExasperatedCrab78>
Here is an example that reproduces the second error
from clearml.automation import PipelineDecorator
from clearml import TaskTypes
@PipelineDecorator.component(task_type=TaskTypes.data_processing, cache=True)
def run_demo():
from transformers import AutoTokenizer, DataCollatorForTokenClassification, AutoModelForSequenceClassification, TrainingArguments, Trainer
from datasets import load_dataset
import numpy as np
import ...
The full pipeline is a bit complex, but it reproduced the error even with this very dumb example
args.py #504:
for k, v in dictionary.items():
# if key is not present in the task's parameters, assume we didn't get this far when running
# in non-remote mode, and just add it to the task's parameters
if k not in parameters:
self._task.set_parameter((prefix or '') + k, v)
continue
task.py #1266:
def set_parameter(self, name, value, description=None, value_type=None):
# type: (str, str, Optional[str], O...
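To make the quoted logic concrete, here is a pure-Python mimic of that loop (illustrative only, not the actual clearml implementation): keys missing from the task's parameters are simply added, while present keys fall through to the casting logic below the `continue`:

```python
def merge_missing_params(parameters, dictionary, prefix=""):
    # Mimics the quoted args.py loop: a key absent from the task's
    # parameters is assumed not to have been reached in non-remote mode,
    # so it is added (the real code does this via set_parameter); keys
    # already present keep the task's value here.
    merged = dict(parameters)
    for k, v in dictionary.items():
        if k not in parameters:
            merged[prefix + k] = v
    return merged
```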
Confirming that only downgrading to transformers==4.21.3 without the patch worked...
This is a time bomb that eventually we won't be able to ignore... we will need to use new transformers code
I am currently on vacation, so I'll ask my teammates. But if not, I'll get to it next week
SmugDolphin23 SuccessfulKoala55 ^
@<1523701118159294464:profile|ExasperatedCrab78>
Ok, bummer to hear that it won't be included automatically in the package.
I am now experiencing a bug with the patch, not sure it's to blame... but I'm unable to save models in the pipeline... checking if it's related
I'm working with the patch, and installing transformers from github
I'll try to work on something that works on 1.7.2