
Hi!
Would love assistance on the following:

I'm getting ValueError: Task object can only be updated if created or in_progress [status=failed fields=['hyperparams']]

The scenario is continued logging across processes, one process after another (sequentially).

Until recently I was able to do this without a problem.
I found a related answer on this (auto-crawler-looking website, probably extracted from somewhere else...) - None

Which leads me to the question - why is my task marked "completed"? What defines when it is marked like that?
And how can I prevent that from happening? (In my scenario, I don't mind if it is never marked as completed, if that means I can keep adding logs to it in the future.)

Thanks in advance 🙂

  
  
Posted one year ago

Answers 5


Hi @<1523703012214706176:profile|GorgeousMole24>, I'm not sure about the exact definition, but I think it's when the script finishes running or when the thread that called Task.init() finishes.
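
For illustration, a minimal sketch of what I mean, assuming a standard Task.init() usage (project and task names are placeholders):

```python
from clearml import Task

# Placeholder project/task names
task = Task.init(project_name="examples", task_name="sequential logging")

# ... work and logging happen here ...
task.get_logger().report_scalar("loss", "train", value=0.1, iteration=1)

# When the script (or the thread that called Task.init()) exits normally,
# ClearML closes the task and its status moves to "completed".
```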

  
  
Posted one year ago

I think in this case you can fetch the task object, force it into running mode and then edit whatever you want. Afterwards just mark it completed again.

None
Note the force parameter
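
Something along these lines, as a minimal sketch (the task ID and hyperparameter names are placeholders):

```python
from clearml import Task

# Placeholder task ID - use the ID of the task you want to update
task = Task.get_task(task_id="<your_task_id>")

# Force the task back into running ("in_progress") mode so it can be edited
# even after it was marked completed/failed - this is where force matters
task.mark_started(force=True)

# Edit whatever you need, e.g. hyperparameters or extra log entries
task.set_parameters({"General/learning_rate": 0.001})
task.get_logger().report_text("extra log line added after the run finished")

# Mark it completed again when you're done
task.mark_completed()
```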

  
  
Posted one year ago

Thank you @<1523701070390366208:profile|CostlyOstrich36>! Will try it now

  
  
Posted one year ago

So far it looks like it's working, thank you @<1523701070390366208:profile|CostlyOstrich36>!

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36>
In my scenario the process was killed by a cluster scheduler (after reaching a time limit), so that's why I find it weird.

Is there a way to prevent that from ever happening? Any property I can set, a flag, any method I can call, and/or even a "monkey patch" to prevent this from happening?
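
(For context, the workaround based on the suggestion above, applied at the start of each subsequent process - a rough sketch; how the task ID is handed between processes is just a placeholder:)

```python
from clearml import Task

# Placeholder: the shared task ID is passed to each subsequent process
# (e.g. via an environment variable or a CLI argument)
task = Task.get_task(task_id="<shared_task_id>")

# Re-force the task into running mode before this process logs anything,
# in case a previous process was killed and the task was marked failed/completed
task.mark_started(force=True)

# ... this process's logging continues here ...
task.get_logger().report_text("logging continued from a new process")
```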

  
  
Posted one year ago