Answered
Hello everyone, is logging of "AutoModelForCausalLM" models supported? Even when manually logging Llama-2-7b-hf, it does not appear in the model list (even when uploading as artifact)

Hello everyone,
Is logging of "AutoModelForCausalLM" models supported? Even when manually logging Llama-2-7b-hf, the model does not appear in the model list (even when uploading it as an artifact).

  
  
Posted 5 months ago

2 Answers


Hi @<1637624975324090368:profile|ElatedBat21>, do you have a code snippet that reproduces this? You can also manually log a model to the system using the OutputModel class.
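For reference, a minimal sketch of manual logging with OutputModel could look like the following (the project/task names, model name, and local weights path are placeholders):

from clearml import Task, OutputModel

# Placeholder names and paths - adjust to your setup.
task = Task.init(project_name='project_name', task_name='manual_model_logging')

output_model = OutputModel(task=task, framework='PyTorch')
output_model.update_weights(weights_filename='pytorch_model.bin')  # local weights file
output_model.set_name('manually logged model')
output_model.publish()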

  
  
Posted 5 months ago

Hi @<1523701070390366208:profile|CostlyOstrich36>

Thanks for the quick reply.

Here is a snippet of the code I use to register the model.
I load the model to set its parameters and then try to register it.
The result is a model pickle file in the Artifacts tab, but the model itself is never registered.

import torch
from clearml import Task
from transformers import AutoModelForCausalLM

# Llama2_source, device, temperature, max_new_tokens, top_k and
# num_return_sequences are defined earlier in the script.
task_name = "Llama2_model"
task = Task.init(project_name='project_name', task_name=task_name, output_uri="s3://...")

# Load the pretrained model from a local checkpoint.
model = AutoModelForCausalLM.from_pretrained(Llama2_source,
                                             trust_remote_code=True,
                                             local_files_only=True,
                                             torch_dtype=torch.float16,
                                             ).to(device)

model.config.do_sample = True
model.config.temperature = temperature

model_kwargs = {
    'do_sample': True,
    'temperature': temperature,
    'max_new_tokens': max_new_tokens,
    'top_k': top_k,
    'num_return_sequences': num_return_sequences,
}

# Uploading the model object only creates a pickle artifact;
# it does not register the model in the Models list.
task.upload_artifact('Llama2_model', model)

I also tried to use the OutputModel class, but with no results either:

from clearml import Task, OutputModel

task_name = "Llama2_model"
task = Task.init(project_name='project_name', task_name=task_name, output_uri="s3://...")
output_model = OutputModel(task=task)


...


# Save the weights locally and attach them to the output model.
model.save_pretrained('model')
output_model.update_weights('model')

output_model.set_name('Test Model Name')
output_model.publish()
task.close()
  
  
Posted 5 months ago