This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Conversation

dbogunowicz
Contributor

OPT does not ship a "tokenizer.json"; instead, creating its tokenizer from pretrained requires the following files:
"special_tokens_map.json"
"vocab.json"
"merges.txt"

I.e., if those three files are present in the deployment directory, the following code will execute:

from transformers import AutoTokenizer

model_path = "deployment"
tokenizer = AutoTokenizer.from_pretrained(model_path)
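As a minimal sketch (not part of this PR), one might verify that a deployment directory actually contains the three required files before calling `from_pretrained`. The directory name and the helper `missing_tokenizer_files` are hypothetical; only the file list comes from the comment above.

```python
from pathlib import Path

# Files OPT needs to build its tokenizer from pretrained, per this PR.
REQUIRED_TOKENIZER_FILES = [
    "special_tokens_map.json",
    "vocab.json",
    "merges.txt",
]

def missing_tokenizer_files(model_path: str) -> list[str]:
    """Return the required tokenizer files absent from model_path."""
    root = Path(model_path)
    return [f for f in REQUIRED_TOKENIZER_FILES if not (root / f).exists()]
```

A caller could then fail fast with a clear message listing the missing files, rather than letting `AutoTokenizer.from_pretrained` raise a less specific error.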

@dbogunowicz dbogunowicz requested review from bfineran, natuan, a team and rahul-tuli and removed request for a team May 19, 2023 15:59
@dbogunowicz dbogunowicz requested a review from natuan May 22, 2023 08:31
@dbogunowicz dbogunowicz requested a review from anmarques May 22, 2023 15:06
@natuan natuan merged commit 3dd1f8d into main May 22, 2023
@natuan natuan deleted the feature/damian/tokenizer_files branch May 22, 2023 15:32