Description
Currently, according to the documentation, the way to use an emotion model (e.g., emotion2vec) is:
from funasr import AutoModel

# Standalone usage: emotion2vec is loaded as the main (and only) model
model = AutoModel(model="iic/emotion2vec_plus_large")

wav_file = f"{model.model_path}/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
print(res)
But what if I want to use it as an assistant model, alongside the main model (like the existing vad_model and spk_model arguments)? What should I do, or is there any plan to add such a parameter (ser_model="iic/emotion2vec_plus_large", maybe)?
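To illustrate, here is a minimal sketch of the kind of API I have in mind. ser_model is purely hypothetical and does not exist in AutoModel today; the other model aliases are just the usual examples from the docs.

from funasr import AutoModel

# Hypothetical sketch only -- ser_model is NOT a real AutoModel parameter,
# it is the kind of option this issue is asking for.
model = AutoModel(
    model="paraformer-zh",                   # main ASR model
    vad_model="fsmn-vad",                    # existing assistant model
    punc_model="ct-punc",                    # existing assistant model
    ser_model="iic/emotion2vec_plus_large",  # proposed: speech emotion recognition assistant
)

res = model.generate(input="test.wav")
print(res)  # ideally the result would also carry an emotion label/score per utterance or segment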