Hello Team,
I am facing an issue while creating/deploying a Named Entity Recognition ML skill in AI Center.
I am using the Enterprise edition and have successfully deployed the ML model for Invoices.
Logs:
Non-retryable error occurred while deploying mlskill: NamedEntity_Skill, reason: ModuleNotFoundError
  File "/microservice/main.py", line 3, in <module>
    import flair
ModuleNotFoundError: No module named 'flair'

During handling of the above exception, another exception occurred:

Wed Feb 9 06:09:49 UTC 2022 Content initialization complete.
Wed Feb 9 06:09:49 UTC 2022 Requirements downloaded

ERROR: Packages installed from PyPI cannot depend on packages which are not also hosted on PyPI.
tiny-tokenizer depends on sudachidict_core@ https://object-storage.tyo2.conoha.io/v1/nc_2520839e1f9641b08211a5c85243124a/sudachi/sudachidict_core-20190927.tar.gz
WARNING: tiny-tokenizer 3.2.0 does not provide the extra 'all'
WARNING: tiny-tokenizer 3.3.0 does not provide the extra 'all'
WARNING: tiny-tokenizer 3.4.0 does not provide the extra 'all'

Collecting tiny-tokenizer[all]
  Downloading tiny_tokenizer-3.2.0.tar.gz (982 bytes)
  Downloading tiny_tokenizer-3.3.0.tar.gz (965 bytes)
  Downloading tiny_tokenizer-3.4.0.tar.gz (971 bytes)
Collecting zipp==0.6.0
Collecting wrapt==1.11.2
Collecting wcwidth==0.1.8
Collecting urllib3==1.24.3
Collecting transformers==2.3.0
Collecting traitlets==4.3.3
Collecting tqdm==4.41.1
Collecting torchvision==0.4.2
Collecting torch==1.3.1
Collecting tiny-tokenizer==3.1.0
Collecting tabulate==0.8.6
Collecting sqlitedict==1.6.0
Collecting sortedcontainers==2.1.0
Collecting smart-open==1.9.0
Collecting sklearn==0.0
Collecting six==1.13.0
Collecting setuptools==44.0.0
Collecting sentencepiece==0.1.85
Collecting segtok==1.5.7
Collecting scipy==1.4.1
Collecting scikit-learn==0.22.1
... (truncated)
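For what it's worth, the first error in the log says that flair is simply not installed in the skill's Python environment when /microservice/main.py runs `import flair` (dependency installation apparently aborts earlier because of the non-PyPI sudachidict_core dependency). A minimal sketch of how I would sanity-check this locally, assuming the package's main.py imports flair at module load (the helper name here is my own, not part of any UiPath API):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the module can be located in this environment,
    without actually importing (and executing) it."""
    return importlib.util.find_spec(module_name) is not None

# Run this with the same Python environment the ML package is built against:
print(is_installed("json"))   # stdlib module, expected to be present
print(is_installed("flair"))  # if this prints False, the deploy will fail
                              # with the same ModuleNotFoundError as above
```

If this reports flair as missing, the fix presumably belongs in the package's requirements.txt rather than in AI Center itself.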
Any ideas, @loginerror @Palaniyappan @Lahiru.Fernando @Pablito?
Thanks,
Prathamesh.