How to use OpenGVLab/ASM-FT with Transformers:

```python
# Load model directly
from transformers import AutoProcessor, UnifiedHuskyFlattenCatForCaption

processor = AutoProcessor.from_pretrained("OpenGVLab/ASM-FT")
model = UnifiedHuskyFlattenCatForCaption.from_pretrained("OpenGVLab/ASM-FT")
```
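If the checkpoint loads, captioning can then be run roughly as follows. This is a minimal, untested sketch that assumes the processor and model expose the standard Transformers vision-language interface (`processor(...)` plus `model.generate(...)`); the custom `UnifiedHuskyFlattenCatForCaption` class may instead require the code and calling convention from https://github.com/OpenGVLab/all-seeing.

```python
# Hypothetical usage sketch -- assumes the standard Transformers
# vision-language API; ASM's custom class may behave differently.
import requests
from PIL import Image

# Any RGB image works; this COCO validation image is just an example URL.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Prepare inputs and generate a caption.
inputs = processor(images=image, text="Describe the image.", return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```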
---
license: apache-2.0
---
# ASM-FT Model Card

## Model details

**Model type:**
ASM is a unified vision-language foundation model for open-world panoptic visual recognition and understanding. Aligned with LLMs, it supports versatile generation tasks and demonstrates impressive region comprehension capability.

**Model date:**
ASM was trained in July 2023.

**Paper or resources for more information:**
https://github.com/OpenGVLab/all-seeing

## License

ASM is open-sourced under the Apache License 2.0.

**Where to send questions or comments about the model:**
https://github.com/OpenGVLab/all-seeing/issues

## Intended use

**Primary intended uses:**
The primary use of ASM is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
The pretraining phase uses [AS-1B](https://huggingface.co/datasets/Weiyun1025/AS-100M/tree/main) and [Laion-COCO](https://huggingface.co/datasets/laion/laion-coco).
The fine-tuning phase uses [AS-Core](https://huggingface.co/datasets/Weiyun1025/AS-Core), [RefCOCOg](https://github.com/lichengunc/refer), [VG](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html), [LLaVA-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K), [COCO Caption](https://cocodataset.org/#home), [TextCaps](https://textvqa.org/textcaps/), [VQAv2](https://visualqa.org/), and [GQA](https://cs.stanford.edu/people/dorarad/gqa/).
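The Hugging Face-hosted portions of this data can be fetched with `huggingface_hub`. The sketch below is illustrative only: the repository IDs are taken from the links above, and each repository's on-disk layout (and size) varies, so inspect the files before building a loader.

```python
# Illustrative download of the Hugging Face-hosted training data repos.
# Repo IDs come from the dataset links above; file layouts differ per repo,
# and some (e.g. Laion-COCO) are very large, so download selectively.
from huggingface_hub import snapshot_download

for repo_id in ["Weiyun1025/AS-Core", "liuhaotian/LLaVA-Instruct-150K"]:
    local_dir = snapshot_download(repo_id=repo_id, repo_type="dataset")
    print(f"{repo_id} -> {local_dir}")
```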

## Evaluation dataset
A collection of 4 benchmarks: 2 image captioning benchmarks and 2 region captioning benchmarks.