# tencent/POINTS-Reader


We recommend using the following prompt for better performance, since it was used throughout the training process:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Qwen2VLImageProcessor

prompt = (
    'Please extract all the text from the image with the following requirements: '
    '1. Return tables in HTML format. '
    '2. Return all other text in Markdown format.'
)
image_path = '/path/to/your/local/image'
model_path = 'tencent/POINTS-Reader'
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             trust_remote_code=True,
                                             torch_dtype=torch.float16,
                                             device_map='cuda')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
image_processor = Qwen2VLImageProcessor.from_pretrained(model_path)

content = [
    dict(type='image', image=image_path),
    dict(type='text', text=prompt)
]
messages = [
    {
        'role': 'user',
        'content': content
    }
]
generation_config = {
    'max_new_tokens': 2048,
    'repetition_penalty': 1.05,
    'temperature': 0.7,
    'top_p': 0.8,
    'top_k': 20,
    'do_sample': True
}
response = model.chat(
    messages,
    tokenizer,
    image_processor,
    generation_config
)
print(response)
```
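
To convert a whole directory of document images, you can loop over files with the same call. This is a minimal sketch under the assumptions of the example above (one image per message, output written next to each source file; the directory path is illustrative):

```python
import os

image_dir = '/path/to/your/images'  # illustrative path
for name in sorted(os.listdir(image_dir)):
    if not name.lower().endswith(('.png', '.jpg', '.jpeg')):
        continue
    content = [
        dict(type='image', image=os.path.join(image_dir, name)),
        dict(type='text', text=prompt)
    ]
    messages = [{'role': 'user', 'content': content}]
    response = model.chat(messages, tokenizer, image_processor, generation_config)
    # Save each page's extraction alongside the source image.
    with open(os.path.join(image_dir, name + '.md'), 'w', encoding='utf-8') as f:
        f.write(response)
```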


If you encounter issues such as repetition, try increasing the resolution of the image to alleviate the problem.
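
If you want to upscale programmatically, here is a minimal sketch using Pillow; the 2x factor and the LANCZOS filter are illustrative starting points to experiment with, not values recommended by the model authors:

```python
from PIL import Image

def upscale_image(path: str, out_path: str, factor: float = 2.0) -> str:
    """Upscale an image before extraction to reduce repetition artifacts."""
    image = Image.open(path)
    new_size = (int(image.width * factor), int(image.height * factor))
    image.resize(new_size, Image.LANCZOS).save(out_path)
    return out_path

image_path = upscale_image('/path/to/your/local/image', '/tmp/upscaled.png')
```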

### Deploy with SGLang

We have created a [Pull Request](https://github.com/sgl-project/sglang/pull/9651) for SGLang. Until this PR is merged, you can check out its branch and install SGLang in editable mode by following the [official guide](https://docs.sglang.ai/get_started/install.html).

#### How to Deploy

You can deploy POINTS-Reader with SGLang using the following command:

```bash
python3 -m sglang.launch_server \
    --model-path tencent/POINTS-Reader \
    --tp-size 1 \
    --dp-size 1 \
    --chat-template points-v15-chat \
    --trust-remote-code \
    --port 8081
```
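
Once the server starts, you can quickly confirm it is live before sending requests. This sketch assumes SGLang's standard `/health` and `/get_model_info` endpoints and the port used above:

```python
import requests

base_url = 'http://127.0.0.1:8081'

# Liveness probe: returns HTTP 200 once the server is ready.
print(requests.get(f'{base_url}/health', timeout=5).status_code)

# Reports the model path the server was launched with.
print(requests.get(f'{base_url}/get_model_info', timeout=5).json())
```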


#### How to Use

You can use the following code to obtain results from SGLang:

```python

from typing import List
import requests
import json



def call_wepoints(messages: List[dict],
                  temperature: float = 0.0,
                  max_new_tokens: int = 2048,
                  repetition_penalty: float = 1.05,
                  top_p: float = 0.8,
                  top_k: int = 20,
                  do_sample: bool = True,
                  url: str = 'http://127.0.0.1:8081/v1/chat/completions') -> str:
    """Query WePOINTS model to generate a response.

    Args:
        messages (List[dict]): A list of messages to be sent to WePOINTS. The
            messages should be the standard OpenAI messages, like:
            [
                {
                    'role': 'user',
                    'content': [
                        {
                            'type': 'text',
                            'text': 'Please describe this image in short'
                        },
                        {
                            'type': 'image_url',
'image_url': {'url': '/path/to/image.jpg'}
                        }
                    ]
                }
            ]
        temperature (float, optional): The temperature of the model.
            Defaults to 0.0.
        max_new_tokens (int, optional): The maximum number of new tokens to generate.
            Defaults to 2048.
        repetition_penalty (float, optional): The penalty for repetition.
            Defaults to 1.05.
        top_p (float, optional): The top-p probability threshold.
            Defaults to 0.8.
        top_k (int, optional): The top-k sampling vocabulary size.
            Defaults to 20.
        do_sample (bool, optional): Whether to use sampling or greedy decoding.
            Defaults to True.
        url (str, optional): The URL of the WePOINTS model.
            Defaults to 'http://127.0.0.1:8081/v1/chat/completions'.

    Returns:
        str: The generated response from WePOINTS.
    """
    data = {
        'model': 'WePoints',
        'messages': messages,
        'max_new_tokens': max_new_tokens,
        'temperature': temperature,
        'repetition_penalty': repetition_penalty,
        'top_p': top_p,
        'top_k': top_k,
        'do_sample': do_sample,
    }
    response = requests.post(url,
                             json=data)
    response = json.loads(response.text)
    response = response['choices'][0]['message']['content']
    return response

prompt = (
    'Please extract all the text from the image with the following requirements:\n'
    '1. Return tables in HTML format.\n'
    '2. Return all other text in Markdown format.'
)

messages = [{
    'role': 'user',
    'content': [
        {
            'type': 'text',
            'text': prompt
        },
        {
            'type': 'image_url',
            'image_url': {'url': '/path/to/image.jpg'}
        }
    ]
}]
response = call_wepoints(messages)
print(response)
```
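
Note that passing a plain local path in `image_url` only works when the SGLang server process can read that file. If the client and server run on different machines, a common workaround is to inline the image as a base64 data URI, as in the sketch below (the helper `encode_image_as_data_uri` is our illustrative name, not part of the API):

```python
import base64

def encode_image_as_data_uri(path: str, mime: str = 'image/jpeg') -> str:
    """Read a local image and wrap it as a base64 data URI."""
    with open(path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('utf-8')
    return f'data:{mime};base64,{encoded}'

# Same request as above, but with the image embedded in the payload.
messages = [{
    'role': 'user',
    'content': [
        {'type': 'text', 'text': prompt},
        {
            'type': 'image_url',
            'image_url': {'url': encode_image_as_data_uri('/path/to/image.jpg')}
        }
    ]
}]
print(call_wepoints(messages))
```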
### Known Issues

- **Complex Document Parsing:** POINTS-Reader can struggle with complex layouts (e.g., newspapers), often producing repeated or missing content.
- **Handwritten Document Parsing:** It also has difficulty handling handwritten inputs (e.g., receipts, notes), which can lead to recognition errors or omissions.
- **Multi-language Document Parsing:** POINTS-Reader currently supports only English and Chinese, limiting its effectiveness on other languages.

### Citation

If you use this model in your work, please cite the following papers:

```bibtex
@article{points-reader,
  title={POINTS-Reader: Distillation-Free Adaptation of Vision-Language Models for Document Conversion},
  author={Liu, Yuan and Zhao, Zhongyin and Tian, Le and Wang, Haicheng and Ye, Xubing and You, Yangxiu and Yu, Zilin and Wu, Chuhan and Zhou, Xiao and Yu, Yang and Zhou, Jie},
  journal={arXiv preprint arXiv:2509.01215},
  year={2025}
}
```


```bibtex
@article{liu2024points1,
  title={POINTS1.5: Building a Vision-Language Model towards Real World Applications},
  author={Liu, Yuan and Tian, Le and Zhou, Xiao and Gao, Xinyu and Yu, Kavio and Yu, Yang and Zhou, Jie},
  journal={arXiv preprint arXiv:2412.08443},
  year={2024}
}

@article{liu2024points,
  title={POINTS: Improving Your Vision-language Model with Affordable Strategies},
  author={Liu, Yuan and Zhao, Zhongyin and Zhuang, Ziyuan and Tian, Le and Zhou, Xiao and Zhou, Jie},
  journal={arXiv preprint arXiv:2409.04828},
  year={2024}
}

@article{liu2024rethinking,
  title={Rethinking Overlooked Aspects in Vision-Language Models},
  author={Liu, Yuan and Tian, Le and Zhou, Xiao and Zhou, Jie},
  journal={arXiv preprint arXiv:2405.11850},
  year={2024}
}
```
