
Real-ESRGAN ncnn Vulkan


This project is the ncnn implementation of Real-ESRGAN. Real-ESRGAN ncnn Vulkan heavily borrows from realsr-ncnn-vulkan. Many thanks to nihui, ncnn and realsr-ncnn-vulkan 😁

Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration. We also optimize it for anime images.

If Real-ESRGAN is helpful in your photos/projects, please help to ⭐ this repo or recommend it to your friends. Thanks😊
Other recommended projects:
▶️ Real-ESRGAN: A practical algorithm for general image restoration
▶️ GFPGAN: A practical algorithm for real-world face restoration
▶️ BasicSR: An open-source image and video restoration toolbox
▶️ facexlib: A collection that provides useful face-related functions.
▶️ HandyView: A PyQt5-based image viewer that is handy for viewing and comparison.

📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

[Paper]   [Project Page]   [Demo]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences

⏳ TODO List

  • Support additional cheap arbitrary resizing (e.g., bicubic, bilinear) of the model outputs
  • Bug: Some PCs will output black images
  • Add the guidance for ncnn model conversion
  • Support face restoration - GFPGAN

💻 Usage

Example Command

realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesr-animevideov3 -s 2
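
Since input-path and output-path also accept directories (see the full usage below), a whole folder can be upscaled in one run. A minimal sketch, where the folder names are placeholders for your own paths:

realesrgan-ncnn-vulkan.exe -i input_frames -o output_frames -n realesr-animevideov3 -s 2 -f png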

Full Usage

Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...

  -h                   show this help
  -i input-path        input image path (jpg/png/webp) or directory
  -o output-path       output image path (jpg/png/webp) or directory
  -s scale             upscale ratio (can be 2, 3, 4. default=4)
  -t tile-size         tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
  -m model-path        folder path to the pre-trained models. default=models
  -n model-name        model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
  -g gpu-id            gpu device to use (default=auto) can be 0,1,2 for multi-gpu
  -j load:proc:save    thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
  -x                   enable tta mode
  -f format            output image format (jpg/png/webp, default=ext/png)
  -v                   verbose output
  • input-path and output-path accept either a file path or a directory path
  • scale = upscale ratio
  • tile-size = tile size; use a smaller value to reduce GPU memory usage (the default selects it automatically)
  • load:proc:save = thread counts for the three stages (image decoding, model upscaling, image encoding); larger values can increase GPU utilization and memory usage. Try "4:4:4" for many small images and "2:2:2" for large images. The default usually works fine; if your GPU is underutilized, increase the thread counts for faster processing (see the multi-GPU example after this list).
  • format = output image format; png is better supported, while webp generally yields smaller file sizes (both are losslessly encoded)
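
As an illustration built only from the flags above, a multi-GPU run on two GPUs might look like the following; the GPU ids, folder names, and per-GPU thread counts are assumptions about your setup, with tile size left on auto for each GPU:

realesrgan-ncnn-vulkan.exe -i input_folder -o output_folder -n realesrgan-x4plus -s 4 -g 0,1 -t 0,0 -j 1:2,2:2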

If you encounter a crash or an error, try upgrading your GPU driver.

🌏 Other Open-Source Code Used

📜 BibTeX

@InProceedings{wang2021realesrgan,
    author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
    date      = {2021}
}

📧 Contact

If you have any questions, please email xintao.wang@outlook.com or xintaowang@tencent.com.