LLIE-CvT: A Convolutional Vision Transformer for Low-Light Image Enhancement

Enhancing dark, noisy, and low-contrast images using local convolutional priors and long-range transformer modeling.

Accepted to International Conference on Pattern Recognition (ICPR) 2026

Replace the placeholder links after the paper, code, and arXiv pages are public.

Input

Low illumination, weak contrast, sensor noise.

LLIE-CvT Output

Improved visibility, restored details, natural color.

LLIE-CvT combines convolutional feature extraction with transformer-style global context for robust low-light image enhancement.

Abstract

Low-light image enhancement is a challenging restoration problem: images captured in dim environments typically exhibit poor visibility, low contrast, color distortion, and amplified sensor noise. LLIE-CvT addresses this problem with a convolutional vision transformer design that preserves local spatial structure while modeling long-range dependencies across the image.

The architecture is designed to recover perceptually meaningful illumination and texture details without over-saturating bright regions or suppressing fine structures. Convolutional components capture local edges, textures, and illumination transitions, while transformer components provide broader contextual reasoning for globally consistent enhancement.

The resulting framework is intended for low-light restoration tasks where both quantitative fidelity and visual quality matter, including nighttime scenes, underexposed photographs, and vision pipelines that require reliable inputs under degraded lighting.

Method Overview

A hybrid image restoration pipeline for local detail recovery and global illumination consistency.

Low-Light Input

Underexposed RGB image with limited visibility, non-uniform illumination, and noise.

Convolutional Stem

Extracts local spatial patterns, edges, texture cues, and early restoration features.

CvT Blocks

Models global context while retaining efficient convolutional token projections.

Enhanced Output

Produces an enhanced image with stronger contrast, recovered detail, and more natural color.

LLIE-CvT architecture diagram
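The four stages above can be sketched end to end. The following is a minimal NumPy illustration of the idea, not the released model: the helper names (`conv2d`, `cvt_block`), the layer sizes, and the use of a single toy block with a shared query/key projection are all illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same' 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1:]
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o])
    return out

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cvt_block(feat, w_proj):
    """One toy CvT block: convolutional token projection + global self-attention."""
    c, h, w = feat.shape
    q = conv2d(feat, w_proj).reshape(c, h * w).T    # tokens: (HW, C)
    k = q                                           # shared projection keeps the sketch small
    v = feat.reshape(c, h * w).T
    attn = softmax(q @ k.T / np.sqrt(c))            # (HW, HW): long-range context
    return (attn @ v).T.reshape(c, h, w) + feat     # residual connection

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.2, size=(3, 8, 8))        # dark RGB input in [0, 0.2]

w_stem = rng.normal(0, 0.1, size=(8, 3, 3, 3))     # conv stem: 3 -> 8 channels
w_proj = rng.normal(0, 0.1, size=(8, 8, 3, 3))     # convolutional token projection
w_head = rng.normal(0, 0.1, size=(3, 8, 1, 1))     # 1x1 conv back to RGB

feat = np.maximum(conv2d(img, w_stem), 0.0)        # local edges / texture features
feat = cvt_block(feat, w_proj)                     # global context over all tokens
out = 1.0 / (1.0 + np.exp(-conv2d(feat, w_head)))  # enhanced image in (0, 1)
print(out.shape)  # (3, 8, 8)
```

The convolutional projection inside `cvt_block` is what distinguishes a CvT-style block from plain ViT attention: tokens are produced by a convolution over the feature map, so the attention operates on locally aggregated features rather than raw patch embeddings.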

Qualitative Results

Add your low-light input and enhanced output image pairs in static/images using the filenames shown below.

  • Example 1: low-light input / LLIE-CvT enhanced output
  • Example 2: low-light input / LLIE-CvT enhanced output
  • Example 3: low-light input / LLIE-CvT enhanced output

Key Ideas

  • Hybrid design: combines convolutional inductive bias with transformer context modeling.
  • Low-light restoration: targets brightness, contrast, color consistency, and detail recovery.
  • Research-ready page: includes sections for paper, code, results, architecture, and citation.
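The degradations targeted in the second bullet (brightness, contrast, noise) are often studied with synthetic input/target pairs made by darkening well-exposed images. The sketch below uses gamma compression plus Gaussian noise with arbitrary parameter values; it is a generic illustration of such degradations, not the paper's data pipeline.

```python
import numpy as np

def degrade(img, gamma=3.0, noise_sigma=0.03, seed=0):
    """Simulate a low-light capture: gamma-compress brightness, add sensor noise."""
    rng = np.random.default_rng(seed)
    dark = np.power(np.clip(img, 0.0, 1.0), gamma)          # crush brightness and contrast
    noisy = dark + rng.normal(0.0, noise_sigma, img.shape)  # crude shot/read-noise proxy
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((3, 4, 4), 0.6)   # a mid-gray "well-exposed" RGB patch
low = degrade(clean)              # darker and noisier than the input
```

A pair like `(low, clean)` would then serve as the model input and restoration target during training.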

Use Cases

  • Nighttime photography and surveillance imagery.
  • Preprocessing for downstream computer vision models.
  • Robust vision systems under degraded illumination.

BibTeX

@inproceedings{goswami2026lliecvt,
  title     = {LLIE-CvT: A Convolutional Vision Transformer for Low-Light Image Enhancement},
  author    = {Goswami, Debanjan},
  booktitle = {International Conference on Pattern Recognition},
  year      = {2026}
}

Update the author list, venue details, pages, DOI, and URL after the official proceedings entry is available.