PE&RS Call for Papers – Special Issue on “Large Language Models in Remote Sensing: Across Different Modalities”


Recent breakthroughs in multimodal Large Language Models (LLMs) have demonstrated their remarkable ability to understand, generate, and reason over natural language as well as complex image-based data modalities. While these models have revolutionized domains such as Natural Language Processing (NLP) and Computer Vision (CV), their application in remote sensing remains in its infancy. This special issue aims to explore the emerging intersection between the many data modalities of remote sensing and LLMs, highlighting their potential to enhance semantic understanding, automate interpretation, and enable human-like interaction with geospatial data.

Remote sensing data are rich in spatial and temporal information but often lack high-level, natural language-based semantic descriptors. While the application of LLMs to RGB image analysis is well understood, their use with other remote sensing modalities such as Synthetic Aperture Radar (SAR), non-RGB bands, and hyperspectral imagery remains underexplored. 3D point clouds, derived from LiDAR or photogrammetry, and downstream 3D models provide detailed three-dimensional structural information that can be analyzed by LLMs, which have recently demonstrated complex reasoning capabilities. By integrating LLMs with remote sensing pipelines, researchers can generate scene-level captions, facilitate cross-modal retrieval, enable question-answering over Earth Observation (EO) datasets, and construct natural language interfaces for geospatial analysis. Furthermore, LLMs may support knowledge extraction and reasoning from heterogeneous sources, bridging structured EO data with textual corpora such as scientific literature and reports. The goal of this special issue is to highlight and advance research on integrating LLMs with remote sensing modalities, including RGB imagery as well as modalities where LLM integration remains underexplored, such as SAR, non-RGB bands, hyperspectral imagery, 3D point clouds, and other 3D models. This integration aims to enhance semantic understanding, enable cross-modal interaction, and improve remote sensing scene analysis capabilities.

Topics of interest focus on the integration of remote sensing and GIS with Large Language Models (LLMs), Vision Language Models (VLMs), and Multimodal Large Language Models (MLLMs). They include, but are not limited to:

Multimodal Data Integration/Analysis and AI Techniques with LLMs and VLMs:
  • Fusion of various 2D and 3D remote sensing modalities with LLMs
  • LLM-assisted scene analysis and change detection
  • Open set/open vocabulary classification, segmentation, and object detection using 2D and 3D remote sensing data
  • Cross-modal representation learning and transfer learning
  • Temporal-spatial modeling and uncertainty quantification with LLMs

Applications of LLM/VLM/MLLM-assisted remote sensing and GIS for environmental, climate, and urban modeling:

  • Environmental Monitoring and Climate Analysis, including climate change impact assessment and carbon monitoring; agricultural monitoring and precision farming; and water resource management and drought assessment
  • Disaster Management and Emergency Response, including natural disaster detection and damage assessment; wildfire, flood, and extreme weather monitoring; emergency response planning and resource allocation; and risk assessment and early warning systems
  • Urban Development and Infrastructure, including urban expansion and smart city planning; infrastructure monitoring and development assessment; transportation network analysis; and population dynamics and demographic studies


Guest editors:

Kyle Gao, University of Waterloo, Canada, y56gao@uwaterloo.ca
Dening Lu, University of Waterloo, Canada, d62lu@uwaterloo.ca
Lincoln Xu, University of Calgary, Canada, lincoln.xu@ucalgary.ca

Anticipated timeline

Full paper submission deadline: November 1, 2026
Publication: Spring 2027

All PE&RS Calls for Papers can be found on our website: https://my.asprs.org/pers