Call for Evaluation Challenges

IJCKG 2025 will organize evaluation challenges related to Knowledge Graphs, aiming to provide researchers with a platform to test technologies, algorithms, and systems. Evaluation challenge organizers of IJCKG 2025 may select the platform and evaluation plan themselves, and we sincerely solicit evaluation challenges from researchers, research institutions, and enterprises in related fields.

At IJCKG 2025, each evaluation challenge will receive one slot during the main conference, where organizers present the challenge and participants present their solutions. At least one organizer must register for and attend the conference. Winners will receive a certificate and be invited to present their systems during the poster and demo session.

Topics
For IJCKG 2025, evaluation challenge proposals are invited for all tasks on Knowledge Graphs, including but not limited to:

  • Knowledge Representation and Reasoning
  • Knowledge Acquisition and Knowledge Graph Construction
  • Knowledge Graph Querying and Management
  • Semantic Web and Data Mining
  • Natural Language Understanding and Semantic Computing
  • Question Answering and Semantic Search
  • Neuro-Symbolic AI
  • Large Language Models and Knowledge Graphs
  • Machine Learning and Knowledge Graphs

Proposal Submission Guidelines
Proposals for evaluation challenges should be concise and include:

  • Names and affiliations of the organizers
  • Description of the evaluation challenge, including the specific task to be addressed, the details of the evaluation dataset, and its relevance to Knowledge Graphs
  • Procedure for evaluating the performance of systems, including metrics and availability of evaluation software
  • Expected number of participants with supporting evidence
  • Please submit the proposal by email to: tianxingwu 'AT' seu.edu.cn

Timeline for Evaluation Challenge Organizers:

  • Submission of evaluation challenges: July 24, 2025 (extended from June 1, 2025)
  • Website & first call for participation: August 7, 2025 (extended from June 7, 2025)
  • Release of datasets (including the test set): August 7, 2025 (extended from June 7, 2025)
  • Submission of systems & system description papers: August 14, 2025 (extended from August 1, 2025)
  • Notification of acceptance & winner announcement: August 14, 2025 (extended from August 10, 2025)
The winners will have the opportunity to publish their results as a short paper.
All deadlines are 23:59 AoE (Anywhere on Earth).

Evaluation Challenge Chair

  • Tianxing Wu (Southeast University, China)

Archer: Bilingual Reasoning-aware Text-to-SQL Evaluation

Organizers

  • Jeff Pan, University of Edinburgh
  • Zhichao Yan, Shanxi University
  • Wenyu Huang, University of Edinburgh

Description
Interacting with databases in natural language is a challenging task: natural language questions must be translated into executable SQL statements. Recent work achieves good performance on existing datasets but cannot efficiently perform complex reasoning. To this end, we propose Archer, a dataset that incorporates three types of reasoning to form more complex and nuanced queries: arithmetic reasoning, commonsense reasoning, and hypothetical reasoning. We benchmark both large language models and fine-tuned models on it. Arithmetic reasoning accounts for an important share of real-world SQL use cases. Commonsense reasoning is the ability to reason with implicit commonsense knowledge; Archer contains questions that require understanding the database to infer missing details. Hypothetical reasoning requires counterfactual thinking: the ability to imagine and reason about unseen situations based on observed facts and counterfactual hypotheses.
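To make the arithmetic-reasoning category concrete, here is a minimal illustrative text-to-SQL pair (not taken from the Archer dataset; the toy `products` table and the question are assumptions for illustration). The question cannot be answered by simple column lookup: the target SQL must compute an aggregate and compare against an arithmetic expression over it.

```python
import sqlite3

# Toy database standing in for the kind of schema a text-to-SQL system sees.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, unit_price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [("pen", 1.0), ("notebook", 3.0), ("desk lamp", 20.0)],
)

# Question: "Which products cost more than double the average unit price?"
# Answering it requires arithmetic over an aggregate (2 * AVG), not just retrieval.
sql = """
SELECT name FROM products
WHERE unit_price > 2 * (SELECT AVG(unit_price) FROM products)
"""
rows = [r[0] for r in conn.execute(sql)]
print(rows)  # ['desk lamp']  (average is 8.0, so the threshold is 16.0)
```

Commonsense and hypothetical questions follow the same pattern but additionally require inferring unstated details from the schema, or rewriting the query under a counterfactual assumption.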

Website
https://sig4kg.github.io/archer-bench/

Social Media
Slack: https://join.slack.com/t/archer-ijckg2025/shared_invite/zt-3855g81oj-Ke0YaLuN3mAwjHrqLtUTXw

Challenge Prizes

  • First place: 2,000 USD (1 place)
  • Second place: 1,000 USD (1 place)
  • Third place: 500 USD (2 places)

Important Dates

  • Sign-up for a team: August 21, 2025
  • Commit results and code: August 25, 2025
  • Results announcement: September 10, 2025
  • System and evaluation paper: September 20, 2025