Ontology Reasoning in Knowledge Graphs

KGs Insights

A hands-on Python guide to the principles of generating new knowledge through logical processes


Figure 1 — An end-to-end process illustrating how starting statements lead to inferred ones through ontology reasoning

Introduction

Reasoning capabilities are a widely discussed topic in the context of AI systems. These capabilities are often associated with Large Language Models (LLMs), which are particularly effective at extracting patterns from vast amounts of data.

The knowledge captured during this learning process enables LLMs to perform various language tasks, such as question answering and text summarization, showing skills that resemble human reasoning.

It’s not helpful to just say “LLMs can’t reason”, since clearly they do some things which humans would use reasoning for. — Jeremy Howard, Co-Founder of Fast.AI and Digital Fellow at Stanford

Despite their ability to identify and match patterns within data, LLMs show limitations in tasks that require structured and formal reasoning, especially in fields that demand rigorous logical processes.

These limitations highlight the distinction between pattern recognition and genuine logical reasoning, a difference that humans do not always discern.
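To make this distinction concrete, the sketch below shows what formal reasoning looks like in its simplest form: a forward-chaining procedure that derives new statements from starting ones by repeatedly applying logical rules, as illustrated in Figure 1. The triples and rule names (`subClassOf`, `type`) are illustrative choices for this example, not tied to any particular library.

```python
# Starting statements, expressed as (subject, predicate, object) triples.
facts = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("rex", "type", "Dog"),
}

def infer(facts):
    """Apply two simple ontology rules until no new triples appear:
    1. subClassOf is transitive.
    2. An instance of a class is also an instance of its superclasses.
    """
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in inferred:
            for (c, p2, d) in inferred:
                if b != c:
                    continue
                if p1 == "subClassOf" and p2 == "subClassOf":
                    new.add((a, "subClassOf", d))  # rule 1: transitivity
                elif p1 == "type" and p2 == "subClassOf":
                    new.add((a, "type", d))        # rule 2: type propagation
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

closure = infer(facts)
# ("rex", "type", "Animal") is never stated explicitly;
# it follows logically from the starting statements.
print(("rex", "type", "Animal") in closure)
```

Unlike an LLM, which might produce the same answer by matching patterns seen in training data, this procedure is guaranteed to derive exactly the statements entailed by the rules, and nothing else.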