Understanding Misunderstandings: Evaluating LLMs on Networking Questions

Research output: Contribution to journal › Article › peer-review

Abstract

Large Language Models (LLMs) have demonstrated impressive abilities in tackling tasks across numerous domains. The capabilities of LLMs could potentially be applied to various computer networking tasks, including network synthesis, management, debugging, security, and education. However, LLMs can be unreliable: they are prone to reasoning errors and may hallucinate incorrect information. Their effectiveness and limitations in computer networking tasks remain unclear. In this paper, we attempt to understand the capabilities and limitations of LLMs in network applications. We evaluate misunderstandings of networking-related concepts across three LLMs over 500 questions. We assess the reliability, explainability, and stability of LLM responses to networking questions. Furthermore, we investigate the errors made, analyzing their causes, detectability, effects, and potential mitigation strategies.

Original language: English (US)
Pages (from-to): 14-24
Number of pages: 11
Journal: Computer Communication Review
Volume: 54
Issue number: 4
Early online date: Feb 11 2025
DOIs
State: Published - Feb 11 2025

Keywords

  • Characterization Study
  • Computer Networking
  • Large Language Models

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications

