Understanding Misunderstandings: Evaluating LLMs on Networking Questions

Abstract
Large Language Models (LLMs) have demonstrated impressive abilities in tackling tasks across numerous domains. These capabilities could potentially be applied to various computer networking tasks, including network synthesis, management, debugging, security, and education. However, LLMs can be unreliable: they are prone to reasoning errors and may hallucinate incorrect information, and their effectiveness and limitations on computer networking tasks remain unclear. In this paper, we attempt to understand the capabilities and limitations of LLMs in network applications. We evaluate misunderstandings of networking-related concepts across three LLMs using over 500 questions, and we assess the reliability, explainability, and stability of LLM responses to those questions. Furthermore, we investigate the errors made, analyzing their causes, detectability, effects, and potential mitigation strategies.
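The stability assessment mentioned above can be quantified in several ways; one simple metric is the fraction of repeated responses to the same question that agree with the majority answer. The sketch below illustrates this idea with a hypothetical list of model answers (not data from the paper), and is only an assumed formulation, not the authors' actual methodology:

```python
from collections import Counter

def stability(answers):
    """Fraction of responses matching the most common (majority) answer.

    A score of 1.0 means the model answered identically on every run;
    lower scores indicate less stable responses to the same question.
    """
    if not answers:
        raise ValueError("need at least one response")
    # Count of the single most frequent answer across repeated runs.
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / len(answers)

# Hypothetical example: five repeated answers to one networking question.
print(stability(["B", "B", "B", "A", "B"]))  # → 0.8
```

A value near 1.0 would suggest the model's answer to that question is stable across repeated queries, while lower values flag questions where the model is inconsistent.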
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 14-24 |
| Number of pages | 11 |
| Journal | Computer Communication Review |
| Volume | 54 |
| Issue number | 4 |
| Early online date | Feb 11 2025 |
| DOIs | |
| State | Published - Feb 11 2025 |
Keywords
- Characterization Study
- Computer Networking
- Large Language Models
ASJC Scopus subject areas
- Software
- Computer Networks and Communications