Learning bidding strategies with autonomous agents in environments with unstable equilibrium

Riyaz T. Sikora, Vishal Sachdev

Research output: Contribution to journal › Article › peer-review

Abstract

The role of automated agents for decision support in the electronic marketplace has been growing steadily and has attracted considerable research from the artificial intelligence community as well as from economists. In this paper, we study the efficacy of using automated agents to learn bidding strategies in contexts of strategic interaction involving multiple sellers in reverse auctions. Standard game-theoretic analysis of the problem assumes completely rational and omniscient agents in order to derive the Nash equilibrium seller policy. Most of the literature on the use of learning agents takes convergence to the Nash equilibrium as the validating criterion. In this paper, we consider a problem in which the Nash equilibrium is unstable and hence not useful as an evaluation criterion. Instead, we propose that agents should be able to learn the optimal or best-response strategies when they exist (rational behavior) and should demonstrate low variance in profits (convergence). We present rationally bounded evolutionary and reinforcement learning agents that exhibit these desirable properties of rational behavior and convergence.
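To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's actual agents) of two epsilon-greedy bandit-style sellers repeatedly competing in a reverse auction: the lower bid wins the sale and earns bid minus cost, and each agent updates a running estimate of each bid level's payoff. All parameter names and values here are illustrative assumptions.

```python
import random

def run_reverse_auction(n_rounds=5000, cost=10.0, bids=(11, 12, 13, 14, 15),
                        epsilon=0.1, seed=0):
    """Two epsilon-greedy learning sellers in a repeated reverse auction.

    Illustrative sketch only: the lowest bid wins and earns (bid - cost);
    a tie splits the sale. Each seller tracks an incremental mean payoff
    estimate per bid level and mostly plays its current best estimate.
    """
    rng = random.Random(seed)
    values = [{b: 0.0 for b in bids} for _ in range(2)]  # payoff estimates
    counts = [{b: 0 for b in bids} for _ in range(2)]    # times each bid used
    for _ in range(n_rounds):
        chosen = []
        for s in range(2):
            if rng.random() < epsilon:          # explore a random bid
                chosen.append(rng.choice(bids))
            else:                               # exploit best estimate so far
                chosen.append(max(bids, key=lambda b: values[s][b]))
        lo = min(chosen)
        for s in range(2):
            if chosen[s] == lo:
                share = 1.0 if chosen.count(lo) == 1 else 0.5
                profit = share * (chosen[s] - cost)
            else:
                profit = 0.0
            counts[s][chosen[s]] += 1
            # Incremental mean update of this bid level's estimated payoff.
            values[s][chosen[s]] += (profit - values[s][chosen[s]]) / counts[s][chosen[s]]
    return values
```

In this Bertrand-like game the pressure to undercut drives bids toward the lowest admissible level, which illustrates why profit variance, rather than convergence to an (unstable) equilibrium point, can be the more informative evaluation criterion.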

Original language: English (US)
Pages (from-to): 101-114
Number of pages: 14
Journal: Decision Support Systems
Volume: 46
Issue number: 1
DOIs: Yes
State: Published - Dec 2008
Externally published: Yes

Keywords

  • Automated agents
  • Bidding strategies
  • Evolutionary learning
  • Reinforcement learning
  • Strategic interactions
  • Unstable equilibrium

ASJC Scopus subject areas

  • Management Information Systems
  • Information Systems
  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)
  • Information Systems and Management

