TY - GEN

T1 - Margin-based decomposed amortized inference

AU - Kundu, Gourab

AU - Srikumar, Vivek

AU - Roth, Dan

PY - 2013/1/1

Y1 - 2013/1/1

N2 - Given that structured output prediction is typically performed over entire datasets, one natural question is whether it is possible to re-use computation from earlier inference instances to speed up inference for future instances. Amortized inference has been proposed as a way to accomplish this. In this paper, first, we introduce a new amortized inference algorithm called Margin-based Amortized Inference, which uses the notion of structured margin to identify inference problems for which previous solutions are provably optimal. Second, we introduce decomposed amortized inference, which is designed to address very large inference problems, where earlier amortization methods become less effective. This approach works by decomposing the output structure and applying amortization piecewise, thus increasing the chance that we can re-use previous solutions for parts of the output structure. These parts are then combined into a globally coherent solution using Lagrangian relaxation. In our experiments, using the NLP tasks of semantic role labeling and entity-relation extraction, we demonstrate that with the margin-based algorithm, we need to call the inference engine for only a third of the test examples. Further, we show that the decomposed variant of margin-based amortized inference achieves a greater reduction in the number of inference calls.

AB - Given that structured output prediction is typically performed over entire datasets, one natural question is whether it is possible to re-use computation from earlier inference instances to speed up inference for future instances. Amortized inference has been proposed as a way to accomplish this. In this paper, first, we introduce a new amortized inference algorithm called Margin-based Amortized Inference, which uses the notion of structured margin to identify inference problems for which previous solutions are provably optimal. Second, we introduce decomposed amortized inference, which is designed to address very large inference problems, where earlier amortization methods become less effective. This approach works by decomposing the output structure and applying amortization piecewise, thus increasing the chance that we can re-use previous solutions for parts of the output structure. These parts are then combined into a globally coherent solution using Lagrangian relaxation. In our experiments, using the NLP tasks of semantic role labeling and entity-relation extraction, we demonstrate that with the margin-based algorithm, we need to call the inference engine for only a third of the test examples. Further, we show that the decomposed variant of margin-based amortized inference achieves a greater reduction in the number of inference calls.

UR - http://www.scopus.com/inward/record.url?scp=84907358220&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84907358220&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84907358220

SN - 9781937284503

T3 - ACL 2013 - 51st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference

SP - 905

EP - 913

BT - ACL 2013 - 51st Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference: Long Papers

PB - Association for Computational Linguistics (ACL)

T2 - 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013

Y2 - 4 August 2013 through 9 August 2013

ER -