Is Parameter Learning via Weighted Model Integration Tractable?

Morettin, Paolo; Passerini, Andrea
2021-01-01

Abstract

Weighted Model Integration (WMI) is a recent and general formalism for reasoning over hybrid continuous/discrete probabilistic models with logical and algebraic constraints. While many works have focused on inference in WMI models, the challenges of learning them from data have received much less attention. Our contribution is twofold. First, we provide novel theoretical insights on the problem of estimating the parameters of these models from data in a tractable way, generalizing previous results on maximum-likelihood estimation (MLE) to the broader family of log-linear WMI models. Second, we show how our results on WMI can characterize the tractability of inference and MLE for another widely used class of probabilistic models, Hinge Loss Markov Random Fields (HLMRFs). Specifically, we bridge these two areas of research by reducing marginal inference in HLMRFs to WMI inference, and thus we open up new interesting applications for both model classes.
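For background, the two tasks mentioned in the abstract can be stated in their standard forms; the sketch below uses the usual notation from the WMI and HL-MRF literature and is an assumption for illustration, not text reproduced from this record. Given an SMT(LRA) formula Delta over continuous variables x and Boolean variables A, and a non-negative weight function w, WMI sums, over all total truth assignments mu to A, the integral of w over the region of x that satisfies Delta under mu. HL-MRFs define a log-linear density over continuous variables y in [0,1]^n whose potentials are powers of hinge losses; marginal inference for such densities is what the abstract describes as being reduced to WMI inference.

% Background sketch (standard formulations, assumed notation; not quoted from this record)
% WMI: Delta^mu(x) denotes Delta restricted to the Boolean assignment mu
\mathrm{WMI}(\Delta, w \mid \mathbf{x}, \mathbf{A})
  \;=\; \sum_{\mu \in \mathbb{B}^{|\mathbf{A}|}}
        \int_{\Delta^{\mu}(\mathbf{x})} w(\mathbf{x}, \mu)\, d\mathbf{x}

% HL-MRF density with hinge-loss potentials phi_r, each ell_r linear in (y, x), rho_r in {1, 2}
P(\mathbf{y} \mid \mathbf{x})
  \;=\; \frac{1}{Z(\boldsymbol{\lambda}, \mathbf{x})}
        \exp\!\Big(-\sum_{r=1}^{m} \lambda_r\, \phi_r(\mathbf{y}, \mathbf{x})\Big),
  \qquad
  \phi_r(\mathbf{y}, \mathbf{x}) = \big(\max\{\ell_r(\mathbf{y}, \mathbf{x}),\, 0\}\big)^{\rho_r}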
Year: 2021
Event: 4th Workshop on Tractable Probabilistic Modeling
Place: Virtual (s.l.)
Publisher: UAI
Authors: Zeng, Zhe; Morettin, Paolo; Yan, Fanqi; Vergari, Antonio; Passerini, Andrea; Van den Broeck, Guy
Citation: Zeng, Zhe; Morettin, Paolo; Yan, Fanqi; Vergari, Antonio; Passerini, Andrea; Van den Broeck, Guy. Is Parameter Learning via Weighted Model Integration Tractable? (2021). Paper presented at the TPM workshop, held virtually on 30 July 2021.
Files in this record:

File: is_parameter_learning_via_weig.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 414.2 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/364921