Summary
XGLUE is a new benchmark dataset for cross-lingual pre-training, understanding and generation. It can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.
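XGLUE is distributed through the Hugging Face Hub (the huggingface.co pages listed below). The sketch below shows one minimal way to load a task with the `datasets` library; the config name "xnli" and the split layout are assumptions based on that dataset card, not details confirmed here.

```python
# Minimal sketch, assuming the "xglue" dataset on the Hugging Face Hub
# exposes one config per task and that "xnli" is the cross-lingual NLI task.
from datasets import load_dataset

xnli = load_dataset("xglue", "xnli")

print(xnli)              # lists the available splits (train plus per-language dev/test)
print(xnli["train"][0])  # one English training example: premise, hypothesis, label
```

Each understanding task pairs English training data with dev/test sets in other languages, which is what makes zero-shot cross-lingual evaluation possible.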
Unicoder-VL, a universal encoder, is proposed to learn joint representations of vision and language through pre-training.
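To illustrate what "joint representations" means here, the following is a hypothetical, stripped-down joint encoder: image region features and text token embeddings are projected into a shared space and processed by a single Transformer, so self-attention mixes the two modalities. The dimensions, layer count, and region count are illustrative, not Unicoder-VL's actual configuration.

```python
# Hypothetical sketch of a joint vision-language encoder in PyTorch;
# not Unicoder-VL's actual architecture or hyperparameters.
import torch
import torch.nn as nn

class JointEncoder(nn.Module):
    def __init__(self, vocab_size=30000, dim=768, region_dim=2048, layers=6):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)  # text tokens -> dim
        self.img_proj = nn.Linear(region_dim, dim)    # region features -> dim
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, token_ids, region_feats):
        # Concatenate text and image "tokens" into one sequence so that
        # self-attention attends across both modalities.
        seq = torch.cat([self.tok_emb(token_ids), self.img_proj(region_feats)], dim=1)
        return self.encoder(seq)

model = JointEncoder()
tokens = torch.randint(0, 30000, (1, 16))  # a 16-token caption
regions = torch.randn(1, 36, 2048)         # 36 detected image regions
joint = model(tokens, regions)             # -> (1, 52, 768) joint representations
```

Pre-training objectives such as masked language modeling over this joint sequence then force the text side to ground itself in the image features.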
The base versions of Multilingual BERT, XLM and XLM-R are evaluated for comparison.
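A minimal sketch of setting these baselines up side by side with the `transformers` library; the checkpoint names are the commonly used Hub identifiers for the base-size models and are an assumption here, as is the shared AutoModel interface in place of each task's specific fine-tuning head.

```python
# Minimal sketch, assuming these Hub checkpoint names for the three baselines.
from transformers import AutoModel, AutoTokenizer

baselines = [
    "bert-base-multilingual-cased",  # Multilingual BERT, base
    "xlm-mlm-100-1280",              # XLM trained with masked LM on 100 languages
    "xlm-roberta-base",              # XLM-R, base
]

for name in baselines:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    inputs = tokenizer("XGLUE covers cross-lingual understanding and generation tasks.",
                       return_tensors="pt")
    hidden = model(**inputs).last_hidden_state  # contextual encoder output
    print(name, tuple(hidden.shape))
```

In the XGLUE setup each encoder is fine-tuned on English training data and then evaluated directly on the other languages, so the comparison measures zero-shot cross-lingual transfer.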
Summaries from the best pages on the web
In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and to evaluate their performance across a diverse set of cross-lingual tasks. We also propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language through pre-training, and we evaluate the base versions of Multilingual BERT, XLM and XLM-R for comparison.
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
researchgate.net
xglue · Discussions
huggingface.co
xglue at main
huggingface.co