Facebook Research – Unsupervised Translation of Programming Languages
In this episode of Machine Learning Street Talk, Dr. Tim Scarfe, Yannic Kilcher and Connor Shorten spoke with Marie-Anne Lachaux, Baptiste Roziere and Dr. Guillaume Lample from Facebook AI Research (FAIR) in Paris. They recently released the paper “Unsupervised Translation of Programming Languages”, an exciting new approach to learned translation of programming languages (a learned transcoder) that uses an unsupervised encoder trained on separate monolingual corpora, i.e. no parallel language data is needed. The trick they used was that there is significant token overlap between programming languages when using word-piece embeddings, which anchors the shared embedding space. It was incredible to talk with this talented group of researchers, and I hope you enjoy the conversation too.
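As a toy illustration of the token-overlap idea (this is not the paper's actual BPE pipeline, just a naive regex tokenizer over two hand-written snippets), you can see how much vocabulary a C++ function and its Python counterpart already share — keywords, identifiers, digits, and punctuation that act as anchors for a shared embedding space:

```python
import re

def tokenize(code):
    # Naive stand-in for word-piece/BPE tokenization: split into
    # identifiers, integer literals, and single punctuation characters.
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code))

cpp_src = """
int sum_to(int n) {
    int total = 0;
    for (int i = 0; i < n; i++) total += i;
    return total;
}
"""

py_src = """
def sum_to(n):
    total = 0
    for i in range(n):
        total += i
    return total
"""

shared = tokenize(cpp_src) & tokenize(py_src)
print(sorted(shared))
# Tokens like 'for', 'return', 'total', 'i', 'n', '0', '=', '+'
# appear in both languages.
```

In the real system, these shared tokens give the cross-lingual masked language model a common signal, so semantically similar code from different languages lands near the same region of embedding space.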
00:00:20 YANNIC INTRO TAKE
00:01:20 WHAT DID THEY DO
00:02:10 YANNIC EXPLAINS UNSUPERVISED MACHINE TRANSLATION
00:03:50 COULD THIS REVOLUTIONISE SOFTWARE ENGINEERING?
00:04:50 YANNIC EXPLAINS PAPER #2
00:05:50 LMs USED TO REPRESENT MATHS (FAIR PAPER)
00:06:50 WHAT WAS THEIR PROCESS
00:07:45 HOW DOES THE ENCODER WORK? (YANNIC)
00:11:00 MAIN SHOW START
00:11:30 ELEVATOR PITCH FROM BAPTISTE
00:14:00 LEARNING X-LINGUAL REPRESENTATIONS
00:15:00 HOW MUCH OVERLAP DO YOU NEED BETWEEN LANGUAGES
00:18:40 HOW DO YOU COME UP WITH AN IDEA LIKE THIS
00:24:05 HOW DID YOU EVALUATE?
00:25:30 REASONING ABOUT WHY IT WORKS
00:32:00 COMPARED TO RULES-BASED
00:33:50 COULD IT HAVE BEEN DONE ANOTHER WAY/AST TREE
00:36:30 PATTERN RECOGNITION VS SYMBOLIC
00:39:30 ARE YOU TRYING TO REVOLUTIONISE SOFTWARE ENGINEERING?
00:41:00 WHAT’S IT LIKE AT FAIR?
00:45:15 STATE OF DEEP LEARNING?
00:48:40 SCALING DEBATE
00:50:10 REDDIT QUESTION: WHAT HAPPENS WHEN SOMETHING EXISTS IN LANG1 BUT NOT LANG2?
00:52:50 AR VS DENOISING AE AND SOFTWARE ENGINEERING USE CASES
00:58:00 OTHER APPLICATIONS FOR LMs i.e. MATHS
00:59:45 FAIR COMPARED TO OTHER LABS
01:02:00 END SHOW
Yannic’s video on this has been watched over 120K times! Check it out too: https://www.youtube.com/watch?v=xTzFJIknh7E
Paper authors: Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, Guillaume Lample
“A transcompiler, also known as source-to-source translator, is a system that converts source code from a high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modifications in order to work properly. The overall translation process is time-consuming and requires expertise in both the source and target languages, making code-translation projects expensive. Although neural models significantly outperform their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy. Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a significant margin.”
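The abstract's evaluation idea — checking translations with unit tests rather than text similarity — can be sketched in a few lines. This is a hypothetical, simplified evaluator (the function names and harness are mine, not the paper's code): a translation counts as correct if it produces the same outputs as the reference function on every test input.

```python
def passes_unit_tests(candidate, reference, test_inputs):
    # A translation is judged correct ("computational accuracy")
    # if it matches the reference output on all test inputs,
    # regardless of how the source code is written.
    return all(candidate(x) == reference(x) for x in test_inputs)

# Reference implementation in the target language (Python).
def reference(n):
    return sum(range(n))

# Stand-in for a function the transcoder translated from C++.
def candidate(n):
    total = 0
    for i in range(n):
        total += i
    return total

print(passes_unit_tests(candidate, reference, [0, 1, 5, 100]))  # True
```

This is why the released test set pairs its 852 parallel functions with unit tests: two syntactically different programs can still be graded as equivalent by their behavior.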