For context: in October 2019, Google trained a model (T5) on 750 GB of training data with 11 billion parameters (vs. 40 GB and 1.5B parameters for GPT-2, released 8 months earlier).
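For a sense of scale, here is a back-of-envelope extrapolation of that growth rate. This is only a sketch: the assumption that the February-to-October 2019 growth rate continues unchanged is mine, not an established fact.

```python
# Back-of-envelope extrapolation of parameter-count growth.
# Assumption (mine, for illustration): the growth rate from
# GPT-2 (1.5B parameters, Feb 2019) to Google's 11B model
# (Oct 2019) continues unchanged for another year.

gpt2_params = 1.5e9        # GPT-2, February 2019
t5_params = 11e9           # Google's model, October 2019
months_between = 8

# Per-month growth factor implied by those two data points.
monthly_growth = (t5_params / gpt2_params) ** (1 / months_between)

# Project 12 more months forward at the same rate.
projected = t5_params * monthly_growth ** 12
print(f"implied monthly growth: {monthly_growth:.2f}x")
print(f"projected parameter count: {projected:.2e}")
```

Under that (strong) assumption, a model arriving about a year later would land in the low hundreds of billions of parameters, which is consistent with the parameter guess below.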
When will it appear? (My guess: 2020.)
Will it be created by OpenAI, and will it be publicly announced? (My guess: it will not be publicly known until 2021, but other companies may release open versions before then.)
How much data will be used for training, and of what type? (My guess: 400 GB of text plus accompanying images, but no audio or video.)
What will it be able to do? (My guess: translation, image generation from text, and text generation from images, at about 70 per cent of human performance.)
How many parameters will the model have? (My guess: 100 billion to a trillion.)
How much compute will be used for training? (No idea, though a rough bound is sketched below.)
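Even with no firm answer, the training compute can be roughly bounded. The sketch below assumes the common approximation of about 6 FLOPs per parameter per training token, roughly 4 bytes of raw text per token, and a single pass over the guessed 400 GB corpus; all of these are illustrative assumptions, not claims about the actual model.

```python
# Rough bound on training compute. Assumptions (mine, for
# illustration):
# - ~6 FLOPs per parameter per training token, a standard
#   approximation for dense transformer training;
# - ~4 bytes of raw text per token;
# - a single pass over the guessed 400 GB of text.

bytes_of_text = 400e9
tokens = bytes_of_text / 4               # ~1e11 tokens

PFLOPS_DAY = 1e15 * 86400                # FLOPs in one petaflop/s-day

for params in (100e9, 1e12):             # the 100B-to-1T guess
    flops = 6 * params * tokens
    print(f"{params:.0e} params -> {flops:.1e} FLOPs "
          f"(~{flops / PFLOPS_DAY:.0f} petaflop/s-days)")
```

Taking the approximation at face value, the 100B-to-1T range works out to roughly 6e22 to 6e23 FLOPs, or on the order of 700 to 7,000 petaflop/s-days.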