emedvedev/attention-ocr

Target Embedding Size explanation

mikylucky opened this issue · 2 comments

Hi,
can someone explain what the embedding_size parameter is used for? I can't figure out what "Embedding Size" means.

Thanks!

It's like converting the CNN feature maps into a dense representation that can then be fed to the attention mechanism. In NLP, we convert sentences into numbers by giving each word a vector representation, with or without context (these are called word embeddings), which are then used as the input (representing a sentence) to a model. https://machinelearningmastery.com/what-are-word-embeddings/

These embeddings can be learned during training.
We usually choose the dimensionality of the embeddings ourselves, so here embedding_size is the dimension of the vector used to map the CNN feature maps to a denser representation.
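For a concrete picture, here is a minimal sketch (not the project's actual code; the vocab_size, the token ids, and the variable name are made up for illustration): embedding_size is just the width of a learned lookup table, so each discrete id becomes a dense vector of that length, and the table is trained along with the rest of the model.

```python
import tensorflow as tf

# Hypothetical values for illustration; the real ones come from the model config.
vocab_size = 40        # number of distinct target characters
embedding_size = 10    # dimension of each dense embedding vector

# A learnable lookup table: each of the vocab_size ids maps to a vector of
# length embedding_size. It's a trainable variable, so the vectors are
# adjusted by backprop together with the rest of the network.
embedding_table = tf.Variable(
    tf.random.uniform([vocab_size, embedding_size], -1.0, 1.0),
    name="target_embedding",
)

# Example: embed a batch of two character-id sequences of length 5.
token_ids = tf.constant([[3, 17, 8, 0, 2],
                         [5, 5, 9, 1, 0]])
embedded = tf.nn.embedding_lookup(embedding_table, token_ids)
print(embedded.shape)  # (2, 5, 10) -> each id became a 10-dim vector
```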

Nice explanation, thanks!