Sequence-to-Sequence Model Using Multi-head Attention Transformer Architecture
sdev2030 · Posted on February 4, 2021