<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/scripts/pretty-feed-v3.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:h="http://www.w3.org/TR/html4/"><channel><title>Wending Zhao&apos;s blog</title><description>Welcome to my blog.</description><link>https://aqu1ver.fun</link><item><title>Conventional Commits 与语义化版本控制</title><link>https://aqu1ver.fun/blog/conventional-commits-and-semantic-versioning</link><guid isPermaLink="true">https://aqu1ver.fun/blog/conventional-commits-and-semantic-versioning</guid><description>标准化 Git Commit 信息格式，规范版本号递增规则，让提交记录可读、可自动化生成 CHANGELOG。</description><pubDate>Wed, 29 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Conventional Commits&lt;/h2&gt;
&lt;p&gt;Conventional Commits is a standardized format for Git commit messages. It keeps the commit history readable, lets version numbers and the update log (CHANGELOG) be generated automatically, and serves as a common convention for GitHub Actions, Semantic Versioning, and team collaboration.&lt;/p&gt;
&lt;p&gt;Reference: &lt;a href=&quot;https://www.conventionalcommits.org/en/v1.0.0/#specification&quot;&gt;https://www.conventionalcommits.org/en/v1.0.0/&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Standard format&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&amp;#x3C;type&gt;[optional scope]: &amp;#x3C;description&gt;

[optional body]

[optional footer(s)]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Commit types&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Commit type&lt;/th&gt;&lt;th&gt;Core meaning&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;fix&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Fix a bug in the codebase, resolving an existing problem&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;feat&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Add a new feature or capability to the codebase&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;build&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Change the build system, dependencies, or build tool configuration&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;chore&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Non-functional housekeeping that does not change features or logic&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ci&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Adjust continuous integration (&lt;a href=&quot;https://www.redhat.com/zh-cn/topics/devops/what-is-ci-cd&quot;&gt;CI/CD&lt;/a&gt;) pipelines and related configuration&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;docs&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Change project documentation, comments, or usage instructions&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;style&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Adjust code formatting and style without changing logic&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;refactor&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Restructure code without adding features or fixing bugs&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;perf&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Improve performance: faster execution, lower resource usage&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;test&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Add, modify, or remove test cases&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Footers&lt;/h3&gt;
&lt;p&gt;Footers can include, for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;BREAKING CHANGE: &amp;#x3C;description&gt;&lt;/code&gt; to call out a breaking change&lt;/li&gt;
&lt;li&gt;References to issues / tickets, e.g. &lt;code&gt;Closes #123&lt;/code&gt; &lt;code&gt;Fixes #456&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;An author sign-off: &lt;code&gt;Signed-off-by:&lt;/code&gt; Name &amp;#x3C;email&gt;&lt;/li&gt;
&lt;li&gt;Any &lt;a href=&quot;https://git-scm.com/docs/git-interpret-trailers&quot;&gt;git trailer&lt;/a&gt; in its standard form, e.g. &lt;code&gt;Reviewed-by:&lt;/code&gt; &lt;code&gt;Co-authored-by:&lt;/code&gt; &lt;code&gt;Refs:&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note that a &lt;a href=&quot;https://www.conventionalcommits.org/zh-hans/v1.0.0/#:~:text=trailer%20convention%20%E5%90%AF%E5%8F%91%EF%BC%89%E3%80%82-,%E8%84%9A%E6%B3%A8%E7%9A%84%E4%BB%A4%E7%89%8C,-%E5%BF%85%E9%A1%BB%E4%BD%BF%E7%94%A8%20%2D&quot;&gt;footer token&lt;/a&gt; must use &lt;code&gt;-&lt;/code&gt; in place of whitespace characters; spaces are not allowed.&lt;/p&gt;
&lt;h3&gt;Additional rules&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;A &lt;strong&gt;scope&lt;/strong&gt; may be appended to the type for extra context, in the form &lt;code&gt;type(scope): description&lt;/code&gt;, e.g. &lt;code&gt;feat(parser): add ability to parse arrays&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;!&lt;/code&gt; after the type/scope marks a breaking change, e.g. &lt;code&gt;feat(api)!: send an email to the customer when a product is shipped&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Types other than those in the specification may be defined as needed; a complete example follows below&lt;/li&gt;
&lt;/ol&gt;
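&lt;p&gt;Putting these rules together, a hypothetical commit message using a scope, the &lt;code&gt;!&lt;/code&gt; marker, a body, and several footers might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;feat(api)!: send an email to the customer when a product is shipped

Introduce a notification service and call it from the shipping flow.

BREAKING CHANGE: the notification service must now be configured before shipping.
Closes #123
Signed-off-by: Jane Doe &amp;#x3C;jane@example.com&gt;
&lt;/code&gt;&lt;/pre&gt;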
&lt;h2&gt;Semantic Versioning&lt;/h2&gt;
&lt;p&gt;Semantic Versioning is a set of rules for choosing software version numbers: a fixed MAJOR.MINOR.PATCH scheme whose components are incremented according to the kind of change made to the public API, so that the version number itself conveys compatibility information between releases.&lt;/p&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;Given a version number MAJOR.MINOR.PATCH (X.Y.Z), increment the:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;MAJOR version when you make incompatible API changes,&lt;/li&gt;
&lt;li&gt;MINOR version when you add functionality in a backward compatible manner,&lt;/li&gt;
&lt;li&gt;PATCH version when you make backward compatible bug fixes.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Labels for pre-release versions and build metadata may be appended after MAJOR.MINOR.PATCH as extensions.&lt;/p&gt;
&lt;h3&gt;Details&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://semver.org/lang/zh-CN/#%E8%AF%AD%E4%B9%89%E5%8C%96%E7%89%88%E6%9C%AC%E6%8E%A7%E5%88%B6%E8%A7%84%E8%8C%83semver&quot;&gt;Semantic Versioning Specification (SemVer)&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In short: bug fixes bump Z, new features bump Y, incompatible changes bump X; when a higher component is bumped, the lower ones reset to zero; and a pre-release suffix ranks below the corresponding normal release. In more detail:&lt;/p&gt;
&lt;h4&gt;How to increment the version number&lt;/h4&gt;
&lt;p&gt;During initial development, versions take the form 0.y.z: the API is unstable and may change at any time. 1.0.0 marks the point where the product and its public API are stable, after which versions are bumped according to the rules below (a worked example follows the list):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Patch version Z: +1 for backward compatible bug fixes only, with no change in functionality&lt;/li&gt;
&lt;li&gt;Minor version Y: +1 when adding backward compatible functionality or deprecating an API; it may also be bumped for substantial internal improvements. When Y is bumped, Z resets to zero&lt;/li&gt;
&lt;li&gt;Major version X: +1 for any backward incompatible API change. When X is bumped, Y and Z both reset to zero&lt;/li&gt;
&lt;/ol&gt;
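&lt;p&gt;For example, starting from a hypothetical release 1.4.2:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;1.4.2 + backward compatible bug fix      -&gt; 1.4.3
1.4.3 + backward compatible new feature  -&gt; 1.5.0   (Z resets to zero)
1.5.0 + incompatible API change          -&gt; 2.0.0   (Y and Z reset to zero)
&lt;/code&gt;&lt;/pre&gt;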
&lt;h4&gt;Pre-release and build metadata&lt;/h4&gt;
&lt;p&gt;A pre-release version appends identifiers to the version in the form X.Y.Z-identifier, joined with &lt;code&gt;-&lt;/code&gt;. Identifiers consist of alphanumerics and hyphens, contain no spaces, and numeric identifiers must not have leading zeroes; a pre-release version has lower precedence than the corresponding normal version (e.g. 1.0.0-alpha &amp;#x3C; 1.0.0).&lt;/p&gt;
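&lt;p&gt;The specification illustrates pre-release precedence with the following ordering:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;1.0.0-alpha &amp;#x3C; 1.0.0-alpha.1 &amp;#x3C; 1.0.0-alpha.beta &amp;#x3C; 1.0.0-beta
  &amp;#x3C; 1.0.0-beta.2 &amp;#x3C; 1.0.0-beta.11 &amp;#x3C; 1.0.0-rc.1 &amp;#x3C; 1.0.0
&lt;/code&gt;&lt;/pre&gt;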
&lt;p&gt;Build metadata comes after the patch or pre-release version, joined with &lt;code&gt;+&lt;/code&gt;, and is ignored entirely when comparing version precedence, e.g. 1.0.0-alpha+001, 1.0.0+20130313144700, 1.0.0-beta+exp.sha.5114f85.&lt;/p&gt;</content:encoded><h:img src="/_astro/image.Cjusei4M.png"/><enclosure url="/_astro/image.Cjusei4M.png"/></item><item><title>Developing with Dev Containers</title><link>https://aqu1ver.fun/blog/dev-container</link><guid isPermaLink="true">https://aqu1ver.fun/blog/dev-container</guid><description>Dev Containers turn a Docker container into a complete development environment: a configuration file defines the image, toolchain, dependencies, and VS Code settings, giving a consistent development experience across devices and teams.</description><pubDate>Mon, 27 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h1&gt;Developing with Dev Containers&lt;/h1&gt;
&lt;p&gt;Dev Containers are a way to use a Docker container as a complete development environment: a &lt;code&gt;.devcontainer/devcontainer.json&lt;/code&gt; configuration file defines the image, toolchain, dependencies, and VS Code settings, giving a consistent development experience across devices and teams.&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;Install the Dev Containers extension in VS Code. On Windows you also need WSL; installing Ubuntu is recommended rather than using the docker-desktop distro that Docker Desktop creates automatically during installation. Then enable WSL integration via Docker Desktop → Settings → Resources → WSL Integration.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://aqu1ver.fun/_astro/image.DJvzWWJl_Z1B3Ffb.webp&quot; alt=&quot;enable WSL integration in Docker Desktop&quot;&gt;&lt;/p&gt;
&lt;h2&gt;devcontainer.json&lt;/h2&gt;
&lt;h3&gt;Minimal configuration&lt;/h3&gt;
&lt;p&gt;Only the &lt;code&gt;image&lt;/code&gt; field is required; many templates can be found &lt;a href=&quot;https://containers.dev/templates&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;image&quot;: &quot;ghcr.io/devcontainers/templates/cpp:4.0.0&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the full set of options, see the &lt;a href=&quot;https://containers.dev/implementors/json_reference/&quot;&gt;Dev Container metadata reference&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;features&lt;/h3&gt;
&lt;p&gt;Features add extra tooling on top of the base image. For example, &lt;a href=&quot;https://github.com/devcontainers/features/tree/main/src/common-utils&quot;&gt;common-utils&lt;/a&gt; can set up &lt;code&gt;zsh&lt;/code&gt; and other utilities on top of the base template:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;image&quot;:  &quot;ghcr.io/devcontainers/templates/cpp:4.0.0&quot;,
    &quot;features&quot;: {
        &quot;ghcr.io/devcontainers/features/common-utils:2&quot;: {
            &quot;configureZshAsDefaultShell&quot;: true
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This installs &lt;code&gt;zsh&lt;/code&gt; in the container and sets it as the default shell.&lt;/p&gt;
&lt;p&gt;More features are listed &lt;a href=&quot;https://containers.dev/features&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
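&lt;p&gt;As a sketch of where this can go, the same template can also preinstall VS Code extensions and run a command once the container is created; the &lt;code&gt;customizations&lt;/code&gt; and &lt;code&gt;postCreateCommand&lt;/code&gt; fields are documented in the metadata reference linked above, while the concrete extension ID and command here are just placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;image&quot;: &quot;ghcr.io/devcontainers/templates/cpp:4.0.0&quot;,
    &quot;features&quot;: {
        &quot;ghcr.io/devcontainers/features/common-utils:2&quot;: {
            &quot;configureZshAsDefaultShell&quot;: true
        }
    },
    &quot;customizations&quot;: {
        &quot;vscode&quot;: {
            &quot;extensions&quot;: [&quot;ms-vscode.cpptools&quot;]
        }
    },
    &quot;postCreateCommand&quot;: &quot;gcc --version&quot;
}
&lt;/code&gt;&lt;/pre&gt;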
&lt;h2&gt;Developing&lt;/h2&gt;
&lt;p&gt;Inside &lt;strong&gt;WSL&lt;/strong&gt;, create &lt;code&gt;.devcontainer/devcontainer.json&lt;/code&gt; and run &lt;code&gt;code .&lt;/code&gt; to open the folder from WSL. VS Code will then show a &lt;code&gt;Reopen in Container&lt;/code&gt; prompt in the bottom-right corner.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://aqu1ver.fun/_astro/image-1.C052tAIZ_1PlR0m.webp&quot; alt=&quot;Reopen in Container - right&quot;&gt;&lt;/p&gt;
&lt;p&gt;Alternatively, press &lt;code&gt;Ctrl + Shift + P&lt;/code&gt; and run &lt;code&gt;Reopen in Container&lt;/code&gt;, then wait for the Docker image to be pulled.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Creating &lt;code&gt;.devcontainer/devcontainer.json&lt;/code&gt; on the Windows file system is not recommended: cross-file-system I/O performance is poor, and it defaults to the WSL distro created when Docker Desktop was installed, which makes configuring &lt;a href=&quot;https://learn.microsoft.com/zh-cn/windows/wsl/tutorials/gui-apps&quot;&gt;WSLg&lt;/a&gt; more troublesome.&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded><h:img src="/_astro/image-1.C052tAIZ.png"/><enclosure url="/_astro/image-1.C052tAIZ.png"/></item><item><title>Annotated Transformer</title><link>https://aqu1ver.fun/blog/annotated-transformer</link><guid isPermaLink="true">https://aqu1ver.fun/blog/annotated-transformer</guid><description>The Annotated Transformer - a detailed annotated implementation of Attention is All You Need</description><pubDate>Fri, 05 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The original article is at &lt;a href=&quot;http://nlp.seas.harvard.edu/annotated-transformer/&quot;&gt;http://nlp.seas.harvard.edu/annotated-transformer/&lt;/a&gt;; it is reposted here only for easier access, and all rights remain with the original authors.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;v2022: Austin Huang, Suraj Subramanian, Jonathan Sum, Khalid Almubarak,
and Stella Biderman.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href=&quot;https://nlp.seas.harvard.edu/2018/04/03/attention.html&quot;&gt;Original&lt;/a&gt;:
&lt;a href=&quot;http://rush-nlp.com/&quot;&gt;Sasha Rush&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Transformer has been on a lot of
people&apos;s minds over the last five years.
This post presents an annotated version of the paper in the
form of a line-by-line implementation. It reorders and deletes
some sections from the original paper and adds comments
throughout. This document itself is a working notebook, and should
be a completely usable implementation.
Code is available
&lt;a href=&quot;https://github.com/harvardnlp/annotated-transformer/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Prelims&lt;/h1&gt;
&lt;p&gt;Skip&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# !pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# # Uncomment for colab
# #
# !pip install -q torchdata==0.3.0 torchtext==0.12 spacy==3.2 altair GPUtil
# !python -m spacy download de_core_news_sm
# !python -m spacy download en_core_web_sm
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
from os.path import exists
import torch
import torch.nn as nn
from torch.nn.functional import log_softmax, pad
import math
import copy
import time
from torch.optim.lr_scheduler import LambdaLR
import pandas as pd
import altair as alt
from torchtext.data.functional import to_map_style_dataset
from torch.utils.data import DataLoader
from torchtext.vocab import build_vocab_from_iterator
import torchtext.datasets as datasets
import spacy
import GPUtil
import warnings
from torch.utils.data.distributed import DistributedSampler
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


# Set to False to skip notebook execution (e.g. for debugging)
warnings.filterwarnings(&quot;ignore&quot;)
RUN_EXAMPLES = True
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Some convenience helper functions used throughout the notebook


def is_interactive_notebook():
    return __name__ == &quot;__main__&quot;


def show_example(fn, args=[]):
    if __name__ == &quot;__main__&quot; and RUN_EXAMPLES:
        return fn(*args)


def execute_example(fn, args=[]):
    if __name__ == &quot;__main__&quot; and RUN_EXAMPLES:
        fn(*args)


class DummyOptimizer(torch.optim.Optimizer):
    def __init__(self):
        self.param_groups = [{&quot;lr&quot;: 0}]
        None

    def step(self):
        None

    def zero_grad(self, set_to_none=False):
        None


class DummyScheduler:
    def step(self):
        None
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;My comments are blockquoted. The main text is all from the paper itself.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;Background&lt;/h1&gt;
&lt;p&gt;The goal of reducing sequential computation also forms the
foundation of the Extended Neural GPU, ByteNet and ConvS2S, all of
which use convolutional neural networks as basic building block,
computing hidden representations in parallel for all input and
output positions. In these models, the number of operations required
to relate signals from two arbitrary input or output positions grows
in the distance between positions, linearly for ConvS2S and
logarithmically for ByteNet. This makes it more difficult to learn
dependencies between distant positions. In the Transformer this is
reduced to a constant number of operations, albeit at the cost of
reduced effective resolution due to averaging attention-weighted
positions, an effect we counteract with Multi-Head Attention.&lt;/p&gt;
&lt;p&gt;Self-attention, sometimes called intra-attention, is an attention
mechanism relating different positions of a single sequence in order
to compute a representation of the sequence. Self-attention has been
used successfully in a variety of tasks including reading
comprehension, abstractive summarization, textual entailment and
learning task-independent sentence representations. End-to-end
memory networks are based on a recurrent attention mechanism instead
of sequence-aligned recurrence and have been shown to perform well on
simple-language question answering and language modeling tasks.&lt;/p&gt;
&lt;p&gt;To the best of our knowledge, however, the Transformer is the first
transduction model relying entirely on self-attention to compute
representations of its input and output without using sequence
aligned RNNs or convolution.&lt;/p&gt;
&lt;h1&gt;Part 1: Model Architecture&lt;/h1&gt;
&lt;h1&gt;Model Architecture&lt;/h1&gt;
&lt;p&gt;Most competitive neural sequence transduction models have an
encoder-decoder structure
&lt;a href=&quot;https://arxiv.org/abs/1409.0473&quot;&gt;(cite)&lt;/a&gt;. Here, the encoder maps an
input sequence of symbol representations $(x_1, ..., x_n)$ to a
sequence of continuous representations $\mathbf{z} = (z_1, ...,
z_n)$. Given $\mathbf{z}$, the decoder then generates an output
sequence $(y_1,...,y_m)$ of symbols one element at a time. At each
step the model is auto-regressive
&lt;a href=&quot;https://arxiv.org/abs/1308.0850&quot;&gt;(cite)&lt;/a&gt;, consuming the previously
generated symbols as additional input when generating the next.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class EncoderDecoder(nn.Module):
    &quot;&quot;&quot;
    A standard Encoder-Decoder architecture. Base for this and many
    other models.
    &quot;&quot;&quot;

    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    def forward(self, src, tgt, src_mask, tgt_mask):
        &quot;Take in and process masked src and target sequences.&quot;
        return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class Generator(nn.Module):
    &quot;Define standard linear + softmax generation step.&quot;

    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return log_softmax(self.proj(x), dim=-1)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Transformer follows this overall architecture using stacked
self-attention and point-wise, fully connected layers for both the
encoder and decoder, shown in the left and right halves of Figure 1,
respectively.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://aqu1ver.fun/_astro/ModalNet-21.E_wcADzz_1H2Ujx.webp&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Encoder and Decoder Stacks&lt;/h2&gt;
&lt;h3&gt;Encoder&lt;/h3&gt;
&lt;p&gt;The encoder is composed of a stack of $N=6$ identical layers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def clones(module, N):
    &quot;Produce N identical layers.&quot;
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class Encoder(nn.Module):
    &quot;Core encoder is a stack of N layers&quot;

    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        &quot;Pass the input (and mask) through each layer in turn.&quot;
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We employ a residual connection
&lt;a href=&quot;https://arxiv.org/abs/1512.03385&quot;&gt;(cite)&lt;/a&gt; around each of the two
sub-layers, followed by layer normalization
&lt;a href=&quot;https://arxiv.org/abs/1607.06450&quot;&gt;(cite)&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class LayerNorm(nn.Module):
    &quot;Construct a layernorm module (See citation for details).&quot;

    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That is, the output of each sub-layer is $\mathrm{LayerNorm}(x +
\mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function
implemented by the sub-layer itself. We apply dropout
&lt;a href=&quot;http://jmlr.org/papers/v15/srivastava14a.html&quot;&gt;(cite)&lt;/a&gt; to the
output of each sub-layer, before it is added to the sub-layer input
and normalized.&lt;/p&gt;
&lt;p&gt;To facilitate these residual connections, all sub-layers in the
model, as well as the embedding layers, produce outputs of dimension
$d_{\text{model}}=512$.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class SublayerConnection(nn.Module):
    &quot;&quot;&quot;
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    &quot;&quot;&quot;

    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        &quot;Apply residual connection to any sublayer with the same size.&quot;
        return x + self.dropout(sublayer(self.norm(x)))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each layer has two sub-layers. The first is a multi-head
self-attention mechanism, and the second is a simple, position-wise
fully connected feed-forward network.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class EncoderLayer(nn.Module):
    &quot;Encoder is made up of self-attn and feed forward (defined below)&quot;

    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        &quot;Follow Figure 1 (left) for connections.&quot;
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Decoder&lt;/h3&gt;
&lt;p&gt;The decoder is also composed of a stack of $N=6$ identical layers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class Decoder(nn.Module):
    &quot;Generic N layer decoder with masking.&quot;

    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In addition to the two sub-layers in each encoder layer, the decoder
inserts a third sub-layer, which performs multi-head attention over
the output of the encoder stack. Similar to the encoder, we employ
residual connections around each of the sub-layers, followed by
layer normalization.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class DecoderLayer(nn.Module):
    &quot;Decoder is made of self-attn, src-attn, and feed forward (defined below)&quot;

    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        &quot;Follow Figure 1 (right) for connections.&quot;
        m = memory
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        return self.sublayer[2](x, self.feed_forward)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We also modify the self-attention sub-layer in the decoder stack to
prevent positions from attending to subsequent positions. This
masking, combined with the fact that the output embeddings are offset by
one position, ensures that the predictions for position $i$ can
depend only on the known outputs at positions less than $i$.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def subsequent_mask(size):
    &quot;Mask out subsequent positions.&quot;
    attn_shape = (1, size, size)
    subsequent_mask = torch.triu(torch.ones(attn_shape), diagonal=1).type(
        torch.uint8
    )
    return subsequent_mask == 0
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Below the attention mask shows the position each tgt word (row) is
allowed to look at (column). Words are blocked for attending to
future words during training.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def example_mask():
    LS_data = pd.concat(
        [
            pd.DataFrame(
                {
                    &quot;Subsequent Mask&quot;: subsequent_mask(20)[0][x, y].flatten(),
                    &quot;Window&quot;: y,
                    &quot;Masking&quot;: x,
                }
            )
            for y in range(20)
            for x in range(20)
        ]
    )

    return (
        alt.Chart(LS_data)
        .mark_rect()
        .properties(height=250, width=250)
        .encode(
            alt.X(&quot;Window:O&quot;),
            alt.Y(&quot;Masking:O&quot;),
            alt.Color(&quot;Subsequent Mask:Q&quot;, scale=alt.Scale(scheme=&quot;viridis&quot;)),
        )
        .interactive()
    )


show_example(example_mask)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Attention&lt;/h3&gt;
&lt;p&gt;An attention function can be described as mapping a query and a set
of key-value pairs to an output, where the query, keys, values, and
output are all vectors. The output is computed as a weighted sum of
the values, where the weight assigned to each value is computed by a
compatibility function of the query with the corresponding key.&lt;/p&gt;
&lt;p&gt;We call our particular attention &quot;Scaled Dot-Product Attention&quot;.
The input consists of queries and keys of dimension $d_k$, and
values of dimension $d_v$. We compute the dot products of the query
with all keys, divide each by $\sqrt{d_k}$, and apply a softmax
function to obtain the weights on the values.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://aqu1ver.fun/_astro/ModalNet-19.C-nEKeVT_2sDvWj.webp&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In practice, we compute the attention function on a set of queries
simultaneously, packed together into a matrix $Q$. The keys and
values are also packed together into matrices $K$ and $V$. We
compute the matrix of outputs as:&lt;/p&gt;
&lt;p&gt;$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
$$&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def attention(query, key, value, mask=None, dropout=None):
    &quot;Compute &apos;Scaled Dot Product Attention&apos;&quot;
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = scores.softmax(dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The two most commonly used attention functions are additive
attention &lt;a href=&quot;https://arxiv.org/abs/1409.0473&quot;&gt;(cite)&lt;/a&gt;, and dot-product
(multiplicative) attention. Dot-product attention is identical to
our algorithm, except for the scaling factor of
$\frac{1}{\sqrt{d_k}}$. Additive attention computes the
compatibility function using a feed-forward network with a single
hidden layer. While the two are similar in theoretical complexity,
dot-product attention is much faster and more space-efficient in
practice, since it can be implemented using highly optimized matrix
multiplication code.&lt;/p&gt;
&lt;p&gt;While for small values of $d_k$ the two mechanisms perform
similarly, additive attention outperforms dot product attention
without scaling for larger values of $d_k$
&lt;a href=&quot;https://arxiv.org/abs/1703.03906&quot;&gt;(cite)&lt;/a&gt;. We suspect that for
large values of $d_k$, the dot products grow large in magnitude,
pushing the softmax function into regions where it has extremely
small gradients (To illustrate why the dot products get large,
assume that the components of $q$ and $k$ are independent random
variables with mean $0$ and variance $1$. Then their dot product,
$q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance
$d_k$.). To counteract this effect, we scale the dot products by
$\frac{1}{\sqrt{d_k}}$.&lt;/p&gt;
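&lt;blockquote&gt;
&lt;p&gt;A quick numerical sanity check of the variance argument above (added to this repost, not part of the original notebook): the standard deviation of the dot product grows roughly like $\sqrt{d_k}$.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def example_dot_product_scale():
    # For q, k with independent components of mean 0 and variance 1, the dot
    # product q . k has mean 0 and variance d_k, so its typical magnitude
    # grows like sqrt(d_k) -- the motivation for the 1/sqrt(d_k) scaling.
    for d_k in [4, 64, 512]:
        q, k = torch.randn(10000, d_k), torch.randn(10000, d_k)
        print(d_k, (q * k).sum(-1).std().item())


show_example(example_dot_product_scale)
&lt;/code&gt;&lt;/pre&gt;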
&lt;p&gt;&lt;img src=&quot;https://aqu1ver.fun/_astro/ModalNet-20.BioF6ALs_Z1NuBVo.webp&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Multi-head attention allows the model to jointly attend to
information from different representation subspaces at different
positions. With a single attention head, averaging inhibits this.&lt;/p&gt;
&lt;p&gt;$$
\mathrm{MultiHead}(Q, K, V) =
\mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O \\
\text{where}~\mathrm{head_i} = \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)
$$&lt;/p&gt;
&lt;p&gt;Where the projections are parameter matrices $W^Q_i \in
\mathbb{R}^{d_{\text{model}} \times d_k}$, $W^K_i \in
\mathbb{R}^{d_{\text{model}} \times d_k}$, $W^V_i \in
\mathbb{R}^{d_{\text{model}} \times d_v}$ and $W^O \in
\mathbb{R}^{hd_v \times d_{\text{model}}}$.&lt;/p&gt;
&lt;p&gt;In this work we employ $h=8$ parallel attention layers, or
heads. For each of these we use $d_k=d_v=d_{\text{model}}/h=64$. Due
to the reduced dimension of each head, the total computational cost
is similar to that of single-head attention with full
dimensionality.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class MultiHeadedAttention(nn.Module):
    def __init__(self, h, d_model, dropout=0.1):
        &quot;Take in model size and number of heads.&quot;
        super(MultiHeadedAttention, self).__init__()
        assert d_model % h == 0
        # We assume d_v always equals d_k
        self.d_k = d_model // h
        self.h = h
        self.linears = clones(nn.Linear(d_model, d_model), 4)
        self.attn = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, query, key, value, mask=None):
        &quot;Implements Figure 2&quot;
        if mask is not None:
            # Same mask applied to all h heads.
            mask = mask.unsqueeze(1)
        nbatches = query.size(0)

        # 1) Do all the linear projections in batch from d_model =&gt; h x d_k
        query, key, value = [
            lin(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
            for lin, x in zip(self.linears, (query, key, value))
        ]

        # 2) Apply attention on all the projected vectors in batch.
        x, self.attn = attention(
            query, key, value, mask=mask, dropout=self.dropout
        )

        # 3) &quot;Concat&quot; using a view and apply a final linear.
        x = (
            x.transpose(1, 2)
            .contiguous()
            .view(nbatches, -1, self.h * self.d_k)
        )
        del query
        del key
        del value
        return self.linears[-1](x)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Applications of Attention in our Model&lt;/h3&gt;
&lt;p&gt;The Transformer uses multi-head attention in three different ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;In &quot;encoder-decoder attention&quot; layers, the queries come from the
previous decoder layer, and the memory keys and values come from the
output of the encoder. This allows every position in the decoder to
attend over all positions in the input sequence. This mimics the
typical encoder-decoder attention mechanisms in sequence-to-sequence
models such as &lt;a href=&quot;https://arxiv.org/abs/1609.08144&quot;&gt;(cite)&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The encoder contains self-attention layers. In a self-attention
layer all of the keys, values and queries come from the same place,
in this case, the output of the previous layer in the encoder. Each
position in the encoder can attend to all positions in the previous
layer of the encoder.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Similarly, self-attention layers in the decoder allow each
position in the decoder to attend to all positions in the decoder up
to and including that position. We need to prevent leftward
information flow in the decoder to preserve the auto-regressive
property. We implement this inside of scaled dot-product attention
by masking out (setting to $-\infty$) all values in the input of the
softmax which correspond to illegal connections.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Position-wise Feed-Forward Networks&lt;/h2&gt;
&lt;p&gt;In addition to attention sub-layers, each of the layers in our
encoder and decoder contains a fully connected feed-forward network,
which is applied to each position separately and identically. This
consists of two linear transformations with a ReLU activation in
between.&lt;/p&gt;
&lt;p&gt;$$\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2$$&lt;/p&gt;
&lt;p&gt;While the linear transformations are the same across different
positions, they use different parameters from layer to
layer. Another way of describing this is as two convolutions with
kernel size 1. The dimensionality of input and output is
$d_{\text{model}}=512$, and the inner-layer has dimensionality
$d_{ff}=2048$.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class PositionwiseFeedForward(nn.Module):
    &quot;Implements FFN equation.&quot;

    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.w_1 = nn.Linear(d_model, d_ff)
        self.w_2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w_2(self.dropout(self.w_1(x).relu()))
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Embeddings and Softmax&lt;/h2&gt;
&lt;p&gt;Similarly to other sequence transduction models, we use learned
embeddings to convert the input tokens and output tokens to vectors
of dimension $d_{\text{model}}$. We also use the usual learned
linear transformation and softmax function to convert the decoder
output to predicted next-token probabilities. In our model, we
share the same weight matrix between the two embedding layers and
the pre-softmax linear transformation, similar to
&lt;a href=&quot;https://arxiv.org/abs/1608.05859&quot;&gt;(cite)&lt;/a&gt;. In the embedding layers,
we multiply those weights by $\sqrt{d_{\text{model}}}$.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        super(Embeddings, self).__init__()
        self.lut = nn.Embedding(vocab, d_model)
        self.d_model = d_model

    def forward(self, x):
        return self.lut(x) * math.sqrt(self.d_model)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Positional Encoding&lt;/h2&gt;
&lt;p&gt;Since our model contains no recurrence and no convolution, in order
for the model to make use of the order of the sequence, we must
inject some information about the relative or absolute position of
the tokens in the sequence. To this end, we add &quot;positional
encodings&quot; to the input embeddings at the bottoms of the encoder and
decoder stacks. The positional encodings have the same dimension
$d_{\text{model}}$ as the embeddings, so that the two can be summed.
There are many choices of positional encodings, learned and fixed
&lt;a href=&quot;https://arxiv.org/pdf/1705.03122.pdf&quot;&gt;(cite)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this work, we use sine and cosine functions of different frequencies:&lt;/p&gt;
&lt;p&gt;$$PE_{(pos,2i)} = \sin(pos / 10000^{2i/d_{\text{model}}})$$&lt;/p&gt;
&lt;p&gt;$$PE_{(pos,2i+1)} = \cos(pos / 10000^{2i/d_{\text{model}}})$$&lt;/p&gt;
&lt;p&gt;where $pos$ is the position and $i$ is the dimension. That is, each
dimension of the positional encoding corresponds to a sinusoid. The
wavelengths form a geometric progression from $2\pi$ to $10000 \cdot
2\pi$. We chose this function because we hypothesized it would
allow the model to easily learn to attend by relative positions,
since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a
linear function of $PE_{pos}$.&lt;/p&gt;
&lt;p&gt;In addition, we apply dropout to the sums of the embeddings and the
positional encodings in both the encoder and decoder stacks. For
the base model, we use a rate of $P_{drop}=0.1$.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class PositionalEncoding(nn.Module):
    &quot;Implement the PE function.&quot;

    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        # Compute the positional encodings once in log space.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer(&quot;pe&quot;, pe)

    def forward(self, x):
        x = x + self.pe[:, : x.size(1)].requires_grad_(False)
        return self.dropout(x)
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Below the positional encoding will add in a sine wave based on
position. The frequency and offset of the wave is different for
each dimension.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def example_positional():
    pe = PositionalEncoding(20, 0)
    y = pe.forward(torch.zeros(1, 100, 20))

    data = pd.concat(
        [
            pd.DataFrame(
                {
                    &quot;embedding&quot;: y[0, :, dim],
                    &quot;dimension&quot;: dim,
                    &quot;position&quot;: list(range(100)),
                }
            )
            for dim in [4, 5, 6, 7]
        ]
    )

    return (
        alt.Chart(data)
        .mark_line()
        .properties(width=800)
        .encode(x=&quot;position&quot;, y=&quot;embedding&quot;, color=&quot;dimension:N&quot;)
        .interactive()
    )


show_example(example_positional)
&lt;/code&gt;&lt;/pre&gt;
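&lt;blockquote&gt;
&lt;p&gt;A small check of the claim above that $PE_{pos+k}$ is a linear function of $PE_{pos}$ (added to this repost, not part of the original notebook): for a fixed offset $k$, each (sine, cosine) pair is rotated by an angle that depends only on $k$, not on the position.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def example_pe_linearity(d_model=8, k=3):
    # PE[pos + k] should equal a fixed linear (rotation) map applied to
    # PE[pos], independent of pos.
    pe = PositionalEncoding(d_model, 0).pe[0]  # shape (max_len, d_model)
    div_term = torch.exp(
        torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model)
    )
    cos_k, sin_k = torch.cos(k * div_term), torch.sin(k * div_term)
    pos = torch.arange(0, 50)
    sin_part, cos_part = pe[pos][:, 0::2], pe[pos][:, 1::2]
    # Angle-addition identities give the rotated pair at position pos + k.
    shifted_sin = sin_part * cos_k + cos_part * sin_k
    shifted_cos = cos_part * cos_k - sin_part * sin_k
    print(torch.allclose(shifted_sin, pe[pos + k][:, 0::2], atol=1e-5))
    print(torch.allclose(shifted_cos, pe[pos + k][:, 1::2], atol=1e-5))


show_example(example_pe_linearity)
&lt;/code&gt;&lt;/pre&gt;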
&lt;p&gt;We also experimented with using learned positional embeddings
&lt;a href=&quot;https://arxiv.org/pdf/1705.03122.pdf&quot;&gt;(cite)&lt;/a&gt; instead, and found
that the two versions produced nearly identical results. We chose
the sinusoidal version because it may allow the model to extrapolate
to sequence lengths longer than the ones encountered during
training.&lt;/p&gt;
&lt;h2&gt;Full Model&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Here we define a function from hyperparameters to a full model.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def make_model(
    src_vocab, tgt_vocab, N=6, d_model=512, d_ff=2048, h=8, dropout=0.1
):
    &quot;Helper: Construct a model from hyperparameters.&quot;
    c = copy.deepcopy
    attn = MultiHeadedAttention(h, d_model)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
        Generator(d_model, tgt_vocab),
    )

    # This was important from their code.
    # Initialize parameters with Glorot / fan_avg.
    for p in model.parameters():
        if p.dim() &gt; 1:
            nn.init.xavier_uniform_(p)
    return model
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Inference:&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Here we make a forward step to generate a prediction of the
model. We try to use our transformer to memorize the input. As you
will see the output is randomly generated due to the fact that the
model is not trained yet. In the next tutorial we will build the
training function and try to train our model to memorize the numbers
from 1 to 10.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def inference_test():
    test_model = make_model(11, 11, 2)
    test_model.eval()
    src = torch.LongTensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
    src_mask = torch.ones(1, 1, 10)

    memory = test_model.encode(src, src_mask)
    ys = torch.zeros(1, 1).type_as(src)

    for i in range(9):
        out = test_model.decode(
            memory, src_mask, ys, subsequent_mask(ys.size(1)).type_as(src.data)
        )
        prob = test_model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.data[0]
        ys = torch.cat(
            [ys, torch.empty(1, 1).type_as(src.data).fill_(next_word)], dim=1
        )

    print(&quot;Example Untrained Model Prediction:&quot;, ys)


def run_tests():
    for _ in range(10):
        inference_test()


show_example(run_tests)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Example Untrained Model Prediction: tensor([[ 0, 10,  0, 10,  0,  0,  0,  0,  0, 10]])
Example Untrained Model Prediction: tensor([[ 0,  8,  1, 10,  0,  8,  1, 10,  0,  8]])


Example Untrained Model Prediction: tensor([[ 0,  9,  0, 10,  4,  5,  3,  2,  4,  3]])
Example Untrained Model Prediction: tensor([[0, 5, 5, 5, 5, 5, 5, 5, 5, 5]])


Example Untrained Model Prediction: tensor([[0, 2, 8, 3, 8, 5, 0, 4, 0, 4]])
Example Untrained Model Prediction: tensor([[ 0, 10,  3, 10,  2,  9,  0,  3, 10,  3]])


Example Untrained Model Prediction: tensor([[0, 3, 3, 3, 3, 3, 3, 3, 3, 3]])
Example Untrained Model Prediction: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])


Example Untrained Model Prediction: tensor([[0, 3, 2, 2, 2, 4, 0, 3, 1, 3]])
Example Untrained Model Prediction: tensor([[0, 6, 6, 6, 6, 6, 6, 6, 6, 6]])
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Part 2: Model Training&lt;/h1&gt;
&lt;h1&gt;Training&lt;/h1&gt;
&lt;p&gt;This section describes the training regime for our models.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We stop for a quick interlude to introduce some of the tools
needed to train a standard encoder decoder model. First we define a
batch object that holds the src and target sentences for training,
as well as constructing the masks.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Batches and Masking&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class Batch:
    &quot;&quot;&quot;Object for holding a batch of data with mask during training.&quot;&quot;&quot;

    def __init__(self, src, tgt=None, pad=2):  # 2 = &amp;#x3C;blank&gt;
        self.src = src
        self.src_mask = (src != pad).unsqueeze(-2)
        if tgt is not None:
            self.tgt = tgt[:, :-1]
            self.tgt_y = tgt[:, 1:]
            self.tgt_mask = self.make_std_mask(self.tgt, pad)
            self.ntokens = (self.tgt_y != pad).data.sum()

    @staticmethod
    def make_std_mask(tgt, pad):
        &quot;Create a mask to hide padding and future words.&quot;
        tgt_mask = (tgt != pad).unsqueeze(-2)
        tgt_mask = tgt_mask &amp;#x26; subsequent_mask(tgt.size(-1)).type_as(
            tgt_mask.data
        )
        return tgt_mask
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Next we create a generic training and scoring function to keep
track of loss. We pass in a generic loss compute function that
also handles parameter updates.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Training Loop&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class TrainState:
    &quot;&quot;&quot;Track number of steps, examples, and tokens processed&quot;&quot;&quot;

    step: int = 0  # Steps in the current epoch
    accum_step: int = 0  # Number of gradient accumulation steps
    samples: int = 0  # total # of examples used
    tokens: int = 0  # total # of tokens processed
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def run_epoch(
    data_iter,
    model,
    loss_compute,
    optimizer,
    scheduler,
    mode=&quot;train&quot;,
    accum_iter=1,
    train_state=TrainState(),
):
    &quot;&quot;&quot;Train a single epoch&quot;&quot;&quot;
    start = time.time()
    total_tokens = 0
    total_loss = 0
    tokens = 0
    n_accum = 0
    for i, batch in enumerate(data_iter):
        out = model.forward(
            batch.src, batch.tgt, batch.src_mask, batch.tgt_mask
        )
        loss, loss_node = loss_compute(out, batch.tgt_y, batch.ntokens)
        # loss_node = loss_node / accum_iter
        if mode == &quot;train&quot; or mode == &quot;train+log&quot;:
            loss_node.backward()
            train_state.step += 1
            train_state.samples += batch.src.shape[0]
            train_state.tokens += batch.ntokens
            if i % accum_iter == 0:
                optimizer.step()
                optimizer.zero_grad(set_to_none=True)
                n_accum += 1
                train_state.accum_step += 1
            scheduler.step()

        total_loss += loss
        total_tokens += batch.ntokens
        tokens += batch.ntokens
        if i % 40 == 1 and (mode == &quot;train&quot; or mode == &quot;train+log&quot;):
            lr = optimizer.param_groups[0][&quot;lr&quot;]
            elapsed = time.time() - start
            print(
                (
                    &quot;Epoch Step: %6d | Accumulation Step: %3d | Loss: %6.2f &quot;
                    + &quot;| Tokens / Sec: %7.1f | Learning Rate: %6.1e&quot;
                )
                % (i, n_accum, loss / batch.ntokens, tokens / elapsed, lr)
            )
            start = time.time()
            tokens = 0
        del loss
        del loss_node
    return total_loss / total_tokens, train_state
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Training Data and Batching&lt;/h2&gt;
&lt;p&gt;We trained on the standard WMT 2014 English-German dataset
consisting of about 4.5 million sentence pairs. Sentences were
encoded using byte-pair encoding, which has a shared source-target
vocabulary of about 37000 tokens. For English-French, we used the
significantly larger WMT 2014 English-French dataset consisting of
36M sentences and split tokens into a 32000 word-piece vocabulary.&lt;/p&gt;
&lt;p&gt;Sentence pairs were batched together by approximate sequence length.
Each training batch contained a set of sentence pairs containing
approximately 25000 source tokens and 25000 target tokens.&lt;/p&gt;
&lt;h2&gt;Hardware and Schedule&lt;/h2&gt;
&lt;p&gt;We trained our models on one machine with 8 NVIDIA P100 GPUs. For
our base models using the hyperparameters described throughout the
paper, each training step took about 0.4 seconds. We trained the
base models for a total of 100,000 steps or 12 hours. For our big
models, step time was 1.0 seconds. The big models were trained for
300,000 steps (3.5 days).&lt;/p&gt;
&lt;h2&gt;Optimizer&lt;/h2&gt;
&lt;p&gt;We used the Adam optimizer &lt;a href=&quot;https://arxiv.org/abs/1412.6980&quot;&gt;(cite)&lt;/a&gt;
with $\beta_1=0.9$, $\beta_2=0.98$ and $\epsilon=10^{-9}$. We
varied the learning rate over the course of training, according to
the formula:&lt;/p&gt;
&lt;p&gt;$$
lrate = d_{\text{model}}^{-0.5} \cdot
\min({step\_num}^{-0.5},
{step\_num} \cdot {warmup\_steps}^{-1.5})
$$&lt;/p&gt;
&lt;p&gt;This corresponds to increasing the learning rate linearly for the
first $warmup\_steps$ training steps, and decreasing it thereafter
proportionally to the inverse square root of the step number. We
used $warmup\_steps=4000$.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: This part is very important. Need to train with this setup
of the model.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Example of the curves of this model for different model sizes and
for optimization hyperparameters.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def rate(step, model_size, factor, warmup):
    &quot;&quot;&quot;
    we have to default the step to 1 for LambdaLR function
    to avoid zero raising to negative power.
    &quot;&quot;&quot;
    if step == 0:
        step = 1
    return factor * (
        model_size ** (-0.5) * min(step ** (-0.5), step * warmup ** (-1.5))
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def example_learning_schedule():
    opts = [
        [512, 1, 4000],  # example 1
        [512, 1, 8000],  # example 2
        [256, 1, 4000],  # example 3
    ]

    dummy_model = torch.nn.Linear(1, 1)
    learning_rates = []

    # we have 3 examples in opts list.
    for idx, example in enumerate(opts):
        # run 20000 epoch for each example
        optimizer = torch.optim.Adam(
            dummy_model.parameters(), lr=1, betas=(0.9, 0.98), eps=1e-9
        )
        lr_scheduler = LambdaLR(
            optimizer=optimizer, lr_lambda=lambda step: rate(step, *example)
        )
        tmp = []
        # take 20K dummy training steps, save the learning rate at each step
        for step in range(20000):
            tmp.append(optimizer.param_groups[0][&quot;lr&quot;])
            optimizer.step()
            lr_scheduler.step()
        learning_rates.append(tmp)

    learning_rates = torch.tensor(learning_rates)

    # Enable altair to handle more than 5000 rows
    alt.data_transformers.disable_max_rows()

    opts_data = pd.concat(
        [
            pd.DataFrame(
                {
                    &quot;Learning Rate&quot;: learning_rates[warmup_idx, :],
                    &quot;model_size:warmup&quot;: [&quot;512:4000&quot;, &quot;512:8000&quot;, &quot;256:4000&quot;][
                        warmup_idx
                    ],
                    &quot;step&quot;: range(20000),
                }
            )
            for warmup_idx in [0, 1, 2]
        ]
    )

    return (
        alt.Chart(opts_data)
        .mark_line()
        .properties(width=600)
        .encode(x=&quot;step&quot;, y=&quot;Learning Rate&quot;, color=&quot;model_size:warmup:N&quot;)
        .interactive()
    )


example_learning_schedule()
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Regularization&lt;/h2&gt;
&lt;h3&gt;Label Smoothing&lt;/h3&gt;
&lt;p&gt;During training, we employed label smoothing of value
$\epsilon_{ls}=0.1$ &lt;a href=&quot;https://arxiv.org/abs/1512.00567&quot;&gt;(cite)&lt;/a&gt;.
This hurts perplexity, as the model learns to be more unsure, but
improves accuracy and BLEU score.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We implement label smoothing using the KL div loss. Instead of
using a one-hot target distribution, we create a distribution that
has &lt;code&gt;confidence&lt;/code&gt; of the correct word and the rest of the
&lt;code&gt;smoothing&lt;/code&gt; mass distributed throughout the vocabulary.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class LabelSmoothing(nn.Module):
    &quot;Implement label smoothing.&quot;

    def __init__(self, size, padding_idx, smoothing=0.0):
        super(LabelSmoothing, self).__init__()
        self.criterion = nn.KLDivLoss(reduction=&quot;sum&quot;)
        self.padding_idx = padding_idx
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing
        self.size = size
        self.true_dist = None

    def forward(self, x, target):
        assert x.size(1) == self.size
        true_dist = x.data.clone()
        true_dist.fill_(self.smoothing / (self.size - 2))
        true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
        true_dist[:, self.padding_idx] = 0
        mask = torch.nonzero(target.data == self.padding_idx)
        if mask.dim() &gt; 0:
            true_dist.index_fill_(0, mask.squeeze(), 0.0)
        self.true_dist = true_dist
        return self.criterion(x, true_dist.clone().detach())
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Here we can see an example of how the mass is distributed to the
words based on confidence.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Example of label smoothing.


def example_label_smoothing():
    crit = LabelSmoothing(5, 0, 0.4)
    predict = torch.FloatTensor(
        [
            [0, 0.2, 0.7, 0.1, 0],
            [0, 0.2, 0.7, 0.1, 0],
            [0, 0.2, 0.7, 0.1, 0],
            [0, 0.2, 0.7, 0.1, 0],
            [0, 0.2, 0.7, 0.1, 0],
        ]
    )
    crit(x=predict.log(), target=torch.LongTensor([2, 1, 0, 3, 3]))
    LS_data = pd.concat(
        [
            pd.DataFrame(
                {
                    &quot;target distribution&quot;: crit.true_dist[x, y].flatten(),
                    &quot;columns&quot;: y,
                    &quot;rows&quot;: x,
                }
            )
            for y in range(5)
            for x in range(5)
        ]
    )

    return (
        alt.Chart(LS_data)
        .mark_rect(color=&quot;Blue&quot;, opacity=1)
        .properties(height=200, width=200)
        .encode(
            alt.X(&quot;columns:O&quot;, title=None),
            alt.Y(&quot;rows:O&quot;, title=None),
            alt.Color(
                &quot;target distribution:Q&quot;, scale=alt.Scale(scheme=&quot;viridis&quot;)
            ),
        )
        .interactive()
    )


show_example(example_label_smoothing)
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Label smoothing actually starts to penalize the model if it gets
very confident about a given choice.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;

def loss(x, crit):
    d = x + 3 * 1
    predict = torch.FloatTensor([[0, x / d, 1 / d, 1 / d, 1 / d]])
    return crit(predict.log(), torch.LongTensor([1])).data


def penalization_visualization():
    crit = LabelSmoothing(5, 0, 0.1)
    loss_data = pd.DataFrame(
        {
            &quot;Loss&quot;: [loss(x, crit) for x in range(1, 100)],
            &quot;Steps&quot;: list(range(99)),
        }
    ).astype(&quot;float&quot;)

    return (
        alt.Chart(loss_data)
        .mark_line()
        .properties(width=350)
        .encode(
            x=&quot;Steps&quot;,
            y=&quot;Loss&quot;,
        )
        .interactive()
    )


show_example(penalization_visualization)
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;A First Example&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;We can begin by trying out a simple copy-task. Given a random set
of input symbols from a small vocabulary, the goal is to generate
back those same symbols.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Synthetic Data&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def data_gen(V, batch_size, nbatches):
    &quot;Generate random data for a src-tgt copy task.&quot;
    for i in range(nbatches):
        data = torch.randint(1, V, size=(batch_size, 10))
        data[:, 0] = 1
        src = data.requires_grad_(False).clone().detach()
        tgt = data.requires_grad_(False).clone().detach()
        yield Batch(src, tgt, 0)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Loss Computation&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class SimpleLossCompute:
    &quot;A simple loss compute and train function.&quot;

    def __init__(self, generator, criterion):
        self.generator = generator
        self.criterion = criterion

    def __call__(self, x, y, norm):
        x = self.generator(x)
        sloss = (
            self.criterion(
                x.contiguous().view(-1, x.size(-1)), y.contiguous().view(-1)
            )
            / norm
        )
        return sloss.data * norm, sloss
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Greedy Decoding&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;This code predicts a translation using greedy decoding for simplicity.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def greedy_decode(model, src, src_mask, max_len, start_symbol):
    memory = model.encode(src, src_mask)
    ys = torch.zeros(1, 1).fill_(start_symbol).type_as(src.data)
    for i in range(max_len - 1):
        out = model.decode(
            memory, src_mask, ys, subsequent_mask(ys.size(1)).type_as(src.data)
        )
        prob = model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.data[0]
        ys = torch.cat(
            [ys, torch.zeros(1, 1).type_as(src.data).fill_(next_word)], dim=1
        )
    return ys
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Train the simple copy task.


def example_simple_model():
    V = 11
    criterion = LabelSmoothing(size=V, padding_idx=0, smoothing=0.0)
    model = make_model(V, V, N=2)

    optimizer = torch.optim.Adam(
        model.parameters(), lr=0.5, betas=(0.9, 0.98), eps=1e-9
    )
    lr_scheduler = LambdaLR(
        optimizer=optimizer,
        lr_lambda=lambda step: rate(
            step, model_size=model.src_embed[0].d_model, factor=1.0, warmup=400
        ),
    )

    batch_size = 80
    for epoch in range(20):
        model.train()
        run_epoch(
            data_gen(V, batch_size, 20),
            model,
            SimpleLossCompute(model.generator, criterion),
            optimizer,
            lr_scheduler,
            mode=&quot;train&quot;,
        )
        model.eval()
        run_epoch(
            data_gen(V, batch_size, 5),
            model,
            SimpleLossCompute(model.generator, criterion),
            DummyOptimizer(),
            DummyScheduler(),
            mode=&quot;eval&quot;,
        )[0]

    model.eval()
    src = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
    max_len = src.shape[1]
    src_mask = torch.ones(1, 1, max_len)
    print(greedy_decode(model, src, src_mask, max_len=max_len, start_symbol=0))


# execute_example(example_simple_model)
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Part 3: A Real World Example&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;Now we consider a real-world example using the Multi30k
German-English Translation task. This task is much smaller than
the WMT task considered in the paper, but it illustrates the whole
system. We also show how to use multi-gpu processing to make it
really fast.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Data Loading&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;We will load the dataset using torchtext and spacy for
tokenization.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Load spacy tokenizer models, download them if they haven&apos;t been
# downloaded already


def load_tokenizers():

    try:
        spacy_de = spacy.load(&quot;de_core_news_sm&quot;)
    except IOError:
        os.system(&quot;python -m spacy download de_core_news_sm&quot;)
        spacy_de = spacy.load(&quot;de_core_news_sm&quot;)

    try:
        spacy_en = spacy.load(&quot;en_core_web_sm&quot;)
    except IOError:
        os.system(&quot;python -m spacy download en_core_web_sm&quot;)
        spacy_en = spacy.load(&quot;en_core_web_sm&quot;)

    return spacy_de, spacy_en
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def tokenize(text, tokenizer):
    return [tok.text for tok in tokenizer.tokenizer(text)]


def yield_tokens(data_iter, tokenizer, index):
    for from_to_tuple in data_iter:
        yield tokenizer(from_to_tuple[index])
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;

def build_vocabulary(spacy_de, spacy_en):
    def tokenize_de(text):
        return tokenize(text, spacy_de)

    def tokenize_en(text):
        return tokenize(text, spacy_en)

    print(&quot;Building German Vocabulary ...&quot;)
    train, val, test = datasets.Multi30k(language_pair=(&quot;de&quot;, &quot;en&quot;))
    vocab_src = build_vocab_from_iterator(
        yield_tokens(train + val + test, tokenize_de, index=0),
        min_freq=2,
        specials=[&quot;&amp;#x3C;s&gt;&quot;, &quot;&amp;#x3C;/s&gt;&quot;, &quot;&amp;#x3C;blank&gt;&quot;, &quot;&amp;#x3C;unk&gt;&quot;],
    )

    print(&quot;Building English Vocabulary ...&quot;)
    train, val, test = datasets.Multi30k(language_pair=(&quot;de&quot;, &quot;en&quot;))
    vocab_tgt = build_vocab_from_iterator(
        yield_tokens(train + val + test, tokenize_en, index=1),
        min_freq=2,
        specials=[&quot;&amp;#x3C;s&gt;&quot;, &quot;&amp;#x3C;/s&gt;&quot;, &quot;&amp;#x3C;blank&gt;&quot;, &quot;&amp;#x3C;unk&gt;&quot;],
    )

    vocab_src.set_default_index(vocab_src[&quot;&amp;#x3C;unk&gt;&quot;])
    vocab_tgt.set_default_index(vocab_tgt[&quot;&amp;#x3C;unk&gt;&quot;])

    return vocab_src, vocab_tgt


def load_vocab(spacy_de, spacy_en):
    if not exists(&quot;vocab.pt&quot;):
        vocab_src, vocab_tgt = build_vocabulary(spacy_de, spacy_en)
        torch.save((vocab_src, vocab_tgt), &quot;vocab.pt&quot;)
    else:
        vocab_src, vocab_tgt = torch.load(&quot;vocab.pt&quot;)
    print(&quot;Finished.\nVocabulary sizes:&quot;)
    print(len(vocab_src))
    print(len(vocab_tgt))
    return vocab_src, vocab_tgt


if is_interactive_notebook():
    # global variables used later in the script
    spacy_de, spacy_en = show_example(load_tokenizers)
    vocab_src, vocab_tgt = show_example(load_vocab, args=[spacy_de, spacy_en])
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Finished.
Vocabulary sizes:
59981
36745
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Batching matters a ton for speed. We want to have very evenly
divided batches, with absolutely minimal padding. To do this we
have to hack a bit around the default torchtext batching. This
code patches their default batching to make sure we search over
enough sentences to find tight batches.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Iterators&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def collate_batch(
    batch,
    src_pipeline,
    tgt_pipeline,
    src_vocab,
    tgt_vocab,
    device,
    max_padding=128,
    pad_id=2,
):
    bs_id = torch.tensor([0], device=device)  # &amp;#x3C;s&gt; token id
    eos_id = torch.tensor([1], device=device)  # &amp;#x3C;/s&gt; token id
    src_list, tgt_list = [], []
    for (_src, _tgt) in batch:
        processed_src = torch.cat(
            [
                bs_id,
                torch.tensor(
                    src_vocab(src_pipeline(_src)),
                    dtype=torch.int64,
                    device=device,
                ),
                eos_id,
            ],
            0,
        )
        processed_tgt = torch.cat(
            [
                bs_id,
                torch.tensor(
                    tgt_vocab(tgt_pipeline(_tgt)),
                    dtype=torch.int64,
                    device=device,
                ),
                eos_id,
            ],
            0,
        )
        src_list.append(
            # warning: if len(processed_src) exceeds max_padding, the resulting
            # negative pad width trims the sequence instead of padding it
            pad(
                processed_src,
                (
                    0,
                    max_padding - len(processed_src),
                ),
                value=pad_id,
            )
        )
        tgt_list.append(
            pad(
                processed_tgt,
                (0, max_padding - len(processed_tgt)),
                value=pad_id,
            )
        )

    src = torch.stack(src_list)
    tgt = torch.stack(tgt_list)
    return (src, tgt)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def create_dataloaders(
    device,
    vocab_src,
    vocab_tgt,
    spacy_de,
    spacy_en,
    batch_size=12000,
    max_padding=128,
    is_distributed=True,
):
    # def create_dataloaders(batch_size=12000):
    def tokenize_de(text):
        return tokenize(text, spacy_de)

    def tokenize_en(text):
        return tokenize(text, spacy_en)

    def collate_fn(batch):
        return collate_batch(
            batch,
            tokenize_de,
            tokenize_en,
            vocab_src,
            vocab_tgt,
            device,
            max_padding=max_padding,
            pad_id=vocab_src.get_stoi()[&quot;&amp;#x3C;blank&gt;&quot;],
        )

    train_iter, valid_iter, test_iter = datasets.Multi30k(
        language_pair=(&quot;de&quot;, &quot;en&quot;)
    )

    train_iter_map = to_map_style_dataset(
        train_iter
    )  # DistributedSampler needs a dataset len()
    train_sampler = (
        DistributedSampler(train_iter_map) if is_distributed else None
    )
    valid_iter_map = to_map_style_dataset(valid_iter)
    valid_sampler = (
        DistributedSampler(valid_iter_map) if is_distributed else None
    )

    train_dataloader = DataLoader(
        train_iter_map,
        batch_size=batch_size,
        shuffle=(train_sampler is None),
        sampler=train_sampler,
        collate_fn=collate_fn,
    )
    valid_dataloader = DataLoader(
        valid_iter_map,
        batch_size=batch_size,
        shuffle=(valid_sampler is None),
        sampler=valid_sampler,
        collate_fn=collate_fn,
    )
    return train_dataloader, valid_dataloader
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Training the System&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def train_worker(
    gpu,
    ngpus_per_node,
    vocab_src,
    vocab_tgt,
    spacy_de,
    spacy_en,
    config,
    is_distributed=False,
):
    print(f&quot;Train worker process using GPU: {gpu} for training&quot;, flush=True)
    torch.cuda.set_device(gpu)

    pad_idx = vocab_tgt[&quot;&amp;#x3C;blank&gt;&quot;]
    d_model = 512
    model = make_model(len(vocab_src), len(vocab_tgt), N=6)
    model.cuda(gpu)
    module = model
    is_main_process = True
    if is_distributed:
        dist.init_process_group(
            &quot;nccl&quot;, init_method=&quot;env://&quot;, rank=gpu, world_size=ngpus_per_node
        )
        model = DDP(model, device_ids=[gpu])
        module = model.module
        is_main_process = gpu == 0

    criterion = LabelSmoothing(
        size=len(vocab_tgt), padding_idx=pad_idx, smoothing=0.1
    )
    criterion.cuda(gpu)

    train_dataloader, valid_dataloader = create_dataloaders(
        gpu,
        vocab_src,
        vocab_tgt,
        spacy_de,
        spacy_en,
        batch_size=config[&quot;batch_size&quot;] // ngpus_per_node,
        max_padding=config[&quot;max_padding&quot;],
        is_distributed=is_distributed,
    )

    optimizer = torch.optim.Adam(
        model.parameters(), lr=config[&quot;base_lr&quot;], betas=(0.9, 0.98), eps=1e-9
    )
    lr_scheduler = LambdaLR(
        optimizer=optimizer,
        lr_lambda=lambda step: rate(
            step, d_model, factor=1, warmup=config[&quot;warmup&quot;]
        ),
    )
    train_state = TrainState()

    for epoch in range(config[&quot;num_epochs&quot;]):
        if is_distributed:
            train_dataloader.sampler.set_epoch(epoch)
            valid_dataloader.sampler.set_epoch(epoch)

        model.train()
        print(f&quot;[GPU{gpu}] Epoch {epoch} Training ====&quot;, flush=True)
        _, train_state = run_epoch(
            (Batch(b[0], b[1], pad_idx) for b in train_dataloader),
            model,
            SimpleLossCompute(module.generator, criterion),
            optimizer,
            lr_scheduler,
            mode=&quot;train+log&quot;,
            accum_iter=config[&quot;accum_iter&quot;],
            train_state=train_state,
        )

        GPUtil.showUtilization()
        if is_main_process:
            file_path = &quot;%s%.2d.pt&quot; % (config[&quot;file_prefix&quot;], epoch)
            torch.save(module.state_dict(), file_path)
        torch.cuda.empty_cache()

        print(f&quot;[GPU{gpu}] Epoch {epoch} Validation ====&quot;, flush=True)
        model.eval()
        sloss = run_epoch(
            (Batch(b[0], b[1], pad_idx) for b in valid_dataloader),
            model,
            SimpleLossCompute(module.generator, criterion),
            DummyOptimizer(),
            DummyScheduler(),
            mode=&quot;eval&quot;,
        )
        print(sloss)
        torch.cuda.empty_cache()

    if is_main_process:
        file_path = &quot;%sfinal.pt&quot; % config[&quot;file_prefix&quot;]
        torch.save(module.state_dict(), file_path)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def train_distributed_model(vocab_src, vocab_tgt, spacy_de, spacy_en, config):
    from the_annotated_transformer import train_worker

    ngpus = torch.cuda.device_count()
    os.environ[&quot;MASTER_ADDR&quot;] = &quot;localhost&quot;
    os.environ[&quot;MASTER_PORT&quot;] = &quot;12356&quot;
    print(f&quot;Number of GPUs detected: {ngpus}&quot;)
    print(&quot;Spawning training processes ...&quot;)
    mp.spawn(
        train_worker,
        nprocs=ngpus,
        args=(ngpus, vocab_src, vocab_tgt, spacy_de, spacy_en, config, True),
    )


def train_model(vocab_src, vocab_tgt, spacy_de, spacy_en, config):
    if config[&quot;distributed&quot;]:
        train_distributed_model(
            vocab_src, vocab_tgt, spacy_de, spacy_en, config
        )
    else:
        train_worker(
            0, 1, vocab_src, vocab_tgt, spacy_de, spacy_en, config, False
        )


def load_trained_model():
    config = {
        &quot;batch_size&quot;: 32,
        &quot;distributed&quot;: False,
        &quot;num_epochs&quot;: 8,
        &quot;accum_iter&quot;: 10,
        &quot;base_lr&quot;: 1.0,
        &quot;max_padding&quot;: 72,
        &quot;warmup&quot;: 3000,
        &quot;file_prefix&quot;: &quot;multi30k_model_&quot;,
    }
    model_path = &quot;multi30k_model_final.pt&quot;
    if not exists(model_path):
        train_model(vocab_src, vocab_tgt, spacy_de, spacy_en, config)

    model = make_model(len(vocab_src), len(vocab_tgt), N=6)
    model.load_state_dict(torch.load(&quot;multi30k_model_final.pt&quot;))
    return model


if is_interactive_notebook():
    model = load_trained_model()
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Once trained, we can decode the model to produce a set of
translations. Here we simply translate the first sentence in the
validation set. This dataset is pretty small so the translations
with greedy search are reasonably accurate.&lt;/p&gt;
&lt;/blockquote&gt;
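&lt;p&gt;As a small, assumption-laden sketch (not part of the original notebook), the decoding step described above can be written with the pieces already defined: build a CPU validation dataloader, greedy-decode the first batch, and map the output ids back to tokens with &lt;code&gt;vocab_tgt&lt;/code&gt;. The &lt;code&gt;pad_idx&lt;/code&gt; and &lt;code&gt;max_len&lt;/code&gt; defaults simply mirror &lt;code&gt;check_outputs&lt;/code&gt; further below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def translate_first_validation_sentence(model, max_len=72, pad_idx=2):
    &quot;Sketch: greedy-decode one validation sentence with the helpers above.&quot;
    _, valid_dataloader = create_dataloaders(
        torch.device(&quot;cpu&quot;),
        vocab_src,
        vocab_tgt,
        spacy_de,
        spacy_en,
        batch_size=1,
        is_distributed=False,
    )
    b = next(iter(valid_dataloader))
    rb = Batch(b[0], b[1], pad_idx)
    out = greedy_decode(model, rb.src, rb.src_mask, max_len, 0)[0]
    tokens = [vocab_tgt.get_itos()[x] for x in out if x != pad_idx]
    # cut the hypothesis at the first end-of-sentence marker
    return &quot; &quot;.join(tokens).split(&quot;&amp;#x3C;/s&gt;&quot;, 1)[0] + &quot; &amp;#x3C;/s&gt;&quot;


# translate_first_validation_sentence(load_trained_model())
&lt;/code&gt;&lt;/pre&gt;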
&lt;h1&gt;Additional Components: BPE, Search, Averaging&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;This mostly covers the Transformer model itself. There are four
aspects we didn&apos;t cover explicitly; all of these additional
features are also implemented in
&lt;a href=&quot;https://github.com/opennmt/opennmt-py&quot;&gt;OpenNMT-py&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;BPE / Word-piece: We can use a library to first preprocess the
data into subword units. See Rico Sennrich&apos;s
&lt;a href=&quot;https://github.com/rsennrich/subword-nmt&quot;&gt;subword-nmt&lt;/a&gt;
implementation. These models will transform the training data to
look like the example below (a small preprocessing sketch follows it):&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;p&gt;▁Die ▁Protokoll datei ▁kann ▁ heimlich ▁per ▁E - Mail ▁oder ▁FTP
▁an ▁einen ▁bestimmte n ▁Empfänger ▁gesendet ▁werden .&lt;/p&gt;
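&lt;p&gt;As a rough sketch (not part of the original notebook), the subword preprocessing could be driven from Python by shelling out to the subword-nmt command-line tool, in the same spirit as the spacy download calls above. The file names are purely illustrative placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

# Illustrative sketch: learn a BPE code table with 10k merge operations from
# the raw German training text, then apply it to the files used for training.
# File names are placeholders; point them at wherever the raw Multi30k text lives.
os.system(&quot;subword-nmt learn-bpe -s 10000 &amp;#x3C; train.de &gt; bpe.codes.de&quot;)
os.system(&quot;subword-nmt apply-bpe -c bpe.codes.de &amp;#x3C; train.de &gt; train.bpe.de&quot;)
os.system(&quot;subword-nmt apply-bpe -c bpe.codes.de &amp;#x3C; val.de &gt; val.bpe.de&quot;)
&lt;/code&gt;&lt;/pre&gt;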
&lt;blockquote&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Shared Embeddings: When using BPE with shared vocabulary we can
share the same weight vectors between the source / target /
generator. See the &lt;a href=&quot;https://arxiv.org/abs/1608.05859&quot;&gt;(cite)&lt;/a&gt; for
details. To add this to the model simply do this:&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;if False:
    model.src_embed[0].lut.weight = model.tgt_embed[0].lut.weight
    model.generator.proj.weight = model.tgt_embed[0].lut.weight
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Beam Search: This is a bit too involved to cover in full here. See
&lt;a href=&quot;https://github.com/OpenNMT/OpenNMT-py/&quot;&gt;OpenNMT-py&lt;/a&gt;
for a PyTorch implementation; a simplified sketch follows below.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
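&lt;p&gt;For illustration only, here is a minimal beam-search sketch built on the same encode / decode / generator interface as &lt;code&gt;greedy_decode&lt;/code&gt; above. It is a simplification, not the OpenNMT-py implementation: it keeps the &lt;code&gt;beam_size&lt;/code&gt; highest-scoring partial hypotheses at each step and omits end-of-sequence handling and length normalization, both of which a real decoder needs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def beam_decode(model, src, src_mask, max_len, start_symbol, beam_size=4):
    &quot;Simplified beam-search sketch (no EOS handling, no length normalization).&quot;
    memory = model.encode(src, src_mask)
    # each hypothesis is a (sequence, cumulative log-probability) pair
    beams = [(torch.zeros(1, 1).fill_(start_symbol).type_as(src.data), 0.0)]
    for _ in range(max_len - 1):
        candidates = []
        for ys, score in beams:
            out = model.decode(
                memory, src_mask, ys, subsequent_mask(ys.size(1)).type_as(src.data)
            )
            log_prob = model.generator(out[:, -1])  # log-probabilities over the vocab
            top_lp, top_ix = torch.topk(log_prob, beam_size, dim=1)
            for lp, ix in zip(top_lp[0], top_ix[0]):
                seq = torch.cat(
                    [ys, torch.zeros(1, 1).type_as(src.data).fill_(ix.item())],
                    dim=1,
                )
                candidates.append((seq, score + lp.item()))
        # keep only the beam_size best partial hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]
&lt;/code&gt;&lt;/pre&gt;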
&lt;blockquote&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Model Averaging: The paper averages the last k checkpoints to
create an ensembling effect. We can do this after the fact if we
have a bunch of models:&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def average(model, models):
    &quot;Average models into model&quot;
    for ps in zip(*[m.parameters() for m in [model] + models]):
        ps[0].data.copy_(torch.stack([p.data for p in ps[1:]]).mean(dim=0))
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Results&lt;/h1&gt;
&lt;p&gt;On the WMT 2014 English-to-German translation task, the big
transformer model (Transformer (big) in Table 2) outperforms the
best previously reported models (including ensembles) by more than
2.0 BLEU, establishing a new state-of-the-art BLEU score of
28.4. The configuration of this model is listed in the bottom line
of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base
model surpasses all previously published models and ensembles, at a
fraction of the training cost of any of the competitive models.&lt;/p&gt;
&lt;p&gt;On the WMT 2014 English-to-French translation task, our big model
achieves a BLEU score of 41.0, outperforming all of the previously
published single models, at less than 1/4 the training cost of the
previous state-of-the-art model. The Transformer (big) model trained
for English-to-French used dropout rate Pdrop = 0.1, instead of 0.3.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;With the additional extensions in the last section, the OpenNMT-py
replication gets to 26.9 on EN-DE WMT. Here I have loaded those
parameters into our reimplementation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Load data and model for output checks
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def check_outputs(
    valid_dataloader,
    model,
    vocab_src,
    vocab_tgt,
    n_examples=15,
    pad_idx=2,
    eos_string=&quot;&amp;#x3C;/s&gt;&quot;,
):
    results = [()] * n_examples
    for idx in range(n_examples):
        print(&quot;\nExample %d ========\n&quot; % idx)
        b = next(iter(valid_dataloader))
        rb = Batch(b[0], b[1], pad_idx)
        greedy_decode(model, rb.src, rb.src_mask, 64, 0)[0]

        src_tokens = [
            vocab_src.get_itos()[x] for x in rb.src[0] if x != pad_idx
        ]
        tgt_tokens = [
            vocab_tgt.get_itos()[x] for x in rb.tgt[0] if x != pad_idx
        ]

        print(
            &quot;Source Text (Input)        : &quot;
            + &quot; &quot;.join(src_tokens).replace(&quot;\n&quot;, &quot;&quot;)
        )
        print(
            &quot;Target Text (Ground Truth) : &quot;
            + &quot; &quot;.join(tgt_tokens).replace(&quot;\n&quot;, &quot;&quot;)
        )
        model_out = greedy_decode(model, rb.src, rb.src_mask, 72, 0)[0]
        model_txt = (
            &quot; &quot;.join(
                [vocab_tgt.get_itos()[x] for x in model_out if x != pad_idx]
            ).split(eos_string, 1)[0]
            + eos_string
        )
        print(&quot;Model Output               : &quot; + model_txt.replace(&quot;\n&quot;, &quot;&quot;))
        results[idx] = (rb, src_tokens, tgt_tokens, model_out, model_txt)
    return results


def run_model_example(n_examples=5):
    global vocab_src, vocab_tgt, spacy_de, spacy_en

    print(&quot;Preparing Data ...&quot;)
    _, valid_dataloader = create_dataloaders(
        torch.device(&quot;cpu&quot;),
        vocab_src,
        vocab_tgt,
        spacy_de,
        spacy_en,
        batch_size=1,
        is_distributed=False,
    )

    print(&quot;Loading Trained Model ...&quot;)

    model = make_model(len(vocab_src), len(vocab_tgt), N=6)
    model.load_state_dict(
        torch.load(&quot;multi30k_model_final.pt&quot;, map_location=torch.device(&quot;cpu&quot;))
    )

    print(&quot;Checking Model Outputs:&quot;)
    example_data = check_outputs(
        valid_dataloader, model, vocab_src, vocab_tgt, n_examples=n_examples
    )
    return model, example_data


# execute_example(run_model_example)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Attention Visualization&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Even with a greedy decoder the translation looks pretty good. We
can visualize it further to see what is happening at each layer of
the attention.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def mtx2df(m, max_row, max_col, row_tokens, col_tokens):
    &quot;convert a dense matrix to a data frame with row and column indices&quot;
    return pd.DataFrame(
        [
            (
                r,
                c,
                float(m[r, c]),
                &quot;%.3d %s&quot;
                % (r, row_tokens[r] if len(row_tokens) &gt; r else &quot;&amp;#x3C;blank&gt;&quot;),
                &quot;%.3d %s&quot;
                % (c, col_tokens[c] if len(col_tokens) &gt; c else &quot;&amp;#x3C;blank&gt;&quot;),
            )
            for r in range(m.shape[0])
            for c in range(m.shape[1])
            if r &amp;#x3C; max_row and c &amp;#x3C; max_col
        ],
        # if float(m[r,c]) != 0 and r &amp;#x3C; max_row and c &amp;#x3C; max_col],
        columns=[&quot;row&quot;, &quot;column&quot;, &quot;value&quot;, &quot;row_token&quot;, &quot;col_token&quot;],
    )


def attn_map(attn, layer, head, row_tokens, col_tokens, max_dim=30):
    df = mtx2df(
        attn[0, head].data,
        max_dim,
        max_dim,
        row_tokens,
        col_tokens,
    )
    return (
        alt.Chart(data=df)
        .mark_rect()
        .encode(
            x=alt.X(&quot;col_token&quot;, axis=alt.Axis(title=&quot;&quot;)),
            y=alt.Y(&quot;row_token&quot;, axis=alt.Axis(title=&quot;&quot;)),
            color=&quot;value&quot;,
            tooltip=[&quot;row&quot;, &quot;column&quot;, &quot;value&quot;, &quot;row_token&quot;, &quot;col_token&quot;],
        )
        .properties(height=400, width=400)
        .interactive()
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def get_encoder(model, layer):
    return model.encoder.layers[layer].self_attn.attn


def get_decoder_self(model, layer):
    return model.decoder.layers[layer].self_attn.attn


def get_decoder_src(model, layer):
    return model.decoder.layers[layer].src_attn.attn


def visualize_layer(model, layer, getter_fn, ntokens, row_tokens, col_tokens):
    # ntokens = last_example[0].ntokens
    attn = getter_fn(model, layer)
    n_heads = attn.shape[1]
    charts = [
        attn_map(
            attn,
            0,
            h,
            row_tokens=row_tokens,
            col_tokens=col_tokens,
            max_dim=ntokens,
        )
        for h in range(n_heads)
    ]
    assert n_heads == 8
    return alt.vconcat(
        charts[0]
        # | charts[1]
        | charts[2]
        # | charts[3]
        | charts[4]
        # | charts[5]
        | charts[6]
        # | charts[7]
        # layer + 1 due to 0-indexing
    ).properties(title=&quot;Layer %d&quot; % (layer + 1))
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Encoder Self Attention&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def viz_encoder_self():
    model, example_data = run_model_example(n_examples=1)
    example = example_data[
        len(example_data) - 1
    ]  # batch object for the final example

    layer_viz = [
        visualize_layer(
            model, layer, get_encoder, len(example[1]), example[1], example[1]
        )
        for layer in range(6)
    ]
    return alt.hconcat(
        layer_viz[0]
        # &amp;#x26; layer_viz[1]
        &amp;#x26; layer_viz[2]
        # &amp;#x26; layer_viz[3]
        &amp;#x26; layer_viz[4]
        # &amp;#x26; layer_viz[5]
    )


show_example(viz_encoder_self)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Preparing Data ...


Loading Trained Model ...


Checking Model Outputs:

Example 0 ========



Source Text (Input)        : &amp;#x3C;s&gt; Mehrere Kinder heben die Hände , während sie auf einem bunten Teppich in einem Klassenzimmer sitzen . &amp;#x3C;/s&gt;
Target Text (Ground Truth) : &amp;#x3C;s&gt; Several children are raising their hands while sitting on a colorful rug in a classroom . &amp;#x3C;/s&gt;


Model Output               : &amp;#x3C;s&gt; A group of children are in their hands while sitting on a colorful carpet . &amp;#x3C;/s&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Decoder Self Attention&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def viz_decoder_self():
    model, example_data = run_model_example(n_examples=1)
    example = example_data[len(example_data) - 1]

    layer_viz = [
        visualize_layer(
            model,
            layer,
            get_decoder_self,
            len(example[1]),
            example[1],
            example[1],
        )
        for layer in range(6)
    ]
    return alt.hconcat(
        layer_viz[0]
        &amp;#x26; layer_viz[1]
        &amp;#x26; layer_viz[2]
        &amp;#x26; layer_viz[3]
        &amp;#x26; layer_viz[4]
        &amp;#x26; layer_viz[5]
    )


show_example(viz_decoder_self)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Preparing Data ...


Loading Trained Model ...


Checking Model Outputs:

Example 0 ========



Source Text (Input)        : &amp;#x3C;s&gt; Drei Menschen wandern auf einem stark verschneiten Weg . &amp;#x3C;/s&gt;
Target Text (Ground Truth) : &amp;#x3C;s&gt; A &amp;#x3C;unk&gt; of people are hiking throughout a heavily snowed path . &amp;#x3C;/s&gt;


Model Output               : &amp;#x3C;s&gt; Three people hiking on a busy &amp;#x3C;unk&gt; . &amp;#x3C;/s&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Decoder Src Attention&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def viz_decoder_src():
    model, example_data = run_model_example(n_examples=1)
    example = example_data[len(example_data) - 1]

    layer_viz = [
        visualize_layer(
            model,
            layer,
            get_decoder_src,
            max(len(example[1]), len(example[2])),
            example[1],
            example[2],
        )
        for layer in range(6)
    ]
    return alt.hconcat(
        layer_viz[0]
        &amp;#x26; layer_viz[1]
        &amp;#x26; layer_viz[2]
        &amp;#x26; layer_viz[3]
        &amp;#x26; layer_viz[4]
        &amp;#x26; layer_viz[5]
    )


show_example(viz_decoder_src)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Preparing Data ...


Loading Trained Model ...


Checking Model Outputs:

Example 0 ========



Source Text (Input)        : &amp;#x3C;s&gt; Baby sieht sich die Blätter am Zweig eines Baumes an . &amp;#x3C;/s&gt;
Target Text (Ground Truth) : &amp;#x3C;s&gt; Baby looking at the leaves on a branch of a tree . &amp;#x3C;/s&gt;


Model Output               : &amp;#x3C;s&gt; A baby is looking at the leaves at a tree . &amp;#x3C;/s&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;Hopefully this code is useful for future research. Please reach
out if you have any issues.&lt;/p&gt;
&lt;p&gt;Cheers,
Sasha Rush, Austin Huang, Suraj Subramanian, Jonathan Sum, Khalid Almubarak,
Stella Biderman&lt;/p&gt;</content:encoded><h:img src="/_astro/aiayn.M7sRrIDc.png"/><enclosure url="/_astro/aiayn.M7sRrIDc.png"/></item></channel></rss>