
Commit d55fcbc
fix default num_attention_heads in segformer doc (#16612)
JunMa11 committed Apr 6, 2022
1 parent b18dfd9 commit d55fcbc
Showing 1 changed file with 1 addition and 1 deletion.
@@ -54,7 +54,7 @@ class SegformerConfig(PretrainedConfig):
             Patch size before each encoder block.
         strides (`List[int]`, *optional*, defaults to [4, 2, 2, 2]):
             Stride before each encoder block.
-        num_attention_heads (`List[int]`, *optional*, defaults to [1, 2, 4, 8]):
+        num_attention_heads (`List[int]`, *optional*, defaults to [1, 2, 5, 8]):
             Number of attention heads for each attention layer in each block of the Transformer encoder.
         mlp_ratios (`List[int]`, *optional*, defaults to [4, 4, 4, 4]):
             Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
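As a quick sanity check (not part of the commit itself), the sketch below confirms that the corrected docstring value matches the constructor's actual defaults. It assumes a `transformers` release with SegFormer support installed; the assertions are illustrative, not from the diff.

```python
# Verify that the docstring fix matches SegformerConfig's real defaults.
# Assumes: pip install transformers (a version that includes SegFormer).
from transformers import SegformerConfig

config = SegformerConfig()

# The commit changes the documented default from [1, 2, 4, 8] to [1, 2, 5, 8],
# which is what the constructor actually uses.
assert config.num_attention_heads == [1, 2, 5, 8]

# Neighboring defaults mentioned in the same docstring, unchanged by this commit.
assert config.strides == [4, 2, 2, 2]
assert config.mlp_ratios == [4, 4, 4, 4]

print(config.num_attention_heads)  # [1, 2, 5, 8]
```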
