All convolutions within a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data remain unchanged, so all convolutions inside a dense block have stride one. Pooling layers are inserted between dense blocks for dimensionality reduction.
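A minimal NumPy sketch of one dense-block layer can illustrate why stride-one convolutions matter here: the spatial dimensions are preserved, so the layer's output can be concatenated channel-wise with its input. Batch normalization is omitted for brevity, and the channel counts (8 input channels, growth rate 4) are hypothetical choices for illustration.

```python
import numpy as np

def conv2d_same(x, w):
    """Stride-1 3x3 convolution with padding 1, so output H and W match input.
    x: (C_in, H, W); w: (C_out, C_in, 3, 3)."""
    c_out, _, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # pad spatial dims only
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i+3, j:j+3] * w[o])
    return out

def dense_layer(x, w):
    """ReLU -> stride-1 conv (BN omitted), then channel-wise concatenation:
    the input feature maps are carried forward alongside the new ones."""
    y = conv2d_same(np.maximum(x, 0), w)
    return np.concatenate([x, y], axis=0)  # channels grow by C_out

x = np.random.randn(8, 16, 16)   # 8 channels, 16x16 feature map
w = np.random.randn(4, 8, 3, 3)  # 4 new feature maps ("growth rate" of 4)
out = dense_layer(x, w)
print(out.shape)  # (12, 16, 16): spatial size unchanged, channels concatenated
```

Because every layer's output keeps the same height and width, each subsequent layer in the block can again concatenate all preceding feature maps; downsampling is deferred to the pooling layers between blocks.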