Deterministic identification for Bernoulli channels and related channels with continuous input

2026-05-06

Information Theory
AI summary

The authors study communication channels whose input can be any continuous value, where the number of identifiable messages grows superexponentially, with an exponent of roughly n log n. They focus on deterministic identification (DI) capacity, whose exact value had remained unknown due to a gap between the best known upper and lower bounds. Building on recent advances for Gaussian channels, the authors extend the new code design to Bernoulli channels and then to a broad class of channels with suitably continuous output distributions. They prove the capacity is exactly 1/2 for these channels, closing the long-standing gap. Additionally, they analyse the tradeoff between message rate and error, providing improved lower bounds on reliability that nearly match known limits for small error exponents.

deterministic identification, memoryless channel, continuous input alphabet, AWGN channel, Bernoulli channel, capacity, converse bound, rate-reliability tradeoff, error exponent, reliability function
Authors
Pau Colomer, Christian Deppe, Holger Boche, Andreas Winter
Abstract
For memoryless channels with continuous input alphabets, deterministic identification (DI) typically exhibits a linearithmic ($n\log n$) message growth. However, the exact DI capacity has long remained open due to a persistent gap between the best known achievability and converse bounds. This gap was recently closed for AWGN channels via a novel code construction optimising the "galaxy" codes. Here, we extend this approach to the Bernoulli channel and subsequently to any channel $W$ whose image contains a continuous curve of output probability distributions, and hence admits a reduction to the Bernoulli channel restricted to a subinterval of inputs. As a consequence, we prove that the converse bound is tight and establish $\dot{C}_{\text{DI}}(W) = \frac 12$ for this broad class of channels, thereby closing the long-standing capacity gap. A similar gap was also observed for the DI rate-reliability tradeoff. We analyse the tradeoff between rate and error of the proposed code and derive improved lower bounds on the reliability function, approaching the converse at leading order in the regime of small error exponents.
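To make the linearithmic scaling concrete, the sketch below compares the exponent of the message count under deterministic identification, $\log_2 M(n) \approx R\, n \log_2 n$ with $R = \tfrac12$ as established in the abstract, against the familiar Shannon transmission scaling $\log_2 M(n) \approx C n$. The transmission capacity value used here is purely illustrative (it is not taken from the paper); only the DI rate $\tfrac12$ comes from the result above.

```python
import math

def di_log_messages(n: int, rate: float = 0.5) -> float:
    """Exponent of the DI message count: log2 M(n) ~ rate * n * log2(n),
    i.e. M(n) grows like 2^(R n log n). R = 1/2 is the DI capacity
    established for this class of channels."""
    return rate * n * math.log2(n)

def transmission_log_messages(n: int, capacity: float = 1.0) -> float:
    """Ordinary transmission: log2 M(n) ~ C * n (exponential growth).
    The capacity value here is an illustrative placeholder."""
    return capacity * n

# The DI exponent outgrows the transmission exponent by a log n factor:
for n in (64, 1024, 65536):
    print(n, di_log_messages(n), transmission_log_messages(n))
```

The extra $\log n$ factor in the exponent is exactly what makes the DI message count "linearithmic": the codebook size is superexponential in the blocklength, even though the per-symbol rate $\dot{C}_{\text{DI}}$ is a finite constant.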