@galvesribeiro - as you pointed out, "Enterprise and Data Center products" is a marketing term and can mean anything. If you are in the data storage business, it's probably wise to assess your technical requirements before making a purchase.
Marketing terms usually mislead customers. It is well known that DCB means "Enterprise/Datacenter Gear" for anyone in the industry.
RoCE traffic can be transported over any standard switch or router and might actually work perfectly well, up to a certain level of throughput, if bandwidth is dedicated to it using QoS.
Although it is true that RoCE v2 can be carried over regular switches/routers on commodity hardware, that doesn't mean it will work. Any application or OS that validates RDMA support checks for very specific features/standards on both ends before enabling it. Windows Server Storage Spaces Direct, VMware vSAN, and open-source Ceph are a non-exhaustive list of services and OSes that will not enable RDMA unless those requirements are verified against the hardware end-to-end. If any of them fails to be detected, RDMA stays disabled. I've already tried this with Mikrotik switches and two machines with RDMA-capable Mellanox NICs: neither Windows Server nor VMware ESXi enabled the RDMA features when plugged into the Mikrotik switches. If I swap in a Dell PowerON switch and enable/configure RDMA on the connected ports, the support immediately shows up on both OSes.
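To make the "end-to-end validation" point concrete, here is a toy sketch of the logic those stacks apply. This is not any vendor's actual handshake; the `Endpoint` class and the feature names are made up for illustration (real stacks probe the NIC driver and the fabric via DCBX instead), but the all-or-nothing check is the behavior described above:

```python
# Toy model of end-to-end RDMA capability negotiation.
# Feature names and the Endpoint class are invented for illustration;
# real stacks (S2D, vSAN, Ceph) query drivers/firmware and DCBX state.

REQUIRED_FEATURES = {"roce_v2", "pfc", "ets", "dcbx"}

class Endpoint:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)

def rdma_enabled(a, b):
    """RDMA is enabled only if *both* ends advertise every required feature."""
    return REQUIRED_FEATURES <= a.features and REQUIRED_FEATURES <= b.features

server = Endpoint("server", {"roce_v2", "pfc", "ets", "dcbx"})
# Path through a plain commodity switch: frames pass, but no DCB features.
plain_path = Endpoint("client-via-plain-switch", {"roce_v2"})
# Path through a DCB-capable switch with RDMA configured on the ports.
dcb_path = Endpoint("client-via-dcb-switch", {"roce_v2", "pfc", "ets", "dcbx"})

print(rdma_enabled(server, plain_path))  # False -> stack falls back to TCP
print(rdma_enabled(server, dcb_path))    # True  -> RDMA gets enabled
```

The point is that the switch silently passing RoCE frames is not enough; the missing advertised capabilities are what makes the OS refuse to turn the feature on.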
However, for truly demanding applications such as massive central data storage, which require high throughput and extremely low latency, flow control, packet prioritization, and buffering must be handled by specialized hardware (an ASIC) to achieve so-called "lossless Ethernet". This is normally not supported by the SoC in standard switches.
It doesn't need to be massive. Any modern SMB file share can have RDMA enabled on Windows Server. Also, the SoC on Mikrotik devices is the CPU; what they call the "Switch Chip" in their block diagrams is in fact an ASIC, and according to the chip manufacturer, it does support RDMA.
The only task lossless Ethernet has is to minimize NIC-to-NIC retransmissions to ensure optimal performance under high load; otherwise the RoCE endpoints themselves need to perform the actual retransmission and point-to-point congestion control.
That is not true. It is not only about saving on retransmissions. RDMA enables a wide variety of scenarios in the field, such as remote GPU direct access. Also, the benefit of RDMA is not for the middle-man equipment like switches/routers: it offloads the whole networking process from the CPU down to the NIC, with huge performance gains and cost savings. An application can allocate buffers straight in the network card's memory, bypassing the whole kernel and any CPU-related stack. That gives the application orders of magnitude better performance, since it won't suffer from I/O scheduling problems once the CPU is removed from the equation.
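The kernel-bypass point can be illustrated with a loose analogy in Python. This is emphatically not RDMA code (real applications would go through something like libibverbs or Windows NetworkDirect); it only contrasts a copy-per-hop path with a register-once, reference-many path:

```python
# Analogy only: a memoryview gives zero-copy views of a buffer, loosely
# like an RDMA-registered buffer the NIC reads directly, versus the
# copy-at-each-hop path of a traditional kernel socket stack.

payload = bytearray(b"x" * 1_000_000)

# "Kernel path" analogy: each hop (app -> kernel -> NIC) copies the data.
copy_hop_1 = bytes(payload)      # copy #1 (app buffer -> kernel buffer)
copy_hop_2 = bytes(copy_hop_1)   # copy #2 (kernel buffer -> NIC ring)

# "RDMA path" analogy: the buffer is registered once, then only referenced.
registered = memoryview(payload)  # no copy
window = registered[0:4096]       # still no copy, just a view into it

assert window.obj is payload      # the view points at the original buffer
```

The real win is larger than avoided copies, of course: with actual RDMA the remote NIC reads/writes that registered memory with no CPU or kernel involvement on the data path at all.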
CXL will eventually replace RoCE/IB.
Not true either (well, maybe for IB). Large cloud service providers like AWS and Azure have invested heavily in RDMA with RoCE given its benefits. For Azure, for example, there are papers showing that 80%+ of their network traffic is driven by RDMA with RoCE, which represented 60%+ cost savings in CPU resources and power usage for the workloads that use it. I really doubt it will be replaced by anything in the next decade or so. Besides, the state of CXL right now is a big mess, with no major NIC manufacturer wanting to support it, and support in major OSes is just non-existent. Don't hold your breath on this.
Nice sources
I could believe you if you shared some credible sources, but... Broadcom?
Jokes aside: hardware-wise, Mikrotik has everything they need to implement it. Market-wise, there is a whole segment of small to mid-size businesses that could leverage the savings and power of RDMA networking, but today they either have to rely on switchless deployments (which are bad overall) or purchase switches like Dell Power gear that are sometimes more expensive than their own storage or compute hardware, forcing them to give up on RDMA and leave that power untapped.
This is not a niche or "massive data" feature as you said. It is just a very optimized way of doing networking that customers of any size can leverage, especially because RDMA doesn't require any license or patent fees from anyone.