MLXCX(4D)                   Device Drivers Manual                  MLXCX(4D)

NAME


mlxcx - Mellanox ConnectX-4/5/6 Ethernet controller driver

SYNOPSIS


/dev/net/mlxcx*

DESCRIPTION


The mlxcx driver is a GLDv3 NIC driver for the ConnectX-4, ConnectX-4 Lx,
ConnectX-5 and ConnectX-6 families of Ethernet controllers from Mellanox.
It supports the Data Link Provider Interface, dlpi(4P).

This driver supports:

- Jumbo frames up to 9000 bytes.

- Checksum offload for TCP, UDP, IPv4 and IPv6.

- Group support with VLAN and MAC steering to avoid software
classification when using VNICs.

- Promiscuous access via snoop(8) and dlpi(4P).

- LED control.

- Transceiver information.

- Internal temperature sensors.

At this time, the driver does not support Large Send Offload (LSO),
Energy Efficient Ethernet (EEE), or the use of flow control through
hardware pause frames.

CONFIGURATION


The mlxcx.conf file contains user-configurable parameters, including the
ability to set the number of rings and groups advertised to MAC, the sizes
of rings and groups, and the maximum number of MAC address filters
available.

PROPERTIES


The driver supports the following device properties which may be tuned
through its driver.conf file, /kernel/drv/mlxcx.conf. These properties
cannot be changed after the driver has been attached.

These properties are not considered stable at this time, and may change.
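
Properties are set in mlxcx.conf using the syntax described in
driver.conf(5). As an illustrative sketch only (the values below are
examples, not recommendations), an entry takes the form:

    # Example /kernel/drv/mlxcx.conf fragment
    cq_size_shift=11;
    rx_ngroups_small=128;

Because the properties are read when the driver attaches, changes take
effect only after the driver is reloaded, typically at the next
reboot.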

eq_size_shift
Minimum: 2 | Maximum: device dependent (up to 255)

The eq_size_shift property determines the number of entries on
Event Queues for the device. The number of entries is calculated
as (1 << eq_size_shift), so a value of 9 would mean 512 entries are
created on each Event Queue. The default value is 9.

cq_size_shift
Minimum: 2 | Maximum: device dependent (up to 255)

The cq_size_shift property determines the number of entries on
Completion Queues for the device. The number of entries is
calculated as (1 << cq_size_shift), so a value of 9 would mean 512
entries are created on each Completion Queue. The default value is
device dependent, 10 for devices with maximum supported speed of
10Gb/s or less and 12 for devices with higher supported speeds.
This should be kept very close to the value set for rq_size_shift
and sq_size_shift.

rq_size_shift
Minimum: 2 | Maximum: device dependent (up to 255)

The rq_size_shift property determines the number of descriptors on
Receive Queues for the device. The number of descriptors is
calculated as (1 << rq_size_shift), so a value of 9 would mean 512
descriptors are created on each Receive Queue. This sets the
number of packets on RX rings advertised to MAC. The default value
is device dependent, 10 for devices with maximum supported speed of
10Gb/s or less and 12 for devices with higher supported speeds.

sq_size_shift
Minimum: 2 | Maximum: device dependent (up to 255)

The sq_size_shift property determines the number of descriptors on
Send Queues for the device. The number of descriptors is
calculated as (1 << sq_size_shift), so a value of 9 would mean 512
descriptors are created on each Send Queue. This sets the number
of packets on TX rings advertised to MAC. The default value is
device dependent, 11 for devices with maximum supported speed of
10Gb/s or less and 13 for devices with higher supported speeds.
Note that large packets often occupy more than one descriptor slot
on the SQ, so it may be worth increasing this value when using a
large MTU.
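
For example, a configuration aimed at a 9000-byte MTU might enlarge
the Send and Completion Queues together, keeping their sizes close as
recommended above. This is an illustrative sketch only, not a
recommendation:

    # Illustrative: larger SQs and CQs for jumbo-frame workloads
    sq_size_shift=13;
    cq_size_shift=13;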

tx_ngroups
Minimum: 1 | Maximum: device dependent

The tx_ngroups property determines the number of TX groups
advertised to MAC. The default value is 1.

tx_nrings_per_group
Minimum: 1 | Maximum: device dependent

The tx_nrings_per_group property determines the number of rings in
each TX group advertised to MAC. The default value is 64.

rx_ngroups_large
Minimum: 1 | Maximum: device dependent

The rx_ngroups_large property determines the number of "large" RX
groups advertised to MAC. The size of "large" RX groups is set by
the rx_nrings_per_large_group property. The default value is 2.

rx_nrings_per_large_group
Minimum: 1 | Maximum: device dependent

The rx_nrings_per_large_group property determines the number of
rings in each "large" RX group advertised to MAC. The number of
such groups is determined by the rx_ngroups_large property. The
default value is 16.

rx_ngroups_small
Minimum: 1 | Maximum: device dependent

The rx_ngroups_small property determines the number of "small" RX
groups advertised to MAC. The size of "small" RX groups is set by
the rx_nrings_per_small_group property. It is recommended to use
many small groups when using a large number of VNICs on top of the
NIC (e.g. on a system with many zones). The default value is 256.

rx_nrings_per_small_group
Minimum: 1 | Maximum: device dependent

The rx_nrings_per_small_group property determines the number of
rings in each "small" RX group advertised to MAC. The number of
such groups is determined by the rx_ngroups_small property. The
default value is 4.

ftbl_root_size_shift
Minimum: 4 | Maximum: device dependent

The ftbl_root_size_shift property determines the number of flow
table entries on the root flow table, and therefore how many MAC
addresses can be filtered into groups across the entire NIC. The
number of flow entries is calculated as (1 <<
ftbl_root_size_shift), so a value of 9 would mean 512 entries are
created in the root flow table. The default value is 12.
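
As an illustration, a host running many VNICs (for example, a system
with many zones) might combine more small RX groups with a larger
root flow table. The following sketch uses example values only and is
subject to device-dependent limits:

    # Illustrative: favour many small RX groups for VNIC-heavy hosts
    rx_ngroups_small=512;
    rx_nrings_per_small_group=2;
    ftbl_root_size_shift=13;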

cqemod_period_usec
Minimum: 1 | Maximum: 65535

The cqemod_period_usec property determines the maximum delay after
a completion event has occurred before an event queue entry (and
thus an interrupt) is generated. The delay is measured in
microseconds. The default value is 50.

cqemod_count
Minimum: 1 | Maximum: 65535

The cqemod_count property determines the maximum number of
completion events that may accumulate before an event queue
entry (and thus an interrupt) is generated. The default value is
80% of the CQ size.

intrmod_period_usec
Minimum: 1 | Maximum: 65535

The intrmod_period_usec property determines the maximum delay after
an event queue entry has been generated before an interrupt is
raised. The delay is measured in microseconds. The default value
is 10.
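
Together, the cqemod_* and intrmod_* properties trade interrupt rate
against latency: larger values batch more work per interrupt, while
smaller values deliver events sooner. A latency-leaning tuning might
look like the following sketch (example values only):

    # Illustrative: lower moderation delays for latency-sensitive work
    cqemod_period_usec=20;
    intrmod_period_usec=5;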

tx_bind_threshold
Minimum: 1 | Maximum: 65535

The tx_bind_threshold property determines the minimum number of
bytes in a packet before the driver uses
ddi_dma_addr_bind_handle(9F) to bind the packet memory for DMA,
rather than copying the memory as it does for small packets. DMA
binds are expensive and involve taking locks in the PCI nexus
driver, so it is seldom worth using them for small packets. The
default value is 2048.

rx_limit_per_completion
Minimum: 16 | Maximum: 4096

The rx_limit_per_completion property determines the maximum number
of packets that will be processed on a given completion ring during
a single interrupt. This limit helps preserve some degree of
liveness in the system under heavy receive load. The default value
is 256.
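
As a final illustration, on a system where DMA binds prove
particularly expensive, tx_bind_threshold can be raised so that more
packets are copied rather than bound, and rx_limit_per_completion can
be lowered to bound the time spent in any single interrupt. The
values below are examples only:

    # Illustrative: copy more TX packets, cap per-interrupt RX work
    tx_bind_threshold=4096;
    rx_limit_per_completion=128;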

FILES


/kernel/drv/amd64/mlxcx Device driver (x86)

/kernel/drv/mlxcx.conf Driver configuration file containing
user-configurable options

SEE ALSO


dlpi(4P), driver.conf(5), dladm(8), snoop(8)

OmniOS August 27, 2020 OmniOS