Commit 45ee046

WeiFang-NXP authored and gregkh committed
net: stmmac: fix the crash issue for zero copy XDP_TX action
[ Upstream commit a48e232 ] There is a crash issue when running zero copy XDP_TX action, the crash log is shown below. [ 216.122464] Unable to handle kernel paging request at virtual address fffeffff80000000 [ 216.187524] Internal error: Oops: 0000000096000144 [#1] SMP [ 216.301694] Call trace: [ 216.304130] dcache_clean_poc+0x20/0x38 (P) [ 216.308308] __dma_sync_single_for_device+0x1bc/0x1e0 [ 216.313351] stmmac_xdp_xmit_xdpf+0x354/0x400 [ 216.317701] __stmmac_xdp_run_prog+0x164/0x368 [ 216.322139] stmmac_napi_poll_rxtx+0xba8/0xf00 [ 216.326576] __napi_poll+0x40/0x218 [ 216.408054] Kernel panic - not syncing: Oops: Fatal exception in interrupt For XDP_TX action, the xdp_buff is converted to xdp_frame by xdp_convert_buff_to_frame(). The memory type of the resulting xdp_frame depends on the memory type of the xdp_buff. For page pool based xdp_buff it produces xdp_frame with memory type MEM_TYPE_PAGE_POOL. For zero copy XSK pool based xdp_buff it produces xdp_frame with memory type MEM_TYPE_PAGE_ORDER0. However, stmmac_xdp_xmit_back() does not check the memory type and always uses the page pool type, this leads to invalid mappings and causes the crash. Therefore, check the xdp_buff memory type in stmmac_xdp_xmit_back() to fix this issue. Fixes: bba2556 ("net: stmmac: Enable RX via AF_XDP zero-copy") Signed-off-by: Wei Fang <wei.fang@nxp.com> Reviewed-by: Hariprasad Kelam <hkelam@marvell.com> Link: https://patch.msgid.link/20251204071332.1907111-1-wei.fang@nxp.com Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
1 parent aa743b0 commit 45ee046

1 file changed

Lines changed: 15 additions & 2 deletions

drivers/net/ethernet/stmicro/stmmac/stmmac_main.c

@@ -87,6 +87,7 @@ MODULE_PARM_DESC(phyaddr, "Physical device address");
 #define STMMAC_XDP_CONSUMED BIT(0)
 #define STMMAC_XDP_TX BIT(1)
 #define STMMAC_XDP_REDIRECT BIT(2)
+#define STMMAC_XSK_CONSUMED BIT(3)
 
 static int flow_ctrl = FLOW_AUTO;
 module_param(flow_ctrl, int, 0644);
@@ -4998,6 +4999,7 @@ static int stmmac_xdp_get_tx_queue(struct stmmac_priv *priv,
 static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
 				struct xdp_buff *xdp)
 {
+	bool zc = !!(xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL);
 	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
 	int cpu = smp_processor_id();
 	struct netdev_queue *nq;
@@ -5014,9 +5016,18 @@ static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
 	/* Avoids TX time-out as we are sharing with slow path */
 	txq_trans_cond_update(nq);
 
-	res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, false);
-	if (res == STMMAC_XDP_TX)
+	/* For zero copy XDP_TX action, dma_map is true */
+	res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, zc);
+	if (res == STMMAC_XDP_TX) {
 		stmmac_flush_tx_descriptors(priv, queue);
+	} else if (res == STMMAC_XDP_CONSUMED && zc) {
+		/* xdp has been freed by xdp_convert_buff_to_frame(),
+		 * no need to call xsk_buff_free() again, so return
+		 * STMMAC_XSK_CONSUMED.
+		 */
+		res = STMMAC_XSK_CONSUMED;
+		xdp_return_frame(xdpf);
+	}
 
 	__netif_tx_unlock(nq);
 
@@ -5366,6 +5377,8 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
 		break;
 	case STMMAC_XDP_CONSUMED:
 		xsk_buff_free(buf->xdp);
+		fallthrough;
+	case STMMAC_XSK_CONSUMED:
 		rx_dropped++;
 		break;
 	case STMMAC_XDP_TX:
