author     Linus Torvalds <torvalds@linux-foundation.org>   2022-08-03 16:29:08 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2022-08-03 16:29:08 -0700
commit     f86d1fbbe7858884d6754534a0afbb74fc30bc26 (patch)
tree       f61796870edefbe77d495e9d719c68af1d14275b /drivers/net/ethernet/mediatek
parent     526942b8134cc34d25d27f95dfff98b8ce2f6fcd (diff)
parent     7c6327c77d509e78bff76f2a4551fcfee851682e (diff)
Merge tag 'net-next-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking changes from Paolo Abeni:
"Core:
- Refactor the forward memory allocation to better cope with memory
pressure when many sockets are open, moving from a per-socket cache
to a per-CPU one
- Replace rwlocks with RCU for better fairness in ping, raw sockets
and IP multicast router.
- Network-side support for io_uring zero-copy send.
- A few skb drop reason improvements, including generating the
string-mapping source file at build time instead of using macro magic.
- Rename reference tracking helpers to a more consistent netdev_*
schema.
- Adapt u64_stats_t type to address load/store tearing issues (see
the sketch after this list).
- Refine debug helper usage to reduce the log noise caused by bots.
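The u64_stats_t item above is easiest to see as code. Below is a minimal sketch of the pattern; the struct and function names (demo_stats, demo_stats_inc, demo_stats_read) are hypothetical, not from this tree. u64_stats_t accesses avoid load/store tearing, and the u64_stats_sync seqcount lets readers on 32-bit systems retry until they observe a consistent snapshot.

```c
#include <linux/u64_stats_sync.h>

/* Hypothetical counter pair; names are illustrative only. */
struct demo_stats {
	u64_stats_t rx_packets;		/* tear-free 64-bit counters */
	u64_stats_t rx_bytes;
	struct u64_stats_sync syncp;	/* seqcount on 32-bit, no-op on 64-bit */
};

/* Writer side, e.g. the datapath. */
static void demo_stats_inc(struct demo_stats *s, unsigned int len)
{
	u64_stats_update_begin(&s->syncp);
	u64_stats_add(&s->rx_packets, 1);
	u64_stats_add(&s->rx_bytes, len);
	u64_stats_update_end(&s->syncp);
}

/* Reader side: retry until the snapshot is consistent. */
static void demo_stats_read(struct demo_stats *s, u64 *pkts, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin(&s->syncp);
		*pkts = u64_stats_read(&s->rx_packets);
		*bytes = u64_stats_read(&s->rx_bytes);
	} while (u64_stats_fetch_retry(&s->syncp, start));
}
```

The MediaTek diff further down uses the same syncp discipline around its new XDP counters.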
BPF:
- Improve socket map performance, avoiding skb cloning on read
operations.
- Add support for 64-bit enums, to match the types exposed by the
kernel.
- Introduce support for sleepable uprobe programs.
- Introduce support for enum textual representation in libbpf.
- New helpers to implement synproxy with eBPF/XDP.
- Improve loop performance by inlining indirect calls when possible.
- Remove all the deprecated libbpf APIs (a loader using the current
API is sketched after this list).
- Implement new eBPF-based LSM flavor.
- Add type match support, which allows accurate queries on the types
used by eBPF programs.
- A few TCP congestion control framework usability improvements.
- Add new infrastructure to manipulate CT entries via eBPF programs.
- Allow for livepatch (KLP) and BPF trampolines to attach to the same
kernel function.
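To ground the libbpf item above: with the deprecated APIs gone, user space is left with the object-oriented open/load/attach flow. A minimal sketch assuming libbpf 1.0 semantics; the object file "demo.bpf.o" and program name "handler" are placeholders, not anything from this tree.

```c
/* build: gcc demo_load.c -lbpf */
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	struct bpf_link *link;
	int err = 1;

	obj = bpf_object__open_file("demo.bpf.o", NULL);	/* placeholder */
	if (!obj) {
		fprintf(stderr, "open failed\n");
		return 1;
	}

	if (bpf_object__load(obj)) {
		fprintf(stderr, "load failed\n");
		goto out;
	}

	prog = bpf_object__find_program_by_name(obj, "handler"); /* placeholder */
	link = prog ? bpf_program__attach(prog) : NULL;
	if (!link) {
		fprintf(stderr, "attach failed\n");
		goto out;
	}

	/* ... the program stays attached for as long as the link lives ... */

	bpf_link__destroy(link);
	err = 0;
out:
	bpf_object__close(obj);
	return err;
}
```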
Protocols:
- Introduce per network namespace lookup tables for unix sockets,
increasing scalability and reducing contention.
- Preparation work for Wi-Fi 7 Multi-Link Operation (MLO) support.
- Add support to forcibly close TIME_WAIT TCP sockets via user-space
tools.
- Significant performance improvement for the TLS 1.3 receive path,
for both the zero-copy and non-zero-copy cases.
- Support for changing the initial MPTCP subflow priority/backup
status.
- Introduce virtually contiguous buffers for sockets over RDMA, to
cope better with memory pressure.
- Extend CAN ethtool support with timestamping capabilities (a
user-space sketch follows this list).
- Refactor CAN build infrastructure to allow building only the needed
features.
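For the CAN timestamping item above: the ethtool side advertises the capability (get_ts_info), while user space turns hardware stamping on through the generic SIOCSHWTSTAMP ioctl. A hedged sketch of that user-space step; the interface name "can0" is an assumption, and the ioctl only succeeds on drivers that actually implement it.

```c
/* build: gcc can_hwts.c -o can_hwts */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>
#include <linux/net_tstamp.h>
#include <linux/can.h>

int main(void)
{
	struct hwtstamp_config cfg;
	struct ifreq ifr;
	int fd;

	memset(&cfg, 0, sizeof(cfg));
	cfg.tx_type = HWTSTAMP_TX_ON;		/* stamp transmitted frames */
	cfg.rx_filter = HWTSTAMP_FILTER_ALL;	/* stamp all received frames */

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);	/* placeholder ifname */
	ifr.ifr_data = (char *)&cfg;

	fd = socket(PF_CAN, SOCK_RAW, CAN_RAW);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* SIOCSHWTSTAMP hands the request to the driver; drivers without
	 * hardware timestamping support reject it with EOPNOTSUPP. */
	if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0) {
		perror("SIOCSHWTSTAMP");
		return 1;
	}

	printf("hardware timestamping enabled on %s\n", ifr.ifr_name);
	return 0;
}
```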
Driver API:
- Remove devlink mutex to allow parallel commands on multiple links.
- Add support for pause stats in distributed switch.
- Implement devlink helpers to query and flash line cards.
- New helper for phy mode to register conversion.
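The phy-mode conversion item refers to the recurring chore of mapping a phy_interface_t onto SoC-specific register bits; the mtk_star changes in the diff below do exactly this in mt8516_set_interface_mode(). A sketch of the pattern with made-up register names (DEMO_*); this is an illustration of the idiom, not the helper the bullet refers to.

```c
#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/phy.h>
#include <linux/regmap.h>

/* Made-up register layout standing in for an SoC's pericfg block;
 * compare mt8516_set_interface_mode() in the diff below. */
#define DEMO_REG_NIC_CFG	0x03c4
#define DEMO_NIC_CFG_INTF	GENMASK(3, 0)
#define DEMO_NIC_INTF_MII	0
#define DEMO_NIC_INTF_RMII	1

static int demo_set_interface_mode(struct regmap *pericfg,
				   phy_interface_t intf)
{
	unsigned int val;

	switch (intf) {
	case PHY_INTERFACE_MODE_MII:
		val = DEMO_NIC_INTF_MII;
		break;
	case PHY_INTERFACE_MODE_RMII:
		val = DEMO_NIC_INTF_RMII;
		break;
	default:
		return -EINVAL;	/* mode not wired up on this SoC */
	}

	/* Read-modify-write just the interface field. */
	return regmap_update_bits(pericfg, DEMO_REG_NIC_CFG,
				  DEMO_NIC_CFG_INTF, val);
}
```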
New hardware / drivers:
- Ethernet DSA driver for the rockchip mt7531 on BPI-R2 Pro.
- Ethernet DSA driver for the Renesas RZ/N1 A5PSW switch.
- Ethernet DSA driver for the Microchip LAN937x switch.
- Ethernet PHY driver for the Aquantia AQR113C EPHY.
- CAN driver for the OBD-II ELM327 interface.
- CAN driver for RZ/N1 SJA1000 CAN controller.
- Bluetooth: Infineon CYW55572 Wi-Fi plus Bluetooth combo device.
Drivers:
- Intel Ethernet NICs:
- i40e: add support for vlan pruning
- i40e: add support for XDP fragmented packets
- ice: improved vlan offload support
- ice: add support for PPPoE offload
- Mellanox Ethernet (mlx5)
- refactor packet steering offload for performance and scalability
- extend support for TC offload
- refactor devlink code to clean-up the locking schema
- support stacked vlans for bridge offloads
- use TLS objects pool to improve connection rate
- Netronome Ethernet NICs (nfp):
- extend support for IPv6 fields mangling offload
- add support for VEPA mode in HW bridge
- better support for virtio data path acceleration (VDPA)
- enable TSO by default
- Microsoft vNIC driver (mana)
- add support for XDP redirect
- Others Ethernet drivers:
- bonding: add per-port priority support
- microchip lan743x: extend phy support
- Fungible funeth: support UDP segmentation offload and XDP xmit
- Solarflare EF100: add support for virtual function representors
- MediaTek SoC: add XDP support
- Mellanox Ethernet/IB switch (mlxsw):
- dropped support for unreleased H/W (XM router).
- improved stats accuracy
- unified bridge model conversion, improving scalability (parts 1-6)
- support for PTP in Spectrum-2 ASICs
- Broadcom PHYs
- add PTP support for BCM54210E
- add support for the BCM53128 internal PHY
- Marvell Ethernet switches (prestera):
- implement support for multicast forwarding offload
- Embedded Ethernet switches:
- refactor OcteonTx MAC filter for better scalability
- improve TC H/W offload for the Felix driver
- refactor the Microchip ksz8 and ksz9477 drivers to share the
probe code (parts 1, 2), add support for phylink mac
configuration
- Other WiFi:
- Microchip wilc1000: disable WEP support and enable WPA3
- Atheros ath10k: encapsulation offload support
Old code removal:
- Neterion vxge ethernet driver: this has been untouched for more than 10 years"
* tag 'net-next-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1890 commits)
doc: sfp-phylink: Fix a broken reference
wireguard: selftests: support UML
wireguard: allowedips: don't corrupt stack when detecting overflow
wireguard: selftests: update config fragments
wireguard: ratelimiter: use hrtimer in selftest
net/mlx5e: xsk: Discard unaligned XSK frames on striding RQ
net: usb: ax88179_178a: Bind only to vendor-specific interface
selftests: net: fix IOAM test skip return code
net: usb: make USB_RTL8153_ECM non user configurable
net: marvell: prestera: remove reduntant code
octeontx2-pf: Reduce minimum mtu size to 60
net: devlink: Fix missing mutex_unlock() call
net/tls: Remove redundant workqueue flush before destroy
net: txgbe: Fix an error handling path in txgbe_probe()
net: dsa: Fix spelling mistakes and cleanup code
Documentation: devlink: add add devlink-selftests to the table of contents
dccp: put dccp_qpolicy_full() and dccp_qpolicy_push() in the same lock
net: ionic: fix error check for vlan flags in ionic_set_nic_features()
net: ice: fix error NETIF_F_HW_VLAN_CTAG_FILTER check in ice_vsi_sync_fltr()
nfp: flower: add support for tunnel offload without key ID
...
Diffstat (limited to 'drivers/net/ethernet/mediatek')
-rw-r--r--  drivers/net/ethernet/mediatek/Kconfig            |   2
-rw-r--r--  drivers/net/ethernet/mediatek/mtk_eth_soc.c      | 668
-rw-r--r--  drivers/net/ethernet/mediatek/mtk_eth_soc.h      |  34
-rw-r--r--  drivers/net/ethernet/mediatek/mtk_ppe_offload.c  |  30
-rw-r--r--  drivers/net/ethernet/mediatek/mtk_star_emac.c    | 529
5 files changed, 986 insertions, 277 deletions
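Reading aid for the diff that follows: most of the mtk_eth_soc.c churn converts the RX rings to the page_pool allocator so that XDP buffers can be DMA-mapped and recycled by the core. A condensed sketch of the pool setup from mtk_create_page_pool() below; error handling and XDP rxq registration are omitted, and the real code also reserves skb_shared_info tailroom when computing max_len.

```c
#include <linux/bpf.h>		/* XDP_PACKET_HEADROOM */
#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Condensed from mtk_create_page_pool() in the diff below. */
static struct page_pool *demo_create_pool(struct device *dma_dev,
					  unsigned int pool_size, bool xdp)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per buffer */
		.flags		= PP_FLAG_DMA_MAP |	/* pool maps pages ... */
				  PP_FLAG_DMA_SYNC_DEV,	/* ... and syncs for device */
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dma_dev,
		.offset		= XDP_PACKET_HEADROOM,	/* headroom for XDP */
		.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
		/* XDP_TX writes into the buffer, so map bidirectionally then */
		.dma_dir	= xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}
```

Buffers are then taken with page_pool_alloc_pages() and returned with page_pool_put_full_page(); skb_mark_for_recycle() is what lets completed skbs hand their pages back to the pool, as in mtk_page_pool_get_buff() and mtk_rx_put_buff() below.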
diff --git a/drivers/net/ethernet/mediatek/Kconfig b/drivers/net/ethernet/mediatek/Kconfig index da4ec235d146..97374fb3ee79 100644 --- a/drivers/net/ethernet/mediatek/Kconfig +++ b/drivers/net/ethernet/mediatek/Kconfig @@ -17,6 +17,8 @@ config NET_MEDIATEK_SOC select PINCTRL select PHYLINK select DIMLIB + select PAGE_POOL + select PAGE_POOL_STATS help This driver supports the gigabit ethernet MACs in the MediaTek SoC family. diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 59c9a10f83ba..d9426b01f462 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -34,6 +34,10 @@ MODULE_PARM_DESC(msg_level, "Message level (-1=defaults,0=none,...,16=all)"); #define MTK_ETHTOOL_STAT(x) { #x, \ offsetof(struct mtk_hw_stats, x) / sizeof(u64) } +#define MTK_ETHTOOL_XDP_STAT(x) { #x, \ + offsetof(struct mtk_hw_stats, xdp_stats.x) / \ + sizeof(u64) } + static const struct mtk_reg_map mtk_reg_map = { .tx_irq_mask = 0x1a1c, .tx_irq_status = 0x1a18, @@ -141,6 +145,13 @@ static const struct mtk_ethtool_stats { MTK_ETHTOOL_STAT(rx_long_errors), MTK_ETHTOOL_STAT(rx_checksum_errors), MTK_ETHTOOL_STAT(rx_flow_control_packets), + MTK_ETHTOOL_XDP_STAT(rx_xdp_redirect), + MTK_ETHTOOL_XDP_STAT(rx_xdp_pass), + MTK_ETHTOOL_XDP_STAT(rx_xdp_drop), + MTK_ETHTOOL_XDP_STAT(rx_xdp_tx), + MTK_ETHTOOL_XDP_STAT(rx_xdp_tx_errors), + MTK_ETHTOOL_XDP_STAT(tx_xdp_xmit), + MTK_ETHTOOL_XDP_STAT(tx_xdp_xmit_errors), }; static const char * const mtk_clks_source_name[] = { @@ -990,7 +1001,7 @@ static int txd_to_idx(struct mtk_tx_ring *ring, void *dma, u32 txd_size) } static void mtk_tx_unmap(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, - bool napi) + struct xdp_frame_bulk *bq, bool napi) { if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { if (tx_buf->flags & MTK_TX_FLAGS_SINGLE0) { @@ -1020,15 +1031,27 @@ static void mtk_tx_unmap(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, } } - tx_buf->flags = 0; - if (tx_buf->skb && - (tx_buf->skb != (struct sk_buff *)MTK_DMA_DUMMY_DESC)) { - if (napi) - napi_consume_skb(tx_buf->skb, napi); - else - dev_kfree_skb_any(tx_buf->skb); + if (tx_buf->data && tx_buf->data != (void *)MTK_DMA_DUMMY_DESC) { + if (tx_buf->type == MTK_TYPE_SKB) { + struct sk_buff *skb = tx_buf->data; + + if (napi) + napi_consume_skb(skb, napi); + else + dev_kfree_skb_any(skb); + } else { + struct xdp_frame *xdpf = tx_buf->data; + + if (napi && tx_buf->type == MTK_TYPE_XDP_TX) + xdp_return_frame_rx_napi(xdpf); + else if (bq) + xdp_return_frame_bulk(xdpf, bq); + else + xdp_return_frame(xdpf); + } } - tx_buf->skb = NULL; + tx_buf->flags = 0; + tx_buf->data = NULL; } static void setup_tx_buf(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, @@ -1045,7 +1068,7 @@ static void setup_tx_buf(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, dma_unmap_addr_set(tx_buf, dma_addr1, mapped_addr); dma_unmap_len_set(tx_buf, dma_len1, size); } else { - tx_buf->skb = (struct sk_buff *)MTK_DMA_DUMMY_DESC; + tx_buf->data = (void *)MTK_DMA_DUMMY_DESC; txd->txd1 = mapped_addr; txd->txd2 = TX_DMA_PLEN0(size); dma_unmap_addr_set(tx_buf, dma_addr0, mapped_addr); @@ -1221,7 +1244,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, soc->txrx.txd_size); if (new_desc) memset(tx_buf, 0, sizeof(*tx_buf)); - tx_buf->skb = (struct sk_buff *)MTK_DMA_DUMMY_DESC; + tx_buf->data = (void *)MTK_DMA_DUMMY_DESC; tx_buf->flags |= MTK_TX_FLAGS_PAGE0; tx_buf->flags |= (!mac->id) ? 
MTK_TX_FLAGS_FPORT0 : MTK_TX_FLAGS_FPORT1; @@ -1235,7 +1258,8 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, } /* store skb to cleanup */ - itx_buf->skb = skb; + itx_buf->type = MTK_TYPE_SKB; + itx_buf->data = skb; if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { if (k & 0x1) @@ -1274,7 +1298,7 @@ err_dma: tx_buf = mtk_desc_to_tx_buf(ring, itxd, soc->txrx.txd_size); /* unmap dma */ - mtk_tx_unmap(eth, tx_buf, false); + mtk_tx_unmap(eth, tx_buf, NULL, false); itxd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) @@ -1432,11 +1456,320 @@ static void mtk_update_rx_cpu_idx(struct mtk_eth *eth) } } +static bool mtk_page_pool_enabled(struct mtk_eth *eth) +{ + return !eth->hwlro; +} + +static struct page_pool *mtk_create_page_pool(struct mtk_eth *eth, + struct xdp_rxq_info *xdp_q, + int id, int size) +{ + struct page_pool_params pp_params = { + .order = 0, + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV, + .pool_size = size, + .nid = NUMA_NO_NODE, + .dev = eth->dma_dev, + .offset = MTK_PP_HEADROOM, + .max_len = MTK_PP_MAX_BUF_SIZE, + }; + struct page_pool *pp; + int err; + + pp_params.dma_dir = rcu_access_pointer(eth->prog) ? DMA_BIDIRECTIONAL + : DMA_FROM_DEVICE; + pp = page_pool_create(&pp_params); + if (IS_ERR(pp)) + return pp; + + err = __xdp_rxq_info_reg(xdp_q, ð->dummy_dev, eth->rx_napi.napi_id, + id, PAGE_SIZE); + if (err < 0) + goto err_free_pp; + + err = xdp_rxq_info_reg_mem_model(xdp_q, MEM_TYPE_PAGE_POOL, pp); + if (err) + goto err_unregister_rxq; + + return pp; + +err_unregister_rxq: + xdp_rxq_info_unreg(xdp_q); +err_free_pp: + page_pool_destroy(pp); + + return ERR_PTR(err); +} + +static void *mtk_page_pool_get_buff(struct page_pool *pp, dma_addr_t *dma_addr, + gfp_t gfp_mask) +{ + struct page *page; + + page = page_pool_alloc_pages(pp, gfp_mask | __GFP_NOWARN); + if (!page) + return NULL; + + *dma_addr = page_pool_get_dma_addr(page) + MTK_PP_HEADROOM; + return page_address(page); +} + +static void mtk_rx_put_buff(struct mtk_rx_ring *ring, void *data, bool napi) +{ + if (ring->page_pool) + page_pool_put_full_page(ring->page_pool, + virt_to_head_page(data), napi); + else + skb_free_frag(data); +} + +static int mtk_xdp_frame_map(struct mtk_eth *eth, struct net_device *dev, + struct mtk_tx_dma_desc_info *txd_info, + struct mtk_tx_dma *txd, struct mtk_tx_buf *tx_buf, + void *data, u16 headroom, int index, bool dma_map) +{ + struct mtk_tx_ring *ring = ð->tx_ring; + struct mtk_mac *mac = netdev_priv(dev); + struct mtk_tx_dma *txd_pdma; + + if (dma_map) { /* ndo_xdp_xmit */ + txd_info->addr = dma_map_single(eth->dma_dev, data, + txd_info->size, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(eth->dma_dev, txd_info->addr))) + return -ENOMEM; + + tx_buf->flags |= MTK_TX_FLAGS_SINGLE0; + } else { + struct page *page = virt_to_head_page(data); + + txd_info->addr = page_pool_get_dma_addr(page) + + sizeof(struct xdp_frame) + headroom; + dma_sync_single_for_device(eth->dma_dev, txd_info->addr, + txd_info->size, DMA_BIDIRECTIONAL); + } + mtk_tx_set_dma_desc(dev, txd, txd_info); + + tx_buf->flags |= !mac->id ? MTK_TX_FLAGS_FPORT0 : MTK_TX_FLAGS_FPORT1; + tx_buf->type = dma_map ? 
MTK_TYPE_XDP_NDO : MTK_TYPE_XDP_TX; + tx_buf->data = (void *)MTK_DMA_DUMMY_DESC; + + txd_pdma = qdma_to_pdma(ring, txd); + setup_tx_buf(eth, tx_buf, txd_pdma, txd_info->addr, txd_info->size, + index); + + return 0; +} + +static int mtk_xdp_submit_frame(struct mtk_eth *eth, struct xdp_frame *xdpf, + struct net_device *dev, bool dma_map) +{ + struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf); + const struct mtk_soc_data *soc = eth->soc; + struct mtk_tx_ring *ring = ð->tx_ring; + struct mtk_tx_dma_desc_info txd_info = { + .size = xdpf->len, + .first = true, + .last = !xdp_frame_has_frags(xdpf), + }; + int err, index = 0, n_desc = 1, nr_frags; + struct mtk_tx_dma *htxd, *txd, *txd_pdma; + struct mtk_tx_buf *htx_buf, *tx_buf; + void *data = xdpf->data; + + if (unlikely(test_bit(MTK_RESETTING, ð->state))) + return -EBUSY; + + nr_frags = unlikely(xdp_frame_has_frags(xdpf)) ? sinfo->nr_frags : 0; + if (unlikely(atomic_read(&ring->free_count) <= 1 + nr_frags)) + return -EBUSY; + + spin_lock(ð->page_lock); + + txd = ring->next_free; + if (txd == ring->last_free) { + spin_unlock(ð->page_lock); + return -ENOMEM; + } + htxd = txd; + + tx_buf = mtk_desc_to_tx_buf(ring, txd, soc->txrx.txd_size); + memset(tx_buf, 0, sizeof(*tx_buf)); + htx_buf = tx_buf; + + for (;;) { + err = mtk_xdp_frame_map(eth, dev, &txd_info, txd, tx_buf, + data, xdpf->headroom, index, dma_map); + if (err < 0) + goto unmap; + + if (txd_info.last) + break; + + if (MTK_HAS_CAPS(soc->caps, MTK_QDMA) || (index & 0x1)) { + txd = mtk_qdma_phys_to_virt(ring, txd->txd2); + txd_pdma = qdma_to_pdma(ring, txd); + if (txd == ring->last_free) + goto unmap; + + tx_buf = mtk_desc_to_tx_buf(ring, txd, + soc->txrx.txd_size); + memset(tx_buf, 0, sizeof(*tx_buf)); + n_desc++; + } + + memset(&txd_info, 0, sizeof(struct mtk_tx_dma_desc_info)); + txd_info.size = skb_frag_size(&sinfo->frags[index]); + txd_info.last = index + 1 == nr_frags; + data = skb_frag_address(&sinfo->frags[index]); + + index++; + } + /* store xdpf for cleanup */ + htx_buf->data = xdpf; + + if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { + txd_pdma = qdma_to_pdma(ring, txd); + if (index & 1) + txd_pdma->txd2 |= TX_DMA_LS0; + else + txd_pdma->txd2 |= TX_DMA_LS1; + } + + ring->next_free = mtk_qdma_phys_to_virt(ring, txd->txd2); + atomic_sub(n_desc, &ring->free_count); + + /* make sure that all changes to the dma ring are flushed before we + * continue + */ + wmb(); + + if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { + mtk_w32(eth, txd->txd2, soc->reg_map->qdma.ctx_ptr); + } else { + int idx; + + idx = txd_to_idx(ring, txd, soc->txrx.txd_size); + mtk_w32(eth, NEXT_DESP_IDX(idx, ring->dma_size), + MT7628_TX_CTX_IDX0); + } + + spin_unlock(ð->page_lock); + + return 0; + +unmap: + while (htxd != txd) { + txd_pdma = qdma_to_pdma(ring, htxd); + tx_buf = mtk_desc_to_tx_buf(ring, htxd, soc->txrx.txd_size); + mtk_tx_unmap(eth, tx_buf, NULL, false); + + htxd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; + if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) + txd_pdma->txd2 = TX_DMA_DESP2_DEF; + + htxd = mtk_qdma_phys_to_virt(ring, htxd->txd2); + } + + spin_unlock(ð->page_lock); + + return err; +} + +static int mtk_xdp_xmit(struct net_device *dev, int num_frame, + struct xdp_frame **frames, u32 flags) +{ + struct mtk_mac *mac = netdev_priv(dev); + struct mtk_hw_stats *hw_stats = mac->hw_stats; + struct mtk_eth *eth = mac->hw; + int i, nxmit = 0; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + for (i = 0; i < num_frame; i++) { + if (mtk_xdp_submit_frame(eth, frames[i], dev, true)) + break; 
+ nxmit++; + } + + u64_stats_update_begin(&hw_stats->syncp); + hw_stats->xdp_stats.tx_xdp_xmit += nxmit; + hw_stats->xdp_stats.tx_xdp_xmit_errors += num_frame - nxmit; + u64_stats_update_end(&hw_stats->syncp); + + return nxmit; +} + +static u32 mtk_xdp_run(struct mtk_eth *eth, struct mtk_rx_ring *ring, + struct xdp_buff *xdp, struct net_device *dev) +{ + struct mtk_mac *mac = netdev_priv(dev); + struct mtk_hw_stats *hw_stats = mac->hw_stats; + u64 *count = &hw_stats->xdp_stats.rx_xdp_drop; + struct bpf_prog *prog; + u32 act = XDP_PASS; + + rcu_read_lock(); + + prog = rcu_dereference(eth->prog); + if (!prog) + goto out; + + act = bpf_prog_run_xdp(prog, xdp); + switch (act) { + case XDP_PASS: + count = &hw_stats->xdp_stats.rx_xdp_pass; + goto update_stats; + case XDP_REDIRECT: + if (unlikely(xdp_do_redirect(dev, xdp, prog))) { + act = XDP_DROP; + break; + } + + count = &hw_stats->xdp_stats.rx_xdp_redirect; + goto update_stats; + case XDP_TX: { + struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp); + + if (mtk_xdp_submit_frame(eth, xdpf, dev, false)) { + count = &hw_stats->xdp_stats.rx_xdp_tx_errors; + act = XDP_DROP; + break; + } + + count = &hw_stats->xdp_stats.rx_xdp_tx; + goto update_stats; + } + default: + bpf_warn_invalid_xdp_action(dev, prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dev, prog, act); + fallthrough; + case XDP_DROP: + break; + } + + page_pool_put_full_page(ring->page_pool, + virt_to_head_page(xdp->data), true); + +update_stats: + u64_stats_update_begin(&hw_stats->syncp); + *count = *count + 1; + u64_stats_update_end(&hw_stats->syncp); +out: + rcu_read_unlock(); + + return act; +} + static int mtk_poll_rx(struct napi_struct *napi, int budget, struct mtk_eth *eth) { struct dim_sample dim_sample = {}; struct mtk_rx_ring *ring; + bool xdp_flush = false; int idx; struct sk_buff *skb; u8 *data, *new_data; @@ -1444,8 +1777,8 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, int done = 0, bytes = 0; while (done < budget) { + unsigned int pktlen, *rxdcsum; struct net_device *netdev; - unsigned int pktlen; dma_addr_t dma_addr; u32 hash, reason; int mac = 0; @@ -1477,47 +1810,97 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, if (unlikely(test_bit(MTK_RESETTING, ð->state))) goto release_desc; + pktlen = RX_DMA_GET_PLEN0(trxd.rxd2); + /* alloc new buffer */ - if (ring->frag_size <= PAGE_SIZE) - new_data = napi_alloc_frag(ring->frag_size); - else - new_data = mtk_max_lro_buf_alloc(GFP_ATOMIC); - if (unlikely(!new_data)) { - netdev->stats.rx_dropped++; - goto release_desc; - } - dma_addr = dma_map_single(eth->dma_dev, - new_data + NET_SKB_PAD + - eth->ip_align, - ring->buf_size, - DMA_FROM_DEVICE); - if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) { - skb_free_frag(new_data); - netdev->stats.rx_dropped++; - goto release_desc; - } + if (ring->page_pool) { + struct page *page = virt_to_head_page(data); + struct xdp_buff xdp; + u32 ret; + + new_data = mtk_page_pool_get_buff(ring->page_pool, + &dma_addr, + GFP_ATOMIC); + if (unlikely(!new_data)) { + netdev->stats.rx_dropped++; + goto release_desc; + } + + dma_sync_single_for_cpu(eth->dma_dev, + page_pool_get_dma_addr(page) + MTK_PP_HEADROOM, + pktlen, page_pool_get_dma_dir(ring->page_pool)); + + xdp_init_buff(&xdp, PAGE_SIZE, &ring->xdp_q); + xdp_prepare_buff(&xdp, data, MTK_PP_HEADROOM, pktlen, + false); + xdp_buff_clear_frags_flag(&xdp); + + ret = mtk_xdp_run(eth, ring, &xdp, netdev); + if (ret == XDP_REDIRECT) + xdp_flush = true; - dma_unmap_single(eth->dma_dev, 
trxd.rxd1, - ring->buf_size, DMA_FROM_DEVICE); + if (ret != XDP_PASS) + goto skip_rx; + + skb = build_skb(data, PAGE_SIZE); + if (unlikely(!skb)) { + page_pool_put_full_page(ring->page_pool, + page, true); + netdev->stats.rx_dropped++; + goto skip_rx; + } + + skb_reserve(skb, xdp.data - xdp.data_hard_start); + skb_put(skb, xdp.data_end - xdp.data); + skb_mark_for_recycle(skb); + } else { + if (ring->frag_size <= PAGE_SIZE) + new_data = napi_alloc_frag(ring->frag_size); + else + new_data = mtk_max_lro_buf_alloc(GFP_ATOMIC); + + if (unlikely(!new_data)) { + netdev->stats.rx_dropped++; + goto release_desc; + } + + dma_addr = dma_map_single(eth->dma_dev, + new_data + NET_SKB_PAD + eth->ip_align, + ring->buf_size, DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(eth->dma_dev, + dma_addr))) { + skb_free_frag(new_data); + netdev->stats.rx_dropped++; + goto release_desc; + } + + dma_unmap_single(eth->dma_dev, trxd.rxd1, + ring->buf_size, DMA_FROM_DEVICE); + + skb = build_skb(data, ring->frag_size); + if (unlikely(!skb)) { + netdev->stats.rx_dropped++; + skb_free_frag(data); + goto skip_rx; + } - /* receive data */ - skb = build_skb(data, ring->frag_size); - if (unlikely(!skb)) { - skb_free_frag(data); - netdev->stats.rx_dropped++; - goto skip_rx; + skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN); + skb_put(skb, pktlen); } - skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN); - pktlen = RX_DMA_GET_PLEN0(trxd.rxd2); skb->dev = netdev; - skb_put(skb, pktlen); - if (trxd.rxd4 & eth->soc->txrx.rx_dma_l4_valid) + bytes += skb->len; + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + rxdcsum = &trxd.rxd3; + else + rxdcsum = &trxd.rxd4; + + if (*rxdcsum & eth->soc->txrx.rx_dma_l4_valid) skb->ip_summed = CHECKSUM_UNNECESSARY; else skb_checksum_none_assert(skb); skb->protocol = eth_type_trans(skb, netdev); - bytes += pktlen; hash = trxd.rxd4 & MTK_RXD4_FOE_ENTRY; if (hash != MTK_RXD4_FOE_ENTRY) { @@ -1555,7 +1938,6 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, skip_rx: ring->data[idx] = new_data; rxd->rxd1 = (unsigned int)dma_addr; - release_desc: if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) rxd->rxd2 = RX_DMA_LSO; @@ -1563,7 +1945,6 @@ release_desc: rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size); ring->calc_idx = idx; - done++; } @@ -1582,6 +1963,9 @@ rx_done: &dim_sample); net_dim(ð->rx_dim, dim_sample); + if (xdp_flush) + xdp_do_flush_map(); + return done; } @@ -1590,15 +1974,16 @@ static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget, { const struct mtk_reg_map *reg_map = eth->soc->reg_map; struct mtk_tx_ring *ring = ð->tx_ring; - struct mtk_tx_dma *desc; - struct sk_buff *skb; struct mtk_tx_buf *tx_buf; + struct xdp_frame_bulk bq; + struct mtk_tx_dma *desc; u32 cpu, dma; cpu = ring->last_free_ptr; dma = mtk_r32(eth, reg_map->qdma.drx_ptr); desc = mtk_qdma_phys_to_virt(ring, cpu); + xdp_frame_bulk_init(&bq); while ((cpu != dma) && budget) { u32 next_cpu = desc->txd2; @@ -1613,22 +1998,26 @@ static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget, if (tx_buf->flags & MTK_TX_FLAGS_FPORT1) mac = 1; - skb = tx_buf->skb; - if (!skb) + if (!tx_buf->data) break; - if (skb != (struct sk_buff *)MTK_DMA_DUMMY_DESC) { - bytes[mac] += skb->len; - done[mac]++; + if (tx_buf->data != (void *)MTK_DMA_DUMMY_DESC) { + if (tx_buf->type == MTK_TYPE_SKB) { + struct sk_buff *skb = tx_buf->data; + + bytes[mac] += skb->len; + done[mac]++; + } budget--; } - mtk_tx_unmap(eth, tx_buf, true); + mtk_tx_unmap(eth, tx_buf, &bq, true); ring->last_free = desc; atomic_inc(&ring->free_count); cpu = next_cpu; } 
+ xdp_flush_frame_bulk(&bq); ring->last_free_ptr = cpu; mtk_w32(eth, cpu, reg_map->qdma.crx_ptr); @@ -1640,27 +2029,30 @@ static int mtk_poll_tx_pdma(struct mtk_eth *eth, int budget, unsigned int *done, unsigned int *bytes) { struct mtk_tx_ring *ring = ð->tx_ring; - struct mtk_tx_dma *desc; - struct sk_buff *skb; struct mtk_tx_buf *tx_buf; + struct xdp_frame_bulk bq; + struct mtk_tx_dma *desc; u32 cpu, dma; cpu = ring->cpu_idx; dma = mtk_r32(eth, MT7628_TX_DTX_IDX0); + xdp_frame_bulk_init(&bq); while ((cpu != dma) && budget) { tx_buf = &ring->buf[cpu]; - skb = tx_buf->skb; - if (!skb) + if (!tx_buf->data) break; - if (skb != (struct sk_buff *)MTK_DMA_DUMMY_DESC) { - bytes[0] += skb->len; - done[0]++; + if (tx_buf->data != (void *)MTK_DMA_DUMMY_DESC) { + if (tx_buf->type == MTK_TYPE_SKB) { + struct sk_buff *skb = tx_buf->data; + + bytes[0] += skb->len; + done[0]++; + } budget--; } - - mtk_tx_unmap(eth, tx_buf, true); + mtk_tx_unmap(eth, tx_buf, &bq, true); desc = ring->dma + cpu * eth->soc->txrx.txd_size; ring->last_free = desc; @@ -1668,6 +2060,7 @@ static int mtk_poll_tx_pdma(struct mtk_eth *eth, int budget, cpu = NEXT_DESP_IDX(cpu, ring->dma_size); } + xdp_flush_frame_bulk(&bq); ring->cpu_idx = cpu; @@ -1877,7 +2270,7 @@ static void mtk_tx_clean(struct mtk_eth *eth) if (ring->buf) { for (i = 0; i < MTK_DMA_SIZE; i++) - mtk_tx_unmap(eth, &ring->buf[i], false); + mtk_tx_unmap(eth, &ring->buf[i], NULL, false); kfree(ring->buf); ring->buf = NULL; } @@ -1927,13 +2320,15 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) if (!ring->data) return -ENOMEM; - for (i = 0; i < rx_dma_size; i++) { - if (ring->frag_size <= PAGE_SIZE) - ring->data[i] = netdev_alloc_frag(ring->frag_size); - else - ring->data[i] = mtk_max_lro_buf_alloc(GFP_KERNEL); - if (!ring->data[i]) - return -ENOMEM; + if (mtk_page_pool_enabled(eth)) { + struct page_pool *pp; + + pp = mtk_create_page_pool(eth, &ring->xdp_q, ring_no, + rx_dma_size); + if (IS_ERR(pp)) + return PTR_ERR(pp); + + ring->page_pool = pp; } ring->dma = dma_alloc_coherent(eth->dma_dev, @@ -1944,16 +2339,33 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) for (i = 0; i < rx_dma_size; i++) { struct mtk_rx_dma_v2 *rxd; - - dma_addr_t dma_addr = dma_map_single(eth->dma_dev, - ring->data[i] + NET_SKB_PAD + eth->ip_align, - ring->buf_size, - DMA_FROM_DEVICE); - if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) - return -ENOMEM; + dma_addr_t dma_addr; + void *data; rxd = ring->dma + i * eth->soc->txrx.rxd_size; + if (ring->page_pool) { + data = mtk_page_pool_get_buff(ring->page_pool, + &dma_addr, GFP_KERNEL); + if (!data) + return -ENOMEM; + } else { + if (ring->frag_size <= PAGE_SIZE) + data = netdev_alloc_frag(ring->frag_size); + else + data = mtk_max_lro_buf_alloc(GFP_KERNEL); + + if (!data) + return -ENOMEM; + + dma_addr = dma_map_single(eth->dma_dev, + data + NET_SKB_PAD + eth->ip_align, + ring->buf_size, DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(eth->dma_dev, + dma_addr))) + return -ENOMEM; + } rxd->rxd1 = (unsigned int)dma_addr; + ring->data[i] = data; if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) rxd->rxd2 = RX_DMA_LSO; @@ -1969,6 +2381,7 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) rxd->rxd8 = 0; } } + ring->dma_size = rx_dma_size; ring->calc_idx_update = false; ring->calc_idx = rx_dma_size - 1; @@ -2020,7 +2433,7 @@ static void mtk_rx_clean(struct mtk_eth *eth, struct mtk_rx_ring *ring) dma_unmap_single(eth->dma_dev, rxd->rxd1, ring->buf_size, DMA_FROM_DEVICE); 
- skb_free_frag(ring->data[i]); + mtk_rx_put_buff(ring, ring->data[i], false); } kfree(ring->data); ring->data = NULL; @@ -2032,6 +2445,13 @@ static void mtk_rx_clean(struct mtk_eth *eth, struct mtk_rx_ring *ring) ring->dma, ring->phys); ring->dma = NULL; } + + if (ring->page_pool) { + if (xdp_rxq_info_is_reg(&ring->xdp_q)) + xdp_rxq_info_unreg(&ring->xdp_q); + page_pool_destroy(ring->page_pool); + ring->page_pool = NULL; + } } static int mtk_hwlro_rx_init(struct mtk_eth *eth) @@ -2639,6 +3059,48 @@ static int mtk_stop(struct net_device *dev) return 0; } +static int mtk_xdp_setup(struct net_device *dev, struct bpf_prog *prog, + struct netlink_ext_ack *extack) +{ + struct mtk_mac *mac = netdev_priv(dev); + struct mtk_eth *eth = mac->hw; + struct bpf_prog *old_prog; + bool need_update; + + if (eth->hwlro) { + NL_SET_ERR_MSG_MOD(extack, "XDP not supported with HWLRO"); + return -EOPNOTSUPP; + } + + if (dev->mtu > MTK_PP_MAX_BUF_SIZE) { + NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP"); + return -EOPNOTSUPP; + } + + need_update = !!eth->prog != !!prog; + if (netif_running(dev) && need_update) + mtk_stop(dev); + + old_prog = rcu_replace_pointer(eth->prog, prog, lockdep_rtnl_is_held()); + if (old_prog) + bpf_prog_put(old_prog); + + if (netif_running(dev) && need_update) + return mtk_open(dev); + + return 0; +} + +static int mtk_xdp(struct net_device *dev, struct netdev_bpf *xdp) +{ + switch (xdp->command) { + case XDP_SETUP_PROG: + return mtk_xdp_setup(dev, xdp->prog, xdp->extack); + default: + return -EINVAL; + } +} + static void ethsys_reset(struct mtk_eth *eth, u32 reset_bits) { regmap_update_bits(eth->ethsys, ETHSYS_RSTCTRL, @@ -2934,6 +3396,12 @@ static int mtk_change_mtu(struct net_device *dev, int new_mtu) struct mtk_eth *eth = mac->hw; u32 mcr_cur, mcr_new; + if (rcu_access_pointer(eth->prog) && + length > MTK_PP_MAX_BUF_SIZE) { + netdev_err(dev, "Invalid MTU for XDP mode\n"); + return -EINVAL; + } + if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) { mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id)); mcr_new = mcr_cur & ~MAC_MCR_MAX_RX_MASK; @@ -3123,11 +3591,18 @@ static void mtk_get_strings(struct net_device *dev, u32 stringset, u8 *data) int i; switch (stringset) { - case ETH_SS_STATS: + case ETH_SS_STATS: { + struct mtk_mac *mac = netdev_priv(dev); + for (i = 0; i < ARRAY_SIZE(mtk_ethtool_stats); i++) { memcpy(data, mtk_ethtool_stats[i].str, ETH_GSTRING_LEN); data += ETH_GSTRING_LEN; } + if (mtk_page_pool_enabled(mac->hw)) + page_pool_ethtool_stats_get_strings(data); + break; + } + default: break; } } @@ -3135,13 +3610,35 @@ static void mtk_get_strings(struct net_device *dev, u32 stringset, u8 *data) static int mtk_get_sset_count(struct net_device *dev, int sset) { switch (sset) { - case ETH_SS_STATS: - return ARRAY_SIZE(mtk_ethtool_stats); + case ETH_SS_STATS: { + int count = ARRAY_SIZE(mtk_ethtool_stats); + struct mtk_mac *mac = netdev_priv(dev); + + if (mtk_page_pool_enabled(mac->hw)) + count += page_pool_ethtool_stats_get_count(); + return count; + } default: return -EOPNOTSUPP; } } +static void mtk_ethtool_pp_stats(struct mtk_eth *eth, u64 *data) +{ + struct page_pool_stats stats = {}; + int i; + + for (i = 0; i < ARRAY_SIZE(eth->rx_ring); i++) { + struct mtk_rx_ring *ring = ð->rx_ring[i]; + + if (!ring->page_pool) + continue; + + page_pool_get_stats(ring->page_pool, &stats); + } + page_pool_ethtool_stats_get(data, &stats); +} + static void mtk_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *stats, u64 *data) { @@ -3169,6 +3666,8 @@ static void 
mtk_get_ethtool_stats(struct net_device *dev, for (i = 0; i < ARRAY_SIZE(mtk_ethtool_stats); i++) *data_dst++ = *(data_src + mtk_ethtool_stats[i].offset); + if (mtk_page_pool_enabled(mac->hw)) + mtk_ethtool_pp_stats(mac->hw, data_dst); } while (u64_stats_fetch_retry_irq(&hwstats->syncp, start)); } @@ -3261,6 +3760,8 @@ static const struct net_device_ops mtk_netdev_ops = { .ndo_poll_controller = mtk_poll_controller, #endif .ndo_setup_tc = mtk_eth_setup_tc, + .ndo_bpf = mtk_xdp, + .ndo_xdp_xmit = mtk_xdp_xmit, }; static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np) @@ -3761,6 +4262,7 @@ static const struct mtk_soc_data mt7986_data = { .txd_size = sizeof(struct mtk_tx_dma_v2), .rxd_size = sizeof(struct mtk_rx_dma_v2), .rx_irq_done_mask = MTK_RX_DONE_INT_V2, + .rx_dma_l4_valid = RX_DMA_L4_VALID_V2, .dma_max_len = MTK_TX_DMA_BUF_LEN_V2, .dma_len_offset = 8, }, diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 0a632896451a..7405c97cda66 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -18,6 +18,8 @@ #include <linux/rhashtable.h> #include <linux/dim.h> #include <linux/bitfield.h> +#include <net/page_pool.h> +#include <linux/bpf_trace.h> #include "mtk_ppe.h" #define MTK_QDMA_PAGE_SIZE 2048 @@ -49,6 +51,11 @@ #define MTK_HW_FEATURES_MT7628 (NETIF_F_SG | NETIF_F_RXCSUM) #define NEXT_DESP_IDX(X, Y) (((X) + 1) & ((Y) - 1)) +#define MTK_PP_HEADROOM XDP_PACKET_HEADROOM +#define MTK_PP_PAD (MTK_PP_HEADROOM + \ + SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) +#define MTK_PP_MAX_BUF_SIZE (PAGE_SIZE - MTK_PP_PAD) + #define MTK_QRX_OFFSET 0x10 #define MTK_MAX_RX_RING_NUM 4 @@ -563,6 +570,16 @@ struct mtk_tx_dma_v2 { struct mtk_eth; struct mtk_mac; +struct mtk_xdp_stats { + u64 rx_xdp_redirect; + u64 rx_xdp_pass; + u64 rx_xdp_drop; + u64 rx_xdp_tx; + u64 rx_xdp_tx_errors; + u64 tx_xdp_xmit; + u64 tx_xdp_xmit_errors; +}; + /* struct mtk_hw_stats - the structure that holds the traffic statistics. 
* @stats_lock: make sure that stats operations are atomic * @reg_offset: the status register offset of the SoC @@ -586,6 +603,8 @@ struct mtk_hw_stats { u64 rx_checksum_errors; u64 rx_flow_control_packets; + struct mtk_xdp_stats xdp_stats; + spinlock_t stats_lock; u32 reg_offset; struct u64_stats_sync syncp; @@ -677,6 +696,12 @@ enum mtk_dev_state { MTK_RESETTING }; +enum mtk_tx_buf_type { + MTK_TYPE_SKB, + MTK_TYPE_XDP_TX, + MTK_TYPE_XDP_NDO, +}; + /* struct mtk_tx_buf - This struct holds the pointers to the memory pointed at * by the TX descriptor s * @skb: The SKB pointer of the packet being sent @@ -686,7 +711,9 @@ enum mtk_dev_state { * @dma_len1: The length of the second segment */ struct mtk_tx_buf { - struct sk_buff *skb; + enum mtk_tx_buf_type type; + void *data; + u32 flags; DEFINE_DMA_UNMAP_ADDR(dma_addr0); DEFINE_DMA_UNMAP_LEN(dma_len0); @@ -745,6 +772,9 @@ struct mtk_rx_ring { bool calc_idx_update; u16 calc_idx; u32 crx_idx_reg; + /* page_pool */ + struct page_pool *page_pool; + struct xdp_rxq_info xdp_q; }; enum mkt_eth_capabilities { @@ -1078,6 +1108,8 @@ struct mtk_eth { struct mtk_ppe *ppe; struct rhashtable flow_table; + + struct bpf_prog __rcu *prog; }; /* struct mtk_mac - the structure that holds the info about the MACs of the diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index 5d457bc9acc1..25dc3c3aa31d 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -88,32 +88,28 @@ mtk_flow_offload_mangle_eth(const struct flow_action_entry *act, void *eth) static int mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_info *info) { - struct net_device_path_ctx ctx = { - .dev = dev, - }; - struct net_device_path path = {}; + struct net_device_path_stack stack; + struct net_device_path *path; + int err; - if (!ctx.dev) + if (!dev) return -ENODEV; - memcpy(ctx.daddr, addr, sizeof(ctx.daddr)); - if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED)) return -1; - if (!dev->netdev_ops->ndo_fill_forward_path) - return -1; - - if (dev->netdev_ops->ndo_fill_forward_path(&ctx, &path)) - return -1; + err = dev_fill_forward_path(dev, addr, &stack); + if (err) + return err; - if (path.type != DEV_PATH_MTK_WDMA) + path = &stack.path[stack.num_paths - 1]; + if (path->type != DEV_PATH_MTK_WDMA) return -1; - info->wdma_idx = path.mtk_wdma.wdma_idx; - info->queue = path.mtk_wdma.queue; - info->bss = path.mtk_wdma.bss; - info->wcid = path.mtk_wdma.wcid; + info->wdma_idx = path->mtk_wdma.wdma_idx; + info->queue = path->mtk_wdma.queue; + info->bss = path->mtk_wdma.bss; + info->wcid = path->mtk_wdma.wcid; return 0; } diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c index 95839fd84dab..3f0e5e64de50 100644 --- a/drivers/net/ethernet/mediatek/mtk_star_emac.c +++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c @@ -17,6 +17,7 @@ #include <linux/module.h> #include <linux/netdevice.h> #include <linux/of.h> +#include <linux/of_device.h> #include <linux/of_mdio.h> #include <linux/of_net.h> #include <linux/platform_device.h> @@ -32,6 +33,7 @@ #define MTK_STAR_SKB_ALIGNMENT 16 #define MTK_STAR_HASHTABLE_MC_LIMIT 256 #define MTK_STAR_HASHTABLE_SIZE_MAX 512 +#define MTK_STAR_DESC_NEEDED (MAX_SKB_FRAGS + 4) /* Normally we'd use NET_IP_ALIGN but on arm64 its value is 0 and it doesn't * work for this controller. 
@@ -129,6 +131,11 @@ static const char *const mtk_star_clk_names[] = { "core", "reg", "trans" }; #define MTK_STAR_REG_INT_MASK 0x0054 #define MTK_STAR_BIT_INT_MASK_FNRC BIT(6) +/* Delay-Macro Register */ +#define MTK_STAR_REG_TEST0 0x0058 +#define MTK_STAR_BIT_INV_RX_CLK BIT(30) +#define MTK_STAR_BIT_INV_TX_CLK BIT(31) + /* Misc. Config Register */ #define MTK_STAR_REG_TEST1 0x005c #define MTK_STAR_BIT_TEST1_RST_HASH_MBIST BIT(31) @@ -149,6 +156,7 @@ static const char *const mtk_star_clk_names[] = { "core", "reg", "trans" }; #define MTK_STAR_REG_MAC_CLK_CONF 0x00ac #define MTK_STAR_MSK_MAC_CLK_CONF GENMASK(7, 0) #define MTK_STAR_BIT_CLK_DIV_10 0x0a +#define MTK_STAR_BIT_CLK_DIV_50 0x32 /* Counter registers. */ #define MTK_STAR_REG_C_RXOKPKT 0x0100 @@ -181,9 +189,14 @@ static const char *const mtk_star_clk_names[] = { "core", "reg", "trans" }; #define MTK_STAR_REG_C_RX_TWIST 0x0218 /* Ethernet CFG Control */ -#define MTK_PERICFG_REG_NIC_CFG_CON 0x03c4 -#define MTK_PERICFG_MSK_NIC_CFG_CON_CFG_MII GENMASK(3, 0) -#define MTK_PERICFG_BIT_NIC_CFG_CON_RMII BIT(0) +#define MTK_PERICFG_REG_NIC_CFG0_CON 0x03c4 +#define MTK_PERICFG_REG_NIC_CFG1_CON 0x03c8 +#define MTK_PERICFG_REG_NIC_CFG_CON_V2 0x0c10 +#define MTK_PERICFG_REG_NIC_CFG_CON_CFG_INTF GENMASK(3, 0) +#define MTK_PERICFG_BIT_NIC_CFG_CON_MII 0 +#define MTK_PERICFG_BIT_NIC_CFG_CON_RMII 1 +#define MTK_PERICFG_BIT_NIC_CFG_CON_CLK BIT(0) +#define MTK_PERICFG_BIT_NIC_CFG_CON_CLK_V2 BIT(8) /* Represents the actual structure of descriptors used by the MAC. We can * reuse the same structure for both TX and RX - the layout is the same, only @@ -216,7 +229,8 @@ struct mtk_star_ring_desc_data { struct sk_buff *skb; }; -#define MTK_STAR_RING_NUM_DESCS 128 +#define MTK_STAR_RING_NUM_DESCS 512 +#define MTK_STAR_TX_THRESH (MTK_STAR_RING_NUM_DESCS / 4) #define MTK_STAR_NUM_TX_DESCS MTK_STAR_RING_NUM_DESCS #define MTK_STAR_NUM_RX_DESCS MTK_STAR_RING_NUM_DESCS #define MTK_STAR_NUM_DESCS_TOTAL (MTK_STAR_RING_NUM_DESCS * 2) @@ -231,6 +245,11 @@ struct mtk_star_ring { unsigned int tail; }; +struct mtk_star_compat { + int (*set_interface_mode)(struct net_device *ndev); + unsigned char bit_clk_div; +}; + struct mtk_star_priv { struct net_device *ndev; @@ -246,7 +265,8 @@ struct mtk_star_priv { struct mtk_star_ring rx_ring; struct mii_bus *mii; - struct napi_struct napi; + struct napi_struct tx_napi; + struct napi_struct rx_napi; struct device_node *phy_node; phy_interface_t phy_intf; @@ -255,6 +275,11 @@ struct mtk_star_priv { int speed; int duplex; int pause; + bool rmii_rxc; + bool rx_inv; + bool tx_inv; + + const struct mtk_star_compat *compat_data; /* Protects against concurrent descriptor access. 
*/ spinlock_t lock; @@ -357,19 +382,16 @@ mtk_star_ring_push_head_tx(struct mtk_star_ring *ring, mtk_star_ring_push_head(ring, desc_data, flags); } -static unsigned int mtk_star_ring_num_used_descs(struct mtk_star_ring *ring) +static unsigned int mtk_star_tx_ring_avail(struct mtk_star_ring *ring) { - return abs(ring->head - ring->tail); -} + u32 avail; -static bool mtk_star_ring_full(struct mtk_star_ring *ring) -{ - return mtk_star_ring_num_used_descs(ring) == MTK_STAR_RING_NUM_DESCS; -} + if (ring->tail > ring->head) + avail = ring->tail - ring->head - 1; + else + avail = MTK_STAR_RING_NUM_DESCS - ring->head + ring->tail - 1; -static bool mtk_star_ring_descs_available(struct mtk_star_ring *ring) -{ - return mtk_star_ring_num_used_descs(ring) > 0; + return avail; } static dma_addr_t mtk_star_dma_map_rx(struct mtk_star_priv *priv, @@ -414,6 +436,36 @@ static void mtk_star_nic_disable_pd(struct mtk_star_priv *priv) MTK_STAR_BIT_MAC_CFG_NIC_PD); } +static void mtk_star_enable_dma_irq(struct mtk_star_priv *priv, + bool rx, bool tx) +{ + u32 value; + + regmap_read(priv->regs, MTK_STAR_REG_INT_MASK, &value); + + if (tx) + value &= ~MTK_STAR_BIT_INT_STS_TNTC; + if (rx) + value &= ~MTK_STAR_BIT_INT_STS_FNRC; + + regmap_write(priv->regs, MTK_STAR_REG_INT_MASK, value); +} + +static void mtk_star_disable_dma_irq(struct mtk_star_priv *priv, + bool rx, bool tx) +{ + u32 value; + + regmap_read(priv->regs, MTK_STAR_REG_INT_MASK, &value); + + if (tx) + value |= MTK_STAR_BIT_INT_STS_TNTC; + if (rx) + value |= MTK_STAR_BIT_INT_STS_FNRC; + + regmap_write(priv->regs, MTK_STAR_REG_INT_MASK, value); +} + /* Unmask the three interrupts we care about, mask all others. */ static void mtk_star_intr_enable(struct mtk_star_priv *priv) { @@ -429,20 +481,11 @@ static void mtk_star_intr_disable(struct mtk_star_priv *priv) regmap_write(priv->regs, MTK_STAR_REG_INT_MASK, ~0); } -static unsigned int mtk_star_intr_read(struct mtk_star_priv *priv) -{ - unsigned int val; - - regmap_read(priv->regs, MTK_STAR_REG_INT_STS, &val); - - return val; -} - static unsigned int mtk_star_intr_ack_all(struct mtk_star_priv *priv) { unsigned int val; - val = mtk_star_intr_read(priv); + regmap_read(priv->regs, MTK_STAR_REG_INT_STS, &val); regmap_write(priv->regs, MTK_STAR_REG_INT_STS, val); return val; @@ -714,25 +757,44 @@ static void mtk_star_free_tx_skbs(struct mtk_star_priv *priv) mtk_star_ring_free_skbs(priv, ring, mtk_star_dma_unmap_tx); } -/* All processing for TX and RX happens in the napi poll callback. - * - * FIXME: The interrupt handling should be more fine-grained with each - * interrupt enabled/disabled independently when needed. Unfortunatly this - * turned out to impact the driver's stability and until we have something - * working properly, we're disabling all interrupts during TX & RX processing - * or when resetting the counter registers. - */ +/** + * mtk_star_handle_irq - Interrupt Handler. + * @irq: interrupt number. + * @data: pointer to a network interface device structure. + * Description : this is the driver interrupt service routine. + * it mainly handles: + * 1. tx complete interrupt for frame transmission. + * 2. rx complete interrupt for frame reception. + * 3. MAC Management Counter interrupt to avoid counter overflow. 
+ **/ static irqreturn_t mtk_star_handle_irq(int irq, void *data) { - struct mtk_star_priv *priv; - struct net_device *ndev; - - ndev = data; - priv = netdev_priv(ndev); + struct net_device *ndev = data; + struct mtk_star_priv *priv = netdev_priv(ndev); + unsigned int intr_status = mtk_star_intr_ack_all(priv); + bool rx, tx; + + rx = (intr_status & MTK_STAR_BIT_INT_STS_FNRC) && + napi_schedule_prep(&priv->rx_napi); + tx = (intr_status & MTK_STAR_BIT_INT_STS_TNTC) && + napi_schedule_prep(&priv->tx_napi); + + if (rx || tx) { + spin_lock(&priv->lock); + /* mask Rx and TX Complete interrupt */ + mtk_star_disable_dma_irq(priv, rx, tx); + spin_unlock(&priv->lock); + + if (rx) + __napi_schedule(&priv->rx_napi); + if (tx) + __napi_schedule(&priv->tx_napi); + } - if (netif_running(ndev)) { - mtk_star_intr_disable(priv); - napi_schedule(&priv->napi); + /* interrupt is triggered once any counters reach 0x8000000 */ + if (intr_status & MTK_STAR_REG_INT_STS_MIB_CNT_TH) { + mtk_star_update_stats(priv); + mtk_star_reset_counters(priv); } return IRQ_HANDLED; @@ -821,32 +883,26 @@ static void mtk_star_phy_config(struct mtk_star_priv *priv) val <<= MTK_STAR_OFF_PHY_CTRL1_FORCE_SPD; val |= MTK_STAR_BIT_PHY_CTRL1_AN_EN; - val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_RX; - val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_TX; - /* Only full-duplex supported for now. */ - val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_DPX; - - regmap_write(priv->regs, MTK_STAR_REG_PHY_CTRL1, val); - if (priv->pause) { - val = MTK_STAR_VAL_FC_CFG_SEND_PAUSE_TH_2K; - val <<= MTK_STAR_OFF_FC_CFG_SEND_PAUSE_TH; - val |= MTK_STAR_BIT_FC_CFG_UC_PAUSE_DIR; + val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_RX; + val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_TX; + val |= MTK_STAR_BIT_PHY_CTRL1_FORCE_DPX; } else { - val = 0; + val &= ~MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_RX; + val &= ~MTK_STAR_BIT_PHY_CTRL1_FORCE_FC_TX; + val &= ~MTK_STAR_BIT_PHY_CTRL1_FORCE_DPX; } + regmap_write(priv->regs, MTK_STAR_REG_PHY_CTRL1, val); + val = MTK_STAR_VAL_FC_CFG_SEND_PAUSE_TH_2K; + val <<= MTK_STAR_OFF_FC_CFG_SEND_PAUSE_TH; + val |= MTK_STAR_BIT_FC_CFG_UC_PAUSE_DIR; regmap_update_bits(priv->regs, MTK_STAR_REG_FC_CFG, MTK_STAR_MSK_FC_CFG_SEND_PAUSE_TH | MTK_STAR_BIT_FC_CFG_UC_PAUSE_DIR, val); - if (priv->pause) { - val = MTK_STAR_VAL_EXT_CFG_SND_PAUSE_RLS_1K; - val <<= MTK_STAR_OFF_EXT_CFG_SND_PAUSE_RLS; - } else { - val = 0; - } - + val = MTK_STAR_VAL_EXT_CFG_SND_PAUSE_RLS_1K; + val <<= MTK_STAR_OFF_EXT_CFG_SND_PAUSE_RLS; regmap_update_bits(priv->regs, MTK_STAR_REG_EXT_CFG, MTK_STAR_MSK_EXT_CFG_SND_PAUSE_RLS, val); } @@ -898,14 +954,7 @@ static void mtk_star_init_config(struct mtk_star_priv *priv) regmap_write(priv->regs, MTK_STAR_REG_SYS_CONF, val); regmap_update_bits(priv->regs, MTK_STAR_REG_MAC_CLK_CONF, MTK_STAR_MSK_MAC_CLK_CONF, - MTK_STAR_BIT_CLK_DIV_10); -} - -static void mtk_star_set_mode_rmii(struct mtk_star_priv *priv) -{ - regmap_update_bits(priv->pericfg, MTK_PERICFG_REG_NIC_CFG_CON, - MTK_PERICFG_MSK_NIC_CFG_CON_CFG_MII, - MTK_PERICFG_BIT_NIC_CFG_CON_RMII); + priv->compat_data->bit_clk_div); } static int mtk_star_enable(struct net_device *ndev) @@ -951,11 +1000,12 @@ static int mtk_star_enable(struct net_device *ndev) /* Request the interrupt */ ret = request_irq(ndev->irq, mtk_star_handle_irq, - IRQF_TRIGGER_FALLING, ndev->name, ndev); + IRQF_TRIGGER_NONE, ndev->name, ndev); if (ret) goto err_free_skbs; - napi_enable(&priv->napi); + napi_enable(&priv->tx_napi); + napi_enable(&priv->rx_napi); mtk_star_intr_ack_all(priv); mtk_star_intr_enable(priv); @@ -988,7 +1038,8 @@ static void 
mtk_star_disable(struct net_device *ndev) struct mtk_star_priv *priv = netdev_priv(ndev); netif_stop_queue(ndev); - napi_disable(&priv->napi); + napi_disable(&priv->tx_napi); + napi_disable(&priv->rx_napi); mtk_star_intr_disable(priv); mtk_star_dma_disable(priv); mtk_star_intr_ack_all(priv); @@ -1020,13 +1071,45 @@ static int mtk_star_netdev_ioctl(struct net_device *ndev, return phy_mii_ioctl(ndev->phydev, req, cmd); } -static int mtk_star_netdev_start_xmit(struct sk_buff *skb, - struct net_device *ndev) +static int __mtk_star_maybe_stop_tx(struct mtk_star_priv *priv, u16 size) +{ + netif_stop_queue(priv->ndev); + + /* Might race with mtk_star_tx_poll, check again */ + smp_mb(); + if (likely(mtk_star_tx_ring_avail(&priv->tx_ring) < size)) + return -EBUSY; + + netif_start_queue(priv->ndev); + + return 0; +} + +static inline int mtk_star_maybe_stop_tx(struct mtk_star_priv *priv, u16 size) +{ + if (likely(mtk_star_tx_ring_avail(&priv->tx_ring) >= size)) + return 0; + + return __mtk_star_maybe_stop_tx(priv, size); +} + +static netdev_tx_t mtk_star_netdev_start_xmit(struct sk_buff *skb, + struct net_device *ndev) { struct mtk_star_priv *priv = netdev_priv(ndev); struct mtk_star_ring *ring = &priv->tx_ring; struct device *dev = mtk_star_get_dev(priv); struct mtk_star_ring_desc_data desc_data; + int nfrags = skb_shinfo(skb)->nr_frags; + + if (unlikely(mtk_star_tx_ring_avail(ring) < nfrags + 1)) { + if (!netif_queue_stopped(ndev)) { + netif_stop_queue(ndev); + /* This is a hard error, log it. */ + pr_err_ratelimited("Tx ring full when queue awake\n"); + } + return NETDEV_TX_BUSY; + } desc_data.dma_addr = mtk_star_dma_map_tx(priv, skb); if (dma_mapping_error(dev, desc_data.dma_addr)) @@ -1034,17 +1117,11 @@ static int mtk_star_netdev_start_xmit(struct sk_buff *skb, desc_data.skb = skb; desc_data.len = skb->len; - - spin_lock_bh(&priv->lock); - mtk_star_ring_push_head_tx(ring, &desc_data); netdev_sent_queue(ndev, skb->len); - if (mtk_star_ring_full(ring)) - netif_stop_queue(ndev); - - spin_unlock_bh(&priv->lock); + mtk_star_maybe_stop_tx(priv, MTK_STAR_DESC_NEEDED); mtk_star_dma_resume_tx(priv); @@ -1076,31 +1153,40 @@ static int mtk_star_tx_complete_one(struct mtk_star_priv *priv) return ret; } -static void mtk_star_tx_complete_all(struct mtk_star_priv *priv) +static int mtk_star_tx_poll(struct napi_struct *napi, int budget) { + struct mtk_star_priv *priv = container_of(napi, struct mtk_star_priv, + tx_napi); + int ret = 0, pkts_compl = 0, bytes_compl = 0, count = 0; struct mtk_star_ring *ring = &priv->tx_ring; struct net_device *ndev = priv->ndev; - int ret, pkts_compl, bytes_compl; - bool wake = false; - - spin_lock(&priv->lock); - - for (pkts_compl = 0, bytes_compl = 0;; - pkts_compl++, bytes_compl += ret, wake = true) { - if (!mtk_star_ring_descs_available(ring)) - break; + unsigned int head = ring->head; + unsigned int entry = ring->tail; + while (entry != head && count < (MTK_STAR_RING_NUM_DESCS - 1)) { ret = mtk_star_tx_complete_one(priv); if (ret < 0) break; + + count++; + pkts_compl++; + bytes_compl += ret; + entry = ring->tail; } netdev_completed_queue(ndev, pkts_compl, bytes_compl); - if (wake && netif_queue_stopped(ndev)) + if (unlikely(netif_queue_stopped(ndev)) && + (mtk_star_tx_ring_avail(ring) > MTK_STAR_TX_THRESH)) netif_wake_queue(ndev); - spin_unlock(&priv->lock); + if (napi_complete(napi)) { + spin_lock(&priv->lock); + mtk_star_enable_dma_irq(priv, false, true); + spin_unlock(&priv->lock); + } + + return 0; } static void mtk_star_netdev_get_stats64(struct net_device *ndev, @@ 
-1180,7 +1266,7 @@ static const struct ethtool_ops mtk_star_ethtool_ops = { .set_link_ksettings = phy_ethtool_set_link_ksettings, }; -static int mtk_star_receive_packet(struct mtk_star_priv *priv) +static int mtk_star_rx(struct mtk_star_priv *priv, int budget) { struct mtk_star_ring *ring = &priv->rx_ring; struct device *dev = mtk_star_get_dev(priv); @@ -1188,107 +1274,85 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv) struct net_device *ndev = priv->ndev; struct sk_buff *curr_skb, *new_skb; dma_addr_t new_dma_addr; - int ret; + int ret, count = 0; - spin_lock(&priv->lock); - ret = mtk_star_ring_pop_tail(ring, &desc_data); - spin_unlock(&priv->lock); - if (ret) - return -1; + while (count < budget) { + ret = mtk_star_ring_pop_tail(ring, &desc_data); + if (ret) + return -1; - curr_skb = desc_data.skb; + curr_skb = desc_data.skb; - if ((desc_data.flags & MTK_STAR_DESC_BIT_RX_CRCE) || - (desc_data.flags & MTK_STAR_DESC_BIT_RX_OSIZE)) { - /* Error packet -> drop and reuse skb. */ - new_skb = curr_skb; - goto push_new_skb; - } + if ((desc_data.flags & MTK_STAR_DESC_BIT_RX_CRCE) || + (desc_data.flags & MTK_STAR_DESC_BIT_RX_OSIZE)) { + /* Error packet -> drop and reuse skb. */ + new_skb = curr_skb; + goto push_new_skb; + } - /* Prepare new skb before receiving the current one. Reuse the current - * skb if we fail at any point. - */ - new_skb = mtk_star_alloc_skb(ndev); - if (!new_skb) { - ndev->stats.rx_dropped++; - new_skb = curr_skb; - goto push_new_skb; - } + /* Prepare new skb before receiving the current one. + * Reuse the current skb if we fail at any point. + */ + new_skb = mtk_star_alloc_skb(ndev); + if (!new_skb) { + ndev->stats.rx_dropped++; + new_skb = curr_skb; + goto push_new_skb; + } - new_dma_addr = mtk_star_dma_map_rx(priv, new_skb); - if (dma_mapping_error(dev, new_dma_addr)) { - ndev->stats.rx_dropped++; - dev_kfree_skb(new_skb); - new_skb = curr_skb; - netdev_err(ndev, "DMA mapping error of RX descriptor\n"); - goto push_new_skb; - } + new_dma_addr = mtk_star_dma_map_rx(priv, new_skb); + if (dma_mapping_error(dev, new_dma_addr)) { + ndev->stats.rx_dropped++; + dev_kfree_skb(new_skb); + new_skb = curr_skb; + netdev_err(ndev, "DMA mapping error of RX descriptor\n"); + goto push_new_skb; + } - /* We can't fail anymore at this point: it's safe to unmap the skb. */ - mtk_star_dma_unmap_rx(priv, &desc_data); + /* We can't fail anymore at this point: + * it's safe to unmap the skb. 
+ */ + mtk_star_dma_unmap_rx(priv, &desc_data); - skb_put(desc_data.skb, desc_data.len); - desc_data.skb->ip_summed = CHECKSUM_NONE; - desc_data.skb->protocol = eth_type_trans(desc_data.skb, ndev); - desc_data.skb->dev = ndev; - netif_receive_skb(desc_data.skb); + skb_put(desc_data.skb, desc_data.len); + desc_data.skb->ip_summed = CHECKSUM_NONE; + desc_data.skb->protocol = eth_type_trans(desc_data.skb, ndev); + desc_data.skb->dev = ndev; + netif_receive_skb(desc_data.skb); - /* update dma_addr for new skb */ - desc_data.dma_addr = new_dma_addr; + /* update dma_addr for new skb */ + desc_data.dma_addr = new_dma_addr; push_new_skb: - desc_data.len = skb_tailroom(new_skb); - desc_data.skb = new_skb; - spin_lock(&priv->lock); - mtk_star_ring_push_head_rx(ring, &desc_data); - spin_unlock(&priv->lock); + count++; - return 0; -} - -static int mtk_star_process_rx(struct mtk_star_priv *priv, int budget) -{ - int received, ret; - - for (received = 0, ret = 0; received < budget && ret == 0; received++) - ret = mtk_star_receive_packet(priv); + desc_data.len = skb_tailroom(new_skb); + desc_data.skb = new_skb; + mtk_star_ring_push_head_rx(ring, &desc_data); + } mtk_star_dma_resume_rx(priv); - return received; + return count; } -static int mtk_star_poll(struct napi_struct *napi, int budget) +static int mtk_star_rx_poll(struct napi_struct *napi, int budget) { struct mtk_star_priv *priv; - unsigned int status; - int received = 0; - - priv = container_of(napi, struct mtk_star_priv, napi); - - status = mtk_star_intr_read(priv); - mtk_star_intr_ack_all(priv); + int work_done = 0; - if (status & MTK_STAR_BIT_INT_STS_TNTC) - /* Clean-up all TX descriptors. */ - mtk_star_tx_complete_all(priv); + priv = container_of(napi, struct mtk_star_priv, rx_napi); - if (status & MTK_STAR_BIT_INT_STS_FNRC) - /* Receive up to $budget packets. 
*/ - received = mtk_star_process_rx(priv, budget); - - if (unlikely(status & MTK_STAR_REG_INT_STS_MIB_CNT_TH)) { - mtk_star_update_stats(priv); - mtk_star_reset_counters(priv); + work_done = mtk_star_rx(priv, budget); + if (work_done < budget) { + napi_complete_done(napi, work_done); + spin_lock(&priv->lock); + mtk_star_enable_dma_irq(priv, true, false); + spin_unlock(&priv->lock); } - if (received < budget) - napi_complete_done(napi, received); - - mtk_star_intr_enable(priv); - - return received; + return work_done; } static void mtk_star_mdio_rwok_clear(struct mtk_star_priv *priv) @@ -1442,6 +1506,25 @@ static void mtk_star_clk_disable_unprepare(void *data) clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks); } +static int mtk_star_set_timing(struct mtk_star_priv *priv) +{ + struct device *dev = mtk_star_get_dev(priv); + unsigned int delay_val = 0; + + switch (priv->phy_intf) { + case PHY_INTERFACE_MODE_MII: + case PHY_INTERFACE_MODE_RMII: + delay_val |= FIELD_PREP(MTK_STAR_BIT_INV_RX_CLK, priv->rx_inv); + delay_val |= FIELD_PREP(MTK_STAR_BIT_INV_TX_CLK, priv->tx_inv); + break; + default: + dev_err(dev, "This interface not supported\n"); + return -EINVAL; + } + + return regmap_write(priv->regs, MTK_STAR_REG_TEST0, delay_val); +} + static int mtk_star_probe(struct platform_device *pdev) { struct device_node *of_node; @@ -1460,6 +1543,7 @@ static int mtk_star_probe(struct platform_device *pdev) priv = netdev_priv(ndev); priv->ndev = ndev; + priv->compat_data = of_device_get_match_data(&pdev->dev); SET_NETDEV_DEV(ndev, dev); platform_set_drvdata(pdev, ndev); @@ -1510,7 +1594,8 @@ static int mtk_star_probe(struct platform_device *pdev) ret = of_get_phy_mode(of_node, &priv->phy_intf); if (ret) { return ret; - } else if (priv->phy_intf != PHY_INTERFACE_MODE_RMII) { + } else if (priv->phy_intf != PHY_INTERFACE_MODE_RMII && + priv->phy_intf != PHY_INTERFACE_MODE_MII) { dev_err(dev, "unsupported phy mode: %s\n", phy_modes(priv->phy_intf)); return -EINVAL; @@ -1522,7 +1607,23 @@ static int mtk_star_probe(struct platform_device *pdev) return -ENODEV; } - mtk_star_set_mode_rmii(priv); + priv->rmii_rxc = of_property_read_bool(of_node, "mediatek,rmii-rxc"); + priv->rx_inv = of_property_read_bool(of_node, "mediatek,rxc-inverse"); + priv->tx_inv = of_property_read_bool(of_node, "mediatek,txc-inverse"); + + if (priv->compat_data->set_interface_mode) { + ret = priv->compat_data->set_interface_mode(ndev); + if (ret) { + dev_err(dev, "Failed to set phy interface, err = %d\n", ret); + return -EINVAL; + } + } + + ret = mtk_star_set_timing(priv); + if (ret) { + dev_err(dev, "Failed to set timing, err = %d\n", ret); + return -EINVAL; + } ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); if (ret) { @@ -1550,16 +1651,92 @@ static int mtk_star_probe(struct platform_device *pdev) ndev->netdev_ops = &mtk_star_netdev_ops; ndev->ethtool_ops = &mtk_star_ethtool_ops; - netif_napi_add(ndev, &priv->napi, mtk_star_poll, NAPI_POLL_WEIGHT); + netif_napi_add(ndev, &priv->rx_napi, mtk_star_rx_poll, + NAPI_POLL_WEIGHT); + netif_napi_add_tx(ndev, &priv->tx_napi, mtk_star_tx_poll); return devm_register_netdev(dev, ndev); } #ifdef CONFIG_OF +static int mt8516_set_interface_mode(struct net_device *ndev) +{ + struct mtk_star_priv *priv = netdev_priv(ndev); + struct device *dev = mtk_star_get_dev(priv); + unsigned int intf_val, ret, rmii_rxc; + + switch (priv->phy_intf) { + case PHY_INTERFACE_MODE_MII: + intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_MII; + rmii_rxc = 0; + break; + case PHY_INTERFACE_MODE_RMII: + intf_val = 
MTK_PERICFG_BIT_NIC_CFG_CON_RMII; + rmii_rxc = priv->rmii_rxc ? 0 : MTK_PERICFG_BIT_NIC_CFG_CON_CLK; + break; + default: + dev_err(dev, "This interface not supported\n"); + return -EINVAL; + } + + ret = regmap_update_bits(priv->pericfg, + MTK_PERICFG_REG_NIC_CFG1_CON, + MTK_PERICFG_BIT_NIC_CFG_CON_CLK, + rmii_rxc); + if (ret) + return ret; + + return regmap_update_bits(priv->pericfg, + MTK_PERICFG_REG_NIC_CFG0_CON, + MTK_PERICFG_REG_NIC_CFG_CON_CFG_INTF, + intf_val); +} + +static int mt8365_set_interface_mode(struct net_device *ndev) +{ + struct mtk_star_priv *priv = netdev_priv(ndev); + struct device *dev = mtk_star_get_dev(priv); + unsigned int intf_val; + + switch (priv->phy_intf) { + case PHY_INTERFACE_MODE_MII: + intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_MII; + break; + case PHY_INTERFACE_MODE_RMII: + intf_val = MTK_PERICFG_BIT_NIC_CFG_CON_RMII; + intf_val |= priv->rmii_rxc ? 0 : MTK_PERICFG_BIT_NIC_CFG_CON_CLK_V2; + break; + default: + dev_err(dev, "This interface not supported\n"); + return -EINVAL; + } + + return regmap_update_bits(priv->pericfg, + MTK_PERICFG_REG_NIC_CFG_CON_V2, + MTK_PERICFG_REG_NIC_CFG_CON_CFG_INTF | + MTK_PERICFG_BIT_NIC_CFG_CON_CLK_V2, + intf_val); +} + +static const struct mtk_star_compat mtk_star_mt8516_compat = { + .set_interface_mode = mt8516_set_interface_mode, + .bit_clk_div = MTK_STAR_BIT_CLK_DIV_10, +}; + +static const struct mtk_star_compat mtk_star_mt8365_compat = { + .set_interface_mode = mt8365_set_interface_mode, + .bit_clk_div = MTK_STAR_BIT_CLK_DIV_50, +}; + static const struct of_device_id mtk_star_of_match[] = { - { .compatible = "mediatek,mt8516-eth", }, - { .compatible = "mediatek,mt8518-eth", }, - { .compatible = "mediatek,mt8175-eth", }, + { .compatible = "mediatek,mt8516-eth", + .data = &mtk_star_mt8516_compat }, + { .compatible = "mediatek,mt8518-eth", + .data = &mtk_star_mt8516_compat }, + { .compatible = "mediatek,mt8175-eth", + .data = &mtk_star_mt8516_compat }, + { .compatible = "mediatek,mt8365-eth", + .data = &mtk_star_mt8365_compat }, { } }; MODULE_DEVICE_TABLE(of, mtk_star_of_match); |
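For readers skimming the flattened diff above: the verdict handling added in mtk_xdp_run() is the canonical driver-side XDP pattern. Condensed below, with the MediaTek stats bookkeeping and page recycling stripped out; demo_xmit_back() is a hypothetical stand-in for the driver's XDP_TX transmit helper (mtk_xdp_submit_frame() in the real code).

```c
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#include <linux/filter.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

/* Hypothetical: transmit an XDP frame back out of @dev (XDP_TX path). */
static int demo_xmit_back(struct net_device *dev, struct xdp_frame *xdpf);

/* Condensed from mtk_xdp_run() in the diff above. */
static u32 demo_xdp_run(struct bpf_prog *prog, struct xdp_buff *xdp,
			struct net_device *dev)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:		/* fall back to the normal skb path */
		break;
	case XDP_REDIRECT:	/* queued; flushed via xdp_do_flush at NAPI end */
		if (xdp_do_redirect(dev, xdp, prog))
			act = XDP_DROP;
		break;
	case XDP_TX:		/* bounce the frame back out this device */
		if (demo_xmit_back(dev, xdp_convert_buff_to_frame(xdp)))
			act = XDP_DROP;
		break;
	default:
		bpf_warn_invalid_xdp_action(dev, prog, act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(dev, prog, act);
		fallthrough;
	case XDP_DROP:		/* caller recycles the page */
		break;
	}

	return act;
}
```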