author     Dave Airlie <airlied@redhat.com>    2020-05-20 13:36:44 +1000
committer  Dave Airlie <airlied@redhat.com>    2020-05-20 13:36:45 +1000
commit     6cf991611bc72c077f0cc64e23987341ad7ef41e
tree       67334c950eef69562b350764b60889c1e1575ff6 /drivers/gpu/drm/i915/intel_pm.c
parent     bfbe1744e4417986419236719922a9a7fda224d1
parent     3a36aa237e4ed04553c0998cf5f47eda3e206e4f
Merge tag 'drm-intel-next-2020-05-15' of git://anongit.freedesktop.org/drm/drm-intel into drm-next
UAPI Changes:
- drm/i915: Show per-engine default property values in sysfs
By providing the default values configured into the kernel via sysfs, it
is much more convenient for userspace to restore those sane defaults, or
at least know what is considered a good baseline. This is useful, for
example, to clean up after any failed userspace prior to commencing new
jobs.
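The sketch below is not part of this pull; it is a rough illustration of how
userspace could restore one such default, under the assumption that the
per-engine properties and their read-only defaults are exposed as small text
files under /sys/class/drm/card0/engine/<engine>/ and a .defaults/
subdirectory (the card, engine and property names used here, e.g. rcs0 and
heartbeat_interval_ms, are placeholders and may differ per system):

    /* Hypothetical paths; adjust card/engine/property names for the system. */
    #include <stdio.h>

    #define ENGINE_DIR "/sys/class/drm/card0/engine/rcs0"

    static int read_line(const char *path, char *buf, size_t len)
    {
            FILE *f = fopen(path, "r");
            int ok;

            if (!f)
                    return -1;
            ok = fgets(buf, (int)len, f) != NULL;
            fclose(f);
            return ok ? 0 : -1;
    }

    int main(void)
    {
            char def[64];
            FILE *f;

            /* Read the kernel's configured default for the property... */
            if (read_line(ENGINE_DIR "/.defaults/heartbeat_interval_ms",
                          def, sizeof(def)))
                    return 1;

            /* ...and write it back to the tunable to restore the baseline. */
            f = fopen(ENGINE_DIR "/heartbeat_interval_ms", "w");
            if (!f)
                    return 1;
            fputs(def, f);
            fclose(f);
            return 0;
    }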
Cross-subsystem Changes:
- video/hdmi: Add Unpack only function for DRM infoframe
- Includes pull request gvt-next-2020-05-12
Driver Changes:
- Restore Cherryview back to full-ppgtt (Chris, Mika)
- Document locking guidelines for i915 (Chris, Daniel, Joonas)
- Fix GitLab #1746: Handle idling during i915_gem_evict_something busy loops (Chris)
- Display WA #1105: Require linear fb stride to be multiple of 512 bytes on
gen9/glk (Ville)
- Add Wa_14010685332 for ICP/ICL (Matt R)
- Restrict w/a 1607087056 for EHL/JSL (Swathi)
- Fix interrupt handling for DP AUX transactions on Tigerlake (Imre)
- Revert "drm/i915/tgl: Include ro parts of l3 to invalidate" (Mika)
- Fix HDC pipeline flush hardware bit on Gen12 (Mika)
- Flush L3 when flushing render on Gen12 (Mika)
- Invalidate aux table entries forcibly between BB on Gen12 (Mika)
- Add aux table invalidate for all engines on Gen12 (Mika)
- Force pte cacheline to main memory Gen8+ (Mika)
- Add and enable TGL+ SAGV support (Stanislav)
- Implement vm_ops->access on i915 mmaps for GDB (Chris, Kristian)
- Replace zero-length array with flexible-array (Gustavo)
- Improve batch buffer pool effectiveness to mitigate soft-rc6 hit (Chris)
- Remove wait priority boosting (Chris)
- Keep driver module referenced when PMU is active (Chris)
- Sanitize RPS interrupts upon resume (Chris)
- Extend pcode read timeout to 20 ms (Chris)
- Wait for ACT sent before enabling MST pipe (Ville)
- Extend async relocation support to SNB (Chris)
- Remove CNL pre-prod workarounds (Ville)
- Don't enable WaIncreaseLatencyIPCEnabled when IPC is disabled (Sultan)
- Record the active CCID from before reset (Chris)
- Mark concurrent submissions with a weak-dependency (Chris)
- Peel dma-fence-chains for await to allow engine-to-engine sync (Lionel)
- Prevent using semaphores to chain up to external fences (Chris)
- Fix GLK watermark calculations (Ville)
- Emit await(batch) before MI_BB_START (Chris)
- Reset execlists registers before HWSP (Chris)
- Drop no-semaphore boosting in favor of fast timeslicing (Chris)
- Fix enabled infoframe states of lspcon (Gwan-gyeong)
- Program DP SDPs on pipe updates (Gwan-gyeong)
- Stop sending DP SDPs on ddi disable (Gwan-gyeong)
- Store CS timestamp frequency in Hz (Ville)
- Remove unused HAS_FWTABLE macro (Pascal)
- Use batchbuffer chaining for relocations to save ring space (Chris)
- Try different engines for relocs if MI ops not supported (Chris, Tvrtko)
- Lazily acquire the device wakeref for freeing objects (Chris)
- Streamline display code arithmetic around rounding etc. (Ville)
- Use bw state for per crtc SAGV evaluation (Stanislav)
- Track active_pipes in bw_state (Stanislav)
- Nuke mode.vrefresh usage (Ville)
- Warn if the FBC is still writing to stolen on removal (Chris)
- Add new PCode commands prepping for QGV restricting (Stanislav)
- Stop holding onto the pinned_default_state (Chris)
- Propagate error from completed fences (Chris)
- Ignore submit-fences on the same timeline (Chris)
- Pull waiting on an external dma-fence into its routine (Chris)
- Replace the hardcoded I915_FENCE_TIMEOUT with Kconfig (Chris)
- Mark up the racy read of execlists->context_tag (Chris)
- Tidy up the return handling for completed dma-fences (Chris)
- Introduce skl_plane_wm_level accessor (Stanislav)
- Extract SKL SAGV checking (Stanislav)
- Make active_pipes check skl specific (Stanislav)
- Suspend tasklets before resume sanitization (Chris)
- Remove redundant exec_fence (Chris)
- Mark the addition of the initial-breadcrumb in the request (Chris)
- Transfer old virtual breadcrumbs to irq_worker (Chris)
- Read the DP SDPs from the video DIP (Gwan-gyeong)
- Program DP SDPs with computed configs (Gwan-gyeong)
- Add state readout for DP VSC and DP HDR Metadata Infoframe SDP
(Gwan-gyeong)
- Add compute routine for DP PSR VSC SDP (Gwan-gyeong)
- Use new DP VSC SDP compute routine on PSR (Gwan-gyeong)
- Restrict qgv points which don't have enough bandwidth. (Stanislav)
- Nuke pointless div by 64bit (Ville)
- Static checker code fixes (Nathan, Mika, Chris)
- Add logging function for DP VSC SDP (Gwan-gyeong)
- Include HDMI DRM infoframe, DP HDR metadata and DP VSC SDP in the
crtc state dump (Gwan-gyeong)
- Make timeslicing explicit engine property (Chris, Tvrtko)
- Selftest and debugging improvements (Chris)
- Align variable names with BSpec (Ville)
- Tidy up gen8+ breadcrumb emission code (Chris)
- Turn intel_digital_port_connected() into a vfunc (Ville)
- Use stashed away hpd isr bits in intel_digital_port_connected() (Ville)
- Extract i915_cs_timestamp_{ns_to_ticks,tick_to_ns}() (Ville)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200515160703.GA19043@jlahtine-desk.ger.corp.intel.com
Diffstat (limited to 'drivers/gpu/drm/i915/intel_pm.c')
-rw-r--r--  drivers/gpu/drm/i915/intel_pm.c | 328
1 file changed, 266 insertions, 62 deletions
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index bfb180fe8047..696491d71a1d 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -43,6 +43,7 @@
 #include "i915_fixed.h"
 #include "i915_irq.h"
 #include "i915_trace.h"
+#include "display/intel_bw.h"
 #include "intel_pm.h"
 #include "intel_sideband.h"
 #include "../../../platform/x86/intel_ips.h"
@@ -3637,10 +3638,6 @@ static bool skl_needs_memory_bw_wa(struct drm_i915_private *dev_priv)
 static bool
 intel_has_sagv(struct drm_i915_private *dev_priv)
 {
-	/* HACK! */
-	if (IS_GEN(dev_priv, 12))
-		return false;
-
 	return (IS_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) &&
 		dev_priv->sagv_status != I915_SAGV_NOT_CONTROLLED;
 }
@@ -3760,34 +3757,116 @@ intel_disable_sagv(struct drm_i915_private *dev_priv)
 void intel_sagv_pre_plane_update(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_bw_state *new_bw_state;
+	const struct intel_bw_state *old_bw_state;
+	u32 new_mask = 0;
+
+	/*
+	 * Just return if we can't control SAGV or don't have it.
+	 * This is different from situation when we have SAGV but just can't
+	 * afford it due to DBuf limitation - in case if SAGV is completely
+	 * disabled in a BIOS, we are not even allowed to send a PCode request,
+	 * as it will throw an error. So have to check it here.
+	 */
+	if (!intel_has_sagv(dev_priv))
+		return;
+
+	new_bw_state = intel_atomic_get_new_bw_state(state);
+	if (!new_bw_state)
+		return;
 
-	if (!intel_can_enable_sagv(state))
+	if (INTEL_GEN(dev_priv) < 11 && !intel_can_enable_sagv(dev_priv, new_bw_state)) {
 		intel_disable_sagv(dev_priv);
+		return;
+	}
+
+	old_bw_state = intel_atomic_get_old_bw_state(state);
+	/*
+	 * Nothing to mask
+	 */
+	if (new_bw_state->qgv_points_mask == old_bw_state->qgv_points_mask)
+		return;
+
+	new_mask = old_bw_state->qgv_points_mask | new_bw_state->qgv_points_mask;
+
+	/*
+	 * If new mask is zero - means there is nothing to mask,
+	 * we can only unmask, which should be done in unmask.
+	 */
+	if (!new_mask)
+		return;
+
+	/*
+	 * Restrict required qgv points before updating the configuration.
+	 * According to BSpec we can't mask and unmask qgv points at the same
+	 * time. Also masking should be done before updating the configuration
+	 * and unmasking afterwards.
+	 */
+	icl_pcode_restrict_qgv_points(dev_priv, new_mask);
 }
 
 void intel_sagv_post_plane_update(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_bw_state *new_bw_state;
+	const struct intel_bw_state *old_bw_state;
+	u32 new_mask = 0;
+
+	/*
+	 * Just return if we can't control SAGV or don't have it.
+	 * This is different from situation when we have SAGV but just can't
+	 * afford it due to DBuf limitation - in case if SAGV is completely
+	 * disabled in a BIOS, we are not even allowed to send a PCode request,
+	 * as it will throw an error. So have to check it here.
+	 */
+	if (!intel_has_sagv(dev_priv))
+		return;
+
+	new_bw_state = intel_atomic_get_new_bw_state(state);
+	if (!new_bw_state)
+		return;
 
-	if (intel_can_enable_sagv(state))
+	if (INTEL_GEN(dev_priv) < 11 && intel_can_enable_sagv(dev_priv, new_bw_state)) {
 		intel_enable_sagv(dev_priv);
+		return;
+	}
+
+	old_bw_state = intel_atomic_get_old_bw_state(state);
+	/*
+	 * Nothing to unmask
+	 */
+	if (new_bw_state->qgv_points_mask == old_bw_state->qgv_points_mask)
+		return;
+
+	new_mask = new_bw_state->qgv_points_mask;
+
+	/*
+	 * Allow required qgv points after updating the configuration.
+	 * According to BSpec we can't mask and unmask qgv points at the same
+	 * time. Also masking should be done before updating the configuration
+	 * and unmasking afterwards.
+	 */
+	icl_pcode_restrict_qgv_points(dev_priv, new_mask);
 }
 
-static bool intel_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
+static bool skl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
 {
-	struct drm_device *dev = crtc_state->uapi.crtc->dev;
-	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	struct intel_plane *plane;
+	const struct intel_plane_state *plane_state;
 	int level, latency;
 
+	if (!intel_has_sagv(dev_priv))
+		return false;
+
 	if (!crtc_state->hw.active)
 		return true;
 
 	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
 		return false;
 
-	for_each_intel_plane_on_crtc(dev, crtc, plane) {
+	intel_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) {
 		const struct skl_plane_wm *wm =
 			&crtc_state->wm.skl.optimal.planes[plane->id];
 
@@ -3803,7 +3882,7 @@ static bool intel_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state
 		latency = dev_priv->wm.skl_latency[level];
 
 		if (skl_needs_memory_bw_wa(dev_priv) &&
-		    plane->base.state->fb->modifier ==
+		    plane_state->uapi.fb->modifier ==
 		    I915_FORMAT_MOD_X_TILED)
 			latency += 15;
 
@@ -3819,35 +3898,110 @@ static bool intel_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state
 	return true;
 }
 
-bool intel_can_enable_sagv(struct intel_atomic_state *state)
+static bool tgl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	enum plane_id plane_id;
+
+	if (!crtc_state->hw.active)
+		return true;
+
+	for_each_plane_id_on_crtc(crtc, plane_id) {
+		const struct skl_ddb_entry *plane_alloc =
+			&crtc_state->wm.skl.plane_ddb_y[plane_id];
+		const struct skl_plane_wm *wm =
+			&crtc_state->wm.skl.optimal.planes[plane_id];
+
+		if (skl_ddb_entry_size(plane_alloc) < wm->sagv_wm0.min_ddb_alloc)
+			return false;
+	}
+
+	return true;
+}
+
+static bool intel_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+
+	if (INTEL_GEN(dev_priv) >= 12)
+		return tgl_crtc_can_enable_sagv(crtc_state);
+	else
+		return skl_crtc_can_enable_sagv(crtc_state);
+}
+
+bool intel_can_enable_sagv(struct drm_i915_private *dev_priv,
+			   const struct intel_bw_state *bw_state)
+{
+	if (INTEL_GEN(dev_priv) < 11 &&
+	    bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
+		return false;
+
+	return bw_state->pipe_sagv_reject == 0;
+}
+
+static int intel_compute_sagv_mask(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	int ret;
 	struct intel_crtc *crtc;
-	const struct intel_crtc_state *crtc_state;
-	enum pipe pipe;
+	struct intel_crtc_state *new_crtc_state;
+	struct intel_bw_state *new_bw_state = NULL;
+	const struct intel_bw_state *old_bw_state = NULL;
+	int i;
 
-	if (!intel_has_sagv(dev_priv))
-		return false;
+	for_each_new_intel_crtc_in_state(state, crtc,
+					 new_crtc_state, i) {
+		new_bw_state = intel_atomic_get_bw_state(state);
+		if (IS_ERR(new_bw_state))
+			return PTR_ERR(new_bw_state);
 
-	/*
-	 * If there are no active CRTCs, no additional checks need be performed
-	 */
-	if (hweight8(state->active_pipes) == 0)
-		return true;
+		old_bw_state = intel_atomic_get_old_bw_state(state);
 
-	/*
-	 * SKL+ workaround: bspec recommends we disable SAGV when we have
-	 * more then one pipe enabled
-	 */
-	if (hweight8(state->active_pipes) > 1)
-		return false;
+		if (intel_crtc_can_enable_sagv(new_crtc_state))
+			new_bw_state->pipe_sagv_reject &= ~BIT(crtc->pipe);
+		else
+			new_bw_state->pipe_sagv_reject |= BIT(crtc->pipe);
+	}
+
+	if (!new_bw_state)
+		return 0;
+
+	new_bw_state->active_pipes =
+		intel_calc_active_pipes(state, old_bw_state->active_pipes);
+
+	if (new_bw_state->active_pipes != old_bw_state->active_pipes) {
+		ret = intel_atomic_lock_global_state(&new_bw_state->base);
+		if (ret)
+			return ret;
+	}
+
+	for_each_new_intel_crtc_in_state(state, crtc,
+					 new_crtc_state, i) {
+		struct skl_pipe_wm *pipe_wm = &new_crtc_state->wm.skl.optimal;
 
-	/* Since we're now guaranteed to only have one active CRTC... */
-	pipe = ffs(state->active_pipes) - 1;
-	crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
-	crtc_state = to_intel_crtc_state(crtc->base.state);
+		/*
+		 * We store use_sagv_wm in the crtc state rather than relying on
+		 * that bw state since we have no convenient way to get at the
+		 * latter from the plane commit hooks (especially in the legacy
+		 * cursor case)
+		 */
+		pipe_wm->use_sagv_wm = INTEL_GEN(dev_priv) >= 12 &&
+				       intel_can_enable_sagv(dev_priv, new_bw_state);
+	}
+
+	if (intel_can_enable_sagv(dev_priv, new_bw_state) !=
+	    intel_can_enable_sagv(dev_priv, old_bw_state)) {
+		ret = intel_atomic_serialize_global_state(&new_bw_state->base);
+		if (ret)
+			return ret;
+	} else if (new_bw_state->pipe_sagv_reject != old_bw_state->pipe_sagv_reject) {
+		ret = intel_atomic_lock_global_state(&new_bw_state->base);
+		if (ret)
+			return ret;
+	}
 
-	return intel_crtc_can_enable_sagv(crtc_state);
+	return 0;
 }
 
 /*
@@ -4574,6 +4728,20 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 	return total_data_rate;
 }
 
+static const struct skl_wm_level *
+skl_plane_wm_level(const struct intel_crtc_state *crtc_state,
+		   enum plane_id plane_id,
+		   int level)
+{
+	const struct skl_pipe_wm *pipe_wm = &crtc_state->wm.skl.optimal;
+	const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
+
+	if (level == 0 && pipe_wm->use_sagv_wm)
+		return &wm->sagv_wm0;
+
+	return &wm->wm[level];
+}
+
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
@@ -4610,7 +4778,6 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 						 plane_data_rate,
 						 uv_plane_data_rate);
 
-
 	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state, total_data_rate,
 					   alloc, &num_active);
 	alloc_size = skl_ddb_entry_size(alloc);
@@ -4810,7 +4977,7 @@ skl_wm_method1(const struct drm_i915_private *dev_priv, u32 pixel_rate,
 	wm_intermediate_val = latency * pixel_rate * cpp;
 	ret = div_fixed16(wm_intermediate_val, 1000 * dbuf_block_size);
 
-	if (INTEL_GEN(dev_priv) >= 10)
+	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
 		ret = add_fixed16_u32(ret, 1);
 
 	return ret;
@@ -4945,18 +5112,19 @@ skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
 					   wp->y_min_scanlines,
 					   wp->dbuf_block_size);
 
-		if (INTEL_GEN(dev_priv) >= 10)
+		if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
 			interm_pbpl++;
 
 		wp->plane_blocks_per_line = div_fixed16(interm_pbpl,
 							wp->y_min_scanlines);
-	} else if (wp->x_tiled && IS_GEN(dev_priv, 9)) {
-		interm_pbpl = DIV_ROUND_UP(wp->plane_bytes_per_line,
-					   wp->dbuf_block_size);
-		wp->plane_blocks_per_line = u32_to_fixed16(interm_pbpl);
 	} else {
 		interm_pbpl = DIV_ROUND_UP(wp->plane_bytes_per_line,
-					   wp->dbuf_block_size) + 1;
+					   wp->dbuf_block_size);
+
+		if (!wp->x_tiled ||
+		    INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+			interm_pbpl++;
+
 		wp->plane_blocks_per_line = u32_to_fixed16(interm_pbpl);
 	}
 
@@ -5022,7 +5190,7 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 	 * WaIncreaseLatencyIPCEnabled: kbl,cfl
 	 * Display WA #1141: kbl,cfl
 	 */
-	if ((IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv)) ||
+	if ((IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv)) &&
 	    dev_priv->ipc_enabled)
 		latency += 4;
 
@@ -5145,6 +5313,20 @@ skl_compute_wm_levels(const struct intel_crtc_state *crtc_state,
 	}
 }
 
+static void tgl_compute_sagv_wm(const struct intel_crtc_state *crtc_state,
+				const struct skl_wm_params *wm_params,
+				struct skl_plane_wm *plane_wm)
+{
+	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
+	struct skl_wm_level *sagv_wm = &plane_wm->sagv_wm0;
+	struct skl_wm_level *levels = plane_wm->wm;
+	unsigned int latency = dev_priv->wm.skl_latency[0] + dev_priv->sagv_block_time_us;
+
+	skl_compute_plane_wm(crtc_state, 0, latency,
+			     wm_params, &levels[0],
+			     sagv_wm);
+}
+
 static void skl_compute_transition_wm(const struct intel_crtc_state *crtc_state,
 				      const struct skl_wm_params *wp,
 				      struct skl_plane_wm *wm)
@@ -5197,10 +5379,6 @@ static void skl_compute_transition_wm(const struct intel_crtc_state *crtc_state
 				trans_offset_b;
 	} else {
 		res_blocks = wm0_sel_res_b + trans_offset_b;
-
-		/* WA BUG:1938466 add one block for non y-tile planes */
-		if (IS_CNL_REVID(dev_priv, CNL_REVID_A0, CNL_REVID_A0))
-			res_blocks += 1;
 	}
 
 	/*
@@ -5216,6 +5394,8 @@ static int skl_build_plane_wm_single(struct intel_crtc_state *crtc_state,
 				     const struct intel_plane_state *plane_state,
 				     enum plane_id plane_id, int color_plane)
 {
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	struct skl_plane_wm *wm = &crtc_state->wm.skl.optimal.planes[plane_id];
 	struct skl_wm_params wm_params;
 	int ret;
@@ -5226,6 +5406,10 @@ static int skl_build_plane_wm_single(struct intel_crtc_state *crtc_state,
 		return ret;
 
 	skl_compute_wm_levels(crtc_state, &wm_params, wm->wm);
+
+	if (INTEL_GEN(dev_priv) >= 12)
+		tgl_compute_sagv_wm(crtc_state, &wm_params, wm);
+
 	skl_compute_transition_wm(crtc_state, &wm_params, wm);
 
 	return 0;
@@ -5385,8 +5569,12 @@ void skl_write_plane_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.plane_ddb_uv[plane_id];
 
 	for (level = 0; level <= max_level; level++) {
+		const struct skl_wm_level *wm_level;
+
+		wm_level = skl_plane_wm_level(crtc_state, plane_id, level);
+
 		skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
 			   &wm->trans_wm);
@@ -5419,8 +5607,12 @@ void skl_write_cursor_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
 
 	for (level = 0; level <= max_level; level++) {
+		const struct skl_wm_level *wm_level;
+
+		wm_level = skl_plane_wm_level(crtc_state, plane_id, level);
+
 		skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe), &wm->trans_wm);
 
@@ -5584,23 +5776,25 @@ skl_print_wm_changes(struct intel_atomic_state *state)
 			continue;
 
 		drm_dbg_kms(&dev_priv->drm,
-			    "[PLANE:%d:%s] level %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm"
-			    " -> %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm\n",
+			    "[PLANE:%d:%s] level %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm,%cswm"
+			    " -> %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm,%cswm\n",
 			    plane->base.base.id, plane->base.name,
 			    enast(old_wm->wm[0].plane_en), enast(old_wm->wm[1].plane_en),
 			    enast(old_wm->wm[2].plane_en), enast(old_wm->wm[3].plane_en),
 			    enast(old_wm->wm[4].plane_en), enast(old_wm->wm[5].plane_en),
 			    enast(old_wm->wm[6].plane_en), enast(old_wm->wm[7].plane_en),
 			    enast(old_wm->trans_wm.plane_en),
+			    enast(old_wm->sagv_wm0.plane_en),
 			    enast(new_wm->wm[0].plane_en), enast(new_wm->wm[1].plane_en),
 			    enast(new_wm->wm[2].plane_en), enast(new_wm->wm[3].plane_en),
 			    enast(new_wm->wm[4].plane_en), enast(new_wm->wm[5].plane_en),
 			    enast(new_wm->wm[6].plane_en), enast(new_wm->wm[7].plane_en),
-			    enast(new_wm->trans_wm.plane_en));
+			    enast(new_wm->trans_wm.plane_en),
+			    enast(new_wm->sagv_wm0.plane_en));
 
 		drm_dbg_kms(&dev_priv->drm,
-			    "[PLANE:%d:%s] lines %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d"
-			    " -> %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d\n",
+			    "[PLANE:%d:%s] lines %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d"
+			    " -> %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d\n",
 			    plane->base.base.id, plane->base.name,
 			    enast(old_wm->wm[0].ignore_lines), old_wm->wm[0].plane_res_l,
 			    enast(old_wm->wm[1].ignore_lines), old_wm->wm[1].plane_res_l,
@@ -5611,6 +5805,7 @@ skl_print_wm_changes(struct intel_atomic_state *state)
 			    enast(old_wm->wm[6].ignore_lines), old_wm->wm[6].plane_res_l,
 			    enast(old_wm->wm[7].ignore_lines), old_wm->wm[7].plane_res_l,
 			    enast(old_wm->trans_wm.ignore_lines), old_wm->trans_wm.plane_res_l,
+			    enast(old_wm->sagv_wm0.ignore_lines), old_wm->sagv_wm0.plane_res_l,
 
 			    enast(new_wm->wm[0].ignore_lines), new_wm->wm[0].plane_res_l,
 			    enast(new_wm->wm[1].ignore_lines), new_wm->wm[1].plane_res_l,
@@ -5620,37 +5815,42 @@ skl_print_wm_changes(struct intel_atomic_state *state)
 			    enast(new_wm->wm[5].ignore_lines), new_wm->wm[5].plane_res_l,
 			    enast(new_wm->wm[6].ignore_lines), new_wm->wm[6].plane_res_l,
 			    enast(new_wm->wm[7].ignore_lines), new_wm->wm[7].plane_res_l,
-			    enast(new_wm->trans_wm.ignore_lines), new_wm->trans_wm.plane_res_l);
+			    enast(new_wm->trans_wm.ignore_lines), new_wm->trans_wm.plane_res_l,
+			    enast(new_wm->sagv_wm0.ignore_lines), new_wm->sagv_wm0.plane_res_l);
 
 		drm_dbg_kms(&dev_priv->drm,
-			    "[PLANE:%d:%s] blocks %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d"
-			    " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d\n",
+			    "[PLANE:%d:%s] blocks %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d"
+			    " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d\n",
 			    plane->base.base.id, plane->base.name,
 			    old_wm->wm[0].plane_res_b, old_wm->wm[1].plane_res_b,
 			    old_wm->wm[2].plane_res_b, old_wm->wm[3].plane_res_b,
 			    old_wm->wm[4].plane_res_b, old_wm->wm[5].plane_res_b,
 			    old_wm->wm[6].plane_res_b, old_wm->wm[7].plane_res_b,
 			    old_wm->trans_wm.plane_res_b,
+			    old_wm->sagv_wm0.plane_res_b,
 			    new_wm->wm[0].plane_res_b, new_wm->wm[1].plane_res_b,
 			    new_wm->wm[2].plane_res_b, new_wm->wm[3].plane_res_b,
 			    new_wm->wm[4].plane_res_b, new_wm->wm[5].plane_res_b,
 			    new_wm->wm[6].plane_res_b, new_wm->wm[7].plane_res_b,
-			    new_wm->trans_wm.plane_res_b);
+			    new_wm->trans_wm.plane_res_b,
+			    new_wm->sagv_wm0.plane_res_b);
 
 		drm_dbg_kms(&dev_priv->drm,
-			    "[PLANE:%d:%s] min_ddb %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d"
-			    " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d\n",
+			    "[PLANE:%d:%s] min_ddb %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d"
+			    " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d\n",
 			    plane->base.base.id, plane->base.name,
 			    old_wm->wm[0].min_ddb_alloc, old_wm->wm[1].min_ddb_alloc,
 			    old_wm->wm[2].min_ddb_alloc, old_wm->wm[3].min_ddb_alloc,
 			    old_wm->wm[4].min_ddb_alloc, old_wm->wm[5].min_ddb_alloc,
 			    old_wm->wm[6].min_ddb_alloc, old_wm->wm[7].min_ddb_alloc,
 			    old_wm->trans_wm.min_ddb_alloc,
+			    old_wm->sagv_wm0.min_ddb_alloc,
 			    new_wm->wm[0].min_ddb_alloc, new_wm->wm[1].min_ddb_alloc,
 			    new_wm->wm[2].min_ddb_alloc, new_wm->wm[3].min_ddb_alloc,
 			    new_wm->wm[4].min_ddb_alloc, new_wm->wm[5].min_ddb_alloc,
 			    new_wm->wm[6].min_ddb_alloc, new_wm->wm[7].min_ddb_alloc,
-			    new_wm->trans_wm.min_ddb_alloc);
+			    new_wm->trans_wm.min_ddb_alloc,
+			    new_wm->sagv_wm0.min_ddb_alloc);
 		}
 	}
 }
@@ -5811,6 +6011,10 @@ skl_compute_wm(struct intel_atomic_state *state)
 	if (ret)
 		return ret;
 
+	ret = intel_compute_sagv_mask(state);
+	if (ret)
+		return ret;
+
 	/*
 	 * skl_compute_ddb() will have adjusted the final watermarks
 	 * based on how much ddb is available. Now we can actually
@@ -5939,6 +6143,9 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 			skl_wm_level_from_reg_val(val, &wm->wm[level]);
 		}
 
+		if (INTEL_GEN(dev_priv) >= 12)
+			wm->sagv_wm0 = wm->wm[0];
+
 		if (plane_id != PLANE_CURSOR)
 			val = I915_READ(PLANE_WM_TRANS(pipe, plane_id));
 		else
@@ -6916,9 +7123,6 @@ static void cnl_init_clock_gating(struct drm_i915_private *dev_priv)
 	val = I915_READ(SLICE_UNIT_LEVEL_CLKGATE);
 	/* ReadHitWriteOnlyDisable:cnl */
 	val |= RCCUNIT_CLKGATE_DIS;
-	/* WaSarbUnitClockGatingDisable:cnl (pre-prod) */
-	if (IS_CNL_REVID(dev_priv, CNL_REVID_A0, CNL_REVID_B0))
-		val |= SARBUNIT_CLKGATE_DIS;
 	I915_WRITE(SLICE_UNIT_LEVEL_CLKGATE, val);
 
 	/* Wa_2201832410:cnl */