Diagnosing Direct Lake CPU spikes in Fabric & fixing them (fast)

02 December 2025
Sambhav Anant Rakhe


If you use Direct Lake semantic models, you may see sudden Fabric capacity CPU spikes and slow interactive reports when many users hit the same model at once. The dataset can look small, yet CPU and query times on capacity climb because of query shape, formula-engine work, or high concurrency.

We recently faced this exact situation with one of our clients. Given that many Fabric users are now adopting the new Direct Lake mode for their analytics needs, this post walks through how we fixed the issue and shares some resources to help you do the same yourself.

Background -

How to identify the real culprit -

Actionable insights based on diagnostics -

Based on your analysis in DAX Studio -

TL;DR -

  1. Identify the top interactive consumers on the capacity using the Fabric Capacity Metrics app.
  2. Pick the top Direct Lake model and identify its top report during the peak CPU minutes.
  3. Reproduce the report interaction and record it with Performance Analyzer.
  4. Trace the visuals with DAX Studio / Query Diagnostics to determine the Storage Engine vs Formula Engine CPU split (a quick way to re-time a captured query from a notebook is sketched after this list).
  5. Apply the suggested fixes.
  6. Repeat steps 3-5 to check whether further optimization is possible.
  7. Re-measure in the Capacity Metrics app during the next busy window to confirm the reduction.
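
DAX Studio's Server Timings view is where you see the Storage Engine vs Formula Engine split. For quick before/after comparisons, you can also re-run a query you have already captured (for example via Performance Analyzer's "Copy query") from a Fabric notebook. Below is a minimal sketch, assuming a Fabric notebook where the semantic-link (sempy) package is available; the model name, workspace name, measure, and DAX query are hypothetical placeholders, not taken from our client's solution.

```python
# Minimal sketch: re-run a captured visual query against a semantic model and time it.
# Assumes a Fabric notebook with the semantic-link (sempy) package available.
import time
import sempy.fabric as fabric

DATASET = "Sales Direct Lake Model"   # hypothetical semantic model name
WORKSPACE = "Analytics Workspace"     # hypothetical workspace name

# DAX query copied from the slowest visual (Performance Analyzer > "Copy query").
dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Year],
    "Total Sales", [Total Sales]  -- hypothetical measure
)
"""

start = time.perf_counter()
result = fabric.evaluate_dax(dataset=DATASET, dax_string=dax_query, workspace=WORKSPACE)
elapsed = time.perf_counter() - start

print(f"Rows returned: {len(result)}, wall-clock seconds: {elapsed:.2f}")
```

Wall-clock time from a notebook is only a rough proxy and does not replace the Server Timings breakdown, but it makes it easy to compare a visual's query before and after each fix.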

Closing Takeaway -

In conclusion, we have summarized what you can do when your Direct Lake models consume a large share of your capacity's compute even though the delta tables are small or only a handful of users interact with the reports. In our case, we identified two reports that turned out to be the heaviest consumers.

In the first report, we concluded that too many fields were defined as measures when they could have been calculated columns instead (this performed better in our testing). We also reduced the number of visuals on the main page, keeping only those that mattered to all end users and moving the rest to their own departmental pages. In the second report, we had to rethink our Direct Lake implementation: we identified fields with strong business use cases that could be computed permanently in the delta tables by doing the ETL in notebooks instead (a sketch of this pattern follows), and then brought that summarized data into our data model for reporting.
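
For that second report, the fix boiled down to pre-computing heavy fields upstream so the model reads summarized data. Below is a minimal sketch of the idea, assuming a Fabric notebook attached to a lakehouse; the table, column, and grain choices (fact_sales, order_date, department_id, and so on) are hypothetical placeholders, not our client's actual schema.

```python
# Minimal sketch: pre-aggregate a detailed fact table into a summarized delta table
# during ETL, so the Direct Lake model reads the summary instead of recomputing it
# on every visual interaction. Assumes a Fabric notebook attached to a lakehouse.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # in a Fabric notebook, `spark` already exists

# Read the detailed fact table from the lakehouse (hypothetical table name).
fact = spark.read.table("fact_sales")

# Pre-compute the fields that were previously heavy report-time measures,
# summarizing to the grain the reports actually need (e.g. day x department).
summary = (
    fact.groupBy("order_date", "department_id")
        .agg(
            F.sum("sales_amount").alias("total_sales"),
            F.countDistinct("order_id").alias("order_count"),
        )
)

# Write the summarized delta table; the Direct Lake model then points at this table.
(summary.write
        .mode("overwrite")
        .format("delta")
        .saveAsTable("fact_sales_daily_summary"))
```

The aggregation cost is then paid once per ETL run rather than on every report interaction, which is what took the pressure off the capacity in our case.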

Our main takeaway: even with Fabric evolving at such a fast pace, it is important not to forget our fundamentals as Analytics Engineers, so that solutions stay optimized and end users get the best experience. Identify the bottlenecks, test the alternative approaches, and implement them as you go! If you had a similar experience at your organization and implemented fixes that helped tackle this problem, or plan to based on what you have read in this post, I'd love to hear about your experiences and perspectives! 😃