Optimizing your TM1 and Planning Analytics server for Performance

The combination of in-memory calculation, smart design and years of optimization has resulted in IBM TM1 and Planning Analytics being renowned as one of the fastest real-time analytical engines on the market. The default settings you get "out of the box" are all that is required for a fast TM1 model. This article takes it a step further, using the many parameters (over 100) that allow you to tune your system and get maximum performance from your TM1/Planning Analytics server.



Multi-threaded querying allows the server to use multiple cores to conduct queries. This feature provides significant performance improvements, especially for large queries with a lot of consolidations. You will need to establish the optimal number of cores (the "sweet spot") through testing to achieve maximum performance. Be careful not to exceed your licensing arrangements; in short, make sure you have enough PVU licenses.
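
Multi-threaded querying is controlled by the MTQ parameter in tm1s.cfg. An illustrative entry (the value shown is an example only; tune it to your own hardware and licence):

```ini
# tm1s.cfg -- illustrative value, adjust for your cores and PVU entitlement
MTQ=8    # number of cores the server may use for a single query
```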



MTFeeders is a new parameter introduced with Planning Analytics (TM1 server v11). By turning on this parameter in tm1s.cfg, MTQ will then be triggered when recalculating feeders:

  • CubeProcessFeeders() is triggered from a TurboIntegrator process.
  • A feeder statement is updated in the rules.
  • Feeders are constructed at server startup.

MTFeeders will give you a significant improvement, but be aware that it does not support conditional feeders. If you are using conditional feeders where the condition clause contains a fed value, you have to turn it off.

To turn on MTFeeders during server start-up you will need to add MTFeeders.AtStartup=T.
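
For example, the relevant tm1s.cfg entries (values illustrative) look like this:

```ini
# tm1s.cfg
MTFeeders=T            # multi-threaded feeder recalculation at runtime
MTFeeders.AtStartup=T  # multi-threaded feeder construction at server start-up
```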


ParallelInteraction (TM1 only)

This feature is turned on by default in Planning Analytics (TM1 11+); you only need to set it to true if you are still using TM1 10.2.

Parallel interaction allows for greater concurrency of read and write operations on the same cube objects. It can be crucial for optimizing lengthy data-load processes: instead of loading all data sequentially, you could load all months at the same time, which is called parallel loading. Parallel loading allows you to segment your data and then leverage multiple cores to load the data into cubes simultaneously.

To manage the threads and keep their number under the number of cores, we recommend the free utility Hustle.
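
The parallel-loading idea can be sketched in plain Python (no TM1 API involved; `load_month` is a hypothetical stand-in for a TurboIntegrator process that loads one month's slice), with the worker pool capped below the core count, which is essentially what Hustle manages for you:

```python
from concurrent.futures import ThreadPoolExecutor

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def load_month(month):
    # Hypothetical stand-in: in practice this would execute a TI process
    # that loads only the records for `month` into the cube.
    return f"{month}: loaded"

# Cap the pool below the number of cores so other server threads keep running.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(load_month, MONTHS))
```

Each month becomes an independent unit of work, so with parallel interaction enabled the loads no longer have to wait for one another.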



This parameter impacts only the start-up time of your PA/TM1 instance. It specifies whether the cube and feeder calculation phases of server loading are multi-threaded, so multiple cores can be used in parallel. You will need to specify the number of cores you would like to dedicate to cube loading and feeder processing.

This is particularly useful if you have many large cubes and there is an imperative to improve server start-up performance. It is recommended that you specify the number of available cores minus one.

As with MTQ, you will need to test multiple scenarios to find the number of cores that provides optimal performance.
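
Assuming this refers to the MaximumCubeLoadThreads parameter (which controls multi-threaded cube and feeder loading at start-up), an illustrative tm1s.cfg entry on a 16-core machine would be:

```ini
# tm1s.cfg -- illustrative: 16 cores, one left free for the OS and other work
MaximumCubeLoadThreads=15
```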



Persistent feeders improve the loading of cubes with feeders, which also improves server start-up time. When you activate persistent feeders, the server creates a .feeders file for each cube that has rules. Upon startup, the TM1 server references these .feeders files and re-loads the feeders for the cubes.

It is best practice to activate persistent feeders if you have large cubes which have an extensive number of fed cells.
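
Activation is a single tm1s.cfg entry:

```ini
# tm1s.cfg
PersistentFeeders=T  # write a .feeders file per rules cube; reused at start-up
```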

In many cases start-up time can be significantly reduced; reductions of 80-90% are common.

Things to look out for

  • Feeders are saved to the .feeders file. Therefore, even if you remove a particular feeder from the rule file, it will remain in the .feeders file; you will need to delete the .feeders file and allow TM1 to regenerate it.
  • If you have dynamic rules or consolidated elements on the right-hand side of a feeder, you will need to re-process the feeders (for example with CubeProcessFeeders) when you add a new version, for instance.
  • Although this is a great feature, judgement is required on when to use it. For instance, if your cubes are small and don't have many rules/feeders, it may be more beneficial to leave it off.

Other parameters which will improve user experience

  • AllRuleCalcStargateOptimization can improve performance when calculating views that contain only rule-calculated consolidations.
  • UseStargateForRules: By default, when a calculated cell is retrieved, the value comes from a Stargate view stored in memory. In some rare cases using a Stargate view can be slower than requesting the value from the server, so you can turn off Stargate views for rules with UseStargateForRules=F.
  • ViewConsolidationOptimization enables or disables view consolidation optimization. It increases performance but also increases the amount of memory required for a given view.
  • CalculationThresholdForStorage: The minimum number of cells required before Stargate view creation is triggered. Set it to a low number to maximize caching, at the cost of extra memory.
  • MaximumViewSize: If the memory consumed while constructing a view reaches this threshold, view construction is aborted rather than leaving the client waiting indefinitely.
  • CheckFeedersMaximumCells: If a user checks feeders in the Cube Viewer from a cell whose consolidation contains too many cells, the request is refused rather than causing a very long client hang or an eventual crash.
  • MaximumUserSandboxSize: Stops the server from using excessive memory when users attempt very large sandbox changes.
  • LogReleaseLineCount: Prevents users from being locked for a long time while admins run transaction-log queries.
  • StartupChores: A Stargate view is created the first time a user opens a view; a second user opening the same view gets a faster response because the view is already cached. To spare the first user that wait, you can set up a chore that runs at server start-up to pre-cache views.
  • SubsetElementBreatherCount: Allows the lock on a subset to be released when other requests are pending.
  • UseLocalCopiesForPublicDynamicSubsets: Improves performance by invalidating (and write-locking) only the user's local copy of a public dynamic subset instead of the public subset itself.
  • JobQueuing: Turns on queuing for Personal Workspace or sandbox submissions.
  • JobQueueThreadPoolSize: The job queue applies to Contributor/TM1 Application Web, which uses sandboxes by default; it manages all user sandbox commits in a queue so users don't have to wait.
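
As an illustration of the StartupChores entry (chore names here are hypothetical; the parameter takes a colon-separated list of chore names):

```ini
# tm1s.cfg -- chore names are examples only
StartupChores=CacheSalesViews:CacheFinanceViews
```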

It is important to be aware that most of the parameters in the tm1s.cfg file are now dynamic in IBM Planning Analytics, meaning they can be changed with immediate effect.